Cost function approach for estimating derived demand for composite wood products
T. C. Marcin
1991-01-01
A cost function approach was examined that uses the concept of duality between production and input factor demands. A translog cost function was used to represent residential construction costs and to derive conditional factor demand equations. Alternative models were obtained from the translog cost function by imposing parameter restrictions.
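To make the duality idea concrete, the sketch below writes down a generic translog cost function and the cost-share equations obtained from it by Shephard's lemma; all parameter values (alpha, gamma, beta_y) are hypothetical placeholders, not estimates from this study.

```python
# Illustrative sketch of a translog cost function and its Shephard's-lemma
# cost shares (the derived conditional factor demands in share form).
# All parameter values are hypothetical, not estimates from this study.
import numpy as np

alpha0 = 1.0
alpha = np.array([0.45, 0.35, 0.20])          # first-order price terms, sum to 1 (homogeneity)
gamma = np.array([[ 0.06, -0.04, -0.02],      # second-order terms, symmetric, rows sum to 0
                  [-0.04,  0.05, -0.01],
                  [-0.02, -0.01,  0.03]])

def ln_cost(w, y, beta_y=0.9):
    """ln C(w, y) = a0 + b_y ln y + sum_i a_i ln w_i + 0.5 sum_ij g_ij ln w_i ln w_j."""
    lw = np.log(w)
    return alpha0 + beta_y * np.log(y) + alpha @ lw + 0.5 * lw @ gamma @ lw

def cost_shares(w):
    """Shephard's lemma: s_i = d ln C / d ln w_i = a_i + sum_j g_ij ln w_j."""
    return alpha + gamma @ np.log(w)

w = np.array([1.2, 0.8, 1.0])   # hypothetical input prices (e.g. wood products, other materials, labour)
print("ln C =", ln_cost(w, y=100.0))
print("cost shares =", cost_shares(w), "sum =", cost_shares(w).sum())
# Setting gamma = 0 imposes the Cobb-Douglas restriction (constant cost shares),
# one example of the parameter restrictions mentioned in the abstract.
```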
ERIC Educational Resources Information Center
Arnold, Robert
Problems in educational cost accounting and a new cost accounting approach are described in this paper. The limitations of the individualized cost (student units) approach and the comparative cost approach (in the form of fund-function-object) are illustrated. A new strategy, an activity-based system of accounting, is advocated. Borrowed from…
Practice expenses in the MFS (Medicare fee schedule): the service-class approach.
Latimer, E A; Kane, N M
1995-01-01
The practice expense component of the Medicare fee schedule (MFS), which is currently based on historical charges and rewards physician procedures at the expense of cognitive services, is due to be changed by January 1, 1998. The Physician Payment Review Commission (PPRC) and others have proposed microcosting direct costs and allocating all indirect costs on a common basis, such as physician time or work plus direct costs. Without altering the treatment of direct costs, the service-class approach disaggregates indirect costs into six practice function costs. The practice function costs are then allocated to classes of services using cost-accounting and statistical methods. This approach would make the practice expense component more resource-based than other proposed alternatives.
A non-stationary cost-benefit based bivariate extreme flood estimation approach
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo
2018-02-01
Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled using copula functions. Previous design flood selection criteria are not suitable for NSCOBE because they ignore the time-varying dependence of flood variables; therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is used to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and that from marginal distributions. This study provides, for the first time, a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
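As a rough illustration of the kind of non-stationarity NSCOBE addresses (not the authors' implementation), the sketch below combines a Gumbel copula with a time-varying dependence parameter and GEV margins whose location parameters drift over time, and evaluates a joint ("OR") exceedance probability; all distributions and numbers are hypothetical.

```python
# Minimal sketch (not the NSCOBE code): time-varying Gumbel copula dependence
# plus a trend in the GEV location parameters, used to compute a joint ("OR")
# exceedance probability for a flood peak/volume pair. All numbers are hypothetical.
import numpy as np
from scipy.stats import genextreme

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v; theta), theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def or_exceedance(q_peak, q_vol, t):
    """P(peak > q_peak or volume > q_vol) in year t under non-stationary margins/dependence."""
    loc_peak = 1000.0 + 5.0 * t          # hypothetical upward trend in the peak-flow location
    loc_vol = 200.0 + 1.0 * t
    theta_t = 1.5 + 0.01 * t             # hypothetical strengthening dependence
    u = genextreme.cdf(q_peak, c=-0.1, loc=loc_peak, scale=300.0)
    v = genextreme.cdf(q_vol, c=-0.1, loc=loc_vol, scale=60.0)
    return 1.0 - gumbel_copula(u, v, theta_t)

for year in (0, 25, 50):
    print(year, or_exceedance(2500.0, 500.0, year))
```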
Guaranteed cost control of polynomial fuzzy systems via a sum of squares approach.
Tanaka, Kazuo; Ohtake, Hiroshi; Wang, Hua O
2009-04-01
This paper presents the guaranteed cost control of polynomial fuzzy systems via a sum of squares (SOS) approach. First, we present a polynomial fuzzy model and controller that are more general representations of the well-known Takagi-Sugeno (T-S) fuzzy model and controller, respectively. Second, we derive a guaranteed cost control design condition based on polynomial Lyapunov functions. Hence, the design approach discussed in this paper is more general than the existing LMI approaches (to T-S fuzzy control system designs) based on quadratic Lyapunov functions. The design condition realizes a guaranteed cost control by minimizing the upper bound of a given performance function. In addition, the design condition in the proposed approach can be represented in terms of SOS and is numerically (partially symbolically) solved via the recently developed SOSTOOLS. To illustrate the validity of the design approach, two design examples are provided. The first example deals with a complicated nonlinear system; the second presents micro-helicopter control. Both examples show that our approach provides more extensive design results than the existing LMI approach.
Hubig, Michael; Suchandt, Steffen; Adam, Nico
2004-10-01
Phase unwrapping (PU) represents an important step in synthetic aperture radar interferometry (InSAR) and other interferometric applications. Among the different PU methods, the so called branch-cut approaches play an important role. In 1996 M. Costantini [Proceedings of the Fringe '96 Workshop ERS SAR Interferometry (European Space Agency, Munich, 1996), pp. 261-272] proposed to transform the problem of correctly placing branch cuts into a minimum cost flow (MCF) problem. The crucial point of this new approach is to generate cost functions that represent the a priori knowledge necessary for PU. Since cost functions are derived from measured data, they are random variables. This leads to the question of MCF solution stability: How much can the cost functions be varied without changing the cheapest flow that represents the correct branch cuts? This question is partially answered: The existence of a whole linear subspace in the space of cost functions is shown; this subspace contains all cost differences by which a cost function can be changed without changing the cost difference between any two flows that are discharging any residue configuration. These cost differences are called strictly stable cost differences. For quadrangular nonclosed networks (the most important type of MCF networks for interferometric purposes) a complete classification of strictly stable cost differences is presented. Further, the role of the well-known class of node potentials in the framework of strictly stable cost differences is investigated, and information on the vector-space structure representing the MCF environment is provided.
Ecosystem Services in Conservation Planning: Targeted Benefits vs. Co-Benefits or Costs?
Chan, Kai M. A.; Hoshizaki, Lara; Klinkenberg, Brian
2011-01-01
There is growing support for characterizing ecosystem services in order to link conservation and human well-being. However, few studies have explicitly included ecosystem services within systematic conservation planning, and those that have follow two fundamentally different approaches: ecosystem services as intrinsically-important targeted benefits vs. substitutable co-benefits. We present a first comparison of these two approaches in a case study in the Central Interior of British Columbia. We calculated and mapped economic values for carbon storage, timber production, and recreational angling using a geographical information system (GIS). These ‘marginal’ values represent the difference in service-provision between conservation and managed forestry as land uses. We compared two approaches to including ecosystem services in the site-selection software Marxan: as Targeted Benefits, and as Co-Benefits/Costs (in Marxan's cost function); we also compared these approaches with a Hybrid approach (carbon and angling as targeted benefits, timber as an opportunity cost). For this analysis, the Co-Benefit/Cost approach yielded a less costly reserve network than the Hybrid approach (1.6% cheaper). Including timber harvest as an opportunity cost in the cost function resulted in a reserve network that achieved targets equivalently, but at 15% lower total cost. We found counter-intuitive results for conservation: conservation-compatible services (carbon, angling) were positively correlated with each other and biodiversity, whereas the conservation-incompatible service (timber) was negatively correlated with all other networks. Our findings suggest that including ecosystem services within a conservation plan may be most cost-effective when they are represented as substitutable co-benefits/costs, rather than as targeted benefits. By explicitly valuing the costs and benefits associated with services, we may be able to achieve meaningful biodiversity conservation at lower cost and with greater co-benefits. PMID:21915318
New approach to the retrieval of AOD and its uncertainty from MISR observations over dark water
NASA Astrophysics Data System (ADS)
Witek, Marcin L.; Garay, Michael J.; Diner, David J.; Bull, Michael A.; Seidel, Felix C.
2018-01-01
A new method for retrieving aerosol optical depth (AOD) and its uncertainty from Multi-angle Imaging SpectroRadiometer (MISR) observations over dark water is outlined. MISR's aerosol retrieval algorithm calculates cost functions between observed and pre-simulated radiances for a range of AODs (from 0.0 to 3.0) and a prescribed set of aerosol mixtures. The previous version 22 (V22) operational algorithm considered only the AOD that minimized the cost function for each aerosol mixture and then used a combination of these values to compute the final, best estimate
AOD and associated uncertainty. The new approach considers the entire range of cost functions associated with each aerosol mixture. The uncertainty of the reported AOD depends on a combination of (a) the absolute values of the cost functions for each aerosol mixture, (b) the widths of the cost function distributions as a function of AOD, and (c) the spread of the cost function distributions among the ensemble of mixtures. A key benefit of the new approach is that, unlike the V22 algorithm, it does not rely on empirical thresholds imposed on the cost function to determine the success or failure of a particular mixture. Furthermore, a new aerosol retrieval confidence index (ARCI) is established that can be used to screen high-AOD retrieval blunders caused by cloud contamination or other factors. Requiring ARCI ≥ 0.15 as a condition for retrieval success is supported through statistical analysis and outperforms the thresholds used in the V22 algorithm. The described changes to the MISR dark water algorithm will become operational in the new MISR aerosol product (V23), planned for release in 2017.
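The toy sketch below mimics the flavour of this ensemble treatment (it is not the operational MISR V23 code): synthetic per-mixture cost curves over an AOD grid are converted to weights via exp(-cost/2), and a best-estimate AOD and spread are read off the weighted distribution.

```python
# Toy illustration (not the operational MISR V23 code) of how per-mixture cost
# functions over an AOD grid can be combined into a best-estimate AOD and an
# uncertainty, using exp(-cost/2) as a relative weight for each (mixture, AOD) pair.
import numpy as np

aod_grid = np.linspace(0.0, 3.0, 301)
rng = np.random.default_rng(0)

def synthetic_cost(aod, true_aod, sharpness):
    """Stand-in chi-square-like cost curve for one aerosol mixture."""
    return sharpness * (aod - true_aod) ** 2

# Hypothetical ensemble of mixtures whose cost minima cluster near AOD ~ 0.4.
costs = np.array([synthetic_cost(aod_grid, 0.4 + rng.normal(0, 0.05), s)
                  for s in (40.0, 60.0, 80.0, 120.0)])

weights = np.exp(-0.5 * costs)               # low cost -> high weight
weights /= weights.sum()
best_aod = np.sum(weights * aod_grid[None, :])
spread = np.sqrt(np.sum(weights * (aod_grid[None, :] - best_aod) ** 2))
print(f"best-estimate AOD = {best_aod:.3f} +/- {spread:.3f}")
```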
Application of target costing in machining
NASA Astrophysics Data System (ADS)
Gopalakrishnan, Bhaskaran; Kokatnur, Ameet; Gupta, Deepak P.
2004-11-01
In today's intensely competitive and highly volatile business environment, consistent development of low-cost, high-quality products that meet functionality requirements is key to a company's survival. Companies continuously strive to reduce costs while still producing quality products to stay ahead of the competition. Many companies have turned to target costing to achieve this objective. Target costing is a structured approach to determine the cost at which a proposed product, meeting the quality and functionality requirements, must be produced in order to generate the desired profits. It subtracts the desired profit margin from the company's selling price to establish the manufacturing cost of the product. An extensive literature review revealed that companies in the automotive, electronic, and process industries have reaped the benefits of target costing. However, the target costing approach has not been applied in the machining industry; instead, other techniques based on geometric programming, goal programming, and Lagrange multipliers have been proposed for this industry. These models follow a forward approach, first selecting a set of machining parameters and then determining the machining cost. Hence, in this study we have developed an algorithm to apply the concepts of target costing, a backward approach that selects the machining parameters based on the required machining cost, and is therefore more suitable for practical applications in process improvement and cost reduction. A target costing model was developed for the turning operation and was successfully validated using practical data.
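A minimal sketch of the backward, target-costing logic for turning is given below; it uses a textbook machining cost model with Taylor's tool-life equation and purely hypothetical prices and parameters, not the validated model from this study.

```python
# Simplified sketch of the backward, target-costing idea for a turning operation
# (hypothetical cost model and numbers, not the paper's validated model).
import numpy as np

D, L, f = 50.0, 200.0, 0.25           # diameter (mm), length (mm), feed (mm/rev)
n, C = 0.25, 350.0                    # Taylor tool-life relation: V * T**n = C
rate = 1.2                            # machine + operator rate ($/min)
t_change, tool_cost = 2.0, 4.0        # tool change time (min), cost per cutting edge ($)
t_handling = 1.5                      # load/unload time per part (min)

def cost_per_part(v):
    t_m = np.pi * D * L / (1000.0 * v * f)      # machining time per part (min)
    tool_life = (C / v) ** (1.0 / n)            # Taylor tool life (min)
    changes = t_m / tool_life                   # tool changes charged to this part
    return rate * (t_handling + t_m + t_change * changes) + tool_cost * changes

selling_price, desired_margin = 5.0, 0.30
target_cost = selling_price * (1.0 - desired_margin)   # "price minus profit" target

speeds = np.linspace(60.0, 300.0, 481)                  # candidate cutting speeds (m/min)
feasible = speeds[cost_per_part(speeds) <= target_cost]
if feasible.size:
    print(f"target cost {target_cost:.2f}: feasible speeds "
          f"{feasible.min():.0f}-{feasible.max():.0f} m/min")
else:
    print("target cost not reachable with this process; redesign needed")
```

The forward models cited in the abstract would instead fix the parameters first and only then report the resulting cost.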
Fitting of full Cobb-Douglas and full VRTS cost frontiers by solving goal programming problem
NASA Astrophysics Data System (ADS)
Venkateswarlu, B.; Mahaboob, B.; Subbarami Reddy, C.; Madhusudhana Rao, B.
2017-11-01
The present research article first defines two popular production functions, viz. the Cobb-Douglas and VRTS production frontiers, and their dual cost functions, and then derives their cost-limited maximal outputs. This paper shows that the cost-limited maximal output is cost efficient. Here, a one-sided goal programming problem is proposed by which the full Cobb-Douglas cost frontier and the full VRTS cost frontier can be fitted. This paper also includes the framing of goal programming problems by which stochastic cost frontiers and stochastic VRTS frontiers are fitted. Hasan et al. [1] used a parametric Stochastic Frontier Approach (SFA) to examine the technical efficiency of the Malaysian domestic banks listed on the Kuala Lumpur Stock Exchange (KLSE) over the period 2005-2010. Ashkan Hassani [2] demonstrated the application of Cobb-Douglas production functions in construction schedule crashing and project risk analysis related to the duration of construction projects. Nan Jiang [3] applied stochastic frontier analysis to a panel of New Zealand dairy farms over 1998/99-2006/07.
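The duality these frontiers rely on can be illustrated numerically: for a Cobb-Douglas technology, the dual cost function obtained by minimising input expenditure subject to an output target matches the closed-form expression. The sketch below (hypothetical parameter values) checks this with scipy.

```python
# Sketch of the production/cost duality behind these frontiers: the dual cost of a
# Cobb-Douglas technology y = A * prod(x_i**a_i) is recovered both in closed form and
# by direct numerical cost minimisation (hypothetical parameter values).
import numpy as np
from scipy.optimize import minimize

A, a = 1.5, np.array([0.4, 0.3, 0.2])     # decreasing returns to scale: sum(a) = 0.9 < 1
w, y = np.array([2.0, 1.0, 3.0]), 10.0    # input prices and target output

def dual_cost_closed_form(w, y):
    r = a.sum()
    return r * (y / (A * np.prod((a / w) ** a))) ** (1.0 / r)

def dual_cost_numeric(w, y):
    cons = {"type": "eq", "fun": lambda x: A * np.prod(x ** a) - y}
    res = minimize(lambda x: w @ x, x0=np.ones_like(w),
                   constraints=[cons], bounds=[(1e-6, None)] * len(w), method="SLSQP")
    return res.fun

print(dual_cost_closed_form(w, y), dual_cost_numeric(w, y))  # the two should agree closely
```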
An economic analysis of a commercial approach to the design and fabrication of a space power system
NASA Technical Reports Server (NTRS)
Putney, Z.; Been, J. F.
1979-01-01
A commercial approach to the design and fabrication of an economical space power system is presented. Cost reductions are projected through the conceptual design of a 2 kW space power system built with serviceability in mind. The approach to system costing takes into account both the constraints of operation in space and commercial production engineering practices. The cost of this power system reflects a variety of cost/benefit tradeoffs that would reduce system cost as a function of system reliability requirements, complexity, and the impact of rigid specifications. A breakdown of the system design, documentation, fabrication, and reliability and quality assurance cost estimates is detailed.
Consideration of plant behaviour in optimal servo-compensator design
NASA Astrophysics Data System (ADS)
Moase, W. H.; Manzie, C.
2016-07-01
Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.
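For context, the sketch below is a standard infinite-horizon LQR with penalties on plant states and inputs, solved through the continuous-time algebraic Riccati equation; it is only a generic illustration of how such penalties enter a quadratic cost, not the paper's servo-compensator formulation with its correction term and equality constraints.

```python
# Minimal standard LQR sketch (not the paper's servo-compensator formulation):
# penalising plant states and inputs through Q and R and solving the continuous-time
# algebraic Riccati equation for the optimal state-feedback gain.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, -0.5]])        # hypothetical plant: integrator with damping
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])           # state penalty (soft-constrains position error)
R = np.array([[0.1]])              # input penalty (allocates control effort)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)    # u = -K x minimises the infinite-horizon quadratic cost
print("gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```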
A neural network approach to job-shop scheduling.
Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E
1991-01-01
A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, i.e. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization based on quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling salesman problem-type Hopfield approach and integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.
NASA Astrophysics Data System (ADS)
Qi, Wei
2017-11-01
Cost-benefit analysis is commonly used for engineering planning and design problems in practice. However, previous cost-benefit based design flood estimation rests on stationary assumptions. This study develops a non-stationary cost-benefit based design flood estimation approach. The approach integrates a non-stationary probability distribution function into cost-benefit analysis, so that the influence of non-stationarity on expected total cost (including flood damage and construction costs) and on design flood estimation can be quantified. To facilitate design flood selection, a 'Risk-Cost' analysis approach is developed, which reveals the nexus of extreme flood risk, expected total cost, and design life periods. Two basins, with 54 years and 104 years of flood data respectively, are utilized to illustrate the application. It is found that the developed approach can effectively reveal changes in expected total cost and extreme floods across different design life periods. In addition, trade-offs are found between extreme flood risk and expected total cost, reflecting the increase in cost required to mitigate risk. Compared with stationary approaches, which generate only one expected total cost curve and therefore a single design flood estimate, the proposed approach generates design flood estimation intervals, and the 'Risk-Cost' approach selects a design flood value from these intervals based on the trade-offs between extreme flood risk and expected total cost. This study provides a new approach towards a better understanding of the influence of non-stationarity on expected total cost and design floods, and could be beneficial to cost-benefit based non-stationary design flood estimation across the world.
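A toy version of the 'Risk-Cost' idea is sketched below with hypothetical numbers: a non-stationary annual exceedance probability gives the extreme flood risk over a design life via R = 1 - prod_t(1 - p_t), while the expected total cost adds a construction cost curve to expected damages.

```python
# Toy version (hypothetical numbers) of the 'Risk-Cost' idea: a non-stationary
# annual exceedance probability gives the flood risk over a design life, and the
# expected total cost adds construction cost to expected damages.
import numpy as np

def annual_exceedance(design_q, year):
    # Hypothetical non-stationary model: the protection level erodes slowly over time.
    return min(1.0, 0.01 * (1500.0 / design_q) ** 3 * (1.0 + 0.01 * year))

def design_life_risk(design_q, life=50):
    p = np.array([annual_exceedance(design_q, t) for t in range(life)])
    return 1.0 - np.prod(1.0 - p)          # P(at least one exceedance in the design life)

def expected_total_cost(design_q, life=50, damage=5.0e7):
    construction = 2.0e4 * design_q        # hypothetical construction cost curve
    expected_damage = sum(annual_exceedance(design_q, t) * damage for t in range(life))
    return construction + expected_damage

for q in (1500.0, 2000.0, 2500.0, 3000.0):
    print(f"design flood {q:.0f}: risk = {design_life_risk(q):.3f}, "
          f"expected total cost = {expected_total_cost(q):.3e}")
```

Raising the design flood lowers the design-life risk but pushes up construction cost, which is the trade-off the 'Risk-Cost' analysis exposes.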
Using a value chain approach for effective decision making.
Wilner, N A
1997-09-01
Effectively managing costs in a healthcare environment may require taking a new look at how those costs are evaluated. The price of a product is not necessarily the most effective or efficient way of determining the actual cost. Using a value chain approach takes into consideration the functional costs of using a product as well, including both the "process" and "downstream" costs to an organization. In this article, Associate Professor Neil A. Wilner examines the differences between price and cost using a typical purchase in a healthcare environment.
ERIC Educational Resources Information Center
Estes, Gary D.
The paper focuses on the Title I Evaluation Technical Assistance Centers to illustrate issues of measuring costs and deciding on outcome criteria before promoting "cost-effective" approaches. Effects are illustrated for varying resource allocations among personnel, travel, materials, and phone costs as a function of emphasizing…
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study's projected scientific and/or practical value to its total cost. By showing that a study's projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
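The sketch below illustrates the two rules numerically for a one-sample z-test, using hypothetical fixed, linear, and mildly convex recruitment costs (the convex term gives rule 1 an interior minimum) and comparing the resulting sample sizes with the conventional 80%-power choice.

```python
# Numerical illustration (hypothetical costs) of the two cost-efficiency rules:
# minimise average cost per subject, or minimise total cost / sqrt(n), compared with
# a conventional 80%-power sample size for a one-sample z-test.
import numpy as np
from scipy.stats import norm

c0, c1, c2 = 50000.0, 400.0, 10.0        # fixed, linear, and convex recruitment cost terms
effect, alpha = 0.25, 0.05               # standardised effect size at the chosen alternative

n = np.arange(10, 2001)
total_cost = c0 + c1 * n + c2 * n ** 1.5
power = norm.cdf(np.sqrt(n) * effect - norm.ppf(1 - alpha / 2))

n_avg = n[np.argmin(total_cost / n)]             # rule 1: min average cost per subject
n_root = n[np.argmin(total_cost / np.sqrt(n))]   # rule 2: min total cost / sqrt(n)
n_80 = n[np.argmax(power >= 0.8)]                # conventional 80%-power choice

for label, nn in [("min cost/n", n_avg), ("min cost/sqrt(n)", n_root), ("80% power", n_80)]:
    print(f"{label:18s} n = {int(nn):4d}, power = {power[nn - 10]:.2f}")
```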
Aghdasi, Nava; Whipple, Mark; Humphreys, Ian M; Moe, Kris S; Hannaford, Blake; Bly, Randall A
2018-06-01
Successful multidisciplinary treatment of skull base pathology requires precise preoperative planning. Current surgical approach (pathway) selection for these complex procedures depends on an individual surgeon's experiences and background training. Because of anatomical variation in both normal tissue and pathology (eg, tumor), a successful surgical pathway used on one patient is not necessarily the best approach on another patient. The question is how to define and obtain optimized patient-specific surgical approach pathways. In this article, we demonstrate that the surgeon's knowledge and decision making in preoperative planning can be modeled by a multiobjective cost function in a retrospective analysis of actual complex skull base cases. Two different approaches, a weighted-sum approach and Pareto optimality, were used with a defined cost function to derive optimized surgical pathways based on preoperative computed tomography (CT) scans and manually designated pathology. With the first method, the surgeon's preferences were input as a set of weights for each objective before the search. In the second approach, the surgeon's preferences were used to select a surgical pathway from the computed Pareto optimal set. Using preoperative CT and magnetic resonance imaging, the patient-specific surgical pathways derived by these methods were similar (85% agreement) to the actual approaches performed on patients. In one case where the actual surgical approach was different, revision surgery was required and was performed utilizing the computationally derived approach pathway.
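A toy version of the two selection schemes, with hypothetical pathway scores on two objectives to be minimised, is sketched below: a weighted sum encodes preferences before the search, while the Pareto filter defers the preference to a choice among non-dominated pathways.

```python
# Toy example (hypothetical scores): candidate pathways evaluated on two objectives
# to be minimised, compared via (i) a weighted sum and (ii) the Pareto-optimal set.
import numpy as np

# columns: [tissue disruption score, proximity-to-critical-structure penalty]
pathways = np.array([[3.0, 8.0],
                     [4.0, 4.0],
                     [6.0, 3.0],
                     [7.0, 7.0],
                     [9.0, 2.0]])

def pareto_mask(costs):
    """A point is Pareto-optimal if no other point is <= in all objectives and < in one."""
    n = len(costs)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated_by = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        mask[i] = not dominated_by.any()
    return mask

weights = np.array([0.3, 0.7])                       # surgeon's preferences, chosen a priori
weighted_pick = int(np.argmin(pathways @ weights))
print("weighted-sum choice: pathway", weighted_pick)
print("Pareto-optimal pathways:", np.flatnonzero(pareto_mask(pathways)))
```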
Reliability and Validity in Hospital Case-Mix Measurement
Pettengill, Julian; Vertrees, James
1982-01-01
There is widespread interest in the development of a measure of hospital output. This paper describes the problem of measuring the expected cost of the mix of inpatient cases treated in a hospital (hospital case-mix) and a general approach to its solution. The solution is based on a set of homogenous groups of patients, defined by a patient classification system, and a set of estimated relative cost weights corresponding to the patient categories. This approach is applied to develop a summary measure of the expected relative costliness of the mix of Medicare patients treated in 5,576 participating hospitals. The Medicare case-mix index is evaluated by estimating a hospital average cost function. This provides a direct test of the hypothesis that the relationship between Medicare case-mix and Medicare cost per case is proportional. The cost function analysis also provides a means of simulating the effects of classification error on our estimate of this relationship. Our results indicate that this general approach to measuring hospital case-mix provides a valid and robust measure of the expected cost of a hospital's case-mix. PMID:10309909
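The core index construction can be written in a few lines; the sketch below uses hypothetical caseloads and relative cost weights to compute a case-mix index and a case-mix-adjusted cost per case.

```python
# Toy illustration (hypothetical numbers) of a case-mix index: a caseload-weighted
# average of relative cost weights, and its use to case-mix-adjust cost per case.
import numpy as np

cases = np.array([120, 300, 80, 50])           # discharges per patient category
weights = np.array([0.8, 1.0, 1.6, 2.4])       # estimated relative cost weights per category

cmi = (cases * weights).sum() / cases.sum()    # case-mix index for this hospital
observed_cost_per_case = 5200.0
adjusted_cost = observed_cost_per_case / cmi   # cost per case standardised for case mix
print(f"CMI = {cmi:.3f}, case-mix-adjusted cost per case = {adjusted_cost:.0f}")
# A proportional relationship between the case-mix index and cost per case (for example,
# a unit elasticity in a log-log regression) is what the paper's cost-function analysis tests.
```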
Liu, Derong; Wang, Ding; Wang, Fei-Yue; Li, Hongliang; Yang, Xiong
2014-12-01
In this paper, the infinite horizon optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems is investigated using neural-network-based online solution of Hamilton-Jacobi-Bellman (HJB) equation. By establishing an appropriate bounded function and defining a modified cost function, the optimal robust guaranteed cost control problem is transformed into an optimal control problem. It can be observed that the optimal cost function of the nominal system is nothing but the optimal guaranteed cost of the original uncertain system. A critic neural network is constructed to facilitate the solution of the modified HJB equation corresponding to the nominal system. More importantly, an additional stabilizing term is introduced for helping to verify the stability, which reinforces the updating process of the weight vector and reduces the requirement of an initial stabilizing control. The uniform ultimate boundedness of the closed-loop system is analyzed by using the Lyapunov approach as well. Two simulation examples are provided to verify the effectiveness of the present control approach.
NASA Astrophysics Data System (ADS)
Teeples, Ronald; Glyer, David
1987-05-01
Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.
Component Cost Reduction by Value Engineering: A Case Study
NASA Astrophysics Data System (ADS)
Kalluri, Vinayak; Kodali, Rambabu
2017-04-01
The concept of value engineering (VE) aims to increase the value of a product by improving existing functions without increasing their costs. In other words, VE is a function-oriented, systematic team approach to providing value in a product, system, or service. The authors systematically explore VE through the six-step framework proposed by SAVE, and a case study is presented that addresses the reduction of the cost of a hydraulic steering cylinder without compromising its function, using the aforementioned VE framework.
Bhat; Bergstrom; Teasley; Bowker; Cordell
1998-01-01
This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characteristics. The individual travel cost method was employed to estimate recreation demand functions for activities such as motor boating and waterskiing, developed and primitive camping, coldwater fishing, sightseeing and pleasure driving, and big game hunting for each ecoregion. While our ecoregional approach differs conceptually from previous work, our results appear consistent with previous travel cost method valuation studies. KEY WORDS: Recreation; Ecoregion; Travel cost method; Truncated Poisson model
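The sketch below illustrates the individual travel cost method on synthetic data (not the study's dataset): a zero-truncated Poisson demand model is fitted by maximum likelihood, and per-trip consumer surplus is approximated by -1/b1 in the semi-log specification.

```python
# Illustrative zero-truncated Poisson travel cost model on synthetic data (not the
# paper's dataset): trips ~ Poisson(exp(b0 + b1 * travel_cost)) observed only when > 0,
# with per-trip consumer surplus estimated as -1 / b1 in the semi-log specification.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)
n = 500
travel_cost = rng.uniform(5.0, 80.0, n)
lam_true = np.exp(2.0 - 0.03 * travel_cost)
trips = rng.poisson(lam_true)
keep = trips > 0                              # on-site sampling only observes participants
tc, y = travel_cost[keep], trips[keep]

def neg_loglik(beta):
    lam = np.exp(beta[0] + beta[1] * tc)
    ll = y * np.log(lam) - lam - gammaln(y + 1) - np.log1p(-np.exp(-lam))
    return -ll.sum()

fit = minimize(neg_loglik, x0=np.array([1.0, -0.01]), method="BFGS")
b0, b1 = fit.x
print(f"estimates: b0 = {b0:.3f}, b1 = {b1:.4f}, CS per trip ~ {-1.0 / b1:.1f}")
```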
NASA Technical Reports Server (NTRS)
Lin, N. J.; Quinn, R. D.
1991-01-01
A locally-optimal trajectory management (LOTM) approach is analyzed, and it is found that care should be taken in choosing the Ritz expansion and cost function. A modified cost function for the LOTM approach is proposed which includes the kinetic energy along with the base reactions in a weighted and scaled sum. The effects of the modified functions are demonstrated with numerical examples for robots operating in two- and three-dimensional space. It is pointed out that this modified LOTM approach shows good performance: the reactions do not fluctuate greatly, the joint velocities reach their objectives at the end of the manipulation, and the CPU time is slightly more than twice the manipulation time.
Dense image registration through MRFs and efficient linear programming.
Glocker, Ben; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir; Paragios, Nikos
2008-12-01
In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In such a context, the registration problem is formulated using a discrete Markov random field objective function. First, towards dimensionality reduction on the variables we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed using a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, and a smoothness term that penalizes local deviations on the deformation field according to a neighborhood system on the grid. Towards a discrete approach, the search space is quantized resulting in a fully discrete model. In order to account for large deformations and produce results on a high resolution level, a multi-scale incremental approach is considered where the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using the primal dual principles is considered to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potentials of our approach.
Elbasha, Elamin H
2005-05-01
The availability of patient-level data from clinical trials has spurred a lot of interest in developing methods for quantifying and presenting uncertainty in cost-effectiveness analysis (CEA). Although the majority has focused on developing methods for using sample data to estimate a confidence interval for an incremental cost-effectiveness ratio (ICER), a small strand of the literature has emphasized the importance of incorporating risk preferences and the trade-off between the mean and the variance of returns to investment in health and medicine (mean-variance analysis). This paper shows how the exponential utility-moment-generating function approach is a natural extension to this branch of the literature for modelling choices from healthcare interventions with uncertain costs and effects. The paper assumes an exponential utility function, which implies constant absolute risk aversion, and is based on the fact that the expected value of this function results in a convenient expression that depends only on the moment-generating function of the random variables. The mean-variance approach is shown to be a special case of this more general framework. The paper characterizes the solution to the resource allocation problem using standard optimization techniques and derives the summary measure researchers need to estimate for each programme, when the assumption of risk neutrality does not hold, and compares it to the standard incremental cost-effectiveness ratio. The importance of choosing the correct distribution of costs and effects and the issues related to estimation of the parameters of the distribution are also discussed. An empirical example to illustrate the methods and concepts is provided. Copyright 2004 John Wiley & Sons, Ltd
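A minimal numerical sketch of the exponential-utility/MGF idea is given below with hypothetical programme means and variances: because E[-exp(-rX)] = -M_X(-r), the certainty equivalent of a normally distributed net monetary benefit reduces to the mean minus r/2 times the variance, recovering the mean-variance rule as a special case.

```python
# Sketch of the exponential-utility / moment-generating-function idea with
# hypothetical numbers: for net monetary benefit X and utility u(x) = -exp(-r x),
# E[u(X)] = -M_X(-r), and for normal X the certainty equivalent is mu - r*var/2.
r = 3e-5                                      # coefficient of absolute risk aversion
wtp = 50000.0                                 # willingness to pay per unit of effect

programmes = {                                # (mean cost, var cost, mean effect, var effect)
    "A": (12000.0, 3000.0**2, 0.40, 0.05**2),
    "B": (20000.0, 8000.0**2, 0.58, 0.15**2),
}

def certainty_equivalent(mc, vc, me, ve):
    mu = wtp * me - mc                        # mean net monetary benefit
    var = wtp**2 * ve + vc                    # variance (costs and effects assumed independent)
    return mu - 0.5 * r * var                 # = -(1/r) * log(-E[u]) for normal X

for name, params in programmes.items():
    print(name, f"CE of net benefit = {certainty_equivalent(*params):.0f}")
# With these numbers, B has the higher mean net benefit but A the higher certainty
# equivalent, so a risk-averse decision maker would rank them differently than a
# risk-neutral one.
```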
NASA Astrophysics Data System (ADS)
Jolivet, L.; Cohen, M.; Ruas, A.
2015-08-01
Landscape influences fauna movement at different levels, from habitat selection to the choice of movement direction. Our goal is to provide a development framework in order to test simulation functions for animal movement. We describe our approach for such simulations and compare two types of functions to calculate trajectories. To do so, we first modelled the role of landscape elements to differentiate between elements that facilitate movements and those that act as hindrances. Different influences are identified depending on landscape elements and on animal species. Knowledge was gathered from ecologists, the literature, and observation datasets. Second, we analysed descriptions of animal movement recorded with GPS at fine scale, corresponding to high temporal frequency and good location accuracy. Analysing this type of data provides information on the relation between landscape features and movements. We implemented an agent-based simulation approach to calculate potential trajectories constrained by the spatial environment and the individual's behaviour. We tested two functions that consider space differently: one function takes into account the geometry and the types of landscape elements, and one cost function sums up the spatial surroundings of an individual. Results highlight the fact that the cost function exaggerates the distances travelled by an individual and simplifies movement patterns. The geometry-accurate function represents a good bottom-up approach for discovering interesting areas or obstacles for movements.
A Genetic Algorithm for the Generation of Packetization Masks for Robust Image Communication
Zapata-Quiñones, Katherine; Duran-Faundez, Cristian; Gutiérrez, Gilberto; Lecuire, Vincent; Arredondo-Flores, Christopher; Jara-Lipán, Hugo
2017-01-01
Image interleaving has proven to be an effective way to provide robustness in image communication systems when resource limitations make reliable protocols unsuitable (e.g., in wireless camera sensor networks); however, the search for optimal interleaving patterns is scarcely tackled in the literature. In 2008, Rombaut et al. presented an interesting approach introducing a packetization mask generator based on Simulated Annealing (SA), including a cost function that allows the suitability of a packetization pattern to be assessed without extensive simulations. In this work, we present a complementary study of the non-trivial problem of generating optimal packetization patterns. We propose a genetic algorithm as an alternative to the cited work, adopting the same cost function, and compare it to the SA approach and a torus automorphism interleaver. In addition, we validate the cost function and provide results that assess its implications for the quality of reconstructed images. Several scenarios based on visual sensor network applications were tested in a computer application. Results in terms of the selected cost function and the image quality metric PSNR show that our algorithm yields results similar to those of the other approaches. Finally, we discuss the obtained results and comment on open research challenges. PMID:28452934
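The sketch below is a compact genetic algorithm skeleton in the spirit of this work, but with a placeholder fitness (the smallest within-packet pixel distance) rather than the cost function of Rombaut et al.; the block size, packet count, and GA settings are arbitrary.

```python
# Compact GA skeleton in the spirit of the paper (placeholder fitness, not the
# cost function of Rombaut et al.): pixels of an 8x8 block are permuted and split
# into 4 packets; fitness rewards packets whose pixels are spatially spread out.
import numpy as np

rng = np.random.default_rng(0)
H = W = 8
N, PACKETS = H * W, 4
SIZE = N // PACKETS
coords = np.array([(i // W, i % W) for i in range(N)], dtype=float)

def fitness(perm):
    """Smallest within-packet pairwise distance (larger is better)."""
    worst = np.inf
    for p in range(PACKETS):
        pts = coords[perm[p * SIZE:(p + 1) * SIZE]]
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        worst = min(worst, d.min())
    return worst

def order_crossover(p1, p2):
    a, b = sorted(rng.choice(N, 2, replace=False))
    child = np.full(N, -1)
    child[a:b] = p1[a:b]
    taken = set(p1[a:b].tolist())
    child[child < 0] = [g for g in p2 if int(g) not in taken]
    return child

def mutate(perm, rate=0.2):
    if rng.random() < rate:
        i, j = rng.choice(N, 2, replace=False)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

pop = [rng.permutation(N) for _ in range(40)]
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = [pop[i] for i in np.argsort(-scores)[:20]]        # truncation selection
    children = [mutate(order_crossover(parents[rng.integers(20)],
                                       parents[rng.integers(20)]))
                for _ in range(20)]
    pop = parents + children

best = max(pop, key=fitness)
print("best minimum within-packet distance:", fitness(best))
print("packetization mask:\n", np.argsort(best).reshape(H, W) // SIZE)
```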
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe
2016-07-28
Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.
NASA Technical Reports Server (NTRS)
1974-01-01
A management approach for the Earth Observatory Satellite (EOS) which will meet the challenge of a constrained cost environment is presented. Areas of consideration are contracting techniques, test philosophy, reliability and quality assurance requirements, commonality options, and documentation and control requirements. The various functional areas which were examined for cost reduction possibilities are identified. The recommended management approach is developed to show the primary and alternative methods.
A cost-function approach to rival penalized competitive learning (RPCL).
Ma, Jinwen; Wang, Taijun
2006-08-01
Rival penalized competitive learning (RPCL) has been shown to be a useful tool for clustering on a set of sample data in which the number of clusters is unknown. However, the RPCL algorithm was proposed heuristically and still lacks a mathematical theory describing its convergence behavior. In order to solve the convergence problem, we investigate it via a cost-function approach. By theoretical analysis, we prove that a general form of RPCL, called distance-sensitive RPCL (DSRPCL), is associated with the minimization of a cost function on the weight vectors of a competitive learning network. As a DSRPCL process decreases the cost to a local minimum, a number of weight vectors eventually fall into a hypersphere surrounding the sample data, while the other weight vectors diverge to infinity. Moreover, theoretical analysis and simulation experiments show that if the cost is reduced to the global minimum, the correct number of weight vectors is automatically selected, and these vectors are located around the centers of the actual clusters. Finally, we apply the DSRPCL algorithms to unsupervised color image segmentation and classification of the wine data.
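For intuition, the sketch below implements a plain RPCL-style update (winner attracted, rival repelled with a small de-learning rate) on synthetic 2-D data; it is not the paper's exact DSRPCL cost-minimisation algorithm, but it shows surplus units being driven away so that the cluster number selects itself.

```python
# Minimal RPCL-style sketch (not the paper's exact DSRPCL algorithm): the winner
# is attracted to each sample, the rival is repelled with a small de-learning rate,
# so surplus units are driven away from the data and the cluster number self-selects.
import numpy as np

rng = np.random.default_rng(2)
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
data = np.vstack([c + rng.normal(0, 0.4, (200, 2)) for c in centers])

k_init, lr, de_lr = 6, 0.05, 0.005              # start with more units than true clusters
units = rng.uniform(-1.0, 6.0, (k_init, 2))

for epoch in range(50):
    for x in rng.permutation(data):
        d = np.linalg.norm(units - x, axis=1)
        winner, rival = np.argsort(d)[:2]
        units[winner] += lr * (x - units[winner])      # attract winner
        units[rival] -= de_lr * (x - units[rival])     # penalise (repel) rival

inside = np.linalg.norm(units - data.mean(axis=0), axis=1) < 10.0
print("units remaining near the data:\n", units[inside])
```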
Sendi, Pedram
2008-06-01
When choosing from a menu of treatment alternatives, the optimal treatment depends on the objective function and the assumptions of the model. The classical decision rule of cost-effectiveness analysis may be formulated via two different objective functions: (i) maximising health outcomes subject to the budget constraint or (ii) maximising the net benefit of the intervention with the budget being determined ex post. We suggest a more general objective function of (iii) maximising return on investment from available resources with consideration of health and non-health investments. The return on investment approach allows the analysis to be adjusted for the benefits forgone by alternative non-health investments from a societal or subsocietal perspective. We show that in the presence of positive returns on non-health investments, the decision-maker's willingness to pay per unit of effect for a treatment program needs to be higher than its incremental cost-effectiveness ratio for the program to be considered cost-effective.
NASA Astrophysics Data System (ADS)
Zhang, Yuli; Han, Jun; Weng, Xinqian; He, Zhongzhu; Zeng, Xiaoyang
This paper presents an Application-Specific Instruction-set Processor (ASIP) for the SHA-3 candidate BLAKE algorithm family, built through instruction set extensions (ISE) of a RISC (reduced instruction set computer) processor. Through a design space exploration of this ASIP to increase performance and reduce area cost, we accomplish an efficient hardware and software implementation of the BLAKE algorithm. The special instructions and their well-matched hardware function unit speed up the key section of the algorithm, namely the G-functions. Also, relaxing the timing constraint of the special function unit decreases its hardware cost while keeping the high data throughput of the processor. Evaluation results show that the ASIP achieves 335 Mbps and 176 Mbps for BLAKE-256 and BLAKE-512, respectively. The extra area cost is only 8.06k equivalent gates. The proposed ASIP outperforms several software approaches on various platforms in cycles per byte. In fact, both the high throughput and the low hardware cost achieved by this programmable processor are comparable to those of ASIC implementations.
Local Systems: Design and Costs.
ERIC Educational Resources Information Center
Gozzi, Cynthia I.
1980-01-01
Suggests that a less rigid traditional approach towards automating acquisitions functions might be more cost effective. Thorough investigation of available alternatives should precede a decision to adopt or maintain a local system. (Author/RAA)
Williams, Claire; Lewsey, James D; Briggs, Andrew H; Mackay, Daniel F
2017-05-01
This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modeling approach. Alongside the tutorial, we provide easy-to-use functions in the statistics package R. We argue that this multi-state modeling approach using a package such as R has advantages over approaches where models are built in a spreadsheet package. In particular, using a syntax-based approach means there is a written record of what was done and the calculations are transparent. Reproducing the analysis is straightforward as the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, which also has the option to use a state-arrival extended approach. In the state-arrival extended multi-state model, a covariate that represents patients' history is included, allowing the Markov property to be tested. We illustrate the building of multi-state survival models, making predictions from the models and assessing fits. We then proceed to perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create 2 common methods of visualizing the results-namely, cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R. It is based on adaptions to functions in the existing R package mstate to accommodate parametric multi-state modeling that facilitates extrapolation of survival curves.
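Since the tutorial itself works in R with the mstate package, the block below is only a generic Python sketch of the simpler Markov decision-analytic alternative it mentions: a three-state cohort model with hypothetical transition probabilities, costs, and utilities, yielding discounted costs, QALYs, and an ICER.

```python
# Generic three-state Markov cohort sketch (hypothetical inputs; the tutorial itself
# uses R and the mstate package): discounted costs, QALYs, and the ICER of a new
# treatment versus standard care over a fixed time horizon.
import numpy as np

utilities = np.array([0.80, 0.50, 0.00])          # QALY weight per state per cycle (stable, progressed, dead)
state_costs = np.array([2000.0, 8000.0, 0.0])     # annual care cost per state

def run_model(p_progress, p_die_stable, p_die_prog, drug_cost, cycles=30, disc=0.035):
    P = np.array([[1 - p_progress - p_die_stable, p_progress, p_die_stable],
                  [0.0, 1 - p_die_prog, p_die_prog],
                  [0.0, 0.0, 1.0]])
    cohort = np.array([1.0, 0.0, 0.0])             # everyone starts in "stable"
    cost = qaly = 0.0
    for t in range(cycles):
        df = 1.0 / (1.0 + disc) ** t
        cost += df * cohort @ (state_costs + np.array([drug_cost, 0.0, 0.0]))
        qaly += df * cohort @ utilities
        cohort = cohort @ P
    return cost, qaly

cost_std, qaly_std = run_model(0.15, 0.02, 0.20, drug_cost=0.0)
cost_new, qaly_new = run_model(0.10, 0.02, 0.20, drug_cost=3000.0)
icer = (cost_new - cost_std) / (qaly_new - qaly_std)
print(f"ICER = {icer:,.0f} per QALY gained")
```

The multi-state survival approach in the tutorial replaces these fixed transition probabilities with fitted transition hazards, which is what enables extrapolation and the state-arrival extension.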
Cost and Effectiveness of an Educational Program for Autistic Children Using a Systems Approach.
ERIC Educational Resources Information Center
Hung, David W.; And Others
1983-01-01
A systems approach, which features behavioral assessments, a functional curriculum, behavior management, precision teaching, systematic use of reinforcement, and a structured teaching schedule, resulted in greater learning of functional skills and increased structured teaching time per day compared to two control treaments for 12 autistic…
Technologies, processes, business approaches, and policies to drive clean energy technology cost reductions
Local Minima Free Parameterized Appearance Models
Nguyen, Minh Hoai; De la Torre, Fernando
2010-01-01
Parameterized Appearance Models (PAMs) (e.g. Eigentracking, Active Appearance Models, Morphable Models) are commonly used to model the appearance and shape variation of objects in images. While PAMs have numerous advantages relative to alternate approaches, they have at least two drawbacks. First, they are especially prone to local minima in the fitting process. Second, often few if any of the local minima of the cost function correspond to acceptable solutions. To solve these problems, this paper proposes a method to learn a cost function by explicitly optimizing that the local minima occur at and only at the places corresponding to the correct fitting parameters. To the best of our knowledge, this is the first paper to address the problem of learning a cost function to explicitly model local properties of the error surface to fit PAMs. Synthetic and real examples show improvement in alignment performance in comparison with traditional approaches. PMID:21804750
Life cycle costing of food waste: A review of methodological approaches.
De Menna, Fabio; Dietershagen, Jana; Loubiere, Marion; Vittuari, Matteo
2018-03-01
Food waste (FW) is a global problem that is receiving increasing attention due to its environmental and economic impacts. Appropriate FW prevention, valorization, and management routes could mitigate or avoid these effects. Life cycle thinking and approaches, such as life cycle costing (LCC), may represent suitable tools to assess the sustainability of these routes. This study analyzes different LCC methodological aspects and approaches to evaluate FW management and valorization routes. A systematic literature review was carried out with a focus on different LCC approaches, their application to food, FW, and waste systems, as well as on specific methodological aspects. The review consisted of three phases: a collection phase, an iterative phase with experts' consultation, and a final literature classification. Journal papers and reports were retrieved from selected databases and search engines. The standardization of LCC methodologies is still in its infancy due to a lack of consensus over definitions and approaches. Research on the life cycle cost of FW is limited and generally focused on FW management, rather than prevention or valorization of specific flows. FW prevention, valorization, and management require a consistent integration of LCC and Life Cycle Assessment (LCA) to avoid tradeoffs between environmental and economic impacts. This entails a proper investigation of methodological differences between attributional and consequential modelling in LCC, especially with regard to functional unit, system boundaries, multi-functionality, included cost, and assessed impacts. Further efforts could also aim at finding the most effective and transparent categorization of costs, in particular when dealing with multiple stakeholders sustaining costs of FW. Interpretation of results from LCC of FW should take into account the effect on larger economic systems. Additional key performance indicators and analytical tools could be included in consequential approaches. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
Marsalek, Ondrej; Markland, Thomas E
2016-02-07
Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
Algorithm For Optimal Control Of Large Structures
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Garba, John A..; Utku, Senol
1989-01-01
Cost of computation appears competitive with other methods. Problem to compute optimal control of forced response of structure with n degrees of freedom identified in terms of smaller number, r, of vibrational modes. Article begins with Hamilton-Jacobi formulation of mechanics and use of quadratic cost functional. Complexity reduced by alternative approach in which quadratic cost functional expressed in terms of control variables only. Leads to iterative solution of second-order time-integral matrix Volterra equation of second kind containing optimal control vector. Cost of algorithm, measured in terms of number of computations required, is of order of, or less than, cost of prior algorithms applied to similar problems.
Low cost Ku-band earth terminals for voice/data/facsimile
NASA Technical Reports Server (NTRS)
Kelley, R. L.
1977-01-01
A Ku-band satellite earth terminal capable of providing two-way voice/facsimile teleconferencing, 128 Kbps data, telephone, and high-speed imagery services is proposed. Optimized terminal cost and configuration are presented as a function of FDMA and TDMA approaches to multiple access. The entire terminal, from the antenna to microphones, speakers, and facsimile equipment, is considered. Component cost versus performance has been projected as a function of the size of the procurement and of predicted hardware innovations and production techniques through 1985. The lowest-cost combinations of components have been determined using a computer optimization algorithm. The system requirements, including terminal EIRP and G/T, satellite size, power per spacecraft transponder, satellite antenna characteristics, and link propagation outage, were selected using a computerized system cost/performance optimization algorithm. System cost and terminal cost and performance requirements are presented as a function of the size of a nationwide U.S. network. Service costs are compared with typical conference travel costs to show the viability of the proposed terminal.
ERIC Educational Resources Information Center
Baker, Bruce D.; Green, Preston C., III
2009-01-01
The goal of this study is to apply a conventional education cost-function approach for estimating the sensitivity of cost models and predicted education costs to the inclusion of school district level racial composition variables and further to test whether race neutral alternatives sufficiently capture the additional costs associated with school…
Airfoil Design and Optimization by the One-Shot Method
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Taasan, Shlomo; Salas, M. D.
1995-01-01
An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
Berret, Bastien; Darlot, Christian; Jean, Frédéric; Pozzo, Thierry; Papaxanthis, Charalambos; Gauthier, Jean Paul
2008-01-01
An important question in the literature focusing on motor control is to determine which laws drive biological limb movements. This question has prompted numerous investigations analyzing arm movements in both humans and monkeys. Many theories assume that among all possible movements the one actually performed satisfies an optimality criterion. In the framework of optimal control theory, a first approach is to choose a cost function and test whether the proposed model fits with experimental data. A second approach (generally considered as the more difficult) is to infer the cost function from behavioral data. The cost proposed here includes a term called the absolute work of forces, reflecting the mechanical energy expenditure. Contrary to most investigations studying optimality principles of arm movements, this model has the particularity of using a cost function that is not smooth. First, a mathematical theory related to both direct and inverse optimal control approaches is presented. The first theoretical result is the Inactivation Principle, according to which minimizing a term similar to the absolute work implies simultaneous inactivation of agonistic and antagonistic muscles acting on a single joint, near the time of peak velocity. The second theoretical result is that, conversely, the presence of non-smoothness in the cost function is a necessary condition for the existence of such inactivation. Second, during an experimental study, participants were asked to perform fast vertical arm movements with one, two, and three degrees of freedom. Observed trajectories, velocity profiles, and final postures were accurately simulated by the model. In accordance, electromyographic signals showed brief simultaneous inactivation of opposing muscles during movements. Thus, assuming that human movements are optimal with respect to a certain integral cost, the minimization of an absolute-work-like cost is supported by experimental observations. Such types of optimality criteria may be applied to a large range of biological movements. PMID:18949023
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
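To illustrate the kind of parametric roll-up described above (the WBS elements, cost-estimating relationships, and phase factors below are hypothetical, not the study's proprietary model), a minimal sketch:

    # Toy parametric roll-up through a work breakdown structure (WBS).
    # The WBS elements, cost-estimating relationships (CERs), and phase
    # weights are invented; they only illustrate segregating cost by phase.
    wbs = {
        "aerobrake":   {"mass_kg": 1200, "cer": lambda m: 0.9 * m ** 0.85},
        "crew_module": {"mass_kg": 4500, "cer": lambda m: 1.4 * m ** 0.80},
        "tank_module": {"mass_kg": 3000, "cer": lambda m: 0.6 * m ** 0.90},
        "lander":      {"mass_kg": 2500, "cer": lambda m: 1.1 * m ** 0.82},
    }
    phase_factors = {"development": 1.00, "production": 0.45,
                     "integration": 0.15, "launch_support": 0.10}

    def life_cycle_cost(wbs, phase_factors):
        base = {name: el["cer"](el["mass_kg"]) for name, el in wbs.items()}
        by_phase = {ph: f * sum(base.values()) for ph, f in phase_factors.items()}
        return base, by_phase

    base, by_phase = life_cycle_cost(wbs, phase_factors)
    print(by_phase, sum(by_phase.values()))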
Dual-Use Aspects of System Health Management
NASA Technical Reports Server (NTRS)
Owens, P. R.; Jambor, B. J.; Eger, G. W.; Clark, W. A.
1994-01-01
System Health Management functionality is an essential part of any space launch system. Health management functionality is an integral part of mission reliability, since it is needed to verify the reliability before the mission starts. Health Management is also a key factor in life cycle cost reduction and in increasing system availability. The degree of coverage needed by the system and the degree of coverage made available at a reasonable cost are critical parameters of a successful design. These problems are not unique to the launch vehicle world. In particular, the Intelligent Vehicle Highway System, commercial aircraft systems, train systems, and many types of industrial production facilities require various degrees of system health management. In all of these applications, too, the designers must balance the benefits and costs of health management in order to optimize costs. The importance of an integrated system is emphasized. That is, we present the case for considering health management as an integral part of system design, rather than functionality to be added on at the end of the design process. The importance of maintaining the system viewpoint is discussed in making hardware and software tradeoffs and in arriving at design decisions. We describe an approach to determine the parameters to be monitored in any system health management application. This approach is based on Design of Experiments (DOE), prototyping, failure modes and effects analyses, cost modeling and discrete event simulation. The various computer-based tools that facilitate the approach are discussed. The approach described originally was used to develop a fault tolerant avionics architecture for launch vehicles that incorporated health management as an integral part of the system. Finally, we discuss generalizing the technique to apply it to other domains. Several illustrations are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horowitz, Kelsey A; Ding, Fei; Mather, Barry A
The increasing deployment of distributed photovoltaic systems (DPV) can impact operations at the distribution level and the transmission level of the electric grid. It is important to develop and implement forward-looking approaches for calculating distribution upgrade costs that can be used to inform system planning, market and tariff design, cost allocation, and other policymaking as penetration levels of DPV increase. Using a bottom-up approach that involves iterative hosting capacity analysis, this report calculates distribution upgrade costs as a function of DPV penetration on three real feeders - two in California and one in the Northeastern United States.
NASA Astrophysics Data System (ADS)
Doerr, Timothy P.; Alves, Gelio; Yu, Yi-Kuo
2005-08-01
Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time using the transfer matrix technique or, equivalently, the dynamic programming approach. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data. For directed paths in random media, the scaling function depends on the particular realization of randomness; in the mass spectrometry case, the scaling function is spectrum-specific.
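A toy sketch of the re-ranking idea, assuming a tiny instance where the surrogate cost is additive over positions (the class dynamic programming can handle) and the full cost adds a non-separable interaction; all values are invented:

    # Toy re-ranking sketch: rank candidates by a separable surrogate cost,
    # keep the high-ranking ones, then re-assess them with the full cost,
    # which adds a pairwise interaction term. All numbers are invented.
    import itertools

    per_position = [  # surrogate cost of choosing symbol s at position i
        {"A": 1.0, "B": 2.0, "C": 0.5},
        {"A": 0.7, "B": 0.3, "C": 1.5},
        {"A": 1.1, "B": 0.9, "C": 0.4},
    ]

    def surrogate(seq):
        return sum(per_position[i][s] for i, s in enumerate(seq))

    def full_cost(seq):  # surrogate plus a non-separable interaction penalty
        return surrogate(seq) + 0.8 * sum(a == b for a, b in zip(seq, seq[1:]))

    candidates = list(itertools.product("ABC", repeat=3))
    top_k = sorted(candidates, key=surrogate)[:5]    # high-ranking by surrogate
    best = min(top_k, key=full_cost)                 # re-assessed with full cost
    print(top_k, best, full_cost(best))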
Geng, Chao; Luo, Wen; Tan, Yi; Liu, Hongmei; Mu, Jinbo; Li, Xinyang
2013-10-21
A novel approach to tip/tilt control using a divergence cost function in the stochastic parallel gradient descent (SPGD) algorithm for coherent beam combining (CBC) is proposed and demonstrated experimentally in a seven-channel 2-W fiber amplifier array with both phase-locking and tip/tilt control, for the first time to the best of our knowledge. Compared with the conventional power-in-the-bucket (PIB) cost function for SPGD optimization, the tip/tilt control using the divergence cost function ensures a wider correction range, automatic switching control of the program, and freedom from camera intensity saturation. A homemade piezoelectric-ring phase modulator (PZT PM) and an adaptive fiber-optics collimator (AFOC) are developed to correct piston- and tip/tilt-type aberrations, respectively. The PIB cost function is employed for phase-locking via maximization of SPGD optimization, while the divergence cost function is used for tip/tilt control via minimization. The average divergence metric decreased from 432 μrad in open loop to 89 μrad when tip/tilt control was implemented. In CBC, the power in the full width at half maximum (FWHM) of the main lobe increases by 32 times, and the phase residual error is less than λ/15.
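For readers unfamiliar with SPGD, the following is a generic sketch of one such iteration on a synthetic quadratic metric, not the authors' experimental loop; the gain, perturbation amplitude, and channel count are arbitrary assumptions.

    # Generic SPGD iteration (illustrative): apply antisymmetric random
    # perturbations to the control vector, measure the metric change, and
    # update in proportion to perturbation * delta-J. The quadratic "metric"
    # stands in for a measured divergence or power-in-the-bucket signal.
    import numpy as np
    rng = np.random.default_rng(0)

    def metric(u):                      # synthetic cost to be minimized
        return float(np.sum((u - 0.3) ** 2))

    u = np.zeros(7)                     # e.g. one control per channel
    gain, delta = -1.5, 0.1             # negative gain -> minimization
    for _ in range(500):
        du = delta * rng.choice([-1.0, 1.0], size=u.shape)
        dJ = metric(u + du) - metric(u - du)
        u += gain * dJ * du             # stochastic parallel gradient step
    print(metric(u))                    # much smaller than the initial value

A positive gain would maximize the metric instead, as with a power-in-the-bucket signal.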
Adaptive pattern recognition by mini-max neural networks as a part of an intelligent processor
NASA Technical Reports Server (NTRS)
Szu, Harold H.
1990-01-01
In this decade and progressing into the 21st century, NASA will have missions including Space Station and the Earth related Planet Sciences. To support these missions, a high degree of sophistication in machine automation and an increasing data processing throughput rate are necessary. Meeting these challenges requires intelligent machines, designed to support the necessary automations in a remote space and hazardous environment. There are two approaches to designing these intelligent machines. One of these is the knowledge-based expert system approach, namely AI. The other is a non-rule approach based on parallel and distributed computing for adaptive fault-tolerance, namely Neural or Natural Intelligence (NI). The union of AI and NI is the solution to the problem stated above. The NI segment of this unit extracts features automatically by applying Cauchy simulated annealing to a mini-max cost energy function. The features discovered by NI can then be passed to the AI system for future processing, and vice versa. This passing increases reliability, for AI can follow the NI-formulated algorithm exactly, and can provide the context knowledge base as the constraints of neurocomputing. The mini-max cost function that solves the unknown feature can furthermore give us a top-down architectural design of neural networks by means of Taylor series expansion of the cost function. A typical mini-max cost function consists of the sample variance of each class in the numerator, and separation of the center of each class in the denominator. Thus, when the total cost energy is minimized, the conflicting goals of intraclass clustering and interclass segregation are achieved simultaneously.
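A small numerical sketch of a mini-max style cost of the kind described (within-class sample variance over between-class center separation); the sample data and class labels are invented.

    # Mini-max style clustering cost (illustrative): sample variance of each
    # class in the numerator, separation of class centers in the denominator,
    # so minimizing it clusters within classes while separating between them.
    import numpy as np

    def minimax_cost(classes):
        within = sum(np.mean(np.sum((c - c.mean(axis=0)) ** 2, axis=1))
                     for c in classes)
        centers = np.array([c.mean(axis=0) for c in classes])
        diffs = centers[:, None, :] - centers[None, :, :]
        between = np.sum(diffs ** 2) / 2.0       # sum of pairwise separations
        return within / between

    rng = np.random.default_rng(1)
    a = rng.normal([0, 0], 0.3, size=(50, 2))
    b = rng.normal([2, 1], 0.3, size=(50, 2))
    print(minimax_cost([a, b]))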
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsalek, Ondrej; Markland, Thomas E., E-mail: tmarkland@stanford.edu
Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
A real-space approach to the X-ray phase problem
NASA Astrophysics Data System (ADS)
Liu, Xiangan
Over the past few decades, the phase problem of X-ray crystallography has been explored in reciprocal space in the so-called direct methods. Here we investigate the problem using a real-space approach that bypasses the laborious procedure of frequent Fourier synthesis and peak picking. Starting from a completely random structure, we move the atoms around in real space to minimize a cost function. A Monte Carlo method named simulated annealing (SA) is employed to search the global minimum of the cost function, which could be constructed in either real space or reciprocal space. In the hybrid minimal principle, we combine the dual-space costs together. One part of the cost function monitors the probability distribution of the phase triplets, while the other is a real-space cost function which represents the discrepancy between measured and calculated intensities. Compared to the single-space cost functions, the dual-space cost function has a greatly improved landscape and therefore could prevent the system from being trapped in metastable states. Thus, the structures of large molecules such as virginiamycin (C43H49N7O10·3CH3OH), isoleucinomycin (C60H102N6O18) and hexadecaisoleucinomycin (HEXIL) (C80H136N8O24) can now be solved, whereas it would not be possible using the single cost function. When a molecule gets larger, the configurational space becomes larger, and the requirement of CPU time increases exponentially. The method of improved Monte Carlo sampling has demonstrated its capability to solve large molecular structures. The atoms are encouraged to sample the high-density regions in space determined by an approximate density map which in turn is updated and modified by averaging and Fourier synthesis. This type of biased sampling has led to considerable reduction of the configurational space. It greatly improves the algorithm compared to the previous uniform sampling. Hence, for instance, 90% of computer run time could be cut in solving the complex structure of isoleucinomycin. Successful trial calculations include larger molecular structures such as HEXIL and a collagen-like peptide (PPG). Moving chemical fragments is proposed to reduce the degrees of freedom. Furthermore, stereochemical parameters are considered for geometric constraints and for a cost function related to chemical energy.
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.
CAD of control systems: Application of nonlinear programming to a linear quadratic formulation
NASA Technical Reports Server (NTRS)
Fleming, P.
1983-01-01
The familiar suboptimal regulator design approach is recast as a constrained optimization problem and incorporated in a Computer Aided Design (CAD) package where both design objective and constraints are quadratic cost functions. This formulation permits the separate consideration of, for example, model following errors, sensitivity measures and control energy as objectives to be minimized or limits to be observed. Efficient techniques for computing the interrelated cost functions and their gradients are utilized in conjunction with a nonlinear programming algorithm. The effectiveness of the approach and the degree of insight into the problem which it affords is illustrated in a helicopter regulation design example.
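A minimal sketch of the same recasting, assuming invented weighting matrices and an invented control-energy limit: a quadratic model-following objective minimized subject to a quadratic energy constraint using a general-purpose nonlinear programming routine.

    # Constrained quadratic-cost design sketch: minimize a model-following
    # cost subject to a control-energy cost kept below a limit. Matrices
    # and limits are invented; scipy's SLSQP handles the constraint.
    import numpy as np
    from scipy.optimize import minimize

    Q1 = np.diag([4.0, 1.0])        # model-following error weighting
    Q2 = np.diag([1.0, 2.0])        # control-energy weighting
    target = np.array([1.0, -0.5])

    objective = lambda p: float((p - target) @ Q1 @ (p - target))
    energy    = lambda p: float(p @ Q2 @ p)
    cons = [{"type": "ineq", "fun": lambda p: 1.5 - energy(p)}]  # energy <= 1.5

    res = minimize(objective, x0=np.zeros(2), constraints=cons, method="SLSQP")
    print(res.x, objective(res.x), energy(res.x))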
Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad
2018-02-01
The inherent volatility and unpredictable nature of renewable generation and load demand pose considerable challenges for energy exchange optimization of microgrids (MG). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints under load consumption and renewable power generation uncertainties. In so doing, three different risk-based strategies are distinguished by using the conditional value at risk (CVaR) approach. The proposed model is formulated with two distinct objective functions. The first function minimizes the operation and maintenance costs, the cost of power transaction between the upstream network and MGs, as well as the power loss cost, whereas the second function minimizes the energy not supplied (ENS) value. Furthermore, a stochastic scenario-based approach is incorporated into the model in order to handle the uncertainty. Also, the Kantorovich distance scenario reduction method has been implemented to reduce the computational burden. Finally, the non-dominated sorting genetic algorithm (NSGA-II) is applied to minimize the objective functions simultaneously, and the best solution is extracted by the fuzzy satisfying method with respect to the risk-based strategies. To indicate the performance of the proposed model, it is performed on the modified IEEE 33-bus distribution system, and the obtained results show that the presented approach can be considered an efficient tool for optimal energy exchange optimization of MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
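As a small illustration of the CVaR ingredient only (the scenario costs and confidence level below are invented, and this is not the paper's full multi-objective model):

    # Conditional value at risk (CVaR) of a scenario cost distribution:
    # the expected cost in the worst (1 - alpha) tail. Scenario costs and
    # the confidence level are invented for illustration.
    import numpy as np

    def cvar(costs, alpha=0.95):
        costs = np.asarray(costs, dtype=float)
        var = np.quantile(costs, alpha)           # value at risk
        tail = costs[costs >= var]
        return var, tail.mean()

    rng = np.random.default_rng(2)
    scenario_costs = rng.lognormal(mean=3.0, sigma=0.5, size=1000)
    print(cvar(scenario_costs, alpha=0.95))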
NASA Technical Reports Server (NTRS)
Jackson, Mark Charles
1994-01-01
Spacecraft proximity operations are complicated by the fact that exhaust plume impingement from the reaction control jets of space vehicles can cause structural damage, contamination of sensitive arrays and instruments, or attitude misalignment during docking. The occurrence and effect of jet plume impingement can be reduced by planning approach trajectories with plume effects considered. An A* node search is used to find plume-fuel optimal trajectories through a discretized six dimensional attitude-translation space. A plume cost function which approximates jet plume isopressure envelopes is presented. The function is then applied to find relative costs for predictable 'trajectory altering' firings and unpredictable 'deadbanding' firings. Trajectory altering firings are calculated by running the spacecraft jet selection algorithm and summing the cost contribution from each jet fired. A 'deadbanding effects' function is defined and integrated to determine the potential for deadbanding impingement along candidate trajectories. Plume costs are weighed against fuel costs in finding the optimal solution. A* convergence speed is improved by solving approach trajectory problems in reverse time. Results are obtained on a high fidelity space shuttle/space station simulation. Trajectory following is accomplished by a six degree of freedom autopilot. Trajectories planned with, and without, plume costs are compared in terms of force applied to the target structure.
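A toy A* sketch in the spirit of the abstract, assuming a 2-D grid instead of the six-dimensional attitude-translation space, unit fuel cost per move, and an invented plume penalty that decays with distance from the target.

    # Toy A* sketch on a 2-D grid: edge cost = fuel (one unit per move) plus a
    # "plume" penalty that grows near the target structure, with a Manhattan
    # distance heuristic. The real problem is 6-D; this only shows the search.
    import heapq

    target = (0, 0)                        # plume-sensitive target structure
    goal, start = (1, 0), (6, 5)

    def plume_penalty(cell):
        d = abs(cell[0] - target[0]) + abs(cell[1] - target[1])
        return 3.0 / (1 + d)               # invented isopressure-like falloff

    def astar(start, goal):
        h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
        frontier, best = [(h(start), 0.0, start, [start])], {start: 0.0}
        while frontier:
            _, g, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return g, path
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cell[0] + dx, cell[1] + dy)
                if not (0 <= nxt[0] <= 8 and 0 <= nxt[1] <= 8):
                    continue
                ng = g + 1.0 + plume_penalty(nxt)   # fuel + plume cost
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
        return None

    print(astar(start, goal))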
Initial Approaches for Discovery of Undocumented Functionality in FPGAs
2017-03-01
commercial pressures such as IP protection, support cost, and time to market, modern COTS devices contain many functions that are not exposed to the... market pressures have increased, industry increasingly uses the current generation device to do trial runs of next-generation architecture features... the product of industry operating in a highly cost-competitive market, and are not inserted with malicious intent; however, this does not preclude
NASA Technical Reports Server (NTRS)
Rhodes, Russel E.; Zapata, Edgar; Levack, Daniel J. H.; Robinson, John W.; Donahue, Benjamin B.
2009-01-01
Cost control must be implemented through the establishment of requirements and controlled continually by managing to these requirements. Cost control of the non-recurring side of life cycle cost has traditionally been implemented in both commercial and government programs. The government uses the budget process to implement this control. The commercial approach is to use a similar process of allocating the non-recurring cost to major elements of the program. This type of control generally manages through a work breakdown structure (WBS) by defining the major elements of the program. If the cost control is to be applied across the entire program life cycle cost (LCC), the approach must be addressed very differently. A functional breakdown structure (FBS) is defined and recommended. Use of a FBS provides the visibility to allow the choice of an integrated solution reducing the cost of providing many different elements of like function. The different functional solutions that drive the hardware logistics, quantity of documentation, operational labor, reliability and maintainability balance, and total integration of the entire system from DDT&E through the life of the program must be fully defined, compared, and final decisions made among these competing solutions. The major drivers of recurring cost have been identified and are presented and discussed. The LCC requirements must be established and flowed down to provide control of LCC. This LCC control will require a structured, rigid process similar to the one traditionally used to control weight/performance for space transportation systems throughout the entire program. It has been demonstrated over the last 30 years that without a firm requirement and methodically structured cost control, it is unlikely that affordable and sustainable space transportation system LCC will be achieved.
Population dynamics and mutualism: Functional responses of benefits and costs
Holland, J. Nathaniel; DeAngelis, Donald L.; Bronstein, Judith L.
2002-01-01
We develop an approach for studying population dynamics resulting from mutualism by employing functional responses based on density‐dependent benefits and costs. These functional responses express how the population growth rate of a mutualist is modified by the density of its partner. We present several possible dependencies of gross benefits and costs, and hence net effects, to a mutualist as functions of the density of its partner. Net effects to mutualists are likely a monotonically saturating or unimodal function of the density of their partner. We show that fundamental differences in the growth, limitation, and dynamics of a population can occur when net effects to that population change linearly, unimodally, or in a saturating fashion. We use the mutualism between senita cactus and its pollinating seed‐eating moth as an example to show the influence of different benefit and cost functional responses on population dynamics and stability of mutualisms. We investigated two mechanisms that may alter this mutualism's functional responses: distribution of eggs among flowers and fruit abortion. Differences in how benefits and costs vary with density can alter the stability of this mutualism. In particular, fruit abortion may allow for a stable equilibrium where none could otherwise exist.
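A minimal sketch of one possible benefit and cost functional response of the kind discussed, with a saturating gross benefit and a linear gross cost so the net effect is unimodal; the parameter values and logistic baseline are invented.

    # Net-effect functional responses (illustrative): gross benefit saturates
    # with partner density M while gross cost grows linearly, so the net
    # effect is unimodal in M. Parameter values are invented.
    import numpy as np

    def net_effect(M, a=1.0, b=2.0, c=0.15):
        benefit = a * M / (b + M)        # saturating gross benefit
        cost = c * M                     # linearly increasing gross cost
        return benefit - cost

    def dN_dt(N, M, r=0.1, K=100.0):
        # logistic growth of the mutualist, modified by the net effect of M
        return N * (r * (1 - N / K) + net_effect(M))

    M = np.linspace(0, 10, 6)
    print(net_effect(M))                 # rises then declines (unimodal)
    print(dN_dt(N=50.0, M=5.0))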
Simulation analysis of a microcomputer-based, low-cost Omega navigation system
NASA Technical Reports Server (NTRS)
Lilley, R. W.; Salter, R. J., Jr.
1976-01-01
The current status of research on a proposed micro-computer-based, low-cost Omega Navigation System (ONS) is described. The design approach emphasizes minimum hardware, maximum software, and the use of a low-cost, commercially-available microcomputer. Currently under investigation is the implementation of a low-cost navigation processor and its interface with an omega sensor to complete the hardware-based ONS. Sensor processor functions are simulated to determine how many of the sensor processor functions can be handled by innovative software. An input data base of live Omega ground and flight test data was created. The Omega sensor and microcomputer interface modules used to collect the data are functionally described. Automatic synchronization to the Omega transmission pattern is described as an example of the algorithms developed using this data base.
S.E. Maco; E.G. McPherson
2003-01-01
This study demonstrates an approach to quantify the structure, benefits, and costs of street tree populations in resource-limited communities without tree inventories. Using the city of Davis, California, U.S., as a model, existing data on the benefits and costs of municipal trees were applied to the results of a sample inventory of the city's public and private street...
A distributed planning concept for Space Station payload operations
NASA Technical Reports Server (NTRS)
Hagopian, Jeff; Maxwell, Theresa; Reed, Tracey
1994-01-01
The complex and diverse nature of the payload operations to be performed on the Space Station requires a robust and flexible planning approach. The planning approach for Space Station payload operations must support the phased development of the Space Station, as well as the geographically distributed users of the Space Station. To date, the planning approach for manned operations in space has been one of centralized planning to the n-th degree of detail. This approach, while valid for short duration flights, incurs high operations costs and is not conducive to long duration Space Station operations. The Space Station payload operations planning concept must reduce operations costs, accommodate phased station development, support distributed users, and provide flexibility. One way to meet these objectives is to distribute the planning functions across a hierarchy of payload planning organizations based on their particular needs and expertise. This paper presents a planning concept which satisfies all phases of the development of the Space Station (manned Shuttle flights, unmanned Station operations, and permanent manned operations), and the migration from centralized to distributed planning functions. Identified in this paper are the payload planning functions which can be distributed and the process by which these functions are performed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavazos-Cadena, Rolando, E-mail: rcavazos@uaaan.m; Salem-Silva, Francisco, E-mail: frsalem@uv.m
2010-04-15
This note concerns discrete-time controlled Markov chains with Borel state and action spaces. Given a nonnegative cost function, the performance of a control policy is measured by the superior limit risk-sensitive average criterion associated with a constant and positive risk sensitivity coefficient. Within such a framework, the discounted approach is used (a) to establish the existence of solutions for the corresponding optimality inequality, and (b) to show that, under mild conditions on the cost function, the optimal value functions corresponding to the superior and inferior limit average criteria coincide on a certain subset of the state space. The approach of the paper relies on standard dynamic programming ideas and on a simple analytical derivation of a Tauberian relation.
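For reference, the superior limit risk-sensitive average criterion referred to here is commonly written in the following standard form (a textbook rendering, not copied from the note), where λ > 0 is the risk sensitivity coefficient, c the cost function, and (X_t, A_t) the state-action pairs under policy π started at x:

    J(\pi, x) = \limsup_{n \to \infty} \frac{1}{\lambda n}
        \log \mathbb{E}_x^{\pi}\!\left[ \exp\!\left( \lambda \sum_{t=0}^{n-1} c(X_t, A_t) \right) \right]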
Suppression cost forecasts in advance of wildfire seasons
Jeffrey P. Prestemon; Karen Abt; Krista Gebert
2008-01-01
Approaches for forecasting wildfire suppression costs in advance of a wildfire season are demonstrated for two lead times: fall and spring of the current fiscal year (Oct. 1–Sept. 30). Model functional forms are derived from aggregate expressions of a least cost plus net value change model. Empirical estimates of these models are used to generate advance-of-season...
Wavelength routing beyond the standard graph coloring approach
NASA Astrophysics Data System (ADS)
Blankenhorn, Thomas
2004-04-01
When lightpaths are routed in the planning stage of transparent optical networks, the textbook approach is to use algorithms that try to minimize the overall number of wavelengths used in the network. We demonstrate that this method cannot be expected to minimize actual costs when the marginal cost of installing more wavelengths is a declining function of the number of wavelengths already installed, as is frequently the case. We further demonstrate how cost optimization can theoretically be improved with algorithms based on Prim's algorithm. Finally, we test this theory with simulations on a series of actual network topologies, which confirm the theoretical analysis.
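A small numerical sketch of why wavelength minimization and cost minimization can disagree under a declining marginal cost, using an invented concave per-link cost curve and two equally feasible routing plans:

    # Why minimizing wavelength count need not minimize cost when marginal
    # cost declines: with a concave per-link cost, consolidating lightpaths
    # onto shared links can be cheaper even though more wavelengths are used
    # on those links. Link counts and the cost curve are invented.
    def link_cost(n_wavelengths):
        # declining marginal cost: each extra wavelength is cheaper
        return sum(10.0 * 0.8 ** k for k in range(n_wavelengths))

    # Plan A: spread 4 lightpaths over 4 links, 1 wavelength each
    plan_a = 4 * link_cost(1)
    # Plan B: consolidate the same 4 lightpaths onto 2 links, 2 wavelengths each
    plan_b = 2 * link_cost(2)
    print(plan_a, plan_b)   # plan B is cheaper despite needing two wavelengths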
The specification of a hospital cost function. A comment on the recent literature.
Breyer, F
1987-06-01
In the empirical estimation of hospital cost functions, two radically different types of specifications have been chosen to date: ad hoc forms and flexible functional forms based on neoclassical production theory. This paper discusses the respective strengths and weaknesses of both approaches and emphasizes the apparently irreconcilable conflict between the goals of maintaining functional flexibility and keeping the number of variables manageable if at the same time patient heterogeneity is to be adequately reflected in the case mix variables. A new specification is proposed which strikes a compromise between these goals, and the underlying assumptions are discussed critically.
A flexible model for the mean and variance functions, with application to medical cost data.
Chen, Jinsong; Liu, Lei; Zhang, Daowen; Shih, Ya-Chen T
2013-10-30
Medical cost data are often skewed to the right and heteroscedastic, having a nonlinear relation with covariates. To tackle these issues, we consider an extension to generalized linear models by assuming nonlinear associations of covariates in the mean function and allowing the variance to be an unknown but smooth function of the mean. We make no further assumption on the distributional form. The unknown functions are described by penalized splines, and the estimation is carried out using nonparametric quasi-likelihood. Simulation studies show the flexibility and advantages of our approach. We apply the model to the annual medical costs of heart failure patients in the clinical data repository at the University of Virginia Hospital System. Copyright © 2013 John Wiley & Sons, Ltd.
[Determination of cost-effective strategies in colorectal cancer screening].
Dervaux, B; Eeckhoudt, L; Lebrun, T; Sailly, J C
1992-01-01
The objective of the article is to implement particular methodologies in order to determine which strategies are cost-effective in the mass screening of colorectal cancer after a positive Hemoccult test. The first approach to be presented consists of proposing a method which enables all the admissible diagnostic strategies to be determined. The second approach enables a minimal cost function to be estimated using an adaptation of "Data Envelopment Analysis". This method proves to be particularly successful in cost-efficiency analysis, when the performance indicators are numerous and hard to aggregate. The results show that there are two cost-effective strategies after a positive Hemoccult test: colonoscopy and sigmoidoscopy; they put into question the relevance of double contrast barium enema in the diagnosis of colorectal lesions.
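A minimal input-oriented DEA (CCR) sketch of the kind of estimation mentioned, assuming three hypothetical strategies with one input (cost) and one output (lesions detected); the numbers are invented and the linear program is solved with scipy.

    # Input-oriented DEA (CCR) sketch: for a unit under evaluation, find the
    # smallest theta such that a nonnegative combination of all units uses at
    # most theta * its inputs while producing at least its outputs.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[120.0], [200.0], [90.0]])   # inputs (cost per 1000 screened)
    Y = np.array([[8.0], [11.0], [5.0]])       # outputs (lesions detected)

    def dea_efficiency(o):
        n, m, s = X.shape[0], X.shape[1], Y.shape[1]
        c = np.r_[1.0, np.zeros(n)]                    # minimize theta
        A_in = np.c_[-X[o], X.T]                       # sum lam_j x_j <= theta x_o
        A_out = np.c_[np.zeros((s, 1)), -Y.T]          # sum lam_j y_j >= y_o
        A_ub = np.r_[A_in, A_out]
        b_ub = np.r_[np.zeros(m), -Y[o]]
        bounds = [(0, None)] * (1 + n)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        return res.fun                                 # efficiency score in (0, 1]

    print([round(dea_efficiency(o), 3) for o in range(3)])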
The Vibrational Frequencies of CaO2, ScO2, and TiO2: A Comparison of Theoretical Methods
NASA Technical Reports Server (NTRS)
Rosi, Marzio; Bauschlicher, Charles W., Jr.; Chertihin, George V.; Andrews, Lester; Arnold, James O. (Technical Monitor)
1997-01-01
The vibrational frequencies of several states of CaO2, ScO2, and TiO2 are computed using density functional theory (DFT), the Hartree-Fock approach, second-order Moller-Plesset perturbation theory (MP2), and the complete-active-space self-consistent-field theory. Three different functionals are used in the DFT calculations, including two hybrid functionals. The coupled cluster singles and doubles approach including the effect of unlinked triples, determined using perturbation theory, is applied to selected states. The Becke-Perdew 86 functional appears to be the cost-effective method of choice, although even this functional does not perform well for one state of CaO2. The MP2 approach is significantly inferior to the DFT approaches.
NASA Technical Reports Server (NTRS)
Freeman, William T.; Ilcewicz, L. B.; Swanson, G. D.; Gutowski, T.
1992-01-01
A conceptual and preliminary designers' cost prediction model has been initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer-aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a data base and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. The approach, goals, plans, and progress are presented for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).
Heuristic Approach for Configuration of a Grid-Tied Microgrid in Puerto Rico
NASA Astrophysics Data System (ADS)
Rodriguez, Miguel A.
The high electricity rates that consumers are charged by the utility grid in Puerto Rico have created an energy crisis around the island. This situation is due to the island's dependence on imported fossil fuels. In order to aid in the transition from fossil-fuel based electricity to electricity from renewable and alternative sources, this research work focuses on reducing the cost of electricity for Puerto Rico by finding the optimal microgrid configuration for a set number of consumers from the residential sector. The Hybrid Optimization Modeling for Energy Renewables (HOMER) software, developed by NREL, is utilized as an aid in determining the optimal microgrid setting. The problem is also approached via convex optimization; specifically, an objective function C(t) is formulated in order to be minimized. The cost function depends on the energy supplied by the grid, the energy supplied by renewable sources, the energy not supplied due to outages, as well as any excess energy sold to the utility on a yearly basis. A term for the social cost of carbon is also included in the cost function. Once the microgrid settings from HOMER are obtained, those are evaluated via the optimized function C(t), which will in turn assess the true optimality of the microgrid configuration. A microgrid to supply 10 consumers is considered; each consumer can possess a different microgrid configuration. The cost function C(t) is minimized, and the Net Present Value and Cost of Electricity are computed for each configuration, in order to assess the true feasibility. Results show that the greater the penetration of components into the microgrid, the greater the energy produced by the renewable sources, and also the greater the energy not supplied due to outages. The proposed method demonstrates that adding large amounts of renewable components in a microgrid does not necessarily translate into economic benefits for the consumer; in fact, there is a trade-off between cost and the addition of elements that must be considered. Any configurations which consider further increases in microgrid components will result in increased NPV and increased costs of electricity, which renders those configurations unfeasible.
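A minimal sketch of a yearly cost of the general form described, together with a simple net present value; the prices, energies, carbon term, and discount rate are invented placeholders rather than HOMER outputs.

    # Sketch of a yearly microgrid cost of the kind described above, plus a
    # simple net present value over the project life. All prices, energies,
    # and the discount rate are invented placeholders, not HOMER results.
    def yearly_cost(e_grid, e_ren, e_not_supplied, e_sold, co2_tons,
                    p_grid=0.22, p_ren=0.10, p_ens=1.50, p_sell=0.08, scc=50.0):
        return (p_grid * e_grid + p_ren * e_ren + p_ens * e_not_supplied
                - p_sell * e_sold + scc * co2_tons)

    def npv(yearly_cash_flows, rate=0.06):
        return sum(cf / (1 + rate) ** t
                   for t, cf in enumerate(yearly_cash_flows, start=1))

    annual = yearly_cost(e_grid=6000, e_ren=3500, e_not_supplied=120,
                         e_sold=800, co2_tons=2.4)
    print(annual, npv([annual] * 20))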
Principles of operating room organization.
Watkins, W D
1997-01-01
The importance of the changing health care climate has triggered important changes in the management of high-cost components of acute care facilities. By integrating and better managing various elements of the surgical process, health care institutions are able to rationally trim costs while maintaining high-quality services. The leadership that physicians can provide is crucial to the success of this undertaking (1). The importance of the use of primary data related to patient throughput and related resources should be strongly emphasized, for only when such data are converted to INFORMATION of functional value can participating healthcare personnel be reasonably expected to anticipate and respond to varying clinical demands with ever-limited resources. Despite the claims of specific commercial vendors, no single product will likely be sufficient to significantly change the perioperative process to the degree or for the duration demanded by healthcare reform. The most effective approach to achieving safety, cost-effectiveness, and predictable process in the realm of Surgical Services will occur by appropriate application of the "best of breed" contributions of: (a) medical/patient safety practice/oversight; (b) information technology; (c) contemporary management; and (d) innovative and functional cost-accounting methodology. A "modified activity-based cost accounting method" can serve as the basis for acquiring true direct-cost information related to the perioperative process. The proposed overall management strategy emphasizes process and feedback, rather than specific product, and although imposing initial demands and change on the traditional hospital setting, can advance the strongest competitive position in perioperative services. This comprehensive approach comprises a functional basis for important bench-marking activities among multiple surgical services. An active, comparative process of this type is of paramount importance in emphasizing patient care and safety as the highest priority while changing the process and cost of perioperative care. Additionally, this approach objectively defines the surgical process in terms by which the impact of new treatments, drugs, devices and process changes can be assessed rationally.
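A minimal sketch of activity-based allocation of indirect perioperative costs, assuming hypothetical activity pools, drivers, and case volumes (not data from any hospital):

    # Minimal activity-based costing sketch: pool indirect costs by activity
    # and assign them to cases in proportion to each case type's driver usage.
    # Activities, drivers, and dollar amounts are hypothetical.
    activity_pools = {"room_setup": 90000.0, "instrument_processing": 60000.0,
                      "recovery_nursing": 150000.0}
    drivers = {  # driver units consumed per case type
        "hernia_repair":     {"room_setup": 1, "instrument_processing": 2,
                              "recovery_nursing": 2},
        "joint_replacement": {"room_setup": 1, "instrument_processing": 5,
                              "recovery_nursing": 6},
    }
    volumes = {"hernia_repair": 400, "joint_replacement": 150}

    total_driver_units = {a: sum(drivers[c][a] * volumes[c] for c in drivers)
                          for a in activity_pools}
    rate = {a: activity_pools[a] / total_driver_units[a] for a in activity_pools}
    cost_per_case = {c: sum(rate[a] * drivers[c][a] for a in activity_pools)
                     for c in drivers}
    print(rate, cost_per_case)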
Optimisation by hierarchical search
NASA Astrophysics Data System (ADS)
Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias
2015-03-01
Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
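A generic sketch of optimizing groups of variables while holding the others fixed, on an invented non-convex cost; this illustrates the block-wise idea only and is not the authors' algorithm.

    # Block-wise ("hierarchical") optimization sketch on a toy non-convex
    # cost: repeatedly improve one group of variables while holding the
    # others fixed, using a crude random local search within each group.
    import numpy as np
    rng = np.random.default_rng(3)

    def cost(x):
        return float(np.sum(x ** 2) + 0.5 * np.sum(np.cos(3 * x)))

    x = rng.normal(size=12)
    groups = [slice(0, 4), slice(4, 8), slice(8, 12)]   # variable groups
    for sweep in range(50):
        for g in groups:
            best_block, best_val = x[g].copy(), cost(x)
            for _ in range(20):                         # local search in the block
                trial = x.copy()
                trial[g] += 0.2 * rng.normal(size=trial[g].shape)
                if cost(trial) < best_val:
                    best_block, best_val = trial[g].copy(), cost(trial)
            x[g] = best_block
    print(cost(x))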
Long-range, low-cost electric vehicles enabled by robust energy storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ping; Ross, Russel; Newman, Aron
2015-09-18
A variety of inherently robust energy storage technologies hold the promise to increase the range and decrease the cost of electric vehicles (EVs). These technologies help diversify approaches to EV energy storage, complementing the current focus on high specific energy lithium-ion batteries. The need for emission-free transportation and a decrease in reliance on imported oil has prompted the development of EVs. To reach mass adoption, a significant reduction in cost and an increase in range are needed. Using the cost per mile of range as the metric, we analyzed the various factors that contribute to the cost and weight of EV energy storage systems. Our analysis points to two primary approaches for minimizing cost. The first approach, of developing redox couples that offer higher specific energy than state-of-the-art lithium-ion batteries, dominates current research effort, and its challenges and potentials are briefly discussed. The second approach represents a new insight into the EV research landscape. Chemistries and architectures that are inherently more robust reduce the need for system protection and enable opportunities for using energy storage systems to simultaneously serve vehicle structural functions. This approach thus enables the use of low cost, lower specific energy chemistries without increasing vehicle weight. Examples of such systems include aqueous batteries, flow cells, and all solid-state batteries. Research progress in these technical areas is briefly reviewed. Potential research directions that can enable low-cost EVs using multifunctional energy storage technologies are described.
Implementation of a VLSI Level Zero Processing system utilizing the functional component approach
NASA Technical Reports Server (NTRS)
Shi, Jianfei; Horner, Ward P.; Grebowsky, Gerald J.; Chesney, James R.
1991-01-01
A high rate Level Zero Processing system is currently being prototyped at NASA/Goddard Space Flight Center (GSFC). Based on state-of-the-art VLSI technology and the functional component approach, the new system promises capabilities of handling multiple Virtual Channels and Applications with a combined data rate of up to 20 Megabits per second (Mbps) at low cost.
New approach in the evaluation of a fitness program at a worksite.
Shirasaya, K; Miyakawa, M; Yoshida, K; Tanaka, C; Shimada, N; Kondo, T
1999-03-01
The most common methods for the economic evaluation of a fitness program at a worksite are cost-effectiveness, cost-benefit, and cost-utility analyses. In this study, we applied a basic microeconomic theory, "neoclassical firm's problems," as a new approach for it. The optimal number of physical-exercise classes that constitute the core of the fitness program is determined using the cubic health production function. The optimal number is defined as the number that maximizes the profit of the program. The optimal number corresponding to any willingness-to-pay amount of the participants for the effectiveness of the program is presented using a graph. For example, if the willingness-to-pay is $800, the optimal number of classes is 23. Our method can be applied to the evaluation of any health care program if the health production function can be estimated.
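A minimal sketch of the profit-maximization step, assuming an invented cubic health production function and cost per class; the coefficients are chosen only so that the toy optimum at a willingness-to-pay of $800 also lands at 23 classes, and they do not reproduce the study's estimated function.

    # Sketch of the "neoclassical firm" evaluation: pick the number of classes
    # n that maximizes profit = willingness_to_pay * H(n) - cost(n), where the
    # cubic health production function H and the cost per class are invented.
    def health_production(n, a=0.5, b=0.02, c=0.0005):
        return a * n + b * n ** 2 - c * n ** 3        # cubic production function

    def profit(n, wtp=800.0, cost_per_class=500.0):
        return wtp * health_production(n) - cost_per_class * n

    best_n = max(range(1, 51), key=profit)
    print(best_n, round(profit(best_n), 1))           # interior optimum (here 23)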
Climate Intervention as an Optimization Problem
NASA Astrophysics Data System (ADS)
Caldeira, Ken; Ban-Weiss, George A.
2010-05-01
Typically, climate models simulations of intentional intervention in the climate system have taken the approach of imposing a change (eg, in solar flux, aerosol concentrations, aerosol emissions) and then predicting how that imposed change might affect Earth's climate or chemistry. Computations proceed from cause to effect. However, humans often proceed from "What do I want?" to "How do I get it?" One approach to thinking about intentional intervention in the climate system ("geoengineering") is to ask "What kind of climate do we want?" and then ask "What pattern of radiative forcing would come closest to achieving that desired climate state?" This involves defining climate goals and a cost function that measures how closely those goals are attained. (An important next step is to ask "How would we go about producing these desired patterns of radiative forcing?" However, this question is beyond the scope of our present study.) We performed a variety of climate simulations in NCAR's CAM3.1 atmospheric general circulation model with a slab ocean model and thermodynamic sea ice model. We then evaluated, for a specific set of climate forcing basis functions (ie, aerosol concentration distributions), the extent to which the climate response to a linear combination of those basis functions was similar to a linear combination of the climate response to each basis function taken individually. We then developed several cost functions (eg, relative to the 1xCO2 climate, minimize rms difference in zonal and annual mean land temperature, minimize rms difference in zonal and annual mean runoff, minimize rms difference in a combination of these temperature and runoff indices) and then predicted optimal combinations of our basis functions that would minimize these cost functions. Lastly, we produced forward simulations of the predicted optimal radiative forcing patterns and compared these with our expected results. Obviously, our climate model is much simpler than reality and predictions from individual models do not provide a sound basis for action; nevertheless, our model results indicate that the general approach outlined here can lead to patterns of radiative forcing that make the zonal annual mean climate of a high CO2 world markedly more similar to that of a low CO2 world simultaneously for both temperature and hydrological indices, where degree of similarity is measured using our explicit cost functions. We restricted ourselves to zonally uniform aerosol concentrations distributions that can be defined in terms of a positive-definite quadratic equation on the sine of latitude. Under this constraint, applying an aerosol distribution in a 2xCO2 climate that minimized a combination of rms difference in zonal and annual mean land temperature and runoff relative to the 1xCO2 climate, the rms difference in zonal and annual mean temperatures was reduced by ~90% and the rms difference in zonal and annual mean runoff was reduced by ~80%. This indicates that there may be potential for stratospheric aerosols to diminish simultaneously both temperature and hydrological cycle changes caused by excess CO2 in the atmosphere. Clearly, our model does not include many factors (eg, socio-political consequences, chemical consequences, ocean circulation changes, aerosol transport and microphysics) so we do not argue strongly for our specific climate model results, however, we do argue strongly in favor of our methodological approach. 
The proposed approach is general, in the sense that cost functions can be developed that represent different valuations. While the choice of appropriate cost functions is inherently a value judgment, evaluating those functions for a specific climate simulation is a quantitative exercise. Thus, the use of explicit cost functions in evaluating model results for climate intervention scenarios is a clear way of separating value judgments from purely scientific and technical issues.
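A minimal sketch of the optimization step under the stated linearity assumption: choose weights on basis-function responses to minimize an rms cost against a target change. The response patterns and target below are invented stand-ins for model output, and the positive-definite quadratic constraint on the aerosol profile is not enforced here.

    # Least-squares combination of forcing basis functions (illustrative):
    # assuming the response to a weighted sum of basis functions is roughly
    # the same weighted sum of individual responses, pick weights minimizing
    # the rms mismatch with a target change. All patterns are invented.
    import numpy as np

    lat = np.linspace(-1, 1, 32)                       # sine of latitude
    basis_responses = np.stack([np.ones_like(lat), lat, lat ** 2], axis=1)
    target_change = -(1.5 + 0.8 * lat ** 2)            # change to be offset

    w, *_ = np.linalg.lstsq(basis_responses, target_change, rcond=None)
    residual = basis_responses @ w - target_change
    print(w, np.sqrt(np.mean(residual ** 2)))          # weights and rms cost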
NASA Technical Reports Server (NTRS)
Maluf, David A.; Bell, David g.; Ashish, Naveen
2005-01-01
This paper describes an approach to achieving data integration across multiple sources in an enterprise, in a manner that is cost-efficient and economically scalable. We present an approach that does not rely on major investment in structured, heavy-weight database systems for data storage or heavy-weight middleware responsible for integrated access. The approach is centered around pushing any required data structure and semantics functionality (schema) to application clients, as well as pushing integration specification and functionality to clients where integration can be performed on the fly.
Improving building performance using smart building concept: Benefit cost ratio comparison
NASA Astrophysics Data System (ADS)
Berawi, Mohammed Ali; Miraj, Perdana; Sayuti, Mustika Sari; Berawi, Abdur Rohim Boy
2017-11-01
The smart building concept is an implementation of technology developed in the construction industry throughout the world. However, the implementation of this concept is still below expectations due to various obstacles, such as a higher initial cost than a conventional concept and existing regulation siding with the lowest cost in the tender process. This research aims to develop an intelligent building concept using a value engineering approach to obtain added value regarding quality, efficiency, and innovation. The research combined quantitative and qualitative approaches, using a questionnaire survey and the value engineering method to achieve the research objectives. The research output will show additional functions regarding technology innovation that may increase the value of a building. This study shows that the smart building concept requires a higher initial cost but produces lower operational and maintenance costs. Furthermore, it also confirms that the benefit-cost ratio of the smart building was much higher than that of a conventional building: 1.99 versus 0.88.
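A minimal discounted benefit-cost ratio sketch of the comparison described; the cash flows, horizon, and discount rate are invented and do not reproduce the study's 1.99 and 0.88 figures.

    # Discounted benefit-cost ratio sketch: BCR = PV(benefits) / PV(costs),
    # with a higher initial cost but lower operating cost for the smart
    # option. All cash flows and the discount rate are invented.
    def present_value(flows, rate=0.08):
        return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

    def bcr(initial_cost, annual_benefit, annual_om, years=20, rate=0.08):
        benefits = [0] + [annual_benefit] * years
        costs = [initial_cost] + [annual_om] * years
        return present_value(benefits, rate) / present_value(costs, rate)

    print(bcr(initial_cost=12e6, annual_benefit=2.2e6, annual_om=0.6e6),  # smart
          bcr(initial_cost=9e6,  annual_benefit=1.1e6, annual_om=0.9e6))  # conventional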
NASA Technical Reports Server (NTRS)
Schmidt, Phillip; Garg, Sanjay; Holowecky, Brian
1992-01-01
A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.
NASA Technical Reports Server (NTRS)
Schmidt, Phillip H.; Garg, Sanjay; Holowecky, Brian R.
1993-01-01
A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.
Multiple Interactive Pollutants in Water Quality Trading
NASA Astrophysics Data System (ADS)
Sarang, Amin; Lence, Barbara J.; Shamsai, Abolfazl
2008-10-01
Efficient environmental management calls for the consideration of multiple pollutants, for which two main types of transferable discharge permit (TDP) program have been described: separate permits that manage each pollutant individually in separate markets, with each permit based on the quantity of the pollutant or its environmental effects, and weighted-sum permits that aggregate several pollutants as a single commodity to be traded in a single market. In this paper, we perform a mathematical analysis of TDP programs for multiple pollutants that jointly affect the environment (i.e., interactive pollutants) and demonstrate the practicality of this approach for cost-efficient maintenance of river water quality. For interactive pollutants, the relative weighting factors are functions of the water quality impacts, marginal damage function, and marginal treatment costs at optimality. We derive the optimal set of weighting factors required by this approach for important scenarios for multiple interactive pollutants and propose using an analytical elasticity of substitution function to estimate damage functions for these scenarios. We evaluate the applicability of this approach using a hypothetical example that considers two interactive pollutants. We compare the weighted-sum permit approach for interactive pollutants with individual permit systems and TDP programs for multiple additive pollutants. We conclude by discussing practical considerations and implementation issues that result from the application of weighted-sum permit programs.
Cost and effectiveness of lung lobectomy by video-assisted thoracic surgery for lung cancer
Mafé, Juan J.; Planelles, Beatriz; Asensio, Santos; Cerezal, Jorge; Inda, María-del-Mar; Lacueva, Javier; Esteban, Maria-Dolores; Hernández, Luis; Martín, Concepción; Baschwitz, Benno
2017-01-01
Background Video-assisted thoracic surgery (VATS) emerged as a minimally invasive surgery for diseases in the field of thoracic surgery. We herein reviewed our experience on thoracoscopic lobectomy for early lung cancer and evaluated Health System use. Methods A cost-effectiveness study was performed comparing VATS vs. open thoracic surgery (OPEN) for lung cancer patients. Demographic data, tumor localization, dynamic pulmonary function tests [forced vital capacity (FVC), forced expiratory volume in one second (FEV1), diffusion capacity (DLCO) and maximal oxygen uptake (VO2max)], surgical approach, postoperative details, and complications were recorded and analyzed. Results One hundred seventeen patients underwent lung resection by VATS (n=42, 36%; age: 63±9 years old, 57% males) or OPEN (n=75, 64%; age: 61±11 years old, 73% males). Pulmonary function tests decreased just after surgery with a parallel increasing tendency during first 12 months. VATS group tended to recover FEV1 and FVC quicker with significantly less clinical and post-surgical complications (31% vs. 53%, P=0.015). Costs including surgery and associated hospital stay, complications and costs in the 12 months after surgery were significantly lower for VATS (P<0.05). Conclusions The VATS approach surgery allowed earlier recovery at a lower cost than OPEN with a better cost-effectiveness profile. PMID:28932560
ERIC Educational Resources Information Center
Mitchell, Howard M.
1970-01-01
Haphazard training is replaced by an organized conceptual approach to managment development, with attention to managerial functions and activities, appropriate courses, general reading, and training costs. (LY)
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter
2014-05-01
Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems with inclusion of groundwater resources proved to be more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one step-ahead sub-problems are solved to find the optimal management at any time knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimension cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
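A heavily simplified sketch of the backward SDP recursion described, assuming a single reservoir, discrete storage states and inflow scenarios, and an immediate cost equal to a curtailment penalty; a grid search over end storage stands in for the genetic algorithm, and all numbers are invented.

    # Heavily simplified SDP sketch: one reservoir, discrete storage states
    # and inflow scenarios, immediate cost = curtailment cost of unmet demand.
    # A grid search over end storage stands in for the GA used in the study.
    import numpy as np

    storages = np.linspace(0, 100, 11)          # feasible end-of-stage storages
    inflows, probs = np.array([10.0, 30.0, 60.0]), np.array([0.3, 0.5, 0.2])
    demand, curtail_cost, n_stages = 40.0, 5.0, 12

    future = np.zeros(storages.size)            # terminal future-cost function
    for stage in range(n_stages):
        new_future = np.zeros_like(future)
        for i, s0 in enumerate(storages):
            exp_cost = 0.0
            for q, p in zip(inflows, probs):
                best = np.inf
                for j, s1 in enumerate(storages):       # decision: end storage
                    release = s0 + q - s1
                    if release < 0:
                        continue                        # infeasible end storage
                    supplied = min(release, demand)
                    immediate = curtail_cost * (demand - supplied)
                    best = min(best, immediate + future[j])
                exp_cost += p * best
            new_future[i] = exp_cost
        future = new_future
    print(future)     # expected total cost as a function of start storage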
ERIC Educational Resources Information Center
Barnes, Julie; Jaqua, Kathy
2011-01-01
A kinesthetic approach to developing ideas of function transformations can get students physically and intellectually involved. This article presents low- or no-cost activities which use kinesthetics to support high school students' mathematical understanding of transformations of function graphs. The important point of these activities is to help…
A Cradle-to-Grave Integrated Approach to Using UNIFORMAT II
ERIC Educational Resources Information Center
Schneider, Richard C.; Cain, David A.
2009-01-01
The ASTM E1557/UNIFORMAT II standard is a three-level, function-oriented classification which links the schematic phase Preliminary Project Descriptions (PPD), based on Construction Standard Institute (CSI) Practice FF/180, to elemental cost estimates based on R.S. Means Cost Data. With the UNIFORMAT II Standard Classification for Building…
Space station needs, attributes and architectural options: Study summary
NASA Technical Reports Server (NTRS)
1983-01-01
Space station needs, attributes, and architectural options that affect the future implementation and design of a space station system are examined. Requirements for candidate missions are used to define functional attributes of a space station. Station elements that perform these functions form the basic station architecture. Alternative ways to accomplish these functions are defined and configuration concepts are developed and evaluated. Configuration analyses are carried to the point that budgetary cost estimates of alternate approaches could be made. Emphasis is placed on differential costs for station support elements and benefits that accrue through use of the station.
Al, Maiwenn J; Feenstra, Talitha L; Hout, Ben A van
2005-07-01
This paper addresses the problem of how to value health care programmes with different ratios of costs to effects, specifically when taking into account that these costs and effects are uncertain. First, the traditional framework of maximising health effects with a given health care budget is extended to a flexible budget using a value function over money and health effects. Second, uncertainty surrounding costs and effects is included in the model using expected utility. Other approaches to uncertainty that do not specify a utility function are discussed and it is argued that these also include implicit notions about risk attitude.
The discrete adjoint method for parameter identification in multibody system dynamics.
Lauß, Thomas; Oberpeilsteiner, Stefan; Steiner, Wolfgang; Nachbagauer, Karin
2018-01-01
The adjoint method is an elegant approach for the computation of the gradient of a cost function to identify a set of parameters. An additional set of differential equations has to be solved to compute the adjoint variables, which are further used for the gradient computation. However, the accuracy of the numerical solution of the adjoint differential equation has a great impact on the gradient. Hence, an alternative approach is the discrete adjoint method, where the adjoint differential equations are replaced by algebraic equations. Therefore, a finite difference scheme is constructed for the adjoint system directly from the numerical time integration method. The method provides the exact gradient of the discretized cost function subject to the discretized equations of motion.
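As an illustration of the discrete adjoint idea on a deliberately simple problem (a scalar explicit-Euler scheme rather than a multibody system; every symbol below is invented for the sketch), the adjoint recursion is read off directly from the discrete update rule, and the resulting gradient of the discretized cost matches a finite-difference check:

```python
import numpy as np

# Toy discrete adjoint: x_{k+1} = x_k * (1 - h*p) (explicit Euler for dx/dt = -p*x),
# with cost J(p) = sum_k (x_k - d_k)^2 over the discrete trajectory.

h, p, x0, N = 0.1, 1.3, 2.0, 50
d = 2.0 * np.exp(-1.0 * h * np.arange(1, N + 1))       # synthetic "measurements"

def forward(p):
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] * (1.0 - h * p)
    return x

def cost(p):
    x = forward(p)
    return np.sum((x[1:] - d) ** 2)

def gradient_discrete_adjoint(p):
    x = forward(p)
    lam = np.zeros(N + 1)                               # discrete adjoint variables
    lam[N] = 2.0 * (x[N] - d[N - 1])
    for k in range(N - 1, 0, -1):                       # backward recursion
        lam[k] = 2.0 * (x[k] - d[k - 1]) + (1.0 - h * p) * lam[k + 1]
    # dJ/dp = sum_k lam_{k+1} * (explicit derivative of x_{k+1} w.r.t. p) = sum_k lam_{k+1} * (-h * x_k)
    return np.sum(lam[1:] * (-h) * x[:-1])

g_adj = gradient_discrete_adjoint(p)
eps = 1e-6
g_fd = (cost(p + eps) - cost(p - eps)) / (2 * eps)      # finite-difference check
print(g_adj, g_fd)
```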
MESA - A new approach to low cost scientific spacecraft
NASA Astrophysics Data System (ADS)
Keyes, G. W.; Case, C. M.
1982-09-01
Today, the greatest obstacle to science and exploration in space is its cost. The present investigation is concerned with approaches for reducing this cost. Trends in the scientific spacecraft market are examined, and a description is presented for the MESA space platform concept. The cost drivers are considered, taking into account planning, technical aspects, and business factors. It is pointed out that the primary function of the MESA concept is to provide a satellite system at the lowest possible price. In order to reach this goal an attempt is made to benefit from all of the considered cost drivers. It is to be tried to work with the customer early in the mission analysis stage in order to assist in finding the right compromise between mission cost and return. A three phase contractual arrangement is recommended for MESA platforms. The phases are related to mission feasibility, specification definition, and design and development. Modular kit design promotes flexibility at low cost.
Mixed H(2)/H(sub infinity)-Control with output feedback compensators using parameter optimization
NASA Technical Reports Server (NTRS)
Schoemig, Ewald; Ly, Uy-Loi
1992-01-01
Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H(sub 2)/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H(sub 2)/H-infinity control problem. The approach developed in this research is based on a finite time cost functional that depicts an H-infinity bound control problem in an H(sub 2)-optimization setting. The goal is to define a time-domain cost function that optimizes the H(sub 2)-norm of a system with an H-infinity-constraint function.
NASA Technical Reports Server (NTRS)
Freeman, W.; Ilcewicz, L.; Swanson, G.; Gutowski, T.
1992-01-01
The Structures Technology Program Office (STPO) at NASA LaRC has initiated development of a conceptual and preliminary designers' cost prediction model. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. This paper presents the team members, approach, goals, plans, and progress to date for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).
Computational approaches for drug discovery.
Hung, Che-Lun; Chen, Chi-Chun
2014-09-01
Cellular proteins are the mediators of multiple organism functions being involved in physiological mechanisms and disease. By discovering lead compounds that affect the function of target proteins, the target diseases or physiological mechanisms can be modulated. Based on knowledge of the ligand-receptor interaction, the chemical structures of leads can be modified to improve efficacy, selectivity and reduce side effects. One rational drug design technology, which enables drug discovery based on knowledge of target structures, functional properties and mechanisms, is computer-aided drug design (CADD). The application of CADD can be cost-effective using experiments to compare predicted and actual drug activity, the results from which can be used iteratively to improve compound properties. The two major CADD-based approaches are structure-based drug design, where protein structures are required, and ligand-based drug design, where ligand and ligand activities can be used to design compounds interacting with the protein structure. Approaches in structure-based drug design include docking, de novo design, fragment-based drug discovery and structure-based pharmacophore modeling. Approaches in ligand-based drug design include quantitative structure-affinity relationship and pharmacophore modeling based on ligand properties. Based on whether the structure of the receptor and its interaction with the ligand are known, different design strategies can be used. After lead compounds are generated, the rule of five can be used to assess whether these have drug-like properties. Several quality validation methods, such as cost function analysis, Fisher's cross-validation analysis and goodness of hit test, can be used to estimate the metrics of different drug design strategies. To further improve CADD performance, multi-computers and graphics processing units may be applied to reduce costs. © 2014 Wiley Periodicals, Inc.
Fairness in optimizing bus-crew scheduling process.
Ma, Jihui; Song, Cuiying; Ceder, Avishai Avi; Liu, Tao; Guan, Wei
2017-01-01
This work proposes a model that considers fairness in the crew scheduling problem for bus drivers (CSP-BD) and a hybrid ant-colony optimization (HACO) algorithm to solve it. The main contributions of this work are the following: (a) a valid approach for cases with a special cost structure and constraints that account for the fairness of working time and idle time; (b) an improved algorithm incorporating a Gamma heuristic function and selection rules. The relationships among the cost components are examined using ten bus lines collected from the Beijing Public Transport Holdings (Group) Co., Ltd., one of the largest bus transit companies in the world. The results show that the unfairness cost is indirectly related to the common, fixed, and extra costs, and that it approaches the common and fixed costs when its coefficient is twice the common-cost coefficient. Furthermore, the longest computation time for the tested bus line, with 1108 pieces and 74 blocks, is less than 30 minutes. The results indicate that the HACO-based algorithm can be a feasible and efficient optimization technique for the CSP-BD, especially for large-scale problems.
A systematic way for the cost reduction of density fitting methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kállay, Mihály, E-mail: kallay@mail.bme.hu
2014-12-28
We present a simple approach for the reduction of the size of auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting from the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with the system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.
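A hedged sketch of the construction in NumPy, using a random matrix as a stand-in for the three-center two-electron integrals and an arbitrary truncation threshold:

```python
import numpy as np

# Sketch of the natural-auxiliary-function (NAF) idea: given a matrix J of
# three-center integrals with shape (n_aux, n_orbital_pairs), an SVD defines
# linear combinations of the original fitting functions; discarding small
# singular values shrinks the fitting basis. J here is random stand-in data.

rng = np.random.default_rng(0)
n_aux, n_pairs = 120, 400
J = rng.standard_normal((n_aux, n_pairs)) * np.exp(-0.05 * np.arange(n_aux))[:, None]

U, s, Vt = np.linalg.svd(J, full_matrices=False)
keep = s > 1e-3 * s[0]            # truncation threshold (assumed)
W = U[:, keep]                    # NAFs as combinations of original aux functions
J_naf = W.T @ J                   # integrals transformed to the reduced basis

print(f"kept {keep.sum()} of {n_aux} auxiliary functions")
```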
NASA Astrophysics Data System (ADS)
Fillion, Anthony; Bocquet, Marc; Gratton, Serge
2018-04-01
The analysis in nonlinear variational data assimilation is the solution of a non-quadratic minimization. Thus, the analysis efficiency relies on its ability to locate a global minimum of the cost function. If this minimization uses a Gauss-Newton (GN) method, it is critical for the starting point to be in the attraction basin of a global minimum. Otherwise the method may converge to a local extremum, which degrades the analysis. With chaotic models, the number of local extrema often increases with the temporal extent of the data assimilation window, making the former condition harder to satisfy. This is unfortunate because the assimilation performance also increases with this temporal extent. However, a quasi-static (QS) minimization may overcome these local extrema. It accomplishes this by gradually injecting the observations in the cost function. This method was introduced by Pires et al. (1996) in a 4D-Var context. We generalize this approach to four-dimensional strong-constraint nonlinear ensemble variational (EnVar) methods, which are based on both a nonlinear variational analysis and the propagation of dynamical error statistics via an ensemble. This forces one to consider the cost function minimizations in the broader context of cycled data assimilation algorithms. We adapt this QS approach to the iterative ensemble Kalman smoother (IEnKS), an exemplar of nonlinear deterministic four-dimensional EnVar methods. Using low-order models, we quantify the positive impact of the QS approach on the IEnKS, especially for long data assimilation windows. We also examine the computational cost of QS implementations and suggest cheaper algorithms.
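A toy illustration of the quasi-static idea (not the IEnKS itself): observations of a chaotic map are injected into the window one at a time and each minimization is warm-started from the previous solution, so the iterate stays near the global minimum even for long windows. The map, noise level, first guess, and parameter bounds are all assumptions of the sketch:

```python
import numpy as np
from scipy.optimize import least_squares

# Quasi-static minimization sketch: grow the observation window step by step
# and warm-start each least-squares fit from the previous solution.

r_true, x_init, n_obs = 3.9, 0.3, 12

def trajectory(r, n):                           # chaotic logistic map (assumed)
    x, out = x_init, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
obs = trajectory(r_true, n_obs) + 0.002 * rng.standard_normal(n_obs)

def residuals(theta, window):
    return trajectory(theta[0], window) - obs[:window]

theta = np.array([3.6])                         # deliberately poor first guess
for window in range(1, n_obs + 1):              # quasi-static: grow the window
    sol = least_squares(residuals, theta, args=(window,), bounds=(3.5, 4.0))
    theta = sol.x                               # warm start for the next window
print("estimated r:", theta[0], "truth:", r_true)
```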
Mays, Glen P; Au, Melanie; Claxton, Gary
2007-01-01
Disease management (DM) approaches survived the 1990s backlash against managed care because of their potential for consumer-friendly cost containment, but purchasers have been cautious about investing heavily in them because of uncertainty about return on investment. This study examines how private-sector approaches to DM have evolved over the past two years in the midst of the movement toward consumer-driven health care. Findings indicate that these programs have become standard features of health plan design, despite a thin evidence base concerning their effectiveness. Uncertainties remain regarding how well these programs will function within benefit designs that require higher consumer cost sharing.
Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem
NASA Astrophysics Data System (ADS)
Tangpatiphan, Kritsana; Yokoyama, Akihiko
This paper presents an Improved Evolutionary Programming (IEP) method for solving the Optimal Power Flow (OPF) problem, which is considered a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is regarded as the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm making use of various crossover techniques normally applied in Real Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost functions, namely the quadratic cost curve, the piecewise quadratic cost curve, and the quadratic cost curve superimposed with a sine component. These three cost curves represent the generator fuel cost functions of a simplified model and of more accurate models of a combined-cycle generating unit and a thermal unit with valve-point loading effects, respectively. The OPF solutions obtained by the proposed method and by Pure Evolutionary Programming (PEP) are observed and compared. The simulation results indicate that IEP requires less computing time than PEP and gives better solutions in some cases. Moreover, the influences of important IEP parameters on the OPF solution are described in detail.
Cost-effectiveness of a classification-based system for sub-acute and chronic low back pain.
Apeldoorn, Adri T; Bosmans, Judith E; Ostelo, Raymond W; de Vet, Henrica C W; van Tulder, Maurits W
2012-07-01
Identifying relevant subgroups in patients with low back pain (LBP) is considered important to guide physical therapy practice and to improve outcomes. The aim of the present study was to assess the cost-effectiveness of a modified version of Delitto's classification-based treatment approach compared with usual physical therapy care in patients with sub-acute and chronic LBP with 1 year follow-up. All patients were classified using the modified version of Delitto's classification-based system and then randomly assigned to receive either classification-based treatment or usual physical therapy care. The main clinical outcomes measured were: global perceived effect, intensity of pain, functional disability and quality of life. Costs were measured from a societal perspective. Multiple imputations were used for missing data. Uncertainty surrounding cost differences and incremental cost-effectiveness ratios was estimated using bootstrapping. Cost-effectiveness planes and cost-effectiveness acceptability curves were estimated. In total, 156 patients were included. The outcome analyses showed a significantly better outcome on global perceived effect favoring the classification-based approach, and no differences between the groups on pain, disability and quality-adjusted life-years. Mean total societal costs for the classification-based group were
A fuzzy inventory model with acceptable shortage using graded mean integration value method
NASA Astrophysics Data System (ADS)
Saranya, R.; Varadarajan, R.
2018-04-01
In many inventory models uncertainty is due to fuzziness, and fuzziness is the closest possible approach to reality. In this paper, we proposed a fuzzy inventory model with acceptable shortage which is completely backlogged. We fuzzify the carrying cost, backorder cost and ordering cost using Triangular and Trapezoidal fuzzy numbers to obtain the fuzzy total cost. The purpose of our study is to defuzzify the total profit function by the Graded Mean Integration Value Method. Further, a numerical example is also given to demonstrate the developed crisp and fuzzy models.
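For reference, a small sketch of the commonly used graded mean integration value (defuzzification) formulas for triangular and trapezoidal fuzzy numbers; the numerical fuzzy costs shown are invented examples, not the paper's data:

```python
# Graded mean integration value (GMIV) defuzzification, as commonly defined
# for triangular and trapezoidal fuzzy numbers.

def gmiv_triangular(a, b, c):
    # GMIV of a triangular fuzzy number (a, b, c)
    return (a + 4 * b + c) / 6.0

def gmiv_trapezoidal(a, b, c, d):
    # GMIV of a trapezoidal fuzzy number (a, b, c, d)
    return (a + 2 * b + 2 * c + d) / 6.0

# e.g. a fuzzy ordering cost "around 50" and a fuzzy carrying cost "between 4 and 6"
print(gmiv_triangular(45, 50, 55))       # -> 50.0
print(gmiv_trapezoidal(3, 4, 6, 7))      # -> 5.0
```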
An Approach to Economic Dispatch with Multiple Fuels Based on Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Sriyanyong, Pichet
2011-06-01
Particle Swarm Optimization (PSO), a stochastic optimization technique, shows superiority to other evolutionary computation techniques in terms of lower computation time, easy implementation with high-quality solutions, stable convergence characteristics, and independence from initialization. For this reason, this paper proposes the application of PSO to the Economic Dispatch (ED) problem, which occurs in the operational planning of power systems. In this study, the ED problem is categorized according to the characteristics of its cost function into the ED problem with a smooth cost function and the ED problem with multiple fuels. Taking multiple fuels into account makes the problem more realistic. The experimental results show that the proposed PSO algorithm is more efficient than the previous approaches under consideration as well as highly promising for real-world applications.
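A minimal PSO sketch for a three-unit dispatch with a valve-point-style cost term and a power-balance penalty; the coefficients, limits, demand, and PSO settings are illustrative assumptions, not taken from the paper or from any standard test system:

```python
import numpy as np

# Toy PSO for economic dispatch of 3 units with a valve-point-style fuel cost.
a = np.array([0.00156, 0.00194, 0.00482])
b = np.array([7.92, 7.85, 7.97])
c = np.array([561.0, 310.0, 78.0])
e = np.array([300.0, 200.0, 150.0])
f = np.array([0.0315, 0.042, 0.063])
pmin = np.array([100.0, 50.0, 50.0]); pmax = np.array([600.0, 200.0, 120.0])
demand = 850.0

def cost(P):
    fuel = a * P**2 + b * P + c + np.abs(e * np.sin(f * (pmin - P)))
    return fuel.sum() + 1e3 * abs(P.sum() - demand)    # power-balance penalty

rng = np.random.default_rng(0)
n, iters = 40, 300
X = rng.uniform(pmin, pmax, size=(n, 3)); V = np.zeros_like(X)
pbest = X.copy(); pbest_f = np.array([cost(x) for x in X])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    X = np.clip(X + V, pmin, pmax)
    fvals = np.array([cost(x) for x in X])
    improved = fvals < pbest_f
    pbest[improved], pbest_f[improved] = X[improved], fvals[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("dispatch:", gbest.round(1), "cost:", round(cost(gbest), 1))
```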
Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Weihong; Sun, Kai; Qi, Junjian
2015-01-01
Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimization of the sizes of dynamic var sources at candidate locations by a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram about cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in searching space and converge to the global optimal solution.
NASA Astrophysics Data System (ADS)
Baharin, Roziana; Isa, Zaidi
2013-04-01
This paper focuses on the stochastic cost frontier analysis (SFA) approach in an attempt to measure the relationship between efficiency and organizational structure for Takaful and insurance operators in Malaysia's dual financial system. This study applied a flexible cost functional form, i.e., the Fourier flexible functional form, to a sample consisting of 19 firms observed between 2002 and 2010, employing the Battese and Coelli invariant efficiency model. The findings show that, on average, there is a significant difference in cost efficiency between the Takaful industry and the insurance industry. It was found that Takaful has lower cost efficiency than conventional insurance, which shows that the organizational form has an influence on efficiency. Overall, it was observed that the level of efficiency scores for both life insurance and family Takaful does not vary across time.
A cost-analysis of two approaches to infection control in a lung function laboratory.
Side, E A; Harrington, G; Thien, F; Walters, E H; Johns, D P
1999-02-01
The Thoracic Society of Australia and New Zealand (TSANZ) guidelines for infection control in respiratory laboratories are based on a 'Universal Precautions' approach to patient care. This requires that one-way breathing valves, flow sensors, and other items, be cleaned and disinfected between patient use. However, this is impractical in a busy laboratory. The recent introduction of disposable barrier filters may provide a practical solution to this problem, although most consider this approach to be an expensive option. To compare the cost of implementing the TSANZ infection control guidelines with the cost of using disposable barrier filters. Costs were based on the standard tests and equipment currently used in the lung function laboratory at The Alfred Hospital. We have assumed that a barrier filter offers the same degree of protection against cross-infection between patients as the TSANZ infection control guidelines. Time and motion studies were performed on the dismantling, cleaning, disinfecting, reassembling and re-calibrating of equipment. Conservative estimates were made as to the frequency of replacing pneumotachographs and rubber mouthpieces based on previous equipment turnover. Labour costs for a scientist to reprocess the equipment was based on $20.86/hour. The cost of employing a casual cleaner at an hourly rate of $14.07 to assist in reprocessing equipment was also investigated. The new high efficiency HyperFilter disposable barrier filter, costing $2.95 was used in this cost-analysis. The cost of reprocessing equipment required for spirometry alone was $17.58 per test if a scientist reprocesses the equipment, and $15.56 per test if a casual cleaner is employed to assist the scientist in performing these duties. In contrast, using a disposable filter would cost only $2.95 per test. Using a filter was considerably less expensive than following the TSANZ guidelines for all tests and equipment used in this cost-analysis. The TSANZ infection control guidelines are expensive and impractical to implement. However, disposable barrier filters provide a practical and inexpensive method of infection control.
Resting-State Functional Connectivity Underlying Costly Punishment: A Machine-Learning Approach.
Feng, Chunliang; Zhu, Zhiyuan; Gu, Ruolei; Wu, Xia; Luo, Yue-Jia; Krueger, Frank
2018-06-08
A large number of studies have demonstrated costly punishment to unfair events across human societies. However, individuals exhibit a large heterogeneity in costly punishment decisions, whereas the neuropsychological substrates underlying the heterogeneity remain poorly understood. Here, we addressed this issue by applying a multivariate machine-learning approach to compare topological properties of resting-state brain networks as a potential neuromarker between individuals exhibiting different punishment propensities. A linear support vector machine classifier obtained an accuracy of 74.19% employing the features derived from resting-state brain networks to distinguish two groups of individuals with different punishment tendencies. Importantly, the most discriminative features that contributed to the classification were those regions frequently implicated in costly punishment decisions, including dorsal anterior cingulate cortex (dACC) and putamen (salience network), dorsomedial prefrontal cortex (dmPFC) and temporoparietal junction (mentalizing network), and lateral prefrontal cortex (central-executive network). These networks are previously implicated in encoding norm violation and intentions of others and integrating this information for punishment decisions. Our findings thus demonstrated that resting-state functional connectivity (RSFC) provides a promising neuromarker of social preferences, and bolster the assertion that human costly punishment behaviors emerge from interactions among multiple neural systems. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sadjadi, Seyed Jafar; Hamidi Hesarsorkh, Aghil; Mohammadi, Mehdi; Bonyadi Naeini, Ali
2015-06-01
Coordination and harmony between the different departments of a company can be an important factor in achieving competitive advantage if the company correctly aligns the strategies of its different departments. This paper presents an integrated decision model based on recent advances in geometric programming techniques. The demand for a product is considered a power function of factors such as the product's price, marketing expenditures, and consumer service expenditures. Furthermore, the production cost is considered a cubic power function of output. The model is solved using recent advances in convex optimization tools. Finally, the solution procedure is illustrated by a numerical example.
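One plausible reading of the model family described, written out as a hedged sketch; the symbols and exact functional forms below are assumptions, not taken from the paper:

```latex
% Hedged sketch: demand as a power function of price p, marketing m, service s;
% cubic production cost; profit maximization over the decision variables.
\[
  D(p,m,s) \;=\; k\, p^{-\alpha} m^{\beta} s^{\gamma}, \qquad
  C(D) \;=\; c_0 + c_1 D + c_2 D^2 + c_3 D^3 ,
\]
\[
  \max_{p,\,m,\,s \,>\, 0} \; \Pi(p,m,s) \;=\; p\, D(p,m,s) \;-\; C\bigl(D(p,m,s)\bigr) \;-\; m \;-\; s .
\]
```

With power-function demand and polynomial cost terms of this kind, such a problem can be recast in signomial/geometric programming form and handed to modern convex optimization tools, which appears to be the spirit of the approach described.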
Planning in the Continuous Operations Environment of the International Space Station
NASA Technical Reports Server (NTRS)
Maxwell, Theresa; Hagopian, Jeff
1996-01-01
The continuous operation planning approach developed for the operations planning of the International Space Station (ISS) is reported on. The approach was designed to be a robust and cost-effective method. It separates ISS planning into two planning functions: long-range planning for a fixed length planning horizon which continually moves forward as ISS operations progress, and short-range planning which takes a small segment of the long-range plan and develops a detailed operations schedule. The continuous approach is compared with the incremental approach, the short and long-range planning functions are described, and the benefits and challenges of implementing a continuous operations planning approach for the ISS are summarized.
Cost analysis of incidental durotomy in spine surgery.
Nandyala, Sreeharsha V; Elboghdady, Islam M; Marquez-Lara, Alejandro; Noureldin, Mohamed N B; Sankaranarayanan, Sriram; Singh, Kern
2014-08-01
Retrospective database analysis. To characterize the consequences of an incidental durotomy with regard to perioperative complications and total hospital costs. There is a paucity of data regarding how an incidental durotomy and its associated complications may relate to total hospital costs. The Nationwide Inpatient Sample database was queried from 2008 to 2011. Patients who underwent cervical or lumbar decompression and/or fusion procedures were identified, stratified by approach, and separated into cohorts based on a documented intraoperative incidental durotomy. Patient demographics, comorbidities (Charlson Comorbidity Index), length of hospital stay, perioperative outcomes, and costs were assessed. Analysis of covariance and multivariate linear regression were used to assess the adjusted mean costs of hospitalization as a function of durotomy. The incidental durotomy rate in cervical and lumbar spine surgery is 0.4% and 2.9%, respectively. Patients with an incidental durotomy incurred a longer hospitalization and a greater incidence of perioperative complications including hematoma and neurological injury (P < 0.001). Regression analysis demonstrated that a cervical durotomy and its postoperative sequelae contributed an additional adjusted $7638 (95% confidence interval, 6489-8787; P < 0.001) to the total hospital costs. Similarly, lumbar durotomy contributed an additional adjusted $2412 (95% confidence interval, 1920-2902; P < 0.001) to the total hospital costs. The approach-specific procedural groups demonstrated similar discrepancies in the mean total hospital costs as a function of durotomy. This analysis of the Nationwide Inpatient Sample database demonstrates that incidental durotomies increase hospital resource utilization and costs. In addition, it seems that a cervical durotomy and its associated complications carry a greater financial burden than a lumbar durotomy. Further studies are warranted to investigate the long-term financial implications of incidental durotomies in spine surgery and to reduce the costs associated with this complication. 3.
Wafer integrated micro-scale concentrating photovoltaics
NASA Astrophysics Data System (ADS)
Gu, Tian; Li, Duanhui; Li, Lan; Jared, Bradley; Keeler, Gordon; Miller, Bill; Sweatt, William; Paap, Scott; Saavedra, Michael; Das, Ujjwal; Hegedus, Steve; Tauke-Pedretti, Anna; Hu, Juejun
2017-09-01
Recent development of a novel micro-scale PV/CPV technology is presented. The Wafer Integrated Micro-scale PV approach (WPV) seamlessly integrates multijunction micro-cells with a multi-functional silicon platform that provides optical micro-concentration, hybrid photovoltaics, and mechanical micro-assembly. The wafer-embedded micro-concentrating elements are shown to considerably improve the concentration-acceptance-angle product, potentially leading to dramatically reduced module materials and fabrication costs, sufficient angular tolerance for low-cost trackers, and an ultra-compact optical architecture, which makes the WPV module compatible with commercial flat panel infrastructures. The PV/CPV hybrid architecture further allows the collection of both direct and diffuse sunlight, thus extending the geographic and market domains for cost-effective PV system deployment. The WPV approach can potentially benefit from both the high performance of multijunction cells and the low cost of flat plate Si PV systems.
Thermal treatment of the minority game
NASA Astrophysics Data System (ADS)
Burgos, E.; Ceva, Horacio; Perazzo, R. P.
2002-03-01
We study a cost function for the aggregate behavior of all the agents involved in the minority game (MG) or the bar attendance model (BAM). The cost function allows us to define a deterministic, synchronous dynamic that yields results sharing the main relevant features of those of the probabilistic, sequential dynamics used for the MG or the BAM. We define a temperature through a Langevin approach in terms of the fluctuations of the average attendance. We prove that the cost function is an extensive quantity that can play the role of an internal energy of the many-agent system, while the temperature so defined is an intensive parameter. We compare the results of the thermal perturbation to the deterministic dynamics and prove that they agree with those obtained with the MG or BAM in the limit of very low temperature.
ERIC Educational Resources Information Center
Reuben, David B.; Seeman, Teresa E.; Keeler, Emmett; Hayes, Risa P.; Bowman, Lee; Sewall, Ase; Hirsch, Susan H.; Wallace, Robert B.; Guralnik, Jack M.
2004-01-01
Purpose: We determined the prognostic value of self-reported and performance-based measurement of function, including functional transitions and combining different measurement approaches, on utilization. Design and Methods: Our cohort study used the 6th, 7th, and 10th waves of three sites of the Established Populations for Epidemiologic Studies…
Chaotic simulated annealing by a neural network with a variable delay: design and application.
Chen, Shyan-Shiou
2011-10-01
In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. On the other hand, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model should possess a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we simply present bifurcation diagrams for a single neuron with a constant delay and with a variable delay. We show that the variably delayed model possesses the stochastic property and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem. © 2011 IEEE
NASA Astrophysics Data System (ADS)
Doerr, Timothy; Alves, Gelio; Yu, Yi-Kuo
2006-03-01
Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data.
Sandra, Dasha A; Otto, A Ross
2018-03-01
While psychological, economic, and neuroscientific accounts of behavior broadly maintain that people minimize expenditure of cognitive effort, empirical work reveals how reward incentives can mobilize increased cognitive effort expenditure. Recent theories posit that the decision to expend effort is governed, in part, by a cost-benefit tradeoff whereby the potential benefits of mental effort can offset the perceived costs of effort exertion. Taking an individual differences approach, the present study examined whether one's executive function capacity, as measured by Stroop interference, predicts the extent to which reward incentives reduce switch costs in a task-switching paradigm, which indexes additional expenditure of cognitive effort. In accordance with the predictions of a cost-benefit account of effort, we found that a low executive function capacity (and, relatedly, a low intrinsic motivation to expend effort, measured by Need for Cognition) predicted a larger increase in cognitive effort expenditure in response to monetary reward incentives, while individuals with greater executive function capacity (and greater intrinsic motivation to expend effort) were less responsive to reward incentives. These findings suggest that an individual's cost-benefit tradeoff is constrained by the perceived costs of exerting cognitive effort. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Nelson, James H.; Callan, Daniel R.
1985-01-01
To establish consistency and visibility within the Orbital Transfer Vehicle (OTV) program, a preliminary work breakdown structure (WBS) and dictionary were developed. The dictionary contains definitions of terms to be used in conjunction with the WBS so that a clear understanding of the content of the hardware, function, and cost elements may be established. The OTV WBS matrix is a two-dimensional structure which shows the interrelationship of these dimensions: the hardware elements dimension and the phase and function dimension. The dimension of time cannot be shown graphically, but must be considered. Each cost entry varies with time so that it is necessary to know these cost values by year for budget planning and approval as well as for establishing cost streams for discounting purposes in the economic analysis. While a multiple dimensional approach may at first appear complex, it actually provides benefits which outweigh any concern. This structural interrelationship provides the capability to view and analyze the OTV costs from a number of different financial and management aspects. Cost may be summed by hardware groupings, phases, or functions. The WBS may be used in a number of dimensional or single listing format applications.
Determining Functional Reliability of Pyrotechnic Mechanical Devices
NASA Technical Reports Server (NTRS)
Bement, Laurence J.; Multhaup, Herbert A.
1997-01-01
This paper describes a new approach for evaluating mechanical performance and predicting the mechanical functional reliability of pyrotechnic devices. Not included are other possible failure modes, such as the initiation of the pyrotechnic energy source. The requirement of hundreds or thousands of consecutive, successful tests on identical components for reliability predictions, using the generally accepted go/no-go statistical approach routinely ignores physics of failure. The approach described in this paper begins with measuring, understanding and controlling mechanical performance variables. Then, the energy required to accomplish the function is compared to that delivered by the pyrotechnic energy source to determine mechanical functional margin. Finally, the data collected in establishing functional margin is analyzed to predict mechanical functional reliability, using small-sample statistics. A careful application of this approach can provide considerable cost improvements and understanding over that of go/no-go statistics. Performance and the effects of variables can be defined, and reliability predictions can be made by evaluating 20 or fewer units. The application of this approach to a pin puller used on a successful NASA mission is provided as an example.
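A hedged sketch of the margin idea: compare the energy a mechanism needs to function against the energy the pyrotechnic source delivers, and convert the margin into a probability with small-sample statistics. The normal-theory calculation and all numbers below are stand-ins for illustration, not the paper's procedure or data:

```python
import numpy as np
from scipy.stats import norm

# Stress-strength style margin sketch: energy required vs. energy delivered.
required = np.array([8.1, 7.9, 8.4, 8.0, 8.2, 7.8])     # J needed to function (made up)
delivered = np.array([14.6, 15.1, 14.9, 15.3, 14.7])    # J delivered by source (made up)

margin = delivered.mean() - required.mean()
sigma = np.sqrt(delivered.var(ddof=1) + required.var(ddof=1))
print("functional margin:", round(margin, 2), "J,", round(margin / sigma, 1), "sigma")
print("approx. probability of functioning:", norm.cdf(margin / sigma))
```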
NASA Astrophysics Data System (ADS)
Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca
2010-05-01
The modern distributed hydrological models allow the representation of the different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and the available measurements, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out an assimilation of the observations. In this work the first approach was followed in order to compare the performances of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and comprises a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and utilized as a reference. The second algorithm is a GSS (Generating Set Search) algorithm, built in order to guarantee the conditions of global convergence and suitable for the parallel and multi-start implementation presented here. The third one is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions that require expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem and the need to improve the approximation by sampling where the prediction error may be high. The hydrological model to be calibrated was MOBIDIC, a complete distributed balance model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion of the comparative effectiveness of the different algorithms on case studies of basins in Central Italy is provided.
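An illustrative direct-search calibration in the spirit of the first algorithm, using SciPy's Nelder-Mead on a stand-in black-box model (not MOBIDIC) with synthetic observations:

```python
import numpy as np
from scipy.optimize import minimize

# The "model" is a hypothetical two-parameter response; the cost is an RMSE
# against synthetic observations, treated as a black box by the optimizer.

def run_model(params, forcing):
    k, alpha = params
    return k * forcing**alpha

rng = np.random.default_rng(0)
forcing = rng.uniform(1, 10, 50)
obs = run_model((2.0, 0.7), forcing) + 0.1 * rng.standard_normal(50)

def cost(params):
    sim = run_model(params, forcing)
    return np.sqrt(np.mean((sim - obs)**2))

res = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x, res.fun)
```

A multi-start variant, in the spirit of the parallel GSS implementation described, simply repeats the call from several initial points and keeps the lowest-cost result.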
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
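A small sketch of the idea: the displacement of a Gaussian spot is found by minimizing a square-difference (SDF) criterion over candidate shifts, and a three-step-style coarse-to-fine search visits only a few candidates instead of scanning every displacement. The spot parameters and image sizes are assumptions of the sketch:

```python
import numpy as np

# SDF matching with a coarse-to-fine (three-step style) search. The shifted
# template is regenerated analytically rather than resampled, for simplicity.

def gaussian_spot(shape, cx, cy, sigma=2.0):
    y, x = np.indices(shape)
    return np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))

ref_center = (16.0, 16.0)
img = gaussian_spot((32, 32), 18.0, 15.0)        # true shift (+2, -1)

def sdf(dx, dy):
    shifted = gaussian_spot(img.shape, ref_center[0] + dx, ref_center[1] + dy)
    return np.sum((shifted - img)**2)

best, step = (0, 0), 4
while step >= 1:                                 # halve the search step each pass
    cands = [(best[0] + i * step, best[1] + j * step)
             for i in (-1, 0, 1) for j in (-1, 0, 1)]
    best = min(cands, key=lambda s: sdf(*s))
    step //= 2
print("estimated shift:", best)                  # expect (2, -1)
```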
An effective and comprehensive model for optimal rehabilitation of separate sanitary sewer systems.
Diogo, António Freire; Barros, Luís Tiago; Santos, Joana; Temido, Jorge Santos
2018-01-15
In the field of rehabilitation of separate sanitary sewer systems, a large number of technical, environmental, and economic aspects are often relevant in the decision-making process, which may be modelled as a multi-objective optimization problem. Examples are those related with the operation and assessment of networks, optimization of structural, hydraulic, sanitary, and environmental performance, rehabilitation programmes, and execution works. In particular, the cost of investment, operation and maintenance needed to reduce or eliminate Infiltration from the underground water table and Inflows of storm water surface runoff (I/I) using rehabilitation techniques or related methods can be significantly lower than the cost of transporting and treating these flows throughout the lifespan of the systems or period studied. This paper presents a comprehensive I/I cost-benefit approach for rehabilitation that explicitly considers all elements of the systems and shows how the approximation is incorporated as an objective function in a general evolutionary multi-objective optimization model. It takes into account network performance and wastewater treatment costs, average values of several input variables, and rates that can reflect the adoption of different predictable or limiting scenarios. The approach can be used as a practical and fast tool to support decision-making in sewer network rehabilitation in any phase of a project. The fundamental aspects, modelling, implementation details and preliminary results of a two-objective optimization rehabilitation model using a genetic algorithm, with a second objective function related to the structural condition of the network and the service failure risk, are presented. The basic approach is applied to three real world cases studies of sanitary sewerage systems in Coimbra and the results show the simplicity, suitability, effectiveness, and usefulness of the approximation implemented and of the objective function proposed. Copyright © 2017 Elsevier B.V. All rights reserved.
A programmatic approach to long-term bridge preventive maintenance.
DOT National Transportation Integrated Search
2016-04-15
State transportation agencies use cost-effective preventive maintenance (PM) programs to preserve existing roadway systems, slow down their deterioration, and improve their functional condition. Currently, KYTCs bridge inventory includes approxima...
How Costly is Hospital Quality? A Revealed-Preference Approach*
Romley, John A.; Goldman, Dana P.
2013-01-01
We analyze the cost of quality improvement in hospitals, dealing with two challenges. Hospital quality is multidimensional and hard to measure, while unobserved productivity may influence quality supply. We infer the quality of hospitals in Los Angeles from patient choices. We then incorporate ‘revealed quality’ into a cost function, instrumenting with hospital demand. We find that revealed quality differentiates hospitals, but is not strongly correlated with clinical quality. Revealed quality is quite costly, and tends to increase with hospital productivity. Thus, non-clinical aspects of the hospital experience (perhaps including patient amenities) play important roles in hospital demand, competition, and costs. PMID:22299199
The cost of a small membrane bioreactor.
Lo, C H; McAdam, E; Judd, S
2015-01-01
The individual cost contributions to the mechanical components of a small membrane bioreactor (MBR) (100-2,500 m3/d flow capacity) are itemised and collated to generate overall capital and operating costs (CAPEX and OPEX) as a function of size. The outcomes are compared to those from previously published detailed cost studies provided for both very small containerised plants (<40 m3/day capacity) and larger municipal plants (2,200-19,000 m3/d). Cost curves, as a function of flow capacity, determined for OPEX, CAPEX and net present value (NPV) based on the heuristic data used indicate a logarithmic function for OPEX and a power-based one for the CAPEX. OPEX correlations were in good quantitative agreement with those reported in the literature. Disparities in the calculated CAPEX trend compared with reported data were attributed to differences in assumptions concerning cost contributions. More reasonable agreement was obtained with the reported membrane separation component CAPEX data from published studies. The heuristic approach taken appears appropriate for small-scale MBRs with minimal costs associated with installation. An overall relationship of NPV = (a t^b) Q^(-c ln t + d) was determined for the net present value, where a=1.265, b=0.44, c=0.00385 and d=0.868 according to the dataset employed for the analysis.
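Evaluating the fitted relationship is then straightforward; note that interpreting t as the analysis horizon in years and Q as flow capacity in m3/d, as well as the monetary units of the result, are assumptions of this sketch:

```python
import math

# NPV = (a * t**b) * Q**(-c*ln(t) + d), with the coefficients quoted above.
a, b, c, d = 1.265, 0.44, 0.00385, 0.868

def npv(Q, t):
    return (a * t**b) * Q ** (-c * math.log(t) + d)

print(npv(1000, 20))   # e.g. a 1,000 m3/d plant over a 20-year horizon (assumed units)
```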
Opportunities in SMR Emergency Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moe, Wayne L.
2014-10-01
Using year 2014 cost information gathered from twenty different locations within the current commercial nuclear power station fleet, an assessment was performed concerning compliance costs associated with the offsite emergency Planning Standards contained in 10 CFR 50.47(b). The study was conducted to quantitatively determine the potential cost benefits realized if an emergency planning zone (EPZ) were reduced in size according to the lowered risks expected to accompany small modular reactors (SMR). Licensees are required to provide a technical basis when proposing to reduce the surrounding EPZ size to less than the 10 mile plume exposure and 50 mile ingestion pathway distances currently being used. To assist licensees in assessing the savings that might be associated with such an action, this study established offsite emergency planning costs in connection with four discrete EPZ boundary distances, i.e., site boundary, 2 miles, 5 miles and 10 miles. The boundary selected by the licensee would be based on where EPA Protective Action Guidelines are no longer likely to be exceeded. Additional consideration was directed towards costs associated with reducing the 50 mile ingestion pathway EPZ. The assessment methodology consisted of gathering actual capital costs and annual operating and maintenance costs for offsite emergency planning programs at the surveyed sites, partitioning them according to key predictive factors, and allocating those portions to individual emergency Planning Standards as a function of EPZ size. Two techniques, an offsite population-based approach and an area-based approach, were then employed to calculate the scaling factors which enabled cost projections as a function of EPZ size. Site-specific factors that influenced source data costs, such as the effects of supplemental funding to external state and local agencies for offsite response organization activities, were incorporated into the analysis to the extent those factors could be representatively apportioned.
Weight and the Future of Space Flight Hardware Cost Modeling
NASA Technical Reports Server (NTRS)
Prince, Frank A.
2003-01-01
Weight has been used as the primary input variable for cost estimating almost as long as there have been parametric cost models. While there are good reasons for using weight, serious limitations exist. These limitations have been addressed by multi-variable equations and trend analysis in models such as NAFCOM, PRICE, and SEER; however, these models have not been able to address the significant time lags that can occur between the development of similar space flight hardware systems. These time lags make the cost analyst's job difficult because insufficient data exists to perform trend analysis, and the current set of parametric models are not well suited to accommodating process improvements in space flight hardware design, development, build and test. As a result, people of good faith can have serious disagreements over the cost for new systems. To address these shortcomings, new cost modeling approaches are needed. The most promising approach is process based (sometimes called activity) costing. Developing process based models will require a detailed understanding of the functions required to produce space flight hardware combined with innovative approaches to estimating the necessary resources. Particularly challenging will be the lack of data at the process level. One method for developing a model is to combine notional algorithms with a discrete event simulation and model changes to the total cost as perturbations to the program are introduced. Despite these challenges, the potential benefits are such that efforts should be focused on developing process based cost models.
Gajana Bhat; John Bergsrom; R. Jeff Teasley
1998-01-01
This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characteristics. The individual travel cost method was employed to estimate recreation demand functions for activities such...
Thousands of chemicals for which limited toxicological data are available are used and then detected in humans and the environment. Rapid and cost-effective approaches for assessing the toxicological properties of chemicals are needed. We used CRISPR-Cas9 functional genomic scree...
A programmatic approach to long-term bridge preventive maintenance : final report.
DOT National Transportation Integrated Search
2016-04-01
State transportation agencies use cost-effective preventive maintenance (PM) programs to preserve existing roadway systems, slow down their deterioration, and improve their functional condition. Currently, KYTCs bridge inventory includes approxima...
Replica Approach for Minimal Investment Risk with Cost
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-06-01
In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.
Production of ELZM mirrors: performance coupled with attractive schedule, cost, and risk factors
NASA Astrophysics Data System (ADS)
Leys, Antoine; Hull, Tony; Westerhoff, Thomas
2016-08-01
Extreme light weighted ZERODUR Mirrors (ELZM) have been developed to exploit the superb thermal characteristics of ZERODUR. Coupled with up-to-date mechanical and optical fabrication methods, this becomes an attractive technical approach. However, the process of making the mirror substrates has proven to be unusually rapid and especially cost-effective. ELZM is aimed at the knee of the cost-versus-lightweighting curve. ELZM mirrors are available at 88% lightweighting. Together with their low-risk, low-cost production methods, this is presented as a strong option for NASA Explorer and Probe class missions.
Principles of light harvesting from single photosynthetic complexes.
Schlau-Cohen, G S
2015-06-06
Photosynthetic systems harness sunlight to power most life on Earth. In the initial steps of photosynthetic light harvesting, absorbed energy is converted to chemical energy with near-unity quantum efficiency. This is achieved by an efficient, directional and regulated flow of energy through a network of proteins. Here, we discuss the following three key principles of this flow and of photosynthetic light harvesting: thermal fluctuations of the protein structure; intrinsic conformational switches with defined functional consequences; and environmentally triggered conformational switches. Through these principles, photosynthetic systems balance two types of operational costs: metabolic costs, or the cost of maintaining and running the molecular machinery, and opportunity costs, or the cost of losing any operational time. Understanding how the molecular machinery and dynamics are designed to balance these costs may provide a blueprint for improved artificial light-harvesting devices. With a multi-disciplinary approach combining knowledge of biology, this blueprint could lead to low-cost and more effective solar energy conversion. Photosynthetic systems achieve widespread light harvesting across the Earth's surface; in the face of our growing energy needs, this is functionality we need to replicate, and perhaps emulate.
Traffic routing for multicomputer networks with virtual cut-through capability
NASA Technical Reports Server (NTRS)
Kandlur, Dilip D.; Shin, Kang G.
1992-01-01
Consideration is given to the problem of selecting routes for interprocess communication in a network with virtual cut-through capability, while balancing the network load and minimizing the number of times that a message gets buffered. An approach is proposed that formulates the route selection problem as a minimization problem with a link cost function that depends upon the traffic through the link. The form of this cost function is derived using the probability of establishing a virtual cut-through route. The route selection problem is shown to be NP-hard, and an algorithm is developed to incrementally reduce the cost by rerouting the traffic. The performance of this algorithm is exemplified by two network topologies: the hypercube and the C-wrapped hexagonal mesh.
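A minimal sketch of the incremental rerouting idea follows: flows are repeatedly re-assigned to shortest paths under a load-dependent link cost. The specific convex cost 1/(1 - load/capacity), the toy topology, and the number of sweeps are assumptions; the paper derives its cost from the probability of establishing a cut-through route, which is not reproduced here.

```python
import heapq
from collections import defaultdict

# undirected toy network; values are link capacities (hypothetical)
links = {('A', 'B'): 10, ('B', 'C'): 10, ('A', 'C'): 6, ('C', 'D'): 10, ('B', 'D'): 6}
flows = [('A', 'D', 4), ('A', 'D', 4), ('B', 'C', 3)]     # (src, dst, demand)

def neighbors(n):
    for (u, v) in links:
        if u == n:
            yield v, (u, v)
        elif v == n:
            yield u, (u, v)

def link_cost(link, traffic):
    # assumed convex, load-dependent cost; stands in for the cut-through-based cost
    rho = min(traffic / links[link], 0.99)
    return 1.0 / (1.0 - rho)

def shortest_path(src, dst, load):
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, n = heapq.heappop(pq)
        if n == dst:
            break
        if d > dist.get(n, float('inf')):
            continue
        for m, link in neighbors(n):
            nd = d + link_cost(link, load[link])
            if nd < dist.get(m, float('inf')):
                dist[m], prev[m] = nd, (n, link)
                heapq.heappush(pq, (nd, m))
    path, n = [], dst
    while n != src:                          # reconstruct the chosen links
        n, link = prev[n]
        path.append(link)
    return path

# start empty, then repeatedly reroute each flow against the others' residual traffic
load, routes = defaultdict(float), {}
for sweep in range(3):
    for i, (s, t, dem) in enumerate(flows):
        if i in routes:                      # remove this flow's contribution first
            for link in routes[i]:
                load[link] -= dem
        path = shortest_path(s, t, load)
        for link in path:
            load[link] += dem
        routes[i] = path

total = sum(link_cost(l, load[l]) * load[l] for l in links)
print(routes, round(total, 2))
```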
A New Approach to Hospital Cost Functions and Some Issues in Revenue Regulation
Friedman, Bernard; Pauly, Mark V.
1983-01-01
An important aspect of hospital revenue regulation at the State level is the use of retroactive allowances for changes in the volume of service. Arguments favoring non-proportional allowances have been based on statistical studies of marginal cost, together with concerns about fairness toward non-profit enterprises or concerns about various inflationary biases in hospital management. This article attempts to review and clarify the regulatory issues and choices, with the aid of new econometric work that explicitly allows for the effects of transitory as well as expected demand changes on hospital expense. The present analysis is also novel in treating length of stay as an endogenous variable in cost functions. We analyzed cost variation for a panel of over 800 hospitals that reported monthly to Hospital Administrative Services between 1973 and 1978. The central results are that marginal cost of unexpected admissions is about half of average cost, while marginal cost of forecasted admissions is about equal to average cost. We obtained relatively low estimates of the cost of an “empty bed.” The study tends to support proportional volume allowances in revenue regulation programs, with perhaps a residual role for selective case review. PMID:10309853
Can deficits in social problem-solving in people with personality disorder be reversed?
Crawford, M J
2007-04-01
Research evidence is beginning to emerge that social problem-solving can improve the social functioning of people with personality disorder. This approach is particularly important because it may be relatively easy to train healthcare workers to deliver this intervention. However, the costs and cost-effectiveness of social problem-solving need to be established if it is to be made more widely available.
1982-12-01
[Report excerpt on maritime cadet training; only table-of-contents and text fragments are recoverable.] The already high costs associated with at-sea training have been escalating, motivating the use of a bridge simulator within a multiple-media approach to cadet training. The fragments also note that color displays are desirable for high workloads, although the additional cost of multicolor under nighttime conditions may not be justified.
A machine learning approach for efficient uncertainty quantification using multiscale methods
NASA Astrophysics Data System (ADS)
Chan, Shing; Elsheikh, Ahmed H.
2018-02-01
Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
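The following sketch illustrates the data-driven idea on a 1D analogue: a small neural network is trained to map local (log-)permeability patches to the basis function obtained from the corresponding local solve, so that new basis functions can be predicted instead of solved for. The 1D local problem, network size, and sample counts are assumptions for illustration; the paper works with the Multiscale Finite Volume basis on dual-grid cells.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_cells, n_samples = 16, 2000          # cells per 1D dual cell, training samples (toy)

def local_basis(logk):
    """1D multiscale basis: solve (k u')' = 0 with u = 1 on the left, u = 0 on the right."""
    k = np.exp(logk)
    resistance = np.cumsum(1.0 / k)
    return 1.0 - resistance / resistance[-1]

K = rng.standard_normal((n_samples, n_cells))       # random log-permeability patches
U = np.array([local_basis(k) for k in K])           # "expensive" local solves

# neural network predictor: permeability patch -> basis function values
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(K[:1500], U[:1500])

U_pred = model.predict(K[1500:])
rel_err = np.linalg.norm(U_pred - U[1500:]) / np.linalg.norm(U[1500:])
print(f"relative prediction error on held-out patches: {rel_err:.3f}")
```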
Principal Component Geostatistical Approach for large-dimensional inverse problems
Kitanidis, P K; Lee, J
2014-01-01
The quasi-linear geostatistical approach is for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, in its textbook implementation, the approach involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are large. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m^2 n, though there are methods to reduce the computational cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of implementing the inverse procedure scales roughly linearly with m instead of m^2 as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
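A rough sketch of the matrix-free ingredient is given below: Jacobian-vector products are obtained from finite differences of the forward model, and each Gauss-Newton step is restricted to a small K-dimensional subspace, so only K forward runs are needed per iteration. This is an illustration of the idea only, not the PCGA algorithm itself; the toy forward model, the random subspace, and the regularization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 200, 50                           # unknowns, observations (toy sizes)
A = rng.standard_normal((n, m)) / np.sqrt(m)

def forward(s):
    # hypothetical mildly nonlinear observation operator h(s)
    return np.tanh(A @ s)

s_true = rng.standard_normal(m)
d_obs = forward(s_true) + 0.01 * rng.standard_normal(n)

def jvp(s, v, eps=1e-6):
    """Jacobian-vector product by finite differences: J v ~ (h(s + eps v) - h(s)) / eps."""
    return (forward(s + eps * v) - forward(s)) / eps

def gauss_newton_step(s, K=20, lam=1e-2):
    """Matrix-free Gauss-Newton step built from K forward runs on a random subspace."""
    V = rng.standard_normal((m, K))
    JV = np.column_stack([jvp(s, V[:, k]) for k in range(K)])   # n x K
    r = d_obs - forward(s)
    # least-squares step restricted to span(V): min ||JV a - r||^2 + lam ||a||^2
    a = np.linalg.solve(JV.T @ JV + lam * np.eye(K), JV.T @ r)
    return s + V @ a

s = np.zeros(m)
for it in range(10):
    s = gauss_newton_step(s)
    print(it, round(float(np.linalg.norm(d_obs - forward(s))), 4))
```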
Larson, Trent; Gudavalli, Ravindra; Prater, Dean; Sutton, Scott
2015-04-01
Respiratory inhalers constitute a large percentage of hospital pharmacy expenditures. Metered-dose inhaler (MDI) canisters usually contain enough medication to last 2 to 4 weeks, while the average hospital stay for acute hospitalizations of respiratory illnesses is only 4-5 days. Hospital pharmacies are often unable to operationalize relabeling of inhalers at discharge to meet regulatory requirements. This dilemma produces drug wastage. The common canister (CC) approach is a method some hospitals implemented in an effort to minimize the costs associated with this issue. The CC program uses a shared inhaler, an individual one-way valve holding chamber, and a cleaning protocol. This approach has been the subject of considerable controversy. Proponents of the CC approach reported considerable cost savings to their institutions. Opponents of the CC approach are not convinced the benefits outweigh even a minimal risk of cross-contamination since adherence to protocols for hand washing and disinfection of the MDI device cannot be guaranteed to be 100% (pathogens from contaminated devices can enter the respiratory tract through inhalation). Other cost containment strategies, such as unit dose nebulizers, may be useful to realize similar reductions in pharmacy drug costs while minimizing the risks of nosocomial infections and their associated medical costs. The CC strategy may be appropriate for some hospital pharmacies that face budget constraints, but a full evaluation of the risks, benefits, and potential costs should guide those who make hospital policy decisions.
A demonstration of a low cost approach to security at shipping facilities and ports
NASA Astrophysics Data System (ADS)
Huck, Robert C.; Al Akkoumi, Mouhammad K.; Herath, Ruchira W.; Sluss, James J., Jr.; Radhakrishnan, Sridhar; Landers, Thomas L.
2010-04-01
Government funding for the security at shipping facilities and ports is limited, so there is a need for low-cost, scalable security systems. With over 20 million sea, truck, and rail containers entering the United States every year, these facilities pose a large security risk. Securing these facilities and monitoring the variety of traffic that enters and leaves them is a major task. To accomplish this, the authors have developed and fielded a low-cost, fully distributed, building-block approach to port security at the inland Port of Catoosa in Oklahoma. Based on prior work accomplished in the design and fielding of an intelligent transportation system in the United States, functional building blocks (e.g., Network, Camera, Sensor, Display, and Operator Console blocks) can be assembled, mixed and matched, and scaled to provide a comprehensive security system. The following functions are demonstrated and scaled through analysis and demonstration: barge tracking, credential checking, container inventory, vehicle tracking, and situational awareness. The concept behind this research is "any operator on any console can control any device at any time."
Successful and stable orthodontic camouflage of a mandibular asymmetry with sliding jigs.
Oliveira, Dauro Douglas; Oliveira, Bruno Franco de; Mordente, Carolina Morsani; Godoy, Gabriela Martins; Soares, Rodrigo Villamarim; Seraidarian, Paulo Isaías
2018-03-12
The purpose of this paper is to present and discuss a simple and low-cost clinical approach to correct an asymmetric skeletal Class III malocclusion combined with an extensive dental open bite that significantly compromised the occlusal function and smile aesthetics of an adult male patient. The patient accepted neither the idealistic surgical-orthodontic treatment option nor the use of temporary anchorage devices to facilitate the camouflage of the asymmetrical skeletal Class III/open bite. Therefore, a very simple and inexpensive biomechanical approach using sliding jigs in the mandibular arch was implemented as the compensatory treatment of the malocclusion. Although only minor enhancements in facial aesthetics were obtained, the occlusal function and dental aesthetics were significantly improved. Furthermore, the patient was very satisfied with his new smile appearance. Advantages of this treatment option included its minimal invasiveness and the remarkably low financial costs involved. Moreover, the final results fulfilled all realistic treatment objectives and the patient's expectations. Results remained stable 5 years post-treatment, demonstrating that excellent results can be obtained with simple, low-cost, but well-controlled mechanics.
Software Review. Macintosh Laboratory Automation: Three Software Packages.
ERIC Educational Resources Information Center
Jezl, Barbara Ann
1990-01-01
Reviewed are "LABTECH NOTEBOOK,""LabVIEW," and "Parameter Manager pmPLUS/pmTALK." Each package is described including functions, uses, hardware, and costs. Advantages and disadvantages of this type of laboratory approach are discussed. (CW)
A stochastic optimal feedforward and feedback control methodology for superagility
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Direskeneli, Haldun; Taylor, Deborah B.
1992-01-01
A new control design methodology is developed: Stochastic Optimal Feedforward and Feedback Technology (SOFFT). Traditional design techniques optimize a single cost function (which expresses the design objectives) to obtain both the feedforward and feedback control laws. This approach places conflicting demands on the control law such as fast tracking versus noise attenuation/disturbance rejection. In the SOFFT approach, two cost functions are defined. The feedforward control law is designed to optimize one cost function, the feedback optimizes the other. By separating the design objectives and decoupling the feedforward and feedback design processes, both objectives can be achieved fully. A new measure of command tracking performance, Z-plots, is also developed. By analyzing these plots at off-nominal conditions, the sensitivity or robustness of the system in tracking commands can be predicted. Z-plots provide an important tool for designing robust control systems. The Variable-Gain SOFFT methodology was used to design a flight control system for the F/A-18 aircraft. It is shown that SOFFT can be used to expand the operating regime and provide greater performance (flying/handling qualities) throughout the extended flight regime. This work was performed under the NASA SBIR program. ICS plans to market the software developed as a new module in its commercial CACSD software package: ACET.
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.
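In discrete time and on a finite state space, the running-maximum structure can be handled by augmenting the state with the current maximum, as in the sketch below; the grid for the auxiliary variable, the random toy dynamics, and the terminal condition are assumptions, and the continuous-time HJB treatment with oblique boundary conditions is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
nx, nu, T = 10, 3, 20                       # states, controls, horizon (toy)
# random controlled transition probabilities P[u, x, x'] and stage costs c[x, u]
P = rng.random((nu, nx, nx))
P /= P.sum(axis=2, keepdims=True)
c = rng.random((nx, nu))

# discretize the running-maximum variable y on a grid covering the stage costs
y_grid = np.linspace(0.0, 1.0, 21)

# terminal condition: the accumulated maximum itself is the cost
V = np.tile(y_grid, (nx, 1))                # V[x, iy] = y_grid[iy]
for t in reversed(range(T)):
    V_new = np.empty_like(V)
    for ix in range(nx):
        for iy, y in enumerate(y_grid):
            best = np.inf
            for u in range(nu):
                ynext = max(y, c[ix, u])                      # running-maximum update
                iy_next = min(int(np.searchsorted(y_grid, ynext)), len(y_grid) - 1)
                best = min(best, P[u, ix] @ V[:, iy_next])    # expected continuation
            V_new[ix, iy] = best
    V = V_new

print("value of the running-maximum problem from state 0:", round(float(V[0, 0]), 3))
```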
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
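The sketch below illustrates the parametric Monte Carlo idea on a toy liquidation problem: a two-parameter trade schedule is simulated over many price paths and the coefficients are chosen by static optimization of expected cost plus a CVaR penalty. The softmax schedule, the impact model, and the parameter values are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
T, n_paths = 20, 2000                  # trading periods, Monte Carlo paths (toy)
X0, sigma, eta = 1.0, 0.02, 0.05       # shares to sell, volatility, temporary impact
lam, alpha = 1.0, 0.95                 # CVaR weight and level (assumed)
noise = rng.standard_normal((n_paths, T))

def schedule(theta):
    """Parametric trade schedule: softmax over a linear-in-time score (assumed form)."""
    score = theta[0] + theta[1] * np.linspace(0, 1, T)
    w = np.exp(score - score.max())
    return X0 * w / w.sum()            # trades sum to the initial position

def objective(theta):
    v = schedule(theta)
    inventory = X0 - np.cumsum(v)                     # shares remaining after each trade
    price_moves = sigma * noise                        # exogenous price increments
    # execution cost: adverse moves on remaining inventory plus temporary impact
    cost = -(price_moves * inventory).sum(axis=1) + eta * np.sum(v**2)
    var = np.quantile(cost, alpha)
    cvar = cost[cost >= var].mean()
    return cost.mean() + lam * cvar

res = minimize(objective, x0=np.zeros(2), method='Nelder-Mead')
print("optimal schedule:", np.round(schedule(res.x), 3))
```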
Using Transformation Group Priors and Maximum Relative Entropy for Bayesian Glaciological Inversions
NASA Astrophysics Data System (ADS)
Arthern, R. J.; Hindmarsh, R. C. A.; Williams, C. R.
2014-12-01
One of the key advances that has allowed better simulations of the large ice sheets of Greenland and Antarctica has been the use of inverse methods. These have allowed poorly known parameters such as the basal drag coefficient and ice viscosity to be constrained using a wide variety of satellite observations. Inverse methods used by glaciologists have broadly followed one of two related approaches. The first is minimization of a cost function that describes the misfit to the observations, often accompanied by some kind of explicit or implicit regularization that promotes smallness or smoothness in the inverted parameters. The second approach is a probabilistic framework that makes use of Bayes' theorem to update prior assumptions about the probability of parameters, making use of data with known error estimates. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. What should the functional form of the cost function be if there are alternatives? What kind of regularization should be applied, and how much? How should the prior probability distribution for a parameter such as basal slipperiness be specified when we know so little about the details of the subglacial environment? Here we consider some approaches that have been used to address these questions and discuss ways that probabilistic prior information used for regularizing glaciological inversions might be specified with greater objectivity.
The private duty market: operational and staffing considerations.
Considine, C J
2000-01-01
Wouldn't it be nice to get away from Medicare home health regulations and into a business where the provider simply needs an understanding of the difference between a revenue and an expense? Cost reports, disallowances, allocations, cost shifts, and cost limits become a thing of the past. This article will help you plan and evaluate your approach to developing a private duty business. An overview of the market, operational considerations, and a discussion of various key staff positions and their functions allows you to explore this opportunity from many vantage points.
An inverse model for a free-boundary problem with a contact line: Steady case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, Oleg; Protas, Bartosz
2009-07-20
This paper reformulates the two-phase solidification problem (i.e., the Stefan problem) as an inverse problem in which a cost functional is minimized with respect to the position of the interface and subject to PDE constraints. An advantage of this formulation is that it allows for a thermodynamically consistent treatment of the interface conditions in the presence of a contact point involving a third phase. It is argued that such an approach in fact represents a closure model for the original system and some of its key properties are investigated. We describe an efficient iterative solution method for the Stefan problem formulated in this way which uses shape differentiation and adjoint equations to determine the gradient of the cost functional. Performance of the proposed approach is illustrated with sample computations concerning 2D steady solidification phenomena.
Evidence for composite cost functions in arm movement planning: an inverse optimal control approach.
Berret, Bastien; Chiovetto, Enrico; Nori, Francesco; Pozzo, Thierry
2011-10-01
An important issue in motor control is understanding the basic principles underlying the accomplishment of natural movements. According to optimal control theory, the problem can be stated in these terms: what cost function do we optimize to coordinate the many more degrees of freedom than necessary to fulfill a specific motor goal? This question has not received a final answer yet, since what is optimized partly depends on the requirements of the task. Many cost functions were proposed in the past, and most of them were found to be in agreement with experimental data. Therefore, the actual principles on which the brain relies to achieve a certain motor behavior are still unclear. Existing results might suggest that movements are not the result of minimizing a single cost function but rather a composite one. In order to better clarify this last point, we consider an innovative experimental paradigm characterized by arm reaching with target redundancy. Within this framework, we make use of an inverse optimal control technique to automatically infer the (combination of) optimality criteria that best fit the experimental data. Results show that the subjects exhibited a consistent behavior during each experimental condition, even though the target point was not prescribed in advance. Inverse and direct optimal control together reveal that the average arm trajectories were best replicated when optimizing a combination of two cost functions, namely a mix of the absolute work of torques and the integrated squared joint acceleration. Our results thus support the cost combination hypothesis and demonstrate that the recorded movements were closely linked to the combination of two complementary functions related to mechanical energy expenditure and joint-level smoothness.
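A toy version of the inverse optimal control step is sketched below: trajectories are generated by minimizing a weighted mix of an absolute-work proxy and integrated squared acceleration, and the mixing weight is recovered by matching a reference movement. The 1D reach, the discretization, and the grid search are assumptions; the relative scaling of the two terms affects how sharply the weight is identified.

```python
import numpy as np
from scipy.optimize import minimize

T, dt = 30, 0.02
x0, xT = 0.0, 1.0                         # fixed start and end of a toy 1D reach

def trajectory_cost(x_interior, alpha):
    x = np.concatenate(([x0], x_interior, [xT]))
    v = np.diff(x) / dt
    a = np.diff(v) / dt
    work = np.sum(np.abs(a * v[:-1])) * dt       # proxy for absolute work
    smooth = np.sum(a**2) * dt                   # integrated squared acceleration
    return alpha * work + (1 - alpha) * smooth

def optimal_trajectory(alpha):
    x_init = np.linspace(x0, xT, T + 1)[1:-1]
    res = minimize(trajectory_cost, x_init, args=(alpha,), method='L-BFGS-B')
    return np.concatenate(([x0], res.x, [xT]))

# "observed" movement generated with a hidden mixing weight
alpha_true = 0.3
x_obs = optimal_trajectory(alpha_true)

# inverse optimal control by grid search over the mixing weight
alphas = np.linspace(0.0, 1.0, 21)
errors = [np.linalg.norm(optimal_trajectory(a) - x_obs) for a in alphas]
alpha_hat = alphas[int(np.argmin(errors))]
print(f"recovered mixing weight: {alpha_hat:.2f} (true {alpha_true})")
```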
NASA Technical Reports Server (NTRS)
1975-01-01
The SATIL 2 computer program was developed to assist with the programmatic evaluation of alternative approaches to establishing and maintaining a specified mix of operational sensors on spacecraft in an operational SEASAT system. The program computes the probability distributions of events (i.e., number of launch attempts, number of spacecraft purchased, etc.), annual recurring cost, and present value of recurring cost. This is accomplished for the specific task of placing a desired mix of sensors in orbit in an optimal fashion in order to satisfy a specified sensor demand function. Flow charts are shown, and printouts of the programs are given.
Accurate position estimation methods based on electrical impedance tomography measurements
NASA Astrophysics Data System (ADS)
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object's position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations, it is possible to use them in real-time applications without requiring high-performance computers.
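The optimization-based route can be illustrated with a toy surrogate forward model, a weighted squared-error cost, and a derivative-free search, as sketched below; the surrogate, noise level, and weighting are assumptions standing in for a full EIT solver.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n_electrodes = 16
angles = 2 * np.pi * np.arange(n_electrodes) / n_electrodes
electrodes = np.column_stack((np.cos(angles), np.sin(angles)))   # unit-radius tank

def forward(pos):
    """Hypothetical surrogate forward model: boundary signal decays with distance
    from the anomaly to each electrode (stands in for a full EIT solver)."""
    d = np.linalg.norm(electrodes - pos, axis=1)
    return 1.0 / (0.1 + d**2)

true_pos = np.array([0.35, -0.20])
meas = forward(true_pos) + 0.01 * rng.standard_normal(n_electrodes)   # noisy data
weight = 1.0 / 0.01**2                                                # inverse noise variance

def cost(pos):
    r = forward(pos) - meas
    return weight * np.sum(r**2)            # weighted squared-error cost

res = minimize(cost, x0=np.zeros(2), method='Nelder-Mead')   # derivative-free search
print("estimated position:", np.round(res.x, 3), "true:", true_pos)
```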
Managing biotechnology in a network-model health plan: a U.S. private payer perspective.
Watkins, John B; Choudhury, Sanchita Roy; Wong, Ed; Sullivan, Sean D
2006-01-01
Emerging biotechnology poses challenges to payers, including access, coverage, reimbursement, patient selection, and affordability. Premera Blue Cross, a private regional health plan, developed an integrated cross-functional approach to managing biologics, built around a robust formulary process that is fast, flexible, fair, and transparent to stakeholders. Results are monitored by cost and use reporting from merged pharmacy and medical claims. Utilization management and case management strategies will integrate with specialty pharmacy programs to improve outcomes and cost-effectiveness. Creative approaches to provider reimbursement can align providers' incentives with those of the plan. Redesign of member benefits can also encourage appropriate use of biotechnology.
Models for forecasting energy use in the US farm sector
NASA Astrophysics Data System (ADS)
Christensen, L. R.
1981-07-01
Econometric models were developed and estimated for the purpose of forecasting electricity and petroleum demand in US agriculture. A structural approach is pursued which takes account of the fact that the quantity demanded of any one input is a decision made in conjunction with other input decisions. Three different functional forms of varying degrees of complexity are specified for the structural cost function, which describes the cost of production as a function of the level of output and factor prices. Demand for materials (all purchased inputs) is derived from these models. A separate model, which breaks this demand up into demand for the four components of materials, is used to produce forecasts of electricity and petroleum in a stepwise manner.
Miri, Mohammad Saleh; Abràmoff, Michael D; Kwon, Young H; Sonka, Milan; Garvin, Mona K
2017-07-01
Bruch's membrane opening-minimum rim width (BMO-MRW) is a recently proposed structural parameter which estimates the remaining nerve fiber bundles in the retina and is superior to other conventional structural parameters for diagnosing glaucoma. Measuring this structural parameter requires identification of BMO locations within spectral domain-optical coherence tomography (SD-OCT) volumes. While most automated approaches for segmentation of the BMO either segment the 2D projection of BMO points or identify BMO points in individual B-scans, in this work, we propose a machine-learning graph-based approach for true 3D segmentation of BMO from glaucomatous SD-OCT volumes. The problem is formulated as an optimization problem for finding a 3D path within the SD-OCT volume. In particular, the SD-OCT volumes are transferred to the radial domain where the closed loop BMO points in the original volume form a path within the radial volume. The estimated location of BMO points in 3D are identified by finding the projected location of BMO points using a graph-theoretic approach and mapping the projected locations onto the Bruch's membrane (BM) surface. Dynamic programming is employed in order to find the 3D BMO locations as the minimum-cost path within the volume. In order to compute the cost function needed for finding the minimum-cost path, a random forest classifier is utilized to learn a BMO model, obtained by extracting intensity features from the volumes in the training set, and computing the required 3D cost function. The proposed method is tested on 44 glaucoma patients and evaluated using manual delineations. Results show that the proposed method successfully identifies the 3D BMO locations and has significantly smaller errors compared to the existing 3D BMO identification approaches. Published by Elsevier B.V.
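The dynamic-programming step can be illustrated independently of the OCT pipeline: given a cost image whose columns are radial (angular) positions and whose rows are candidate depths, a minimum-cost path with bounded jumps between adjacent columns is found by accumulation and backtracking, as in the sketch below. The synthetic cost image and jump limit are assumptions; the closed-loop constraint and the random-forest cost of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
n_depths, n_angles, max_jump = 60, 180, 2       # rows, radial columns, smoothness limit

# synthetic cost image: low cost along a smooth "true" boundary plus noise
true_row = (30 + 8 * np.sin(np.linspace(0, 2 * np.pi, n_angles))).astype(int)
cost = rng.random((n_depths, n_angles))
cost[true_row, np.arange(n_angles)] *= 0.1

# dynamic programming: accumulate minimum cost column by column
acc = np.full_like(cost, np.inf)
acc[:, 0] = cost[:, 0]
back = np.zeros((n_depths, n_angles), dtype=int)
for j in range(1, n_angles):
    for i in range(n_depths):
        lo, hi = max(0, i - max_jump), min(n_depths, i + max_jump + 1)
        k = lo + int(np.argmin(acc[lo:hi, j - 1]))
        acc[i, j] = cost[i, j] + acc[k, j - 1]
        back[i, j] = k

# trace the minimum-cost path back from the last column
path = np.empty(n_angles, dtype=int)
path[-1] = int(np.argmin(acc[:, -1]))
for j in range(n_angles - 1, 0, -1):
    path[j - 1] = back[path[j], j]

print("mean |detected - true| row error:", float(np.mean(np.abs(path - true_row))))
```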
Framework for evaluating disease severity measures in older adults with comorbidity.
Boyd, Cynthia M; Weiss, Carlos O; Halter, Jeff; Han, K Carol; Ershler, William B; Fried, Linda P
2007-03-01
Accounting for the influence of concurrent conditions on health and functional status for both research and clinical decision-making purposes is especially important in older adults. Although approaches to classifying severity of individual diseases and conditions have been developed, the utility of these classification systems has not been evaluated in the presence of multiple conditions. We present a framework for evaluating severity classification systems for common chronic diseases. The framework evaluates the: (a) goal or purpose of the classification system; (b) physiological and/or functional criteria for severity graduation; and (c) potential reliability and validity of the system balanced against burden and costs associated with classification. Approaches to severity classification of individual diseases were not originally conceived for the study of comorbidity. Therefore, they vary greatly in terms of objectives, physiological systems covered, level of severity characterization, reliability and validity, and costs and burdens. Using different severity classification systems to account for differing levels of disease severity in a patient with multiple diseases, or, assessing global disease burden may be challenging. Most approaches to severity classification are not adequate to address comorbidity. Nevertheless, thoughtful use of some existing approaches and refinement of others may advance the study of comorbidity and diagnostic and therapeutic approaches to patients with multimorbidity.
A parallel approach of COFFEE objective function to multiple sequence alignment
NASA Astrophysics Data System (ADS)
Zafalon, G. F. D.; Visotaky, J. M. V.; Amorim, A. R.; Valêncio, C. R.; Neves, L. A.; de Souza, R. C. G.; Machado, J. M.
2015-09-01
Computational tools to assist genomic analyses are increasingly necessary as the amount of available data grows rapidly. Given the high computational cost of deterministic algorithms for sequence alignment, many works concentrate their efforts on the development of heuristic approaches to multiple sequence alignment. However, selecting an approach that offers solutions with good biological significance and feasible execution time is a great challenge. Thus, this work presents the parallelization of the processing steps of the MSA-GA tool using the multithread paradigm in the execution of the COFFEE objective function. The standard objective function implemented in the tool is the Weighted Sum of Pairs (WSP), which produces some distortions in the final alignments when sequence sets with low similarity are aligned. In previous studies we therefore implemented the COFFEE objective function in the tool to smooth these distortions. Although the nature of the COFFEE objective function increases execution time, this approach contains steps which can be executed in parallel. With the improvements implemented in this work, the new approach runs 24% faster than the sequential approach with COFFEE. Moreover, the multithreaded COFFEE approach is more efficient than WSP because, besides being slightly faster, its biological results are better.
Langhans, Simone D; Hermoso, Virgilio; Linke, Simon; Bunn, Stuart E; Possingham, Hugh P
2014-01-01
River rehabilitation aims to protect biodiversity or restore key ecosystem services but the success rate is often low. This is seldom because of insufficient funding for rehabilitation works but because trade-offs between costs and ecological benefits of management actions are rarely incorporated in the planning, and because monitoring is often inadequate for managers to learn by doing. In this study, we demonstrate a new approach to plan cost-effective river rehabilitation at large scales. The framework is based on the use of cost functions (relationship between costs of rehabilitation and the expected ecological benefit) to optimize the spatial allocation of rehabilitation actions needed to achieve given rehabilitation goals (in our case established by the Swiss water act). To demonstrate the approach with a simple example, we link costs of the three types of management actions that are most commonly used in Switzerland (culvert removal, widening of one riverside buffer and widening of both riversides) to the improvement in riparian zone quality. We then use Marxan, a widely applied conservation planning software, to identify priority areas to implement these rehabilitation measures in two neighbouring Swiss cantons (Aargau, AG and Zürich, ZH). The best rehabilitation plans identified for the two cantons met all the targets (i.e. restoring different types of morphological deficits with different actions) rehabilitating 80,786 m (AG) and 106,036 m (ZH) of the river network at a total cost of 106.1 Million CHF (AG) and 129.3 Million CHF (ZH). The best rehabilitation plan for the canton of AG consisted of more and better connected sub-catchments that were generally less expensive, compared to its neighbouring canton. The framework developed in this study can be used to inform river managers how and where best to spend their rehabilitation budget for a given set of actions, ensures the cost-effective achievement of desired rehabilitation outcomes, and helps towards estimating total costs of long-term rehabilitation activities. Rehabilitation plans ready to be implemented may be based on additional aspects to the ones considered here, e.g., specific cost functions for rural and urban areas and/or for large and small rivers, which can simply be added to our approach. Optimizing investments in this way will ultimately increase the likelihood of on-ground success of rehabilitation activities. Copyright © 2013 Elsevier Ltd. All rights reserved.
Cost-effectiveness of exercise and diet in overweight and obese adults with knee osteoarthritis.
Sevick, Mary A; Miller, Gary D; Loeser, Richard F; Williamson, Jeff D; Messier, Stephen P
2009-06-01
The purpose of this study was to compare the cost-effectiveness of dietary and exercise interventions in overweight or obese elderly patients with knee osteoarthritis (OA) enrolled in the Arthritis, Diet, and Physical Activity Promotion Trial (ADAPT). ADAPT was a single-blinded, controlled trial of 316 adults with knee OA, randomized to one of four groups: Healthy Lifestyle Control group, Diet group, Exercise group, or Exercise and Diet group. A cost analysis was performed from a payer perspective, incorporating those costs and benefits that would be realized by a managed care organization interested in maintaining the health and satisfaction of its enrollees while reducing unnecessary utilization of health care services. The Diet intervention was most cost-effective for reducing weight, at $35 for each percentage point reduction in baseline body weight. The Exercise intervention was most cost-effective for improving mobility, costing $10 for each percentage point improvement in a 6-min walking distance and $9 for each percentage point improvement in the timed stair climbing task. The Exercise and Diet intervention was most cost-effective for improving self-reported function and symptoms of arthritis, costing $24 for each percentage point improvement in subjective function, $20 for each percentage point improvement in self-reported pain, and $56 for each percentage point improvement in self-reported stiffness. The Exercise and Diet intervention consistently yielded the greatest improvements in weight, physical performance, and symptoms of knee OA. However, it was also the most expensive and was the most cost-effective approach only for the subjective outcomes of knee OA (self-reported function, pain, and stiffness). Perceived function and symptoms of knee OA are likely to be stronger drivers of downstream health service utilization than weight, or objective performance measures and may be the most cost-effective in the long term.
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from the application of the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrated many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, a lot of recent applications in machine learning exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0...
Joint brain connectivity estimation from diffusion and functional MRI data
NASA Astrophysics Data System (ADS)
Chu, Shu-Hsien; Lenglet, Christophe; Parhi, Keshab K.
2015-03-01
Estimating brain wiring patterns is critical to better understand the brain organization and function. Anatomical brain connectivity models axonal pathways, while the functional brain connectivity characterizes the statistical dependencies and correlation between the activities of various brain regions. The synchronization of brain activity can be inferred through the variation of blood-oxygen-level dependent (BOLD) signal from functional MRI (fMRI) and the neural connections can be estimated using tractography from diffusion MRI (dMRI). Functional connections between brain regions are supported by anatomical connections, and the synchronization of brain activities arises through sharing of information in the form of electro-chemical signals on axon pathways. Jointly modeling fMRI and dMRI data may improve the accuracy in constructing anatomical connectivity as well as functional connectivity. Such an approach may lead to novel multimodal biomarkers potentially able to better capture functional and anatomical connectivity variations. We present a novel brain network model which jointly models the dMRI and fMRI data to improve the anatomical connectivity estimation and extract the anatomical subnetworks associated with specific functional modes by constraining the anatomical connections as structural supports to the functional connections. The key idea is similar to a multi-commodity flow optimization problem that minimizes the cost or maximizes the efficiency for flow configuration and simultaneously fulfills the supply-demand constraint for each commodity. In the proposed network, the nodes represent the grey matter (GM) regions providing brain functionality, and the links represent white matter (WM) fiber bundles connecting those regions and delivering information. The commodities can be thought of as the information corresponding to brain activity patterns as obtained for instance by independent component analysis (ICA) of fMRI data. The concept of information flow is introduced and used to model the propagation of information between GM areas through WM fiber bundles. The link capacity, i.e., ability to transfer information, is characterized by the relative strength of fiber bundles, e.g., fiber count gathered from the tractography of dMRI data. The node information demand is considered to be proportional to the correlation between neural activity at various cortical areas involved in a particular functional mode (e.g. visual, motor, etc.). These two properties lead to the link capacity and node demand constraints in the proposed model. Moreover, the information flow of a link cannot exceed the demand from either end node. This is captured by the feasibility constraints. Two different cost functions are considered in the optimization formulation in this paper. The first cost function, the reciprocal of fiber strength represents the unit cost for information passing through the link. In the second cost function, a min-max (minimizing the maximal link load) approach is used to balance the usage of each link. Optimizing the first cost function selects the pathway with strongest fiber strength for information propagation. In the second case, the optimization procedure finds all the possible propagation pathways and allocates the flow proportionally to their strength. Additionally, a penalty term is incorporated with both the cost functions to capture the possible missing and weak anatomical connections. 
With this set of constraints and the proposed cost functions, solving the network optimization problem recovers missing and weak anatomical connections supported by the functional information and provides the functional-associated anatomical subnetworks. Feasibility is demonstrated using realistic diffusion and functional MRI phantom data. It is shown that the proposed model recovers the maximum number of true connections, with fewest number of false connections when compared with the connectivity derived from a joint probabilistic model using the expectation-maximization (EM) algorithm presented in a prior work. We also apply the proposed method to data provided by the Human Connectome Project (HCP).
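As a stripped-down illustration of the flow formulation, the sketch below solves a single-commodity min-cost flow on a tiny graph with scipy's linear programming routine, using fiber strength as link capacity and its reciprocal as unit cost; the node names, strengths, and demand are hypothetical, and the multi-commodity, penalized formulation of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# toy anatomical graph: grey-matter nodes and white-matter edges with fiber strengths
nodes = ['V1', 'V2', 'LGN', 'PFC']
edges = [('V1', 'V2'), ('V1', 'LGN'), ('LGN', 'V2'), ('V2', 'PFC')]
strength = np.array([20.0, 5.0, 15.0, 10.0])      # e.g., tract fiber counts (hypothetical)

# one functional "commodity": correlated activity between V1 and PFC requiring 8 units of flow
source, sink, demand = 'V1', 'PFC', 8.0

# min-cost flow LP: minimize sum_e (1/strength_e) * f_e subject to conservation and capacity
n, m = len(nodes), len(edges)
cost = 1.0 / strength                              # weak tracts are expensive to use
A_eq = np.zeros((n, m))
for j, (u, v) in enumerate(edges):                 # node-edge incidence (directed)
    A_eq[nodes.index(u), j] = -1.0
    A_eq[nodes.index(v), j] = 1.0
b_eq = np.zeros(n)
b_eq[nodes.index(source)] = -demand
b_eq[nodes.index(sink)] = demand
bounds = [(0.0, s) for s in strength]              # capacity = fiber strength

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method='highs')
for (u, v), f in zip(edges, res.x):
    print(f"{u}->{v}: flow {f:.1f}")
```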
Comparative genomics approaches to understanding and manipulating plant metabolism.
Bradbury, Louis M T; Niehaus, Tom D; Hanson, Andrew D
2013-04-01
Over 3000 genomes, including numerous plant genomes, are now sequenced. However, their annotation remains problematic as illustrated by the many conserved genes with no assigned function, vague annotations such as 'kinase', or even wrong ones. Around 40% of genes of unknown function that are conserved between plants and microbes are probably metabolic enzymes or transporters; finding functions for these genes is a major challenge. Comparative genomics has correctly predicted functions for many such genes by analyzing genomic context, and gene fusions, distributions and co-expression. Comparative genomics complements genetic and biochemical approaches to dissect metabolism, continues to increase in power and decrease in cost, and has a pivotal role in modeling and engineering by helping identify functions for all metabolic genes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Kleindienst, Roman; Kampmann, Ronald; Stoebenau, Sebastian; Sinzinger, Stefan
2011-07-01
The performance of optical systems is typically improved by increasing the number of conventionally fabricated optical components (spheres, aspheres, and gratings). This approach is automatically connected to a system enlargement, as well as potentially higher assembly and maintenance costs. Hybrid optical freeform components can help to overcome this trade-off. They merge several optical functions within fewer but more complex optical surfaces, e.g., elements comprising shallow refractive/reflective and high-frequency diffractive structures. However, providing the flexibility and precision essential for their realization is one of the major challenges in the field of optical component fabrication. In this article we present tailored integrated machining techniques suitable for rapid prototyping as well as the fabrication of molding tools for low-cost mass replication of hybrid optical freeform components. To produce the different feature sizes with optical surface quality, we successively combine mechanical machining modes (ultraprecision micromilling and fly cutting) with precisely aligned direct picosecond laser ablation in an integrated fabrication approach. The fabrication accuracy and surface quality achieved by our integrated fabrication approach are demonstrated with profilometric measurements and experimental investigations of the optical performance.
Game Theoretic Approaches to Protect Cyberspace
2010-04-20
[Report excerpt; definitions section, fragmentary.] A game is defined as a description of the strategic interaction between opposing, or co-operating, interests; a stochastic game involves probabilistic transitions through several states of the system, with the game progressing as a sequence of states. Later fragments state that the reaction functions uniquely minimize the strictly convex cost functions and that the analysis eventually leads to a discretized model.
Eini C. Lowell; Dennis R. Becker; Robert Rummer; Debra Larson; Linda Wadleigh
2008-01-01
This research provides an important step in the conceptualization and development of an integrated wildfire fuels reduction system from silvicultural prescription, through stem selection, harvesting, in-woods processing, transport, and market selection. Decisions made at each functional step are informed by knowledge about subsequent functions. Data on the resource...
Functional patterned coatings by thin polymer film dewetting.
Telford, Andrew M; Thickett, Stuart C; Neto, Chiara
2017-12-01
An approach for the fabrication of functional polymer surface coatings is introduced, where micro-scale structure and surface functionality are obtained by means of self-assembly mechanisms. We illustrate two main applications of micro-patterned polymer surfaces obtained through dewetting of bilayers of thin polymer films. By tuning the physical and chemical properties of the polymer bilayers, micro-patterned surface coatings could be produced that have applications both for the selective attachment and patterning of proteins and cells, with potential applications as biomaterials, and for the collection of water from the atmosphere. In all cases, the aim is to achieve functional coatings using approaches that are simple to realize, use low cost materials and are potentially scalable. Copyright © 2017 Elsevier Inc. All rights reserved.
Saleh, M; Karfoul, A; Kachenoura, A; Senhadji, L; Albera, L
2016-08-01
This paper investigates improving the execution time and numerical complexity of the well-known kurtosis-based maximization method, RobustICA. A Newton-based scheme is proposed and compared to the conventional RobustICA method, and a new implementation based on the nonlinear conjugate gradient method is also investigated. For the Newton approach, an exact computation of the Hessian of the considered cost function is provided. The proposed approaches and the considered implementations inherit the global plane search of the initial RobustICA method, for which a better convergence speed for a given direction is still guaranteed. Numerical results on Magnetic Resonance Spectroscopy (MRS) source separation show the efficiency of the proposed approaches, notably the quasi-Newton one using the BFGS method.
Stevenson-Holt, Claire D; Watts, Kevin; Bellamy, Chloe C; Nevin, Owen T; Ramsey, Andrew D
2014-01-01
Least-cost models are widely used to study the functional connectivity of habitat within a varied landscape matrix. A critical step in the process is identifying resistance values for each land cover based upon the facilitating or impeding impact on species movement. Ideally resistance values would be parameterised with empirical data, but due to a shortage of such information, expert-opinion is often used. However, the use of expert-opinion is seen as subjective, human-centric and unreliable. This study derived resistance values from grey squirrel habitat suitability models (HSM) in order to compare the utility and validity of this approach with more traditional, expert-led methods. Models were built and tested with MaxEnt, using squirrel presence records and a categorical land cover map for Cumbria, UK. Predictions on the likelihood of squirrel occurrence within each land cover type were inverted, providing resistance values which were used to parameterise a least-cost model. The resulting habitat networks were measured and compared to those derived from a least-cost model built with previously collated information from experts. The expert-derived and HSM-inferred least-cost networks differ in precision. The HSM-informed networks were smaller and more fragmented because of the higher resistance values attributed to most habitats. These results are discussed in relation to the applicability of both approaches for conservation and management objectives, providing guidance to researchers and practitioners attempting to apply and interpret a least-cost approach to mapping ecological networks.
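A minimal sketch of the HSM-to-resistance step follows: a suitability surface is inverted to resistance and a least-cost distance is computed with a hand-rolled Dijkstra over the raster, avoiding reliance on any particular GIS package. The random suitability grid, the 1/suitability inversion, and the 4-connected neighbourhood are assumptions for illustration.

```python
import heapq
import numpy as np

rng = np.random.default_rng(7)
n = 40
suitability = np.clip(rng.random((n, n)), 0.01, 1.0)    # HSM output in (0, 1]
resistance = 1.0 / suitability                          # inverted suitability as resistance

def least_cost_distance(res_grid, start, goal):
    """Dijkstra over a 4-connected raster; step cost = average resistance of the two cells."""
    nrow, ncol = res_grid.shape
    dist = np.full((nrow, ncol), np.inf)
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == goal:
            return d
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < nrow and 0 <= nj < ncol:
                step = 0.5 * (res_grid[i, j] + res_grid[ni, nj])
                if d + step < dist[ni, nj]:
                    dist[ni, nj] = d + step
                    heapq.heappush(pq, (d + step, (ni, nj)))
    return np.inf

cost_hsm = least_cost_distance(resistance, (0, 0), (n - 1, n - 1))
print(f"accumulated least-cost distance across the grid: {cost_hsm:.1f}")
```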
Hyperspherical Sparse Approximation Techniques for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max; ...
2016-08-04
This work proposes a hyperspherical sparse approximation framework for detecting jump discontinuities in functions in high-dimensional spaces. The need for a novel approach results from the theoretical and computational inefficiencies of well-known approaches, such as adaptive sparse grids, for discontinuity detection. Our approach constructs the hyperspherical coordinate representation of the discontinuity surface of a function. Then sparse approximations of the transformed function are built in the hyperspherical coordinate system, with values at each point estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Several approaches are used to approximate the transformed discontinuity surface in the hyperspherical system, including adaptive sparse grid and radial basis function interpolation, discrete least squares projection, and compressed sensing approximation. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. In conclusion, rigorous complexity analyses of the new methods are provided, as are several numerical examples that illustrate the effectiveness of our approach.
Conjugate gradient minimisation approach to generating holographic traps for ultracold atoms.
Harte, Tiffany; Bruce, Graham D; Keeling, Jonathan; Cassettari, Donatella
2014-11-03
Direct minimisation of a cost function can in principle provide a versatile and highly controllable route to computational hologram generation. Here we show that the careful design of cost functions, combined with numerically efficient conjugate gradient minimisation, establishes a practical method for the generation of holograms for a wide range of target light distributions. This results in a guided optimisation process, with a crucial advantage illustrated by the ability to circumvent optical vortex formation during hologram calculation. We demonstrate the implementation of the conjugate gradient method for both discrete and continuous intensity distributions and discuss its applicability to optical trapping of ultracold atoms.
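The sketch below shows the bare bones of hologram generation by direct cost minimization: a phase-only hologram is optimized so that the intensity of its far field (one FFT) matches a target spot pattern, using scipy's nonlinear conjugate gradient with a finite-difference gradient for brevity. The tiny hologram size, the squared-error cost, and the numerical gradient are simplifications; the paper relies on carefully designed cost functions and analytic gradients.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
N = 16                                        # hologram side length (kept tiny for speed)
target = np.zeros((N, N))
target[4:12:4, 4:12:4] = 1.0                  # a few bright trap spots as target intensity
target /= target.sum()

def cost(phase_flat):
    """Squared mismatch between normalised far-field intensity and the target pattern."""
    phase = phase_flat.reshape(N, N)
    far_field = np.fft.fft2(np.exp(1j * phase)) / N
    intensity = np.abs(far_field) ** 2
    intensity /= intensity.sum()
    return float(np.sum((intensity - target) ** 2))

phi0 = 2 * np.pi * rng.random(N * N)          # random initial phase pattern
res = minimize(cost, phi0, method='CG', options={'maxiter': 30})
print("initial cost:", round(cost(phi0), 6), "-> optimised cost:", round(res.fun, 6))
```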
Cost inefficiency in Washington hospitals: a stochastic frontier approach using panel data.
Li, T; Rosenman, R
2001-06-01
We analyze a sample of Washington State hospitals with a stochastic frontier panel data model, specifying the cost function as a generalized Leontief function which, according to a Hausman test, performs better in this case than the translog form. A one-stage FGLS estimation procedure which directly models the inefficiency effects improves the efficiency of our estimates. We find that hospitals with higher casemix indices or more beds are less efficient while for-profit hospitals and those with higher proportion of Medicare patient days are more efficient. Relative to the most efficient hospital, the average hospital is only about 67% efficient.
NASA Technical Reports Server (NTRS)
Klein, V.
1980-01-01
A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.
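A toy frequency-domain output-error fit is sketched below for a first-order system: time histories are transformed with the FFT and the parameters are chosen to minimize the misfit between the measured and modelled output spectra. The first-order model, Euler simulation, and noise level are assumptions; spectral leakage and discretization mean the recovered parameters are only approximately equal to the true ones.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
dt, T = 0.05, 20.0
t = np.arange(0.0, T, dt)
a_true, b_true = -1.5, 2.0                    # stable first-order dynamics (toy stand-in)

# simulate x_dot = a x + b u with a random input, then add measurement noise
u = rng.standard_normal(t.size)
x = np.zeros(t.size)
for k in range(t.size - 1):
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])
y = x + 0.05 * rng.standard_normal(t.size)

# transform time histories to the frequency domain
U, Y = np.fft.rfft(u), np.fft.rfft(y)
omega = 2 * np.pi * np.fft.rfftfreq(t.size, d=dt)

def cost(theta):
    """Frequency-domain output-error cost: misfit between measured and modelled output spectra."""
    a, b = theta
    G = b / (1j * omega - a)                  # first-order frequency response
    return float(np.sum(np.abs(Y - G * U) ** 2))

res = minimize(cost, x0=np.array([-1.0, 1.0]), method='Nelder-Mead')
print("estimated (a, b):", np.round(res.x, 2), "true:", (a_true, b_true))
```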
A new approach for low-cost noninvasive detection of asymptomatic heart disease at rest.
DeMarzo, Arthur P; Calvin, James E
2007-01-01
It would be useful to have an inexpensive, noninvasive point-of-care test for early detection of asymptomatic heart disease. This study used impedance cardiography (ICG) in a new way to assess heart function that did not use stroke volume or cardiac output. There is a model of the ICG dZ/dt waveform that may be used as a template to represent normal heart function. The hypothesis was that a dZ/dt waveform which deviates from that template should indicate heart dysfunction and therefore heart disease. The objective was to assess the accuracy of this new ICG approach, using echocardiography as the standard. Thirty-four outpatients undergoing echocardiographic testing were tested by ICG while sitting upright and supine. All patients had no symptoms or history of a structural or functional heart disorder. Echocardiographic testing showed 17 patients with abnormalities and 17 as normal. ICG testing yielded 16 true positives for heart dysfunction with 1 false negative (sensitivity = 94%) and 17 true negatives with no false positives (specificity = 100%). Considering that the cost, technical skill, and time required for this ICG test are comparable to those of an electrocardiograph, this new approach has potential as a point-of-care screening test for asymptomatic heart disease.
National law enforcement telecommunications network
NASA Technical Reports Server (NTRS)
Reilly, N. B.; Garrison, G. W.; Sohn, R. L.; Gallop, D. L.; Goldstein, B. L.
1975-01-01
Alternative approaches are analyzed for a National Law Enforcement Telecommunications Network (NALECOM) designed to service all state-to-state and state-to-national criminal justice communications traffic needs in the United States. Network topology options were analyzed, and equipment and personnel requirements for each option were defined in accordance with NALECOM functional specifications and design guidelines. Evaluation criteria were developed and applied to each of the options, leading to specific conclusions. Detailed treatments of methods for determining traffic requirements, communication line costs, switcher configurations and costs, microwave costs, satellite system configurations and costs, facilities, operations and engineering costs, network delay analysis and network availability analysis are presented. It is concluded that a single regional switcher configuration is the optimum choice based on cost and technical factors. A two-region configuration is competitive. Multiple-region configurations are less competitive due to increasing costs without attendant benefits.
["Activity based costing" in radiology].
Klose, K J; Böttcher, J
2002-05-01
The introduction of diagnosis related groups (G-DRG) for reimbursement of hospital services in Germany demands a reconsideration of the utilization of radiological products and the costs related to them. Traditional cost accounting, as an approach to internal, department-related budgets, is compared with the accounting method of activity-based costing (ABC). The steps necessary to implement ABC in radiology are developed. The introduction of a process-oriented cost analysis is feasible for radiology departments. ABC plays a central role in the set-up of decentralized controlling functions within these institutions. The implementation appears to be a strategic challenge for department managers seeking more appropriate data for sound enterprise decisions. The necessary steps of process analysis can be used for other purposes (certification, digital migration) as well.
NASA Astrophysics Data System (ADS)
Dimitropoulos, Dimitrios
Electricity industries are experiencing upward cost pressures in many parts of the world. Chapter 1 of this thesis studies the production technology of electricity distributors. Although production and cost functions are mathematical duals, practitioners typically estimate only one or the other. This chapter proposes an approach for joint estimation of production and costs. Combining such quantity and price data has the effect of adding statistical information without introducing additional parameters into the model. We define a GMM estimator that produces internally consistent parameter estimates for both the production function and the cost function. We consider a multi-output framework, and show how to account for the presence of certain types of simultaneity and measurement error. The methodology is applied to data on 73 Ontario distributors for the period 2002-2012. As expected, the joint model results in a substantial improvement in the precision of parameter estimates. Chapter 2 focuses on productivity trends in electricity distribution. We apply two methodologies for estimating productivity growth---an index based approach, and an econometric cost based approach---to our data on the 73 Ontario distributors for the period 2002 to 2012. The resulting productivity growth estimates are approximately -1% per year, suggesting a reversal of the positive estimates that have generally been reported in previous periods. We implement flexible semi-parametric variants to assess the robustness of these conclusions and discuss the use of such statistical analyses for calibrating productivity and relative efficiencies within a price-cap framework. In chapter 3, I turn to the historically important problem of vertical contractual relations. While the existing literature has established that resale price maintenance is sufficient to coordinate the distribution network of a manufacturer, this chapter asks whether such vertical restraints are necessary. Specifically, I study the vertical contracting problem between an upstream manufacturer and its downstream distributors in a setting where spot market contracts fail, but resale price maintenance cannot be appealed to due to legal prohibition. I show that a bonus scheme based on retail revenues is sufficient to provide incentives to decentralized retailers to elicit the correct levels of both price and service.
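To make the idea of "adding moments without adding parameters" concrete, here is a minimal, hedged sketch in which a two-input Cobb-Douglas technology shares its parameters with its dual cost function, and both sets of moment conditions are stacked into one GMM objective. The data are synthetic, the instruments are assumed exogenous, and the simultaneity and measurement-error corrections described in the thesis are ignored; all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Production: ln y = a + b1*ln x1 + b2*ln x2
# Dual cost:  ln C = c0 + (1/r)*ln y + (b1/r)*ln w1 + (b2/r)*ln w2, with r = b1 + b2
rng = np.random.default_rng(0)
n = 500
lx1, lx2 = rng.normal(size=n), rng.normal(size=n)
lw1, lw2 = rng.normal(size=n), rng.normal(size=n)
a, b1, b2, c0 = 1.0, 0.6, 0.3, 0.5
r = b1 + b2
ly = a + b1 * lx1 + b2 * lx2 + 0.05 * rng.normal(size=n)
lC = c0 + ly / r + (b1 / r) * lw1 + (b2 / r) * lw2 + 0.05 * rng.normal(size=n)
Z = np.column_stack([np.ones(n), lx1, lx2, lw1, lw2])   # instruments (assumed exogenous here)

def gmm_objective(theta):
    a_, b1_, b2_, c0_ = theta
    r_ = b1_ + b2_
    e_prod = ly - (a_ + b1_ * lx1 + b2_ * lx2)                          # production residual
    e_cost = lC - (c0_ + ly / r_ + (b1_ / r_) * lw1 + (b2_ / r_) * lw2)  # cost residual (same b1, b2)
    g = np.concatenate([Z.T @ e_prod, Z.T @ e_cost]) / n                 # stacked moments E[Z'e] = 0
    return g @ g                                                         # identity weighting (first step)

est = minimize(gmm_objective, x0=np.array([0.0, 0.5, 0.5, 0.0]), method="Nelder-Mead")
print("estimates of a, b1, b2, c0:", est.x)
```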
Benefits and costs of low thrust propulsion systems
NASA Technical Reports Server (NTRS)
Robertson, R. I.; Rose, L. J.; Maloy, J. E.
1983-01-01
The results of cost/benefit analyses of three chemical propulsion systems that are candidates for transferring high density, low volume STS payloads from LEO to GEO are reported. Separate algorithms were developed for the benefits and costs of primary propulsion systems (PPS) as functions of the required thrust levels. The life cycle costs of each system were computed from developmental, production, and deployment costs. A weighted criteria rating approach was taken for the benefits, with each benefit assigned a value commensurate with its relative worth to the overall system. Support costs were included in the cost modeling. Reference missions from NASA, commercial, and DoD catalog payloads were examined. The program was concluded to be reliable and flexible for evaluating the benefits and costs of launch and orbit transfer for any catalog mission, with the most beneficial PPS being a dedicated low thrust configuration using the RL-10 system.
Finite-fault source inversion using adjoint methods in 3-D heterogeneous media
NASA Astrophysics Data System (ADS)
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-07-01
Accounting for lateral heterogeneities in the 3-D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1-D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3-D heterogeneity in source inversion involves pre-computing 3-D Green's functions, which requires a number of 3-D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense data sets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively with a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3-D heterogeneous velocity model. The velocity model comprises a uniform background and a 3-D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3-D velocity model are performed for two different station configurations, a dense and a sparse network with 1 and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3-D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3-D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.
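The outer loop of such an adjoint-based inversion is conceptually simple; the sketch below abstracts the forward and adjoint wave simulations behind placeholder functions (a linear toy operator stands in for the 3-D solver) and shows how the adjoint-supplied gradient drives an iterative minimization of the least-squares misfit. Nothing here reproduces the authors' solver; it only illustrates the iteration structure.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(200, 50))          # toy "wave propagation" operator (placeholder)
m_true = rng.normal(size=50)            # true slip-velocity model (placeholder)
d_obs = G @ m_true                      # observed records

def forward(m):
    # Placeholder: synthetic seismograms predicted by model m (a 3-D simulation in practice).
    return G @ m

def adjoint_gradient(residual):
    # Placeholder: the adjoint simulation returns the gradient of the misfit w.r.t. m.
    return G.T @ residual

m = np.zeros(50)
step = 1.0 / np.linalg.norm(G, 2) ** 2  # safe fixed step for the toy problem
for it in range(200):
    r = forward(m) - d_obs
    cost = 0.5 * np.dot(r, r)           # least-squares misfit (the cost function)
    m -= step * adjoint_gradient(r)     # steepest descent; CG or L-BFGS would be used in practice

print("final misfit:", 0.5 * np.linalg.norm(forward(m) - d_obs) ** 2)
```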
A cost-efficiency and health benefit approach to improve urban air quality.
Miranda, A I; Ferreira, J; Silveira, C; Relvas, H; Duque, L; Roebeling, P; Lopes, M; Costa, S; Monteiro, A; Gama, C; Sá, E; Borrego, C; Teixeira, J P
2016-11-01
When ambient air quality standards established in the EU Directive 2008/50/EC are exceeded, Member States are obliged to develop and implement Air Quality Plans (AQP) to improve air quality and health. Notwithstanding the achievements in emission reductions and air quality improvement, additional efforts need to be undertaken to improve air quality in a sustainable way - i.e. through a cost-efficiency approach. This work was developed in the scope of the recently concluded MAPLIA project "Moving from Air Pollution to Local Integrated Assessment", and focuses on the definition and assessment of emission abatement measures and their associated costs, air quality and health impacts and benefits by means of air quality modelling tools, health impact functions and cost-efficiency analysis. The MAPLIA system was applied to the Grande Porto urban area (Portugal), addressing PM10 and NOx as the most important pollutants in the region. Four different measures to reduce PM10 and NOx emissions were defined and characterized in terms of emissions and implementation costs, and combined into 15 emission scenarios, simulated by the TAPM air quality modelling tool. Air pollutant concentration fields were then used to estimate health benefits in terms of avoided costs (external costs), using dose-response health impact functions. Results revealed that, among the 15 scenarios analysed, the scenario including all 4 measures leads to a total net benefit of 0.3 M€ per year. The largest net benefit is obtained for the scenario considering the conversion of 50% of open fire places into heat recovery wood stoves. Although the implementation costs of this measure are high, the benefits outweigh the costs. Research outcomes confirm that the MAPLIA system is useful for policy decision support on air quality improvement strategies, and could be applied to other urban areas where AQP need to be implemented and monitored. Copyright © 2016. Published by Elsevier B.V.
Learning a cost function for microscope image segmentation.
Nilufar, Sharmin; Perkins, Theodore J
2014-01-01
Quantitative analysis of microscopy images is increasingly important in clinical researchers' efforts to unravel the cellular and molecular determinants of disease, and for pathological analysis of tissue samples. Yet, manual segmentation and measurement of cells or other features in images remains the norm in many fields. We report on a new system that aims for robust and accurate semi-automated analysis of microscope images. A user interactively outlines one or more examples of a target object in a training image. We then learn a cost function for detecting more objects of the same type, either in the same or different images. The cost function is incorporated into an active contour model, which can efficiently determine optimal boundaries by dynamic programming. We validate our approach and compare it to some standard alternatives on three different types of microscopic images: light microscopy of blood cells, light microscopy of muscle tissue sections, and electron microscopy cross-sections of axons and their myelin sheaths.
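As an illustration of the "optimal boundary by dynamic programming" step, the sketch below solves a heavily simplified version of the problem: one boundary position is chosen per angular step around a seed point so as to minimize a learned local cost plus a smoothness penalty. The learned cost is replaced by random numbers, and the wrap-around constraint that closes the contour is ignored; both are assumptions for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
A, K = 36, 20                           # angular steps around the object, candidate radii per step
local_cost = rng.random((A, K))         # stand-in for the learned cost function's output
lam = 0.5                               # smoothness weight (assumed)
radii = np.arange(K)

dp = np.zeros((A, K))                   # dp[a, k]: best cost of a boundary ending at radius k
back = np.zeros((A, K), dtype=int)
dp[0] = local_cost[0]
for a in range(1, A):
    # trans[j, k]: cost of coming from radius j at angle a-1 to radius k at angle a
    trans = dp[a - 1][:, None] + lam * np.abs(radii[:, None] - radii[None, :])
    back[a] = np.argmin(trans, axis=0)
    dp[a] = local_cost[a] + np.min(trans, axis=0)

# Backtrack the optimal boundary (one radius index per angle)
path = [int(np.argmin(dp[-1]))]
for a in range(A - 1, 0, -1):
    path.append(int(back[a, path[-1]]))
path.reverse()
print("optimal radius index per angle:", path)
```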
Application of Particle Swarm Optimization in Computer Aided Setup Planning
NASA Astrophysics Data System (ADS)
Kafashi, Sajad; Shakeri, Mohsen; Abedini, Vahid
2011-01-01
Recent research aims to integrate computer aided design (CAD) and computer aided manufacturing (CAM) environments. The role of process planning is to convert the design specification into manufacturing instructions. Setup planning has a basic role in computer aided process planning (CAPP) and significantly affects the overall cost and quality of the machined part. This research focuses on the automatic generation of setups and on finding the best feasible setup plan. To computerize the setup planning process, the proposed system performs three major steps: (a) extraction of the machining data of the part, (b) analysis and generation of all possible setups, and (c) optimization to reach the best setup plan based on cost functions. Considering workshop resources such as the machine tool, cutter and fixture, all feasible setups can be generated. The problem is then subjected to technological constraints such as tool approach direction (TAD), tolerance relationships and feature precedence relationships, so that the approach remains realistic and practical. The optimal setup plan results from applying the particle swarm optimization (PSO) algorithm to the system using cost functions. A real sample part is used to demonstrate the performance and productivity of the system.
NASA Astrophysics Data System (ADS)
Lee, Deuk Yeon; Choi, Jae Hong; Shin, Jung Chul; Jung, Man Ki; Song, Seok Kyun; Suh, Jung Ki; Lee, Chang Young
2018-06-01
Compared with wet processes, dry functionalization using plasma is fast, scalable, solvent-free, and thus presents a promising approach for grafting functional groups to powdery nanomaterials. Previous approaches, however, had difficulties in maintaining an intimate sample-plasma contact and achieving uniform functionalization. Here, we demonstrate a plasma reactor equipped with a porous filter electrode that increases both homogeneity and degree of functionalization by capturing and circulating powdery carbon nanotubes (CNTs) via vacuum and gas blowing. Spectroscopic measurements verify that treatment with O2/air plasma generates oxygen-containing groups on the surface of CNTs, with the degree of functionalization readily controlled by varying the circulation number. Gas sensors fabricated using the plasma-treated CNTs confirm alteration of molecular adsorption on the surface of CNTs. A sequential treatment with NH3 plasma following the oxidation pre-treatment results in the functionalization with nitrogen species of up to 3.2 wt%. Our approach requiring no organic solvents not only is cost-effective and environmentally friendly, but also serves as a versatile tool that applies to other powdery micro or nanoscale materials for controlled modification of their surfaces.
Mixed kernel function support vector regression for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the Sobol indices can be estimated by post-processing the coefficients of the SVR meta-model. The MKF combines an orthogonal polynomial kernel function with a Gaussian radial basis kernel function, so it possesses both the global characteristics of the polynomial kernel and the local characteristics of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Its performance is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
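A hedged sketch of the mixed-kernel idea is given below: a convex combination of a standard polynomial kernel (standing in for the paper's orthogonal-polynomial kernel) and a Gaussian RBF kernel is passed to scikit-learn's SVR as a callable Gram-matrix kernel, and the resulting meta-model is fit to a common GSA test function. Extracting the Sobol indices from the SVR coefficients, as the paper proposes, is not reproduced here; weights and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVR

def mixed_kernel(X, Y, rho=0.5, degree=3, gamma=1.0):
    poly = (1.0 + X @ Y.T) ** degree                          # global behaviour
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * sq_dists)                           # local behaviour
    return rho * poly + (1.0 - rho) * rbf                     # mixed kernel Gram matrix

# Toy model: a common GSA test case, g(x) = sin(x1) + 7*sin(x2)^2
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(200, 2))
y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2

svr = SVR(kernel=mixed_kernel, C=10.0, epsilon=0.01).fit(X, y)
Xt = rng.uniform(-np.pi, np.pi, size=(50, 2))
yt = np.sin(Xt[:, 0]) + 7.0 * np.sin(Xt[:, 1]) ** 2
print("meta-model R^2 on held-out points:", svr.score(Xt, yt))
```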
Alternate avionics system study and phase B extension
NASA Technical Reports Server (NTRS)
1971-01-01
Results are presented of alternate avionics system studies for the space shuttle aimed at reducing the cost of vehicle avionics without incurring major offsetting costs on the ground. A comprehensive summary is provided of all configurations defined since the completion of the basic Phase B contract, and a complete description of the optimized avionics baseline is given. In the new baseline, inflight redundancy management is performed onboard without ground support, and utilization of off-the-shelf hardware reduces the cost to a figure substantially lower than that of the Phase B baseline. The only functional capability sacrificed in the new approach is automatic landing.
Choosing Models for Health Care Cost Analyses: Issues of Nonlinearity and Endogeneity
Garrido, Melissa M; Deb, Partha; Burgess, James F; Penrod, Joan D
2012-01-01
Objective: To compare methods of analyzing endogenous treatment effect models for nonlinear outcomes and illustrate the impact of model specification on estimates of treatment effects such as health care costs. Data Sources: Secondary data on cost and utilization for inpatients hospitalized in five Veterans Affairs acute care facilities in 2005–2006. Study Design: We compare results from analyses with full information maximum simulated likelihood (FIMSL); control function (CF) approaches employing different types and functional forms for the residuals, including the special case of two-stage residual inclusion; and two-stage least squares (2SLS). As an example, we examine the effect of an inpatient palliative care (PC) consultation on direct costs of care per day. Data Collection/Extraction Methods: We analyzed data for 3,389 inpatients with one or more life-limiting diseases. Principal Findings: The distribution of average treatment effects on the treated and local average treatment effects of a PC consultation depended on model specification. CF and FIMSL estimates were more similar to each other than to 2SLS estimates. CF estimates were sensitive to choice and functional form of residual. Conclusions: When modeling cost or other nonlinear data with endogeneity, one should be aware of the impact of model specification and treatment effect choice on results. PMID:22524165
Hofmann, Douglas C.; Polit-Casillas, Raul; Roberts, Scott N.; Borgonia, John-Paul; Dillon, Robert P.; Hilgemann, Evan; Kolodziejska, Joanna; Montemayor, Lauren; Suh, Jong-ook; Hoff, Andrew; Carpenter, Kalind; Parness, Aaron; Johnson, William L.; Kennett, Andrew; Wilcox, Brian
2016-01-01
The use of bulk metallic glasses (BMGs) as the flexspline in strain wave gears (SWGs), also known as harmonic drives, is presented. SWGs are unique, ultra-precision gearboxes that function through the elastic flexing of a thin-walled cup, called a flexspline. The current research demonstrates that BMGs can be cast at extremely low cost relative to machining and can be implemented into SWGs as an alternative to steel. This approach may significantly reduce the cost of SWGs, enabling lower-cost robotics. The attractive properties of BMGs, such as hardness, elastic limit and yield strength, may also be suitable for extreme environment applications in spacecraft. PMID:27883054
Vasilyev, K N
2013-01-01
When developing new software products and adapting existing software, project leaders have to decide which functionalities to keep, adapt or develop. They have to consider that the cost of making errors during the specification phase is extremely high. In this paper a formalised approach is proposed that considers the main criteria for selecting new software functions. The application of this approach minimises the chances of making errors in selecting the functions to apply. Based on work on software development and support projects in the area of water resources and economic evaluation of flood damage at CH2M HILL (the developers of the flood modelling package ISIS), the author has defined seven criteria for selecting functions to be included in a software product. The approach is based on the evaluation of the relative significance of the functions to be included in the software product. Evaluation is achieved by considering each criterion and its weighting coefficient in turn and applying the method of normalisation. This paper includes a description of this new approach and examples of its application in the development of new software products in the area of water resources management.
NASA Technical Reports Server (NTRS)
Milman, M. H.
1985-01-01
A factorization approach is presented for deriving approximations to the optimal feedback gain for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the feedback kernels.
A Science Data System Approach for the SMAP Mission
NASA Technical Reports Server (NTRS)
Woollard, David; Kwoun, Oh-ig; Bicknell, Tom; West, Richard; Leung, Kon
2009-01-01
Though Science Data System (SDS) development has not traditionally been part of the mission concept phase, lessons learned and study of past Earth science missions indicate that SDS functionality can greatly benefit algorithm developers in all mission phases. We have proposed a SDS approach for the SMAP Mission that incorporates early support for an algorithm testbed, allowing scientists to develop codes and seamlessly integrate them into the operational SDS. This approach will greatly reduce both the costs and risks involved in algorithm transitioning and SDS development.
Skill Mix, Experience, and Readiness.
1983-10-01
...percent less manpower at a life-cycle cost savings of 12 percent. The squadron-effectiveness analysis balances the cost and effectiveness of people...
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a free-distribution setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference of the threshold estimates is based on approximate analytically standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
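A minimal sketch of the normal-distribution setting follows: marker values for non-diseased and diseased subjects are Gaussian, higher values indicate disease, and the threshold is chosen to minimize an expected decision cost. The costs, prevalence and distribution parameters are illustrative assumptions, and the bootstrap treatment of sampling uncertainty is only indicated in a comment.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

mu0, sd0 = 0.0, 1.0          # non-diseased marker distribution (assumed)
mu1, sd1 = 1.5, 1.2          # diseased marker distribution (assumed)
prev = 0.2                   # disease prevalence (assumed)
c_fp, c_fn = 1.0, 5.0        # decision costs of a false positive / false negative (assumed)

def expected_cost(thr):
    fpr = 1.0 - norm.cdf(thr, mu0, sd0)      # healthy subjects classified as diseased
    fnr = norm.cdf(thr, mu1, sd1)            # diseased subjects classified as healthy
    return c_fp * (1 - prev) * fpr + c_fn * prev * fnr

opt = minimize_scalar(expected_cost, bounds=(mu0 - 3 * sd0, mu1 + 3 * sd1), method="bounded")
print("optimal threshold:", opt.x, "expected cost:", opt.fun)

# Sampling uncertainty, as in the paper, could be assessed by bootstrapping the estimates
# of (mu, sd) from data and re-optimizing the threshold on each resample.
```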
Thermal Environment for Classrooms. Central System Approach to Air Conditioning.
ERIC Educational Resources Information Center
Triechler, Walter W.
This speech compares the air conditioning requirements of high-rise office buildings with those of large centralized school complexes. A description of one particular air conditioning system provides information about the system's arrangement, functions, performance efficiency, and cost effectiveness. (MLF)
Linearized self-consistent GW approach satisfying the Ward identity
NASA Astrophysics Data System (ADS)
Kuwahara, Riichi; Ohno, Kaoru
2014-09-01
We propose a linearized self-consistent GW approach satisfying the Ward identity. The vertex function derived from the Ward-Takahashi identity in the limit of q =0 and ω -ω'=0 is included in the self-energy and the polarization function as a consequence of the linearization of the quasiparticle equation. Due to the energy dependence of the self-energy, the Hamiltonian is a non-Hermitian operator and quasiparticle states are nonorthonormal and linearly dependent. However, the linearized quasiparticle states recover orthonormality and fulfill the completeness condition. This approach is very efficient, and the resulting quasiparticle energies are greatly improved compared to the nonlinearized self-consistent GW approach, although its computational cost is not much increased. We show the results for atoms and dimers of Li and Na compared with other approaches. We also propose convenient ways to calculate the Luttinger-Ward functional Φ based on a plasmon-pole model and calculate the total energy for the ground state. As a result, we conclude that the linearization improves overall behaviors in the self-consistent GW approach.
Multimodal Diffuse Optical Imaging
NASA Astrophysics Data System (ADS)
Intes, Xavier; Venugopal, Vivek; Chen, Jin; Azar, Fred S.
Diffuse optical imaging, particularly diffuse optical tomography (DOT), is an emerging clinical modality capable of providing unique functional information, at a relatively low cost, and with nonionizing radiation. Multimodal diffuse optical imaging has enabled a synergistic combination of functional and anatomical information: the quality of DOT reconstructions has been significantly improved by incorporating the structural information derived by the combined anatomical modality. In this chapter, we will review the basic principles of diffuse optical imaging, including instrumentation and reconstruction algorithm design. We will also discuss the approaches for multimodal imaging strategies that integrate DOI with clinically established modalities. The merit of the multimodal imaging approaches is demonstrated in the context of optical mammography, but the techniques described herein can be translated to other clinical scenarios such as brain functional imaging or muscle functional imaging.
Aerodynamic design and optimization in one shot
NASA Technical Reports Server (NTRS)
Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.
1992-01-01
This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, but restricting work on a design variable only to grids on which their changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.
Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Xiao; Dong, Jin; Djouadi, Seddik M
2015-01-01
The key goal in energy efficient buildings is to reduce energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ a constrained Stochastic Linear Quadratic Control (cSLQC) by minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.
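For orientation, the sketch below solves a certainty-equivalent, finite-horizon version of the constrained LQ problem for a one-state toy thermal model as a convex program in CVXPY. It is an assumption-laden simplification: the Gaussian disturbance, the probabilistic constraint and the SDP reformulation used in the paper are not reproduced, and all model numbers are illustrative.

```python
import cvxpy as cp

# Toy room-temperature deviation dynamics x[t+1] = a*x[t] + b*u[t] (assumed values)
a, b = 0.95, 0.05
T, x0 = 24, 6.0              # horizon (hours) and initial deviation from the setpoint
Q, R = 1.0, 0.1              # comfort vs. energy weights (assumed)

x = cp.Variable(T + 1)
u = cp.Variable(T)
cost = 0
constraints = [x[0] == x0]
for t in range(T):
    cost += Q * cp.square(x[t]) + R * cp.square(u[t])      # quadratic comfort + power cost
    constraints += [x[t + 1] == a * x[t] + b * u[t],
                    cp.abs(u[t]) <= 10.0]                   # actuator (power) limit

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("optimal cost:", prob.value)
print("first control action:", u.value[0])
```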
Cost effectiveness of conventional versus LANDSAT use data for hydrologic modeling
NASA Technical Reports Server (NTRS)
George, T. S.; Taylor, R. S.
1982-01-01
Six case studies were analyzed to investigate the cost effectiveness of using land use data obtained from LANDSAT as opposed to conventionally obtained data. A procedure was developed to determine the relative effectiveness of the two alternative means of acquiring data for hydrological modelling. The cost of conventionally acquired data ranged between $3,000 and $16,000 for the six test basins. Information based on LANDSAT imagery cost between $2,000 and $5,000. Results of the effectiveness analysis show that the differences between the two methods are insignificant. From the cost comparison and the fact that each method, conventional and LANDSAT, is shown to be equally effective in developing land use data for hydrologic studies, the cost effectiveness of the conventional or LANDSAT method is found to be a function of basin size for the six test watersheds analyzed. The LANDSAT approach is cost effective for areas of more than 10 square miles.
Liu, Lei; Wang, Zhanshan; Zhang, Huaguang
2018-04-01
This paper is concerned with the robust optimal tracking control strategy for a class of nonlinear multi-input multi-output discrete-time systems with unknown uncertainty via adaptive critic design (ACD) scheme. The main purpose is to establish an adaptive actor-critic control method, so that the cost function in the procedure of dealing with uncertainty is minimum and the closed-loop system is stable. Based on the neural network approximator, an action network is applied to generate the optimal control signal and a critic network is used to approximate the cost function, respectively. In contrast to the previous methods, the main features of this paper are: 1) the ACD scheme is integrated into the controllers to cope with the uncertainty and 2) a novel cost function, which is not in quadric form, is proposed so that the total cost in the design procedure is reduced. It is proved that the optimal control signals and the tracking errors are uniformly ultimately bounded even when the uncertainty exists. Finally, a numerical simulation is developed to show the effectiveness of the present approach.
SMART: The Future of Spaceflight Avionics
NASA Technical Reports Server (NTRS)
Alhorn, Dean C.; Howard, David E.
2010-01-01
A novel avionics approach is necessary to meet the future needs of low cost space and lunar missions that require low mass and low power electronics. The current state of the art for avionics systems is centralized electronic units that perform the required spacecraft functions. These electronic units are usually custom-designed for each application, and the approach compels avionics designers to have in-depth system knowledge before design can commence. The overall design, development, test and evaluation (DDT&E) cycle for this conventional approach requires long delivery times for space flight electronics and is very expensive. The Small Multi-purpose Advanced Reconfigurable Technology (SMART) concept is currently being developed to overcome the limitations of traditional avionics design. The SMART concept is based upon two multi-functional modules that can be reconfigured to drive and sense a variety of mechanical and electrical components. The SMART units are key to a distributed avionics architecture whereby the modules are located close to or right at the desired application point. The drive module, SMART-D, receives commands from the main computer and controls the spacecraft mechanisms and devices with localized feedback. The sensor module, SMART-S, is used to read the environmental sensors and offload local limit checking from the main computer. There are numerous benefits that are realized by implementing the SMART system. Localized sensor signal conditioning electronics reduces signal loss and overall wiring mass. Localized drive electronics increase control bandwidth and minimize time lags for critical functions. These benefits in turn reduce the main processor's overhead functions. Since SMART units are standard flight qualified units, DDT&E is reduced and system design can commence much earlier in the design cycle. Increased production scale lowers individual piece part cost, and using standard modules also reduces non-recurring costs. The benefit list continues, but the overall message is already evident: the SMART concept is an evolution in spacecraft avionics. SMART devices have the potential to change the design paradigm for future satellites, spacecraft and even commercial applications.
The Economic Cost of Communicable Disease Surveillance in Local Public Health Agencies.
Atherly, Adam; Whittington, Melanie; VanRaemdonck, Lisa; Lampe, Sarah
2017-12-01
We identify economic costs associated with communicable disease (CD) monitoring/surveillance in Colorado local public health agencies and identify possible economies of scale. Data were collected via a survey of local public health employees engaged in CD work. Survey respondents logged time spent on CD surveillance for 2-week periods in the spring of 2014 and fall of 2014. Forty-three of the 54 local public health agencies in Colorado participated. We used a microcosting approach. We estimated a statistical cost function using cost as a function of the number of reported investigable diseases during the matched 2-week period. We also controlled for other independent variables, including case mix, characteristics of the agency, the community, and services provided. Data were collected from a microcosting survey using time logs. Costs increased at a decreasing rate as cases increased, with both cases (β = 431.5, p < .001) and cases squared (β = -3.62, p = .05) statistically significant. The results of the model suggest economies of scale. Cost per unit is estimated to be one-third lower for high-volume agencies as compared to low-volume agencies. Cost savings could potentially be achieved if smaller agencies shared services. © Health Research and Educational Trust.
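The shape of the estimated cost function can be illustrated with a small sketch: agency cost is regressed on cases and cases squared, and the fitted curve is used to show cost per case falling with volume, the economies-of-scale signature described above. The synthetic data simply embed coefficients similar in spirit to those reported (cases: +431.5, cases squared: -3.62); they are not the survey data.

```python
import numpy as np

rng = np.random.default_rng(0)
cases = rng.integers(1, 60, size=43).astype(float)                     # 43 agencies, as in the study
cost = 431.5 * cases - 3.62 * cases**2 + rng.normal(0, 500, size=43)   # synthetic, illustrative only

# OLS fit of cost on cases and cases squared
X = np.column_stack([np.ones_like(cases), cases, cases**2])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print("intercept, cases, cases^2:", beta)

# Average cost per case falls as volume rises (economies of scale)
for n in (5.0, 20.0, 50.0):
    total = beta[0] + beta[1] * n + beta[2] * n**2
    print(f"{int(n):2d} cases -> cost per case {total / n:8.1f}")
```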
Method to fabricate functionalized conical nanopores
Small, Leo J.; Spoerke, Erik David; Wheeler, David R.
2016-07-12
A pressure-based chemical etch method is used to shape polymer nanopores into cones. By varying the pressure, the pore tip diameter can be controlled, while the pore base diameter is largely unaffected. The method provides an easy, low-cost approach for conically etching high density nanopores.
Alternate Waveforms for a Low-Cost Civil Global Positioning System Receiver
DOT National Transportation Integrated Search
1980-06-01
This report examines the technical feasibility of alternate waveforms to perform the GPS functions and to result in less complex receivers than is possible with the GPS C/A waveform. The approach taken to accomplish this objective is (a) to identify,...
ERIC Educational Resources Information Center
Picus, Lawrence O.
2001-01-01
Recent court decisions and legislation show that an adequate school finance formula must provide sufficient money so public schools can teach all students. Four different approaches for determining school finance adequacy are: (1) determining the economic cost of various educational functions; (2) linking spending to performance benchmarks; (3)…
Reserve valuation in electric power systems
NASA Astrophysics Data System (ADS)
Ruiz, Pablo Ariel
Operational reliability is provided in part by scheduling capacity in excess of the load forecast. This reserve capacity balances the uncertain power demand with the supply in real time and provides for equipment outages. Traditionally, reserve scheduling has been ensured by enforcing reserve requirements in the operations planning. An alternate approach is to employ a stochastic formulation, which allows the explicit modeling of the sources of uncertainty. This thesis compares stochastic and reserve methods and evaluates the benefits of a combined approach for the efficient management of uncertainty in the unit commitment problem. Numerical studies show that the unit commitment solutions obtained for the combined approach are robust and superior with respect to the traditional approach. These robust solutions are especially valuable in areas with a high proportion of wind power, as their built-in flexibility allows the dispatch of practically all the available wind power while minimizing the costs of operation. The scheduled reserve has an economic value since it reduces the outage costs. In several electricity markets, reserve demand functions have been implemented to take into account the value of reserve in the market clearing process. These often take the form of a step-down function at the reserve requirement level, and as such they may not appropriately represent the reserve value. The value of reserve is impacted by the reliability, dynamic and stochastic characteristics of system components, the system operation policies, and the economic aspects such as the risk preferences of the demand. In this thesis, these aspects are taken into account to approximate the reserve value and construct reserve demand functions. Illustrative examples show that the demand functions constructed have similarities with those implemented in some markets.
PET/CT scanners: a hardware approach to image fusion.
Townsend, David W; Beyer, Thomas; Blodgett, Todd M
2003-07-01
New technology that combines positron emission tomography with x-ray computed tomography (PET/CT) is available from all major vendors of PET imaging equipment: CTI, Siemens, GE, Philips. Although not all vendors have made the same design choices as those described in this review, all have in common that their high-performance design places a commercial CT scanner in tandem with a commercial PET scanner. The level of physical integration is actually less than that of the original prototype design where the CT and PET components were mounted on the same rotating support. There will undoubtedly be a demand for PET/CT technology with a greater level of integration, and at a reduced cost. This may be achieved through the design of a scanner specifically for combined anatomical and functional imaging, rather than a design combining separate CT and PET scanners, as in the current approaches. By avoiding the duplication of data acquisition and image reconstruction functions, for example, a more integrated design should also allow cost savings over current commercial PET/CT scanners. The goal is then to design and build a device specifically for imaging the function and anatomy of cancer in the most effective way, without conceptualizing it as combined PET and CT. The development of devices specifically for imaging a particular disease (e.g., cancer) differs from the conventional approach of, for example, an all-purpose anatomical imaging device such as a CT scanner. This new concept targets more of a disease management approach rather than the usual division into the medical specialties of radiology (anatomical imaging) and nuclear medicine (functional imaging). Copyright 2003 Elsevier Inc. All rights reserved.
Classical statistical mechanics approach to multipartite entanglement
NASA Astrophysics Data System (ADS)
Facchi, P.; Florio, G.; Marzolino, U.; Parisi, G.; Pascazio, S.
2010-06-01
We characterize the multipartite entanglement of a system of n qubits in terms of the distribution function of the bipartite purity over balanced bipartitions. We search for maximally multipartite entangled states, whose average purity is minimal, and recast this optimization problem into a problem of statistical mechanics, by introducing a cost function, a fictitious temperature and a partition function. By investigating the high-temperature expansion, we obtain the first three moments of the distribution. We find that the problem exhibits frustration.
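In this statistical-mechanics mapping, the average bipartite purity plays the role of an energy-like cost function; schematically (generic notation, assumed rather than taken from the paper):

    H(\psi) = \binom{n}{n/2}^{-1} \sum_{|A| = n/2} \pi_A(\psi), \qquad Z(\beta) = \int \mathrm{d}\mu(\psi)\, e^{-\beta H(\psi)},

where \pi_A(\psi) is the purity of the reduced state on the balanced bipartition (A, \bar{A}), \mu is the unitarily invariant measure over pure states, and \beta is the fictitious inverse temperature. Expanding \ln Z(\beta) in powers of \beta (the high-temperature expansion) yields the cumulants, and hence the first moments, of the purity distribution.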
NASA Astrophysics Data System (ADS)
Khalilpourazari, Soheyl; Khalilpourazary, Saman
2017-05-01
In this article a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finished conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between conflicting objective functions which helps the decision maker to select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of change in the grain size, grinding ratio, feed rate, labour cost per hour, length of workpiece, wheel diameter and downfeed of grinding parameters on each value of the objective function.
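For reference, the weighted Tchebycheff scalarization underlying this approach can be written in its standard form (generic notation, not copied from the article) as

    \min_{x \in X} \; \max_{i = 1,\dots,k} \; w_i \left| f_i(x) - z_i^{*} \right|,

where f_1,\dots,f_k are the objectives (here total time, cost, production rate and surface finish quality), z^{*} is the ideal point obtained by optimizing each objective separately, and the weights w_i \ge 0 sum to one. The lexicographic step breaks ties among optima, for instance by a secondary minimization of the summed deviations, so that only Pareto-optimal solutions are returned; sweeping the weights traces out the Pareto front.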
Liu, Derong; Wang, Ding; Li, Hongliang
2014-02-01
In this paper, using a neural-network-based online learning optimal control approach, a novel decentralized control strategy is developed to stabilize a class of continuous-time nonlinear interconnected large-scale systems. First, optimal controllers of the isolated subsystems are designed with cost functions reflecting the bounds of interconnections. Then, it is proven that the decentralized control strategy of the overall system can be established by adding appropriate feedback gains to the optimal control policies of the isolated subsystems. Next, an online policy iteration algorithm is presented to solve the Hamilton-Jacobi-Bellman equations related to the optimal control problem. Through constructing a set of critic neural networks, the cost functions can be obtained approximately, followed by the control policies. Furthermore, the dynamics of the estimation errors of the critic networks are verified to be uniformly and ultimately bounded. Finally, a simulation example is provided to illustrate the effectiveness of the present decentralized control scheme.
NASA Astrophysics Data System (ADS)
Huang, Dong; Liu, Yangang
2014-12-01
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
Regulator Loss Functions and Hierarchical Modeling for Safety Decision Making.
Hatfield, Laura A; Baugh, Christine M; Azzone, Vanessa; Normand, Sharon-Lise T
2017-07-01
Regulators must act to protect the public when evidence indicates safety problems with medical devices. This requires complex tradeoffs among risks and benefits, which conventional safety surveillance methods do not incorporate. To combine explicit regulator loss functions with statistical evidence on medical device safety signals to improve decision making. In the Hospital Cost and Utilization Project National Inpatient Sample, we select pediatric inpatient admissions and identify adverse medical device events (AMDEs). We fit hierarchical Bayesian models to the annual hospital-level AMDE rates, accounting for patient and hospital characteristics. These models produce expected AMDE rates (a safety target), against which we compare the observed rates in a test year to compute a safety signal. We specify a set of loss functions that quantify the costs and benefits of each action as a function of the safety signal. We integrate the loss functions over the posterior distribution of the safety signal to obtain the posterior (Bayes) risk; the preferred action has the smallest Bayes risk. Using simulation and an analysis of AMDE data, we compare our minimum-risk decisions to a conventional Z score approach for classifying safety signals. The 2 rules produced different actions for nearly half of hospitals (45%). In the simulation, decisions that minimize Bayes risk outperform Z score-based decisions, even when the loss functions or hierarchical models are misspecified. Our method is sensitive to the choice of loss functions; eliciting quantitative inputs to the loss functions from regulators is challenging. A decision-theoretic approach to acting on safety signals is potentially promising but requires careful specification of loss functions in consultation with subject matter experts.
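The decision rule itself is easy to state computationally: average each action's loss over posterior draws of the safety signal and choose the action with the smallest posterior expected loss. The sketch below does exactly that with made-up posterior draws, three illustrative actions and loss functions that are assumptions of this example, not the elicited losses discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for MCMC output: posterior draws of a hospital's safety signal
# (observed-minus-expected AMDE rate), here centred on a moderately elevated value.
signal_draws = rng.normal(loc=0.8, scale=0.5, size=5000)

# Illustrative regulator loss functions, one per candidate action.
losses = {
    "no_action": lambda s: np.maximum(s, 0.0) * 10.0,            # cost of ignoring a real problem
    "warn":      lambda s: 2.0 + np.abs(s) * 3.0,                # fixed cost plus residual harm
    "recall":    lambda s: 8.0 - np.minimum(s, 0.8) * 5.0,       # high fixed cost, large benefit if real
}

# Posterior (Bayes) risk of each action; the preferred action minimizes it.
bayes_risk = {action: float(np.mean(loss(signal_draws))) for action, loss in losses.items()}
best = min(bayes_risk, key=bayes_risk.get)
print(bayes_risk, "->", best)
```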
Methodological Approaches for Estimating the Benefits and Costs of Smart Grid Demonstration Projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Russell
This report presents a comprehensive framework for estimating the benefits and costs of Smart Grid projects and a step-by-step approach for making these estimates. The framework identifies the basic categories of benefits, the beneficiaries of these benefits, and the Smart Grid functionalities that lead to different benefits and proposes ways to estimate these benefits, including their monetization. The report covers cost-effectiveness evaluation, uncertainty, and issues in estimating baseline conditions against which a project would be compared. The report also suggests metrics suitable for describing principal characteristics of a modern Smart Grid to which a project can contribute. The first section of the report presents background information on the motivation for the report and its purpose. Section 2 introduces the methodological framework, focusing on the definition of benefits and a sequential, logical process for estimating them. Beginning with the Smart Grid technologies and functions of a project, it maps these functions to the benefits they produce. Section 3 provides a hypothetical example to illustrate the approach. Section 4 describes each of the 10 steps in the approach. Section 5 covers issues related to estimating benefits of the Smart Grid. Section 6 summarizes the next steps. The methods developed in this study will help improve future estimates - both retrospective and prospective - of the benefits of Smart Grid investments. These benefits, including those to consumers, society in general, and utilities, can then be weighed against the investments. Such methods would be useful in total resource cost tests and in societal versions of such tests. As such, the report will be of interest not only to electric utilities, but also to a broad constituency of stakeholders. Significant aspects of the methodology were used by the U.S. Department of Energy (DOE) to develop its methods for estimating the benefits and costs of its renewable and distributed systems integration demonstration projects as well as its Smart Grid Investment Grant projects and demonstration projects funded under the American Recovery and Reinvestment Act (ARRA). The goal of this report, which was cofunded by the Electric Power Research Institute (EPRI) and DOE, is to present a comprehensive set of methods for estimating the benefits and costs of Smart Grid projects. By publishing this report, EPRI seeks to contribute to the development of methods that will establish the benefits associated with investments in Smart Grid technologies. EPRI does not endorse the contents of this report or make any representations as to the accuracy and appropriateness of its contents. The purpose of this report is to present a methodological framework that will provide a standardized approach for estimating the benefits and costs of Smart Grid demonstration projects. The framework also has broader application to larger projects, such as those funded under the ARRA. Moreover, with additional development, it will provide the means for extrapolating the results of pilots and trials to at-scale investments in Smart Grid technologies. The framework was developed by a panel whose members provided a broad range of expertise.
Optimal estimation and scheduling in aquifer management using the rapid feedback control method
NASA Astrophysics Data System (ADS)
Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric
2017-12-01
Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.
High-Throughput Functional Validation of Progression Drivers in Lung Adenocarcinoma
2013-09-01
2) a novel molecular barcoding approach that facilitates cost-effective detection of driver events following in vitro and in vivo functional screens...aberration construction pipeline, which we named High-Throughput Mutagenesis and Molecular Barcoding (HiTMMoB; Fig. 1). We have therefore been able...lentiviral vector specially constructed for this project. This vector is compatible with our flexible molecular barcoding technology (Fig. 1), thus each...
Predicting activity approach based on new atoms similarity kernel function.
Abu El-Atta, Ahmed H; Moussa, M I; Hassanien, Aboul Ella
2015-07-01
Drug design is a high-cost and long-term process. To reduce the time and cost of drug discovery, new techniques are needed. The chemoinformatics field applies informational and computer science techniques, such as machine learning and graph theory, to discover properties of chemical compounds, such as toxicity or biological activity, by analyzing their molecular structure (molecular graph). There is therefore an increasing need for algorithms that analyze and classify graph data to predict the activity of molecules. Kernel methods provide a powerful framework which combines machine learning with graph theory techniques, and they have led to impressive performance in several chemoinformatics problems such as biological activity prediction. This paper presents a new approach based on kernel functions to solve the activity prediction problem for chemical compounds. First, each atom is encoded according to its neighbors; these codes are then used to establish relationships between atoms, and the relationships between atoms are used to measure similarity between chemical compounds. The proposed approach was compared with many other classification methods and the results show competitive accuracy with these methods. Copyright © 2015 Elsevier Inc. All rights reserved.
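The neighbour-based encoding can be illustrated with a small sketch: each atom is coded by its element together with the sorted multiset of neighbouring elements, and two molecules are compared by counting shared codes (a histogram-intersection kernel). This illustrates the general idea only; the molecule representation and the kernel are assumptions, not the paper's exact formulation.

```python
from collections import Counter

def atom_codes(molecule):
    # molecule: dict with 'atoms' (list of element symbols) and 'bonds' (list of index pairs)
    neighbours = {i: [] for i in range(len(molecule["atoms"]))}
    for a, b in molecule["bonds"]:
        neighbours[a].append(molecule["atoms"][b])
        neighbours[b].append(molecule["atoms"][a])
    # Code each atom by (its element, sorted tuple of neighbouring elements)
    return Counter(
        (molecule["atoms"][i], tuple(sorted(neighbours[i]))) for i in neighbours
    )

def similarity_kernel(m1, m2):
    c1, c2 = atom_codes(m1), atom_codes(m2)
    return sum((c1 & c2).values())          # histogram-intersection kernel on atom codes

# Heavy-atom skeletons only, for brevity
ethanol = {"atoms": ["C", "C", "O"], "bonds": [(0, 1), (1, 2)]}
methanol = {"atoms": ["C", "O"], "bonds": [(0, 1)]}
print("k(ethanol, methanol) =", similarity_kernel(ethanol, methanol))
```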
McBeath, Bowen; Briggs, Harold E; Aisenberg, Eugene
2010-10-01
Federal, state, and local policymakers and funders have increasingly organized human service delivery functions around the selection and implementation of empirically supported interventions (ESIs), under the expectation that service delivery through such intervention frameworks results in improvements in cost-effectiveness and system performance. This article examines the validity of four premises undergirding the ESI approach: ESIs are effective, relevant to common client problems and needs, culturally appropriate, and replicable and sustainable in community-based settings. In reviewing available literature, the authors found insufficient support for the uniform application of an ESI approach to social work practice in the human service sector, particularly as applied within agency contexts serving ethnic minority clients. The authors recommend that greater attention be devoted to the development and dissemination of social work interventions that respond to needs that are broadly understood and shared across diverse cultural groups, have proven clinical efficacy, and can be translated successfully for use across different agency and cultural environments. Such attention to the research and development function of the social work profession is increasingly necessary as policymakers and human service system architects require reduced costs and improved performance for programs serving historically oppressed client populations.
The Evolution of a More Rigorous Approach to Benefit Transfer: Benefit Function Transfer
NASA Astrophysics Data System (ADS)
Loomis, John B.
1992-03-01
The desire for economic values of recreation for unstudied recreation resources dates back to the water resource development benefit-cost analyses of the early 1960s. Rather than simply applying existing estimates of benefits per trip to the study site, a fairly rigorous approach was developed by a number of economists. This approach involves application of travel cost demand equations and contingent valuation benefit functions from existing sites to the new site. In this way the spatial market of the new site (i.e., its differing own price, substitute prices and population distribution) is accounted for in the new estimate of total recreation benefits. The assumptions of benefit transfer from recreation sites in one state to another state for the same recreation activity are empirically tested. The equality of demand coefficients for ocean sport salmon fishing in Oregon versus Washington and for freshwater steelhead fishing in Oregon versus Idaho is rejected. Thus transfer of either demand equations or average benefits per trip is likely to be in error. Using the Oregon steelhead equation, benefit transfers to rivers within the state are shown to be accurate to within 5-15%.
Construction of siRNA/miRNA expression vectors based on a one-step PCR process
Xu, Jun; Zeng, Jie Qiong; Wan, Gang; Hu, Gui Bin; Yan, Hong; Ma, Li Xin
2009-01-01
Background RNA interference (RNAi) has become a powerful means for silencing target gene expression in mammalian cells and is envisioned to be useful in therapeutic approaches to human disease. In recent years, high-throughput, genome-wide screening of siRNA/miRNA libraries has emerged as a desirable approach. Current methods for constructing siRNA/miRNA expression vectors require the synthesis of long oligonucleotides, which is costly and suffers from mutation problems. Results Here we report an ingenious method to solve traditional problems associated with construction of siRNA/miRNA expression vectors. We synthesized shorter primers (< 50 nucleotides) to generate a linear expression structure by PCR. The PCR products were directly transformed into chemically competent E. coli and converted to functional vectors in vivo via homologous recombination. The positive clones could be easily screened under UV light. Using this method we successfully constructed over 500 functional siRNA/miRNA expression vectors. Sequencing of the vectors confirmed a high accuracy rate. Conclusion This novel, convenient, low-cost and highly efficient approach may be useful for high-throughput assays of RNAi libraries. PMID:19490634
Real-time terminal area trajectory planning for runway independent aircraft
NASA Astrophysics Data System (ADS)
Xue, Min
The increasing demand for commercial air transportation results in delays due to traffic queues that form bottlenecks along final approach and departure corridors. In urban areas, it is often infeasible to build new runways, and regardless of automation upgrades traffic must remain separated to avoid the wakes of previous aircraft. Vertical or short takeoff and landing aircraft, operating as Runway Independent Aircraft (RIA), can increase passenger throughput at major urban airports via the use of vertiports or stub runways. The concept of simultaneous non-interfering (SNI) operations has been proposed to reduce traffic delays by creating approach and departure corridors that do not intersect existing fixed-wing routes. However, SNI trajectories open new routes that may overfly noise-sensitive areas, and RIA may generate more noise than traditional jet aircraft, particularly on approach. In this dissertation, we develop efficient SNI noise abatement procedures applicable to RIA. First, we introduce a methodology based on modified approximate cell decomposition and Dijkstra's search algorithm to optimize longitudinal-plane (2-D) RIA trajectories over a cost function that minimizes noise, time, and fuel use. Then, we extend the trajectory optimization model to 3-D with a k-ary tree as the discrete search space. We incorporate geographic information system (GIS) data, specifically population, into our objective function, and focus on a practical case study: the design of SNI RIA approach procedures to Baltimore-Washington International airport. Because solutions were represented as trim state sequences, we incorporated smooth transitions between segments to enable more realistic cost estimates. Due to the significant computational complexity, we investigated alternative, more efficient optimization techniques applicable to our nonlinear, non-convex, heavily constrained, and discontinuous objective function. Comparing genetic algorithm (GA) and adaptive simulated annealing (ASA) with our original Dijkstra's algorithm, ASA is identified as the most efficient algorithm for terminal area trajectory optimization. The effects of design parameter discretization are analyzed, with results indicating that an SNI procedure with 3-4 segments effectively balances simplicity with cost minimization. Finally, pilot control commands were implemented and generated via optimization-based inverse simulation to validate execution of the optimal approach trajectories.
Hierarchical Control Using Networks Trained with Higher-Level Forward Models
Wayne, Greg; Abbott, L.F.
2015-01-01
We propose and develop a hierarchical approach to network control of complex tasks. In this approach, a low-level controller directs the activity of a “plant,” the system that performs the task. However, the low-level controller may only be able to solve fairly simple problems involving the plant. To accomplish more complex tasks, we introduce a higher-level controller that controls the lower-level controller. We use this system to direct an articulated truck to a specified location through an environment filled with static or moving obstacles. The final system consists of networks that have memorized associations between the sensory data they receive and the commands they issue. These networks are trained on a set of optimal associations that are generated by minimizing cost functions. Cost function minimization requires predicting the consequences of sequences of commands, which is achieved by constructing forward models, including a model of the lower-level controller. The forward models and cost minimization are only used during training, allowing the trained networks to respond rapidly. In general, the hierarchical approach can be extended to larger numbers of levels, dividing complex tasks into more manageable sub-tasks. The optimization procedure and the construction of the forward models and controllers can be performed in similar ways at each level of the hierarchy, which allows the system to be modified to perform other tasks, or to be extended for more complex tasks without retraining lower-levels. PMID:25058706
Accurate modeling of defects in graphene transport calculations
NASA Astrophysics Data System (ADS)
Linhart, Lukas; Burgdörfer, Joachim; Libisch, Florian
2018-01-01
We present an approach for embedding defect structures modeled by density functional theory into large-scale tight-binding simulations. We extract local tight-binding parameters for the vicinity of the defect site using Wannier functions. In the transition region between the bulk lattice and the defect the tight-binding parameters are continuously adjusted to approach the bulk limit far away from the defect. This embedding approach allows for an accurate high-level treatment of the defect orbitals using as many as ten nearest neighbors while keeping a small number of nearest neighbors in the bulk to render the overall computational cost reasonable. As an example of our approach, we consider an extended graphene lattice decorated with Stone-Wales defects, flower defects, double vacancies, or silicon substitutes. We predict distinct scattering patterns mirroring the defect symmetries and magnitude that should be experimentally accessible.
Zehnder, Pascal; Gill, Inderbir S
2011-09-01
To provide insight into the recently published cost comparisons in the context of open, laparoscopic, and robotic-assisted laparoscopic radical cystectomy and to demonstrate the complexity of such economic analyses. Most economic evaluations are from a hospital perspective and summarize short-term perioperative therapeutic costs. However, the contributing factors (e.g. study design, included variables, robotic amortization plan, supply contract, surgical volume, surgeons' experience, etc.) vary substantially between the institutions. In addition, a true cost-effectiveness analysis considering cost per quality-adjusted life-year gained is not feasible because of the lack of long-term oncologic and functional outcome data for the robotic procedure. On the basis of a modeled cost analysis using results from published series, robotic-assisted cystectomy was - with few exceptions - found to be more expensive when compared with the open approach. Immediate costs are affected most by operative time, followed by length of hospital stay, robotic supply, case volume, robotic cost, and transfusion rate. Any complication substantially impacts overall costs. Economic cost evaluations are complex analyses influenced by numerous factors that hardly allow an interinstitutional comparison. Robotic-assisted cystectomy is constantly being refined, with many institutions still somewhere on their learning curve. Transparent reports of oncologic and functional outcome data from centers of expertise applying standardized methods will help to properly analyze the real long-term benefits of robotic surgery and successor technologies and prevent us from becoming slaves of successful marketing strategies.
Granovsky, Alexander A
2015-12-21
We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.
Systematic review of reusable versus disposable laparoscopic instruments: costs and safety.
Siu, Joey; Hill, Andrew G; MacCormick, Andrew D
2017-01-01
The quality of instruments and surgical expertise in minimally invasive surgery has developed markedly in the last two decades. Attention is now being turned to ways to allow surgeons to adopt more cost-effective and environmentally friendly approaches. This review explores current evidence on the cost and environmental impact of reusable versus single-use instruments. In addition, we aim to compare their quality, functionality and associated clinical outcomes. The Medline and EMBASE databases were searched for relevant literature from January 2000 to May 2015. Subject headings were Equipment Reuse/, Disposable Equipment/, Cholecystectomy/, Laparoscopic/, Laparoscopy/, Surgical Instruments/, Medical Waste Disposal/, Waste Management/, Medical Waste/, Environmental Sustainability/ and Sterilization/. There are few objective comparative analyses of single-use versus reusable instruments. Current evidence suggests that limiting the use of disposable instruments to cases of necessity may hold both economic and environmental advantages. Theoretical advantages of single-use instruments in quality, safety, sterility, ease of use and, importantly, patient outcomes have rarely been examined. Cost-saving methods, environmentally friendly methods, global operative costs, hidden costs, sterilization methods and quality assurance systems vary greatly between studies, making it difficult to gain an overview of the comparison between single-use and reusable instruments. Further examination of cost comparisons between disposable and reusable instruments is necessary, while externalized environmental costs, instrument function and safety are also important to consider in future studies. © 2016 Royal Australasian College of Surgeons.
The cost of misremembering: Inferring the loss function in visual working memory.
Sims, Chris R
2015-03-04
Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. © 2015 ARVO.
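Schematically, and without reproducing the paper's exact formulation, the framework amounts to choosing a memory encoding that minimises expected loss under a capacity limit, for example

\min_{p(\hat{x}\mid x)} \; \mathbb{E}\big[L(x,\hat{x})\big] \quad \text{subject to} \quad I(x;\hat{x}) \le C,

where x is the stimulus, \hat{x} the recalled value, I(x;\hat{x}) the mutual information between stimulus and memory representation, and C the capacity; inverse decision theory then asks which loss function L best explains the observed error distribution. The symbols here are generic placeholders.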
Wavefield reconstruction inversion with a multiplicative cost function
NASA Astrophysics Data System (ADS)
da Silva, Nuno V.; Yao, Gang
2018-01-01
We present a method for the automatic estimation of the trade-off parameter in the context of wavefield reconstruction inversion (WRI). WRI formulates the inverse problem as an optimisation problem, minimising the data misfit while penalising with a wave-equation constraining term. The trade-off between the two terms is set by a scaling factor that weights the contributions of the data-misfit term and the constraining term to the value of the objective function. If this parameter is too large, the wave-equation term dominates and effectively imposes a hard constraint on the inversion. If it is too small, the solution is poorly constrained, as the inversion essentially penalises only the data misfit and does not take into account the physics that explains the data. This paper introduces a new approach that recasts the WRI formulation into a multiplicative cost function. We demonstrate that the proposed method outperforms the additive cost function when the trade-off parameter in the latter is appropriately scaled, when it is adapted throughout the iterations, and when the data are contaminated with Gaussian random noise. Thus this work contributes a framework for a more automated application of WRI.
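As a schematic contrast (generic WRI-style notation, not copied from this paper): with d the observed data, P a sampling operator, u the reconstructed wavefield, A(m) the wave-equation operator for model m and q the source term, an additive and a multiplicative objective can be written as

J_{\mathrm{add}}(m,u) = \lVert Pu - d \rVert^2 + \lambda^2 \lVert A(m)u - q \rVert^2, \qquad J_{\mathrm{mult}}(m,u) = \lVert Pu - d \rVert^2 \cdot \lVert A(m)u - q \rVert^2,

so that in the multiplicative form the relative weighting of the two terms adjusts automatically as either misfit changes, removing the need to hand-tune \lambda.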
O'Sullivan, Peter B; Caneiro, J P; O'Keeffe, Mary; Smith, Anne; Dankaerts, Wim; Fersum, Kjartan; O'Sullivan, Kieran
2018-05-01
Biomedical approaches for diagnosing and managing disabling low back pain (LBP) have failed to arrest the exponential increase in health care costs, with a concurrent increase in disability and chronicity. Health messages regarding the vulnerability of the spine and a failure to target the interplay among multiple factors that contribute to pain and disability may partly explain this situation. Although many approaches and subgrouping systems for disabling LBP have been proposed in an attempt to deal with this complexity, they have been criticized for being unidimensional and reductionist and for not improving outcomes. Cognitive functional therapy was developed as a flexible integrated behavioral approach for individualizing the management of disabling LBP. This approach has evolved from an integration of foundational behavioral psychology and neuroscience within physical therapist practice. It is underpinned by a multidimensional clinical reasoning framework in order to identify the modifiable and nonmodifiable factors associated with an individual's disabling LBP. This article illustrates the application of cognitive functional therapy to provide care that can be adapted to an individual with disabling LBP.
Using stochastic dynamic programming to support catchment-scale water resources management in China
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Pereira-Cardenal, Silvio Javier; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter
2013-04-01
A hydro-economic modelling approach is used to optimize reservoir management at river basin level. We demonstrate the potential of this integrated approach on the Ziya River basin, a complex basin on the North China Plain south-east of Beijing. The area is subject to severe water scarcity due to low and extremely seasonal precipitation, and the intense agricultural production is highly dependent on irrigation. Large reservoirs provide water storage for dry months, while groundwater and the external South-to-North Water Transfer Project are alternative sources of water. An optimization model based on stochastic dynamic programming has been developed. The objective function is to minimize the total cost of supplying water to the users, while satisfying minimum ecosystem flow constraints. Each user group (agriculture, domestic and industry) is characterized by fixed demands, fixed water allocation costs for the different water sources (surface water, groundwater and external water) and fixed costs of water supply curtailment. The multiple reservoirs in the basin are aggregated into a single reservoir to reduce the dimensionality of the decision problem. Water availability is estimated using a hydrological model. The hydrological model is based on the Budyko framework and is forced with 51 years of observed daily rainfall and temperature data. 23 years of observed discharge from an in-situ station located downstream of a remote mountainous catchment are used for model calibration. Runoff serial correlation is described by a Markov chain that is used to generate monthly runoff scenarios to the reservoir. The optimal costs at a given reservoir state and stage were calculated as the minimum sum of immediate and future costs. Based on the total costs for all states and stages, water value tables were generated which contain the marginal value of stored water as a function of the month, the inflow state and the reservoir state. The water value tables are used to guide allocation decisions in simulation mode. The performance of the operation rules based on water value tables was evaluated. The approach was successfully used to assess the performance of alternative development scenarios and infrastructure projects in the case study region.
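The backward recursion behind such water value tables can be sketched as follows; the storage and inflow discretisation, demands, costs and transition probabilities below are placeholders rather than the study's calibrated values.

import numpy as np

# Backward stochastic dynamic programming sketch: states = reservoir storage levels,
# inflow classes follow a Markov chain; all numbers are illustrative placeholders.
n_s, n_q, n_months = 10, 3, 12
storage = np.linspace(0.0, 1.0, n_s)              # normalised storage grid
inflow = np.array([0.05, 0.15, 0.30])             # inflow per class
P = np.full((n_q, n_q), 1.0 / n_q)                # inflow-class transition matrix
demand, curtail_cost = 0.2, 10.0

def stage_cost(release):
    deficit = max(demand - release, 0.0)
    return curtail_cost * deficit                  # cost of supply curtailment

F = np.zeros((n_s, n_q))                           # future cost beyond the horizon
for month in reversed(range(n_months)):
    F_new = np.empty_like(F)
    for i, s in enumerate(storage):
        for q in range(n_q):
            best = np.inf
            for release in np.linspace(0.0, s + inflow[q], 11):
                s_next = min(s + inflow[q] - release, 1.0)
                j = np.abs(storage - s_next).argmin()
                exp_future = P[q] @ F[j]           # expectation over next inflow class
                best = min(best, stage_cost(release) + exp_future)
            F_new[i, q] = best
    F = F_new
# Marginal water values (approximately -dF/ds) give the water value table for the first month.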
A high-throughput screening approach for the optoelectronic properties of conjugated polymers.
Wilbraham, Liam; Berardo, Enrico; Turcani, Lukas; Jelfs, Kim E; Zwijnenburg, Martijn A
2018-06-25
We propose a general high-throughput virtual screening approach for the optical and electronic properties of conjugated polymers. This approach makes use of the recently developed xTB family of low-computational-cost density functional tight-binding methods from Grimme and co-workers, calibrated here to (TD-)DFT data computed for a representative diverse set of (co-)polymers. Parameters drawn from the resulting calibration using a linear model can then be applied to the xTB derived results for new polymers, thus generating near DFT-quality data with orders of magnitude reduction in computational cost. As a result, after an initial computational investment for calibration, this approach can be used to quickly and accurately screen on the order of thousands of polymers for target applications. We also demonstrate that the (opto)electronic properties of the conjugated polymers show only a very minor variation when considering different conformers and that the results of high-throughput screening are therefore expected to be relatively insensitive with respect to the conformer search methodology applied.
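The calibration step described above amounts to an ordinary least-squares linear map from xTB-level quantities to the (TD-)DFT reference values; a minimal sketch, with made-up optical gaps (in eV) standing in for the computed data, is:

import numpy as np

# Hypothetical optical gaps (eV) for a small calibration set of polymers.
xtb_gap = np.array([2.10, 2.45, 1.80, 3.05, 2.70])    # cheap xTB estimates
dft_gap = np.array([2.35, 2.78, 1.95, 3.40, 3.01])    # TD-DFT reference values

slope, intercept = np.polyfit(xtb_gap, dft_gap, 1)     # linear calibration model

def calibrated_gap(new_xtb_value):
    # Apply the calibration to xTB results for unseen polymers.
    return slope * new_xtb_value + intercept

print(calibrated_gap(2.50))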
Bottom-up production of meta-atoms for optical magnetism in visible and NIR light
NASA Astrophysics Data System (ADS)
Barois, Philippe; Ponsinet, Virginie; Baron, Alexandre; Richetti, Philippe
2018-02-01
Many unusual optical properties of metamaterials arise from the magnetic response of engineered structures of sub-wavelength size (meta-atoms) exposed to light. The top-down approach, whereby engineered nanostructures of well-defined morphology are engraved on a surface, has proved successful for the generation of strong optical magnetism. It faces, however, the limitations of high cost and small active area in visible light, where nanometre resolution is needed. The bottom-up approach, whereby metamaterials of large volume or large area are fabricated by combining nanochemistry and self-assembly techniques, may constitute a cost-effective alternative. This approach nevertheless requires the large-scale production of functional building blocks (meta-atoms) bearing a strong magnetic optical response. We propose in this paper a few routes that lead to the large-scale synthesis of magnetic metamaterials operating in visible or near-IR light.
A new approach to implementing decentralized wastewater treatment concepts.
van Afferden, Manfred; Cardona, Jaime A; Lee, Mi-Yong; Subah, Ali; Müller, Roland A
2015-01-01
Planners and decision-makers in the wastewater sector are often confronted with the problem of identifying adequate development strategies and the most suitable finance schemes for decentralized wastewater infrastructure. This paper focuses on providing an approach in support of such decision-making. It is based on basic principles that represent an integrated perspective on sustainable wastewater management. We operationalize these principles by means of a geographic information system (GIS)-based approach, 'Assessment of Local Lowest-Cost Wastewater Solutions' (ALLOWS). The main product of ALLOWS is the identification of cost-effective local wastewater management solutions for any given demographic and physical context. By using universally available input data, the tool allows decision-makers to compare different wastewater solutions for any given wastewater situation. This paper introduces the ALLOWS-GIS tool. Its application and functionality are illustrated by assessing different wastewater solutions for two neighboring communities in rural Jordan.
NASA Astrophysics Data System (ADS)
Harkness, Linda L.; Sjoberg, Eric S.
1996-06-01
The Georgia Tech Research Institute, sponsored by the Warner Robins Air Logistics Center, has developed an approach for efficiently postulating and evaluating methods for extending the life of radars and other avionics systems. The technique identifies specific assemblies for potential replacement and evaluates the system-level impact, including the performance, reliability and life-cycle cost of each action. The initial impetus for this research was the increasing obsolescence of integrated circuits contained in the AN/APG-63 system. The operational life of military electronics is typically in excess of twenty years, which encompasses several generations of IC technology. GTRI has developed a systems approach to inserting modern technology components into older systems based upon identification of those functions which limit the system's performance or reliability and which are cost drivers. The presentation will discuss the above methodology and a technique for evaluating and ranking the different potential system upgrade options.
Marsac, Meghan L.; Winston, Flaura K.; Hildenbrand, Aimee K.; Kohser, Kristen L.; March, Sonja; Kenardy, Justin; Kassam-Adams, Nancy
2015-01-01
Background Millions of children are affected by acute medical events annually, creating need for resources to promote recovery. While web-based interventions promise wide reach and low cost for users, development can be time- and cost-intensive. A systematic approach to intervention development can help to minimize costs and increase likelihood of effectiveness. Using a systematic approach, our team integrated evidence on the etiology of traumatic stress, an explicit program theory, and a user-centered design process into intervention development. Objective To describe the evidence and program theory model applied to the Coping Coach intervention and present pilot data evaluating intervention feasibility and acceptability. Method Informed by empirical evidence on traumatic stress prevention, an overarching program theory model was articulated to delineate pathways from a) specific intervention content to b) program targets and proximal outcomes to c) key longer-term health outcomes. Systematic user-testing with children ages 8–12 (N = 42) exposed to an acute medical event and their parents was conducted throughout intervention development. Results Functionality challenges in early prototypes necessitated revisions. Child engagement was positive throughout revisions to the Coping Coach intervention. Final pilot-testing demonstrated promising feasibility and high user-engagement and satisfaction. Conclusion Applying a systematic approach to the development of Coping Coach led to the creation of a functional intervention that is accepted by children and parents. Development of new e-health interventions may benefit from a similar approach. Future research should evaluate the efficacy of Coping Coach in achieving targeted outcomes of reduced trauma symptoms and improved health-related quality of life. PMID:25844276
Modeling radiative transfer with the doubling and adding approach in a climate GCM setting
NASA Astrophysics Data System (ADS)
Lacis, A. A.
2017-12-01
The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle makes accurate treatment of multiple scattering in the climate GCM setting problematic, due primarily to computational cost issues. The accurate methods of calculating multiple scattering that are available are far too computationally expensive for climate GCM applications. Two-stream-type radiative transfer approximations may be computationally fast enough, but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method that is being used in the GISS climate GCM, which is an adaptation of the doubling/adding formalism configured to operate with a look-up table utilizing a single Gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-Gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid-box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angle.
Hydroeconomic modeling of sustainable groundwater management
NASA Astrophysics Data System (ADS)
MacEwan, Duncan; Cayar, Mesut; Taghavi, Ali; Mitchell, David; Hatchett, Steve; Howitt, Richard
2017-03-01
In 2014, California passed legislation requiring the sustainable management of critically overdrafted groundwater basins, located primarily in the Central Valley agricultural region. Hydroeconomic modeling of the agricultural economy, groundwater, and surface water systems is critically important to simulate potential transition paths to sustainable management of the basins. The requirement for sustainable groundwater use by 2040 is mandated for many overdrafted groundwater basins that are decoupled from environmental and river flow effects. We argue that, for such cases, a modeling approach that integrates a biophysical response function from a hydrologic model into an economic model of groundwater use is preferable to embedding an economic response function in a complex hydrologic model as is more commonly done. Using this preferred approach, we develop a dynamic hydroeconomic model for the Kings and Tulare Lake subbasins of California and evaluate three groundwater management institutions—open access, perfect foresight, and managed pumping. We quantify the costs and benefits of sustainable groundwater management, including energy pumping savings, drought reserve values, and avoided capital costs. Our analysis finds that, for basins that are severely depleted, losses in crop net revenue are offset by the benefits of energy savings, drought reserve value, and avoided capital costs. This finding provides an empirical counterexample to the Gisser and Sanchez Effect.
Kauvar, Arielle N B; Cronin, Terrence; Roenigk, Randall; Hruza, George; Bennett, Richard
2015-05-01
Basal cell carcinoma (BCC) is the most common cancer in the US population affecting approximately 2.8 million people per year. Basal cell carcinomas are usually slow-growing and rarely metastasize, but they do cause localized tissue destruction, compromised function, and cosmetic disfigurement. To provide clinicians with guidelines for the management of BCC based on evidence from a comprehensive literature review, and consensus among the authors. An extensive review of the medical literature was conducted to evaluate the optimal treatment methods for cutaneous BCC, taking into consideration cure rates, recurrence rates, aesthetic and functional outcomes, and cost-effectiveness of the procedures. Surgical approaches provide the best outcomes for BCCs. Mohs micrographic surgery provides the highest cure rates while maximizing tissue preservation, maintenance of function, and cosmesis. Mohs micrographic surgery is an efficient and cost-effective procedure and remains the treatment of choice for high-risk BCCs and for those in cosmetically sensitive locations. Nonsurgical modalities may be used for low-risk BCCs when surgery is contraindicated or impractical, but the cure rates are lower.
Joint Chance-Constrained Dynamic Programming
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob
2012-01-01
This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
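In generic notation (not the paper's exact derivation), the reformulation and its dualization can be written as

\min_{\pi}\ \mathbb{E}\Big[\sum_{t} c_t(x_t,u_t)\Big] \quad \text{s.t.} \quad \mathbb{E}\Big[\sum_{t} \mathbf{1}\{x_t \notin \mathcal{X}_{\mathrm{safe}}\}\Big] \le \Delta,

so that, for a fixed dual variable \lambda \ge 0, the dualized stage cost c_t(x_t,u_t) + \lambda\,\mathbf{1}\{x_t \notin \mathcal{X}_{\mathrm{safe}}\} can be handled by a standard dynamic program, and \lambda is then tuned by one-dimensional root-finding on the dual. The symbols (policy \pi, safe set \mathcal{X}_{\mathrm{safe}}, risk bound \Delta) are generic placeholders.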
NASA Astrophysics Data System (ADS)
Brecher, Christian; Baum, Christoph; Bastuck, Thomas
2015-03-01
Economically advantageous microfabrication technologies for lab-on-a-chip diagnostic devices, substituting the commonly used glass etching or injection molding processes, are one of the key enablers for the emerging market of microfluidic devices. On-site detection in the fields of life sciences, point-of-care diagnostics and environmental analysis requires compact, disposable and highly functionalized systems. Roll-to-roll production, as a high-volume process, has in recent years become the emerging fabrication technology for integrated, complex high-technology products (e.g. fuel cells). Differently functionalized polymer films enable researchers to create a new generation of lab-on-a-chip devices by combining electronic, microfluidic and optical functions in a multilayer architecture. For the replication of microfluidic and optical functions via the roll-to-roll production process, competitive approaches are available. One of them is to imprint fluidic channels and optical structures of micro- or nanometer scale from embossing rollers into ultraviolet (UV) curable lacquers on polymer substrates. Depending on the dimension, shape and quantity of those structures, there are alternative manufacturing technologies for the embossing roller. Ultra-precise diamond turning, electroforming or casting polymer materials are used either for direct structuring or for manufacturing roller sleeves. Mastering methods are selected for an application considering the required replication quality and structure complexity. Criteria for the replication quality are surface roughness and contour accuracy. Structure complexity is evaluated by the shapes producible (e.g. linear, circular) and the aspect ratio. Costs for the mastering process and structure lifetime are major cost factors. The alternative replication approaches are introduced and analyzed according to the criteria presented. Advantages and drawbacks of each technology are discussed and exemplary applications are presented.
Yeh, Wei-Chang
Network reliability is an important index to the provision of useful information for decision support in the modern world. There is always a need to calculate symbolic network reliability functions (SNRFs) due to dynamic and rapid changes in network parameters. In this brief, the proposed squeezed artificial neural network (SqANN) approach uses the Monte Carlo simulation to estimate the corresponding reliability of a given designed matrix from the Box-Behnken design, and then the Taguchi method is implemented to find the appropriate number of neurons and activation functions of the hidden layer and the output layer in ANN to evaluate SNRFs. According to the experimental results of the benchmark networks, the comparison appears to support the superiority of the proposed SqANN method over the traditional ANN-based approach, with at least 16.6% improvement in the median absolute deviation at the cost of an extra 2 s on average for all experiments.
Development of Miniaturized Optimized Smart Sensors (MOSS) for space plasmas
NASA Technical Reports Server (NTRS)
Young, D. T.
1993-01-01
The cost of space plasma sensors is high for several reasons: (1) Most are one-of-a-kind and state-of-the-art, (2) the cost of launch to orbit is high, (3) ruggedness and reliability requirements lead to costly development and test programs, and (4) overhead is added by overly elaborate or generalized spacecraft interface requirements. Possible approaches to reducing costs include development of small 'sensors' (defined as including all necessary optics, detectors, and related electronics) that will ultimately lead to cheaper missions by reducing (2), improving (3), and, through work with spacecraft designers, reducing (4). Despite this logical approach, there is no guarantee that smaller sensors are necessarily either better or cheaper. We have previously advocated applying analytical 'quality factors' to plasma sensors (and spacecraft) and have begun to develop miniaturized particle optical systems by applying quantitative optimization criteria. We are currently designing a Miniaturized Optimized Smart Sensor (MOSS) in which miniaturized electronics (e.g., employing new power supply topology and extensive use of gate arrays and hybrid circuits) are fully integrated with newly developed particle optics to give significant savings in volume and mass. The goal of the SwRI MOSS program is development of a fully self-contained and functional plasma sensor weighing 1 lb and requiring 1 W. MOSS will require only a typical spacecraft DC power source (e.g., 30 V) and command/data interfaces in order to be fully functional, and will provide measurement capabilities comparable in most ways to current sensors.
Design of Optimally Robust Control Systems.
1980-01-01
approach is that the optimization framework is an artificial device. While some design constraints can easily be incorporated into a single cost function...indicating that that point was indeed the solution. Also, an intelligent initial guess for k was important in order to avoid being hung up at the double
Managing configuration software of ground software applications with glueware
NASA Technical Reports Server (NTRS)
Larsen, B.; Herrera, R.; Sesplaukis, T.; Cheng, L.; Sarrel, M.
2003-01-01
This paper reports on a simple, low-cost effort to streamline the configuration of the uplink software tools. Even though the existing ground system consisted of JPL and custom Cassini software rather than COTS, we chose a glueware approach--reintegrating with wrappers and bridges and adding minimal new functionality.
Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam
2015-01-01
To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
NASA Technical Reports Server (NTRS)
Tideman, T. N.
1972-01-01
An economic approach to design efficient transportation systems involves maximizing an objective function that reflects both goals and costs. A demand curve can be derived by finding the quantities of a good that solve the maximization problem as one varies the price of that commodity, holding income and the prices of all other goods constant. A supply curve is derived by applying the idea of profit maximization of firms. The production function determines the relationship between input and output.
Cost-Effectiveness Thresholds in Global Health: Taking a Multisectoral Perspective.
Remme, Michelle; Martinez-Alvarez, Melisa; Vassall, Anna
2017-04-01
Good health is a function of a range of biological, environmental, behavioral, and social factors. The consumption of quality health care services is therefore only a part of how good health is produced. Although few would argue with this, the economic framework used to allocate resources to optimize population health is applied in a way that constrains the analyst and the decision maker to health care services. This approach risks missing two critical issues: 1) multiple sectors contribute to health gain and 2) the goods and services produced by the health sector can have multiple benefits besides health. We illustrate how present cost-effectiveness thresholds could result in health losses, particularly when considering health-producing interventions in other sectors or public health interventions with multisectoral outcomes. We then propose a potentially more optimal second best approach, the so-called cofinancing approach, in which the health payer could redistribute part of its budget to other sectors, where specific nonhealth interventions achieved a health gain more efficiently than the health sector's marginal productivity (opportunity cost). Likewise, other sectors would determine how much to contribute toward such an intervention, given the current marginal productivity of their budgets. Further research is certainly required to test and validate different measurement approaches and to assess the efficiency gains from cofinancing after deducting the transaction costs that would come with such cross-sectoral coordination. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Optimal consensus algorithm integrated with obstacle avoidance
NASA Astrophysics Data System (ADS)
Wang, Jianan; Xin, Ming
2013-01-01
This article proposes a new consensus algorithm for the networked single-integrator systems in an obstacle-laden environment. A novel optimal control approach is utilised to achieve not only multi-agent consensus but also obstacle avoidance capability with minimised control efforts. Three cost functional components are defined to fulfil the respective tasks. In particular, an innovative nonquadratic obstacle avoidance cost function is constructed from an inverse optimal control perspective. The other two components are designed to ensure consensus and constrain the control effort. The asymptotic stability and optimality are proven. In addition, the distributed and analytical optimal control law only requires local information based on the communication topology to guarantee the proposed behaviours, rather than all agents' information. The consensus and obstacle avoidance are validated through simulations.
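A cost functional of this three-part type can be written schematically as follows; the paper's specific nonquadratic avoidance term and weighting matrices are not reproduced here.

J = \int_0^{\infty} \Big[ \sum_{(i,j)\in\mathcal{E}} (x_i - x_j)^{\top} Q\,(x_i - x_j) + \sum_i u_i^{\top} R\,u_i + \sum_i g_{\mathrm{obs}}(x_i) \Big]\,dt,

where the first term drives consensus over the communication graph \mathcal{E}, the second penalises control effort, and g_{\mathrm{obs}} is an avoidance penalty that grows rapidly as an agent approaches an obstacle.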
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
Quantifying the economic risks of climate change
NASA Astrophysics Data System (ADS)
Diaz, Delavane; Moore, Frances
2017-11-01
Understanding the value of reducing greenhouse-gas emissions matters for policy decisions and climate risk management, but quantification is challenging because of the complex interactions and uncertainties in the Earth and human systems, as well as normative ethical considerations. Current modelling approaches use damage functions to parameterize a simplified relationship between climate variables, such as temperature change, and economic losses. Here we review and synthesize the limitations of these damage functions and describe how incorporating impacts, adaptation and vulnerability research advances and empirical findings could substantially improve damage modelling and the robustness of social cost of carbon values produced. We discuss the opportunities and challenges associated with integrating these research advances into cost-benefit integrated assessment models, with guidance for future work.
Gain optimization with non-linear controls
NASA Technical Reports Server (NTRS)
Slater, G. L.; Kandadai, R. D.
1984-01-01
An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application in this paper is the design of controls for nominally linear systems in which the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.
Immune defense and host life history.
Zuk, Marlene; Stoehr, Andrew M
2002-10-01
Recent interest has focused on immune response in an evolutionary context, with particular attention to disease resistance as a life-history trait, subject to trade-offs against other traits such as reproductive effort. Immune defense has several characteristics that complicate this approach, however; for example, because of the risk of autoimmunity, optimal immune defense is not necessarily maximum immune defense. Two important types of cost associated with immunity in the context of life history are resource costs, those related to the allocation of essential but limited resources, such as energy or nutrients, and option costs, those paid not in the currency of resources but in functional or structural components of the organism. Resource and option costs are likely to apply to different aspects of resistance. Recent investigations into possible trade-offs between reproductive effort, particularly sexual displays, and immunity have suggested interesting functional links between the two. Although all organisms balance the costs of immune defense against the requirements of reproduction, this balance works out differently for males than it does for females, creating sex differences in immune response that in turn are related to ecological factors such as the mating system. We conclude that immune response is indeed costly and that future work would do well to include invertebrates, which have sometimes been neglected in studies of the ecology of immune defense.
NASA Astrophysics Data System (ADS)
Lipiński, Seweryn; Olkowski, Tomasz
2017-10-01
The cost of electro-mechanical equipment for new small hydropower plants most often amounts to about 30-40% of the total budget. In the case of modernization of existing installations, this equipment represents the main cost. This has been a research problem for at least a few decades, and many models have been developed for the purpose. The aim of our work was to collect and analyse formulas that allow estimation of the cost of investment in electro-mechanical equipment for small hydropower plants. Over a dozen functions were analysed. To achieve the aim of our work, these functions were converted into a form allowing their comparison. The costs were then simulated with respect to plant power and net head; such an approach is novel and allows a deeper discussion of the problem, as well as drawing broader conclusions. The following conclusions can be drawn: significant differences were observed in the results obtained using the various formulas; there is a need for a wide study, based on national investments in small hydropower plants, that would allow equations to be developed from local data; the resulting formulas would allow the costs of modernization or of new construction of a small hydropower plant to be determined more precisely; special attention should be paid to formulas that consider the turbine type.
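Many formulas of this kind in the literature share a power-law structure in installed power P and net head H, of the form C = a·P^b·H^c; the sketch below uses placeholder coefficients purely to illustrate how such formulas can be evaluated and compared once converted to a common form.

# Generic power-law cost model C = a * P**b * H**c for electro-mechanical equipment.
# The coefficients below are placeholders, not the coefficients analysed in the paper.
def em_cost(power_kw, head_m, a=20000.0, b=0.7, c=-0.35):
    return a * power_kw**b * head_m**c

for power_kw in (100, 500, 1000):
    for head_m in (5, 20, 50):
        print(power_kw, head_m, round(em_cost(power_kw, head_m)))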
Decision making under uncertainty: a quasimetric approach.
N'Guyen, Steve; Moulin-Frier, Clément; Droulez, Jacques
2013-01-01
We propose a new approach for solving a class of discrete decision making problems under uncertainty with positive cost. This issue concerns multiple and diverse fields such as engineering, economics, artificial intelligence, cognitive science and many others. Basically, an agent has to choose a single or series of actions from a set of options, without knowing for sure their consequences. Schematically, two main approaches have been followed: either the agent learns which option is the correct one to choose in a given situation by trial and error, or the agent already has some knowledge on the possible consequences of his decisions; this knowledge being generally expressed as a conditional probability distribution. In the latter case, several optimal or suboptimal methods have been proposed to exploit this uncertain knowledge in various contexts. In this work, we propose following a different approach, based on the geometric intuition of distance. More precisely, we define a goal independent quasimetric structure on the state space, taking into account both cost function and transition probability. We then compare precision and computation time with classical approaches.
Herbold, Craig W.; Pelikan, Claus; Kuzyk, Orest; Hausmann, Bela; Angel, Roey; Berry, David; Loy, Alexander
2015-01-01
High throughput sequencing of phylogenetic and functional gene amplicons provides tremendous insight into the structure and functional potential of complex microbial communities. Here, we introduce a highly adaptable and economical PCR approach to barcoding and pooling libraries of numerous target genes. In this approach, we replace gene- and sequencing platform-specific fusion primers with general, interchangeable barcoding primers, enabling nearly limitless customized barcode-primer combinations. Compared to barcoding with long fusion primers, our multiple-target gene approach is more economical because it overall requires lower number of primers and is based on short primers with generally lower synthesis and purification costs. To highlight our approach, we pooled over 900 different small-subunit rRNA and functional gene amplicon libraries obtained from various environmental or host-associated microbial community samples into a single, paired-end Illumina MiSeq run. Although the amplicon regions ranged in size from approximately 290 to 720 bp, we found no significant systematic sequencing bias related to amplicon length or gene target. Our results indicate that this flexible multiplexing approach produces large, diverse, and high quality sets of amplicon sequence data for modern studies in microbial ecology. PMID:26236305
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Liu, Yangang
2014-12-18
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C
2011-01-01
Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
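For reference, the quantity optimised by an MEE-trained filter is typically estimated from error samples with a Parzen (kernel) density estimate: maximising the quadratic "information potential" of the errors corresponds to minimising an estimate of their Renyi entropy. The sketch below is a generic illustration of that estimator, not the paper's FPGA implementation, and the error samples are synthetic.

import numpy as np

# Proportional to the quadratic information potential of the error samples,
# using a Gaussian Parzen window (normalisation constant omitted).
def information_potential(errors, sigma=1.0):
    diffs = errors[:, None] - errors[None, :]
    kernel = np.exp(-diffs**2 / (4.0 * sigma**2))
    return kernel.mean()

# Example: compare errors from two hypothetical decoders (toy numbers).
rng = np.random.default_rng(0)
e_wiener = rng.normal(0.0, 1.0, 200)
e_mee = rng.laplace(0.0, 0.5, 200)
print(information_potential(e_wiener), information_potential(e_mee))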
A toxicity cost function approach to optimal CPA equilibration in tissues.
Benson, James D; Higgins, Adam Z; Desai, Kunjan; Eroglu, Ali
2018-02-01
There is growing need for cryopreserved tissue samples that can be used in transplantation and regenerative medicine. While a number of specific tissue types have been successfully cryopreserved, this success is not general, and there is not a uniform approach to cryopreservation of arbitrary tissues. Additionally, while there are a number of long-established approaches towards optimizing cryoprotocols in single cell suspensions, and even plated cell monolayers, computational approaches in tissue cryopreservation have classically been limited to explanatory models. Here we develop a numerical approach to adapt cell-based CPA equilibration damage models for use in a classical tissue mass transport model. To implement this with real-world parameters, we measured CPA diffusivity in three human-sourced tissue types, skin, fibroid and myometrium, yielding propylene glycol diffusivities of 0.6 × 10⁻⁶ cm²/s, 1.2 × 10⁻⁶ cm²/s and 1.3 × 10⁻⁶ cm²/s, respectively. Based on these results, we numerically predict and compare optimal multistep equilibration protocols that minimize the cell-based cumulative toxicity cost function and the damage due to excessive osmotic gradients at the tissue boundary. Our numerical results show that there are fundamental differences between protocols designed to minimize total CPA exposure time in tissues and protocols designed to minimize accumulated CPA toxicity, and that "one size fits all" stepwise approaches are predicted to be more toxic and take considerably longer than needed. Copyright © 2017 Elsevier Inc. All rights reserved.
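For orientation, cumulative CPA toxicity cost functionals in this line of work are typically of a concentration-power-law form; a schematic version (notation and exponent are illustrative, not taken from this abstract) is

\[
J[C] \;=\; \int_{0}^{t_{\mathrm{f}}} k(T)\, C(t)^{\alpha}\, \mathrm{d}t, \qquad \alpha > 1 \ \text{fitted to viability data},
\]

so a protocol is judged by the entire concentration history C(t) it imposes on the cells, not just by its total duration.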
Candidate Mission from Planet Earth control and data delivery system architecture
NASA Technical Reports Server (NTRS)
Shapiro, Phillip; Weinstein, Frank C.; Hei, Donald J., Jr.; Todd, Jacqueline
1992-01-01
Using a structured, experience-based approach, Goddard Space Flight Center (GSFC) has assessed the generic functional requirements for a lunar mission control and data delivery (CDD) system. This analysis was based on lunar mission requirements outlined in GSFC-developed user traffic models. The CDD system will facilitate data transportation among user elements, element operations, and user teams by providing functions such as data management, fault isolation, fault correction, and link acquisition. The CDD system for the lunar missions must not only satisfy lunar requirements but also facilitate and provide early development of data system technologies for Mars. Reuse and evolution of existing data systems can help to maximize system reliability and minimize cost. This paper presents a set of existing and currently planned NASA data systems that provide the basic functionality. Reuse of such systems can have an impact on mission design and significantly reduce CDD and other system development costs.
Scalable maskless patterning of nanostructures using high-speed scanning probe arrays
NASA Astrophysics Data System (ADS)
Chen, Chen; Akella, Meghana; Du, Zhidong; Pan, Liang
2017-08-01
Nanoscale patterning is the key process to manufacture important products such as semiconductor microprocessors and data storage devices. Many studies have shown that it has the potential to revolutionize the functions of a broad range of products for a wide variety of applications in energy, healthcare, civil, defense and security. However, tools for mass production of these devices usually cost tens of millions of dollars each and are only affordable to the established semiconductor industry. A new method, nominally known as "pattern-on-the-fly", involves scanning an array of optical or electrical probes at high speed to form nanostructures and offers a new low-cost approach for nanoscale additive patterning. In this paper, we report some progress on using this method to pattern self-assembled monolayers (SAMs) on silicon substrates. We also functionalize the substrate with gold nanoparticles based on the SAM to show the feasibility of preparing amphiphilic and multi-functional surfaces.
The Effect of Publicized Quality Information on Home Health Agency Choice
Jung, Jeah Kyoungrae; Wu, Bingxiao; Kim, Hyunjee; Polsky, Daniel
2016-01-01
We examine consumers’ use of publicized quality information in Medicare home health care markets, where consumer cost sharing and travel costs are absent. We report two findings. First, agencies with high quality scores are more likely to be preferred by consumers after the introduction of a public reporting program than before. Second, consumers’ use of publicized quality information differs by patient group. Community-based patients have slightly larger responses to public reporting than hospital-discharged patients. Patients with functional limitations at the start of their care, at least among hospital-discharged patients, have a larger response to the reported functional outcome measure than those without functional limitations. In all cases of significant marginal effects, magnitudes are small. We conclude that the current public reporting approach is unlikely to have critical impacts on home health agency choice. Identifying and releasing quality information that is meaningful to consumers may help increase consumers’ use of public reports. PMID:26719047
Tight-binding calculation of single-band and generalized Wannier functions of graphene
NASA Astrophysics Data System (ADS)
Ribeiro, Allan Victor; Bruno-Alfonso, Alexys
Recent work has shown that a tight-binding approach associated with Wannier functions (WFs) provides an intuitive physical picture of the electronic structure of graphene. For the case of graphene, Marzari et al. displayed the calculated WFs and presented a comparison between the Wannier-interpolated bands and the bands generated by the density-functional code. Jung and MacDonald provided a tight-binding model for the π-bands of graphene that involves maximally localized Wannier functions (MLWFs). The mixing of the bands yields better localized WFs. In the present work, the MLWFs of graphene are calculated by combining the Quantum-ESPRESSO code with a tight-binding approach. The MLWFs of graphene are calculated from the Bloch functions obtained through a tight-binding approach that includes interaction and overlap parameters obtained by partially fitting the DFT bands. The phase of the Bloch functions of each band is appropriately chosen to produce MLWFs, and the same applies to the coefficients of their linear combination in the generalized case. The method allows for an intuitive understanding of the maximally localized WFs of graphene and shows excellent agreement with the literature. Moreover, it provides accurate results at reduced computational cost.
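For reference, the single-band Wannier construction and the spread functional whose minimization defines "maximally localized" are standard expressions (not specific to this abstract):

\[
W_{n\mathbf{R}}(\mathbf{r}) \;=\; \frac{V}{(2\pi)^{3}} \int_{\mathrm{BZ}}
  e^{-i\mathbf{k}\cdot\mathbf{R}}\, e^{\,i\varphi_{n}(\mathbf{k})}\,
  \psi_{n\mathbf{k}}(\mathbf{r})\, \mathrm{d}\mathbf{k},
\qquad
\Omega \;=\; \sum_{n} \big[ \langle \mathbf{r}^{2} \rangle_{n} - \langle \mathbf{r} \rangle_{n}^{2} \big],
\]

where the gauge phases φ_n(k) (and, in the generalized multi-band case, a unitary mixing matrix among bands) are the quantities chosen to minimize the spread Ω.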
Design for Reliability and Safety Approach for the NASA New Launch Vehicle
NASA Technical Reports Server (NTRS)
Safie, Fayssal, M.; Weldon, Danny M.
2007-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program intended for sending crew and cargo to the International Space Station (ISS), to the moon, and beyond. This program is called Constellation. As part of the Constellation program, NASA is developing new launch vehicles aimed at significantly increasing safety and reliability, reducing the cost of accessing space, and providing a growth path for manned space exploration. Achieving these goals requires a rigorous process that addresses reliability, safety, and cost upfront and throughout all the phases of the life cycle of the program. This paper discusses the "Design for Reliability and Safety" approach for the new NASA crew launch vehicle called ARES I. The ARES I is being developed by NASA Marshall Space Flight Center (MSFC) in support of the Constellation program. The ARES I consists of three major elements: a solid First Stage (FS), an Upper Stage (US), and a liquid Upper Stage Engine (USE). Stacked on top of the ARES I is the Crew Exploration Vehicle (CEV). The CEV consists of a Launch Abort System (LAS), Crew Module (CM), Service Module (SM), and a Spacecraft Adapter (SA). The CEV development is being led by NASA Johnson Space Center (JSC). Designing for high reliability and safety requires a good integrated working environment and a sound technical design approach. The "Design for Reliability and Safety" approach addressed in this paper discusses both the environment and the technical process put in place to support the ARES I design. To address the integrated working environment, the ARES I project office has established a risk-based design group called the "Operability Design and Analysis" (OD&A) group. This group is an integrated group intended to bring the engineering, design, and safety organizations together to optimize the system design for safety, reliability, and cost. On the technical side, the ARES I project has, through the OD&A environment, implemented a probabilistic approach to analyze and evaluate design uncertainties and understand their impact on safety, reliability, and cost. This paper focuses on the use of the various probabilistic approaches that have been pursued by the ARES I project. Specifically, the paper discusses an integrated functional probabilistic analysis approach that addresses upfront some key areas to support the ARES I Design Analysis Cycle (DAC) pre-Preliminary Design (PD) phase. This functional approach is a probabilistic physics-based approach that combines failure probabilities with system dynamics and engineering failure impact models to identify key system risk drivers and potential system design requirements. The paper also discusses other probabilistic risk assessment approaches planned by the ARES I project to support the PD phase and beyond.
NASA Astrophysics Data System (ADS)
Raei, Ehsan; Nikoo, Mohammad Reza; Pourshahabi, Shokoufeh
2017-08-01
In the present study, a BIOPLUME III simulation model is coupled with a non-dominated sorting genetic algorithm (NSGA-II)-based model for optimal design of an in situ groundwater bioremediation system, considering the preferences of stakeholders. The Ministry of Energy (MOE), Department of Environment (DOE), and National Disaster Management Organization (NDMO) are three stakeholders in the groundwater bioremediation problem in Iran. Based on the preferences of these stakeholders, the multi-objective optimization model tries to minimize: (1) cost; (2) the sum of contaminant concentrations that violate the standard; (3) contaminant plume fragmentation. The NSGA-II multi-objective optimization method gives Pareto-optimal solutions. A compromise solution is determined using fallback bargaining with impasse to achieve a consensus among the stakeholders. In this study, two different approaches are investigated and compared based on two different domains for the locations of injection and extraction wells. In the first approach, a limited number of predefined locations is considered, following previous similar studies. In the second approach, all possible points in the study area are investigated to find the optimal locations, arrangement, and flow rates of injection and extraction wells. Involvement of the stakeholders, investigation of all possible points instead of a limited number of well locations, and minimization of contaminant plume fragmentation during bioremediation are the main innovations of this research. In addition, the simulation period is divided into smaller time intervals for more efficient optimization. The Image Processing Toolbox in MATLAB® is utilized for calculation of the third objective function. In comparison with previous studies, cost is reduced using the proposed methodology. Dispersion of the contaminant plume is reduced in both presented approaches using the third objective function. Considering all possible points in the study area for determining the optimal locations of the wells in the second approach leads to more desirable results, i.e., reducing the contaminant concentrations to the standard level and cutting costs by 20% to 40%.
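As a minimal illustration of the selection logic behind NSGA-II-style multi-objective search, the sketch below filters candidate designs down to their Pareto (non-dominated) front over three minimized objectives; the random objective values are placeholders for the coupled BIOPLUME III evaluations.

```python
import numpy as np

def pareto_front(F):
    """Boolean mask of non-dominated rows of F; each row holds objective values
    (e.g., cost, standard violation, plume fragmentation), all to be minimized."""
    F = np.asarray(F, dtype=float)
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        for j in range(len(F)):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False   # candidate i is dominated by candidate j
                break
    return keep

# Placeholder objective values for 50 candidate well layouts.
rng = np.random.default_rng(7)
objectives = rng.uniform(size=(50, 3))
front = objectives[pareto_front(objectives)]
print(len(front), "non-dominated designs out of", len(objectives))
```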
Scalable Low-Cost Fabrication of Disposable Paper Sensors for DNA Detection
2015-01-01
Controlled integration of features that enhance the analytical performance of a sensor chip is a challenging task in the development of paper sensors. A critical issue in the fabrication of low-cost biosensor chips is the activation of the device surface in a reliable and controllable manner compatible with large-scale production. Here, we report stable, well-adherent, and repeatable site-selective deposition of bioreactive amine functionalities and biorepellant polyethylene glycol-like (PEG) functionalities on paper sensors by aerosol-assisted, atmospheric-pressure, plasma-enhanced chemical vapor deposition. This approach requires only 20 s of deposition time, compared to previous reports on cellulose functionalization, which take hours. A detailed analysis of the near-edge X-ray absorption fine structure (NEXAFS) and its sensitivity to the local electronic structure of the carbon and nitrogen functionalities, including σ*, π*, and Rydberg transitions at the C and N K-edges, is presented. Application of the plasma-processed paper sensors in DNA detection is also demonstrated. PMID:25423585
Scalable Low-Cost Fabrication of Disposable Paper Sensors for DNA Detection
Gandhiraman, Ram P.; Nordlund, Dennis; Jayan, Vivek; ...
2014-11-25
Controlled integration of features that enhance the analytical performance of a sensor chip is a challenging task in the development of paper sensors. A critical issue in the fabrication of low-cost biosensor chips is the activation of the device surface in a reliable and controllable manner compatible with large-scale production. Here, we report stable, well-adherent, and repeatable site-selective deposition of bioreactive amine functionalities and biorepellant polyethylene glycol-like (PEG) functionalities on paper sensors by aerosol-assisted, atmospheric-pressure, plasma-enhanced chemical vapor deposition. This approach requires only 20 s of deposition time, compared to previous reports on cellulose functionalization, which take hours. We present a detailed analysis of the near-edge X-ray absorption fine structure (NEXAFS) and its sensitivity to the local electronic structure of the carbon and nitrogen functionalities, including σ*, π*, and Rydberg transitions at the C and N K-edges. Lastly, application of the plasma-processed paper sensors in DNA detection is demonstrated.
Development of SiC Large Tapered Crystal Growth
NASA Technical Reports Server (NTRS)
Neudeck, Phil
2010-01-01
The majority of the very large potential benefits of wide band gap semiconductor power electronics have NOT been realized, due in large part to the high cost and high defect density of commercial wafers. Despite 20 years of development, the present SiC wafer growth approach has yet to deliver the majority of SiC's inherent performance and cost benefits to power systems. Commercial SiC power devices are significantly de-rated in order to function reliably due to the adverse effects of SiC crystal dislocation defects (thousands per sq cm) in the SiC wafer.
NASA Technical Reports Server (NTRS)
Mulhall, B. D. L.
1980-01-01
The results of this effort are presented in a manner for use by both the AIDS 3 Operational and Economic Feasibility subtasks as well as the Development of Alternative subtask. The approach taken was to identify the major functions that appear in AIDS 3 and then to determine which technologies would be needed for support. The technologies were then examined from the point of view of reliability, throughput, security, availability, cost and possible future trends. Whenever possible graphs are given to indicate projected costs of rapidly changing technologies.
Performance simulation for the design of solar heating and cooling systems
NASA Technical Reports Server (NTRS)
Mccormick, P. O.
1975-01-01
Suitable approaches for evaluating the performance and the cost of a solar heating and cooling system are considered, taking into account the value of a computer simulation concerning the entire system in connection with the large number of parameters involved. Operational relations concerning the collector efficiency in the case of a new improved collector and a reference collector are presented in a graph. Total costs for solar and conventional heating, ventilation, and air conditioning systems as a function of time are shown in another graph.
Monolithic Microwave Integrated Circuits Based on GaAs Mesfet Technology
NASA Astrophysics Data System (ADS)
Bahl, Inder J.
Advanced military microwave systems are demanding increased integration, reliability, radiation hardness, compact size and lower cost when produced in large volume, whereas the microwave commercial market, including wireless communications, mandates low cost circuits. Monolithic Microwave Integrated Circuit (MMIC) technology provides an economically viable approach to meeting these needs. In this paper the design considerations for several types of MMICs and their performance status are presented. Multifunction integrated circuits that advance the MMIC technology are described, including integrated microwave/digital functions and a highly integrated transceiver at C-band.
Systems and technologies for high-speed inter-office/datacenter interface
NASA Astrophysics Data System (ADS)
Sone, Y.; Nishizawa, H.; Yamamoto, S.; Fukutoku, M.; Yoshimatsu, T.
2017-01-01
Emerging requirements for inter-office/inter-datacenter short reach links for data center interconnects (DCI) and metro transport networks have led to various inter-office and inter-datacenter optical interface technologies. These technologies are bringing significant changes to systems and network architectures. In this paper, we present a system and ZR optical interface technologies for DCI and metro transport networks, then introduce the latest challenges facing the system framework. There are two trends in reach extension; one is to use Ethernet and the other is to use digital coherent technologies. The first approach achieves reach extension while using as many existing Ethernet components as possible. It offers low cost as it reuses the cost-effective components created for the large Ethernet market. The second approach adopts low-cost and low-power coherent DSPs that implement a minimal set of long-haul transmission functions. This paper introduces an architecture that integrates both trends. The architecture satisfies both datacom and telecom needs with a common control and management interface and automated configuration.
NASA Astrophysics Data System (ADS)
Way, Yusoff
2018-01-01
The main aim of this research is to develop a new prototype and to conduct a cost analysis of the existing roller clamp, which is one of the parts attached to intravenous (I.V.) tubing used in intravenous therapy medical devices. Before proceeding to manufacture the final product using Fused Deposition Modeling (FDM) technology, the data collected from a survey were analyzed using a Product Design Specifications approach. The selected concept was shown to have better quality, functions and criteria compared to the existing roller clamp, and the cost of fabricating the roller clamp prototype was calculated.
Jacobs, Christopher; Lambourne, Luke; Xia, Yu; Segrè, Daniel
2017-01-01
System-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.
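The contrast between the two metrics can be made concrete with a toy example. The dictionary "model" and reaction-counting "fitness" below are deliberately simplistic stand-ins for a genome-scale flux balance model and its biomass optimum, so only the isozyme bookkeeping is meaningful.

```python
# Toy, self-contained illustration of gene-loss cost vs. function-loss cost.
TOY_MODEL = {
    # reaction -> set of genes encoding isozymes that can each catalyze it
    "R1": {"g1"},
    "R2": {"g1", "g2"},   # g1 and g2 are isozymes for R2
    "R3": {"g2"},
}

def fitness(blocked_reactions):
    """Stand-in for an FBA biomass optimum: here, just the number of usable reactions."""
    return sum(1 for r in TOY_MODEL if r not in blocked_reactions)

def gene_loss_cost(gene):
    """Classical assumption: isozymes provide unlimited backup, so a reaction is
    blocked only if no remaining gene encodes it."""
    blocked = {r for r, genes in TOY_MODEL.items() if genes == {gene}}
    return fitness(set()) - fitness(blocked)

def function_loss_cost(gene):
    """Function-loss assumption: isozymes are completely non-redundant, so the gene
    is charged for every reaction it participates in."""
    blocked = {r for r, genes in TOY_MODEL.items() if gene in genes}
    return fitness(set()) - fitness(blocked)

for g in ("g1", "g2"):
    print(g, gene_loss_cost(g), function_loss_cost(g))
```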
Using Wannier functions to improve solid band gap predictions in density functional theory
Ma, Jie; Wang, Lin-Wang
2016-04-26
Enforcing a straight-line condition of the total energy upon removal/addition of fractional electrons on eigen states has been successfully applied to atoms and molecules for calculating ionization potentials and electron affinities, but fails for solids due to the extended nature of the eigen orbitals. Here we have extended the straight-line condition to the removal/addition of fractional electrons on Wannier functions constructed within the occupied/unoccupied subspaces. It removes the self-interaction energies of those Wannier functions, and yields accurate band gaps for solids compared to experiments. It does not have any adjustable parameters and the computational cost is at the DFT level. This method can also work for molecules, providing eigen energies in good agreement with experimental ionization potentials and electron affinities. Our approach can be viewed as an alternative to the standard LDA+U procedure.
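The straight-line condition being generalized here is the usual piecewise-linearity requirement on the total energy under fractional occupation; schematically (standard formulation, with the fractional electron now placed on a Wannier function w rather than an eigen state):

\[
E(N+\delta) \;=\; (1-\delta)\,E(N) + \delta\,E(N+1), \qquad 0 \le \delta \le 1,
\quad\Longrightarrow\quad
\frac{\partial^{2} E}{\partial f_{w}^{2}} \;=\; 0,
\]

so the correction enforces zero curvature of E with respect to the occupation f_w of each Wannier function, and gaps follow from total-energy differences such as E(N±1) − E(N).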
Dataflow computing approach in high-speed digital simulation
NASA Technical Reports Server (NTRS)
Ercegovac, M. D.; Karplus, W. J.
1984-01-01
New computational tools and methodologies for the digital simulation of continuous systems were explored. Programmability and cost-effective performance in multiprocessor organizations for real-time simulation were investigated. The approach is based on functional-style languages and data flow computing principles, which allow for the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing data flow languages and develop an experimental data flow language suitable for real-time simulation using multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of data flow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.
A soft computing-based approach to optimise queuing-inventory control problem
NASA Astrophysics Data System (ADS)
Alaghebandha, Mohammad; Hajipour, Vahid
2015-04-01
In this paper, a multi-product continuous review inventory control problem within a batch arrival queuing approach (MQr/M/1) is developed to find the optimal quantities of maximum inventory. The objective function is to minimise the summation of ordering, holding and shortage costs under warehouse space, service level and expected lost-sales shortage cost constraints from the retailer and warehouse viewpoints. Since the proposed model is Non-deterministic Polynomial-time hard, an efficient imperialist competitive algorithm (ICA) is proposed to solve the model. To validate the proposed ICA, both a genetic algorithm and a simulated annealing algorithm are utilised. In order to determine the best values of the algorithm parameters that result in a better solution, a fine-tuning procedure is executed. Finally, the performance of the proposed ICA is analysed using some numerical illustrations.
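The objective described above is the classical three-term inventory cost; in generic notation (ours, not the paper's), for products i = 1..m,

\[
\min_{R_{1},\dots,R_{m}}\;
TC \;=\; \sum_{i=1}^{m} \Big(
   \underbrace{\tfrac{A_{i} D_{i}}{Q_{i}}}_{\text{ordering}}
 + \underbrace{h_{i}\,\bar{I}_{i}(R_{i})}_{\text{holding}}
 + \underbrace{\pi_{i}\,\bar{B}_{i}(R_{i})}_{\text{shortage}}
\Big)
\quad \text{s.t.} \quad
\sum_{i} v_{i} R_{i} \le W, \ \text{service-level and lost-sales constraints},
\]

where R_i is the maximum inventory level, A_i, D_i and Q_i the ordering cost, demand and order quantity, h_i and π_i the unit holding and shortage costs, and \(\bar{I}_{i}, \bar{B}_{i}\) the expected on-hand inventory and backorders obtained from the queuing model.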
Assessing the costs of municipal solid waste treatment technologies in developing Asian countries.
Aleluia, João; Ferrão, Paulo
2017-11-01
The management of municipal solid waste (MSW) is one of the main costs incurred by local authorities in developing countries. According to some estimates, these costs can account for up to 50% of city government budgets. It is therefore of importance that policymakers, urban planners and practitioners have an adequate understanding of what these costs consist of, from collection to final waste disposal. This article focuses on a specific stage of the MSW value chain, the treatment of waste, and it aims to identify cost patterns associated with the implementation and operation of waste treatment approaches in developing Asian countries. An analysis of the capital (CAPEX) and operational expenditures (OPEX) of a number of facilities located in countries of the region was conducted based on a database gathering nearly 100 projects, which served as the basis for assessing four technology categories: composting, anaerobic digestion (AD), thermal treatment, and the production of refuse-derived fuel (RDF). Among these, it was found that the least costly to invest in, as a function of the capacity to process waste, are composting facilities, with an average CAPEX of 21,493 USD (2015) per ton of capacity. Conversely, at the upper end featured incineration plants, with an average CAPEX of 81,880 USD (2015) per ton, with this treatment approach ranking by and large as the most capital intensive of the four categories assessed. OPEX figures of the plants, normalized and analyzed in the form of OPEX/ton, were also found to be higher for incineration than for biological treatment methods, although on this component differences amongst the technology groups were less pronounced than those observed for CAPEX. While the results indicated the existence of distinct cost implications for available treatment approaches in the developing Asian context, the analysis also underscored the importance of understanding the local context as a means to properly identify the cost structure of each specific plant. Moreover, even though CAPEX and OPEX figures are important elements to assess the costs of a waste treatment system, these should not be considered on a standalone basis for decision making purposes. In complement to this internal cost dimension, the broader impacts - to the economy, society and the environment - resulting from the adoption of a certain treatment approach should be properly understood and, ideally, measured and expressed in monetary terms. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Cost of Ankylosing Spondylitis in the UK Using Linked Routine and Patient-Reported Survey Data
Cooksey, Roxanne; Husain, Muhammad J.; Brophy, Sinead; Davies, Helen; Rahman, Muhammad A.; Atkinson, Mark D.; Phillips, Ceri J.; Siebert, Stefan
2015-01-01
Background Ankylosing spondylitis (AS) is a chronic inflammatory arthritis which typically begins in early adulthood and impacts on healthcare resource utilisation and the ability to work. Previous studies examining the cost of AS have relied on patient-reported questionnaires based on recall. This study uses a combination of patient-reported and linked-routine data to examine the cost of AS in Wales, UK. Methods Participants in an existing AS cohort study (n = 570) completed questionnaires regarding work status, out-of-pocket expenses, visits to health professionals and disease severity. Participants gave consent for their data to be linked to routine primary and secondary care clinical datasets. Health resource costs were calculated using a bottom-up micro-costing approach. Human capital costs methods were used to estimate work productivity loss costs, particularly relating to work and early retirement. Regression analyses were used to account for age, gender, disease activity. Results The total cost of AS in the UK is estimated at £19016 per patient per year, calculated to include GP attendance, administration costs and hospital costs derived from routine data records, plus patient-reported non-NHS costs, out-of-pocket AS-related expenses, early retirement, absenteeism, presenteeism and unpaid assistance costs. The majority of the cost (>80%) was as a result of work-related costs. Conclusion The major cost of AS is as a result of loss of working hours, early retirement and unpaid carer’s time. Therefore, much of AS costs are hidden and not easy to quantify. Functional impairment is the main factor associated with increased cost of AS. Interventions which keep people in work to retirement age and reduce functional impairment would have the greatest impact on reducing costs of AS. The combination of patient-reported and linked routine data significantly enhanced the health economic analysis and this methodology that can be applied to other chronic conditions. PMID:26185984
The Cost of Ankylosing Spondylitis in the UK Using Linked Routine and Patient-Reported Survey Data.
Cooksey, Roxanne; Husain, Muhammad J; Brophy, Sinead; Davies, Helen; Rahman, Muhammad A; Atkinson, Mark D; Phillips, Ceri J; Siebert, Stefan
2015-01-01
Ankylosing spondylitis (AS) is a chronic inflammatory arthritis which typically begins in early adulthood and impacts on healthcare resource utilisation and the ability to work. Previous studies examining the cost of AS have relied on patient-reported questionnaires based on recall. This study uses a combination of patient-reported and linked-routine data to examine the cost of AS in Wales, UK. Participants in an existing AS cohort study (n = 570) completed questionnaires regarding work status, out-of-pocket expenses, visits to health professionals and disease severity. Participants gave consent for their data to be linked to routine primary and secondary care clinical datasets. Health resource costs were calculated using a bottom-up micro-costing approach. Human capital costs methods were used to estimate work productivity loss costs, particularly relating to work and early retirement. Regression analyses were used to account for age, gender, disease activity. The total cost of AS in the UK is estimated at £19016 per patient per year, calculated to include GP attendance, administration costs and hospital costs derived from routine data records, plus patient-reported non-NHS costs, out-of-pocket AS-related expenses, early retirement, absenteeism, presenteeism and unpaid assistance costs. The majority of the cost (>80%) was as a result of work-related costs. The major cost of AS is as a result of loss of working hours, early retirement and unpaid carer's time. Therefore, much of AS costs are hidden and not easy to quantify. Functional impairment is the main factor associated with increased cost of AS. Interventions which keep people in work to retirement age and reduce functional impairment would have the greatest impact on reducing costs of AS. The combination of patient-reported and linked routine data significantly enhanced the health economic analysis and this methodology that can be applied to other chronic conditions.
Qualification and Reliability for MEMS and IC Packages
NASA Technical Reports Server (NTRS)
Ghaffarian, Reza
2004-01-01
Advanced IC electronic packages are moving toward miniaturization through two key approaches, front-end and back-end processes, each with its own challenges. Successful use of more of the back-end process at the front end, e.g., microelectromechanical systems (MEMS) Wafer Level Package (WLP), enables reduced size and cost. Use of direct flip chip die is the most efficient approach if and when the issues of known good die and board/assembly are resolved. Wafer level packaging solves the issue of known good die by enabling package test, but it has its own limitations, e.g., I/O limitations, additional cost, and reliability. From the back-end approach, system-in-a-package (SIAP/SIP) development is a response to an increasing demand for package and die integration of different functions into one unit to reduce size and cost and improve functionality. MEMS add another challenging dimension to electronic packaging since they include moving mechanical elements. Conventional qualification and reliability approaches need to be modified and expanded in most cases in order to detect new, unknown failures. This paper will review four standards, already released or being developed, that specifically address the issues of qualification and reliability of assembled packages. Exposures to thermal cycles, monotonic bend testing, mechanical shock and drop are covered in these specifications. Finally, mechanical and thermal cycle qualification data generated for a MEMS accelerometer will be presented. The MEMS was an element of an inertial measurement unit (IMU) qualified for the NASA Mars Exploration Rovers (MERs), Spirit and Opportunity, which are currently roving the Martian surface.
Li, Mengdi; Fan, Juntao; Zhang, Yuan; Guo, Fen; Liu, Lusan; Xia, Rui; Xu, Zongxue; Wu, Fengchang
2018-05-15
Aiming to protect freshwater ecosystems, river ecological restoration has been brought into the research spotlight. However, it is challenging for decision makers to set appropriate objectives and select a combination of rehabilitation actions from numerous possible solutions to meet ecological, economic, and social demands. In this study, we developed a systematic approach to help make an optimal strategy for watershed restoration, which incorporated ecological security assessment and multi-objective optimization (MOO) into the planning process to enhance restoration efficiency and effectiveness. The river ecological security status was evaluated by using a pressure-state-function-response (PSFR) assessment framework, and MOO was achieved by searching for the Pareto optimal solutions via the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to balance tradeoffs between different objectives. Further, we clustered the searched solutions into three types in terms of different optimized objective function values in order to provide insightful information for decision makers. The proposed method was applied in an example rehabilitation project in the Taizi River Basin in northern China. The MOO result in the Taizi River presented a set of Pareto optimal solutions that were classified into three types: I - high ecological improvement, high cost and high economic benefits; II - medium ecological improvement, medium cost and medium economic benefits; III - low ecological improvement, low cost and low economic benefits. The proposed systematic approach can enhance the effectiveness of riverine ecological restoration projects and could provide a valuable reference for other ecological restoration planning. Copyright © 2018 Elsevier B.V. All rights reserved.
Politics, policy and payment--facilitators or barriers to person-centred rehabilitation?
Turner-Stokes, Lynne
This paper explores the tensions between politics and payment in providing affordable services that satisfy the public demand for patient-centred care. The two main approaches taken by the UK Government to curtail the spiralling costs of healthcare have been to focus development in priority areas and to cap spending through the introduction of a fixed-tariff episode-based funding system. The National Service Framework for Long Term Neurological Conditions embraces many laudable principles of person-centred management, but the 'one-size-fits all' approach to reimbursement potentially cuts right across these. A series of tools have been developed to determine complexity of rehabilitation needs that will support the development of banded tariffs. A practical approach is also offered to demonstrate the cost-efficiency of rehabilitation services for people with complex needs, and help to ensure that they are not excluded from treatment because of their higher treatment costs. Whilst responding to public demand for person-centred care, we must recognize the current financial pressure on healthcare systems. Clinicians will have greater credibility if they routinely collect and share outcomes that demonstrate the economic benefits of intervention, as well the impact on health, function and quality of life.
Chassin, David P.; Behboodi, Sahand; Djilali, Ned
2018-01-28
This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach, to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North America Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.
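Schematically, pricing ramping alongside energy turns the dispatch problem into a trajectory optimization of the following form (notation and the quadratic ramping penalty are illustrative, not taken from the article):

\[
\min_{P(t)} \; J \;=\; \int_{0}^{T} \Big[ c_{E}\,P(t) \;+\; c_{R}\,\dot{P}(t)^{2} \Big]\,\mathrm{d}t
\qquad \text{s.t.} \quad \int_{0}^{T} P(t)\,\mathrm{d}t = E_{\mathrm{req}},
\;\; P_{\min} \le P(t) \le P_{\max},
\]

whose optimality conditions yield the sub-hourly power trajectory in either the time or the frequency domain.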
The Functional Breakdown Structure (FBS) and Its Relationship to Life Cycle Cost
NASA Technical Reports Server (NTRS)
DeHoff, Bryan; Levack, Danie J. H.; Rhodes, Russell E.
2009-01-01
The Functional Breakdown Structure (FBS) is a structured, modular breakdown of every function that must be addressed to perform a generic mission. It is also usable for any subset of the mission. Unlike a Work Breakdown Structure (WBS), the FBS is a function-oriented tree, not a product-oriented tree. The FBS details not products, but operations or activities that should be performed. The FBS is not tied to any particular architectural implementation because it is a listing of the needed functions, not the elements, of the architecture. The FBS for Space Transportation Systems provides a universal hierarchy of required functions, which include ground and space operations as well as infrastructure - it provides total visibility of the entire mission. By approaching the systems engineering problem from the functional view, instead of the element or hardware view, the SPST has created an exhaustive list of potential requirements which the architecture designers can use to evaluate the completeness of their designs. This is a new approach that will provide full accountability of all functions required to perform the planned mission. It serves as a giant checklist to be sure that no functions are omitted, especially in the early architectural design phase. A significant characteristic of a FBS is that if architecture options are compared using this approach, then any missing or redundant elements of each option will be identified. Consequently, valid Life Cycle Cost (LCC) comparisons can be made. For example, one architecture option might not need a particular function while another option does. One option may have individual elements to perform each of three functions while another option needs only one element to perform the three functions. Once an architecture has been selected, the FBS will serve as a guide in development of the work breakdown structure, provide visibility of those technologies that need to be further developed to perform required functions, and help identify the personnel skills required to develop and operate the architecture. It will also allow the systems engineering activities to totally integrate each discipline to the maximum extent possible and optimize at the total system level, thus avoiding optimizing at the element level (stove-piping). In addition, it furnishes a framework that will help prevent over- or under-specifying requirements because all functions are identified and all elements are aligned to functions.
ERIC Educational Resources Information Center
Beath, Cynthia Mathis; Straub, Detmar W.
1991-01-01
Explores where the responsibility for information resources management (IRM) can lie, identifying entities which might carry IRM tasks: (1) individuals; (2) departments; (3) institutions; and (4) markets. It is argued that the IRM function should be located at the department level, and that associated departmental costs may be overshadowed by the…
Exploring the Psycho-Social Therapies Through the Personalities of Effective Therapists.
ERIC Educational Resources Information Center
Dent, James K.; Furse, George A.
Several specific research approaches are compared with regard to cost-effectiveness, types of disorders to which they best respond, general strategies, and therapist personality. Replicated findings include: (1) support for both the functional reversal and semantic reversal of the "A-B Scale;" (2) characterization of therapists who are effective…
Software Size Estimation Using Expert Estimation: A Fuzzy Logic Approach
ERIC Educational Resources Information Center
Stevenson, Glenn A.
2012-01-01
For decades software managers have been using formal methodologies such as the Constructive Cost Model and Function Points to estimate the effort of software projects during the early stages of project development. While some research shows these methodologies to be effective, many software managers feel that they are overly complicated to use and…
ERIC Educational Resources Information Center
Olsen, Marvin E.; Merwin, Donna J.
Broadly conceived, social impacts refer to all changes in the structure and functioning of patterned social ordering that occur in conjunction with an environmental, technological, or social innovation or alteration. Departing from the usual cost-benefit analysis approach, a new methodology proposes conducting social impact assessment grounded in…
A Sparse Bayesian Approach for Forward-Looking Superresolution Radar Imaging
Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2017-01-01
This paper presents a sparse superresolution approach for high cross-range resolution imaging of forward-looking scanning radar based on the Bayesian criterion. First, a novel forward-looking signal model is established as the product of the measurement matrix and the cross-range target distribution, which is more accurate than the conventional convolution model. Then, based on the Bayesian criterion, the widely-used sparse regularization is considered as the penalty term to recover the target distribution. The derivation of the cost function is described, and finally, an iterative expression for minimizing this function is presented. Additionally, this paper discusses how to estimate the single parameter of the Gaussian noise. With the advantage of a more accurate model, the proposed sparse Bayesian approach enjoys a lower model error. Meanwhile, when compared with conventional superresolution methods, the proposed approach shows high cross-range resolution and small location error. The superresolution results for a simulated point target, scene data, and real measured data are presented to demonstrate the superior performance of the proposed approach. PMID:28604583
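The Bayesian sparse recovery step described above corresponds, in generic MAP form (notation ours, with an l1 sparsity penalty chosen for illustration), to

\[
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}}\;
\frac{1}{2\sigma^{2}}\,\lVert \mathbf{y} - \mathbf{A}\mathbf{x} \rVert_{2}^{2}
\;+\; \lambda\,\lVert \mathbf{x} \rVert_{1},
\]

where y is the received echo data, A the measurement matrix of the forward-looking signal model, x the cross-range target distribution, and σ² the Gaussian noise variance to be estimated; iterative reweighting of such a cost yields update expressions of the kind referred to in the abstract.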
Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir
2018-04-10
We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
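The force-matching step can be summarized by a generic objective of the following form (notation illustrative, not the paper's): the adjustable DFTB parameters θ are fit so that DFTB forces track reference DFT forces over a short trajectory,

\[
\boldsymbol{\theta}^{\star} \;=\; \arg\min_{\boldsymbol{\theta}}
\sum_{i=1}^{N_{\mathrm{frames}}} \sum_{a=1}^{N_{\mathrm{atoms}}}
\big\lVert \mathbf{F}_{a}^{\mathrm{DFTB}}(\boldsymbol{\theta};\mathbf{R}_{i})
 - \mathbf{F}_{a}^{\mathrm{DFT}}(\mathbf{R}_{i}) \big\rVert^{2},
\]

after which the fitted model is used for the long free-energy sampling that would be prohibitively expensive at the DFT level.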
Superpixel Cut for Figure-Ground Image Segmentation
NASA Astrophysics Data System (ADS)
Yang, Michael Ying; Rosenhahn, Bodo
2016-06-01
Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as Min-Cut. Therefore, we propose a novel cost function that simultaneously minimizes the inter-class similarity while maximizing the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, which requires no high-level knowledge such as shape priors and scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate our proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.
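In generic form (our notation), the cost being optimized trades inter-class against intra-class similarity over the superpixel graph G = (V, E) with pairwise similarities w_ij:

\[
\min_{S \subseteq V}\; E(S) \;=\;
\sum_{i \in S,\; j \in V\setminus S} w_{ij}
\;-\; \lambda \sum_{i,j \in S} w_{ij},
\]

so the foreground set S cuts weak ties to the background while keeping strongly similar superpixels together; a parametric sweep over the trade-off parameter λ is one standard way to optimize such a family of cuts.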
Cost analysis in support of minimum energy standards for clothes washers and dryers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-02-02
The results of the cost analysis of energy conservation design options for laundry products are presented. The analysis was conducted using two approaches. The first is directed toward the development of industrial engineering cost estimates of each energy conservation option. This approach results in the estimation of manufacturers' costs. The second approach is directed toward determining the market price differential of energy conservation features. The results of this approach are shown. The market cost represents the cost to the consumer; it is the final cost, and therefore includes distribution costs as well as manufacturing costs.
Plug-and-play design approach to smart harness for modular small satellites
NASA Astrophysics Data System (ADS)
Mughal, M. Rizwan; Ali, Anwar; Reyneri, Leonardo M.
2014-02-01
A typical satellite involves many different components that vary in bandwidth demand. Sensors that require a very low data rate may reside on a simple two- or three-wire interface such as I2C, SPI, etc. Complex sensors that require high data rate and bandwidth may reside on an optical interface. The AraMiS architecture is an enhanced-capability architecture with different satellite configurations. While keeping the low-cost and COTS approach of CubeSats, it extends the modularity concept to different satellite shapes and sizes. But modularity moves beyond the mechanical structure: the tiles also have thermo-mechanical, harness and signal-processing functionalities. Further modularizing the system, every tile can also host a variable number of small sensors, actuators or payloads, connected using a plug-and-play approach. Every subsystem is housed on a small daughter board; it is supplied by the main tile with power and data distribution functions, power and data harness, and mechanical support, and is attached and interconnected with space-grade spring-loaded connectors. The tile software is also modular and allows quick adaptation to specific subsystems. The basic software for the CPU is properly hardened to guarantee a high level of radiation tolerance at very low cost.
Combined EDL-Mobility Planning for Planetary Missions
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki; Balaram, Bob
2011-01-01
This paper presents an analysis framework for planetary missions that have coupled mobility and EDL (Entry-Descent-Landing) systems. Traditional systems engineering approaches to mobility missions such as MERs (Mars Exploration Rovers) and MSL (Mars Science Laboratory) study the EDL system and the mobility system independently, and do not perform explicit trade-offs between them or minimize risk for the overall system. A major challenge is that EDL operation is inherently uncertain and its analysis results, such as the landing footprint, are described using a PDF (Probability Density Function). The proposed approach first builds a mobility cost-to-go map that encodes the driving cost from any point on the map to a science target location. The cost could include a variety of metrics such as traverse distance, time, wheel rotation on soft soil, and closeness to hazards. It then convolves the mobility cost-to-go map with the landing PDF given by the EDL system, yielding a histogram of driving cost that can be used to evaluate the overall risk of the mission. By capturing the coupling between EDL and mobility explicitly, this analysis framework enables quantitative trade-offs between EDL and mobility system performance, as well as the characterization of risks in a statistical way. The simulation results are presented with realistic Mars terrain data.
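A minimal NumPy sketch of the convolution step described above follows; the Gaussian landing ellipse and the distance-based cost-to-go map are placeholder assumptions, not mission data.

```python
import numpy as np

# Illustrative grids: a mobility cost-to-go map (e.g., drive cost from each cell
# to the science target) and a landing-probability map from the EDL analysis.
ny, nx = 200, 200
yy, xx = np.mgrid[0:ny, 0:nx]
cost_to_go = np.hypot(yy - 150, xx - 60).astype(float)            # farther = costlier drive
landing_pdf = np.exp(-(((yy - 100) / 30.0)**2 + ((xx - 100) / 50.0)**2))
landing_pdf /= landing_pdf.sum()                                    # normalize to a PDF

# Expected drive cost of the coupled EDL + mobility system, plus the
# distribution (histogram) of drive cost induced by landing dispersion.
expected_cost = float(np.sum(landing_pdf * cost_to_go))
hist, edges = np.histogram(cost_to_go, bins=30, weights=landing_pdf)
p_long_traverse = float(landing_pdf[cost_to_go > 120.0].sum())      # risk of a long drive
print(expected_cost, p_long_traverse)
```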
Toxicity Minimized Cryoprotectant Addition and Removal Procedures for Adherent Endothelial Cells
Davidson, Allyson Fry; Glasscock, Cameron; McClanahan, Danielle R.; Benson, James D.; Higgins, Adam Z.
2015-01-01
Ice-free cryopreservation, known as vitrification, is an appealing approach for banking of adherent cells and tissues because it prevents dissociation and morphological damage that may result from ice crystal formation. However, current vitrification methods are often limited by the cytotoxicity of the concentrated cryoprotective agent (CPA) solutions that are required to suppress ice formation. Recently, we described a mathematical strategy for identifying minimally toxic CPA equilibration procedures based on the minimization of a toxicity cost function. Here we provide direct experimental support for the feasibility of these methods when applied to adherent endothelial cells. We first developed a concentration- and temperature-dependent toxicity cost function by exposing the cells to a range of glycerol concentrations at 21°C and 37°C, and fitting the resulting viability data to a first order cell death model. This cost function was then numerically minimized in our state constrained optimization routine to determine addition and removal procedures for 17 molal (mol/kg water) glycerol solutions. Using these predicted optimal procedures, we obtained 81% recovery after exposure to vitrification solutions, as well as successful vitrification with the relatively slow cooling and warming rates of 50°C/min and 130°C/min. In comparison, conventional multistep CPA equilibration procedures resulted in much lower cell yields of about 10%. Our results demonstrate the potential for rational design of minimally toxic vitrification procedures and pave the way for extension of our optimization approach to other adherent cell types as well as more complex systems such as tissues and organs. PMID:26605546
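A minimal sketch of the two computational ingredients described above, fitting a first-order cell-death model to viability data and scoring a concentration trajectory by its accumulated toxicity cost, is shown below. The viability data, fitted parameters, and the two example ramps are hypothetical placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical viability data: fraction surviving after exposure to glycerol
# at concentration C (mol/kg water) for time t (min).
C = np.array([5.0, 5.0, 10.0, 10.0, 15.0, 15.0])
t = np.array([10.0, 30.0, 10.0, 30.0, 10.0, 30.0])
S = np.array([0.95, 0.85, 0.80, 0.55, 0.50, 0.15])

def survival(X, k0, alpha):
    """First-order cell-death model: dS/dt = -k0 * C**alpha * S."""
    conc, time = X
    return np.exp(-k0 * conc**alpha * time)

(k0, alpha), _ = curve_fit(survival, (C, t), S, p0=(1e-4, 1.5))

def toxicity_cost(conc_profile, dt):
    """Cumulative toxicity cost: sum of k0 * C(t)**alpha * dt along a protocol."""
    return float(np.sum(k0 * conc_profile**alpha * dt))

# Compare two hypothetical equilibration ramps reaching 17 molal glycerol.
dt = 0.5                                  # minutes per step
fast = np.linspace(0.0, 17.0, 40)         # rapid ramp, short exposure
slow = np.linspace(0.0, 17.0, 120)        # gentler ramp, longer exposure
print(toxicity_cost(fast, dt), toxicity_cost(slow, dt))
```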
Estimating productivity costs using the friction cost approach in practice: a systematic review.
Kigozi, Jesse; Jowett, Sue; Lewis, Martyn; Barton, Pelham; Coast, Joanna
2016-01-01
The choice of the most appropriate approach to valuing productivity loss has received much debate in the literature. The friction cost approach has been proposed as a more appropriate alternative to the human capital approach when valuing productivity loss, although its application remains limited. This study reviews application of the friction cost approach in health economic studies and examines how its use varies in practice across different country settings. A systematic review was performed to identify economic evaluation studies that have estimated productivity costs using the friction cost approach and published in English from 1996 to 2013. A standard template was developed and used to extract information from studies meeting the inclusion criteria. The search yielded 46 studies from 12 countries. Of these, 28 were from the Netherlands. Thirty-five studies reported the length of friction period used, with only 16 stating explicitly the source of the friction period. Nine studies reported the elasticity correction factor used. The reported friction cost approach methods used to derive productivity costs varied in quality across studies from different countries. Few health economic studies have estimated productivity costs using the friction cost approach. The estimation and reporting of productivity costs using this method appears to differ in quality by country. The review reveals gaps and lack of clarity in reporting of methods for friction cost evaluation. Generating reporting guidelines and country-specific parameters for the friction cost approach is recommended if increased application and accuracy of the method is to be realized.
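For readers unfamiliar with the mechanics of the method, a minimal sketch of a friction-cost calculation for a single absence episode follows; the friction period, elasticity correction factor, and wage are illustrative assumptions, and country-specific values should be substituted in practice.

```python
# Friction cost approach: productivity losses accrue only until a replacement
# worker is productive (the friction period), scaled by an elasticity factor
# reflecting internal labour reserves.

def friction_cost(absence_days, daily_wage, friction_period_days=85, elasticity=0.8):
    """Productivity cost of one sick-leave episode under the friction cost approach."""
    return min(absence_days, friction_period_days) * daily_wage * elasticity

# A 120-day absence is capped at the friction period; a 30-day absence is not.
print(friction_cost(120, daily_wage=180.0))   # 85 * 180 * 0.8
print(friction_cost(30, daily_wage=180.0))    # 30 * 180 * 0.8
```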
NASA Astrophysics Data System (ADS)
Peckerar, Martin C.; Marrian, Christie R.
1995-05-01
Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
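The regularizer idea can be illustrated on a toy one-dimensional dose-correction problem; the Gaussian blur operator standing in for the proximity matrix, the Tikhonov weight and the projection to non-negative doses are all assumptions for illustration, not the authors' model.

```python
# Toy sketch: solve A @ d ~= t for a dose profile d by gradient descent on a
# Tikhonov-regularized cost, clipping to non-negative doses each step so the
# "negative dose" problem never arises. A is a toy blur, not an e-beam model.
import numpy as np

n = 64
x = np.arange(n)
A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)                 # toy proximity operator
t = np.zeros(n)
t[20:30] = 1.0
t[40:42] = 1.0                                    # target exposure pattern

lam, step = 1e-3, 1.0
d = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ d - t) + lam * d            # gradient of regularized cost
    d = np.clip(d - step * grad, 0.0, None)       # projected, non-negative step

print("residual norm:", np.linalg.norm(A @ d - t))
```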
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that approximates the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
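A minimal sketch of a Tikhonov-regularized alternating least-squares fit of a low-rank surrogate, in the spirit of the separated-representation construction described above; the synthetic data matrix, rank and regularization weight are assumptions.

```python
# Fit Y ~= U @ V.T by alternating regularized least squares (illustrative).
import numpy as np

rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 30)) \
    + 0.01 * rng.standard_normal((200, 30))       # nearly rank-5 synthetic data

r, lam = 5, 1e-2
U = rng.standard_normal((Y.shape[0], r))
V = rng.standard_normal((Y.shape[1], r))
I = np.eye(r)

for _ in range(50):
    # fix V and solve the Tikhonov-regularized problem for U, then swap roles
    U = Y @ V @ np.linalg.inv(V.T @ V + lam * I)
    V = Y.T @ U @ np.linalg.inv(U.T @ U + lam * I)

print("relative error:", np.linalg.norm(Y - U @ V.T) / np.linalg.norm(Y))
```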
Bradley, Steven M; Strauss, Craig E; Ho, P Michael
2017-08-01
Healthcare value, defined as health outcomes achieved relative to the costs of care, has been proposed as a unifying approach to measure improvements in the quality and affordability of healthcare. Although value is of increasing interest to payers, many providers remain unfamiliar with how value differs from other approaches to the comparison of cost and outcomes (ie, cost-effectiveness analysis). While cost-effectiveness studies can be used by policy makers and payers to inform decisions about coverage and reimbursement for new therapies, the assessment of healthcare value can guide improvements in the delivery of healthcare to achieve better outcomes at lower cost. Comparison on value allows for the identification of healthcare delivery organisations or care delivery settings where patient outcomes have been optimised at a lower cost. Gaps remain in the measurement of healthcare value, particularly as it relates to patient-reported health status (symptoms, functional status and health-related quality of life). The use of technology platforms that capture health status measures with minimal disruption to clinical workflow (ie, web portals, automated telephonic systems and tablets to facilitate capture outside of in-person clinical interaction) is facilitating use of health status measures to improve clinical care and optimise patient outcomes. Furthermore, the use of a value framework has catalysed quality improvement efforts and research to seek better patient outcomes at lower cost. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Xu, Enhua; Li, Shuhua
2015-03-07
An externally corrected CCSDt (coupled cluster with singles, doubles, and active triples) approach employing four- and five-body clusters from the complete active space self-consistent field (CASSCF) wave function (denoted as ecCCSDt-CASSCF) is presented. The quadruple and quintuple excitation amplitudes within the active space are extracted from the CASSCF wave function and then fed into the CCSDt-like equations, which can be solved in an iterative way as the standard CCSDt equations. With a size-extensive CASSCF reference function, the ecCCSDt-CASSCF method is size-extensive. When the CASSCF wave function is readily available, the computational cost of the ecCCSDt-CASSCF method scales as the popular CCSD method (if the number of active orbitals is small compared to the total number of orbitals). The ecCCSDt-CASSCF approach has been applied to investigate the potential energy surface for the simultaneous dissociation of two O-H bonds in H2O, the equilibrium distances and spectroscopic constants of 4 diatomic molecules (F2(+), O2(+), Be2, and NiC), and the reaction barriers for the automerization reaction of cyclobutadiene and the Cl + O3 → ClO + O2 reaction. In most cases, the ecCCSDt-CASSCF approach can provide better results than the CASPT2 (second order perturbation theory with a CASSCF reference function) and CCSDT methods.
Sanclemente-Ansó, Carmen; Bosch, Xavier; Salazar, Albert; Moreno, Ramón; Capdevila, Cristina; Rosón, Beatriz; Corbella, Xavier
2016-05-01
Quick diagnosis units (QDUs) are a promising alternative to conventional hospitalization for the diagnosis of suspected serious diseases, most commonly cancer and severe anemia. Although QDUs are as effective as hospitalization in reaching a timely diagnosis, a full economic evaluation comparing both approaches has not been reported. To evaluate the costs of QDU vs. conventional hospitalization for the diagnosis of cancer and anemia using a cost-minimization analysis on the proven assumption that health outcomes of both approaches were equivalent. Patients referred to the QDU of Bellvitge University Hospital of Barcelona over 51 months with a final diagnosis of severe anemia (unrelated to malignancy), lymphoma, and lung cancer were compared with patients hospitalized for workup with the same diagnoses. The total cost per patient until diagnosis was analyzed. Direct and non-direct costs of QDU and hospitalization were compared. Time to diagnosis in QDU patients (n=195) and length-of-stay in hospitalized patients (n=237) were equivalent. There were considerable cost savings compared with hospitalization. Highest savings for the three groups were related to fixed direct costs of hospital stays (66% of total savings). Savings related to fixed non-direct costs of structural and general functioning were 33% of total savings. Savings related to variable direct costs of investigations were 1% of total savings. Overall savings relative to hospitalization across all patients were €867,719.31. QDUs appear to be a cost-effective resource for avoiding unnecessary hospitalization in patients with anemia and cancer. Internists, hospital executives, and healthcare authorities should consider establishing this model elsewhere. Copyright © 2015. Published by Elsevier B.V.
Liquid on Paper: Rapid Prototyping of Soft Functional Components for Paper Electronics.
Han, Yu Long; Liu, Hao; Ouyang, Cheng; Lu, Tian Jian; Xu, Feng
2015-07-01
This paper describes a novel approach to fabricate paper-based electric circuits consisting of a paper matrix embedded with three-dimensional (3D) microchannels and liquid metal. Leveraging the high electric conductivity and good flowability of liquid metal, and the metallophobic property of paper, it is possible to preserve the electrical and mechanical functionality of the circuit even after a thousand cycles of deformation. Embedding liquid metal into a paper matrix is a promising method to rapidly fabricate low-cost, disposable, and soft electric circuits for electronics. As a demonstration, we designed a programmable displacement transducer and applied it as variable resistors and pressure sensors. The unique metallophobic property, combined with softness, low cost and light weight, makes paper an attractive alternative to other materials in which liquid metal is currently embedded.
An approach to rescheduling activities based on determination of priority and disruptivity
NASA Technical Reports Server (NTRS)
Sponsler, Jeffrey L.; Johnston, Mark D.
1990-01-01
A constraint-based scheduling system called SPIKE is being used to create long term schedules for the Hubble Space Telescope. Feedback from the spacecraft or from other ground support systems may invalidate some scheduling decisions, and the activities concerned must then be reconsidered. A rescheduling-priority function is defined which, for a given activity, performs a heuristic analysis and produces a relative numerical value used to rank all such activities in the order in which they should be rescheduled. A disruptivity function is also defined, which places a relative numeric value on how much a pre-existing schedule would have to change in order to reschedule an activity. Using these functions, two algorithms (a stochastic neural network approach and an exhaustive search approach) are proposed to find the best place to reschedule an activity. Prototypes were implemented, and preliminary testing reveals that the exhaustive technique produces only marginally better results at much greater computational cost.
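A hedged sketch of how priority and disruptivity heuristics might rank activities and candidate slots; the attributes, weights and 24-slot window below are hypothetical, not SPIKE's actual functions.

```python
# Hypothetical heuristics only: rank activities by a rescheduling-priority
# score, then pick the candidate start slot that disrupts the existing
# schedule least. Fields, weights and the slot window are made up.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    scientific_rank: float      # 0..1, higher means more important
    days_to_deadline: float
    constraints_violated: int

def rescheduling_priority(a: Activity) -> float:
    # more important, closer to deadline, more violations -> reschedule sooner
    return (2.0 * a.scientific_rank
            + 1.0 / max(a.days_to_deadline, 1.0)
            + 0.5 * a.constraints_violated)

def disruptivity(slot_start: float, existing_starts: list) -> float:
    # how many already-scheduled activities would have to move
    return sum(abs(slot_start - s) < 1.0 for s in existing_starts)

existing = [2.0, 5.0, 9.0]
acts = [Activity("obs-A", 0.9, 3, 1), Activity("obs-B", 0.4, 30, 0)]

for a in sorted(acts, key=rescheduling_priority, reverse=True):
    best_slot = min(range(24), key=lambda s: disruptivity(s, existing))
    print(a.name, "priority:", round(rescheduling_priority(a), 2),
          "least disruptive slot:", best_slot)
```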
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witt, Adam M; Smith, Brennan T
Small hydropower plants supply reliable renewable energy to the grid, though few new plants have been developed in the United States over the past few decades due to complex environmental challenges and poor project economics. This paper describes the current landscape of small hydropower development, and introduces a new approach to facility design that co-optimizes the extraction of hydroelectric power from a stream with other important environmental functions such as fish, sediment, and recreational passage. The approach considers hydropower facilities as an integrated system of standardized interlocking modules, designed to sustain stream functions, generate power, and interface with the streambed. It is hypothesized that this modular eco-design approach, when guided by input from the broader small hydropower stakeholder community, can lead to cost savings across the facility, reduced licensing and approval timelines, and ultimately, to enhanced resiliency through improved environmental performance over the lifetime of the project.
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, and direct P- and S-waves propagating in a layered velocity model, characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum indeed depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized adopting a probabilistic approach for the parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and we then explore the joint a posteriori probability density function associated with the cost function around this minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
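A small sketch of the global exploration step, assuming a simplified Brune-type spectral model, synthetic data and SciPy's basin-hopping routine; the parameterization and values are illustrative, not the authors' implementation.

```python
# Fit a simplified Brune-type log-spectrum to synthetic data with an
# L2 misfit, using basin hopping for the global search (illustrative).
import numpy as np
from scipy.optimize import basinhopping

f = np.logspace(-1, 1.5, 100)                     # frequency axis (Hz)

def model(p):
    log_m0, log_fc, gamma, t_star = p
    fc = 10.0 ** log_fc
    return (log_m0 - np.log10(1.0 + (f / fc) ** gamma)
            - np.pi * f * t_star / np.log(10.0))  # attenuation term in log10

truth = [17.0, np.log10(2.0), 2.0, 0.02]
obs = model(truth) + 0.05 * np.random.default_rng(1).standard_normal(f.size)

def misfit(p):
    return np.sum((model(p) - obs) ** 2)          # L2-norm cost function

res = basinhopping(misfit, x0=[16.0, 0.0, 2.0, 0.01], niter=50,
                   minimizer_kwargs={"method": "Nelder-Mead"})
print("best fit (log M0, log fc, gamma, t*):", res.x)
```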
Laboratory automation: trajectory, technology, and tactics.
Markin, R S; Whalen, S A
2000-05-01
Laboratory automation is in its infancy, following a path parallel to the development of laboratory information systems in the late 1970s and early 1980s. Changes on the horizon in healthcare and clinical laboratory service that affect the delivery of laboratory results include the increasing age of the population in North America, the implementation of the Balanced Budget Act (1997), and the creation of disease management companies. Major technology drivers include outcomes optimization and phenotypically targeted drugs. Constant cost pressures in the clinical laboratory have forced diagnostic manufacturers into less than optimal profitability states. Laboratory automation can be a tool for the improvement of laboratory services and may decrease costs. The key to improvement of laboratory services is implementation of the correct automation technology. The design of this technology should be driven by required functionality. Automation design issues should be centered on the understanding of the laboratory and its relationship to healthcare delivery and the business and operational processes in the clinical laboratory. Automation design philosophy has evolved from a hardware-based approach to a software-based approach. Process control software to support repeat testing, reflex testing, and transportation management, and overall computer-integrated manufacturing approaches to laboratory automation implementation are rapidly expanding areas. It is clear that hardware and software are functionally interdependent and that the interface between the laboratory automation system and the laboratory information system is a key component. The cost-effectiveness of automation solutions suggested by vendors, however, has been difficult to evaluate because the number of automation installations is small and the precision with which operational data have been collected to determine payback is suboptimal. The trend in automation has moved from total laboratory automation to a modular approach, from a hardware-driven system to process control, from a one-of-a-kind novelty toward a standardized product, and from an in vitro diagnostics novelty to a marketing tool. Multiple vendors are present in the marketplace, many of whom are in vitro diagnostics manufacturers providing an automation solution coupled with their instruments, whereas others are focused automation companies. Automation technology continues to advance, acceptance continues to climb, and payback and cost justification methods are developing.
NASA Astrophysics Data System (ADS)
Sutrisno, Widowati, Tjahjana, R. Heru
2017-12-01
The future cost in many industrial problems is inherently uncertain, so a mathematical analysis of problems with uncertain cost is needed. In this article, we deal with fuzzy expected value analysis to solve an integrated supplier selection problem with uncertain cost, where the cost uncertainty is modeled by a fuzzy variable. We formulate the mathematical model of the problem as a fuzzy expected value based quadratic optimization with a total cost objective function and solve it using expected value based fuzzy programming. In the numerical examples, the supplier selection problem was solved: the optimal supplier was selected for each time period, the optimal volume of each product to be purchased from each supplier in each time period was determined, and the product stock level was controlled so that it followed the given reference level.
Cost effectiveness of robotic mitral valve surgery.
Moss, Emmanuel; Halkos, Michael E
2017-01-01
Significant technological advances have led to an impressive evolution in mitral valve surgery over the last two decades, allowing surgeons to safely perform less invasive operations through the right chest. Most new technology comes with an increased upfront cost that must be measured against postoperative savings and other advantages such as decreased perioperative complications, faster recovery, and earlier return to preoperative level of functioning. The Da Vinci robot is an example of such a technology, combining the significant benefits of minimally invasive surgery with a "gold standard" valve repair. Although some have reported that robotic surgery is associated with increased overall costs, there is literature suggesting that efficient perioperative care and shorter lengths of stay can offset the increased capital and intraoperative expenses. While data on current cost is important to consider, one must also take into account future potential value resulting from technological advancement when evaluating cost-effectiveness. Future refinements that will facilitate more effective surgery, coupled with declining cost of technology will further increase the value of robotic surgery compared to traditional approaches.
Ecological connectivity networks in rapidly expanding cities.
Nor, Amal Najihah M; Corstanje, Ron; Harris, Jim A; Grafius, Darren R; Siriwardena, Gavin M
2017-06-01
Urban expansion increases fragmentation of the landscape. In effect, fragmentation decreases connectivity, causes green space loss and impacts upon the ecology and function of green space. Restoration of the functionality of green space often requires restoring the ecological connectivity of this green space within the city matrix. However, identifying ecological corridors that integrate different structural and functional connectivity of green space remains vague. Assessing connectivity for developing an ecological network by using efficient models is essential to improve these networks under rapid urban expansion. This paper presents a novel methodological approach to assess and model connectivity for the Eurasian tree sparrow (Passer montanus) and Yellow-vented bulbul (Pycnonotus goiavier) in three cities (Kuala Lumpur, Malaysia; Jakarta, Indonesia and Metro Manila, Philippines). The approach identifies potential priority corridors for ecological connectivity networks. The study combined circuit models, connectivity analysis and least-cost models to identify potential corridors by integrating structure and function of green space patches to provide reliable ecological connectivity network models in the cities. Relevant parameters such as landscape resistance and green space structure (vegetation density, patch size and patch distance) were derived from an expert and literature-based approach based on the preference of bird behaviour. The integrated models allowed the assessment of connectivity for both species using different measures of green space structure revealing the potential corridors and least-cost pathways for both bird species at the patch sites. The implementation of improvements to the identified corridors could increase the connectivity of green space. This study provides examples of how combining models can contribute to the improvement of ecological networks in rapidly expanding cities and demonstrates the usefulness of such models for biodiversity conservation and urban planning.
Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.
2004-01-01
Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite-differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached a full maturity level for production codes, especially in parallel computing environments.
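The complex-variable technique referenced in the title can be illustrated with the complex-step derivative, which avoids the subtractive cancellation of finite differences; the test function and step sizes below are arbitrary.

```python
# Complex-step derivative: for real-analytic f, df/dx ~ Im(f(x + i*h)) / h,
# with no subtraction, so h can be made extremely small. Compared here with
# central differencing on an arbitrary smooth test function.
import numpy as np

def f(x):
    return np.exp(x) * np.sin(x) / (1.0 + x**2)

x0 = 1.3
complex_step = np.imag(f(x0 + 1j * 1e-30)) / 1e-30
central_diff = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6
exact = (np.exp(x0) * (np.sin(x0) + np.cos(x0)) * (1 + x0**2)
         - np.exp(x0) * np.sin(x0) * 2 * x0) / (1 + x0**2) ** 2

print("complex-step error:", abs(complex_step - exact))
print("central-diff error:", abs(central_diff - exact))
```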
Efficient evaluation of the Coulomb force in the Gaussian and finite-element Coulomb method.
Kurashige, Yuki; Nakajima, Takahito; Sato, Takeshi; Hirao, Kimihiko
2010-06-28
We propose an efficient method for evaluating the Coulomb force in the Gaussian and finite-element Coulomb (GFC) method, which is a linear-scaling approach for evaluating the Coulomb matrix and energy in large molecular systems. The efficient evaluation of the analytical gradient in the GFC, as with the evaluation of the energy, is not straightforward because the SCF procedure with the Coulomb matrix does not give a variational solution for the Coulomb energy. Thus, an efficient approximate method is alternatively proposed, in which the Coulomb potential is expanded in the Gaussian and finite-element auxiliary functions as done in the GFC. To minimize the error in the gradient, not just in the energy, the derived functions of the original auxiliary functions of the GFC are used additionally for the evaluation of the Coulomb gradient. In fact, the use of the derived functions significantly improves the accuracy of this approach. Although these additional auxiliary functions enlarge the size of the discretized Poisson equation and thereby increase the computational cost, the method maintains near-linear scaling like the GFC and does not affect the overall efficiency of the GFC approach.
Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C
2010-09-21
We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
Alternative Energy Science and Policy: Biofuels as a Case Study
NASA Astrophysics Data System (ADS)
Ammous, Saifedean H.
This dissertation studies the science and policy-making of alternative energy using biofuels as a case study, primarily examining the instruments that can be used to alleviate the impacts of climate change and their relative efficacy. Three case studies of policy-making on biofuels in the European Union, United States of America and Brazil are presented and discussed. It is found that these policies have had large unintended negative consequences and that they relied on Lifecycle Analysis studies that had concluded that increased biofuels production can help meet economic, energy and environmental goals. A close examination of these Lifecycle Analysis studies reveals that their results are not conclusive. Instead of continuing to attempt to find answers from Lifecycle Analyses, this study suggests an alternative approach: formulating policy based on recognition of the ignorance of real fuel costs and pollution. Policies to combat climate change are classified into two distinct approaches: policies that place controls on the fuels responsible for emissions and policies that target the pollutants themselves. A mathematical model is constructed to compare these two approaches and address the central question of this study: In light of an ignorance of the cost and pollution impacts of different fuels, are policies targeting the pollutants themselves preferable to policies targeting the fuels? It is concluded that in situations where the cost and pollution functions of a fuel are unknown, subsidies, mandates and caps on the fuel might result in increased or decreased greenhouse gas emissions; on the other hand, a tax or cap on carbon dioxide results in the largest decrease possible of greenhouse gas emissions. Further, controls on greenhouse gases are shown to provide incentives for the development and advancement of cleaner alternative energy options, whereas controls on the fuels are shown to provide equal incentives to the development of cleaner and dirtier alternative fuels. This asymmetry in outcomes---regardless of actual cost functions---is the reason why controls on greenhouse gases are deemed favorable to direct fuel subsidies and mandates.
Heimeshoff, Mareike; Schreyögg, Jonas; Kwietniewski, Lukas
2014-06-01
This is the first study to use stochastic frontier analysis to estimate both the technical and cost efficiency of physician practices. The analysis is based on panel data from 3,126 physician practices for the years 2006 through 2008. We specified the technical and cost frontiers as translog functions, using the one-step approach of Battese and Coelli to detect factors that influence the efficiency of general practitioners and specialists. Variables that were not analyzed previously in this context (e.g., the degree of practice specialization) and a range of control variables such as patients' case-mix were included in the estimation. Our results suggest that it is important to investigate both technical and cost efficiency, as results may depend on the type of efficiency analyzed. For example, the technical efficiency of group practices was significantly higher than that of solo practices, whereas the results for cost efficiency differed. This may be due to indivisibilities in expensive technical equipment, which can lead to different types of health care services being provided by different practice types (i.e., with group practices using more expensive inputs, leading to higher costs per case despite these practices being technically more efficient). Other practice characteristics such as participation in disease management programs show the same impact throughout both cost and technical efficiency: participation in disease management programs led to an increase in both technical and cost efficiency, and may also have had positive effects on the quality of care. Future studies should take quality-related issues into account.
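For reference, a generic translog stochastic cost frontier takes the form below; this is an illustrative specification, not necessarily the authors' exact model.

```latex
% Generic translog stochastic cost frontier (illustrative): C = observed cost,
% y = output, w_i = input prices, v = statistical noise, u >= 0 = inefficiency.
\ln C = \alpha_0 + \alpha_y \ln y + \tfrac{1}{2}\,\alpha_{yy}(\ln y)^2
      + \sum_i \beta_i \ln w_i
      + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij} \ln w_i \ln w_j
      + \sum_i \delta_{iy} \ln w_i \ln y + v + u
```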
A Cognitive Engineering Analysis of the Vertical Navigation (VNAV) Function
NASA Technical Reports Server (NTRS)
Sherry, Lance; Feary, Michael; Polson, Peter; Mumaw, Randall; Palmer, Everett
2001-01-01
A cognitive engineering analysis of the Flight Management System (FMS) Vertical Navigation (VNAV) function has identified overloading of the VNAV button and overloading of the Flight Mode Annunciator (FMA) used by the VNAV function. These two types of overloading, resulting in modal input devices and ambiguous feedback, are well known sources of operator confusion, and explain, in part, the operational issues experienced by airline pilots using VNAV in descent and approach. A proposal to modify the existing VNAV design to eliminate the overloading is discussed. The proposed design improves pilot's situational awareness of the VNAV function, and potentially reduces the cost of software development and improves safety.
Bioinspired Wood Nanotechnology for Functional Materials.
Berglund, Lars A; Burgert, Ingo
2018-05-01
It is a challenging task to realize the vision of hierarchically structured nanomaterials for large-scale applications. Herein, the biomaterial wood as a large-scale biotemplate for functionalization at multiple scales is discussed, to provide an increased property range to this renewable and CO2-storing bioresource, which is available at low cost and in large quantities. The Progress Report reviews the emerging field of functional wood materials in view of the specific features of the structural template and novel nanotechnological approaches for the development of wood-polymer composites and wood-mineral hybrids for advanced property profiles and new functions. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ekwunife, Obinna I.
2017-01-01
Background: Diarrhoea is a leading cause of death in Nigerian children under 5 years. Implementing the most cost-effective approach to diarrhoea management in Nigeria will help optimize health care resource allocation. This study evaluated the cost-effectiveness of various approaches to diarrhoea management, namely: the ‘no treatment’ approach (NT); the preventive approach with rotavirus vaccine; the integrated management of childhood illness for diarrhoea approach (IMCI); and rotavirus vaccine plus integrated management of childhood illness for diarrhoea approach (rotavirus vaccine + IMCI). Methods: A Markov cohort model conducted from the payer’s perspective was used to calculate the cost-effectiveness of the four interventions. The Markov model simulated a life cycle of 260 weeks for 33 million children under five years at risk of having diarrhoea (well state). Disability adjusted life years (DALYs) averted were used to quantify the clinical outcome. The incremental cost-effectiveness ratio (ICER) served as the measure of cost-effectiveness. Results: Based on a cost-effectiveness threshold of $2,177.99 (representing Nigerian GDP/capita), all the approaches were very cost-effective, but the rotavirus vaccine approach was dominated. While IMCI had the lowest ICER of $4.6/DALY averted, the addition of rotavirus vaccine was cost-effective with an ICER of $80.1/DALY averted. Rotavirus vaccine alone was less efficient in optimizing health care resource allocation. Conclusion: The rotavirus vaccine + IMCI approach was the most cost-effective approach to childhood diarrhoea management. Its awareness and practice should be promoted in Nigeria, and the addition of rotavirus vaccine should be considered for inclusion in the national programme of immunization. Although our findings suggest that the addition of rotavirus vaccine to IMCI for diarrhoea is cost-effective, further vaccine demonstration studies or real-life studies may be needed to establish the cost-effectiveness of the vaccine in Nigeria. PMID:29261649
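The ICER reported above is the standard incremental ratio, compared against the willingness-to-pay threshold; a generic statement is:

```latex
% Incremental cost-effectiveness ratio, with C = cost, E = effectiveness in
% DALYs averted, and lambda the willingness-to-pay threshold (here the
% Nigerian GDP per capita):
\mathrm{ICER} = \frac{C_{\text{intervention}} - C_{\text{comparator}}}
                     {E_{\text{intervention}} - E_{\text{comparator}}},
\qquad \text{cost-effective if } \mathrm{ICER} < \lambda \approx \$2{,}177.99
```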
Integration and manufacture of multifunctional planar lightwave circuits
NASA Astrophysics Data System (ADS)
Lipscomb, George F.; Ticknor, Anthony J.; Stiller, Marc A.; Chen, Wenjie; Schroeter, Paul
2001-11-01
The demands of exponentially growing Internet traffic, coupled with the advent of Dense Wavelength Division Multiplexing (DWDM) fiber optic systems to meet those demands, have triggered a revolution in the telecommunications industry. This dramatic change has been built upon, and has driven, improvements in fiber optic component technology. The next generation of systems for the all-optical network will require higher performance components coupled with dramatically lower costs. One approach to achieve significantly lower costs per function is to employ Planar Lightwave Circuits (PLCs) to integrate multiple optical functions in a single package. PLCs are optical circuits laid out on a silicon wafer, and are made using tools and techniques developed to extremely high levels by the semiconductor industry. In this way multiple components can be fabricated and interconnected at once, significantly reducing both the manufacturing and the packaging/assembly costs. Currently, the predominant commercial application of PLC technology is arrayed-waveguide gratings (AWGs) for multiplexing and demultiplexing multiple wavelength channels in a DWDM system. Although this is generally perceived as a single-function device, it can be performing the function of more than 100 discrete fiber-optic components and already represents a considerable degree of integration. Furthermore, programmable functions such as variable optical attenuators (VOAs) and switches made with compatible PLC technology are now moving into commercial production. In this paper, we present results on the integration of active and passive functions together using PLC technology, e.g. a 40 channel AWG multiplexer with 40 individually controllable VOAs.
Estimating the cost of major ongoing cost plus hardware development programs
NASA Technical Reports Server (NTRS)
Bush, J. C.
1990-01-01
Approaches are developed for forecasting the cost of major hardware development programs while these programs are in the design and development C/D phase. Three approaches are developed: a schedule assessment technique for bottom-line summary cost estimation, a detailed cost estimation approach, and an intermediate cost element analysis procedure. The schedule assessment technique was developed using historical cost/schedule performance data.
Overview of ARPA low-cost ceramic composites (LC{sup 3}) program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adler, P.N.
1996-12-31
Grumman is currently leading an approximately $10M ARPA cost-shared program aimed at developing low-cost fabrication methodology for manufacturing ceramic matrix composite (CMC) structural components. One of the program goals is to demonstrate the effectiveness of an advanced materials partnership. A vertically integrated collaboration now exists that combines the talents of three large private sector organizations, two smaller private sector organizations, three universities, and three federal government laboratories. Work in progress involves preceramic polymer (Blackglas™) CMC materials technology, RTM and pyrolysis process modeling & simulation, and utilization of low-cost approaches for fabricating a CMC demonstration engine seal component. This paper reviews the program organization, functioning, and some of the highlights of the technical work, which is of interest to the DoD as well as the commercial sector.
2016-02-01
Rare earth elements provide functionality in weapon system components. Many steps in the rare earths supply chain, such as mining and refining the ore, are conducted in China, a situation that may pose … Rare earth elements are difficult and costly to mine and process, and are often classified as either … (GAO-16-161, Rare Earth Materials)
2006-05-15
… alarm performance in a cost-effective manner is the use of track-before-detect strategies, in which multiple sensor detections must occur within … corresponding to the traditional sensor coverage problem. Also, in the track-before-detect context, reference is made to the field-level functions of … detection and false alarm as successful search and false search, respectively, because the track-before-detect process serves as a searching function.
System and method for key generation in security tokens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Philip G.; Humble, Travis S.; Paul, Nathanael R.
Functional randomness in security tokens (FRIST) may achieve improved security in two-factor authentication hardware tokens by improving on the algorithms used to securely generate random data. A system and method in one embodiment according to the present invention may allow for security of a token based on storage cost and computational security. This approach may enable communication where security is no longer based solely on onetime pads (OTPs) generated from a single cryptographic function (e.g., SHA-256).
Promoting adverse drug reaction reporting: comparison of different approaches.
Ribeiro-Vaz, Inês; Santos, Cristina Costa; Cruz-Correia, Ricardo
2016-01-01
To describe different approaches to promote adverse drug reaction reporting among health care professionals, determining their cost-effectiveness. We analyzed and compared several approaches taken by the Northern Pharmacovigilance Centre (Portugal) to promote adverse drug reaction reporting. Approaches were compared regarding the number and relevance of adverse drug reaction reports obtained and costs involved. Costs per report were estimated by adding the initial costs and the running costs of each intervention. These costs were divided by the number of reports obtained with each intervention, to assess its cost-effectiveness. All the approaches seem to have increased the number of adverse drug reaction reports. We noted the biggest increase with protocols (321 reports, costing 1.96 € each), followed by the first educational approach (265 reports, 20.31 €/report) and by the hyperlink approach (136 reports, 15.59 €/report). Regarding the severity of adverse drug reactions, protocols were the most efficient approach, costing 2.29 €/report, followed by hyperlinks (30.28 €/report, having no running costs). Concerning unexpected adverse drug reactions, the best result was obtained with protocols (5.12 €/report), followed by the first educational approach (38.79 €/report). We recommend implementing protocols in other pharmacovigilance centers. They seem to be the most efficient intervention, allowing adverse drug reaction reports to be received at lower cost. The increase applied not only to the total number of reports, but also to the severity, unexpectedness and high degree of causality attributed to the adverse drug reactions. Still, hyperlinks have the advantage of not involving running costs, showing the second best performance in cost per adverse drug reaction report.
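The cost-per-report metric described above is a simple ratio; in the sketch below the split between initial and running costs is hypothetical, chosen only so that the protocol intervention reproduces the reported 1.96 €/report.

```python
# Cost per report = (initial cost + running cost) / number of reports.
# The 400 / 229.16 split is invented; only the resulting ~1.96 EUR per
# report for protocols matches the figure reported in the study.
def cost_per_report(initial_cost, running_cost, n_reports):
    return (initial_cost + running_cost) / n_reports

print(round(cost_per_report(initial_cost=400.0,
                            running_cost=229.16,
                            n_reports=321), 2))   # -> 1.96
```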
An Assemblable, Multi-Angle Fluorescence and Ellipsometric Microscope
Nguyen, Victoria; Rizzo, John
2016-01-01
We introduce a multi-functional microscope for research laboratories that have significant cost and space limitations. The microscope pivots around the sample, operating in upright, inverted, side-on and oblique geometries. At these geometries it is able to perform bright-field, fluorescence and qualitative ellipsometric imaging. It is the first single instrument in the literature to be able to perform all of these functionalities. The system can be assembled by two undergraduate students from a provided manual in less than a day, from off-the-shelf and 3D printed components, which together cost approximately $16k at 2016 market prices. We include a highly specified assembly manual, a summary of design methodologies, and all associated 3D-printing files in hopes that the utility of the design outlives the current component market. This open design approach prepares readers to customize the instrument to specific needs and applications. We also discuss how to select household LEDs as low-cost light sources for fluorescence microscopy. We demonstrate the utility of the microscope in varied geometries and functionalities, with particular emphasis on studying hydrated, solid-supported lipid films and wet biological samples. PMID:27907008
Simulative design and process optimization of the two-stage stretch-blow molding process
NASA Astrophysics Data System (ADS)
Hopmann, Ch.; Rasche, S.; Windeck, C.
2015-05-01
The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.
Katchman, Benjamin A.; Smith, Joseph T.; Obahiagbon, Uwadiae; Kesiraju, Sailaja; Lee, Yong-Kyun; O’Brien, Barry; Kaftanoglu, Korhan; Blain Christen, Jennifer; Anderson, Karen S.
2016-01-01
Point-of-care molecular diagnostics can provide efficient and cost-effective medical care, and they have the potential to fundamentally change our approach to global health. However, most existing approaches are not scalable to include multiple biomarkers. As a solution, we have combined commercial flat panel OLED display technology with protein microarray technology to enable high-density fluorescent, programmable, multiplexed biorecognition in a compact and disposable configuration with clinical-level sensitivity. Our approach leverages advances in commercial display technology to reduce pre-functionalized biosensor substrate costs to pennies per cm2. Here, we demonstrate quantitative detection of IgG antibodies to multiple viral antigens in patient serum samples with detection limits for human IgG in the 10 pg/mL range. We also demonstrate multiplexed detection of antibodies to the HPV16 proteins E2, E6, and E7, which are circulating biomarkers for cervical as well as head and neck cancers. PMID:27374875
Toward a Responsibility-Catering Prioritarian Ethical Theory of Risk.
Wikman-Svahn, Per; Lindblom, Lars
2018-03-05
Standard tools used in societal risk management such as probabilistic risk analysis or cost-benefit analysis typically define risks in terms of only probabilities and consequences and assume a utilitarian approach to ethics that aims to maximize expected utility. The philosopher Carl F. Cranor has argued against this view by devising a list of plausible aspects of the acceptability of risks that points towards a non-consequentialist ethical theory of societal risk management. This paper revisits Cranor's list to argue that the alternative ethical theory responsibility-catering prioritarianism can accommodate the aspects identified by Cranor and that the elements in the list can be used to inform the details of how to view risks within this theory. An approach towards operationalizing the theory is proposed based on a prioritarian social welfare function that operates on responsibility-adjusted utilities. A responsibility-catering prioritarian ethical approach towards managing risks is a promising alternative to standard tools such as cost-benefit analysis.
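One illustrative way to write such a responsibility-catering prioritarian social welfare function; the notation and the specific responsibility adjustment are assumptions for exposition, not Cranor's or the authors' formulation.

```latex
% A strictly concave g gives priority to the worse off, and utilities are
% first adjusted by a responsibility term rho(r_i) (illustrative form).
W = \sum_i g\!\left(u_i^{R}\right), \qquad
u_i^{R} = u_i - \rho(r_i), \qquad g' > 0,\; g'' < 0
```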
Linear-scaling generation of potential energy surfaces using a double incremental expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
König, Carolin, E-mail: carolink@kth.se; Christiansen, Ove, E-mail: ove@chem.au.dk
We present a combination of the incremental expansion of potential energy surfaces (PESs), known as n-mode expansion, with the incremental evaluation of the electronic energy in a many-body approach. The application of semi-local coordinates in this context allows the generation of PESs in a very cost-efficient way. For this, we employ the recently introduced flexible adaptation of local coordinates of nuclei (FALCON) coordinates. By introducing an additional transformation step, concerning only a fraction of the vibrational degrees of freedom, we can achieve linear scaling of the accumulated cost of the single point calculations required in the PES generation. Numerical examples of these double incremental approaches for oligo-phenyl examples show fast convergence with respect to the maximum number of simultaneously treated fragments and only a modest error introduced by the additional transformation step. The approach, presented here, represents a major step towards the applicability of vibrational wave function methods to sizable, covalently bound systems.
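For orientation, the n-mode (incremental) PES expansion mentioned above takes the standard generic form below, shown here truncated at two-mode couplings.

```latex
% n-mode expansion of a PES in modes q_1,...,q_M (generic form):
V(q_1,\dots,q_M) \approx V_0 + \sum_i \bar V_i(q_i)
  + \sum_{i<j} \bar V_{ij}(q_i,q_j) + \cdots,
\qquad \bar V_i = V_i - V_0, \qquad
\bar V_{ij} = V_{ij} - \bar V_i - \bar V_j - V_0
```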
Effertz, Glen; Alverson, Dale C; Dion, Denise; Duffy, Veronica; Noon, Charles; Langell, Kevin; Antoniotti, Nina; Lowery, Curtis
2017-02-01
Telehealth centers across the country, including our own center, are addressing sustainability and best practice business models. We undertook this survey to explore the business models being used at other established telehealth centers. In the literature on telehealth and sustainability, there is a paucity of comparative studies as to how successful telehealth centers function. In this study, we compared the business models of 10 successful telehealth centers. We conducted the study by interviewing key individuals at the centers, either through teleconference or telephone. We found that there are five general approaches to sustaining a telehealth center: grants, telehealth network membership fees, income from providing clinical services, per encounter charges, and operating as a cost center. We also found that most centers use more than one approach. We concluded that, although the first four approaches can contribute to the success of a center, telehealth centers are and should remain cost centers for their respective institutions.
Costing the supply chain for delivery of ACT and RDTs in the public sector in Benin and Kenya.
Shretta, Rima; Johnson, Brittany; Smith, Lisa; Doumbia, Seydou; de Savigny, Don; Anupindi, Ravi; Yadav, Prashant
2015-02-05
Studies have shown that supply chain costs are a significant proportion of total programme costs. Nevertheless, the costs of delivering specific products are poorly understood and ballpark estimates are often used to inadequately plan for the budgetary implications of supply chain expenses. The purpose of this research was to estimate the country level costs of the public sector supply chain for artemisinin-based combination therapy (ACT) and rapid diagnostic tests (RDTs) from the central to the peripheral levels in Benin and Kenya. A micro-costing approach was used and primary data on the various cost components of the supply chain was collected at the central, intermediate, and facility levels between September and November 2013. Information sources included central warehouse databases, health facility records, transport schedules, and expenditure reports. Data from document reviews and semi-structured interviews were used to identify cost inputs and estimate actual costs. Sampling was purposive to isolate key variables of interest. Survey guides were developed and administered electronically. Data were extracted into Microsoft Excel, and the supply chain cost per unit of ACT and RDT distributed by function and level of system was calculated. In Benin, supply chain costs added USD 0.2011 to the initial acquisition cost of ACT and USD 0.3375 to RDTs (normalized to USD 1). In Kenya, they added USD 0.2443 to the acquisition cost of ACT and USD 0.1895 to RDTs (normalized to USD 1). Total supply chain costs accounted for more than 30% of the initial acquisition cost of the products in some cases and these costs were highly sensitive to product volumes. The major cost drivers were found to be labour, transport, and utilities with health facilities carrying the majority of the cost per unit of product. Accurate cost estimates are needed to ensure adequate resources are available for supply chain activities. Product volumes should be considered when costing supply chain functions rather than dollar value. Further work is needed to develop extrapolative costing models that can be applied at country level without extensive micro-costing exercises. This will allow other countries to generate more accurate estimates in the future.
NASA Astrophysics Data System (ADS)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2017-04-01
Ionospheric tomography is a very cost-effective method that is used frequently for modeling electron density distributions. In this paper, residual minimization training neural network (RMTNN) is used in voxel-based ionospheric tomography. Because a wavelet neural network (WNN) with a back-propagation (BP) algorithm is used in the RMTNN method, the new method is named modified RMTNN (MRMTNN). To train the WNN with the BP algorithm, two cost functions are defined: a total and a vertical cost function. Using minimization of these cost functions, temporal and spatial ionospheric variations are studied. The GPS measurements of the international GNSS service (IGS) in central Europe have been used for constructing a 3-D image of the electron density. Three days (2009.04.15, 2011.07.20 and 2013.06.01) with different solar activity indices are used for the processing. To validate and better assess the reliability of the proposed method, 4 ionosonde and 3 testing stations have been used. Also, the results of MRMTNN have been compared to those of the RMTNN method, the international reference ionosphere model 2012 (IRI-2012) and the spherical cap harmonic (SCH) method as a local ionospheric model. The comparison of MRMTNN results with the RMTNN, IRI-2012 and SCH models shows that the root mean square error (RMSE) and standard deviation of the proposed approach are superior to those of the traditional methods.
Protein function prediction using neighbor relativity in protein-protein interaction network.
Moosavi, Sobhan; Rahgozar, Masoud; Rahimi, Amir
2013-04-01
There is a large gap between the number of discovered proteins and the number of functionally annotated ones. Due to the high cost of determining protein function by wet-lab research, function prediction has become a major task for computational biology and bioinformatics. Some studies utilize protein interaction information to predict functions for un-annotated proteins. In this paper, we propose a novel approach called "Neighbor Relativity Coefficient" (NRC) based on interaction network topology which estimates the functional similarity between two proteins. NRC is calculated for each pair of proteins based on their graph-based features, including distance, common neighbors and the number of paths between them. In order to ascribe function to an un-annotated protein, NRC estimates a weight for each neighbor to transfer its annotation to the unknown protein. Finally, the unknown protein is annotated with the top-scoring transferred functions. We also investigate the effect of using different coefficients for various types of functions. The proposed method has been evaluated on Saccharomyces cerevisiae and Homo sapiens interaction networks. The performance analysis demonstrates that NRC yields better results in comparison with previous protein function prediction approaches that utilize the interaction network. Copyright © 2012 Elsevier Ltd. All rights reserved.
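A hedged sketch of neighbor-weighted annotation transfer on a toy interaction network; the paper's exact NRC formula is not reproduced here, so the score below simply combines the named ingredients (distance, common neighbors, number of paths) with arbitrary weights.

```python
# Toy neighbor-relativity-style scoring on a small protein interaction graph.
# The weighting of distance, common neighbors and path counts is illustrative.
import networkx as nx

G = nx.Graph([("P1", "P2"), ("P2", "P3"), ("P1", "P3"), ("P3", "P4")])
annotations = {"P1": {"kinase"}, "P2": {"kinase", "transport"}, "P4": {"binding"}}

def nrc_like(G, u, v, w_dist=1.0, w_cn=1.0, w_paths=0.5):
    d = nx.shortest_path_length(G, u, v)
    cn = len(list(nx.common_neighbors(G, u, v)))
    paths = len(list(nx.all_simple_paths(G, u, v, cutoff=3)))
    return w_dist / d + w_cn * cn + w_paths * paths

target = "P3"                      # un-annotated protein
scores = {}
for nbr in G.neighbors(target):
    for func in annotations.get(nbr, set()):
        scores[func] = scores.get(func, 0.0) + nrc_like(G, target, nbr)

print(sorted(scores.items(), key=lambda kv: -kv[1]))   # top-scoring functions
```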
NASA Astrophysics Data System (ADS)
Zhang, Yuewei; Liu, Jinghai; Wu, Guan; Chen, Wei
2012-08-01
Energy captured directly from sunlight provides an attractive approach towards fulfilling the need for green energy resources on the terawatt scale with minimal environmental impact. Collecting and storing solar energy into fuel through photocatalyzed water splitting to generate hydrogen in a cost-effective way is desirable. To achieve this goal, low cost and environmentally benign urea was used to synthesize the metal-free photocatalyst graphitic carbon nitride (g-C3N4). A porous structure is achieved via one-step polymerization of the single precursor. The porous structure with increased BET surface area and pore volume shows a much higher hydrogen production rate under simulated sunlight irradiation than thiourea-derived and dicyanamide-derived g-C3N4. The presence of an oxygen atom is presumed to play a key role in adjusting the textural properties. Further improvement of the photocatalytic function can be expected with after-treatment due to its rich chemistry in functionalization.
Xie, Bin; da Silva, Orlando; Zaric, Greg
2012-01-01
OBJECTIVE: To evaluate the incremental cost-effectiveness of a system-based approach for the management of neonatal jaundice and the prevention of kernicterus in term and late-preterm (≥35 weeks) infants, compared with the traditional practice based on visual inspection and selected bilirubin testing. STUDY DESIGN: Two hypothetical cohorts of 150,000 term and late-preterm neonates were used to compare the costs and outcomes associated with the use of a system-based or traditional practice approach. Data for the evaluation were obtained from the case costing centre at a large teaching hospital in Ontario, supplemented by data from the literature. RESULTS: The per child cost for the system-based approach cohort was $176, compared with $173 in the traditional practice cohort. The higher cost associated with the system-based cohort reflects increased costs for predischarge screening and treatment and increased postdischarge follow-up visits. These costs are partially offset by reduced costs from fewer emergency room visits, hospital readmissions and kernicterus cases. Compared with the traditional approach, the cost to prevent one kernicterus case using the system-based approach was $570,496, the cost per life year gained was $26,279, and the cost per quality-adjusted life year gained was $65,698. CONCLUSION: The cost to prevent one kernicterus case using the system-based approach is much lower than previously reported in the literature. PMID:23277747
NASA Astrophysics Data System (ADS)
Shiju, S.; Sumitra, S.
2017-12-01
In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, both are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and by searching that global RKHS for the function which can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance compared with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is because the single-stage representation helps transfer knowledge between the procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.
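As a rough illustration of the direct-sum-of-RKHS construction, a sum of kernels induces the direct sum of their RKHSs; the sketch below trains a single classifier on such a combined kernel. It does not reproduce the paper's single cost function, which also learns the kernel combination; the kernels, dataset and fixed combination weights are assumptions.

```python
# Minimal sketch of the sum-of-kernels view behind a direct-sum RKHS:
# one classifier is trained on K = K_rbf + K_linear. The combination weights
# are simply fixed here, only to illustrate the construction.
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
K = rbf_kernel(X, X, gamma=0.1) + linear_kernel(X, X)   # direct-sum kernel

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```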
De Sanctis, A; Russo, S; Craciun, M F; Alexeev, A; Barnes, M D; Nagareddy, V K; Wright, C D
2018-06-06
Graphene-based materials are being widely explored for a range of biomedical applications, from targeted drug delivery to biosensing, bioimaging and use for antibacterial treatments, to name but a few. In many such applications, it is not graphene itself that is used as the active agent, but one of its chemically functionalized forms. The type of chemical species used for functionalization will play a key role in determining the utility of any graphene-based device in any particular biomedical application, because this determines to a large part its physical, chemical, electrical and optical interactions. However, other factors will also be important in determining the eventual uptake of graphene-based biomedical technologies, in particular the ease and cost of manufacture of proposed device and system designs. In this work, we describe three novel routes for the chemical functionalization of graphene using oxygen, iron chloride and fluorine. We also introduce novel in situ methods for controlling and patterning such functionalization on the micro- and nanoscales. Our approaches are readily transferable to large-scale manufacturing, potentially paving the way for the eventual cost-effective production of functionalized graphene-based materials, devices and systems for a range of important biomedical applications.
Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; He, Fei; Ma, Chris Y. T.
In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for in ensuring the infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.
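A toy version of the provider-attacker game can be written down and solved by brute force for a pure Nash equilibrium. The survival model, the multiplicative coupling standing in for the failure correlation function, and the cost coefficients below are illustrative assumptions, not the models used in the paper.

```python
# Toy sketch of the provider-vs-attacker game over discrete numbers of
# reinforced/attacked components, solved by brute-force search for a pure
# Nash equilibrium. All functional forms and coefficients are assumed.
import itertools

COST_REINFORCE, COST_ATTACK = 0.02, 0.02   # cost per component (assumed)

def survival(reinforced, attacked):
    """Toy survival probability of one sub-infrastructure."""
    return (reinforced + 1) / (reinforced + attacked + 2)

def system_survival(r, a):
    # r = (cyber, physical) reinforcements, a likewise for attacks; the product
    # is a crude stand-in for the cyber-physical failure correlation function.
    return survival(r[0], a[0]) * survival(r[1], a[1])

def provider_utility(r, a):
    return system_survival(r, a) - COST_REINFORCE * sum(r)

def attacker_utility(r, a):
    return -system_survival(r, a) - COST_ATTACK * sum(a)

moves = list(itertools.product(range(4), repeat=2))   # up to 3 components per side
for r in moves:
    for a in moves:
        if (max(moves, key=lambda rr: provider_utility(rr, a)) == r and
                max(moves, key=lambda aa: attacker_utility(r, aa)) == a):
            print("pure Nash equilibrium: reinforce", r, "attack", a)
```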
ARMD Strategic Thrust 6: Assured Autonomy for Aviation Transformation
NASA Technical Reports Server (NTRS)
Ballin, Mark; Holbrook, Jon; Sharma, Shivanjli
2016-01-01
In collaboration with the external community and other government agencies, NASA will develop enabling technologies, standards, and design guidelines to support cost-effective applications of automation and limited autonomy for individual components of aviation systems. NASA will also provide foundational knowledge and methods to support the next epoch. Research will address issues of verification and validation, operational evaluation, national policy, and societal cost-benefit. Two research and development approaches to aviation autonomy will advance in parallel. The Increasing Autonomy (IA) approach will seek to advance knowledge and technology through incremental increases in machine-based support of existing human-centered tasks, leading to long-term reallocation of functions between humans and machines. The Autonomy as a New Technology (ANT) approach seeks advances by developing technology to achieve goals that are not currently possible using human-centered concepts of operation. IA applications are mission-enhancing, and their selection will be based on benefits achievable relative to existing operations. ANT applications are mission-enabling, and their value will be assessed based on societal benefit resulting from a new capability. The expected demand for small autonomous unmanned aircraft systems (UAS) provides an opportunity for development of ANT applications. Supervisory autonomy may be implemented as an expansion of the number of functions or systems that may be controlled by an individual human operator. Convergent technology approaches, such as the use of electronic flight bags and existing network servers, will be leveraged to the maximum extent possible.
Estimating Scale Economies and the Optimal Size of School Districts: A Flexible Form Approach
ERIC Educational Resources Information Center
Schiltz, Fritz; De Witte, Kristof
2017-01-01
This paper investigates estimation methods to model the relationship between school district size, costs per student and the organisation of school districts. We show that the assumptions on the functional form strongly affect the estimated scale economies and offer two possible solutions to allow for more flexibility in the estimation method.…
A Dynamic Process Model for Optimizing the Hospital Environment Cash-Flow
NASA Astrophysics Data System (ADS)
Pater, Flavius; Rosu, Serban
2011-09-01
This article presents a new approach to some fundamental techniques for solving dynamic programming problems using functional equations. We analyze the problem of minimizing the cost of treatment in a hospital environment. Mathematical modeling of this process leads to an optimal control problem with a finite horizon.
ERIC Educational Resources Information Center
Rhode, William E.; And Others
In order to examine the possibilities for an advanced multimedia instructional system, a review and assessment of current instructional media was undertaken in terms of a functional description, instructional flexibility, support requirements, and costs. Following this, a model of an individual instructional system was developed as a basis for…
Beyond bricks and mortar: recent research on substance use disorder recovery management.
Dennis, Michael L; Scott, Christy K; Laudet, Alexandre
2014-04-01
Scientific advances in the past 15 years have clearly highlighted the need for recovery management approaches to help individuals sustain recovery from chronic substance use disorders. This article reviews some of the recent findings related to recovery management: (1) continuing care, (2) recovery management checkups, (3) 12-step or mutual aid, and (4) technology-based interventions. The core assumption underlying these approaches is that earlier detection and re-intervention will improve long-term outcomes by minimizing the harmful consequences of the condition and maximizing or promoting opportunities for maintaining healthy levels of functioning in related life domains. Economic analysis is important because it can take a year or longer for such interventions to offset their costs. The article also examines the potential of smartphones and other recent technological developments to facilitate more cost-effective recovery management options.
Full Costing of Business Programs: Benefits and Caveats
ERIC Educational Resources Information Center
Simmons, Cynthia; Wright, Michael; Jones, Vernon
2006-01-01
Purpose: To suggest an approach to program costing that includes the approaches and concepts developed in activity based costing. Design/methodology/approach: The paper utilizes a hypothetical case study of an Executive MBA program as a means of illustrating the suggested approach to costing. Findings: The paper illustrates both the benefits of…
Facile fabrication of microfluidic surface-enhanced Raman scattering devices via lift-up lithography
NASA Astrophysics Data System (ADS)
Wu, Yuanzi; Jiang, Ye; Zheng, Xiaoshan; Jia, Shasha; Zhu, Zhi; Ren, Bin; Ma, Hongwei
2018-04-01
We describe a facile and low-cost approach for flexibly integrating a surface-enhanced Raman scattering (SERS) substrate into microfluidic chips. Briefly, a SERS substrate was fabricated by the electrostatic assembly of gold nanoparticles and shaped into designed patterns by subsequent lift-up soft lithography. The SERS micro-pattern could then be conveniently integrated within microfluidic channels. The resulting microfluidic SERS chip allowed ultrasensitive in situ SERS monitoring through the transparent glass window. With its advantages in simplicity, functionality and cost-effectiveness, this method could be readily extended to optical microfluidic fabrication for biochemical applications.
New Concept for FES-Induced Movements
NASA Astrophysics Data System (ADS)
Ahmed, Mohammed; Huq, M. S.; Ibrahim, B. S. K. K.; Ahmed, Aisha; Ahmed, Zainab
2016-11-01
Functional Electrical Stimulation (FES) has become a viable option for movement restoration, therapy and rehabilitation in neurologically impaired subjects. Although the number of such subjects is increasing globally, only a few orthosis devices combined with the technique are available, and they are costly. A contributing factor could be the stringent requirement for such devices to pass clinical acceptance. In that regard, a new approach was proposed that utilizes the patient's wheelchair as support, together with a novel control system that synchronizes the stimulation so that the movement is accomplished safely. It is expected to improve well-being, social integration, independence, cost and healthcare delivery.
Promoting adverse drug reaction reporting: comparison of different approaches
Ribeiro-Vaz, Inês; Santos, Cristina Costa; Cruz-Correia, Ricardo
2016-01-01
OBJECTIVE To describe different approaches to promote adverse drug reaction reporting among health care professionals, determining their cost-effectiveness. METHODS We analyzed and compared several approaches taken by the Northern Pharmacovigilance Centre (Portugal) to promote adverse drug reaction reporting. Approaches were compared regarding the number and relevance of adverse drug reaction reports obtained and the costs involved. Costs per report were estimated by adding the initial costs and the running costs of each intervention. These costs were divided by the number of reports obtained with each intervention to assess its cost-effectiveness. RESULTS All the approaches seem to have increased the number of adverse drug reaction reports. We noted the biggest increase with protocols (321 reports, costing 1.96 € each), followed by the first educational approach (265 reports, 20.31 €/report) and by the hyperlink approach (136 reports, 15.59 €/report). Regarding the severity of adverse drug reactions, protocols were the most efficient approach, costing 2.29 €/report, followed by hyperlinks (30.28 €/report, with no running costs). Concerning unexpected adverse drug reactions, the best result was obtained with protocols (5.12 €/report), followed by the first educational approach (38.79 €/report). CONCLUSIONS We recommend implementing protocols in other pharmacovigilance centers. They seem to be the most efficient intervention, allowing adverse drug reaction reports to be received at lower cost. The increase applied not only to the total number of reports, but also to the severity, unexpectedness and high degree of causality attributed to the adverse drug reactions. Still, hyperlinks have the advantage of involving no running costs, showing the second-best performance in cost per adverse drug reaction report. PMID:27143614
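The cost-effectiveness measure described in the methods (initial plus running costs divided by the number of reports obtained) amounts to a one-line calculation; the euro figures passed in below are placeholders, not the centre's actual cost breakdown.

```python
def cost_per_report(initial_cost, running_cost, n_reports):
    """Cost-effectiveness of an intervention: total cost divided by reports obtained."""
    return (initial_cost + running_cost) / n_reports

# Hypothetical figures purely to show the calculation (EUR and report counts assumed).
print(cost_per_report(initial_cost=500.0, running_cost=130.0, n_reports=321))
```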
NASA Astrophysics Data System (ADS)
Rhodes, Russel E.; Byrd, Raymond J.
1998-01-01
This paper presents a "back of the envelope" technique for fast, timely, on-the-spot assessment of affordability (profitability) of commercial space transportation architectural concepts. The tool presented here is not intended to replace conventional, detailed costing methodology. The process described enables "quick look" estimations and assumptions to effectively determine whether an initial concept (with its attendant cost estimating line items) provides focus for major leapfrog improvement. The Cost Charts Users Guide provides a generic sample tutorial, building an approximate understanding of the basic launch system cost factors and their representative magnitudes. This process will enable the user to develop a net "cost (and price) per payload-mass unit to orbit" incorporating a variety of significant cost drivers, supplemental to basic vehicle cost estimates. If acquisition cost and recurring cost factors (as a function of cost per payload-mass unit to orbit) do not meet the predetermined system-profitability goal, the concept in question will be clearly seen as non-competitive. Multiple analytical approaches, and applications of a variety of interrelated assumptions, can be examined in a quick, on-the-spot cost approximation analysis, as this tool has inherent flexibility. The technique will allow determination of concept conformance to system objectives.
GPU-accelerated adjoint algorithmic differentiation
NASA Astrophysics Data System (ADS)
Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe
2016-03-01
Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the "tape". Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.
GPU-Accelerated Adjoint Algorithmic Differentiation
Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe
2015-01-01
Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the “tape”. Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography. PMID:26941443
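A minimal scalar reverse-mode example makes the "tape" and backpropagation steps concrete; it is only a didactic sketch, not the vector/matrix-intrinsic, GPU-accelerated AAD software the abstract describes.

```python
# Minimal scalar reverse-mode AD with an explicit "tape": forward operations
# record local partial derivatives, and a reverse sweep backpropagates them.
class Var:
    def __init__(self, value, tape=None):
        self.value, self.grad = value, 0.0
        self.tape = tape if tape is not None else []
    def _new(self, value, parents):
        out = Var(value, self.tape)
        self.tape.append((out, parents))          # record partial derivatives
        return out
    def __add__(self, other):
        return self._new(self.value + other.value, [(self, 1.0), (other, 1.0)])
    def __mul__(self, other):
        return self._new(self.value * other.value,
                         [(self, other.value), (other, self.value)])

def backprop(output):
    output.grad = 1.0
    for node, parents in reversed(output.tape):   # reverse sweep over the tape
        for parent, local in parents:
            parent.grad += local * node.grad

tape = []
x, y = Var(3.0, tape), Var(4.0, tape)
cost = x * y + x                                  # f(x, y) = x*y + x
backprop(cost)
print(cost.value, x.grad, y.grad)                 # 15.0, 5.0 (=y+1), 3.0 (=x)
```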
NASA Astrophysics Data System (ADS)
Dambreville, Frédéric
2013-10-01
While there is a variety of approaches and algorithms for optimizing the mission of an unmanned moving sensor, far fewer works deal with the implementation of several sensors within a human organization. In this case, the management of the sensors is done through at least one human decision layer, and the sensor management as a whole arises as a bi-level optimization process. In this work, the following hypotheses are considered realistic: sensor handlers at the first level plan their sensors by means of elaborate algorithmic tools based on accurate modelling of the environment; the higher level plans the handled sensors according to a global observation mission and on the basis of an approximated model of the environment and of the first-level sub-processes. This problem is formalized very generally as the maximization of an unknown function, defined a priori by sampling a known random function (the law of model error). In such a case, each actual evaluation of the function increases the knowledge about the function, and subsequently the efficiency of the maximization. The issue is to optimize the sequence of values to be evaluated, with regard to the evaluation costs. There is here a fundamental link with the domain of experiment design. Jones, Schonlau and Welch proposed a general method, Efficient Global Optimization (EGO), for solving this problem in the case of an additive functional Gaussian law. In our work, a generalization of EGO is proposed, based on a rare event simulation approach. It is applied to the aforementioned bi-level sensor planning.
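For reference, the Expected Improvement criterion at the core of the EGO method of Jones, Schonlau and Welch can be sketched as follows; the Gaussian-process surrogate, the toy objective and the candidate grid are assumptions, and the rare-event-simulation generalization proposed in the work is not shown.

```python
# Minimal Expected Improvement (EI) loop: fit a GP to the points evaluated so
# far and evaluate next where EI is maximal. Toy 1-D objective only.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):                      # unknown "expensive" function (toy)
    return -(x - 0.6) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(4, 1))     # initial design
y = objective(X).ravel()

for _ in range(5):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    cand = np.linspace(0, 1, 200).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next))

print("best x found:", X[np.argmax(y)], "value:", y.max())
```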
Robotic Surgical System for Radical Prostatectomy: A Health Technology Assessment
Wang, Myra; Xie, Xuanqian; Wells, David; Higgins, Caroline
2017-01-01
Background Prostate cancer is the second most common type of cancer in Canadian men. Radical prostatectomy is one of the treatment options available, and involves removing the prostate gland and surrounding tissues. In recent years, surgeons have begun to use robot-assisted radical prostatectomy more frequently. We aimed to determine the clinical benefits and harms of the robotic surgical system for radical prostatectomy (robot-assisted radical prostatectomy) compared with the open and laparoscopic surgical methods. We also assessed the cost-effectiveness of robot-assisted versus open radical prostatectomy in patients with clinically localized prostate cancer in Ontario. Methods We performed a literature search and included prospective comparative studies that examined robot-assisted versus open or laparoscopic radical prostatectomy for prostate cancer. The outcomes of interest were perioperative, functional, and oncological. The quality of the body of evidence was examined according to the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) Working Group criteria. We also conducted a cost–utility analysis with a 1-year time horizon. The potential long-term benefits of robot-assisted radical prostatectomy for functional and oncological outcomes were also evaluated in a 10-year Markov model in scenario analyses. In addition, we conducted a budget impact analysis to estimate the additional costs to the provincial budget if the adoption of robot-assisted radical prostatectomy were to increase in the next 5 years. A needs assessment determined that the published literature on patient perspectives was relatively well developed, and that direct patient engagement would add relatively little new information. Results Compared with the open approach, we found robot-assisted radical prostatectomy reduced length of stay and blood loss (moderate quality evidence) but had no difference or inconclusive results for functional and oncological outcomes (low to moderate quality evidence). Compared with laparoscopic radical prostatectomy, robot-assisted radical prostatectomy had no difference in perioperative, functional, and oncological outcomes (low to moderate quality evidence). Compared with open radical prostatectomy, our best estimates suggested that robot-assisted prostatectomy was associated with higher costs ($6,234) and a small gain in quality-adjusted life-years (QALYs) (0.0012). The best estimate of the incremental cost-effectiveness ratio (ICER) was $5.2 million per QALY gained. However, if robot-assisted radical prostatectomy were assumed to have substantially better long-term functional and oncological outcomes, the ICER might be as low as $83,921 per QALY gained. We estimated the annual budget impact to be $0.8 million to $3.4 million over the next 5 years. Conclusions There is no high-quality evidence that robot-assisted radical prostatectomy improves functional and oncological outcomes compared with open and laparoscopic approaches. However, compared with open radical prostatectomy, the costs of using the robotic system are relatively large while the health benefits are relatively small. PMID:28744334
Cross-entropy embedding of high-dimensional data using the neural gas model.
Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi
2005-01-01
A cross-entropy approach to mapping high-dimensional data into a low-dimensional space embedding is presented. The method allows the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, to be projected simultaneously into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).
Lanczos algorithm with matrix product states for dynamical correlation functions
NASA Astrophysics Data System (ADS)
Dargel, P. E.; Wöllert, A.; Honecker, A.; McCulloch, I. P.; Schollwöck, U.; Pruschke, T.
2012-05-01
The density-matrix renormalization group (DMRG) algorithm can be adapted to the calculation of dynamical correlation functions in various ways which all represent compromises between computational efficiency and physical accuracy. In this paper we reconsider the oldest approach based on a suitable Lanczos-generated approximate basis and implement it using matrix product states (MPS) for the representation of the basis states. The direct use of matrix product states combined with an ex post reorthogonalization method allows us to avoid several shortcomings of the original approach, namely the multitargeting and the approximate representation of the Hamiltonian inherent in earlier Lanczos-method implementations in the DMRG framework, and to deal with the ghost problem of Lanczos methods, leading to a much better convergence of the spectral weights and poles. We present results for the dynamic spin structure factor of the spin-1/2 antiferromagnetic Heisenberg chain. A comparison to Bethe ansatz results in the thermodynamic limit reveals that the MPS-based Lanczos approach is much more accurate than earlier approaches at minor additional numerical cost.
Level set formulation of two-dimensional Lagrangian vortex detection methods
NASA Astrophysics Data System (ADS)
Hadjighasem, Alireza; Haller, George
2016-10-01
We propose here the use of the variational level set methodology to capture Lagrangian vortex boundaries in 2D unsteady velocity fields. This method reformulates earlier approaches that seek material vortex boundaries as extremum solutions of variational problems. We demonstrate the performance of this technique for two different variational formulations built upon different notions of coherence. The first formulation uses an energy functional that penalizes the deviation of a closed material line from piecewise uniform stretching [Haller and Beron-Vera, J. Fluid Mech. 731, R4 (2013)]. The second energy function is derived for a graph-based approach to vortex boundary detection [Hadjighasem et al., Phys. Rev. E 93, 063107 (2016)]. Our level-set formulation captures an a priori unknown number of vortices simultaneously at relatively low computational cost. We illustrate the approach by identifying vortices from different coherence principles in several examples.
Efficient Construction of Discrete Adjoint Operators on Unstructured Grids Using Complex Variables
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Kleb, William L.
2005-01-01
A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
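The complex-variable formulation rests on the complex-step derivative identity, f'(x) ≈ Im(f(x + ih))/h for small h, which avoids subtractive cancellation. The snippet below demonstrates the identity on an arbitrary smooth function; it is a sketch of the underlying trick only, not of the adjoint solver itself.

```python
# Complex-step differentiation of a real-valued function: the imaginary part
# of f(x + i*h), divided by h, gives f'(x) to machine precision.
import cmath

def f(x):
    # stand-in for a complicated real-valued cost function
    return x ** 3 + cmath.exp(x) * cmath.sin(x)

def complex_step_derivative(func, x, h=1e-30):
    return func(complex(x, h)).imag / h

x0 = 1.3
exact = 3 * x0 ** 2 + (cmath.exp(x0) * (cmath.sin(x0) + cmath.cos(x0))).real
print(complex_step_derivative(f, x0), exact)   # agree to machine precision
```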
Bennett, Casey C; Hauser, Kris
2013-01-01
In the modern healthcare system, rapidly expanding costs/complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal in this paper is to develop a general purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This framework serves two potential functions: (1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and (2) the basis for clinical artificial intelligence - an AI that can "think like a doctor". This approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status and functions as an online agent that plans and re-plans as actions are performed and new observations are obtained. This framework was evaluated using real patient data from an electronic health record. The results demonstrate the feasibility of this approach; such an AI framework easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare. The cost per unit of outcome change (CPUC) was $189 vs. $497 for AI vs. TAU (where lower is considered optimal) - while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. Tweaking certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs. Given careful design and problem formulation, an AI simulation framework can approximate optimal decisions even in complex and uncertain environments. Future work is described that outlines potential lines of research and integration of machine learning algorithms for personalized medicine. Copyright © 2012 Elsevier B.V. All rights reserved.
Lorenzon, Laura; La Torre, Marco; Ziparo, Vincenzo; Montebelli, Francesco; Mercantini, Paolo; Balducci, Genoveffa; Ferri, Mario
2014-04-07
To report a meta-analysis of the studies that compared the laparoscopic with the open approach for colon cancer resection. Forty-seven manuscripts were reviewed, 33 of which were employed in the meta-analysis according to the PRISMA guidelines. The results were differentiated according to the study design (prospective randomized trials vs case-control series) and according to the tumor's location. Outcome measures included: (1) short-term results (operating times, blood losses, bowel function recovery, post-operative pain, return to oral intake, complications and hospital stay); (2) oncological adequateness (number of nodes harvested in the surgical specimens); (3) long-term results (including survival rates and incidence of incisional hernias); and (4) costs. Meta-analysis of trials provided evidence in support of the laparoscopic procedures for several short-term outcomes, including lower blood loss, earlier recovery of bowel function, earlier return to oral intake, shorter hospital stay and lower morbidity rate. Conversely, the operating time was confirmed to be shorter in open surgery. The same trend was observed when investigating case-control series and cancer by site, even though there are some concerns regarding the power of the studies in this latter field due to the small number of trials and the small samples of patients enrolled. The two approaches were comparable regarding the mean number of nodes harvested and long-term results, even though these variables were documented in the literature review but were not computable for meta-analysis. The analysis of costs documented lower costs for open surgery; however, just a few studies investigated the incidence of post-operative hernias. Laparoscopy is superior for the majority of short-term results. Future studies should better differentiate these approaches on the basis of tumor location and post-operative hernias.
MAN-004 Design Standards Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Timothy L.
2014-07-01
At Sandia National Laboratories in New Mexico (SNL/NM), the design, construction, operation, and maintenance of facilities is guided by industry standards, a graded approach, and the systematic analysis of life cycle benefits received for costs incurred. The design of the physical plant must ensure that the facilities are "fit for use," and provide conditions that effectively, efficiently, and safely support current and future mission needs. In addition, SNL/NM applies sustainable design principles, using an integrated whole-building design approach, from site planning to facility design, construction, and operation to ensure building resource efficiency and the health and productivity of occupants. The safety and health of the workforce and the public, any possible effects on the environment, and compliance with building codes take precedence over project issues, such as performance, cost, and schedule. These design standards generally apply to all disciplines on all SNL/NM projects. Architectural and engineering design must be both functional and cost-effective. Facility design must be tailored to fit its intended function, while emphasizing low-maintenance, energy-efficient, and energy-conscious design. Design facilities that can be maintained easily, with readily accessible equipment areas, low maintenance, and quality systems. To promote an orderly and efficient appearance, architectural features of new facilities must complement and enhance the existing architecture at the site. As an Architectural and Engineering (A/E) professional, you must advise the Project Manager when this approach is prohibitively expensive. You are encouraged to use professional judgment and ingenuity to produce a coordinated interdisciplinary design that is cost-effective, easily contractible or buildable, high-performing, aesthetically pleasing, and compliant with applicable building codes. Close coordination and development of civil, landscape, structural, architectural, fire protection, mechanical, electrical, telecommunications, and security features is expected to ensure compatibility with planned functional equipment and to facilitate constructability. If portions of the design are subcontracted to specialists, delivery of the finished design documents must not be considered complete until the subcontracted portions are also submitted for review. You must, along with support consultants, perform functional analyses and programming in developing design solutions. These solutions must reflect coordination of the competing functional, budgetary, and physical requirements for the project. During design phases, meetings between you and the SNL/NM Project Team to discuss and resolve design issues are required. These meetings are a normal part of the design process. For specific design-review requirements, see the project-specific Design Criteria. In addition to the design requirements described in this manual, instructive information is provided to explain the sustainable building practice goals for design, construction, operation, and maintenance of SNL/NM facilities. Please notify SNL/NM personnel of design best practices not included in this manual, so they can be incorporated in future updates.
Integrating QoS and security functions in an IP-VPN gateway
NASA Astrophysics Data System (ADS)
Fan, Kuo-Pao; Chang, Shu-Hsin; Lin, Kuan-Ming; Pen, Mau-Jy
2001-10-01
IP-based Virtual Private Networks are becoming more and more popular. They can not only reduce enterprise communication costs but also increase the revenue of the service provider. Common IP-VPN application types include Intranet VPN, Extranet VPN, and remote access VPN. For the large IP-VPN market, some vendors develop dedicated IP-VPN devices, while others add VPN functions into their existing network equipment such as routers, access gateways, etc. The functions in an IP-VPN device include security, QoS, and management. The common security functions supported are IPSec (IP Security), IKE (Internet Key Exchange), and firewall. The QoS functions include bandwidth control and packet scheduling. In the management component, policy-based network management is under standardization in the IETF. In this paper, we discuss issues on how to integrate the QoS and security functions in an IP-VPN gateway. We propose three approaches to do this: (1) perform QoS processing first, (2) perform IPSec first, and (3) reserve fixed bandwidth for IPSec. We also compare the advantages and disadvantages of the three proposed approaches.
Pisutha-Arnond, N; Chan, V W L; Iyer, M; Gavini, V; Thornton, K
2013-01-01
We introduce a new approach to represent a two-body direct correlation function (DCF) in order to alleviate the computational demand of classical density functional theory (CDFT) and enhance the predictive capability of the phase-field crystal (PFC) method. The approach utilizes a rational function fit (RFF) to approximate the two-body DCF in Fourier space. We use the RFF to show that short-wavelength contributions of the two-body DCF play an important role in determining the thermodynamic properties of materials. We further show that using the RFF to empirically parametrize the two-body DCF allows us to obtain the thermodynamic properties of solids and liquids that agree with the results of CDFT simulations with the full two-body DCF without incurring significant computational costs. In addition, the RFF can also be used to improve the representation of the two-body DCF in the PFC method. Last, the RFF allows for a real-space reformulation of the CDFT and PFC method, which enables descriptions of nonperiodic systems and the use of nonuniform and adaptive grids.
NASA Astrophysics Data System (ADS)
Rakotomanga, Prisca; Soussen, Charles; Blondel, Walter C. P. M.
2017-03-01
Diffuse reflectance spectroscopy (DRS) has been acknowledged as a valuable optical biopsy tool for in vivo characterization of pathological modifications in epithelial tissues such as cancer. In spatially resolved DRS, accurate and robust estimation of the optical parameters (OP) of biological tissues is a major challenge due to the complexity of the physical models. Solving this inverse problem requires considering three components: the forward model, the cost function, and the optimization algorithm. This paper presents a comparative numerical study of the performance in estimating OP depending on the choice made for each of these components. Mono- and bi-layer tissue models are considered. Monowavelength (scalar) absorption and scattering coefficients are estimated. As a forward model, diffusion approximation analytical solutions with and without noise are implemented. Several cost functions are evaluated, possibly including normalized data terms. Two local optimization methods, Levenberg-Marquardt and Trust-Region-Reflective, are considered. Because they may be sensitive to the initial setting, a global optimization approach is proposed to improve the estimation accuracy. This algorithm is based on repeated calls to the above-mentioned local methods, with initial parameters randomly sampled. Two global optimization methods, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are also implemented. Estimation performance is evaluated in terms of relative errors between the ground truth and the estimated values for each set of unknown OP. The combination of the number of variables to be estimated, the nature of the forward model, the cost function to be minimized and the optimization method is discussed.
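A minimal sketch of the multi-start local optimization strategy is shown below, using scipy's Trust-Region-Reflective least-squares solver restarted from randomly sampled initial parameters. The mono-exponential "forward model" and the parameter ranges are toy assumptions standing in for the diffusion-approximation model.

```python
# Multi-start local fitting: restart a bounded least-squares solver from
# random initial optical parameters and keep the best fit. Toy forward model.
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, distances):
    mu_a, mu_s = params                      # absorption, scattering (toy roles)
    return mu_s * np.exp(-mu_a * distances)  # stand-in for diffuse reflectance

rng = np.random.default_rng(1)
distances = np.linspace(0.5, 5.0, 20)
true_params = np.array([0.8, 2.5])
data = forward_model(true_params, distances) * (1 + 0.01 * rng.standard_normal(20))

def residuals(params):
    return (forward_model(params, distances) - data) / data   # normalized cost term

best = None
for _ in range(20):                          # global strategy: random restarts
    x0 = rng.uniform([0.01, 0.1], [5.0, 10.0])
    fit = least_squares(residuals, x0, bounds=([0.0, 0.0], [10.0, 20.0]), method="trf")
    if best is None or fit.cost < best.cost:
        best = fit

print("estimated parameters:", best.x, "true:", true_params)
```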
Liquid on Paper: Rapid Prototyping of Soft Functional Components for Paper Electronics
Long Han, Yu; Liu, Hao; Ouyang, Cheng; Jian Lu, Tian; Xu, Feng
2015-01-01
This paper describes a novel approach to fabricate paper-based electric circuits consisting of a paper matrix embedded with three-dimensional (3D) microchannels and liquid metal. Leveraging the high electric conductivity and good flowability of liquid metal, and the metallophobic property of paper, it is possible to keep the electric and mechanical functionality of the circuit even after a thousand cycles of deformation. Embedding liquid metal into a paper matrix is a promising method to rapidly fabricate low-cost, disposable, and soft electric circuits for electronics. As a demonstration, we designed a programmable displacement transducer and applied it as variable resistors and pressure sensors. The unique metallophobic property, combined with softness, low cost and light weight, makes paper an attractive alternative to other materials in which liquid metals are currently embedded. PMID:26129723
A Russian-American approach to the treatment of alcoholism in Russia: preliminary results.
Levine, B G; Nebelkopf, E
1998-01-01
The enormous cost of alcoholism to Russian society threatens to block the current transition towards a functioning democracy. The authors describe the introduction of a 12-Step based psychotherapeutic treatment approach at the Recovery Treatment Center in Moscow. This program is the result of extensive collaboration between American addiction experts, Russian psychologists and recovering alcoholics since 1990. Preliminary outcome data and analysis of in-depth interviews with fifteen patients who successfully completed treatment at this center suggest this approach can be successfully introduced into Russia in a way that has special relevance to the current democratic transformation in the society at large.
Document fraud deterrent strategies: four case studies
NASA Astrophysics Data System (ADS)
Mercer, John W.
1998-04-01
This paper discusses the approaches taken to deter fraud committed against four documents: the machine-readable passport; the machine-readable visa; the Consular Report of Birth Abroad; and the Border Crossing Card. General approaches are discussed first, with an emphasis on the reasons for the document, the conditions of its use and the information systems required for it to function. A cost model of counterfeit deterrence is introduced. Specific approaches to each of the four documents are then discussed, in light of the issuance circumstances and criteria, the intent of the issuing authority, the applicable international standards and the level of protection and fraud resistance appropriate for the document.
Object-oriented productivity metrics
NASA Technical Reports Server (NTRS)
Connell, John L.; Eller, Nancy
1992-01-01
Software productivity metrics are useful for sizing and costing proposed software and for measuring development productivity. Estimating and measuring source lines of code (SLOC) has proven to be a bad idea because it encourages writing more lines of code and using lower level languages. Function Point Analysis is an improved software metric system, but it is not compatible with newer rapid prototyping and object-oriented approaches to software development. A process is presented here for counting object-oriented effort points, based on a preliminary object-oriented analysis. It is proposed that this approach is compatible with object-oriented analysis, design, programming, and rapid prototyping. Statistics gathered on actual projects are presented to validate the approach.
NASA Technical Reports Server (NTRS)
Shell, Elaine M.; Lue, Yvonne; Chu, Martha I.
1999-01-01
Flight software (FSW) is a mission critical element of spacecraft functionality and performance. When ground operations personnel interface to a spacecraft, they are dealing almost entirely with onboard software. This software, even more than ground/flight communications systems, is expected to perform perfectly at all times during all phases of on-orbit mission life. Due to the fact that FSW can be reconfigured and reprogrammed to accommodate new spacecraft conditions, the on-orbit FSW maintenance team is usually significantly responsible for the long-term success of a science mission. Failure of FSW can result in very expensive operations work-around costs and lost science opportunities. There are three basic approaches to staffing on-orbit software maintenance, namely: (1) using the original developers, (2) using mission operations personnel, or (3) assembling a Center of Excellence for multi-spacecraft on-orbit FSW support. This paper explains a National Aeronautics and Space Administration, Goddard Space Flight Center (NASA/GSFC) experience related to the roles of on-orbit FSW maintenance personnel. It identifies the advantages and disadvantages of each of the three approaches to staffing the FSW roles, and demonstrates how a cost efficient on-orbit FSW Maintenance Center of Excellence can be established and maintained with significant return on the investment.
Chern, Alexander; Hunter, Jacob B; Bennett, Marc L
2017-01-01
To determine if cranioplasty techniques following translabyrinthine approaches to the cerebellopontine angle are cost-effective. Retrospective case series. One hundred eighty patients with available financial data who underwent translabyrinthine approaches at a single academic referral center between 2005 and 2015. Cranioplasty with a dural substitute, layered fat graft, and a resorbable mesh plate secured with screws. Main Outcome Measures: billing data were obtained for each patient's hospital course for translabyrinthine approaches and postoperative cerebrospinal fluid (CSF) leaks. One hundred nineteen patients underwent translabyrinthine approaches with an abdominal fat graft closure, with a median cost of $25759.89 (range, $15885.65-$136433.07). Sixty-one patients underwent translabyrinthine approaches with a dural substitute, abdominal fat graft, and a resorbable mesh for closure, with a median cost of $29314.97 (range, $17674.28-$111404.55). The median cost of a CSF leak was $50401.25 (range, $0-$384761.71). The additional cost of a CSF leak when shared by all patients who underwent translabyrinthine approaches is $6048.15. The addition of a dural substitute and a resorbable mesh plate after translabyrinthine approaches reduced the CSF leak rate from 12% to 1.9%, an 84.2% reduction, and a median savings per patient of $2932.23. Applying our cohort's billing data to previously published cranioplasty techniques, costs, and leak rate improvements after translabyrinthine approaches, all techniques were found to be cost-effective. Resorbable mesh cranioplasty is cost-effective at reducing CSF leaks after translabyrinthine approaches. Per our billing data and achieving the same CSF leak rate, cranioplasty costs exceeding $5090.53 are not cost-effective.
Costs of fire suppression forces based on cost-aggregation approach
González-Cabán, Armando; Charles W. McKetta; Thomas J. Mills
1984-01-01
A cost-aggregation approach has been developed for determining the cost of Fire Management Inputs (FMIs)-the direct fireline production units (personnel and equipment) used in initial attack and large-fire suppression activities. All components contributing to an FMI are identified, computed, and summed to estimate hourly costs. This approach can be applied to any FMI...
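A minimal sketch of the aggregation itself, assuming hypothetical line items: every component cost contributing to an FMI is summed and divided by the hours the resource is available.

```python
# Cost aggregation for one Fire Management Input: sum all contributing
# components, then express the total as an hourly cost. Figures are placeholders.
def hourly_cost(component_costs, available_hours):
    return sum(component_costs.values()) / available_hours

engine_crew = {              # hypothetical annual cost components (USD)
    "salaries_and_benefits": 310_000,
    "vehicle_depreciation":   28_000,
    "fuel_and_maintenance":   14_500,
    "training_and_overhead":  22_000,
}
print(f"engine crew: {hourly_cost(engine_crew, available_hours=1800):.2f} USD/hour")
```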
Direct metal transfer printing on flexible substrate for fabricating optics functional devices
NASA Astrophysics Data System (ADS)
Jiang, Yingjie; Zhou, Xiaohong; Zhang, Feng; Shi, Zhenwu; Chen, Linsen; Peng, Changsi
2015-11-01
New functional materials and devices based on metal patterns can be widely used in many new and expanding industries, such as flat panel displays, alternative energy, and sensors. In this paper, we introduce a new transfer printing method for fabricating metal optics functional devices. This method can directly transfer a metal pattern from a polyethylene terephthalate (PET)-supported UV or polydimethylsiloxane (PDMS) pattern to another PET substrate. Purely by taking advantage of the anaerobic UV curing adhesive (a-UV) on the PET substrate, the metal film can be easily peeled off from the micro/nano-structured surface. As a result, the metal film on the protrusions can be selectively transferred onto the target substrate to make it a metal functional surface, while the film at the bottom of the structure is not transferred. This method provides low-cost fabrication of metal thin-film devices by avoiding a high-cost lithography process. Compared with the conventional approach, this method yields smoother pattern edges and has a wider tolerance range for the original master mold. Future developments and potential applications of this metal transfer method are addressed.
Failure Mode Identification Through Clustering Analysis
NASA Technical Reports Server (NTRS)
Arunajadai, Srikesh G.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)
2002-01-01
Research has shown that nearly 80% of the costs and problems are created in product development and that cost and quality are essentially designed into products in the conceptual stage. Currently, failure identification procedures (such as FMEA (Failure Modes and Effects Analysis), FMECA (Failure Modes, Effects and Criticality Analysis) and FTA (Fault Tree Analysis)) and design of experiments are being used for quality control and for the detection of potential failure modes during the detail design stage or post-product launch. Though all of these methods have their own advantages, they do not give information as to what are the predominant failures that a designer should focus on while designing a product. This work uses a functional approach to identify failure modes, which hypothesizes that similarities exist between different failure modes based on the functionality of the product/component. In this paper, a statistical clustering procedure is proposed to retrieve information on the set of predominant failures that a function experiences. The various stages of the methodology are illustrated using a hypothetical design example.
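A minimal sketch of the clustering step, assuming a hypothetical matrix of failure-mode counts per product function; the function names, failure modes and counts are illustrative, and the paper's statistical clustering procedure may differ from k-means.

```python
# Cluster product functions by the failure modes observed for components that
# perform them; all names and counts below are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

functions = ["convert energy", "transfer liquid", "secure solid", "sense signal"]
failure_counts = np.array([   # columns: fatigue, wear, leak, short (assumed modes)
    [8, 2, 0, 1],
    [1, 3, 9, 0],
    [7, 1, 1, 0],
    [0, 1, 0, 8],
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(failure_counts)
for func, cluster in zip(functions, labels):
    print(f"{func}: cluster {cluster}")
```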
The Effect of Publicized Quality Information on Home Health Agency Choice.
Jung, Jeah Kyoungrae; Wu, Bingxiao; Kim, Hyunjee; Polsky, Daniel
2016-12-01
We examine consumers' use of publicized quality information in Medicare home health care markets, where consumer cost sharing and travel costs are absent. We report two findings. First, agencies with high quality scores are more likely to be preferred by consumers after the introduction of a public reporting program than before. Second, consumers' use of publicized quality information differs by patient group. Community-based patients have slightly larger responses to public reporting than hospital-discharged patients. Patients with functional limitations at the start of their care, at least among hospital-discharged patients, have a larger response to the reported functional outcome measure than those without functional limitations. In all cases of significant marginal effects, magnitudes are small. We conclude that the current public reporting approach is unlikely to have critical impacts on home health agency choice. Identifying and releasing quality information that is meaningful to consumers may help increase consumers' use of public reports. © The Author(s) 2015.
Lloyd-Smith, Patrick
2017-12-01
Decisions regarding the optimal provision of infection prevention and control resources depend on accurate estimates of the attributable costs of health care-associated infections. This is challenging given the skewed nature of health care cost data and the endogeneity of health care-associated infections. The objective of this study is to determine the hospital costs attributable to vancomycin-resistant enterococci (VRE) while accounting for endogeneity. This study builds on an attributable cost model, using data from a retrospective cohort study of 1,292 patients admitted to an urban hospital in Vancouver, Canada. Attributable hospital costs were estimated with multivariate generalized linear models (GLMs). To account for endogeneity, a control function approach was used. The analysis sample included 217 patients with health care-associated VRE. In the standard GLM, the costs attributable to VRE are $17,949 (SEM, $2,993). However, accounting for endogeneity, the attributable costs were estimated to range from $14,706 (SEM, $7,612) to $42,101 (SEM, $15,533). Across all model specifications, attributable costs are 76% higher on average when controlling for endogeneity. VRE was independently associated with increased hospital costs, and controlling for endogeneity led to higher attributable cost estimates. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
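A minimal sketch of a control-function estimation of attributable cost on synthetic data (the instrument, covariates, and data-generating process are invented and the study's actual specification is not reproduced here): a first-stage regression of the endogenous infection indicator yields residuals that then enter a second-stage gamma GLM with log link for cost.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
severity = rng.normal(size=n)        # observed covariate
unobserved = rng.normal(size=n)      # drives both infection and cost (source of endogeneity)
instrument = rng.normal(size=n)      # assumed to affect VRE acquisition but not cost directly

# Synthetic VRE acquisition and hospital cost.
vre = (0.8 * instrument + 0.5 * severity + unobserved + rng.normal(size=n) > 1.0).astype(float)
cost = np.exp(9.5 + 0.4 * vre + 0.3 * severity + 0.5 * unobserved + rng.normal(scale=0.3, size=n))

# Stage 1: model the endogenous regressor and keep the residuals.
X1 = sm.add_constant(np.column_stack([instrument, severity]))
resid = vre - sm.OLS(vre, X1).fit().fittedvalues

# Stage 2: GLM for cost including the first-stage residuals as a control function.
X2 = sm.add_constant(np.column_stack([vre, severity, resid]))
glm = sm.GLM(cost, X2, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(glm.params)  # coefficient on vre is the endogeneity-adjusted effect (log scale)
```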
Pharmacogenetics of clopidogrel: comparison between a standard and a rapid genetic testing.
Saracini, Claudia; Vestrini, Anna; Galora, Silvia; Armillis, Alessandra; Abbate, Rosanna; Giusti, Betti
2012-06-01
CYP2C19 variant alleles are independent predictors of clopidogrel response variability and occurrence of major adverse cardiovascular events in high-risk vascular patients on clopidogrel therapy. Increasing evidence suggests a combination of platelet function testing with CYP2C19 genetic testing may be more effective in identifying high-risk individuals for alternative antiplatelet therapeutic strategies. A crucial point in evaluating the use of these polymorphisms in clinical practice, besides test accuracy, is the cost of the genetic test and the rapid availability of the results. One hundred acute coronary syndrome patients were genotyped for the CYP2C19*2, *3, *4, *5, and *17 polymorphisms on two platforms: the Verigene® and TaqMan® systems. Genotyping results obtained by the classical TaqMan approach and the rapid Verigene approach showed 100% concordance for all five polymorphisms investigated. The Verigene system had a shorter turnaround time than TaqMan. The cost of reagents for TaqMan genotyping was lower than that for the Verigene system, but once the hands-on staff time and its associated cost were included, the overall cost was higher for TaqMan than for Verigene. The Verigene system demonstrated good performance in terms of turnaround time and cost for the evaluation of clopidogrel poor metabolizer status, providing genetic information in a time frame (206 min) suitable for therapeutic strategy decisions.
Optimal flight initiation distance.
Cooper, William E; Frederick, William G
2007-01-07
Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where decreasing cost of remaining and increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness because the best prey can do is break even during an encounter. We develop two optimality models, one applying when all expected future contribution to fitness (residual reproductive value) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and probability of being killed. Predictions match extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of benefit, escape cost, and risk functions.
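A toy numerical version of such an optimality model, choosing the flight initiation distance that maximizes expected fitness; the functional forms for benefit, escape cost, and risk, and all parameter values, are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

F0 = 1.0  # initial expected fitness (residual reproductive value)

def benefit(d):        # fitness gained by staying (e.g., feeding) falls if the prey flees earlier
    return 0.5 * np.exp(-d / 10.0)

def escape_cost(d):    # cost of fleeing grows with earlier, longer flights
    return 0.02 * d

def risk(d):           # probability of being killed rises steeply as the predator gets close
    return np.exp(-d / 5.0)

def expected_fitness(d):
    # Survive with probability (1 - risk) and keep initial fitness plus the net gain.
    return (1.0 - risk(d)) * (F0 + benefit(d) - escape_cost(d))

res = minimize_scalar(lambda d: -expected_fitness(d), bounds=(0.0, 60.0), method="bounded")
print(f"optimal flight initiation distance: {res.x:.1f} (arbitrary distance units)")
```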
A minimal cost function method for optimizing the age-Depth relation of deep-sea sediment cores
NASA Astrophysics Data System (ADS)
Brüggemann, Wolfgang
1992-08-01
The question of an optimal age-depth relation for deep-sea sediment cores has been raised frequently. The data from such cores (e.g., δ18O values) are used to test the astronomical theory of ice ages as established by Milankovitch in 1938. In this work, we use a minimal cost function approach to find simultaneously an optimal age-depth relation and a linear model that optimally links solar insolation or other model input with global ice volume. Thus a general tool for the calibration of deep-sea cores to arbitrary tuning targets is presented. In this inverse modeling type approach, an objective function is minimized that penalizes: (1) the deviation of the data from the theoretical linear model (whose transfer function can be computed analytically for a given age-depth relation) and (2) the violation of a set of plausible assumptions about the model, the data and the obtained correction of a first guess age-depth function. These assumptions have been suggested before but are now quantified and incorporated explicitly into the objective function as penalty terms. We formulate an optimization problem that is solved numerically by conjugate gradient type methods. Using this direct approach, we obtain high coherences in the Milankovitch frequency bands (over 90%). Not only the data time series but also the derived correction to a first guess linear age-depth function (and therefore the sedimentation rate) itself contains significant energy in a broad frequency band around 100 kyr. The use of a sedimentation rate which varies continuously on ice age time scales results in a shift of energy from 100 kyr in the original data spectrum to 41, 23, and 19 kyr in the spectrum of the corrected data. However, a large proportion of the data variance remains unexplained, particularly in the 100 kyr frequency band, where there is no significant input by orbital forcing. The presented method is applied to a real sediment core and to the SPECMAP stack, and results are compared with those obtained in earlier investigations.
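In spirit, the method minimizes a data-model misfit plus penalty terms on the correction to a first-guess age-depth relation, using conjugate-gradient iterations. A heavily simplified sketch (the tuning target, penalty weights, and parametrization are invented; the real method uses orbital forcing and an analytically computed transfer function):

```python
import numpy as np
from scipy.optimize import minimize

depth = np.linspace(0.0, 10.0, 100)                 # core depth (m)
first_guess_age = 10.0 * depth                      # first-guess linear age-depth (kyr), 10 kyr/m
rng = np.random.default_rng(1)
true_age = first_guess_age + 5.0 * np.sin(depth)    # "true" ages used only to synthesize data
data = np.sin(2 * np.pi * true_age / 41.0) + 0.1 * rng.normal(size=depth.size)

lam_size, lam_rough = 0.01, 10.0                    # penalty weights (assumed)

def cost(correction):
    age = first_guess_age + correction              # corrected age-depth relation
    misfit = np.sum((data - np.sin(2 * np.pi * age / 41.0)) ** 2)   # deviation from the stand-in model
    return (misfit
            + lam_size * np.sum(correction ** 2)             # keep the correction plausibly small
            + lam_rough * np.sum(np.diff(correction) ** 2))  # keep sedimentation-rate changes smooth

res = minimize(cost, np.zeros(depth.size), method="CG")
print("optimized objective:", round(res.fun, 3))
```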
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.
Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
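For context, the conventional forms and a Slater-overlap-like alternative can be written in a few lines; the polynomial prefactor below is the standard overlap of two identical Slater densities and is used here only as an illustrative stand-in for the Slater-ISA form, with arbitrary parameters:

```python
import numpy as np

def lennard_jones_rep(R, A, sigma):
    """Repulsive wall of the Lennard-Jones form, ~ A*(sigma/R)**12."""
    return A * (sigma / R) ** 12

def born_mayer(R, A, b):
    """Born-Mayer exponential repulsion, A*exp(-b*R)."""
    return A * np.exp(-b * R)

def slater_like(R, A, b):
    """Slater-overlap-like repulsion: a polynomial-in-(b*R) prefactor times exp(-b*R).
    The prefactor is the textbook overlap of two identical Slater densities; it is an
    illustrative stand-in for the Slater-ISA form, not the paper's fitted model."""
    x = b * R
    return A * (x ** 2 / 3.0 + x + 1.0) * np.exp(-x)

R = np.linspace(2.5, 6.0, 8)  # intermolecular separations (arbitrary units)
print(born_mayer(R, A=1.0e5, b=3.0))
print(slater_like(R, A=1.0e5, b=3.0))
```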
Scholkmann, Felix; Holper, Lisa; Wolf, Ursula; Wolf, Martin
2013-11-27
Since the first demonstration, about 10 years ago, of how to simultaneously measure brain activity using functional magnetic resonance imaging (fMRI) in two subjects, a new paradigm in neuroscience has been emerging: measuring brain activity from two or more people simultaneously, termed "hyperscanning". The hyperscanning approach has the potential to reveal inter-personal brain mechanisms underlying interaction-mediated brain-to-brain coupling. These mechanisms are engaged during real social interactions, and cannot be captured using single-subject recordings. In particular, functional near-infrared imaging (fNIRI) hyperscanning is a promising new method, offering a cost-effective, easy to apply and reliable technology to measure inter-personal interactions in a natural context. In this short review we report on fNIRI hyperscanning studies published so far and summarize opportunities and challenges for future studies.
Costs of abandoned coal mine reclamation and associated recreation benefits in Ohio.
Mishra, Shruti K; Hitzhusen, Frederick J; Sohngen, Brent L; Guldmann, Jean-Michel
2012-06-15
Two hundred years of coal mining in Ohio have degraded land and water resources, imposing social costs on its citizens. An interdisciplinary approach employing hydrology, geographic information systems, and a recreation visitation function model is used to estimate the damages from upstream coal mining to lakes in Ohio. The estimated recreational damages to five of the coal-mining-impacted lakes, using dissolved sulfate as the coal-mining impact indicator, amount to $21 million per year. Post-reclamation recreational benefits from reducing sulfate concentrations by 6.5% and 15% in the five impacted lakes were estimated to range from $1.89 to $4.92 million per year, with a net present value ranging from $14.56 million to $37.79 million. A benefit-cost analysis (BCA) of recreational benefits and coal mine reclamation costs provides some evidence for potential Pareto improvement by investing limited resources in reclamation projects. Copyright © 2012 Elsevier Ltd. All rights reserved.
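The reported net present values are consistent with discounting a constant annual benefit stream; in the sketch below the 5% discount rate and 10-year horizon are assumptions chosen because they roughly reproduce the reported range, not values stated in the abstract:

```python
def npv_of_annual_benefit(benefit_per_year, rate, years):
    """Present value of a constant annual benefit stream (end-of-year payments)."""
    return sum(benefit_per_year / (1.0 + rate) ** t for t in range(1, years + 1))

for benefit in (1.89e6, 4.92e6):   # post-reclamation recreational benefits per year
    npv = npv_of_annual_benefit(benefit, rate=0.05, years=10)
    print(f"annual ${benefit/1e6:.2f}M -> NPV ${npv/1e6:.2f}M")
# Prints roughly $14.6M and $38.0M, close to the reported $14.56M-$37.79M range.
```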
The economics of data acquisition computers for ST and MST radars
NASA Technical Reports Server (NTRS)
Watkins, B. J.
1983-01-01
Some low cost options for data acquisition computers for ST (stratosphere, troposphere) and MST (mesosphere, stratosphere, troposphere) radars are presented. The particular equipment discussed reflects choices made by the University of Alaska group, but of course many other options exist. The low cost microprocessor and array processor approach presented here has several advantages because of its modularity. An inexpensive system may be configured for a minimum performance ST radar, whereas a multiprocessor and/or a multiarray processor system may be used for a higher performance MST radar. This modularity is important for a network of radars because the initial cost is minimized while future upgrades will still be possible at minimal expense. This modularity also aids in lowering the cost of software development because system expansions should require few software changes. The functions of the radar computer will be to obtain Doppler spectra in near real time with some minor analysis such as vector wind determination.
Comparing top-down and bottom-up costing approaches for economic evaluation within social welfare.
Olsson, Tina M
2011-10-01
This study compares two approaches to the estimation of social welfare intervention costs, one "top-down" and the other "bottom-up", for a group of social welfare clients with severe problem behavior participating in a randomized trial. Intervention costs incurred over a two-year period were compared by intervention category (foster care placement, institutional placement, mentorship services, individual support services and structured support services), estimation method (price, micro costing, average cost) and treatment group (intervention, control). Analyses are based upon 2007 costs for 156 individuals receiving 404 interventions. Overall, both approaches were found to produce reliable estimates of intervention costs at the group level but not at the individual level. As the choice of approach can greatly affect the estimated mean difference, adjustment based on estimation approach should be incorporated into sensitivity analyses. Analysts must take care in assessing the purpose and perspective of the analysis when choosing a costing approach for use within economic evaluation.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a "bundle" of similar model parametrizations replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
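A loose Python sketch of the bundling idea (the paper's tutorial is in R, and the forward model, similarity grouping, and tolerance here are all invented): parametrizations are grouped into bundles and the likelihood is estimated once per bundle rather than once per parametrization, reducing the number of forward-model runs.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def forward_model(theta):
    """Stand-in forward model: maps a 2-vector of parameters to a predicted measurement."""
    return theta[0] ** 2 + 0.5 * theta[1] + rng.normal(scale=0.05)

observed = 1.3          # field measurement (invented)
tolerance = 0.1         # how close a prediction must be to "replicate" the measurement

# Prior sample of model parametrizations.
thetas = rng.uniform(-2, 2, size=(2000, 2))

# Bundle similar parametrizations and evaluate the forward model only a few times per bundle.
n_bundles, runs_per_bundle = 40, 5
labels = KMeans(n_clusters=n_bundles, n_init=5, random_state=0).fit_predict(thetas)

likelihood = np.zeros(n_bundles)
for b in range(n_bundles):
    members = thetas[labels == b]
    picks = rng.choice(len(members), size=min(runs_per_bundle, len(members)), replace=False)
    preds = np.array([forward_model(t) for t in members[picks]])
    likelihood[b] = np.mean(np.abs(preds - observed) < tolerance)

# Each parametrization inherits its bundle's likelihood (the bundling approximation).
theta_likelihood = likelihood[labels]
print("forward-model runs used:", n_bundles * runs_per_bundle, "instead of", len(thetas))
```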
The x-ray light valve: a low-cost, digital radiographic imaging system-spatial resolution
NASA Astrophysics Data System (ADS)
MacDougall, Robert D.; Koprinarov, Ivaylo; Webster, Christie A.; Rowlands, J. A.
2007-03-01
In recent years, new x-ray radiographic systems based on large area flat panel technology have revolutionized our capability to produce digital x-ray radiographic images. However, these active matrix flat panel imagers (AMFPIs) are extraordinarily expensive compared to the systems they are replacing. Thus there is a need for a low cost digital imaging system for general applications in radiology. Different approaches have been considered to make lower cost, integrated x-ray imaging devices for digital radiography, including: scanned projection x-ray, an integrated approach based on computed radiography technology, and optically demagnified x-ray screen/CCD systems. These approaches suffer from either high cost or high mechanical complexity and do not have the image quality of AMFPIs. We have identified a new approach - the X-ray Light Valve (XLV). The XLV has the potential to achieve immediate readout in an integrated system with image quality comparable to AMFPIs. The XLV concept combines three well-established and hence low-cost technologies: an amorphous selenium (a-Se) layer to convert x-rays to image charge, a liquid crystal (LC) cell as an analog display, and an optical scanner for image digitization. Here we investigate the spatial resolution possible with XLV systems. a-Se layers and LC cells have each been shown separately to have inherently very high spatial resolution. Due to the close electrostatic coupling in the XLV, it can be expected that the spatial resolution of this system will also be very high. A prototype XLV was made and a typical office scanner was used for image digitization. The modulation transfer function was measured and the limiting factor was seen to be the optical scanner. However, even with this limitation the XLV system is able to meet or exceed the resolution requirements for chest radiography.
Development of a Portfolio Management Approach with Case Study of the NASA Airspace Systems Program
NASA Technical Reports Server (NTRS)
Neitzke, Kurt W.; Hartman, Christopher L.
2012-01-01
A portfolio management approach was developed for the National Aeronautics and Space Administration's (NASA's) Airspace Systems Program (ASP). The purpose was to help inform ASP leadership regarding future investment decisions related to its existing portfolio of advanced technology concepts and capabilities (C/Cs) currently under development and to potentially identify new opportunities. The portfolio management approach is general in form and is extensible to other advanced technology development programs. It focuses on individual C/Cs and consists of three parts: 1) concept of operations (con-ops) development, 2) safety impact assessment, and 3) benefit-cost-risk (B-C-R) assessment. The first two parts are recommendations to ASP leaders and will be discussed only briefly, while the B-C-R part relates to the development of an assessment capability and will be discussed in greater detail. The B-C-R assessment capability enables estimation of the relative value of each C/C as compared with all other C/Cs in the ASP portfolio. Value is expressed in terms of a composite weighted utility function (WUF) rating, based on estimated benefits, costs, and risks. Benefit utility is estimated relative to achieving key NAS performance objectives, which are outlined in the ASP Strategic Plan. Risk utility focuses on C/C development and implementation risk, while cost utility focuses on the development and implementation portions of overall C/C life-cycle costs. Initial composite ratings of the ASP C/Cs were successfully generated; however, the limited availability of B-C-R information, which is used as input to the WUF model, reduced the meaningfulness of these initial investment ratings. Development of this approach, however, defined specific information-generation requirements for ASP C/C developers that will increase the meaningfulness of future B-C-R ratings.
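The composite rating is a weighted sum of benefit, cost, and risk utilities; a minimal sketch with hypothetical weights, utilities, and C/C names (not ASP data):

```python
def composite_wuf(benefit_u, cost_u, risk_u, weights=(0.5, 0.25, 0.25)):
    """Composite weighted utility: utilities are assumed already scaled to [0, 1],
    with higher meaning better (i.e., lower cost/risk map to higher utility)."""
    wb, wc, wr = weights
    return wb * benefit_u + wc * cost_u + wr * risk_u

# Hypothetical concepts/capabilities (C/Cs) in a portfolio.
portfolio = {
    "C/C-A": dict(benefit_u=0.8, cost_u=0.4, risk_u=0.6),
    "C/C-B": dict(benefit_u=0.6, cost_u=0.7, risk_u=0.8),
    "C/C-C": dict(benefit_u=0.9, cost_u=0.2, risk_u=0.3),
}

for name, u in sorted(portfolio.items(), key=lambda kv: -composite_wuf(**kv[1])):
    print(f"{name}: composite rating = {composite_wuf(**u):.2f}")
```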
Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut
2016-05-16
Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphologic information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of DCT enables reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT based approach is well suited to fuse morphological image information with functional lung imaging at low computational costs. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications.
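The role of the DCT is to represent the conductivity-change image with a small set of low-order coefficients, shrinking the dimensionality of the inverse problem. A minimal sketch of that reduction on a synthetic image (the EIT reconstruction itself is not shown, and the image and number of retained coefficients are arbitrary):

```python
import numpy as np
from scipy.fft import dctn, idctn

# Synthetic 32x32 "conductivity change" image with two smooth lung-like blobs.
y, x = np.mgrid[0:32, 0:32]
image = (np.exp(-((x - 10) ** 2 + (y - 16) ** 2) / 40.0)
         + np.exp(-((x - 22) ** 2 + (y - 16) ** 2) / 40.0))

coeffs = dctn(image, norm="ortho")

# Keep only a KxK block of low-order DCT coefficients: the reconstruction is now
# parametrized by K*K numbers instead of 32*32 pixel values.
K = 8
reduced = np.zeros_like(coeffs)
reduced[:K, :K] = coeffs[:K, :K]
approx = idctn(reduced, norm="ortho")

err = np.linalg.norm(approx - image) / np.linalg.norm(image)
print(f"{K*K} coefficients instead of {image.size} pixels, relative error {err:.3f}")
```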
Analysis of hospital costs as a basis for pricing services in Mali.
Audibert, Martine; Mathonnat, Jacky; Pareil, Delphine; Kabamba, Raymond
2007-01-01
In a move to achieve better equity in the funding of access to health care, particularly for the poor, better efficiency of hospital functioning and a better financial balance, the analysis of hospital costs in Mali brings several key elements to improve the pricing of medical services. The method utilized is the classical step-down process, which takes into consideration the entire set of direct and indirect costs borne by the hospital. Although this approach does not allow estimation of the economic cost of consultations, it is a useful contribution to assessing the financial activity of the hospital and improving its performance, financially speaking, through a more relevant user fees policy. The study shows that there are possibilities of cross-subsidies within the hospital or within services which improve the recovery of some of the current costs. It also leads to several proposals for pricing care while taking into account the constraints, the level of the hospital, its specific conditions and equity. Copyright (c) 2007 John Wiley & Sons, Ltd.
Is higher nursing home quality more costly?
Giorgio, L Di; Filippini, M; Masiero, G
2016-11-01
Widespread issues regarding quality in nursing homes call for an improved understanding of the relationship with costs. This relationship may differ in European countries, where care is mainly delivered by nonprofit providers. In accordance with the economic theory of production, we estimate a total cost function for nursing home services using data from 45 nursing homes in Switzerland between 2006 and 2010. Quality is measured by means of clinical indicators regarding process and outcome derived from the minimum data set. We consider both composite and single quality indicators. Contrary to most previous studies, we use panel data and control for omitted variables bias. This allows us to capture features specific to nursing homes that may explain differences in structural quality or cost levels. Additional analysis is provided to address simultaneity bias using an instrumental variable approach. We find evidence that poor levels of quality regarding outcome, as measured by the prevalence of severe pain and weight loss, lead to higher costs. This may have important implications for the design of payment schemes for nursing homes.
A mixed-mode traffic assignment model with new time-flow impedance function
NASA Astrophysics Data System (ADS)
Lin, Gui-Hua; Hu, Yu; Zou, Yuan-Yang
2018-01-01
Recently, with the wide adoption of electric vehicles, the transportation network has shown different characteristics and been further developed. In this paper, we present a new time-flow impedance function, which may be more realistic than the existing time-flow impedance functions. Based on this new impedance function, we present an optimization model for a mixed-mode traffic network in which battery electric vehicles (BEVs) and gasoline vehicles (GVs) are chosen. We suggest two approaches to handle the model: one is to use the interior point (IP) algorithm and the other is to employ the sequential quadratic programming (SQP) algorithm. Three numerical examples are presented to illustrate the efficiency of these approaches. In particular, our numerical results show that more travelers prefer to choose BEVs when the distance limit of BEVs is long enough and the unit operating cost of GVs is higher than that of BEVs, and that the SQP algorithm is faster than the IP algorithm.
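The paper's new impedance function is not reproduced here; as a stand-in, the sketch below uses the classic BPR time-flow form and solves a tiny two-route BEV/GV assignment with SciPy's SLSQP (an SQP method). The interior-point counterpart could be run by switching the solver to 'trust-constr'.

```python
import numpy as np
from scipy.optimize import minimize

t0 = np.array([10.0, 15.0])        # free-flow travel times of two routes (min)
cap = np.array([40.0, 60.0])       # route capacities (veh/min)
demand_bev, demand_gv = 30.0, 50.0
op_cost = {"bev": 0.5, "gv": 0.8}  # assumed unit operating costs per minute of travel

def travel_time(flow):
    # BPR-type time-flow impedance (a stand-in for the paper's new function).
    return t0 * (1.0 + 0.15 * (flow / cap) ** 4)

def total_cost(x):
    bev, gv = x[:2], x[2:]
    t = travel_time(bev + gv)
    return np.sum(t * bev) * op_cost["bev"] + np.sum(t * gv) * op_cost["gv"]

cons = [
    {"type": "eq", "fun": lambda x: x[0] + x[1] - demand_bev},   # BEV demand conservation
    {"type": "eq", "fun": lambda x: x[2] + x[3] - demand_gv},    # GV demand conservation
]
x0 = np.array([15.0, 15.0, 25.0, 25.0])
res = minimize(total_cost, x0, method="SLSQP", bounds=[(0, None)] * 4, constraints=cons)
print("BEV route flows:", res.x[:2].round(2), "GV route flows:", res.x[2:].round(2))
```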
Wilson, Philip; Wood, Rachael; Lykke, Kirsten; Hauskov Graungaard, Anette; Ertmann, Ruth Kirk; Andersen, Merethe Kirstine; Haavet, Ole Rikard; Lagerløv, Per; Abildsnes, Eirik; Dahli, Mina P; Mäkelä, Marjukka; Varinen, Aleksi; Hietanen, Merja
2018-05-01
Few areas of medicine demonstrate such international divergence as child development screening and surveillance. Many countries have nationally mandated surveillance policies, but the content of programmes and mechanisms for delivery vary enormously. The cost of programmes is substantial but no economic evaluations have been carried out. We have critically examined the history, underlying philosophy, content and delivery of programmes for child development assessment in five countries with comprehensive publicly funded health services (Denmark, Finland, Norway, Scotland and Sweden). The specific focus of this article is on motor, social, emotional, behavioural and global cognitive functioning including language. Variations in developmental surveillance programmes are substantially explained by historical factors and gradual evolution although Scotland has undergone radical changes in approach. No elements of universal developmental assessment programmes meet World Health Organization screening criteria, although some assessments are configured as screening activities. The roles of doctors and nurses vary greatly by country as do the timing, content and likely costs of programmes. Inter-professional communication presents challenges to all the studied health services. No programme has evidence for improved health outcomes or cost effectiveness. Developmental surveillance programmes vary greatly and their structure appears to be driven by historical factors as much as by evidence. Consensus should be reached about which surveillance activities constitute screening, and the predictive validity of these components needs to be established and judged against World Health Organization screening criteria. Costs and consequences of specific programmes should be assessed, and the issue of inter-professional communication about children at remediable developmental risk should be prioritised.
Computational study of elements of stability of a four-helix bundle protein biosurfactant
NASA Astrophysics Data System (ADS)
Schaller, Andrea; Connors, Natalie K.; Dwyer, Mirjana Dimitrijev; Oelmeier, Stefan A.; Hubbuch, Jürgen; Middelberg, Anton P. J.
2015-01-01
Biosurfactants are surface-active molecules produced principally by microorganisms. They are a sustainable alternative to chemically-synthesized surfactants, having the advantages of being non-toxic, highly functional, eco-friendly and biodegradable. However they are currently only used in a few industrial products due to costs associated with production and purification, which exceed those for commodity chemical surfactants. DAMP4, a member of a four-helix bundle biosurfactant protein family, can be produced in soluble form and at high yield in Escherichia coli, and can be recovered using a facile thermal phase-separation approach. As such, it encompasses an interesting synergy of biomolecular and chemical engineering with prospects for low-cost production even for industrial sectors. DAMP4 is highly functional, and due to its extraordinary thermal stability it can be purified in a simple two-step process, in which the combination of high temperature and salt leads to denaturation of all contaminants, whereas DAMP4 stays stable in solution and can be recovered by filtration. This study aimed to characterize and understand the fundamental drivers of DAMP4 stability to guide further process and surfactant design studies. The complementary use of experiments and molecular dynamics simulation revealed a broad pH and temperature tolerance for DAMP4, with a melting point of 122.4 °C, suggesting the hydrophobic core as the major contributor to thermal stability. Simulation of systematically created in silico variants of DAMP4 showed an influence of number and location of hydrophilic mutations in the hydrophobic core on stability, demonstrating a tolerance of up to three mutations before a strong loss in stability occurred. The results suggest a consideration of a balance of stability, functionality and kinetics for new designs according to their application, aiming for maximal functionality but at adequate stability to allow for cost-efficient production using thermal phase separation approaches.
Space Station overall management approach for operations
NASA Technical Reports Server (NTRS)
Paules, G.
1986-01-01
An Operations Management Concept developed by NASA for its Space Station Program is discussed. The operational goals, themes, and design principles established during program development are summarized. The major operations functions are described, including: space systems operations, user support operations, prelaunch/postlanding operations, logistics support operations, market research, and cost/financial management. Strategic, tactical, and execution levels of operational decision-making are defined.
Breaking through with Thin-Client Technologies: A Cost Effective Approach for Academic Libraries.
ERIC Educational Resources Information Center
Elbaz, Sohair W.; Stewart, Christofer
This paper provides an overview of thin-client/server computing in higher education. Thin-clients are like PCs in appearance, but they do not house hard drives or localized operating systems and cannot function without being connected to a server. Two types of thin-clients are described: the Network Computer (NC) and the Windows Terminal (WT).…
Su, Xin-Yao; Xue, Jian-Ping; Wang, Cai-Xia
2016-11-01
The functional ingredients in Chinese materia medica are the main active substances of traditional Chinese medicine, and most of them are secondary metabolite derivatives. Until now, the main way to obtain these functional ingredients has been direct extraction from Chinese materia medica. However, the yield is very low because of high extraction costs and the declining availability of medicinal plants. Synthetic biology, as a new microbial approach, enables large-scale production of functional ingredients and can greatly ease the shortage of traditional Chinese medicine ingredients. This review mainly focuses on recent advances in synthetic biology for the production of functional ingredients. Copyright© by the Chinese Pharmaceutical Association.
Optimal design application on the advanced aeroelastic rotor blade
NASA Technical Reports Server (NTRS)
Wei, F. S.; Jones, R.
1985-01-01
The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The databases obtained from the rotorcraft flight simulation program C81 and the Myklestad mode shape program are represented analytically as functions of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. This method can also be utilized, without any additional effort, to ascertain the effect of a particular cost function composed of several objective functions with different weighting factors for various mission requirements.
New, strategic outsourcing models to meet changing clinical development needs.
Jones, Janet; Minor, Michael
2010-04-01
The impact of increasing clinical costs and the need for more data to support higher efficacy demands and overcome regulatory hurdles for market entry means that every company is faced with the challenge of how to do more with a smaller budget. As budgets get squeezed, the pharmaceutical industry has been looking at how to contain or reduce cost and support an increased number of projects. With the growing sophistication of outsourcing, this is an increasingly important area of focus. Some pharmaceutical companies have moved from tactical, case-by-case outsourcing to new, more strategic relationships, which involve outsourcing functions that were historically held as core pharmaceutical functions. An increasing number of sponsors are looking at strategic relationships based on more creative outsourcing approaches. As the need for these outsourcing models and the sophistication of the sponsors and CROs involved in them grow, these approaches are becoming more transformational and need to be based on a strong partnership. Lessons learned from working with sponsors in a partnership model have been examined and two key challenges addressed in detail: the need for bilateral central control through a strong governance model and the importance of early planning and commitment.
GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.
Wang, Fei; Li, Hong; Lu, Mingquan
2017-06-30
Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.
Zhang, Jiang; Liu, Qi; Chen, Huafu; Yuan, Zhen; Huang, Jin; Deng, Lihua; Lu, Fengmei; Zhang, Junpeng; Wang, Yuqing; Wang, Mingwen; Chen, Liangyin
2015-01-01
Clustering analysis methods have been widely applied to identifying the functional brain networks of a multitask paradigm. However, the previously used clustering analysis techniques are computationally expensive and thus impractical for clinical applications. In this study a novel method, called SOM-SAPC, which combines self-organizing mapping (SOM) and supervised affinity propagation clustering (SAPC), is proposed and implemented to identify the motor execution (ME) and motor imagery (MI) networks. In SOM-SAPC, SOM is first performed to process the fMRI data and SAPC is then utilized for clustering the patterns of functional networks. As a result, SOM-SAPC is able to significantly reduce the computational cost for brain network analysis. Simulation and clinical tests involving ME and MI were conducted based on SOM-SAPC, and the analysis results indicated that functional brain networks were clearly identified with different response patterns and reduced computational cost. In particular, three activation clusters were clearly revealed, which include parts of the visual, ME and MI functional networks. These findings validated that SOM-SAPC is an effective and robust method to analyze fMRI data with multiple tasks.
Mitigation of epidemics in contact networks through optimal contact adaptation.
Youssef, Mina; Scoglio, Caterina
2013-08-01
This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of total infection cases and minimization of contact weight reduction. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find the near-optimal solution in a decentralized way, we propose two heuristics based on a Bang-Bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known Bang-Bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results show the importance of awareness of the infection level at which the mitigation strategies are effectively applied to the contact weights.
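The single objective being traded off is a weighted sum of cumulative infections and cumulative contact-weight reduction. A toy discrete-time illustration on a small weighted contact network, using a simple threshold (Bang-Bang-like) rule rather than the Pontryagin solution; all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps = 50, 120
W = rng.uniform(0.0, 0.2, size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)  # contact weights
w_min = 0.02                      # global minimum contact level preserved in the network
beta, gamma = 1.0, 0.1            # infection / recovery rates
c_inf, c_red = 1.0, 0.5           # weights of infection cost vs contact-reduction cost

def run(threshold):
    S = np.ones(n); I = np.zeros(n); I[0], S[0] = 1.0, 0.0
    cost = 0.0
    for _ in range(steps):
        # Bang-Bang-like adaptation: scale contacts down when prevalence exceeds the threshold.
        u = 1.0 if I.mean() < threshold else w_min / W.mean()
        Wc = np.maximum(W * u, w_min * (W > 0))          # never drop below the minimum contact level
        new_inf = np.clip(beta * S * (Wc @ I), 0, S)
        S, I = S - new_inf, I + new_inf - gamma * I
        cost += c_inf * I.sum() + c_red * (W - Wc).sum() # combined objective, accumulated over time
    return cost

for thr in (0.02, 0.1, 1.1):      # 1.1 means contacts are never adapted
    print(f"threshold {thr}: combined objective = {run(thr):.1f}")
```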
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Harman, Rick; Bar-Itzhack, Itzhack
1998-01-01
An innovative approach to autonomous attitude and trajectory estimation is available using only magnetic field data and rate data. The estimation is performed simultaneously using an Extended Kalman Filter (EKF), a well known algorithm used extensively in onboard applications. The magnetic field is measured on a satellite by a magnetometer, an inexpensive and reliable sensor flown on virtually all satellites in low earth orbit. Rate data is provided by a gyro, which can be costly. This system has been developed and successfully tested in a post-processing mode using magnetometer and gyro data from 4 satellites supported by the Flight Dynamics Division at Goddard. In order for this system to be truly low cost, an alternative source for rate data must be utilized. An independent system which estimates spacecraft rate has been successfully developed and tested using only magnetometer data or a combination of magnetometer data and sun sensor data, which is less costly than a gyro. This system also uses an EKF. Merging the two systems will provide an extremely low cost, autonomous approach to attitude and trajectory estimation. In this work we provide the theoretical background of the combined system. The measurement matrix is developed by combining the measurement matrix of the orbit and attitude estimation EKF with the measurement matrix of the rate estimation EKF, which is composed of a pseudo-measurement which makes the effective measurement a function of the angular velocity. Associated with this is the development of the noise covariance matrix associated with the original measurement combined with the new pseudo-measurement. In addition, the combination of the dynamics from the two systems is presented along with preliminary test results.
Final Scientific/Technical Report -- Single-Junction Organic Solar Cells with >15% Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starkenburg, Daken; Weldeab, Asmerom; Fagnani, Dan
Organic solar cells have the potential to offer low-cost solar energy conversion due to low material costs and compatibility with low-temperature and high throughput manufacturing processes. This project aims to further improve the efficiency of organic solar cells by applying a previously demonstrated molecular self-assembly approach to longer-wavelength light-absorbing organic materials. The team at the University of Florida designed and synthesized a series of low-bandgap organic semiconductors with functional hydrogen-bonding groups, studied their assembly characteristics and optoelectronic properties in solid-state thin film, and fabricated organic solar cells using solution processing. These new organic materials absorb light up to 800 nm wavelength, and provide a maximum open-circuit voltage of 1.05 V in the resulting solar cells. The results further confirmed the effectiveness of this approach to guide the assembly of organic semiconductors in thin films to yield higher photovoltaic performance for solar energy conversion. Through this project, we have gained important understanding on designing, synthesizing, and processing organic semiconductors that contain appropriately functionalized groups to control the morphology of the organic photoactive layer in solar cells. Such fundamental knowledge could be used to further develop new functional organic materials to achieve higher photovoltaic performance, and contribute to the eventual commercialization of the organic solar cell technology.
McBain, Ryan K; Salhi, Carmel; Hann, Katrina; Salomon, Joshua A; Kim, Jane J; Betancourt, Theresa S
2016-05-01
One billion children live in war-affected regions of the world. We conducted the first cost-effectiveness analysis of an intervention for war-affected youth in sub-Saharan Africa, as well as a broader cost analysis. The Youth Readiness Intervention (YRI) is a behavioural treatment for reducing functional impairment associated with psychological distress among war-affected young persons. A randomized controlled trial was conducted in Freetown, Sierra Leone, from July 2012 to July 2013. Participants (n = 436, aged 15-24) were randomized to YRI (n = 222) or care as usual (n = 214). Functional impairment was indexed by the World Health Organization Disability Assessment Scale; scores were converted to quality-adjusted life years (QALYs). An 'ingredients approach' estimated financial and economic costs, assuming a societal perspective. Incremental cost-effectiveness ratios (ICERs) were also expressed in terms of gains across dimensions of mental health and schooling. Secondary analyses explored whether intervention effects were largest among those worst-off (upper quartile) at baseline. Retention at 6-month follow-up was 85% (n = 371). The estimated economic cost of the intervention was $104 per participant. Functional impairment was lower among YRI recipients, compared with controls, following the intervention but not at 6-month follow-up, and yielded an ICER of $7260 per QALY gained. At 8-month follow-up, teachers' interviews indicated higher school enrolment among YRI recipients [P < 0.001, odds ratio (OR) 8.9], denoting a cost of $431 per additional school year gained, as well as better school attendance (P = 0.007, OR 34.9) and performance (P = 0.03, effect size = -1.31). Secondary analyses indicated that the intervention was cost-effective among those worst-off at baseline, yielding an ICER of $3564 per QALY gained. The YRI is not cost-effective at a willingness-to-pay threshold of three times average gross domestic product per capita. However, results indicate that the YRI translated into a range of benefits, such as improved school enrolment, not captured by cost-effectiveness analysis. We also outline areas for modification to improve cost-effectiveness in future trials. clinicaltrials.gov Identifier: RPCGA-YRI-21003. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
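The headline ratio follows from the incremental cost and QALY figures; a small sketch of the ICER calculation and threshold comparison (the GDP-per-capita value is a placeholder, not a figure from the study):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return delta_cost / delta_qaly

cost_per_participant = 104.0          # economic cost of the YRI per participant
reported_icer = 7260.0                # $ per QALY gained (full sample)
implied_qaly_gain = cost_per_participant / reported_icer
print(f"implied incremental QALY gain per participant ~ {implied_qaly_gain:.4f}")

# Cost-effectiveness is commonly judged against ~3x GDP per capita per QALY.
gdp_per_capita = 500.0                # placeholder value, not from the study
threshold = 3 * gdp_per_capita
print("cost-effective at this threshold:", reported_icer <= threshold)
```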
A cost comparison of traditional drainage and SUDS in Scotland.
Duffy, A; Jefferies, C; Waddell, G; Shanks, G; Blackwood, D; Watkins, A
2008-01-01
The Dunfermline Eastern Expansion (DEX) is a 350 ha mixed development which commenced in 1996. Downstream water quality and flooding issues necessitated a holistic approach to drainage planning and the site has become a European showcase for the application of Sustainable Urban Drainage Systems (SUDS). However, there is minimal data available regarding the real costs of operating and maintaining SUDS to ensure they continue to perform as per their design function. This remains one of the primary barriers to the uptake and adoption of SUDS. This paper reports on what is understood to be the only study in the UK where actual costs of constructing and maintaining SUDS have been compared to an equivalent traditional drainage solution. To compare SUDS costs with traditional drainage, capital and maintenance costs of underground storage chambers of analogous storage volumes were estimated. A whole life costing methodology was then applied to data gathered. The main objective was to produce a reliable and robust cost comparison between SUDS and traditional drainage. The cost analysis is supportive of SUDS and indicates that well designed and maintained SUDS are more cost effective to construct, and cost less to maintain than traditional drainage solutions which are unable to meet the environmental requirements of current legislation. (c) IWA Publishing 2008.
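The whole-life costing logic behind the comparison is discounted capital plus maintenance over the appraisal period; a small sketch with hypothetical figures (the DEX study's actual costs are not reproduced here):

```python
def whole_life_cost(capital, annual_maintenance, rate, years):
    """Capital outlay plus discounted annual maintenance over the appraisal period."""
    return capital + sum(annual_maintenance / (1.0 + rate) ** t for t in range(1, years + 1))

# Hypothetical drainage options for one subcatchment (illustrative numbers only).
suds = whole_life_cost(capital=120_000, annual_maintenance=2_000, rate=0.035, years=60)
conventional = whole_life_cost(capital=150_000, annual_maintenance=3_500, rate=0.035, years=60)
print(f"SUDS whole-life cost:         £{suds:,.0f}")
print(f"Conventional whole-life cost: £{conventional:,.0f}")
```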
Design and Synthesis of Multigraft Copolymer Thermoplastic Elastomers: Superelastomers
Wang, Huiqun; Lu, Wei; Wang, Weiyu; ...
2017-09-28
Thermoplastic elastomers (TPEs) have been widely studied because of their recyclability, good processibility, low production cost, and unique performance. The building of graft-type architectures can greatly improve mechanical properties of TPEs. This review focuses on the advances in different approaches to synthesize multigraft copolymer TPEs. Anionic polymerization techniques allow for the synthesis of well-defined macromolecular structures and compositions, with great control over the molecular weight, polydispersity, branch spacing, number of branch points, and branch point functionality. Progress in emulsion polymerization offers potential approaches to commercialize these types of materials with low production cost via simple operations. Moreover, the use of multigraft architectures provides a solution to the limited elongational properties of all-acrylic TPEs, which can greatly expand their potential application range. The combination of different polymerization techniques, the introduction of new chemical compositions, and the incorporation of sustainable sources are expected to be further investigated in this area in coming years.
Design for Reliability and Safety Approach for the New NASA Launch Vehicle
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.; Weldon, Danny M.
2007-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program intended for sending crew and cargo to the International Space Station (ISS), to the moon, and beyond. This program is called Constellation. As part of the Constellation program, NASA is developing new launch vehicles aimed at significantly increasing safety and reliability, reducing the cost of accessing space, and providing a growth path for manned space exploration. Achieving these goals requires a rigorous process that addresses reliability, safety, and cost upfront and throughout all the phases of the life cycle of the program. This paper discusses the "Design for Reliability and Safety" approach for the NASA new launch vehicles, the ARES I and ARES V. Specifically, the paper addresses the use of an integrated probabilistic functional analysis to support the design analysis cycle and a probabilistic risk assessment (PRA) to support the preliminary design and beyond.
Spitz, Jérôme; Ridoux, Vincent; Brind'Amour, Anik
2014-09-01
Understanding 'Why a prey is a prey for a given predator?' can be facilitated through trait-based approaches that identify linkages between prey and predator morphological and ecological characteristics and highlight key functions involved in prey selection. Enhanced understanding of the functional relationships between predators and their prey is now essential to go beyond the traditional taxonomic framework of dietary studies and to improve our knowledge of ecosystem functioning for wildlife conservation and management. We test the relevance of a three-matrix approach in foraging ecology among a marine mammal community in the northeast Atlantic to identify the key functional traits shaping prey selection processes regardless of the taxonomy of both the predators and prey. Our study reveals that prey found in the diet of marine mammals possess functional traits which are directly and significantly linked to predator characteristics, allowing the establishment of a functional typology of marine mammal-prey relationships. We found prey selection of marine mammals was primarily shaped by physiological and morphological traits of both predators and prey, confirming that energetic costs of foraging strategies and muscular performance are major drivers of prey selection in marine mammals. We demonstrate that trait-based approaches can provide a new definition of the resource needs of predators. This framework can be used to anticipate bottom-up effects on marine predator population dynamics and to identify predators which are sensitive to the loss of key prey functional traits when prey availability is reduced. © 2014 The Authors. Journal of Animal Ecology © 2014 British Ecological Society.
Proactive replica checking to assure reliability of data in cloud storage with minimum replication
NASA Astrophysics Data System (ADS)
Murarka, Damini; Maheswari, G. Uma
2017-11-01
The two major issues for cloud storage systems are data reliability and storage costs. For data reliability protection, the multi-replica replication strategy used in most current clouds incurs huge storage consumption, leading to a large storage cost for cloud applications. This paper presents a cost-efficient data reliability mechanism named PRCR to cut back cloud storage consumption. PRCR ensures the data reliability of large cloud datasets with minimum replication, which can also serve as a cost-effective benchmark for replication. Evaluation shows that, compared with the conventional three-replica approach, PRCR can reduce cloud storage consumption from one third of the total to only a small fraction, thus considerably reducing the cloud storage cost.
Jacobs, Christopher; Lambourne, Luke; Xia, Yu; ...
2017-01-20
Here, system-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.
Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.
Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir
2018-04-01
In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model with empirical data, the parameters of therapy function are estimated. The reported research works have not presented any algorithm to determine the optimal parameters of therapy function. In this study, a logarithmic therapy function is entered in the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer will be used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and PDF of tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. The numerical results are finally given to demonstrate the effectiveness of the proposed method.
Molinos-Senante, María; Hernández-Sancho, Francesc; Sala-Garrido, Ramón
2012-01-01
The concept of sustainability involves the integration of economic, environmental, and social aspects and this also applies in the field of wastewater treatment. Economic feasibility studies are a key tool for selecting the most appropriate option from a set of technological proposals. Moreover, these studies are needed to assess the viability of transferring new technologies from pilot-scale to full-scale. In traditional economic feasibility studies, the benefits that have no market price, such as environmental benefits, are not considered and are therefore underestimated. To overcome this limitation, we propose a new methodology to assess the economic viability of wastewater treatment technologies that considers internal and external impacts. The estimation of the costs is based on the use of cost functions. To quantify the environmental benefits from wastewater treatment, the distance function methodology is proposed to estimate the shadow price of each pollutant removed in the wastewater treatment. The application of this methodological approach by decision makers enables the calculation of the true costs and benefits associated with each alternative technology. The proposed methodology is presented as a useful tool to support decision making.
Seamless interworking architecture for WBAN in heterogeneous wireless networks with QoS guarantees.
Khan, Pervez; Ullah, Niamat; Ullah, Sana; Kwak, Kyung Sup
2011-10-01
The IEEE 802.15.6 standard is a communication standard optimized for low-power and short-range in-body/on-body nodes to serve a variety of medical, consumer electronics and entertainment applications. Providing high mobility with guaranteed Quality of Service (QoS) to a WBAN user in heterogeneous wireless networks is a challenging task. A WBAN uses a Personal Digital Assistant (PDA) to gather data from body sensors and forward it to a remote server through wide-range wireless networks. In this paper, we present a coexistence study of WBAN with Wireless Local Area Networks (WLAN) and Wireless Wide Area Networks (WWANs). The main issue is the interworking of WBAN in heterogeneous wireless networks, including seamless handover, QoS, emergency services, cooperation and security. We propose a Seamless Interworking Architecture (SIA) for WBAN in heterogeneous wireless networks based on a cost function. The cost function is based on power consumption and data throughput costs. Our simulation results show that the proposed scheme outperforms typical approaches in terms of throughput, delay and packet loss rate.
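The cost function described above combines power consumption and data throughput; a minimal sketch of such a weighted, normalized interface-selection cost is given below. The weights and candidate-network figures are hypothetical, and the paper's exact formulation is not reproduced here.

```python
def interface_cost(power_mw, rate_mbps, p_max=300.0, r_max=30.0, w_power=0.6, w_rate=0.4):
    """Hypothetical normalized handover cost: lower is better; weights encode the QoS preference."""
    return w_power * (power_mw / p_max) + w_rate * (1.0 - rate_mbps / r_max)

# Toy candidate networks seen by the WBAN's PDA: (transmit power in mW, throughput in Mbps).
candidates = {"WLAN": (180.0, 24.0), "WWAN": (250.0, 5.0)}
best = min(candidates, key=lambda name: interface_cost(*candidates[name]))
print(best)  # hand over to the interface with the lowest cost
```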
NASA Technical Reports Server (NTRS)
Butler, Madeline J.; Sonneborn, George; Perkins, Dorothy C.
1994-01-01
The Mission Operations and Data Systems Directorate (MO&DSD, Code 500), the Space Sciences Directorate (Code 600), and the Flight Projects Directorate (Code 400) have developed a new approach to combine the science and mission operations for the FUSE mission. FUSE, the last of the Delta-class Explorer missions, will obtain high resolution far ultraviolet spectra (910 - 1220 A) of stellar and extragalactic sources to study the evolution of galaxies and conditions in the early universe. FUSE will be launched in 2000 into a 24-hour highly eccentric orbit. Science operations will be conducted in real time for 16-18 hours per day, in a manner similar to the operations performed today for the International Ultraviolet Explorer. In a radical departure from previous missions, the operations concept combines spacecraft and science operations and data processing functions in a single facility to be housed in the Laboratory for Astronomy and Solar Physics (Code 680). A small missions operations team will provide the spacecraft control, telescope operations and data handling functions in a facility designated as the Science and Mission Operations Center (SMOC). This approach will utilize the Transportable Payload Operations Control Center (TPOCC) architecture for both spacecraft and instrument commanding. Other concepts of integrated operations being developed by the Code 500 Renaissance Project will also be employed for the FUSE SMOC. The primary objective of this approach is to reduce development and mission operations costs. The operations concept, integration of mission and science operations, and extensive use of existing hardware and software tools will decrease both development and operations costs extensively. This paper describes the FUSE operations concept, discusses the systems engineering approach used for its development, and the software, hardware and management tools that will make its implementation feasible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druskin, V.; Lee, Ping; Knizhnerman, L.
There is now a growing interest in the area of using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. In the event that applying the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using extended Krylov subspaces, generated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
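For context, the basic polynomial Krylov approximation of a matrix-function action projects the matrix onto an m-dimensional Krylov subspace with the Arnoldi process and evaluates the function on the small projected matrix; the extended subspaces mentioned above additionally use actions of the inverse. The sketch below shows only the basic variant, with a stand-in operator, and is not the authors' code.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expm_action(A, b, t=1.0, m=30):
    """Approximate exp(t*A) @ b from an m-dimensional (polynomial) Krylov subspace."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

# Usage on a 1-D diffusion-like operator (method-of-lines ODE right-hand side).
n = 200
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
u = arnoldi_expm_action(A, np.ones(n), t=0.1)
```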
Applications and requirements for real-time simulators in ground-test facilities
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Blech, Richard A.
1986-01-01
This report relates simulator functions and capabilities to the operation of ground test facilities, in general. The potential benefits of having a simulator are described to aid in the selection of desired applications for a specific facility. Configuration options for integrating a simulator into the facility control system are discussed, and a logical approach to configuration selection based on desired applications is presented. The functional and data path requirements to support selected applications and configurations are defined. Finally, practical considerations for implementation (i.e., available hardware and costs) are discussed.
Rehabilitation Treatment and Progress of Traumatic Brain Injury Dysfunction
Dang, Baoqi; Chen, Wenli; He, Weichun
2017-01-01
Traumatic brain injury (TBI) is a major cause of chronic disability. Worldwide, it is the leading cause of disability in the under-40s. Behavior, mood, and cognition, particularly memory, attention, and executive function, are commonly impaired by TBI. Spending to assist TBI survivors with disabilities is estimated to be substantial each year. Such impaired functional outcomes following TBI can be improved via various rehabilitative approaches. The objective of the present paper is to review the current rehabilitation treatment of traumatic brain injury in adults. PMID:28491478
Molinos-Senante, María; Mocholí-Arce, Manuel; Sala-Garrido, Ramon
2016-10-15
Water scarcity is one of the main problems faced by many regions in the XXIst century. In this context, the need to reduce leakages from water distribution systems has gained almost universal acceptance. The concept of sustainable economic level of leakage (SELL) has been proposed to internalize the environmental and resource costs within economic level of leakage calculations. However, because these costs are not set by the market, they have not often been calculated. In this paper, the directional-distance function was used to estimate the shadow price of leakages as a proxy of their environmental and resource costs. This is a pioneering approach to the economic valuation of leakage externalities. An empirical application was carried out for the main Chilean water companies. The estimated results indicated that for 2014, the average shadow price of leakages was approximately 32% of the price of the water delivered. Moreover, as a sensitivity analysis, the shadow prices of the leakages were calculated from the perspective of the water companies' managers and the regulator. The methodology and findings of this study are essential for supporting the decision process of reducing leakage, contributing to the improvement of economic, social and environmental efficiency and sustainability of urban water supplies. Copyright © 2016 Elsevier B.V. All rights reserved.
Kuyken, Willem; Nuthall, Elizabeth; Byford, Sarah; Crane, Catherine; Dalgleish, Tim; Ford, Tamsin; Greenberg, Mark T; Ukoumunne, Obioha C; Viner, Russell M; Williams, J Mark G
2017-04-26
Mindfulness-based approaches for adults are effective at enhancing mental health, but few controlled trials have evaluated their effectiveness or cost-effectiveness for young people. The primary aim of this trial is to evaluate the effectiveness and cost-effectiveness of a mindfulness training (MT) programme to enhance mental health, wellbeing and social-emotional behavioural functioning in adolescence. To address this aim, the design will be a superiority, cluster randomised controlled, parallel-group trial in which schools offering social and emotional provision in line with good practice (Formby et al., Personal, Social, Health and Economic (PSHE) Education: A mapping study of the prevalent models of delivery and their effectiveness, 2010; OFSTED, Not Yet Good Enough: Personal, Social, Health and Economic Education in schools, 2013) will be randomised to either continue this provision (control) or include MT in this provision (intervention). The study will recruit and randomise 76 schools (clusters) and 5700 school students aged 12 to 14 years, followed up for 2 years. The study will contribute to establishing if MT is an effective and cost-effective approach to promoting mental health in adolescence. International Standard Randomised Controlled Trials, identifier: ISRCTN86619085 . Registered on 3 June 2016.
Weaknesses in Applying a Process Approach in Industry Enterprises
NASA Astrophysics Data System (ADS)
Kučerová, Marta; Mĺkva, Miroslava; Fidlerová, Helena
2012-12-01
The paper deals with the process approach as one of the main principles of quality management. Quality management systems based on the process approach currently represent one of the proven ways to manage an organization. Sales volumes, costs, and profit levels are influenced by the quality of processes and efficient process flow. As the results of the research project showed, there are weaknesses in applying the process approach in industrial practice; in many organizations in Slovakia it has often amounted only to a formal relabelling of functional management as process management. For efficient process management it is essential that companies pay attention to how they organize their processes and seek their continuous improvement.
Stochastic Optimally Tuned Range-Separated Hybrid Density Functional Theory.
Neuhauser, Daniel; Rabani, Eran; Cytter, Yael; Baer, Roi
2016-05-19
We develop a stochastic formulation of the optimally tuned range-separated hybrid density functional theory that enables significant reduction of the computational effort and scaling of the nonlocal exchange operator at the price of introducing a controllable statistical error. Our method is based on stochastic representations of the Coulomb convolution integral and of the generalized Kohn-Sham density matrix. The computational cost of the approach is similar to that of usual Kohn-Sham density functional theory, yet it provides a much more accurate description of the quasiparticle energies for the frontier orbitals. This is illustrated for a series of silicon nanocrystals up to sizes exceeding 3000 electrons. Comparison with the stochastic GW many-body perturbation technique indicates excellent agreement for the fundamental band gap energies, good agreement for the band edge quasiparticle excitations, and very low statistical errors in the total energy for large systems. The present approach has a major advantage over one-shot GW by providing a self-consistent Hamiltonian that is central for additional postprocessing, for example, in the stochastic Bethe-Salpeter approach.
Nonlocal response with local optics
NASA Astrophysics Data System (ADS)
Kong, Jiantao; Shvonski, Alexander J.; Kempa, Krzysztof
2018-04-01
For plasmonic systems too small for classical, local simulations to be valid, but too large for ab initio calculations to be computationally feasible, we developed a practical approach—a nonlocal-to-local mapping that enables the use of a modified local system to obtain the response due to nonlocal effects to lowest order, at the cost of higher structural complexity. In this approach, the nonlocal surface region of a metallic structure is mapped onto a local dielectric film, mathematically preserving the nonlocality of the entire system. The most significant feature of this approach is its full compatibility with conventional, highly efficient finite difference time domain (FDTD) simulation codes. Our optimized choice of mapping is based on the Feibelman's d -function formalism, and it produces an effective dielectric function of the local film that obeys all required sum rules, as well as the Kramers-Kronig causality relations. We demonstrate the power of our approach combined with an FDTD scheme, in a series of comparisons with experiments and ab initio density functional theory calculations from the literature, for structures with dimensions from the subnanoscopic to microscopic range.
Evensen, Stig; Wisløff, Torbjørn; Lystad, June Ullevoldsæter; Bull, Helen; Ueland, Torill; Falkum, Erik
2016-01-01
Schizophrenia is associated with recurrent hospitalizations, need for long-term community support, poor social functioning, and low employment rates. Despite the wide- ranging financial and social burdens associated with the illness, there is great uncertainty regarding prevalence, employment rates, and the societal costs of schizophrenia. The current study investigates 12-month prevalence of patients treated for schizophrenia, employment rates, and cost of schizophrenia using a population-based top-down approach. Data were obtained from comprehensive and mandatory health and welfare registers in Norway. We identified a 12-month prevalence of 0.17% for the entire population. The employment rate among working-age individuals was 10.24%. The societal costs for the 12-month period were USD 890 million. The average cost per individual with schizophrenia was USD 106 thousand. Inpatient care and lost productivity due to high unemployment represented 33% and 29%, respectively, of the total costs. The use of mandatory health and welfare registers enabled a unique and informative analysis on true population-based datasets. PMID:26433216
[Identification of ecological corridors and its importance by integrating circuit theory].
Song, Li Li; Qin, Ming Zhou
2016-10-01
Landscape connectivity is considered an extraordinarily important factor affecting various ecological processes. The least cost path (LCP) based on the minimum cumulative resistance model (MCRM) may provide a more efficient approach to identifying functional connectivity in heterogeneous landscapes and has already been adopted in research on landscape functional connectivity assessment and ecological corridor simulation. The connectivity model based on circuit theory (CMCT) replaces the edges of graph theory with resistors, and cost distance with resistance distance, to measure functional connectivity in heterogeneous landscapes. Using the Linkage Mapper tool and the Circuitscape software, a simulated landscape generated with the SIMMAP 2.0 software was taken as the study object in this article, with the aim of exploring how to integrate the MCRM with the CMCT to identify ecological corridors and the relative importance of landscape factors. The results showed that the two models have their individual advantages and complement each other. The MCRM could effectively identify least-cost corridors among habitats. The CMCT could effectively identify important landscape factors and pinch points, which have an important influence on landscape connectivity. We also found that the position of a pinch point was not affected by corridor width, which is a clear advantage when assessing the importance of corridors. The integrated method can provide a scientific basis for regional ecological protection planning and ecological corridor design.
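The least-cost-path half of the comparison can be illustrated as a shortest-path search on a grid graph whose edge weights come from a resistance raster. The raster and endpoints below are synthetic, and the sketch stands in for, rather than reproduces, what Linkage Mapper and Circuitscape compute.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
resistance = rng.uniform(1.0, 10.0, size=(50, 50))   # hypothetical resistance surface

G = nx.grid_2d_graph(*resistance.shape)              # 4-neighbour grid of raster cells
for u, v in G.edges():
    # Cost of moving between two cells: mean of their resistances.
    G.edges[u, v]["weight"] = 0.5 * (resistance[u] + resistance[v])

source, target = (0, 0), (49, 49)                    # two habitat patches (toy locations)
corridor = nx.dijkstra_path(G, source, target, weight="weight")
cumulative_cost = nx.dijkstra_path_length(G, source, target, weight="weight")
```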
NASA Astrophysics Data System (ADS)
Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin
2018-04-01
This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without using repetitive calculation of a cost function. To adjust output currents with the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses voltage vectors, as finite control resources, excluding zero voltage vectors which produce the CMVs in the VSI within ±Vdc/2. In a model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the efforts of repeatedly calculating the cost function. And the two non-zero voltage vectors are optimally allocated to make the output current approach the reference current as close as possible. Thus, low CMV, rapid current-following capability and sufficient output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
Cost Modeling for low-cost planetary missions
NASA Technical Reports Server (NTRS)
Kwan, Eric; Habib-Agahi, Hamid; Rosenberg, Leigh
2005-01-01
This presentation will provide an overview of the JPL parametric cost models used to estimate flight science spacecraft and instruments. This material will emphasize the cost model approaches to estimate low-cost flight hardware, sensors, and instrumentation, and to perform cost-risk assessments. This presentation will also discuss JPL approaches to perform cost modeling and the methodologies and analyses used to capture low-cost vs. key cost drivers.
Eggert, D L; Nielsen, M K
2006-02-01
Three replications of mouse selection populations for high heat loss (MH), low heat loss (ML), and a nonselected control (MC) were used to estimate the feed energy costs of maintenance and gain and to test whether selection had changed these costs. At 21 and 49 d of age, mice were weighed and subjected to dual x-ray densitometry measurement for prediction of body composition. At 21 d, mice were randomly assigned to an ad libitum, an 80% of ad libitum, or a 60% of ad libitum feeding group for 28-d collection of individual feed intake. Data were analyzed using 3 approaches. The first approach was an attempt to partition energy intake between costs for maintenance, fat deposition, and lean deposition for each replicate, sex, and line by multiple regression of feed intake on the sum of daily metabolic weight (kg(0.75)), fat gain, and lean gain. Approach II was a less restrictive attempt to partition energy intake between costs for maintenance and total gain for each replicate, sex, and line by multiple regression of feed intake on the sum of daily metabolic weight and total gain. Approach III used multiple regression on the entire data set with pooled regressions on fat and lean gains, and subclass regressions for maintenance. Contrasts were conducted to test the effect of selection (MH - ML) and asymmetry of selection [(MH + ML)/2 - MC] for the various energy costs. In approach I, there were no differences between lines for costs of maintenance, fat deposition, or protein deposition, but we question our ability to estimate these accurately. In approach II, selection changed both cost of maintenance (P = 0.03) and gain (P = 0.05); MH mice had greater per unit costs than ML mice for both. Asymmetry of the selection response was found in approach II for the cost of maintenance (P = 0.06). In approach III, the effect of selection (P < 0.01) contributed to differences in the maintenance cost, but asymmetry of selection (P > 0.17) was not evident. Sex effects were found for the cost of fat deposition (P = 0.02) in approach I and the cost of gain (P = 0.001) in approach II; females had a greater cost per unit than males. When costs per unit of fat and per unit of lean gain were assumed to be the same for both sexes (approach III), females had a somewhat greater estimate for maintenance cost (P = 0.10). We conclude that selection for heat loss has changed the costs for maintenance per unit size but probably not the costs for gain.
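Approach I is, in essence, a no-intercept multiple regression of cumulative feed intake on summed metabolic weight and on fat and lean gain, so the fitted coefficients estimate the per-unit energy costs of maintenance and of each kind of gain. A minimal sketch with made-up numbers (all values and units purely illustrative):

```python
import numpy as np

# Hypothetical per-mouse records over the 28-d test: cumulative feed intake,
# summed daily metabolic weight (kg^0.75), fat gain and lean gain.
feed   = np.array([310., 295., 350., 280., 330., 305.])
met_wt = np.array([0.95, 0.90, 1.05, 0.85, 1.00, 0.92])
fat    = np.array([4.1, 3.8, 5.0, 3.2, 4.6, 3.9])
lean   = np.array([9.5, 9.0, 10.8, 8.4, 10.1, 9.2])

# feed = a*met_wt + b*fat + c*lean (no intercept): a, b, c estimate the costs of
# maintenance, fat deposition and lean deposition, respectively.
X = np.column_stack([met_wt, fat, lean])
(maint_cost, fat_cost, lean_cost), *_ = np.linalg.lstsq(X, feed, rcond=None)
```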
NASA Astrophysics Data System (ADS)
Pulido-Velazquez, Manuel; Lopez-Nicolas, Antonio; Harou, Julien J.; Andreu, Joaquin
2013-04-01
Hydrologic-economic models allow integrated analysis of water supply, demand and infrastructure management at the river basin scale. These models simultaneously analyze engineering, hydrology and economic aspects of water resources management. Two new tools have been designed to develop models within this approach: a simulation tool (SIM_GAMS), for models in which water is allocated each month based on supply priorities to competing uses and system operating rules, and an optimization tool (OPT_GAMS), in which water resources are allocated optimally following economic criteria. The characterization of the water resource network system requires a connectivity matrix representing the topology of the elements, generated using HydroPlatform. HydroPlatform, an open-source software platform for network (node-link) models, allows to store, display and export all information needed to characterize the system. Two generic non-linear models have been programmed in GAMS to use the inputs from HydroPlatform in simulation and optimization models. The simulation model allocates water resources on a monthly basis, according to different targets (demands, storage, environmental flows, hydropower production, etc.), priorities and other system operating rules (such as reservoir operating rules). The optimization model's objective function is designed so that the system meets operational targets (ranked according to priorities) each month while following system operating rules. This function is analogous to the one used in the simulation module of the DSS AQUATOOL. Each element of the system has its own contribution to the objective function through unit cost coefficients that preserve the relative priority rank and the system operating rules. The model incorporates groundwater and stream-aquifer interaction (allowing conjunctive use simulation) with a wide range of modeling options, from lumped and analytical approaches to parameter-distributed models (eigenvalue approach). Such functionality is not typically included in other water DSS. Based on the resulting water resources allocation, the model calculates operating and water scarcity costs caused by supply deficits based on economic demand functions for each demand node. The optimization model allocates the available resource over time based on economic criteria (net benefits from demand curves and cost functions), minimizing the total water scarcity and operating cost of water use. This approach provides solutions that optimize the economic efficiency (as total net benefit) in water resources management over the optimization period. Both models must be used together in water resource planning and management. The optimization model provides an initial insight on economically efficient solutions, from which different operating rules can be further developed and tested using the simulation model. The hydro-economic simulation model allows assessing economic impacts of alternative policies or operating criteria, avoiding the perfect foresight issues associated with the optimization. The tools have been applied to the Jucar river basin (Spain) in order to assess the economic results corresponding to the current modus operandi of the system and compare them with the solution from the optimization that maximizes economic efficiency. Acknowledgments: The study has been partially supported by the European Community 7th Framework Project (GENESIS project, n. 
226536) and the Plan Nacional I+D+I 2008-2011 of the Spanish Ministry of Science and Innovation (CGL2009-13238-C02-01 and CGL2009-13238-C02-02).
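The simulation objective sketched above, meeting ranked targets subject to the available resource, can be illustrated with a toy single-month allocation in which each demand's priority is encoded as a penalty per unit of deficit. The numbers and weights are hypothetical; this is not the SIM_GAMS or AQUATOOL formulation.

```python
import numpy as np
from scipy.optimize import linprog

supply = 100.0                       # water available this month (hm3), hypothetical
demand = np.array([40., 35., 50.])   # urban, agricultural, environmental targets
weight = np.array([100., 10., 1.])   # deficit penalty per unit, encoding priority rank

# Minimizing the weighted deficits sum(w_i * (demand_i - x_i)) is equivalent,
# up to a constant, to maximizing sum(w_i * x_i) over feasible deliveries x_i.
res = linprog(c=-weight,
              A_ub=np.ones((1, 3)), b_ub=[supply],     # deliveries cannot exceed supply
              bounds=[(0.0, d) for d in demand])
deliveries = res.x                   # highest-priority targets are met first
```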
NASA Technical Reports Server (NTRS)
Buffalano, C.; Fogleman, S.; Gielecki, M.
1976-01-01
A methodology is outlined which can be used to estimate the costs of research and development projects. The approach uses the Delphi technique, a method developed by the Rand Corporation for systematically eliciting and evaluating group judgments in an objective manner. The use of the Delphi technique allows for the integration of expert opinion into the cost-estimating process in a consistent and rigorous fashion. This approach can also signal potential cost-problem areas, which can be a useful tool in planning additional cost analysis or in estimating contingency funds. A Monte Carlo approach is also examined.
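The Monte Carlo step can be sketched by treating each expert-elicited work-package estimate as a (low, most likely, high) triangular distribution, sampling the program total, and reading a contingency allowance off the upper tail. The work packages and figures below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Delphi estimates per work package: (low, most likely, high) in $M.
packages = {"propulsion": (10, 14, 25), "avionics": (4, 6, 12), "integration": (3, 5, 9)}

totals = sum(rng.triangular(lo, mode, hi, size=100_000) for lo, mode, hi in packages.values())
mean, p80 = totals.mean(), np.percentile(totals, 80)
contingency = p80 - mean             # one possible contingency-fund signal
```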
On process optimization considering LCA methodology.
Pieragostini, Carla; Mussati, Miguel C; Aguirre, Pío
2012-04-15
The goal of this work is to survey the state of the art in process optimization techniques and tools based on LCA, focused on the process engineering field. A collection of methods, approaches, applications, specific software packages, and insights regarding experiences and progress made in applying the LCA methodology coupled to optimization frameworks is provided, and general trends are identified. The "cradle-to-gate" concept to define the system boundaries is the most used approach in practice, instead of the "cradle-to-grave" approach. Normally, the relationship between inventory data and impact category indicators is linearly expressed by the characterization factors; thus, synergic effects of the contaminants are neglected. Among the LCIA methods, the eco-indicator 99, which is based on the endpoint category and the panel method, is the most used in practice. A single environmental impact function, resulting from the aggregation of environmental impacts, is formulated as the environmental objective in most analyzed cases. SimaPro is the most used software for LCA applications in the literature analyzed. Multi-objective optimization is the most used approach for dealing with this kind of problem, where the ε-constraint method for generating the Pareto set is the most applied technique. However, a renewed interest in formulating a single economic objective function in optimization frameworks can be observed, favored by the development of life cycle cost software and progress made in assessing costs of environmental externalities. Finally, a trend to deal with multi-period scenarios in integrated LCA-optimization frameworks can be distinguished, providing more accurate results as data availability improves. Copyright © 2011 Elsevier Ltd. All rights reserved.
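The ε-constraint technique mentioned above traces the Pareto set by minimizing one objective while the other is constrained below a sweeping bound ε. A toy sketch with two convex stand-in objectives (not an actual LCA-based model) follows.

```python
import numpy as np
from scipy.optimize import minimize

cost = lambda x: (x[0] - 2.0) ** 2 + x[1] ** 2          # economic objective (toy)
impact = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2        # aggregated environmental impact (toy)

pareto = []
for eps in np.linspace(0.5, 8.0, 15):
    res = minimize(cost, x0=[0.0, 0.0],
                   constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - impact(x)}])
    pareto.append((res.fun, impact(res.x)))             # (cost, impact) pairs along the front
```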
On a cost functional for H2/H(infinity) minimization
NASA Technical Reports Server (NTRS)
Macmartin, Douglas G.; Hall, Steven R.; Mustafa, Denis
1990-01-01
A cost functional is proposed and investigated which is motivated by minimizing the energy in a structure using only collocated feedback. Defined for an H(infinity)-norm bounded system, this cost functional also overbounds the H2 cost. Some properties of this cost functional are given, and preliminary results on the procedure for minimizing it are presented. The frequency domain cost functional is shown to have a time domain representation in terms of a Stackelberg non-zero sum differential game.
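The abstract does not state the functional itself. Purely as a hedged illustration of a cost that is defined for an H(infinity)-norm-bounded system and overbounds the H2 cost, one may recall the gamma-entropy of Mustafa and Glover, which tends to the squared H2 norm as gamma grows; whether this is the functional of the paper is an assumption not confirmed by the abstract.

```latex
% Gamma-entropy of a stable transfer matrix G with \|G\|_\infty < \gamma:
\mathcal{I}(G;\gamma) \;=\; -\frac{\gamma^{2}}{2\pi}
   \int_{-\infty}^{\infty} \ln\Bigl|\det\!\bigl(I-\gamma^{-2}\,G^{*}(j\omega)\,G(j\omega)\bigr)\Bigr|\,d\omega ,
\qquad
\mathcal{I}(G;\gamma)\;\ge\;\lVert G\rVert_{2}^{2},
\qquad
\lim_{\gamma\to\infty}\mathcal{I}(G;\gamma)=\lVert G\rVert_{2}^{2}.
```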
Advanced space program studies. Overall executive summary
NASA Technical Reports Server (NTRS)
Wolfe, M. G.
1977-01-01
NASA and DoD requirements and planning data were used in multidiscipline advanced planning investigations of space operations and associated elements (including man), identification of potential low cost approaches, vehicle design, cost synthesis techniques, technology forecasting and opportunities for DoD technology transfer, and the development of near-, mid-, and far-term space initiatives and development plans with emphasis on domestic and military commonality. An overview of objectives and results is presented for the following studies: advanced space planning and conceptual analysis, shuttle users, technology assessment and new opportunities, standardization and program practice, integrated STS operations planning, solid spinning upper stage, and integrated planning support functions.
Application of the GA-BP Neural Network in Earthwork Calculation
NASA Astrophysics Data System (ADS)
Fang, Peng; Cai, Zhixiong; Zhang, Ping
2018-01-01
The calculation of earthwork quantity is a key factor in determining the project cost estimate and optimizing the construction scheme, and it plays an important role in earth and rock excavation works. We exploit the optimization principle of the GA-BP intelligent algorithm and, on the basis of an earthwork quantity and cost information database, design a GA-BP neural network computing model. Through network training and learning, the accuracy of the results meets the requirements of actual engineering construction. The model provides a new approach to earthwork calculation for other projects and has good potential for wider application.
Economic Analysis of Complex Nuclear Fuel Cycles with NE-COST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganda, Francesco; Dixon, Brent; Hoffman, Edward
The purpose of this work is to present a new methodology, and associated computational tools, developed within the U.S. Department of Energy (U.S. DOE) Fuel Cycle Option Campaign to quantify the economic performance of complex nuclear fuel cycles. The levelized electricity cost at the busbar is generally chosen to quantify and compare the economic performance of different baseload generating technologies, including nuclear: it is the cost of electricity which renders the risk-adjusted discounted net present value of the investment cash flow equal to zero. The work presented here is focused on the calculation of the levelized cost of electricity of fuel cycles at mass balance equilibrium, which is termed LCAE (Levelized Cost of Electricity at Equilibrium). To alleviate the computational issues associated with the calculation of the LCAE for complex fuel cycles, a novel approach has been developed, which has been called the “island approach” because of its logical structure: a generic complex fuel cycle is subdivided into subsets of fuel cycle facilities, called islands, each containing one and only one type of reactor or blanket and an arbitrary number of fuel cycle facilities. A nuclear economic software tool, NE-COST, written in the commercial programming software MATLAB®, has been developed to calculate the LCAE of complex fuel cycles with the “island” computational approach. NE-COST has also been developed with the capability to handle uncertainty: the input parameters (both unit costs and fuel cycle characteristics) can have uncertainty distributions associated with them, and the output can be computed in terms of probability density functions of the LCAE. In this paper NE-COST will be used to quantify, as examples, the economic performance of (1) current Light Water Reactor (LWR) once-through systems; (2) continuous plutonium recycling in Fast Reactors (FR) with driver and blanket; and (3) recycling of plutonium bred in FR into LWR. For each fuel cycle, the contributions to the total LCAE of the main cost components will be identified.
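At its core, a levelized cost at equilibrium is a quotient of discounted costs and discounted electricity over the evaluation horizon, and the island approach assembles the fuel-cycle total from such per-island quotients. The sketch below computes the quotient for a single island with hypothetical cash flows; it is not NE-COST's algorithm.

```python
import numpy as np

def levelized_cost(costs, energy, rate):
    """Discounted costs divided by discounted electricity (single island, toy)."""
    t = np.arange(len(costs))
    disc = (1.0 + rate) ** -t
    return np.sum(costs * disc) / np.sum(energy * disc)

# Hypothetical island: 5 construction years, then 55 operating years.
costs  = np.array([900e6] * 5 + [450e6] * 55)    # $/yr
energy = np.array([0.0] * 5 + [8.0e6] * 55)      # MWh/yr
print(f"LCAE = {levelized_cost(costs, energy, 0.07):.1f} $/MWh")
```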
Holmes, Lisa; Landsverk, John; Ward, Harriet; Rolls-Reutz, Jennifer; Saldana, Lisa; Wulczyn, Fred; Chamberlain, Patricia
2014-04-01
Estimating costs in child welfare services is critical as new service models are incorporated into routine practice. This paper describes a unit costing estimation system developed in England (cost calculator) together with a pilot test of its utility in the United States where unit costs are routinely available for health services but not for child welfare services. The cost calculator approach uses a unified conceptual model that focuses on eight core child welfare processes. Comparison of these core processes in England and in four counties in the United States suggests that the underlying child welfare processes generated from England were perceived as very similar by child welfare staff in California county systems with some exceptions in the review and legal processes. Overall, the adaptation of the cost calculator for use in the United States child welfare systems appears promising. The paper also compares the cost calculator approach to the workload approach widely used in the United States and concludes that there are distinct differences between the two approaches with some possible advantages to the use of the cost calculator approach, especially in the use of this method for estimating child welfare costs in relation to the incorporation of evidence-based interventions into routine practice.
Zhao, Liping; Zhang, Zefeng; Kolm, Paul; Jasper, Susan; Lewis, Cheryl; Klein, Allan; Weintraub, William
2008-02-01
The ACUTE II study demonstrated that transesophageal echocardiographically guided cardioversion with enoxaparin in patients with atrial fibrillation was associated with shorter initial hospital stay, more normal sinus rhythm at 5 weeks, and no significant differences in stroke, bleeding, or death compared with unfractionated heparin (UFH). The present study evaluated resource use and costs in enoxaparin (n=76) and UFH (n=79) during 5-week follow-up. Resources included initial and subsequent hospitalizations, study drugs, outpatient services, and emergency room visits. Two costing approaches were employed for the hospitalization costing. The first approach was based on the UB-92 formulation of hospital bill and diagnosis-related group. The second approach was based on UB-92 and imputation using multivariable linear regression. Costs for outpatient and emergency room visits were determined from the Medicare fee schedule. Sensitivity analysis was performed to assess the robustness of the results. A bootstrap resample approach was used to obtain the confidence interval (CI) for the cost differences. Costs of initial and subsequent hospitalizations, outpatient procedures, and emergency room visits were lower in the enoxaparin group. Average total costs remained significantly lower for the enoxaparin group for the 2 costing approaches ($5,800 vs $8,167, difference $2,367, 95% CI 855 to 4,388, for the first approach; $7,942 vs $10,076, difference $2,134, 95% CI 437 to 4,207, for the second approach). Sensitivity analysis showed that cost differences between strategies are robust to variation of drug costs. In conclusion, the use of enoxaparin as a bridging therapy is a cost-saving strategy (similar clinical outcomes and lower costs) for atrial fibrillation.
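The bootstrap confidence interval for the cost difference can be outlined by resampling each arm with replacement and recomputing the difference of mean costs. The cost draws below are simulated placeholders, not the ACUTE II data.

```python
import numpy as np

rng = np.random.default_rng(1)
enox = rng.gamma(2.0, 2900.0, 76)   # hypothetical 5-week total costs, enoxaparin arm ($)
ufh  = rng.gamma(2.0, 4100.0, 79)   # hypothetical costs, unfractionated heparin arm ($)

diffs = np.array([
    ufh[rng.integers(0, len(ufh), len(ufh))].mean()
    - enox[rng.integers(0, len(enox), len(enox))].mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(diffs, [2.5, 97.5])   # 95% bootstrap CI for the cost difference
```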
Direct determination approach for the multifractal detrending moving average analysis
NASA Astrophysics Data System (ADS)
Xu, Hai-Chuan; Gu, Gao-Feng; Zhou, Wei-Xing
2017-11-01
In the canonical framework, we propose an alternative approach for the multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent τ(q) is related to the partition function and the multifractal spectrum f(α) can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared based on three synthetic multifractal and monofractal measures generated from the one-dimensional p-model, the two-dimensional p-model, and the fractional Brownian motions. We find that both approaches have comparable performances to unveil the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum f(α) can be directly determined using the new approach with less computation cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.
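For reference, the quantities named above are tied together by the standard multifractal relations below; the traditional route recovers f(α) from τ(q) by a Legendre transform, which the direct canonical-measure approach bypasses by estimating α and f(α) as q-weighted averages.

```latex
Z(q,s)\sim s^{\tau(q)}, \qquad
\alpha(q)=\frac{d\tau(q)}{dq}, \qquad
f(\alpha)=q\,\alpha(q)-\tau(q).
```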
Microtechnology management considering test and cost aspects for stacked 3D ICs with MEMS
NASA Astrophysics Data System (ADS)
Hahn, K.; Wahl, M.; Busch, R.; Grünewald, A.; Brück, R.
2018-01-01
Innovative automotive systems require complex semiconductor devices currently available only in consumer-grade quality. The European project TRACE will develop and demonstrate methods, processes, and tools that facilitate the use of Consumer Electronics (CE) components so that they can be deployed more rapidly in the life-critical automotive domain. Consumer electronics increasingly use heterogeneous system integration methods and "More than Moore" technologies, which are capable of combining different circuit domains (Analog, Digital, RF, MEMS) and which are integrated within SiP or 3D stacks. Making these technologies, or at least some of their process steps, available under automotive electronics requirements is an important goal to keep pace with the growing demand for information processing within cars. The approach presented in this paper aims at a technology management and recommendation system that covers technology data, functional and non-functional constraints, and application scenarios, and that will incorporate test planning and cost consideration capabilities.
Cost-effective monolithic and hybrid integration for metro and long-haul applications
NASA Astrophysics Data System (ADS)
Clayton, Rick; Carter, Andy; Betty, Ian; Simmons, Timothy
2003-12-01
Today's telecommunication market is characterized by conservative business practices: tight management of costs, low risk investing and incremental upgrades, rather than the more freewheeling approach taken a few years ago. Optimizing optical components for the current and near term market involves substantial integration, but within particular bounds. The emphasis on evolution, in particular, has led to increased standardization of functions and so created extensive opportunities for integrated product offerings. The same standardization that enables commercially successful integrated functions also changes the competitive environment, and changes the emphasis for component development; shifting the innovation priority from raw performance to delivering the most effective integrated products. This paper will discuss, with specific examples from our transmitter, receiver and passives product families, our understanding of the issues based on extensive experience in delivering high end integrated products to the market, and the direction it drives optical components.
A minimum cost tolerance allocation method for rocket engines and robust rocket engine design
NASA Technical Reports Server (NTRS)
Gerth, Richard J.
1993-01-01
Rocket engine design follows three phases: systems design, parameter design, and tolerance design. Systems design and parameter design are most effectively conducted in a concurrent engineering (CE) environment that utilizes methods such as Quality Function Deployment and Taguchi methods. However, tolerance allocation remains an art driven by experience, handbooks, and rules of thumb. It was therefore desirable to develop an optimization approach to tolerancing. The case study engine was the STME gas generator cycle. The design of the major components had been completed and the functional relationship between the component tolerances and system performance had been computed using the Generic Power Balance model. The system performance nominals (thrust, MR, and Isp) and tolerances were already specified, as was an initial set of component tolerances. However, the question was whether there existed an optimal combination of tolerances that would result in the minimum cost without any degradation in system performance.
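A common formalization of minimum-cost tolerance allocation, consistent with the goal stated above although not necessarily the exact STME formulation, minimizes a reciprocal cost-tolerance model subject to a root-sum-square stack-up limit on system performance variation. A sketch with hypothetical coefficients:

```python
import numpy as np
from scipy.optimize import minimize

k = np.array([120.0, 80.0, 200.0])   # hypothetical cost coefficients: cost_i = k_i / t_i
T_sys = 0.050                        # allowed system tolerance (RSS stack-up), hypothetical

total_cost = lambda t: np.sum(k / t)
rss_limit = [{"type": "ineq", "fun": lambda t: T_sys**2 - np.sum(t**2)}]

res = minimize(total_cost, x0=np.full(3, T_sys / 3.0),
               bounds=[(1e-4, T_sys)] * 3, constraints=rss_limit)
optimal_tolerances, minimum_cost = res.x, total_cost(res.x)
```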
Bahador, Fateme; Sharifian, Roxana; Farhadi, Payam; Jafari, Abdosaleh; Nematolahi, Mohtram; Shokrpour, Nasrin
This study aimed to develop and test a research model that examined seven factors affecting the effectiveness of the laboratory information system (LIS) through strategic planning. The research was carried out in 2015 on the laboratory staff, information technology staff, and laboratory managers of teaching hospitals in Shiraz (a city in the south of Iran) using a structural equation modeling approach. The results revealed no significant positive relationship of decisions based on cost-benefit analysis and of LIS functionality with LIS effectiveness, but there was a significant positive relationship between the other factors and LIS effectiveness. As expected, high levels of strategic information system planning result in increased LIS effectiveness. The results also showed that the relationships of cost-benefit analysis, LIS functionality, end-user involvement, and information technology-business alignment with strategic information system planning were significant and positive.
Bankert, Brian; Coberley, Carter; Pope, James E; Wells, Aaron
2015-02-01
This paper presents a new approach to estimating the indirect costs of health-related absenteeism. Productivity losses related to employee absenteeism have negative business implications for employers and these losses effectively deprive the business of an expected level of employee labor. The approach herein quantifies absenteeism cost using an output per labor hour-based method and extends employer-level results to the region. This new approach was applied to the employed population of 3 health insurance carriers. The economic cost of absenteeism was estimated to be $6.8 million, $0.8 million, and $0.7 million on average for the 3 employers; regional losses were roughly twice the magnitude of employer-specific losses. The new approach suggests that costs related to absenteeism for high output per labor hour industries exceed similar estimates derived from application of the human capital approach. The materially higher costs under the new approach emphasize the importance of accurately estimating productivity losses.
Avionics upgrade strategies for the Space Shuttle and derivatives
NASA Astrophysics Data System (ADS)
Swaim, Richard A.; Wingert, William B.
Some approaches aimed at providing a low-cost, low-risk strategy to upgrade the shuttle onboard avionics are described. These approaches allow migration to a shuttle-derived vehicle and provide commonality with Space Station Freedom avionics to the extent practical. Some goals of the Shuttle cockpit upgrade include: offloading of the main computers by distributing avionics display functions, reducing crew workload, reducing maintenance cost, and providing display reconfigurability and context sensitivity. These goals are being met by using a combination of off-the-shelf and newly developed software and hardware. The software will be developed using Ada. Advanced active matrix liquid crystal displays are being used to meet the tight space, weight, and power consumption requirements. Eventually, it is desirable to upgrade the current shuttle data processing system with a system that has more in common with the Space Station data management system. This will involve not only changes in Space Shuttle onboard hardware, but changes in the software. Possible approaches to maximizing the use of the existing software base while taking advantage of new language capabilities are discussed.
NASA Technical Reports Server (NTRS)
Hou, Tan-Hung
2014-01-01
For the fabrication of resin matrix fiber reinforced composite laminates, a workable cure cycle (i.e., temperature and pressure profiles as a function of processing time) is needed and is critical for achieving void-free laminate consolidation. Design of such a cure cycle is not trivial, especially when dealing with reactive matrix resins. An empirical "trial and error" approach has been used as common practice in the composite industry. Such an approach is not only costly, but also ineffective at establishing the optimal processing conditions for a specific resin/fiber composite system. In this report, a rational "processing science" based approach is established, and a universal cure cycle design protocol is proposed. Following this protocol, a workable and optimal cure cycle can be readily and rationally designed for most reactive resin systems in a cost effective way. This design protocol has been validated through experimental studies of several reactive polyimide composites for a wide spectrum of usage that has been documented in the previous publications.
Active damage interrogation system for structural health monitoring
NASA Astrophysics Data System (ADS)
Lichtenwalner, Peter F.; Dunne, James P.; Becker, Ronald S.; Baumann, Erwin W.
1997-05-01
An integrated and automated smart structures approach for in situ damage assessment has been implemented and evaluated in a laboratory environment for health monitoring of a realistic aerospace structural component. This approach, called Active Damage Interrogation (ADI), utilizes an array of piezoelectric transducers attached to or embedded within the structure for both actuation and sensing. The ADI system, which is model independent, actively interrogates the structure through broadband excitation of multiple actuators across the desired frequency range. Statistical analysis of the changes in transfer functions between actuator/sensor pairs is used to detect, localize, and assess the severity of damage in the structure. This paper presents the overall concept of the ADI system and provides experimental results of damage assessment studies conducted for a composite structural component of the MD-900 Explorer helicopter rotor system. The potential advantages of this approach include simplicity (no need for a model), sensitivity, and low-cost implementation. The results obtained thus far indicate considerable promise for integrated structural health monitoring of aerospace vehicles, leading to the practice of condition-based maintenance and consequent reduction in life cycle costs.
NASA Technical Reports Server (NTRS)
Maynard, O. E.; Brown, W. C.; Edwards, A.; Haley, J. T.; Meltz, G.; Howell, J. M.; Nathan, A.
1975-01-01
Microwave rectifier technology, approaches to the receiving antenna, the topology of rectenna circuits, assembly and construction, and ROM cost estimates are discussed. Analyses and cost estimates are given for the equipment required to transmit the ground power to an external user. Noise and harmonic considerations are presented for both the amplitron and the klystron, and interference limits are identified and evaluated. A risk assessment is presented wherein technology risks are rated and ranked with regard to their importance in impacting the microwave power transmission system. The system analyses and evaluation include parametric studies of system relationships pertaining to geometry, materials, specific cost, specific weight, efficiency, converter packing, frequency selection, power distribution, power density, power output magnitude, power source, transportation and assembly. Capital costs per kW and energy costs as a function of rate of return, power source and transportation costs, as well as build cycle time, are presented. The critical technology and ground test program are discussed along with ROM costs and schedule. The orbital test program, with the associated critical technology and ground-based program based on full implementation of the defined objectives, is also discussed.
Small Habitat Commonality Reduces Cost for Human Mars Missions
NASA Technical Reports Server (NTRS)
Griffin, Brand N.; Lepsch, Roger; Martin, John; Howard, Robert; Rucker, Michelle; Zapata, Edgar; McCleskey, Carey; Howe, Scott; Mary, Natalie; Nerren, Philip (Inventor)
2015-01-01
Most view the Apollo Program as expensive. It was. But, a human mission to Mars will be orders of magnitude more difficult and costly. Recently, NASA's Evolvable Mars Campaign (EMC) mapped out a step-wise approach for exploring Mars and the Mars-moon system. It is early in the planning process but because approximately 80% of the total life cycle cost is committed during preliminary design, there is an effort to emphasize cost reduction methods up front. Amongst the options, commonality across small habitat elements shows promise for consolidating the high bow-wave costs of Design, Development, Test and Evaluation (DDT&E) while still accommodating each end-item's functionality. In addition to DDT&E, there are other cost and operations benefits to commonality such as reduced logistics, simplified infrastructure integration and with inter-operability, improved safety and simplified training. These benefits are not without a cost. Some habitats are sub-optimized giving up unique attributes for the benefit of the overall architecture and because the first item sets the course for those to follow, rapidly developing technology may be excluded. The small habitats within the EMC include the pressurized crew cabins for the ascent vehicle,
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinman, N.D.; Yancey, M.A.
1997-12-31
One of the main functions of government is to invest taxpayers' dollars in projects, programs, and properties that will result in social benefit. Public programs focused on the development of technology are examples of such opportunities. Selecting these programs requires the same investment analysis approaches that private companies and individuals use. Good use of investment analysis approaches for these programs will minimize our tax costs and maximize public benefit from tax dollars invested. This article describes the use of the net present value (NPV) analysis approach to select public R&D programs and to valuate expected private sector participation in the programs.
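A minimal sketch of the NPV calculation the article builds on, using an assumed discount rate and an invented cash-flow series (the numbers are purely illustrative, not taken from the article):

```python
def npv(rate, cash_flows):
    """Net present value of a series of end-of-year cash flows,
    with the year-0 flow undiscounted."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative R&D program: $10M outlay now, assumed social benefits later ($M/yr).
program = [-10.0, 1.5, 3.0, 4.5, 6.0]
print(npv(0.07, program))   # accept the program if NPV > 0 at the chosen rate
```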
A time-parallel approach to strong-constraint four-dimensional variational data assimilation
NASA Astrophysics Data System (ADS)
Rao, Vishwas; Sandu, Adrian
2016-05-01
A parallel-in-time algorithm based on an augmented Lagrangian approach is proposed to solve four-dimensional variational (4D-Var) data assimilation problems. The assimilation window is divided into multiple sub-intervals, which allows parallelization of the cost function and gradient computations. The solutions to the continuity equations across interval boundaries are added as constraints. The augmented Lagrangian approach leads to a different formulation of the variational data assimilation problem than the weakly constrained 4D-Var. A combination of serial and parallel 4D-Vars to increase performance is also explored. The methodology is illustrated on data assimilation problems involving the Lorenz-96 and the shallow water models.
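To make the construction concrete, a schematic form of the sub-interval decomposition is sketched below; the notation (sub-window cost terms J_i, model propagators M_i, multipliers lambda_i and penalty rho) is generic and not necessarily that of the paper.

\[
\min_{x_0,\dots,x_N}\; \sum_{i=0}^{N} J_i(x_i)
\quad \text{s.t.} \quad x_{i+1} = \mathcal{M}_i(x_i), \qquad i = 0,\dots,N-1,
\]
\[
L_\rho(x,\lambda) = \sum_{i=0}^{N} J_i(x_i)
 + \sum_{i=0}^{N-1} \lambda_i^{\top}\bigl(x_{i+1} - \mathcal{M}_i(x_i)\bigr)
 + \frac{\rho}{2} \sum_{i=0}^{N-1} \bigl\lVert x_{i+1} - \mathcal{M}_i(x_i) \bigr\rVert^2 ,
\]

where each J_i collects the background and observation misfit terms restricted to sub-interval i, the continuity constraints couple adjacent sub-intervals, and the J_i and their gradients can be evaluated in parallel across sub-intervals.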
Application of Adjoint Methodology to Supersonic Aircraft Design Using Reversed Equivalent Areas
NASA Technical Reports Server (NTRS)
Rallabhandi, Sriram K.
2013-01-01
This paper presents an approach to shaping an aircraft to meet equivalent-area-based objectives using the discrete adjoint approach. Equivalent areas can be obtained either by using the reversed augmented Burgers equation or by direct conversion of off-body pressures into equivalent area. Formal coupling with CFD allows computation of sensitivities of equivalent-area objectives with respect to aircraft shape parameters. The exactness of the adjoint sensitivities is verified against derivatives obtained using the complex step approach. This methodology has the benefit of using designer-friendly equivalent areas in the shape design of low-boom aircraft. Shape optimization results with equivalent-area cost functionals are discussed and further refined using ground-loudness-based objectives.
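The complex-step check used to verify adjoint sensitivities is simple enough to sketch; the formula below is the standard complex-step approximation, while the objective function is only a stand-in for the actual equivalent-area misfit.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step approximation df/dx = Im(f(x + i*h)) / h,
    free of the subtractive cancellation error of finite differences."""
    return np.imag(f(x + 1j * h)) / h

# Stand-in objective (the real case would be an equivalent-area cost functional).
f = lambda x: np.sin(x) * np.exp(-x**2)
print(complex_step_derivative(f, 0.3))   # compare against the adjoint gradient
```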
Research opportunities to advance solar energy utilization.
Lewis, Nathan S
2016-01-22
Major developments, as well as remaining challenges and the associated research opportunities, are evaluated for three technologically distinct approaches to solar energy utilization: solar electricity, solar thermal, and solar fuels technologies. Much progress has been made, but research opportunities are still present for all approaches. Both evolutionary and revolutionary technology development, involving foundational research, applied research, learning by doing, demonstration projects, and deployment at scale, will be needed to continue this technology-innovation ecosystem. Most of the approaches still offer the potential to provide much higher efficiencies, much lower costs, improved scalability, and new functionality, relative to the embodiments of solar energy-conversion systems that have been developed to date. Copyright © 2016, American Association for the Advancement of Science.
Error and Uncertainty Analysis for Ecological Modeling and Simulation
2001-12-01
management (LRAM) accounting for environmental, training, and economic factors. In the ELVS methodology, soil erosion status is used as a quantitative ... Monte-Carlo approach. The optimization is realized through economic functions or on decision constraints, such as unit sample cost, number of samples ... nitrate flux to the Gulf of Mexico. Nature (Brief Communication) 414: 166-167. (Uncertainty analysis done with SERDP software) Gertner, G., G
Applying Lean principles and Kaizen rapid improvement events in public health practice.
Smith, Gene; Poteat-Godwin, Annah; Harrison, Lisa Macon; Randolph, Greg D
2012-01-01
This case study describes a local home health and hospice agency's effort to implement Lean principles and Kaizen methodology as a rapid improvement approach to quality improvement. The agency created a cross-functional team, followed Lean Kaizen methodology, and made significant improvements in scheduling time for home health nurses that resulted in reduced operational costs, improved working conditions, and multiple organizational efficiencies.
Fabrication process scale-up and optimization for a boron-aluminum composite radiator
NASA Technical Reports Server (NTRS)
Okelly, K. P.
1973-01-01
Design approaches to a practical utilization of a boron-aluminum radiator for the space shuttle orbiter are presented. The program includes studies of laboratory composite material processes to determine the feasibility of a structural and functional composite radiator panel, and to estimate the cost of its fabrication. The objective is the incorporation of a boron-aluminum modular radiator on the space shuttle.
Meyer-Rath, Gesine; Over, Mead
2012-01-01
Policy discussions about the feasibility of massively scaling up antiretroviral therapy (ART) to reduce HIV transmission and incidence hinge on accurately projecting the cost of such scale-up in comparison to the benefits from reduced HIV incidence and mortality. We review the available literature on modelled estimates of the cost of providing ART to different populations around the world, and suggest alternative methods of characterising cost when modelling several decades into the future. In past economic analyses of ART provision, costs were often assumed to vary by disease stage and treatment regimen, but for treatment as prevention, in particular, most analyses assume a uniform cost per patient. This approach disregards variables that can affect unit cost, such as differences in factor prices (i.e., the prices of supplies and services) and the scale and scope of operations (i.e., the sizes and types of facilities providing ART). We discuss several of these variables, and then present a worked example of a flexible cost function used to determine the effect of scale on the cost of a proposed scale-up of treatment as prevention in South Africa. Adjusting previously estimated costs of universal testing and treatment in South Africa for diseconomies of small scale, i.e., more patients being treated in smaller facilities, adds 42% to the expected future cost of the intervention. PMID:22802731
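One common way to let unit cost vary with facility scale, in the spirit of the flexible cost function described above, is a log-linear specification; the functional form, coefficients, and reference values below are illustrative assumptions, not the authors' estimates.

```python
def unit_cost(patients_per_facility, c_ref=300.0, q_ref=1000.0, scale_elasticity=-0.2):
    """Illustrative unit cost (USD per patient-year) that rises as facility
    scale falls: c(q) = c_ref * (q / q_ref) ** scale_elasticity."""
    return c_ref * (patients_per_facility / q_ref) ** scale_elasticity

def total_cost(facility_sizes):
    """Total annual cost across facilities of assumed sizes (patients each)."""
    return sum(q * unit_cost(q) for q in facility_sizes)

# Same national caseload spread over smaller facilities costs more per patient.
print(unit_cost(1000.0), unit_cost(200.0))
print(total_cost([1000.0] * 100), total_cost([200.0] * 500))
```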
Analysis of nursing home capital reimbursement systems
Boerstler, Heidi; Carlough, Tom; Schlenker, Robert E.
1991-01-01
An increasing number of States are using a fair-rental approach for reimbursement of nursing home capital costs. In this study, two variants of the fair-rental capital-reimbursement approach are compared with the traditional cost-based approach in terms of after-tax cash flow to the investor, cost to the State, and rate of return to investor. Simulation models were developed to examine the effects of each capital-reimbursement approach both at specific points in time and over various periods of time. Results indicate that although long-term costs were similar for the three systems, both fair-rental approaches may be superior to the traditional cost-based approach in promoting and controlling industry stability and, at the same time, in providing an adequate return to investors. PMID:10110878
van der Donk, Marthe L A; Hiemstra-Beernink, Anne-Claire; Tjeenk-Kalff, Ariane C; van der Leij, Aryan V; Lindauer, Ramón J L
2013-01-11
Deficits in executive functioning are of great significance in attention-deficit/hyperactivity disorder (ADHD). One of these executive functions, working memory, plays an important role in academic performance and is often seen as the core deficit of this disorder. There are indications that working memory problems and academic performance can be improved by school-oriented interventions, but this has not yet been studied systematically. In this study we will determine the short- and long-term effects of a working memory training and an executive function training applied in a school situation for children with AD(H)D, taking individual characteristics, the level of impairment and costs (stepped-care approach) into account. The study consists of two parts: the first part is a randomised controlled trial with school-aged children (8-12 yrs) with AD(H)D. Two groups (each n = 50) will be randomly assigned to a well-studied computerized working memory training, 'Cogmed', or to the 'Paying attention in class' intervention, which is an experimental school-based executive function training. Children will be selected from regular and special education primary schools in the region of Amsterdam, the Netherlands. The second part of the study will determine which specific characteristics are related to non-response to the 'Paying attention in class' intervention. School-aged children (8-12 yrs) with AD(H)D will follow the experimental school-based executive function training 'Paying attention in class' (n = 175). Academic performance and neurocognitive functioning (primary outcomes) are assessed before, directly after and 6 months after training. Secondary outcome measures are: behaviour in class, behaviour problems and quality of life. So far, there is limited but promising evidence that working memory and other executive function interventions can improve academic performance. Little is known about the applicability and generalization effects of these interventions in a classroom situation. This study will help fill that gap, especially with respect to real classroom and academic situations. By taking into account the costs of both interventions, the level of impairment and the individual characteristics of the child (stepped-care approach), we will be able to address treatment more adequately for each individual in the future. Nederlands Trial Register NTR3415.
Single conducting polymer nanowire based conductometric sensors
NASA Astrophysics Data System (ADS)
Bangar, Mangesh Ashok
The detection of toxic chemicals, gases or biological agents at very low concentrations with high sensitivity and selectivity has been a subject of immense interest. Sensors employing electrical signal readout as the transduction mechanism offer easy, label-free detection of target analytes in real time. Traditional thin-film sensors inherently suffer from a loss of sensitivity because their large cross-sectional area allows current shunting around the charge-depleted (or charge-added) region formed when analyte binds to the sensor surface. This limitation is overcome by using a nanostructure such as a nanowire or nanotube as the transducer, where current shunting during sensing is almost eliminated. Due to their benign chemical/electrochemical fabrication route along with excellent electrical properties and biocompatibility, conducting polymers offer a cost-effective alternative to other nanostructures. The biggest obstacle to using these nanostructures is the lack of an easy, scalable and cost-effective way of assembling them on prefabricated micropatterns for device fabrication. In this dissertation, three different approaches were taken to fabricate individual or arrays of single conducting polymer (and metal) nanowire based devices; using the polymer by itself or after functionalization with an appropriate recognition molecule, the devices were applied to gas and biochemical detection. In the first approach, electrochemical fabrication of multisegmented nanowires with a functional middle Ppy segment, a ferromagnetic nickel (Ni) segment and gold end segments for better electrical contact was studied. These multisegmented nanowires were used, along with a ferromagnetic contact electrode, for controlled magnetic assembly of nanowires into devices and were applied to ammonia gas sensing. The second approach uses polypyrrole (Ppy) nanowires assembled by simple electrophoretic alignment and anchored by maskless electrodeposition; these were further functionalized with antibodies against a cancer marker protein (Cancer Antigen, CA 125) using covalent immobilization for detection of CA 125 in buffer and human blood plasma. The third approach combined electrochemical deposition of the conducting polymer and the assembly step into a single-step fabrication and functionalization using e-beam lithographically patterned nano-channels. Using this method, arrays of Ppy nanowires were fabricated, and biofunctionalization was achieved during the fabrication step by entrapping the recognition molecule (avidin). These sensors were subsequently used for detection of biotinylated single-stranded DNA.
Chen, Letian; Wang, Fengpin; Wang, Xiaoyu; Liu, Yao-Guang
2013-01-01
Functional genomics requires vector construction for protein expression and functional characterization of target genes; therefore, a simple, flexible and low-cost molecular manipulation strategy will be highly advantageous for genomics approaches. Here, we describe an Ω-PCR strategy that enables multiple types of sequence modification, including precise insertion, deletion and substitution, in any position of a circular plasmid. Ω-PCR is based on an overlap extension site-directed mutagenesis technique, and is named for its characteristic Ω-shaped secondary structure during PCR. Ω-PCR can be performed either in two steps, or in one tube in combination with exonuclease I treatment. These strategies have wide applications for protein engineering, gene function analysis and in vitro gene splicing. PMID:23335613
NASA Astrophysics Data System (ADS)
Schiattone, Francesco; Bonino, Stefano; Gobbi, Luigi; Groppi, Angelamaria; Marazzi, Marco; Musio, Maurizio
2003-04-01
In the past, the optical component market has been driven mainly by performance. Today, as the number of competitors has drastically increased, system integrators have a wide range of possible suppliers and solutions, giving them the possibility to focus more on cost and also on footprint reduction. So, while performance is still essential, low cost and Small Form Factor issues are becoming more and more crucial in selecting components. Another evolution in the market is the current request from optical system companies to simplify the supply chain in order to reduce the assembling and testing steps at the system level. This corresponds to a growing demand for subassemblies, modules or hybrid integrated components: that means Integration will also be an issue in which all the optical component companies will compete to gain market share. As we can see by looking at several examples from the electronics market, combining low cost and SFF is a very challenging task, but Integration can help in achieving both features. In this work we present how these issues can be approached, giving examples of some advanced solutions applied to LiNbO3 modulators. In particular, we describe the progress made on automation, new materials and low-cost fabrication methods for the parts. We also introduce an approach to integrating optical and electrical functionality on LiNbO3 modulators, including an RF driver, bias control loop, attenuator and photodiode integrated in a single device.
NASA Astrophysics Data System (ADS)
Caldwell, Douglas Wyche
Commercial microcontrollers--monolithic integrated circuits containing microprocessor, memory and various peripheral functions--such as are used in industrial, automotive and military applications, present spacecraft avionics system designers an appealing mix of higher performance and lower power together with faster system-development time and lower unit costs. However, these parts are not radiation-hardened for application in the space environment and Single-Event Effects (SEE) caused by high-energy, ionizing radiation present a significant challenge. Mitigating these effects with techniques which require minimal additional support logic, and thereby preserve the high functional density of these devices, can allow their benefits to be realized. This dissertation uses fault-tolerance to mitigate the transient errors and occasional latchups that non-hardened microcontrollers can experience in the space radiation environment. Space systems requirements and the historical use of fault-tolerant computers in spacecraft provide context. Space radiation and its effects in semiconductors define the fault environment. A reference architecture is presented which uses two or three microcontrollers with a combination of hardware and software voting techniques to mitigate SEE. A prototypical spacecraft function (an inertial measurement unit) is used to illustrate the techniques and to explore how real application requirements impact the fault-tolerance approach. Low-cost approaches which leverage features of existing commercial microcontrollers are analyzed. A high-speed serial bus is used for voting among redundant devices and a novel wire-OR output voting scheme exploits the bidirectional controls of I/O pins. A hardware testbed and prototype software were constructed to evaluate two- and three-processor configurations. Simulated Single-Event Upsets (SEUs) were injected at high rates and the response of the system monitored. The resulting statistics were used to evaluate technical effectiveness. Fault-recovery probabilities (coverages) higher than 99.99% were experimentally demonstrated. The greater than thousand-fold reduction in observed effects provides performance comparable with SEE tolerance of tested, rad-hard devices. Technical results were combined with cost data to assess the cost-effectiveness of the techniques. It was found that a three-processor system was only marginally more effective than a two-device system at detecting and recovering from faults, but consumed substantially more resources, suggesting that simpler configurations are generally more cost-effective.
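The voting idea at the heart of the redundant-microcontroller configurations is straightforward to sketch. The following is a generic two-out-of-three majority vote with a duplex fall-back comparison; it is an illustrative assumption, not the dissertation's actual serial-bus protocol or wire-OR output scheme.

```python
from collections import Counter

def vote(values):
    """Majority vote over redundant processor outputs.
    With three values, return the value at least two agree on;
    with two (duplex), agreement is required and disagreement is a fault."""
    counts = Counter(values)
    value, n = counts.most_common(1)[0]
    if n >= 2:
        return value
    raise RuntimeError("voter fault: no majority among redundant outputs")

print(vote([42, 42, 17]))   # an SEU corrupts one copy; the majority masks it
print(vote([7, 7]))         # duplex case: outputs must agree
```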
The choice of sample size: a mixed Bayesian / frequentist approach.
Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John
2009-04-01
Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
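A toy version of the sample-size search by expected net benefit is sketched below; the per-patient benefit, cost model, effect size, and significance rule are assumptions for illustration, not the utility function of Grundy et al. or Lindley.

```python
import numpy as np
from scipy.stats import norm

def expected_net_benefit(n, benefit=1e6, cost_per_subject=2e3, fixed_cost=5e4,
                         effect=0.3, sd=1.0, alpha=0.05):
    """Expected benefit of adoption (benefit times the frequentist probability
    that a two-arm trial of size n per arm is significant at level alpha)
    minus the cost of running the trial."""
    power = norm.cdf(effect * np.sqrt(n / 2.0) / sd - norm.ppf(1 - alpha / 2))
    return benefit * power - (fixed_cost + cost_per_subject * n)

sizes = range(10, 1001, 10)
best_n = max(sizes, key=expected_net_benefit)
print(best_n, expected_net_benefit(best_n))
```

A fuller version would average the significance probability over a prior on the treatment effect, which is the essence of the mixed Bayesian/frequentist formulation.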
Cost-effective conservation of amphibian ecology and evolution
Campos, Felipe S.; Lourenço-de-Moraes, Ricardo; Llorente, Gustavo A.; Solé, Mirco
2017-01-01
Habitat loss is the most important threat to species survival, and the efficient selection of priority areas is fundamental for good systematic conservation planning. Using amphibians as a conservation target, we designed an innovative assessment strategy, showing that prioritization models focused on functional, phylogenetic, and taxonomic diversity can include cost-effectiveness–based assessments of land values. We report new key conservation sites within the Brazilian Atlantic Forest hot spot, revealing a congruence of ecological and evolutionary patterns. We suggest payment for ecosystem services through environmental set-asides on private land, establishing potential trade-offs for ecological and evolutionary processes. Our findings introduce additional effective area-based conservation parameters that set new priorities for biodiversity assessment in the Atlantic Forest, validating the usefulness of a novel approach to cost-effectiveness–based assessments of conservation value for other species-rich regions. PMID:28691084
Bridge approach slabs for Missouri DOT looking at alternative and cost efficient approaches.
DOT National Transportation Integrated Search
2010-12-01
The objective of this project is to develop innovative and cost effective structural solutions for the construction of : both new and replacement deteriorated Bridge Approach Slabs (BAS). A cost study and email survey was performed to identify : stat...
Sinanovic, Edina; Ramma, Lebogang; Foster, Nicola; Berrie, Leigh; Stevens, Wendy; Molapo, Sebaka; Marokane, Puleng; McCarthy, Kerrigan; Churchyard, Gavin; Vassall, Anna
2016-01-01
Purpose: Estimating the incremental costs of scaling-up novel technologies in low-income and middle-income countries is a methodologically challenging and substantial empirical undertaking, in the absence of routine cost data collection. We demonstrate a best practice pragmatic approach to estimate the incremental costs of new technologies in low-income and middle-income countries, using the example of costing the scale-up of Xpert Mycobacterium tuberculosis (MTB)/resistance to rifampicin (RIF) in South Africa. Materials and methods: We estimate costs by applying two distinct approaches of bottom-up and top-down costing, together with an assessment of processes and capacity. Results: The unit costs measured using the different methods of bottom-up and top-down costing, respectively, are $US16.9 and $US33.5 for Xpert MTB/RIF, and $US6.3 and $US8.5 for microscopy. The incremental cost of Xpert MTB/RIF is estimated to be between $US14.7 and $US17.7. While the average cost of Xpert MTB/RIF was higher than in previous studies using standard methods, the incremental cost of Xpert MTB/RIF was found to be lower. Conclusion: Cost estimates are highly dependent on the method used, so an approach which clearly identifies resource-use data collected from a bottom-up or top-down perspective, together with capacity measurement, is recommended as a pragmatic approach to capture true incremental cost where routine cost data are scarce. PMID:26763594
Cunnama, Lucy; Sinanovic, Edina; Ramma, Lebogang; Foster, Nicola; Berrie, Leigh; Stevens, Wendy; Molapo, Sebaka; Marokane, Puleng; McCarthy, Kerrigan; Churchyard, Gavin; Vassall, Anna
2016-02-01
Estimating the incremental costs of scaling-up novel technologies in low-income and middle-income countries is a methodologically challenging and substantial empirical undertaking, in the absence of routine cost data collection. We demonstrate a best practice pragmatic approach to estimate the incremental costs of new technologies in low-income and middle-income countries, using the example of costing the scale-up of Xpert Mycobacterium tuberculosis (MTB)/resistance to rifampicin (RIF) in South Africa. We estimate costs by applying two distinct approaches of bottom-up and top-down costing, together with an assessment of processes and capacity. The unit costs measured using the different methods of bottom-up and top-down costing, respectively, are $US16.9 and $US33.5 for Xpert MTB/RIF, and $US6.3 and $US8.5 for microscopy. The incremental cost of Xpert MTB/RIF is estimated to be between $US14.7 and $US17.7. While the average cost of Xpert MTB/RIF was higher than in previous studies using standard methods, the incremental cost of Xpert MTB/RIF was found to be lower. Cost estimates are highly dependent on the method used, so an approach which clearly identifies resource-use data collected from a bottom-up or top-down perspective, together with capacity measurement, is recommended as a pragmatic approach to capture true incremental cost where routine cost data are scarce. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A.; Clark, Kara; Bloom, Aaron P.
A common approach to regulating electricity is through auction-based competitive wholesale markets. The goal of this approach is to provide a reliable supply of power at the lowest reasonable cost to the consumer. This necessitates market structures and operating rules that ensure revenue sufficiency for all generators needed for resource adequacy purposes. Wholesale electricity markets employ marginal-cost pricing to provide cost-effective dispatch such that resources are compensated for their operational costs. However, marginal-cost pricing alone cannot guarantee cost recovery outside of perfect competition, and electricity markets have at least six attributes that preclude them from functioning as perfectly competitive markets. These attributes include market power, externalities, public good attributes, lack of storage, wholesale price caps, and an ineffective demand curve. Until (and unless) these failures are ameliorated, some form of corrective action(s) will be necessary to improve market efficiency so that prices can correctly reflect the needed level of system reliability. Many of these options necessarily involve some form of administrative or out-of-market actions, such as scarcity pricing, capacity payments, bilateral or other out-of-market contracts, or some hybrid combination. A key focus with these options is to create a connection between the electricity market and long-term reliability/loss-of-load expectation targets, which are inherently disconnected in the native markets because of the aforementioned market failures. The addition of variable generation resources can exacerbate revenue sufficiency and resource adequacy concerns caused by these underlying market failures. Because variable generation resources have near-zero marginal costs, they effectively suppress energy prices and reduce the capacity factors of conventional generators through the merit-order effect in the simplest case of a convex market; non-convexities can also suppress prices.
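The merit-order effect mentioned above can be illustrated with a toy uniform-price clearing calculation; the generator offers, capacities, and demand level below are invented for illustration only.

```python
def clearing_price(offers, demand):
    """Uniform-price clearing: stack offers in merit order (ascending marginal
    cost) and return the marginal cost of the last unit needed to meet demand."""
    supplied = 0.0
    for cost, capacity in sorted(offers):
        supplied += capacity
        if supplied >= demand:
            return cost
    raise ValueError("insufficient capacity for demand")

# ($/MWh marginal cost, MW capacity): zero-marginal-cost wind, gas CC, gas CT
offers = [(0.0, 30.0), (25.0, 40.0), (60.0, 40.0)]
print(clearing_price(offers, demand=65.0))                           # with wind
print(clearing_price([o for o in offers if o[0] > 0.0], demand=65.0))  # without
```

Removing the zero-marginal-cost resource raises the clearing price, which is the price-suppression mechanism the abstract refers to.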
A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions
NASA Technical Reports Server (NTRS)
Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver L.
2016-01-01
Satellite constellations and Distributed Spacecraft Mission (DSM) architectures offer unique benefits to Earth observation scientists and unique challenges to cost estimators. The Cost and Risk (CR) module of the Tradespace Analysis Tool for Constellations (TAT-C) being developed by NASA Goddard seeks to address some of these challenges by providing a new approach to cost modeling, which aggregates existing Cost Estimating Relationships (CER) from respected sources, cost estimating best practices, and data from existing and proposed satellite designs. Cost estimation through this tool is approached from two perspectives: parametric cost estimating relationships and analogous cost estimation techniques. The dual approach utilized within the TAT-C CR module is intended to address prevailing concerns regarding early design stage cost estimates, and to offer increased transparency and fidelity by providing two preliminary perspectives on mission cost. This work outlines the existing cost model, details assumptions built into the model, and explains what measures have been taken to address the particular challenges of constellation cost estimating. The risk estimation portion of the TAT-C CR module is still in development and will be presented in future work. The cost estimate produced by the CR module is not intended to be an exact mission valuation, but rather a comparative tool to assist in the exploration of the constellation design tradespace. Previous work has noted that estimating the cost of satellite constellations is difficult given that no comprehensive model for constellation cost estimation has yet been developed, and as such, quantitative assessment of multiple spacecraft missions has many remaining areas of uncertainty. By incorporating well-established CERs with preliminary approaches to addressing these uncertainties, the CR module offers a more complete approach to constellation costing than has previously been available to mission architects or Earth scientists seeking to leverage the capabilities of multiple spacecraft working in support of a common goal.
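A minimal sketch of combining a parametric CER with an analogy adjustment and a learning-curve term, in the spirit of the dual approach described above; the power-law coefficients, learning rate, and complexity factor are placeholder assumptions, not TAT-C values.

```python
import math

def parametric_cost(mass_kg, a=250.0, b=0.65):
    """Placeholder mass-based CER: first-unit cost ($K) = a * mass ** b."""
    return a * mass_kg ** b

def analogous_cost(reference_cost, complexity_factor=1.1):
    """Scale a known analogous mission's cost by an assumed complexity factor."""
    return reference_cost * complexity_factor

def constellation_cost(unit_mass_kg, n_units, learning=0.9):
    """Sum recurring-unit costs with an assumed 90% learning curve:
    unit i costs first_unit_cost * i ** log2(learning)."""
    first = parametric_cost(unit_mass_kg)
    slope = math.log(learning, 2)
    return sum(first * (i + 1) ** slope for i in range(n_units))

print(parametric_cost(150.0), constellation_cost(150.0, n_units=12),
      analogous_cost(4_000.0))
```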
Viricel, Clément; de Givry, Simon; Schiex, Thomas; Barbe, Sophie
2018-02-20
Accurate and economical methods to predict the change in protein binding free energy upon mutation are imperative to accelerate the design of proteins for a wide range of applications. Free energy is defined by enthalpic and entropic contributions. Following recent progress in Artificial Intelligence-based algorithms for guaranteed NP-hard energy optimization and partition function computation, it becomes possible to quickly compute minimum energy conformations and to reliably estimate the entropic contribution of side-chains in the change of free energy of large protein interfaces. Using guaranteed Cost Function Network algorithms, Rosetta energy functions and Dunbrack's rotamer library, we developed and assessed EasyE and JayZ, two methods for binding affinity estimation that ignore or include conformational entropic contributions, on a large benchmark of binding affinity experimental measures. While both approaches outperform most established tools, we observe that side-chain conformational entropy brings little or no improvement on most systems but becomes crucial in some rare cases. EasyE and JayZ are available as open-source Python/C++ code at sourcesup.renater.fr/projects/easy-jayz. Contact: thomas.schiex@inra.fr and sophie.barbe@insa-toulouse.fr. Supplementary data are available at Bioinformatics online.
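The side-chain entropic contribution can be sketched with the standard partition-function relation F = -RT ln Z over rotamer-level energies; the energies, temperature, and function names below are illustrative, and the actual tools compute these quantities with guaranteed Cost Function Network algorithms on Rosetta energies rather than by brute-force enumeration.

```python
import numpy as np

def free_energy(energies_kcal, T=298.15):
    """F = -RT ln Z with Z = sum_i exp(-E_i / RT), summed over the discrete
    side-chain rotamer states of one molecular species."""
    RT = 0.0019872 * T                       # kcal/mol
    z = np.sum(np.exp(-np.asarray(energies_kcal) / RT))
    return -RT * np.log(z)

def ddG_binding(complex_wt, complex_mut, unbound_wt, unbound_mut):
    """Change in binding free energy upon mutation:
    ddG = [F(complex) - F(unbound)]_mut - [F(complex) - F(unbound)]_wt."""
    dG_wt = free_energy(complex_wt) - free_energy(unbound_wt)
    dG_mut = free_energy(complex_mut) - free_energy(unbound_mut)
    return dG_mut - dG_wt

# Toy rotamer-level energies (kcal/mol); keeping only each minimum would drop
# the entropic part, as in the enthalpy-only variant.
print(ddG_binding([-5.0, -4.6], [-4.2, -4.1], [-3.0, -2.8], [-3.1, -2.9]))
```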
School District Program Cost Accounting: An Alternative Approach
ERIC Educational Resources Information Center
Hentschke, Guilbert C.
1975-01-01
Discusses the value for school districts of a program cost accounting system and examines different approaches to generating program cost data, with particular emphasis on the "cost allocation to program system" (CAPS) and the traditional "transaction-based system." (JG)
Cost Comparison Model: Blended eLearning versus traditional training of community health workers.
Sissine, Mysha; Segan, Robert; Taylor, Mathew; Jefferson, Bobby; Borrelli, Alice; Koehler, Mohandas; Chelvayohan, Meena
2014-01-01
Another one million community health workers are needed to address the growing global population and increasing demand for health care services. This paper describes a cost comparison between two training approaches to better understand the cost implications of training community health workers (CHWs) in Sub-Saharan Africa. Our team created a prospective model to forecast and compare the costs of two training methods as described in the Dalberg Report: (1) a traditional didactic training approach ("baseline") and (2) a blended eLearning training approach ("blended"). After running the model for training 100,000 CHWs, we compared the results and scaled them up to one million CHWs. A substantial difference exists in total costs between the baseline and blended training programs. Results indicate that using a blended eLearning approach for training community health workers could provide a total cost savings of 42%. Scaling the model to one million CHWs, the blended eLearning training approach reduces total costs by 25%. The blended eLearning savings are a result of decreased classroom time, thereby reducing the costs associated with travel, trainers and classrooms, and of using a tablet with WiFi plus a feature phone rather than a smartphone with a data plan. The results of this cost analysis indicate significant savings, by as much as 67%, through using a blended eLearning approach in comparison to a traditional didactic method for CHW training. These results correspond to the Dalberg publication, which indicates that using a blended eLearning approach is an opportunity for closing the gap in training community health workers.
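A skeleton of the kind of prospective comparison described above is sketched here with entirely made-up unit costs and class sizes; the real model's cost categories and values come from the Dalberg report, not from these numbers.

```python
def training_cost(n_chws, class_days, trainer_day_cost, venue_day_cost,
                  travel_per_chw, device_per_chw, class_size=25):
    """Total cost = per-class classroom costs (trainer + venue, scaled by the
    number of classes and contact days) plus per-CHW travel and device costs."""
    n_classes = n_chws / class_size
    classroom = class_days * (trainer_day_cost + venue_day_cost) * n_classes
    return classroom + n_chws * (travel_per_chw + device_per_chw)

n = 100_000
baseline = training_cost(n, class_days=10, trainer_day_cost=150.0,
                         venue_day_cost=80.0, travel_per_chw=60.0,
                         device_per_chw=120.0)    # assumed smartphone + data plan
blended = training_cost(n, class_days=4, trainer_day_cost=150.0,
                        venue_day_cost=80.0, travel_per_chw=25.0,
                        device_per_chw=90.0)      # assumed tablet + feature phone
print(baseline, blended, 1.0 - blended / baseline)
```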
Renaissance architecture for Ground Data Systems
NASA Technical Reports Server (NTRS)
Perkins, Dorothy C.; Zeigenfuss, Lawrence B.
1994-01-01
The Mission Operations and Data Systems Directorate (MO&DSD) has embarked on a new approach for developing and operating Ground Data Systems (GDS) for flight mission support. This approach is driven by the goals of minimizing cost and maximizing customer satisfaction. Achievement of these goals is realized through the use of a standard set of capabilities which can be modified to meet specific user needs. This approach, which is called the Renaissance architecture, stresses the engineering of integrated systems, based upon workstation/local area network (LAN)/fileserver technology and reusable hardware and software components called 'building blocks.' These building blocks are integrated with mission specific capabilities to build the GDS for each individual mission. The building block approach is key to the reduction of development costs and schedules. Also, the Renaissance approach allows the integration of GDS functions that were previously provided via separate multi-mission facilities. With the Renaissance architecture, the GDS can be developed by the MO&DSD or all, or part, of the GDS can be operated by the user at their facility. Flexibility in operation configuration allows both selection of a cost-effective operations approach and the capability for customizing operations to user needs. Thus the focus of the MO&DSD is shifted from operating systems that we have built to building systems and, optionally, operations as separate services. Renaissance is actually a continuous process. Both the building blocks and the system architecture will evolve as user needs and technology change. Providing GDS on a per user basis enables this continuous refinement of the development process and product and allows the MO&DSD to remain a customer-focused organization. This paper will present the activities and results of the MO&DSD initial efforts toward the establishment of the Renaissance approach for the development of GDS, with a particular focus on both the technical and process implications posed by Renaissance to the MO&DSD.
Streamlining the Design Tradespace for Earth Imaging Constellations
NASA Technical Reports Server (NTRS)
Nag, Sreeja; Hughes, Steven P.; Le Moigne, Jacqueline J.
2016-01-01
Satellite constellations and Distributed Spacecraft Mission (DSM) architectures offer unique benefits to Earth observation scientists and unique challenges to cost estimators. The Cost and Risk (CR) module of the Tradespace Analysis Tool for Constellations (TAT-C) being developed by NASA Goddard seeks to address some of these challenges by providing a new approach to cost modeling, which aggregates existing Cost Estimating Relationships (CER) from respected sources, cost estimating best practices, and data from existing and proposed satellite designs. Cost estimation through this tool is approached from two perspectives: parametric cost estimating relationships and analogous cost estimation techniques. The dual approach utilized within the TAT-C CR module is intended to address prevailing concerns regarding early design stage cost estimates, and to offer increased transparency and fidelity by providing two preliminary perspectives on mission cost. This work outlines the existing cost model, details assumptions built into the model, and explains what measures have been taken to address the particular challenges of constellation cost estimating. The risk estimation portion of the TAT-C CR module is still in development and will be presented in future work. The cost estimate produced by the CR module is not intended to be an exact mission valuation, but rather a comparative tool to assist in the exploration of the constellation design tradespace. Previous work has noted that estimating the cost of satellite constellations is difficult given that no comprehensive model for constellation cost estimation has yet been developed, and as such, quantitative assessment of multiple spacecraft missions has many remaining areas of uncertainty. By incorporating well-established CERs with preliminary approaches to addressing these uncertainties, the CR module offers a more complete approach to constellation costing than has previously been available to mission architects or Earth scientists seeking to leverage the capabilities of multiple spacecraft working in support of a common goal.
HSI top-down requirements analysis for ship manpower reduction
NASA Astrophysics Data System (ADS)
Malone, Thomas B.; Bost, J. R.
2000-11-01
U.S. Navy ship acquisition programs such as DD 21 and CVNX are increasingly relying on top down requirements analysis (TDRA) to define and assess design approaches for workload and manpower reduction, and for ensuring required levels of human performance, reliability, safety, and quality of life at sea. The human systems integration (HSI) approach to TDRA begins with a function analysis which identifies the functions derived from the requirements in the Operational Requirements Document (ORD). The function analysis serves as the function baseline for the ship, and also supports the definition of RDT&E and Total Ownership Cost requirements. A mission analysis is then conducted to identify mission scenarios, again based on requirements in the ORD, and the Design Reference Mission (DRM). This is followed by a mission/function analysis which establishes the function requirements to successfully perform the ship's missions. Function requirements of major importance for HSI are information, performance, decision, and support requirements associated with each function. An allocation of functions defines the roles of humans and automation in performing the functions associated with a mission. Alternate design concepts, based on function allocation strategies, are then described, and task networks associated with the concepts are developed. Task network simulations are conducted to assess workloads and human performance capabilities associated with alternate concepts. An assessment of the affordability and risk associated with alternate concepts is performed, and manning estimates are developed for feasible design concepts.
Using personal glucose meters and functional DNA sensors to quantify a variety of analytical targets
Xiang, Yu; Lu, Yi
2012-01-01
Portable, low-cost and quantitative detection of a broad range of targets at home and in the field has the potential to revolutionize medical diagnostics and environmental monitoring. Despite many years of research, very few such devices are commercially available. Taking advantage of the wide availability and low cost of the pocket-sized personal glucose meter—used worldwide by diabetes sufferers—we demonstrate a method to use such meters to quantify non-glucose targets, ranging from a recreational drug (cocaine, 3.4 μM detection limit) to an important biological cofactor (adenosine, 18 μM detection limit), to a disease marker (interferon-gamma of tuberculosis, 2.6 nM detection limit) and a toxic metal ion (uranium, 9.1 nM detection limit). The method is based on the target-induced release of invertase from a functional-DNA–invertase conjugate. The released invertase converts sucrose into glucose, which is detectable using the meter. The approach should be easily applicable to the detection of many other targets through the use of suitable functional-DNA partners (aptamers, DNAzymes or aptazymes). PMID:21860458
Lung tumor segmentation in PET images using graph cuts.
Ballangan, Cherry; Wang, Xiuying; Fulham, Michael; Eberl, Stefan; Feng, David Dagan
2013-03-01
The aim of segmentation of tumor regions in positron emission tomography (PET) is to provide more accurate measurements of tumor size and extension into adjacent structures, than is possible with visual assessment alone and hence improve patient management decisions. We propose a segmentation energy function for the graph cuts technique to improve lung tumor segmentation with PET. Our segmentation energy is based on an analysis of the tumor voxels in PET images combined with a standardized uptake value (SUV) cost function and a monotonic downhill SUV feature. The monotonic downhill feature avoids segmentation leakage into surrounding tissues with similar or higher PET tracer uptake than the tumor and the SUV cost function improves the boundary definition and also addresses situations where the lung tumor is heterogeneous. We evaluated the method in 42 clinical PET volumes from patients with non-small cell lung cancer (NSCLC). Our method improves segmentation and performs better than region growing approaches, the watershed technique, fuzzy-c-means, region-based active contour and tumor customized downhill. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
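For orientation, a generic graph-cut segmentation energy of the kind being modified here has the form below; the symbols are standard notation rather than the authors', and the paper's SUV cost function and monotonic-downhill feature are only indicated schematically.

\[
E(L) = \sum_{p \in \mathcal{P}} D_p(L_p) \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} V_{p,q}(L_p, L_q),
\]

where L_p in {tumor, background} labels voxel p, the data term D_p penalizes labels inconsistent with the voxel's standardized uptake value (the paper's SUV cost function), the pairwise term V_{p,q} penalizes label changes between neighboring voxels with similar uptake, and the monotonic-downhill feature discourages the tumor label from growing along paths where SUV stops decreasing away from the tumor core, which is how leakage into adjacent high-uptake tissue is limited.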
Dikin-type algorithms for dextrous grasping force optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buss, M.; Faybusovich, L.; Moore, J.B.
1998-08-01
One of the central issues in dextrous robotic hand grasping is to balance external forces acting on the object and at the same time achieve grasp stability and minimum grasping effort. A companion paper shows that the nonlinear friction-force limit constraints on grasping forces are equivalent to the positive definiteness of a certain matrix subject to linear constraints. Further, compensation of the external object force is also a linear constraint on this matrix. Consequently, the task of grasping force optimization can be formulated as a problem with semidefinite constraints. In this paper, two versions of strictly convex cost functions, one of them self-concordant, are considered. These are twice-continuously differentiable functions that tend to infinity at the boundary of positive definiteness. For the general class of such cost functions, Dikin-type algorithms are presented. It is shown that the proposed algorithms guarantee convergence to the unique solution of the semidefinite programming problem associated with dextrous grasping force optimization. Numerical examples demonstrate the simplicity of implementation, the good numerical properties, and the optimality of the approach.
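Schematically, the grasping-force problem described above is of the semidefinite-constrained form below; the notation is generic rather than the paper's.

\[
\min_{x} \; \Phi\bigl(P(x)\bigr)
\quad \text{s.t.} \quad P(x) \succ 0, \qquad A\,x = b_{\mathrm{ext}},
\]

where the matrix P(x) depends affinely on the vector of contact forces x, positive definiteness of P(x) encodes the friction-cone limits, the linear constraints balance the external object wrench, and Φ is a strictly convex cost that tends to infinity as P(x) approaches the boundary of positive definiteness, so that Dikin-type interior-point iterations remain strictly feasible.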
NASA Astrophysics Data System (ADS)
Shipman, Joshua; Riggs, Brian; Luo, Sijun; Adireddy, Shiva; Chrisey, Douglas
Energy storage is a green energy technology; however, it must be cost-effective and scalable to meet future energy demands. Polymer-nanoparticle composites are low cost and potentially offer high energy storage. This is based on the high breakdown strength of polymers and the high dielectric constant of ceramic nanoparticles, but the incoherent nature of the interface between the two components prevents the realization of their combined full potential. We have created inkjet-printable nanoparticle-polymer composites that mitigate many of these interface effects, guided by first-principles modelling of the interface. We detail density functional theory modelling of the interface and how it has guided our use of specific surface functionalizations and other inorganic layers. We have validated our approach by using finite element analysis of the interface. By choosing the correct surface functionalization we are able to create dipole traps which further increase the breakdown strength of our composites. Our nano-scale understanding has allowed us to create the highest energy density composites currently available (>40 J/cm3).
Using personal glucose meters and functional DNA sensors to quantify a variety of analytical targets
NASA Astrophysics Data System (ADS)
Xiang, Yu; Lu, Yi
2011-09-01
Portable, low-cost and quantitative detection of a broad range of targets at home and in the field has the potential to revolutionize medical diagnostics and environmental monitoring. Despite many years of research, very few such devices are commercially available. Taking advantage of the wide availability and low cost of the pocket-sized personal glucose meter—used worldwide by diabetes sufferers—we demonstrate a method to use such meters to quantify non-glucose targets, ranging from a recreational drug (cocaine, 3.4 µM detection limit) to an important biological cofactor (adenosine, 18 µM detection limit), to a disease marker (interferon-gamma of tuberculosis, 2.6 nM detection limit) and a toxic metal ion (uranium, 9.1 nM detection limit). The method is based on the target-induced release of invertase from a functional-DNA-invertase conjugate. The released invertase converts sucrose into glucose, which is detectable using the meter. The approach should be easily applicable to the detection of many other targets through the use of suitable functional-DNA partners (aptamers, DNAzymes or aptazymes).
Slice-to-Volume Nonrigid Registration of Histological Sections to MR Images of the Human Brain
Osechinskiy, Sergey; Kruggel, Frithjof
2011-01-01
Registration of histological images to three-dimensional imaging modalities is an important step in quantitative analysis of brain structure, in architectonic mapping of the brain, and in investigation of the pathology of a brain disease. Reconstruction of histology volume from serial sections is a well-established procedure, but it does not address registration of individual slices from sparse sections, which is the aim of the slice-to-volume approach. This study presents a flexible framework for intensity-based slice-to-volume nonrigid registration algorithms with a geometric transformation deformation field parametrized by various classes of spline functions: thin-plate splines (TPS), Gaussian elastic body splines (GEBS), or cubic B-splines. Algorithms are applied to cross-modality registration of histological and magnetic resonance images of the human brain. Registration performance is evaluated across a range of optimization algorithms and intensity-based cost functions. For a particular case of histological data, best results are obtained with a TPS three-dimensional (3D) warp, a new unconstrained optimization algorithm (NEWUOA), and a correlation-coefficient-based cost function. PMID:22567290
Analog "neuronal" networks in early vision.
Koch, C; Marroquin, J; Yuille, A
1986-01-01
Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
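A representative non-quadratic functional of the kind these analog networks minimize is the weak-membrane energy for surface reconstruction with explicit discontinuities, written here in one dimension with a binary line process l_i; the notation is standard rather than taken verbatim from the paper.

\[
E(f, l) = \sum_i \bigl(f_i - d_i\bigr)^2
 + \lambda \sum_i \bigl(f_{i+1} - f_i\bigr)^2 \,(1 - l_i)
 + \alpha \sum_i l_i ,
\]

where d_i are the sparse data, f_i the reconstructed surface values, and setting l_i = 1 "breaks" the smoothness term at a discontinuity at the price α; the analog network relaxes both f and l to drive E toward a minimum, preserving discontinuities while interpolating smoothly elsewhere.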