On a cost functional for H2/H(infinity) minimization
NASA Technical Reports Server (NTRS)
Macmartin, Douglas G.; Hall, Steven R.; Mustafa, Denis
1990-01-01
A cost functional motivated by minimizing the energy in a structure using only collocated feedback is proposed and investigated. Defined for an H(infinity)-norm bounded system, this cost functional also overbounds the H2 cost. Some properties of this cost functional are given, and preliminary results on the procedure for minimizing it are presented. The frequency-domain cost functional is shown to have a time-domain representation in terms of a Stackelberg non-zero-sum differential game.
NASA Technical Reports Server (NTRS)
Mclennan, G. A.
1986-01-01
This report describes, and is a User's Manual for, a computer code (ANL/RBC) which calculates cycle performance for Rankine bottoming cycles extracting heat from a specified source gas stream. The code calculates cycle power and efficiency and the sizes for the heat exchangers, using tabular input of the properties of the cycle working fluid. An option is provided to calculate the costs of system components from user defined input cost functions. These cost functions may be defined in equation form or by numerical tabular data. A variety of functional forms have been included for these functions and they may be combined to create very general cost functions. An optional calculation mode can be used to determine the off-design performance of a system when operated away from the design-point, using the heat exchanger areas calculated for the design-point.
Thermal treatment of the minority game
NASA Astrophysics Data System (ADS)
Burgos, E.; Ceva, Horacio; Perazzo, R. P.
2002-03-01
We study a cost function for the aggregate behavior of all the agents involved in the minority game (MG) or the bar attendance model (BAM). The cost function allows us to define a deterministic, synchronous dynamics that yields results with the same relevant features as those of the probabilistic, sequential dynamics used for the MG or the BAM. We define a temperature through a Langevin approach in terms of the fluctuations of the average attendance. We prove that the cost function is an extensive quantity that can play the role of an internal energy of the many-agent system, while the temperature so defined is an intensive parameter. We compare the results of the thermal perturbation to the deterministic dynamics and prove that they agree with those obtained with the MG or BAM in the limit of very low temperature.
Space station needs, attributes and architectural options: Study summary
NASA Technical Reports Server (NTRS)
1983-01-01
Space station needs, attributes, and architectural options that affect the future implementation and design of a space station system are examined. Requirements for candidate missions are used to define functional attributes of a space station. Station elements that perform these functions form the basic station architecture. Alternative ways to accomplish these functions are defined and configuration concepts are developed and evaluated. Configuration analyses are carried to the point that budgetary cost estimates of alternate approaches could be made. Emphasis is placed on differential costs for station support elements and benefits that accrue through use of the station.
Disaster warning system study summary. [cost estimates using NOAA satellites
NASA Technical Reports Server (NTRS)
Leroy, B. F.; Maloy, J. E.; Braley, R. C.; Provencher, C. E.; Schumaker, H. A.; Valgora, M. E.
1977-01-01
A conceptual satellite system to replace or complement NOAA's data collection, internal communications, and public information dissemination systems for the mid-1980's was defined. Program cost and cost sensitivity to variations in communications functions are analyzed.
NASA Technical Reports Server (NTRS)
Rhodes, Russel E.; Zapata, Edgar; Levack, Daniel J. H.; Robinson, John W.; Donahue, Benjamin B.
2009-01-01
Cost control must be implemented through the establishment of requirements and controlled continually by managing to these requirements. Cost control of the non-recurring side of life cycle cost has traditionally been implemented in both commercial and government programs. The government uses the budget process to implement this control. The commercial approach is to use a similar process of allocating the non-recurring cost to major elements of the program. This type of control generally manages through a work breakdown structure (WBS) that defines the major elements of the program. If cost control is to be applied across the entire program life cycle cost (LCC), the approach must be addressed very differently. A functional breakdown structure (FBS) is defined and recommended. Use of an FBS provides the visibility to allow the choice of an integrated solution, reducing the cost of providing many different elements of like function. The different functional solutions that drive the hardware logistics, quantity of documentation, operational labor, reliability and maintainability balance, and total integration of the entire system from DDT&E through the life of the program must be fully defined and compared, and final decisions made among the competing solutions. The major drivers of recurring cost have been identified and are presented and discussed. The LCC requirements must be established and flowed down to provide control of LCC. This control will require a structured, rigid process similar to the one traditionally used to control weight/performance for space transportation systems throughout the entire program. It has been demonstrated over the last 30 years that without a firm requirement and methodically structured cost control, it is unlikely that an affordable and sustainable space transportation system LCC will be achieved.
Optimal cost for strengthening or destroying a given network
NASA Astrophysics Data System (ADS)
Patron, Amikam; Cohen, Reuven; Li, Daqing; Havlin, Shlomo
2017-05-01
Strengthening or destroying a network is a very important issue in designing resilient networks or in planning attacks against networks, including planning strategies to immunize a network against diseases, viruses, etc. Here we develop a method for strengthening or destroying a random network with a minimum cost. We assume a correlation between the cost required to strengthen or destroy a node and the degree of the node. Accordingly, we define a cost function c(k), which is the cost of strengthening or destroying a node with degree k. Using the degrees k in a network and the cost function c(k), we develop a method for defining a list of priorities of degrees and for choosing the right group of degrees to be strengthened or destroyed that minimizes the total price of strengthening or destroying the entire network. We find that the list of priorities of degrees is universal and independent of the network's degree distribution, for all kinds of random networks. The list of priorities is the same for both strengthening a network and for destroying a network with minimum cost. However, in spite of this similarity, there is a difference between their p_c, the critical fraction of nodes that has to be functional to guarantee the existence of a giant component in the network.
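The bookkeeping behind a degree-based priority list can be illustrated with a minimal sketch. Everything below is a toy stand-in, not the paper's method: the cost function c(k) = k^1.5, the random degree sequence, and the naive cheapest-first ordering are all assumptions for illustration.

```python
import random

def total_cost(degrees, chosen, c):
    """Total price of strengthening or destroying the chosen nodes."""
    return sum(c(degrees[i]) for i in chosen)

def cheapest_fraction(degrees, q, c):
    """Choose a fraction q of nodes, cheapest-per-node first (a naive
    stand-in for the paper's universal priority list over degrees)."""
    order = sorted(range(len(degrees)), key=lambda i: c(degrees[i]))
    return order[:int(q * len(degrees))]

random.seed(1)
degrees = [random.randint(1, 10) for _ in range(1000)]   # toy degree sequence
c = lambda k: k ** 1.5                                   # assumed c(k)
chosen = cheapest_fraction(degrees, 0.2, c)
print(len(chosen), round(total_cost(degrees, chosen, c), 1))
```

The paper's actual priority list minimizes the cost of reaching a structural goal (e.g. destroying the giant component), which is not the same as simply picking the cheapest nodes; the sketch only shows how a degree sequence and a cost function c(k) combine into a total price.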
Noun-phrase anaphors and focus: the informational load hypothesis.
Almor, A
1999-10-01
The processing of noun-phrase (NP) anaphors in discourse is argued to reflect constraints on the activation and processing of semantic information in working memory. The proposed theory views NP anaphor processing as an optimization process that is based on the principle that processing cost, defined in terms of activating semantic information, should serve some discourse function--identifying the antecedent, adding new information, or both. In a series of 5 self-paced reading experiments, anaphors' functionality was manipulated by changing the discourse focus, and their cost was manipulated by changing the semantic relation between the anaphors and their antecedents. The results show that reading times of NP anaphors reflect their functional justification: Anaphors were read faster when their cost had a better functional justification. These results are incompatible with any theory that treats NP anaphors as one homogeneous class regardless of discourse function and processing cost.
Crystal structure prediction supported by incomplete experimental data
NASA Astrophysics Data System (ADS)
Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji
2018-05-01
We propose an efficient theoretical scheme for structure prediction based on the idea of combining methods that optimize theoretical calculation and experimental data simultaneously. In this scheme, we formulate a cost function based on a weighted sum of interatomic potential energies and a penalty function defined with partial experimental data that are totally insufficient for conventional structure analysis. In particular, we define the cost function using a "crystallinity" formulated with only peak positions within a small range of the x-ray-diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited information on diffraction peaks. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
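The shape of such a combined cost function can be sketched in a few lines. This is a toy illustration, not the paper's formulation: the observed peak positions, the mismatch-based penalty, and the weight w are all assumptions, standing in for the "crystallinity" term built from partial diffraction data.

```python
def penalty(calc_peaks, obs_peaks):
    """Small when every observed peak lies near some calculated peak."""
    return sum(min(abs(c - o) for c in calc_peaks) for o in obs_peaks)

def cost(energy, calc_peaks, obs_peaks, w=0.5):
    """Weighted sum of model energy and the experimental penalty."""
    return w * energy + (1 - w) * penalty(calc_peaks, obs_peaks)

obs = [21.4, 26.7]                       # assumed observed peak positions
good = cost(-10.0, [21.5, 26.6, 36.1], obs)   # candidate matching the data
bad = cost(-10.0, [18.0, 30.0, 40.0], obs)    # candidate missing the peaks
print(good < bad)
```

A structure whose computed peaks reproduce the few observed positions scores lower even at equal energy, which is the mechanism that lets incomplete experimental data steer the optimization.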
Fitting of full Cobb-Douglas and full VRTS cost frontiers by solving goal programming problem
NASA Astrophysics Data System (ADS)
Venkateswarlu, B.; Mahaboob, B.; Subbarami Reddy, C.; Madhusudhana Rao, B.
2017-11-01
The present research article first defines two popular production functions, viz., the Cobb-Douglas and VRTS production frontiers, and their dual cost functions, and then derives their cost-limited maximal outputs. It is shown that the cost-limited maximal output is cost efficient. A one-sided goal programming problem is proposed by which the full Cobb-Douglas cost frontier and the full VRTS frontier can be fitted. The paper also frames the goal programming problems by which stochastic cost frontiers and stochastic VRTS frontiers are fitted. Hasan et al. [1] used a parametric approach, Stochastic Frontier Analysis (SFA), to examine the technical efficiency of the Malaysian domestic banks listed on the Kuala Lumpur Stock Exchange (KLSE) over the period 2005-2010. Ashkan Hassani [2] exposed applications of Cobb-Douglas production functions in construction schedule crashing and project risk analysis related to the duration of construction projects. Nan Jiang [3] applied Stochastic Frontier Analysis to a panel of New Zealand dairy farms over 1998/99-2006/07.
NASA Astrophysics Data System (ADS)
Guérin, Joris; Gibaru, Olivier; Thiery, Stéphane; Nyiri, Eric
2017-01-01
Recent methods of Reinforcement Learning have made it possible to solve difficult, high-dimensional robotic tasks under unknown dynamics using iterative Linear Quadratic Gaussian control theory. These algorithms are based on building a local time-varying linear model of the dynamics from data gathered through interaction with the environment. In such tasks, the cost function is often expressed directly in terms of the state and control variables so that it can be locally quadratized to run the algorithm. If the cost is expressed in terms of other variables, a model is required to compute the cost function from the variables manipulated. We propose a method to learn the cost function directly from the data, in the same way as for the dynamics. This way, the cost function can be defined in terms of any measurable quantity and thus can be chosen more appropriately for the task to be carried out. With our method, any sensor information can be used to design the cost function. We demonstrate the efficiency of this method by simulating, with the V-REP software, the learning of a Cartesian positioning task on several industrial robots with different characteristics. The robots are controlled in joint space and no model is provided a priori. Our results are compared with another model-free technique, which consists of writing the cost function as a state variable.
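The core idea, fitting the cost directly from sampled data so it can be quadratized, can be sketched with least squares. This is a simplified illustration, not the authors' algorithm: an affine "sensor" map and a single global quadratic fit stand in for the local time-varying models used inside iLQG.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 2))        # sampled state-control points
y = x + 0.1                                   # assumed sensor readings g(x)
cost = np.sum((y - np.array([0.3, 0.1])) ** 2, axis=1)   # measured cost

# Fit a full quadratic in x by least squares, so the learned cost can be
# quadratized without any model of g: [1, x1, x2, x1^2, x1*x2, x2^2].
A = np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                     x[:, 0] ** 2, x[:, 0] * x[:, 1], x[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, cost, rcond=None)
pred = A @ coef
print(round(float(np.corrcoef(pred, cost)[0, 1]), 3))
```

With a genuinely nonlinear sensor the quadratic fit is only valid locally, which is why such methods refit around the current trajectory at each iteration.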
Technology requirements for communication satellites in the 1980's
NASA Technical Reports Server (NTRS)
Burtt, J. E.; Moe, C. R.; Elms, R. V.; Delateur, L. A.; Sedlacek, W. C.; Younger, G. G.
1973-01-01
The key technology requirements are defined for meeting the forecasted demands for communication satellite services in the 1985 to 1995 time frame. Evaluation is made of needs for services and technical and functional requirements for providing services. The future growth capabilities of the terrestrial telephone network, cable television, and satellite networks are forecasted. The impact of spacecraft technology and booster performance and costs upon communication satellite costs are analyzed. Systems analysis techniques are used to determine functional requirements and the sensitivities of technology improvements for reducing the costs of meeting requirements. Recommended development plans and funding levels are presented, as well as the possible cost saving for communications satellites in the post 1985 era.
Elements of Designing for Cost
NASA Technical Reports Server (NTRS)
Dean, Edwin B.; Unal, Resit
1992-01-01
During recent history in the United States, government systems development has been performance driven. As a result, systems within a class have experienced exponentially increasing cost over time in fixed year dollars. Moreover, little emphasis has been placed on reducing cost. This paper defines designing for cost and presents several tools which, if used in the engineering process, offer the promise of reducing cost. Although other potential tools exist for designing for cost, this paper focuses on rules of thumb, quality function deployment, Taguchi methods, concurrent engineering, and activity based costing. Each of these tools has been demonstrated to reduce cost if used within the engineering process.
Toward a new payment system for inpatient rehabilitation. Part II: Reimbursing providers.
Saitto, Carlo; Marino, Claudia; Fusco, Danilo; Arcà, Massimo; Perucci, Carlo A
2005-09-01
The major fault with existing reimbursement systems lies in their failure to discriminate for the effectiveness of stay, both when paying per day and when paying per episode of treatment. We sought to define an average length of effective stay and recovery trends by impairment category, to design a prospective payment system that takes into account costs and expected recovery trends, and to compare the calculated reimbursement with the predicted costs estimated in a previous study (Saitto C, Marino C, Fusco D, et al. A new prospective payment system for inpatient rehabilitation. Part I: predicting resource consumption. Med Care. 2005;43:844-855). We considered all rehabilitation admissions from 5 Italian inpatient facilities during a 12-month period for which total cost of care had already been estimated and daily cost predicted through a regression model. We ascertained recovery trends by impairment category through repeated MDS-PAC schedules and factorial analysis of functional status. We defined effective stay and daily resource consumption by impairment category and used these parameters to calculate reimbursement for the admission. We compared our reimbursement with predicted cost through regression analysis and evaluated the goodness of fit through residual analysis. We calculated reimbursement for 2079 admissions. The r^2 value for the reimbursement-to-cost correlation was 0.54 in the whole population, and ranged from 0.56 for "multiple trauma" to 0.85 for "other medical disorders." The best fit was found in the central quintiles of the cost and severity distributions. For each impairment category, we determined the number of days of effective hospital stay and the trends of functional gain. We demonstrated, at least within the Italian health care system, the feasibility of a reimbursement system that matches costs with functional recovery.
By linking reimbursement to effective stay adjusted for trends of functional gain, we suggest it is possible to avoid both needless cuts and extensions of hospital admissions.
Automated Estimation Of Software-Development Costs
NASA Technical Reports Server (NTRS)
Roush, George B.; Reini, William
1993-01-01
COSTMODL is an automated software-development cost-estimation tool. It yields a significant reduction in the risk of cost overruns and failed projects. It accepts a description of the software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of a defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.
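Estimation tools of this kind generally follow the shape of COCOMO-type models: effort grows as a power of product size and is then distributed over life-cycle phases. The sketch below uses the published COCOMO "organic mode" constants but is a generic illustration, not COSTMODL's actual equations; the 32 KSLOC product and the phase split are assumptions.

```python
def effort_pm(ksloc, a=2.4, b=1.05):
    """COCOMO-style effort in person-months for a product of
    ksloc thousand source lines (organic-mode constants)."""
    return a * ksloc ** b

phases = {"design": 0.25, "code": 0.40, "test": 0.35}   # assumed split
total = effort_pm(32)                                   # assumed 32 KSLOC
print({phase: round(total * frac, 1) for phase, frac in phases.items()})
```

The power-law form is what makes early size estimates so consequential: a modest underestimate of size compounds into a larger underestimate of effort.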
Blumstein, Daniel T; Chung, Lawrance K; Smith, Jennifer E
2013-05-22
Play has been defined as apparently functionless behaviour, yet since play is costly, models of adaptive evolution predict that it should have some beneficial function (or functions) that outweigh its costs. We provide strong evidence for a long-standing, but poorly supported hypothesis: that early social play is practice for later dominance relationships. We calculated the relative dominance rank by observing the directional outcome of playful interactions in juvenile and yearling yellow-bellied marmots (Marmota flaviventris) and found that these rank relationships were correlated with later dominance ranks calculated from agonistic interactions; however, the strength of this relationship attenuated over time. While play may have multiple functions, one of them may be to establish later dominance relationships in a minimally costly way.
NASA Technical Reports Server (NTRS)
Csomor, A.; Nielson, C. E.
1989-01-01
This program will focus on the integration of all functional disciplines of the design, manufacturing, materials, fabrication and producibility to define and demonstrate a highly reliable, easily maintained, low cost liquid methane turbopump as a component for the STBE (Space Transportation Booster Engine) using the STME (main engine) oxygen turbopump. A cost model is to be developed to predict the recurring cost of production hardware and operations. A prime objective of the program is to design the liquid methane turbopump to be used in common with a LH2 turbopump optimized for the STME. Time phasing of the effort is presented and interrelationship of the tasks is defined. Major subcontractors are identified and their roles in the program are described.
Bhat; Bergstrom; Teasley; Bowker; Cordell
1998-01-01
This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characteristics. The individual travel cost method was employed to estimate recreation demand functions for activities such as motor boating and waterskiing, developed and primitive camping, coldwater fishing, sightseeing and pleasure driving, and big game hunting for each ecoregion. While our ecoregional approach differs conceptually from previous work, our results appear consistent with previous travel cost method valuation studies. KEY WORDS: Recreation; Ecoregion; Travel cost method; Truncated Poisson model
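In count-data travel cost models such as the truncated Poisson, per-trip consumer surplus is commonly approximated as 1/|beta_cost|, where beta_cost is the fitted travel-cost coefficient. A tiny sketch with assumed numbers (not the paper's estimates):

```python
# Both inputs are hypothetical, chosen only to show the arithmetic.
beta_cost = -0.02        # assumed fitted travel-cost coefficient (per $)
trips_per_season = 4.0   # assumed predicted trips per season
cs_per_trip = 1.0 / abs(beta_cost)          # consumer surplus per trip
print(cs_per_trip, trips_per_season * cs_per_trip)   # 50.0 200.0
```

Seasonal value per visitor is then the surplus per trip times predicted trips, which is how ecoregion-level demand functions translate into recreation values.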
Bantam System Technology Project
NASA Technical Reports Server (NTRS)
Moon, J. M.; Beveridge, J. R.
1998-01-01
This report focuses on determining a best value, low risk, low cost and highly reliable Data and Command System for support of the launch of low cost vehicles which are to carry small payloads into low earth orbit. The ground-based DCS is considered as a component of the overall ground and flight support system which includes the DCS, flight computer, mission planning system and simulator. Interfaces between the DCS and these other component systems are considered. Consideration is also given to the operational aspects of the mission and of the DCS selected. This project involved: defining requirements, defining an efficient operations concept, defining a DCS architecture which satisfies the requirements and concept, conducting a market survey of commercial and government off-the-shelf DCS candidate systems and rating the candidate systems against the requirements/concept. The primary conclusions are that several low cost, off-the-shelf DCS solutions exist and these can be employed to provide for very low cost operations and low recurring maintenance cost. The primary recommendation is that the DCS design/specification should be integrated within the ground and flight support system design as early as possible to ensure ease of interoperability and efficient allocation of automation functions among the component systems.
NASA Astrophysics Data System (ADS)
Liu, Xiaomei; Li, Shengtao; Zhang, Kanjian
2017-08-01
In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multi-switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variation, we derive the gradient of the cost functional with respect to the switching times on an especially simple form, which can be directly used in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.
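The setup can be illustrated on a scalar toy problem with a single switching time. A central-difference gradient stands in here for the variational gradient derived in the paper; the dynamics, running cost, and horizon below are our own assumptions (for them, the optimum t1 = 4/3 is known analytically).

```python
# One switching time t1 on [0, T]: mode 1 has dx/dt = +1, mode 2 has
# dx/dt = -1, running cost J = integral of x(t)^2 over [0, T], x(0) = -1.
T, x0 = 2.0, -1.0

def J(t1):
    x1 = x0 + t1                                       # state at the switch
    first = ((x0 + t1) ** 3 - x0 ** 3) / 3.0           # int_0^t1 (x0+t)^2 dt
    second = (x1 ** 3 - (x1 - (T - t1)) ** 3) / 3.0    # int_t1^T (x1-(t-t1))^2 dt
    return first + second

# Gradient descent on the switching time, clamped to [0, T].
t1, lr, h = 0.5, 0.1, 1e-6
for _ in range(300):
    grad = (J(t1 + h) - J(t1 - h)) / (2 * h)
    t1 = min(max(t1 - lr * grad, 0.0), T)
print(round(t1, 3))
```

The pre-specified mode sequence (+1 then -1) is fixed, and the switching instant is the only decision variable, exactly the structure the paper assumes.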
Defining and reconstructing clinical processes based on IHE and BPMN 2.0.
Strasser, Melanie; Pfeifer, Franz; Helm, Emmanuel; Schuler, Andreas; Altmann, Josef
2011-01-01
This paper describes the current status and results of our process management system for defining and reconstructing clinical care processes, which makes it possible to compare, analyze, and evaluate clinical processes and to identify high-cost tasks or stays. The system is founded on IHE, which guarantees standardized interfaces and interoperability between clinical information systems. At the heart of the system is BPMN, a modeling notation and specification language that allows the definition and execution of clinical processes. The system provides functionality to define clinical core processes independent of any particular healthcare information system and to execute the processes in a workflow engine. Furthermore, the reconstruction of clinical processes is done by evaluating an IHE audit log database, which records patient movements within a health care facility. The main goal of the system is to assist hospital operators and clinical process managers in detecting discrepancies between defined and actual clinical processes, as well as in identifying the main causes of high medical costs. Beyond that, the system can potentially contribute to reconstructing and improving clinical processes and to enhancing cost control and patient care quality.
The Economics of Cognitive Impairment: Volunteering and Cognitive Function in the HILDA Survey.
Hosking, Diane E; Anstey, Kaarin J
2016-01-01
The economic impact of older-age cognitive impairment has been estimated primarily by the direct and indirect costs associated with dementia care. Other potential costs associated with milder cognitive impairment in the community have received little attention. To quantify the cost of nonclinical cognitive impairment in a large population-based sample in order to more fully inform cost-effectiveness evaluations of interventions to maintain cognitive health. Volunteering by seniors has economic value but those with lower cognitive function may contribute fewer hours. Relations between hours volunteering and cognitive impairment were assessed using the Household, Income and Labour Dynamics in Australia (HILDA) survey data. These findings were extrapolated to the Australian population to estimate one potential cost attributable to nonclinical cognitive impairment. In those aged ≥60 years in HILDA (n = 3,127), conservatively defined cognitive impairment was present in 3.8% of the sample. Impairment was defined by performance ≥1 standard deviation below the age- and education-adjusted mean on both the Symbol Digit Modalities Test and Backwards Digit Span test. In fully adjusted binomial regression models, impairment was associated with the probability of undertaking 1 h 9 min less volunteering a week compared to being nonimpaired (β = -1.15, 95% confidence interval -1.82 to -0.47, p = 0.001). In the population, 3.8% impairment equated to probable loss of AUD 302,307,969 per annum estimated by hours of volunteering valued by replacement cost. Nonclinical cognitive impairment in older age impacts upon the nonmonetary economy via probable loss of volunteering contribution. Valuing loss of contribution provides additional information for cost-effectiveness evaluations of research and action directed toward maintaining older-age cognitive functioning. © 2016 S. Karger AG, Basel.
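The structure of the headline figure is a back-of-envelope product. The population count and replacement wage below are placeholders (the abstract gives only the 3.8% rate and the 1 h 9 min weekly deficit), so the output is illustrative, not the paper's AUD 302 million.

```python
# annual loss ~= seniors x impairment rate x weekly-hour deficit
#               x 52 weeks x replacement cost per volunteer hour
seniors_60_plus = 4_500_000   # assumed population aged >= 60
impairment_rate = 0.038       # 3.8%, from the abstract
hours_per_week = 1.15         # 1 h 9 min, from the abstract
replacement_wage = 30.0       # assumed AUD per volunteer hour
loss = (seniors_60_plus * impairment_rate * hours_per_week
        * 52 * replacement_wage)
print(f"AUD {loss:,.0f} per annum")
```

The estimate scales linearly in each input, so the choice of replacement wage dominates the uncertainty.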
Thaker, Nikhil G.; Pugh, Thomas J.; Mahmood, Usama; Choi, Seungtaek; Spinks, Tracy E.; Martin, Neil E.; Sio, Terence T.; Kudchadker, Rajat J.; Kaplan, Robert S.; Kuban, Deborah A.; Swanson, David A.; Orio, Peter F.; Zelefsky, Michael J.; Cox, Brett W.; Potters, Louis; Buchholz, Thomas A.; Feeley, Thomas W.; Frank, Steven J.
2017-01-01
PURPOSE Value, defined as outcomes over costs, has been proposed as a measure to evaluate prostate cancer (PCa) treatments. We analyzed standardized outcomes and time-driven activity-based costing (TDABC) for prostate brachytherapy (PBT) to define a value framework. METHODS AND MATERIALS Patients with low-risk PCa treated with low-dose rate PBT between 1998 and 2009 were included. Outcomes were recorded according to the International Consortium for Health Outcomes Measurement (ICHOM) standard set, which includes acute toxicity, patient-reported outcomes, and recurrence and survival outcomes. Patient-level costs to one year after PBT were collected using TDABC. Process mapping and radar chart analyses were conducted to visualize this value framework. RESULTS A total of 238 men were eligible for analysis. Median age was 64 (range, 46–81). Median follow-up was 5 years (0.5–12.1). There were no acute grade 3–5 complications. EPIC-50 scores were favorable, with no clinically significant changes from baseline to last follow-up at 48 months for urinary incontinence/bother, bowel bother, sexual function, and vitality. Ten-year outcomes were favorable, including biochemical failure-free survival of 84.1%, metastasis-free survival 99.6%, PCa-specific survival 100%, and overall survival 88.6%. TDABC analysis demonstrated low resource utilization for PBT, with 41% and 10% of costs occurring in the operating room and with the MRI scan, respectively. The radar chart allowed direct visualization of outcomes and costs. CONCLUSIONS We successfully created a visual framework to define the value of PBT using the ICHOM standard set and TDABC costs. PBT is associated with excellent outcomes and low costs. Widespread adoption of this methodology will enable value comparisons across providers, institutions, and treatment modalities. PMID:26916105
Performances of One-Round Walks in Linear Congestion Games
NASA Astrophysics Data System (ADS)
Bilò, Vittorio; Fanelli, Angelo; Flammini, Michele; Moscardelli, Luca
We investigate the approximation ratio of the solutions achieved after a one-round walk in linear congestion games. We consider the social functions Sum, defined as the sum of the players' costs, and Max, defined as the maximum cost per player, as measures of the quality of a given solution. For the social function Sum and one-round walks starting from the empty strategy profile, we close the gap between the upper bound of 2+√5 ≈ 4.24 given in [8] and the lower bound of 4 derived in [4] by providing a matching lower bound whose construction and analysis require non-trivial arguments. For the social function Max, for which, to the best of our knowledge, no results were known prior to this work, we show an approximation ratio of Θ(n^{3/4}) (resp. Θ(n√n)), where n is the number of players, for one-round walks starting from the empty (resp. an arbitrary) strategy profile.
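A one-round walk from the empty profile is easy to simulate on a tiny linear congestion game. The two-resource game below is our own example for illustrating the Sum social function, not one from the paper.

```python
from itertools import product

# Each resource e has latency a[e] * (load on e); a player's cost is the
# summed latency over her chosen resources; Sum totals all players' costs.
a = {"e1": 1.0, "e2": 2.0}
strategies = (("e1",), ("e2",))   # each player picks one resource
n_players = 3

def social_sum(profile):
    load = {}
    for s in profile:
        for e in s:
            load[e] = load.get(e, 0) + 1
    return sum(a[e] * load[e] for s in profile for e in s)

def player_cost(s, load):
    """Latency the entering player experiences after joining."""
    return sum(a[e] * (load.get(e, 0) + 1) for e in s)

# One-round walk from the empty profile: players enter one at a time
# and best-respond to the loads created by their predecessors.
load, profile = {}, []
for _ in range(n_players):
    best = min(strategies, key=lambda s: player_cost(s, load))
    for e in best:
        load[e] = load.get(e, 0) + 1
    profile.append(best)

opt = min(social_sum(p) for p in product(strategies, repeat=n_players))
print(social_sum(profile), opt)
```

On this instance the walk happens to reach an optimal profile; the paper's lower-bound constructions are exactly the instances where the walk's Sum is forced a constant factor above the optimum.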
ERIC Educational Resources Information Center
Russell, Alene
2010-01-01
Outsourcing--defined as an "institution's decision to contract with an external organization to provide a traditional function or service" (IHEP, 2005)--is nothing new to higher education. For decades, institutions have been "contracting out" or "privatizing" a variety of operational functions, hoping to reduce costs,…
Gajana Bhat; John Bergsrom; R. Jeff Teasley
1998-01-01
This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characters. The individual travel cost method was employed to estimate recreation demand functions for activities such...
1985-07-01
environment in which it functions. A particular debt of gratitude in this respect is owed to my preceptor, Colonel James Helgeson, for his enthusiasm...patient conditions might run from heart attacks, which often receive intensive care and extensive follow-up support, to episodes of acute respiratory...to include the following features: a. No disruption of existing IAS and UCA functions as defined by appropriate regulations b. Minimized additional
Liu, Derong; Wang, Ding; Wang, Fei-Yue; Li, Hongliang; Yang, Xiong
2014-12-01
In this paper, the infinite horizon optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems is investigated using neural-network-based online solution of Hamilton-Jacobi-Bellman (HJB) equation. By establishing an appropriate bounded function and defining a modified cost function, the optimal robust guaranteed cost control problem is transformed into an optimal control problem. It can be observed that the optimal cost function of the nominal system is nothing but the optimal guaranteed cost of the original uncertain system. A critic neural network is constructed to facilitate the solution of the modified HJB equation corresponding to the nominal system. More importantly, an additional stabilizing term is introduced for helping to verify the stability, which reinforces the updating process of the weight vector and reduces the requirement of an initial stabilizing control. The uniform ultimate boundedness of the closed-loop system is analyzed by using the Lyapunov approach as well. Two simulation examples are provided to verify the effectiveness of the present control approach.
1993-08-01
Military Health Services System (MHSS), continue to rise at an unacceptable rate. In an effort to curb rising costs, the Department of Defense has...Currently we spend $23,000 a second, more than $2 billion a day, and $733 billion a year on medical care (Castro, 1991). The cost of medical care in the...mandated that DoD pursue cost containment initiatives. Demonstration projects such as the Civilian Health and Medical Program of the Uniformed Services
USDA-ARS?s Scientific Manuscript database
Epigenetics has been defined as ‘the study of heritable changes in genome function that occur without a change in DNA sequence’. Research on nutrigenomics, the genome-nutrient interface and epigenomics is in its infancy with respect to livestock species. Feed costs are the single greatest expense t...
A low cost LST pointing control system
NASA Technical Reports Server (NTRS)
Glaese, J. R.; Kennel, H. F.; Nurre, G. S.; Seltzer, S. M.; Shelton, H. L.
1975-01-01
Vigorous efforts to reduce costs, coupled with changes in LST guidelines, took place in the Fall of 1974. These events made a new design of the LST and its Pointing and Attitude Control System possible. The major design changes are summarized as: an annular Support Systems Module; removal of image motion compensation; reaction wheels instead of CMG's; a magnetic torquer system to also perform the emergency and backup functions, eliminating the previously required mass expulsion system. Preliminary analysis indicates the Low Cost LST concept can meet the newly defined requirements and results in a significantly reduced development cost.
Refurbishment cost study of the thermal protection system of a space shuttle vehicle, phase 2
NASA Technical Reports Server (NTRS)
Haas, D. W.
1972-01-01
The labor costs and techniques associated with the refurbishment and maintenance of representative thermal protection system (TPS) components and their attachment concepts suitable for space shuttle application are defined, characterized, and evaluated from the results of an experimental test program. This program consisted of designing selected TPS concepts, fabricating and assembling test hardware, and performing a time and motion study of specific maintenance functions of the test hardware on a full-scale mockup. Labor requirements and refurbishment techniques, as they relate to the maintenance functions of inspection, repair, removal, and replacement, were identified.
Developing integrated parametric planning models for budgeting and managing complex projects
NASA Technical Reports Server (NTRS)
Etnyre, Vance A.; Black, Ken U.
1988-01-01
The applicability of integrated parametric models for the budgeting and management of complex projects is investigated. Methods for building a very flexible, interactive prototype for a project planning system, and software resources available for this purpose, are discussed and evaluated. The prototype is required to be sensitive to changing objectives, changing target dates, changing cost relationships, and changing budget constraints. To achieve the integration of costs and project and task durations, parametric cost functions are defined by a process of trapezoidal segmentation, where the total cost for the project is the sum of the various project cost segments, and each project cost segment is the integral of a linearly segmented cost loading function over a specific interval. The cost can thus be expressed algebraically. The prototype was designed using Lotus-123 as the primary software tool. This prototype implements a methodology for interactive project scheduling that provides a model of a system that meets most of the goals for the first phase of the study and some of the goals for the second phase.
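The trapezoidal-segmentation idea described above can be sketched numerically: each segment's cost is the integral of a linearly varying loading rate over its interval, i.e. the area of a trapezoid, and the project total is the sum of the segments. The loading profile and numbers below are illustrative, not taken from the prototype.

```python
def segment_cost(rate_start, rate_end, t_start, t_end):
    """Integral of a linearly varying cost-loading rate over [t_start, t_end]:
    the area of a trapezoid (average rate times duration)."""
    return 0.5 * (rate_start + rate_end) * (t_end - t_start)

def project_cost(segments):
    """Total project cost is the sum of the segment integrals."""
    return sum(segment_cost(*seg) for seg in segments)

# Illustrative loading profile: ramp-up, plateau, ramp-down ($k/month).
segments = [
    (0.0, 10.0, 0.0, 3.0),   # ramp-up over 3 months
    (10.0, 10.0, 3.0, 9.0),  # constant loading for 6 months
    (10.0, 0.0, 9.0, 12.0),  # ramp-down over 3 months
]
print(project_cost(segments))  # 15.0 + 60.0 + 15.0 = 90.0 ($k)
```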
Logistics as a Competitive Warfighting Advantage
2016-10-20
World Class Business Practices DoD Application 1 Focused on Core Functions Define and focus on core functions; Divest other...primes and lower cost timelines DLA Leadership Model – Aligning DLA leadership to business standards Defense Working Capital Fund - DLA...DLA’s leadership should incorporate private business sector structures – Leadership incentive structures are not reflective of private business best
Thaker, Nikhil G; Pugh, Thomas J; Mahmood, Usama; Choi, Seungtaek; Spinks, Tracy E; Martin, Neil E; Sio, Terence T; Kudchadker, Rajat J; Kaplan, Robert S; Kuban, Deborah A; Swanson, David A; Orio, Peter F; Zelefsky, Michael J; Cox, Brett W; Potters, Louis; Buchholz, Thomas A; Feeley, Thomas W; Frank, Steven J
2016-01-01
Value, defined as outcomes over costs, has been proposed as a measure to evaluate prostate cancer (PCa) treatments. We analyzed standardized outcomes and time-driven activity-based costing (TDABC) for prostate brachytherapy (PBT) to define a value framework. Patients with low-risk PCa treated with low-dose-rate PBT between 1998 and 2009 were included. Outcomes were recorded according to the International Consortium for Health Outcomes Measurement standard set, which includes acute toxicity, patient-reported outcomes, and recurrence and survival outcomes. Patient-level costs to 1 year after PBT were collected using TDABC. Process mapping and radar chart analyses were conducted to visualize this value framework. A total of 238 men were eligible for analysis. Median age was 64 (range, 46-81). Median followup was 5 years (0.5-12.1). There were no acute Grade 3-5 complications. Expanded Prostate Cancer Index Composite 50 scores were favorable, with no clinically significant changes from baseline to last followup at 48 months for urinary incontinence/bother, bowel bother, sexual function, and vitality. Ten-year outcomes were favorable, including biochemical failure-free survival of 84.1%, metastasis-free survival 99.6%, PCa-specific survival 100%, and overall survival 88.6%. TDABC analysis demonstrated low resource utilization for PBT, with 41% and 10% of costs occurring in the operating room and with the MRI scan, respectively. The radar chart allowed direct visualization of outcomes and costs. We successfully created a visual framework to define the value of PBT using the International Consortium for Health Outcomes Measurement standard set and TDABC costs. PBT is associated with excellent outcomes and low costs. Widespread adoption of this methodology will enable value comparisons across providers, institutions, and treatment modalities. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Jones, Roy W; Lebrec, Jeremie; Kahle-Wrobleski, Kristin; Dell'Agnello, Grazia; Bruno, Giuseppe; Vellas, Bruno; Argimon, Josep M; Dodel, Richard; Haro, Josep Maria; Wimo, Anders; Reed, Catherine
2017-01-01
We assessed whether cognitive and functional decline in community-dwelling patients with mild Alzheimer disease (AD) dementia were associated with increased societal costs and caregiver burden and time outcomes. Cognitive decline was defined as a ≥3-point reduction in the Mini-Mental State Examination and functional decline as a decrease in the ability to perform one or more basic items of the Alzheimer's Disease Cooperative Study Activities of Daily Living Inventory (ADCS-ADL) or ≥20% of instrumental ADL items. Total societal costs were estimated from resource use and caregiver hours using 2010 costs. Caregiver burden was assessed using the Zarit Burden Interview (ZBI); caregiver supervision and total hours were collected. Of 566 patients with mild AD enrolled in the GERAS study, 494 were suitable for the current analysis. Mean monthly total societal costs were greater for patients showing functional (+61%) or cognitive decline (+27%) compared with those without decline. In relation to a typical mean monthly cost of approximately EUR 1,400 at baseline, this translated into increases over 18 months to EUR 2,254 and 1,778 for patients with functional and cognitive decline, respectively. The number of patients requiring supervision doubled among patients showing functional or cognitive decline compared with those not showing decline, while caregiver total time increased by 70 and 33%, respectively and ZBI total score by 5.3 and 3.4 points, respectively. Cognitive and, more notably, functional decline were associated with increases in costs and caregiver outcomes in patients with mild AD dementia.
Hurley, M V; Walsh, N E; Mitchell, H; Nicholas, J; Patel, A
2012-02-01
Chronic joint pain is a major cause of pain and disability. Exercise and self-management have short-term benefits, but few studies follow participants for more than 6 months. We investigated the long-term (up to 30 months) clinical and cost effectiveness of a rehabilitation program combining self-management and exercise: Enabling Self-Management and Coping of Arthritic Knee Pain Through Exercise (ESCAPE-knee pain). In this pragmatic, cluster randomized, controlled trial, 418 people with chronic knee pain (recruited from 54 primary care surgeries) were randomized to usual care (pragmatic control) or the ESCAPE-knee pain program. The primary outcome was physical function (Western Ontario and McMaster Universities Osteoarthritis Index [WOMAC] function), with a clinically meaningful improvement in physical function defined as a ≥15% change from baseline. Secondary outcomes included pain, psychosocial and physiologic variables, costs, and cost effectiveness. Compared to usual care, ESCAPE-knee pain participants had large initial improvements in function (mean difference in WOMAC function -5.5; 95% confidence interval [95% CI] -7.8, -3.2). These improvements declined over time, but 30 months after completing the program, ESCAPE-knee pain participants still had better physical function (difference in WOMAC function -2.8; 95% CI -5.3, -0.2); lower community-based health care costs (£-47; 95% CI £-94, £-7), medication costs (£-16; 95% CI £-29, £-3), and total health and social care costs (£-1,118; 95% CI £-2,566, £-221); and a high probability (80-100%) of being cost effective. Clinical and cost benefits of ESCAPE-knee pain were still evident 30 months after completing the program. ESCAPE-knee pain is a more effective and efficient model of care that could substantially improve the health, well-being, and independence of many people, while reducing health care costs. Copyright © 2012 by the American College of Rheumatology.
The role of cost and response-efficacy in persuasiveness of health recommendations.
Cismaru, Magdalena; Nagpal, Anish; Krishnamurthy, Parthasarathy
2009-01-01
The persuasiveness of a health recommendation, among other things, is a function of the cost of engaging in the recommended behavior--such as money, time, effort, and discomfort--and the response-efficacy, defined as the likelihood that adherence to the recommendation would lead to the desired goal. This research investigates how cost and response-efficacy combine when influencing persuasion. Several theories of health behavior view cost and response-efficacy as having independent effects on persuasion, that is, a weighted additive impact. This research posits, and finds empirical support for the idea that cost and efficacy combine in a multiplicative fashion to influence persuasion, and suggests a structural modification to the traditional models of the relationship between cost, response-efficacy, and persuasion.
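The contrast between the additive and multiplicative views can be made concrete with a toy model. Everything here is a hypothetical sketch: cost and efficacy are normalized to [0, 1], and the functional forms merely stand in for the structural modification the authors propose.

```python
def additive(cost, efficacy, w_cost=-1.0, w_eff=1.0):
    """Weighted additive model: cost and response-efficacy contribute
    independently to persuasion."""
    return w_cost * cost + w_eff * efficacy

def multiplicative(cost, efficacy):
    """Multiplicative model: cost discounts the impact of efficacy,
    so the two factors interact rather than merely offset each other."""
    return efficacy * (1.0 - cost)

# With zero response-efficacy, the additive model still responds to cost,
# whereas the multiplicative model predicts no persuasion at all.
print(additive(0.2, 0.0))        # -0.2
print(multiplicative(0.2, 0.0))  # 0.0
```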
1979-12-01
used to reduce costs). The orbital data from the prototype ion composition telescope will not only be of great scientific interest - providing for...active device whose transfer function may be almost arbitrarily defined, and cost and production trends permit contemplation of networks containing...developing solid-state television camera systems based on CCD imagers. RCA hopes to produce a $500 color camera for consumer use. Fairchild and Texas
Langley applications experiments data management system study. [for space shuttles
NASA Technical Reports Server (NTRS)
Lanham, C. C., Jr.
1975-01-01
A data management system study is presented that defines, in functional terms, the most cost effective ground data management system to support Advanced Technology Laboratory (ATL) flights of the space shuttle. Results from each subtask performed and the recommended system configuration for reformatting the experiment instrumentation tapes to computer compatible tape are examined. Included are cost factors for development of a mini control center for real-time support of the ATL flights.
A new procedure to analyze the effect of air changes in building energy consumption
2014-01-01
Background Today, the International Energy Agency is working under good practice guides that integrate appropriate and cost effective technologies. In this paper a new procedure to define building energy consumption in accordance with the ISO 13790 standard was performed and tested based on real data from a Spanish region. Results Results showed that the effect of air changes on building energy consumption can be defined using the Weibull peak function model. Furthermore, the effect of climate change on building energy consumption under several different air changes was nearly nil during the summer season. Conclusions The procedure obtained could be the much sought-after solution to the problem stated by researchers in the past and future research works relating to this new methodology could help us define the optimal improvement in real buildings to reduce energy consumption, and its related carbon dioxide emissions, at minimal economical cost. PMID:24456655
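The "Weibull peak function" mentioned in the results can be illustrated with a generic Weibull-shaped curve; the amplitude, shape, and scale values below are placeholders, since the paper's fitted coefficients are not given here.

```python
import math

def weibull_peak(x, a, k, lam):
    """Generic Weibull-shaped peak with amplitude a, shape k, scale lam:
    rises to a single maximum and decays. (Illustrative form only; the
    paper's fitted model coefficients are not reproduced here.)"""
    if x <= 0:
        return 0.0
    return a * (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

# Hypothetical energy-consumption response to air changes per hour (ACH).
for ach in (0.5, 1.0, 2.0, 4.0):
    print(f"{ach} ACH -> {weibull_peak(ach, a=100.0, k=2.0, lam=1.5):.1f}")
```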
Principles of light harvesting from single photosynthetic complexes.
Schlau-Cohen, G S
2015-06-06
Photosynthetic systems harness sunlight to power most life on Earth. In the initial steps of photosynthetic light harvesting, absorbed energy is converted to chemical energy with near-unity quantum efficiency. This is achieved by an efficient, directional and regulated flow of energy through a network of proteins. Here, we discuss the following three key principles of this flow and of photosynthetic light harvesting: thermal fluctuations of the protein structure; intrinsic conformational switches with defined functional consequences; and environmentally triggered conformational switches. Through these principles, photosynthetic systems balance two types of operational costs: metabolic costs, or the cost of maintaining and running the molecular machinery, and opportunity costs, or the cost of losing any operational time. Understanding how the molecular machinery and dynamics are designed to balance these costs may provide a blueprint for improved artificial light-harvesting devices. With a multi-disciplinary approach combining knowledge of biology, this blueprint could lead to low-cost and more effective solar energy conversion. Photosynthetic systems achieve widespread light harvesting across the Earth's surface; in the face of our growing energy needs, this is functionality we need to replicate, and perhaps emulate.
Aghdasi, Nava; Whipple, Mark; Humphreys, Ian M; Moe, Kris S; Hannaford, Blake; Bly, Randall A
2018-06-01
Successful multidisciplinary treatment of skull base pathology requires precise preoperative planning. Current surgical approach (pathway) selection for these complex procedures depends on an individual surgeon's experiences and background training. Because of anatomical variation in both normal tissue and pathology (eg, tumor), a successful surgical pathway used on one patient is not necessarily the best approach on another patient. The question is how to define and obtain optimized patient-specific surgical approach pathways? In this article, we demonstrate that the surgeon's knowledge and decision making in preoperative planning can be modeled by a multiobjective cost function in a retrospective analysis of actual complex skull base cases. Two different approaches- weighted-sum approach and Pareto optimality-were used with a defined cost function to derive optimized surgical pathways based on preoperative computed tomography (CT) scans and manually designated pathology. With the first method, surgeon's preferences were input as a set of weights for each objective before the search. In the second approach, the surgeon's preferences were used to select a surgical pathway from the computed Pareto optimal set. Using preoperative CT and magnetic resonance imaging, the patient-specific surgical pathways derived by these methods were similar (85% agreement) to the actual approaches performed on patients. In one case where the actual surgical approach was different, revision surgery was required and was performed utilizing the computationally derived approach pathway.
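The two selection strategies can be sketched on a toy set of candidate pathways. The objectives and cost values below are hypothetical; they only illustrate the weighted-sum scalarization and the Pareto-dominance filter described in the abstract.

```python
def weighted_sum(costs, weights):
    """Scalarize a multiobjective cost vector with surgeon-supplied weights."""
    return sum(w * c for w, c in zip(weights, costs))

def pareto_front(candidates):
    """Keep the candidates not dominated (no worse in every objective and
    strictly better in at least one) by any other candidate."""
    def dominates(b, a):
        return (all(bi <= ai for ai, bi in zip(a, b))
                and any(bi < ai for ai, bi in zip(a, b)))
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Hypothetical pathway costs: (tissue disruption, distance to target, risk).
pathways = [(3.0, 2.0, 1.0), (1.0, 4.0, 2.0), (2.0, 2.5, 3.0), (4.0, 5.0, 4.0)]
front = pareto_front(pathways)          # dominated (4.0, 5.0, 4.0) drops out
best = min(pathways, key=lambda c: weighted_sum(c, (0.5, 0.3, 0.2)))
print(front)
print(best)  # (1.0, 4.0, 2.0)
```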
Assessing hail risk for a building portfolio by generating stochastic events
NASA Astrophysics Data System (ADS)
Nicolet, Pierrick; Choffet, Marc; Demierre, Jonathan; Imhof, Markus; Jaboyedoff, Michel; Nguyen, Liliane; Voumard, Jérémie
2015-04-01
Among the natural hazards affecting buildings, hail is one of the most costly and is nowadays a major concern for building insurance companies. In Switzerland, several costly events have been reported in recent years, among them the July 2011 event, which cost the Aargauer public insurance company (north-western Switzerland) around 125 million EUR. This study presents the new developments in a stochastic model which aims at evaluating the risk for a building portfolio. Thanks to insurance and meteorological radar data of the 2011 Aargauer event, vulnerability curves are proposed by comparing the damage rate to the radar intensity (i.e. the maximum hailstone size reached during the event, deduced from the radar signal). From these data, vulnerability is defined by a two-step process. The first step defines the probability for a building to be affected (i.e. to claim damages), while the second, if the building is affected, attributes a damage rate to the building from a probability distribution specific to the intensity class. To assess the risk, stochastic events are then generated by summing a set of Gaussian functions with 6 random parameters (X and Y location, maximum hailstone size, standard deviation, eccentricity and orientation). The location of these functions is constrained by a general event shape and by the position of the previously defined functions of the same event. For each generated event, the total cost is calculated in order to obtain a distribution of event costs. The general event parameters (shape, size, …) as well as the distribution of the Gaussian parameters are inferred from two radar intensity maps, namely the one of the aforementioned event, and a second from an event which occurred in 2009. After a large number of simulations, the hailstone size distribution obtained in different regions is compared to the distribution inferred from pre-existing hazard maps, built from a larger set of radar data.
The simulation parameters are then adjusted by trial and error in order to best reproduce the expected distributions. The value of the mean annual risk obtained using the model is also compared to the mean annual risk calculated directly from the hazard maps. According to the first results, the return period of an event inducing a total damage cost equal to or greater than 125 million EUR for the Aargauer insurance company would be around 10 to 40 years.
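A minimal sketch of the event generator described above: each convective cell is an anisotropic 2D Gaussian carrying the six random parameters listed in the abstract, and the intensity field combines the cells. The parameter ranges and the summing rule are assumptions for illustration, not the calibrated model.

```python
import math
import random

def gaussian_cell(x, y, cx, cy, amp, sigma, ecc, theta):
    """Anisotropic 2D Gaussian: the six parameters mirror those in the
    abstract (location cx, cy; maximum hailstone size amp; standard
    deviation sigma; eccentricity ecc; orientation theta)."""
    dx, dy = x - cx, y - cy
    u = dx * math.cos(theta) + dy * math.sin(theta)
    v = -dx * math.sin(theta) + dy * math.cos(theta)
    return amp * math.exp(-0.5 * ((u / sigma) ** 2 + (v / (sigma * ecc)) ** 2))

def hailstone_size(x, y, cells):
    # Summing the cell contributions is one simple aggregation rule;
    # the actual model's rule may differ.
    return sum(gaussian_cell(x, y, *c) for c in cells)

random.seed(0)
cells = [
    (random.uniform(0, 50), random.uniform(0, 50),  # location (km)
     random.uniform(1, 5),                          # max hailstone size (cm)
     random.uniform(2, 8),                          # std deviation (km)
     random.uniform(0.3, 1.0),                      # eccentricity
     random.uniform(0, math.pi))                    # orientation (rad)
    for _ in range(3)
]
print(hailstone_size(25.0, 25.0, cells))
```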
NASA Astrophysics Data System (ADS)
Senkerik, Roman; Zelinka, Ivan; Davendra, Donald; Oplatkova, Zuzana
2010-06-01
This research deals with the optimization of the control of chaos by means of evolutionary algorithms. This work explains how to use evolutionary algorithms (EAs) and how to properly define an advanced targeting cost function (CF) that secures very fast and precise stabilization of the desired state for any initial conditions. As a model of a deterministic chaotic system, the one-dimensional logistic equation was used. The evolutionary algorithm Self-Organizing Migrating Algorithm (SOMA) was used in four versions. For each version, repeated simulations were conducted to outline the effectiveness and robustness of the method used and of the targeting CF.
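A simple stand-in for a targeting cost function can be written for the logistic map: run the controlled trajectory and accumulate its distance from the desired state. The proportional control law and the CF form below are hypothetical simplifications, not the paper's evolved CF; k plays the role of the parameter an EA would tune.

```python
def logistic(x, r=3.8):
    """One step of the logistic map in its chaotic regime."""
    return r * x * (1.0 - x)

def targeting_cf(k, x0, target, steps=50):
    """Toy targeting cost function: sum of deviations from the desired
    state under a simple proportional control with gain k."""
    x = x0
    cost = 0.0
    for _ in range(steps):
        x = logistic(x) - k * (logistic(x) - target)
        cost += abs(x - target)
    return cost

# A gain of 1.0 pins the trajectory to the target immediately (zero cost),
# while the uncontrolled map (k = 0) wanders chaotically.
print(targeting_cf(1.0, 0.3, 0.5))        # 0.0
print(targeting_cf(0.0, 0.3, 0.5) > 0.0)  # True
```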
Structural Tailoring of Advanced Turboprops (STAT). Theoretical manual
NASA Technical Reports Server (NTRS)
Brown, K. W.
1992-01-01
This manual describes the theories in the Structural Tailoring of Advanced Turboprops (STAT) computer program, which was developed to perform numerical optimizations on highly swept propfan blades. The optimization procedure seeks to minimize an objective function, defined as either direct operating cost or aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. The STAT analyses include an aerodynamic efficiency evaluation, a finite element stress and vibration analysis, an acoustic analysis, a flutter analysis, and a once-per-revolution (1-p) forced response life prediction capability. The STAT constraints include blade stresses, blade resonances, flutter, tip displacements, and a 1-P forced response life fraction. The STAT variables include all blade internal and external geometry parameters needed to define a composite material blade. The STAT objective function is dependent upon a blade baseline definition which the user supplies to describe a current blade design for cost optimization or for the tailoring of an aeroelastic scale model.
Orbit Clustering Based on Transfer Cost
NASA Technical Reports Server (NTRS)
Gustafson, Eric D.; Arrieta-Camacho, Juan J.; Petropoulos, Anastassios E.
2013-01-01
We propose using cluster analysis to perform quick screening for combinatorial global optimization problems. The key missing component currently preventing cluster analysis from use in this context is the lack of a useable metric function that defines the cost to transfer between two orbits. We study several proposed metrics and clustering algorithms, including k-means and the expectation maximization algorithm. We also show that proven heuristic methods such as the Q-law can be modified to work with cluster analysis.
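The idea of clustering orbits by a transfer-cost metric can be sketched as follows. The orbit features, the weighted metric, and the toy data are assumptions for illustration; the paper studies several candidate metrics, including Q-law-based heuristics.

```python
import random

def transfer_metric(o1, o2, w=(1.0, 10.0, 5.0)):
    """Hypothetical proxy for orbit-transfer cost: a weighted Euclidean
    distance over (semi-major axis in AU, eccentricity, inclination in rad),
    standing in for the candidate metrics studied in the paper."""
    return sum(wi * (a - b) ** 2 for wi, a, b in zip(w, o1, o2)) ** 0.5

def kmeans(points, k, iters=20, seed=1):
    """Plain k-means; the centroid (mean) update is only exact for a
    Euclidean metric, a simplification of the general setting."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: transfer_metric(p, centers[j]))
            clusters[nearest].append(p)
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

orbits = [(1.0, 0.02, 0.00), (1.1, 0.05, 0.02),   # near-Earth-like group
          (5.2, 0.05, 0.02), (5.3, 0.10, 0.05)]   # Jupiter-like group
centers, clusters = kmeans(orbits, k=2)
print(sorted(len(c) for c in clusters))  # [2, 2]
```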
Alternate avionics system study and phase B extension
NASA Technical Reports Server (NTRS)
1971-01-01
Results of alternate avionics system studies for the space shuttle are presented that reduce the cost of vehicle avionics without incurring major offsetting costs on the ground. A comprehensive summary is provided of all configurations defined since the completion of the basic Phase B contract, and a complete description of the optimized avionics baseline is given. In the new baseline, inflight redundancy management is performed onboard without ground support, and the utilization of off-the-shelf hardware yields a cost figure substantially lower than that of the Phase B baseline. The only functional capability sacrificed in the new approach is automatic landing.
Reliability and Validity in Hospital Case-Mix Measurement
Pettengill, Julian; Vertrees, James
1982-01-01
There is widespread interest in the development of a measure of hospital output. This paper describes the problem of measuring the expected cost of the mix of inpatient cases treated in a hospital (hospital case-mix) and a general approach to its solution. The solution is based on a set of homogeneous groups of patients, defined by a patient classification system, and a set of estimated relative cost weights corresponding to the patient categories. This approach is applied to develop a summary measure of the expected relative costliness of the mix of Medicare patients treated in 5,576 participating hospitals. The Medicare case-mix index is evaluated by estimating a hospital average cost function. This provides a direct test of the hypothesis that the relationship between Medicare case-mix and Medicare cost per case is proportional. The cost function analysis also provides a means of simulating the effects of classification error on our estimate of this relationship. Our results indicate that this general approach to measuring hospital case-mix provides a valid and robust measure of the expected cost of a hospital's case-mix. PMID:10309909
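The case-mix index construction described above reduces to a case-weighted average of relative cost weights, so an index of 1.0 means a mix of average expected cost per case. The DRG labels and weights below are hypothetical, not the paper's Medicare estimates.

```python
def case_mix_index(case_counts, cost_weights):
    """Expected relative costliness of a hospital's mix of cases: the
    case-weighted average of the relative cost weights for each patient
    category. (Illustrative weights; the paper estimates them from data.)"""
    total = sum(case_counts.values())
    return sum(n * cost_weights[g] for g, n in case_counts.items()) / total

cost_weights = {"DRG_A": 0.8, "DRG_B": 1.0, "DRG_C": 1.6}  # hypothetical
cases = {"DRG_A": 50, "DRG_B": 30, "DRG_C": 20}
print(round(case_mix_index(cases, cost_weights), 4))  # (40 + 30 + 32) / 100 = 1.02
```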
National law enforcement telecommunications network
NASA Technical Reports Server (NTRS)
Reilly, N. B.; Garrison, G. W.; Sohn, R. L.; Gallop, D. L.; Goldstein, B. L.
1975-01-01
Alternative approaches are analyzed to a National Law Enforcement Telecommunications Network (NALECOM) designed to service all state-to-state and state-to-national criminal justice communications traffic needs in the United States. Network topology options were analyzed, and equipment and personnel requirements for each option were defined in accordance with NALECOM functional specifications and design guidelines. Evaluation criteria were developed and applied to each of the options leading to specific conclusions. Detailed treatments of methods for determining traffic requirements, communication line costs, switcher configurations and costs, microwave costs, satellite system configurations and costs, facilities, operations and engineering costs, network delay analysis and network availability analysis are presented. It is concluded that a single regional switcher configuration is the optimum choice based on cost and technical factors. A two-region configuration is competitive. Multiple-region configurations are less competitive due to increasing costs without attending benefits.
NASA Technical Reports Server (NTRS)
Devito, D. M.
1981-01-01
A low-cost GPS civil-user mobile terminal whose purchase cost is roughly an order of magnitude less than estimates for the military counterpart is considered, with focus on ground station requirements for position monitoring of civil users requiring this capability and on civil-user navigation and location-monitoring requirements. Existing survey literature was examined to ascertain the potential users of a low-cost NAVSTAR receiver and to estimate their number, function, and accuracy requirements. System concepts are defined for low-cost user equipment for in-situ navigation and the retransmission of low-data-rate positioning data via a geostationary satellite to a central computing facility.
Chuang, Kenneth H; Covinsky, Kenneth E; Sands, Laura P; Fortinsky, Richard H; Palmer, Robert M; Landefeld, C Seth
2003-12-01
To determine whether hospital costs are higher in patients with lower functional status at admission, defined as dependence in one or more activities of daily living (ADLs), after adjustment for Medicare Diagnosis-Related Group (DRG) payments. Prospective study. General medical service at a teaching hospital. One thousand six hundred twelve patients aged 70 and older. The hospital cost of care for each patient was determined using a cost management information system, which allocates all hospital costs to individual patients. Hospital costs were higher in patients dependent in ADLs on admission than in patients independent in ADLs on admission ($5,300 vs $4,060, P<.01). Mean hospital costs remained higher in ADL-dependent patients than in ADL-independent patients in an analysis that adjusted for DRG weight ($5,240 vs $4,140, P<.01), and in multivariate analyses adjusting for age, race, sex, Charlson comorbidity score, acute physiology and chronic health evaluation score, and admission from a nursing home as well as for DRG weight ($5,200 vs $4,220, P<.01). This difference represents a 23% (95% confidence interval=15-32%) higher cost to take care of older dependent patients. Hospital cost is higher in patients with worse ADL function, even after adjusting for DRG payments. If this finding is true in other hospitals, DRG-based payments provide hospitals a financial incentive to avoid patients dependent in ADLs and disadvantage hospitals with more patients dependent in ADLs.
Luciano, Juan V; Forero, Carlos G; Cerdà-Lafont, Marta; Peñarrubia-María, María Teresa; Fernández-Vergel, Rita; Cuesta-Vargas, Antonio I; Ruíz, José M; Rozadilla-Sacanell, Antoni; Sirvent-Alierta, Elena; Santo-Panero, Pilar; García-Campayo, Javier; Serrano-Blanco, Antoni; Pérez-Aranda, Adrián; Rubio-Valera, María
2016-10-01
Although fibromyalgia syndrome (FM) is considered a heterogeneous condition, there is no generally accepted subgroup typology. We used hierarchical cluster analysis and latent profile analysis to replicate Giesecke's classification in Spanish FM patients. The second aim was to examine whether the subgroups differed in sociodemographic characteristics, functional status, quality of life, and in direct and indirect costs. A total of 160 FM patients completed the following measures for cluster derivation: the Center for Epidemiological Studies-Depression Scale, the Trait Anxiety Inventory, the Pain Catastrophizing Scale, and the Control over Pain subscale. Pain threshold was measured with a sphygmomanometer. In addition, the Fibromyalgia Impact Questionnaire-Revised, the EuroQoL-5D-3L, and the Client Service Receipt Inventory were administered for cluster validation. Two distinct clusters were identified using hierarchical cluster analysis ("hypersensitive" group, 69.8% and "functional" group, 30.2%). In contrast, the latent profile analysis goodness-of-fit indices supported the existence of 3 FM patient profiles: (1) a "functional" profile (28.1%) defined as moderate tenderness, distress, and pain catastrophizing; (2) a "dysfunctional" profile (45.6%) defined by elevated tenderness, distress, and pain catastrophizing; and (3) a "highly dysfunctional and distressed" profile (26.3%) characterized by elevated tenderness and extremely high distress and catastrophizing. We did not find significant differences in sociodemographic characteristics between the 2 clusters or among the 3 profiles. The functional profile was associated with less impairment, greater quality of life, and lower health care costs. We identified 3 distinct profiles which accounted for the heterogeneity of FM patients. Our findings might help to design tailored interventions for FM patients.
Papadopoulos, Anthony
2009-01-01
The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
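The abstract's central point — that the degree of the power-law polynomial should be treated as a parameter rather than fixed at one — can be illustrated with a small fit. The sketch below (hypothetical data and helper names, not the paper's estimation procedure) grid-searches the degree c of M(U) = a + bU^c, solving for a and b by linear least squares at each candidate degree:

```python
import numpy as np

def fit_power_law(speed, rate, degrees=np.linspace(1.0, 4.0, 301)):
    best = None
    for c in degrees:                       # candidate degrees of the polynomial
        X = np.column_stack([np.ones_like(speed), speed ** c])
        coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
        rss = np.sum((rate - X @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, coef[0], coef[1], c)
    _, a, b, c = best
    return a, b, c                          # standard metabolic rate, drag index, degree

# Synthetic swimmer with true degree 2.8 (invented numbers):
U = np.linspace(0.5, 3.0, 40)
M = 2.0 + 0.5 * U ** 2.8
a, b, c = fit_power_law(U, M)
```

Forcing c = 1 on the same data would bias the estimates of the standard metabolic rate a and the drag index b, which is exactly the comparison problem the paper raises.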
Market Mechanism Design for Renewable Energy based on Risk Theory
NASA Astrophysics Data System (ADS)
Yang, Wu; Bo, Wang; Jichun, Liu; Wenjiao, Zai; Pingliang, Zeng; Haobo, Shi
2018-02-01
Generation trading between renewable energy and thermal power is an efficient market mechanism for transforming the supply structure of electric power toward a sustainable development pattern. At present, however, such trading is hampered by the output fluctuations of renewable energy and by the cost differences between renewable energy and thermal power. In this paper, the external environmental cost (EEC) is defined and introduced into the generation cost. At the same time, incentive functions for renewable energy and low-emission thermal power are designed as decreasing functions of EEC. On this basis, for the market risks caused by the random variability of EEC, a decision-making model of generation trading between renewable energy and thermal power is constructed according to risk theory. The feasibility and effectiveness of the proposed model are verified by simulation results.
Programmers and Dissolve Controls for Multi-Image Presentations.
ERIC Educational Resources Information Center
Benedict, Joel A.; Fuller, Barry J.
For audiovisual personnel planning multi-image presentations, a programmer is suggested and its purpose and functions explained. Digital, frequency, and punched-tape programmers are defined and discussed, and approximate costs given. Methods of operation are described, and the possible tie-in of a dissolve unit is discussed. Equipment hookups are…
47 CFR 54.101 - Supported services for rural, insular and high cost areas.
Code of Federal Regulations, 2011 CFR
2011-10-01
...) Dual tone multi-frequency signaling or its functional equivalent. “Dual tone multi-frequency” (DTMF) is a method of signaling that facilitates the transportation of signaling through the network... systems; (6) Access to operator services. “Access to operator services” is defined as access to any...
NASA Technical Reports Server (NTRS)
1975-01-01
The results of the third and final phase of a study undertaken to define means of optimizing the Spacelab experiment data system by interactively manipulating the flow of data are presented. A number of payload-applicable interactive techniques and an integrated interaction system for each of two possible payloads are described. These interaction systems have been functionally defined and are accompanied by block diagrams, hardware specifications, software sizing and speed requirements, operational procedures, and cost/benefit analysis data for both onboard and ground-based system elements. It is shown that the accrued benefits are attributable to a reduction in data processing costs, obtained generally through a considerable reduction in the quantity of data that might otherwise be generated without interaction. An additional anticipated benefit is the increased scientific value obtained by the quicker return of all useful data.
Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor
2015-01-01
We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors that define distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors large interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns by applying linear transformations using the metric tensor yields a dataset which can be more efficiently classified. We test our methods using publicly available datasets, for some standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without the use of further preprocessing, our results improve on their performance.
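A minimal sketch of the underlying idea — reweight features so that within-class distances shrink relative to between-class distances — using a diagonal, Fisher-style metric. This is an illustrative stand-in, not the paper's analytically derived tensor; all names and data are hypothetical:

```python
import numpy as np

def relevance_weights(X, y):
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for cls in classes:
        Xc = X[y == cls]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2   # class-mean spread
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)     # intraclass scatter
    w = between / np.maximum(within, 1e-12)
    return w / w.sum()   # diagonal of a (normalized) metric tensor

# Feature 0 separates the classes; feature 1 is pure noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0], [0.3, 1.0], (50, 2)),
               rng.normal([3.0, 0.0], [0.3, 1.0], (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w = relevance_weights(X, y)
```

Scaling each feature by the square root of its weight before computing Euclidean distances gives the "preprocessing by linear transformation" step the abstract describes.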
NASA Technical Reports Server (NTRS)
Harney, A. G.; Raphael, L.; Warren, S.; Yakura, J. K.
1972-01-01
A systematic and standardized procedure is presented for estimating the life cycle costs of solid rocket motor (SRM) booster configurations. The model consists of clearly defined cost categories and appropriate cost equations in which cost is related to program and hardware parameters. Cost estimating relationships are generally based on analogous experience. In this model the experience drawn on is from estimates prepared by the study contractors. Contractors' estimates are derived by means of engineering estimates for some predetermined level of detail of the SRM hardware and program functions of the system life cycle. This method is frequently referred to as bottom-up. A parametric cost analysis is a useful technique when rapid estimates are required. This is particularly true during the planning stages of a system, when hardware designs and program definition are conceptual and constantly changing as the selection process, which includes cost comparisons or trade-offs, is performed. The use of cost estimating relationships also facilitates the performance of cost sensitivity studies, in which relative and comparable cost comparisons are significant.
Seniors, risk and rehabilitation: broadening our thinking.
Egan, Mary Y; Laliberte Rudman, Debbie; Ceci, Christine; Kessler, Dorothy; McGrath, Colleen; Gardner, Paula; King, Judy; Lanoix, Monique; Malhotra, Ravi
2017-06-01
Conceptualizations of risk in seniors' rehabilitation emphasize potential physical injury, functional independence and cost containment, shifting rehabilitation from other considerations essential to promoting a satisfying life. In a two-day multidisciplinary planning meeting we critically examined and discussed alternatives to dominant conceptualizations. Invitees reflected on conceptualizations of risk in stroke rehabilitation and low vision rehabilitation, identified and explored positive and negative implications and generated alternative perspectives to support rehabilitation approaches related to living a good life. Current risk conceptualizations help focus rehabilitation teamwork and make this work publicly recognizable and valued. However, they also lead to practice that is depersonalized, decontextualized and restrictive. Further research and practice development initiatives should include the voices of clinicians and seniors to more adequately support meaningful living, and foster safe spaces for seniors and clinicians to speak candidly, comprehensively and respectfully about risk. To ensure that seniors' rehabilitation targets a satisfying life as defined by seniors, increased focus on the environment and more explicit examination of how cost containment concerns are driving services are also necessary. This work reinforced current concerns about conceptualizations of risk in seniors' rehabilitation and generated ways forward that re-focus rehabilitation more on promoting a satisfying life. Implications for rehabilitation: In seniors' rehabilitation, considerations of risk focus on physical injury, functional dependence and cost containment. Focus on provider-defined risk of physical injury limits examination of patient goals and patients' histories of judging and dealing with risk. Focus on functional dependence and cost containment may lead to practice that is depersonalized and decontextualized.
Abandonment of ableist and ageist thinking and an explicit focus on person-centered definitions of risk and a satisfying life are recommended.
Study for analysis of benefit versus cost of low thrust propulsion system
NASA Technical Reports Server (NTRS)
Hamlyn, K. M.; Robertson, R. I.; Rose, L. J.
1983-01-01
The benefits and costs associated with placing large space systems (LSS) in operational orbits were investigated, and a flexible computer model for analyzing these benefits and costs was developed. A mission model for LSS was identified that included both NASA/Commercial and DOD missions. This model included a total of 68 STS launches for the NASA/Commercial missions and 202 launches for the DOD missions. The mission catalog was of sufficient depth to define the structure type, mass, and acceleration limits of each LSS. Conceptual primary propulsion stage (PPS) designs for orbital transfer were developed for the three low-thrust LO2/LH2 engines baselined for the study. The performance characteristics of each of these PPSs were compared to the LSS mission catalog to create a mission capture. The costs involved in placing the LSS in their operational orbits were identified. The two primary costs were the PPS and the STS launch; the cost of the LSS was not included, as it is not a function of PPS performance. The basic relationships and algorithms that could be used to describe the costs were established. The benefit criteria for the mission model were also defined. These included mission capture, reliability, technical risk, development time, and growth potential. Rating guidelines were established for each parameter. For flexibility, each parameter is assigned a weighting factor.
A planning model for the short-term management of cash.
Broyles, Robert W; Mattachione, Steven; Khaliq, Amir
2011-02-01
This paper develops a model that enables the health administrator to identify the balance that minimizes the projected cost of holding cash. Adopting the principles of mathematical expectation, the model estimates the expected total costs of adopting each of several strategies concerning the cash balance that the organization might maintain. Expected total costs consist of anticipated short costs, resulting from a potential shortage of funds, and long costs, associated with a potential surplus of funds and the opportunity cost of foregone investment income. Of importance to the model is the potential for the health service organization to realize a surplus of funds during periods characterized by a net cash disbursement. The paper also develops an interactive spreadsheet that enables the administrator to perform sensitivity analysis and examine the response of the desired or target cash balance to changes in the parameters that define the expected long and short cost functions.
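The expectation-based choice of a target balance can be sketched as follows, with invented probabilities and cost rates; `short_rate` and `long_rate` stand in for the paper's short and long cost functions:

```python
def expected_holding_cost(balance, scenarios, short_rate, long_rate):
    cost = 0.0
    for p, need in scenarios:        # p: probability of this net cash need
        if need > balance:
            cost += p * short_rate * (need - balance)   # shortage (short cost)
        else:
            cost += p * long_rate * (balance - need)    # idle surplus (long cost)
    return cost

# Invented distribution of net cash needs and cost rates:
scenarios = [(0.2, 50_000), (0.5, 100_000), (0.3, 150_000)]
candidates = range(0, 200_001, 10_000)
best = min(candidates, key=lambda b: expected_holding_cost(b, scenarios, 0.12, 0.04))
```

With these numbers the shortage penalty outweighs the opportunity cost, so the minimizing balance is the largest plausible need; the paper's spreadsheet amounts to varying these parameters and watching the target balance respond.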
Contracting for intensive care services.
Dorman, S
1996-01-01
Purchasers will increasingly expect clinical services in the NHS internal market to provide objective measures of their benefits and cost effectiveness in order to maintain or develop current funding levels. There is limited scientific evidence to demonstrate the clinical effectiveness of intensive care services in terms of mortality/morbidity. Intensive care is a high-cost service and studies of cost-effectiveness need to take account of case-mix variations, differences in admission and discharge policies, and other differences between units. Decisions over development or rationalisation of intensive care services should be based on proper outcome studies of well defined patient groups. The purchasing function itself requires development in order to support effective contracting.
Reliability, Risk and Cost Trade-Offs for Composite Designs
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1996-01-01
Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry, and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed, and the probabilistic sensitivity factors of the uncertain variables to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing the range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (design parameter) in the longitudinal direction.
NASA Astrophysics Data System (ADS)
Dharmaseelan, Anoop; Adistambha, Keyne D.
2015-05-01
Fuel accounts for about 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning flights on optimized routes, where routes are optimized by searching for the best connections based on a cost function defined by the airline. The most common algorithm used for route search is Dijkstra's, which produces a static result and takes a relatively long time. This paper experiments with a new route-search algorithm that combines the principles of simulated annealing and the genetic algorithm. The experimental route-search results presented are shown to be computationally fast and accurate compared with timings from the genetic algorithm. The new algorithm is well suited to the random routing feature that is highly sought by many regional operators.
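A toy version of annealed route search, assuming a layered airway network in which a route picks one waypoint per layer; the leg-cost function and all parameters are invented for illustration, and the genetic-algorithm component of the paper's hybrid is omitted:

```python
import math, random

def route_cost(route, leg_cost):
    return sum(leg_cost(i, a, b) for i, (a, b) in enumerate(zip(route, route[1:])))

def anneal_route(n_layers, options, leg_cost, steps=5000, t0=5.0, seed=1):
    rng = random.Random(seed)
    route = [rng.randrange(options) for _ in range(n_layers)]
    best = route[:]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9                       # linear cooling
        cand = route[:]
        cand[rng.randrange(n_layers)] = rng.randrange(options)   # perturb one waypoint
        d = route_cost(cand, leg_cost) - route_cost(route, leg_cost)
        if d < 0 or rng.random() < math.exp(-d / t):
            route = cand                      # accept downhill, sometimes uphill
        if route_cost(route, leg_cost) < route_cost(best, leg_cost):
            best = route[:]
    return best

# Hypothetical leg cost: waypoint indices act as congestion penalties.
leg = lambda i, a, b: a + b + 0.1 * i
best = anneal_route(n_layers=5, options=4, leg_cost=leg)
```

Unlike Dijkstra's algorithm, each run of the annealer can return a different good route, which is one way to read the abstract's "random routing feature".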
Zero Base Budgeting: A New Planning Tool for New Colleges.
ERIC Educational Resources Information Center
Adamson, Willie D.
Zero-base budgeting is presented as the functional alternative to the community college funding crisis which may be precipitated by passage in June 1978 of the Jarvis Amendment (Proposition 13) in California. Defined as the management of scarce resources on a cost/benefit basis to achieve pre-determined goals, zero-base budgeting emphasizes…
Management Information System for ESD Program Offices.
1978-03-01
Management Information System (MIS) functional requirements for the ESD Program Office are defined in terms of the Computer-Aided Design and Specification Tool. The development of the computer data base and a description of the MIS structure is included in the report. This report addresses management areas such as cost/budgeting, scheduling, tracking capabilities, and ECP
A Software Defined Radio Based Airplane Communication Navigation Simulation System
NASA Astrophysics Data System (ADS)
He, L.; Zhong, H. T.; Song, D.
2018-01-01
Radio communication and navigation systems play an important role in ensuring the safety of civil airplanes in flight, and their function and performance should be tested before these systems are installed on board. Conventionally, a set of transmitters and receivers is needed for each system, so the equipment occupies considerable space and is costly. In this paper, software defined radio technology is applied to design a common-hardware communication and navigation ground simulation system, which can host multiple airplane systems with different operating frequencies, such as HF, VHF, VOR, ILS, ADF, etc. We use a broadband analog front-end hardware platform, the universal software radio peripheral (USRP), to transmit and receive signals in different frequency bands. The software is developed in LabVIEW on a computer, which interfaces with the USRP through Ethernet and is responsible for communication and navigation signal processing and system control. An integrated testing system is established to perform functional tests and performance verification of the simulation signal, which demonstrates the feasibility of our design. The system is a low-cost, common hardware platform for multiple airplane systems, providing a helpful reference for integrated avionics design.
New approach in the evaluation of a fitness program at a worksite.
Shirasaya, K; Miyakawa, M; Yoshida, K; Tanaka, C; Shimada, N; Kondo, T
1999-03-01
The most common methods for the economic evaluation of a fitness program at a worksite are cost-effectiveness, cost-benefit, and cost-utility analyses. In this study, we applied a basic microeconomic theory, the "neoclassical firm's problem," as a new approach. The optimal number of physical-exercise classes that constitute the core of the fitness program is determined using a cubic health production function, where the optimal number is defined as the number that maximizes the profit of the program. The optimal number corresponding to any willingness-to-pay amount of the participants for the effectiveness of the program is presented using a graph. For example, if the willingness-to-pay is $800, the optimal number of classes is 23. Our method can be applied to the evaluation of any health care program for which the health production function can be estimated.
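The "neoclassical firm's problem" here reduces to maximizing profit(n) = willingness-to-pay × H(n) − cost × n over the number of classes n. A sketch with an invented cubic production function and class cost (the study estimated its own function from worksite data, so the optimum below differs from the paper's 23):

```python
def optimal_classes(wtp, class_cost, produce, n_max=60):
    # profit(n) = wtp * H(n) - class_cost * n, maximized over integer n
    return max(range(n_max + 1),
               key=lambda n: wtp * produce(n) - class_cost * n)

# Invented cubic health production function (diminishing, then negative returns):
H = lambda n: 0.1 * n + 0.003 * n**2 - 0.0001 * n**3
n_star = optimal_classes(wtp=800, class_cost=20, produce=H)
```

Sweeping `wtp` and re-running reproduces, in miniature, the paper's graph of optimal class count against willingness-to-pay.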
NASA Technical Reports Server (NTRS)
Jackson, Mark Charles
1994-01-01
Spacecraft proximity operations are complicated by the fact that exhaust plume impingement from the reaction control jets of space vehicles can cause structural damage, contamination of sensitive arrays and instruments, or attitude misalignment during docking. The occurrence and effect of jet plume impingement can be reduced by planning approach trajectories with plume effects considered. An A* node search is used to find plume-fuel optimal trajectories through a discretized six dimensional attitude-translation space. A plume cost function which approximates jet plume isopressure envelopes is presented. The function is then applied to find relative costs for predictable 'trajectory altering' firings and unpredictable 'deadbanding' firings. Trajectory altering firings are calculated by running the spacecraft jet selection algorithm and summing the cost contribution from each jet fired. A 'deadbanding effects' function is defined and integrated to determine the potential for deadbanding impingement along candidate trajectories. Plume costs are weighed against fuel costs in finding the optimal solution. A* convergence speed is improved by solving approach trajectory problems in reverse time. Results are obtained on a high fidelity space shuttle/space station simulation. Trajectory following is accomplished by a six degree of freedom autopilot. Trajectories planned with, and without, plume costs are compared in terms of force applied to the target structure.
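The planning idea — penalize trajectory segments whose jet firings impinge on the target — can be shown with a generic A* on a toy 2-D grid in place of the six-dimensional attitude-translation space; the plume model is a crude invented penalty around a structure cell, not the paper's isopressure-envelope cost:

```python
import heapq, itertools

def a_star(start, goal, neighbors, step_cost, heuristic):
    counter = itertools.count()      # tie-breaker so the heap never compares nodes
    open_heap = [(heuristic(start), next(counter), start, None)]
    came, g = {}, {start: 0.0}
    while open_heap:
        _, _, node, parent = heapq.heappop(open_heap)
        if node in came:
            continue                  # already expanded via a cheaper path
        came[node] = parent
        if node == goal:
            path = [node]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for nb in neighbors(node):
            ng = g[node] + step_cost(node, nb)
            if ng < g.get(nb, float("inf")):
                g[nb] = ng
                heapq.heappush(open_heap, (ng + heuristic(nb), next(counter), nb, node))
    return None

# 5x5 grid; invented plume penalty around a structure at (2, 2).
plume = lambda c: 10.0 if abs(c[0] - 2) + abs(c[1] - 2) <= 1 else 0.0
nbrs = lambda c: [(c[0] + dx, c[1] + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= c[0] + dx < 5 and 0 <= c[1] + dy < 5]
cost = lambda a, b: 1.0 + plume(b)           # fuel per move plus plume penalty
h = lambda c: abs(c[0] - 4) + abs(c[1] - 4)  # admissible: ignores the plume
path = a_star((0, 0), (4, 4), nbrs, cost, h)
```

Because the plume penalty dwarfs the per-move fuel cost, the optimal path detours around the structure at no extra fuel, mirroring how the thesis weighs plume costs against fuel costs.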
Noben, Cindy; Smit, Filip; Nieuwenhuijsen, Karen; Ketelaar, Sarah; Gärtner, Fania; Boon, Brigitte; Sluiter, Judith; Evers, Silvia
2014-10-01
The specific job demands of working in a hospital may place nurses at elevated risk for developing distress, anxiety and depression. Screening followed by referral to early interventions may reduce the incidence of these health problems and promote work functioning. To evaluate the comparative cost-effectiveness of two strategies to promote work functioning among nurses by reducing symptoms of mental health complaints. Three conditions were compared: the control condition consisted of online screening for mental health problems without feedback about the screening results. The occupational physician condition consisted of screening, feedback and referral to the occupational physician for screen-positive nurses. The third condition included screening, feedback, and referral to e-mental health. The study was designed as an economic evaluation alongside a pragmatic cluster randomised controlled trial with randomisation at hospital-ward level. The study included 617 nurses in one academic medical centre in the Netherlands. Treatment response was defined as an improvement on the Nurses Work Functioning Questionnaire of at least 40% between baseline and follow-up. Total per-participant costs encompassed intervention costs, direct medical and non-medical costs, and indirect costs stemming from lost productivity due to absenteeism and presenteeism. All costs were indexed for the year 2011. At 6 months follow-up, significant improvement in work functioning occurred in 20%, 24% and 16% of the participating nurses in the control condition, the occupational physician condition and the e-mental health condition, respectively. In these conditions the total average annualised costs were €1752, €1266 and €1375 per nurse. The median incremental cost-effectiveness ratio for the occupational physician condition versus the control condition was dominant, suggesting cost savings of €5049 per treatment responder. 
The incremental cost-effectiveness ratio for the e-mental health condition versus the control condition was estimated at €4054 (added costs) per treatment responder. Sensitivity analyses attested to the robustness of these findings. The occupational physician condition resulted in greater treatment response at lower cost relative to the control condition and can therefore be recommended. The e-mental health condition produced less treatment response than the control condition and cannot be recommended as an intervention to improve work functioning among nurses.
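The point-estimate arithmetic behind an ICER, using the abstract's rounded per-nurse costs and response rates. Note the paper's €5049 figure is a median from a bootstrapped cost-effectiveness distribution, so this simple ratio differs from it:

```python
def icer(cost_new, eff_new, cost_ref, eff_ref):
    d_cost, d_eff = cost_new - cost_ref, eff_new - eff_ref
    label = "dominant" if d_cost <= 0 and d_eff > 0 else "ratio"
    return label, d_cost / d_eff   # cost difference per extra treatment responder

# Occupational physician vs control: €1266 vs €1752 per nurse; 24% vs 20% response.
label, value = icer(1266, 0.24, 1752, 0.20)
```

A negative ratio with positive effect difference ("dominant") means the strategy is both cheaper and more effective, which is why the abstract reports the occupational physician condition as cost-saving.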
Space station needs, attributes and architectural options: Architectural options and selection
NASA Technical Reports Server (NTRS)
Nelson, W. G.
1983-01-01
The approach, study results, and recommendations for defining and selecting space station architectural options are described. Space station system architecture is defined by the arrangement of elements (manned and unmanned on-orbit facilities, shuttle vehicles, orbital transfer vehicles, etc.), the number of these elements, their location (orbital inclination and altitude), and their functional performance capability (power, volume, crew, etc.). Architectural options are evaluated based on the degree of mission capture versus cost and required funding rate. Mission capture refers to the number of missions accommodated by the particular architecture.
Reusable Agena study. Volume 2: Technical
NASA Technical Reports Server (NTRS)
Carter, W. K.; Piper, J. E.; Douglass, D. A.; Waller, E. W.; Hopkins, C. V.; Fitzgerald, E. T.; Sagawa, S. S.; Carter, S. A.; Jensen, H. L.
1974-01-01
The application of the existing Agena vehicle as a reusable upper stage for the space shuttle is discussed. The primary objective of the study is to define those changes to the Agena required for it to function in the reusable mode in 100 percent capture of the NASA-DOD mission model. This 100 percent capture is achieved without the use of kick motors or stages, simply by increasing the Agena propellant load with optional strap-on tanks. The required shuttle support equipment, launch and flight operations techniques, development program, and cost package are also defined.
Optimal consensus algorithm integrated with obstacle avoidance
NASA Astrophysics Data System (ADS)
Wang, Jianan; Xin, Ming
2013-01-01
This article proposes a new consensus algorithm for the networked single-integrator systems in an obstacle-laden environment. A novel optimal control approach is utilised to achieve not only multi-agent consensus but also obstacle avoidance capability with minimised control efforts. Three cost functional components are defined to fulfil the respective tasks. In particular, an innovative nonquadratic obstacle avoidance cost function is constructed from an inverse optimal control perspective. The other two components are designed to ensure consensus and constrain the control effort. The asymptotic stability and optimality are proven. In addition, the distributed and analytical optimal control law only requires local information based on the communication topology to guarantee the proposed behaviours, rather than all agents' information. The consensus and obstacle avoidance are validated through simulations.
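A rough single-integrator sketch of consensus combined with obstacle avoidance. The inverse-square repulsion below is an illustrative stand-in for the article's inverse-optimal avoidance cost, and the gains and geometry are arbitrary:

```python
import numpy as np

def consensus_step(x, adjacency, obstacles, dt=0.05, k_avoid=2.0, r=1.0):
    u = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(len(x)):
            if adjacency[i][j]:
                u[i] += x[j] - x[i]                   # consensus term (local info only)
        for ob in obstacles:
            d = x[i] - ob
            dist = np.linalg.norm(d)
            if dist < r:                              # repulsion only inside radius r
                u[i] += k_avoid * d / (dist**2 + 1e-9)
    return x + dt * u

x = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])    # three agents in the plane
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]                  # complete communication graph
obstacle = [np.array([1.0, 2.0])]
for _ in range(400):
    x = consensus_step(x, A, obstacle)
```

Each control input uses only neighbours' states and locally sensed obstacles, matching the article's distributed-information requirement, though not its optimality guarantees.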
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one and two orders of magnitude faster than the HFS solver.
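The core trade-off — solution quality improves with more calls while each call adds cost — can be captured in a toy model where a run's suboptimality is Uniform(0,1), so the best of n runs has expected gap 1/(n+1). Minimizing the combined figure of merit then yields an interior stopping point rather than "run forever" (all numbers invented, not the paper's estimator):

```python
def expected_run_cost(n_calls, cost_per_call=1.0, quality_scale=100.0):
    # E[min of n Uniform(0,1) draws] = 1/(n+1); add a linear cost per call.
    return quality_scale / (n_calls + 1) + cost_per_call * n_calls

best_n = min(range(1, 200), key=expected_run_cost)
```

Raising `cost_per_call` pulls the optimal stopping point earlier, which is the benchmarking knob the abstract introduces on top of the objective function itself.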
Cost-effectiveness of the stream-gaging program in New Jersey
Schopp, R.D.; Ulery, R.L.
1984-01-01
The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources, applies, at selected stations, alternative less costly methods (that is, flow routing and regression analysis) for furnishing the data, and defines a strategy for operating the program which minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)
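As a reading aid for the reported budget/accuracy trade-off, the three (budget, average standard error) pairs in the abstract can be interpolated piecewise-linearly. The study's actual uncertainty model works station by station, so this is only a rough summary curve:

```python
def interp_error(budget, points):
    pts = sorted(points)
    for (b0, e0), (b1, e1) in zip(pts, pts[1:]):
        if b0 <= budget <= b1:
            return e0 + (e1 - e0) * (budget - b0) / (b1 - b0)
    raise ValueError("budget outside the analysed range")

# (budget in dollars, average standard error in percent), from the abstract:
pts = [(548_000, 27.6), (569_000, 24.9), (650_000, 17.8)]
err = interp_error(600_000, pts)
```

The steeper slope of the first segment suggests the first dollars above the minimum budget buy the largest error reductions, consistent with the study's cost-effectiveness framing.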
Conceptual design studies of control and instrumentation systems for ignition experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholson, P.J.; Dewolf, J.B.; Heinemann, P.C.
1978-03-01
Studies at the Charles Stark Draper Laboratory in the past year were a continuation of prior studies of control and instrumentation systems for current and next generation Tokamaks. Specifically, the FY 77 effort focused on the following two main efforts: (1) control requirements--(a) defining and evolving control requirements/concepts for a prototype experimental power reactor(s), and (b) defining control requirements for diverters and mirror machines, specifically the MX; and (2) defining requirements and scoping design for a functional control simulator. Later in the year, a small additional task was added: (3) providing analysis and design support to INESCO for its low cost fusion power system, FPC/DMT.
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
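The merit-function idea described above can be sketched in a few lines: rank candidate points by the surrogate's predicted objective minus a reward for distance from existing samples, so the next expensive evaluation both looks promising and refines the approximation. The 1-D objective, sample set, and weight rho below are illustrative assumptions, not the paper's test problems:

```python
import numpy as np

f = lambda x: (x - 0.7) ** 2          # expensive objective (cheap stand-in here)
X = np.array([0.0, 0.5, 1.0])         # points already evaluated
y = f(X)

def surrogate(x):
    """Cheap approximation: piecewise-linear interpolant of the sampled data."""
    return np.interp(x, X, y)

def merit(x, rho=0.3):
    """Predicted objective minus rho times distance to the nearest sample:
    small values favor points that look good AND improve the surrogate."""
    d = np.min(np.abs(x - X))
    return surrogate(x) - rho * d

grid = np.linspace(0.0, 1.0, 201)
x_next = grid[np.argmin([merit(g) for g in grid])]   # next point to evaluate
```

With rho = 0 the merit function reduces to pure exploitation of the surrogate and can stall on already-sampled points; the distance term is what drives improvement of the approximation.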
Active Control of the Forced and Transient Response of a Finite Beam. M.S. Thesis
NASA Technical Reports Server (NTRS)
Post, John Theodore
1989-01-01
When studying structural vibrations resulting from a concentrated source, many structures may be modelled as a finite beam excited by a point source. The theoretical limit on cancelling the resulting beam vibrations by utilizing another point source as an active controller is explored. Three different types of excitation are considered: harmonic, random, and transient. In each case, a cost function is defined and minimized for numerous parameter variations. For the case of harmonic excitation, the cost function is obtained by integrating the mean squared displacement over a region of the beam in which control is desired. A controller is then found to minimize this cost function in the control interval. The control interval and controller location are continuously varied for several frequencies of excitation. The results show that control over the entire beam length is possible only when the excitation frequency is near a resonant frequency of the beam, but control over a subregion may be obtained even between resonant frequencies at the cost of increasing the vibration outside of the control region. For random excitation, the cost function is realized by integrating the expected value of the displacement squared over the interval of the beam in which control is desired. This is shown to yield a cost function identical to that obtained by integrating the cost function for harmonic excitation over all excitation frequencies. As a result, it is always possible to reduce the cost function for random excitation whether controlling the entire beam or just a subregion, without ever increasing the vibration outside the region in which control is desired. The last type of excitation considered is a single, transient pulse. A cost function representative of the beam vibration is obtained by integrating the transient displacement squared over a region of the beam and over all time. The form of the controller is chosen a priori as either one or two delayed pulses. 
Delays constrain the controller to be causal. The best possible control is then examined while varying the region of control and the controller location. It is found that control is always possible using either one or two control pulses. The two-pulse controller gives better performance than a single-pulse controller, but the cost of finding the optimal delay times for the additional control pulses increases as the square of the number of control pulses.
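Because the harmonic-excitation cost is quadratic in the control amplitude, the optimal controller has a closed form. A minimal sketch for a simply supported beam with unit properties (the locations, frequency, and mode count are assumptions for illustration, not the thesis parameters):

```python
import numpy as np

Lb, n_modes = 1.0, 20
x_f, x_c = 0.3, 0.62          # disturbance and controller locations (assumed)
omega = 35.0                  # excitation frequency, between the first two resonances

phi = lambda n, x: np.sin(n * np.pi * x / Lb)                           # mode shapes
wn2 = np.array([(n * np.pi / Lb) ** 4 for n in range(1, n_modes + 1)])  # EI/(rho*A) = 1

x = np.linspace(0.0, Lb, 501)
# displacement fields per unit disturbance force (A) and per unit control force (B)
A = sum(phi(n, x) * phi(n, x_f) / (wn2[n - 1] - omega**2) for n in range(1, n_modes + 1))
B = sum(phi(n, x) * phi(n, x_c) / (wn2[n - 1] - omega**2) for n in range(1, n_modes + 1))

# cost J(u) = integral over the control subregion of (A + u*B)^2 is quadratic
# in the control amplitude u, so the minimizer is available in closed form
mask = (x >= 0.5) & (x <= 1.0)
u_opt = -np.sum(A[mask] * B[mask]) / np.sum(B[mask] ** 2)

dx = x[1] - x[0]
J0 = np.sum(A[mask] ** 2) * dx                          # uncontrolled cost
J_opt = np.sum((A[mask] + u_opt * B[mask]) ** 2) * dx   # optimally controlled cost
```

Evaluating J outside the mask shows the effect the thesis reports: reducing the cost inside a subregion between resonances can raise the vibration elsewhere on the beam.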
ERIC Educational Resources Information Center
Hope, Nicholas C.
Arguing that the benefits from borrowing abroad exceed the costs recently imposed on countries through debt-servicing difficulties, this paper defines debt as an engine of growth, forcing the borrower to produce goods efficiently, export them, and function competitively in the international market. Debt-servicing difficulties of developing nations…
Reliability enhancement through optimal burn-in
NASA Astrophysics Data System (ADS)
Kuo, W.
1984-06-01
A numerical reliability and cost model is defined for production line burn-in tests of electronic components. The necessity of burn-in is governed by upper and lower bounds: burn-in is mandatory for operation-critical or nonrepairable components; no burn-in is needed when failure effects are insignificant or easily repairable. The model considers electronic systems in terms of a series of components connected by a single black box. The infant mortality rate is described with a Weibull distribution. Performance reaches a steady state after burn-in, and the cost of burn-in is a linear function for each component. A minimum total cost is calculated over the costs and durations of burn-in, shop repair, and field repair, with attention given to possible losses in future sales from inadequate burn-in testing.
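The trade-off behind the minimum-cost burn-in time can be sketched with a Weibull infant-mortality model, a linear burn-in cost, and cheap shop repair versus expensive field repair. All parameter values below are illustrative assumptions, not the paper's data:

```python
import numpy as np

beta, eta = 0.5, 2000.0                       # Weibull shape < 1 (infant mortality), scale in hours
c_burn, c_shop, c_field = 0.02, 5.0, 200.0    # $/hour of burn-in, $/shop repair, $/field repair
mission = 1000.0                              # field operating hours of interest

F = lambda t: 1.0 - np.exp(-(t / eta) ** beta)   # Weibull failure CDF

def expected_cost(tb):
    """Linear burn-in cost + shop repairs during burn-in + field repairs after."""
    p_shop = F(tb)                                     # fails (and is fixed) during burn-in
    p_field = (F(tb + mission) - F(tb)) / (1 - F(tb))  # fails in the field given survival
    return c_burn * tb + c_shop * p_shop + c_field * p_field

t_grid = np.linspace(0.0, 3000.0, 601)
costs = np.array([expected_cost(t) for t in t_grid])
t_opt = t_grid[np.argmin(costs)]                 # cost-minimizing burn-in duration
```

With a decreasing hazard (shape below 1), longer burn-in shifts failures from the expensive field to the cheap shop, but the linear burn-in cost eventually dominates, giving an interior optimum.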
2013-08-01
Approved for public release; distribution unlimited. PA Number 412-TW-PA-13395. Nomenclature: f, generic function; g, acceleration due to gravity; h, altitude; L, aerodynamic lift force; L, Lagrange ... cost; m, vehicle mass; M, Mach number; n, number of coefficients in polynomial regression; p, highest order of polynomial regression; Q, dynamic pressure; R, ... Method (RPM); the collocation points are defined by the roots of Legendre-Gauss-Radau (LGR) functions. GPOPS also automatically refines the “mesh” by ...
Mixed H(2)/H(sub infinity): Control with output feedback compensators using parameter optimization
NASA Technical Reports Server (NTRS)
Schoemig, Ewald; Ly, Uy-Loi
1992-01-01
Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H(sub 2)/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H(sub 2)/H-infinity control problem. The approach developed in this research is based on a finite-time cost functional that embeds an H-infinity-bound control problem in an H(sub 2)-optimization setting. The goal is to define a time-domain cost function that optimizes the H(sub 2)-norm of a system subject to an H-infinity constraint.
Mixed H2/H(infinity)-Control with an output-feedback compensator using parameter optimization
NASA Technical Reports Server (NTRS)
Schoemig, Ewald; Ly, Uy-Loi
1992-01-01
Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H(sub 2)/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H(sub 2)/H-infinity control problem. The approach developed in this research is based on a finite-time cost functional that embeds an H-infinity-bound control problem in an H(sub 2)-optimization setting. The goal is to define a time-domain cost function that optimizes the H(sub 2)-norm of a system subject to an H-infinity constraint.
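The two norms traded off in mixed H2/H-infinity design can be estimated on a frequency grid for a concrete stable SISO plant. The lightly damped second-order example G(s) = 1/(s^2 + 0.4s + 1) is an assumption for illustration, not a system from the paper:

```python
import numpy as np
from scipy import signal

G = signal.TransferFunction([1.0], [1.0, 0.4, 1.0])

w = np.logspace(-2, 2, 4000)          # frequency grid, rad/s
_, resp = signal.freqresp(G, w)
mag2 = np.abs(resp) ** 2

# ||G||_inf = sup_w |G(jw)|;  ||G||_2^2 = (1/pi) * integral_0^inf |G(jw)|^2 dw
hinf = float(np.sqrt(mag2.max()))
h2 = float(np.sqrt(np.sum((mag2[1:] + mag2[:-1]) / 2 * np.diff(w)) / np.pi))
```

For this plant (natural frequency 1, damping ratio 0.2) the closed-form values are 1/(2*0.2*sqrt(1 - 0.04)) ≈ 2.552 for the H-infinity norm and sqrt(1/(4*0.2)) ≈ 1.118 for the H2 norm, which the grid estimates reproduce.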
Satellite services system analysis study. Volume 1, part 2: Executive summary
NASA Technical Reports Server (NTRS)
1981-01-01
The early mission model was developed through a survey of the potential user market. Service functions were defined and a group of design reference missions were selected which represented needs for each of the service functions. Servicing concepts were developed through mission analysis and STS timeline constraint analysis. The hardware needs for accomplishing the service functions were identified with emphasis being placed on applying equipment in the current NASA inventory and that in advanced stages of planning. A more comprehensive service model was developed based on the NASA and DoD mission models segregated by mission class. The number of service events of each class was estimated based on average revisit and service assumptions. Service kits were defined as collections of equipment applicable to performing one or more service functions. Preliminary design was carried out on a selected set of hardware needed for early service missions. The organization and costing of the satellite service systems were addressed.
Uncertainty in sample estimates and the implicit loss function for soil information.
NASA Astrophysics Data System (ADS)
Lark, Murray
2015-04-01
One significant challenge in the communication of uncertain information is how to enable the sponsors of sampling exercises to make a rational choice of sample size. One way to do this is to compute the value of additional information given the loss function for errors. The loss function expresses the costs that result from decisions made using erroneous information. In certain circumstances, such as remediation of contaminated land prior to development, loss functions can be computed and used to guide rational decision making on the amount of resource to spend on sampling to collect soil information. In many circumstances the loss function cannot be obtained prior to decision making. This may be the case when multiple decisions may be based on the soil information and the costs of errors are hard to predict. The implicit loss function is proposed as a tool to aid decision making in these circumstances. Conditional on a logistical model which expresses costs of soil sampling as a function of effort, and statistical information from which the error of estimates can be modelled as a function of effort, the implicit loss function is the loss function which makes a particular decision on effort rational. In this presentation the implicit loss function is defined and computed for a number of arbitrary decisions on sampling effort for a hypothetical soil monitoring problem. This is based on a logistical model of sampling cost parameterized from a recent geochemical survey of soil in Donegal, Ireland and on statistical parameters estimated with the aid of a process model for change in soil organic carbon. It is shown how the implicit loss function might provide a basis for reflection on a particular choice of sample size by comparing it with the values attributed to soil properties and functions. Scope for further research to develop and apply the implicit loss function to help decision making by policy makers and regulators is then discussed.
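The implicit-loss idea can be sketched for the simplest cost and error models: if collecting n samples costs C(n) and the estimate's standard error is sigma(n), the loss coefficient lam that makes a chosen sample size n* rational satisfies d/dn [C(n) + lam*sigma(n)] = 0 at n*. The linear cost and 1/sqrt(n) error models below are illustrative assumptions, not the Donegal parameterization:

```python
import numpy as np

c0, c1, s = 500.0, 40.0, 12.0     # fixed cost, per-sample cost, error scale (assumed)

C = lambda n: c0 + c1 * n         # logistical cost of collecting n samples
sigma = lambda n: s / np.sqrt(n)  # standard error of the estimate

def implicit_loss_coeff(n_star):
    """lam = -C'(n*) / sigma'(n*): the per-unit-error loss implied by choosing n*."""
    dC = c1                                # derivative of the linear cost
    dsigma = -0.5 * s * n_star ** -1.5     # derivative of s/sqrt(n)
    return -dC / dsigma

lam = implicit_loss_coeff(25.0)   # loss per unit standard error implied by n* = 25
```

A sponsor who chooses a larger n* is implicitly asserting a larger lam; comparing that implied value with the stated value of the soil information is the reflection the abstract proposes.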
A study of parameter identification
NASA Technical Reports Server (NTRS)
Herget, C. J.; Patterson, R. E., III
1978-01-01
A set of definitions for deterministic parameter identifiability was proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability was defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.
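The information-matrix criterion mentioned last can be illustrated with a toy sensitivity computation: a full-rank information matrix indicates local identifiability, a singular one does not. Both models and parameter values below are assumptions for illustration:

```python
import numpy as np

t = np.linspace(0.1, 2.0, 20)   # observation times

def fim(jac):
    """Fisher information matrix (unit noise): J^T J from output sensitivities."""
    return jac.T @ jac

# Identifiable model: y = a*exp(-b*t); sensitivities dy/da and dy/db
a, b = 2.0, 1.5
J1 = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])

# Non-identifiable model: y = (a*b)*t; only the product a*b is recoverable,
# so the two sensitivity columns are proportional
J2 = np.column_stack([b * t, a * t])

rank1 = np.linalg.matrix_rank(fim(J1))   # full rank: locally identifiable
rank2 = np.linalg.matrix_rank(fim(J2))   # rank deficient: not identifiable
```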
Porath, Avi; Fund, Naama; Maor, Yasmin
2017-02-01
The aim of this study was to evaluate the direct costs of patients with diabetes insured in a large health maintenance organization, Maccabi Health Services (MHS), in order to compare the medical costs of these patients to the medical costs of other patients insured by MHS and to assess the impact of poorly controlled diabetes on medical costs. A retrospective analysis of patients insured in MHS during 2012 was performed. Data were extracted automatically from the electronic database. A glycated hemoglobin (HbA1c) level of >9% (75 mmol/mol) was considered to define poorly controlled diabetes, and that of <7% (53 mmol/mol) and <8% (64 mmol/mol) to define controlled diabetes for patients aged <75 and ≥75 years, respectively. Multivariate analyses were performed to assess factors affecting cost. Data on a total of 99,017 patients with diabetes were obtained from the MHS database for 2012. Of these, 54% were male and 72% were aged 45-75 years. The median annual cost of treating diabetes was 4420 cost units (CU), with hospitalization accounting for 56% of the total costs. The median annual cost per patient in the age groups 35-44 and 75-84 years was 2836 CU and 7033 CU, respectively. The difference between costs for patients with diabetes and those for patients without diabetes was 85% for the age group 45-54 years but only 24% for the age group 75-84 years. Medical costs increased similarly with age for patients with controlled diabetes and those with poorly controlled diabetes, as they did with additional co-morbidities. Costs were significantly impacted by kidney disease. The costs for patients with an HbA1c level of 8.0-8.99% (64-74 mmol/mol) and 9.0-9.99% (75-85 mmol/mol) were 5722 and 5700 CU, respectively. In a multivariate analysis the factors affecting all patients' costs were HbA1C level, male gender, chronic diseases, complications of diabetes, disease duration, and stage of kidney function. 
The direct medical costs of patients with diabetes were significantly higher than those of patients without diabetes. The main drivers of these higher costs were hospitalizations and renal function. In poorly controlled patients the effect of HbA1c on costs was limited. These findings suggest that it is cost effective to identify patients with diabetes early in the course of the disease. The work was sponsored by internal funds of the authors. Article processing charges for this study were funded by Novo Nordisk.
Space station systems analysis study. Part 2, Volume 2. [technical report
NASA Technical Reports Server (NTRS)
1977-01-01
Specific system options are defined and identified for a cost effective space station capable of orderly growth with regard to both function and orbit location. Selected program options are analyzed and configuration concepts are developed to meet objectives for the satellite power system, earth servicing, space processing, and supporting activities. Transportation systems are analyzed for both LEO and GEO orbits.
Space Station overall management approach for operations
NASA Technical Reports Server (NTRS)
Paules, G.
1986-01-01
An Operations Management Concept developed by NASA for its Space Station Program is discussed. The operational goals, themes, and design principles established during program development are summarized. The major operations functions are described, including: space systems operations, user support operations, prelaunch/postlanding operations, logistics support operations, market research, and cost/financial management. Strategic, tactical, and execution levels of operational decision-making are defined.
NASA Technical Reports Server (NTRS)
1974-01-01
The definition and integration tasks involved in the development of design concepts for a carry-on laboratory (COL), to be compatible with Spacelab operations, were divided into the following study areas: (1) identification of research and equipment requirements of the COL; (2) development of a number of conceptual layouts for the COL based on the defined research requirements; (3) selection of final conceptual designs; and (4) development of COL planning information for definition of COL/Spacelab interface data, cost data, and program cost schedules, including design drawings of a selected COL to permit fabrication of a functional breadboard.
Definition of avionics concepts for a heavy lift cargo vehicle, volume 2
NASA Technical Reports Server (NTRS)
1989-01-01
A cost effective, multiuser simulation, test, and demonstration facility to support the development of avionics systems for future space vehicles is defined. The technology needs and requirements of future Heavy Lift Cargo Vehicles (HLCVs) are analyzed and serve as the basis for sizing of the avionics facility although the lab is not limited in use to support of HLCVs. Volume 2 is the technical volume and provides the results of the vehicle avionics trade studies, the avionics lab objectives, the lab's functional requirements and design, physical facility considerations, and a summary cost estimate.
The technological raw material heating furnaces operation efficiency improving issue
NASA Astrophysics Data System (ADS)
Paramonov, A. M.
2017-08-01
The paper considers improving the efficiency of fuel oil use in technological raw-material heating furnaces by intensifying its combustion. The techno-economic optimization problem of heating the fuel oil before combustion is solved. A method and algorithm are developed for analytically determining the optimal fuel oil preheat temperature, accounting for the correlation between the thermal and operating parameters and the discounted costs of the heating furnace. The resulting optimization functional achieves the required thermal performance of the heating furnace at minimum discounted cost. The results of the study confirm the expediency of the proposed solutions.
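The trade-off behind an optimal preheat temperature can be sketched as a one-dimensional cost minimization: heating the oil costs energy, while hotter (less viscous) oil atomizes and burns better, reducing combustion losses. Both cost terms below are illustrative stand-ins, not the paper's discounted-cost functional:

```python
import numpy as np
from scipy.optimize import minimize_scalar

T0 = 60.0   # baseline preheat temperature, deg C (assumed)

def discounted_cost(T):
    heating = 0.8 * (T - T0)                        # preheat energy cost, grows with T
    combustion = 400.0 * np.exp(-0.03 * (T - T0))   # penalty for poor atomization, falls with T
    return heating + combustion

res = minimize_scalar(discounted_cost, bounds=(T0, 160.0), method="bounded")
T_opt = res.x   # cost-minimizing preheat temperature
```

With one rising and one falling cost term, the optimum is interior; analytically it sits where the marginal heating cost equals the marginal combustion saving.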
Wiedel, Anna-Paulina; Norlund, Anders; Petrén, Sofia; Bondemark, Lars
2016-04-01
Economic evaluations provide an important basis for allocation of resources and health services planning. The aim of this study was to evaluate and compare the costs of correcting anterior crossbite with functional shift, using fixed or removable appliances (FA or RA) and to relate the costs to the effects, using cost-minimization analysis. Sixty-two patients with anterior crossbite and functional shift were randomized in blocks of 10. Thirty-one patients were randomized to be treated with brackets and arch wire (FA) and 31 with an acrylic plate (RA). Duration of treatment and number and estimated length of appointments and cancellations were registered. Direct costs (premises, staff salaries, material, and laboratory costs) and indirect costs (the accompanying parents' loss of income while absent from work) were calculated and evaluated with reference to successful outcome alone, to successful and unsuccessful outcomes and to re-treatment when required. Societal costs were defined as the sum of direct and indirect costs. The interventions compared were treatment with FA or RA. There were no significant differences between FA and RA with respect to direct costs for treatment time, but both indirect costs and direct costs for material were significantly lower for FA. The total societal costs were lower for FA than for RA. Costs depend on local factors and should not be directly extrapolated to other locations. The analysis disclosed significant economic benefits for FA over RA. Even when only successful outcomes were assessed, treatment with RA was more expensive. This trial was not registered. The protocol was not published before trial commencement.
A cost minimization analysis of early correction of anterior crossbite—a randomized controlled trial
Norlund, Anders; Petrén, Sofia; Bondemark, Lars
2016-01-01
Summary Objective: Economic evaluations provide an important basis for allocation of resources and health services planning. The aim of this study was to evaluate and compare the costs of correcting anterior crossbite with functional shift, using fixed or removable appliances (FA or RA) and to relate the costs to the effects, using cost-minimization analysis. Design, Setting, and Participants: Sixty-two patients with anterior crossbite and functional shift were randomized in blocks of 10. Thirty-one patients were randomized to be treated with brackets and arch wire (FA) and 31 with an acrylic plate (RA). Duration of treatment and number and estimated length of appointments and cancellations were registered. Direct costs (premises, staff salaries, material, and laboratory costs) and indirect costs (the accompanying parents’ loss of income while absent from work) were calculated and evaluated with reference to successful outcome alone, to successful and unsuccessful outcomes and to re-treatment when required. Societal costs were defined as the sum of direct and indirect costs. Interventions: Treatment with FA or RA. Results: There were no significant differences between FA and RA with respect to direct costs for treatment time, but both indirect costs and direct costs for material were significantly lower for FA. The total societal costs were lower for FA than for RA. Limitations: Costs depend on local factors and should not be directly extrapolated to other locations. Conclusion: The analysis disclosed significant economic benefits for FA over RA. Even when only successful outcomes were assessed, treatment with RA was more expensive. Trial registration: This trial was not registered. Protocol: The protocol was not published before trial commencement. PMID:25940585
Sensitivity analysis of physiological factors in space habitat design
NASA Technical Reports Server (NTRS)
Billingham, J.
1982-01-01
The costs incurred by design conservatism in space habitat design are discussed from a structural standpoint, and areas of physiological research into less than earth-normal conditions that offer the greatest potential decrease in habitat construction and operating costs are studied. The established range of human tolerance limits is defined for those physiological conditions which directly affect habitat structural design. These entire ranges or portions thereof are set as habitat design constraints as a function of habitat population and degree of ecological closure. Calculations are performed to determine the structural weight and cost associated with each discrete population size and its selected environmental conditions, on the basis of habitable volume equivalence for four basic habitat configurations: sphere, cylinder with hemispherical ends, torus, and crystal palace.
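The structural side of this trade study can be illustrated with thin-wall pressure-vessel sizing at equal habitable volume: the internal pressure, itself a physiological design constraint, sets the wall thickness and hence the shell mass. Material properties and the cylinder aspect ratio below are assumptions (aluminum-like), and the cylinder end caps are approximated as flat disks rather than hemispheres:

```python
import numpy as np

p = 101.3e3    # internal pressure, Pa (Earth-normal; a physiological design variable)
sigma = 200e6  # allowable membrane stress, Pa (assumed)
rho = 2700.0   # shell material density, kg/m^3 (assumed)
V = 1.0e5      # habitable volume, m^3 (assumed)

def sphere_mass(V):
    R = (3 * V / (4 * np.pi)) ** (1 / 3)
    t = p * R / (2 * sigma)                  # membrane thickness of a sphere
    return 4 * np.pi * R**2 * t * rho

def cylinder_mass(V, aspect=4.0):
    """Closed cylinder, length = aspect * radius; hoop stress sizes the wall."""
    R = (V / (np.pi * aspect)) ** (1 / 3)
    L = aspect * R
    t = p * R / sigma                        # hoop-stress thickness
    return (2 * np.pi * R * L + 2 * np.pi * R**2) * t * rho

m_s, m_c = sphere_mass(V), cylinder_mass(V)  # sphere is the lighter shell
```

Because both masses scale linearly with p, any physiologically acceptable reduction below Earth-normal pressure cuts structural mass in direct proportion, which is the cost lever the study examines.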
Christensen, Michael Cronquist; Munro, Vicki
2018-04-01
To determine the cost-effectiveness of vortioxetine vs duloxetine in adults with moderate-to-severe major depressive disorder (MDD) in Norway using a definition of a successfully treated patient (STP) that incorporates improvement in both mood symptoms and functional capacity. Using the population of patients who completed the 8-week CONNECT study, the cost-effectiveness of vortioxetine (n = 168) (10-20 mg/day) vs duloxetine (n = 176) (60 mg/day) was investigated for the treatment of adults in Norway with moderate-to-severe MDD and self-reported cognitive dysfunction over an 8-week treatment period. Cost-effectiveness was assessed in terms of cost per STP, defined as improvement in mood symptoms (≥50% decrease from baseline in Montgomery-Åsberg Depression Rating Scale total score) and change in UCSD [University of California San Diego] performance-based skills assessment [UPSA] score of ≥7. The base case analysis utilized pharmacy retail price (apotek utsalgspris (AUP)) for branded vortioxetine (Brintellix) and branded duloxetine (Cymbalta). After 8 weeks of antidepressant therapy, there were more STPs with vortioxetine than with duloxetine (27.4% vs 22.5%, respectively). The mean number needed to treat for each STP was 3.6 for vortioxetine and 4.4 for duloxetine, resulting in a lower mean cost per STP for vortioxetine (NOK [Norwegian Kroner] 3264) than for duloxetine (NOK 3310) and an incremental cost per STP of NOK 3051. The use of a more challenging change in the UPSA score from baseline (≥9) resulted in a mean cost per STP of NOK 3822 for vortioxetine compared with NOK 3983 for duloxetine and an incremental cost per STP of NOK 3181. Vortioxetine may be a cost-effective alternative to duloxetine, owing to its superior ability to improve functional capacity. The dual-response STP concept introduced here represents a more comprehensive analysis of the cost-effectiveness of antidepressants.
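The cost-per-STP metric reduces to simple arithmetic: the mean spend to obtain one successfully treated patient is the per-course cost divided by the STP rate (equivalently, course cost times the number needed to treat). Only the STP rates below are taken from the abstract; the per-course drug costs are illustrative assumptions:

```python
def cost_per_stp(course_cost, stp_rate):
    """Mean cost to obtain one successfully treated patient."""
    return course_cost / stp_rate

vor = cost_per_stp(900.0, 0.274)    # vortioxetine: STP rate 27.4% (from the study)
dul = cost_per_stp(745.0, 0.225)    # duloxetine: STP rate 22.5% (from the study)

# incremental cost per additional STP gained by the more effective treatment
incremental = (900.0 - 745.0) / (0.274 - 0.225)
```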
Environmental Liabilities: DoD Training Range Cleanup Cost Estimates Are Likely Understated
2001-04-01
Federal accounting standards define environmental cleanup costs as ... report will not be complete or accurate. Federal financial accounting standards have required that DOD report a liability for the estimated cost of ... within the range is better than any other amount. SFFAS No. 6, Accounting for Property, Plant, and Equipment, further defines cleanup costs as costs for ...
Hartzler, Robert U; Steen, Brandon M; Hussey, Michael M; Cusick, Michael C; Cottrell, Benjamin J; Clark, Rachel E; Frankle, Mark A
2015-11-01
Some patients unexpectedly have poor functional improvement after reverse shoulder arthroplasty (RSA) for massive rotator cuff tear without glenohumeral arthritis. Our aim was to identify risk factors for this outcome. We also assessed the value of RSA for cases with poor functional improvement vs. controls. The study was a retrospective case-control analysis for primary RSA performed for massive rotator cuff tear without glenohumeral arthritis with minimum 2-year follow-up. Cases were defined as Simple Shoulder Test (SST) score improvement of ≤1, whereas controls improved SST score ≥2. Risk factors were chosen on the basis of previous association with poor outcomes after shoulder arthroplasty. Latissimus dorsi tendon transfer results were analyzed as a subgroup. Value was defined as improvement in American Shoulder and Elbow Surgeons (ASES) score per $10,000 hospital cost. In a multivariate binomial logistic regression analysis, neurologic dysfunction (P = .006), age <60 years (P = .02), and high preoperative SST score (P = .03) were independently associated with poor functional improvement. Latissimus dorsi tendon transfer patients significantly improved in active external rotation (-0.3° to 38.7°; P < .01). The value of RSA (ΔASES/$10,000 cost) for cases was 0.8 compared with 17.5 for controls (P < .0001). Young age, high preoperative function, and neurologic dysfunction were associated with poor functional improvement. Surgeons should consider these associations in counseling and selection of patients. Concurrent latissimus dorsi transfer was successful in restoring active external rotation in a subgroup of patients. The critical economic importance of improved patient selection is emphasized by the very low value of the procedure in the case group.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Arakere, G.; Pandurangan, B.; Sellappan, V.; Vallejo, A.; Ozen, M.
2010-11-01
A multi-disciplinary design-optimization procedure has been introduced and used for the development of cost-effective glass-fiber reinforced epoxy-matrix composite 5 MW horizontal-axis wind-turbine (HAWT) blades. The turbine-blade cost-effectiveness has been defined using the cost of energy (CoE), i.e., a ratio of the three-blade HAWT rotor development/fabrication cost and the associated annual energy production. To assess the annual energy production as a function of the blade design and operating conditions, an aerodynamics-based computational analysis had to be employed. As far as the turbine blade cost is concerned, it is assessed for a given aerodynamic design by separately computing the blade mass and the associated blade-mass/size-dependent production cost. For each aerodynamic design analyzed, a structural finite-element analysis and a post-processing life-cycle assessment analysis were employed in order to determine a minimal blade mass which ensures that the functional requirements pertaining to the quasi-static strength of the blade, fatigue-controlled blade durability and blade stiffness are satisfied. To determine the turbine-blade production cost (for the currently prevailing fabrication process, the wet lay-up) available data regarding the industry manufacturing experience were combined with the attendant blade mass, surface area, and the duration of the assumed production run. The work clearly revealed the challenges associated with simultaneously satisfying the strength, durability and stiffness requirements while maintaining a high level of wind-energy capture efficiency and a lower production cost.
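The CoE figure of merit used to rank blade designs can be sketched as rotor cost over annual energy production (AEP), with AEP from an idealized power curve averaged over a Rayleigh wind-speed distribution. All numbers below are generic 5 MW-class assumptions, not values from the study:

```python
import numpy as np

rated_p, v_in, v_rated, v_out = 5.0e6, 3.0, 11.4, 25.0  # W; cut-in/rated/cut-out, m/s
rotor_cost = 1.5e6        # three-blade rotor development/fabrication cost, $ (assumed)
v_mean = 8.0              # site mean wind speed, m/s (assumed)

def power(v):
    """Idealized power curve: cubic ramp to rated power, zero outside cut-in/out."""
    if v < v_in or v > v_out:
        return 0.0
    if v >= v_rated:
        return rated_p
    return rated_p * (v**3 - v_in**3) / (v_rated**3 - v_in**3)

def rayleigh_pdf(v):
    """Rayleigh wind-speed density with the given mean speed."""
    return (np.pi * v / (2 * v_mean**2)) * np.exp(-np.pi * v**2 / (4 * v_mean**2))

v = np.linspace(0.0, 30.0, 3001)
mean_power = sum(power(vi) * rayleigh_pdf(vi) for vi in v) * (v[1] - v[0])
aep_kwh = mean_power * 8760.0 / 1000.0   # kWh per year
coe = rotor_cost / aep_kwh               # $/kWh attributable to the rotor alone
```

A heavier blade raises rotor_cost; a poorer airfoil schedule lowers aep_kwh; the optimizer's job is to move both terms of this ratio at once.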
Martín, Fernando; Moreno, Luis; Garrido, Santiago; Blanco, Dolores
2015-09-16
One of the most important skills desired for a mobile robot is the ability to obtain its own location even in challenging environments. The information provided by the sensing system is used here to solve the global localization problem. In our previous work, we designed different algorithms founded on evolutionary strategies in order to solve the aforementioned task. The latest developments are presented in this paper. The engine of the localization module is a combination of the Markov chain Monte Carlo sampling technique and the Differential Evolution method, which results in a particle filter based on the minimization of a fitness function. The robot's pose is estimated from a set of possible locations weighted by a cost value. The measurements of the perceptive sensors are used together with the predicted ones in a known map to define a cost function to optimize. Although most localization methods rely on quadratic fitness functions, the sensed information is processed asymmetrically in this filter. The Kullback-Leibler divergence is the basis of a cost function that makes it possible to deal with different types of occlusions. The algorithm performance has been checked in a real map. The results are excellent in environments with dynamic and unmodeled obstacles, a fact that causes occlusions in the sensing area.
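The asymmetric, KL-divergence-based cost described above can be sketched by comparing histograms of measured and predicted laser ranges: an occluding dynamic obstacle shortens measurements, and KL divergence penalizes "measured shorter than predicted" differently from the reverse, unlike a quadratic cost. The toy scan data below are assumptions for illustration:

```python
import numpy as np

def kl_cost(measured, predicted, bins=10, rmax=10.0, eps=1e-6):
    """D_KL(P_measured || P_predicted) over range histograms; lower = better pose fit."""
    p, _ = np.histogram(measured, bins=bins, range=(0.0, rmax))
    q, _ = np.histogram(predicted, bins=bins, range=(0.0, rmax))
    p = (p + eps) / (p + eps).sum()   # smooth empty bins, then normalize
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
true_ranges = rng.uniform(1.0, 9.0, 180)     # ranges predicted from the map at the true pose
occluded = np.minimum(true_ranges, 3.0)      # a dynamic obstacle clips the measured ranges
good_pose_cost = kl_cost(occluded, true_ranges)
bad_pose_cost = kl_cost(occluded, rng.uniform(1.0, 9.0, 180) * 0.2)   # wrong pose hypothesis
```

Even with the occlusion, the correct pose scores a much lower cost than the wrong one, which is what lets the particle filter keep weighting the true pose in the presence of unmodeled obstacles.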
Martín, Fernando; Moreno, Luis; Garrido, Santiago; Blanco, Dolores
2015-01-01
One of the most important skills desired for a mobile robot is the ability to obtain its own location even in challenging environments. The information provided by the sensing system is used here to solve the global localization problem. In our previous work, we designed different algorithms founded on evolutionary strategies in order to solve the aforementioned task. The latest developments are presented in this paper. The engine of the localization module is a combination of the Markov chain Monte Carlo sampling technique and the Differential Evolution method, which results in a particle filter based on the minimization of a fitness function. The robot’s pose is estimated from a set of possible locations weighted by a cost value. The measurements of the perceptive sensors are used together with the predicted ones in a known map to define a cost function to optimize. Although most localization methods rely on quadratic fitness functions, the sensed information is processed asymmetrically in this filter. The Kullback-Leibler divergence is the basis of a cost function that makes it possible to deal with different types of occlusions. The algorithm performance has been checked in a real map. The results are excellent in environments with dynamic and unmodeled obstacles, a fact that causes occlusions in the sensing area. PMID:26389914
Applications and requirements for real-time simulators in ground-test facilities
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Blech, Richard A.
1986-01-01
This report relates simulator functions and capabilities to the operation of ground test facilities, in general. The potential benefits of having a simulator are described to aid in the selection of desired applications for a specific facility. Configuration options for integrating a simulator into the facility control system are discussed, and a logical approach to configuration selection based on desired applications is presented. The functional and data path requirements to support selected applications and configurations are defined. Finally, practical considerations for implementation (i.e., available hardware and costs) are discussed.
Research misconduct oversight: defining case costs.
Gammon, Elizabeth; Franzini, Luisa
2013-01-01
This study uses a sequential mixed method study design to define cost elements of research misconduct among faculty at academic medical centers. Using time driven activity based costing, the model estimates a per case cost for 17 cases of research misconduct reported by the Office of Research Integrity for the period of 2000-2005. Per case cost of research misconduct was found to range from $116,160 to $2,192,620. Research misconduct cost drivers are identified.
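The core computation in time-driven activity-based costing is simple: a per-case cost is the sum over activities of time spent multiplied by the capacity cost rate of the resource performing it. A sketch with invented activity names, hours, and rates (not the study's figures):

```python
# Time-driven activity-based costing (TDABC) sketch for one
# research-misconduct case. Every number below is illustrative.
activities = [
    # (activity, hours, cost rate in $/hour)
    ("inquiry committee review", 120, 95.0),
    ("investigation committee",  400, 110.0),
    ("legal counsel",             80, 250.0),
    ("administrative support",   150, 40.0),
]

# Per-case cost = sum of (time) x (capacity cost rate).
case_cost = sum(hours * rate for _, hours, rate in activities)
print(f"estimated per-case cost: ${case_cost:,.0f}")
```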
1989-01-01
This Central African Republic Act provides the following with respect to the right to health care: "1. All citizens have the right to health. 2. All citizens shall be entitled to a free choice of physician. 3. The rights referred to in Sections 1 and 2 shall be conditional on the financial contribution of the citizen in question for the various health benefits made available to him by the public health services as a whole. 4. The Government shall define general policy, determine the organization and functioning of the public and private health services, and work to improve the health of the population and the lot of the individuals and social groups making up the national community. 5. In order to ensure that establishments in the public sector function properly, their administration shall be carried out either within the normal framework of the financial system of the public sector or in partial independence. 6. The tariffs for all of the services provided in establishments in the public sector shall be fixed within the framework of the Law on finance. 7. In the payment of health costs, the practice of payment by a third party shall be authorized. To this end, contracts to meet the costs of health services for employees in the private or public sectors may be concluded between, on the one hand, the department responsible for public health, and, on the other, private corporations and undertakings, quasi-public corporations, associations and corporations, and mutual social welfare associations. 8. Civil servants and other employees of the State and their families and other social and economic categories shall contribute to health costs in accordance with a proportion which shall be defined by a decree adopted in the Council of Ministers. 9. The State shall meet the health costs of patients recognized as welfare cases. Only patients holding a welfare card may have their expenses met in this way. 
The card shall be issued by the State or local authorities in accordance with the relevant rules in force. The State shall define the amount of the contribution payable by the local authorities."
40 CFR 35.928-4 - Moratorium on industrial cost recovery payments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... industrial users defined in paragraphs (a) and (b) of the definition in § 35.905 pay industrial cost recovery... industrial cost recovery charges incurred for accounting periods or portions of periods ending before January... defined in paragraphs (a) and (b) of the definition in § 35.905 to pay industrial cost recovery payments...
An approach to rescheduling activities based on determination of priority and disruptivity
NASA Technical Reports Server (NTRS)
Sponsler, Jeffrey L.; Johnston, Mark D.
1990-01-01
A constraint-based scheduling system called SPIKE is being used to create long term schedules for the Hubble Space Telescope. Feedback from the spacecraft or from other ground support systems may invalidate some scheduling decisions, and the activities concerned must be reconsidered. A function, rescheduling priority, is defined which, for a given activity, performs a heuristic analysis and produces a relative numerical value used to rank all such entities in the order in which they should be rescheduled. A function, disruptivity, is also defined that places a relative numeric value on how much a pre-existing schedule would have to change in order to reschedule an activity. Using these functions, two algorithms (a stochastic neural network approach and an exhaustive search approach) are proposed to find the best place to reschedule an activity. Prototypes were implemented, and preliminary testing reveals that the exhaustive technique produces only marginally better results at much greater computational cost.
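A hedged sketch of the two-function idea, with invented stand-in heuristics (the paper's actual scoring rules are not given here): a priority function ranks the invalidated activities, and an exhaustive search over candidate slots picks the placement with the smallest disruptivity.

```python
# Illustrative stand-ins for the two scheduling functions.

def rescheduling_priority(activity):
    # Higher value and tighter slack -> reschedule sooner (invented rule).
    return activity["value"] / max(activity["slack_days"], 1)

def disruptivity(schedule, slot):
    # How much the existing schedule would change: here, simply the
    # number of already-scheduled activities that would be displaced.
    return sum(1 for a in schedule if a["slot"] == slot)

invalidated = [
    {"name": "A", "value": 10, "slack_days": 2},
    {"name": "B", "value": 3,  "slack_days": 10},
    {"name": "C", "value": 8,  "slack_days": 1},
]
schedule = [{"name": "X", "slot": 1}, {"name": "Y", "slot": 1},
            {"name": "Z", "slot": 2}]

# Rank invalidated activities by priority (highest first).
ranked = sorted(invalidated, key=rescheduling_priority, reverse=True)
order = [a["name"] for a in ranked]

# Exhaustive search: place the top-ranked activity where it disrupts least.
slots = [1, 2, 3]
best_slot = min(slots, key=lambda s: disruptivity(schedule, s))
```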
Measuring multi-configurational character by orbital entanglement
NASA Astrophysics Data System (ADS)
Stein, Christopher J.; Reiher, Markus
2017-09-01
One of the most critical tasks at the very beginning of a quantum chemical investigation is the choice of either a multi- or single-configurational method. Naturally, many proposals exist to define a suitable diagnostic of the multi-configurational character for various types of wave functions in order to assist this crucial decision. Here, we present a new orbital-entanglement-based multi-configurational diagnostic termed Zs(1). The correspondence of orbital entanglement and static (or non-dynamic) electron correlation permits the definition of such a diagnostic. We chose our diagnostic to meet important requirements such as well-defined limits for pure single-configurational and multi-configurational wave functions. The Zs(1) diagnostic can be evaluated from a partially converged, but qualitatively correct, and therefore inexpensive density matrix renormalisation group wave function as in our recently presented automated active orbital selection protocol. Its robustness and the fact that it can be evaluated at low cost make this diagnostic a practical tool for routine applications.
NASA Technical Reports Server (NTRS)
1983-01-01
Standard descriptions for solar thermal power plants are established and uniform costing methodologies for nondevelopmental balance of plant (BOP) items are developed. The descriptions and methodologies developed are applicable to the major systems. These systems include the central receiver, parabolic dish, parabolic trough, hemispherical bowl, and solar pond. The standard plant is defined in terms of four categories comprising (1) solar energy collection, (2) power conversion, (3) energy storage, and (4) balance of plant. Each of these categories is described in terms of the type and function of components and/or subsystems within the category. A detailed description is given for the BOP category. BOP contains a number of nondevelopmental items that are common to all solar thermal systems. A standard methodology for determining the costs of these nondevelopmental BOP items is given. The methodology is presented in the form of cost equations involving cost factors such as unit costs. A set of baseline values for the normalized cost factors is also given.
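The costing methodology reduces to cost equations built from normalized cost factors such as unit costs. A sketch of one such equation, with made-up unit costs and factors rather than the report's baseline values:

```python
# Illustrative form of a standardized BOP cost equation: a unit cost
# scaled by quantity and a site factor, with an indirect-cost markup.
# All numbers are invented, not from the report.
def bop_item_cost(unit_cost, quantity, site_factor=1.0, indirect=0.25):
    direct = unit_cost * quantity * site_factor
    return direct * (1.0 + indirect)

# e.g. hypothetical land cost: $/acre x acres, plus 25% indirects.
land_cost = bop_item_cost(unit_cost=5000.0, quantity=40)
```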
Balancing building and maintenance costs in growing transport networks
NASA Astrophysics Data System (ADS)
Bottinelli, Arianna; Louf, Rémi; Gherardi, Marco
2017-09-01
The costs associated with the length of links impose unavoidable constraints on the growth of natural and artificial transport networks. When future network developments cannot be predicted, the costs of building and maintaining connections cannot be minimized simultaneously, requiring competing optimization mechanisms. Here, we study a one-parameter nonequilibrium model driven by an optimization functional, defined as the convex combination of building cost and maintenance cost. By varying the coefficient of the combination, the model interpolates between global and local length minimization, i.e., between minimum spanning trees and a local version known as dynamical minimum spanning trees. We show that cost balance within this ensemble of dynamical networks is a sufficient ingredient for the emergence of tradeoffs between the network's total length and transport efficiency, and of optimal strategies of construction. At the transition between two qualitatively different regimes, the dynamics builds up power-law distributed waiting times between global rearrangements, indicating a point of nonoptimality. Finally, we use our model as a framework to analyze empirical ant trail networks, showing its relevance as a null model for cost-constrained network formation.
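To make the two limits of the functional concrete, the sketch below contrasts a global minimum spanning tree (pure maintenance-cost minimization) with greedy sequential attachment (pure building-cost minimization, the dynamical-MST limit) on an arbitrary toy point set; the full one-parameter model interpolates between these two regimes.

```python
import math

# Toy point set; node order matters for the greedy (online) variant.
nodes = [(0, 0), (2.5, 1.4), (1, 0), (2, 0.1), (0.5, 1.5)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mst_length(points):
    # Prim's algorithm: total length of the global minimum spanning tree.
    in_tree, total = {0}, 0.0
    while len(in_tree) < len(points):
        i, j = min(((i, j) for i in in_tree
                    for j in range(len(points)) if j not in in_tree),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
        total += dist(points[i], points[j])
        in_tree.add(j)
    return total

def greedy_length(points):
    # Nodes arrive in order; each attaches to its nearest existing node,
    # minimizing building cost step by step with no rearrangement.
    total = 0.0
    for t in range(1, len(points)):
        total += min(dist(points[t], points[s]) for s in range(t))
    return total

maintenance_optimal = mst_length(nodes)   # global optimum
building_greedy = greedy_length(nodes)    # local, history-dependent
```

The greedy tree is never shorter than the MST; the gap is the price of building without foresight.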
iSDS: a self-configurable software-defined storage system for enterprise
NASA Astrophysics Data System (ADS)
Chen, Wen-Shyen Eric; Huang, Chun-Fang; Huang, Ming-Jen
2018-01-01
Storage is one of the most important aspects of IT infrastructure for various enterprises. But, enterprises are interested in more than just data storage; they are interested in such things as more reliable data protection, higher performance and reduced resource consumption. Traditional enterprise-grade storage satisfies these requirements at high cost. This is because traditional enterprise-grade storage is usually designed and constructed with customised field-programmable gate arrays to achieve high-end functionality. However, in this ever-changing environment, enterprises request storage with more flexible deployment and at lower cost. Moreover, the rise of new application fields, such as social media, big data, video streaming services etc., makes operational tasks for administrators more complex. In this article, a new storage system called intelligent software-defined storage (iSDS), based on software-defined storage, is described. More specifically, this approach advocates using software to replace features provided by traditional customised chips. To alleviate the management burden, it also advocates applying machine learning to automatically configure storage to meet the dynamic requirements of workloads running on storage. This article focuses on the analysis feature of the iSDS cluster by detailing its architecture and design.
FEL for the polymer processing industries
NASA Astrophysics Data System (ADS)
Kelley, Michael J.
1997-05-01
Polymers are everywhere in modern life because of their unique combination of end-use functionalities, ease of processing, recycling potential and modest cost. The physical and economic scope of the infrastructure committed to present polymers makes the introduction of entirely new chemistry unlikely. Rather, the breadth of commercial offerings is more likely to shrink in the face of the widening mandate for recycling, especially of packaging. Improved performance and new functionality must therefore come by routes such as surface modification. However, they must come with little environmental impact and at painfully low cost. Processing with strongly absorbed light offers unique advantages. The journal and patent literatures disclose a number of examples of benefits that can be achieved, principally by use of excimer lasers or special UV lamps. Examples of commercialization are few, however, because of the unit cost and maximum scale of existing light sources. A FEL, however, offers unique advantages: tunability to the optimum wavelength, potential for scale-up to high average power, and a path to an attractively low unit cost of light. A business analysis of prospective applications defines the technical and economic requirements a FEL for polymer surface processing must meet. These are compared to FEL technology as it now stands and as it is envisioned.
Hanford business structure for HANDI 2000 business management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, D.
The Hanford Business Structure integrates the project's technical, schedule, and cost baselines; implements the use of a standard code of accounts; and streamlines performance reporting and cost collection. Technical requirements drive the technical functions and come from the RDD 100 database. The functions will be identified in the P3 scheduling system and also in the PeopleSoft system. Projects will break their work down from the technical requirements in the P3 schedules. When the level at which they want to track cost via the code of accounts is reached, a Project ID will be generated in the PeopleSoft system. P3 may carry more detailed schedules below the Project ID level. The standard code of accounts will identify discrete work activities done across the site and various projects. They will include direct and overhead type work scopes. Activities in P3 will roll up to this standard code of accounts. The field that will be used to record this in PeopleSoft is "Activity". In Passport it is a user-defined field. It will have to be added to other feeder systems. Project ID and code of accounts are required fields on all cost records.
Moss, Marshall E.; Gilroy, Edward J.
1980-01-01
This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)
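The flavor of the sequential-search optimization can be sketched as a greedy budget allocation, assuming an illustrative variance model var_i = a_i / n_i (the report derives the actual variance as a function of visit frequency from optimal estimation theory) and a fixed cost per route trip; all numbers are invented.

```python
# Greedy sequential search: repeatedly buy the route trip with the
# largest error-variance reduction per dollar, within the budget.
gages = {"G1": 40.0, "G2": 25.0, "G3": 10.0}     # variance coefficients a_i
trip_cost = {"G1": 300.0, "G2": 300.0, "G3": 500.0}
budget = 9000.0
min_visits = 6                                   # policy floor on visits

visits = {g: min_visits for g in gages}
spent = sum(trip_cost[g] * visits[g] for g in gages)

def total_variance(v):
    return sum(a / v[g] for g, a in gages.items())

while True:
    def gain(g):  # variance reduction per dollar of one extra trip
        a, n = gages[g], visits[g]
        return (a / n - a / (n + 1)) / trip_cost[g]
    g = max(gages, key=gain)
    if spent + trip_cost[g] > budget:
        break
    visits[g] += 1
    spent += trip_cost[g]
```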
Organizing for low cost space transportation
NASA Technical Reports Server (NTRS)
Lee, C. M.
1977-01-01
The paper describes the management concepts and organizational structure NASA is establishing to operate the Space Transportation System. Policies which would encourage public and commercial organizations and private individuals to use the new STS are discussed, and design criteria for experiments, spacecraft, and other systems elements are considered. The design criteria are intended to facilitate cost reductions for space operations. NASA plans for the transition from currently used expendable launch vehicles to Shuttle use and Shuttle pricing policies are explained in detail. Hardware development is basically complete, management functions have been defined, pricing policies have been published, and procedures for user contact and services have been placed into operation.
Storage capacity: how big should it be
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malina, M.A.
1980-01-28
A mathematical model was developed for determining the economically optimal storage capacity of a given material or product at a manufacturing plant. The optimum was defined as a trade-off between the inventory-holding costs and the cost of customer-service failures caused by insufficient stocks for a peak-demand period. The order-arrival, production, storage, and shipment process was simulated by Monte Carlo techniques to calculate the probability of order delays for various lengths of time as a function of storage capacity. Example calculations for the storage of a bulk liquid chemical in tanks showed that the conclusions arrived at, via this model, are comparatively insensitive to errors made in estimating the capital cost of storage or the risk of losing an order because of a late delivery.
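A minimal Monte Carlo sketch of the same idea, with invented arrival, production, and demand parameters: simulate daily operation for a given tank capacity and estimate the probability that an order is delayed by insufficient stock.

```python
import random

def delay_probability(capacity, days=10_000, seed=1):
    """Estimate P(order delayed) for a tank of the given capacity."""
    rng = random.Random(seed)
    production_per_day = 10.0            # steady production into the tank
    stock, delayed, orders = capacity / 2, 0, 0
    for _ in range(days):
        stock = min(capacity, stock + production_per_day)
        if rng.random() < 0.30:          # an order arrives today
            orders += 1
            demand = rng.uniform(5, 60)  # order size
            if demand > stock:
                delayed += 1             # insufficient stock: late delivery
                stock = 0.0              # ship what is on hand
            else:
                stock -= demand
    return delayed / orders

p_small = delay_probability(capacity=50)
p_large = delay_probability(capacity=400)
```

Sweeping `capacity` and adding holding and failure costs to each outcome would reproduce the trade-off the model optimizes.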
NASA Astrophysics Data System (ADS)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2017-04-01
Ionospheric tomography is a very cost-effective method that is used frequently for modeling electron density distributions. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based ionospheric tomography. Because a wavelet neural network (WNN) with the back-propagation (BP) algorithm is used in the RMTNN method, the new method is named modified RMTNN (MRMTNN). To train the WNN with the BP algorithm, two cost functions are defined: a total and a vertical cost function. By minimizing these cost functions, temporal and spatial ionospheric variations are studied. The GPS measurements of the international GNSS service (IGS) in central Europe have been used for constructing a 3-D image of the electron density. Three days (2009.04.15, 2011.07.20 and 2013.06.01) with different solar activity indices are used for the processing. To validate and better assess the reliability of the proposed method, 4 ionosonde and 3 testing stations have been used. Also, the results of MRMTNN have been compared to those of the RMTNN method, the international reference ionosphere model 2012 (IRI-2012) and the spherical cap harmonic (SCH) method as a local ionospheric model. The comparison of MRMTNN results with the RMTNN, IRI-2012 and SCH models shows that the root mean square error (RMSE) and standard deviation of the proposed approach are superior to those of the traditional method.
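As a much-simplified, hedged stand-in for the MRMTNN training (a linear voxel model replaces the wavelet neural network), the sketch below fits voxel densities by gradient descent on a combined cost: a slant-TEC data misfit playing the role of the total cost, plus a vertical smoothness penalty playing the role of the vertical cost. Geometry, sizes, and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 8
A = rng.uniform(0.0, 1.0, size=(20, n_vox))   # ray-voxel path lengths
true_density = np.linspace(1.0, 3.0, n_vox)   # smooth vertical profile
stec = A @ true_density                       # simulated slant TEC data

D = np.diff(np.eye(n_vox), axis=0)            # vertical difference operator
beta = 0.1                                    # weight of the vertical cost

# Minimize ||A x - stec||^2 + beta * ||D x||^2 by gradient descent.
x = np.zeros(n_vox)
lr = 0.01
for _ in range(5000):
    grad = A.T @ (A @ x - stec) + beta * (D.T @ (D @ x))
    x -= lr * grad

rmse = float(np.sqrt(np.mean((x - true_density) ** 2)))
```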
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1990-01-01
Design-to-cost is a popular technique for controlling costs. Although qualitative techniques exist for implementing design to cost, quantitative methods are sparse. In the launch vehicle and spacecraft engineering process, the question whether to minimize mass is usually an issue. The lack of quantification in this issue leads to arguments on both sides. This paper presents a mathematical technique which quantifies both the design-to-cost process and the mass/complexity issue. Parametric cost analysis generates and applies mathematical formulas called cost estimating relationships. In their most common forms, they are continuous and differentiable. This property permits the application of the mathematics of differentiable manifolds. Although the terminology sounds formidable, the application of the techniques requires only a knowledge of linear algebra and ordinary differential equations, common subjects in undergraduate scientific and engineering curricula. When the cost c is expressed as a differentiable function of n system metrics, setting the cost c to be a constant generates an n-1 dimensional subspace of the space of system metrics such that any set of metric values in that space satisfies the constant design-to-cost criterion. This space is a differentiable manifold upon which all mathematical properties of a differentiable manifold may be applied. One important property is that an easily implemented system of ordinary differential equations exists which permits optimization of any function of the system metrics, mass for example, over the design-to-cost manifold. A dual set of equations defines the directions of maximum and minimum cost change. A simplified approximation of the PRICE H(TM) production cost model is used to generate this set of differential equations over [mass, complexity] space. The equations are solved in closed form to obtain the one dimensional design-to-cost trade and design-for-cost spaces. 
Preliminary results indicate that cost is relatively insensitive to changes in mass and that the reduction of complexity, both in the manufacturing process and of the spacecraft, is dominant in reducing cost.
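Since the actual PRICE H production equation is proprietary, a hypothetical power-law cost estimating relationship c = A * m**alpha * k**beta over [mass m, complexity k] space illustrates the construction: fixing c = c0 defines the one-dimensional design-to-cost manifold, which for this form can be solved in closed form.

```python
# Illustrative CER coefficients (not the PRICE H values).
A, alpha, beta = 2.0, 0.8, 1.5

def cost(m, k):
    return A * m**alpha * k**beta

c0 = cost(100.0, 3.0)            # pick a design point; fix its cost

def mass_on_manifold(k):
    # Solve c(m, k) = c0 for m: the constant-cost trade curve m(k).
    return (c0 / (A * k**beta)) ** (1.0 / alpha)

# Moving along the manifold trades complexity against mass at constant
# cost: reducing complexity from 3.0 to 1.5 allows a heavier design.
m_at_lower_k = mass_on_manifold(1.5)
# Log-sensitivities: d(ln c)/d(ln m) = alpha, d(ln c)/d(ln k) = beta.
# With beta > alpha, cost is more sensitive to complexity than to mass,
# mirroring the abstract's qualitative conclusion.
```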
Climate Intervention as an Optimization Problem
NASA Astrophysics Data System (ADS)
Caldeira, Ken; Ban-Weiss, George A.
2010-05-01
Typically, climate model simulations of intentional intervention in the climate system have taken the approach of imposing a change (eg, in solar flux, aerosol concentrations, aerosol emissions) and then predicting how that imposed change might affect Earth's climate or chemistry. Computations proceed from cause to effect. However, humans often proceed from "What do I want?" to "How do I get it?" One approach to thinking about intentional intervention in the climate system ("geoengineering") is to ask "What kind of climate do we want?" and then ask "What pattern of radiative forcing would come closest to achieving that desired climate state?" This involves defining climate goals and a cost function that measures how closely those goals are attained. (An important next step is to ask "How would we go about producing these desired patterns of radiative forcing?" However, this question is beyond the scope of our present study.) We performed a variety of climate simulations in NCAR's CAM3.1 atmospheric general circulation model with a slab ocean model and thermodynamic sea ice model. We then evaluated, for a specific set of climate forcing basis functions (ie, aerosol concentration distributions), the extent to which the climate response to a linear combination of those basis functions was similar to a linear combination of the climate response to each basis function taken individually. We then developed several cost functions (eg, relative to the 1xCO2 climate, minimize rms difference in zonal and annual mean land temperature, minimize rms difference in zonal and annual mean runoff, minimize rms difference in a combination of these temperature and runoff indices) and then predicted optimal combinations of our basis functions that would minimize these cost functions. Lastly, we produced forward simulations of the predicted optimal radiative forcing patterns and compared these with our expected results. 
Obviously, our climate model is much simpler than reality and predictions from individual models do not provide a sound basis for action; nevertheless, our model results indicate that the general approach outlined here can lead to patterns of radiative forcing that make the zonal annual mean climate of a high CO2 world markedly more similar to that of a low CO2 world simultaneously for both temperature and hydrological indices, where degree of similarity is measured using our explicit cost functions. We restricted ourselves to zonally uniform aerosol concentration distributions that can be defined in terms of a positive-definite quadratic equation on the sine of latitude. Under this constraint, applying an aerosol distribution in a 2xCO2 climate that minimized a combination of rms difference in zonal and annual mean land temperature and runoff relative to the 1xCO2 climate, the rms difference in zonal and annual mean temperatures was reduced by ~90% and the rms difference in zonal and annual mean runoff was reduced by ~80%. This indicates that there may be potential for stratospheric aerosols to diminish simultaneously both temperature and hydrological cycle changes caused by excess CO2 in the atmosphere. Clearly, our model does not include many factors (eg, socio-political consequences, chemical consequences, ocean circulation changes, aerosol transport and microphysics), so we do not argue strongly for our specific climate model results; however, we do argue strongly in favor of our methodological approach. The proposed approach is general, in the sense that cost functions can be developed that represent different valuations. While the choice of appropriate cost functions is inherently a value judgment, evaluating those functions for a specific climate simulation is a quantitative exercise. 
Thus, the use of explicit cost functions in evaluating model results for climate intervention scenarios is a clear way of separating value judgments from purely scientific and technical issues.
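Under the linearity assumption the study tests, the coefficients minimizing a quadratic (rms) cost function follow from ordinary least squares. A sketch with synthetic stand-ins for the zonal-mean response patterns and the target pattern (none of these numbers come from the model runs):

```python
import numpy as np

lat_bands = 16
rng = np.random.default_rng(42)

# Zonal-mean response pattern of each forcing basis function
# (e.g., temperature change per unit aerosol burden in each band).
R = rng.normal(size=(lat_bands, 3))

# Target: the pattern of change we want to cancel (2xCO2 minus 1xCO2).
delta = rng.normal(size=lat_bands)

# Choose coefficients c to minimize rms(R @ c + delta).
c, *_ = np.linalg.lstsq(R, -delta, rcond=None)

residual_rms = float(np.sqrt(np.mean((R @ c + delta) ** 2)))
baseline_rms = float(np.sqrt(np.mean(delta ** 2)))
```

The forward-simulation step in the study then checks whether the real (nonlinear) model response to the optimal combination matches this linear prediction.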
Ashley, Dennis W; Mullins, Robert F; Dente, Christopher J; Garlow, Laura; Medeiros, Regina S; Atkins, Elizabeth V; Solomon, Gina; Abston, Dena; Ferdinand, Colville H
2017-09-01
Trauma center readiness costs are incurred to maintain essential infrastructure and capacity to provide emergent services on a 24/7 basis. These costs are not captured by traditional hospital cost accounting, and no national consensus exists on appropriate definitions for each cost. Therefore, in 2010, stakeholders from all Level I and II trauma centers developed a survey tool standardizing and defining trauma center readiness costs. The survey tool underwent minor revisions to provide further clarity, and the survey was repeated in 2013. The purpose of this study was to provide a follow-up analysis of readiness costs for Georgia's Level I and Level II trauma centers. Using the American College of Surgeons Resources for Optimal Care of the Injured Patient guidelines, four readiness cost categories were identified: Administrative, Clinical Medical Staff, Operating Room, and Education/Outreach. Through conference calls, webinars and face-to-face meetings with financial officers, trauma medical directors, and program managers from all trauma centers, standardized definitions for reporting readiness costs within each category were developed. This resulted in a survey tool for centers to report their individual readiness costs for one year. The total readiness cost for all Level I trauma centers was $34,105,318 (avg $6,821,064) and all Level II trauma centers was $20,998,019 (avg $2,333,113). Methodology to standardize and define readiness costs for all trauma centers within the state was developed. Average costs for Level I and Level II trauma centers were identified. This model may be used to help other states define and standardize their trauma readiness costs.
Frank, Ellen; Rush, A John; Blehar, Mary; Essock, Susan; Hargreaves, William; Hogan, Michael; Jarrett, Robin; Johnson, Robert L; Katon, Wayne J; Lavori, Phillip; McNulty, James P; Niederehe, George; Ryan, Neal; Stuart, Gail; Thomas, Stephen B; Tollefson, Gary D; Vitiello, Benedetto
2002-09-15
As part of the National Institute of Mental Health Strategic Plan for Mood Disorders Research effort, the Clinical Trials and Translation Workgroup was asked to define priorities for clinical trials in mood disorders and for research on how best to translate the results of such research to clinical practice settings. Through two face-to-face meetings and a series of conference calls, we established priorities based on the literature to date and what was known about research currently in progress in this area. We defined five areas of priority that cut across developmental stages, while noting that research on adult mood disorders was at a more advanced stage in each of these areas than research on child or geriatric disorders. The five areas of priority are: 1) maximizing the effectiveness and cost-effectiveness of initial (acute) treatments for mood disorders already known to be efficacious in selected populations and settings when they are applied across all populations and care settings; 2) learning what further treatments or services are most likely to reduce symptoms and improve functioning when the first treatment is delivered well, but the mood disorder does not remit or show adequate improvement; 3) learning what treatments or services are most cost-effective in preventing recurrence or relapse and maintaining optimal functioning after a patient's mood disorder has remitted or responded maximally to treatment; 4) developing and validating clinical, psychosocial, biological, or other markers that predict: a) which treatments are most effective, b) course of illness, c) risk of adverse events/tolerability and acceptability for individual patients or well-defined subgroups of patients; 5) developing clinical trial designs and methods that result in lower research costs and greater generalizability earlier in the treatment development and testing process. A rationale for the importance of each of these priorities is provided.
HSI top-down requirements analysis for ship manpower reduction
NASA Astrophysics Data System (ADS)
Malone, Thomas B.; Bost, J. R.
2000-11-01
U.S. Navy ship acquisition programs such as DD 21 and CVNX are increasingly relying on top down requirements analysis (TDRA) to define and assess design approaches for workload and manpower reduction, and for ensuring required levels of human performance, reliability, safety, and quality of life at sea. The human systems integration (HSI) approach to TDRA begins with a function analysis which identifies the functions derived from the requirements in the Operational Requirements Document (ORD). The function analysis serves as the function baseline for the ship, and also supports the definition of RDT&E and Total Ownership Cost requirements. A mission analysis is then conducted to identify mission scenarios, again based on requirements in the ORD, and the Design Reference Mission (DRM). This is followed by a mission/function analysis which establishes the function requirements to successfully perform the ship's missions. Function requirements of major importance for HSI are information, performance, decision, and support requirements associated with each function. An allocation of functions defines the roles of humans and automation in performing the functions associated with a mission. Alternate design concepts, based on function allocation strategies, are then described, and task networks associated with the concepts are developed. Task network simulations are conducted to assess workloads and human performance capabilities associated with alternate concepts. An assessment of the affordability and risk associated with alternate concepts is performed, and manning estimates are developed for feasible design concepts.
Scenario for concurrent conceptual assembly line design: A case study
NASA Astrophysics Data System (ADS)
Mas, F.; Ríos, J.; Menéndez, J. L.
2012-04-01
The decision to design and build a new aircraft is preceded by years of research and study. Different disciplines work together throughout the lifecycle to ensure not only a complete functional definition of the product, but also a complete industrialization, a marketing plan, a maintenance plan, etc. This case study focuses on the conceptual design phase. During this phase, the design solutions that will meet the functional and industrial requirements, i.e. the basic requirements of industrialization, are defined. Several alternatives are studied, and the most attractive in terms of performance and cost requirements is selected. As a result of the study of these alternatives, it is possible to define an early conceptual design of the assembly line and its basic parameters. The plant needs, long-cycle jigs and tools or industrial means, and human resources with the necessary skills can be determined in advance.
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel A.
2016-11-01
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
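The figure of merit can be sketched for the simplest restart strategy: a solver that succeeds with probability p per call, at runtime t plus a fixed cost-per-call c, is repeated until it succeeds, so the number of calls is geometric and the expected total cost is (t + c)/p. The numbers below are illustrative, not from the paper's benchmarks.

```python
def expected_cost(t, p, cost_per_call):
    """Expected total cost of repeat-until-success: (t + c) / p,
    since the geometric number of calls has mean 1/p."""
    return (t + cost_per_call) / p

# A fast but unreliable solver vs. a slow but reliable one. With a
# nonzero cost-per-call, the ranking can differ from pure time-to-
# solution comparisons.
fast = expected_cost(t=0.1, p=0.02, cost_per_call=0.05)
slow = expected_cost(t=2.0, p=0.90, cost_per_call=0.05)
```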
NASA Technical Reports Server (NTRS)
1974-01-01
The work breakdown structure (WBS) dictionary for the Earth Observatory Satellite (EOS) is defined. The various elements of the EOS program are examined, including the aggregate of hardware, computer software, services, and data required to develop, produce, test, support, and operate the space vehicle and the companion ground data management system. A functional analysis of the EOS mission is developed. The operations for three typical EOS missions (Delta-, Titan-, and Shuttle-launched) are considered. The functions were determined for the top program elements, and mission operations, function 2.0, was expanded into level-one functions. Selection of ten level-one functions for further analysis into level-two and level-three functions was based on concern for EOS operations and associated interfaces.
Lee, Daniel J; Cheetham, Philippa; Badani, Ketan K
2010-02-01
Study type: therapy (case series); level of evidence: 4. The aims were to evaluate factors that affect compliance in men who enroll in a phosphodiesterase type 5 inhibitor (PDE5I) protocol after nerve-sparing robot-assisted prostatectomy (RAP), and to report on short-term outcomes, as PDE5Is may help restore erectile function after RAP and patient adherence to the regimen is a factor that can potentially affect outcome. We prospectively followed 77 men who had nerve-sparing RAP and enrolled in a postoperative penile rehabilitation protocol. The men received either sildenafil citrate or tadalafil three times weekly. The minimum follow-up was 8 weeks. Potency was defined as erection adequate for penetration and complete intercourse. Compliance was defined as adherence to the regimen for ≥2 months. The mean age of the cohort was 57.8 years and the median follow-up was 8 months. In all, 32% of the men discontinued the therapy <2 months after RAP and were deemed noncompliant, with an additional 39% discontinuing therapy by 6 months; the high cost of medication was the primary reason (65%). Long-term compliance and preoperative erectile dysfunction were independent predictors of potency return after adjusting for age and nerve sparing. The high cost of medication remains a significant barrier to maintaining therapy. Noncompliance with PDE5I therapy in a tertiary care centre was much higher than reported in clinical trial settings. With longer-term follow-up, we need to further define the factors that improve overall recovery of sexual function after RAP.
An Efficient Scheduling Scheme on Charging Stations for Smart Transportation
NASA Astrophysics Data System (ADS)
Kim, Hye-Jin; Lee, Junghoon; Park, Gyung-Leen; Kang, Min-Jae; Kang, Mikyung
This paper proposes a reservation-based scheduling scheme for charging stations to decide the service order of multiple requests, aiming to improve the likelihood that electric vehicles' charging requests are satisfied. The proposed scheme makes it possible for a customer to reduce the charging cost and waiting time, while a station can extend the number of clients it can serve. A linear rank function is defined based on estimated arrival time, waiting time bound, and the amount of power needed, reducing the scheduling complexity. On receiving requests from clients, the power station decides the charging order by the rank function and then replies to the requesters with the waiting time and cost it can guarantee. Each requester can decide whether to charge at that station or try another station. This scheduler can evolve to integrate new pricing policies and services, enriching the electric vehicle transport system.
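A minimal sketch of such a linear rank function, with hypothetical weights and units (the paper does not publish its coefficients): requests carrying an estimated arrival time, waiting-time bound, and needed power are ranked, and the station serves lower-ranked requests first.

```python
# Hypothetical weights and units; illustrative only.
W_ARRIVAL, W_WAIT, W_POWER = 1.0, 1.0, 0.05

def rank(req):
    """Linear rank over estimated arrival time (min), waiting-time bound (min),
    and needed power (kWh): smaller rank is served sooner."""
    return (W_ARRIVAL * req["arrival"]
            + W_WAIT * req["wait_bound"]
            + W_POWER * req["power"])

def charge_order(requests):
    """Station-side decision: serve requests in increasing rank order."""
    return sorted(requests, key=rank)

requests = [
    {"id": "EV1", "arrival": 10, "wait_bound": 5, "power": 20},
    {"id": "EV2", "arrival": 2, "wait_bound": 30, "power": 40},
    {"id": "EV3", "arrival": 5, "wait_bound": 8, "power": 10},
]
order = [r["id"] for r in charge_order(requests)]
```

Sorting by a precomputed scalar rank keeps the scheduling complexity at O(n log n) per batch of requests.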
NASA Astrophysics Data System (ADS)
Bhansali, Gaurav; Singh, Bhanu Pratap; Kumar, Rajesh
2016-09-01
In this paper, the problem of microgrid optimisation with storage is addressed in a broader way, rather than being confined to loss minimisation. Unitised regenerative fuel cell (URFC) systems have been studied and employed in microgrids to store energy and feed it back into the system when required. A value function dependent on line losses, URFC system operational cost, and stored energy at the end of the day is defined here. The function is highly complex, nonlinear, and multidimensional in nature. Therefore, heuristic optimisation techniques in combination with load flow analysis are used to resolve the network and time-domain complexity of the problem. Particle swarm optimisation with the forward/backward sweep algorithm ensures optimal operation of the microgrid, thereby minimising its operational cost. Results are shown and are found to improve consistently with the evolution of the solution strategy.
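The heuristic optimisation step can be sketched with a bare-bones particle swarm optimiser on a toy quadratic cost; the URFC value function and the forward/backward-sweep load flow of the paper are omitted, and every parameter below is illustrative.

```python
import random

def pso(cost, dim, n_particles=20, iters=100, lo=-10.0, hi=10.0, seed=1):
    """Minimal particle swarm optimizer (a stand-in for the paper's
    PSO + forward/backward sweep; the load-flow part is omitted)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy stand-in for the value function: a quadratic "operational cost".
toy_cost = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
best, best_cost = pso(toy_cost, dim=2)
```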
RETScreen Plus Software Tutorial
NASA Technical Reports Server (NTRS)
Ganoe, Rene D.; Stackhouse, Paul W., Jr.; DeYoung, Russell J.
2014-01-01
Greater emphasis is being placed on reducing both the carbon footprint and the energy cost of buildings. A building's energy usage depends upon many factors; one of the most important is the local weather and climate conditions to which its electrical, heating, and air conditioning systems must respond. Incorporating renewable energy systems, including solar systems, to supplement energy supplies and increase energy efficiency is important for saving costs and reducing emissions. Retrofitting technologies to buildings also requires knowledge of building performance in its current state, the potential future climate state, projection of potential savings with capital investment, and then monitoring of performance once the improvements are made. RETScreen Plus is a performance analysis software module that supplies the needed functions of monitoring current building performance, targeting projected energy efficiency improvements, and verifying improvements once completed. This tutorial defines the functions of RETScreen Plus and outlines the general procedure for monitoring and reporting building energy performance.
Constellation labeling optimization for bit-interleaved coded APSK
NASA Astrophysics Data System (ADS)
Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe
2016-05-01
This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its power and spectral efficiency together with its robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining it with DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
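A binary switching pass can be sketched as greedy label swaps that reduce a cost penalizing bit differences between labels of nearby constellation points. The 4-point ring and the distance-weighted cost below are toy stand-ins for the 32-APSK constellation and the Euclidean-distance/mutual-information cost functions of the paper.

```python
import itertools, math

def hamming(a, b):
    """Number of differing bits between two integer labels."""
    return bin(a ^ b).count("1")

def labeling_cost(points, labels):
    """Penalize bit differences between labels of nearby points: a
    simplified surrogate for the paper's combined cost functions."""
    cost = 0.0
    for i, j in itertools.combinations(range(len(points)), 2):
        cost += hamming(labels[i], labels[j]) / abs(points[i] - points[j])
    return cost

def binary_switching(points, labels):
    """Greedy pairwise label swaps until no swap lowers the cost."""
    labels = list(labels)
    best = labeling_cost(points, labels)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(labels)), 2):
            labels[i], labels[j] = labels[j], labels[i]
            c = labeling_cost(points, labels)
            if c < best - 1e-12:
                best, improved = c, True
            else:
                labels[i], labels[j] = labels[j], labels[i]   # revert
    return labels, best

# Toy 4-point PSK ring (not the 32-APSK of DVB-S2) with a poor initial mapping.
pts = [complex(math.cos(2 * math.pi * k / 4), math.sin(2 * math.pi * k / 4))
       for k in range(4)]
start = [0, 3, 1, 2]                      # some adjacent points differ in 2 bits
optimized, cost_after = binary_switching(pts, start)
```

On this toy ring the algorithm recovers a Gray-type mapping in which all adjacent points differ in a single bit.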
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.
1987-01-01
The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
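For a Gaussian measurement-noise model, the maximum likelihood cost reduces to a sum of squared output errors. A sketch for a one-parameter first-order system, an assumed toy stand-in for the paper's simple simulated aircraft example, with a crude derivative-sign bisection in place of the Newton-Raphson steps typically used:

```python
import math

# Simulated noise-free measurements of a first-order response x' = -a*x,
# with true decay parameter a = 0.5 (illustrative values only).
true_a = 0.5
times = [0.1 * k for k in range(20)]
z = [math.exp(-true_a * t) for t in times]

def cost(a):
    """Maximum-likelihood cost: with Gaussian measurement noise of known
    covariance it reduces to a sum of squared output errors."""
    return sum((zi - math.exp(-a * t)) ** 2 for zi, t in zip(z, times))

# Minimize by bisecting on the sign of a forward-difference slope.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if cost(mid + 1e-6) < cost(mid):   # cost still decreasing: move right
        lo = mid
    else:                              # cost increasing: move left
        hi = mid
a_hat = 0.5 * (lo + hi)
```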
NASA Technical Reports Server (NTRS)
Bickford, R. L.; Collamore, F. N.; Gage, M. L.; Morgan, D. B.; Thomas, E. R.
1992-01-01
The objectives of this task were to: (1) estimate the technology readiness of an integrated control and health monitoring (ICHM) system for the Aerojet 7500 lbF Orbit Transfer Vehicle engine preliminary design, assuming space-based operations; and (2) estimate the remaining cost to advance this technology by 1996 to a NASA-defined 'readiness level 6', wherein the technology has been demonstrated with a system validation model in a simulated environment. The work was accomplished through four subtasks. In Subtask 1, the minimally required functions for the control and monitoring system were specified. The elements required to perform these functions were specified in Subtask 2. In Subtask 3, the technology readiness level of each element was assessed. Finally, in Subtask 4, the development cost and schedule requirements were estimated for bringing each element to 'readiness level 6'.
Jones, Roy W; Romeo, Renee; Trigg, Richard; Knapp, Martin; Sato, Azusa; King, Derek; Niecko, Timothy; Lacey, Loretto
2015-03-01
Most models determining how patient and caregiver characteristics and costs change with Alzheimer's disease (AD) progression focus on one aspect, for example, cognition. AD is inadequately defined by a single domain; tracking progression by focusing on a single aspect may mean other important aspects are insufficiently addressed. Dependence has been proposed as a better marker for following disease progression. This was a cross-sectional observational study (18 UK sites). Two hundred forty-nine community-dwelling or institutionalized patients with possible/probable AD and Mini-Mental State Examination scores of 3-26, each with a knowledgeable informant, participated. Significant associations were noted between dependence (Dependence Scale [DS]) and clinical measures of severity (cognition, function, and behavior). Bivariate and multivariate models demonstrated significant associations between DS and service use cost, patient quality of life, and caregiver perceived burden. The construct of dependence may help to translate the combined impact of changes in cognition, function, and behavior into a more readily interpretable form. The DS is useful for assessing patients with AD in clinical trials/research. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Wind farm topology-finding algorithm considering performance, costs, and environmental impacts.
Tazi, Nacef; Chatelet, Eric; Bouzidi, Youcef; Meziane, Rachid
2017-06-05
Finding the optimal power capacity of wind farms has become a pressing problem for investors and decision makers; onshore wind farms are subject to performance, economic, and environmental constraints. The aim of this work is to define the best installed capacity (best topology) that maximizes performance and profits while also considering environmental impacts. In this article, we continue recent work on a wind farm topology-finding algorithm. The proposed resolution technique is based on finding the topology of the system that maximizes wind farm performance (availability) under constraints on costs and capital investment. The global warming potential of the wind farm is calculated and taken into account in the results. A case study is presented using data and constraints similar to those collected from wind farm constructors, managers, and maintainers. Multi-state systems (MSS), the universal generating function (UGF), and wind and load charge functions are applied. An economic study was conducted to assess the wind farm investment: net present value (NPV) and levelized cost of energy (LCOE) were calculated for the best topologies found.
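The NPV and LCOE figures follow from standard discounting formulas; a sketch with illustrative numbers (not the article's case-study data):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the (negative) initial investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def lcoe(rate, capex, annual_opex, annual_energy_mwh, years):
    """Levelized cost of energy: discounted lifetime costs divided by
    discounted lifetime energy production."""
    costs = capex + sum(annual_opex / (1 + rate) ** t for t in range(1, years + 1))
    energy = sum(annual_energy_mwh / (1 + rate) ** t for t in range(1, years + 1))
    return costs / energy

# Illustrative 20-year project: 1000 k-currency-units invested up front,
# 150 per year net revenue, discounted at 8%.
flows = [-1000.0] + [150.0] * 20
project_npv = npv(0.08, flows)
cost_per_mwh = lcoe(0.08, capex=1000.0, annual_opex=20.0,
                    annual_energy_mwh=2000.0, years=20)
```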
Whitehurst, David G T; Bryan, Stirling; Lewis, Martyn; Hill, Jonathan; Hay, Elaine M
2012-11-01
Stratified management for low back pain according to patients' prognosis and matched care pathways has been shown to be an effective treatment approach in primary care. The aim of this within-trial study was to determine the economic implications of providing such an intervention, compared with non-stratified current best practice, within specific risk-defined subgroups (low-risk, medium-risk and high-risk). Within a cost-utility framework, the base-case analysis estimated the incremental healthcare cost per additional quality-adjusted life year (QALY), using the EQ-5D to generate QALYs, for each risk-defined subgroup. Uncertainty was explored with cost-utility planes and acceptability curves. Sensitivity analyses were performed to consider alternative costing methodologies, including the assessment of societal loss relating to work absence and the incorporation of generic (ie, non-back pain) healthcare utilisation. The stratified management approach was a cost-effective treatment strategy compared with current best practice within each risk-defined subgroup, exhibiting dominance (greater benefit and lower costs) for medium-risk patients and acceptable incremental cost to utility ratios for low-risk and high-risk patients. The likelihood that stratified care provides a cost-effective use of resources exceeds 90% at willingness-to-pay thresholds of £4000 (≈ €4500; $6500) per additional QALY for the medium-risk and high-risk groups. Patients receiving stratified care also reported fewer back pain-related days off work in all three subgroups. Compared with current best practice, stratified primary care management for low back pain provides a highly cost-effective use of resources across all risk-defined subgroups.
Selection of optimal spectral sensitivity functions for color filter arrays.
Parmar, Manu; Reeves, Stanley J
2010-12-01
A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.
Optimal Investment in HIV Prevention Programs: More Is Not Always Better
Brandeau, Margaret L.; Zaric, Gregory S.
2008-01-01
This paper develops a mathematical/economic framework to address the following question: Given a particular population, a specific HIV prevention program, and a fixed amount of funds that could be invested in the program, how much money should be invested? We consider the impact of investment in a prevention program on the HIV sufficient contact rate (defined via production functions that describe the change in the sufficient contact rate as a function of expenditure on a prevention program), and the impact of changes in the sufficient contact rate on the spread of HIV (via an epidemic model). In general, the cost per HIV infection averted is not constant as the level of investment changes, so the fact that some investment in a program is cost effective does not mean that more investment in the program is cost effective. Our framework provides a formal means for determining how the cost per infection averted changes with the level of expenditure. We can use this information as follows: When the program has decreasing marginal cost per infection averted (which occurs, for example, with a growing epidemic and a prevention program with increasing returns to scale), it is optimal either to spend nothing on the program or to spend the entire budget. When the program has increasing marginal cost per infection averted (which occurs, for example, with a shrinking epidemic and a prevention program with decreasing returns to scale), it may be optimal to spend some but not all of the budget. The amount that should be spent depends on both the rate of disease spread and the production function for the prevention program. We illustrate our ideas with two examples: that of a needle exchange program, and that of a methadone maintenance program. PMID:19938440
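The interaction between a production function and an epidemic model can be sketched numerically: a saturating (diminishing-returns) production function lowers the sufficient contact rate, a simple compartmental model converts that into infections, and the marginal cost per infection averted is read off by finite differences. All functional forms and parameter values below are hypothetical, not taken from the paper's needle exchange or methadone examples.

```python
def contact_rate(spend, base=4.0, max_cut=0.5, scale=500.0):
    """Hypothetical production function: expenditure reduces the sufficient
    contact rate with diminishing returns (saturating in spend)."""
    return base * (1.0 - max_cut * spend / (spend + scale))

def total_infections(rate, recovery=2.0, horizon=40.0, dt=0.01):
    """Cumulative infections in a simple SIR-type model (Euler steps);
    a crude stand-in for the paper's epidemic model."""
    s, i, total = 0.99, 0.01, 0.0
    for _ in range(int(horizon / dt)):
        new = rate * s * i * dt
        s -= new
        i += new - recovery * i * dt
        total += new
    return total

def marginal_cost_per_averted(spend, delta=1.0):
    """Finite-difference cost of averting one more infection at this spend."""
    a0 = total_infections(contact_rate(spend))
    a1 = total_infections(contact_rate(spend + delta))
    return delta / (a0 - a1)

m_low = marginal_cost_per_averted(100.0)
m_high = marginal_cost_per_averted(1000.0)
```

With this saturating production function the marginal cost per infection averted rises with expenditure, the regime in which the paper shows it may be optimal to spend some but not all of the budget.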
Gait as solution, but what is the problem? Exploring cost, economy and compromise in locomotion.
Bertram, John E A
2013-12-01
Many studies have examined how legged mammals move, defining 'what' happens in locomotion. However, few ask 'why' those motions occur as they do. The energetic and functional constraints acting on an animal require that locomotion be metabolically 'cost effective', and this in large part determines the strategies available to accomplish the task. Understanding the gaits utilised, within the spectrum of gaits possible, and determining the value of specific relationships among speed, stride length, stride frequency and morphology depend on identifying the fundamental costs involved and the effects of different movement strategies on those costs. It is argued here that a fundamental loss associated with moving on limbs (centre-of-mass momentum and energy loss) and two costs involved in controlling and replacing that loss (muscular work of the supporting limb during stance and muscular work of repositioning the limbs during swing) interact to determine the cost trade-offs involved and the optimisation strategies available for each species and speed. These optimisation strategies are what has been observed and characterised as gait. Copyright © 2013 Elsevier Ltd. All rights reserved.
42 CFR 417.454 - Charges to Medicare enrollees.
Code of Federal Regulations, 2012 CFR
2012-10-01
... preventive services (as defined in § 410.152(l)). (e) Services for which cost sharing may not exceed cost...(b)(14)(B) of the Act. (3) Skilled nursing care defined as services provided during a covered stay in a skilled nursing facility during the period for which cost sharing would apply under Original...
42 CFR 417.454 - Charges to Medicare enrollees.
Code of Federal Regulations, 2013 CFR
2013-10-01
... preventive services (as defined in § 410.152(l)). (e) Services for which cost sharing may not exceed cost...(b)(14)(B) of the Act. (3) Skilled nursing care defined as services provided during a covered stay in a skilled nursing facility during the period for which cost sharing would apply under Original...
42 CFR 417.454 - Charges to Medicare enrollees.
Code of Federal Regulations, 2014 CFR
2014-10-01
... preventive services (as defined in § 410.152(l)). (e) Services for which cost sharing may not exceed cost...(b)(14)(B) of the Act. (3) Skilled nursing care defined as services provided during a covered stay in a skilled nursing facility during the period for which cost sharing would apply under Original...
Placebo effect of medication cost in Parkinson disease: a randomized double-blind study.
Espay, Alberto J; Norris, Matthew M; Eliassen, James C; Dwivedi, Alok; Smith, Matthew S; Banks, Christi; Allendorfer, Jane B; Lang, Anthony E; Fleck, David E; Linke, Michael J; Szaflarski, Jerzy P
2015-02-24
To examine the effect of cost, a traditionally "inactive" trait of intervention, as contributor to the response to therapeutic interventions. We conducted a prospective double-blind study in 12 patients with moderate to severe Parkinson disease and motor fluctuations (mean age 62.4 ± 7.9 years; mean disease duration 11 ± 6 years) who were randomized to a "cheap" or "expensive" subcutaneous "novel injectable dopamine agonist" placebo (normal saline). Patients were crossed over to the alternate arm approximately 4 hours later. Blinded motor assessments in the "practically defined off" state, before and after each intervention, included the Unified Parkinson's Disease Rating Scale motor subscale, the Purdue Pegboard Test, and a tapping task. Measurements of brain activity were performed using a feedback-based visual-motor associative learning functional MRI task. Order effect was examined using stratified analysis. Although both placebos improved motor function, benefit was greater when patients were randomized first to expensive placebo, with a magnitude halfway between that of cheap placebo and levodopa. Brain activation was greater upon first-given cheap but not upon first-given expensive placebo or by levodopa. Regardless of order of administration, only cheap placebo increased activation in the left lateral sensorimotor cortex and other regions. Expensive placebo significantly improved motor function and decreased brain activation in a direction and magnitude comparable to, albeit less than, levodopa. Perceptions of cost are capable of altering the placebo response in clinical studies. This study provides Class III evidence that perception of cost is capable of influencing motor function and brain activation in Parkinson disease. © 2015 American Academy of Neurology.
Barmpoutis, Angelos
2010-01-01
Registration of Diffusion-Weighted MR Images (DW-MRI) can be achieved by registering the corresponding 2nd-order Diffusion Tensor Images (DTI). However, it has been shown that higher-order diffusion tensors (e.g. order-4) outperform the traditional DTI in approximating complex fiber structures such as fiber crossings. In this paper we present a novel method for unbiased group-wise non-rigid registration and atlas construction of 4th-order diffusion tensor fields. To the best of our knowledge there is no other existing method to achieve this task. First we define a metric on the space of positive-valued functions based on the Riemannian metric of real positive numbers (denoted by ℝ+). Then, we use this metric in a novel functional minimization method for non-rigid 4th-order tensor field registration. We define a cost function that accounts for the 4th-order tensor re-orientation during the registration process and has analytic derivatives with respect to the transformation parameters. Finally, the tensor field atlas is computed as the minimizer of the variance defined using the Riemannian metric. We quantitatively compare the proposed method with other techniques that register scalar-valued or diffusion tensor (rank-2) representations of the DW-MRI. PMID:20436782
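The Riemannian metric on ℝ+ has a simple closed form, d(a, b) = |log a - log b|, under which the variance-minimizing (Fréchet) mean of positive values is their geometric mean; a small sketch of this atlas-as-variance-minimizer idea on scalars:

```python
import math

def dist(a, b):
    """Geodesic distance on the positive reals: d(a, b) = |log(a) - log(b)|."""
    return abs(math.log(a) - math.log(b))

def frechet_mean(values):
    """Minimizer of the sum of squared geodesic distances: under this
    metric it is the geometric mean (the scalar analogue of the atlas
    computed as the minimizer of the Riemannian variance)."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def variance(center, values):
    """Sum of squared geodesic distances to a candidate center."""
    return sum(dist(center, v) ** 2 for v in values)

vals = [1.0, 4.0, 16.0]
mean = frechet_mean(vals)          # geometric mean of 1, 4, 16
```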
Life sciences payload definition and integration study, task C and D. Volume 1: Management summary
NASA Technical Reports Server (NTRS)
1973-01-01
The findings of a study to define the required payloads for conducting life science experiments in space are presented. The primary objectives of the study are: (1) identify research functions to be performed aboard life sciences spacecraft laboratories and necessary equipment, (2) develop conceptual designs of potential payloads, (3) integrate selected laboratory designs with space shuttle configurations, and (4) establish cost analysis of preliminary program planning.
The kW power module evolution study: Executive summary
NASA Technical Reports Server (NTRS)
1979-01-01
Using the Marshall Space Flight Center 25 kW Power Module (PM) reference design as a point of departure, the study defined evolutionary growth paths to 100 kW and above. A recommended development approach and initial configurations were described. Specific hardware changes from the reference design are recommended for the initial PM configuration to ensure evolutionary growth, improved replicability, and reduced cost. Certain functional changes are also recommended to enhance system capabilities.
Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.
Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir
2018-04-01
In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, the therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model to empirical data, the parameters of the therapy function are estimated. The reported research works have not presented any algorithm to determine the optimal parameters of the therapy function. In this study, a logarithmic therapy function is introduced into the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of the tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the moments of the process. A Fokker-Planck-based non-linear stochastic observer is used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and the PDF of the tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. Numerical results are finally given to demonstrate the effectiveness of the proposed method.
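The PDF-tracking idea can be sketched as a discretized L2 cost between a desired density and the density predicted under a therapy parameter, minimized here by grid search. The Gaussian shape and the linear dependence of the mean on the therapy parameter are purely illustrative; the paper instead obtains the PDF from a Fokker-Planck observer and adjusts the parameters adaptively.

```python
import math

def gaussian(x, mu, sigma):
    """Normal density, used as a toy stand-in for the tumour-size PDF."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

GRID = [0.1 * k for k in range(101)]              # tumour-size axis, arbitrary units
DESIRED = [gaussian(x, 2.0, 0.5) for x in GRID]   # target: small tumours

def cost(theta):
    """Discretized L2 distance between the desired PDF and the PDF the toy
    model predicts under therapy strength theta. The mean model
    mu = 8 - 2*theta is purely illustrative."""
    mu = 8.0 - 2.0 * theta
    predicted = [gaussian(x, mu, 0.5) for x in GRID]
    return sum((d - p) ** 2 for d, p in zip(DESIRED, predicted)) * 0.1

# Grid search over the therapy parameter in place of adaptive adjustment.
thetas = [0.05 * k for k in range(101)]           # 0.0 .. 5.0
best_theta = min(thetas, key=cost)
```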
A stochastic optimal feedforward and feedback control methodology for superagility
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Direskeneli, Haldun; Taylor, Deborah B.
1992-01-01
A new control design methodology is developed: Stochastic Optimal Feedforward and Feedback Technology (SOFFT). Traditional design techniques optimize a single cost function (which expresses the design objectives) to obtain both the feedforward and feedback control laws. This approach places conflicting demands on the control law, such as fast tracking versus noise attenuation/disturbance rejection. In the SOFFT approach, two cost functions are defined. The feedforward control law is designed to optimize one cost function, and the feedback control law optimizes the other. By separating the design objectives and decoupling the feedforward and feedback design processes, both objectives can be achieved fully. A new measure of command tracking performance, Z-plots, is also developed. By analyzing these plots at off-nominal conditions, the sensitivity or robustness of the system in tracking commands can be predicted. Z-plots provide an important tool for designing robust control systems. The Variable-Gain SOFFT methodology was used to design a flight control system for the F/A-18 aircraft. It is shown that SOFFT can be used to expand the operating regime and provide greater performance (flying/handling qualities) throughout the extended flight regime. This work was performed under the NASA SBIR program. ICS plans to market the software developed as a new module in its commercial CACSD software package: ACET.
Halford, Keith J.
2006-01-01
MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.
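The unequal weighting of negative and positive residuals can be sketched as a one-sided penalty: with the weight on negative residuals set to zero, only exceedances of a limit (such as a maximum chloride concentration) contribute to the objective. The limit, concentrations, and weights below are hypothetical.

```python
def objective(residuals, w_neg=1.0, w_pos=1.0):
    """Weighted sum of squared residuals with separate weights for
    negative and positive residuals (residual = simulated - observed)."""
    total = 0.0
    for r in residuals:
        w = w_pos if r > 0 else w_neg
        total += w * r * r
    return total

# Treating a maximum-chloride constraint as a one-sided penalty:
# exceedances (positive residuals relative to the limit) are weighted
# heavily, while values under the limit incur no cost.
limit = 250.0                        # hypothetical chloride limit, mg/L
simulated = [240.0, 255.0, 260.0]    # hypothetical simulated concentrations
excess = [c - limit for c in simulated]
penalty = objective(excess, w_neg=0.0, w_pos=100.0)
```

Setting `w_neg=0.0` turns the squared-error term into an inequality constraint in exactly the sense described above: being under the limit is free, being over it is penalized.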
NASA Technical Reports Server (NTRS)
Vaisnys, A.
1980-01-01
It is technically feasible to design a satellite communication system to serve the United States electric utility industry's needs for load management, real-time operations management, and remote meter reading, and to determine the costs of various elements of the system. The functions associated with distribution automation and control, and the communication system requirements, are defined. Factors related to formulating viable communication concepts, the relationship of various design factors to utility operating practices, and the results of the cost analysis are discussed. The system concept and several ways in which the concept could be integrated into the utility industry are described.
Space-Based Reconfigurable Software Defined Radio Test Bed Aboard International Space Station
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Lux, James P.
2014-01-01
The National Aeronautics and Space Administration (NASA) recently launched a new software defined radio research test bed to the International Space Station. The test bed, sponsored by the Space Communications and Navigation (SCaN) Office within NASA, is referred to as the SCaN Testbed. The SCaN Testbed is a highly capable communications system, composed of three software defined radios, integrated into a flight system, and mounted to the truss of the International Space Station. Software defined radios offer the future promise of in-flight reconfigurability, autonomy, and eventually cognitive operation. The adoption of software defined radios offers space missions a new way to develop and operate space transceivers for communications and navigation. Reconfigurable or software defined radios with communications and navigation functions implemented in software or VHDL (VHSIC Hardware Description Language) provide the capability to change the functionality of the radio during development or after launch. The ability to change the operating characteristics of a radio through software once deployed to space offers the flexibility to adapt to new science opportunities, recover from anomalies within the science payload or communication system, and potentially reduce development cost and risk by adapting generic space platforms to meet specific mission requirements. The software defined radios on the SCaN Testbed are each compliant with NASA's Space Telecommunications Radio System (STRS) Architecture. The STRS Architecture is an open, non-proprietary architecture that defines interfaces for the connections between radio components. It provides an operating environment to abstract the communication waveform application from the underlying platform-specific hardware, such as digital-to-analog converters, analog-to-digital converters, oscillators, RF attenuators, automatic gain control circuits, FPGAs, and general-purpose processors, and the interconnections among different radio components.
NASA Technical Reports Server (NTRS)
Maynard, O. E.; Brown, W. C.; Edwards, A.; Haley, J. T.; Meltz, G.; Howell, J. M.; Nathan, A.
1975-01-01
Microwave rectifier technology, approaches to the receiving antenna, the topology of rectenna circuits, assembly and construction, and ROM cost estimates are discussed. Analyses and cost estimates are presented for the equipment required to transmit the ground power to an external user. Noise and harmonic considerations are presented for both the amplitron and klystron, and interference limits are identified and evaluated. A risk assessment is presented wherein technology risks are rated and ranked with regard to their importance in impacting the microwave power transmission system. The system analyses and evaluation include parametric studies of system relationships pertaining to geometry, materials, specific cost, specific weight, efficiency, converter packing, frequency selection, power distribution, power density, power output magnitude, power source, transportation, and assembly. Capital costs per kW and energy costs as a function of rate of return, power source and transportation costs, as well as build cycle time, are presented. The critical technology and ground test program are discussed along with ROM costs and schedule. The orbital test program, with associated critical technology, and the ground-based program based on full implementation of the defined objectives are discussed.
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature.
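The inversion described above can be sketched in miniature. The toy below is not the authors' PDE-based scheme: it fits the opening rate of a two-state channel by minimizing a squared-error cost between the model's stationary open probability and a pseudo-experimental target. The rate values and the search grid are illustrative assumptions.

```python
import numpy as np

def stationary_open_prob(k_open, k_close):
    # Stationary probability of the open state for a two-state
    # (closed <-> open) Markov channel model.
    return k_open / (k_open + k_close)

def cost(k_open, k_close, target_p):
    # Squared difference between the model prediction and the
    # (pseudo) experimental open probability -- the quantity the
    # inversion minimizes.
    return (stationary_open_prob(k_open, k_close) - target_p) ** 2

# Pseudo-experimental data generated from "true" rates.
true_open, k_close = 2.0, 3.0
target = stationary_open_prob(true_open, k_close)

# Grid search over candidate opening rates (a stand-in for a
# proper optimizer) to recover the identifiable rate.
grid = np.linspace(0.1, 10.0, 991)
best = grid[np.argmin([cost(k, k_close, target) for k in grid])]
print(round(best, 1))  # recovers a rate near the true value 2.0
```

Because the cost is zero only at the true stationary probability, a flat cost landscape would instead signal that the rate is not identifiable, which mirrors the identifiability argument in the abstract.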
Hospital costs estimation and prediction as a function of patient and admission characteristics.
Ramiarina, Robert; Almeida, Renan Mvr; Pereira, Wagner Ca
2008-01-01
The present work analyzed the association between hospital costs and patient admission characteristics in a general public hospital in the city of Rio de Janeiro, Brazil. The unit costs method was used to estimate inpatient day costs associated with specific hospital clinics. With this aim, three "cost centers" were defined in order to group direct and indirect expenses pertaining to the clinics. After the costs were estimated, a standard linear regression model was developed to correlate cost units with their putative predictors (the patient's gender and age, the admission type (urgency/elective), ICU admission (yes/no), blood transfusion (yes/no), the admission outcome (death/no death), the complexity of the medical procedures performed, and a risk-adjustment index). Data were collected for 3100 patients, January 2001-January 2003. Average inpatient costs across clinics ranged from US$ 1135 [Orthopedics] to US$ 3101 [Cardiology]. Costs increased with increases in the risk-adjustment index in all clinics, and the index was statistically significant in all clinics except Urology, General surgery, and Clinical medicine. The occupation rate was inversely correlated with costs, and age had no association with costs. The (adjusted) percentage of explained variance varied between 36.3% [Clinical medicine] and 55.1% [Thoracic surgery clinic]. The estimates are an important step towards the standardization of hospital cost calculation, especially for countries that lack formal hospital accounting systems.
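The regression step can be sketched as follows. The covariates, coefficients, and data below are synthetic stand-ins chosen for illustration, not the study's actual coding or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical admission covariates: risk-adjustment index,
# ICU admission flag, urgency flag (illustrative names only).
risk = rng.uniform(0, 10, n)
icu = rng.integers(0, 2, n)
urgent = rng.integers(0, 2, n)

# Synthetic inpatient-day costs that rise with the risk index,
# consistent with the study's qualitative finding.
cost = 1000 + 150 * risk + 800 * icu + 200 * urgent + rng.normal(0, 50, n)

# Standard linear regression via ordinary least squares.
X = np.column_stack([np.ones(n), risk, icu, urgent])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print(np.round(beta))  # coefficients near [1000, 150, 800, 200]
```

The fitted coefficients recover the generating values up to noise, which is the same mechanism the study uses to read off how each admission characteristic shifts cost.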
Cost-Efficient and Multi-Functional Secure Aggregation in Large Scale Distributed Application
Zhang, Ping; Li, Wenjun; Sun, Hua
2016-01-01
Secure aggregation is an essential component of modern distributed applications and data mining platforms. Aggregated statistical results are typically adopted in constructing a data cube for data analysis at multiple abstraction levels in data warehouse platforms. Generating different types of statistical results efficiently at the same time (referred to as multi-functional support) is a fundamental requirement in practice. However, most existing schemes support a very limited number of statistics. Securely obtaining typical statistical results simultaneously in a distributed system, without recovering the original data, is still an open problem. In this paper, we present SEDAR, a SEcure Data Aggregation scheme under the Range segmentation model. The range segmentation model is proposed to reduce the communication cost by capturing the data characteristics, and different ranges use different aggregation strategies. For raw data in the dominant range, SEDAR encodes them into well-defined vectors to provide value preservation and order preservation, and thus provides the basis for multi-functional aggregation. A homomorphic encryption scheme is used to achieve data privacy. We also present two enhanced versions: the first is a Random-based SEDAR (REDAR), and the second is a Compression-based SEDAR (CEDAR). Both of them can significantly reduce communication cost, at the trade-off of lower security and lower accuracy, respectively. Experimental evaluations, based on six different scenarios of real data, show that all of them have excellent performance on cost and accuracy. PMID:27551747
Stahl, Joachim S; Wang, Song
2008-03-01
Many natural and man-made structures have a boundary that shows a certain level of bilateral symmetry, a property that plays an important role in both human and computer vision. In this paper, we present a new grouping method for detecting closed boundaries with symmetry. We first construct a new type of grouping token in the form of symmetric trapezoids by pairing line segments detected from the image. A closed boundary can then be achieved by connecting some trapezoids with a sequence of gap-filling quadrilaterals. For such a closed boundary, we define a unified grouping cost function in a ratio form: the numerator reflects the boundary information of proximity and symmetry and the denominator reflects the region information of the enclosed area. The introduction of the region-area information in the denominator is able to avoid a bias toward shorter boundaries. We then develop a new graph model to represent the grouping tokens. In this new graph model, the grouping cost function can be encoded by carefully designed edge weights and the desired optimal boundary corresponds to a special cycle with a minimum ratio-form cost. We finally show that such a cycle can be found in polynomial time using a previous graph algorithm. We implement this symmetry-grouping method and test it on a set of synthetic data and real images. The performance is compared to two previous grouping methods that do not consider symmetry in their grouping cost functions.
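The ratio-form cost can be illustrated on a toy boundary. The sketch below is a simplification of the paper's graph formulation: it divides a boundary cost (standing in for the proximity/symmetry term) by the enclosed area computed with the shoelace formula. The square and its boundary cost are made-up values.

```python
def shoelace_area(pts):
    # Enclosed area of a closed polygonal boundary (shoelace formula).
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1]
            - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def ratio_cost(pts, boundary_cost):
    # Ratio-form grouping cost: a boundary term over the region area.
    # Dividing by area is what removes the bias toward short boundaries.
    return boundary_cost / shoelace_area(pts)

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(ratio_cost(square, boundary_cost=16.0))  # 16 / 16 = 1.0
```

In the paper this ratio is minimized over cycles in a graph of trapezoid tokens rather than evaluated on a fixed polygon; the sketch only shows why enlarging the enclosed region can lower the cost even when the boundary term grows.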
Scoping Planning Agents With Shared Models
NASA Technical Reports Server (NTRS)
Bedrax-Weiss, Tania; Frank, Jeremy D.; Jonsson, Ari K.; McGann, Conor
2003-01-01
In this paper we provide a formal framework to define the scope of planning agents based on a single declarative model. Having multiple agents share a single model provides numerous advantages that lead to reduced development costs and increased reliability of the system. We formally define planning in terms of extensions of an initial partial plan and a set of flaws that make the plan unacceptable. A Flaw Filter (FF) allows us to identify those flaws relevant to an agent. Flaw filters motivate the Plan Identification Function (PIF), which specifies when an agent is ready to hand control to another agent for further work. PIFs define a set of plan extensions that can be generated from a model and a plan request. FFs and PIFs can be used to define the scope of agents without changing the model. We describe an implementation of PIFs and FFs within the context of EUROPA, a constraint-based planning architecture, and show how it can be used to easily design many different agents.
The role of redundant information in cultural transmission and cultural stabilization.
Acerbi, Alberto; Tennie, Claudio
2016-02-01
Redundant copying has been proposed as a way to achieve the high fidelity necessary to pass on and preserve complex traits in human cultural transmission. There are at least two ways to define redundant copying. One refers to the possibility of copying the same trait repeatedly over time, and the other to the ability to exploit multiple layers of information pointing to the same trait during a single copying event. Using an individual-based model, we explore how redundant copying (defined in the latter sense) helps to achieve successful transmission. We show that increasing redundant copying increases the likelihood of accurately transmitting a behavior more than either augmenting the number of copying occasions across time or boosting the general accuracy of social learning. We also investigate how different cost functions, deriving, for example, from the need to invest more energy in cognitive processing, impact the evolution of redundant copying. We show that populations converge either to high-fitness/high-cost states (with high redundant copying and complex culturally transmitted behaviors, resembling human culture) or to low-fitness/low-cost states (with low redundant copying and simple transmitted behaviors, resembling the social learning forms typical of nonhuman animals). This outcome may help to explain why cumulative culture is rare in the animal kingdom.
ERIC Educational Resources Information Center
State Univ. of New York, Buffalo. Western New York School Study Council.
Cost effectiveness analysis is used in situations where benefits and costs are not readily converted into a money base. Five elements can be identified in such an analytic process: (1) The objective must be defined in terms of what it is and how it is attained; (2) alternatives to the objective must be clearly definable; (3) the costs must be…
Grazzini, Giuliano; Ceccarelli, Anna; Calteri, Deanna; Catalano, Liviana; Calizzani, Gabriele; Cicchetti, Americo
2013-01-01
Background: In Italy, the financial reimbursement for labile blood components exchanged between Regions is regulated by national tariffs defined in 1991 and updated in 1993–2003. Over the last five years, the need for establishing standard costs of healthcare services has arisen critically. In this perspective, the present study is aimed at defining both the costs of production of blood components and the related prices, as well as the prices of plasma-derived medicinal products obtained from national plasma, to be used for interregional financial reimbursement. Materials and methods: In order to analyse the costs of production of blood components, 12 out of 318 blood establishments were selected in 8 Italian Regions. For each step of the production process, driving costs were identified and production costs were calculated. To define the costs of plasma-derived medicinal products obtained from national plasma, industrial costs currently sustained by the National Health Service for contract fractionation were taken into account. Results: The production costs of plasma-derived medicinal products obtained from national plasma showed a huge variability among blood establishments, which was much lower after standardization. The new suggested plasma tariffs were quite similar to those currently in force. Comparing the overall costs theoretically sustained by the National Health Service for plasma-derived medicinal products obtained from national plasma to current commercial costs demonstrates that the national blood system could gain a 10% cost saving if it were able to produce plasma for fractionation within the standard costs defined in this study. Discussion: Achieving national self-sufficiency through the production of plasma-derived medicinal products from national plasma is a strategic goal of the National Health Service, which must comply not only with quality, safety and availability requirements but also with the increasingly pressing need for economic sustainability. PMID:24333307
Chen, Zhi-bin; Liang, Yan-bing; Tang, Hao; Wang, Zhong-hua; Zeng, Li-jin; Wu, Jing-guo; Li, Zhen-yu; Ma, Zhong-fu
2012-12-01
To improve cost-efficiency, discriminant functions were established by the stepwise method for the differential diagnosis of angina pectoris, by detecting the serum levels of high-sensitivity C-reactive protein (hs-CRP), macrophage migration inhibitory factor (MIF), interleukin-4 (IL-4) and interleukin-10 (IL-10) in patients with stable angina pectoris (SAP) and unstable angina pectoris (UAP). Thirty-nine SAP patients and 47 UAP patients were enrolled in the study, while 39 healthy volunteers were enrolled as the control group, forming the entire set of training samples. The serum levels of hs-CRP, MIF, IL-4 and IL-10 were measured by enzyme-linked immunosorbent assay (ELISA). Data were analyzed by software to define discriminant functions in the "enter" and "stepwise" modes. Both sets of functions were evaluated by the results of validation. With the "enter independents together" mode, the following discriminant functions were defined based on the training samples' age, hs-CRP, MIF, IL-4 and IL-10: healthy control group = -129.858 + 2.869 × age - 2.451 × hs-CRP + 1.393 × MIF + 6.001 × IL-4 + 4.848 × IL-10; SAP group = -161.037 + 2.896 × age - 2.022 × hs-CRP + 1.662 × MIF + 6.703 × IL-4 + 6.287 × IL-10; UAP group = -199.087 + 2.468 × age - 1.440 × hs-CRP + 3.404 × MIF - 13.875 × IL-4 + 7.752 × IL-10. Retrospective validation showed 4.8% total misclassification, while cross-validation showed 5.6% total misclassification. With the "stepwise" mode, the above data were screened by software and the training samples' age, MIF and IL-10 were selected to define the following functions: healthy control group = -125.218 + 2.659 × age + 0.599 × MIF + 5.040 × IL-10; SAP group = -157.864 + 2.721 × age + 1.008 × MIF + 6.468 × IL-10; UAP group = -197.327 + 2.360 × age + 2.932 × MIF + 7.640 × IL-10. Both retrospective and cross-validation showed 6.4% total misclassification. Both sets of discriminant functions had the same efficiency (100%) for the differential diagnosis of SAP and UAP.
The discriminant functions based on samples' age, MIF and IL-10, which were screened and suggested by stepwise method, may contribute to the differential diagnosis of atypical SAP and UAP, and therefore demonstrate better cost-efficiency.
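Applying the stepwise discriminant functions reduces to evaluating three linear scores and assigning the group with the largest one. The sketch below uses the coefficients quoted in the abstract; the patient values passed in are illustrative, not taken from the study.

```python
# Stepwise discriminant functions from the abstract (intercept and
# coefficients for age, MIF, IL-10); a case is assigned to the group
# whose score is highest.
COEF = {
    "control": (-125.218, 2.659, 0.599, 5.040),
    "SAP":     (-157.864, 2.721, 1.008, 6.468),
    "UAP":     (-197.327, 2.360, 2.932, 7.640),
}

def classify(age, mif, il10):
    scores = {g: c0 + c1 * age + c2 * mif + c3 * il10
              for g, (c0, c1, c2, c3) in COEF.items()}
    return max(scores, key=scores.get)

# Illustrative values only -- not from the study's data set.
print(classify(age=60, mif=20, il10=8))  # -> control for these values
```

Note how the larger MIF coefficient in the UAP function means that raising MIF, with everything else fixed, eventually tips the assignment from control toward UAP.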
Principles of operating room organization.
Watkins, W D
1997-01-01
The changing health care climate has triggered important changes in the management of high-cost components of acute care facilities. By integrating and better managing various elements of the surgical process, health care institutions are able to rationally trim costs while maintaining high-quality services. The leadership that physicians can provide is crucial to the success of this undertaking (1). The use of primary data related to patient throughput and related resources should be strongly emphasized, for only when such data are converted to information of functional value can participating healthcare personnel be reasonably expected to anticipate and respond to varying clinical demands with ever-limited resources. Despite the claims of specific commercial vendors, no single product will likely be sufficient to significantly change the perioperative process to the degree or for the duration demanded by healthcare reform. The most effective approach to achieving safety, cost-effectiveness, and a predictable process in the realm of surgical services will occur by appropriate application of the "best of breed" contributions of: (a) medical/patient safety practice and oversight; (b) information technology; (c) contemporary management; and (d) innovative and functional cost-accounting methodology. A "modified activity-based cost accounting method" can serve as the basis for acquiring true direct-cost information related to the perioperative process. The proposed overall management strategy emphasizes process and feedback, rather than a specific product, and although it imposes initial demands and change on the traditional hospital setting, it can advance the strongest competitive position in perioperative services. This comprehensive approach comprises a functional basis for important benchmarking activities among multiple surgical services.
An active, comparative process of this type is of paramount importance in emphasizing patient care and safety as the highest priority while changing the process and cost of perioperative care. Additionally, this approach objectively defines the surgical process in terms by which the impact of new treatments, drugs, devices and process changes can be assessed rationally.
Index to Estimate the Efficiency of an Ophthalmic Practice.
Chen, Andrew; Kim, Eun Ah; Aigner, Dennis J; Afifi, Abdelmonem; Caprioli, Joseph
2015-08-01
A metric of efficiency, a function of the ratio of quality to cost per patient, will allow the health care system to better measure the impact of specific reforms and compare the effectiveness of each. To develop and evaluate an efficiency index that estimates the performance of an ophthalmologist's practice as a function of cost, number of patients receiving care, and quality of care. Retrospective review of 36 ophthalmology subspecialty practices from October 2011 to September 2012 at a university-based eye institute. The efficiency index (E) was defined as a function of the adjusted number of patients (Na), total practice adjusted costs (Ca), and a preliminary measure of quality (Q). The constant b limits E between 0 and 1. The constant y modifies the influence of Q on E. Relative value units and geographic cost indices determined by the Centers for Medicare and Medicaid for 2012 were used to calculate adjusted costs. The efficiency index is expressed as: E = b (Na / Ca) Q^y. Independent, masked auditors reviewed 20 random patient medical records for each practice and filled out 3 questionnaires to obtain a process-based quality measure. The adjusted number of patients, adjusted costs, quality, and efficiency index were calculated for 36 ophthalmology subspecialties. The median adjusted number of patients was 5516 (interquartile range, 3450-11,863), the median adjusted cost was 1.34 (interquartile range, 0.99-1.96), the median quality was 0.89 (interquartile range, 0.79-0.91), and the median value of the efficiency index was 0.26 (interquartile range, 0.08-0.42). The described efficiency index is a metric that provides a broad overview of performance for a variety of ophthalmology specialties as estimated by resources used and a preliminary measure of quality of care provided.
The results of the efficiency index could be used in future investigations to determine its sensitivity to detect the impact of interventions on a practice such as training modules or practice restructuring.
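As a sketch, the index can be evaluated directly from its definition. The constants b and y below are illustrative placeholders, since the abstract reports the functional form but not their calibrated values.

```python
def efficiency_index(n_adj, c_adj, quality, b=0.0001, y=2.0):
    # E = b * (Na / Ca) * Q**y, clipped so E stays within [0, 1].
    # b and y are illustrative assumptions, not the study's values.
    return min(1.0, b * (n_adj / c_adj) * quality ** y)

# Median values reported in the abstract: Na = 5516, Ca = 1.34, Q = 0.89.
e = efficiency_index(5516, 1.34, 0.89)
print(round(e, 2))
```

With calibrated b and y the study obtains a median E of 0.26; the placeholder constants here only demonstrate the mechanics (more patients per unit cost raises E, lower quality lowers it through the exponent y).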
Viricel, Clément; de Givry, Simon; Schiex, Thomas; Barbe, Sophie
2018-02-20
Accurate and economical methods to predict the change in protein binding free energy upon mutation are imperative to accelerate the design of proteins for a wide range of applications. Free energy is defined by enthalpic and entropic contributions. Following recent progress in Artificial Intelligence-based algorithms for guaranteed NP-hard energy optimization and partition function computation, it becomes possible to quickly compute minimum energy conformations and to reliably estimate the entropic contribution of side-chains in the change of free energy of large protein interfaces. Using guaranteed Cost Function Network algorithms, Rosetta energy functions and Dunbrack's rotamer library, we developed and assessed EasyE and JayZ, two methods for binding affinity estimation that ignore or include conformational entropic contributions, on a large benchmark of experimental binding affinity measures. While both approaches outperform most established tools, we observe that side-chain conformational entropy brings little or no improvement on most systems but becomes crucial in some rare cases. Available as open-source Python/C++ code at sourcesup.renater.fr/projects/easy-jayz. Contact: thomas.schiex@inra.fr and sophie.barbe@insa-toulouse.fr. Supplementary data are available at Bioinformatics online.
A safety-based decision making architecture for autonomous systems
NASA Technical Reports Server (NTRS)
Musto, Joseph C.; Lauderbaugh, L. K.
1991-01-01
Engineering systems designed specifically for space applications often exhibit a high level of autonomy in the control and decision-making architecture. As the level of autonomy increases, more emphasis must be placed on assimilating the safety functions normally executed at the hardware level or by human supervisors into the control architecture of the system. The development of a decision-making structure which utilizes information on system safety is detailed. A quantitative measure of system safety, called the safety self-information, is defined. This measure is analogous to the reliability self-information defined by McInroy and Saridis, but includes weighting of task constraints to provide a measure of both reliability and cost. An example is presented in which the safety self-information is used as a decision criterion in a mobile robot controller. The safety self-information is shown to be consistent with the entropy-based Theory of Intelligent Machines defined by Saridis.
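A minimal sketch of a self-information-style safety measure, assuming a simple negative-log form scaled by a task-constraint weight; the paper's exact weighting of task constraints is not reproduced here.

```python
import math

def safety_self_information(p_safe, cost_weight=1.0):
    # Self-information of the event "the task completes safely",
    # scaled by a task-constraint cost weight. The weighting scheme
    # is an illustrative assumption, not the paper's exact form.
    return -cost_weight * math.log(p_safe)

# A safer action carries less self-information (less "surprise"),
# so a controller minimizing this criterion prefers safer actions.
print(safety_self_information(0.99) < safety_self_information(0.90))  # True
```

Used as a decision criterion, the controller would compare this quantity across candidate actions and pick the minimum, mirroring how entropy-based criteria are compared in the Theory of Intelligent Machines.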
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs using parametric cost-estimate data. This is accomplished by using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included, along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated with the proprietary parametric cost model (PCM), with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
Prioritization Methodology for Chemical Replacement
NASA Technical Reports Server (NTRS)
Cruit, W.; Schutzenhofer, S.; Goldberg, B.; Everhart, K.
1993-01-01
This project serves to define an appropriate methodology for effective prioritization of efforts required to develop replacement technologies mandated by imposed and forecast legislation. The methodology used is a semiquantitative approach derived from quality function deployment techniques (QFD Matrix). This methodology aims to weigh the full environmental, cost, safety, reliability, and programmatic implications of replacement technology development to allow appropriate identification of viable candidates and programmatic alternatives. The results are being implemented as a guideline for consideration for current NASA propulsion systems.
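A QFD-style semiquantitative prioritization reduces to a weighted sum of criterion ratings per candidate. The weights, ratings, and candidate names below are illustrative assumptions, not the project's actual data.

```python
# QFD-matrix-style scoring: each replacement candidate is rated
# against weighted criteria; the candidate with the highest weighted
# sum is the most viable. All numbers are illustrative.
criteria = {"environmental": 5, "cost": 3, "safety": 5,
            "reliability": 4, "programmatic": 2}

candidates = {
    "solvent_A": {"environmental": 9, "cost": 3, "safety": 7,
                  "reliability": 5, "programmatic": 4},
    "solvent_B": {"environmental": 5, "cost": 7, "safety": 5,
                  "reliability": 7, "programmatic": 6},
}

def priority(ratings):
    # Weighted sum across all criteria for one candidate.
    return sum(criteria[c] * ratings[c] for c in criteria)

ranked = sorted(candidates, key=lambda k: priority(candidates[k]),
                reverse=True)
print(ranked[0])
```

The point of the weighted matrix is that a candidate strong on heavily weighted criteria (environmental, safety) can outrank one that is merely cheap, which is exactly the trade-off the methodology is meant to surface.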
Automatic segmentation of pulmonary fissures in x-ray CT images using anatomic guidance
NASA Astrophysics Data System (ADS)
Ukil, Soumik; Sonka, Milan; Reinhardt, Joseph M.
2006-03-01
The pulmonary lobes are the five distinct anatomic divisions of the human lungs. The physical boundaries between the lobes are called the lobar fissures. Detection of lobar fissure positions in pulmonary X-ray CT images is of increasing interest for the early detection of pathologies, and also for the regional functional analysis of the lungs. We have developed a two-step automatic method for the accurate segmentation of the three pulmonary fissures. In the first step, an approximation of the actual fissure locations is made using a 3-D watershed transform on the distance map of the segmented vasculature. Information from the anatomically labeled human airway tree is used to guide the watershed segmentation. These approximate fissure boundaries are then used to define the region of interest (ROI) for a more exact 3-D graph search to locate the fissures. Within the ROI the fissures are enhanced by computing a ridgeness measure, and this is used as the cost function for the graph search. The fissures are detected as the optimal surface within the graph defined by the cost function, which is computed by transforming the problem to the problem of finding a minimum s-t cut on a derived graph. The accuracy of the lobar borders is assessed by comparing the automatic results to manually traced lobe segments. The mean distance error between manually traced and computer detected left oblique, right oblique and right horizontal fissures is 2.3 +/- 0.8 mm, 2.3 +/- 0.7 mm and 1.0 +/- 0.1 mm, respectively.
A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction
Fu, Fang; Zhang, Tao
2016-01-01
Poor quality affects project makespan and total costs negatively, but it can be recovered by repair works during construction. We construct a new non-linear programming model based on the classic multi-mode resource-constrained project scheduling problem, considering repair works. In order to obtain satisfactory quality without a large increase in project cost, the objective is to minimize the total quality cost, which consists of the prevention cost and the failure cost according to Quality-Cost Analysis. A binary dependent normal distribution function is adopted to describe activity quality; cumulative quality is defined to determine whether to initiate repair works, according to the different relationships among activity qualities, namely, the coordinative and precedence relationships. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem based on an adaptive serial schedule generation scheme and an adjusted activity list. In the algorithm, the frog-leaping progress combines the crossover operator of a genetic algorithm and a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm's performance, and it assists in decision making by searching for the appropriate makespan and quality threshold with minimal cost. PMID:27911939
NASA Technical Reports Server (NTRS)
1979-01-01
Cost data generated for the evolutionary power module concepts selected are reported. The initial acquisition costs (design, development, and protoflight unit test costs) were defined and modeled for the baseline 25 kW power module configurations. By building a parametric model of this initial building block, the cost of the 50 kW and the 100 kW power modules were derived by defining only their configuration and programmatic differences from the 25 kW baseline module. Variations in cost for the quantities needed to fulfill the mission scenarios were derived by applying appropriate learning curves.
Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks
Burroni, Javier; Taylor, P.; Corey, Cassian; Vachnadze, Tengiz; Siegelmann, Hava T.
2017-01-01
Overview: We model energy constraints in a network of spiking neurons, while exploring general questions of resource limitation on network function abstractly. Background: Metabolic states like dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet, in computational models of glia-neuron cooperation, there have been no previous attempts to explore the effects of direct realistic energy costs on network activity in spiking neurons. Currently, biologically realistic spiking neural networks assume that membrane potential is the main driving factor for neural spiking, and do not take into consideration energetic costs. Methods: We define local energy pools to constrain a neuron model, termed Spiking Neuron Energy Pool (SNEP), which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation displays an easy-to-use GUI, which can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function which attempt biological realism may benefit from including energy constraints. Further, we assert that observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications. PMID:28289370
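The core mechanism described (spiking gated by a regenerating local energy pool) can be caricatured with a single leaky integrate-and-fire unit. This is an illustrative toy with assumed parameters, not the authors' SNEP model or simulator:

```python
def run_energy_neuron(steps=300, drive=1.2, leak=0.9, thresh=10.0,
                      spike_cost=10.0, regen=0.3, pool_max=30.0):
    """Leaky integrate-and-fire unit gated by a regenerating energy pool.
    All parameter values are assumptions for illustration only."""
    v, energy, spikes = 0.0, pool_max, []
    for t in range(steps):
        v = leak * v + drive                      # leaky voltage integration
        energy = min(pool_max, energy + regen)    # pool regenerates over time
        if v >= thresh and energy >= spike_cost:  # spiking requires energy
            spikes.append(t)
            v = 0.0                               # reset after spike
            energy -= spike_cost                  # spike depletes the pool
    return spikes

spikes = run_energy_neuron()
intervals = [b - a for a, b in zip(spikes, spikes[1:])]
print(len(spikes), intervals)
```

With these settings the pool depletes faster than it regenerates, so inter-spike intervals lengthen once energy becomes the binding constraint, a one-neuron caricature of the energy-driven rate modulation the abstract describes.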
Component-Level Electronic-Assembly Repair (CLEAR) Operational Concept
NASA Technical Reports Server (NTRS)
Oeftering, Richard C.; Bradish, Martin A.; Juergens, Jeffrey R.; Lewis, Michael J.; Vrnak, Daniel R.
2011-01-01
This Component-Level Electronic-Assembly Repair (CLEAR) Operational Concept document was developed as a first step in developing the Component-Level Electronic-Assembly Repair (CLEAR) System Architecture (NASA/TM-2011-216956). The CLEAR operational concept defines how the system will be used by the Constellation Program and what needs it meets. The document creates scenarios for major elements of the CLEAR architecture. These scenarios are generic enough to apply to near-Earth, Moon, and Mars missions. The CLEAR operational concept involves basic assumptions about the overall program architecture and interactions with the CLEAR system architecture. The assumptions include spacecraft and operational constraints for near-Earth orbit, Moon, and Mars missions. This document addresses an incremental development strategy where capabilities evolve over time, but it is structured to prevent obsolescence. The approach minimizes flight hardware by exploiting Internet-like telecommunications that enables CLEAR capabilities to remain on Earth and to be uplinked as needed. To minimize crew time and operational cost, CLEAR exploits offline development and validation to support online teleoperations. Operational concept scenarios are developed for diagnostics, repair, and functional test operations. Many of the supporting functions defined in these operational scenarios are further defined as technologies in NASA/TM-2011-216956.
NASA Astrophysics Data System (ADS)
Kim, Sojeong; Choi, Soo-Hyung; Lee, Won Bo
Anion exchange membranes (AEMs) have been widely studied due to their various applications, especially in fuel cells. Established proton exchange membranes (PEMs), such as Nafion®, have so far shown better conductivity than AEMs. However, technical limitations such as slow electrode kinetics, carbon monoxide (CO) poisoning of metal catalysts, high methanol crossover, and the high cost of Pt-based catalysts have deterred further use of PEMs, and AEMs offer advantages that address these drawbacks: they are environmentally friendly and cost-efficient. For a well-defined block copolymer, the self-assembled morphology is expected to bear some relationship to its ionic conductivity. Recently, AEMs based on various cations, including ammonium, phosphonium, guanidinium, imidazolium, metal, and benzimidazolium cations, have been developed and extensively studied with the aim of preparing high-performance AEMs, but a more fundamental approach, such as establishing relationships between nanostructure and conductivity, is needed. We use the well-defined block copolymer poly(styrene-block-isoprene), synthesized by anionic polymerization, as a backbone. We then graft various cationic functional groups and analyze the relationship between morphology and conductivity.
Wheel configurations for combined energy storage and attitude control systems
NASA Technical Reports Server (NTRS)
Oglevie, R. E.
1985-01-01
Integrated power and attitude control system (IPACS) studies performed over a decade ago established the feasibility of simultaneously storing electrical energy in wheels and utilizing the resulting momentum for spacecraft attitude control. It was shown that such a system possessed many advantages over other contemporary energy storage and attitude control systems in many applications. More recent technology advances in composite rotors, magnetic bearings, and power control electronics have triggered new optimism regarding the feasibility and merits of such a system. This paper presents the results of a recent study whose focus was to define an advanced IPACS and to evaluate its merits for the Space Station application. Emphasis is given to the selection of the wheel configuration to perform the combined functions. A component design concept is developed to establish the system performance capability. A system-level trade study, including life-cycle costing, is performed to define the merits of the system relative to two other candidate systems. It is concluded that an advanced IPACS concept is not only feasible but offers substantial savings in mass and life-cycle cost.
Defining Operational Space Suit Requirements for Commercial Orbital Spaceflight
NASA Technical Reports Server (NTRS)
Alpert, Brian K.
2015-01-01
As the commercial spaceflight industry transitions from suborbital brevity to orbital outposts, spacewalking will become a major consideration for tourists, scientists, and hardware providers. The challenge exists to develop a space suit designed for the orbital commercial spaceflight industry. The unique needs and requirements of this industry will drive space suit designs and costs that are unlike any existing product. Commercial space tourists will pay for the experience of a lifetime, while scientists may not be able to rely on robotics for all operations and external hardware repairs. This study was aimed at defining space suit operational and functional needs across the spectrum of spacewalk elements, identifying technical design drivers, and establishing appropriate options. Recommendations from the analysis are offered for consideration.
NASA Astrophysics Data System (ADS)
Juszczyk, Michał; Leśniak, Agnieszka; Zima, Krzysztof
2013-06-01
Conceptual cost estimation is important for construction projects: both underestimation and overestimation of the cost of raising a building may lead to project failure. In this paper the authors apply a multicriteria comparative analysis (MCA) to select factors influencing the cost of raising residential buildings. The aim of the analysis is to identify key factors usable for conceptual cost estimation in the early design stage. The key factors are investigated on the basis of elementary information about the function, form, and structure of the building, and the primary assumptions about the technological and organizational solutions applied in the construction process. These factors serve as variables of a model intended to make conceptual cost estimation fast and satisfactorily accurate. The analysis comprised three steps: preliminary research, selection of a set of potential variables, and reduction of this set to the final set of variables. Multicriteria comparative analysis is applied to solve the problem. The analysis identified a group of factors, defined well enough at the conceptual stage of the design process, to be used as the describing variables of the model.
NASA Astrophysics Data System (ADS)
Doroshin, Ivan; Diakonova, Sophia; Sharapova, Elena
2017-10-01
Under the dynamic, innovation-driven development of transport-sector enterprises, combined with continuing economic instability, it has become necessary to consider not only traditional factors when assessing operational efficiency and business value, but also the business reputation factor, which depends on the enterprise's level of innovative development and generates a synergistic effect. The paper considers the concept of synergistic effect and cites a classification of its types. An estimation procedure for the influence of the technical level and innovative development of transport enterprises on the value of business reputation is presented. A functional dependence of the value of business reputation on innovative development and the synergistic effect is defined.
Trade-space Analysis for Constellations
NASA Astrophysics Data System (ADS)
Le Moigne, J.; Dabney, P.; de Weck, O. L.; Foreman, V.; Grogan, P.; Holland, M. P.; Hughes, S. P.; Nag, S.
2016-12-01
Traditionally, space missions have relied on relatively large and monolithic satellites, but in the past few years, under a changing technological and economic environment, including instrument and spacecraft miniaturization, scalable launchers, secondary launches as well as hosted payloads, there is growing interest in implementing future NASA missions as Distributed Spacecraft Missions (DSM). The objective of our project is to provide a framework that facilitates DSM Pre-Phase A investigations and optimizes DSM designs with respect to a priori science goals. In this first version of our Trade-space Analysis Tool for Constellations (TAT-C), we are investigating questions such as: "How many spacecraft should be included in the constellation? Which design has the best cost/risk value?" The main goals of TAT-C are to: handle multiple spacecraft sharing a mission objective, from SmallSats up through flagships; explore the variable trade space for pre-defined science, cost, and risk goals and pre-defined metrics; and optimize cost and performance across multiple instruments and platforms rather than one at a time. This paper describes the overall architecture of TAT-C, including: a User Interface (UI) interacting with multiple users - scientists, mission designers, or program managers; and an Executive Driver gathering requirements from the UI, then formulating Trade-space Search Requests for the Trade-space Search Iterator, first with inputs from the Knowledge Base, then, in collaboration with the Orbit & Coverage, Reduction & Metrics, and Cost & Risk modules, generating multiple potential architectures and their associated characteristics. TAT-C leverages the General Mission Analysis Tool (GMAT) to compute coverage and ancillary data, streamlining the computations by modeling orbits in a way that balances accuracy and performance.
TAT-C current version includes uniform Walker constellations as well as Ad-Hoc constellations, and its cost model represents an aggregate model consisting of Cost Estimating Relationships (CERs) from widely accepted models. The Knowledge Base supports both analysis and exploration, and the current GUI prototype automatically generates graphics representing metrics such as average revisit time or coverage as a function of cost.
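As a small geometric illustration of the uniform Walker constellations TAT-C enumerates, the sketch below lays out the slots of a standard Walker delta pattern i:t/p/f (t satellites in p planes with relative phasing f). This is textbook constellation geometry, not TAT-C code:

```python
def walker_delta(t, p, f, inc_deg):
    """Slot geometry of a Walker delta constellation i:t/p/f.
    Returns one (inclination, RAAN, mean anomaly) tuple per satellite,
    all angles in degrees."""
    s = t // p                                    # satellites per plane
    slots = []
    for plane in range(p):
        raan = 360.0 * plane / p                  # planes evenly spaced in RAAN
        for k in range(s):
            # In-plane spacing 360/s, plus inter-plane phasing f*360/t.
            anomaly = (360.0 * k / s + 360.0 * f * plane / t) % 360.0
            slots.append((inc_deg, raan, anomaly))
    return slots

for slot in walker_delta(t=6, p=3, f=1, inc_deg=56.0):
    print(slot)
```

Each slot would then seed an orbit propagation (in TAT-C's case via GMAT) to evaluate coverage metrics such as average revisit time.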
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hess, J. Richard; Lamers, Patrick; Roni, Mohammad S.
Logistical barriers are tied to feedstock harvesting, collection, storage, and distribution. Current crop harvesting machinery is unable to selectively harvest preferred components of cellulosic biomass while maintaining acceptable levels of soil carbon and minimizing erosion. Actively managing biomass variability imposes additional functional requirements on biomass harvesting equipment. Physiological variation in biomass arises from differences in genetics, degree of crop maturity, geographical location, climatic events, and harvest methods. This variability presents significant cost and performance risks for bioenergy systems. Currently, processing standards and specifications for cellulosic feedstocks are not as well developed as for mature commodities. Biomass that is stored with high moisture content or exposed to moisture during storage is susceptible to spoilage, rotting, spontaneous combustion, and odor problems. Appropriate storage methods and strategies are needed to better define storage requirements to preserve the volume and quality of harvested biomass over time and maintain its conversion yield. Raw herbaceous biomass is costly to collect, handle, and transport because of its low density and fibrous nature. Existing conventional, bale-based handling equipment and facilities cannot cost-effectively deliver and store high volumes of biomass, even with improved handling techniques. Current handling and transportation systems designed for moving woodchips can be inefficient for bioenergy processes due to the costs and challenges of transporting, storing, and drying high-moisture biomass. The infrastructure for feedstock logistics has not been defined for the potential variety of locations, climates, feedstocks, storage methods, processing alternatives, etc., which will occur at a national scale. When setting up biomass fuel supply chains for large-scale biomass systems, logistics are a pivotal part of the system.
Various studies have shown that long-distance international transport by ship is feasible in terms of energy use and transportation costs, but the availability of suitable vessels and meteorological conditions (e.g., winter in Scandinavia and Russia) need to be considered. However, local transportation by truck (in both biomass exporting and importing countries) may be a high-cost factor, which can influence the overall energy balance and total biomass costs.
Schutzer, Matthew E; Arthur, Douglas W; Anscher, Mitchell S
2016-05-01
Value in health care is defined as outcomes achieved per dollar spent, and understanding cost is critical to delivering high-value care. Traditional costing methods reflect charges rather than fundamental costs to provide a service. The more rigorous method of time-driven activity-based costing was used to compare cost between whole-breast radiotherapy (WBRT) and accelerated partial-breast irradiation (APBI) using balloon-based brachytherapy. For WBRT (25 fractions with five-fraction boost) and APBI (10 fractions twice daily), process maps were created outlining each activity from consultation to post-treatment follow up. Through staff interviews, time estimates were obtained for each activity. The capacity cost rates (CCR), defined as cost per minute, were calculated for personnel, equipment, and physical space. Total cost was calculated by multiplying the time required of each resource by its CCR. This was then summed and combined with cost of consumable materials. The total cost for WBRT was $5,333 and comprised 56% personnel costs and 44% space/equipment costs. For APBI, the total cost was $6,941 (30% higher than WBRT) and comprised 51% personnel costs, 6% space/equipment costs, and 43% consumable materials costs. The attending physician had the highest CCR of all personnel ($4.28/min), and APBI required 24% more attending time than WBRT. The most expensive activity for APBI was balloon placement and for WBRT was computed tomography simulation. APBI cost more than WBRT when using the dose/fractionation schemes analyzed. Future research should use time-driven activity-based costing to better understand cost with the aim of reducing expenditure and defining bundled payments. Copyright © 2016 by American Society of Clinical Oncology.
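The time-driven activity-based costing arithmetic above reduces to: total cost = Σ (minutes of each resource) × (that resource's capacity cost rate), plus consumable materials. A minimal sketch of that calculation; the $4.28/min attending rate is quoted in the abstract, but the other rates and the activity times are invented for illustration:

```python
def tdabc_cost(activities, ccr, consumables=0.0):
    """Time-driven activity-based costing: minutes of each resource
    times its capacity cost rate ($/min), plus consumable materials."""
    total = sum(minutes * ccr[resource]
                for resource, minutes in activities)
    return total + consumables

# Capacity cost rates ($/min); only the attending rate comes from the study.
ccr = {"attending": 4.28, "therapist": 1.10, "linac": 2.50}
# Hypothetical activity list for a course of treatment: (resource, minutes).
wbrt = [("attending", 120), ("therapist", 600), ("linac", 450)]
print(round(tdabc_cost(wbrt, ccr), 2))
```

The study's process maps play the role of the `activities` list here: every step from consultation to follow-up is timed and multiplied by the CCR of the personnel, equipment, and space it consumes.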
Shin, Hosung; Lee, Suehyung; Kim, Jong Soo; Kim, Jinsuk; Han, Kyu Hong
2010-07-01
This study estimated the annual socioeconomic costs of food-borne disease in 2008 from a societal perspective and using a cost-of-illness method. Our model employed a comprehensive set of diagnostic disease codes to define food-borne diseases with using the Korea National Health Insurance (KNHI) reimbursement data. This study classified the food borne illness as three types of symptoms according to the severity of the illness: mild, moderate, severe. In addition to the traditional method of assessing the cost-of-illness, the study included measures to account for the lost quality of life. We estimated the cost of the lost quality of life using quality-adjusted life years and a visual analog scale. The direct cost included medical and medication costs, and the non-medical costs included transportation costs, caregiver's cost and administration costs. The lost productivity costs included lost workdays due to illness and lost earnings due to premature death. The study found the estimated annual socioeconomic costs of food-borne disease in 2008 were 954.9 billion won (735.3 billion won-996.9 billion won). The medical cost was 73.4 - 76.8% of the cost, the lost productivity cost was 22.6% and the cost of the lost quality of life was 26.0%. Most of the cost-of-illness studies are known to have underestimated the actual socioeconomic costs of the subjects, and these studies excluded many important social costs, such as the value of pain, suffering and functional disability. The study addressed the uncertainty related to estimating the socioeconomic costs of food-borne disease as well as the updated cost estimates. Our estimates could contribute to develop and evaluate policies for food-borne disease.
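A cost-of-illness total under this method is the sum of its direct, productivity, and quality-of-life components, and each component's share follows directly. A minimal sketch with made-up figures (in billion won, not the study's estimates; note the study reports the quality-of-life share separately, which is why its published percentages exceed 100%):

```python
def cost_of_illness(direct_medical, direct_nonmedical,
                    lost_productivity, lost_quality_of_life):
    """Societal cost-of-illness: direct, indirect, and intangible components.
    Figures passed in below are invented for illustration."""
    components = {
        "direct medical": direct_medical,
        "direct non-medical": direct_nonmedical,
        "lost productivity": lost_productivity,
        "lost quality of life": lost_quality_of_life,
    }
    total = sum(components.values())
    shares = {k: v / total for k, v in components.items()}
    return total, shares

total, shares = cost_of_illness(700.0, 20.0, 215.0, 248.0)
print(round(total, 1), {k: round(v, 3) for k, v in shares.items()})
```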
Berry, Colin; Zimmerli, Lukas U; Steedman, Tracey; Foster, John E; Dargie, Henry J; Berg, Geoffrey A; Dominiczak, Anna F; Delles, Christian
2008-03-01
Morbidity following CABG (coronary artery bypass grafting) is difficult to predict and leads to increased healthcare costs. We hypothesized that pre-operative CMR (cardiac magnetic resonance) findings would predict resource utilization in elective CABG. Over a 12-month period, patients requiring elective CABG were invited to undergo CMR 1 day prior to CABG. Gadolinium-enhanced CMR was performed using a trueFISP inversion recovery sequence on a 1.5 tesla scanner (Sonata; Siemens). Clinical data were collected prospectively. Admission costs were quantified based on standardized actual cost/day. Admission cost greater than the median was defined as 'increased'. Of 458 elective CABG cases, 45 (10%) underwent pre-operative CMR. Pre-operative characteristics [mean (S.D.) age, 64 (9) years, mortality (1%) and median (interquartile range) admission duration, 7 (6-8) days] were similar in patients who did or did not undergo CMR. In the patients undergoing CMR, eight (18%) and 11 (24%) patients had reduced LV (left ventricular) systolic function by CMR [LVEF (LV ejection fraction) <55%] and echocardiography respectively. LE (late enhancement) with gadolinium was detected in 17 (38%) patients. The average cost/day was $2723. The median (interquartile range) admission cost was $19059 ($10891-157917). CMR LVEF {OR (odds ratio), 0.93 [95% CI (confidence interval), 0.87-0.99]; P=0.03} and SV (stroke volume) index [OR 1.07 (95% CI, 1.00-1.14); P=0.02] predicted increased admission cost. CMR LVEF (P=0.08) and EuroScore tended to predict actual admission cost (P=0.09), but SV by CMR (P=0.16) and LV function by echocardiography (P=0.95) did not. In conclusion, in this exploratory investigation, pre-operative CMR findings predicted admission duration and increased admission cost in elective CABG surgery. The cost-effectiveness of CMR in risk stratification in elective CABG surgery merits prospective assessment.
Nathan, Dominic E; Johnson, Michelle J; McGuire, John R
2009-01-01
Hand and arm impairment is common after stroke. Robotic stroke therapy will be more effective if hand and upper-arm training is integrated to help users practice reaching and grasping tasks. This article presents the design, development, and validation of a low-cost, functional electrical stimulation grasp-assistive glove for use with task-oriented robotic stroke therapy. Our glove measures grasp aperture while a user completes simple-to-complex real-life activities, and when combined with an integrated functional electrical stimulator, it assists in hand opening and closing. A key function is a new grasp-aperture prediction model, which uses the position of the end-effectors of two planar robots to define the distance between the thumb and index finger. We validated the accuracy and repeatability of the glove and its capability to assist in grasping. Results from five nondisabled subjects indicated that the glove is accurate and repeatable for both static hand-open and -closed tasks when compared with goniometric measures and for dynamic reach-to-grasp tasks when compared with motion analysis measures. Results from five subjects with stroke showed that with the glove, they could open their hands but without it could not. We present a glove that is a low-cost solution for in vivo grasp measurement and assistance.
Workforce flexibility - in defence of professional healthcare work.
Wise, Sarah; Duffield, Christine; Fry, Margaret; Roche, Michael
2017-06-19
Purpose The desirability of having a more flexible workforce is emphasised across many health systems, yet this goal is as ambiguous as it is ubiquitous. In the absence of empirical studies in healthcare that have defined flexibility as an outcome, the purpose of this paper is to draw on classic management and sociological theory to reduce this ambiguity. Design/methodology/approach The paper uses the Weberian tool of "ideal types". Key workforce reforms are held against Atkinson's model of functional flexibility, which aims to increase responsiveness and adaptability through multiskilling, autonomy and teams; and Taylorism, which seeks stability and reduced costs through specialisation, fragmentation and management control. Findings Appeals to an amorphous goal of increasing workforce flexibility assume that any reform will increase flexibility. However, this paper finds that the work of healthcare professionals already displays most of the essential features of functional flexibility, but many widespread reforms are shifting healthcare work in a Taylorist direction. This contradiction is symptomatic of a failure to confront inevitable trade-offs in reform: between the benefits of specialisation and the costs of fragmentation; and between management control and professional autonomy. Originality/value The paper questions the conventional conception of "the problem" of workforce reform as primarily one of professional control over tasks. Holding reforms against the ideal types of Taylorism and functional flexibility is a simple, effective way to reveal the costs and benefits of workforce reform.
Curtis, Jeffrey R; Schabert, Vernon F; Harrison, David J; Yeaw, Jason; Korn, Jonathan R; Quach, Caroleen; Yun, Huifeng; Joseph, George J; Collier, David H
2014-07-01
The aim of this analysis was to implement a claims-based algorithm to estimate biologic cost per effectively treated patient for biologics approved for moderate to severe rheumatoid arthritis (RA). This retrospective analysis included commercially insured adults (aged 18-63 years) with RA in a commercial database, who initiated biologic treatment with abatacept, adalimumab, etanercept, golimumab, or infliximab between 2007 and 2010. The algorithm defined effectiveness as having all of the following: high adherence, no biologic dose increase, no biologic switching, no new nonbiologic disease-modifying antirheumatic drug, no increased or new oral glucocorticoid use, and no more than 1 glucocorticoid injection. For each biologic, cost per effectively treated patient was defined as total drug and administration costs (from allowed amounts on claims), divided by the number of patients categorized as effectively treated. Of 15,351 patients, 12,018 (78.3%) were women, and the mean (SD) age was 49.7 (9.6) years. The algorithm categorized treatment as effective in the first year for 30% (1899/6374) of etanercept, 30% (1396/4661) of adalimumab, 20% (560/2765) of infliximab, 27% (361/1338) of abatacept, and 29% (62/213) of golimumab treated patients. The 1-year biologic cost per effectively treated patient, as defined by the algorithm, was nominally lower for subcutaneously injected biologics than for infused biologics, and was lowest for etanercept ($49,952), followed by golimumab ($50,189), adalimumab ($52,858), abatacept ($71,866), and infliximab ($104,333). Algorithm-defined effectiveness was similar for biologics other than infliximab. Copyright © 2014 Elsevier HS Journals, Inc. All rights reserved.
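The claims-based algorithm is a conjunction of six criteria, and the cost metric is a simple ratio of total cost to the count of patients meeting all of them. A minimal sketch with two invented patient records (the field names are assumptions for illustration, not the study's claim variables):

```python
def cost_per_effective(patients):
    """Cost per effectively treated patient: a patient counts as
    'effective' only if every criterion holds; the metric divides total
    drug + administration cost by that count."""
    criteria = ("high_adherence", "no_dose_increase", "no_switch",
                "no_new_dmard", "no_new_steroid", "le_one_injection")
    effective = [p for p in patients if all(p[c] for c in criteria)]
    total_cost = sum(p["cost"] for p in patients)
    return total_cost / len(effective)

# Two invented records: one meets all criteria, one fails adherence.
patients = [
    {"cost": 15000, "high_adherence": True, "no_dose_increase": True,
     "no_switch": True, "no_new_dmard": True, "no_new_steroid": True,
     "le_one_injection": True},
    {"cost": 18000, "high_adherence": False, "no_dose_increase": True,
     "no_switch": True, "no_new_dmard": True, "no_new_steroid": True,
     "le_one_injection": True},
]
print(cost_per_effective(patients))  # total $33,000 over 1 effective patient
```

Note how the metric penalizes ineffectiveness: the second patient's cost is still counted in the numerator even though only the first contributes to the denominator.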
Prioritization methodology for chemical replacement
NASA Technical Reports Server (NTRS)
Goldberg, Ben; Cruit, Wendy; Schutzenhofer, Scott
1995-01-01
This methodology serves to define a system for effective prioritization of efforts required to develop replacement technologies mandated by imposed and forecast legislation. The methodology used is a semiquantitative approach derived from quality function deployment techniques (QFD Matrix). QFD is a conceptual map that provides a method of transforming customer wants and needs into quantitative engineering terms. This methodology aims to weight the full environmental, cost, safety, reliability, and programmatic implications of replacement technology development to allow appropriate identification of viable candidates and programmatic alternatives.
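The QFD weighting described above amounts to a weighted scoring matrix: each candidate's score is the sum of need weights times relationship strengths. A minimal sketch; the needs, candidates, and the conventional 9/3/1 strength scale are illustrative assumptions, not the study's actual matrix:

```python
def qfd_scores(weights, relationships):
    """QFD-style prioritization sketch: customer-need weights times
    need-vs-candidate relationship strengths give candidate scores."""
    return {cand: sum(weights[need] * strength
                      for need, strength in rel.items())
            for cand, rel in relationships.items()}

# Invented need weights (1-5) and relationship strengths (9/3/1 scale).
weights = {"environment": 5, "cost": 3, "safety": 5, "reliability": 4}
relationships = {
    "candidate A": {"environment": 9, "cost": 3, "safety": 3, "reliability": 3},
    "candidate B": {"environment": 3, "cost": 9, "safety": 9, "reliability": 1},
}
print(qfd_scores(weights, relationships))
```

Ranking the resulting scores is what lets the methodology surface viable replacement candidates across environmental, cost, safety, reliability, and programmatic criteria at once.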
Ship Production Symposium Held in New Orleans, Louisiana on August 26-28, 1987
1987-08-01
shape of the ship with defined functions and performance. Examine with what materials, equipment, and methods a ship can be built inexpensively and...client who places an order with a shipyard with this kind of spirit and production system has great assurance for timely and quality performance...the low cost of Japanese ships was caused by cheap labor. But today Japan is one of the countries with a high wage level. Can the shipbuilding
Redox Bulk Energy Storage System Study, Volume 1
NASA Technical Reports Server (NTRS)
Ciprios, G.; Erskine, W., Jr.; Grimes, P. G.
1977-01-01
Opportunities were found for electrochemical energy storage devices in the U.S. electric utility industry. Application requirements for these devices were defined, including techno-economic factors. A new device, the Redox storage battery was analyzed. The Redox battery features a decoupling of energy storage and power conversion functions. General computer methods were developed to simulate Redox system operations. These studies showed that the Redox system is potentially attractive if certain performance goals can be achieved. Pathways for reducing the cost of the Redox system were identified.
1991-05-01
or may not bypass the editing function. At present, editing rules beyond those required for translation have not been stipulated. When explicit...editing rules become defined, the editor at a site LGN may perform two levels of edit checking: warning, which would insert blanks or pass as submitted...position image transactions into a transaction set. This low-level edit checking is performed at the site LGN to reduce transmission costs and to
2013-04-01
Necrotizing Ulcerative Gingivitis (D0140/D4341); Oral Infection or Abscess of Undetermined Origin (D0140/D7510); Other Orofacial Pain (D0140/D9110)...10.7205/MILMED-D-12-00431 MILITARY MEDICINE, Vol. 178, April 2013. METHODS: Dental Emergencies. DE care is designed to relieve oral pain, eliminate...by the authors into three subsets: severe, moderately severe, and pain/loss of function (see Table I). The severe category was defined as DEs
Photothermal characterization of encapsulant materials for photovoltaic modules
NASA Technical Reports Server (NTRS)
Liang, R. H.; Gupta, A.; Distefano, S.
1982-01-01
A photothermal test matrix and a low cost testing apparatus for encapsulant materials of photovoltaic modules were defined. Photothermal studies were conducted to screen and rank existing as well as future encapsulant candidate materials and/or material formulations in terms of their long term physiochemical stability under accelerated photothermal aging conditions. Photothermal characterization of six candidate pottant materials and six candidate outer cover materials were carried out. Principal products of photothermal degradation are identified. Certain critical properties are also monitored as a function of photothermal aging.
Erin L. Landguth,; Muhlfeld, Clint C.; Luikart, Gordon
2012-01-01
We introduce Cost Distance FISHeries (CDFISH), a simulator of population genetics and connectivity in complex riverscapes for a wide range of environmental scenarios of aquatic organisms. The spatially-explicit program implements individual-based genetic modeling with Mendelian inheritance and k-allele mutation on a riverscape with resistance to movement. The program simulates individuals in subpopulations through time employing user-defined functions of individual migration, reproduction, mortality, and dispersal through straying on a continuous resistance surface.
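The k-allele mutation step mentioned above admits a compact sketch: each locus mutates with probability mu, and a mutated locus takes one of the other k-1 alleles uniformly at random. This is a generic illustration of the model, not CDFISH source code:

```python
import random

def mutate_k_allele(genotype, k, mu, rng=random):
    """k-allele mutation: each locus mutates with probability mu to one of
    the other k-1 alleles, chosen uniformly."""
    out = []
    for allele in genotype:
        if rng.random() < mu:
            # Mutated locus always changes to a *different* allele.
            allele = rng.choice([a for a in range(k) if a != allele])
        out.append(allele)
    return out

rng = random.Random(42)
child = mutate_k_allele([0, 1, 2, 3], k=4, mu=0.5, rng=rng)
print(child)
```

In an individual-based simulator like CDFISH this step would run per offspring after Mendelian inheritance, with movement and straying governed separately by the riverscape resistance surface.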
NASA Technical Reports Server (NTRS)
Cruit, Wendy; Schutzenhofer, Scott; Goldberg, Ben; Everhart, Kurt
1993-01-01
This project served to define an appropriate methodology for effective prioritization of technology efforts required to develop replacement technologies mandated by imposed and forecast legislation. The methodology used is a semiquantitative approach derived from quality function deployment techniques (QFD Matrix). This methodology aims to weight the full environmental, cost, safety, reliability, and programmatic implications of replacement technology development to allow appropriate identification of viable candidates and programmatic alternatives. The results will be implemented as a guideline for consideration for current NASA propulsion systems.
Emerging technologies for the changing global market
NASA Technical Reports Server (NTRS)
Cruit, Wendy; Schutzenhofer, Scott; Goldberg, Ben; Everhart, Kurt
1993-01-01
This project served to define an appropriate methodology for effective prioritization of technology efforts required to develop replacement technologies mandated by imposed and forecast legislation. The methodology used is a semiquantitative approach derived from quality function deployment techniques (QFD Matrix). This methodology aims to weight the full environmental, cost, safety, reliability, and programmatic implications of replacement technology development to allow appropriate identification of viable candidates and programmatic alternatives. The results will be implemented as a guideline for consideration for current NASA propulsion systems.
A Study of Alternative Quantile Estimation Methods in Newsboy-Type Problems
1980-03-01
decision maker selects to have on hand. The newsboy cost equation may be formulated as a two-piece continuous linear function in the following manner. C(S...number of observations, some approximations may be possible. Three points which are near each other can be assumed to be linear and some estimator using...respectively. Define the value r as: r = [nq + 0.5] , (6) where [X] denotes the largest integer of X. Let us consider an estimate of X as the linear
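The snippet's order-statistic rule can be made concrete. A hedged sketch, assuming the usual newsboy critical ratio for the target quantile q and the report's equation (6), r = [nq + 0.5]; the function names are mine:

```python
import math

def critical_fractile(c_under, c_over):
    """Newsboy critical ratio: the optimal stock level is the q-th
    quantile of demand, where q balances underage and overage costs."""
    return c_under / (c_under + c_over)

def quantile_order_stat(sample, q):
    """Estimate the q-th quantile by the order statistic X_(r) with
    r = floor(n*q + 0.5), as in the report's equation (6). (The report
    also considers linear interpolation between adjacent order
    statistics; this sketch uses the single order statistic.)"""
    xs = sorted(sample)
    n = len(xs)
    r = math.floor(n * q + 0.5)
    r = min(max(r, 1), n)  # clamp to a valid 1-based index
    return xs[r - 1]

q = critical_fractile(c_under=4.0, c_over=1.0)   # q = 0.8
demand = [12, 7, 15, 9, 11, 14, 8, 13, 10, 16]
stock = quantile_order_stat(demand, q)           # 8th order statistic
```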
NASA Astrophysics Data System (ADS)
Alderliesten, Tanja; Bosman, Peter A. N.; Bel, Arjan
2015-03-01
Incorporating additional guidance information, e.g., landmark/contour correspondence, in deformable image registration is often desirable and is typically done by adding constraints or cost terms to the optimization function. Commonly, deciding between a "hard" constraint and a "soft" additional cost term, as well as the weighting of cost terms in the optimization function, is done on a trial-and-error basis. The aim of this study is to investigate the advantages of exploiting guidance information by taking a multi-objective optimization perspective. To this end, in addition to objectives related to match quality and amount of deformation, we define a third objective related to guidance information. Multi-objective optimization eliminates the need to tune a weighting of objectives in a single optimization function a priori, as well as the strict requirement of fulfilling hard guidance constraints. Instead, Pareto-efficient trade-offs between all objectives are found, effectively making the introduction of guidance information straightforward, independent of its type or scale. Further, since complete Pareto fronts also contain less interesting parts (i.e., solutions with near-zero deformation effort), we study how adaptive steering mechanisms can be incorporated to automatically focus more on solutions of interest. We performed experiments on artificial and real clinical data with large differences, including disappearing structures. Results show the substantial benefit of using additional guidance information. Moreover, compared to the 2-objective case, additional computational cost is negligible. Finally, with the same computational budget, use of the adaptive steering mechanism provides superior solutions in the area of interest.
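The Pareto-efficiency notion underlying this approach is easy to make concrete. A minimal sketch with three minimized objectives per candidate registration; the dominance filter below is generic, not the authors' optimization algorithm:

```python
def dominates(a, b):
    """a Pareto-dominates b (all objectives minimized): a is no worse in
    every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical objective vectors per candidate:
# (match error, deformation effort, guidance mismatch)
candidates = [(0.9, 0.1, 0.5), (0.5, 0.4, 0.3),
              (0.6, 0.5, 0.4), (0.2, 0.9, 0.1)]
front = pareto_front(candidates)  # (0.6, 0.5, 0.4) is dominated
```

Presenting the whole front sidesteps the a priori weighting problem: the trade-off between match quality, deformation, and guidance is chosen after optimization, not before.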
Value-based assessment of robotic pancreas and liver surgery
Patti, James C.; Ore, Ana Sofia; Barrows, Courtney; Velanovich, Vic; Moser, A. James
2017-01-01
Current healthcare economic evaluations are based only on the perspective of a single stakeholder to the healthcare delivery process. A true value-based decision incorporates all of the outcomes that could be impacted by a single episode of surgical care. We define the value proposition for robotic surgery using a stakeholder model incorporating the interests of all groups participating in the provision of healthcare services: patients, surgeons, hospitals and payers. One of the developing and expanding fields that could benefit the most from a complete value-based analysis is robotic hepatopancreaticobiliary (HPB) surgery. While initial robot purchasing costs are high, the benefits over laparoscopic surgery are considerable. Performing a literature search we found a total of 18 economic evaluations for robotic HPB surgery. We found a lack of evaluations that were carried out from a perspective that incorporates all of the impacts of a single episode of surgical care and that included a comprehensive hospital cost assessment. For distal pancreatectomies, the two most thorough examinations came to conflicting results regarding total cost savings compared to laparoscopic approaches. The most thorough pancreaticoduodenectomy evaluation found non-significant savings for total hospital costs. Robotic hepatectomies showed no cost savings over laparoscopic and only modest savings over open techniques. Lastly, robotic cholecystectomies were found to be more expensive than the gold-standard laparoscopic approach. Existing cost accounting data associated with robotic HPB surgery is incomplete and unlikely to reflect the state of this field in the future. Current data combines the learning curves for new surgical procedures being undertaken by HPB surgeons with costs derived from a market dominated by a single supplier of robotic instruments. As a result, the value proposition for stakeholders in this process cannot be defined. 
In order to solve this problem, future studies must incorporate (I) quality of life, survival, and return to independent function alongside data such as (II) intent-to-treat analysis of minimally-invasive surgery accounting for conversions to open, (III) surgeon and institution experience and operative time as surrogates for the learning curve; and (IV) amortization and maintenance costs as well as direct costs of disposables and instruments. PMID:28848747
The relative cost of children's physical play.
Pellegrini; Horvat; Huberty
1998-04-01
There has been a long-standing debate regarding the functions of play during childhood. An important, but neglected, first step in this debate entails documenting the costs associated with play. In this study we analysed energetic costs (expressed in terms of caloric expenditure) associated with physical play in four field experiments of play in primary school children. Experiment 1 established the concurrent validity of an observational check list to estimate caloric expenditure of children's physical play. Experiment 2 compared caloric expenditure of the play (defined as all behaviour exhibited during play time) for two age groups of children during playtime outdoors and during indoor sedentary activity; caloric expenditure of outdoor activity was greater and was significantly correlated with ambient temperature. In experiment 3, children were observed during indoor play to control for the influence of ambient temperature. Outdoor physical play was more energetically costly than indoor physical play. In experiment 4, children's behaviour was observed outdoors and caloric expenditure for play, games and other activities was compared. Physical play was more costly than other forms of behaviour and games. Estimates of total energetic costs of play ranged from 6 to 15%. Results are discussed in terms of the relatively low caloric costs of play. Copyright 1998 The Association for the Study of Animal Behaviour.
The Conundrum of Functional Brain Networks: Small-World Efficiency or Fractal Modularity
Gallos, Lazaros K.; Sigman, Mariano; Makse, Hernán A.
2012-01-01
The human brain has been studied at multiple scales, from neurons, circuits, areas with well-defined anatomical and functional boundaries, to large-scale functional networks which mediate coherent cognition. In a recent work, we addressed the problem of the hierarchical organization in the brain through network analysis. Our analysis identified functional brain modules of fractal structure that were inter-connected in a small-world topology. Here, we provide more details on the use of network science tools to elaborate on this behavior. We indicate the importance of using percolation theory to highlight the modular character of the functional brain network. These modules present a fractal, self-similar topology, identified through fractal network methods. When we lower the threshold of correlations to include weaker ties, the network as a whole assumes a small-world character. These weak ties are organized precisely as predicted by theory maximizing information transfer with minimal wiring costs. PMID:22586406
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behling, H.; Behling, K.; Amarasooriya, H.
1995-02-01
A generic difficulty encountered in cost-benefit analyses is quantifying, in commensurate units, the major elements that define the costs and the benefits. In this study, the costs of making KI available for public use and the benefits (the thyroidal health effects predicted to be avoided through the availability of that KI) are both expressed in the commensurate units of dollars.
45 CFR 149.115 - Cost threshold and cost limit.
Code of Federal Regulations, 2010 CFR
2010-10-01
45 CFR (Public Welfare), 2010-10-01 edition, Requirements for the Early Retiree Reinsurance Program, Reinsurance Amounts, § 149.115 Cost threshold and cost limit: The following cost threshold and cost limits apply individually, to each early retiree as defined...
A cost-efficiency and health benefit approach to improve urban air quality.
Miranda, A I; Ferreira, J; Silveira, C; Relvas, H; Duque, L; Roebeling, P; Lopes, M; Costa, S; Monteiro, A; Gama, C; Sá, E; Borrego, C; Teixeira, J P
2016-11-01
When ambient air quality standards established in the EU Directive 2008/50/EC are exceeded, Member States are obliged to develop and implement Air Quality Plans (AQP) to improve air quality and health. Notwithstanding the achievements in emission reductions and air quality improvement, additional efforts need to be undertaken to improve air quality in a sustainable way, i.e. through a cost-efficiency approach. This work was developed in the scope of the recently concluded MAPLIA project "Moving from Air Pollution to Local Integrated Assessment", and focuses on the definition and assessment of emission abatement measures and their associated costs, air quality and health impacts and benefits by means of air quality modelling tools, health impact functions and cost-efficiency analysis. The MAPLIA system was applied to the Grande Porto urban area (Portugal), addressing PM10 and NOx as the most important pollutants in the region. Four different measures to reduce PM10 and NOx emissions were defined and characterized in terms of emissions and implementation costs, and combined into 15 emission scenarios, simulated by the TAPM air quality modelling tool. Air pollutant concentration fields were then used to estimate health benefits in terms of avoided costs (external costs), using dose-response health impact functions. Results revealed that, among the 15 scenarios analysed, the scenario including all 4 measures led to a total net benefit of €0.3 million per year. The largest net benefit is obtained for the scenario considering the conversion of 50% of open fireplaces into heat recovery wood stoves. Although the implementation costs of this measure are high, the benefits outweigh the costs. Research outcomes confirm that the MAPLIA system is useful for policy decision support on air quality improvement strategies, and could be applied to other urban areas where AQP need to be implemented and monitored. Copyright © 2016. Published by Elsevier B.V.
Marriott, John; Graham-Clarke, Emma; Shirley, Debra; Rushton, Alison
2018-01-01
Objective To evaluate the clinical and cost-effectiveness of non-medical prescribing (NMP). Design Systematic review. Two reviewers independently completed searches, eligibility assessment and assessment of risk of bias. Data sources Pre-defined search terms/combinations were utilised to search electronic databases. In addition, hand searches of reference lists, key journals and grey literature were employed alongside consultation with authors/experts. Eligibility criteria for included studies Randomised controlled trials (RCTs) evaluating clinical or cost-effectiveness of NMP. Measurements reported on one or more outcome(s) of: pain, function, disability, health, social impact, patient safety, cost analysis, quality-adjusted life years (QALYs), patient satisfaction, clinician perception of clinical and functional outcomes. Results Three RCTs from two countries were included (n = 932 participants) across primary and tertiary care settings. One RCT was assessed as low risk of bias, one as high risk of bias and one as unclear risk of bias. All RCTs evaluated clinical effectiveness with one also evaluating cost-effectiveness. Clinical effectiveness was evaluated using a range of safety and patient-reported outcome measures. Participants demonstrated significant improvement in outcomes when receiving NMP compared to treatment as usual (TAU) in all RCTs. An associated cost analysis showed NMP to be more expensive than TAU (regression coefficient p = 0.0000), however experimental groups generated increased QALYs compared to TAU. Conclusion Limited evidence with overall unclear risk of bias exists evaluating clinical and cost-effectiveness of NMP across all professions and clinical settings. GRADE assessment revealed moderate quality evidence. Evidence suggests that NMP is safe and can provide beneficial clinical outcomes. Benefits to the health economy remain unclear, with the cost-effectiveness of NMP assessed by a single pilot RCT of low risk of bias. 
Adequately powered low risk of bias RCTs evaluating clinical and cost effectiveness are required to evaluate NMP across clinical specialities, professions and settings. Registration PROSPERO (CRD42015017212). PMID:29509763
The Economics of NASA Mission Cost Reserves
NASA Technical Reports Server (NTRS)
Whitley, Sally; Shinn, Stephen
2012-01-01
Increases in NASA mission costs are well-noted but not well-understood, and there is little evidence that they are decreasing in frequency or amount over time. The need to control spending has led to analysis of the causes and magnitude of historical mission overruns, and many program control efforts are being implemented to attempt to prevent or mitigate the problem (NPR 7120). However, cost overruns have not abated, and while some direct causes of increased spending may be obvious (requirements creep, launch delays, directed changes, etc.), the underlying impetus to spend past the original budget may be more subtle. Gaining better insight into the causes of cost overruns will help NASA and its contracting organizations to avoid them. This paper hypothesizes that one cause of NASA mission cost overruns is that the availability of reserves gives project team members an incentive to make decisions and behave in ways that increase costs. We theorize that the presence of reserves is a contributing factor to cost overruns because it causes organizations to use their funds less efficiently or to control spending less effectively. We draw a comparison to the insurance industry concept of moral hazard, the phenomenon that the presence of insurance causes insureds to have more frequent and higher insurance losses, and we attempt to apply actuarial techniques to quantify the increase in the expected cost of a mission due to the availability of reserves. We create a theoretical model of reserve spending motivation by defining a variable ReserveSpending as a function of total reserves. This function has a positive slope; for every dollar of reserves available, there is a positive probability of spending it. Finally, the function should be concave down; the probability of spending each incremental dollar of reserves decreases progressively. 
We test the model against available NASA CADRe data by examining missions with reserve dollars initially available and testing whether they are more likely to spend those dollars, and whether larger levels of reserves lead to higher cost overruns. Finally, we address the question of how to prevent reserves from increasing mission spending without increasing cost risk to projects budgeted without any reserves. Is there a "sweet spot"? How can we derive the maximum benefit associated with risk reduction from reserves while minimizing the effects of reserve spending motivation?
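The postulated ReserveSpending curve can be illustrated with any increasing, concave-down function through the origin. A sketch under an assumed exponential form; the functional form and scale parameter are illustrative, not the paper's fitted model:

```python
import math

def expected_reserve_spending(reserves, spend_prob_scale=0.02):
    """Illustrative reserve-spending curve with the three postulated
    properties: zero spending at zero reserves, positive slope (each
    reserve dollar has positive probability of being spent), and
    concave down (each incremental dollar is less likely to be spent).
    The marginal propensity to spend is exp(-b*R), which starts at 1
    and decays toward 0 as reserves R grow."""
    b = spend_prob_scale
    return (1.0 / b) * (1.0 - math.exp(-b * reserves))
```

Fitting such a curve to CADRe overrun data would quantify the "moral hazard" effect the paper describes: the gap between budgeted reserves and the expected spending they induce.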
NASA Astrophysics Data System (ADS)
Hengl, Tomislav
2015-04-01
Efficiency of spatial sampling largely determines success of model building. This is especially important for geostatistical mapping, where an initial sampling plan should provide a good representation or coverage of both geographical space (defined by the study area mask map) and feature space (defined by the multi-dimensional covariates). Otherwise the model will need to extrapolate and, hence, the overall uncertainty of the predictions will be high. In many cases, geostatisticians use point data sets which are produced using unknown or inconsistent sampling algorithms. Many point data sets in environmental sciences suffer from spatial clustering and systematic omission of feature space. But how to quantify these 'representation' problems and how to incorporate this knowledge into model building? The author has developed a generic function called 'spsample.prob' (in the Global Soil Information Facilities package for R) which simultaneously determines (effective) inclusion probabilities as an average between kernel density estimation (geographical spreading of points; analysed using the spatstat package in R) and MaxEnt analysis (feature-space spreading of points; analysed using the MaxEnt software used primarily for species distribution modelling). The output 'iprob' map indicates whether the sampling plan has systematically missed some important locations and/or features, and can also be used as an input for geostatistical modelling, e.g. as a weight map for geostatistical model fitting. The spsample.prob function can also be used in combination with accessibility analysis (costs of a field survey are usually a function of distance from the road network, slope and land cover) to allow for simultaneous maximization of average inclusion probabilities and minimization of total survey costs. 
The author postulates that, by estimating effective inclusion probabilities using combined geographical and feature space analysis, and by comparing survey costs to representation efficiency, an optimal initial sampling plan can be produced which satisfies both criteria: (a) good representation (i.e. within a tolerance threshold), and (b) minimized survey costs. This sampling analysis framework could become especially interesting for generating sampling plans in new areas e.g. for which no previous spatial prediction model exists. The presentation includes data processing demos with standard soil sampling data sets Ebergotzen (Germany) and Edgeroi (Australia), also available via the GSIF package.
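The averaging at the heart of spsample.prob can be sketched as follows. A simplified one-dimensional illustration of the idea (rescale each spreading surface to [0, 1], then average); the real GSIF R function operates on raster maps and differs in detail:

```python
def rescale(xs):
    """Min-max rescale a list of densities to the [0, 1] range."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]

def inclusion_probability(geo_density, feature_density):
    """Combine a geographic spreading surface (kernel density of
    existing points) with a feature-space spreading surface (e.g. a
    MaxEnt prediction) by averaging the rescaled surfaces cell by cell.
    Names and the averaging rule are illustrative assumptions."""
    g, f = rescale(geo_density), rescale(feature_density)
    return [0.5 * (gi + fi) for gi, fi in zip(g, f)]

geo = [0.0, 2.0, 4.0, 8.0]   # sampling effort clusters in the last cells
feat = [8.0, 4.0, 2.0, 0.0]  # covariates well covered in the first cells
iprob = inclusion_probability(geo, feat)
# cells with low iprob flag locations/features the plan has missed
```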
SCM: A method to improve network service layout efficiency with network evolution.
Zhao, Qi; Zhang, Chuanhao; Zhao, Zheng
2017-01-01
Network services are an important component of the Internet, used to expand network functions for third-party developers. Network function virtualization (NFV) can improve the speed and flexibility of network service deployment. However, as the network evolves, the network service layout may become inefficient. To address this problem, this paper proposes a service chain migration (SCM) method within the "software defined network + network function virtualization" (SDN+NFV) framework, which migrates service chains to adapt to network evolution and improves the efficiency of the network service layout. SCM is modeled as an integer linear programming problem and resolved via particle swarm optimization. An SCM prototype system is designed based on an SDN controller. Experiments demonstrate that SCM can reduce network traffic cost and energy consumption efficiently.
Gallos, Lazaros K; Makse, Hernán A; Sigman, Mariano
2012-02-21
The human brain is organized in functional modules. Such an organization presents a basic conundrum: Modules ought to be sufficiently independent to guarantee functional specialization and sufficiently connected to bind multiple processors for efficient information transfer. It is commonly accepted that small-world architecture of short paths and large local clustering may solve this problem. However, there is intrinsic tension between shortcuts generating small worlds and the persistence of modularity, a global property unrelated to local clustering. Here, we present a possible solution to this puzzle. We first show that a modified percolation theory can define a set of hierarchically organized modules made of strong links in functional brain networks. These modules are "large-world" self-similar structures and, therefore, are far from being small-world. However, incorporating weaker ties to the network converts it into a small world preserving an underlying backbone of well-defined modules. Remarkably, weak ties are precisely organized as predicted by theory maximizing information transfer with minimal wiring cost. This trade-off architecture is reminiscent of the "strength of weak ties" crucial concept of social networks. Such a design suggests a natural solution to the paradox of efficient information flow in the highly modular structure of the brain.
Biological Implications of Dynamical Phases in Non-equilibrium Networks
NASA Astrophysics Data System (ADS)
Murugan, Arvind; Vaikuntanathan, Suriyanarayanan
2016-03-01
Biology achieves novel functions like error correction, ultra-sensitivity and accurate concentration measurement at the expense of free energy through Maxwell Demon-like mechanisms. The design principles and free energy trade-offs have been studied for a variety of such mechanisms. In this review, we emphasize a perspective based on dynamical phases that can explain commonalities shared by these mechanisms. Dynamical phases are defined by typical trajectories executed by non-equilibrium systems in the space of internal states. We find that coexistence of dynamical phases can have dramatic consequences for function vs free energy cost trade-offs. Dynamical phases can also provide an intuitive picture of the design principles behind such biological Maxwell Demons.
On the Impact of Local Taxes in a Set Cover Game
NASA Astrophysics Data System (ADS)
Escoffier, Bruno; Gourvès, Laurent; Monnot, Jérôme
Given a collection C of weighted subsets of a ground set E, the SET cover problem is to find a minimum weight subset of C which covers all elements of E. We study a strategic game defined upon this classical optimization problem. Every element of E is a player which chooses one set of C where it appears. Following a public tax function, every player is charged a fraction of the weight of the set that it has selected. Our motivation is to design a tax function having the following features: it can be implemented in a distributed manner, existence of an equilibrium is guaranteed and the social cost for these equilibria is minimized.
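One natural candidate tax, splitting a set's weight equally among the players that chose it, can be simulated directly in a distributed manner. A sketch of best-response dynamics under that assumed equal-split rule; the tax functions the paper actually designs and analyzes may differ:

```python
def best_response_dynamics(weights, membership, rounds=100):
    """Each element (player) selects one set containing it; under an
    equal-split tax, a player on set s pays weights[s] / (players on s).
    Players repeatedly switch to a cheaper available set until no one
    can improve, i.e. a pure Nash equilibrium is reached."""
    choice = {e: opts[0] for e, opts in membership.items()}  # arbitrary start
    for _ in range(rounds):
        changed = False
        for e, opts in membership.items():
            def cost(s):
                # number of players on s if e (re)locates there
                n = sum(1 for p, c in choice.items() if c == s and p != e) + 1
                return weights[s] / n
            best = min(opts, key=cost)
            if cost(best) < cost(choice[e]) - 1e-12:
                choice[e] = best
                changed = True
        if not changed:
            break
    social_cost = sum(weights[s] for s in set(choice.values()))
    return choice, social_cost

# Ground set {a, b}; set 0 = {a} (weight 1), set 1 = {a, b} (weight 1.5),
# set 2 = {b} (weight 1): sharing set 1 is cheaper for both players.
weights = {0: 1.0, 1: 1.5, 2: 1.0}
membership = {"a": [0, 1], "b": [1, 2]}
choice, social_cost = best_response_dynamics(weights, membership)
```

Here the dynamics converge to both players sharing set 1, which is also the minimum-weight cover; in general, the quality of such equilibria relative to the optimum (the price of anarchy) depends on the tax function chosen.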
[Development of an open instrument for generating and measuring physiological signals].
Chen, Longcong; Hu, Guohu; Gao, Bin
2004-12-01
An open instrument with a liquid crystal display (LCD) for generating and measuring physiological signals is introduced in this paper. Based on a single-chip microcomputer, the instrument uses an LCD screen to display signal waveforms and information, and it supports man-machine interaction through a keyboard. The instrument can produce not only commonly used predefined signals, using stored data and the relevant algorithms, but also user-defined signals; it is therefore open with respect to signal generation. In addition, the instrument is highly extensible thanks to its computer-like modular design, offers functions such as displaying, measuring, and saving physiological signals, and features low power consumption, small volume, low cost, and portability. Hence this instrument is convenient for laboratory teaching, clinical examination, and the maintenance of medical instruments.
Platinum-Based Nanocages with Subnanometer-Thick Walls and Well-Defined Facets
Zhang, Lei; Wang, Xue; Chi, Miaofang; ...
2015-07-24
A cost-effective catalyst should have a high dispersion of the active atoms, together with a controllable surface structure for the optimization of activity, selectivity, or both. We fabricated nanocages by depositing a few atomic layers of platinum (Pt) as conformal shells on palladium (Pd) nanocrystals with well-defined facets and then etching away the Pd templates. Density functional theory calculations suggest that the etching is initiated via a mechanism that involves the formation of vacancies through the removal of Pd atoms incorporated into the outermost layer during the deposition of Pt. With the use of Pd nanoscale cubes and octahedra as templates, we obtained Pt cubic and octahedral nanocages enclosed by {100} and {111} facets, respectively, which exhibited distinctive catalytic activities toward oxygen reduction.
2017-01-01
In an ideal plasmonic surface sensor, the bioactive area, where analytes are recognized by specific biomolecules, is surrounded by an area that is generally composed of a different material. The latter, often the surface of the supporting chip, is generally hard to functionalize selectively with respect to the active area. As a result, cross-talk between the active area and the surrounding one may occur. In designing a plasmonic sensor, various issues must be addressed: the specificity of analyte recognition, the orientation of the immobilized biomolecule that acts as the analyte receptor, and the selectivity of surface coverage. The objective of this tutorial review is to introduce the main rational tools required for a correct and complete approach to chemically functionalizing plasmonic surface biosensors. After a short introduction, the review discusses, in detail, the most common strategies for achieving effective surface functionalization. The most important issues, such as the orientation of active molecules and spatial and chemical selectivity, are considered. A list of well-defined protocols is suggested for the most common practical situations. Importantly, for the reported protocols, we also present direct comparisons in terms of costs, labor demand, and risk vs benefit balance. In addition, a survey of the most used characterization techniques necessary to validate the chemical protocols is reported. PMID:28796479
DRG migration: A novel measure of inefficient surgical care in a value-based world.
Hughes, Byron D; Mehta, Hemalkumar B; Sieloff, Eric; Shan, Yong; Senagore, Anthony J
2018-03-01
Diagnosis-Related Group (DRG) migration, from DRG 331 to DRG 330, is defined as assignment to a higher-cost DRG due solely to post-admission comorbidities or complications (CC). We assessed the 5% national Medicare data set (2011-2014) for colectomy (DRGs 331/330), excluding present-on-admission CCs and selecting patients with one or more CCs post-admission, to define the impact on payments, cost, and length of stay (LOS). The incidence of DRG migration was 14.2%. This was associated with statistically significant increases in payments, hospital cost, and LOS compared to DRG 331 patients. When the DRG migration rate was extrapolated to the entire at-risk population, the results were an increase of Centers for Medicare and Medicaid Services (CMS) cost by $98 million, hospital cost by $418 million, and excess hospital days equaling 68,669 days. These negative outcomes represent potentially unnecessary variations in the processes of care, and therefore a unique economic concept defining inefficient surgical care. Copyright © 2017 Elsevier Inc. All rights reserved.
Digitizing dissertations for an institutional repository: a process and cost analysis.
Piorun, Mary; Palmer, Lisa A
2008-07-01
This paper describes the Lamar Soutter Library's process and costs associated with digitizing 300 doctoral dissertations for a newly implemented institutional repository at the University of Massachusetts Medical School. Project tasks included identifying metadata elements, obtaining and tracking permissions, converting the dissertations to an electronic format, and coordinating workflow between library departments. Each dissertation was scanned, reviewed for quality control, enhanced with a table of contents, processed through an optical character recognition function, and added to the institutional repository. Three hundred and twenty dissertations were digitized and added to the repository for a cost of $23,562, or $0.28 per page. Seventy-four percent of the authors who were contacted (n = 282) granted permission to digitize their dissertations. Processing time per title was 170 minutes, for a total processing time of 906 hours. In the first 17 months, full-text dissertations in the collection were downloaded 17,555 times. Locally digitizing dissertations or other scholarly works for inclusion in institutional repositories can be cost effective, especially if small, defined projects are chosen. A successful project serves as an excellent recruitment strategy for the institutional repository and helps libraries build new relationships. Challenges include workflow, cost, policy development, and copyright permissions.
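The reported unit figures are mutually consistent, as a quick check shows. The page count below is inferred from the stated totals ($23,562 at $0.28 per page) rather than given explicitly in the abstract:

```python
def project_unit_costs(total_cost, pages, titles, minutes_per_title):
    """Reproduce the per-unit figures from the reported totals: cost per
    page, cost per title, and total processing hours (320 titles at
    170 minutes each)."""
    return {
        "cost_per_page": round(total_cost / pages, 2),
        "cost_per_title": round(total_cost / titles, 2),
        "processing_hours": round(titles * minutes_per_title / 60, 1),
    }

# ~84,150 pages is implied by $23,562 total at $0.28 per page
stats = project_unit_costs(23562, 84150, 320, 170)
```

The 906.7 computed hours match the ~906 hours reported, and the roughly $74 per title helps explain why small, defined projects are cost-effective.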
Chen, Cong; Beckman, Robert A
2009-01-01
This manuscript discusses optimal cost-effective designs for Phase II proof of concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure of development cost), which is naturally captured in an efficiency score to be defined in this manuscript. Optimization of the score function leads to the type I error rate and power (and therefore sample size) for the trial that is most cost-effective. This in turn leads to cost-effective go/no-go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide any general conclusion because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, readers should be able to gain useful insight into these design problems and apply it to the design of their own PoC trials.
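As a rough illustration of the idea of optimizing type I error and power jointly, one can grid-search both quantities against a hypothetical efficiency score. The score below (expected true positives minus false positives per unit sample size, under a one-sided z-test with an assumed standardized effect) is an illustrative stand-in, not the authors' actual formulation:

```python
from statistics import NormalDist

def sample_size(alpha, power, delta):
    # Approximate n for a one-sided z-test at standardized effect delta
    # (normal-approximation assumption, not from the manuscript).
    z = NormalDist().inv_cdf
    return ((z(1 - alpha) + z(power)) / delta) ** 2

def efficiency_score(alpha, power, p_active, delta=0.3):
    # Hypothetical score: expected true positives minus false positives,
    # per unit of patient exposure -- NOT the authors' actual formula.
    n = sample_size(alpha, power, delta)
    return (p_active * power - (1 - p_active) * alpha) / n

# Grid-search (alpha, power) for an assumed 20% probability of a truly
# active treatment; the most efficient pair defines the go/no-go design.
best = max(
    ((a, b) for a in (0.05, 0.10, 0.20) for b in (0.70, 0.80, 0.90)),
    key=lambda ab: efficiency_score(*ab, p_active=0.2),
)
print(best)
```

Under these particular assumptions the search favors a small trial with a strict type I error rate; a higher prior probability of activity shifts the optimum toward larger, better-powered trials.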
European Union-28: An annualised cost-of-illness model for venous thromboembolism.
Barco, Stefano; Woersching, Alex L; Spyropoulos, Alex C; Piovella, Franco; Mahan, Charles E
2016-04-01
Annual costs for venous thromboembolism (VTE) have been defined within the United States (US), demonstrating a large opportunity for cost savings. Costs for the European Union-28 (EU-28) have never been defined. A literature search was conducted to evaluate EU-28 cost sources. Median costs were defined for each cost input, and costs were inflated to 2014 Euros (€) in the study country and adjusted for Purchasing Power Parity between EU countries. Adjusted costs were used to populate previously published cost models based on adult incidence-based events. In the base model, annual expenditures for total, hospital-associated, preventable, and indirect costs were €1.5-2.2 billion, €1.0-1.5 billion, €0.5-1.1 billion, and €0.2-0.3 billion, respectively (indirect costs: 12% of expenditures). In the long-term attack rate model, total, hospital-associated, preventable, and indirect costs were €1.8-3.3 billion, €1.2-2.4 billion, €0.6-1.8 billion, and €0.2-0.7 billion (indirect costs: 13% of expenditures). In the multiway sensitivity analysis, annual expenditures for total, hospital-associated, preventable, and indirect costs were €3.0-8.5 billion, €2.2-6.2 billion, €1.1-4.6 billion, and €0.5-1.4 billion (indirect costs: 22% of expenditures). When the value of a premature life lost increased slightly, aggregate costs rose considerably, since these costs are higher than the direct medical costs. When evaluating the models aggregately for costs, the results suggest total, hospital-associated, preventable, and indirect costs ranging from €1.5-13.2 billion, €1.0-9.7 billion, €0.5-7.3 billion, and €0.2-6.1 billion, respectively. Our study demonstrates that VTE costs have a large financial impact upon the EU-28's healthcare systems and that significant savings could be realised if better preventive measures are applied.
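The inflation and Purchasing Power Parity adjustment step described here can be sketched as a two-factor rescaling. The CPI and PPP factors below are illustrative placeholders, not the study's actual inputs, and the direction of the PPP adjustment is an assumption:

```python
def adjust_cost(cost, cpi_then, cpi_2014, ppp_source, ppp_target):
    # Step 1: inflate the cost to 2014 within the study country
    # using a consumer-price-index ratio.
    inflated = cost * (cpi_2014 / cpi_then)
    # Step 2: rescale by a PPP ratio between the source and target
    # country (assumed multiplicative form).
    return inflated * (ppp_target / ppp_source)

# Illustrative numbers only: a €1,000 cost reported when CPI was 95.0,
# inflated to a 2014 CPI of 104.5, then PPP-adjusted between countries.
print(round(adjust_cost(1000.0, cpi_then=95.0, cpi_2014=104.5,
                        ppp_source=0.90, ppp_target=1.05), 2))
```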
From portable dialysis to a bioengineered kidney.
van Gelder, Maaike K; Mihaila, Silvia M; Jansen, Jitske; Wester, Maarten; Verhaar, Marianne C; Joles, Jaap A; Stamatialis, Dimitrios; Masereeuw, Roos; Gerritsen, Karin G F
2018-05-01
Since the advent of peritoneal dialysis (PD) in the 1970s, the principles of dialysis have changed little. In the coming decades, several major breakthroughs are expected. Areas covered: Novel wearable and portable dialysis devices for both hemodialysis (HD) and PD are expected first. The HD devices could facilitate more frequent and longer dialysis outside of the hospital, while improving patients' mobility and autonomy. The PD devices could enhance blood purification and increase technique survival of PD. Further away from clinical application is the bioartificial kidney (BAK), containing renal cells. Initially, the BAK could be applied for extracorporeal treatment, to partly replace renal tubular endocrine, metabolic, immunoregulatory, and secretory functions. Subsequently, intracorporeal treatment may become possible. Expert commentary: Key factors for successful implementation of miniature dialysis devices are patient attitudes and cost-effectiveness. A well-functioning and safe extracorporeal blood circuit is required for HD. For PD, a double-lumen PD catheter would optimize performance. Future research should focus on further miniaturization of the urea removal strategy. For the BAK, cost-effectiveness should be determined and a general set of functional requirements should be defined for future studies. For intracorporeal application, water reabsorption will become a major challenge.
Trade-Space Analysis Tool for Constellations (TAT-C)
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Dabney, Philip; de Weck, Olivier; Foreman, Veronica; Grogan, Paul; Holland, Matthew; Hughes, Steven; Nag, Sreeja
2016-01-01
Traditionally, space missions have relied on relatively large and monolithic satellites, but in the past few years, under a changing technological and economic environment (including instrument and spacecraft miniaturization, scalable launchers, secondary launches, and hosted payloads), there is growing interest in implementing future NASA missions as Distributed Spacecraft Missions (DSM). The objective of our project is to provide a framework that facilitates DSM Pre-Phase A investigations and optimizes DSM designs with respect to a priori science goals. In this first version of our Trade-space Analysis Tool for Constellations (TAT-C), we are investigating questions such as: How many spacecraft should be included in the constellation? Which design has the best cost-risk value? The main goals of TAT-C are to handle multiple spacecraft sharing a mission objective, from SmallSats up through flagships; to explore the trade space of variables for pre-defined science, cost, and risk goals and metrics; and to optimize cost and performance across multiple instruments and platforms rather than one at a time. This paper describes the overall architecture of TAT-C, including: a User Interface (UI) interacting with multiple users (scientists, mission designers, or program managers); and an Executive Driver that gathers requirements from the UI and formulates Trade-space Search Requests for the Trade-space Search Iterator, first with inputs from the Knowledge Base, and then, in collaboration with the Orbit Coverage, Reduction Metrics, and Cost Risk modules, generates multiple potential architectures and their associated characteristics.
TAT-C leverages the Goddard Mission Analysis Tool (GMAT) to compute coverage and ancillary data, streamlining the computations by modeling orbits in a way that balances accuracy and performance. The current version of TAT-C includes uniform Walker constellations as well as ad-hoc constellations, and its cost model is an aggregate model consisting of Cost Estimating Relationships (CERs) from widely accepted models. The Knowledge Base supports both analysis and exploration, and the current GUI prototype automatically generates graphics representing metrics such as average revisit time or coverage as a function of cost.
Burton, R; Mauk, D
1993-03-01
By integrating customer satisfaction planning and industrial engineering techniques when examining internal costs and efficiencies, materiel managers can better determine which concepts will best meet their customers' needs. Defining your customer(s), applying industrial engineering techniques, completing work sampling studies, itemizing the recommendations and benefits of each alternative, performing feasibility and cost-analysis matrices, and utilizing resources through productivity monitoring will get you on the right path toward selecting concepts to use. This article reviews the above procedures as they applied to one hospital's decision-making process to determine whether to incorporate a stockless inventory program. Through an analysis of customer demand, the hospital realized that stockless was the way to go, but not by outsourcing the function--the hospital incorporated an in-house stockless inventory program.
Lima, Fabiano F; Camillo, Carlos A; Gobbo, Luis A; Trevisan, Iara B; Nascimento, Wesley B B M; Silva, Bruna S A; Lima, Manoel C S; Ramos, Dionei; Ramos, Ercy M C
2018-03-01
The objectives of the study were to compare the effects of resistance training using either low-cost, portable elastic tubing or conventional weight machines on muscle force, functional exercise capacity, and health-related quality of life (HRQOL) in middle-aged to older healthy adults. In this clinical trial, twenty-nine middle-aged to older healthy adults were randomly assigned to one of three a priori defined groups: resistance training with elastic tubing (ETG; n = 10), conventional resistance training with weight machines (CTG; n = 9), or a control group (CG; n = 10). Both the ETG and the CTG followed a 12-week resistance training program (3×/week, upper and lower limbs). Muscle force, functional exercise capacity, and HRQOL were evaluated at baseline, 6 weeks, and 12 weeks. The CG underwent the three evaluations with no formal intervention or activity counseling provided. The ETG and CTG showed similar, significant increases in muscle force (Δ16-44% in ETG and Δ25-46% in CTG; p < 0.05 for both) and functional exercise capacity (Δ4 ± 4% in ETG and Δ6 ± 8% in CTG; p < 0.05 for both). Improvement in the "pain" domain of HRQOL was observed only in the CTG (Δ21 ± 26%; p = 0.037). The CG showed no statistically significant improvement in any of the variables investigated. Resistance training using elastic tubing (a low-cost, portable tool) and conventional resistance training using weight machines promoted similar positive effects on peripheral muscle force and functional exercise capacity in middle-aged to older healthy adults.
McIntyre, Lynn; Kwok, Cynthia; Emery, J C Herbert; Dutton, Daniel J
2016-08-15
Although there is widespread recognition that poverty is a key determinant of health, there has been less research on the impact of poverty reduction on health. Recent calls for a guaranteed annual income (GAI), defined as regular income provided to citizens by the state regardless of work status, raise questions about the impact, relative to the costs, of such a population health intervention. The objective of this study was to determine the impact of Canadian seniors' benefits (Old Age Security/Guaranteed Income Supplement, analogous to a GAI program) on the self-reported health, self-reported mental health, and functional health of age-eligible, low-income seniors. We used the 2009-2010 Canadian Community Health Survey to examine unattached adult respondents with an annual income of $20,000 or less, stratified by seniors' benefits/GAI eligibility (55-64 years: ineligible; 65-74 years: eligible). Using regression, we assessed self-reported health, self-reported mental health, and functional health, as measured by the Health Utilities Index, as outcomes for seniors' benefits/GAI-eligible and -ineligible groups. We found that individuals age-eligible for seniors' benefits/GAI had better health outcomes than recipients of conditional income assistance programs. Eligibility for seniors' benefits/GAI after age 64 was associated with better self-reported health, functional health, and self-reported mental health outcomes, and these effects were observed until age 74. Using seniors' benefits as an example, a GAI leads to significantly better mental health and improved health overall. These improvements are likely to yield reduced health care costs, which may offset the costs associated with program expansion.
An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics
NASA Astrophysics Data System (ADS)
Turkington, Bruce
2013-08-01
A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
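Schematically, and in notation assumed here rather than taken from the paper, the lack-of-fit cost and the optimization principle just described can be written as:

```latex
% Lack-of-fit of a statistical-model path \rho(t) to the Liouville
% equation \partial_t \rho = \{H,\rho\}; W is the arbitrary weighting
% operator mentioned in the abstract (notation is this sketch's own).
\[
  \sigma\!\left(\rho,\dot\rho\right)
  = \tfrac{1}{2}\,
    \Big\langle \big\| W^{1/2}\big( \dot\rho - \{H,\rho\} \big) \big\|^{2} \Big\rangle ,
  \qquad
  \min_{\rho(\cdot)} \int_{0}^{T} \sigma\!\left(\rho(t),\dot\rho(t)\right)\,dt .
\]
```

The value function of this minimization is what the abstract identifies with the dissipation potential relating thermodynamic forces and fluxes; the closed reduced equations follow from Hamilton-Jacobi theory applied to this variational problem.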
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology, and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
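The evaluation of design alternatives against cost-risk, performance, and schedule criteria can be illustrated with a weighted-sum utility score. This is a deliberately simplified stand-in for the AHP / functional-utility step; the weights and criterion scores below are hypothetical:

```python
def utility(alternative, weights):
    # Weighted-sum utility over normalized criterion scores in [0, 1]
    # (higher is better); a simplified stand-in for AHP-derived weights.
    return sum(weights[c] * alternative[c] for c in weights)

# Hypothetical weights and scores, not from the NASA GRC/Boeing model.
weights = {"cost_risk": 0.5, "performance": 0.3, "schedule": 0.2}
alternatives = {
    "A": {"cost_risk": 0.8, "performance": 0.6, "schedule": 0.7},
    "B": {"cost_risk": 0.6, "performance": 0.9, "schedule": 0.8},
}
best = max(alternatives, key=lambda k: utility(alternatives[k], weights))
print(best)
```

In a full AHP the weights would themselves be derived from pairwise comparisons rather than assigned directly, but the final ranking step has this weighted-sum shape.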
Birnbaum, Marvin L; Daily, Elaine K; O'Rourke, Ann P
2016-04-01
The principal goal of research relative to disasters is to decrease the risk that a hazard will result in a disaster. Disaster studies pursue two distinct directions: (1) epidemiological (non-interventional); and (2) interventional. Both interventional and non-interventional studies require data/information obtained from assessments of function. Non-interventional studies examine the epidemiology of disasters. Interventional studies evaluate specific interventions/responses in terms of their effectiveness in meeting their respective objectives, their contribution to the overarching goal, other effects created, their respective costs, and the efficiency with which they achieved their objectives. The results of interventional studies should contribute to evidence that will be used to inform the decisions used to define standards of care and best practices for a given setting based on these standards. Interventional studies are based on the Disaster Logic Model (DLM) and are used to change or maintain levels of function (LOFs). Relief and Recovery interventional studies seek to determine the effects, outcomes, impacts, costs, and value of the intervention provided after the onset of a damaging event. The Relief/Recovery Framework provides the structure needed to systematically study the processes involved in providing relief or recovery interventions that result in a new LOF for a given Societal System and/or its component functions. 
It consists of the following transformational processes (steps): (1) identification of the functional state prior to the onset of the event (pre-event); (2) assessments of the current functional state; (3) comparison of the current functional state with the pre-event state and with the results of the last assessment; (4) needs identification; (5) strategic planning, including establishing the overall strategic goal(s), objectives, and priorities for interventions; (6) identification of options for interventions; (7) selection of the most appropriate intervention(s); (8) operational planning; (9) implementation of the intervention(s); (10) assessments of the effects and changes in LOFs resulting from the intervention(s); (11) determination of the costs of providing the intervention; (12) determination of the current functional status; (13) synthesis of the findings with current evidence to define the benefits and value of the intervention to the affected population; and (14) codification of the findings into new evidence. Each of these steps in the Framework is a production function that facilitates evaluation, and the outputs of the transformation process establish the current state for the next step in the process. The evidence obtained is integrated into augmenting the respective Response Capacities of a community-at-risk. The ultimate impact of enhanced Response Capacity is determined by studying the epidemiology of the next event.
26 CFR 1.401(l)-3 - Permitted disparity for defined benefit plans.
Code of Federal Regulations, 2013 CFR
2013-04-01
... requirement of this paragraph (b). Thus, for example, if the form of a defined benefit plan's normal... the optional form (the “offset amount”). (D) Post-retirement cost-of-living adjustments—(1) In general... merely because it provides an automatic post-retirement cost-of-living adjustment that satisfies...
26 CFR 1.401(l)-3 - Permitted disparity for defined benefit plans.
Code of Federal Regulations, 2010 CFR
2010-04-01
... of this paragraph (b). Thus, for example, if the form of a defined benefit plan's normal retirement... the optional form (the “offset amount”). (D) Post-retirement cost-of-living adjustments—(1) In general... merely because it provides an automatic post-retirement cost-of-living adjustment that satisfies...
Productivity costs in patients with refractory chronic rhinosinusitis.
Rudmik, Luke; Smith, Timothy L; Schlosser, Rodney J; Hwang, Peter H; Mace, Jess C; Soler, Zachary M
2014-09-01
Disease-specific reductions in patient productivity can lead to substantial economic losses to society. The purpose of this study was to: 1) define the annual productivity cost for a patient with refractory chronic rhinosinusitis (CRS) and 2) evaluate the relationship between degree of productivity cost and CRS-specific characteristics. Prospective, multi-institutional, observational cohort study. The human capital approach was used to define productivity costs. Annual absenteeism, presenteeism, and lost leisure time were quantified to define annual lost productive time (LPT). LPT was monetized using the annual daily wage rates obtained from the 2012 U.S. National Census and the 2013 U.S. Department of Labor statistics. A total of 55 patients with refractory CRS were enrolled. The mean work days lost related to absenteeism and presenteeism were 24.6 and 38.8 days per year, respectively. A total of 21.2 household days were lost per year related to daily sinus care requirements. The overall annual productivity cost was $10,077.07 per patient with refractory CRS. Productivity costs increased with worsening disease-specific QoL (r = 0.440; p = 0.001). Results from this study have demonstrated that the annual productivity cost associated with refractory CRS is $10,077.07 per patient. This substantial cost to society provides a strong incentive to optimize current treatment protocols and continue evaluating novel clinical interventions to reduce this cost. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
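The human-capital monetization described in this abstract amounts to multiplying lost days by daily wage rates. The sketch below uses the study's reported mean days lost but illustrative placeholder wage values, not the 2012 Census / 2013 Department of Labor rates the authors applied, so it does not reproduce the $10,077.07 figure:

```python
def annual_productivity_cost(absenteeism_days, presenteeism_days,
                             household_days, work_day_wage,
                             household_day_value):
    # Human-capital approach: monetize lost productive time (LPT) by
    # multiplying lost days by a daily wage. Wage inputs here are
    # illustrative placeholders, not the study's actual rates.
    lost_work_days = absenteeism_days + presenteeism_days
    return (lost_work_days * work_day_wage
            + household_days * household_day_value)

# Study-reported mean days lost per patient per year; wages hypothetical.
cost = annual_productivity_cost(24.6, 38.8, 21.2,
                                work_day_wage=120.0,
                                household_day_value=60.0)
print(round(cost, 2))
```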
NASA Astrophysics Data System (ADS)
Daniell, James; Wenzel, Friedemann
2014-05-01
Over the past decade, the production of economic indices behind the CATDAT Damaging Earthquakes Database has allowed for the conversion of historical earthquake economic loss and cost events into today's terms using long-term spatio-temporal series of consumer price index (CPI), construction costs, wage indices, and GDP from 1900-2013. As part of the doctoral thesis of Daniell (2014), databases and GIS layers at a country and sub-country level have been produced for population, GDP per capita, and net and gross capital stock (depreciated and non-depreciated) using studies, census information, and the perpetual inventory method. In addition, a detailed study has been undertaken to collect and reproduce as many historical isoseismal maps, macroseismic intensity results, and earthquake reconstructions as possible out of the 7208 damaging events in the CATDAT database from 1900 onwards. a) The isoseismal database and population bounds from 3000+ collected damaging events were compared with the output parameters of GDP and net and gross capital stock per intensity bound and administrative unit, creating a spatial join for analysis. b) The historical costs were divided into shaking/direct ground motion costs and secondary effects costs. The shaking costs were further divided into gross-capital-stock-related and GDP-related costs for each administrative unit and intensity bound couplet. c) Costs were then estimated by regressing costs against gross capital stock and against GDP and optimising the fitted functions. Losses were estimated based on net capital stock, looking at the infrastructure age and value at the time of the event. This dataset was then used to develop an economic exposure for each historical earthquake in comparison with the loss recorded in the CATDAT Damaging Earthquakes Database.
The production of economic fragility functions for each country was possible using a temporal regression based on the parameters of macroseismic intensity, capital stock estimate, GDP estimate, year, and the combined seismic building index (a created combination of the global seismic code index, building practice factor, building age, and infrastructure vulnerability). The analysis provided three key results: a) The economic fragility functions produced from the 1900-2008 events showed very good correlation to the economic loss and cost from earthquakes from 2009-2013, in real time. This methodology has been extended to other natural disaster types (typhoon, flood, drought). b) The reanalysis of historical earthquake events made it possible to check associated historical losses and costs against the expected exposure in terms of intensities. The 1939 Chillan, 1948 Turkmenistan, 1950 Iran, 1972 Managua, 1980 Western Nepal, and 1992 Erzincan earthquake events were seen as huge outliers compared with the modelled capital stock and GDP, and thus additional studies were undertaken to check the original loss results. c) A worldwide GIS layer database of capital stock (gross and net), GDP, infrastructure age, and economic indices over the period 1900-2013 has been created in conjunction with the CATDAT database in order to define correct economic losses and costs.
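The regression step underlying such fragility functions can be illustrated with a minimal ordinary-least-squares fit of loss against intensity. This is a toy stand-in for the multi-parameter temporal regression described above, with hypothetical data, not CATDAT values:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x; an illustrative stand-in
    # for the temporal regressions described above, not the CATDAT model.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical (macroseismic intensity, log loss ratio) pairs.
a, b = fit_line([6, 7, 8, 9], [-4.0, -3.2, -2.4, -1.6])
print(a, b)
```

The fitted slope and intercept would then be the country-specific fragility parameters; in the actual work, capital stock, GDP, year, and the combined seismic building index enter as additional regressors.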
Multi-criteria analysis for PM10 planning
NASA Astrophysics Data System (ADS)
Pisoni, Enrico; Carnevale, Claudio; Volta, Marialuisa
To implement sound air quality policies, Regulatory Agencies require tools to evaluate outcomes and costs associated with different emission reduction strategies. These tools are even more useful when considering atmospheric PM10 concentrations, due to the complex nonlinear processes that affect production and accumulation of the secondary fraction of this pollutant. The approaches presented in the literature (Integrated Assessment Modeling) are mainly cost-benefit and cost-effectiveness analyses. In this work, the formulation of a multi-objective problem to control particulate matter is proposed. The methodology defines: (a) the control objectives (the air quality indicator and the emission reduction cost functions); (b) the decision variables (precursor emission reductions); (c) the problem constraints (maximum feasible technology reductions). The cause-effect relations between air quality indicators and decision variables are identified by tuning nonlinear source-receptor models. The multi-objective problem solution provides the decision maker with a set of non-dominated scenarios representing the efficient trade-off between the air quality benefit and the internal costs (emission reduction technology costs). The methodology has been implemented for Northern Italy, which is often affected by high long-term exposure to PM10. The source-receptor models used in the multi-objective analysis are identified by processing long-term simulations of the GAMES multiphase modeling system, performed in the framework of the CAFE-Citydelta project.
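The non-dominated (Pareto-efficient) scenario set at the heart of this methodology can be computed with a straightforward filter. The scenarios below are hypothetical (air quality indicator, emission reduction cost) pairs, both to be minimized, not outputs of the source-receptor models:

```python
def non_dominated(scenarios):
    # Keep scenarios not dominated in (air-quality indicator, control
    # cost). A scenario dominates another if it is no worse in both
    # objectives and strictly better in at least one (both minimized).
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [s for s in scenarios
            if not any(dominates(t, s) for t in scenarios if t != s)]

# Hypothetical (PM10 exposure indicator, emission-reduction cost) pairs.
scenarios = [(40, 10), (35, 14), (35, 20), (30, 25), (45, 9)]
print(non_dominated(scenarios))
```

Here (35, 20) is discarded because (35, 14) achieves the same air quality at lower cost; the surviving set is the efficient trade-off curve offered to the decision maker.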
National Stormwater Calculator: Low Impact Development ...
The National Stormwater Calculator (NSC) makes it easy to estimate runoff reduction when planning a new development or redevelopment site with low impact development (LID) stormwater controls. The Calculator is currently deployed as a Windows desktop application. The Calculator is organized as a wizard-style application that walks the user through the steps necessary to perform runoff calculations on a single urban sub-catchment of 10 acres or less in size. Using an interactive map, the user can select the sub-catchment location, and the Calculator automatically acquires hydrologic data for the site. A new LID cost estimation module has been developed for the Calculator. This project involved programming cost curves into the existing Calculator desktop application. The integration of cost components of LID controls into the Calculator increases functionality and will promote greater use of the Calculator as a stormwater management and evaluation tool. The addition of the cost estimation module allows planners and managers to evaluate LID controls based on comparison of project cost estimates and predicted LID control performance. Cost estimation is accomplished based on user-identified size (or auto-sizing based on achieving volume control or treatment of a defined design storm), configuration of the LID control infrastructure, and other key project and site-specific variables, including whether the project is being applied as part of new development or redevelopment.
NASA Astrophysics Data System (ADS)
Santello, Marco
2015-03-01
The concept of synergy, denoting the coordination of multiple elements working together toward a common goal, has been extensively studied to understand how the central nervous system (CNS) controls movement (for review see [5,9]). Although this definition is appealing in its simplicity, 'multiple elements', 'working together', and 'common goal' each take different meanings depending on the scale at which a given sensorimotor system is studied, whether the 'working together' is defined in spatial and/or temporal domains, and the hypothesized synergy's 'common goal'. For example, the elements involved in a synergy can be defined as single motor units, muscles, or joints. Similarly, the goal of a synergy may be defined as a means available to the CNS to 'simplify' the control of multiple elements, or to minimize a given cost function or movement feature - all of which may differ across tasks and task conditions. These considerations underscore the fact that a universally accepted definition of synergies and their functional role remains to be established (for review see [6]). Thus, the nature and functional role(s) of synergies is still debated in the literature. Nevertheless, it is generally agreed that the reduction in the number of independent degrees of freedom that is manifested through synergies emerges from the interaction of biomechanical and neural factors constraining the spatial and temporal coordination of multiple muscles.
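The degrees-of-freedom reduction described here can be made concrete with a toy linear-combination example: a small number of synergy activation commands drive a larger number of muscles through fixed weights. The weights and names below are entirely hypothetical, not drawn from this commentary:

```python
# Hypothetical weights: 2 synergies driving 4 muscles. The CNS would
# then control 2 activation commands instead of 4 individual muscles.
synergies = {
    "s1": [1.0, 0.5, 0.0, 0.25],
    "s2": [0.0, 0.25, 1.0, 0.5],
}

def muscle_pattern(activations):
    # Combine synergy weight vectors linearly: each command scales its
    # synergy's weights, and contributions sum per muscle.
    n_muscles = len(next(iter(synergies.values())))
    pattern = [0.0] * n_muscles
    for name, a in activations.items():
        for i, w in enumerate(synergies[name]):
            pattern[i] += a * w
    return pattern

print(muscle_pattern({"s1": 1.0, "s2": 2.0}))
```

In experimental work such weights are typically extracted from recorded muscle activity by matrix factorization rather than specified by hand; this sketch only illustrates the dimensionality-reduction idea.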
Morphology Development in Solution-Processed Functional Organic Blend Films: An In Situ Viewpoint.
Richter, Lee J; DeLongchamp, Dean M; Amassian, Aram
2017-05-10
Solution-processed organic films are a facile route to high-speed, low-cost, large-area deposition of electrically functional components (transistors, solar cells, emitters, etc.) that can enable a diversity of emerging technologies, from Industry 4.0, to the Internet of Things, to point-of-use health care and elder care. The extreme sensitivity of the functional performance of organic films to structure and the generally nonequilibrium nature of solution drying result in strong processing-performance correlations. In this Review, we highlight insights into the fundamentals of solution-based film deposition afforded by recent state-of-the-art in situ measurements of functional film drying. Emphasis is placed on multimodal studies that combine surface-sensitive X-ray scattering (GIWAXS or GISAXS) with optical characterization to clearly define the evolution of solute structure (aggregation, crystallinity, and morphology) with film thickness.
Fink, Per; Ørnbøl, Eva; Christensen, Kaj Sparle
2010-03-24
Hypochondriasis is prevalent in primary care, but the diagnosis is hampered by its stigmatizing label and lack of valid diagnostic criteria. Recently, new empirically established criteria for Health anxiety were introduced. Little is known about Health anxiety's impact on longitudinal outcome, and this study aimed to examine impact on self-rated health and health care costs. 1785 consecutive primary care patients aged 18-65 consulting their family physicians (FPs) for a new illness were followed-up for two years. A stratified subsample of 701 patients was assessed by the Schedules for Clinical Assessment in Neuropsychiatry interview. Patients with mild (N = 21) and severe Health anxiety (N = 81) and Hypochondriasis according to the DSM-IV (N = 59) were compared with a comparison group of patients who had a well-defined medical condition according to their FPs and a low score on the screening questionnaire (N = 968). Self-rated health was measured by questionnaire at index and at three, 12, and 24 months, and health care use was extracted from patient registers. Compared with the 968 patients with well-defined medical conditions, the 81 severe Health anxiety patients and the 59 DSM-IV Hypochondriasis patients continued during follow-up to manifest significantly more Health anxiety (Whiteley-7 scale). They also continued to have significantly worse self-rated functioning related to physical and mental health (component scores of the SF-36). The severe Health anxiety patients used about 41-78% more health care per year in total, both during the 3 years preceding inclusion and during follow-up, whereas the DSM-IV Hypochondriasis patients did not have statistically significantly higher total use. A poor outcome of Health anxiety was not explained by comorbid depression, anxiety disorder or well-defined medical condition. 
Patients with mild Health anxiety did not have a worse outcome on physical health and incurred significantly less health care costs than the group of patients with a well-defined medical condition. Severe Health anxiety was found to be a disturbing and persistent condition. It is costly for the health care system and must be taken seriously, i.e. diagnosed and treated. This study supports the validity of recently introduced new criteria for Health anxiety.
Ramos, Mafalda; Haughney, John; Henry, Nathaniel; Lindner, Leandro; Lamotte, Mark
2016-01-01
Purpose Aclidinium–formoterol 400/12 µg is a fixed-dose combination of a long-acting muscarinic antagonist (LAMA) and a long-acting β2-agonist, used in the management of patients with COPD. This study aimed to assess the cost-effectiveness of aclidinium–formoterol 400/12 µg against the long-acting muscarinic antagonist aclidinium bromide 400 µg. Materials and methods A five-health-state Markov transition model with monthly cycles was developed using MS Excel to simulate patients with moderate-to-severe COPD and their initial lung-function improvement following treatment with aclidinium–formoterol 400/12 µg or aclidinium 400 µg. Health states were based on severity levels defined by Global Initiative for Chronic Obstructive Lung Disease 2010 criteria. The analysis was a head-to-head comparison without step-up therapy, from the NHS Scotland perspective, over a 5-year time horizon. Clinical data on initial lung-function improvement were provided by a pooled analysis of the ACLIFORM and AUGMENT trials. Management, event costs, and utilities were health state-specific. Costs and effects were discounted at an annual rate of 3.5%. The outcome of the analysis was expressed as cost (UK£) per quality-adjusted life-year (QALY) gained. The analysis included one-way and probabilistic sensitivity analyses to investigate the impact of parameter uncertainty on model outputs. Results Aclidinium–formoterol 400/12 µg provided marginally higher costs (£41) and more QALYs (0.014), resulting in an incremental cost-effectiveness ratio of £2,976/QALY. Sensitivity analyses indicated that results were robust to key parameter variations, and the main drivers were: mean baseline forced expiratory volume in 1 second (FEV1), risk of exacerbation, FEV1 improvement from aclidinium–formoterol 400/12 µg, and lung-function decline. The probability of aclidinium–formoterol 400/12 µg being cost-effective (using a willingness-to-pay threshold of £20,000/QALY) versus aclidinium 400 µg was 79%.
Conclusion In Scotland, aclidinium–formoterol 400/12 µg can be considered a cost-effective treatment option compared to aclidinium 400 µg alone in patients with moderate-to-severe COPD. PMID:27672337
Primary Care Patients' Preference for Hospitals over Clinics in Korea.
Kim, Agnus M; Cho, Seongcheol; Kim, Hyun Joo; Jung, Hyemin; Jo, Min-Woo; Lee, Jin Yong; Eun, Sang Jun
2018-05-30
Korea is in a unique position to observe whether patients, when equal access to all levels of health care facilities is guaranteed by the support of the national health insurance, choose the appropriate level of health care facility. This study was performed to investigate primary care patients' preference for hospitals over clinics when no restriction is placed on their choice. We used the 2011 National Inpatient Sample database of the Health Insurance Review and Assessment Service in Korea. A primary care patient was defined as a patient who made an outpatient visit to a health care facility for one of the 52 minor conditions defined by the Korean government. We found that approximately 15% of outpatient visits by patients who were eligible for primary care in Korea occurred in hospitals. In terms of cost, outpatient visits in hospitals accounted for about 29% of the total cost of outpatient visits. This arbitrary access to hospitals can lead to an inefficient use of health care resources. In order to ensure that health care facilities are stratified in terms of access as well as size and function, interventions to distribute patients to the appropriate level of care are required.
Distributed Method to Optimal Profile Descent
NASA Astrophysics Data System (ADS)
Kim, Geun I.
Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed profile changes to meet merging and spacing requirements in high-traffic terminal areas. However, the low predictability of an aircraft's vertical profile and path deviation during descent add uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure that is based on a constant flight path angle to increase the predictability of the vertical profile, and defines an OPD optimization problem that uses both path stretching and speed profile changes while largely maintaining the original OPD procedure. This problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques over an inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable sub-problems, which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates the solution to the other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
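The decomposition described above can be illustrated on a toy two-aircraft instance. The quadratic time costs, the single separation constraint, the closed-form sub-problem solutions, and the subgradient step size below are illustrative assumptions, not details from the paper:

```python
# Hypothetical model: each aircraft i pays a time cost (t_i - pref_i)**2 for
# deviating from its preferred arrival time, and the pair must satisfy the
# separation t2 - t1 >= sep. Dual decomposition splits the coupled problem
# into per-aircraft sub-problems coordinated by a Lagrange multiplier.

def solve_arrival_times(pref1, pref2, sep, step=0.5, iters=200):
    lam = 0.0  # multiplier on the coupling constraint sep - (t2 - t1) <= 0
    for _ in range(iters):
        # Each aircraft minimizes its own Lagrangian term independently:
        #   aircraft 1: (t1 - pref1)^2 + lam*t1  ->  t1 = pref1 - lam/2
        #   aircraft 2: (t2 - pref2)^2 - lam*t2  ->  t2 = pref2 + lam/2
        t1 = pref1 - lam / 2.0
        t2 = pref2 + lam / 2.0
        # Subgradient ascent on the dual; the multiplier stays nonnegative.
        lam = max(0.0, lam + step * (sep - (t2 - t1)))
    return t1, t2

t1, t2 = solve_arrival_times(pref1=100.0, pref2=102.0, sep=10.0)
```

In an iterative exchange like the one the abstract describes, each aircraft would compute only its own line of the update and broadcast the result, with the multiplier update playing the role of the coordination message.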
Comparing the effects on work performance of mental and physical disorders.
de Graaf, Ron; Tuithof, Marlous; van Dorsselaer, Saskia; ten Have, Margreet
2012-11-01
To estimate work loss days due to absenteeism and presenteeism associated with commonly occurring mental and physical disorders. In a nationally representative face-to-face survey (Netherlands Mental Health Survey and Incidence Study-2) including 4,715 workers, the presence of 13 mental and 10 chronic physical disorders was assessed using the Composite International Diagnostic Interview 3.0 and a physical disorder checklist. Questions about absent days due to illness and days of reduced quantitative and qualitative functioning while at work were based on the WHO Disability Assessment Schedule. Total work loss days were defined as the sum of the days of these three types of loss, where days of reduced functioning were counted as half. Both individual and population-level effects of disorders on work loss were studied, taking comorbidity into account. Any mental disorder was associated with 10.5 additional absent days, 8.0 days of reduced qualitative functioning and 12.0 total work loss days. For any physical disorder, the number of days was 10.7, 3.5 and 11.3, respectively. Adjusted for comorbidity, drug abuse, bipolar disorder, major depression, digestive disorders and panic disorder were associated with the highest number of additional total work loss days. At population level, major depression, chronic back pain, respiratory disorders, drug abuse and digestive disorders contributed the most. Annual total work loss costs per million workers were estimated at
NASA Astrophysics Data System (ADS)
Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi
2011-08-01
The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the design flow of a network-on-chip. To achieve an application-specific NoC design procedure that minimizes the communication cost and improves the fault-tolerance property, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the set of most promising solutions in a large design space: mappings with low communication costs that are optimal in some cases. A second evaluated parameter, the vulnerability index, is then considered as a measure of the fault-tolerance property of each produced mapping. Finally, in order to yield a mapping that trades off these two parameters, a linear function is defined and introduced. It is also observed that more flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.
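The linear trade-off function described above can be sketched as follows. The metric values, the weights, and the min-max normalization are illustrative assumptions, not the paper's actual definition:

```python
# Hedged sketch: rank candidate core-to-router mappings by a weighted linear
# combination of communication cost and vulnerability index. Both metrics are
# min-max normalized so the weights are comparable across scales.

def best_mapping(mappings, w_cost=0.6, w_vuln=0.4):
    """mappings: list of (name, comm_cost, vulnerability_index) tuples."""
    costs = [m[1] for m in mappings]
    vulns = [m[2] for m in mappings]

    def norm(x, lo, hi):
        return 0.0 if hi == lo else (x - lo) / (hi - lo)

    def score(m):
        return (w_cost * norm(m[1], min(costs), max(costs))
                + w_vuln * norm(m[2], min(vulns), max(vulns)))

    return min(mappings, key=score)[0]

# Invented candidate mappings: (name, communication cost, vulnerability index)
candidates = [("map_a", 120, 0.9), ("map_b", 150, 0.2), ("map_c", 130, 0.4)]
print(best_mapping(candidates))
```

Adjusting `w_cost` and `w_vuln` plays the role of the prioritization the abstract attributes to its fuzzy if-then rules, in the simplest possible form.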
Backward assembly planning with DFA analysis
NASA Technical Reports Server (NTRS)
Lee, Sukhan (Inventor)
1995-01-01
An assembly planning system that operates based on a recursive decomposition of an assembly into subassemblies, and analyzes assembly cost in terms of stability, directionality, and manipulability to guide the generation of preferred assembly plans, is presented. The planning in this system incorporates the special processes, such as cleaning, testing, and labeling, that must occur during the assembly, and handles nonreversible as well as reversible assembly tasks through backward assembly planning. In order to increase planning efficiency, the system avoids the analysis of decompositions that do not correspond to feasible assembly tasks. This is achieved by grouping and merging those parts that cannot be decomposed at the current stage of backward assembly planning due to the requirements of special processes and the constraint of interconnection feasibility. The invention includes methods of evaluating assembly cost in terms of the number of fixtures (or holding devices) and reorientations required for assembly, through the analysis of stability, directionality, and manipulability. All these factors are used in defining cost and heuristic functions for an AO* search for an optimal plan.
Correlates of Worry About Health Care Costs Among Older Adults.
Choi, Namkee G; DiNitto, Diana M
2018-06-01
Although older adults in the United States incur more health care expenses than younger adults, little research has been done on their worry about health care costs. Using data from the 2013 National Health Interview Survey ( n = 7,253 for those 65+ years), we examined factors associated with older adults' health care cost worries, defined as at least a moderate level of worry, about ability to pay for normal health care and/or for health care due to a serious illness or accident. Bivariate analyses were used to compare worriers and nonworriers. Binary logistic regression analysis was used to examine the association of income, health status, health care service use, and insurance type with worry status. Older age and having Medicaid and Veterans Affairs (VA)/military health benefits were associated with lower odds of worry, while low income, chronic pain, functional limitations, psychological distress, and emergency department visits were associated with higher odds. Practice and policy implications for the findings are discussed.
Use of the Collaborative Optimization Architecture for Launch Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, R. D.; Moore, A. A.; Kroo, I. M.
1996-01-01
Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum-weight and minimum-cost concepts. The operational advantages of the collaborative optimization
Safe Upper-Bounds Inference of Energy Consumption for Java Bytecode Applications
NASA Technical Reports Server (NTRS)
Navas, Jorge; Mendez-Lojo, Mario; Hermenegildo, Manuel V.
2008-01-01
Many space applications such as sensor networks, on-board satellite-based platforms, on-board vehicle monitoring systems, etc. handle large amounts of data, and analysis of such data is often critical for the scientific mission. Transmitting such large amounts of data to the remote control station for analysis is usually too expensive for time-critical applications. Instead, modern space applications are increasingly relying on autonomous on-board data analysis. All these applications face many resource constraints. A key requirement is to minimize energy consumption. Several approaches have been developed for estimating the energy consumption of such applications (e.g. [3, 1]) based on measuring actual consumption at run-time for large sets of random inputs. However, this approach has the limitation that it is in general not possible to cover all possible inputs. Using formal techniques offers the potential for inferring safe energy consumption bounds, making it especially interesting for space exploration and safety-critical systems. We have proposed and implemented a general framework for resource usage analysis of Java bytecode [2]. The user defines a set of resource(s) of interest to be tracked and some annotations that describe the cost of some elementary elements of the program for those resources. These values can be constants or, more generally, functions of the input data sizes. The analysis then statically derives an upper bound on the amount of those resources that the program as a whole will consume or provide, also as functions of the input data sizes. This article develops a novel application of the analysis of [2] to inferring safe upper bounds on the energy consumption of Java bytecode applications. We first use a resource model that describes the cost of each bytecode instruction in terms of the joules it consumes. With this resource model, we then generate energy consumption cost relations, which are then used to infer safe upper bounds. 
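The per-opcode accounting idea can be sketched in a few lines. The opcode energy values and the hand-supplied loop bound below are invented placeholders; a real analysis would take the costs from a measured resource model and infer the iteration bounds statically, as in [2]:

```python
# Hedged sketch: bound the energy of a bytecode fragment given an (assumed)
# per-opcode energy model in joules. Loop bounds are supplied by hand here;
# in the framework described above they come from static analysis.

OPCODE_JOULES = {"iload": 2e-9, "iadd": 1e-9, "istore": 2e-9, "goto": 1e-9}

def energy_upper_bound(blocks):
    """blocks: list of (opcodes, max_iterations) pairs. The bound is the sum
    over blocks of (energy of one pass) * (iteration upper bound)."""
    total = 0.0
    for opcodes, max_iter in blocks:
        total += max_iter * sum(OPCODE_JOULES[op] for op in opcodes)
    return total

# Straight-line prologue plus a loop body executed at most 1000 times.
bound = energy_upper_bound([
    (["iload", "istore"], 1),
    (["iload", "iadd", "istore", "goto"], 1000),
])
```

Because every term is an upper bound on its block, the sum is a safe upper bound on the whole fragment, which is the property the article's cost relations generalize to input-dependent sizes.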
How energy consumption for each bytecode instruction is measured is beyond the scope of this paper. Instead, this paper is about how to infer safe energy consumption estimations assuming that those energy consumption costs are provided. For concreteness, we use a simplified version of an existing resource model [1] in which an energy consumption cost for individual Java opcodes is defined.
An algorithm for control system design via parameter optimization. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sinha, P. K.
1972-01-01
An algorithm for design via parameter optimization has been developed for linear time-invariant control systems based on the model reference adaptive control concept. A cost functional is defined to evaluate the system response relative to the nominal; in general it involves the error between the system and nominal responses, its derivatives, and the control signals. A program for the practical implementation of this algorithm has been developed, with the computational scheme for the evaluation of the performance index based on Lyapunov's theorem for stability of linear time-invariant systems.
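The Lyapunov-based evaluation of a quadratic performance index can be illustrated numerically. For a stable system x' = Ax with x(0) = x0, the index J = ∫ xᵀQx dt equals x0ᵀPx0, where P solves AᵀP + PA = -Q, so no simulation is needed. The matrices below are an arbitrary stable example, not the thesis's model:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Evaluate J = integral of x'Qx dt for x' = Ax, x(0) = x0, via the Lyapunov
# equation A'P + PA = -Q and J = x0'Px0 (illustrative stable example).

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2, so the system is stable
Q = np.eye(2)                  # error-weighting matrix
x0 = np.array([1.0, 0.0])

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q, so pass a = A.T
# and q = -Q to obtain A'P + PA = -Q.
P = solve_continuous_lyapunov(A.T, -Q)
J = x0 @ P @ x0
```

This is the same computational shortcut the abstract alludes to: the performance index is read off from a single linear solve instead of integrating the response.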
The Westinghouse Series 1000 Mobile Phone: Technology and applications
NASA Technical Reports Server (NTRS)
Connelly, Brian
1993-01-01
Mobile satellite communications will be popularized by the North American Mobile Satellite (MSAT) system. The success of the overall system is dependent upon the quality of the mobile units. Westinghouse is designing our unit, the Series 1000 Mobile Phone, with the user in mind. The architecture and technology aim at providing optimum performance at a low per unit cost. The features and functions of the Series 1000 Mobile Phone have been defined by potential MSAT users. The latter portion of this paper deals with who those users may be.
NASA Technical Reports Server (NTRS)
Zapata, Edgar
2017-01-01
This review brings rigorous life cycle cost (LCC) analysis into discussions about COTS program costs. We gather publicly available cost data, review the data for credibility, check for consistency among sources, and rigorously define and analyze specific cost metrics.
Cost Accounting and Accountability for Early Education Programs for Handicapped Children.
ERIC Educational Resources Information Center
Gingold, William
The paper offers some basic information for making decisions about allocating and accounting for resources provided to young handicapped children. Sections address the following topics: reasons for costing, audiences for cost accounting and accountability information, and a process for cost accounting and accountability (defining cost categories,…
Minimizing communication cost among distributed controllers in software defined networks
NASA Astrophysics Data System (ADS)
Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed
2016-08-01
Software Defined Networking (SDN) is a new paradigm that increases the flexibility of today's networks by promising a programmable network. The fundamental idea behind this new architecture is to simplify network complexity by decoupling the control plane and data plane of the network devices, and by making the control plane centralized. Recently, controllers have been distributed to solve the problem of a single point of failure, and to increase scalability and flexibility during workload distribution. Although distributed controllers are flexible and scalable enough to accommodate a larger number of network switches, the intercommunication cost between distributed controllers remains a challenging issue in the Software Defined Network environment. This paper aims to fill that gap by proposing a new mechanism that minimizes intercommunication cost using a graph partitioning algorithm, an NP-hard problem. The proposed methodology swaps network elements between controller domains to minimize communication cost by calculating the communication gain; the swapping of elements minimizes both inter- and intra-domain communication costs. We validate our work with the OMNeT++ simulation environment. Simulation results show that the proposed mechanism minimizes the inter-domain communication cost among controllers compared to traditional distributed controllers.
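The gain-driven swapping idea can be sketched on a toy topology. The switch graph, its edge weights, and the single greedy pass below are illustrative assumptions, not the paper's actual algorithm:

```python
# Hedged sketch: swap switches between two controller domains when the swap
# reduces the inter-domain (cut) communication cost. The "gain" of a swap is
# the drop in cut cost it produces, in the spirit of Kernighan-Lin.

def cut_cost(edges, part):
    """Total weight of edges whose endpoints lie in different domains."""
    return sum(w for (u, v), w in edges.items() if part[u] != part[v])

def one_swap_pass(edges, part):
    """Perform the single best positive-gain swap, if any exists."""
    nodes_a = [n for n in part if part[n] == 0]
    nodes_b = [n for n in part if part[n] == 1]
    base = cut_cost(edges, part)
    best = None
    for u in nodes_a:
        for v in nodes_b:
            trial = dict(part)
            trial[u], trial[v] = 1, 0
            gain = base - cut_cost(edges, trial)
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, u, v)
    if best:
        _, u, v = best
        part[u], part[v] = 1, 0
    return part

# Invented 4-switch topology with communication volumes as edge weights.
edges = {("s1", "s2"): 5, ("s1", "s3"): 1, ("s2", "s4"): 1, ("s3", "s4"): 5}
part = {"s1": 0, "s2": 1, "s3": 0, "s4": 1}  # initial domain assignment
part = one_swap_pass(edges, part)
```

Repeating the pass until no positive-gain swap remains gives the iterative improvement the abstract describes, though the real mechanism also weighs intra-domain costs.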
Feasibility Study of Solar Dome Encapsulation of Photovoltaic Arrays
NASA Technical Reports Server (NTRS)
1978-01-01
The technical and economic advantages of using air-supported plastic enclosures to protect flat plate photovoltaic arrays are described. Conceptual designs for a fixed, latitude-tilt array and a fully tracking array were defined. Detailed wind loads and strength analyses were performed for the fixed array. Detailed thermal and power output analyses provided array performance for typical seasonal and extreme temperature conditions. Costs of each design as used in a 200 MWe central power station were defined from manufacturing and material cost estimates. The capital cost and cost of energy for the enclosed fixed-tilt array were lower than for the enclosed tracking array. The enclosed fixed-tilt array capital investment was 38% less, and the levelized bus bar energy cost was 26% less than costs for a conventional, glass-encapsulated array design. The predicted energy cost for the enclosed fixed array was 79 mills/kW-h for direct current delivered to the power conditioning units.
NASA Astrophysics Data System (ADS)
Mortensen, Henrik Lund; Sørensen, Jens Jakob W. H.; Mølmer, Klaus; Sherson, Jacob Friis
2018-02-01
We propose an efficient strategy to find optimal control functions for state-to-state quantum control problems. Our procedure first chooses an input state trajectory that can realize the desired transformation by adiabatic variation of the system Hamiltonian. The shortcut-to-adiabaticity formalism then provides a control Hamiltonian that realizes the reference trajectory exactly but on a finite time scale. As the final state is achieved with certainty, we define a cost functional that incorporates the resource requirements and a perturbative expression for robustness. We optimize this functional by systematically varying the reference trajectory. We demonstrate the method by application to population transfer in a laser-driven three-level Λ-system, where we find solutions that are fast and robust against perturbations while maintaining a low peak laser power.
Optimization of EB plant by constraint control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hummel, H.K.; de Wit, G.B.C.; Maarleveld, A.
1991-03-01
Optimum plant operation can often be achieved by means of constraint control instead of model-based on-line optimization. This is because optimum operation is seldom at the top of the hill but usually at the intersection of constraints. This article describes the development of a constraint control system for a plant producing ethylbenzene (EB) by the Mobil/Badger Ethylbenzene Process. Plant optimization can be defined as the maximization of a profit function describing the economics of the plant. This function contains terms with product values, feedstock prices and operational costs. Maximization of the profit function can be obtained by varying relevant degrees of freedom in the plant, such as a column operating pressure or a reactor temperature. These degrees of freedom can be varied within the available operating margins of the plant.
SCM: A method to improve network service layout efficiency with network evolution
Zhao, Qi; Zhang, Chuanhao
2017-01-01
Network services are an important component of the Internet, used to expand network functions for third-party developers. Network function virtualization (NFV) can improve the speed and flexibility of network service deployment. However, as the network evolves, the network service layout may become inefficient. To address this problem, this paper proposes a service chain migration (SCM) method within the framework of "software defined network + network function virtualization" (SDN+NFV), which migrates service chains to adapt to network evolution and improves the efficiency of the network service layout. SCM is modeled as an integer linear programming problem and resolved via particle swarm optimization. An SCM prototype system is designed based on an SDN controller. Experiments demonstrate that SCM can reduce network traffic cost and energy consumption efficiently. PMID:29267299
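A minimal particle swarm optimizer of the kind the abstract mentions can be sketched as follows. The continuous toy objective and all PSO constants below are illustrative stand-ins for the actual SCM cost model, which is an integer program:

```python
import random

# Hedged sketch of particle swarm optimization: particles track personal and
# global bests and are pulled toward both with random strengths each step.

def pso(cost, dim, n_particles=20, iters=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    gbest = min(pbest, key=cost)[:]          # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]                                  # inertia
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy stand-in for a migration-cost objective with its minimum at (1, 2).
best = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, dim=2)
```

Applying this to an ILP as in the paper requires extra machinery (rounding or repairing particles to feasible integer layouts), which this sketch omits.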
Implement the medical group revenue function. Create competitive advantage.
Colucci, C
1998-01-01
This article shows medical groups how they can employ new financial management and information technology techniques to safeguard their revenue and income streams. These managerial techniques stem from the application of the medical group revenue function, which is defined herein. This article also describes how the medical group revenue function can be used to create value by employing a database and a decision support system. Finally, the article describes how the decision support system can be used to create competitive advantage. Through the wise use of internally generated information, medical groups can negotiate better contract terms, improve their operations, cut their costs, embark on capital investment programs and improve market share. As medical groups gain market power by improving in these areas, they will be more attractive to potential strategic allies, payers and investment bankers.
Development of Miniaturized Optimized Smart Sensors (MOSS) for space plasmas
NASA Technical Reports Server (NTRS)
Young, D. T.
1993-01-01
The cost of space plasma sensors is high for several reasons: (1) most are one-of-a-kind and state-of-the-art, (2) the cost of launch to orbit is high, (3) ruggedness and reliability requirements lead to costly development and test programs, and (4) overhead is added by overly elaborate or generalized spacecraft interface requirements. Possible approaches to reducing costs include development of small 'sensors' (defined as including all necessary optics, detectors, and related electronics) that will ultimately lead to cheaper missions by reducing (2), improving (3), and, through work with spacecraft designers, reducing (4). Despite this logical approach, there is no guarantee that smaller sensors are necessarily either better or cheaper. We have previously advocated applying analytical 'quality factors' to plasma sensors (and spacecraft) and have begun to develop miniaturized particle optical systems by applying quantitative optimization criteria. We are currently designing a Miniaturized Optimized Smart Sensor (MOSS) in which miniaturized electronics (e.g., employing new power supply topology and extensive use of gate arrays and hybrid circuits) are fully integrated with newly developed particle optics to give significant savings in volume and mass. The goal of the SwRI MOSS program is development of a fully self-contained and functional plasma sensor weighing 1 lb and requiring 1 W. MOSS will require only a typical spacecraft DC power source (e.g., 30 V) and command/data interfaces in order to be fully functional, and will provide measurement capabilities comparable in most ways to current sensors.
Advanced Propulsion System Studies for General Aviation Aircraft
NASA Technical Reports Server (NTRS)
Eisenberg, Joseph D. (Technical Monitor); German, Jon
2003-01-01
This final report addresses the following topics. Market Impact Analysis: (1) assessment of the general aviation (including commuter/regional) aircraft market impact of incorporating an advanced technology propulsion system, in terms of acquisition and operating costs, job creation and/or manpower demand, and future fleet size; (2) selection of an aircraft and engine for the study, focusing on the next-generation 19-passenger commuter and the Williams International FJ44 turbofan engine growth. Propulsion System Analysis: conducted mission analysis studies and engine cycle analysis to define a new commuter mission and required engine performance, defined acquisition and operating costs, selected the engine configuration, and initiated preliminary design for the required hardware modifications. Propulsion System Benefits: (1) assessed and defined engine emissions improvements; (2) assessed and defined noise reduction potential; and (3) conducted a cost analysis impact study. Review of Relevant NASA Programs: conducted literature searches using NERAC and NASA RECON services for related technology in the emissions and acoustics areas. Preliminary Technology Development Plans: defined a plan to incorporate technology improvements for an FJ44-2 growth engine in performance, emissions, and noise suppression.
The societal costs of insomnia
Wade, Alan G
2011-01-01
Objective Insomnia can be broadly defined as difficulty initiating or maintaining sleep, or sleep that is not refreshing or of poor quality with negative effect on daytime function. Insomnia can be a primary condition or comorbid to an underlying disorder. Subjective measures of insomnia used in population studies, usually based on complaints of unsatisfactory sleep, put the prevalence at about 10%. Insomnia is more common in the elderly and in women, and is often associated with medical and psychiatric disorders. This review examines the measures used to assess quality of sleep (QOS) and daytime functioning and the impact of insomnia on society using these measures. Methods Literature searches were performed to identify all studies of insomnia (primary and comorbid) in adults (aged 18–64 years) and the elderly (aged ≥ 65 years) with baseline and/or outcomes relating to QOS or daytime functioning. The impact of poor QOS on quality of life (QOL), psychomotor and cognitive skills, health care resource utilization, and other societal effects was examined. Results Although definitions and measurement scales used to assess sleep quality vary widely, it is clear that the societal consequences of insomnia are substantial and include impaired QOL and increased health care utilization. The impact of poor QOS and impaired daytime functioning common in insomnia can lead to indirect effects such as lower work productivity, increased sick leave, and a higher rate of motor vehicle crashes. Conclusions Insomnia is associated with substantial direct and indirect costs to society. It is almost impossible to separate the costs associated with primary and comorbid insomnia. More studies are required which control for the severity of any primary disorder to accurately evaluate the costs of comorbid insomnia. 
Development of standardized diagnostic and assessment scales will enable more accurate quantification of the true societal burden of insomnia and will contribute to greater understanding of this disorder. PMID:21326650
Cost-Benefit Analysis: Applicability in Higher Education.
ERIC Educational Resources Information Center
Dunn, Briggs P.; Sullins, W. Robert
1982-01-01
Discusses problems in applying cost-benefit analysis to higher education, including selecting the correct productivity index, determining the discount rate for social consumption foregone, measuring individual and social costs and benefits, and defining the time horizon for educational investment returns. Contrasts cost-benefit and…
48 CFR 9904.412-40 - Fundamental requirement.
Code of Federal Regulations, 2010 CFR
2010-10-01
Section 9904.412-40, Federal Acquisition Regulations System, Cost Accounting Standards Board, Cost Accounting Standards: Fundamental requirement. (a) Components of pension cost... in current and future cost accounting periods. (b) Measurement of pension cost. (1) For defined...
Lee, Hyo Jung; Jang, Sung-In; Park, Eun-Cheol
2017-02-20
The Korean healthcare system is composed of costly and inefficient structures that fail to adequately divide the functions and roles of medical care organizations. To resolve this matter, the government reformed the cost-sharing policy in November 2011 for the management of outpatients visiting general or tertiary hospitals with comparatively mild diseases. The purpose of the present study was to examine the impact of increasing the coinsurance rate on prescription drug costs for 52 mild diseases at general or tertiary hospitals on outpatient healthcare service utilization. The present study used health insurance claim data collected from 2010 to 2013. The study population consisted of 505,691 outpatients aged 20-64 years who had visited medical care organizations for the treatment of the 52 diseases both before and after the program began. To examine the effect of the cost-sharing policy on outpatient healthcare service utilization (percentage of general or tertiary hospital utilization, number of outpatient visits, and outpatient medical costs), a segmented regression analysis was performed. After the policy increasing the coinsurance rate on prescription drug costs was implemented, the number of outpatient visits at general or tertiary hospitals decreased (β = -0.0114, p < 0.0001), whereas the number increased at hospitals and clinics (β = 0.0580, p < 0.0001). The number of outpatient visits to hospitals and clinics eventually began to decrease after policy initiation (β = -0.0018, p < 0.0001). Outpatient medical costs decreased for both types of medical care organization (general or tertiary hospitals: β = -2913.4, p < 0.0001; hospitals or clinics: β = -591.35, p < 0.0001), and this decreasing trend continued with time. It is not clear that the decreased utilization of general or tertiary hospitals shifted to clinics and hospitals as a result of the increased cost sharing for prescription drugs.
This result indicates that the cost-sharing policy, intended to change patient behavior in healthcare service utilization, has had limited effect on rebuilding the healthcare system and the functions of medical care organizations.
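As a rough illustration of the segmented (interrupted time-series) regression used in the study above, the sketch below fits level-change and trend-change terms around a policy start date. The synthetic data and coefficient names are hypothetical, not the study's.

```python
import numpy as np

def segmented_regression(y, t, t0):
    """Fit y = b0 + b1*t + b2*step + b3*(t - t0)*step, where step = 1 from
    the policy start time t0 onward. b2 captures the immediate level change
    and b3 the change in trend, as in an interrupted time-series analysis."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    step = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, step, (t - t0) * step])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, pre-policy trend, level change, trend change]

# Synthetic monthly visit counts: upward trend, then a drop and a decline
# after a hypothetical policy month t0 = 24.
t = np.arange(48)
y = 100 + 0.5 * t - 8.0 * (t >= 24) - 0.3 * (t - 24) * (t >= 24)
b0, b1, b2, b3 = segmented_regression(y, t, 24)
```

With noiseless data the fit recovers the generating coefficients exactly; with real claims data the same design matrix yields the β estimates reported in such studies.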
Mon, Marta; Rivero-Crespo, Miguel A; Ferrando-Soria, Jesús; Vidal-Moya, Alejandro; Boronat, Mercedes; Leyva-Pérez, Antonio; Corma, Avelino; Hernández-Garrido, Juan C; López-Haro, Miguel; Calvino, José J; Ragazzon, Giulio; Credi, Alberto; Armentano, Donatella; Pardo, Emilio
2018-05-22
The gram-scale synthesis, stabilization, and characterization of well-defined, ultrasmall (subnanometric) catalytic clusters on solids is a challenge. The chemical synthesis and X-ray snapshots of Pt₂⁰ clusters, homogeneously distributed and densely packed within the channels of a metal-organic framework, are presented. This hybrid material efficiently catalyzes energetically costly industrial gas-phase reactions, such as HCN production, CO₂ methanation, and alkene hydrogenation, at low temperature (25 to 140 °C), which is important from both an economic and an environmental viewpoint. These results open the way to the design of precisely defined, catalytically active ultrasmall metal clusters in solids for technically easier, cheaper, and dramatically less dangerous industrial reactions. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1985-01-01
Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
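The cost-function minimization at the heart of maximum likelihood estimation can be sketched for a toy output-error problem. The model, the simulated "flight data", and the grid search below are illustrative stand-ins, not the estimator actually applied to aircraft data.

```python
import numpy as np

def cost(theta, t, z):
    """Output-error cost: half the sum of squared residuals between the
    measured response z and the model response exp(theta * t). Under
    Gaussian measurement noise this is proportional to the negative
    log-likelihood, so its minimizer is the maximum likelihood estimate."""
    return 0.5 * np.sum((z - np.exp(theta * t)) ** 2)

# Simulated response: first-order decay with a (hypothetical) rate of -1.5.
t = np.linspace(0.0, 2.0, 50)
z = np.exp(-1.5 * t)

# A coarse grid search stands in for the Newton-type iterations used in
# practice; the minimum of the cost function defines the estimate.
grid = np.linspace(-3.0, 0.0, 3001)
theta_hat = grid[np.argmin([cost(g, t, z) for g in grid])]
```

Plotting `cost` over `grid` reproduces the bowl-shaped cost surfaces the report uses to illustrate the minimization process.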
Superpixel Cut for Figure-Ground Image Segmentation
NASA Astrophysics Data System (ADS)
Yang, Michael Ying; Rosenhahn, Bodo
2016-06-01
Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as a Min-Cut, for which we propose a novel cost function that simultaneously minimizes the inter-class similarity and maximizes the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, requiring no high-level knowledge such as shape priors or scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate the proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.
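A minimal sketch of the kind of cost function described above (inter-class similarity minus intra-class similarity over a superpixel graph) follows. The similarity matrix and weights are invented for illustration, and the paper's parametric-programming optimizer is replaced by direct evaluation of two candidate partitions.

```python
import numpy as np

def superpixel_cut_cost(W, fg, lam=1.0):
    """Cost of a candidate foreground set of superpixels: the inter-class
    similarity (edge weight cut between foreground and background) minus
    lam times the intra-class similarity (edge weight kept inside each
    class). Lower cost means a better figure-ground partition."""
    fg = np.asarray(fg, bool)
    bg = ~fg
    inter = W[np.ix_(fg, bg)].sum()
    intra = W[np.ix_(fg, fg)].sum() + W[np.ix_(bg, bg)].sum()
    return inter - lam * intra

# Toy similarity graph over 4 superpixels: two tightly similar pairs
# (0,1) and (2,3) that are only weakly linked to each other.
W = np.array([[0.0, 0.9, 0.1, 0.1],
              [0.9, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 0.8],
              [0.1, 0.1, 0.8, 0.0]])
good = superpixel_cut_cost(W, [True, True, False, False])  # cuts weak edges
bad = superpixel_cut_cost(W, [True, False, True, False])   # cuts strong edges
```

The partition that separates the two tight pairs scores lower (better), matching the intuition the cost function encodes.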
Software Defined Radios - Architectures, Systems and Functions
NASA Technical Reports Server (NTRS)
Sims, William H.
2017-01-01
Software Defined Radio is an industry term describing a method of utilizing a minimum amount of Radio Frequency (RF)/analog electronics before digitization takes place. Upon digitization, all other functions are performed in software/firmware. There are as many different types of SDRs as there are data systems. Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 1990s. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of SDR technology for the modern communications market. In contrast, the foundations of the transponder technology presently qualified for satellite applications were developed during the early space program of the 1960s. SDR technology offers the potential to revolutionize satellite transponder technology by increasing science data through-put capability by at least an order of magnitude. While the SDR is adaptive in nature and "one-size-fits-all" by design, conventional transponders are built to a specific platform and must be redesigned for every new bus. The SDR uses a minimum amount of analog/RF components to up/down-convert the RF signal to/from a digital format. Once analog data is digitized, all processing is performed using hardware logic. Typical SDR processes include filtering, modulation, up/down-conversion, and demodulation. This presentation will show how the emerging SDR market has leveraged the existing commercial sector to provide a path to a radiation-tolerant SDR transponder. These innovations will reduce transceiver cost, decrease power requirements, and commensurately reduce volume. A second payoff is the increased flexibility of the SDR: the same hardware can implement multiple transponder types by altering hardware logic, with no change of analog hardware required, all of which can ultimately be accomplished in orbit.
This in turn would provide a high-capability, low-cost transponder to programs of all sizes.
Bischoff, Adrianne R; Pokhvisneva, Irina; Léger, Étienne; Gaudreau, Hélène; Steiner, Meir; Kennedy, James L; O'Donnell, Kieran J; Diorio, Josie; Meaney, Michael J; Silveira, Patrícia P
2017-01-01
Fetal adversity, evidenced by poor fetal growth for instance, is associated with increased risk for several diseases later in life. Classical cut-offs characterizing small for gestational age (SGA) and large for gestational age (LGA) newborns are used to define long-term vulnerability. We aimed to explore the possible dynamism of different birth weight cut-offs in defining vulnerability in developmental outcomes (through the Bayley Scales of Infant and Toddler Development), using the example of a gene vs. fetal adversity interaction, with gene choices based on functional relevance to the studied outcome. Children aged 36 months from an established prospective birth cohort (Maternal Adversity, Vulnerability, and Neurodevelopment) were classified according to birth weight ratio (BWR) (SGA ≤0.85, LGA >1.15, exploring a wide range of other cut-offs) and genotyped for polymorphisms associated with dopamine signaling (TaqIA-A1 allele, DRD2-141C Ins/Ins, DRD4 7-repeat, DAT1 10-repeat, Met/Met-COMT), composing a score based on the described function, in which hypofunctional variants received lower scores. There were 251 children (123 girls and 128 boys). Using the classic cut-offs (0.85 and 1.15), there were no statistically significant interactions between the neonatal groups and the dopamine genetic score. However, when changing the cut-offs, it is possible to identify ranges of BWR associated with vulnerability to poorer development as dopamine function varies. The classic birth weight cut-offs defining SGA and LGA newborns should be viewed with caution: depending on the outcome in question, protocols for long-term follow-up could be either too inclusive, and therefore more costly, or unable to screen true vulnerabilities, and therefore ineffective for establishing early interventions and primary prevention.
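The birth-weight-ratio classification discussed above can be sketched as a simple rule. The cut-off values follow the abstract; the function name and the example ratios are hypothetical.

```python
def classify_bwr(bwr, sga_cut=0.85, lga_cut=1.15):
    """Classify a newborn by birth weight ratio (observed weight divided by
    the expected weight for gestational age) using the classic cut-offs the
    study questions: SGA if bwr <= sga_cut, LGA if bwr > lga_cut, and
    appropriate for gestational age (AGA) otherwise."""
    if bwr <= sga_cut:
        return "SGA"
    if bwr > lga_cut:
        return "LGA"
    return "AGA"

# Moving a cut-off reclassifies borderline newborns, changing who is
# flagged for long-term follow-up, which is the study's central point.
classic = classify_bwr(0.88)                  # AGA under the 0.85 cut-off
widened = classify_bwr(0.88, sga_cut=0.90)    # SGA under a 0.90 cut-off
```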
NASA Astrophysics Data System (ADS)
Girard, C.; Rinaudo, J. D.; Caballero, Y.; Pulido-Velazquez, M.
2012-04-01
This article presents a case study which illustrates how an integrated hydro-economic model can be applied to optimize a program of measures (PoM) at the river basin level. By allowing the integration of hydrological, environmental and economic aspects at a local scale, this model is useful for assisting water policy decision-making processes. The model identifies the least-cost PoM that satisfies the predicted 2030 urban and agricultural water demands while meeting the in-stream flow constraints. The PoM mainly consists of water saving and conservation measures at the different demands. It also includes measures mobilizing additional water resources from groundwater, inter-basin transfers, and improvements in reservoir operating rules. The flow constraints are defined to ensure a good status of the surface water bodies, as defined by the EU Water Framework Directive (WFD). The case study is conducted in the Orb river basin, a coastal basin in Southern France. It faces significant population growth, changes in agricultural patterns, and limited water resources, and is classified as at risk of not meeting the good status by 2015. Urban demand is calculated by type of water user at municipality level in 2006 and projected to 2030 with user-specific scenarios. Agricultural water demand is estimated at irrigation district (canton) level in 2000 and projected to 2030 under three agricultural development scenarios. The total annual cost of each measure has been calculated taking into account operation and maintenance costs as well as investment cost. A first optimization model was developed in GAMS (General Algebraic Modeling System), applying Mixed Integer Linear Programming. The optimization selects the set of measures that minimizes the objective function, defined as the total cost of the applied measures, while meeting the demands and environmental constraints (minimum in-stream flows) for the 2030 time horizon.
The first result is an optimized PoM for a drought year with a return period of five years, taken as a baseline scenario. A second step takes into account the impact of climate change on water demands and available resources. This allows decision makers to assess how the cost of the PoM evolves when the level of environmental constraints is tightened or loosened, and so provides a valuable input for understanding the opportunity costs and trade-offs when defining environmental objectives for the long term, including climate as a major factor of change. Finally, the model will be used on an extended hydrological time series to study the costs and impacts of the PoM on the allocation of water resources. This will also allow the investigation of the uncertainties and the effect of the risk aversion of decision makers and users on system management, as well as the influence of the perfect foresight of deterministic optimization. ACKNOWLEDGEMENTS The study has been partially supported by the BRGM project Ouest-Hérault, the European Community 7th Framework Project GENESIS (n. 226536) on groundwater systems, and the Plan Nacional I+D+I 2008-2011 of the Spanish Ministry of Science and Innovation (sub-projects CGL2009-13238-C02-01 and CGL2009-13238-C02-02).
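The least-cost selection of measures can be illustrated with a brute-force stand-in for the MILP solved in GAMS. The measure names, costs, savings, and deficit below are invented for the sketch.

```python
from itertools import combinations

# Hypothetical measures: (name, annual cost in k-euro, water saved or
# mobilized in hm3/yr). Figures are illustrative, not from the study.
MEASURES = [
    ("leak reduction",        120, 1.5),
    ("drip irrigation",       200, 2.8),
    ("groundwater wells",     300, 2.0),
    ("inter-basin transfer",  550, 4.0),
    ("reservoir rule change",  80, 0.9),
]

def least_cost_program(measures, deficit):
    """Enumerate all subsets of measures and return the cheapest subset
    whose total water saving covers the projected deficit. This brute force
    mirrors what the Mixed Integer Linear Program solves at scale."""
    best = None
    for r in range(len(measures) + 1):
        for subset in combinations(measures, r):
            cost = sum(m[1] for m in subset)
            saved = sum(m[2] for m in subset)
            if saved >= deficit and (best is None or cost < best[0]):
                best = (cost, [m[0] for m in subset])
    return best

cost, program = least_cost_program(MEASURES, deficit=4.0)
```

With these toy numbers, two cheap demand-side measures beat the single large transfer, the kind of trade-off the optimization is meant to expose.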
The Trade-Off between Costs and Outcomes: The Case of Acute Myocardial Infarction
Schreyögg, Jonas; Stargardt, Tom
2010-01-01
Objective To investigate and to quantify the relationship between hospital costs and health outcomes for patients with acute myocardial infarction (AMI) in Veterans Health Administration (VHA) hospitals using individual-level data for costs and outcomes. Data Sources VHA administrative files for the fiscal years 2000–2006. Study Design Costs were defined as costs incurred during the index hospitalization for treatment of AMI. Mortality and readmission, assessed 1 year after the index hospitalization, were used as measures of clinical outcome. We examined health outcomes as a function of costs and other patient-level and hospital-level characteristics using a two-stage Cox proportional hazard model that accounted for competing risks within a multilevel framework. To control for patient comorbidities, we compiled a comprehensive list of comorbidities that have been found in other studies to affect mortality and readmissions. Principal Findings We found that costs were negatively associated with mortality and readmissions. Every US$100 less spent is associated with a 0.63 percent increase in the hazard of dying and a 1.24 percent increase in the hazard of being readmitted, conditional on not dying. This main finding remained unchanged after a number of sensitivity checks. Conclusions Our results suggest that there is a trade-off between costs and outcomes. The negative association between costs and mortality suggests that outcomes should be monitored closely when introducing cost-containment programs. Additional studies are needed to examine the cost–outcome relationship for conditions other than AMI to see whether our results are consistent. PMID:20819109
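To give a feel for the reported effect sizes, the sketch below compounds the per-$100 percentages over larger spending differences. The log-linear compounding is a simplifying, hypothetical reading not stated in the abstract.

```python
def hazard_ratio(delta_dollars, pct_per_100):
    """Hazard ratio implied by the reported association, under the
    simplifying (hypothetical) assumption that the effect is log-linear in
    cost: each $100 less spent multiplies the hazard by
    (1 + pct_per_100/100)."""
    return (1.0 + pct_per_100 / 100.0) ** (delta_dollars / 100.0)

# $1,000 less spent per index hospitalization, using the study's estimates
# of 0.63% (mortality) and 1.24% (readmission) per $100.
hr_mortality_1000 = hazard_ratio(1000, 0.63)
hr_readmit_1000 = hazard_ratio(1000, 1.24)
```

Even modest per-$100 effects compound to noticeable hazard ratios over $1,000, which is why the abstract warns about monitoring outcomes under cost containment.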
Money for nothing? The net costs of medical training.
Barros, Pedro P; Machado, Sara R
2010-09-01
One of the stages of medical training is the residency programme. Hosting institutions often claim compensation for the training provided. How much should this compensation be? According to our results, given the benefits arising from having residents among the house staff, no transfer (either tuition fee or subsidy) should be set to compensate the hosting institution for providing medical training. This paper quantifies the net costs of medical training, defined as the training costs over and above the wage paid. We jointly consider two effects. On the one hand, residents take extra time and resources from both the hosting institution and the supervisor. On the other hand, residents can be regarded as a less expensive substitute for nurses and/or graduate physicians in the production of health care, both in primary care centres and in hospitals. The net effect can be either positive or negative. We use the fact that residents in Portugal are centrally allocated to National Health Service hospitals to treat them as a fixed exogenous production factor. The data used come from Portuguese hospitals and primary care centres. Cost function estimates point to a small negative marginal impact of residents on hospitals' (-0.02%) and primary care centres' (-0.9%) costs. Nonetheless, there is a positive relation between size and cost for the very large hospitals and primary care centres. Our approach to estimating residents' costs controls for other teaching activities hospitals might have (namely undergraduate medical schools). Overall, the net costs of medical training appear to be quite small.
Coelli, Fernando C; Almeida, Renan M V R; Pereira, Wagner C A
2010-12-01
This work develops a cost analysis estimation for a mammography clinic, taking into account resource utilization and equipment failure rates. Two standard clinic models were simulated, the first with one mammography machine, two technicians, and one doctor, and the second (based on an actual operating clinic) with two machines, three technicians, and one doctor. Cost data and model parameters were obtained by direct measurements, literature reviews, and other hospital data. A discrete-event simulation model was developed in order to estimate the unit cost (total costs divided by the number of examinations in a defined period) of mammography examinations at these clinics. The cost analysis considered simulated changes in resource utilization rates and in examination failure probabilities (failures of the image acquisition system). In addition, a sensitivity analysis was performed, taking into account changes in the probabilities of the different equipment failure types. For the two clinic configurations, the estimated mammography unit costs were, respectively, US$ 41.31 and US$ 53.46 in the absence of examination failures. As examination failures increased up to 10% of total examinations, unit costs approached US$ 54.53 and US$ 53.95, respectively. The sensitivity analysis showed that increases in type 3 (the most serious) failures had a very large impact on patient attendance, up to the point of actually making attendance unfeasible. Discrete-event simulation allowed for the identification of the more efficient clinic, contingent on the expected prevalence of resource utilization and equipment failures. © 2010 Blackwell Publishing Ltd.
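The unit-cost definition above (total costs divided by completed examinations) and the effect of examination failures can be sketched as follows. The cost figures and the simple repeat-until-success failure model are illustrative assumptions, not the study's simulation parameters.

```python
def unit_cost(fixed_cost, var_cost_per_exam, demand, failure_rate):
    """Unit cost of a mammography examination over a period: total cost
    divided by the number of completed examinations. Failed acquisitions
    still consume machine and staff time (variable cost) but must be
    repeated, so a rising failure rate raises the unit cost."""
    attempts = demand / (1.0 - failure_rate)  # expected attempts incl. repeats
    total = fixed_cost + var_cost_per_exam * attempts
    return total / demand

# Hypothetical monthly figures for a one-machine clinic.
base = unit_cost(fixed_cost=20000, var_cost_per_exam=15,
                 demand=700, failure_rate=0.0)
degraded = unit_cost(fixed_cost=20000, var_cost_per_exam=15,
                     demand=700, failure_rate=0.10)
```

The same mechanism explains the abstract's finding that unit costs climb as failures approach 10% of examinations.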
Defining resilience within a risk-informed assessment framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coles, Garill A.; Unwin, Stephen D.; Holter, Gregory M.
2011-08-01
The concept of resilience is the subject of considerable discussion in academic, business, and governmental circles. The United States Department of Homeland Security, for one, has emphasised the need to consider resilience in safeguarding critical infrastructure and key resources. The concept of resilience is complex, multidimensional, and defined differently by different stakeholders. The authors contend that there is a benefit in moving from discussing resilience as an abstraction to defining resilience as a measurable characteristic of a system. This paper proposes defining resilience measures using elements of a traditional risk assessment framework, both to help clarify the concept of resilience and as a way to provide non-traditional risk information. The authors show that various, diverse dimensions of resilience can be quantitatively defined in a common risk assessment framework based on the concept of loss of service. This allows the comparison of options for improving the resilience of infrastructure and provides a means to perform cost-benefit analysis. This paper discusses definitions and key aspects of resilience, presents equations for the risk of loss of infrastructure function that incorporate four key aspects of resilience that could prevent or mitigate that loss, describes proposed resilience factor definitions based on those risk impacts, and provides an example that illustrates how resilience factors would be calculated using a hypothetical scenario.
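A toy version of a risk-based resilience measure is sketched below, assuming a simple prevent/mitigate decomposition that is only loosely inspired by the paper; the actual equations and the four resilience aspects are not reproduced here.

```python
def loss_of_service_risk(freq, consequence, p_prevent, mitigation):
    """Illustrative risk of loss of infrastructure function for a single
    scenario: occurrence frequency times consequence, reduced by the
    probability that resilience features prevent the loss entirely and by a
    mitigation fraction that softens or shortens it. A toy form, not the
    authors' equations."""
    return freq * (1.0 - p_prevent) * consequence * (1.0 - mitigation)

# Hypothetical scenario: compare a baseline system with a hardened one.
baseline = loss_of_service_risk(freq=0.2, consequence=100.0,
                                p_prevent=0.0, mitigation=0.0)
resilient = loss_of_service_risk(freq=0.2, consequence=100.0,
                                 p_prevent=0.5, mitigation=0.3)
resilience_factor = 1.0 - resilient / baseline  # fraction of risk removed
```

Expressing resilience as a fraction of scenario risk removed is what makes options comparable and amenable to cost-benefit analysis, as the paper argues.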
A mission operations architecture for the 21st century
NASA Technical Reports Server (NTRS)
Tai, W.; Sweetnam, D.
1996-01-01
An operations architecture is proposed for low cost missions beyond the year 2000. The architecture consists of three elements: a service based architecture; a demand access automata; and distributed science hubs. The service based architecture is based on a set of standard multimission services that are defined, packaged and formalized by the deep space network and the advanced multi-mission operations system. The demand access automata is a suite of technologies which reduces the need to be in contact with the spacecraft, and thus reduces operating costs. The beacon signaling, the virtual emergency room, and the high efficiency tracking automata technologies are described. The distributed science hubs provide information system capabilities to the small science oriented flight teams: individual access to all traditional mission functions and services; multimedia intra-team communications, and automated direct transparent communications between the scientists and the instrument.
Stabilization for sampled-data neural-network-based control systems.
Zhu, Xun-Lin; Wang, Youyi
2011-02-01
This paper studies the problem of stabilization for sampled-data neural-network-based control systems with an optimal guaranteed cost. Unlike previous works, the resulting closed-loop system with variable uncertain sampling cannot simply be regarded as an ordinary continuous-time system with a fast-varying delay in the state. By defining a novel piecewise Lyapunov functional and using a convex combination technique, the characteristic of sampled-data systems is captured. A new delay-dependent stabilization criterion is established in terms of linear matrix inequalities such that the maximal sampling interval and the minimal guaranteed cost control performance can be obtained. It is shown that the newly proposed approach can lead to less conservative and less complex results than the existing ones. Application examples are given to illustrate the effectiveness and the benefits of the proposed method.
Guidance, navigation, and control trades for an Electric Orbit Transfer Vehicle
NASA Astrophysics Data System (ADS)
Zondervan, K. P.; Bauer, T. A.; Jenkin, A. B.; Metzler, R. A.; Shieh, R. A.
The USAF Space Division initiated the Electric Insertion Transfer Experiment (ELITE) in the fall of 1988. The ELITE space mission is planned for the mid 1990s and will demonstrate technological readiness for the development of operational solar-powered electric orbit transfer vehicles (EOTVs). To minimize the cost of ground operations, autonomous flight is desirable. Thus, the guidance, navigation, and control (GNC) functions of an EOTV should reside on board. In order to define GNC requirements for ELITE, parametric trades must be performed for an operational solar-powered EOTV so that a clearer understanding of the performance aspects is obtained. Parametric trades for the GNC subsystems have provided insight into the relationship between pointing accuracy, transfer time, and propellant utilization. Additional trades need to be performed, taking into account weight, cost, and degree of autonomy.
Life support system cost study: Addendum to cost analysis of carbon dioxide concentrators
NASA Technical Reports Server (NTRS)
Yakut, M. M.
1973-01-01
New cost data are presented for the Hydrogen-Depolarized Carbon Dioxide Concentrator (HDC), based on modifying the concentrator to delete the quick disconnect valves and filters included in the system model defined in MDC-G4631. System description, cost data and a comparison between CO2 concentrator costs are presented.
Technology Candidates for Air-to-Air and Air-to-Ground Data Exchange
NASA Technical Reports Server (NTRS)
Haynes, Brian D.
2015-01-01
Technology Candidates for Air-to-Air and Air-to-Ground Data Exchange is a two-year research effort to visualize the U. S. aviation industry at a point 50 years in the future, and to define potential communication solutions to meet those future data exchange needs. The research team, led by XCELAR, was tasked with identifying future National Airspace System (NAS) scenarios, determining requirements and functions (including gaps), investigating technical and business issues for air, ground, & air-to-ground interactions, and reporting on the results. The project was conducted under technical direction from NASA and in collaboration with XCELAR's partner, National Institute of Aerospace, and NASA technical representatives. Parallel efforts were initiated to define the information exchange functional needs of the future NAS, and specific communication link technologies to potentially serve those needs. Those efforts converged with the mapping of each identified future NAS function to potential enabling communication solutions; those solutions were then compared with, and ranked relative to, each other on a technical basis in a structured analysis process. The technical solutions emerging from that process were then assessed from a business case perspective to determine their viability from a real-world adoption and deployment standpoint. The results of that analysis produced a proposed set of future solutions and most promising candidate technologies. Gap analyses were conducted at two points in the process, the first examining technical factors, and the second as part of the business case analysis. In each case, no gaps or unmet needs were identified in applying the solutions evaluated to the requirements identified. The future communication solutions identified in the research comprise both specific link technologies and two enabling technologies that apply to most or all specific links. 
As a result, the research produced a new analysis approach, viewing the underlying architecture of ground-air and air-air communications as a whole rather than as simple "link-to-function" paired solutions. For the business case analysis, a number of "reference architectures" were developed for both the future technologies and the current systems, based on three typical configurations of current aircraft. Current and future costs were assigned, and various comparisons made between the current and future architectures. In general, it was assumed that if a future architecture offers lower cost than the current typical architecture, while delivering equivalent or better performance, it is likely that the future solution will gain industry acceptance. Conversely, future architectures presenting higher costs than their current counterparts must present a compelling benefit case in other areas or risk a lack of industry acceptance. The business case analysis consistently indicated lower costs for the proposed future architectures, and in most cases significantly so. The proposed future solutions were found to offer significantly greater functionality, flexibility, and growth potential over time, at lower cost, than current systems. This was true for overall, fleet-wide equipage for domestic and oceanic air carriers, as well as for single General Aviation (GA) aircraft. The overall research results indicate that all identified requirements can be met by the proposed solutions, with significant capacity for future growth. Results also illustrate that the majority of the future communication needs can be met using currently allocated aviation RF spectrum, if it is used in more effective ways than it is today.
A combination of such optimized aviation-specific links and commercial communication systems meets all identified needs for the 50-year future and beyond, with the caveat that a new, overall function will be needed to manage all information exchange, individual links, security, cost, and other factors. This function was labeled "Delivery Manager" (DM) within this research. DM employs a distributed client/server architecture, for both airborne and ground communications architectures. Final research results included identifying the most promising candidate technologies for the future system, conclusions and recommendations, and identifying areas where further research should be considered.
NASA Astrophysics Data System (ADS)
Smedstad, L.; Barron, C. N.; Book, J. W.; Osborne, J. J.; Souopgui, I.; Rice, A. E.; Linzell, R. S.
2017-12-01
The Guidance of Heterogeneous Observation Systems (GHOST) is a tool designed to sample ocean model outputs to determine a suite of possible path options for unmanned platforms. The system is built around a Runge-Kutta method to determine all possible paths, followed by a cost function calculation, an enforcement of safe operating area, and an analysis to determine a top 10% level of cost function and to rank the paths that qualify. A field experiment took place from 16 May until 5 June 2017 aboard the R/V Savannah operating out of the Duke University Marine Laboratory (DUML) in Beaufort, NC. Gliders were deployed in alternating groups with missions defined by one of two possible categories: a station-keeping array and a moving array. Unlike previous versions of the software, which monitored platforms individually, these gliders were placed in groups of 2-5 gliders with the same tasks. Daily runs of the GHOST software were performed for each mission category and for two different 1 km orientations of the Navy Coastal Ocean Model (NCOM). By limiting the number of trial solutions and by sorting through the best results, a quick turnaround was made possible for glider operators to determine waypoints in order to remain in desired areas or to move in paths that sampled areas of highest thermohaline variability. Limiting risk by restricting solutions to defined areas with statistically less likely occurrences of high ocean currents was an important consideration in this study area that was located just inshore of the Gulf Stream.
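The ranking step described above can be sketched in a few lines. The sketch below is illustrative, not the GHOST code: `path_cost`, `rank_paths`, the synthetic variability field, and the inshore safety test are all assumed names and toy models standing in for the model-sampled quantities.

```python
import math

def path_cost(path, variability_field):
    """Hypothetical cost: negative sampled thermohaline variability,
    so a lower cost means a more informative path."""
    return -sum(variability_field(x, y) for x, y in path)

def rank_paths(paths, variability_field, safe_region, top_fraction=0.10):
    """Keep only paths entirely inside the safe operating area, then
    retain the best `top_fraction` by cost and rank them."""
    safe = [p for p in paths if all(safe_region(x, y) for x, y in p)]
    scored = sorted(safe, key=lambda p: path_cost(p, variability_field))
    keep = max(1, int(len(scored) * top_fraction))
    return scored[:keep]

# Toy example: straight-line candidate paths over a synthetic field.
field = lambda x, y: math.exp(-((x - 2) ** 2 + (y - 2) ** 2))  # variability peak at (2, 2)
inshore = lambda x, y: x < 4.5  # keep gliders inshore of a strong-current boundary
candidates = [[(i * 0.5, j) for j in range(5)] for i in range(10)]
best = rank_paths(candidates, field, inshore)
print(len(best))  # top 10% of the safe candidates
```

In the real workflow the field values come from NCOM output and the ranked waypoint lists go to the glider operators; limiting the trial solutions is what made the daily turnaround feasible.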
A priori mesh grading for the numerical calculation of the head-related transfer functions
Ziegelwanger, Harald; Kreuzer, Wolfgang; Majdak, Piotr
2017-01-01
Head-related transfer functions (HRTFs) describe the directional filtering of the incoming sound caused by the morphology of a listener’s head and pinnae. When an accurate model of a listener’s morphology exists, HRTFs can be calculated numerically with the boundary element method (BEM). However, the general recommendation to model the head and pinnae with at least six elements per wavelength renders the BEM as a time-consuming procedure when calculating HRTFs for the full audible frequency range. In this study, a mesh preprocessing algorithm is proposed, viz., a priori mesh grading, which reduces the computational costs in the HRTF calculation process significantly. The mesh grading algorithm deliberately violates the recommendation of at least six elements per wavelength in certain regions of the head and pinnae and varies the size of elements gradually according to an a priori defined grading function. The evaluation of the algorithm involved HRTFs calculated for various geometric objects including meshes of three human listeners and various grading functions. The numerical accuracy and the predicted sound-localization performance of calculated HRTFs were analyzed. A-priori mesh grading appeared to be suitable for the numerical calculation of HRTFs in the full audible frequency range and outperformed uniform meshes in terms of numerical errors, perception based predictions of sound-localization performance, and computational costs. PMID:28239186
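The trade-off between the six-elements-per-wavelength rule and a priori grading can be illustrated with a toy calculation. The linear grading function, growth rate, and size cap below are assumptions invented for illustration; the paper's actual grading functions differ.

```python
def uniform_edge_length(frequency_hz, c=343.0, elements_per_wavelength=6):
    """Classic rule of thumb: at least six elements per wavelength,
    applied uniformly over the whole mesh."""
    return c / frequency_hz / elements_per_wavelength

def graded_edge_length(distance_from_ear, frequency_hz, c=343.0,
                       h_min=None, growth=0.5, h_max=0.01):
    """A priori grading (hypothetical linear form): fine elements near
    the pinna, gradually coarser with distance, capped at h_max metres."""
    if h_min is None:
        h_min = uniform_edge_length(frequency_hz)
    return min(h_min + growth * distance_from_ear, h_max)

# At 20 kHz the uniform rule requires ~2.9 mm elements everywhere;
# a graded mesh only needs that resolution near the ear.
h_u = uniform_edge_length(20000.0)
print(round(h_u * 1000, 2), "mm near the ear")
print(graded_edge_length(0.10, 20000.0))  # coarser 10 cm away from the ear
```

Because BEM cost grows steeply with element count, coarsening the mesh away from the pinna is what yields the reported computational savings without degrading the perceptually relevant accuracy.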
Gioe, Terence J; Sharma, Amit; Tatman, Penny; Mehle, Susan
2011-01-01
Numerous joint implant options of varying cost are available to the surgeon, but it is unclear whether more costly implants add value in terms of function or longevity. We evaluated registry survival of higher-cost "premium" knee and hip components compared to lower-priced standard components. Premium TKA components were defined as mobile-bearing designs, high-flexion designs, oxidized-zirconium designs, those including moderately crosslinked polyethylene inserts, or some combination. Premium THAs included ceramic-on-ceramic, metal-on-metal, and ceramic-on-highly crosslinked polyethylene designs. We compared 3462 standard TKAs to 2806 premium TKAs and 868 standard THAs to 1311 premium THAs using standard statistical methods. The cost of the premium implants was on average approximately $1000 higher than the standard implants. There was no difference in the cumulative revision rate at 7-8 years between premium and standard TKAs or THAs. In this time frame, premium implants did not demonstrate better survival than standard implants. Revision indications for TKA did not differ, and infection and instability remained contributors. Longer followup is necessary to demonstrate whether premium implants add value in younger patient groups. Level III, therapeutic study. See Guidelines for Authors for a complete description of levels of evidence.
Digitizing Dissertations for an Institutional Repository: A Process and Cost Analysis*
Piorun, Mary; Palmer, Lisa A.
2008-01-01
Objective: This paper describes the Lamar Soutter Library's process and costs associated with digitizing 300 doctoral dissertations for a newly implemented institutional repository at the University of Massachusetts Medical School. Methodology: Project tasks included identifying metadata elements, obtaining and tracking permissions, converting the dissertations to an electronic format, and coordinating workflow between library departments. Each dissertation was scanned, reviewed for quality control, enhanced with a table of contents, processed through an optical character recognition function, and added to the institutional repository. Results: Three hundred and twenty dissertations were digitized and added to the repository for a cost of $23,562, or $0.28 per page. Seventy-four percent of the authors who were contacted (n = 282) granted permission to digitize their dissertations. Processing time per title was 170 minutes, for a total processing time of 906 hours. In the first 17 months, full-text dissertations in the collection were downloaded 17,555 times. Conclusion: Locally digitizing dissertations or other scholarly works for inclusion in institutional repositories can be cost effective, especially if small, defined projects are chosen. A successful project serves as an excellent recruitment strategy for the institutional repository and helps libraries build new relationships. Challenges include workflow, cost, policy development, and copyright permissions. PMID:18654648
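The reported figures are internally consistent, which a quick arithmetic check confirms (the page count is inferred from the totals, not stated in the abstract):

```python
# 320 titles at 170 minutes each reproduces the reported 906-hour total.
titles = 320
minutes_per_title = 170
total_hours = titles * minutes_per_title / 60
print(int(total_hours))  # 906 hours, matching the reported total

# $23,562 at $0.28 per page implies roughly 84,000 scanned pages.
total_cost = 23562
cost_per_page = 0.28
pages = total_cost / cost_per_page
print(round(pages))
```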
Fey, Nicholas P; Klute, Glenn K; Neptune, Richard R
2012-11-01
Unilateral below-knee amputees develop abnormal gait characteristics that include bilateral asymmetries and an elevated metabolic cost relative to non-amputees. In addition, long-term prosthesis use has been linked to an increased prevalence of joint pain and osteoarthritis in the intact leg knee. To improve amputee mobility, prosthetic feet that utilize elastic energy storage and return (ESAR) have been designed, which perform important biomechanical functions such as providing body support and forward propulsion. However, the prescription of appropriate design characteristics (e.g., stiffness) is not well-defined since its influence on foot function and important in vivo biomechanical quantities such as metabolic cost and joint loading remain unclear. The design of feet that improve these quantities could provide considerable advancements in amputee care. Therefore, the purpose of this study was to couple design optimization with dynamic simulations of amputee walking to identify the optimal foot stiffness that minimizes metabolic cost and intact knee joint loading. A musculoskeletal model and distributed stiffness ESAR prosthetic foot model were developed to generate muscle-actuated forward dynamics simulations of amputee walking. Dynamic optimization was used to solve for the optimal muscle excitation patterns and foot stiffness profile that produced simulations that tracked experimental amputee walking data while minimizing metabolic cost and intact leg internal knee contact forces. Muscle and foot function were evaluated by calculating their contributions to the important walking subtasks of body support, forward propulsion and leg swing. The analyses showed that altering a nominal prosthetic foot stiffness distribution by stiffening the toe and mid-foot while making the ankle and heel less stiff improved ESAR foot performance by offloading the intact knee during early to mid-stance of the intact leg and reducing metabolic cost. 
The optimal design also provided moderate braking and body support during the first half of residual leg stance, while increasing the prosthesis contributions to forward propulsion and body support during the second half of residual leg stance. Future work will be directed at experimentally validating these results, which have important implications for future designs of prosthetic feet that could significantly improve amputee care.
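A heavily simplified sketch of the stiffness optimization follows. The quadratic surrogate terms, weights, and exhaustive grid search are all stand-ins invented for illustration; the study evaluated metabolic cost and knee contact force with muscle-actuated forward dynamics simulations and solved for the stiffness profile via dynamic optimization.

```python
def objective(stiffness, w_met=1.0, w_knee=1.0, w_track=1.0):
    """Toy surrogate (hypothetical): each term is a simple quadratic in
    normalized segment stiffnesses; the real terms come from simulation."""
    toe, ankle = stiffness
    metabolic = (toe - 1.2) ** 2 + (ankle - 0.8) ** 2      # favors a stiffer toe
    knee_load = (ankle - 0.7) ** 2 + 0.5 * (toe - 1.1) ** 2  # favors a softer ankle
    tracking = 0.1 * (toe - 1.0) ** 2 + 0.1 * (ankle - 1.0) ** 2
    return w_met * metabolic + w_knee * knee_load + w_track * tracking

def grid_search(step=0.01):
    """Crude stand-in for dynamic optimization: exhaustively search the
    toe/ankle stiffness plane for the minimum-cost design."""
    best, best_cost = None, float("inf")
    for i in range(51, 151):
        for j in range(51, 151):
            s = (i * step, j * step)
            c = objective(s)
            if c < best_cost:
                best, best_cost = s, c
    return best

toe, ankle = grid_search()
print(toe > ankle)  # the optimum stiffens the toe relative to the ankle, as in the study
```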
Benefit-cost evaluation of ITS projects : benefit-cost summary
DOT National Transportation Integrated Search
2000-10-01
Over 60 individual ITS projects were defined in four "model" cities for purposes of conducting benefit-cost evaluations of deployments under the USDOT's Metropolitan Model Deployment Initiative (MMDI). The federal government provided funds for deploy...
Social Discounting and the Prisoner’s Dilemma Game
Locey, Matthew L.; Safin, Vasiliy; Rachlin, Howard
2014-01-01
Altruistic behavior has been defined in economic terms as “…costly acts that confer economic benefits on other individuals” (Fehr & Fischbacher, 2003). In a prisoner’s dilemma game, cooperation benefits the group but is costly to the individual (relative to defection), yet a significant number of players choose to cooperate. We propose that people do value rewards to others, albeit at a discounted rate (social discounting), in a manner similar to discounting of delayed rewards (delay discounting). Two experiments opposed the personal benefit from defection to the socially discounted benefit to others from cooperation. The benefit to others was determined from a social discount function relating the individual’s subjective value of a reward to another person and the social distance between that individual and the other person. In Experiment 1, the cost of cooperating was held constant while its social benefit was varied in terms of the number of other players, each gaining a fixed, hypothetical amount of money. In Experiment 2, the cost of cooperating was again held constant while the social benefit of cooperating was varied by the hypothetical amount of money earned by a single other player. In both experiments, significantly more participants cooperated when the social benefit was higher. PMID:23344990
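The decision criterion described above can be sketched with the hyperbolic social discount function used in this literature, v = V/(1 + kN), where N is social distance. The discount constant and dollar amounts below are illustrative assumptions, not values fitted in the experiments.

```python
def socially_discounted_value(amount, social_distance, k=0.05):
    """Hyperbolic social discount function v = V / (1 + k*N), the form
    Rachlin and colleagues fit to social discounting data (k here is an
    illustrative constant, not a fitted value)."""
    return amount / (1 + k * social_distance)

def predicts_cooperation(own_cost, amount_to_each, n_others, social_distance, k=0.05):
    """Cooperate when the summed discounted benefit to the other players
    outweighs the personal benefit forgone by not defecting."""
    benefit = n_others * socially_discounted_value(amount_to_each, social_distance, k)
    return benefit > own_cost

# Raising either the number of beneficiaries (as in Experiment 1) or the
# amount each receives (as in Experiment 2) makes cooperation more likely.
print(predicts_cooperation(own_cost=50, amount_to_each=10, n_others=2, social_distance=10))
print(predicts_cooperation(own_cost=50, amount_to_each=10, n_others=20, social_distance=10))
```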
Analysis and optimization of hybrid electric vehicle thermal management systems
NASA Astrophysics Data System (ADS)
Hamut, H. S.; Dincer, I.; Naterer, G. F.
2014-02-01
In this study, the thermal management system of a hybrid electric vehicle is optimized using single and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic and exergoenvironmental single objective optimization results. The results show that the exergy efficiency, total cost rate and environmental impact rate for the baseline system are determined to be 0.29, ¢28 h-1 and 77.3 mPts h-1 respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to baseline parameters at an expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.
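A minimal sketch of the Pareto filtering and LINMAP-style selection follows. The trade-off points are invented, loosely echoing the reported baseline, and the min-max normalization with an all-zeros ideal point is one common LINMAP formulation, not necessarily the authors' exact procedure.

```python
import math

def pareto_front(points):
    """Non-dominated points; each point is a tuple of objectives to be
    minimized (here: -exergy efficiency, cost rate, environmental impact rate)."""
    def dominated(p, q):
        return all(qi <= pi for pi, qi in zip(p, q)) and q != p
    return [p for p in points if not any(dominated(p, q) for q in points)]

def linmap_select(front):
    """LINMAP-style choice: normalize each objective to [0, 1] over the
    front, then take the point closest to the ideal (all-zeros) point."""
    lo = [min(p[i] for p in front) for i in range(len(front[0]))]
    hi = [max(p[i] for p in front) for i in range(len(front[0]))]
    def norm(p):
        return [(v - l) / (h - l) if h > l else 0.0 for v, l, h in zip(p, lo, hi)]
    return min(front, key=lambda p: math.dist(norm(p), [0.0] * len(p)))

# Invented trade-off points: (-efficiency, cost in cents/h, impact in mPts/h).
designs = [(-0.29, 28.0, 77.3), (-0.33, 26.6, 88.1), (-0.31, 35.6, 73.4), (-0.25, 40.0, 90.0)]
front = pareto_front(designs)
print(linmap_select(front))
```

The last point is dominated (worse on every objective than the baseline) and is filtered out; the selector then balances the surviving trade-offs, mirroring how the study chose a single desirable solution from its Pareto frontier.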
Can IR scene projectors reduce total system cost?
NASA Astrophysics Data System (ADS)
Ginn, Robert; Solomon, Steven
2006-05-01
There is an incredible amount of system engineering involved in turning the typical infrared system needs of probability of detection, probability of identification, and probability of false alarm into focal plane array (FPA) requirements of noise equivalent irradiance (NEI), modulation transfer function (MTF), fixed pattern noise (FPN), and defective pixels. Unfortunately, there are no analytic solutions to this problem, so many approximations and plenty of "seat of the pants" engineering are employed. This leads to conservative specifications, which needlessly drive up system costs by increasing system engineering costs, reducing FPA yields, increasing test costs, increasing rework, and the never-ending renegotiation of requirements in an effort to rein in costs. These issues do not include the added complexity to the FPA factory manager of trying to meet varied, and changing, requirements for similar products because different customers have made different approximations and flowed down different specifications. Scene generation technology may well be mature and cost effective enough to generate considerable overall savings for FPA-based systems. We will compare the costs and capabilities of various existing scene generation systems and estimate the potential savings if implemented at several locations in the IR system fabrication cycle. The costs of implementing this new testing methodology will be compared to the probable savings in systems engineering, test, rework, yield improvement and others. The diverse requirements and techniques required for testing missile warning systems, missile seekers, and FLIRs will be defined. Last, we will discuss both the hardware and software requirements necessary to meet the new test paradigm and discuss additional cost improvements related to the incorporation of these technologies.
Exploration Supply Chain Simulation
NASA Technical Reports Server (NTRS)
2008-01-01
The Exploration Supply Chain Simulation project was chartered by the NASA Exploration Systems Mission Directorate to develop a software tool, with proper data, to quantitatively analyze supply chains for future program planning. This tool is a discrete-event simulation that uses the basic supply chain concepts of planning, sourcing, making, delivering, and returning. This supply chain perspective is combined with other discrete or continuous simulation factors. Discrete resource events (such as launch or delivery reviews) are represented as organizational functional units. Continuous resources (such as civil service or contractor program functions) are defined as enabling functional units. Concepts of fixed and variable costs are included in the model to allow the discrete events to interact with cost calculations. The definition file is intrinsic to the model, but a blank start can be initiated at any time. The current definition file is an Orion Ares I crew launch vehicle. Parameters stretch from Kennedy Space Center across and into other program entities (Michoud Assembly Facility, Alliant Techsystems, Stennis Space Center, Johnson Space Center, etc.), though these will only gain detail as the file continues to evolve. The Orion Ares I file definition in the tool continues to evolve, and analysis from this tool is expected in 2008. This is the first application of such business-driven modeling to a NASA/government and aerospace-contractor endeavor.
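The interaction between discrete events and fixed/variable costs can be sketched with a toy discrete-event loop. This is an illustration of the modeling concept, not the NASA tool; all event names, costs, and the daily fixed-cost rate are invented.

```python
import heapq

def simulate(events, daily_fixed_cost, horizon_days):
    """Minimal discrete-event sketch: discrete events (e.g. reviews) carry a
    variable cost when they occur, while enabling functional units (e.g. a
    contractor program office) accrue a fixed cost over the whole horizon."""
    queue = list(events)            # entries are (day, name, variable_cost)
    heapq.heapify(queue)
    total_variable, log = 0.0, []
    while queue:
        day, name, cost = heapq.heappop(queue)
        if day > horizon_days:      # events past the horizon are not charged
            break
        total_variable += cost
        log.append((day, name))
    total_fixed = daily_fixed_cost * horizon_days
    return total_fixed + total_variable, log

events = [(30, "delivery review", 2.0e5), (90, "launch review", 5.0e5),
          (400, "next-cycle review", 5.0e5)]
total, log = simulate(events, daily_fixed_cost=1.0e4, horizon_days=365)
print(total)   # fixed cost of 365 days plus the two in-horizon event costs
print([name for _, name in log])
```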
Wingfield, Tom; Boccia, Delia; Tovar, Marco; Gavino, Arquímedes; Zevallos, Karine; Montoya, Rosario; Lönnroth, Knut; Evans, Carlton A
2014-07-01
Even when tuberculosis (TB) treatment is free, hidden costs incurred by patients and their households (TB-affected households) may worsen poverty and health. Extreme TB-associated costs have been termed "catastrophic" but are poorly defined. We studied TB-affected households' hidden costs and their association with adverse TB outcome to create a clinically relevant definition of catastrophic costs. From 26 October 2002 to 30 November 2009, TB patients (n = 876, 11% with multi-drug-resistant [MDR] TB) and healthy controls (n = 487) were recruited to a prospective cohort study in shantytowns in Lima, Peru. Patients were interviewed prior to and every 2-4 wk throughout treatment, recording direct (household expenses) and indirect (lost income) TB-related costs. Costs were expressed as a proportion of the household's annual income. In poorer households, costs were lower but constituted a higher proportion of the household's annual income: 27% (95% CI = 20%-43%) in the least-poor houses versus 48% (95% CI = 36%-50%) in the poorest. Adverse TB outcome was defined as death, treatment abandonment or treatment failure during therapy, or recurrence within 2 y. 23% (166/725) of patients with a defined treatment outcome had an adverse outcome. Total costs ≥20% of household annual income was defined as catastrophic because this threshold was most strongly associated with adverse TB outcome. Catastrophic costs were incurred by 345 households (39%). Having MDR TB was associated with a higher likelihood of incurring catastrophic costs (54% [95% CI = 43%-61%] versus 38% [95% CI = 34%-41%], p<0.003). Adverse outcome was independently associated with MDR TB (odds ratio [OR] = 8.4 [95% CI = 4.7-15], p<0.001), previous TB (OR = 2.1 [95% CI = 1.3-3.5], p = 0.005), days too unwell to work pre-treatment (OR = 1.01 [95% CI = 1.00-1.01], p = 0.02), and catastrophic costs (OR = 1.7 [95% CI = 1.1-2.6], p = 0.01). 
The adjusted population attributable fraction of adverse outcomes explained by catastrophic costs was 18% (95% CI = 6.9%-28%), similar to that of MDR TB (20% [95% CI = 14%-25%]). Sensitivity analyses demonstrated that existing catastrophic costs thresholds (≥10% or ≥15% of household annual income) were not associated with adverse outcome in our setting. Study limitations included not measuring certain "dis-saving" variables (including selling household items) and gathering only 6 mo of costs-specific follow-up data for MDR TB patients. Despite free TB care, having TB disease was expensive for impoverished TB patients in Peru. Incurring higher relative costs was associated with adverse TB outcome. The population attributable fraction indicated that catastrophic costs and MDR TB were associated with similar proportions of adverse outcomes. Thus TB is a socioeconomic as well as infectious problem, and TB control interventions should address both the economic and clinical aspects of this disease. Please see later in the article for the Editors' Summary.
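The catastrophic-costs definition reduces to a simple threshold on the cost-to-income ratio. The sketch below uses invented household figures; only the 20% threshold (and the rejected 10% and 15% alternatives) comes from the study.

```python
def cost_ratio(direct_costs, lost_income, household_annual_income):
    """Total TB-related costs (expenses plus lost income) as a fraction
    of annual household income."""
    return (direct_costs + lost_income) / household_annual_income

def is_catastrophic(direct_costs, lost_income, household_annual_income, threshold=0.20):
    """The study defined costs as catastrophic at >=20% of annual income,
    the threshold most strongly associated with adverse TB outcome; the
    older 10% and 15% thresholds were not associated with it here."""
    return cost_ratio(direct_costs, lost_income, household_annual_income) >= threshold

# Hypothetical household: $900 expenses plus $600 lost income on a $6000 income.
print(is_catastrophic(900, 600, 6000))   # 25% of annual income: catastrophic
print(is_catastrophic(300, 200, 6000))   # ~8%: below every threshold
```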
Metsemakers, Willem-Jan; Smeets, Bart; Nijs, Stefaan; Hoekstra, Harm
2017-06-01
One of the most challenging complications in musculoskeletal trauma surgery is the development of infection after fracture fixation (IAFF). It can delay healing, lead to permanent functional loss, or even amputation of the affected limb. The main goal of this study was to investigate the total healthcare costs and length of stay (LOS) related to the surgical treatment of tibia fractures and furthermore identify the subset of clinical variables driving these costs within the Belgian healthcare system. The hypothesis was that deep infection would be the most important driver for total healthcare costs. Overall, 358 patients treated operatively for AO/OTA type 41, 42, and 43 tibia fractures between January 1, 2009 and January 1, 2014 were included in this study. A total of 26 clinical and process variables were defined. Calculated costs were limited to hospital care covered by the Belgian healthcare financing system. The five main cost categories studied were: honoraria, materials, hospitalization, day care admission, and pharmaceuticals. Multivariate analysis showed that deep infection was the most significant characteristic driving total healthcare costs and LOS related to the surgical treatment of tibia fractures. Furthermore, this complication resulted in the highest overall increase in total healthcare costs and LOS. Treatment costs were approximately 6.5 times higher compared to uninfected patients. This study shows the enormous hospital-related healthcare costs associated with IAFF of the tibia. Treatment costs for patients with deep infection are higher than previously mentioned in the literature. Therefore, future research should focus more on prevention rather than treatment strategies, not only to reduce patient morbidity but also to reduce the socio-economic impact. Copyright © 2017 Elsevier Ltd. All rights reserved.
Overhead Costs and Rates in the U.S. Defense Industrial Base. Volume 1
1980-10-01
Manager rather than to establish rigidly defined cost accounting structures. The conclusions to be drawn from the analysis were that overhead costs have... specific costs which make up the overhead account; whether management is controlling them; whether these costs are "reasonable" and the external factors... cost accounting structures, and since there is no one accounting definition of
Toward scalable parts families for predictable design of biological circuits.
Lucks, Julius B; Qi, Lei; Whitaker, Weston R; Arkin, Adam P
2008-12-01
Our current ability to engineer biological circuits is hindered by design cycles that are costly in terms of time and money, with constructs failing to operate as desired, or evolving away from the desired function once deployed. Synthetic biologists seek to understand biological design principles and use them to create technologies that increase the efficiency of the genetic engineering design cycle. Central to the approach is the creation of biological parts--encapsulated functions that can be composited together to create new pathways with predictable behaviors. We define five desirable characteristics of biological parts--independence, reliability, tunability, orthogonality and composability, and review studies of small natural and synthetic biological circuits that provide insights into each of these characteristics. We propose that the creation of appropriate sets of families of parts with these properties is a prerequisite for efficient, predictable engineering of new function in cells and will enable a large increase in the sophistication of genetic engineering applications.
Bradley, Steven M; Strauss, Craig E; Ho, P Michael
2017-08-01
Healthcare value, defined as health outcomes achieved relative to the costs of care, has been proposed as a unifying approach to measure improvements in the quality and affordability of healthcare. Although value is of increasing interest to payers, many providers remain unfamiliar with how value differs from other approaches to the comparison of cost and outcomes (ie, cost-effectiveness analysis). While cost-effectiveness studies can be used by policy makers and payers to inform decisions about coverage and reimbursement for new therapies, the assessment of healthcare value can guide improvements in the delivery of healthcare to achieve better outcomes at lower cost. Comparison on value allows for the identification of healthcare delivery organisations or care delivery settings where patient outcomes have been optimised at a lower cost. Gaps remain in the measurement of healthcare value, particularly as it relates to patient-reported health status (symptoms, functional status and health-related quality of life). The use of technology platforms that capture health status measures with minimal disruption to clinical workflow (ie, web portals, automated telephonic systems and tablets to facilitate capture outside of in-person clinical interaction) is facilitating use of health status measures to improve clinical care and optimise patient outcomes. Furthermore, the use of a value framework has catalysed quality improvement efforts and research to seek better patient outcomes at lower cost. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
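The value definition is a simple ratio, which makes its difference from a pure outcome comparison easy to show. The outcome scores and costs below are invented for illustration.

```python
def value_score(outcome_score, cost):
    """Healthcare value as defined in the article: outcomes achieved
    relative to the costs of delivering them (units are illustrative)."""
    return outcome_score / cost

# Two hypothetical care settings treating comparable patients: B achieves
# slightly worse outcomes at much lower cost, so it delivers more value
# per dollar, whereas a pure outcome comparison would rank A first.
a = value_score(outcome_score=0.90, cost=12000)
b = value_score(outcome_score=0.85, cost=8000)
print(b > a)
```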
Impact of robotic technique and surgical volume on the cost of radical prostatectomy.
Hyams, Elias S; Mullins, Jeffrey K; Pierorazio, Phillip M; Partin, Alan W; Allaf, Mohamad E; Matlaga, Brian R
2013-03-01
Our present understanding of the effect of robotic surgery and surgical volume on the cost of radical prostatectomy (RP) is limited. Given the increasing pressures placed on healthcare resource utilization, such determinations of healthcare value are becoming increasingly important. Therefore, we performed a study to define the effect of robotic technology and surgical volume on the cost of RP. The state of Maryland mandates that all acute-care hospitals report encounter-level and hospital discharge data to the Health Service Cost Review Commission (HSCRC). The HSCRC was queried for men undergoing RP between 2008 and 2011 (the period during which robot-assisted laparoscopic radical prostatectomy [RALRP] was coded separately). High-volume hospitals were defined as >60 cases per year, and high-volume surgeons were defined as >40 cases per year. Multivariate regression analysis was performed to evaluate whether robotic technique and high surgical volume impacted the cost of RP. There were 1499 patients who underwent RALRP and 2565 who underwent radical retropubic prostatectomy (RRP) during the study period. The total cost for RALRP was higher than for RRP ($14,000 vs $10,100; P<0.001) based primarily on operating room charges and supply charges. Multivariate regression demonstrated that RALRP was associated with a significantly higher cost (β coeff 4.1; P<0.001), even within high-volume hospitals (β coeff 3.3; P<0.001). High-volume surgeons and high-volume hospitals, however, were associated with a significantly lower cost for RP overall. High surgeon volume was associated with lower cost for RALRP and RRP, while high institutional volume was associated with lower cost for RALRP only. High surgical volume was associated with lower cost of RP. Even at high surgical volume, however, the cost of RALRP still exceeded that of RRP.
As robotic surgery has come to dominate the healthcare marketplace, strategies to increase the role of high-volume providers may be needed to improve the cost-effectiveness of prostate cancer surgical therapy.
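The volume/technique comparison can be illustrated with a toy group summary. The synthetic charges below are invented to echo the reported pattern; the actual study used multivariate regression on HSCRC discharge data to adjust for other factors.

```python
from statistics import mean
from collections import defaultdict

def mean_cost_by_group(cases):
    """Group hospital charges by technique and surgeon-volume stratum,
    mirroring (in toy form) the comparison in the study."""
    groups = defaultdict(list)
    for technique, high_volume, cost in cases:
        groups[(technique, high_volume)].append(cost)
    return {k: mean(v) for k, v in groups.items()}

# Synthetic charges in $1000s: robotic (RALRP) cases cost more than open
# (RRP) even for high-volume surgeons, but volume lowers cost within each
# technique, echoing the reported findings.
cases = [("RALRP", False, 15.0), ("RALRP", False, 14.2), ("RALRP", True, 13.0),
         ("RALRP", True, 13.4), ("RRP", False, 10.8), ("RRP", False, 10.4),
         ("RRP", True, 9.6), ("RRP", True, 9.8)]
means = mean_cost_by_group(cases)
print(means[("RALRP", True)] > means[("RRP", False)])  # robot costs more even at high volume
print(means[("RRP", True)] < means[("RRP", False)])    # volume lowers cost
```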
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriger, A.
1978-01-31
This report is a part of the interim report documentation for the Global Spent Fuel Logistics System (GSFLS) study. The technical and financial considerations underlying a global spent fuel logistics system have been studied and are reported. The Pacific Basin is used as a model throughout this report; however, the stated methodology and, in many cases, considerations and conclusions are applicable to other global regions. Spent fuel discharge profiles for Pacific Basin countries were used to determine the technical systems requirements for alternative concepts. Functional analyses and flows were generated to define both system design requirements and logistics parameters. A technology review was made to ascertain the state-of-the-art of relevant GSFLS technical systems. Modular GSFLS facility designs were developed using the information generated from the functional analysis and technology review. The modular facility designs were used as a basis for siting and cost estimates for various GSFLS alternatives. Various GSFLS concepts were analyzed from a financial and economic perspective in order to provide total concept costs and ascertain financial and economic sensitivities to key GSFLS variations. Results of the study include quantification of GSFLS facility and hardware requirements; drawings of relevant GSFLS facility designs; system cost estimates; financial reports, including user service charges; and comparative analyses of various GSFLS alternatives.
The HSA in Your Future: Defined Contribution Retiree Medical Coverage.
Towarnicky, Jack M
In 2004, when evaluating health savings account (HSA) business opportunities, I predicted: "Twenty-five years ago, no one had ever heard of 401(k); 25 years from now, everyone will have an HSA." Twelve years later, growth in HSA eligibility, participation, contributions and asset accumulations suggests we just might achieve that prediction. This article shares one plan sponsor's journey to help employees accumulate assets to fund medical costs, both while employed and after retirement. It documents a 30-plus-year retiree health insurance transition from a defined benefit to a defined dollar structure, culminating in a full-replacement defined contribution structure using HSA-qualifying high-deductible health plans (HDHPs), and then redeploying/repurposing the HSA to incorporate a savings incentive for retiree medical costs.
Implementing Target Value Design.
Alves, Thais da C L; Lichtig, Will; Rybkowski, Zofia K
2017-04-01
An alternative to the traditional way of designing projects is the process of target value design (TVD), which takes different departure points to start the design process. The TVD process starts with the client defining an allowable cost that needs to be met by the design and construction teams. An expected cost in the TVD process is defined through multiple interactions between multiple stakeholders who define wishes and others who define ways of achieving these wishes. Finally, a target cost is defined based on the profit the design and construction teams expect to make. TVD follows a series of continuous improvement efforts aimed at reaching the desired goals for the project and its associated target value cost. The process takes advantage of rapid cycles of suggestions, analyses, and implementation that start with the definition of value for the client. In the traditional design process, the goal is to identify user preferences and find solutions that meet the needs of the client's expressed preferences. In the lean design process, the goal is to educate users about their values and advocate for a better facility over the long run; this way owners can help contractors and designers to identify better solutions. This article aims to inform the healthcare community about tools and techniques commonly used during the TVD process and how they can be used to educate and support project participants in developing better solutions to meet their needs now as well as in the future.
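The cost relationships in TVD can be sketched as a simple iteration. All dollar figures and the per-cycle cost-reduction rate below are illustrative assumptions; real TVD cycles involve stakeholder negotiation, not a fixed percentage.

```python
def tvd_iteration(allowable_cost, expected_cost, profit,
                  reduction_per_cycle=0.03, max_cycles=20):
    """Sketch of the TVD cost logic: the client sets an allowable cost; the
    target cost is what the team must reach after setting aside its profit;
    rapid suggest/analyze/implement cycles shrink the expected cost until
    it fits under the target."""
    target_cost = allowable_cost - profit
    cycles = 0
    while expected_cost > target_cost and cycles < max_cycles:
        expected_cost *= (1 - reduction_per_cycle)  # one improvement cycle
        cycles += 1
    return expected_cost, target_cost, cycles

expected, target, cycles = tvd_iteration(allowable_cost=10.0e6,
                                         expected_cost=10.5e6, profit=0.8e6)
print(expected <= target, cycles)
```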
[Quality assurance in intensive care: the situation in Switzerland].
Frutiger, A
1999-10-30
The movement for quality in medicine is starting to take on the dimensions of a crusade. Quite logically it has also reached the intensive care community. Due to their complex multidisciplinary functioning and because of the high costs involved, ICUs are model services reflecting the overall situation in our hospitals. The situation of Swiss intensive care is particularly interesting, because for over 25 years standards for design and staffing of Swiss ICUs have been in effect and were enforced via onsite visits by the Swiss Society of Intensive Care without government involvement. Swiss intensive care thus defined its structures long before the word "accreditation" had even been used in this context. While intensive care in Switzerland is practised in clearly defined, well equipped and adequately staffed units, much less is known about process quality and outcomes of these services. Statistics on admissions, length of stay and length of mechanical ventilation, as well as severity data based on a simple classification system, are collected nationwide and allow some limited insight into the overall process of care. Results of intensive care are not systematically assessed. In response to the constant threat of cost containment, Swiss ICUs should increasingly focus on process quality and results, while maintaining their existing good structures.
A Biomimetic-Computational Approach to Optimizing the Quantum Efficiency of Photovoltaics
NASA Astrophysics Data System (ADS)
Perez, Lisa M.; Holzenburg, Andreas
The most advanced low-cost organic photovoltaic cells have a quantum efficiency of 10%. This is in stark contrast to plant/bacterial light-harvesting systems, which offer quantum efficiencies close to unity. Of particular interest is the highly effective quantum coherence-enabled energy transfer (Fig. 1). Noting that quantum coherence is promoted by charged residues and local dielectrics, classical atomistic simulations and time-dependent density functional theory (DFT) are used to identify charge/dielectric patterns and electronic coupling at exactly defined energy transfer interfaces. The calculations make use of structural information obtained on photosynthetic protein-pigment complexes while still in the native membrane, making it possible to establish a link between supramolecular organization and quantum coherence in terms of what length scales enable fast energy transport and prevent quenching. Calculating energy transfer efficiencies between components based on different proximities will permit the search for patterns that enable defining material properties suitable for advanced photovoltaics.
Embedded CLIPS for SDI BM/C3 simulation and analysis
NASA Technical Reports Server (NTRS)
Gossage, Brett; Nanney, Van
1990-01-01
Nichols Research Corporation is developing the BM/C3 Requirements Analysis Tool (BRAT) for the U.S. Army Strategic Defense Command. BRAT uses embedded CLIPS/Ada to model the decision-making processes used by the human commander of a defense system. Embedding CLIPS/Ada in BRAT allows the user to explore the role of the human in Command and Control (C2) and the use of expert systems for automated C2. BRAT models assert facts about the current state of the system, the simulated scenario, and threat information into CLIPS/Ada. A user-defined rule set describes the decision criteria for the commander. We have extended CLIPS/Ada with user-defined functions that allow the firing of a rule to invoke a system action such as weapons release or a change in strategy. The use of embedded CLIPS/Ada will provide a powerful modeling tool for our customer at minimal cost.
Zhang, Shuang-Yuan; Guan, Guijian; Jiang, Shan; Guo, Hongchen; Xia, Jing; Regulacio, Michelle D; Wu, Mingda; Shah, Kwok Wei; Dong, Zhili; Zhang, Jie; Han, Ming-Yong
2015-09-30
Throughout history, earth-abundant copper has been incorporated into textiles, and it still caters to various needs in modern society. In this paper, we present a two-step copper metallization strategy to realize sequentially nondiffusive copper(II) patterning and rapid copper deposition on various textile materials, including cotton, polyester, nylon, and their mixtures. A new, cost-effective formulation is designed to minimize copper pattern migration on textiles and to achieve user-defined copper patterns. The metallized copper is found to be very adhesive and stable against washing and oxidation. Furthermore, the copper-metallized textile exhibits excellent electrical conductivity, ~3 times better than that of stainless steel, and also inhibits the growth of bacteria effectively. This new copper metallization approach holds great promise as a commercially viable method to metallize an insulating textile, opening up research avenues for wearable electronics and functional garments.
Structural tailoring of advanced turboprops
NASA Technical Reports Server (NTRS)
Brown, K. W.; Hopkins, Dale A.
1988-01-01
The Structural Tailoring of Advanced Turboprops (STAT) computer program was developed to perform numerical optimization on highly swept propfan blades. The optimization procedure seeks to minimize an objective function defined as either (1) the direct operating cost of a full-scale blade or (2) the aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. The STAT analysis system includes an aerodynamic efficiency evaluation, a finite element stress and vibration analysis, an acoustic analysis, a flutter analysis, and a once-per-revolution forced response life prediction capability. STAT includes all relevant propfan design constraints.
NASA Technical Reports Server (NTRS)
Chie, C. M.; White, M. A.; Lindsey, W. C.; Davarian, F.; Dixon, R. C.
1984-01-01
Functional requirements and specifications are defined for an autonomous integrated receive system (AIRS) to be used as an improvement in the current tracking and data relay satellite system (TDRSS), and as a receiving system in the future tracking and data acquisition system (TDAS). The AIRS provides improved acquisition, tracking, bit error rate (BER), RFI mitigation techniques, and data operations performance compared to the current TDRSS ground segment receive system. A computer model of the AIRS is used to provide simulation results predicting the performance of AIRS. Cost and technology assessments are included.
Large Field Visualization with Demand-Driven Calculation
NASA Technical Reports Server (NTRS)
Moran, Patrick J.; Henze, Chris
1999-01-01
We present a system designed for the interactive definition and visualization of fields derived from large data sets: the Demand-Driven Visualizer (DDV). The system allows the user to write arbitrary expressions to define new fields, and then apply a variety of visualization techniques to the result. Expressions can include differential operators and numerous other built-in functions, all of which are evaluated at specific field locations completely on demand. The payoff of following a demand-driven design philosophy throughout becomes particularly evident when working with large time-series data, where the costs of eager evaluation alternatives can be prohibitive.
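As a toy illustration of the demand-driven idea described above, the Python sketch below (with invented field names; DDV's actual expression language and operators are not reproduced here) defines derived fields that compute nothing until a specific location is requested, and memoize per-point results so repeated queries cost nothing:

```python
class Field:
    """A field evaluated lazily, point by point, with per-point memoization.
    Composing fields builds an expression; no values are computed until a
    location is actually queried."""
    def __init__(self, fn):
        self.fn = fn
        self.evaluations = 0   # how many distinct points were actually computed
        self._cache = {}
    def __call__(self, point):
        if point not in self._cache:
            self.evaluations += 1
            self._cache[point] = self.fn(point)
        return self._cache[point]
    def __add__(self, other):
        # Defining a derived field evaluates nothing; it only records the recipe.
        return Field(lambda p: self(p) + other(p))

# Hypothetical base fields for illustration.
pressure = Field(lambda p: p[0] * p[1])
offset = Field(lambda p: p[0] + 10.0)
derived = pressure + offset        # defines the field; computes nothing yet
print(derived((2.0, 3.0)))         # 18.0 -- inputs evaluated only now, on demand
derived((2.0, 3.0))                # cached: the inputs are not re-evaluated
print(pressure.evaluations)        # 1
```

The contrast with eager evaluation is that a whole-grid precomputation would pay for every point of every intermediate field, whereas here cost is proportional to the points actually visualized.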
Structural Tailoring of Advanced Turboprops (STAT)
NASA Technical Reports Server (NTRS)
Brown, Kenneth W.
1988-01-01
This interim report describes the progress achieved in the Structural Tailoring of Advanced Turboprops (STAT) program, which was developed to perform numerical optimizations on highly swept propfan blades. The optimization procedure seeks to minimize an objective function, defined as either direct operating cost or aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. This report provides a detailed description of the input, optimization procedures, approximate analyses and refined analyses, as well as validation test cases for the STAT program. In addition, conclusions and recommendations are summarized.
Space station systems technology study (add-on task). Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1985-01-01
System concepts were characterized in order to define cost versus benefits for autonomous functional control and for controls and displays for OMV, OTV, and spacecraft servicing and operation. The attitude control topic focused on characterizing the Space Station attitude control problem through simulation of control system responses to structural disturbances. The first two topics, mentioned above, focused on specific technology items that require advancement in order to support an early 1990s initial launch of a Space Station, while the attitude control study was an exploration of the capability of conventional controller techniques.
26 CFR 1.925(b)-1T - Temporary regulations; marginal costing rules.
Code of Federal Regulations, 2010 CFR
2010-04-01
... rules—(1) In general. Marginal costing is a method under which only direct production costs of producing... combined taxable income of the FSC and its related supplier under section 925(a)(2). The costs to be taken into account are the related supplier's direct material and labor costs (as defined in § 1.471-11(b)(2...
Cost-aware request routing in multi-geography cloud data centres using software-defined networking
NASA Astrophysics Data System (ADS)
Yuan, Haitao; Bi, Jing; Li, Bo Hu; Tan, Wei
2017-03-01
Current geographically distributed cloud data centres (CDCs) incur enormous energy and bandwidth costs in providing multiple cloud applications to users around the world. Previous studies focus only on energy cost minimisation in distributed CDCs. However, a CDC provider needs to deliver massive amounts of data between users and distributed CDCs through internet service providers (ISPs). The geographical diversity of bandwidth and energy costs raises the highly challenging problem of how to minimise the total cost of a CDC provider. Using recently emerged software-defined networking, we study the total cost minimisation problem for a CDC provider by exploiting the geographical diversity of energy and bandwidth costs. We formulate the total cost minimisation problem as a mixed integer non-linear program (MINLP). Then, we develop heuristic algorithms to solve the problem and to provide cost-aware request routing for joint optimisation of the selection of ISPs and the number of servers in distributed CDCs. In addition, to tackle the dynamic workload in distributed CDCs, this article proposes a regression-based workload prediction method to obtain future incoming workload. Finally, this work evaluates the cost-aware request routing by trace-driven simulation and compares it with existing approaches to demonstrate its effectiveness.
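The flavour of cost-aware routing can be sketched with a greedy baseline: serve each unit of demand through the (CDC, ISP) option with the lowest combined energy-plus-bandwidth cost per request that still has capacity. The names, prices, and capacities below are invented, and the paper formulates the real problem jointly as an MINLP solved by heuristics; greedy is only an illustrative lower bar:

```python
def route_requests(demand, options):
    """Greedy baseline: sort (CDC, ISP) options by combined per-request cost
    (energy + bandwidth, $/request) and fill each up to capacity in turn.
    Returns the routing plan and its total cost."""
    options = sorted(options, key=lambda o: o["energy"] + o["bandwidth"])
    plan, total = [], 0.0
    for o in options:
        if demand <= 0:
            break
        served = min(demand, o["capacity"])
        if served > 0:
            plan.append((o["name"], served))
            total += served * (o["energy"] + o["bandwidth"])
            demand -= served
    return plan, total

# Hypothetical options: two CDC/ISP pairs with different cost structures.
opts = [{"name": "cdcA-isp1", "energy": 0.01, "bandwidth": 0.02, "capacity": 5},
        {"name": "cdcB-isp2", "energy": 0.02, "bandwidth": 0.02, "capacity": 10}]
plan, total = route_requests(8, opts)
print(plan)   # [('cdcA-isp1', 5), ('cdcB-isp2', 3)]
```

A real formulation would additionally couple the routing choice to the number of active servers per CDC, which is what makes the problem non-linear and integer-constrained.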
Determining Functional Reliability of Pyrotechnic Mechanical Devices
NASA Technical Reports Server (NTRS)
Bement, Laurence J.; Multhaup, Herbert A.
1997-01-01
This paper describes a new approach for evaluating mechanical performance and predicting the mechanical functional reliability of pyrotechnic devices. Not included are other possible failure modes, such as the initiation of the pyrotechnic energy source. The requirement of hundreds or thousands of consecutive, successful tests on identical components for reliability prediction, under the generally accepted go/no-go statistical approach, routinely ignores the physics of failure. The approach described in this paper begins with measuring, understanding, and controlling mechanical performance variables. Then, the energy required to accomplish the function is compared to that delivered by the pyrotechnic energy source to determine the mechanical functional margin. Finally, the data collected in establishing functional margin are analyzed to predict mechanical functional reliability using small-sample statistics. A careful application of this approach can provide considerable cost improvements and understanding over go/no-go statistics. Performance and the effects of variables can be defined, and reliability predictions can be made by evaluating 20 or fewer units. The application of this approach to a pin puller used on a successful NASA mission is provided as an example.
ERIC Educational Resources Information Center
Macy, Barry A.; Mirvis, Philip H.
1982-01-01
A standardized methodology for identifying, defining, and measuring work behavior and performance rather than production, and a methodology that estimates the costs and benefits of work innovation are presented for assessing organizational effectiveness and program costs versus benefits in organizational change programs. Factors in a cost-benefit…
A Cost Element Structure for Defense Training. Final Report.
ERIC Educational Resources Information Center
Knapp, Mark I.; Orlansky, Jesse
This paper identifies, structures, and defines a list of cost elements that is intended to describe fully the life-cycle cost of any formal program, course, or device for individual training of Department of Defense personnel. It was developed to provide consistent, comparable, and credible evaluations of the cost-effectiveness of alternative…
Manual for Reducing Educational Unit Costs in Latin American Countries.
ERIC Educational Resources Information Center
Centro Multinacional de Investigacion Educativa, San Jose (Costa Rica).
Designed for educational administrators, this manual provides suggestions for reducing educational unit costs in Latin America without reducing the quality of the education. Chapter one defines unit cost concepts and compares the costs of the Latin American countries. Chapter two deals with the different policies which could affect the principal…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-09
... Agency does not have a general formula that is used for the life cycle cost analysis. The life cycle cost... analysis is defined in 7 CFR 3560.11. The Agency reviews the results of the life cycle cost analysis for... cost analysis. The Agency agrees that the reserve accounts based on a percentage can be underfunded and...
ERIC Educational Resources Information Center
Merrifield, John
2009-01-01
Studies of existing best practices cannot determine whether the current "best" schooling practices could be even better, less costly, or more effective and/or improve at a faster rate, but we can discover a cost effective menu of schooling options and each item's minimum cost through market accountability experiments. This paper describes…
A kernel adaptive algorithm for quaternion-valued inputs.
Paul, Thomas K; Ogunfunmi, Tokunbo
2015-10-01
The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefits of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations.
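The underlying kernel LMS recursion is easiest to see in the real-valued scalar case, of which Quat-KLMS is the quaternion generalization. In the sketch below (a simplification, not the paper's quaternion algorithm: no HR calculus, no widely linear augmentation), the filter is a growing Gaussian-kernel expansion and each prediction error adds a new center with coefficient eta times the error:

```python
import math

def train_klms(samples, targets, eta=0.5, sigma=1.0):
    """Real-valued kernel LMS. The filter output is a kernel expansion over
    all past inputs; each training step appends the current input as a new
    center weighted by eta * (instantaneous prediction error)."""
    centers, alphas = [], []
    def predict(x):
        return sum(a * math.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))
                   for a, c in zip(alphas, centers))
    errors = []
    for x, d in zip(samples, targets):
        e = d - predict(x)       # instantaneous error before the update
        centers.append(x)        # the expansion grows by one term per sample
        alphas.append(eta * e)
        errors.append(e)
    return predict, errors

# One-sample illustration: after a single update on (x=1.0, d=2.0),
# the prediction at x=1.0 moves from 0 to eta * d = 1.0.
predict, errors = train_klms([1.0], [2.0])
print(errors[0], predict(1.0))
```

The quaternion version replaces the scalar error and coefficients with quaternion-valued quantities, which is where the modified HR calculus is needed to differentiate the cost function.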
Au, Jennifer; Rudmik, Luke
2013-09-01
The time-driven activity-based costing (TD-ABC) method is a novel approach to quantifying the costs of a complex system. The aim of this study was to apply the TD-ABC technique to define the overall cost of routine outpatient endoscopic sinus surgery (ESS) from the perspective of the Canadian government payer. All monetary values are in Canadian dollars as of December 2012. Costs were obtained by contacting staff unions, reviewing purchasing databases, and consulting provincial physician fee schedules. Practical capacity time values were collected from the College and Association of Registered Nurses of Alberta. Capacity cost rates ($/min) were calculated for all staff, capital equipment, and hospital space. The overall cost of routine outpatient ESS was $3510.31. The cost per ESS case for each clinical pathway encounter was as follows: preoperative holding ($49.19); intraoperative ($3296.60); sterilization ($90.20); postanesthesia care unit ($28.64); and postoperative day ward ($45.68). The three major cost drivers were physician fees, disposable equipment, and nursing costs. The intraoperative phase contributed 94.5% of the overall cost. This study applied the TD-ABC method to evaluate the cost of outpatient ESS from the perspective of the Canadian government payer and defined the overall cost as $3510.31 per case. © 2013 ARS-AAOA, LLC.
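The arithmetic at the core of TD-ABC is simple: each resource gets a capacity cost rate ($/min) equal to the cost of the capacity supplied divided by its practical capacity in minutes, and an encounter's cost is the sum of rate times minutes used across resources. The sketch below uses invented numbers, not the paper's Alberta data:

```python
def capacity_cost_rate(capacity_cost, practical_capacity_min):
    """Capacity cost rate ($/min): cost of the capacity supplied divided
    by its practical capacity in minutes."""
    return capacity_cost / practical_capacity_min

def encounter_cost(resource_uses):
    """Cost of one clinical-pathway encounter: for each resource
    (period cost, practical capacity in min, minutes used), multiply its
    capacity cost rate by the minutes consumed, then sum."""
    return sum(capacity_cost_rate(cost, cap_min) * used_min
               for cost, cap_min, used_min in resource_uses)

# Hypothetical, illustrative numbers:
# nurse: $6000/period over 6000 practical min -> $1/min, used 30 min -> $30
# OR suite: $12000/period over 4000 practical min -> $3/min, used 20 min -> $60
print(encounter_cost([(6000.0, 6000.0, 30.0), (12000.0, 4000.0, 20.0)]))  # 90.0
```

Summing encounter costs over the pathway stages (preoperative holding, intraoperative, sterilization, and so on) yields the per-case total reported in the study.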
Rolling scheduling of electric power system with wind power based on improved NNIA algorithm
NASA Astrophysics Data System (ADS)
Xu, Q. S.; Luo, C. J.; Yang, D. J.; Fan, Y. H.; Sang, Z. X.; Lei, H.
2017-11-01
This paper puts forth a rolling modification strategy for day-ahead scheduling of an electric power system with wind power, which takes the operation cost increment of units and the curtailed wind power of the grid as dual modification functions. Additionally, an improved Nondominated Neighbor Immune Algorithm (NNIA) is proposed for its solution. The proposed rolling scheduling model further reduces the system's operation cost in the intra-day generation process, enhances the system's capacity to accommodate wind power, and modifies the power flow of key transmission sections in a rolling manner to satisfy the security constraints of the power grid. The improved NNIA algorithm defines an antibody preference relation model based on the equal incremental rate, regulation deviation constraints, and the maximum and minimum technical outputs of units. This model noticeably guides the direction of antibody evolution, significantly speeds up convergence to the final solution, and enhances local search capability.
Design and Synthesis of Multigraft Copolymer Thermoplastic Elastomers: Superelastomers
Wang, Huiqun; Lu, Wei; Wang, Weiyu; ...
2017-09-28
Thermoplastic elastomers (TPEs) have been widely studied because of their recyclability, good processibility, low production cost, and unique performance. The building of graft-type architectures can greatly improve the mechanical properties of TPEs. This review focuses on advances in different approaches to synthesizing multigraft copolymer TPEs. Anionic polymerization techniques allow for the synthesis of well-defined macromolecular structures and compositions, with great control over the molecular weight, polydispersity, branch spacing, number of branch points, and branch point functionality. Progress in emulsion polymerization offers potential approaches to commercializing these types of materials at low production cost via simple operations. Moreover, the use of multigraft architectures provides a solution to the limited elongational properties of all-acrylic TPEs, which can greatly expand their potential application range. The combination of different polymerization techniques, the introduction of new chemical compositions, and the incorporation of sustainable sources are expected to be further investigated in this area in the coming years.
Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch
NASA Astrophysics Data System (ADS)
Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.
2014-10-01
The economic dispatch (ED) problem is an essential optimization task in power generation systems. It is defined as the process of allocating the real power outputs of generation units to meet the required load demand so that their total operating cost is minimized while all physical and operational constraints are satisfied. This paper introduces a novel optimization technique named swarm-based mean-variance mapping optimization (MVMOS), an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, comprising 3, 13, and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently applied to solving economic dispatch.
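For the constraint-free quadratic case, the ED problem has a classical closed-form structure: at the optimum all units run at the same incremental cost lambda, so one can bisect on lambda until the outputs sum to the demand. The sketch below shows only this textbook baseline (generator limits and network constraints ignored), not the MVMOS metaheuristic itself:

```python
def economic_dispatch(units, demand):
    """Equal-incremental-cost dispatch for quadratic cost curves
    C_i(P) = a_i + b_i*P + c_i*P^2, with units given as (b_i, c_i) pairs
    (a_i does not affect the optimum). At the optimum, dC_i/dP = b_i + 2*c_i*P
    equals a common lambda, so P_i = (lambda - b_i) / (2*c_i); bisect on
    lambda until total output meets demand. Limits are ignored for brevity."""
    lo, hi = 0.0, 1e4
    for _ in range(200):
        lam = (lo + hi) / 2.0
        total = sum((lam - b) / (2.0 * c) for b, c in units)
        if total < demand:
            lo = lam
        else:
            hi = lam
    return [(lam - b) / (2.0 * c) for b, c in units]

# Two hypothetical units; the cheaper unit (b=2) carries more load.
p1, p2 = economic_dispatch([(2.0, 0.01), (3.0, 0.01)], 150.0)
print(round(p1, 3), round(p2, 3))   # 100.0 50.0
```

Metaheuristics such as MVMOS earn their keep once valve-point effects, prohibited zones, or ramp constraints make the cost surface non-convex and this closed-form condition no longer applies.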
Bottom-up production of meta-atoms for optical magnetism in visible and NIR light
NASA Astrophysics Data System (ADS)
Barois, Philippe; Ponsinet, Virginie; Baron, Alexandre; Richetti, Philippe
2018-02-01
Many unusual optical properties of metamaterials arise from the magnetic response of engineered structures of sub-wavelength size (meta-atoms) exposed to light. The top-down approach, whereby engineered nanostructures of well-defined morphology are engraved on a surface, has proved successful for the generation of strong optical magnetism. However, it faces the limitations of high cost and small active area in visible light, where nanometre resolution is needed. The bottom-up approach, whereby the fabrication of metamaterials of large volume or large area results from the combination of nanochemistry and self-assembly techniques, may constitute a cost-effective alternative. This approach nevertheless requires the large-scale production of functional building blocks (meta-atoms) bearing a strong magnetic optical response. We propose in this paper a few tracks that lead to the large-scale synthesis of magnetic metamaterials operating in visible or near-IR light.
Systematic Sensor Selection Strategy (S4) User Guide
NASA Technical Reports Server (NTRS)
Sowers, T. Shane
2012-01-01
This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight, and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinatorial optimization with a user-defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open-source turbofan engine simulation to demonstrate its application.
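The shape of combinatorial selection against a user-defined merit function can be sketched with a greedy down-select: repeatedly add the candidate sensor whose inclusion most improves the merit, stopping when no addition helps. The sensor names, fault coverage sets, and merit weighting below are invented, and S4's actual search architecture is more general than greedy:

```python
def select_sensors(candidates, merit):
    """Greedy suite construction: at each step add the candidate that
    maximizes a user-defined merit function over the resulting suite;
    stop when no remaining candidate improves the merit."""
    chosen, remaining = [], sorted(candidates)
    best = merit(chosen)
    while remaining:
        score, pick = max((merit(chosen + [s]), s) for s in remaining)
        if score <= best:
            break
        chosen.append(pick)
        remaining.remove(pick)
        best = score
    return chosen

# Hypothetical fault coverage and unit costs for three candidate sensors.
coverage = {"s1": {"f1", "f2"}, "s2": {"f2"}, "s3": {"f3"}}
cost = {"s1": 1.0, "s2": 1.0, "s3": 1.0}

def merit(suite):
    """Diagnostic coverage minus a small cost penalty (illustrative weighting)."""
    covered = set().union(*(coverage[s] for s in suite)) if suite else set()
    return len(covered) - 0.1 * sum(cost[s] for s in suite)

print(select_sensors(coverage, merit))   # ['s1', 's3']
```

Here s2 is never selected: its only fault is already covered by s1, so adding it would lower the merit by its cost penalty.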
Inducing morphological changes in lipid bilayer membranes with microfabricated substrates
NASA Astrophysics Data System (ADS)
Liu, Fangjie; Collins, Liam F.; Ashkar, Rana; Heberle, Frederick A.; Srijanto, Bernadeta R.; Collier, C. Patrick
2016-11-01
Lateral organization of lipids and proteins into distinct domains and anchoring to a cytoskeleton are two important strategies employed by biological membranes to carry out many cellular functions. However, these interactions are difficult to emulate with model systems. Here we use the physical architecture of substrates consisting of arrays of micropillars to systematically control the behavior of supported lipid bilayers - an important step in engineering model lipid membrane systems with well-defined functionalities. Competition between attractive interactions of supported lipid bilayers with the underlying substrate versus the energy cost associated with membrane bending at pillar edges can be systematically investigated as functions of pillar height and pitch, chemical functionalization of the microstructured substrate, and the type of unilamellar vesicles used for assembling the supported bilayer. Confocal fluorescent imaging and AFM measurements highlight correlations that exist between topological and mechanical properties of lipid bilayers and lateral lipid mobility in these confined environments. This study provides a baseline for future investigations into lipid domain reorganization on structured solid surfaces and scaffolds for cell growth.
24 CFR 700.115 - Program costs.
Code of Federal Regulations, 2010 CFR
2010-04-01
... nursing services, and other institutional forms of service, care or support; (vii) Occupational therapy and vocational rehabilitation services; or (viii) Other items defined as unallowable costs elsewhere...
Loganathan, Tharani; Ng, Chiu-Wan; Lee, Way-Seah; Hutubessy, Raymond C W; Verguet, Stéphane; Jit, Mark
2018-03-01
Cost-effectiveness thresholds (CETs) based on the Commission on Macroeconomics and Health (CMH) are extensively used in low- and middle-income countries (LMICs) lacking locally defined CETs. These thresholds were originally intended for global and regional prioritization and do not reflect local context or affordability at the national level, so their value for informing resource allocation decisions has been questioned. Using these thresholds, rotavirus vaccines are widely regarded as cost-effective interventions in LMICs. However, high vaccine prices remain a barrier to vaccine introduction. This study aims to evaluate the cost-effectiveness, affordability, and threshold price of universal rotavirus vaccination at various CETs in Malaysia. The cost-effectiveness of Rotarix and RotaTeq was evaluated using a multi-cohort model. The Pan American Health Organization Revolving Fund's vaccine prices were used as the tender price, while the recommended retail price for Malaysia was used as the market price. We estimate threshold prices, defined as the prices at which vaccination becomes cost-effective, at various CETs reflecting economic theories of human capital, societal willingness-to-pay, and marginal productivity. A budget impact analysis compared programmatic costs with the healthcare budget. At tender prices, both vaccines were cost-saving. At market prices, cost-effectiveness differed with the thresholds used. At market price, using CMH thresholds, Rotarix programmes were cost-effective and RotaTeq programmes were not from the healthcare provider's perspective, while both vaccines were cost-effective from the societal perspective. Using other CETs, both vaccines were not cost-effective at market price from either the healthcare provider's or the societal perspective. At tender and cost-effective prices, rotavirus vaccination cost approximately 1% and 3% of the public health budget, respectively.
Using locally defined thresholds, rotavirus vaccination is cost-effective at vaccine prices in line with international tenders, but not at market prices. Thresholds representing marginal productivity are likely to be lower than those reflecting human capital and individual preference measures, and may be useful in determining affordable vaccine prices. © The Author 2017. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Lima, Fabiano F.; Camillo, Carlos A.; Gobbo, Luis A.; Trevisan, Iara B.; Nascimento, Wesley B. B. M.; Silva, Bruna S. A.; Lima, Manoel C. S.; Ramos, Dionei; Ramos, Ercy M. C.
2018-01-01
The objectives of the study were to compare the effects of resistance training using either low-cost, portable elastic tubing or conventional weight machines on muscle force, functional exercise capacity, and health-related quality of life (HRQOL) in middle-aged to older healthy adults. In this clinical trial, twenty-nine middle-aged to older healthy adults were randomly assigned to one of three groups defined a priori: resistance training with elastic tubing (ETG; n = 10), conventional resistance training with weight machines (CTG; n = 9), and a control group (CG; n = 10). Both the ETG and CTG followed a 12-week resistance training program (3x/week, upper and lower limbs). Muscle force, functional exercise capacity, and HRQOL were evaluated at baseline, 6, and 12 weeks. The CG underwent the three evaluations with no formal intervention or activity counseling provided. The ETG and CTG similarly and significantly increased muscle force (Δ16-44% in ETG and Δ25-46% in CTG, p < 0.05 for both) and functional exercise capacity (ETG Δ4 ± 4% and CTG Δ6 ± 8%; p < 0.05 for both). Improvement in the "pain" domain of HRQOL could only be observed in the CTG (Δ21 ± 26%, p = 0.037). The CG showed no statistical improvement in any of the variables investigated. Resistance training using elastic tubing (a low-cost and portable tool) and conventional resistance training using weight machines promoted similar positive effects on peripheral muscle force and functional exercise capacity in middle-aged to older healthy adults. Key points: There is compelling evidence linking resistance training to health. Elastic resistance training improves the functionality of middle-aged to older healthy adults. Elastic resistance training was shown to be as effective as conventional resistance training in middle-aged to older healthy adults. PMID:29535589
Backward assembly planning with DFA analysis
NASA Technical Reports Server (NTRS)
Lee, Sukhan (Inventor)
1992-01-01
An assembly planning system that operates based on a recursive decomposition of an assembly into subassemblies is presented. The planning system analyzes assembly cost in terms of stability, directionality, and manipulability to guide the generation of preferred assembly plans. The planning in this system incorporates the special processes, such as cleaning, testing, and labeling, that must occur during assembly. Additionally, the planning handles nonreversible, as well as reversible, assembly tasks through backward assembly planning. To improve planning efficiency, the system avoids the analysis of decompositions that do not correspond to feasible assembly tasks. This is achieved by grouping and merging those parts that cannot be decomposed at the current stage of backward assembly planning due to the requirement of special processes and the constraint of interconnection feasibility. The invention includes methods of evaluating assembly cost in terms of the number of fixtures (or holding devices) and reorientations required for assembly, through the analysis of stability, directionality, and manipulability. All these factors are used in defining cost and heuristic functions for an AO* search for an optimal plan.
The New Meteor Radar at Penn State: Design and First Observations
NASA Technical Reports Server (NTRS)
Urbina, J.; Seal, R.; Dyrud, L.
2011-01-01
In an effort to provide new and improved meteor radar sensing capabilities, Penn State has been developing advanced instruments and technologies for future meteor radars, with the primary objectives of making such instruments more capable and more cost-effective in order to study the basic properties of the global meteor flux, such as average mass, velocity, and chemical composition. Using low-cost field-programmable gate arrays (FPGAs) combined with open-source software tools, we describe a design methodology enabling one to develop state-of-the-art radar instrumentation by building a generalized instrumentation core that can be customized using specialized output-stage hardware. Furthermore, using object-oriented programming (OOP) techniques and open-source tools, we illustrate a technique to provide a cost-effective, generalized software framework in which an instrument's functionality is uniquely defined through a customizable interface implemented by the designer. The new instrument is intended to provide instantaneous profiles of atmospheric parameters and climatology on a daily basis throughout the year. An overview of the instrument design concepts and some of the emerging technologies developed for this meteor radar are presented.
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem over the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
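The abstract does not give the cascading procedure in detail, but the quicksort-variation idea behind a single top-k step can be sketched as follows (a minimal illustration, all names hypothetical): partition around a random pivot and recurse only into the part that still contains the k-th boundary, so elements beyond the top k are never fully sorted.

```python
import random

def top_k_sorted(points, k):
    """Return the k largest values in descending order using a
    quicksort-style partition that recurses only into the side
    containing the k-th boundary (a partial-quicksort sketch)."""
    if k <= 0:
        return []

    def select(items, k):
        # items is unordered; return its k largest values, sorted descending
        if len(items) <= k:
            return sorted(items, reverse=True)
        pivot = random.choice(items)
        greater = [x for x in items if x > pivot]
        equal = [x for x in items if x == pivot]
        lesser = [x for x in items if x < pivot]
        if k <= len(greater):
            return select(greater, k)          # boundary lies above the pivot
        top = sorted(greater, reverse=True) + equal
        if k <= len(top):
            return top[:k]                     # boundary lies inside the pivot run
        return top + select(lesser, k - len(top))  # need more from below the pivot

    return select(list(points), k)
```

Only the partitions that straddle the boundary are sorted, which is what makes a top-k step cheaper than a full sort when k is small relative to the input.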
NASA Technical Reports Server (NTRS)
Cannon, I.; Balcer, S.; Cochran, M.; Klop, J.; Peterson, S.
1991-01-01
An Integrated Control and Health Monitoring (ICHM) system was conceived for use on a 20 Klb thrust baseline Orbit Transfer Vehicle (OTV) engine. Considered for space used, the ICHM was defined for reusability requirements for an OTV engine service free life of 20 missions, with 100 starts and a total engine operational time of 4 hours. Functions were derived by flowing down requirements from NASA guidelines, previous OTV engine or ICHM documents, and related contracts. The elements of an ICHM were identified and listed, and these elements were described in sufficient detail to allow estimation of their technology readiness levels. These elements were assessed in terms of technology readiness level, and supporting rationale for these assessments presented. The remaining cost for development of a minimal ICHM system to technology readiness level 6 was estimated. The estimates are within an accuracy range of minus/plus 20 percent. The cost estimates cover what is needed to prepare an ICHM system for use on a focussed testbed for an expander cycle engine, excluding support to the actual test firings.
47 CFR 27.1164 - The cost-sharing formula.
Code of Federal Regulations, 2014 CFR
2014-10-01
... systems; Heating Ventilation and Air Conditioning (HVAC) (if required); alternate transport equipment; and.../path survey); installation; systems testing; FCC filing costs; site acquisition and civil works; zoning... defined as the actual costs associated with providing a replacement system, such as equipment and...
Performance of statistical models to predict mental health and substance abuse cost.
Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K
2006-10-26
Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model for the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best.
This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
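The three evaluation metrics named above are simple to state; the functions below are a minimal sketch (not the study's code), with MAPE taken as mean absolute prediction error on the cost scale, as the study uses it.

```python
import math

def rmse(observed, predicted):
    """Root mean square error between observed and predicted costs."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

def mape(observed, predicted):
    """Mean absolute prediction error (on the cost scale, not a percentage)."""
    n = len(observed)
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / n

def predictive_ratio(observed, predicted):
    """Ratio of total predicted to total observed cost; 1.0 is ideal.
    The study evaluated this within deciles of predicted cost."""
    return sum(predicted) / sum(observed)
```

RMSE penalizes large errors more heavily than MAPE, which is why the two can rank models differently on long-tailed cost data.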
Defined contribution: a part of our future.
Baugh, Reginald F.
2003-01-01
Rising employer health care costs and consumer backlash against managed care are trends fostering the development of defined contribution plans. Defined contribution plans limit employer responsibility to a fixed financial contribution rather than a benefit program and dramatically increase consumer responsibility for health care decision making. Possible outcomes of widespread adoption of defined contribution plans are presented. PMID:12934869
Cost function approach for estimating derived demand for composite wood products
T. C. Marcin
1991-01-01
A cost function approach was examined for using the concept of duality between production and input factor demands. A translog cost function was used to represent residential construction costs and derived conditional factor demand equations. Alternative models were derived from the translog cost function by imposing parameter restrictions.
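The abstract does not reproduce the functional form; for reference, a standard translog cost function in input prices \(p_i\) and output \(y\) is

```latex
\ln C(p, y) = \alpha_0 + \sum_i \alpha_i \ln p_i
  + \tfrac{1}{2} \sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j
  + \alpha_y \ln y ,
```

and by Shephard's lemma the conditional factor demands appear as cost shares \(s_i = \partial \ln C / \partial \ln p_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j\). Parameter restrictions of the kind mentioned in the abstract typically impose symmetry (\(\gamma_{ij} = \gamma_{ji}\)) and linear homogeneity in prices (\(\sum_i \alpha_i = 1\), \(\sum_j \gamma_{ij} = 0\)).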
Wingfield, Tom; Boccia, Delia; Tovar, Marco; Gavino, Arquímedes; Zevallos, Karine; Montoya, Rosario; Lönnroth, Knut; Evans, Carlton A.
2014-01-01
Background Even when tuberculosis (TB) treatment is free, hidden costs incurred by patients and their households (TB-affected households) may worsen poverty and health. Extreme TB-associated costs have been termed “catastrophic” but are poorly defined. We studied TB-affected households' hidden costs and their association with adverse TB outcome to create a clinically relevant definition of catastrophic costs. Methods and Findings From 26 October 2002 to 30 November 2009, TB patients (n = 876, 11% with multi-drug-resistant [MDR] TB) and healthy controls (n = 487) were recruited to a prospective cohort study in shantytowns in Lima, Peru. Patients were interviewed prior to and every 2–4 wk throughout treatment, recording direct (household expenses) and indirect (lost income) TB-related costs. Costs were expressed as a proportion of the household's annual income. In poorer households, costs were lower but constituted a higher proportion of the household's annual income: 27% (95% CI = 20%–43%) in the least-poor households versus 48% (95% CI = 36%–50%) in the poorest. Adverse TB outcome was defined as death, treatment abandonment or treatment failure during therapy, or recurrence within 2 y. 23% (166/725) of patients with a defined treatment outcome had an adverse outcome. Total costs ≥20% of household annual income was defined as catastrophic because this threshold was most strongly associated with adverse TB outcome. Catastrophic costs were incurred by 345 households (39%). Having MDR TB was associated with a higher likelihood of incurring catastrophic costs (54% [95% CI = 43%–61%] versus 38% [95% CI = 34%–41%], p<0.003). Adverse outcome was independently associated with MDR TB (odds ratio [OR] = 8.4 [95% CI = 4.7–15], p<0.001), previous TB (OR = 2.1 [95% CI = 1.3–3.5], p = 0.005), days too unwell to work pre-treatment (OR = 1.01 [95% CI = 1.00–1.01], p = 0.02), and catastrophic costs (OR = 1.7 [95% CI = 1.1–2.6], p = 0.01).
The adjusted population attributable fraction of adverse outcomes explained by catastrophic costs was 18% (95% CI = 6.9%–28%), similar to that of MDR TB (20% [95% CI = 14%–25%]). Sensitivity analyses demonstrated that existing catastrophic costs thresholds (≥10% or ≥15% of household annual income) were not associated with adverse outcome in our setting. Study limitations included not measuring certain “dis-saving” variables (including selling household items) and gathering only 6 mo of costs-specific follow-up data for MDR TB patients. Conclusions Despite free TB care, having TB disease was expensive for impoverished TB patients in Peru. Incurring higher relative costs was associated with adverse TB outcome. The population attributable fraction indicated that catastrophic costs and MDR TB were associated with similar proportions of adverse outcomes. Thus TB is a socioeconomic as well as infectious problem, and TB control interventions should address both the economic and clinical aspects of this disease. Please see later in the article for the Editors' Summary PMID:25025331
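The study's threshold definition reduces to simple arithmetic; a minimal sketch (hypothetical function names, with costs and income in the same currency units):

```python
def total_cost_fraction(direct_costs, indirect_costs, annual_income):
    """Total TB-related costs (household expenses plus lost income)
    as a proportion of household annual income."""
    return (sum(direct_costs) + sum(indirect_costs)) / annual_income

def is_catastrophic(cost_fraction, threshold=0.20):
    """Classify costs as catastrophic at >=20% of annual income,
    the threshold the study found most strongly associated with
    adverse TB outcome."""
    return cost_fraction >= threshold
```

The sensitivity analyses cited above amount to re-running the same classification with `threshold=0.10` or `threshold=0.15`.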
Definition of a prospective payment system to reimburse emergency departments.
Levaggi, Rosella; Montefiori, Marcello
2013-10-11
Payers are increasingly turning to Prospective Payment Systems (PPSs) because they incentivize efficiency, but their application to emergency departments (EDs) is difficult because of the high level of uncertainty and variability in the cost of treating each patient.To the best of our knowledge, our work represents the first attempt at defining a PPS for this part of hospital activity. Data were specifically collected for this study and relate to 1011 patients who were triaged at an ED of a major Italian hospital, during 1 week in December 2010.The cost for each patient was analytically estimated by adding up several components: 1) physician and other staff costs that were imputed on the basis of the time each physician claimed to have spent treating the patient; 2) the cost for each test/treatment each patient actually underwent; 3) overhead costs, shared among patients using the time elapsed between first examination and discharge from the ED. The distribution of costs by triage code shows that, although the average cost increases across the four triage groups, the variance within each code is quite high. The maximum cost for a yellow code is €1074.7, compared with €680 for red, the most serious code. Using cluster analysis, the red code cluster is enveloped by yellow, and their costs are therefore indistinguishable, while green codes span all cost groups. This suggests that triage code alone is not a good proxy for the patient cost, and that other cost drivers need to be included. Crude triage codes cannot be used to define PPSs because they are not sufficiently correlated with costs and are characterized by large variances. However, if combined with other information, such as the number of laboratory and non-laboratory tests/examinations, it is possible to define cost groups that are sufficiently homogeneous to be reimbursed prospectively. 
This should discourage strategic behavior and allow the ED to break even or create profits, which can be reinvested to improve services. The study provides health policy administrators with a new and feasible tool to implement prospective payment for EDs, and improve planning and cost control.
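The three per-patient cost components described above simply sum; a minimal sketch of the costing rule (all rates and names hypothetical, overhead allocated in proportion to time between first examination and discharge):

```python
def patient_cost(staff_minutes, staff_rate_per_min, test_costs,
                 minutes_in_ed, overhead_rate_per_min):
    """Per-patient ED cost: staff time at an imputed rate, plus the
    cost of each test/treatment actually performed, plus a time-based
    share of overhead costs."""
    return (staff_minutes * staff_rate_per_min
            + sum(test_costs)
            + minutes_in_ed * overhead_rate_per_min)
```

Grouping patients by triage code plus test counts, as the authors propose, then means averaging this quantity within each candidate reimbursement group and checking its variance.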
NASA Technical Reports Server (NTRS)
Stretchberry, D. M.; Hein, G. F.
1972-01-01
The general concepts of costing, budgeting, benefit-cost ratio, and cost-effectiveness analysis are discussed. The three common methods of costing are presented. Budgeting distributions are discussed. The use of discounting procedures is outlined. The benefit-cost ratio and cost-effectiveness analysis are defined, and their current application to NASA planning is pointed out. Specific practices and techniques are discussed, and actual costing and budgeting procedures are outlined. The recommended method of calculating benefit-cost ratios is described. A standardized method of cost-effectiveness analysis and long-range planning are also discussed.
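Discounting and the benefit-cost ratio mentioned above can be made concrete with a short sketch (a generic illustration, not the report's recommended method):

```python
def present_value(cash_flows, rate):
    """Discount a stream of yearly cash flows to present value.
    cash_flows[0] occurs now, cash_flows[1] in one year, and so on."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def benefit_cost_ratio(benefits, costs, rate):
    """Ratio of discounted benefits to discounted costs;
    a ratio above 1.0 favors the project at the chosen discount rate."""
    return present_value(benefits, rate) / present_value(costs, rate)
```

Because both streams are discounted with the same rate, the ranking of alternatives can change as the rate changes, which is why the choice of discount rate matters in long-range planning.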
Communications network design and costing model technical manual
NASA Technical Reports Server (NTRS)
Logan, K. P.; Somes, S. S.; Clark, C. A.
1983-01-01
This computer model provides the capability for analyzing long-haul trunking networks comprising a set of user-defined cities, traffic conditions, and tariff rates. Networks may consist of all terrestrial connectivity, all satellite connectivity, or a combination of terrestrial and satellite connectivity. Network solutions provide the least-cost routes between all cities, the least-cost network routing configuration, and terrestrial and satellite service cost totals. The CNDC model allows analyses involving three specific FCC-approved tariffs, which are uniquely structured and representative of most existing service connectivity and pricing philosophies. User-defined tariffs that can be variations of these three tariffs are accepted as input to the model and allow considerable flexibility in network problem specification. The resulting model extends the domain of network analysis from traditional fixed link cost (distance-sensitive) problems to more complex problems involving combinations of distance and traffic-sensitive tariffs.
Adult age differences in task switching.
Kray, J; Lindenberger, U
2000-03-01
Age differences in 2 components of task-set switching speed were investigated in 118 adults aged 20 to 80 years using task-set homogeneous (e.g., AAAA ...) and task-set heterogeneous (e.g., AABBAABB ...) blocks. General switch costs were defined as latency differences between heterogeneous and homogeneous blocks, whereas specific switch costs were defined as differences between switch and nonswitch trials within heterogeneous blocks. Both types of costs generalized over verbal, figural, and numeric stimulus materials; were more highly correlated with fluid than with crystallized abilities; and were not eliminated after 6 sessions of practice, indicating that they reflect basic and domain-general aspects of cognitive control. Most important, age-associated increments in costs were significantly greater for general than for specific switch costs, suggesting that the ability to efficiently maintain and coordinate 2 alternating task sets in working memory instead of 1 is more negatively affected by advancing age than the ability to execute the task switch itself.
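The two cost definitions are latency differences and can be sketched directly (hypothetical reaction times in milliseconds):

```python
def general_switch_cost(hetero_mean_rt, homo_mean_rt):
    """Mean latency difference between heterogeneous (task-alternating)
    and homogeneous (single-task) blocks."""
    return hetero_mean_rt - homo_mean_rt

def specific_switch_cost(switch_trial_rts, nonswitch_trial_rts):
    """Mean latency difference between switch and nonswitch trials
    within heterogeneous blocks."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(switch_trial_rts) - mean(nonswitch_trial_rts)
```

The study's key contrast is that the first quantity (maintaining two task sets) grows more with age than the second (executing the switch itself).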
Public Choices, Private Costs: An Analysis of Spending and Achievement in Ohio Public Schools.
ERIC Educational Resources Information Center
Damask, James; Lawson, Robert
This report sets up a structure for examining the real costs of public education. It defines three approaches of gathering and reporting cost information: narrow (salaries and current expenditures, excluding capital outlays); generally accepted accounting principles (GAAP) (costs are recorded during the period in which they occur); and broad (all…
Science-Driven Innovation Can Reduce Wind Energy Costs by 50% by 2030
With technology innovations, the unsubsidized cost of wind energy could drop to 50% of current levels; innovations enabled by advances in science will impact the levelized cost of energy, defined as the total cost of installing and operating a project per kilowatt-hour of electricity generated.
Photovoltaic frequency–watt curve design for frequency regulation and fast contingency reserves
Johnson, Jay; Neely, Jason C.; Delhotal, Jarod J.; ...
2016-09-02
When renewable energy resources are installed in electricity grids, they typically increase generation variability and displace thermal generator control action and inertia. Grid operators combat these emerging challenges with advanced distributed energy resource (DER) functions to support frequency and provide voltage regulation and protection mechanisms. This paper focuses on providing frequency reserves using autonomous IEC TR 61850-90-7 pointwise frequency-watt (FW) functions that adjust DER active power as a function of measured grid frequency. The importance of incorporating FW functions into a fleet of photovoltaic (PV) systems is demonstrated in simulation. Effects of FW curve design, including curtailment, deadband, and droop, were analyzed against performance metrics using Latin hypercube sampling for 20%, 70%, and 120% PV penetration scenarios on the Hawaiian island of Lanai. To understand the financial implications of FW functions to utilities, a performance function was defined based on monetary costs attributable to curtailed PV production, load shedding, and generator wear. An optimization wrapper was then created to find the best FW function curve for each penetration level. It was found that in all cases, the utility would save money by implementing appropriate FW functions.
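A pointwise FW function of the kind described maps measured frequency to an active-power command. The sketch below assumes a simple over-frequency droop with a deadband; the parameter values are hypothetical, not those of the study or of IEC TR 61850-90-7:

```python
def frequency_watt(freq_hz, p_avail, f_nom=60.0, deadband_hz=0.036, droop=0.05):
    """Over-frequency droop sketch: inside the deadband, output full
    available power; above it, curtail linearly with the frequency
    deviation. droop is the per-unit frequency deviation (relative to
    nominal) that produces a 100% power reduction. All parameter
    values are illustrative assumptions."""
    over = freq_hz - (f_nom + deadband_hz)
    if over <= 0:
        return p_avail  # at or below the deadband: no curtailment
    reduction = over / (droop * f_nom)  # fraction of available power shed
    return max(0.0, p_avail * (1.0 - reduction))
```

The curve-design trade-off studied in the paper is visible here: a wider deadband curtails less energy in normal operation but responds later to a frequency excursion, while a steeper droop sheds power faster per hertz of deviation.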
Vanderstraeten, Barbara; Verstraete, Jan; De Croock, Roger; De Neve, Wilfried; Lievens, Yolande
2014-05-01
To determine the treatment cost and required reimbursement for a new hadron therapy facility, considering different technical solutions and financing methods. The 3 technical solutions analyzed are a carbon only (COC), proton only (POC), and combined (CC) center, each operating 2 treatment rooms and assumed to function at full capacity. A business model defines the required reimbursement and analyzes the financial implications of setting up a facility over time; activity-based costing (ABC) calculates the treatment costs per type of patient for a center in a steady state of operation. Both models compare a private, full-cost approach with public sponsoring, only taking into account operational costs. Yearly operational costs range from €10.0M (M = million) for a publicly sponsored POC to €24.8M for a CC with private financing. Disregarding inflation, the average treatment cost calculated with ABC (COC: €29,450; POC: €46,342; CC: €46,443 for private financing; respectively €16,059, €28,296, and €23,956 for public sponsoring) is slightly lower than the required reimbursement based on the business model (between €51,200 in a privately funded POC and €18,400 in COC with public sponsoring). Reimbursement for privately financed centers is very sensitive to a delay in commissioning and to the interest rate. Higher throughput and hypofractionation have a positive impact on the treatment costs. Both calculation methods are valid and complementary. The financially most attractive option of a publicly sponsored COC should be balanced against the clinical necessities and the sociopolitical context. Copyright © 2014 Elsevier Inc. All rights reserved.
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Because the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
Graph-based surface reconstruction from stereo pairs using image segmentation
NASA Astrophysics Data System (ADS)
Bleyer, Michael; Gelautz, Margrit
2005-01-01
This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.
Direct 3D-printing of cell-laden constructs in microfluidic architectures.
Liu, Justin; Hwang, Henry H; Wang, Pengrui; Whang, Grace; Chen, Shaochen
2016-04-21
Microfluidic platforms have greatly benefited the biological and medical fields; however, standard practices require a high cost of entry in terms of time and energy. The utilization of three-dimensional (3D) printing technologies has greatly enhanced the ability to iterate and build functional devices with unique functions. However, their inability to fabricate within microfluidic devices greatly increases the cost of producing several different devices to examine different scientific questions. In this work, a variable height micromixer (VHM) is fabricated using projection 3D-printing combined with soft lithography. Theoretical and flow experiments demonstrate that altering the local z-heights of VHM improved mixing at lower flow rates than simple geometries. Mixing of two fluids occurs as low as 320 μL min⁻¹ in VHM, whereas the planar zigzag region requires a flow rate of 2.4 mL min⁻¹ before full mixing occurs. Following device printing, to further demonstrate the ability of this projection-based method, complex, user-defined cell-laden scaffolds are directly printed inside the VHM. The utilization of this unique ability to produce 3D tissue models within a microfluidic system could offer a unique platform for medical diagnostics and disease modeling.
Instructions for Plastic Encapsulated Microcircuit(PEM) Selection, Screening and Qualification.
NASA Technical Reports Server (NTRS)
King, Terry; Teverovsky, Alexander; Leidecker, Henning
2002-01-01
The use of Plastic Encapsulated Microcircuits (PEMs) is permitted on NASA Goddard Space Flight Center (GSFC) spaceflight applications, provided each use is thoroughly evaluated for thermal, mechanical, and radiation implications of the specific application and found to meet mission requirements. PEMs shall be selected for their functional advantage and availability, not for cost saving; the steps necessary to ensure reliability usually negate any initial apparent cost advantage. A PEM shall not be substituted for a form, fit and functional equivalent, high reliability, hermetic device in spaceflight applications. Due to the rapid change in wafer-level designs typical of commercial parts and the unknown traceability between packaging lots and wafer lots, lot specific testing is required for PEMs, unless specifically excepted by the Mission Assurance Requirements (MAR) for the project. Lot specific qualification, screening, radiation hardness assurance analysis and/or testing, shall be consistent with the required reliability level as defined in the MAR. Developers proposing to use PEMs shall address the following items in their Performance Assurance Implementation Plan: source selection (manufacturers and distributors), storage conditions for all stages of use, packing, shipping and handling, electrostatic discharge (ESD), screening and qualification testing, derating, radiation hardness assurance, test house selection and control, data collection and retention.
Hybrid architecture for building secure sensor networks
NASA Astrophysics Data System (ADS)
Owens, Ken R., Jr.; Watkins, Steve E.
2012-04-01
Sensor networks have various communication and security architectural concerns. Three approaches are defined to address these concerns for sensor networks. The first area is the utilization of new computing architectures that leverage embedded virtualization software on the sensor. Deploying a small, embedded virtualization operating system on the sensor nodes that is designed to communicate to low-cost cloud computing infrastructure in the network is the foundation to delivering low-cost, secure sensor networks. The second area focuses on securing the sensor. Sensor security components include developing an identification scheme, and leveraging authentication algorithms and protocols that address security assurance within the physical, communication network, and application layers. This function will primarily be accomplished through encrypting the communication channel and integrating sensor network firewall and intrusion detection/prevention components to the sensor network architecture. Hence, sensor networks will be able to maintain high levels of security. The third area addresses the real-time and high priority nature of the data that sensor networks collect. This function requires that a quality-of-service (QoS) definition and algorithm be developed for delivering the right data at the right time. A hybrid architecture is proposed that combines software and hardware features to handle network traffic with diverse QoS requirements.
Morbidity, mortality and economic burden of renal impairment in cardiac intensive care.
Chew, D P; Astley, C; Molloy, D; Vaile, J; De Pasquale, C G; Aylward, P
2006-03-01
Moderate to severe impairment of renal function has emerged as a potent risk factor for adverse short- and long-term outcomes among patients presenting with cardiac disease. We sought to define the clinical, late mortality and economic burden of this risk factor among patients presenting to cardiac intensive care. A clinical audit of patients presenting to cardiac intensive care was undertaken between July 2002 and June 2003. All patients presenting with cardiac diagnoses were included in the study. Baseline creatinine levels were assessed in all patients. Late mortality was assessed by the interrogation of the National Death Register. Renal impairment was defined as estimated glomerular filtration rate <60 mL/min per 1.73 m2, as calculated by the Modified Diet in Renal Disease formula. In-hospital and late outcomes were compared by Cox proportional hazards modelling, adjusting for known confounders. A matched analysis and attributable risk calculation were undertaken to assess the proportion of late mortality accounted for by impairment of renal function and other known negative prognostic factors. The in-hospital total cost associated with renal impairment was assessed by linear regression. Glomerular filtration rate <60 mL/min per 1.73 m2 was evident in 33.0% of this population. Among these patients, in-hospital and late mortality were substantially increased: risk ratio 13.2; 95% CI 3.0-58.1; P < 0.001 and hazard ratio 6.2; 95% CI 3.6-10.7; P < 0.001, respectively. In matched analysis, renal impairment to this level was associated with 42.1% of all the late deaths observed. Paradoxically, patients with renal impairment were more conservatively managed, but their hospitalizations were associated with an excess adjusted in-hospital cost of $A1676. Impaired renal function is associated with a striking clinical and economic burden among patients presenting to cardiac intensive care. 
As a marker for future risk, renal function accounts for a substantial proportion of the burden of late mortality. The burden of risk suggests a greater potential opportunity for improvement of outcomes through optimisation of therapeutic strategies.
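The study's renal-impairment definition can be sketched with the 4-variable MDRD equation. The constant 175 below is the IDMS-traceable version; the study may have used the original constant of 186, so treat the values as illustrative (creatinine in mg/dL, eGFR in mL/min per 1.73 m²):

```python
def egfr_mdrd(serum_creatinine_mg_dl, age_years, female, black):
    """Estimated GFR by the 4-variable MDRD study equation
    (IDMS-traceable constant 175; an assumption, see lead-in)."""
    egfr = 175.0 * (serum_creatinine_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def renal_impairment(egfr):
    """Study definition: eGFR < 60 mL/min per 1.73 m^2."""
    return egfr < 60.0
```

Because eGFR falls with rising creatinine and with age, the binary cutoff at 60 captures moderate-to-severe impairment rather than any creatinine elevation.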
A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application Specific Integrated Circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem which must be solved early in the development cycle of a system due to the time required for testing and qualification severely limiting opportunities to modify and iterate. Manual design techniques which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and to analyze the results in a way which facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as framework of software modules, templates and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation. Templates provide a starting point for both while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design. 
The combination of test bench and cost function can then serve as a template for similar circuits or be re-used to migrate the design to different processes by re-running it with the new process specific device models. The system has been used in the design of time to digital converters for laser ranging and time-of-flight mass spectrometry to optimize analog, mixed signal and digital circuits such as charge sensitive amplifiers, comparators, delay elements, radiation tolerant dual interlocked (DICE) flip-flops and two of three voter gates.
Mass-storage management for distributed image/video archives
NASA Astrophysics Data System (ADS)
Franchi, Santina; Guarda, Roberto; Prampolini, Franco
1993-04-01
The realization of an image/video database requires a specific design for both database structures and mass storage management. This issue was addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with the related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow one to catalog devices and to modify device status and device network location. The medium level manages image/video files on a physical basis; it manages file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to meet delivery/visualization requirements and to reduce archiving costs.
Self-imposed length limits in recreational fisheries
Chizinski, Christopher J.; Martin, Dustin R.; Hurley, Keith L.; Pope, Kevin L.
2014-01-01
A primary motivating factor in the decision to harvest a fish among consumptive-oriented anglers is the size of the fish. There is likely a cost-benefit trade-off for harvest of individual fish that is size- and species-dependent, which should produce a logistic-type response of fish fate (release or harvest) as a function of fish size and species. We define the self-imposed length limit as the length at which a captured fish had a 50% probability of being harvested; this length was selected because it marks the point at which the probability of harvest becomes greater than the probability of release. We assessed the influences of fish size, catch per unit effort, size distribution of caught fish, and creel limit on the self-imposed length limits for bluegill Lepomis macrochirus, channel catfish Ictalurus punctatus, black crappie Pomoxis nigromaculatus and white crappie Pomoxis annularis combined, white bass Morone chrysops, and yellow perch Perca flavescens at six lakes in Nebraska, USA. As we predicted, the probability of harvest increased with increasing size for all species harvested, which supported the concept of a size-dependent trade-off in the costs and benefits of harvesting individual fish. It was also clear that the probability of harvest was not simply defined by fish length, but rather was likely influenced to various degrees by interactions between species, catch rate, size distribution, creel-limit regulation and fish size. A greater understanding of harvest decisions within the context of the perceived likelihood that a creel limit will be realized by a given angler party, which is a function of fish availability, harvest regulation, and angler skill and orientation, is needed to predict the influence that anglers have on fish communities and to allow managers to sustainably manage exploited fish populations in recreational fisheries.
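The self-imposed length limit defined above is simply the length at which a fitted logistic curve crosses 0.5. A minimal sketch, with hypothetical coefficients rather than the study's fitted values:

```python
import math

def harvest_probability(length_mm, b0, b1):
    """Logistic probability that a caught fish of a given length is harvested."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * length_mm)))

def self_imposed_length_limit(b0, b1):
    """Length where harvest probability crosses 50%, i.e. where the logit is 0."""
    return -b0 / b1

# Hypothetical coefficients for illustration (not estimated from the study):
b0, b1 = -6.0, 0.04  # logit scale; length in mm
L50 = self_imposed_length_limit(b0, b1)
print(round(L50, 1))  # 150.0: fish longer than this are more likely kept than released
```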
Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital
Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud
2016-01-01
Background: Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. The traditional cost system (TCS) causes cost distortions in hospitals. The activity-based costing (ABC) method is a newer and more effective cost system. Objective: This study aimed to compare the ABC method with the TCS in calculating the unit cost of medical services and to assess its applicability in Kashani Hospital, Shahrekord City, Iran. Methods: This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data from accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply the ABC method, the hospital was divided into several cost centers and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. The ABC method was performed in two phases. First, the total costs of the cost centers were assigned to activities by using related cost factors. Then the costs of activities were divided among cost objects by using cost drivers. After determining the cost of objects, the cost price of medical services was calculated and compared with that obtained from the TCS. Results: Kashani Hospital had 81 physicians, 306 nurses, and 328 beds, with a mean occupancy rate of 67.4% during 2012. The unit cost of medical services, the cost price of an occupied bed per day, and the cost per outpatient service were calculated. The total unit costs by ABC and TCS were 187.95 and 137.70 USD, respectively, with the ABC method yielding a substantially higher unit cost. The ABC method provided more accurate information on the major cost components. Conclusion: By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department. PMID:26234974
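The two-phase ABC allocation described in the Methods can be illustrated with a toy example. All cost centers, activities, driver volumes, and dollar figures below are hypothetical, chosen only to show the mechanics:

```python
# Stage 1: assign cost-center totals to activities via cost factors (shares).
# Stage 2: assign activity costs to cost objects via cost-driver volumes.
center_costs = {"nursing": 120_000.0, "overhead": 30_000.0}  # hypothetical USD
cost_factors = {  # share of each center's cost consumed by each activity
    "patient_care": {"nursing": 0.8, "overhead": 0.5},
    "admissions":   {"nursing": 0.2, "overhead": 0.5},
}
driver_volumes = {  # cost-driver volume per activity
    "patient_care": 2_000,  # bed-days
    "admissions":   500,    # admission events
}

activity_costs = {
    act: sum(center_costs[c] * share for c, share in shares.items())
    for act, shares in cost_factors.items()
}
unit_costs = {act: activity_costs[act] / driver_volumes[act] for act in activity_costs}
print(unit_costs)  # cost per bed-day and cost per admission
```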
Schnitzler, Mark A; Johnston, Karissa; Axelrod, David; Gheorghian, Adrian; Lentine, Krista L
2011-06-27
Improved early kidney transplant outcomes limit the contemporary utility of standard clinical endpoints. Quantifying the relationship of renal function at 1 year after transplant with subsequent clinical outcomes and healthcare costs may facilitate cost-benefit evaluations among transplant recipients. Data for Medicare-insured kidney-only transplant recipients (1995-2003) were drawn from the United States Renal Data System. Associations of estimated glomerular filtration rate (eGFR) level at the first transplant anniversary with subsequent death-censored graft failure and patient death in posttransplant years 1 to 3 and 4 to 7 were examined by parametric survival analysis. Associations of eGFR with total health care costs defined by Medicare payments were assessed with multivariate linear regression. Among 38,015 participants, first anniversary eGFR level demonstrated graded associations with subsequent outcomes. Compared with patients with 12-month eGFR greater than or equal to 60 mL/min/1.73 m², the adjusted relative risk of death-censored graft failure in years 1 to 3 was 31% greater for eGFR 45 to 59 mL/min/1.73 m² (P<0.0001) and 622% greater for eGFR 15 to 30 mL/min/1.73 m² (P<0.0001). Associations of first anniversary eGFR level with graft failure and mortality remained significant in years 4 to 7. The proportions of recipients expected to return to dialysis or die attributable to eGFR less than 60 mL/min/1.73 m² over 10 years were 23.1% and 9.4%, respectively, and were significantly higher than the proportions attributable to delayed graft function or acute rejection. Reduced eGFR was associated with graded and significant increases in health care spending during years 2 and 3 after transplant (P<0.0001). eGFR is strongly associated with clinical and economic outcomes after kidney transplantation.
A general framework for regularized, similarity-based image restoration.
Kheradmand, Amin; Milanfar, Peyman
2014-12-01
Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and a regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and the associated regularization term are obtained using fast symmetry-preserving matrix balancing. This results in desired spectral properties for the normalized Laplacian, such as being symmetric, positive semidefinite, and returning the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations: in each outer iteration, the similarity weights are recomputed using the previous estimate, and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to carry out a spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
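The key object in the cost function above is the balanced, normalized graph Laplacian. A small sketch on a 1-D "image", using a symmetric Sinkhorn-style iteration as a stand-in for the paper's fast matrix-balancing routine (an assumption, not the authors' exact algorithm), and checking the stated properties:

```python
import math

def gaussian_kernel(x, h=0.5):
    """Kernel similarity matrix W for a 1-D signal x."""
    n = len(x)
    return [[math.exp(-((x[i] - x[j]) ** 2) / (2.0 * h * h)) for j in range(n)]
            for i in range(n)]

def sinkhorn_balance(W, iters=500):
    """Symmetry-preserving balancing: find d so D W D (D = diag(d))
    is (approximately) doubly stochastic."""
    n = len(W)
    d = [1.0] * n
    for _ in range(iters):
        Wd = [sum(W[i][j] * d[j] for j in range(n)) for i in range(n)]
        d = [math.sqrt(d[i] / Wd[i]) for i in range(n)]
    return [[d[i] * W[i][j] * d[j] for j in range(n)] for i in range(n)]

signal = [0.1, 0.15, 0.8, 0.85, 0.2]  # toy 5-pixel "image"
W_bal = sinkhorn_balance(gaussian_kernel(signal))
n = len(signal)
# Normalized Laplacian L = I - W_bal: symmetric, rows sum to ~0, so it maps
# a constant image to (approximately) the zero vector.
L = [[(1.0 if i == j else 0.0) - W_bal[i][j] for j in range(n)] for i in range(n)]
print(max(abs(sum(row)) for row in L))  # ~0
```

The restoration step then minimizes a cost of the form data-fidelity plus eta * x^T L x with conjugate gradients, recomputing W from the latest estimate in each outer iteration.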
Stevens, V G; Hibbert, C L; Edbrooke, D L
1998-10-01
This study analyses the relationship between the actual patient-related costs of care calculated for 145 patients admitted sequentially to an adult general intensive care unit and a number of factors obtained from a previously described consensus of opinion study. The factors identified in that study were suggested as potential descriptors for the casemix of an intensive care unit that could be used to predict the costs of care. Significant correlations between the costs of care and severity of illness, workload and length of stay were found, but these failed to predict the costs of care with sufficient accuracy to be used in isolation to define isoresource groups in the intensive care unit. No associations between intensive care unit mortality, reason for admission, intensive care unit treatments, and costs of care were found. Based on these results, it seems that casemix descriptors and isoresource groups for the intensive care unit that would allow costs to be predicted cannot be defined in terms of single factors.
On some recent definitions and analysis frameworks for risk, vulnerability, and resilience.
Aven, Terje
2011-04-01
Recently, considerable attention has been paid to a systems-based approach to risk, vulnerability, and resilience analysis. It is argued that risk, vulnerability, and resilience are inherently and fundamentally functions of the states of the system and its environment. Vulnerability is defined as the manifestation of the inherent states of the system that can be subjected to a natural hazard or be exploited to adversely affect that system, whereas resilience is defined as the ability of the system to withstand a major disruption within acceptable degradation parameters and to recover within acceptable time, composite costs, and risks. Risk, on the other hand, is probability based, defined by the probability and severity of adverse effects (i.e., the consequences). In this article, we look more closely into this approach. It is observed that the key concepts are inconsistent in the sense that the uncertainty (probability) dimension is included in the risk definition but not in those of vulnerability and resilience. In the article, we question the rationale for this inconsistency. The suggested approach is compared with an alternative framework that provides a logically defined structure for risk, vulnerability, and resilience, in which all three concepts incorporate the uncertainty (probability) dimension. © 2010 Society for Risk Analysis.
Optimized, Budget-constrained Monitoring Well Placement Using DREAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yonkofski, Catherine M. R.; Davidson, Casie L.; Rodriguez, Luke R.
2017-08-18
Defining the ideal suite of monitoring technologies to be deployed at a carbon capture and storage (CCS) site presents a challenge to project developers, financers, insurers, regulators and other stakeholders. The monitoring, verification, and accounting (MVA) toolkit offers a suite of technologies to monitor an extensive range of parameters across a wide span of spatial and temporal resolutions, each with their own degree of sensitivity to changes in the parameter being monitored. Understanding how best to optimize MVA budgets to minimize the time to leak detection could help to address issues around project risks, and in turn help support broad CCS deployment. This paper presents a case study demonstrating an application of the Designs for Risk Evaluation and Management (DREAM) tool using an ensemble of CO2 leakage scenarios taken from a previous study on leakage impacts to groundwater. Impacts were assessed and monitored as a function of pH, total dissolved solids (TDS), and trace metal concentrations of arsenic (As), cadmium (Cd), chromium (Cr), and lead (Pb). Using output from the previous study, DREAM was used to optimize monitoring system designs based on variable sampling locations and parameters. The algorithm requires the user to define a finite budget to limit the number of monitoring wells and technologies deployed, and then iterates well placement and sensor type and location until it converges on the configuration with the lowest time to first detection of the leak averaged across all scenarios. To facilitate an understanding of the optimal number of sampling wells, DREAM was used to assess the marginal utility of additional sampling locations. Based on assumptions about monitoring costs and replacement costs of degraded water, the incremental cost of each additional sampling well can be compared against its marginal value in terms of avoided aquifer degradation.
Applying this method, DREAM identified the most cost-effective ensemble with 14 monitoring locations. While this preliminary study applied relatively simplistic cost and technology assumptions, it provides an exciting proof-of-concept for the application of DREAM to questions of cost-optimized MVA system design that are informed not only by site-specific costs and technology options, but also by reservoir simulation results developed during site characterization and operation.
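The core search described above, choosing a well configuration that minimizes the time to first detection averaged across leak scenarios under a budget, can be sketched by brute force on a toy detection-time matrix. All values are hypothetical, and the real DREAM tool iterates sensor types and locations rather than enumerating combinations:

```python
from itertools import combinations

INF = float("inf")
# detect[s][w]: year in which candidate well w first detects leak scenario s
# (INF if that well never detects scenario s). Hypothetical values.
detect = [
    [3.0, INF, 8.0, 2.5],
    [INF, 4.0, 6.0, 9.0],
    [5.0, 7.0, INF, 4.5],
]
budget_wells = 2  # the budget is expressed here simply as a cap on well count

def avg_time_to_first_detection(wells):
    """Average over scenarios of the earliest detection by any chosen well."""
    return sum(min(detect[s][w] for w in wells) for s in range(len(detect))) / len(detect)

best = min(combinations(range(len(detect[0])), budget_wells),
           key=avg_time_to_first_detection)
print(best, avg_time_to_first_detection(best))  # wells (1, 3), about 3.67 years
```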
Tran, Duc; Doan, Nguyen; Louime, Clifford; Giordano, Mario; Portilla, Sixto
2014-01-01
Dunaliella is currently drawing worldwide attention as an alternative source of nutraceuticals. Commercially, β-carotene, which can make up over 10% of Dunaliella biomass, generates the most interest. These compounds, because of their non-toxic properties, have found applications in the food, drug and cosmetic industries. The β-carotene content of Dunaliella cells, however, depends heavily on the growth conditions, especially on the availability of nutrients, salinity, irradiance and temperature in the growth medium. A chemically well-defined medium is usually required, which significantly contributes to the cost of pigment production; hence the desire for low-cost marine media. The present study aimed at evaluating the suitability of six different media, especially ones exploiting local resources, for the mass production of Dunaliella salina DCCBC15 as a functional food and medicine. The efficacy of a newly selected low-cost enriched natural seawater medium (MD4), supplemented with industrial N-P-K fertilizer, was investigated with respect to biomass production, chlorophyll, antioxidant capacity, and total carotene of Dunaliella, even though culture conditions had not yet been optimized. This new medium (MD4) appears extremely promising, since it affords a higher production of Dunaliella biomass and pigments compared with the control, a common artificial medium (MD1), while allowing a substantial reduction in production costs. The medium is also recommended for culturing other marine algae.
Specialty and full-service hospitals: a comparative cost analysis.
Carey, Kathleen; Burgess, James F; Young, Gary J
2008-10-01
To compare the costs of physician-owned cardiac, orthopedic, and surgical single specialty hospitals with those of full-service hospital competitors. The primary data sources are the Medicare Cost Reports for 1998-2004 and hospital inpatient discharge data for three of the states where single specialty hospitals are most prevalent: Texas, California, and Arizona. The latter were obtained from the Texas Department of State Health Services, the California Office of Statewide Health Planning and Development, and the Agency for Healthcare Research and Quality Healthcare Cost and Utilization Project. Additional data come from the American Hospital Association Annual Survey Database. We identified all physician-owned cardiac, orthopedic, and surgical specialty hospitals in these three states as well as all full-service acute care hospitals serving the same market areas, defined using Dartmouth Hospital Referral Regions. We estimated a hospital cost function using stochastic frontier regression analysis, and generated hospital-specific inefficiency measures. t-tests were applied to compare the inefficiency measures of specialty hospitals with those of full-service hospitals. Results do not provide evidence that specialty hospitals are more efficient than the full-service hospitals with which they compete. In particular, orthopedic and surgical specialty hospitals appear to have significantly higher levels of cost inefficiency. Cardiac hospitals, however, do not appear to be different from competitors in this respect. Policymakers should not embrace the assumption that physician-owned specialty hospitals produce patient care more efficiently than their full-service hospital competitors.
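As an illustration of frontier-style inefficiency measurement, here is a corrected-OLS (COLS) sketch, a simpler deterministic cousin of the stochastic frontier regression the study actually uses. The hospital data are invented:

```python
# Hypothetical (log output, log cost) observations for six hospitals.
data = [(4.0, 6.1), (4.5, 6.5), (5.0, 6.9), (5.5, 7.4), (4.2, 6.6), (4.8, 7.1)]

n = len(data)
xs = [x for x, _ in data]
xbar = sum(xs) / n
ybar = sum(y for _, y in data) / n

# Ordinary least squares for the log-cost function log C = a + b * log Q:
slope = sum((x - xbar) * (y - ybar) for x, y in data) / sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

# COLS: shift the fitted line down through the best performer; the remaining
# residual is each hospital's estimated (log) cost inefficiency.
residuals = [y - (intercept + slope * x) for x, y in data]
shift = min(residuals)
inefficiency = [r - shift for r in residuals]  # >= 0, 0 for the frontier hospital
print([round(u, 3) for u in inefficiency])
```

A stochastic frontier model additionally splits the residual into noise and a one-sided inefficiency term; COLS attributes the entire gap to inefficiency.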
NASA Astrophysics Data System (ADS)
Dambreville, Frédéric
2013-10-01
While there is a variety of approaches and algorithms for optimizing the mission of an unmanned moving sensor, far fewer works deal with the implementation of several sensors within a human organization. In this case, the management of the sensors is done through at least one human decision layer, and sensor management as a whole arises as a bi-level optimization process. In this work, the following hypotheses are considered realistic: first-level sensor handlers plan their sensors by means of elaborate algorithmic tools based on accurate modelling of the environment; the higher level plans the handled sensors according to a global observation mission, on the basis of an approximated model of the environment and of the first-level sub-processes. This problem is formalized very generally as the maximization of an unknown function, defined a priori by sampling a known random function (the law of model error). In such a case, each actual evaluation of the function increases the knowledge about the function, and subsequently the efficiency of the maximization. The issue is to optimize the sequence of values to be evaluated, with regard to the evaluation costs. There is here a fundamental link with the domain of experimental design. Jones, Schonlau and Welch proposed a general method, Efficient Global Optimization (EGO), for solving this problem in the case of an additive functional Gaussian law. In our work, a generalization of EGO is proposed, based on a rare event simulation approach. It is applied to the aforementioned bi-level sensor planning.
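At the heart of EGO, which the generalization above builds on, is the expected-improvement acquisition. A standard minimization form is sketched below; the candidate predictive means and standard deviations are hypothetical stand-ins for a Gaussian-process model:

```python
import math

def expected_improvement(mu, sigma, f_min):
    """EGO acquisition (minimization): expected amount by which a candidate
    with Gaussian predictive mean mu and std sigma improves on f_min."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    return (f_min - mu) * Phi + sigma * phi

# Hypothetical candidates as (predictive mean, predictive std); best value
# observed so far is 1.0. EGO evaluates next wherever EI is largest.
candidates = [(0.9, 0.05), (1.0, 0.5), (1.3, 0.8)]
best = max(candidates, key=lambda ms: expected_improvement(ms[0], ms[1], 1.0))
print(best)  # the uncertain candidate beats the small-but-sure improvement
```

This balance, rewarding both a low predicted value and high predictive uncertainty, is what lets EGO spend expensive evaluations sparingly.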
Containing U.S. health care costs: What bullet to bite?
Jencks, Stephen F.; Schieber, George J.
1992-01-01
In this article, the authors provide an overview of the problem of health care cost containment. Both the growth of health care spending and its underlying causes are discussed. Further, the authors define cost containment, provide a framework for describing cost-containment strategies, and describe the major cost-containment strategies. Finally, the role of research in choosing such a strategy for the United States is examined. PMID:25372928
NASA Astrophysics Data System (ADS)
Dimitropoulos, Dimitrios
Electricity industries are experiencing upward cost pressures in many parts of the world. Chapter 1 of this thesis studies the production technology of electricity distributors. Although production and cost functions are mathematical duals, practitioners typically estimate only one or the other. This chapter proposes an approach for joint estimation of production and costs. Combining such quantity and price data has the effect of adding statistical information without introducing additional parameters into the model. We define a GMM estimator that produces internally consistent parameter estimates for both the production function and the cost function. We consider a multi-output framework, and show how to account for the presence of certain types of simultaneity and measurement error. The methodology is applied to data on 73 Ontario distributors for the period 2002-2012. As expected, the joint model results in a substantial improvement in the precision of parameter estimates. Chapter 2 focuses on productivity trends in electricity distribution. We apply two methodologies for estimating productivity growth, an index-based approach and an econometric cost-based approach, to our data on the 73 Ontario distributors for the period 2002 to 2012. The resulting productivity growth estimates are approximately -1% per year, suggesting a reversal of the positive estimates that have generally been reported in previous periods. We implement flexible semi-parametric variants to assess the robustness of these conclusions and discuss the use of such statistical analyses for calibrating productivity and relative efficiencies within a price-cap framework. In chapter 3, I turn to the historically important problem of vertical contractual relations. While the existing literature has established that resale price maintenance is sufficient to coordinate the distribution network of a manufacturer, this chapter asks whether such vertical restraints are necessary.
Specifically, I study the vertical contracting problem between an upstream manufacturer and its downstream distributors in a setting where spot market contracts fail, but resale price maintenance cannot be appealed to due to legal prohibition. I show that a bonus scheme based on retail revenues is sufficient to provide incentives to decentralized retailers to elicit the correct levels of both price and service.
Reliability based design including future tests and multiagent approaches
NASA Astrophysics Data System (ADS)
Villanueva, Diane
The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the trade-off between the effect of a test and post-test redesign on reliability and cost, and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors, together with redesign rules, to simulate alternative future test and redesign outcomes and form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign gives a company an opportunity to balance development costs against performance by simultaneously choosing the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require searching multiple candidate regions of the design space, expending most of the computation needed to define multiple alternate designs; thus, focusing solely on locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima.
The efficiency of this method was studied, and the method was compared to other surrogate-based optimization methods that aim to locate the global optimum using two two-dimensional test functions, a six-dimensional test function, and a five-dimensional engineering example.
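The future-test-and-redesign methodology above can be sketched as a Monte Carlo over assumed error distributions. The margin levels, error standard deviations, and redesign rule below are illustrative assumptions, not the study's values:

```python
import random

random.seed(0)  # reproducible sketch

def simulate(n=100_000, design_margin=1.10, trigger=1.05, boost=1.10):
    """Monte Carlo: a future test measures the true safety margin with noise;
    a simple redesign rule boosts designs that test below the trigger."""
    failures = redesigns = 0
    for _ in range(n):
        comp_err = random.gauss(0.0, 0.05)   # computational (model) error
        true_margin = design_margin * (1.0 + comp_err)
        exp_err = random.gauss(0.0, 0.02)    # experimental (test) error
        measured = true_margin * (1.0 + exp_err)
        if measured < trigger:               # post-test redesign rule
            true_margin *= boost
            redesigns += 1
        if true_margin < 1.0:                # failure: capacity below load
            failures += 1
    return failures / n, redesigns / n

p_fail, p_redesign = simulate()
print(p_fail, p_redesign)
```

Sweeping the trigger and boost trades redesign cost (how often the rule fires) against reliability (the failure probability), which is exactly the balance the methodology quantifies.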
NASA Astrophysics Data System (ADS)
Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca
2010-05-01
Modern distributed hydrological models allow the representation of different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm) that minimize a cost function representing some distance between the model's output and the available measurements, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that assimilate the observations. In this work the first approach was followed in order to compare the performances of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and encompasses a large number of possible approaches. The main benefit of this class of methods is that they do not require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and utilized here as a reference. The second algorithm is a GSS (Generating Set Search) algorithm, built to guarantee the conditions of global convergence and suitable for the parallel, multi-start implementation presented here. The third is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions whose evaluation requires expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high.
The hydrological model to be calibrated was MOBIDIC, a complete distributed balance model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion comparing the effectiveness of the different algorithms on case studies of Central Italy basins is provided.
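As an illustration of the derivative-free setting described above, the sketch below calibrates a stand-in for a hydrological simulator with SciPy's Nelder-Mead implementation. The two-parameter toy model, the parameter names, and the observed series are invented for the example, not taken from MOBIDIC; any black-box simulator wrapped in a cost function could take its place.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical observed discharge series (invented values).
observed = np.array([1.2, 3.4, 2.8, 4.1, 3.0])

def run_model(params):
    """Stand-in for a call to the hydrological code; a toy exponential recession."""
    k, s = params  # assumed names: a scale and a storage-like parameter
    t = np.arange(len(observed))
    return k * np.exp(-t / max(s, 1e-6))

def cost(params):
    """RMSE between simulated and observed series; no derivatives available."""
    sim = run_model(params)
    return np.sqrt(np.mean((sim - observed) ** 2))

# Nelder-Mead needs only cost evaluations, so the simulator stays a black box
# and no changes to its implementation are required.
result = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x, result.fun)
```

A GSS or EGO run would plug into the same `cost` callable, which is what makes the comparison between direct search algorithms straightforward in practice.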
NASA Astrophysics Data System (ADS)
Massmann, Joel; Freeze, R. Allan
1987-02-01
This paper puts in place a risk-cost-benefit analysis for waste management facilities that explicitly recognizes the adversarial relationship that exists in a regulated market economy between the owner/operator of a waste management facility and the government regulatory agency under whose terms the facility must be licensed. The risk-cost-benefit analysis is set up from the perspective of the owner/operator. It can be used directly by the owner/operator to assess alternative design strategies. It can also be used by the regulatory agency to assess alternative regulatory policy, but only in an indirect manner, by examining the response of an owner/operator to the stimuli of various policies. The objective function is couched in terms of a discounted stream of benefits, costs, and risks over an engineering time horizon. Benefits are in the form of revenues for services provided; costs are those of construction and operation of the facility. Risk is defined as the cost associated with the probability of failure, with failure defined as the occurrence of a groundwater contamination event that violates the licensing requirements established for the facility. Failure requires a breach of the containment structure and contaminant migration through the hydrogeological environment to a compliance surface. The probability of failure can be estimated on the basis of reliability theory for the breach of containment and with a Monte-Carlo finite-element simulation for the advective contaminant transport. In the hydrogeological environment the hydraulic conductivity values are defined stochastically. The probability of failure is reduced by the presence of a monitoring network operated by the owner/operator and located between the source and the regulatory compliance surface. The level of reduction in the probability of failure depends on the probability of detection of the monitoring network, which can be calculated from the stochastic contaminant transport simulations. 
While the framework is quite general, the development in this paper is specifically suited for a landfill in which the primary design feature is one or more synthetic liners in parallel. Contamination is brought about by the release of a single, inorganic nonradioactive species into a saturated, high-permeability, advective, steady state horizontal flow system which can be analyzed with a two-dimensional analysis. It is possible to carry out sensitivity analyses for a wide variety of influences on this system, including landfill size, liner design, hydrogeological parameters, amount of exploration, extent of monitoring network, nature of remedial schemes, economic factors, and regulatory policy.
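The probability-of-failure machinery described above can be sketched as a Monte Carlo experiment over stochastically defined hydraulic conductivity. Every numerical value below (distance, gradient, porosity, the lognormal parameters, the detection probability) is an invented illustration, not a figure from the paper, and the advection model is deliberately reduced to a single travel-time check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative values, not from the paper.
n_trials = 10_000
distance_m = 500.0      # source to regulatory compliance surface
gradient = 0.01         # hydraulic gradient
porosity = 0.3
horizon_yr = 50.0       # engineering time horizon

# Hydraulic conductivity defined stochastically (lognormal, m/yr).
K = rng.lognormal(mean=np.log(300.0), sigma=1.0, size=n_trials)

# Advective travel time to the compliance surface in each realization.
velocity = K * gradient / porosity   # m/yr
travel_time = distance_m / velocity  # yr

# Failure: the contaminant reaches the compliance surface within the horizon.
p_fail = np.mean(travel_time < horizon_yr)

# A monitoring network with detection probability p_detect reduces the
# probability that a failure goes unremediated.
p_detect = 0.8
p_fail_monitored = p_fail * (1.0 - p_detect)
print(p_fail, p_fail_monitored)
```

In the paper's framework the resulting probability would be multiplied by the cost of failure and discounted into the risk term of the objective function.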
A study of commuter airline economics
NASA Technical Reports Server (NTRS)
Summerfield, J. R.
1976-01-01
Variables are defined and cost relationships developed that describe the direct and indirect operating costs of commuter airlines. The study focused on costs for new aircraft and new aircraft technology when applied to the commuter airline industry. With proper judgement and selection of input variables, the operating costs model was shown to be capable of providing economic insight into other commuter airline system evaluations.
Fukuda, Takayuki; Takayama, Kazuo; Hirata, Mitsuhi; Liu, Yu-Jung; Yanagihara, Kana; Suga, Mika; Mizuguchi, Hiroyuki; Furue, Miho K
2017-03-15
The limited growth potential, narrow range of sources, and batch-to-batch variability in function of primary hepatocytes pose a problem for predicting drug-induced hepatotoxicity during drug development. Human pluripotent stem cell (hPSC)-derived hepatocyte-like cells generated in vitro are expected to serve as a tool for predicting drug-induced hepatotoxicity. Several studies have already reported efficient methods for differentiating hPSCs into hepatocyte-like cells; however, the differentiation process is time-consuming, labor-intensive, cost-intensive, and unstable. To address this problem, expansion culture of hPSC-derived hepatic progenitor cells, including hepatic stem cells and hepatoblasts, which can self-renew and differentiate into hepatocytes, should be valuable as a source of hepatocytes. However, the mechanisms of expansion of hPSC-derived hepatic progenitor cells are not yet fully understood. In this study, to isolate hPSC-derived hepatic progenitor cells, we developed serum-free, growth-factor-defined culture conditions using defined components. Our culture conditions were able to isolate and grow hPSC-derived hepatic progenitor cells, which could differentiate into hepatocyte-like cells through hepatoblast-like cells. We confirmed that hepatocyte-like cells prepared by our methods increased gene expression of cytochrome P450 enzymes upon exposure to rifampicin, phenobarbital, or omeprazole. The isolation and expansion of hPSC-derived hepatic progenitor cells in defined culture conditions should have advantages in detecting accurate effects of exogenous factors on hepatic lineage differentiation, understanding the mechanisms underlying the self-renewal ability of hepatic progenitor cells, and stably supplying functional hepatic cells. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Ceftazidime dosing in the elderly: economic implications.
Vlasses, P H; Bastion, W A; Behal, R; Sirgo, M A
1993-01-01
This study evaluated the prevalence and resulting costs of ceftazidime dosing in excess of product labeling recommendations in elderly hospitalized patients. Ceftazidime is a beta-lactam antibiotic excreted via glomerular filtration. According to product labeling, ceftazidime dosing can frequently be decreased in the elderly because glomerular filtration declines with age. A multicenter, retrospective utilization audit involving 11 US academic medical centers examined 221 medical records of patients 65 years of age or older receiving ceftazidime (any brand, any indication). The creatinine clearance of each patient was estimated using the Cockcroft-Gault formula. Renal insufficiency, defined as an estimated creatinine clearance of less than 50 mL/min, was present in 111 of the patients (50 percent). Ceftazidime dosing in excess of product labeling recommendations was noted in 75 of those 111 (68 percent). The cost of excess ceftazidime dosing for those 75 patients (i.e., extra drug acquisition, preparation, administration) was $13,822.50. Although the dosage of ceftazidime required in a specific patient is based on many factors, ceftazidime is frequently overdosed in the elderly because renal function is not considered. Ceftazidime dose-adjustment in the elderly, based on the estimated creatinine clearance, can lead to cost savings. In the US, where hospital reimbursement by Medicare is based on diagnosis, institutions can realize direct cost savings.
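The Cockcroft-Gault estimate used in the audit can be computed directly; a minimal sketch follows (the example patient is hypothetical, and the <50 mL/min cutoff is the renal-insufficiency definition from the study above):

```python
def cockcroft_gault(age_yr, weight_kg, scr_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula:
    CrCl = (140 - age) * weight / (72 * serum creatinine), x 0.85 for women."""
    crcl = (140 - age_yr) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical patient: an 80-year-old, 60 kg woman, serum creatinine 1.2 mg/dL.
crcl = cockcroft_gault(80, 60, 1.2, female=True)
renal_insufficiency = crcl < 50  # the study's dose-adjustment threshold
print(round(crcl, 1), renal_insufficiency)  # -> 35.4 True
```

Flagging such patients at order entry is the kind of dose-adjustment check the authors argue yields direct cost savings.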
'Fab-chips': a versatile, fabric-based platform for low-cost, rapid and multiplexed diagnostics.
Bhandari, Paridhi; Narahari, Tanya; Dendukuri, Dhananjaya
2011-08-07
Low cost and scalable manufacture of lab-on-chip devices for applications such as point-of-care testing is an urgent need. Weaving is presented as a unified, scalable and low-cost platform for the manufacture of fabric chips that can be used to perform such testing. Silk yarns with different properties are first selected, treated with the appropriate reagent solutions, dried and handloom-woven in one step into an integrated fabric chip. This platform has the unique advantage of scaling up production using existing and low cost physical infrastructure. We have demonstrated the ability to create pre-defined flow paths in fabric by using wetting and non-wetting silk yarns and a Jacquard attachment in the loom. Further, we show that yarn parameters such as the yarn twist frequency and weaving coverage area may be conveniently used to tune both the wicking rate and the absorptive capacity of the fabric. Yarns optimized for their final function were used to create an integrated fabric chip containing reagent-coated yarns. Strips of this fabric were then used to perform a proof-of-concept immunoassay with sample flow taking place by capillary action and detection being performed by a visual readout. This journal is © The Royal Society of Chemistry 2011
The effectiveness and costs of comprehensive geriatric evaluation and management.
Wieland, Darryl
2003-11-01
Comprehensive geriatric assessment (CGA) is a multidimensional interdisciplinary diagnostic process focused on determining a frail elderly person's medical, psychological, and functional capabilities in order to develop a coordinated and integrated plan for treatment and long-term follow-up. Geriatric interventions building on CGA are traced from their historical emergence to the present day in a discussion of their complexity, goals and normative components. Through literature review, questions of the effectiveness and costs of these interventions are addressed. Evidence of effectiveness is derived from individual trials and, particularly, recent systematic reviews. While the trial evidence lends support to the proposition that geriatric interventions can be effective, the results have not been uniform. Review of meta-regression studies suggests that much of this outcome variability is related to identifiable program design parameters. In particular, targeting the frail, an interdisciplinary team structure with clinical control of care, and long-term follow-up tend to be associated with effective programs. Answers to cost-effectiveness questions also vary and are rarer. With some exceptions, such evidence as exists suggests that geriatric interventions can be effective without raising total costs of care. Despite the attention given to these questions in recent years, there is still much room for clinical and scientific advance as we move to better understand what CGA interventions do well and in whom.
11 CFR 9004.9 - Net outstanding qualified campaign expenses.
Code of Federal Regulations, 2011 CFR
2011-01-01
... of the necessary winding down costs, as defined under 11 CFR 9004.4(a)(4), submitted in the format... amount submitted as an estimate of necessary winding down costs under paragraph (a)(1)(iii) of this... shall include estimated costs for office space rental, staff salaries, legal expenses, accounting...
11 CFR 9004.9 - Net outstanding qualified campaign expenses.
Code of Federal Regulations, 2012 CFR
2012-01-01
... of the necessary winding down costs, as defined under 11 CFR 9004.4(a)(4), submitted in the format... amount submitted as an estimate of necessary winding down costs under paragraph (a)(1)(iii) of this... shall include estimated costs for office space rental, staff salaries, legal expenses, accounting...
11 CFR 9004.9 - Net outstanding qualified campaign expenses.
Code of Federal Regulations, 2013 CFR
2013-01-01
... of the necessary winding down costs, as defined under 11 CFR 9004.4(a)(4), submitted in the format... amount submitted as an estimate of necessary winding down costs under paragraph (a)(1)(iii) of this... shall include estimated costs for office space rental, staff salaries, legal expenses, accounting...
11 CFR 9004.9 - Net outstanding qualified campaign expenses.
Code of Federal Regulations, 2014 CFR
2014-01-01
... of the necessary winding down costs, as defined under 11 CFR 9004.4(a)(4), submitted in the format... amount submitted as an estimate of necessary winding down costs under paragraph (a)(1)(iii) of this... shall include estimated costs for office space rental, staff salaries, legal expenses, accounting...
Vocational Rehabilitation Counseling: Cost/Benefits Ratio.
ERIC Educational Resources Information Center
Gross, Cecily
Demonstrating cost effectiveness for vocational rehabilitative counseling is difficult. Monetary benefits can be assigned to some of the benefits of vocational counseling, such as the benefit of getting a person off unemployment, but not to others, such as improved family relations. It is also difficult to define cost effectiveness because there…
48 CFR 9903.301 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... contract (including options). Deferred compensation. See 9904.415-30. Defined-benefit pension plan. See.... See 9904.403-30. Original complement of low cost equipment. See 9904.404-30. Pay-as-you-go cost method... and maintenance. See 9904.404-30. Reporting costs. See 9904.401-30. Residual value. See 9904.409-30...
48 CFR 9903.301 - Definitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... contract (including options). Deferred compensation. See 9904.415-30. Defined-benefit pension plan. See.... See 9904.403-30. Original complement of low cost equipment. See 9904.404-30. Pay-as-you-go cost method... and maintenance. See 9904.404-30. Reporting costs. See 9904.401-30. Residual value. See 9904.409-30...
48 CFR 9903.301 - Definitions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... contract (including options). Deferred compensation. See 9904.415-30. Defined-benefit pension plan. See.... See 9904.403-30. Original complement of low cost equipment. See 9904.404-30. Pay-as-you-go cost method... and maintenance. See 9904.404-30. Reporting costs. See 9904.401-30. Residual value. See 9904.409-30...
Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Knox, Lenora A.
The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges, which include defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how best to integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture in which the functionality to complete a mission is disseminated across multiple UAVs (distributed) as opposed to being contained in a single UAV (monolithic). The case-study-based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient, based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.
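A weighted sum is one of the simplest MCDM techniques that could support such a trade study. The sketch below ranks the two architectures under invented scores and weights; the abstract does not specify RAMSoS-RESIL's actual scoring scheme, so this illustrates only the mechanics of combining performance, risk, and cost into a single ranking.

```python
# Hypothetical criterion scores in [0, 1], higher is better; weights are
# assumptions, not values from the study.
criteria = ["mission performance", "risk", "cost"]
weights = [0.5, 0.3, 0.2]

alternatives = {
    "distributed": [0.9, 0.8, 0.5],
    "monolithic":  [0.7, 0.5, 0.8],
}

def weighted_score(scores, weights):
    """Weighted-sum MCDM aggregation of normalized criterion scores."""
    return sum(s * w for s, w in zip(scores, weights))

ranked = sorted(alternatives,
                key=lambda a: weighted_score(alternatives[a], weights),
                reverse=True)
print(ranked[0])  # the preferred architecture under these weights
```

Varying the weights over a grid is a cheap way to run the kind of trade-study variability analysis the methodology calls for.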
He, Huamei; Hoyer, Kirsten; Tao, Hai; Rice, Ronald; Jimenez, Jesus; Tardiff, Jil C; Ingwall, Joanne S
2012-11-01
The thin filament protein troponin T (TnT) is a regulator of sarcomere function. Whole heart energetics and contractile reserve are compromised in transgenic mice bearing missense mutations at R92 within the tropomyosin-binding domain of cTnT, despite being distal to the ATP hydrolysis domain of myosin. These mutations are associated with familial hypertrophic cardiomyopathy (FHC). Here we test the hypothesis that genetically replacing murine αα-MyHC with murine ββ-MyHC in hearts bearing the R92Q cTnT mutation, a particularly lethal FHC-associated mutation, leads to sufficiently large perturbations in sarcomere function to rescue whole heart energetics and decrease the cost of contraction. By comparing R92Q cTnT and R92L cTnT mutant hearts, we also test whether any rescue is mutation-specific. We defined the energetic state of the isolated perfused heart using (31)P-NMR spectroscopy while simultaneously measuring contractile performance at four work states. We found that the cost of increasing contraction in intact mouse hearts with R92Q cTnT depends on the type of myosin present in the thick filament. We also found that the salutary effect of this manoeuvre is mutation-specific, demonstrating the major regulatory role of cTnT on sarcomere function at the whole heart level.
NASA Astrophysics Data System (ADS)
Palestini, C.; Basso, A.; Graziani, L.
2018-05-01
Considering the use of low-cost photogrammetric survey methodologies and of Historical-BIM assets, this contribution addresses the knowledge and safety adaptation of school buildings, a topic brought to attention by the many situations of seismic risk that have affected the central Apennines in Italy. The specific investigation refers to the Abruzzo region, hit by the earthquakes of 2016 and 2009, which highlighted the vulnerability of the building structures within a large seismic crater covering wide areas of the territory. This motivates assessing in advance the performance standards of building components, especially with regard to the strategic functions they house. In this sense, school buildings emerged as a type deserving prompt attention, considering the functions performed within them and the possible criticality of such constructions, which are often dated, enlarged or readjusted without appropriate seismic adaptation plans. Hence the purpose of the research: a systematic recognition of the school building stock based on objective, rapid and low-cost surveys, taking into account the as-built state and the different formal and structural aspects that define the architectural organisms, to be analysed and managed through three-dimensional models that can be interrogated using HBIM connected to databases containing structural and functional information. In summary, through the implementation of information in the BIM model it will be possible to query and obtain in real time all the information necessary to optimize efficiency, costs, and future maintenance operations.
Analyzing cost-effectiveness of ulnar and median nerve transfers to regain forearm flexion.
Wali, Arvin R; Park, Charlie C; Brown, Justin M; Mandeville, Ross
2017-03-01
OBJECTIVE Peripheral nerve transfers to regain elbow flexion via the ulnar nerve (Oberlin nerve transfer) and median nerves are surgical options that benefit patients. Prior studies have assessed the comparative effectiveness of ulnar and median nerve transfers for upper trunk brachial plexus injury, yet no study has examined the cost-effectiveness of this surgery to improve quality-adjusted life years (QALYs). The authors present a cost-effectiveness model of the Oberlin nerve transfer and median nerve transfer to restore elbow flexion in the adult population with upper brachial plexus injury. METHODS Using a Markov model, the authors simulated ulnar and median nerve transfers and conservative measures in terms of neurological recovery and improvements in quality of life (QOL) for patients with upper brachial plexus injury. Transition probabilities were collected from previous studies that assessed the surgical efficacy of ulnar and median nerve transfers, complication rates associated with comparable surgical interventions, and the natural history of conservative measures. Incremental cost-effectiveness ratios (ICERs), defined as cost in dollars per QALY, were calculated. Incremental cost-effectiveness ratios less than $50,000/QALY were considered cost-effective. One-way and 2-way sensitivity analyses were used to assess parameter uncertainty. Probabilistic sampling was used to assess ranges of outcomes across 100,000 trials. RESULTS The authors' base-case model demonstrated that ulnar and median nerve transfers, with an estimated cost of $5066.19, improved effectiveness by 0.79 QALY over a lifetime compared with conservative management. Without modeling the indirect cost due to loss of income over lifetime associated with elbow function loss, surgical treatment had an ICER of $6453.41/QALY gained. 
Factoring in the loss of income as an indirect cost, surgical treatment had an ICER of -$96,755.42/QALY gained, demonstrating an overall lifetime cost savings due to the increased probability of returning to work. One-way sensitivity analysis demonstrated that the model was most sensitive to assumptions about the cost of surgery, the probability of a good surgical outcome, and spontaneous recovery of neurological function with conservative treatment. Two-way sensitivity analysis demonstrated that surgical intervention remained cost-effective, with an ICER of $18,828.06/QALY, even under the authors' most conservative parameters (surgical costs of $50,000 and a 50% probability of success) when considering the potential income recovered through returning to work. Probabilistic sampling demonstrated that surgical intervention was cost-effective in 76% of cases at a willingness-to-pay threshold of $50,000/QALY gained. CONCLUSIONS The authors' model demonstrates that ulnar and median nerve transfers for upper brachial plexus injury improve QALYs in a cost-effective manner.
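The ICER used throughout this abstract is a simple incremental ratio. The sketch below applies it to the base-case numbers quoted above, treating conservative management as a zero-cost, zero-gain comparator for illustration only; the result differs slightly from the paper's reported $6453.41/QALY, which comes from the full Markov model rather than this one-line division.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: dollars per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Base-case figures from the abstract: surgery costs $5066.19 and adds
# 0.79 QALY over a lifetime; the comparator values are an illustrative
# simplification, not the paper's modeled conservative-management arm.
ratio = icer(5066.19, 0.79, 0.0, 0.0)
cost_effective = ratio < 50_000  # the study's willingness-to-pay threshold
print(round(ratio, 2), cost_effective)
```

Probabilistic sensitivity analysis then amounts to recomputing this ratio over sampled costs and outcomes and reporting the fraction of trials below the threshold.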
Error minimizing algorithms for nearest neighbor classifiers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, Reid B; Hush, Don; Zimmer, G. Beate
2011-01-03
Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost-sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.
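The OHM framework itself is not reproduced here, but the idea of a cost-sensitive decision rule for low false alarm rates can be sketched with a plain k-nearest-neighbor vote: declare the target class only when the neighborhood evidence outweighs an asymmetric false-alarm cost. All data and cost values below are invented, and the rule is a generic stand-in for the paper's loss function.

```python
import numpy as np

# Toy 2-D data: class 0 = background near the origin, class 1 = target near (2, 2).
rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

def target_fraction(x, k=5):
    """Fraction of the k nearest training points belonging to class 1."""
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argsort(d)[:k]].mean()

def classify(x, false_alarm_cost=5.0, miss_cost=1.0, k=5):
    """Cost-sensitive decision: declare class 1 only when the expected cost
    of a miss exceeds the expected cost of a false alarm."""
    p1 = target_fraction(x, k)
    return int(miss_cost * p1 > false_alarm_cost * (1 - p1))

print(classify(np.array([2.0, 2.0])), classify(np.array([0.0, 0.0])))
```

Raising `false_alarm_cost` pushes the decision threshold up, trading detections for a lower false alarm rate, which is the regime the paper targets.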
Defining quality health care with outcomes assessment while achieving economic value.
Shaw, L J; Miller, D D
2000-02-01
The effectiveness of a procedure is increasingly guided by the evaluation of patient outcomes. Outcomes data are used to develop clinical pathways of care and to define appropriate resource-use levels without sacrificing quality of care. Integration of the economic implications of medical services into an outcome-based guideline allows for the development of disease-management strategies. In cardiovascular medicine, risk reduction is associated with high cost due to the "pay-back" of new technologies and therapies. A major challenge is to define a balance between "high tech" care and cost. This paper devises an outpatient evidence-based guideline using clinical and economic outcomes data for the diagnosis of coronary disease.
UAH/NASA Workshop on Space Science Platform
NASA Technical Reports Server (NTRS)
Wu, S. T. (Editor); Morgan, S. (Editor)
1978-01-01
The scientific user requirements for a space science platform were defined. The potential user benefits, technological implications and cost of space platforms were examined. Cost effectiveness of the platforms' capabilities were also examined.
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
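The paper's ILP and heuristics are not reproduced here, but the flavor of polling switch selection can be sketched as a greedy set cover: each flow can be counted at any switch on its path, and we poll as few switches as possible so that every flow is observed. The topology below is an invented toy, and greedy set cover is a generic stand-in, not the authors' algorithm.

```python
# Toy flow-to-path mapping: each flow may be polled at any switch on its path.
flows = {
    "f1": {"s1", "s2", "s4"},
    "f2": {"s2", "s3"},
    "f3": {"s3", "s4"},
    "f4": {"s1", "s5"},
}

def greedy_polling(flows):
    """Greedy set cover: repeatedly poll the switch that covers the most
    still-uncovered flows (sorted iteration keeps tie-breaking deterministic)."""
    uncovered = set(flows)
    polled = []
    while uncovered:
        best = max(
            sorted({s for path in flows.values() for s in path}),
            key=lambda s: sum(1 for f in uncovered if s in flows[f]),
        )
        polled.append(best)
        uncovered -= {f for f in uncovered if best in flows[f]}
    return polled

print(greedy_polling(flows))  # -> ['s1', 's3']
```

Jointly optimizing routing, as the paper does, goes further: it can reshape the path sets themselves so that fewer polling switches suffice.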
Solar thermal repowering systems integration. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubberly, L. J.; Gormely, J. E.; McKenzie, A. W.
1979-08-01
This report is a solar repowering integration analysis which defines the balance-of-plant characteristics and costs associated with the solar thermal repowering of existing gas/oil-fired electric generating plants. Solar repowering interface requirements for water/steam and salt- or sodium-cooled central receivers are defined for unit sizes ranging from 50 MWe non-reheat to 350 MWe reheat. Finally, balance-of-plant cost estimates are presented for each of six combinations of plant type, receiver type and percent solar repowering.
Cost-effectiveness of focused ultrasound, radiosurgery, and DBS for essential tremor.
Ravikumar, Vinod K; Parker, Jonathon J; Hornbeck, Traci S; Santini, Veronica E; Pauly, Kim Butts; Wintermark, Max; Ghanouni, Pejman; Stein, Sherman C; Halpern, Casey H
2017-08-01
Essential tremor remains a very common yet medically refractory condition. A recent phase 3 study demonstrated that magnetic resonance-guided focused ultrasound thalamotomy significantly improved upper limb tremor. The objectives of this study were to assess this novel therapy's cost-effectiveness compared with existing procedural options. Literature searches of magnetic resonance-guided focused ultrasound thalamotomy, DBS, and stereotactic radiosurgery for essential tremor were performed. Pre- and postoperative tremor-related disability scores were collected from 32 studies involving 83 magnetic resonance-guided focused ultrasound thalamotomies, 615 DBSs, and 260 stereotactic radiosurgery cases. Utility, defined as quality of life and derived from percent change in functional disability, was calculated; Medicare reimbursement was employed as a proxy for societal cost. Medicare reimbursement rates are not established for magnetic resonance-guided focused ultrasound thalamotomy for essential tremor; therefore, reimbursements were estimated to be approximately equivalent to stereotactic radiosurgery to assess a cost threshold. A decision analysis model was constructed to examine the most cost-effective option for essential tremor, implementing meta-analytic techniques. Magnetic resonance-guided focused ultrasound thalamotomy resulted in significantly higher utility scores compared with DBS (P < 0.001) or stereotactic radiosurgery (P < 0.001). Projected costs of magnetic resonance-guided focused ultrasound thalamotomy were significantly less than DBS (P < 0.001), but not significantly different from radiosurgery. Magnetic resonance-guided focused ultrasound thalamotomy is cost-effective for tremor compared with DBS and stereotactic radiosurgery and more effective than both. Even if longer follow-up finds changes in effectiveness or costs, focused ultrasound thalamotomy will likely remain competitive with both alternatives. 
© 2017 International Parkinson and Movement Disorder Society.
Hugo, Cherie; Isenring, Elisabeth; Miller, Michelle; Marshall, Skye
2018-05-01
Background: observational studies have shown that nutritional strategies to manage malnutrition may be cost-effective in aged care, but more robust economic data are needed to support and encourage translation to practice. Therefore, the aim of this systematic review is to compare the cost-effectiveness of implementing nutrition interventions targeting malnutrition in aged care homes versus usual care. Setting: residential aged care homes. Methods: systematic literature review of studies published between January 2000 and August 2017 across 10 electronic databases; the Cochrane Risk of Bias tool and GRADE were used to evaluate the quality of the studies. Results: eight included studies (3,098 studies initially screened) reported on 11 intervention groups, evaluating the effect of modifications to the dining environment (n = 1), supplements (n = 5) and food-based interventions (n = 5). Interventions had a low cost of implementation (<£2.30/resident/day) and provided clinical improvement for a range of outcomes including weight, nutritional status and dietary intake. Supplements and food-based interventions further demonstrated a low cost per quality-adjusted life year or unit of physical function improvement. GRADE assessment revealed that the quality of the body of evidence that introducing malnutrition interventions, whether environmental, supplement-based or food-based, is cost-effective in aged care homes was low. Conclusions: this review suggests that supplement and food-based nutrition interventions in the aged care setting are clinically effective, have a low cost of implementation and may be cost-effective at improving clinical outcomes associated with malnutrition. More studies using well-defined frameworks for economic analysis, stronger study designs with improved quality, and validated malnutrition measures are needed to confirm and increase confidence in these findings.
Study of wrap-rib antenna design
NASA Technical Reports Server (NTRS)
Wade, W. D.; Sinha, A.; Singh, R.
1979-01-01
The results of a parametric design study conducted to develop the significant characteristics and technology limitations of space deployable antenna systems with aperture sizes ranging from 50 up to 300 m and F/D ratios between 0.5 and 3.0 are presented. Wrap-rib type reflectors of both the prime and offset fed geometry and associated feed support structures were considered. The significant constraints investigated as limitations on achievable aperture were inherent manufacturability, orbit dynamic and thermal stability, antenna weight, and antenna stowed volume. The study produced a database defining the maximum achievable aperture size as a function of diameter, frequency and estimated cost.
Novel multiform morphologies of hydroxyapatite: Synthesis and growth mechanism
NASA Astrophysics Data System (ADS)
Mary, I. Reeta; Sonia, S.; Viji, S.; Mangalaraj, D.; Viswanathan, C.; Ponpandian, N.
2016-01-01
Morphological control of materials is a prodigious challenge because of the key role morphology plays in defining functional properties and desired applications. Herein, we report the synthesis of hydroxyapatite (HAp) microstructures with multiform morphologies, such as spheres, cubes, hexagonal rods and nested bundles, constructed from their respective nanoscale building blocks via a simple, cost-effective hydro/solvothermal method. A possible formation mechanism for the diverse morphologies of HAp is presented. Structural analysis based on X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy confirms the purity of the HAp microstructures. The multiform morphologies of HAp were corroborated by field emission scanning electron microscopy (FESEM).
Electrodynamic tether system study
NASA Technical Reports Server (NTRS)
1987-01-01
The purpose of this program is to define an Electrodynamic Tether System (ETS) that could be erected from the space station and/or platforms to function as an energy storage device. A schematic representation of the ETS concept mounted on the space station is presented. In addition to the hardware design and configuration efforts, studies are documented involving simulations of the Earth's magnetic field and the effects it has on overall system efficiency calculations. Also discussed are preliminary computer simulations of orbit perturbations caused by the cyclic day/night operations of the ETS. System cost estimates, an outline of future development testing for the ETS, and conclusions and recommendations are also provided.
The modernisation of general practice in the UK: 1980 to 1995 and beyond. Part I.
Iliffe, S.
1996-01-01
The UK is unusual in providing universal free healthcare in which access to specialists is largely controlled by general practitioners with 24-hour responsibility, throughout the year, for a defined list of patients of all ages. It is generally considered that this gatekeeper function has contributed to the relatively low cost of the National Health Service, but major changes in the organisation and clinical role of general practitioners have occurred, culminating in a new contract that aims to re-orientate general practice towards health promotion, disease prevention and the management of chronic disease. The implications of these changes are discussed. PMID:8733525
Neuro-evolutionary computing paradigm for Painlevé equation-II in nonlinear optics
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Ahmad, Sufyan; Awais, Muhammad; Ul Islam Ahmad, Siraj; Asif Zahoor Raja, Muhammad
2018-05-01
The aim of this study is to investigate the numerical treatment of the Painlevé equation-II arising in physical models of nonlinear optics through artificial intelligence procedures, incorporating a single-layer neural network structure optimized with genetic algorithms, sequential quadratic programming and active-set techniques. We constructed a mathematical model for the nonlinear Painlevé equation-II with the help of neural networks by defining an error-based cost function in the mean-square sense. The performance of the proposed technique is validated through statistical analyses by means of a one-way ANOVA test conducted on a dataset generated by a large number of independent runs.
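As a minimal illustration of such an error-based cost function (a sketch only: the network size, the sigmoid activation, and the Painlevé-II form y'' = 2y³ + xy + α are assumptions, and no optimizer is attached):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_and_derivs(x, params):
    # Single-layer network y(x) = sum_i w_i * sigmoid(a_i*x + b_i),
    # with analytic first and second derivatives of the activation.
    a, b, w = params
    z = np.outer(x, a) + b        # shape (N, H)
    s = sigmoid(z)
    s1 = s * (1 - s)              # sigmoid'
    s2 = s1 * (1 - 2 * s)         # sigmoid''
    return s @ w, (s1 * a) @ w, (s2 * a**2) @ w

def mse_cost(params, x, alpha=1.0):
    # Error-based cost in the mean-square sense: the residual of
    # Painleve II, y'' = 2*y**3 + x*y + alpha, at collocation points.
    y, _, d2y = nn_and_derivs(x, params)
    residual = d2y - 2 * y**3 - x * y - alpha
    return np.mean(residual ** 2)

rng = np.random.default_rng(0)
H = 10                                               # hidden neurons (assumed)
params = tuple(rng.normal(size=H) for _ in range(3))  # (a, b, w)
cost = mse_cost(params, np.linspace(0.0, 1.0, 50))    # non-negative scalar to minimize
```

In the paper this scalar is driven toward zero by genetic algorithms hybridized with sequential quadratic programming or active-set methods; any gradient-free or gradient-based optimizer over `params` fits the same pattern.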
Dynamic remapping of parallel computations with varying resource demands
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Saltz, J. H.
1986-01-01
A large class of computational problems is characterized by frequent synchronization and computational requirements that change as a function of time. When such a problem must be solved on a message-passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggest that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
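The fixed-interval result can be illustrated with a toy model (entirely hypothetical numbers: a per-step degradation that grows linearly since the last remap, and a fixed remapping cost C; this is not the paper's stochastic model, but W(n) then has a single interior minimum, as the abstract states):

```python
def W(n, C=50.0, drift=1.0):
    # Average degradation per step over an interval of n steps:
    # accumulated slowdown drift*(1 + 2 + ... + n) plus the one-time
    # remapping cost C, divided by the interval length n.
    degradation = drift * n * (n + 1) / 2.0
    return (degradation + C) / n

# Optimal fixed-interval policy: remap every best_n steps.
best_n = min(range(1, 100), key=W)   # here best_n == 10, i.e. ~ sqrt(2*C/drift)
```

The closed form W(n) = drift·(n+1)/2 + C/n makes the single minimum explicit; a runtime policy that estimates W(n) online and remaps when that estimate bottoms out recovers the decision rule described above.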
ICU Telemedicine Program Financial Outcomes.
Lilly, Craig M; Motzkus, Christine; Rincon, Teresa; Cody, Shawn E; Landry, Karen; Irwin, Richard S
2017-02-01
ICU telemedicine improves access to high-quality critical care, has substantial costs, and can change financial outcomes. Detailed information about financial outcomes and their trends over time following ICU telemedicine implementation and after the addition of logistic center function has not been published to our knowledge. Primary data were collected for consecutive adult patients of a single academic medical center. We compared clinical and financial outcomes across three groups that differed regarding telemedicine support: a group without ICU telemedicine support (pre-ICU intervention group), a group with ICU telemedicine support (ICU telemedicine group), and an ICU telemedicine group with added logistic center functions and support for quality-care standardization (logistic center group). The primary outcome was annual direct contribution margin defined as aggregated annual case revenue minus annual case direct costs (including operating costs of ICU telemedicine and its related programs). All monetary values were adjusted to 2015 US dollars using Producer Price Index for Health-Care Facilities. Annual case volume increased from 4,752 (pre-ICU telemedicine) to 5,735 (ICU telemedicine) and 6,581 (logistic center). The annual direct contribution margin improved from $7,921,584 (pre-ICU telemedicine) to $37,668,512 (ICU telemedicine) to $60,586,397 (logistic center) due to increased case volume, higher case revenue relative to direct costs, and shorter length of stay. The ability of properly modified ICU telemedicine programs to increase case volume and access to high-quality critical care with improved annual direct contribution margins suggests that there is a financial argument to encourage the wider adoption of ICU telemedicine. Copyright © 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
Pokhvisneva, Irina; Léger, Étienne; Gaudreau, Hélène; Steiner, Meir; Kennedy, James L.; O’Donnell, Kieran J.; Diorio, Josie; Meaney, Michael J.; Silveira, Patrícia P.
2017-01-01
Background: Fetal adversity, evidenced by poor fetal growth for instance, is associated with increased risk for several diseases later in life. Classical cut-offs to characterize small (SGA) and large for gestational age (LGA) newborns are used to define long-term vulnerability. We aimed to explore the possible dynamism of different birth weight cut-offs in defining vulnerability in developmental outcomes (through the Bayley Scales of Infant and Toddler Development), using the example of a gene vs. fetal adversity interaction, with gene choices based on functional relevance to the studied outcome. Methods: 36-month-old children from an established prospective birth cohort (Maternal Adversity, Vulnerability, and Neurodevelopment) were classified according to birth weight ratio (BWR) (SGA ≤0.85, LGA >1.15, exploring a wide range of other cut-offs) and genotyped for polymorphisms associated with dopamine signaling (TaqIA-A1 allele, DRD2-141C Ins/Ins, DRD4 7-repeat, DAT1 10-repeat, Met/Met-COMT), composing a score based on the described function, in which hypofunctional variants received lower scores. Results: There were 251 children (123 girls and 128 boys). Using the classic cut-offs (0.85 and 1.15), there were no statistically significant interactions between the neonatal groups and the dopamine genetic score. However, when changing the cut-offs, it is possible to see ranges of BWR that could be associated with vulnerability to poorer development according to the variation in dopamine function. Conclusion: The classic birth weight cut-offs used to define SGA and LGA newborns should be viewed with caution: depending on the outcome in question, protocols for long-term follow-up could be either too inclusive, and therefore more costly, or unable to screen true vulnerabilities, and therefore ineffective for establishing early interventions and primary prevention. PMID:28505190
Upton, J; Murphy, M; Shalloo, L; Groot Koerkamp, P W G; De Boer, I J M
2014-01-01
Our objective was to define and demonstrate a mechanistic model that enables dairy farmers to explore the impact of a technical or managerial innovation on electricity consumption, associated CO2 emissions, and electricity costs. We therefore (1) defined a model for electricity consumption on dairy farms (MECD) capable of simulating total electricity consumption, along with related CO2 emissions and electricity costs, on dairy farms on a monthly basis; (2) validated the MECD using 1 yr of empirical data from commercial spring-calving, grass-based dairy farms with 45, 88, and 195 milking cows; and (3) demonstrated the functionality of the model by applying 2 electricity tariffs to the electricity consumption data and examining the effect on total dairy farm electricity costs. The MECD was developed using a mechanistic modeling approach and required the key inputs of milk production, cow number, and details relating to the milk-cooling system, milking machine system, water-heating system, lighting systems, water pump systems, and the winter housing facilities, as well as details relating to the management of the farm (e.g., season of calving). Model validation showed an overall relative prediction error (RPE) of less than 10% for total electricity consumption. More than 87% of the mean square prediction error of total electricity consumption was accounted for by random variation. The RPE values of the milk-cooling systems, water-heating systems, and milking machine systems were less than 20%. The RPE values for automatic scraper systems, lighting systems, and water pump systems varied from 18 to 113%, indicating a poor prediction for these metrics. However, automatic scrapers, lighting, and water pumps made up only 14% of total electricity consumption across all farms, reducing the overall impact of these poor predictions.
Demonstration of the model showed that total farm electricity costs increased by between 29 and 38% by moving from a day and night tariff to a flat tariff. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
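The relative prediction error reported above is, under the definition standard in dairy-systems modeling (an assumption; the paper's exact scaling is not restated in the abstract), the root mean square prediction error expressed as a percentage of the observed mean:

```python
import numpy as np

def rpe(observed, predicted):
    # Relative prediction error (%): RMSPE / mean(observed) * 100.
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmspe = np.sqrt(np.mean((observed - predicted) ** 2))
    return 100.0 * rmspe / np.mean(observed)

# e.g. hypothetical monthly electricity use (kWh), observed vs model output;
# a value below 10 would meet the paper's overall-consumption criterion.
print(rpe([1200, 1100, 950], [1150, 1120, 1000]))
```

A per-subsystem RPE (milk cooling, water heating, etc.) is just this function applied to each subsystem's observed and predicted series separately.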
Gordon, N S
1994-02-01
The cost of a transurethral resection of the prostate is of considerable concern to the community. More of these procedures are being performed as the number of patients in the aged population increases. The costs of wages and salaries, purchase of equipment and depreciation, stationery, linen, investigations (pathology) and pharmaceuticals are compared with the bed charges (as charged to a private patient), the cost per inpatient day and the cost per inpatient treated, which is calculated from the operating fund budget expenditure of The Bendigo Hospital. The cost per diagnosis related group (DRG) 336 (defined as: transurethral prostatectomy, age greater than 69 and/or complication/co-morbidity; mean length of stay 7.0; relative weight = 0.9869) and DRG 337 (defined as: transurethral prostatectomy, age less than 70 without complication/co-morbidity; mean length of stay 5.8; relative weight = 0.7788) are compared with the figures for a similar procedure in 1987 in a United States hospital and extrapolated, by the use of the Consumer Price Index, to 1992 levels. The findings demonstrate that transurethral resection of the prostate as costed in this hospital compares very favourably with that in a US hospital, and favourably from the point of view of health care costs.
Hitting the Optimal Vaccination Percentage and the Risks of Error: Why to Miss Right.
Harvey, Michael J; Prosser, Lisa A; Messonnier, Mark L; Hutton, David W
2016-01-01
The objective was to determine the optimal level of vaccination coverage, defined as the level that minimizes total costs, and to explore how economic results change with marginal changes to this level of coverage. A susceptible-infected-recovered-vaccinated model designed to represent theoretical infectious diseases was created to simulate disease spread. Parameter inputs were defined to include ranges that could represent a variety of possible vaccine-preventable conditions. Costs included vaccine costs and disease costs. Health benefits were quantified as monetized quality-adjusted life years lost from disease. Primary outcomes were the number of infected people and the total costs of vaccination. Optimization methods were used to determine the population vaccination coverage that achieved minimum cost given disease and vaccine characteristics. Sensitivity analyses explored the effects of changes in reproductive rates, costs, and vaccine efficacies on primary outcomes. Further analysis examined the additional cost incurred if the optimal coverage levels were not achieved. Results indicate that the relationship between vaccine and disease cost is the main driver of the optimal vaccination level. Under a wide range of assumptions, vaccination beyond the optimal level is less expensive than vaccination below the optimal level. This observation did not hold when the cost of the vaccine became approximately equal to the cost of disease. These results suggest that vaccination below the optimal level of coverage is more costly than vaccinating beyond the optimal level. This work helps provide information for assessing the impact of changes in vaccination coverage at a societal level.
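A minimal version of this optimization can be sketched with a discrete-time SIR model plus up-front vaccination (all parameters here: population size, R0, infectious period, and both unit costs, are illustrative assumptions, not the paper's values):

```python
import numpy as np

def total_cost(coverage, n=10_000, r0=2.5, infectious_days=10,
               vaccine_cost=10.0, disease_cost=500.0, days=365):
    # Discrete-time SIR with a fraction `coverage` vaccinated at t = 0.
    gamma = 1.0 / infectious_days
    beta = r0 * gamma
    s = max(n * (1.0 - coverage) - 1.0, 0.0)   # susceptibles after one seed case
    i = 1.0
    infected_total = i
    for _ in range(days):
        new_inf = beta * s * i / n
        s -= new_inf
        i += new_inf - gamma * i
        infected_total += new_inf
    # Total cost = vaccination spending + monetized disease burden.
    return coverage * n * vaccine_cost + infected_total * disease_cost

grid = np.linspace(0.0, 1.0, 101)
costs = [total_cost(c) for c in grid]
optimal = grid[int(np.argmin(costs))]   # interior minimum near the herd-immunity threshold
```

In this sketch vaccine spending grows only linearly above the optimum while infections grow steeply below it, so the cost curve is asymmetric around the minimum, which is the "miss right" intuition of the title.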
Research requirements to reduce civil helicopter life cycle cost
NASA Technical Reports Server (NTRS)
Blewitt, S. J.
1978-01-01
The problem of the high cost of helicopter development, production, operation, and maintenance is defined and the cost drivers are identified. Helicopter life cycle costs would decrease by about 17 percent if currently available technology were applied; with advanced technology, a reduction of about 30 percent is projected. Technological and managerial deficiencies that contribute to high costs are examined. Basic research and development projects that can reduce costs include methods for reduced fuel consumption, improved turbine engines, airframe and engine production methods, safety, rotor systems, and advanced transmission systems.
NASA Astrophysics Data System (ADS)
Chaizy, P. A.; Dimbylow, T. G.; Allan, P. M.; Hapgood, M. A.
2011-09-01
This paper is one of the components of a larger framework of activities whose purpose is to improve the performance and productivity of space mission systems, i.e. to increase both what can be achieved and the cost effectiveness of this achievement. Some of these activities introduced the concept of Functional Architecture Module (FAM); FAMs are basic blocks used to build the functional architecture of Plan Management Systems (PMS). They also highlighted the need to involve Science Operations Planning Expertise (SOPE) during the Mission Design Phase (MDP) in order to design and implement efficiently operation planning systems. We define SOPE as the expertise held by people who have both theoretical and practical experience in operations planning, in general, and in space science operations planning in particular. Using ESA's methodology for studying and selecting science missions we also define the MDP as the combination of the Mission Assessment and Mission Definition Phases. However, there is no generic procedure on how to use FAMs efficiently and systematically, for each new mission, in order to analyse the cost and feasibility of new missions as well as to optimise the functional design of new PMS; the purpose of such a procedure is to build more rapidly and cheaply such PMS as well as to make the latter more reliable and cheaper to run. This is why the purpose of this paper is to provide an embryo of such a generic procedure and to show that the latter needs to be applied by people with SOPE during the MDP. The procedure described here proposes some initial guidelines to identify both the various possible high level functional scenarii, for a given set of possible requirements, and the information that needs to be associated with each scenario. It also introduces the concept of catalogue of generic functional scenarii of PMS for space science missions. 
The information associated with each catalogued scenario will have been identified by the above procedure and will be relevant only for some specific mission requirements. In other words, each mission that shares the same type of requirements that lead to a list of specific catalogued scenarii can use this latter list of scenarii (regardless of whether the mission is a plasma, planetary, astronomy, etc. mission). The main advantages of such a catalogue are that it speeds up the execution of the procedure and makes the latter more reliable. Ultimately, the information associated with each relevant scenario (from the catalogue or freshly generated by the procedure) will then be used by mission designers to make informed decisions, including the modification of the mission requirements, for any mission. In addition, to illustrate the use of such a procedure, the latter is applied to a case study, i.e. the Cross-Scale mission. One of the outcomes of this study is an initial set of generic functional scenarii. Finally, although borderline with the above purpose of this paper, we also discuss multi-spacecraft specific issues and issues related to the on-board execution of the plan update system (PUS). In particular, we show that the operation planning cost of N spacecraft is not equal to N times the cost of 1 spacecraft, and that on-board non-synchronised operation will not require inter-spacecraft communication. We also believe that an on-board PUS should be made possible for all missions as a standard.
ERIC Educational Resources Information Center
Reeder, Brian
2004-01-01
Standard & Poor's (S&P) uses a measure it calls the Performance Cost Index (PCI) as its measure of a school or district's "Return on Resources". The Performance Cost Index is defined as the average cost per measured "unit" of student performance. In its simplest form, the Performance Cost Index is calculated as per-student expenditures divided by…
Code of Federal Regulations, 2013 CFR
2013-07-01
... appropriate en route travel time. [FTR Amdt. 70, 63 FR 15971, Apr. 1, 1998. Redesignated by FTR Amdt. 108, 67... cost and constructive cost when an employee interrupts a travel assignment because of an incapacitating illness or injury? 301-70.506 Section 301-70.506 Public Contracts and Property Management Federal Travel...
Code of Federal Regulations, 2012 CFR
2012-07-01
... appropriate en route travel time. [FTR Amdt. 70, 63 FR 15971, Apr. 1, 1998. Redesignated by FTR Amdt. 108, 67... cost and constructive cost when an employee interrupts a travel assignment because of an incapacitating illness or injury? 301-70.506 Section 301-70.506 Public Contracts and Property Management Federal Travel...
Code of Federal Regulations, 2010 CFR
2010-07-01
... appropriate en route travel time. [FTR Amdt. 70, 63 FR 15971, Apr. 1, 1998. Redesignated by FTR Amdt. 108, 67... cost and constructive cost when an employee interrupts a travel assignment because of an incapacitating illness or injury? 301-70.506 Section 301-70.506 Public Contracts and Property Management Federal Travel...
Code of Federal Regulations, 2014 CFR
2014-07-01
... appropriate en route travel time. [FTR Amdt. 70, 63 FR 15971, Apr. 1, 1998. Redesignated by FTR Amdt. 108, 67... cost and constructive cost when an employee interrupts a travel assignment because of an incapacitating illness or injury? 301-70.506 Section 301-70.506 Public Contracts and Property Management Federal Travel...
Code of Federal Regulations, 2011 CFR
2011-07-01
... appropriate en route travel time. [FTR Amdt. 70, 63 FR 15971, Apr. 1, 1998. Redesignated by FTR Amdt. 108, 67... cost and constructive cost when an employee interrupts a travel assignment because of an incapacitating illness or injury? 301-70.506 Section 301-70.506 Public Contracts and Property Management Federal Travel...
Space construction system analysis. Part 2: Cost and programmatics
NASA Technical Reports Server (NTRS)
Vonflue, F. W.; Cooper, W.
1980-01-01
Cost and programmatic elements of the space construction systems analysis study are discussed. The programmatic aspects of the ETVP program define a comprehensive plan for the development of a space platform, the construction system, and the space shuttle operations/logistics requirements. The cost analysis identified significant items of cost in the ETVP development, ground, and flight segments, and detailed the items of space construction equipment and operations.
Kloss, S; Müller, U; Oelschläger, H
2005-09-01
Facilities for the manufacturing of pharmaceutical drug substances on the pilot-plant and the industrial scale, as well as chemical reactors and vessels used for chemical work-up, mainly consist of alloyed stainless steel. The influence of the alloy composition and the surface condition, i.e. the roughness of the stainless-steel materials, on the adsorption of structurally diverse steroidal substances, and hence on the quality of the products, was studied. In general, stainless-steel alloys with smooth, low-roughness surfaces are to be favored as reactor material. However, it was demonstrated in this study that, on account of the weak interaction between active substances and steel materials, mechanically polished materials of a medium roughness up to approx. 0.4 microm can be employed instead of the considerably more cost-intensive electrochemically polished stainless-steel surfaces. The type of surface finishing up to a defined roughness then has no influence on the quality of these pharmaceutical products. Substances that, because of their molecular structure, can function as "anions" in the presence of polar solvents are adsorbed on very smooth surfaces prepared by electrochemical methods, forming an amorphous surface film. For substances with these structural characteristics, the lower-cost mechanically polished reactor materials of a medium roughness up to approx. 0.5 microm should be used exclusively.
Estimating the cost of epilepsy in Europe: a review with economic modeling.
Pugliatti, Maura; Beghi, Ettore; Forsgren, Lars; Ekman, Mattias; Sobocki, Patrik
2007-12-01
Based on available epidemiologic, health economic, and international population statistics literature, the cost of epilepsy in Europe was estimated. Europe was defined as the 25 European Union member countries, Iceland, Norway, and Switzerland. Guidelines for epidemiological studies on epilepsy were used for a case definition. A bottom-up prevalence-based cost-of-illness approach, the societal perspective for including the cost items, and the human capital approach as the valuation principle for indirect costs were used. The cost estimates were based on selected studies with common methodology and valuation principles. The estimated prevalence of epilepsy in Europe in 2004 was 4.3-7.8 per 1,000. The estimated total cost of the disease in Europe was €15.5 billion in 2004, indirect cost being the single most dominant cost category (€8.6 billion). Direct health care costs were €2.8 billion, outpatient care comprising the largest part (€1.3 billion). Direct nonmedical cost was €4.2 billion. The cost of antiepileptic drugs was €400 million. The total cost per case was €2,000-11,500 and the estimated cost per European inhabitant was €33. Epilepsy is a relevant socioeconomic burden at the individual, family, health services, and societal levels in Europe. The greater proportion of this burden falls outside the formal health care sector, with antiepileptic drugs representing a smaller proportion. Lack of economic data from several European countries and other methodological limitations make this report an initial estimate of the cost of epilepsy in Europe. Prospective incidence cost-of-illness studies from well-defined populations with common methodology are encouraged.
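The headline figures are mutually consistent under the bottom-up, prevalence-based arithmetic the abstract describes (the population figure and the prevalence midpoint below are assumptions for illustration, not values from the abstract):

```python
# Bottom-up, prevalence-based cost-of-illness arithmetic (2004 figures).
population = 466e6        # assumed EU25 + Iceland/Norway/Switzerland population
prevalence = 6.0e-3       # assumed midpoint of the 4.3-7.8 per 1,000 range
total_cost = 15.5e9       # EUR, from the abstract

cases = population * prevalence
per_inhabitant = total_cost / population   # close to the reported EUR 33
per_case = total_cost / cases              # falls inside the EUR 2,000-11,500 range
```

The per-case figure varies across the reported range as the prevalence assumption moves between 4.3 and 7.8 per 1,000, which is why the abstract quotes it as an interval.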
Stahl, Christopher C; Wima, Koffi; Hanseman, Dennis J; Hoehn, Richard S; Ertel, Audrey; Midura, Emily F; Hohmann, Samuel F; Paquette, Ian M; Shah, Shimul A; Abbott, Daniel E
2015-12-01
The desire to provide cost-effective care has led to an investigation of the costs of therapy for end-stage renal disease. Organ quality metrics are one way to attempt to stratify kidney transplants, although the ability of these metrics to predict costs and resource use is undetermined. The Scientific Registry of Transplant Recipients database was linked to the University HealthSystem Consortium Database to identify adult deceased donor kidney transplant recipients from 2009 to 2012. Patients were divided into cohorts by kidney criteria (standard vs expanded) or kidney donor profile index (KDPI) score (<85 vs 85+). Length of stay, 30-day readmission, discharge disposition, and delayed graft function were used as indicators of resource use. Cost was defined as reimbursement based on Medicare cost/charge ratios and included the costs of readmission when applicable. More than 19,500 patients populated the final dataset. Lower-quality kidneys (expanded criteria donor or KDPI 85+) were more likely to be transplanted in older (both P < .001) and diabetic recipients (both P < .001). After multivariable analysis controlling for recipient characteristics, we found that expanded criteria donor transplants were not associated with increased costs compared with standard criteria donor transplants (risk ratio [RR] 0.97, 95% confidence interval [CI] 0.93-1.00, P = .07). KDPI 85+ was associated with slightly lower costs than KDPI <85 transplants (RR 0.95, 95% CI 0.91-0.99, P = .02). When KDPI was considered as a continuous variable, the association was maintained (RR 0.9993, 95% CI 0.9990-0.9998, P = .01). Organ quality metrics are less influential predictors of short-term costs than recipient factors. Future studies should focus on recipient characteristics as a way to discern high- versus low-cost transplantation procedures. Copyright © 2015 Elsevier Inc. All rights reserved.
32 CFR 3.8 - DoD access to records policy.
Code of Federal Regulations, 2011 CFR
2011-07-01
... for defined payable milestones, with no provision for financial or cost reporting that would be a... necessary to verify statutory cost share or to verify amounts generated from financial or cost records that... General access. (1) Fixed-price type OT agreements. (i) General—DoD access to records is not generally...
32 CFR 3.8 - DoD access to records policy.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for defined payable milestones, with no provision for financial or cost reporting that would be a... necessary to verify statutory cost share or to verify amounts generated from financial or cost records that... General access. (1) Fixed-price type OT agreements. (i) General—DoD access to records is not generally...
48 CFR 15.406-2 - Certificate of current cost or pricing data.
Code of Federal Regulations, 2011 CFR
2011-10-01
... knowledge and belief, the cost or pricing data (as defined in section 2.101 of the Federal Acquisition... a representation as to the accuracy of the contractor's judgment on the estimate of future costs or... current data, the contractor's responsibility is not limited by any lack of personal knowledge of the...
48 CFR 15.406-2 - Certificate of current cost or pricing data.
Code of Federal Regulations, 2012 CFR
2012-10-01
... knowledge and belief, the cost or pricing data (as defined in section 2.101 of the Federal Acquisition... a representation as to the accuracy of the contractor's judgment on the estimate of future costs or... current data, the contractor's responsibility is not limited by any lack of personal knowledge of the...
42 CFR 412.84 - Payment for extraordinarily high-cost cases (cost outliers).
Code of Federal Regulations, 2011 CFR
2011-10-01
... Payments for Outlier Cases, Special Treatment Payment for New Technology, and Payment Adjustment for... circumstances: (i) New hospitals that have not yet submitted their first Medicare cost report. (For this purpose, a new hospital is defined as an entity that has not accepted assignment of an existing hospital's...
2007-03-01
... of the project, and the Weighted Average Cost of Capital (WACC). WACC is defined as the after-tax marginal cost of capital (Copeland & Antikarov...). Notation: Initial Investment; t = Life Expectancy of Project (Start = 1 to Finish = N); E(FCF) = Expected Free-Cash Flow; WACC = Weighted Average Cost of Capital.
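The variables listed in this fragment are the usual ingredients of a discounted cash-flow valuation in which WACC serves as the discount rate (a standard-formula sketch; the report's exact model is not shown in the excerpt):

```python
def npv(initial_investment, expected_fcf, wacc):
    # NPV = -I + sum over t = 1..N of E(FCF_t) / (1 + WACC)**t
    return -initial_investment + sum(
        fcf / (1.0 + wacc) ** t
        for t, fcf in enumerate(expected_fcf, start=1))

# Two years of expected free cash flow of 60 against an outlay of 100,
# discounted at a 10% weighted average cost of capital:
value = npv(100.0, [60.0, 60.0], wacc=0.10)   # ~ 4.13
```

A positive NPV at the WACC indicates the project is expected to earn more than its after-tax marginal cost of capital.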
EFAB Report: Green Infrastructure Operations and Maintenance Finance
In this report, EFAB defines green infrastructure, outlines the benefits of green infrastructure, introduces green infrastructure operations and maintenance costs, and identifies and evaluates diverse ways to fund/finance green infrastructure O&M costs.
42 CFR 447.64 - Alternative premiums, enrollment fees, or similar fees: State plan requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS PAYMENTS FOR... cost sharing under Medicaid, defined at § 447.78, track beneficiaries' incurred premiums and cost...
Code of Federal Regulations, 2010 CFR
2010-10-01
... FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL... defined at § 447.78, track beneficiaries' incurred premiums and cost sharing through a mechanism developed...
42 CFR 447.64 - Alternative premiums, enrollment fees, or similar fees: State plan requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS PAYMENTS FOR... cost sharing under Medicaid, defined at § 447.78, track beneficiaries' incurred premiums and cost...
Code of Federal Regulations, 2011 CFR
2011-10-01
... FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL... defined at § 447.78, track beneficiaries' incurred premiums and cost sharing through a mechanism developed...
42 CFR 417.454 - Charges to Medicare enrollees.
Code of Federal Regulations, 2011 CFR
2011-10-01
... § 410.152(l)). (e) Services for which cost sharing may not exceed cost sharing under original Medicare... nursing care defined as services provided during a covered stay in a skilled nursing facility during the...
Context and competition in the capture of visual attention.
Hickey, Clayton; Theeuwes, Jan
2011-10-01
Competition-based models of visual attention propose that perceptual ambiguity is resolved through inhibition, which is stronger when objects share a greater number of neural receptive fields (RFs). According to this theory, the misallocation of attention to a salient distractor (that is, the capture of attention) can be indexed in RF-scaled interference costs. We used this pattern to investigate distractor-related costs in visual search across several manipulations of temporal context. Distractor costs are generally larger under circumstances in which the distractor can be defined by features that have recently characterised the target, suggesting that capture occurs in these trials. However, our results show that search for a target in the presence of a salient distractor also produces RF-scaled costs when the features defining the target and distractor do not vary from trial to trial. Contextual differences in distractor costs appear to reflect something other than capture, perhaps a qualitative difference in the type of attentional mechanism deployed to the distractor.
Study of utilization of advanced composites in fuselage structures of large transports
NASA Technical Reports Server (NTRS)
Jackson, A. C.; Campion, M. C.; Pei, G.
1984-01-01
The effort required by the transport aircraft manufacturers to support the introduction of advanced composite materials into the fuselage structure of future commercial and military transport aircraft is investigated. Technology issues, potential benefits to military life cycle costs and commercial operating costs, and development plans are examined. The most urgent technology issues defined are impact dynamics, acoustic transmission, pressure containment and damage tolerance, post-buckling, cutouts, and joints and splices. A technology demonstration program is defined and a rough cost and schedule identified. The fabrication and test of a full-scale fuselage barrel section is presented. Commercial and military benefits are identified. Fuselage structure weight savings from use of advanced composites are 16.4 percent for the commercial and 21.8 percent for the military. For the all-composite airplanes the savings are 26 percent and 29 percent, respectively. Commercial/operating costs are reduced by 5 percent for the all-composite airplane and military life cycle costs by 10 percent.
NASA Astrophysics Data System (ADS)
Richardson, M.; Kumar, P.
2016-12-01
The critical zone (CZ) includes the biophysical processes occurring from the top of the vegetation canopy to the weathering zone below the groundwater table. CZ services provide a measure for the goods and benefits derived from CZ processes. In intensively managed landscapes (IML), the provisioning, supporting, and regulating services are altered through anthropogenic energy inputs to derive more productivity, as agricultural products, from these landscapes than would be possible under natural conditions. However, the energy or cost equivalents of alterations to CZ functions within landscape profiles are unknown. The valuation of CZ services in energy or monetary terms provides a more concrete tool for characterizing seemingly abstract environmental damages from agricultural production systems. A multi-layer canopy-root-soil model is combined with nutrient and water flux models to simulate the movement of nutrients throughout the soil system. These data enable the measurement of agricultural anthropogenic impacts to the CZ's nutrient cycling supporting services and atmospheric stabilizing regulating services defined by the flux of carbon and nutrients. Such measurements include soil carbon storage, soil carbon respiration, nitrate leaching, and nitrous oxide flux into the atmosphere. Additionally, the socioeconomic values of corn feed and ethanol define the primary productivity supporting services of each crop use. In the debate between feed production and corn-based ethanol production, measured nutrient CZ services can cost up to four times more than traditionally estimated CO2 equivalences for the entire bioenergy production system. Energy efficiency, in addition to environmental impacts, demonstrates how the inclusion of CZ services is necessary in accounting for the entire life cycle of agricultural production systems. These results indicate that feed production systems are more energy efficient and less environmentally costly than corn-based ethanol systems.
Vasilyev, K N
2013-01-01
When developing new software products and adapting existing software, project leaders have to decide which functionalities to keep, adapt or develop. They have to consider that the cost of making errors during the specification phase is extremely high. In this paper a formalised approach is proposed that considers the main criteria for selecting new software functions. The application of this approach minimises the chances of making errors in selecting the functions to apply. Based on the work on software development and support projects in the area of water resources and flood damage evaluation in economic terms at CH2M HILL (the developers of the flood modelling package ISIS), the author has defined seven criteria for selecting functions to be included in a software product. The approach is based on the evaluation of the relative significance of the functions to be included into the software product. Evaluation is achieved by considering each criterion in turn, together with its weighting coefficient, and applying the method of normalisation. This paper includes a description of this new approach and examples of its application in the development of new software products in the area of water resources management.
What Do Cost Functions Tell Us about the Cost of an Adequate Education?
ERIC Educational Resources Information Center
Costrell, Robert M.; Hanushek, Eric; Loeb, Susanna
2008-01-01
Econometric cost functions have begun to appear in education adequacy cases with greater frequency. Cost functions are superficially attractive because they give the impression of objectivity, holding out the promise of scientifically estimating the cost of achieving specified levels of performance from actual data on spending. By contrast, the…
Yield and cost of individual common diagnostic tests in new primary care outpatients in Japan.
Takemura, Yuzuru; Ishida, Haku; Inoue, Yuji; Beck, J Robert
2002-01-01
Appropriate diagnostic testing involves considerations of cost-effectiveness. We examined the cost-effectiveness of individual tests in a panel of tests defined by the Japan Society of Clinical Pathology. We studied 540 new, symptomatic primary care outpatients with a set of 30 common diagnostic tests [the Essential Laboratory Tests (2); ELT(2) panel] for clinical evaluation and identification of occult disease. A useful result (UR) of testing was defined as a finding that contributed to a change in a physician's diagnosis or decision-making relating to a "tentative initial diagnosis" obtained from history and physical examination alone. The ELT(2) panel testing yielded 398 URs and uncovered 261 occult diseases among 540 patients. In total, 1592 tests contributed to either UR-generation or discovery of occult disease. The cost per effective test (cost required per test that contributed to either definition of effectiveness) ranged from 108 yen (approximately 0.92 US dollars) for total cholesterol to 6200 yen (approximately 52.50 dollars) for chest x-ray. Contribution rates and the cost per effective test varied among disease categories. We restructured panel components considering the effectiveness of each test. Subsets of the ELT(2) would have improved cost-effectiveness and achieved cost savings in five of eight disease categories. Assembly of tests based on cost-effectiveness can improve clinical efficiency and decrease total cost of panel testing for selected patient groups.
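The cost-per-effective-test metric described above is a simple ratio: total spending on a test across all patients, divided by the number of orders that contributed a useful result or uncovered occult disease. A minimal sketch of the calculation (the unit costs and counts below are invented for illustration, not taken from the study):

```python
def cost_per_effective_test(unit_cost, n_ordered, n_effective):
    """Total spend on a test across all patients, divided by the number
    of orders that yielded a useful result or uncovered occult disease."""
    return unit_cost * n_ordered / n_effective

# Invented illustration: a cheap, frequently informative test versus an
# expensive, rarely informative one (figures are not from the study).
cheap = cost_per_effective_test(30, 500, 125)      # 120.0 yen per effective test
expensive = cost_per_effective_test(450, 500, 50)  # 4500.0 yen per effective test
print(cheap, expensive)
```

Restructuring a panel then amounts to dropping components whose cost per effective test stays high within a given disease category.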
Productivity costs decrease after endoscopic sinus surgery for refractory chronic rhinosinusitis.
Rudmik, Luke; Smith, Timothy L; Mace, Jess C; Schlosser, Rodney J; Hwang, Peter H; Soler, Zachary M
2016-03-01
The primary objective of this pilot study was to define the change in productivity costs following endoscopic sinus surgery (ESS) for chronic rhinosinusitis (CRS). Secondary objectives were to identify CRS-related characteristics that may influence the degree of productivity improvement after ESS. Prospective, multi-institutional, observational cohort study. The human capital approach was used to define productivity costs. Annual absenteeism, presenteeism, and lost leisure time were quantified to define annual lost productive time (LPT). LPT was monetized using the annual daily wage rates obtained from the 2012 US Census and the 2013 US Department of Labor statistics. Twenty-seven patients with refractory CRS who underwent ESS were followed for a mean of 15 months (range, 8-25 months). Following ESS, there were improvements in annual absenteeism (22 days reduced to 3 days), annual presenteeism (41 days reduced to 19 days), and annual household days lost (12 days reduced to 6 days). Overall, the preoperative productivity costs were reduced after ESS ($9,190 vs. $3,373, respectively; P < .001). Daily productivity is negatively impacted by the presence of CRS. The outcomes from this study provide the first insights into the reduced productivity costs associated with receiving ESS for refractory CRS. Future studies with larger sample sizes will need to validate the results from this pilot study. 2c Laryngoscope, 126:570-574, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
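The human capital approach used in this study monetizes lost productive time (LPT) as days lost multiplied by a daily wage rate. The sketch below uses the pre- and post-operative day counts reported in the abstract, but the daily wage figure is an assumption for illustration, so the totals only approximate the reported costs:

```python
def annual_productivity_cost(absenteeism, presenteeism, household_days, daily_wage):
    """Human capital approach: annual lost productive time (days/year)
    monetized at a daily wage rate."""
    return (absenteeism + presenteeism + household_days) * daily_wage

DAILY_WAGE = 120.0  # assumed daily wage in USD, for illustration only

pre = annual_productivity_cost(22, 41, 12, DAILY_WAGE)  # before ESS
post = annual_productivity_cost(3, 19, 6, DAILY_WAGE)   # after ESS
print(pre, post)  # 9000.0 3360.0
```

With the day counts from the abstract, any positive wage rate yields the same qualitative result: productivity costs fall substantially after ESS.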
Productivity Costs Decrease After Endoscopic Sinus Surgery for Refractory Chronic Rhinosinusitis
Rudmik, Luke; Smith, Timothy L.; Mace, Jess C.; Schlosser, Rodney J.; Hwang, Peter H.; Soler, Zachary M.
2015-01-01
Objective The primary objective of this pilot study was to define the change in productivity costs following endoscopic sinus surgery (ESS) for chronic rhinosinusitis (CRS). Secondary objectives were to identify CRS-related characteristics that may influence the degree of productivity improvement after ESS. Study Design Prospective, multi-institutional, observational cohort study. Methods The human capital approach was used to define productivity costs. Annual absenteeism, presenteeism, and lost leisure time were quantified to define annual lost productive time (LPT). LPT was monetized using the annual daily wage rates obtained from the 2012 US National Census and the 2013 US Department of Labor statistics. Results 27 patients with refractory CRS who underwent ESS were followed for a mean of 15 [SD 4.0] months (range: 8 – 25 months). Following ESS, there were improvements in annual absenteeism (22 days reduced to 3 days), annual presenteeism (41 days reduced to 19 days), and annual household days lost (12 days reduced to 6 days). Overall, the preoperative productivity costs were reduced after ESS, $9,097 vs. $3,301, respectively (p<0.001). Conclusion Daily productivity is negatively impacted by the presence of CRS. The outcomes from this study provide the first insights into the reduced productivity costs associated with receiving ESS for refractory CRS. Future studies with larger sample sizes will need to validate the results from this pilot study. PMID:26371457
The Affordable Care Act and health insurance exchanges: effects on the pediatric dental benefit.
Orynich, C Ashley; Casamassimo, Paul S; Seale, N Sue; Reggiardo, Paul; Litch, C Scott
2015-01-01
To examine the relationship between state health insurance Exchange selection and pediatric dental benefit design, regulation and cost. Medical and dental plans were analyzed across three types of state health insurance Exchanges: State-based (SB), State-partnered (SP), and Federally-facilitated (FF). Cost-analysis was completed for 10,427 insurance plans, and health policy expert interviews were conducted. One-way ANOVA compared the cost-sharing structure of stand-alone dental plans (SADP). T-test statistics compared differences in average total monthly pediatric premium costs. No causal relationships were identified between Exchange selection and the pediatric dental benefit's design, regulation or cost. Pediatric medical and dental coverage offered through the embedded plan design exhibited comparable average total monthly premium costs to aggregate cost estimates for the separately purchased SADP and traditional medical plan (P=0.11). Plan designs and regulatory policies demonstrated greater correlation between the SP and FF Exchanges, as compared to the SB Exchange. Parameters defining the pediatric dental benefit are complex and vary across states. Each state Exchange faced barriers to improving the quality of the pediatric dental benefit owing to a lack of defined, standardized policy parameters; further legislative maturation is required.
Subacute and non-acute casemix in Australia.
Lee, L A; Eagar, K M; Smith, M C
1998-10-19
The costs of subacute care (palliative care, rehabilitation medicine, psychogeriatrics, and geriatric evaluation and management) and non-acute care (nursing home, convalescent and planned respite care) are not adequately described by existing casemix classifications. The predominant treatment goals in subacute care are enhancement of quality of life and/or improvement in functional status and, in non-acute care, maintenance of current health and functional status. A national classification system for this area has now been developed--the Australian National Sub-Acute and Non-Acute Patient Classification System (AN-SNAP). The AN-SNAP system, based on analysis of over 30,000 episodes of care, defines four case types of subacute care (palliative care, rehabilitation, psychogeriatric care, and geriatric evaluation and management) and one case type of non-acute care (maintenance care), and classifies both overnight and ambulatory care. The AN-SNAP system reflects the goal of management--a change in functional status or improvement in quality of life--rather than the patient's diagnosis. It will complement the existing AN-DRG classification.
Civilian Radioactive Waste Management System Requirements Document
DOE Office of Scientific and Technical Information (OSTI.GOV)
C.A. Kouts
2006-05-10
The CRD addresses the requirements of Department of Energy (DOE) Order 413.3-Change 1, ''Program and Project Management for the Acquisition of Capital Assets'', by providing the Secretarial Acquisition Executive (Level 0) scope baseline and the Program-level (Level 1) technical baseline. The Secretarial Acquisition Executive approves the Office of Civilian Radioactive Waste Management's (OCRWM) critical decisions and changes against the Level 0 baseline; and in turn, the OCRWM Director approves all changes against the Level 1 baseline. This baseline establishes the top-level technical scope of the CRWMS and its three system elements, as described in section 1.3.2. The organizations responsible for design, development, and operation of system elements described in this document must therefore prepare subordinate project-level documents that are consistent with the CRD. Changes to requirements will be managed in accordance with established change and configuration control procedures. The CRD establishes requirements for the design, development, and operation of the CRWMS. It specifically addresses the top-level governing laws and regulations (e.g., ''Nuclear Waste Policy Act'' (NWPA), 10 Code of Federal Regulations (CFR) Part 63, 10 CFR Part 71, etc.) along with specific policy, performance requirements, interface requirements, and system architecture. The CRD shall be used as a vehicle to incorporate specific changes in technical scope or performance requirements that may have significant program implications. Such changes may include changes to the program mission, changes to operational capability, and high visibility stakeholder issues. The CRD uses a systems approach to: (1) identify key functions that the CRWMS must perform, (2) allocate top-level requirements derived from statutory, regulatory, and programmatic sources, and (3) define the basic elements of the system architecture and operational concept.
Project-level documents address CRD requirements by further defining system element functions, decomposing requirements into significantly greater detail, and developing designs of system components, facilities, and equipment. The CRD addresses the identification and control of functional, physical, and operational boundaries between and within CRWMS elements. The CRD establishes requirements regarding key interfaces between the CRWMS and elements external to the CRWMS. Project elements define interfaces between CRWMS program elements. The Program has developed a change management process consistent with DOE Order 413.3-Change 1. Changes to the Secretarial Acquisition Executive and Program-level baselines must be approved by a Program Baseline Change Control Board. Specific thresholds have been established for identifying technical, cost, and schedule changes that require approval. The CRWMS continually evaluates system design and operational concepts to optimize performance and/or cost. The Program has developed systems analysis tools to assess potential enhancements to the physical system and to determine the impacts from cost saving initiatives, scientific and technological improvements, and engineering developments. The results of systems analyses, if appropriate, are factored into revisions to the CRD as revised Programmatic Requirements.
Developing a standardized healthcare cost data warehouse.
Visscher, Sue L; Naessens, James M; Yawn, Barbara P; Reinalda, Megan S; Anderson, Stephanie S; Borah, Bijan J
2017-06-12
Research addressing value in healthcare requires a measure of cost. While there are many sources and types of cost data, each has strengths and weaknesses. Many researchers appear to create study-specific cost datasets, but the explanations of their costing methodologies are not always clear, causing their results to be difficult to interpret. Our solution, described in this paper, was to use widely accepted costing methodologies to create a service-level, standardized healthcare cost data warehouse from an institutional perspective that includes all professional and hospital-billed services for our patients. The warehouse is based on a National Institutes of Health-funded research infrastructure containing the linked health records and medical care administrative data of two healthcare providers and their affiliated hospitals. Since all patients are identified in the data warehouse, their costs can be linked to other systems and databases, such as electronic health records, tumor registries, and disease or treatment registries. We describe the two institutions' administrative source data; the reference files, which include Medicare fee schedules and cost reports; the process of creating standardized costs; and the warehouse structure. The costing algorithm can create inflation-adjusted standardized costs at the service line level for defined study cohorts on request. The resulting standardized costs contained in the data warehouse can be used to create detailed, bottom-up analyses of professional and facility costs of procedures, medical conditions, and patient care cycles without revealing business-sensitive information. After its creation, a standardized cost data warehouse is relatively easy to maintain and can be expanded to include data from other providers.
Individual investigators who may not have sufficient knowledge about administrative data do not have to try to create their own standardized costs on a project-by-project basis because our data warehouse generates standardized costs for defined cohorts upon request.
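The costing algorithm described above prices each billed service at a reference rate (such as a Medicare fee schedule) and inflation-adjusts the result to a common base year. A minimal sketch of that idea; the service codes, rates, and index values are invented, and this is not the institutions' actual algorithm:

```python
# Hypothetical reference data: fee-schedule rate per service code and a
# price index used to adjust costs to a common base year.
FEE_SCHEDULE = {"99213": 75.0, "80053": 14.0}  # invented rates, USD
PRICE_INDEX = {2015: 1.00, 2016: 1.02, 2017: 1.05}
BASE_YEAR = 2017

def standardized_cost(code, units, service_year):
    """Units of a service priced at the reference fee-schedule rate,
    inflation-adjusted from the service year to the base year."""
    rate = FEE_SCHEDULE[code]
    adjustment = PRICE_INDEX[BASE_YEAR] / PRICE_INDEX[service_year]
    return units * rate * adjustment

cost = standardized_cost("99213", 2, 2015)
print(round(cost, 2))  # 157.5
```

Because every service receives the same reference rate regardless of who billed it, costs become comparable across providers without exposing negotiated prices.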
Mathematical model of highways network optimization
NASA Astrophysics Data System (ADS)
Sakhapov, R. L.; Nikolaeva, R. V.; Gatiyatullin, M. H.; Makhmutov, M. M.
2017-12-01
The article deals with the design of highway networks. Studies show that the main requirement placed on the road network by road transport is that it realize all the transport links it serves at the least possible cost. The goal of optimizing the network of highways is to increase the efficiency of transport. A large number of factors must be taken into account, which makes it difficult to quantify and qualify their impact on the road network. In this paper, we propose constructing an optimal layout of the road network on the basis of a mathematical model. The article defines the criteria for optimality and the objective functions that reflect the requirements for the road network. The condition most fully satisfying optimality is the minimization of combined road and transport costs; we adopted this indicator as the criterion of optimality in the economic-mathematical model of a network of highways. Studies have shown that each point in the optimal road network is connected with all other corresponding points along the directions providing the least financial costs necessary to move passengers and cargo from this point to the other corresponding points. The article presents general principles for constructing an optimal network of roads.
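The least-cost connectivity condition described above, in which every point is linked to its corresponding points along directions of minimum combined cost, can be sketched with a standard shortest-path computation. A toy illustration (the graph and edge costs are invented; the paper's actual economic-mathematical model is considerably richer):

```python
import heapq

def min_cost_paths(graph, source):
    """Dijkstra's algorithm: least-cost route from `source` to every
    reachable node, where each edge weight represents the combined
    road-construction and transport cost of that link."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy network: adjacency list, edge weight = combined cost of the link.
graph = {
    "A": [("B", 4.0), ("C", 2.0)],
    "B": [("A", 4.0), ("D", 5.0)],
    "C": [("A", 2.0), ("D", 8.0)],
    "D": [("B", 5.0), ("C", 8.0)],
}
print(min_cost_paths(graph, "A"))  # least total cost from A to each node
```

Repeating this from every point and summing weighted costs over demand gives one simple way to compare candidate network layouts.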
A Low Cost Sensor Controller for Health Monitoring
NASA Astrophysics Data System (ADS)
Birbas, M.; Petrellis, N.; Gioulekas, F.
2015-09-01
An aging population can benefit from health care systems that allow their health and daily life to be monitored by expert medical staff. Blood pressure and temperature measurements, or more advanced tests like electrocardiograms (ECG), can be ordered through such a healthcare system, while urgent situations can be detected and addressed in time. The results of these tests can be stored securely in a remote cloud or database. Such systems are often used to monitor non-life-threatening patient health problems, and their advantage in lowering the cost of healthcare services is obvious. A low-cost commercial medical sensor kit has been used in the present work, with the aim of improving the accuracy and stability of the sensor measurements, the power consumption, etc. This Sensor Controller communicates with a Gateway installed in the patient's residence and with a tablet or smartphone used for giving instructions to the patient through a comprehensive user interface. A flexible communication protocol has been defined, supporting any short- or long-term sensor sampling scenario. The experimental results show that it is possible to achieve low power consumption by applying appropriate sleep intervals to the Sensor Controller and by periodically deactivating some of its functionality.
NASA Astrophysics Data System (ADS)
Izadi, Arman; Kimiagari, Ali mohammad
2014-01-01
Distribution network design as a strategic decision has a long-term effect on tactical and operational supply chain management. In this research, the location-allocation problem is studied under demand uncertainty. The purposes of this study were to specify the optimal number and location of distribution centers and to determine the allocation of customer demands to distribution centers. The main feature of this research is solving the model with an unknown demand function, which makes it suitable for real-world problems. To account for the uncertainty, a set of possible scenarios for customer demands is created based on Monte Carlo simulation. The coefficient of variation of costs is used as a measure of risk, and the most stable structure for the firm's distribution network is defined based on the concept of robust optimization. The best structure is identified using genetic algorithms, and a 14% reduction in total supply chain costs is the outcome. Moreover, it imposes the least cost variation created by fluctuations in customer demand (such as epidemic disease outbreaks in some areas of the country) on the logistical system. It is noteworthy that this research was done in one of the largest pharmaceutical distribution firms in Iran.
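The robustness procedure the abstract outlines, Monte Carlo demand scenarios scored by the coefficient of variation of total cost, can be sketched as follows. The demand distribution, cost model, and candidate structures are invented for illustration; the study itself searches real network configurations with genetic algorithms:

```python
import random
import statistics

def total_cost(structure, demand):
    """Hypothetical cost model: fixed cost of the opened centers plus a
    per-unit shipping cost that depends on the network structure."""
    fixed, per_unit = structure
    return fixed + per_unit * demand

def evaluate(structure, n_scenarios=1000, seed=0):
    """Monte Carlo evaluation: simulate demand scenarios and return the
    mean total cost and its coefficient of variation (the risk measure)."""
    rng = random.Random(seed)
    costs = [total_cost(structure, rng.gauss(100.0, 20.0))
             for _ in range(n_scenarios)]
    mean = statistics.mean(costs)
    cv = statistics.stdev(costs) / mean
    return mean, cv

# Two candidate structures: (fixed cost, per-unit shipping cost).
# More centers cost more up front but dampen demand-driven variation.
centralized = (500.0, 8.0)    # few large centers
decentralized = (900.0, 3.0)  # more centers, shorter shipments

for name, s in [("centralized", centralized), ("decentralized", decentralized)]:
    mean, cv = evaluate(s)
    print(f"{name}: mean cost {mean:.0f}, CV {cv:.3f}")
```

Under robust optimization the structure with the lower coefficient of variation is preferred, even if its expected cost is not the minimum.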
Edge grouping combining boundary and region information.
Stahl, Joachim S; Wang, Song
2007-10-01
This paper introduces a new edge-grouping method to detect perceptually salient structures in noisy images. Specifically, we define a new grouping cost function in a ratio form, where the numerator measures the boundary proximity of the resulting structure and the denominator measures the area of the resulting structure. This area term introduces a preference towards detecting larger-size structures and, therefore, makes the resulting edge grouping more robust to image noise. To find the optimal edge grouping with the minimum grouping cost, we develop a special graph model with two different kinds of edges and then reduce the grouping problem to finding a special kind of cycle in this graph with a minimum cost in ratio form. This optimal cycle-finding problem can be solved in polynomial time by a previously developed graph algorithm. We implement this edge-grouping method, test it on both synthetic data and real images, and compare its performance against several available edge-grouping and edge-linking methods. Furthermore, we discuss several extensions of the proposed method, including the incorporation of the well-known grouping cues of continuity and intensity homogeneity, introducing a factor to balance the contributions from the boundary and region information, and the prevention of detecting self-intersecting boundaries.
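The ratio-form grouping cost described above divides a boundary-proximity term by the enclosed area, so larger structures are preferred when boundary evidence is equal. A toy sketch (the "gap length" stand-in for boundary proximity and the polygons are invented; the paper's graph-cycle formulation is not reproduced here):

```python
def shoelace_area(poly):
    """Area enclosed by a closed polygon given as (x, y) vertices
    (shoelace formula)."""
    n = len(poly)
    s = 0.0
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def ratio_cost(gap_length, poly):
    """Grouping cost in ratio form: boundary-gap measure over enclosed
    area. Lower is better; for equal gaps, larger regions win."""
    return gap_length / shoelace_area(poly)

small_square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # area 1
large_square = [(0, 0), (4, 0), (4, 4), (0, 4)]  # area 16

# Equal boundary gaps: the larger structure gets the lower ratio cost,
# which is what makes the grouping robust to noise-induced gaps.
print(ratio_cost(2.0, small_square))  # 2.0
print(ratio_cost(2.0, large_square))  # 0.125
```

The paper minimizes this ratio exactly over cycles in a special graph; the sketch only illustrates why the area denominator biases detection toward larger salient structures.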
Evaluation of Risk Management Strategies for a Low-Cost, High-Risk Project
NASA Technical Reports Server (NTRS)
Shishko, Robert; Jorgensen, Edward J.
1996-01-01
This paper summarizes work in progress to define and implement a risk management process tailored to a low-cost, high-risk NASA mission: the Microrover Flight Experiment (MFEX, commonly called the Mars microrover).
7 CFR 250.13 - Distribution and control of donated foods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... provided by USDA on commodity survey memoranda. The USDA commodity file cost shall be defined as the cost... agency to bona fide experimental or testing agencies, or for use in workshops, or for demonstrations or...
11 CFR 300.33 - Allocation of costs of Federal election activity.
Code of Federal Regulations, 2011 CFR
2011-01-01
... activities described in paragraph (a) of this section to their Federal funds. (c) Costs of public communications. Expenditures for public communications as defined in 11 CFR 100.26 by State, district, and local...
Approaches to defining deltaic sustainability in the 21st century
NASA Astrophysics Data System (ADS)
Day, John W.; Agboola, Julius; Chen, Zhongyuan; D'Elia, Christopher; Forbes, Donald L.; Giosan, Liviu; Kemp, Paul; Kuenzer, Claudia; Lane, Robert R.; Ramachandran, Ramesh; Syvitski, James; Yañez-Arancibia, Alejandro
2016-12-01
Deltas are among the most productive and economically important of global ecosystems but unfortunately they are also among the most threatened by human activities. Here we discuss deltas and human impact, several approaches to defining deltaic sustainability and present a ranking of sustainability. Delta sustainability must be considered within the context of global biophysical and socioeconomic constraints that include thermodynamic limitations, scale and embeddedness, and constraints at the level of the biosphere/geosphere. The development, functioning, and sustainability of deltas are the result of external and internal inputs of energy and materials, such as sediments and nutrients, that include delta lobe development, channel switching, crevasse formation, river floods, storms and associated waves and storm surges, and tides and other ocean currents. Modern deltas developed over the past several thousand years with relatively stable global mean sea level, predictable material inputs from drainage basins and the sea, and as extremely open systems. Human activity has changed these conditions to make deltas less sustainable, in that they are unable to persist through time structurally or functionally. Deltaic sustainability can be considered from geomorphic, ecological, and economic perspectives, with functional processes at these three levels being highly interactive. Changes in this functioning can lead to either enhanced or diminished sustainability, but most changes have been detrimental. There is a growing understanding that the trajectories of global environmental change and cost of energy will make achieving delta sustainability more challenging and limit options for management. Several delta types are identified in terms of sustainability including those in arid regions, those with high and low energy-intensive management systems, deltas below sea level, tropical deltas, and Arctic deltas. Representative deltas are ranked on a sustainability range. 
Success in sustainable delta management will depend on utilizing natural delta functioning and an ecological engineering approach.
ERIC Educational Resources Information Center
Comptroller General of the U.S., Washington, DC.
A review by the General Accounting Office of various aspects of indirect costs associated with federal health research grants is presented. After an introduction detailing the scope of the review and defining indirect costs and federal participation, the report focuses on the causes of the rapid increase of indirect costs. Among findings was that…
Department of the Navy Acquisition and Capabilities Guidebook
2012-05-01
Contents include: 5.1.1 Cost Estimates/Service Cost Position; 5.1.2 Cost Analysis Requirements Description (CARD). Milestone criteria include satisfactory review of program health; concurrence with the draft TDS, TES, and SEP; and approval of full funding. A sound cost estimate is based on a well-defined program. The CARD is used ...
Optimum Repair Level Analysis (ORLA) for the Space Transportation System (STS)
NASA Technical Reports Server (NTRS)
Henry, W. R.
1979-01-01
A repair level analysis method applied to a space shuttle scenario is presented. A determination of the most cost effective level of repair for reparable hardware, the location for the repair, and a system which will accrue minimum total support costs within operational and technical constraints over the system design are defined. The method includes cost equations for comparison of selected costs to completion for assumed repair alternates.
The cost-effectiveness of life-saving interventions in Japan. Do chemical regulations cost too much?
Kishimoto, Atsuo; Oka, Tosihiro; Nakanishi, Junko
2003-10-01
This paper compares the cost-effectiveness of life-saving interventions in Japan, based on information collected from the health, safety and environmental literature. More than 50 life-saving interventions are analyzed. Cost-effectiveness is defined as the cost per life-year saved or as the cost per quality-adjusted life-year saved. Finding a large cost-effectiveness disparity between chemical controls and health care interventions, we raise the question of whether chemical regulations cost society too much. We point out the limitations of this study and propose a way to improve the incorporation of morbidity effects in cost-effectiveness analysis.
[Relating costs to activities in hospitals. Use of internal cost accounting].
Stavem, K
1995-01-10
During the last few years hospital cost accounting has become widespread in many countries, in parallel with increasing cost pressure, greater competition and new financing schemes. Cost accounting has been used in the manufacturing industry for many years. Costs can be related to activities and production, e.g. by the costing of procedures, episodes of care and other internally defined cost objectives. Norwegian hospitals have lagged behind in the adoption of cost accounting. They ought to act quickly if they want to be prepared for possible changes in health care financing. The benefits can be considerable to a hospital operating in a rapidly changing health care environment.
NASA Astrophysics Data System (ADS)
Farzanehpour, Mehdi; Tokatly, Ilya; Nano-Bio Spectroscopy Group; ETSF Scientific Development Centre Team
2015-03-01
We present a rigorous formulation of the time-dependent density functional theory for interacting lattice electrons strongly coupled to cavity photons. We start with an example of one particle on a Hubbard dimer coupled to a single photonic mode, which is equivalent to the single mode spin-boson model or the quantum Rabi model. For this system we prove that the electron-photon wave function is a unique functional of the electronic density and the expectation value of the photonic coordinate, provided the initial state and the density satisfy a set of well defined conditions. Then we generalize the formalism to many interacting electrons on a lattice coupled to multiple photonic modes and prove the general mapping theorem. We also show that for a system evolving from the ground state of a lattice Hamiltonian any density with a continuous second time derivative is locally v-representable. Spanish Ministry of Economy and Competitiveness (Grant No. FIS2013-46159-C3-1-P), Grupos Consolidados UPV/EHU del Gobierno Vasco (Grant No. IT578-13), COST Actions CM1204 (XLIC) and MP1306 (EUSpec).
Conceptual design of an advanced Stirling conversion system for terrestrial power generation
NASA Technical Reports Server (NTRS)
1988-01-01
A free-piston Stirling engine coupled to an electric generator or alternator, with a nominal kWe power output, absorbing thermal energy from a nominal 100 square meter parabolic solar collector and supplying electric power to a utility grid, was identified. The results of the conceptual design study of an Advanced Stirling Conversion System (ASCS) are documented. The objectives are as follows: define the ASCS configuration; provide a manufacturability and cost evaluation; predict ASCS performance over the range of solar input required to produce power; estimate system and major component weights; define engine and electrical power conditioning control requirements; and define key technologies not ready by the late 1980s for meeting efficiency, life, cost, and weight goals for the ASCS.
Cost characteristics of hospitals.
Smet, Mike
2002-09-01
Modern hospitals are complex multi-product organisations. The analysis of a hospital's production and/or cost structure should therefore use the appropriate techniques. Flexible functional forms based on the neo-classical theory of the firm seem to be most suitable. Using neo-classical cost functions implicitly assumes minimisation of (variable) costs given that input prices and outputs are exogenous. Local and global properties of flexible functional forms and short-run versus long-run equilibrium are further issues that require thorough investigation. In order to put the results based on econometric estimations of cost functions in the right perspective, it is important to keep these considerations in mind when using flexible functional forms. The more recent studies seem to agree that hospitals generally do not operate in their long-run equilibrium (they tend to over-invest in capital, i.e., capacity and equipment) and that it is therefore appropriate to estimate a short-run variable cost function. However, few studies explicitly take into account the implicit assumptions and restrictions embedded in the models they use. An alternative method to explain differences in costs uses management accounting techniques to identify the cost drivers of overhead costs. Related issues such as cost-shifting and cost-adjusting behaviour of hospitals and the influence of market structure on competition, prices and costs are also discussed briefly.
Debt-maturity structures should match risk preferences.
Gapenski, L C
1999-12-01
Key to any debt-maturity matching strategy is financing assets with the appropriate debt structure. Financial managers need to establish an optimal capital structure and then choose the best maturity-matching structure for their debt. Two maturity-matching strategies that are available to healthcare financial managers are the accounting approach and the finance approach. The accounting approach, which defines asset maturities as current or fixed, is a riskier financing strategy than the finance approach, which defines asset maturities as permanent or temporary. The added risk occurs because of the accounting approach's heavy reliance on short-term debt. The accounting approach offers the potential for lower costs at the expense of higher risk. Healthcare financial managers who believe the financing function should support the organization's operations without adding undue risk should use the finance approach to maturity matching. Asset maturities in those organizations then should be considered permanent or temporary rather than current or fixed, and the debt-maturity structure should reflect this.
Analysis and Preliminary Design of an Advanced Technology Transport Flight Control System
NASA Technical Reports Server (NTRS)
Frazzini, R.; Vaughn, D.
1975-01-01
The analysis and preliminary design of an advanced technology transport aircraft flight control system using avionics and flight control concepts appropriate to the 1980-1985 time period are discussed. Specifically, the techniques and requirements of the flight control system were established, a number of candidate configurations were defined, and an evaluation of these configurations was performed to establish a recommended approach. Candidate configurations based on redundant integration of various sensor types, computational methods, servo actuator arrangements and data-transfer techniques were defined to the functional module and piece-part level. Life-cycle costs, for the flight control configurations, as determined in an operational environment model for 200 aircraft over a 15-year service life, were the basis of the optimum configuration selection tradeoff. The recommended system concept is a quad digital computer configuration utilizing a small microprocessor for input/output control, a hexad skewed set of conventional sensors for body rate and body acceleration, and triple integrated actuators.
Ancillary-service costs for 12 US electric utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, B.; Hirst, E.
1996-03-01
Ancillary services are those functions performed by electrical generating, transmission, system-control, and distribution-system equipment and people to support the basic services of generating capacity, energy supply, and power delivery. The Federal Energy Regulatory Commission defined ancillary services as "those services necessary to support the transmission of electric power from seller to purchaser given the obligations of control areas and transmitting utilities within those control areas to maintain reliable operations of the interconnected transmission system." FERC divided these services into three categories: "actions taken to effect the transaction (such as scheduling and dispatching services), services that are necessary to maintain the integrity of the transmission system, [and] services needed to correct for the effects associated with undertaking a transaction." In March 1995, FERC published a proposed rule to ensure open and comparable access to transmission networks throughout the country. The rule defined six ancillary services and developed pro forma tariffs for these services: scheduling and dispatch, load following, system protection, energy imbalance, loss compensation, and reactive power/voltage control.
The cost of sustaining a patient-centered medical home: experience from 2 states.
Magill, Michael K; Ehrenberger, David; Scammon, Debra L; Day, Julie; Allen, Tatiana; Reall, Andreu J; Sides, Rhonda W; Kim, Jaewhan
2015-09-01
As medical practices transform to patient-centered medical homes (PCMHs), it is important to identify the ongoing costs of maintaining these "advanced primary care" functions. A key required input is personnel effort. This study's objective was to assess direct personnel costs to practices associated with the staffing necessary to deliver PCMH functions as outlined in the National Committee for Quality Assurance Standards. We developed a PCMH cost dimensions tool to assess costs associated with activities uniquely required to maintain PCMH functions. We interviewed practice managers, nurse supervisors, and medical directors in 20 varied primary care practices in 2 states, guided by the tool. Outcome measures included categories of staff used to perform various PCMH functions, time and personnel costs, and whether practices were delivering PCMH functions. Costs per full-time equivalent primary care clinician associated with PCMH functions varied across practices with an average of $7,691 per month in Utah practices and $9,658 in Colorado practices. PCMH incremental costs per encounter were $32.71 in Utah and $36.68 in Colorado. The average estimated cost per member per month for an assumed panel of 2,000 patients was $3.85 in Utah and $4.83 in Colorado. Identifying costs of maintaining PCMH functions will contribute to effective payment reform and to sustainability of transformation. Maintenance and ongoing support of PCMH functions require additional time and new skills, which may be provided by existing staff, additional staff, or both. Adequate compensation for ongoing and substantial incremental costs is critical for practices to sustain PCMH functions. © 2015 Annals of Family Medicine, Inc.
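The per-member-per-month figures quoted above follow from dividing the monthly cost per full-time-equivalent clinician by the assumed panel size (7,691/2,000 ≈ 3.85; 9,658/2,000 ≈ 4.83). A minimal sketch of this arithmetic (the function name is illustrative):

```python
def pmpm(cost_per_fte_month: float, panel_size: int) -> float:
    """Cost per member per month for one FTE clinician's panel."""
    return cost_per_fte_month / panel_size

# Utah:     7,691 / 2,000 ≈ 3.85
# Colorado: 9,658 / 2,000 ≈ 4.83
```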
Stevenson-Holt, Claire D; Watts, Kevin; Bellamy, Chloe C; Nevin, Owen T; Ramsey, Andrew D
2014-01-01
Least-cost models are widely used to study the functional connectivity of habitat within a varied landscape matrix. A critical step in the process is identifying resistance values for each land cover based upon the facilitating or impeding impact on species movement. Ideally resistance values would be parameterised with empirical data, but due to a shortage of such information, expert opinion is often used. However, the use of expert opinion is seen as subjective, human-centric and unreliable. This study derived resistance values from grey squirrel habitat suitability models (HSM) in order to compare the utility and validity of this approach with more traditional, expert-led methods. Models were built and tested with MaxEnt, using squirrel presence records and a categorical land cover map for Cumbria, UK. Predictions on the likelihood of squirrel occurrence within each land cover type were inverted, providing resistance values which were used to parameterise a least-cost model. The resulting habitat networks were measured and compared to those derived from a least-cost model built with previously collated information from experts. The expert-derived and HSM-inferred least-cost networks differ in precision. The HSM-informed networks were smaller and more fragmented because of the higher resistance values attributed to most habitats. These results are discussed in relation to the applicability of both approaches for conservation and management objectives, providing guidance to researchers and practitioners attempting to apply and interpret a least-cost approach to mapping ecological networks.
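The inversion step described above (habitat suitability to movement resistance) can be sketched as follows. The linear rescaling and the 1-100 resistance range are illustrative assumptions, not the authors' exact parameterisation:

```python
def suitability_to_resistance(suitability: float,
                              r_min: float = 1.0,
                              r_max: float = 100.0) -> float:
    """Invert a habitat-suitability score in [0, 1] to a movement
    resistance value: high suitability -> low resistance.
    The linear mapping and the r_min/r_max range are illustrative."""
    if not 0.0 <= suitability <= 1.0:
        raise ValueError("suitability must lie in [0, 1]")
    return r_min + (1.0 - suitability) * (r_max - r_min)
```

With this mapping, fully suitable habitat (1.0) gets the minimum resistance and fully unsuitable habitat (0.0) gets the maximum, which is the qualitative behaviour the abstract describes.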
Specialty and Full-Service Hospitals: A Comparative Cost Analysis
Carey, Kathleen; Burgess, James F; Young, Gary J
2008-01-01
Objective To compare the costs of physician-owned cardiac, orthopedic, and surgical single specialty hospitals with those of full-service hospital competitors. Data Sources The primary data sources are the Medicare Cost Reports for 1998–2004 and hospital inpatient discharge data for three of the states where single specialty hospitals are most prevalent, Texas, California, and Arizona. The latter were obtained from the Texas Department of State Health Services, the California Office of Statewide Health Planning and Development, and the Agency for Healthcare Research and Quality Healthcare Cost and Utilization Project. Additional data comes from the American Hospital Association Annual Survey Database. Study Design We identified all physician-owned cardiac, orthopedic, and surgical specialty hospitals in these three states as well as all full-service acute care hospitals serving the same market areas, defined using Dartmouth Hospital Referral Regions. We estimated a hospital cost function using stochastic frontier regression analysis, and generated hospital specific inefficiency measures. Application of t-tests of significance compared the inefficiency measures of specialty hospitals with those of full-service hospitals to make general comparisons between these classes of hospitals. Principal Findings Results do not provide evidence that specialty hospitals are more efficient than the full-service hospitals with whom they compete. In particular, orthopedic and surgical specialty hospitals appear to have significantly higher levels of cost inefficiency. Cardiac hospitals, however, do not appear to be different from competitors in this respect. Conclusions Policymakers should not embrace the assumption that physician-owned specialty hospitals produce patient care more efficiently than their full-service hospital competitors. PMID:18662170
Economic evaluation of intravenous iodinated contrast media in Italy.
Iannazzo, Sergio; Vandekerckhove, Stijn; De Francesco, Maria; Nayak, Akash; Ronco, Claudio; Morana, Giovanni; Valentino, Massimo
2014-01-01
Contrast-induced acute kidney injury (CI-AKI) is defined as a deterioration in renal function after administration of radiologic iodinated contrast media (CM). Iodixanol has shown a lower CI-AKI incidence than low-osmolar contrast media (LOCM). A cost-effectiveness analysis was performed comparing iodixanol and LOCM in the intravenous (IV) setting in Italy. A Markov model was developed. Patients moved across four health states: CI-AKI free, CI-AKI, myocardial infarction, and death. The simulation horizon was lifetime with 1-month cycles. Costs and outcomes were discounted at a 3.5 percent rate. CI-AKI incidence was taken from published literature across different definitions. Cost-effectiveness of iodixanol was assessed in terms of incremental cost per life-year gained. Net monetary benefit (NMB) was also calculated. Both deterministic and probabilistic sensitivity analyses were performed. Base-case results showed an average survival increase of 0.51 life-years and a savings of €7.25 for iodixanol versus LOCM. The cost-effectiveness of iodixanol was confirmed when other scenarios were explored, such as varying the CI-AKI definition, sub-populations with specified risk factors, CM hospital bid prices, and inclusion of adverse drug reactions of an allergic nature. An NMB ranging between €6,007.25 and €30,007.25 was calculated. Base-case results show that IV iodixanol is cost-effective compared with LOCM in the Italian clinical setting of a hospital computed tomography radiology practice. However, some caution is due, mainly linked to inherent limitations of the modeling technique and to the lack of agreement on CI-AKI incidence data in the clinical literature.
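The incremental analysis above can be summarized with the standard net-monetary-benefit formula, NMB = λ·ΔE − ΔC. A minimal sketch (the willingness-to-pay value λ is a hypothetical input; the abstract does not state the λ values behind its NMB range):

```python
def net_monetary_benefit(delta_effect_ly: float,
                         delta_cost: float,
                         wtp_per_ly: float) -> float:
    """NMB = wtp * incremental effect - incremental cost.
    A positive NMB favours the new treatment at that willingness to pay."""
    return wtp_per_ly * delta_effect_ly - delta_cost
```

With the base case above (ΔE = 0.51 life-years, ΔC = −7.25 euros, i.e. a saving), NMB is positive at any non-negative willingness to pay, consistent with the reported dominance-like result.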
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Wang, Tao; Chen, Liang; Huang, Hai-Jun
2018-01-01
In this paper, we introduce the fuel cost into each commuter's trip cost, define a new trip cost without late arrival and its corresponding equilibrium state, and use a car-following model to explore the impacts of the fuel cost on each commuter's departure time, departure interval, arrival time, arrival interval, traveling time, early arrival time, and trip cost at this equilibrium state. The numerical results show that including the fuel cost in each commuter's trip cost has positive impacts on his trip cost, his fuel cost, and the traffic situation in the system without late arrival; that is, each commuter should explicitly consider the fuel cost in his trip cost.
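The trip cost described above combines travel-time, early-arrival (schedule delay), and fuel components. A minimal additive sketch; the coefficients and the functional form are illustrative assumptions only, not the paper's car-following formulation:

```python
def trip_cost(travel_time_h: float,
              early_arrival_h: float,
              fuel_litres: float,
              alpha: float = 10.0,     # value of travel time (hypothetical)
              beta: float = 5.0,       # early-arrival penalty (hypothetical)
              fuel_price: float = 1.5  # price per litre (hypothetical)
              ) -> float:
    """Illustrative commuter trip cost without late arrival:
    time cost + schedule-delay cost + fuel cost."""
    return (alpha * travel_time_h
            + beta * early_arrival_h
            + fuel_price * fuel_litres)
```

Omitting the fuel term recovers the classical trip cost; the paper's point is that the fuel component changes the equilibrium departure pattern.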
Cost per responder of TNF-α therapies in Germany.
Gissel, Christian; Repp, Holger
2013-12-01
Tumor necrosis factor α (TNF-α) inhibitors ranked highest in German pharmaceutical expenditure in 2011. Their most important application is the treatment of rheumatoid arthritis (RA). Our objective is to analyze cost per responder of TNF-α inhibitors for RA from the German Statutory Health Insurance funds' perspective. We aim to conduct the analysis based on randomized comparative effectiveness studies of the relevant treatments for the German setting. For inclusion of effectiveness studies, we require results in terms of response rates as defined by European League Against Rheumatism (EULAR) or American College of Rheumatology (ACR) criteria. We identify conventional triple therapy as the relevant comparator. We calculate cost per responder based on German direct medical costs. Direct clinical comparisons could be identified for both etanercept and infliximab compared to triple therapy. For infliximab, cost per responder was 216,392 euros for ACR50 and 432,784 euros for ACR70 responses. For etanercept, cost per ACR70 responder was 321,527 euros. Cost was lower for response defined by EULAR criteria, but data was only available for infliximab. Cost per responder is overestimated by 40% due to inclusion of taxes and mandatory rebates in German drugs' list prices. Our analysis shows specific requirements for cost-effectiveness analysis in Germany. Cost per responder for TNF-α treatment in the German setting is more than double the cost estimated in a similar analysis for the USA, which measured against placebo. The difference in results shows the critical role of the correct comparator for a specific setting.
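Cost per responder is, in its simplest form, the cost per treated patient divided by the response rate; the quoted infliximab figures are consistent with this (the ACR70 cost is exactly twice the ACR50 cost, as expected when the ACR70 response rate is half the ACR50 rate). A minimal sketch (the function name is illustrative; the paper's exact costing inputs are not reproduced here):

```python
def cost_per_responder(cost_per_patient: float,
                       response_rate: float) -> float:
    """Cost per responder = cost per treated patient / response rate."""
    if not 0.0 < response_rate <= 1.0:
        raise ValueError("response rate must lie in (0, 1]")
    return cost_per_patient / response_rate
```

Halving the response rate doubles the cost per responder, which is why stricter response criteria (ACR70 vs. ACR50) yield higher figures from the same drug cost.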
Gaßner, Heiko; Marxreiter, Franz; Steib, Simon; Kohl, Zacharias; Schlachetzki, Johannes C M; Adler, Werner; Eskofier, Bjoern M; Pfeifer, Klaus; Winkler, Jürgen; Klucken, Jochen
2017-01-01
Cognitive and gait deficits are common symptoms in Parkinson's disease (PD). Motor-cognitive dual tasks (DTs) are used to explore the interplay between gait and cognition. However, it is unclear whether DT gait performance is indicative of cognitive impairment. Therefore, the aim of this study was to investigate whether cognitive deficits are reflected by DT costs of spatiotemporal gait parameters. Cognitive function and single task (ST) and DT gait performance were investigated in 67 PD patients. Cognition was assessed by the Montreal Cognitive Assessment (MoCA), followed by a standardized, sensor-based gait test and the identical gait test while subtracting serial 3's. Cognitive impairment was defined by a MoCA score <26. DT costs in gait parameters [(DT − ST)/ST × 100] were calculated as a measure of the DT effect on gait. Correlation analysis was used to evaluate the association between MoCA performance and gait parameters. In a linear regression model, DT gait costs and clinical confounders (age, gender, disease duration, motor impairment, medication, and depression) were correlated to cognitive performance. In a subgroup analysis, we compared matched groups of cognitively impaired and unimpaired PD patients regarding differences in ST, DT, and DT gait costs. Correlation analysis revealed weak correlations between MoCA score and DT costs of gait parameters (r/r_Sp ≤ 0.3). DT costs of stride length, swing time variability, and maximum toe clearance (|r/r_Sp| > 0.2) were included in a regression analysis. These parameters alone explain only 8% of the cognitive variance. In combination with clinical confounders, regression analysis showed that these gait parameters explained 30% of MoCA performance. Group comparison revealed strong DT effects within both groups (large effect sizes), but significant between-group effects in DT gait costs were not observed. These findings suggest that DT gait performance is not indicative of cognitive impairment in PD.
DT effects on gait parameters were substantial in both cognitively impaired and unimpaired patients, thereby potentially overlaying the effect of cognitive impairment on DT gait costs. Limits of the MoCA in detecting motor-function-specific cognitive performance, or variable individual response to the DT, cannot be excluded as influencing factors. Therefore, DT gait parameters as a marker for cognitive performance should be carefully interpreted in the clinical context.
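The bracketed dual-task-cost formula given in the abstract, (DT − ST)/ST × 100, can be computed directly. A minimal sketch:

```python
def dual_task_cost(dt_value: float, st_value: float) -> float:
    """Dual-task cost of a gait parameter, in percent:
    (DT - ST) / ST * 100, per the definition in the abstract."""
    return (dt_value - st_value) / st_value * 100.0

# e.g. swing time rising from 100 ms (ST) to 120 ms (DT) is a +20% DT cost
```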
Schawo, Saskia J; van Eeren, Hester; Soeteman, Djira I; van der Veldt, Marie-Christine; Noom, Marc J; Brouwer, Werner; Busschbach, Jan J V; Hakkaart, Leona
2012-12-01
Many interventions initiated within and financed from the health care sector are not necessarily primarily aimed at improving health. This poses important questions regarding the operationalisation of economic evaluations in such contexts. We investigated whether assessing cost-effectiveness using state-of-the-art methods commonly applied in health care evaluations is feasible and meaningful when evaluating interventions aimed at reducing youth delinquency. A probabilistic Markov model was constructed to create a framework for the assessment of the cost-effectiveness of systemic interventions in delinquent youth. For illustrative purposes, Functional Family Therapy (FFT), a systemic intervention aimed at improving family functioning and, primarily, reducing delinquent activity in youths, was compared to Treatment as Usual (TAU). "Criminal activity free years" (CAFYs) were introduced as the central outcome measure. Criminal activity may, for example, be based on police contacts or committed crimes. In the absence of extensive data, and for illustrative purposes, the current study based criminal activity on the available literature on recidivism. Furthermore, a literature search was performed to deduce the model's structure and parameters. Common cost-effectiveness methodology could be applied to interventions for youth delinquency. Model characteristics and parameters were derived from literature and ongoing trial data. The model resulted in an estimate of incremental costs/CAFY and included long-term effects. Illustrative model results point towards dominance of FFT compared to TAU. Using a probabilistic model and the CAFY outcome measure to assess cost-effectiveness of systemic interventions aimed at reducing delinquency is feasible. However, the model structure is limited to three states and the CAFY measure was defined rather crudely. Moreover, as the model parameters are retrieved from literature, the model results are illustrative in the absence of empirical data.
The current model provides a framework to assess the cost-effectiveness of systemic interventions, while taking into account parameter uncertainty and long-term effectiveness. The framework of the model could be used to assess the cost-effectiveness of systemic interventions alongside (clinical) trial data. Consequently, it is suitable to inform reimbursement decisions, since the value for money of systemic interventions can be demonstrated using a decision analytic model. Future research could be focussed on testing the current model based on extensive empirical data, improving the outcome measure and finding appropriate values for that outcome.
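The model's headline output, incremental cost per CAFY, is a standard incremental cost-effectiveness ratio. A minimal sketch (all argument names and example numbers are illustrative; the paper's actual costs and CAFY estimates are not reproduced):

```python
def incremental_cost_per_cafy(cost_new: float, cost_usual: float,
                              cafy_new: float, cafy_usual: float) -> float:
    """Incremental cost per criminal-activity-free year (CAFY):
    (C_new - C_usual) / (CAFY_new - CAFY_usual)."""
    delta_cafy = cafy_new - cafy_usual
    if delta_cafy == 0:
        raise ValueError("no incremental effect; ratio undefined")
    return (cost_new - cost_usual) / delta_cafy
```

A negative ratio with positive incremental CAFYs (cheaper and more effective) corresponds to the dominance of FFT over TAU that the illustrative results point towards.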
Quantitative evolutionary design
Diamond, Jared
2002-01-01
The field of quantitative evolutionary design uses evolutionary reasoning (in terms of natural selection and ultimate causation) to understand the magnitudes of biological reserve capacities, i.e. excesses of capacities over natural loads. Ratios of capacities to loads, defined as safety factors, fall in the range 1.2-10 for most engineered and biological components, even though engineered safety factors are specified intentionally by humans while biological safety factors arise through natural selection. Familiar examples of engineered safety factors include those of buildings, bridges and elevators (lifts), while biological examples include factors of bones and other structural elements, of enzymes and transporters, and of organ metabolic performances. Safety factors serve to minimize the overlap zone (resulting in performance failure) between the low tail of capacity distributions and the high tail of load distributions. Safety factors increase with coefficients of variation of load and capacity, with capacity deterioration with time, and with cost of failure, and decrease with costs of initial construction, maintenance, operation, and opportunity. Adaptive regulation of many biological systems involves capacity increases with increasing load; several quantitative examples suggest sublinear increases, such that safety factors decrease towards 1.0. Unsolved questions include safety factors of series systems, parallel or branched pathways, elements with multiple functions, enzyme reaction chains, and equilibrium enzymes. The modest sizes of safety factors imply the existence of costs that penalize excess capacities. Those costs are likely to involve wasted energy or space for large or expensive components, but opportunity costs of wasted space at the molecular level for minor components. PMID:12122135
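The overlap between the low tail of the capacity distribution and the high tail of the load distribution, which the abstract identifies as the source of performance failure, can be illustrated numerically. A minimal Monte Carlo sketch, assuming (purely for illustration) normally distributed capacity and load with given coefficients of variation:

```python
import random

def failure_probability(capacity_mean: float, load_mean: float,
                        cv_capacity: float, cv_load: float,
                        n: int = 100_000, seed: int = 0) -> float:
    """Estimate P(load > capacity) by Monte Carlo, modelling both
    quantities as normal distributions (an illustrative assumption)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        cap = rng.gauss(capacity_mean, cv_capacity * capacity_mean)
        load = rng.gauss(load_mean, cv_load * load_mean)
        if load > cap:
            failures += 1
    return failures / n
```

With a safety factor of 2 (capacity mean twice the load mean) and 20% coefficients of variation, the estimated failure probability is on the order of 1%, and it falls sharply as the safety factor grows, consistent with the trade-offs described above.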
A contemporary perspective on capitated reimbursement for imaging services.
Schwartz, H W
1995-01-01
Capitation ensures predictability of healthcare costs, requires acceptance of a premium in return for providing all required medical services and defines the actual dollar amount paid to a physician or hospital on a per member per month basis for a service or group of services. Capitation is expected to dramatically affect the marketplace in the near future, as private enterprise demands lower, more stable healthcare costs. Capitation requires detailed quantitative and financial data, including: eligibility and benefits determination, encounter processing, referral management, claims processing, case management, physician compensation, insurance management functions, outcomes reporting, performance management and cost accounting. It is important to understand actuarial risk and capitation marketing when considering a capitation contract. Also, capitated payment methodologies may vary to include modified fee-for-service, incentive pay, risk pool redistributions, merit, or a combination. Risk is directly related to the ability to predict utilization and unit cost of imaging services provided to a specific insured population. In capitated environments, radiologists will have even less control over referrals than they have today and will serve many more "covered lives"; long-term relationships with referring physicians will continue to evaporate; and services will be provided under exclusive, multi-year contracts. In addition to intensified use of technology for image transfer, telecommunications and sophisticated data processing and tracking systems, imaging departments must continue to provide the greatest amount of appropriate diagnostic information in a timely fashion at the lowest feasible cost and risk to the patient.
Low cost balancing unit design
NASA Astrophysics Data System (ADS)
Golembiovsky, Matej; Dedek, Jan; Slanina, Zdenek
2017-06-01
This article deals with the design of a low-cost balancing system consisting of battery balancing units, accumulator pack units, and a coordinator unit with an interface to a higher level of the battery management system (BMS). This solution allows a decentralized mode of operation, and the aim of this work is the implementation of control and diagnostic mechanisms in an electric scooter project realized at the Technical University of Ostrava. In today's world, with electromobility, off-grid battery systems, and the like in their prime, it is important to seek the optimal balance between functionality and the economics of the BMS, the electronics that manage the secondary cells of battery packs. There were numerous sophisticated but not very practical BMS models in the past, such as centralized systems or standalone balance modules for individual cells. This article aims at the development of standalone balance modules that are able to communicate with the coordinator, adjust their parameters, and ensure the safety of their cells in case of a communication failure. With the current worldwide cost-cutting trend in mind, the emphasis was put on the lowest possible price for individual components. The article is divided into two major parts, the first being the design of the power electronics with emphasis on quality, safety (cooling), and cost. The second part describes the development of a communication interface with reliability and cost in mind. The article contains numerous graphs from practical measurements. The outcome of the work and its possible future development are described in the conclusion.
A Cost-Effectiveness/Benefit Analysis Model for Postsecondary Vocational Programs. Technical Report.
ERIC Educational Resources Information Center
Kim, Jin Eun
A cost-effectiveness/benefit analysis is defined as a technique for measuring the outputs of existing and new programs in relation to their specified program objectives, against the costs of those programs. In terms of its specific use, the technique is conceptualized as a systems analysis method, an evaluation method, and a planning tool for…
Optimal fire and fuels management
Evan Mercer; Greg Jones
2007-01-01
Record suppression costs have led to a multitude of fire cost reviews and cost studies by oversight agencies, and new rules and regulations. One of the most important and elusive issues in fire management is defining the "best" amount of fuel treatments to apply to a forested landscape. Research is developing tools and information that address a wide variety...
Price-Cost Ratios in Higher Education: Subsidy Structure and Policy Implications
ERIC Educational Resources Information Center
Xie, Yan
2010-01-01
The diversity of US institutions of higher education is manifested in many ways. This study looks at that diversity from the economic perspective by studying the subsidy structure through the distribution of the institutional price-cost ratio (PCR), defined as net tuition price divided by total supplier cost, which equals one minus…
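The PCR definition in this abstract reduces to simple arithmetic: the ratio of net tuition to total supplier cost, whose complement is the subsidy share of cost. The figures below are invented for illustration only.

```python
# Price-cost ratio (PCR) as defined above: net tuition price divided by
# total supplier cost; one minus the PCR is the subsidy share of cost.
# Dollar amounts are made up for illustration.

def price_cost_ratio(net_tuition: float, total_cost: float) -> float:
    """Share of an institution's cost covered by the student's net price."""
    return net_tuition / total_cost

def subsidy_share(net_tuition: float, total_cost: float) -> float:
    """Share of cost covered by subsidies (endowment, appropriations, etc.)."""
    return 1.0 - price_cost_ratio(net_tuition, total_cost)

# hypothetical institution: $9,000 net tuition against $30,000 cost per student
pcr = price_cost_ratio(net_tuition=9_000.0, total_cost=30_000.0)   # ≈ 0.3
share = subsidy_share(net_tuition=9_000.0, total_cost=30_000.0)    # ≈ 0.7
```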
42 CFR 423.578 - Exceptions process.
Code of Federal Regulations, 2014 CFR
2014-10-01
... request is the therapeutic equivalent, as defined in § 423.100, of any other drug on the plan's formulary... sponsor required to cover a non-preferred drug at the generic drug cost-sharing level if the plan... provided for a non-formulary drug. (3) If the Part D plan sponsor covers a non-formulary drug, the cost(s...
42 CFR 423.578 - Exceptions process.
Code of Federal Regulations, 2012 CFR
2012-10-01
... request is the therapeutic equivalent, as defined in § 423.100, of any other drug on the plan's formulary... sponsor required to cover a non-preferred drug at the generic drug cost-sharing level if the plan... provided for a non-formulary drug. (3) If the Part D plan sponsor covers a non-formulary drug, the cost(s...
NASA Technical Reports Server (NTRS)
Miller, R. H.; Smith, D. B. S.
1979-01-01
Production and support equipment specifications are described for the space manufacturing facility (SMF). Defined production equipment includes electromagnetic pumps for liquid metal, metal alloying furnaces, die casters, electron beam welders and cutters, glass forming for structural elements, and rolling. A cost analysis is presented which includes development, acquisition of all SMF elements, initial operating cost, maintenance and logistics cost, cost of terrestrial materials, and transportation cost for each major element. Computer program listings and outputs are appended.
Managerial accounting applications in radiology.
Lexa, Frank James; Mehta, Tushar; Seidmann, Abraham
2005-03-01
We review the core issues in managerial accounting for radiologists. We introduce the topic and then explore its application to diagnostic imaging. We define key terms such as fixed cost, variable cost, marginal cost, and marginal revenue and discuss their role in understanding the operational and financial implications for a radiology facility by using a cost-volume-profit model. Our work places particular emphasis on the role of managerial accounting in understanding service costs, as well as how it assists executive decision making.
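The cost-volume-profit relationships named in this abstract follow the standard managerial-accounting form: profit is contribution margin (price minus variable cost) times volume, less fixed cost. The radiology-specific figures below are hypothetical, used only to show the mechanics.

```python
# Standard cost-volume-profit (CVP) model; the imaging-service figures
# are hypothetical illustrations, not data from the paper.

def profit(volume: int, price: float, variable_cost: float, fixed_cost: float) -> float:
    """Profit = (price - variable cost) * volume - fixed cost."""
    return (price - variable_cost) * volume - fixed_cost

def breakeven_volume(price: float, variable_cost: float, fixed_cost: float) -> float:
    """Volume at which the contribution margin exactly covers fixed cost."""
    return fixed_cost / (price - variable_cost)

# hypothetical imaging service: $400 price, $150 variable cost per study,
# $500,000 annual fixed cost
be = breakeven_volume(price=400.0, variable_cost=150.0, fixed_cost=500_000.0)  # 2000 studies
p = profit(volume=2500, price=400.0, variable_cost=150.0, fixed_cost=500_000.0)  # $125,000
```

The breakeven point makes the fixed/variable distinction operational: below 2,000 studies this hypothetical facility loses money no matter how efficiently each study is performed.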
Cost minimizing of cutting process for CNC thermal and water-jet machines
NASA Astrophysics Data System (ADS)
Tavaeva, Anastasia; Kurennov, Dmitry
2015-11-01
This paper deals with the optimization of the cutting process for CNC thermal and water-jet machines. The accuracy with which the objective-function parameters of the optimization problem can be calculated is investigated. The paper shows that the working tool path speed is not constant: it depends on several parameters described in the paper. Relations for the working tool path speed as a function of the number of NC program frames, the length of a straight cut, and the part configuration are presented. Based on these results, correction coefficients for the working tool speed are defined. The optimization problem may additionally be solved using a mathematical model that takes into account the extra restrictions of thermal cutting (choice of piercing and output tool points, precedence conditions, thermal deformations). The second part of the paper considers non-standard cutting techniques, which can reduce cutting cost and time compared with standard techniques, and examines the effectiveness of applying them. The paper contains numerous graphs from practical measurements, and future research directions are indicated at the end.
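The objective-function idea in this abstract can be sketched as total cutting time summed over path segments, with each segment's nominal speed scaled by an empirical correction coefficient. The coefficient values and segment lengths below are invented for illustration; the paper derives its coefficients from measurements.

```python
# Sketch of a cutting-time objective with per-segment speed corrections.
# Segment lengths and coefficients are hypothetical illustrations.

def cutting_time(segments, nominal_speed: float) -> float:
    """Total time for a path given as (length_mm, correction_coefficient) pairs.

    Effective speed on a segment is nominal_speed * k, so short or complex
    segments (k < 1) take disproportionately longer than their length suggests.
    """
    return sum(length / (nominal_speed * k) for length, k in segments)

# short straight cuts get a lower effective speed (smaller k) than long ones
segments = [(50.0, 0.6), (400.0, 0.95), (120.0, 0.8)]
t = cutting_time(segments, nominal_speed=20.0)  # nominal speed in mm/s
```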
Fisch, Clifford B.; Fisch, Martin L.
1979-01-01
The Stanley S. Lamm Institute for Developmental Disabilities of The Long Island College Hospital, in conjunction with Micro-Med Systems has developed a low cost micro-computer based information system (ADDOP TRS) which monitors quality of care in outpatient settings rendering services to the developmentally disabled population. The process of conversion from paper record keeping systems to direct key-to-disk data capture at the point of service delivery is described. Data elements of the information system including identifying patient information, coded and English-grammar entry procedures for tracking elements of service as well as their delivery status are described. Project evaluation criteria are defined including improved quality of care, improved productivity for clerical and professional staff and enhanced decision making capability. These criteria are achieved in a cost effective manner as a function of more efficient information flow. Administrative applications including staff/budgeting procedures, submissions for third party reimbursement and case reporting to utilization review committees are considered.
Energy management and recovery
NASA Technical Reports Server (NTRS)
Lawing, Pierce L.
1989-01-01
Energy management is treated by first exploring the energy requirements for a cryogenic tunnel. The requirement is defined as a function of Mach number, Reynolds number, temperature, and tunnel size. A simple program and correlation are described which allow calculation of the energy required. Usage of energy is also addressed in terms of tunnel control and research operation. The potential of a new wet expander is outlined in terms of the cost saved by reliquefying a portion of the exhaust. The expander is described as a potentially more efficient way of recovering a fraction of the cold nitrogen gas normally exhausted to the atmosphere from a cryogenic tunnel. The role of tunnel insulation systems is explored in terms of requirements, safety, cost, maintenance, and efficiency. A detailed description of two external insulation systems is given. One is a rigid foam with a fiberglass and epoxy shell. The other is composed of glass fiber mats with a flexible outer vapor barrier; this system is nitrogen-purged. The two systems are compared, and the purged system is judged superior.
Toward a Responsibility-Catering Prioritarian Ethical Theory of Risk.
Wikman-Svahn, Per; Lindblom, Lars
2018-03-05
Standard tools used in societal risk management such as probabilistic risk analysis or cost-benefit analysis typically define risks in terms of only probabilities and consequences and assume a utilitarian approach to ethics that aims to maximize expected utility. The philosopher Carl F. Cranor has argued against this view by devising a list of plausible aspects of the acceptability of risks that points towards a non-consequentialist ethical theory of societal risk management. This paper revisits Cranor's list to argue that the alternative ethical theory responsibility-catering prioritarianism can accommodate the aspects identified by Cranor and that the elements in the list can be used to inform the details of how to view risks within this theory. An approach towards operationalizing the theory is proposed based on a prioritarian social welfare function that operates on responsibility-adjusted utilities. A responsibility-catering prioritarian ethical approach towards managing risks is a promising alternative to standard tools such as cost-benefit analysis.
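One way to make the proposed operationalization concrete is a social welfare function that applies a strictly concave transform to responsibility-adjusted utilities, so that gains to the worse-off count for more. The square-root transform and the additive responsibility adjustment below are illustrative modeling choices of this sketch, not the paper's specification.

```python
# Sketch of a prioritarian social welfare function over
# responsibility-adjusted utilities. The concave transform (sqrt) and the
# additive adjustment are illustrative assumptions, not the paper's model.
import math

def prioritarian_swf(utilities, responsibility_adjustments):
    """Sum of concave-transformed, responsibility-adjusted utilities."""
    return sum(math.sqrt(max(u + r, 0.0))
               for u, r in zip(utilities, responsibility_adjustments))

# two risk policies with equal total utility: the prioritarian SWF prefers
# the more equal distribution, unlike a plain expected-utility sum
equal = prioritarian_swf([50.0, 50.0], [0.0, 0.0])
unequal = prioritarian_swf([90.0, 10.0], [0.0, 0.0])
```

The comparison shows the prioritarian departure from utilitarianism that the abstract describes: identical expected utility, different social ranking.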
Hesford, Andrew J.; Waag, Robert C.
2010-01-01
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
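The FFT-accelerated Green's-function convolution described in the abstract rests on a standard identity: on a regular grid, the field produced by neighboring sources is a discrete convolution with sampled Green's-function values, computable in O(N log N) with zero-padded FFTs. The one-dimensional scalar sketch below is a toy stand-in for the paper's three-dimensional acoustic kernel; the kernel values are invented.

```python
# Toy sketch of FFT-based Green's-function convolution on a regular grid.
# A 1-D real kernel stands in for the 3-D acoustic Green's function.
import numpy as np

def fft_convolve(source: np.ndarray, green: np.ndarray) -> np.ndarray:
    """Linear convolution of source strengths with a Green's-function stencil,
    computed via zero-padded FFTs in O(N log N)."""
    n = len(source) + len(green) - 1          # pad to avoid circular wrap-around
    return np.fft.irfft(np.fft.rfft(source, n) * np.fft.rfft(green, n), n)

rng = np.random.default_rng(0)
src = rng.standard_normal(64)                 # source strengths on the grid
grn = 1.0 / (1.0 + np.arange(16))             # decaying toy kernel
fast = fft_convolve(src, grn)
direct = np.convolve(src, grn)                # O(N^2) reference convolution
assert np.allclose(fast, direct)
```

The equality with the direct sum is the whole trick: the FFT route gives the same neighboring-interaction field at a fraction of the cost, which is what lowers the finest-level FMM expense.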
Constructing Optimal Coarse-Grained Sites of Huge Biomolecules by Fluctuation Maximization.
Li, Min; Zhang, John Zenghui; Xia, Fei
2016-04-12
Coarse-grained (CG) models are valuable tools for the study of functions of large biomolecules on large length and time scales. The definition of CG representations for huge biomolecules is always a formidable challenge. In this work, we propose a new method called fluctuation maximization coarse-graining (FM-CG) to construct the CG sites of biomolecules. The defined residual in FM-CG converges to a maximal value as the number of CG sites increases, allowing an optimal CG model to be rigorously defined on the basis of the maximum. More importantly, we developed a robust algorithm called stepwise local iterative optimization (SLIO) to accelerate the process of coarse-graining large biomolecules. By means of the efficient SLIO algorithm, the computational cost of coarse-graining large biomolecules is reduced to within the time scale of seconds, which is far lower than that of conventional simulated annealing. The coarse-graining of two huge systems, chaperonin GroEL and lengsin, indicates that our new methods can coarse-grain huge biomolecular systems with up to 10,000 residues within the time scale of minutes. The further parametrization of CG sites derived from FM-CG allows us to construct the corresponding CG models for studies of the functions of huge biomolecular systems.