Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2010-01-01
A study is underway to develop a multivariable parametric cost model for space telescopes. Cost and engineering parametric data have been collected on 30 different space telescopes. Statistical correlations have been developed among 19 of the 59 variables sampled. Single-variable and multi-variable cost estimating relationships have been developed. Results are being published.
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
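The single-variable CERs described here follow a power-law form. As a minimal illustration (synthetic numbers, not the paper's data or fitted coefficients), such a CER can be fit by ordinary least squares in log-log space:

```python
import numpy as np

# Illustrative sketch (not the paper's actual data or coefficients):
# a single-variable CER of the power-law form Cost = a * D^b is
# typically fit by ordinary least squares in log-log space.
rng = np.random.default_rng(0)

D = np.array([0.3, 0.85, 1.0, 2.4, 3.5, 6.5])           # aperture diameter, m (hypothetical)
cost = 50.0 * D**1.7 * rng.lognormal(0.0, 0.2, D.size)  # synthetic costs, $M

b, log_a = np.polyfit(np.log(D), np.log(cost), 1)       # slope = diameter exponent
print(f"fitted CER: Cost ~ {np.exp(log_a):.1f} * D^{b:.2f}")
```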
Multivariable Parametric Cost Model for Ground Optical Telescope Assembly
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia
2005-01-01
A parametric cost model for ground-based telescopes is developed using multivariable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature are examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter are derived.
Multivariable Parametric Cost Model for Ground Optical Telescope Assembly
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter were derived.
Preliminary Multi-Variable Parametric Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd
2010-01-01
This slide presentation reviews the creation of a preliminary multi-variable cost model for the contract costs of making a space telescope. It discusses the methodology for collecting the data, defines the statistical analysis methodology, presents single-variable model results, tests historical models, and introduces the multi-variable models.
Preliminary Multivariable Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2010-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. Previously, the authors published two single-variable cost models based on 19 flight missions. The current paper presents the development of a multi-variable space telescope cost model. The validity of previously published models is tested. Cost estimating relationships which are and are not significant cost drivers are identified. And, interrelationships between variables are explored.
Ground-Based Telescope Parametric Cost Model
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
Cost Modeling for Space Telescope
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2011-01-01
Parametric cost models are an important tool for planning missions, comparing concepts and justifying technology investments. This paper presents ongoing efforts to develop single-variable and multi-variable cost models for the space telescope optical telescope assembly (OTA). These models are based on data collected from historical space telescope missions. Standard statistical methods are used to derive CERs for OTA cost versus aperture diameter and mass. The results are compared with previously published models.
Preliminary Multi-Variable Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd
2010-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.
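A two-variable CER of the kind summarized above (diameter plus launch-year, with the stated 50%-per-17-years learning rate) can be sketched as a linear regression on logs; all data and coefficients below are synthetic assumptions, not the paper's results:

```python
import numpy as np

# Hedged sketch: a two-variable CER of the form Cost = a * D^b * exp(c*Y)
# (Y = years since a reference epoch) becomes linear after taking logs:
# log(Cost) = log(a) + b*log(D) + c*Y, solvable by ordinary least squares.
rng = np.random.default_rng(1)
n = 12
D = rng.uniform(0.3, 6.5, n)            # aperture diameter, m (synthetic)
Y = rng.uniform(0, 40, n)               # years since reference epoch (synthetic)
cost = 40 * D**1.7 * np.exp(-0.04 * Y) * rng.lognormal(0, 0.15, n)

X = np.column_stack([np.ones(n), np.log(D), Y])
(log_a, b, c), *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
print(f"Cost ~ {np.exp(log_a):.1f} * D^{b:.2f} * exp({c:.3f}*Y)")
# a yearly factor of exp(-0.04) compounds to roughly a 50% reduction over 17 years
```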
Preliminary Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Prince, F. Andrew; Smart, Christian; Stephens, Kyle; Henrichs, Todd
2009-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. However, great care is required. Some space telescope cost models, such as those based only on mass, lack sufficient detail to support such analysis and may lead to inaccurate conclusions. Similarly, using ground-based telescope models which include the dome cost will also lead to inaccurate conclusions. This paper reviews current and historical models. Then, based on data from 22 different NASA space telescopes, this paper tests those models and presents preliminary analysis of single and multi-variable space telescope cost models.
Update on Multi-Variable Parametric Cost Models for Ground and Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2012-01-01
Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper reports on recent revisions and improvements to our ground telescope cost model and refinements of our understanding of space telescope cost models. One interesting observation is that while space telescopes are 50X to 100X more expensive than ground telescopes, their respective scaling relationships are similar. Another interesting speculation is that the role of technology development may be different between ground and space telescopes. For ground telescopes, the data indicates that technology development tends to reduce cost by approximately 50% every 20 years. But for space telescopes, there appears to be no such cost reduction because we do not tend to re-fly similar systems. Thus, instead of reducing cost, 20 years of technology development may be required to enable a doubling of space telescope capability. Other findings include: mass should not be used to estimate cost; spacecraft and science instrument costs account for approximately 50% of total mission cost; and, integration and testing accounts for only about 10% of total mission cost.
Towards a Multi-Variable Parametric Cost Model for Ground and Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd
2016-01-01
Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ≈ X · D^(1.75 ± 0.05) · λ^(−0.5 ± 0.25) · T^(−0.25) · e^(−0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advance and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e., multiple-aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost savings does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).
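The hypothesized scaling law can be evaluated directly for relative cost comparisons. The sketch below uses the nominal exponents; the multiplier X and all input values are hypothetical:

```python
import numpy as np

# Minimal sketch of the hypothesized scaling law for *relative* cost
# comparisons; the multiplier X and all inputs below are hypothetical.
def ota_relative_cost(D, lam_um, T_K, Y, X=1.0):
    """OTA cost ~ X * D^1.75 * lambda^-0.5 * T^-0.25 * exp(-0.04*Y)."""
    return X * D**1.75 * lam_um**-0.5 * T_K**-0.25 * np.exp(-0.04 * Y)

# e.g., doubling aperture at fixed wavelength, temperature, and epoch:
ratio = ota_relative_cost(4.0, 0.5, 280, 0) / ota_relative_cost(2.0, 0.5, 280, 0)
print(f"2x aperture -> {ratio:.2f}x cost")   # 2^1.75 ~ 3.36
```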
Multivariable parametric cost model for space and ground telescopes
NASA Astrophysics Data System (ADS)
Stahl, H. Philip; Henrichs, Todd
2016-09-01
Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ≈ X · D^(1.75 ± 0.05) · λ^(−0.5 ± 0.25) · T^(−0.25) · e^(−0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advance and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e., multiple-aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost savings does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).
Gao, Lan; Hu, Hao; Zhao, Fei-Li; Li, Shu-Chuen
2016-01-01
Objectives: To systematically review cost-of-illness studies for schizophrenia (SC), epilepsy (EP) and type 2 diabetes mellitus (T2DM) and explore the transferability of direct medical cost across countries. Methods: A comprehensive literature search was performed to yield studies that estimated direct medical costs. A generalized linear model (GLM) with gamma distribution and log link was utilized to explore the variation in costs accounted for by the included factors. Both parametric (random-effects model) and non-parametric (bootstrapping) meta-analyses were performed to pool the converted raw cost data (expressed as a percentage of GDP/capita of the country where the study was conducted). Results: In total, 93 articles were included (40 studies for T2DM, 34 for EP and 19 for SC). Significant variance was detected across and within disease classes for the direct medical costs. Multivariate analysis identified GDP/capita (p<0.05) as a significant factor contributing to the large variance in the cost results. Bootstrapping meta-analysis generated more conservative estimations with slightly wider 95% confidence intervals (CI) than the parametric meta-analysis, yielding a mean (95% CI) of 16.43% (11.32, 21.54) for T2DM, 36.17% (22.34, 50.00) for SC and 10.49% (7.86, 13.41) for EP. Conclusions: Converting the raw cost data into a percentage of GDP/capita of each country was demonstrated to be a feasible approach to transfer direct medical costs across countries. The approach from our study, combining an estimated direct cost value with the size of the specific disease population in each jurisdiction, could be used for a quick check on the economic burden of a particular disease in countries without such data. PMID:26814959
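The non-parametric pooling step described here can be sketched as a bootstrap over study-level costs expressed as a percentage of GDP/capita; all numbers below are invented for illustration:

```python
import numpy as np

# Hedged sketch of the non-parametric pooling step: study-level costs
# (all values below are made up) are expressed as a percentage of the
# study country's GDP/capita and a bootstrap CI is taken over studies.
rng = np.random.default_rng(2)
cost_usd = np.array([3200., 1500., 5400., 2100., 900., 4800.])   # hypothetical study means
gdp_pc   = np.array([21000., 9000., 52000., 15000., 6000., 38000.])
pct_gdp = 100.0 * cost_usd / gdp_pc

boot = np.array([rng.choice(pct_gdp, pct_gdp.size, replace=True).mean()
                 for _ in range(10000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"pooled mean: {pct_gdp.mean():.2f}% of GDP/capita (95% CI {lo:.2f}, {hi:.2f})")
```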
NASA Astrophysics Data System (ADS)
Vittal, H.; Singh, Jitendra; Kumar, Pankaj; Karmakar, Subhankar
2015-06-01
In watershed management, flood frequency analysis (FFA) is performed to quantify the risk of flooding at different spatial locations and to provide guidelines for determining the design periods of flood control structures. Traditional FFA was extensively performed under a univariate scenario for both at-site and regional estimation of return periods. However, due to the inherent mutual dependence of the flood variables or characteristics [i.e., peak flow (P), flood volume (V) and flood duration (D), which are random in nature], analysis has been further extended to the multivariate scenario, with some restrictive assumptions. To overcome the assumption of the same family of marginal density function for all flood variables, the concept of the copula has been introduced. Although the advancement from univariate to multivariate analyses drew formidable attention from the FFA research community, the basic limitation was that the analyses were performed with only parametric families of distributions. The aim of the current study is to emphasize the importance of nonparametric approaches in the field of multivariate FFA; however, a nonparametric distribution may not always be a good fit or capable of replacing well-implemented multivariate parametric and multivariate copula-based applications. Nevertheless, the potential of obtaining the best fit using nonparametric distributions might be improved because such distributions reproduce the sample's characteristics, resulting in more accurate estimations of the multivariate return period. Hence, the current study shows the importance of conjugating the multivariate nonparametric approach with multivariate parametric and copula-based approaches, thereby resulting in a comprehensive framework for complete at-site FFA. Although the proposed framework is designed for at-site FFA, it can also be applied to regional FFA because regional estimations ideally include at-site estimations. The framework is based on the following steps: (i) comprehensive trend analysis to assess nonstationarity in the observed data; (ii) selection of the best-fit univariate marginal distribution from a comprehensive set of parametric and nonparametric distributions for the flood variables; (iii) multivariate frequency analyses with parametric, copula-based and nonparametric approaches; and (iv) estimation of joint and various conditional return periods. The proposed framework is demonstrated using 110 years of observed data from the Allegheny River at Salamanca, New York, USA. The results show that for both the univariate and multivariate cases, the nonparametric Gaussian kernel provides the best estimate. Further, we perform FFA for twenty major rivers over the continental USA, which shows that for seven rivers all the flood variables follow the nonparametric Gaussian kernel, whereas for the other rivers parametric distributions provide the best fit for one or two flood variables. In summary, the results show that the nonparametric method cannot substitute for the parametric and copula-based approaches, but should be considered during any at-site FFA to provide the broadest choice for best estimation of the flood return periods.
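As an illustration of the study's central non-parametric tool, the sketch below fits a multivariate Gaussian kernel density to synthetic peak-flow/volume pairs and estimates a joint ("AND") return period by Monte Carlo; thresholds and data are assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hedged sketch of a nonparametric (Gaussian-kernel) estimate of the joint
# "AND" return period T = 1 / P(peak > p, volume > v); data are synthetic.
rng = np.random.default_rng(3)
peak = rng.gamma(4.0, 250.0, 110)              # annual peak flow (m^3/s), synthetic
volume = 0.8 * peak + rng.normal(0, 150, 110)  # correlated flood volume, synthetic

kde = gaussian_kde(np.vstack([peak, volume]))
sims = kde.resample(200000, seed=3)            # Monte Carlo draws from the fitted KDE
p_exc = np.mean((sims[0] > 1500) & (sims[1] > 1200))
print(f"joint exceedance {p_exc:.4f} -> 'AND' return period {1/p_exc:.1f} years")
```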
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1995-01-01
Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.
NASA Technical Reports Server (NTRS)
Sanchez Pena, Ricardo S.; Sideris, Athanasios
1988-01-01
A computer program implementing an algorithm for computing the multivariable stability margin to check the robust stability of feedback systems with real parametric uncertainty is proposed. The authors present in some detail important aspects of the program. An example is presented using a lateral-directional control system.
Weight and the Future of Space Flight Hardware Cost Modeling
NASA Technical Reports Server (NTRS)
Prince, Frank A.
2003-01-01
Weight has been used as the primary input variable for cost estimating almost as long as there have been parametric cost models. While there are good reasons for using weight, serious limitations exist. These limitations have been addressed by multi-variable equations and trend analysis in models such as NAFCOM, PRICE, and SEER; however, these models have not been able to address the significant time lags that can occur between the development of similar space flight hardware systems. These time lags make the cost analyst's job difficult because insufficient data exists to perform trend analysis, and the current set of parametric models is not well suited to accommodating process improvements in space flight hardware design, development, build and test. As a result, people of good faith can have serious disagreement over the cost for new systems. To address these shortcomings, new cost modeling approaches are needed. The most promising approach is process-based (sometimes called activity) costing. Developing process-based models will require a detailed understanding of the functions required to produce space flight hardware combined with innovative approaches to estimating the necessary resources. Particularly challenging will be the lack of data at the process level. One method for developing a model is to combine notional algorithms with a discrete event simulation and model changes to the total cost as perturbations to the program are introduced. Despite these challenges, the potential benefits are such that efforts should be focused on developing process based cost models.
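A process-based cost model of the kind proposed can be sketched as a Monte Carlo over notional per-process cost algorithms with stochastic perturbations (e.g., rework); every step and number below is notional:

```python
import numpy as np

# Hedged sketch of a process-based (activity) cost model: each production
# step gets a notional cost algorithm, and program perturbations are
# modeled as stochastic multipliers. All steps and numbers are notional.
rng = np.random.default_rng(4)
steps = {"design": 12.0, "fabrication": 30.0, "assembly": 9.0, "test": 14.0}  # $M, notional

def total_cost(rework_prob=0.15, n_sims=20000):
    base = np.array(list(steps.values()))
    # lognormal noise on each step, plus occasional rework that inflates a step
    noise = rng.lognormal(0.0, 0.1, (n_sims, base.size))
    rework = rng.random((n_sims, base.size)) < rework_prob
    return (base * noise * (1 + 0.5 * rework)).sum(axis=1)

sims = total_cost()
print(f"mean ${sims.mean():.1f}M, 80th percentile ${np.percentile(sims, 80):.1f}M")
```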
Two-sample tests and one-way MANOVA for multivariate biomarker data with nondetects.
Thulin, M
2016-09-10
Testing whether the mean vector of a multivariate set of biomarkers differs between several populations is an increasingly common problem in medical research. Biomarker data are often left-censored because some measurements fall below the laboratory's detection limit. We investigate how such censoring affects multivariate two-sample and one-way multivariate analysis of variance tests. Type I error rates, power and robustness to increasing censoring are studied, under both normality and non-normality. Parametric tests are found to perform better than non-parametric alternatives, indicating that the current recommendations for analysis of censored multivariate data may have to be revised. Copyright © 2016 John Wiley & Sons, Ltd.
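One parametric test of the kind studied is the two-sample Hotelling T²; the sketch below applies it to synthetic biomarker data with nondetects handled by half-detection-limit substitution, a common but crude device that is not necessarily the paper's recommended treatment:

```python
import numpy as np
from scipy.stats import f

# Hedged sketch: two-sample Hotelling T^2 on multivariate biomarker data in
# which nondetects are replaced by half the detection limit (a common but
# crude substitution; not necessarily the paper's recommended handling).
rng = np.random.default_rng(5)
DL = 0.5                                            # detection limit (hypothetical)
x = rng.lognormal(0.0, 0.6, (20, 3))                # group 1, 3 biomarkers
y = rng.lognormal(0.3, 0.6, (25, 3))                # group 2
x[x < DL] = DL / 2                                  # substitute nondetects
y[y < DL] = DL / 2

n1, n2, p = len(x), len(y), x.shape[1]
S = ((n1 - 1) * np.cov(x.T) + (n2 - 1) * np.cov(y.T)) / (n1 + n2 - 2)  # pooled cov
d = x.mean(0) - y.mean(0)
T2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
F = T2 * (n1 + n2 - p - 1) / (p * (n1 + n2 - 2))    # F transform of T^2
print(f"T^2 = {T2:.2f}, p = {f.sf(F, p, n1 + n2 - p - 1):.4f}")
```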
NASA Astrophysics Data System (ADS)
Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom
2018-05-01
Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly-damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed. As a special case recently developed SISO algorithms are recovered. The proposed approach is successfully demonstrated on simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly less parameters compared with alternative approaches.
Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob
2016-08-01
The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions. It is applied to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel, non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.
Elgart, Jorge Federico; Prestes, Mariana; Gonzalez, Lorena; Rucci, Enzo; Gagliardino, Juan Jose
2017-01-01
Despite the frequent association of obesity with type 2 diabetes (T2D), the effect of the former on the cost of drug treatment of the latter has not been specifically addressed. We studied the association of overweight/obesity with the cost of drug treatment of hyperglycemia, hypertension and dyslipidemia in a population with T2D. This observational study utilized data from the QUALIDIAB database on 3,099 T2D patients seen in Diabetes Centers in Argentina, Chile, Colombia, Peru, and Venezuela. Data were grouped according to body mass index (BMI) as Normal (18.5≤BMI<25), Overweight (25≤BMI<30), and Obese (BMI≥30). Thereafter, we assessed clinical and metabolic data and the cost of drug treatment in each category. Statistical analyses included group comparisons for continuous variables (parametric or non-parametric tests), Chi-square tests for differences between proportions, and multivariable regression analysis to assess the association between BMI and the monthly cost of drug treatment. Although all groups showed a comparable degree of glycometabolic control (FBG, HbA1c), we found significant differences in other metabolic control indicators. The total cost of drug treatment of hyperglycemia and associated cardiovascular risk factors (CVRF) increased significantly (p<0.001) with increasing BMI. Hyperglycemia treatment cost showed a significant increase concordant with BMI, whereas hypertension and dyslipidemia costs did not. Despite different values and percentages of increase, this growing cost profile was reproduced in every participating country. BMI significantly and independently affected hyperglycemia treatment cost. Our study shows for the first time that BMI significantly increases total expenditure on drugs for T2D and its associated CVRF treatment in Latin America.
NASA Astrophysics Data System (ADS)
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock price. The MARS model, a nonparametric method, is an adaptive method for regression that suits problems with high dimensions and several variables. Smoothing splines is a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock price with both approaches. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
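The core MARS idea, piecewise-linear hinge basis functions fitted by least squares, can be sketched as follows; real MARS adds an adaptive forward/backward knot search, and the variable and data here are synthetic stand-ins:

```python
import numpy as np

# Hedged sketch of the MARS idea: piecewise-linear hinge basis functions
# max(0, x - k) and max(0, k - x) fitted by least squares. Real MARS adds
# an adaptive forward/backward search over knots; here knots are fixed.
rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 200)                       # e.g., a scaled P/E ratio (synthetic)
price = 5 + 2 * np.maximum(0, x - 4) - 1.5 * np.maximum(0, 6 - x) + rng.normal(0, 1, 200)

knots = [2.0, 4.0, 6.0, 8.0]
B = np.column_stack([np.ones_like(x)] +
                    [np.maximum(0, x - k) for k in knots] +
                    [np.maximum(0, k - x) for k in knots])
coef, *_ = np.linalg.lstsq(B, price, rcond=None)
print("fitted basis coefficients:", np.round(coef, 2))
```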
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs using parametric cost estimate data. This is accomplished by using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined by the study team to a low enough level of detail to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included, along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated with the proprietary parametric cost model (PCM), with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
NASA Astrophysics Data System (ADS)
Durmaz, Murat; Karslioglu, Mahmut Onur
2015-04-01
There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines which is a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
Parametric modelling of cost data in medical studies.
Nixon, R M; Thompson, S G
2004-04-30
The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BCa bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However, for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to the choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
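The model-fitting step can be sketched by fitting candidate distributions by maximum likelihood and comparing AIC; the cost data below are synthetic:

```python
import numpy as np
from scipy import stats

# Hedged sketch: fit candidate parametric distributions to skewed cost data
# by maximum likelihood and compare AIC; the cost values are synthetic.
rng = np.random.default_rng(7)
costs = rng.lognormal(mean=7.0, sigma=0.9, size=150)   # synthetic per-patient costs

for name, dist in [("normal", stats.norm), ("gamma", stats.gamma),
                   ("log-normal", stats.lognorm)]:
    params = dist.fit(costs)                           # MLE fit
    ll = dist.logpdf(costs, *params).sum()
    aic = 2 * len(params) - 2 * ll
    print(f"{name:10s} AIC = {aic:8.1f}")

# population-mean estimate under the best-fitting (log-normal) model:
s, loc, scale = stats.lognorm.fit(costs)
print("lognormal mean cost:", stats.lognorm.mean(s, loc, scale))
```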
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation study and a case study. The approaches were compared based on point estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e., n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e., n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
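The two approaches compared in the study can be sketched as follows for a Weibull time-to-event distribution (synthetic data, censoring ignored for brevity; in practice the multivariate Normal's covariance would come from the Fisher information rather than, as here, from the bootstrap itself):

```python
import numpy as np
from scipy import stats

# Hedged sketch of the two approaches for reflecting parameter uncertainty
# in a distribution used for stochastic uncertainty (here a Weibull fit to
# synthetic time-to-event data, ignoring censoring for brevity).
rng = np.random.default_rng(8)
times = stats.weibull_min.rvs(1.4, scale=20, size=100, random_state=8)

# 1) non-parametric bootstrap: refit the distribution to resampled data
boot = np.array([stats.weibull_min.fit(rng.choice(times, times.size), floc=0)[::2]
                 for _ in range(500)])          # (shape, scale) pairs

# 2) multivariate Normal over parameters, moments taken here from the bootstrap
mvn_draws = rng.multivariate_normal(boot.mean(0), np.cov(boot.T), size=500)

print("bootstrap mean (shape, scale):", boot.mean(0).round(3))
print("MVN       mean (shape, scale):", mvn_draws.mean(0).round(3))
```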
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik
2014-05-16
Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. The computational cost of the simulations at high resolution is often expensive and becomes impractical for parametric studies at different input values. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and the prior distributions facilitates the different Markov chain Monte Carlo (MCMC) movements. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and BTMGP to model the multiphase flow in a full scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.
The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...
Update on Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2011-01-01
Since the June 2010 Astronomy Conference, an independent review of our cost database discovered some inaccuracies and inconsistencies which can modify our previously reported results. This paper will review changes to the database, our confidence in those changes, and their effect on various parametric cost models.
A Simplified, General Approach to Simulating from Multivariate Copula Functions
Barry Goodwin
2012-01-01
Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses "probability...
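For the special case of a Gaussian copula, the generic simulation recipe alluded to is short; the correlation matrix and margins below are arbitrary choices, not the paper's:

```python
import numpy as np
from scipy import stats

# Hedged sketch of the generic recipe: draw correlated Normals, map them to
# uniforms with the Normal CDF (the copula step), then apply the inverse CDF
# of each desired margin. Correlation matrix and margins are arbitrary here.
rng = np.random.default_rng(9)
R = np.array([[1.0, 0.6], [0.6, 1.0]])            # target (Gaussian) copula correlation

z = rng.multivariate_normal([0, 0], R, size=5000)
u = stats.norm.cdf(z)                             # uniform margins, dependence preserved
x1 = stats.gamma.ppf(u[:, 0], a=2.0, scale=3.0)   # first margin: gamma
x2 = stats.lognorm.ppf(u[:, 1], s=0.5)            # second margin: log-normal

rho = stats.spearmanr(x1, x2).correlation
print(f"sample Spearman rho: {rho:.3f}")
```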
NASA Astrophysics Data System (ADS)
Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.
2015-12-01
Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression splines (MARS) and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 × 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, L.T.; Hickey, M.
This paper summarizes the progress to date by CH2M HILL and the UKAEA in development of a parametric modelling capability for estimating the costs of large nuclear decommissioning projects in the United Kingdom (UK) and Europe. The ability to successfully apply parametric cost estimating techniques will be a key factor to commercial success in the UK and European multi-billion dollar waste management, decommissioning and environmental restoration markets. The most useful parametric models will be those that incorporate individual components representing major elements of work: reactor decommissioning, fuel cycle facility decommissioning, waste management facility decommissioning and environmental restoration. Models must be sufficiently robust to estimate indirect costs and overheads, permit pricing analysis and adjustment, and accommodate the intricacies of international monetary exchange, currency fluctuations and contingency. The development of a parametric cost estimating capability is also a key component in building a forward estimating strategy. The forward estimating strategy will enable the preparation of accurate and cost-effective out-year estimates, even when work scope is poorly defined or as yet indeterminate. Preparation of cost estimates for work outside the organization's current sites, for which detailed measurement is not possible and historical cost data do not exist, will also be facilitated. (authors)
Upatising, Benjavan; Wood, Douglas L; Kremers, Walter K; Christ, Sharon L; Yih, Yuehwern; Hanson, Gregory J; Takahashi, Paul Y
2015-01-01
From 1992 to 2008, older adults in the United States incurred more healthcare expense per capita than any other age group. Home telemonitoring has emerged as a potential solution to reduce these costs, but evidence is mixed. The primary aim of the study was to evaluate whether the mean difference in total direct medical cost consequence between older adults receiving additional home telemonitoring care (TELE) (n=102) and those receiving usual medical care (UC) (n=103) was significant. Inpatient, outpatient, emergency department, decedent, survivor, and 30-day readmission costs were evaluated as a secondary aim. Multivariate generalized linear models (GLMs) and a parametric bootstrapping method were used to model cost and to determine the significance of the cost differences. We also compared the differences in arithmetic mean costs. From the conditional GLMs, the estimated mean cost differences (TELE versus UC) for total, inpatient, outpatient, and ED costs were -$9,537 (p=0.068), -$8,482 (p=0.098), -$1,160 (p=0.177), and $106 (p=0.619), respectively. Mean postenrollment cost was 11% lower than the prior year for TELE versus 22% higher for UC. The ratio of mean cost for decedents to survivors was 2.1:1 (TELE) versus 12.7:1 (UC). There were no significant differences in the mean total cost between the two treatment groups. The TELE group had less variability in cost of care, a lower decedents-to-survivors cost ratio, and lower total 30-day readmission cost than the UC group.
Binquet, C; Abrahamowicz, M; Mahboubi, A; Jooste, V; Faivre, J; Bonithon-Kopp, C; Quantin, C
2008-12-30
Flexible survival models, which avoid assumptions about hazards proportionality (PH) or linearity of continuous covariates' effects, bring the issues of model selection to a new level of complexity. Each 'candidate covariate' requires inter-dependent decisions regarding (i) its inclusion in the model, and representation of its effects on the log hazard as (ii) either constant over time or time-dependent (TD) and, for continuous covariates, (iii) either loglinear or non-loglinear (NL). Moreover, 'optimal' decisions for one covariate depend on the decisions regarding others. Thus, some efficient model-building strategy is necessary. We carried out an empirical study of the impact of the model selection strategy on the estimates obtained in flexible multivariable survival analyses of prognostic factors for mortality in 273 gastric cancer patients. We used 10 different strategies to select alternative multivariable parametric as well as spline-based models, allowing flexible modeling of non-parametric (TD and/or NL) effects. We employed 5-fold cross-validation to compare the predictive ability of alternative models. All flexible models indicated significant non-linearity and changes over time in the effect of age at diagnosis. Conventional 'parametric' models suggested the lack of a period effect, whereas more flexible strategies indicated a significant NL effect. Cross-validation confirmed that flexible models predicted mortality better. The resulting differences in the 'final model' selected by various strategies also had an impact on the risk prediction for individual subjects. Overall, our analyses underline (a) the importance of accounting for significant non-parametric effects of covariates and (b) the need for developing accurate model selection strategies for flexible survival analyses. Copyright 2008 John Wiley & Sons, Ltd.
Kargarian-Marvasti, Sadegh; Rimaz, Shahnaz; Abolghasemi, Jamileh; Heydari, Iraj
2017-01-01
Cox proportional hazards model is the most common method for analyzing the effects of several variables on survival time. However, under certain circumstances, parametric models give more precise estimates for analyzing survival data than Cox. The purpose of this study was to investigate the comparative performance of Cox and parametric models in a survival analysis of factors affecting the event time of neuropathy in patients with type 2 diabetes. This study included 371 patients with type 2 diabetes without neuropathy who were registered at the Fereydunshahr diabetes clinic. Subjects were followed up for the development of neuropathy from 2006 to March 2016. To investigate the factors influencing the event time of neuropathy, significant variables in the univariate model (P < 0.20) were entered into the multivariate Cox and parametric models (P < 0.05). In addition, the Akaike information criterion (AIC) and the area under ROC curves were used to evaluate the relative goodness of fit and the efficiency of each procedure, respectively. Statistical computing was performed using R software version 3.2.3 (UNIX platforms, Windows and MacOS). Using Kaplan-Meier, survival time to neuropathy was computed as 76.6 ± 5 months after the initial diagnosis of diabetes. After multivariate analysis with the Cox and parametric models, ethnicity, high-density lipoprotein and family history of diabetes were identified as predictors of the event time of neuropathy (P < 0.05). According to AIC, the log-normal model with the lowest AIC was the best-fitted model among the Cox and parametric models. According to the comparison of survival receiver operating characteristic curves, the log-normal model was considered the most efficient and best-fitted model.
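The AIC comparison between parametric survival models can be sketched with a hand-rolled censored likelihood (events contribute log f, censored subjects log S); the data are synthetic and only two candidate families are shown:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Hedged sketch of comparing parametric survival models by AIC with
# right-censored data: log-likelihood = sum(log f) over events +
# sum(log S) over censored subjects. Data below are synthetic.
rng = np.random.default_rng(10)
t_true = stats.lognorm.rvs(0.8, scale=60, size=200, random_state=10)
c = rng.uniform(20, 120, 200)                  # censoring times
t, event = np.minimum(t_true, c), t_true <= c  # observed time, event indicator

def neg_ll_lognorm(theta):
    s, scale = np.exp(theta)                   # positivity via log-parameters
    return -(stats.lognorm.logpdf(t[event], s, scale=scale).sum()
             + stats.lognorm.logsf(t[~event], s, scale=scale).sum())

def neg_ll_weibull(theta):
    k, scale = np.exp(theta)
    return -(stats.weibull_min.logpdf(t[event], k, scale=scale).sum()
             + stats.weibull_min.logsf(t[~event], k, scale=scale).sum())

for name, nll in [("log-normal", neg_ll_lognorm), ("Weibull", neg_ll_weibull)]:
    fit = minimize(nll, x0=np.log([1.0, 50.0]), method="Nelder-Mead")
    print(f"{name:10s} AIC = {2 * 2 + 2 * fit.fun:.1f}")   # AIC = 2k - 2*loglik
```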
A note on a simplified and general approach to simulating from multivariate copula functions
Barry K. Goodwin
2013-01-01
Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses "probability...
Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics
Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven
2011-01-01
Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as to characterize the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects whereby gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
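The univariate counterpart of the proposed tests can be sketched as a per-biomarker Wilcoxon signed-rank test with Benjamini-Hochberg FDR control; the paired data below are simulated:

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

# Hedged sketch of the univariate analogue: a Wilcoxon signed-rank test per
# biomarker on paired baseline-vs-induction summaries (synthetic data),
# with Benjamini-Hochberg control of the false discovery rate.
rng = np.random.default_rng(11)
n_subj, n_markers = 22, 31
baseline = rng.lognormal(0, 1, (n_subj, n_markers))
induced = baseline * rng.lognormal(0.15, 0.4, (n_subj, n_markers))  # some shift

pvals = np.array([wilcoxon(induced[:, j], baseline[:, j]).pvalue
                  for j in range(n_markers)])
reject, p_adj, *_ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} of {n_markers} biomarkers significant after FDR control")
```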
NASA Astrophysics Data System (ADS)
Thelen, Brian J.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.
2017-04-01
In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as Chernoff, Bhattacharyya, and J-divergence. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general, what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been around for a while but was recently made more prominent in some recent research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data, and to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on Bayes error. In this paper we present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, and often times dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
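A simple Monte Carlo estimator of the triangle divergence δ(P,Q) = ∫ (p−q)²/(p+q) dx follows from sampling the equal-weight mixture m = (p+q)/2, under which δ = E_m[2(p−q)²/(p+q)²]; the two Gaussians below are arbitrary examples:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

# Hedged sketch: Monte Carlo estimate of the triangle (triangular
# discrimination) divergence between two multivariate Gaussians, sampling
# from the equal-weight mixture m = (p+q)/2.
rng = np.random.default_rng(12)
P = mvn(mean=[0, 0], cov=np.eye(2))
Q = mvn(mean=[1, 0.5], cov=np.eye(2) * 1.5)

n = 100000
pick = rng.random(n) < 0.5                    # mixture draw: half from P, half from Q
x = np.where(pick[:, None], P.rvs(n, random_state=1), Q.rvs(n, random_state=2))
p, q = P.pdf(x), Q.pdf(x)
delta = np.mean(2 * (p - q) ** 2 / (p + q) ** 2)
print(f"triangle divergence estimate: {delta:.4f} (0 = identical, 2 = disjoint)")
```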
X-1 to X-Wings: Developing a Parametric Cost Model
NASA Technical Reports Server (NTRS)
Sterk, Steve; McAtee, Aaron
2015-01-01
In today's cost-constrained environment, NASA needs an X-Plane database and parametric cost model that can quickly provide rough-order-of-magnitude predictions of cost from initial concept to first flight of potential X-Plane aircraft. This paper takes a look at the steps taken in developing such a model and reports the results. The challenges encountered in the collection of historical data and recommendations for future database management are discussed. A step-by-step discussion of the development of Cost Estimating Relationships (CERs) is then covered.
Fast computation of the multivariable stability margin for real interrelated uncertain parameters
NASA Technical Reports Server (NTRS)
Sideris, Athanasios; Sanchez Pena, Ricardo S.
1988-01-01
A novel algorithm for computing the multivariable stability margin for checking the robust stability of feedback systems with real parametric uncertainty is proposed. This method eliminates the need for the frequency search involved in another given algorithm by reducing it to checking a finite number of conditions. These conditions have a special structure, which allows a significant improvement on the speed of computations.
Developing integrated parametric planning models for budgeting and managing complex projects
NASA Technical Reports Server (NTRS)
Etnyre, Vance A.; Black, Ken U.
1988-01-01
The applicability of integrated parametric models for the budgeting and management of complex projects is investigated. Methods for building a very flexible, interactive prototype for a project planning system, and software resources available for this purpose, are discussed and evaluated. The prototype is required to be sensitive to changing objectives, changing target dates, changing costs relationships, and changing budget constraints. To achieve the integration of costs and project and task durations, parametric cost functions are defined by a process of trapezoidal segmentation, where the total cost for the project is the sum of the various project cost segments, and each project cost segment is the integral of a linearly segmented cost loading function over a specific interval. The cost can thus be expressed algebraically. The prototype was designed using Lotus-123 as the primary software tool. This prototype implements a methodology for interactive project scheduling that provides a model of a system that meets most of the goals for the first phase of the study and some of the goals for the second phase.
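The trapezoidal-segmentation idea reduces to exact trapezoid areas under a piecewise-linear cost loading function; the breakpoints and rates below are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of the trapezoidal-segmentation idea: a project's cost
# loading (spend rate) is piecewise linear over time, so each segment's
# cost is the exact trapezoid area and total cost is their sum. The
# breakpoints and rates below are illustrative only.
t = np.array([0.0, 3.0, 9.0, 14.0, 18.0])        # segment boundaries, months
rate = np.array([0.0, 2.5, 2.5, 1.0, 0.0])       # $M per month at each boundary

segment_costs = 0.5 * (rate[:-1] + rate[1:]) * np.diff(t)   # trapezoid areas
print("segment costs ($M):", segment_costs)
print("total project cost ($M):", segment_costs.sum())      # == np.trapz(rate, t)
```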
Marginally specified priors for non-parametric Bayesian estimation
Kessler, David C.; Hoff, Peter D.; Dunson, David B.
2014-01-01
Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813
A semi-parametric within-subject mixture approach to the analyses of responses and response times.
Molenaar, Dylan; Bolsinova, Maria; Vermunt, Jeroen K
2018-05-01
In item response theory, modelling the item response times in addition to the item responses may improve the detection of possible between- and within-subject differences in the process that resulted in the responses. For instance, if respondents rely on rapid guessing on some items but not on all, the joint distribution of the responses and response times will be a multivariate within-subject mixture distribution. Suitable parametric methods to detect these within-subject differences have been proposed. In these approaches, a distribution needs to be assumed for the within-class response times. In this paper, it is demonstrated that these parametric within-subject approaches may produce false positives and biased parameter estimates if the assumption concerning the response time distribution is violated. A semi-parametric approach is proposed which resorts to categorized response times. This approach is shown to hardly produce false positives and parameter bias. In addition, the semi-parametric approach results in approximately the same power as the parametric approach. © 2017 The British Psychological Society.
Lie, Octavian V; van Mierlo, Pieter
2017-01-01
The visual interpretation of intracranial EEG (iEEG) is the standard method used in complex epilepsy surgery cases to map the regions of seizure onset targeted for resection. Still, visual iEEG analysis is labor-intensive and biased due to interpreter dependency. Multivariate parametric functional connectivity measures using adaptive autoregressive (AR) modeling of the iEEG signals based on the Kalman filter algorithm have been used successfully to localize the electrographic seizure onsets. Due to their high computational cost, these methods have been applied to a limited number of iEEG time-series (<60). The aim of this study was to test two Kalman filter implementations, a well-known multivariate adaptive AR model (Arnold et al. 1998) and a simplified, computationally efficient derivation of it, for their potential application to connectivity analysis of high-dimensional (up to 192 channels) iEEG data. When used on simulated seizures together with a multivariate connectivity estimator, the partial directed coherence, the two AR models were compared for their ability to reconstitute the designed seizure signal connections from noisy data. Next, focal seizures from iEEG recordings (73-113 channels) in three patients rendered seizure-free after surgery were mapped with the outdegree, a graph-theory index of outward directed connectivity. Simulation results indicated high levels of mapping accuracy for the two models in the presence of low-to-moderate noise cross-correlation. Accordingly, both AR models correctly mapped the real seizure onset to the resection volume. This study supports the possibility of conducting fully data-driven multivariate connectivity estimations on high-dimensional iEEG datasets using the Kalman filter approach.
A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Mitra, Ankan
2018-05-01
Recent advancements in textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for the spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against the other multi-objective optimization techniques, such as desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and constraints of the non-linear optimization problem, it can be successfully applied to other processes in textile industry to determine their optimal parametric settings.
Warton, David I; Thibaut, Loïc; Wang, Yi Alice
2017-01-01
Bootstrap methods are widely used in statistics, and bootstrapping of residuals can be especially useful in the regression context. However, difficulties are encountered extending residual resampling to regression settings where residuals are not identically distributed (thus not amenable to bootstrapping); common examples include logistic or Poisson regression and generalizations to handle clustered or multivariate data, such as generalised estimating equations. We propose a bootstrap method based on probability integral transform (PIT-) residuals, which we call the PIT-trap, which assumes data come from some marginal distribution F of known parametric form. This method can be understood as a type of "model-free bootstrap", adapted to the problem of discrete and highly multivariate data. PIT-residuals have the key property that they are (asymptotically) pivotal. The PIT-trap thus inherits the key property, not afforded by any other residual resampling approach, that the marginal distribution of data can be preserved under PIT-trapping. This in turn enables the derivation of some standard bootstrap properties, including second-order correctness of pivotal PIT-trap test statistics. In multivariate data, bootstrapping rows of PIT-residuals affords the property that it preserves correlation in data without the need for it to be modelled, a key point of difference as compared to a parametric bootstrap. The proposed method is illustrated on an example involving multivariate abundance data in ecology, and demonstrated via simulation to have improved properties as compared to competing resampling methods.
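A minimal sketch of the PIT-residual idea for discrete data, using a Poisson model as an example (my own illustration under stated assumptions; the function names and the stand-in for fitted means are hypothetical): each observation is mapped through its fitted CDF, randomized between F(y-1) and F(y) so that the residual is uniform when the model is correct, and whole rows of residuals are then resampled to preserve cross-variable correlation.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

def pit_residuals(y, mu):
    """Randomized PIT residuals for Poisson data: uniform on
    [F(y-1), F(y)] under a correctly specified model."""
    lo = poisson.cdf(y - 1, mu)            # F(y-1); equals 0 when y == 0
    hi = poisson.cdf(y, mu)                # F(y)
    return lo + rng.uniform(size=y.shape) * (hi - lo)

def pit_trap_resample(u, mu):
    """One PIT-trap resample: bootstrap whole rows of the residual matrix
    (preserving correlation across variables), then invert the fitted CDFs."""
    rows = rng.integers(0, u.shape[0], size=u.shape[0])
    return poisson.ppf(u[rows], mu)

# Hypothetical multivariate count data: 50 sites x 4 species
y = rng.poisson(3.0, size=(50, 4))
mu = np.tile(y.mean(axis=0), (50, 1))      # stand-in for model-fitted means
u = pit_residuals(y, mu)
y_star = pit_trap_resample(u, mu)          # one bootstrap data set
```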
Estimation of railroad capacity using parametric methods.
DOT National Transportation Integrated Search
2013-12-01
This paper reviews different methodologies used for railroad capacity estimation and presents a user-friendly method to measure capacity. The objective of this paper is to use multivariate regression analysis to develop a continuous relation of the d...
A population-based study of hospital care costs during five years after TIA and stroke
Luengo-Fernandez, Ramon; Gray, Alastair M.; Rothwell, Peter M.
2016-01-01
Background and Purpose: Few studies have evaluated long-term costs after stroke onset, with almost no cost data for TIA. We studied hospital costs during the 5 years after TIA or stroke in a population-based study. Methods: Patients from a UK population-based cohort study (Oxford Vascular Study) were recruited from 2002 to 2007. Analysis was based on follow-up until 2010. Hospital resource usage was obtained from patients’ hospital records and valued using 2008/09 unit costs. As not all patients had full 5-year follow-up, we used non-parametric censoring techniques. Results: Among 485 TIA and 729 stroke patients ascertained and included, mean censor-adjusted 5-year hospital costs after index stroke were $25,741 (95% CI: 23,659-27,914), with costs varying considerably by severity: $21,134 after minor stroke, $33,119 after moderate stroke, and $28,552 after severe stroke. For the 239 surviving stroke patients who had reached final follow-up, mean costs were $24,383 (20,156-28,595), with over half of costs ($12,972) being incurred in the first year after the event. After index TIA, the mean censor-adjusted 5-year costs were $18,091 (15,947-20,258). A multivariate analysis showed that event severity, recurrent stroke and coronary events after the index event were independent predictors of 5-year costs. Differences by stroke subtype were mostly explained by stroke severity and subsequent events. Conclusions: Long-term hospital costs after TIA and stroke are considerable, but are mainly incurred over the first year after the index event. Event severity and suffering subsequent stroke and coronary events after the index event accounted for much of the increase in costs. PMID:23160884
Introduction to multivariate discrimination
NASA Astrophysics Data System (ADS)
Kégl, Balázs
2013-07-01
Multivariate discrimination or classification is one of the best-studied problems in machine learning, with a plethora of well-tested and well-performing algorithms. There are also several good general textbooks [1-9] on the subject written for an average engineering, computer science, or statistics graduate student; most of them are also accessible to an average physics student with some background in computer science and statistics. Hence, instead of writing a generic introduction, we concentrate here on relating the subject to the practice of experimental physics. After a short introduction on the basic setup (Section 1) we delve into the practical issues of complexity regularization, model selection, and hyperparameter optimization (Section 2), since it is this step that makes high-complexity non-parametric fitting so different from low-dimensional parametric fitting. To emphasize that this issue is not restricted to classification, we illustrate the concept on a low-dimensional but non-parametric regression example (Section 2.1). Section 3 describes the common algorithmic-statistical formal framework that unifies the main families of multivariate classification algorithms. We explain here the large-margin principle that partly explains why these algorithms work. Section 4 is devoted to the description of the three main (families of) classification algorithms: neural networks, the support vector machine, and AdaBoost. We do not go into the algorithmic details; the goal is to give an overview of the form of the functions these methods learn and of the objective functions they optimize. Besides their technical description, we also make an attempt to put these algorithms into a socio-historical context. We then briefly describe some rather heterogeneous applications to illustrate the pattern recognition pipeline and to show how widespread the use of these methods is (Section 5). We conclude the chapter with three essentially open research problems that are either relevant to or even motivated by certain unorthodox applications of multivariate discrimination in experimental physics.
Lindholm, C; Gustavsson, A; Jönsson, L; Wimo, A
2013-05-01
Because the prevalence of many brain disorders rises with age, and brain disorders are costly, the economic burden of brain disorders will increase markedly during the next decades. The purpose of this study is to analyze how the costs to society vary with different levels of functioning and with the presence of a brain disorder. Resource utilization and costs from a societal viewpoint were analyzed versus cognition, activities of daily living (ADL), instrumental activities of daily living (IADL), brain disorder diagnosis and age in a population-based cohort of people aged 65 years and older in Nordanstig in Northern Sweden. Descriptive statistics, non-parametric bootstrapping and a generalized linear model (GLM) were used for the statistical analyses. Most people were zero users of care. Societal costs of dementia were by far the highest, ranging from SEK 262,000 (mild) to SEK 519,000 per year (severe dementia). In univariate analysis, all measures of functioning were significantly related to costs. When controlling for ADL and IADL in the multivariate GLM, cognition did not have a statistically significant effect on total cost. The presence of a brain disorder did not impact total cost when controlling for function. The greatest shift in costs was seen when comparing no dependency in ADL and dependency in one basic ADL function. It is the level of functioning, rather than the presence of a brain disorder diagnosis, that predicts costs. ADL measures are better explanatory variables of costs than the Mini-Mental State Examination. Most people in a population-based cohort are zero users of care. Copyright © 2012 John Wiley & Sons, Ltd.
Parametric Analysis of Light Truck and Automobile Maintenance
DOT National Transportation Integrated Search
1979-05-01
Utilizing the Automotive and Light Truck Service and Repair Data Base developed in the companion report, parametric analyses were made of the relationships between maintenance costs, scheduled and unscheduled, and vehicle parameters; body class, manufa...
Parametric analysis of closed cycle magnetohydrodynamic (MHD) power plants
NASA Technical Reports Server (NTRS)
Owens, W.; Berg, R.; Murthy, R.; Patten, J.
1981-01-01
A parametric analysis of closed cycle MHD power plants was performed to study the technical feasibility, associated capital cost, and cost of electricity for the direct combustion of coal or coal-derived fuel. Three reference plants, differing primarily in the method of coal conversion utilized, were defined. Reference Plant 1 used direct coal-fired combustion, while Reference Plants 2 and 3 employed on-site integrated gasifiers. Reference Plant 2 used a pressurized gasifier while Reference Plant 3 used a state-of-the-art atmospheric gasifier. Thirty plant configurations were considered by using parametric variations from the Reference Plants. Parametric variations included the type of coal (Montana Rosebud or Illinois No. 6), cleanup systems (hot or cold gas cleanup), one- or two-stage atmospheric or pressurized direct-fired coal combustors, and six different gasifier systems. Plant sizes ranged from 100 to 1000 MWe. Overall plant performance was calculated using two methodologies. In one task, the channel performance was assumed and the MHD topping cycle efficiencies were based on the assumed values. A second task involved rigorous calculations of channel performance (enthalpy extraction, isentropic efficiency and generator output) that verified the original (task one) assumptions. Closed cycle MHD capital costs were estimated for the task one plants; task two cost estimates were made for the channel and magnet only.
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
Constellation Program Life-cycle Cost Analysis Model (LCAM)
NASA Technical Reports Server (NTRS)
Prince, Andy; Rose, Heidi; Wood, James
2008-01-01
The Constellation Program (CxP) is NASA's effort to replace the Space Shuttle, return humans to the moon, and prepare for a human mission to Mars. The major elements of the Constellation Lunar sortie design reference mission architecture are shown. Unlike the Apollo Program of the 1960s, affordability is a major concern of United States policy makers and NASA management. To measure Constellation affordability, a total ownership cost life-cycle parametric cost estimating capability is required. This capability is being developed by the Constellation Systems Engineering and Integration (SE&I) Directorate, and is called the Life-cycle Cost Analysis Model (LCAM). The requirements for LCAM are based on the need to have a parametric estimating capability in order to do top-level program analysis, evaluate design alternatives, and explore options for future systems. By estimating the total cost of ownership within the context of the planned Constellation budget, LCAM can provide Program and NASA management with the cost data necessary to identify the most affordable alternatives. LCAM is also a key component of the Integrated Program Model (IPM), an SE&I-developed capability that combines parametric sizing tools with cost, schedule, and risk models to perform program analysis. LCAM is used in the generation of cost estimates for system level trades and analyses. It draws upon the legacy of previous architecture level cost models, such as the Exploration Systems Mission Directorate (ESMD) Architecture Cost Model (ARCOM) developed for Simulation Based Acquisition (SBA), and ATLAS. LCAM is used to support requirements and design trade studies by calculating changes in cost relative to a baseline option cost. Estimated costs are generally low fidelity to accommodate available input data and available cost estimating relationships (CERs). LCAM is capable of interfacing with the Integrated Program Model to provide the cost estimating capability for that suite of tools.
Nadal-Serrano, Jose M; Nadal-Serrano, Adolfo; Lopez-Vallejo, Marisa
2017-01-01
This paper focuses on the application of rapid prototyping techniques using additive manufacturing in combination with parametric design to create low-cost, yet accurate and reliable instruments. The methodology followed makes it possible to make instruments with a degree of customization until now available only to a narrow audience, helping democratize science. The proposal discusses a holistic design-for-manufacturing approach that comprises advanced modeling techniques, open-source design strategies, and an optimization algorithm using free parametric software for both professional and educational purposes. The design and fabrication of an instrument for scattering measurement is used as a case study to present the previous concepts.
Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.
Soleimani, Hossein; Hensman, James; Saria, Suchi
2017-08-21
Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades off the cost of a delayed detection against incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.
NASA Astrophysics Data System (ADS)
Ayoko, Godwin A.; Singh, Kirpal; Balerea, Steven; Kokot, Serge
2007-03-01
Physico-chemical properties of surface water and groundwater samples from some developing countries have been subjected to multivariate analyses by the non-parametric multi-criteria decision-making methods, PROMETHEE and GAIA. Complete ranking information necessary to select one source of water in preference to all others was obtained, and this enabled relationships between the physico-chemical properties and water quality to be assessed. Thus, the ranking of the quality of the water bodies was found to be strongly dependent on the total dissolved solid, phosphate, sulfate, ammonia-nitrogen, calcium, iron, chloride, magnesium, zinc, nitrate and fluoride contents of the waters. However, potassium, manganese and zinc composition showed the least influence in differentiating the water bodies. To model and predict the water quality influencing parameters, partial least squares analyses were carried out on a matrix made up of the results of water quality assessment studies carried out in Nigeria, Papua New Guinea, Egypt, Thailand and India/Pakistan. The results showed that the total dissolved solid, calcium, sulfate, sodium and chloride contents can be used to predict a wide range of physico-chemical characteristics of water. The potential implications of these observations on the financial and opportunity costs associated with elaborate water quality monitoring are discussed.
Parametric cost estimation for space science missions
NASA Astrophysics Data System (ADS)
Lillie, Charles F.; Thompson, Bruce E.
2008-07-01
Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs for future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
A Parametric Regression of the Cost of Base Realignment Action (COBRA) Model
1993-09-20
Hardman, Douglas D., Captain, USAF; Nelson, Michael S., Captain, USAF. AFIT/GEE/ENS/93S-03. Master of Science in Engineering and Environmental Management thesis, Air Force Institute of Technology; defense date 20 September 1993. Approved for public release; distribution unlimited.
Diagnostic tools for nearest neighbors techniques when used with satellite imagery
Ronald E. McRoberts
2009-01-01
Nearest neighbors techniques are non-parametric approaches to multivariate prediction that are useful for predicting both continuous and categorical forest attribute variables. Although some assumptions underlying nearest neighbor techniques are common to other prediction techniques such as regression, other assumptions are unique to nearest neighbor techniques....
Cost model validation: a technical and cultural approach
NASA Technical Reports Server (NTRS)
Hihn, J.; Rosenberg, L.; Roust, K.; Warfield, K.
2001-01-01
This paper summarizes how JPL's parametric mission cost model (PMCM) has been validated using both formal statistical methods and a variety of peer and management reviews in order to establish organizational acceptance of the cost model estimates.
Jordan, Jake; Gage, Heather; Benton, Barbara; Lalji, Amyn; Norton, Christine; Andreyev, H Jervoise N
2017-01-01
Over 20 distressing gastrointestinal symptoms affect many patients after pelvic radiotherapy, but in the United Kingdom few are referred for assessment. Algorithmic-based treatment delivered by either a consultant gastroenterologist or a clinical nurse specialist has been shown in a randomized trial to be statistically and clinically more effective than provision of a self-help booklet. In this study, we assessed cost-effectiveness. Outcomes were measured at baseline (pre-randomization) and 6 months. Change in quality-adjusted life years (QALYs) was the primary outcome for the economic evaluation; a secondary analysis used change in the bowel subset score of the modified Inflammatory Bowel Disease Questionnaire (IBDQ-B). Intervention costs, in 2013 British pounds, covered visits with the gastroenterologist or nurse, investigations, medications and treatments. Incremental outcomes and incremental costs were estimated simultaneously using multivariate linear regression. Uncertainty was handled non-parametrically using bootstrap with replacement. The mean (SD) cost of treatment was £895 (499) for the nurse and £1101 (567) for the consultant. The nurse was dominated by usual care, which was cheaper and achieved better outcomes. The mean cost per QALY gained from the consultant, compared to usual care, was £250,455; comparing the consultant to the nurse, it was £25,875. Algorithmic care produced better outcomes compared to the booklet only, as reflected in the IBDQ-B results, at a cost of ~£1,000. Algorithmic treatment of radiation bowel injury by a consultant or a nurse results in significant symptom relief for patients but was not found to be cost-effective according to the National Institute for Health and Care Excellence (NICE) criteria.
Validation of two (parametric vs non-parametric) daily weather generators
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Skalak, P.
2015-12-01
As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most favoured downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. To name their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run in various time steps and for multiple weather variables (the generators reproduce the correlations among variables), and (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a first-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of duration of hot/cold/dry/wet spells); and (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series from several European stations available from the ECA&D database. Acknowledgements: The weather generator is developed and validated within the frame of the projects WG4VALUE (sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES1102 action).
Development of a Multivariable Parametric Cost Analysis for Space-Based Telescopes
NASA Technical Reports Server (NTRS)
Dollinger, Courtnay
2011-01-01
Over the past 400 years, the telescope has proven to be a valuable tool in helping humankind understand the Universe around us. The images and data produced by telescopes have revolutionized planetary, solar, stellar, and galactic astronomy and have inspired a wide range of people, from the child who dreams about the images seen on NASA websites to the most highly trained scientist. Like all scientific endeavors, astronomical research must operate within the constraints imposed by budget limitations. Hence the importance of understanding cost: to find the balance between the dreams of scientists and the restrictions of the available budget. By logically analyzing the data we have collected for over thirty different telescopes from more than 200 different sources, statistical methods, such as plotting regressions and residuals, can be used to determine what drives the cost of telescopes and to build and use a cost model for space-based telescopes. Previous cost models have focused their attention on ground-based telescopes due to limited data for space telescopes and the larger number and longer history of ground-based astronomy. Due to the increased availability of cost data from recent space-telescope construction, we have been able to produce and begin testing a comprehensive cost model for space telescopes, with guidance from the cost models for ground-based telescopes. By separating the variables that affect cost, such as diameter, mass, wavelength, density, data rate, and number of instruments, we advance the goal of better understanding the cost drivers of space telescopes. The use of sophisticated mathematical techniques to improve the accuracy of cost models has the potential to help society make informed decisions about proposed scientific projects. An improved knowledge of cost will allow scientists to get the maximum value returned for the money given and create harmony between the visions of scientists and the reality of a budget.
NASA Technical Reports Server (NTRS)
Marston, C. H.; Alyea, F. N.; Bender, D. J.; Davis, L. K.; Dellinger, T. C.; Hnat, J. G.; Komito, E. H.; Peterson, C. A.; Rogers, D. A.; Roman, A. J.
1980-01-01
The performance and cost of moderate-technology coal-fired open cycle MHD/steam power plant designs, which can be expected to require a shorter development time and have a lower development cost than previously considered mature OCMHD/steam plants, were determined. Three base cases were considered: in one, an indirectly fired high-temperature air heater (HTAH) subsystem delivering air at 2700 F was fired by a state-of-the-art atmospheric-pressure gasifier; in another, the HTAH subsystem was deleted and oxygen enrichment was used to obtain the requisite MHD combustion temperature. Coal pile to bus bar efficiencies in base case 1 ranged from 41.4% to 42.9%, and the cost of electricity (COE) was the highest of the three base cases. For base case 2 the efficiency range was 42.0% to 45.6%, and COE was the lowest. For base case 3 the efficiency range was 42.9% to 44.4%, and COE was intermediate. The best parametric cases in base cases 2 and 3 are recommended for conceptual design. Eventual choice between these approaches is dependent on further evaluation of the tradeoffs among HTAH development risk, O2 plant integration, and further refinements of comparative costs.
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.
Parametric Cost Analysis: A Design Function
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1989-01-01
Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CER's), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
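A minimal sketch of fitting a single-variable CER by log-log regression (the mass/cost pairs below are illustrative only, not from any NASA data set): the familiar power-law form cost = a * mass^b becomes linear after taking logarithms, so ordinary least squares applies.

```python
import numpy as np

# Hypothetical (mass in kg, cost in $M) pairs for analogous systems
mass = np.array([120.0, 340.0, 560.0, 980.0, 1500.0])
cost = np.array([14.0, 32.0, 47.0, 70.0, 105.0])

# Fit log(cost) = log(a) + b * log(mass): a power-law CER cost = a * mass^b
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)
print(f"CER: cost = {a:.3f} * mass^{b:.3f}")

# Apply the CER to estimate the cost of a new 800 kg system
print(f"Estimate for 800 kg: ${a * 800.0**b:.1f}M")
```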
Cost Modeling for low-cost planetary missions
NASA Technical Reports Server (NTRS)
Kwan, Eric; Habib-Agahi, Hamid; Rosenberg, Leigh
2005-01-01
This presentation will provide an overview of the JPL parametric cost models used to estimate flight science spacecraft and instruments. This material will emphasize the cost model approaches used to estimate low-cost flight hardware, sensors, and instrumentation, and to perform cost-risk assessments. This presentation will also discuss JPL approaches to cost modeling and the methodologies and analyses used to capture low-cost vs. key cost drivers.
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
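A toy sketch of the parametric-policy idea under stated assumptions (the exponential schedule form, the impact model, and all numbers are my own illustration, not the authors'): the dynamic execution strategy is written as a parametric function of time, and its coefficient is chosen by static optimization over Monte Carlo price paths, with CVaR entering the objective.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
T, n_paths = 10, 2000
price_noise = rng.normal(0.0, 0.01, size=(n_paths, T))   # simulated price shocks

def execution_cost(theta):
    """Cost of a parametric schedule: trade fractions decay at rate theta,
    with a quadratic temporary-impact term, accumulated along each path."""
    w = np.exp(-theta * np.arange(T))
    w = w / w.sum()                               # fraction of the order per period
    return (0.1 * w**2 + w * price_noise).sum(axis=1)

def objective(theta, lam=1.0, alpha=0.95):
    c = execution_cost(theta)
    cvar = c[c >= np.quantile(c, alpha)].mean()   # Conditional Value-at-Risk
    return c.mean() + lam * cvar

res = minimize_scalar(objective, bounds=(0.0, 3.0), method="bounded")
print("optimal decay rate:", round(res.x, 3))
```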
A convolution model for computing the far-field directivity of a parametric loudspeaker array.
Shi, Chuang; Kajikawa, Yoshinobu
2015-02-01
This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity and the Westervelt directivity is suggested, substituting for the past practice of using the product directivity only. The computed directivity of a PLA using the proposed convolution model achieves significantly better agreement with measured directivity at a negligible computational cost.
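A minimal sketch of the convolution idea, assuming directivity patterns discretized on a common angular grid (the Gaussian shapes below are placeholders, not measured PLA data): the far-field directivity is approximated by convolving the product directivity of the array with the Westervelt directivity of a single parametric source.

```python
import numpy as np

theta = np.linspace(-90.0, 90.0, 361)        # angle grid, 0.5-degree steps

# Placeholder patterns (illustrative only): product directivity of the
# phased ultrasonic array and Westervelt directivity of one parametric source
product_dir = np.exp(-(theta / 10.0) ** 2)
westervelt_dir = np.exp(-(theta / 20.0) ** 2)

# Proposed model: far-field directivity = product directivity (*) Westervelt
far_field = np.convolve(product_dir, westervelt_dir, mode="same")
far_field /= far_field.max()                 # normalize to 0 dB on axis

far_field_db = 20.0 * np.log10(np.maximum(far_field, 1e-6))
```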
NASA Technical Reports Server (NTRS)
Wolfe, R. W.
1976-01-01
A parametric analysis was made of three types of advanced steam power plants that use coal, in order to compare the cost of electricity produced by them over a wide range of primary performance variables. Increasing the temperature and pressure of the steam above current industry levels resulted in increased energy costs because the cost of capital increased more than the fuel cost decreased. While the three plant types produced comparable energy cost levels, the pressurized fluidized bed boiler plant produced the lowest energy cost by the small margin of 0.69 mills/MJ (2.5 mills/kWh). It is recommended that this plant be designed in greater detail to determine its cost and performance more accurately than was possible in a broad parametric study and to ascertain problem areas which will require development effort. Pollution control measures, such as scrubbers and separators for particulate emissions from stack gases, are also considered.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
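A schematic sketch of the nesting idea under simplifying assumptions (a linear model x = k·Bᵀ for the time-activity curves, a generic EM-style step standing in for the tomographic update, and a preconditioned gradient step for the kinetic fit; all names are illustrative, and this is not the authors' code): because the linear kinetic fit is far cheaper than reconstruction, several kinetic sub-iterations are run per image update.

```python
import numpy as np

def nested_parametric_recon(proj, A, B, n_outer=20, n_sub=10):
    """Schematic nested reconstruction (an illustration, not the authors' code).
    proj: dynamic data, shape (n_bins, n_frames)
    A:    system matrix, shape (n_bins, n_voxels)
    B:    temporal basis, shape (n_frames, n_params); dynamic image x = k @ B.T
    """
    n_vox, n_par = A.shape[1], B.shape[1]
    k = np.ones((n_vox, n_par))                      # linear parametric images
    step = 1.0 / (B.T @ B).diagonal()                # preconditioner for sub-steps
    for _ in range(n_outer):
        x = k @ B.T                                  # current dynamic image
        # One EM-style tomographic update per outer iteration (the costly step)
        x = x * (A.T @ (proj / np.maximum(A @ x, 1e-9))) / A.sum(0)[:, None]
        # Several cheap sub-iterations of the linear kinetic fit x ~ k @ B.T
        for _ in range(n_sub):
            k = np.maximum(k - step * ((k @ B.T - x) @ B), 0.0)
    return k
```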
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
Dual adaptive control: Design principles and applications
NASA Technical Reports Server (NTRS)
Mookerjee, Purusottam
1988-01-01
The design of an actively adaptive dual controller based on an approximation of the stochastic dynamic programming equation for a multi-step horizon is presented. A dual controller that can enhance identification of the system while controlling it at the same time is derived for multi-dimensional problems. This dual controller uses sensitivity functions of the expected future cost with respect to the parameter uncertainties. A passively adaptive cautious controller and the actively adaptive dual controller are examined. In many instances, the cautious controller is seen to turn off while the latter avoids the turn-off of the control and the slow convergence of the parameter estimates, characteristic of the cautious controller. The algorithms have been applied to a multi-variable static model which represents a simplified linear version of the relationship between the vibration output and the higher harmonic control input for a helicopter. Monte Carlo comparisons based on parametric and nonparametric statistical analysis indicate the superiority of the dual controller over the baseline controller.
The costs of transit fare prepayment programs : a parametric cost analysis.
DOT National Transportation Integrated Search
Despite the renewed interest in transit fare prepayment plans over the past : 10 years, few transit managers have a clear idea of how much it costs to operate : and maintain a fare prepayment program. This report provides transit managers : with the ...
1989-07-31
The objective was to assess the feasibility of developing cost estimating relationships (CERs) based on data from the Army Operating and Support Management Information System (OSMIS). The long-range objective is to develop methods to determine total operating and support (O&S) costs within life-cycle cost...
NASA Technical Reports Server (NTRS)
Rosenberg, Leigh; Hihn, Jairus; Roust, Kevin; Warfield, Keith
2000-01-01
This paper presents an overview of a parametric cost model that has been built at JPL to estimate costs of future, deep space, robotic science missions. Due to the recent dramatic changes in JPL business practices brought about by an internal reengineering effort known as develop new products (DNP), high-level historic cost data is no longer considered analogous to future missions. Therefore, the historic data is of little value in forecasting costs for projects developed using the DNP process. This has led to the development of an approach for obtaining expert opinion and for combining actual data with expert opinion to provide a cost database for future missions. In addition, the DNP cost model uses objective cost drivers to the maximum extent, which reduces the likelihood of model input error. Version 2 is now under development; it expands the model capabilities, links the model more tightly with key design technical parameters, and is grounded in more rigorous statistical techniques. The challenges faced in building this model will be discussed, as well as its background, development approach, status, validation, and future plans.
Parametric Surfaces Competition: Using Technology to Foster Creativity
ERIC Educational Resources Information Center
Kaur, Manmohan; Wangler, Thomas
2014-01-01
Although most calculus students are comfortable with the Cartesian equations of curves and surfaces, they struggle with the concept of parameters. A multivariable calculus course is really the time to nail this concept down, once and for all, since it provides an easy way to represent many beautiful and useful surfaces, and graph them using a…
Turboprop cargo aircraft systems study
NASA Technical Reports Server (NTRS)
Muehlbauer, J. C.; Hewell, J. G., Jr.; Lindenbaum, S. P.; Randall, C. C.; Searle, N.; Stone, R. G., Jr.
1981-01-01
The effects of using advanced turboprop propulsion systems to reduce the fuel consumption and direct operating costs of cargo aircraft were studied, and the impact of these systems on aircraft noise and noise prints around a terminal area was determined. Parametric variations of aircraft and propeller characteristics were investigated to determine their effects on noiseprint areas, fuel consumption, and direct operating costs. From these results, three aircraft designs were selected and subjected to design refinements and sensitivity analyses. Three competitive turbofan aircraft were also defined from parametric studies to provide a basis for comparing the two types of propulsion.
Multivariate Drought Characterization in India for Monitoring and Prediction
NASA Astrophysics Data System (ADS)
Sreekumaran Unnithan, P.; Mondal, A.
2016-12-01
Droughts are one of the most important natural hazards that affect the society significantly in terms of mortality and productivity. The metric most widely used by the India Meteorological Department (IMD) to monitor and predict the occurrence, spread, intensification and termination of drought is based on the univariate Standardized Precipitation Index (SPI). However, droughts may be caused by the influence and interaction of many variables (such as precipitation, soil moisture, runoff, etc.), emphasizing the need for a multivariate approach to drought characterization. This study advocates and illustrates the use of the recently proposed multivariate standardized drought index (MSDI) in monitoring and prediction of drought and assessing the associated risk in the Indian region. MSDI combines information from multiple sources, precipitation and soil moisture, and has been deemed a more reliable drought index. All-India monthly rainfall and soil moisture data sets are analysed for the period 1980 to 2014 to characterize historical droughts using both the univariate indices, the precipitation-based SPI and the standardized soil moisture index (SSI), as well as the multivariate MSDI using parametric and non-parametric approaches. We confirm that MSDI can capture the droughts of 1986 and 1990 that are not detected by SPI alone. Moreover, in 1987, MSDI indicated a higher severity of drought when a deficiency in both soil moisture and precipitation was encountered. Further, this study also explores the use of MSDI for drought forecasts and assesses its performance vis-à-vis existing predictions from the IMD. Future research efforts will be directed towards formulating a more robust standardized drought indicator that takes into account socio-economic aspects, which also play a key role for water-stressed regions such as India.
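A compact sketch of one common non-parametric MSDI computation (the Gringorten plotting position used here is a standard choice in the MSDI literature; the simulated series and the drought threshold are illustrative, not from this study): the joint empirical probability P(P <= p, S <= s) is transformed to a standardized index via the inverse normal CDF.

```python
import numpy as np
from scipy.stats import norm

def msdi(precip, soil):
    """Non-parametric Multivariate Standardized Drought Index: joint
    empirical probability mapped through the inverse normal CDF."""
    n = len(precip)
    # Joint empirical count, then Gringorten probability P(P <= p_i, S <= s_i)
    joint_counts = np.array([
        np.sum((precip <= p) & (soil <= s)) for p, s in zip(precip, soil)
    ])
    p_joint = (joint_counts - 0.44) / (n + 0.12)
    return norm.ppf(p_joint)

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 40.0, size=420)              # hypothetical monthly rainfall (mm)
soil = 0.5 * precip + rng.normal(0, 20, size=420)    # correlated soil moisture proxy
index = msdi(precip, soil)                           # values <= -0.8 often flag drought
```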
Nixon, Richard M; Wonderling, David; Grieve, Richard D
2010-03-01
Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
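A minimal sketch contrasting the two approaches on simulated data (the willingness-to-pay threshold lambda and all numbers are illustrative): the net benefit for each patient is b = lambda * e - c, the INB is the difference in arm means, and its standard error is estimated either from the CLT or by bootstrapping.

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 20000.0                                     # willingness to pay per QALY

# Simulated two-arm trial: skewed costs and QALYs per patient
cost_t, qaly_t = rng.lognormal(8.0, 1.0, 200), rng.normal(0.70, 0.2, 200)
cost_c, qaly_c = rng.lognormal(7.8, 1.0, 200), rng.normal(0.65, 0.2, 200)

nb_t, nb_c = lam * qaly_t - cost_t, lam * qaly_c - cost_c
inb = nb_t.mean() - nb_c.mean()

# Central limit theorem standard error of the incremental net benefit
se_clt = np.sqrt(nb_t.var(ddof=1) / len(nb_t) + nb_c.var(ddof=1) / len(nb_c))

# Non-parametric bootstrap standard error
boot = [
    rng.choice(nb_t, len(nb_t)).mean() - rng.choice(nb_c, len(nb_c)).mean()
    for _ in range(2000)
]
se_boot = np.std(boot, ddof=1)
print(f"INB = {inb:.0f}, SE(CLT) = {se_clt:.0f}, SE(bootstrap) = {se_boot:.0f}")
```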
Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.
2015-01-01
While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multi-variable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. Our analysis demonstrates that sophisticated, optimal semiparametric estimation of longitudinal treatment-specific means via ltmle provides an incredibly powerful, yet easy-to-use tool, removing impediments for putting theory into practice. PMID:26046009
Study of solid rocket motor for space shuttle booster. Volume 4: Cost
NASA Technical Reports Server (NTRS)
1972-01-01
The cost data for solid propellant rocket engines for use with the space shuttle are presented. The data are based on the selected 156 inch parallel and series burn configurations. Summary cost data are provided for the production of the 120 inch and 260 inch configurations. Graphs depicting parametric cost estimating relationships are included.
Prepositioning emergency supplies under uncertainty: a parametric optimization method
NASA Astrophysics Data System (ADS)
Bai, Xuejie; Gao, Jinwu; Liu, Yankui
2018-07-01
Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.
Sleep analysis for wearable devices applying autoregressive parametric models.
Mendez, M O; Villantieri, O; Bianchi, A; Cerutti, S
2005-01-01
We applied time-variant and time-invariant parametric models to recordings from both healthy subjects and patients with sleep disorders in order to assess the suitability of these approaches for sleep-disorder diagnosis in wearable devices. The recordings present the Obstructive Sleep Apnea (OSA) pathology, which is characterized by fluctuations in the heart rate: bradycardia in the apneic phase and tachycardia at the recovery of ventilation. Data come from a web database at www.physionet.org. During OSA, the spectral indexes obtained by time-variant lattice filters presented oscillations that correspond to the bradycardia-tachycardia changes of the RR intervals, and greater values than in healthy subjects. Multivariate autoregressive models showed an increment in the very low frequency component (PVLF) at each apneic event. Also, a rise in the high frequency component (PHF) occurred at the restoration of breathing in the spectrum of both the quadratic coherence and the cross-spectrum in OSA. These autoregressive parametric approaches could help in the diagnosis of sleep disorders within wearable devices.
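A small sketch of the time-invariant parametric step, assuming an evenly resampled RR series (the model order, resampling rate, and band edges below are common heart-rate-variability conventions, not values taken from the paper): an AR model is fit by the Yule-Walker equations and band powers such as PVLF and PHF are read off the parametric spectrum.

```python
import numpy as np

def ar_spectrum(x, order=8, n_freq=512, fs=4.0):
    """Yule-Walker AR fit and the corresponding parametric power spectrum."""
    x = x - x.mean()
    r = np.correlate(x, x, "full")[len(x) - 1:] / len(x)    # autocorrelation
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])                  # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]                      # driving-noise variance
    f = np.linspace(0, fs / 2, n_freq)
    z = np.exp(-2j * np.pi * f / fs)
    denom = 1 - sum(a[k] * z ** (k + 1) for k in range(order))
    return f, sigma2 / np.abs(denom) ** 2

def band_power(f, pxx, lo, hi):
    m = (f >= lo) & (f < hi)
    return np.trapz(pxx[m], f[m])

# Hypothetical RR series (seconds), resampled at fs = 4 Hz
rng = np.random.default_rng(3)
rr = (0.9 + 0.05 * np.sin(2 * np.pi * 0.03 * np.arange(1024) / 4.0)
      + 0.02 * rng.normal(size=1024))
f, pxx = ar_spectrum(rr)
pvlf = band_power(f, pxx, 0.003, 0.04)   # very low frequency power
phf = band_power(f, pxx, 0.15, 0.40)     # high frequency power
```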
NASA's X-Plane Database and Parametric Cost Model v 2.0
NASA Technical Reports Server (NTRS)
Sterk, Steve; Ogluin, Anthony; Greenberg, Marc
2016-01-01
The NASA Armstrong Cost Engineering Team, with technical assistance from NASA HQ (SID), has gone through the full process of developing new CERs from Version #1 to Version #2. We took a step backward and reexamined all of the data collected, such as dependent and independent variables: cost, dry weight, length, wingspan, manned versus unmanned, altitude, Mach number, thrust, and skin. We used a well-known statistical analysis tool called CO$TAT instead of "R" multiple linear regression or the "Regression" tool found in Microsoft Excel(TradeMark). We set up an "array of data" by adding 21 "dummy variables"; we analyzed the standard error (SE) and then determined the "best fit." We have parametrically priced out several future X-planes and compared our results to those of other resources. More work needs to be done in getting accurate and traceable cost data from historical X-plane records!
Creating A Data Base For Design Of An Impeller
NASA Technical Reports Server (NTRS)
Prueger, George H.; Chen, Wei-Chung
1993-01-01
Report describes use of Taguchi method of parametric design to create data base facilitating optimization of design of impeller in centrifugal pump. Data base enables systematic design analysis covering all significant design parameters. Reduces time and cost of parametric optimization of design: for particular impeller considered, one can cover 4,374 designs by computational simulations of performance for only 18 cases.
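The arithmetic behind that economy is easy to verify, assuming the standard L18 layout of one two-level factor and seven three-level factors (the brief does not spell out the factor structure):

full_factorial = 2 * 3**7   # one 2-level factor, seven 3-level factors: 4374 designs
l18_runs = 18               # cases actually simulated with the L18 orthogonal array
print(full_factorial, full_factorial // l18_runs)  # 4374 designs, a 243-fold reduction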
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a set of similar model parametrizations (a "bundle") replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
Cost and efficiency of disaster waste disposal: A case study of the Great East Japan Earthquake.
Sasao, Toshiaki
2016-12-01
This paper analyzes the cost and efficiency of waste disposal associated with the Great East Japan Earthquake. The following two analyses were performed: (1) a popular parametric approach, an ordinary least squares (OLS) method, to estimate the factors that affect the disposal costs; (2) a non-parametric approach, a two-stage data envelopment analysis (DEA), to analyze the efficiency of each municipality and identify the best performers in disaster waste management. Our results indicate that a higher recycling rate of disaster waste and a larger amount of tsunami sediments decrease the average disposal costs. Our results also indicate that area-wide management increases the average cost. In addition, the efficiency scores were observed to vary widely by municipality, and more temporary incinerators and secondary waste stocks improve the efficiency scores. However, it is likely that the radioactive contamination from the Fukushima Daiichi nuclear power station influenced the results. Copyright © 2016 Elsevier Ltd. All rights reserved.
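A minimal input-oriented CCR DEA sketch, as a stand-in for the first stage of the paper's two-stage DEA; the municipality inputs and outputs below are hypothetical:

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    # Input-oriented CCR efficiency of unit o. X: (m inputs, n units), Y: (s outputs, n units).
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta over z = [theta, lambda]
    A_in = np.hstack([-X[:, [o]], X])           # X @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # Y @ lam >= y_o
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambda >= 0
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical: inputs = disposal cost, labor; output = tonnes of disaster waste treated
X = np.array([[100.0, 80.0, 120.0, 90.0], [20.0, 15.0, 30.0, 18.0]])
Y = np.array([[500.0, 450.0, 520.0, 480.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])  # 1.0 marks the frontier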
Ronald E. McRoberts; Grant M. Domke; Qi Chen; Erik Næsset; Terje Gobakken
2016-01-01
The relatively small sampling intensities used by national forest inventories are often insufficient to produce the desired precision for estimates of population parameters unless the estimation process is augmented with auxiliary information, usually in the form of remotely sensed data. The k-Nearest Neighbors (k-NN) technique is a non-parametric, multivariate approach...
Ronald E. McRoberts; Erkki O. Tomppo; Andrew O. Finley; Heikkinen Juha
2007-01-01
The k-Nearest Neighbor (k-NN) technique has become extremely popular for a variety of forest inventory mapping and estimation applications. Much of this popularity may be attributed to the non-parametric, multivariate features of the technique, its intuitiveness, and its ease of use. When used with satellite imagery and forest...
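A compact illustration of the k-NN estimation idea behind both abstracts, assuming scikit-learn is available; the plot and pixel data are hypothetical:

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(7)

# Hypothetical field plots: spectral covariates with measured stem volume (m^3/ha)
X_plots = rng.random((200, 6))
volume = 300.0 * X_plots[:, 0] + 50.0 * rng.random(200)

knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X_plots, volume)

# Wall-to-wall prediction over every pixel of the imagery, to support mapping/estimation
X_pixels = rng.random((10000, 6))
vol_map = knn.predict(X_pixels)
print(vol_map.mean())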
Bathke, Arne C.; Friedrich, Sarah; Pauly, Markus; Konietschke, Frank; Staffen, Wolfgang; Strobl, Nicolas; Höller, Yvonne
2018-01-01
To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs, when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations, and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer's disease (AD) examination modalities that may be used for precise and early diagnosis, namely, single-photon emission computed tomography (SPECT) and electroencephalogram (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regard to some of the factors involved. PMID:29565679
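To convey the flavor of a parametric bootstrap for multivariate group comparisons without assuming equal covariances, here is a deliberately simplified two-group sketch (the published method covers general factorial and repeated measures designs; all data are hypothetical):

import numpy as np

rng = np.random.default_rng(0)

def wald_stat(x1, x2):
    d = x1.mean(0) - x2.mean(0)
    V = np.cov(x1.T) / len(x1) + np.cov(x2.T) / len(x2)
    return d @ np.linalg.solve(V, d)

def pboot_pvalue(x1, x2, B=2000):
    t_obs = wald_stat(x1, x2)
    n1, n2 = len(x1), len(x2)
    S1, S2 = np.cov(x1.T), np.cov(x2.T)
    p = x1.shape[1]
    t_null = np.empty(B)
    for b in range(B):
        # Resample each group from N(0, S_g): group-specific covariances, common mean
        z1 = rng.multivariate_normal(np.zeros(p), S1, n1)
        z2 = rng.multivariate_normal(np.zeros(p), S2, n2)
        t_null[b] = wald_stat(z1, z2)
    return (t_null >= t_obs).mean()

# Hypothetical SPECT/EEG-style outcome vectors for two diagnostic groups
x1 = rng.normal(size=(25, 4))
x2 = rng.normal(0.5, 1.3, size=(30, 4))
print(pboot_pvalue(x1, x2))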
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so the estimation precision can be improved even when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
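A bare-bones version of the two-stage idea, with a kernel smoother standing in for the paper's multivariate local polynomial estimator of the variance function; all data are simulated for illustration:

import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(0, 1, size=(n, 2))
X = np.column_stack([np.ones(n), x])
sigma = 0.2 + 1.5 * x[:, 0]                      # unknown heteroscedastic function
y = X @ np.array([1.0, 2.0, -1.0]) + sigma * rng.normal(size=n)

# Stage 1: OLS residuals, then kernel-smooth squared residuals to estimate sigma^2(x)
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
e2 = (y - X @ beta_ols) ** 2
h = 0.15
d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * h**2))
sig2_hat = K @ e2 / K.sum(1)

# Stage 2: generalized (weighted) least squares with the estimated weights
w = 1.0 / np.sqrt(sig2_hat)
beta_gls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print(beta_ols, beta_gls)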
NASA Astrophysics Data System (ADS)
Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.
1995-06-01
A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
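The Box-Cox step itself is routine; a sketch with SciPy on hypothetical positive-valued band data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
band = rng.lognormal(mean=0.0, sigma=0.6, size=1000)  # skewed, non-Gaussian radiances

transformed, lam = stats.boxcox(band)  # maximum-likelihood estimate of the power-law exponent
print(lam, stats.skew(band), stats.skew(transformed))
# Gaussian statistics (mean/covariance) estimated on `transformed` can then feed
# a detection algorithm that formally assumes multivariate normal data.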
Applying Statistical Models and Parametric Distance Measures for Music Similarity Search
NASA Astrophysics Data System (ADS)
Lukashevich, Hanna; Dittmar, Christian; Bastuck, Christoph
Automatic deriving of similarity relations between music pieces is an inherent field of music information retrieval research. Due to the nearly unrestricted amount of musical data, real-world similarity search algorithms have to be highly efficient and scalable. A possible solution is to represent each music excerpt with a statistical model (e.g., a Gaussian mixture model) and thus to reduce the computational costs by applying parametric distance measures between the models. In this paper we discuss the combinations of applying different parametric modelling techniques and distance measures and weigh the benefits of each one against the others.
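For single-Gaussian excerpt models the parametric distance can be computed in closed form; the symmetrized Kullback-Leibler divergence below is one plausible choice among the measures such papers compare (feature models hypothetical):

import numpy as np

def kl_gaussian(m0, S0, m1, S1):
    # KL( N(m0,S0) || N(m1,S1) ), closed form
    k = len(m0)
    d = m1 - m0
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(np.linalg.solve(S1, S0))
                  + d @ np.linalg.solve(S1, d) - k + logdet1 - logdet0)

def sym_kl(m0, S0, m1, S1):
    return kl_gaussian(m0, S0, m1, S1) + kl_gaussian(m1, S1, m0, S0)

# Hypothetical timbre-feature models of two music excerpts
m0, S0 = np.zeros(3), np.eye(3)
m1, S1 = np.array([0.4, 0.1, -0.2]), 1.5 * np.eye(3)
print(sym_kl(m0, S0, m1, S1))  # small value = similar excerpts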
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2001-01-01
Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With a successful reduction of computation time, the issues of accuracy and the appropriateness of turbulence models will become more tractable.
A Semi-parametric Transformation Frailty Model for Semi-competing Risks Survival Data
Jiang, Fei; Haneuse, Sebastien
2016-01-01
In the analysis of semi-competing risks data, interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ2. When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permit the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived and small-sample operating characteristics evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer. PMID:28439147
Sadatsafavi, Mohsen; Marra, Carlo; Aaron, Shawn; Bryan, Stirling
2014-06-03
Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in a RCT-based CEA is the bootstrap. The objective of the present study was to further expand the bootstrap method of RCT-based CEA for the incorporation of external evidence. We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions. In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to the parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between external evidence and outcomes of interest, and is generally easier to implement. A drawback of this approach is potential computational inefficiency compared to the parametric Bayesian methods. The bootstrap method of RCT-based CEA can be extended to incorporate external evidence, while preserving its appealing features such as no requirement for parametric modeling of cost and effectiveness outcomes.
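The Bayesian reading of the bootstrap replaces equal resampling weights with Dirichlet draws. The following sketch shows the vanilla version on hypothetical per-patient data, with a comment marking where the authors' external-evidence modification would act; their specific weighting scheme is not reproduced here:

import numpy as np

rng = np.random.default_rng(3)
n = 200
cost = rng.gamma(2.0, 1500.0, n)      # hypothetical per-patient costs
eff = rng.normal(0.7, 0.2, n)         # hypothetical per-patient QALYs
arm = rng.integers(0, 2, n)           # 0 = control, 1 = treatment

draws = []
for _ in range(5000):
    w = rng.dirichlet(np.ones(n))     # Bayesian bootstrap weights over patients
    # External evidence would enter here by tilting/adjusting these weights
    # (or the resampled effect sizes), per the proposed modifications.
    w1 = w * arm / (w * arm).sum()
    w0 = w * (1 - arm) / (w * (1 - arm)).sum()
    draws.append(((w1 - w0) @ cost, (w1 - w0) @ eff))

d_cost, d_eff = np.array(draws).T
print(np.mean(d_cost) / np.mean(d_eff))  # crude ICER over the posterior draws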
Cost Risk Analysis Based on Perception of the Engineering Process
NASA Technical Reports Server (NTRS)
Dean, Edwin B.; Wood, Darrell A.; Moore, Arlene A.; Bogart, Edward H.
1986-01-01
In most cost estimating applications at the NASA Langley Research Center (LaRC), it is desirable to present predicted cost as a range of possible costs rather than a single predicted cost. A cost risk analysis generates a range of cost for a project and assigns a probability level to each cost value in the range. Constructing a cost risk curve requires a good estimate of the expected cost of a project. It must also include a good estimate of the expected variance of the cost. Many cost risk analyses are based upon an expert's knowledge of the cost of similar projects in the past. In a common scenario, a manager or engineer, asked to estimate the cost of a project in his area of expertise, will gather historical cost data from a similar completed project. The cost of the completed project is adjusted using the perceived technical and economic differences between the two projects. This allows errors from at least three sources. The historical cost data may be in error by some unknown amount. The manager's evaluation of the new project and its similarity to the old project may be in error. The factors used to adjust the cost of the old project may not correctly reflect the differences. Some risk analyses are based on untested hypotheses about the form of the statistical distribution that underlies the distribution of possible cost. The usual problem is not just to come up with an estimate of the cost of a project, but to predict the range of values into which the cost may fall and with what level of confidence the prediction is made. Risk analysis techniques that assume the shape of the underlying cost distribution and derive the risk curve from a single estimate plus and minus some amount usually fail to take into account the actual magnitude of the uncertainty in cost due to technical factors in the project itself. This paper addresses a cost risk method that is based on parametric estimates of the technical factors involved in the project being costed. The engineering process parameters are elicited from the engineer/expert on the project and are based on that expert's technical knowledge. These are converted by a parametric cost model into a cost estimate. The method discussed makes no assumptions about the distribution underlying the distribution of possible costs, and is not tied to the analysis of previous projects, except through the expert calibrations performed by the parametric cost analyst.
Phase noise suppression through parametric filtering
NASA Astrophysics Data System (ADS)
Cassella, Cristian; Strachan, Scott; Shaw, Steven W.; Piazza, Gianluca
2017-02-01
In this work, we introduce and experimentally demonstrate a parametric phase noise suppression technique, which we call "parametric phase noise filtering." This technique is based on the use of a solid-state parametric amplifier operating in its instability region and included in a non-autonomous feedback loop connected at the output of a noisy oscillator. We demonstrate that such a system behaves as a parametrically driven Duffing resonator and can operate at special points where it becomes largely immune to the phase fluctuations that affect the oscillator output signal. A prototype of a parametric phase noise filter (PFIL) was designed and fabricated to operate in the very-high-frequency range. The PFIL prototype allowed us to significantly reduce the phase noise at the output of a commercial signal generator operating around 220 MHz. Noise reductions of 16 dB (40×) and 13 dB (20×) were obtained at 1 and 10 kHz offsets from the carrier frequency, respectively. The demonstration of this phase noise suppression technique opens up new scenarios in the development of passive and low-cost phase noise cancellation circuits for any application demanding high-quality frequency generation.
Conceptual design of reduced energy transports
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Harper, M.; Smith, C. L.; Waters, M. H.; Williams, L. J.
1975-01-01
This paper reports the results of a conceptual design study of new, near-term fuel-conservative aircraft. A parametric study was made to determine the effects of cruise Mach number and fuel cost on the 'optimum' configuration characteristics and on economic performance. Supercritical wing technology and advanced engine cycles were assumed. For each design, the wing geometry was optimized to give maximum return on investment at a particular fuel cost. Based on the results of the parametric study, a reduced energy configuration was selected. Compared with existing transport designs, the reduced energy design has a higher aspect ratio wing with lower sweep, and cruises at a lower Mach number. It yields about 30% more seat-miles/gal than current wide-body aircraft. At the higher fuel costs anticipated in the future, the reduced energy design has about the same economic performance as existing designs.
NASA Technical Reports Server (NTRS)
Staigner, P. J.; Abbott, J. M.
1980-01-01
Two parallel contracted studies were conducted. Each contractor investigated three base cases and parametric variations about these base cases. Each contractor concluded that two of the base cases (a plant using separate firing of an advanced high temperature regenerative air heater with fuel from an advanced coal gasifier, and a plant using an intermediate temperature metallic recuperative heat exchanger to heat oxygen enriched combustion air) were comparable in both performance and cost of electricity. The contractors differed in the level of their cost estimates, with the capital cost estimates for the MHD topping cycle and the magnet subsystem in particular accounting for a significant part of the difference. The impact of the study on the decision to pursue a course leading to an oxygen enriched plant as the first commercial MHD plant is described.
Facilitating the Transition from Bright to Dim Environments
2016-03-04
For the parametric data, a multivariate ANOVA was used in determining the systematic presence of any statistically significant performance differences ... All significance levels were p < 0.05, and statistical analyses were performed with the Statistical Package for the Social Sciences (SPSS) ...
NASA Astrophysics Data System (ADS)
Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.
2018-01-01
This paper dwells upon a variance parameterization method. Variance, or dimensional, parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated into a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for machine tooling used in manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method significantly reduces tooling design time when a part's geometric parameters change. The method can also reduce the time for design and engineering preproduction, in particular for the development of control programs for CNC equipment and coordinate measuring machines, and can automate the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
NASA Technical Reports Server (NTRS)
1972-01-01
Mission analysis is discussed, including the consolidation and expansion of mission equipment and experiment characteristics, and the determination of a simplified shuttle flight schedule. A parametric analysis of standard space hardware and a preliminary shuttle/payload constraints analysis are evaluated, along with the cost impact of low-cost standard hardware.
Cost Estimation of Naval Ship Acquisition.
1983-12-01
one a 9-subsystem model, the other a single total cost model. The models were developed using the linear least squares regression technique with ... Keywords: Cost estimation; Acquisition; Parametric cost estimate; linear ...
NASA Technical Reports Server (NTRS)
Gerberich, Matthew W.; Oleson, Steven R.
2013-01-01
The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters from basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model with respect to past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data of completed NASA missions.
Paradis, Pierre Emmanuel; Latrémouille-Viau, Dominick; Moore, Yuliya; Mishagina, Natalia; Lafeuille, Marie-Hélène; Lefebvre, Patrick; Gaudig, Maren; Duh, Mei Sheng
2009-07-01
To explore the effects of generic substitution of the antiepileptic drug (AED) topiramate (Topamax) in Canada; to convert observed Canadian costs into the settings of France, Germany, Italy, and the United Kingdom (UK); and to forecast the economic impact of generic topiramate entry in these four European countries. Health claims from the Régie de l'assurance maladie du Québec (RAMQ) plan (1/2006-9/2008) and IMS Health data (1998-2008) were used. Patients with epilepsy and ≥2 topiramate dispensings were selected. An open-cohort design was used to classify observation time into mutually exclusive periods of branded versus generic use of topiramate. Canadian healthcare utilization and costs (2007 CAN$/person-year) were compared between periods using multivariate models. Annualized per-patient costs (2007 euro or 2007 pound sterling/person-year) were converted using Canadian utilization rates, European prices, and service-use ratios. A non-parametric bootstrap served to assess the statistical significance of cost differences. The topiramate market was forecasted following generic entry (09/2009-09/2010) using autoregressive models based on the European experience. The economic impact of generic topiramate entry was estimated for each country. A total of 1164 patients (mean age: 39.8 years, 61.7% female) were observed for 2.6 years on average. After covariate adjustment, generic-use periods were associated with increased pharmacy dispensings (other AEDs: +0.95/person-year, non-AEDs: +12.28/person-year, p < 0.001), hospitalizations (+0.08/person-year, p = 0.015), and lengths of hospital stays (+0.51 days/person-year, p < 0.001). Adjusted costs, excluding topiramate, were CAN$1060/person-year higher during generic use (p = 0.005). Converted per-patient costs excluding topiramate were significantly higher for generic relative to brand periods in all European countries (adjusted cost differences per person-year: 706-815 euro, p < 0.001 for all comparisons). System-wide costs would increase from 3.5 to 24.4% one year after generic entry. Study limitations include the absence of indirect costs, possible claim inaccuracies, and IMS data limitations. Higher health costs were projected for the G4 European countries from the Canadian experience following the generic entry of topiramate.
Modeling personnel turnover in the parametric organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A model is developed for simulating the dynamics of a newly formed organization, credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job function transition, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict required staffing profiles to meet functional needs at the desired time. The model can be extended by revisions of the state and transition structure to provide refinements in functional definition for the parametric and extended organization.
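The core of such a model is a simple linear state update. A toy sketch with a hypothetical transition matrix follows (the actual task/skill structure is elicited from experts, not assumed):

import numpy as np

# Staffing counts by job function: [analyst, engineer, manager]
x = np.array([10.0, 6.0, 2.0])

# Column-stochastic transition likelihoods per review period (hypothetical):
# entry A[i, j] = probability a person in function j moves to function i
A = np.array([
    [0.85, 0.10, 0.00],
    [0.10, 0.80, 0.05],
    [0.05, 0.10, 0.95],
])

for t in range(8):  # project the staffing profile forward in time
    x = A @ x
    print(t + 1, np.round(x, 2))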
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.
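The workhorse relationship in such weight-driven models is a power law fit in log space; a sketch on hypothetical program data:

import numpy as np

# Hypothetical historical programs: dry weight (kg) and development cost ($M)
weight = np.array([150.0, 420.0, 900.0, 1800.0, 3500.0])
cost = np.array([80.0, 160.0, 260.0, 430.0, 700.0])

# CER of the form cost = a * weight^b, estimated by log-log least squares
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)

print(f"cost ~= {a:.2f} * weight^{b:.2f}")
print("predicted cost for a 1000 kg system:", a * 1000.0**b)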
NASA/Air Force Cost Model: NAFCOM
NASA Technical Reports Server (NTRS)
Winn, Sharon D.; Hamcher, John W. (Technical Monitor)
2002-01-01
The NASA/Air Force Cost Model (NAFCOM) is a parametric estimating tool for space hardware. It is based on historical NASA and Air Force space projects and is primarily used in the very early phases of a development project. NAFCOM can be used at the subsystem or component levels.
Heggeseth, Brianna C; Jewell, Nicholas P
2013-07-20
Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence; that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.
Preliminary design study of advanced multistage axial flow core compressors
NASA Technical Reports Server (NTRS)
Wisler, D. C.; Koch, C. C.; Smith, L. H., Jr.
1977-01-01
A preliminary design study was conducted to identify an advanced core compressor for use in new high-bypass-ratio turbofan engines to be introduced into commercial service in the 1980's. An evaluation of anticipated compressor and related component 1985 state-of-the-art technology was conducted. A parametric screening study covering a large number of compressor designs was conducted to determine the influence of the major compressor design features on efficiency, weight, cost, blade life, aircraft direct operating cost, and fuel usage. The trends observed in the parametric screening study were used to develop three high-efficiency, high-economic-payoff compressor designs. These three compressors were studied in greater detail to better evaluate their aerodynamic and mechanical feasibility.
Cost Modeling for Space Optical Telescope Assemblies
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2011-01-01
Parametric cost models are used to plan missions, compare concepts and justify technology investments. This paper reviews an on-going effort to develop cost models for space telescopes. It summarizes the methodology used to develop the cost models and documents how changes to the database have changed previously published preliminary cost models. While the cost models are evolving, the previously published findings remain valid: it costs less per square meter of collecting aperture to build a large telescope than a small telescope; technology development as a function of time reduces cost; and lower areal density telescopes cost more than more massive telescopes.
Thin-Film Photovoltaic Solar Array Parametric Assessment
NASA Technical Reports Server (NTRS)
Hoffman, David J.; Kerslake, Thomas W.; Hepp, Aloysius F.; Jacobs, Mark K.; Ponnusamy, Deva
2000-01-01
This paper summarizes a study that had the objective to develop a model and parametrically determine the circumstances for which lightweight thin-film photovoltaic solar arrays would be more beneficial, in terms of mass and cost, than arrays using high-efficiency crystalline solar cells. Previous studies considering arrays with near-term thin-film technology for Earth orbiting applications are briefly reviewed. The present study uses a parametric approach that evaluated the performance of lightweight thin-film arrays with cell efficiencies ranging from 5 to 20 percent. The model developed for this study is described in some detail. Similar mass and cost trends for each array option were found across eight missions of various power levels in locations ranging from Venus to Jupiter. The results for one specific mission, a main belt asteroid tour, indicate that only moderate thin-film cell efficiency (approx. 12 percent) is necessary to match the mass of arrays using crystalline cells with much greater efficiency (35 percent multi-junction GaAs based and 20 percent thin-silicon). Regarding cost, a 12 percent efficient thin-film array is projected to cost about half as much as a 4-junction GaAs array. While efficiency improvements beyond 12 percent did not significantly further improve the mass and cost benefits for thin-film arrays, higher efficiency will be needed to mitigate the spacecraft-level impacts associated with large deployed array areas. A low-temperature approach to depositing thin-film cells on lightweight, flexible plastic substrates is briefly described. The paper concludes with the observation that with the characteristics assumed for this study, ultra-lightweight arrays using efficient, thin-film cells on flexible substrates may become a leading alternative for a wide variety of space missions.
A program for the Bayesian Neural Network in the ROOT framework
NASA Astrophysics Data System (ADS)
Zhong, Jiahang; Huang, Run-Sheng; Lee, Shih-Chang
2011-12-01
We present a Bayesian Neural Network algorithm implemented in the TMVA package (Hoecker et al., 2007 [1]), within the ROOT framework (Brun and Rademakers, 1997 [2]). Compared to the conventional use of a Neural Network as a discriminator, this new implementation has more advantages as a non-parametric regression tool, particularly for fitting probabilities. It provides functionalities including cost function selection, complexity control and uncertainty estimation. An example of such an application in High Energy Physics is shown. The algorithm is available with ROOT releases later than 5.29.
Program summary
Program title: TMVA-BNN
Catalogue identifier: AEJX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: BSD license
No. of lines in distributed program, including test data, etc.: 5094
No. of bytes in distributed program, including test data, etc.: 1,320,987
Distribution format: tar.gz
Programming language: C++
Computer: Any computer system or cluster with C++ compiler and UNIX-like operating system
Operating system: Most UNIX/Linux systems. The application programs were thoroughly tested under Fedora and Scientific Linux CERN.
Classification: 11.9
External routines: ROOT package version 5.29 or higher (http://root.cern.ch)
Nature of problem: Non-parametric fitting of multivariate distributions
Solution method: An implementation of Neural Network following the Bayesian statistical interpretation. Uses Laplace approximation for the Bayesian marginalizations. Provides the functionalities of automatic complexity control and uncertainty estimation.
Running time: Time consumption for the training depends substantially on the size of the input sample, the NN topology, the number of training iterations, etc. For the example in this manuscript, about 7 min was used on a PC/Linux with 2.0 GHz processors.
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Davidson, William L.; Herbert, Frank J.; Bilodeau, James W.; Stoval, J. Michael; Sutton, Terry
1989-01-01
The optimum hardware miniaturization level with the lowest cost impact for space biology hardware was determined. Space biology hardware and/or components/subassemblies/assemblies that are the most likely candidates for miniaturization were defined, and the relative cost impacts of such miniaturization were analyzed. A mathematical or statistical analysis method with the capability to support development of parametric cost analysis impacts for levels of production design miniaturization is provided.
Review of Statistical Methods for Analysing Healthcare Resources and Costs
Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G
2011-01-01
We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
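Of the categories above, the two-part model (VI) is a common default for cost data with excess zeros; a minimal sketch, assuming statsmodels is available and using simulated data:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
X = sm.add_constant(rng.normal(size=(n, 2)))
any_cost = rng.random(n) < 0.6                       # hypothetical users vs non-users
y = np.where(any_cost, rng.gamma(2.0, 800.0, n), 0)  # zeros plus right-skewed costs

# Part 1: probability of any resource use
part1 = sm.Logit((y > 0).astype(int), X).fit(disp=0)
# Part 2: mean cost among users, Gamma GLM with log link
part2 = sm.GLM(y[y > 0], X[y > 0],
               family=sm.families.Gamma(link=sm.families.links.Log())).fit()

expected_cost = part1.predict(X) * part2.predict(X)  # E[y] = Pr(y>0) * E[y | y>0]
print(expected_cost.mean())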
NASA Technical Reports Server (NTRS)
Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.; Brown, Richard W.
2000-01-01
This paper describes the development of parametric models for estimating operational reliability and maintainability (R&M) characteristics for reusable vehicle concepts, based on vehicle size and technology support level. An R&M analysis tool (RMAT) and response surface methods are utilized to build parametric approximation models for rapidly estimating operational R&M characteristics such as mission completion reliability. These models, which approximate RMAT, can then be utilized for fast analysis of operational requirements, for lifecycle cost estimating, and for multidisciplinary design optimization.
Multi Response Optimization of Laser Micro Marking Process:A Grey- Fuzzy Approach
NASA Astrophysics Data System (ADS)
Shivakoti, I.; Das, P. P.; Kibria, G.; Pradhan, B. B.; Mustafa, Z.; Ghadai, R. K.
2017-07-01
The selection of an optimal parametric combination for efficient machining has always been a challenging issue for manufacturing researchers. The optimal parametric combination provides better machining, which improves productivity and product quality and subsequently reduces production cost and time. The paper presents a hybrid approach of Grey relational analysis and Fuzzy logic to obtain the optimal parametric combination for better laser beam micro-marking on Gallium Nitride (GaN) work material. Response surface methodology has been implemented for the design of experiments, considering three parameters at five levels each. The parameters current, frequency, and scanning speed have been considered, and mark width, mark depth, and mark intensity have been taken as the process responses.
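A sketch of the grey relational core of such a grey-fuzzy approach (the fuzzy aggregation layer is omitted, and the response values below are hypothetical):

import numpy as np

# Hypothetical responses per experiment: [mark width, mark depth, mark intensity]
R = np.array([
    [42.0, 8.0, 0.71],
    [38.0, 11.0, 0.80],
    [45.0, 9.5, 0.66],
    [36.0, 12.5, 0.85],
])
larger_is_better = np.array([False, True, True])  # width minimized, others maximized

# Normalize each response to [0, 1] with the preferred direction at 1
lo, hi = R.min(0), R.max(0)
norm = np.where(larger_is_better, (R - lo) / (hi - lo), (hi - R) / (hi - lo))

delta = 1.0 - norm                  # deviation from the ideal sequence
zeta = 0.5                          # distinguishing coefficient
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = xi.mean(axis=1)             # grey relational grade per experiment
print(grade, "best run:", grade.argmax() + 1)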
NASA Technical Reports Server (NTRS)
Wang, Jianzhong Jay; Datta, Koushik; Landis, Michael R. (Technical Monitor)
2002-01-01
This paper describes the development of a life-cycle cost (LCC) estimating methodology for air traffic control Decision Support Tools (DSTs) under development by the National Aeronautics and Space Administration (NASA), using a combination of parametric, analogy, and expert opinion methods. There is no one standard methodology and technique that is used by NASA or by the Federal Aviation Administration (FAA) for LCC estimation of prospective Decision Support Tools. Some of the frequently used methodologies include bottom-up, analogy, top-down, parametric, expert judgement, and Parkinson's Law. The developed LCC estimating methodology can be visualized as a three-dimensional matrix where the three axes represent coverage, estimation, and timing. This paper focuses on the three characteristics of this methodology that correspond to the three axes.
Bayesian multivariate hierarchical transformation models for ROC analysis.
O'Malley, A James; Zou, Kelly H
2006-02-15
A Bayesian multivariate hierarchical transformation model (BMHTM) is developed for receiver operating characteristic (ROC) curve analysis based on clustered continuous diagnostic outcome data with covariates. Two special features of this model are that it incorporates non-linear monotone transformations of the outcomes and that multiple correlated outcomes may be analysed. The mean, variance, and transformation components are all modelled parametrically, enabling a wide range of inferences. The general framework is illustrated by focusing on two problems: (1) analysis of the diagnostic accuracy of a covariate-dependent univariate test outcome requiring a Box-Cox transformation within each cluster to map the test outcomes to a common family of distributions; (2) development of an optimal composite diagnostic test using multivariate clustered outcome data. In the second problem, the composite test is estimated using discriminant function analysis and compared to the test derived from logistic regression analysis where the gold standard is a binary outcome. The proposed methodology is illustrated on prostate cancer biopsy data from a multi-centre clinical trial.
Hagar, Yolanda C; Harvey, Danielle J; Beckett, Laurel A
2016-08-30
We develop a multivariate cure survival model to estimate lifetime patterns of colorectal cancer screening. Screening data cover long periods of time, with sparse observations for each person. Some events may occur before the study begins or after the study ends, so the data are both left-censored and right-censored, and some individuals are never screened (the 'cured' population). We propose a multivariate parametric cure model that can be used with left-censored and right-censored data. Our model allows for the estimation of the time to screening as well as the average number of times individuals will be screened. We calculate likelihood functions based on the observations for each subject using a distribution that accounts for within-subject correlation and estimate parameters using Markov chain Monte Carlo methods. We apply our methods to the estimation of lifetime colorectal cancer screening behavior in the SEER-Medicare data set. Copyright © 2016 John Wiley & Sons, Ltd.
Brain Signal Variability is Parametrically Modifiable
Garrett, Douglas D.; McIntosh, Anthony R.; Grady, Cheryl L.
2014-01-01
Moment-to-moment brain signal variability is a ubiquitous neural characteristic, yet remains poorly understood. Evidence indicates that heightened signal variability can index and aid efficient neural function, but it is not known whether signal variability responds to precise levels of environmental demand, or instead whether variability is relatively static. Using multivariate modeling of functional magnetic resonance imaging-based parametric face processing data, we show here that within-person signal variability level responds to incremental adjustments in task difficulty, in a manner entirely distinct from results produced by examining mean brain signals. Using mixed modeling, we also linked parametric modulations in signal variability with modulations in task performance. We found that difficulty-related reductions in signal variability predicted reduced accuracy and longer reaction times within-person; mean signal changes were not predictive. We further probed the various differences between signal variance and signal means by examining all voxels, subjects, and conditions; this analysis of over 2 million data points failed to reveal any notable relations between voxel variances and means. Our results suggest that brain signal variability provides a systematic task-driven signal of interest from which we can understand the dynamic function of the human brain, and in a way that mean signals cannot capture. PMID:23749875
Cost Estimation and Control for Flight Systems
NASA Technical Reports Server (NTRS)
Hammond, Walter E.; Vanhook, Michael E. (Technical Monitor)
2002-01-01
Good program management practices, cost analysis, cost estimation, and cost control for aerospace flight systems are interrelated and depend upon each other. The best cost control process cannot overcome poor design or poor systems trades that lead to the wrong approach. The project needs robust Technical, Schedule, Cost, Risk, and Cost Risk practices before it can incorporate adequate Cost Control. Cost analysis both precedes and follows cost estimation; the two are closely coupled with each other and with Risk analysis. Parametric cost estimating relationships and computerized models are most often used. NASA has learned some valuable lessons in controlling cost problems, and recommends use of a summary Project Manager's checklist as shown here.
Orbit transfer vehicle engine study. Volume 2: Technical report
NASA Technical Reports Server (NTRS)
1980-01-01
The orbit transfer vehicle (OTV) engine study provided parametric performance, engine programmatic, and cost data on the complete propulsive spectrum that is available for a variety of high energy, space maneuvering missions. Candidate OTV engines from the near term RL 10 (and its derivatives) to advanced high performance expander and staged combustion cycle engines were examined. The RL 10/RL 10 derivative performance, cost and schedule data were updated and provisions defined which would be necessary to accommodate extended low thrust operation. Parametric performance, weight, envelope, and cost data were generated for advanced expander and staged combustion OTV engine concepts. A prepoint design study was conducted to optimize thrust chamber geometry and cooling, engine cycle variations, and controls for an advanced expander engine. Operation at low thrust was defined for the advanced expander engine and the feasibility and design impact of kitting was investigated. An analysis of crew safety and mission reliability was conducted for both the staged combustion and advanced expander OTV engine candidates.
An open source multivariate framework for n-tissue segmentation with evaluation on public data.
Avants, Brian B; Tustison, Nicholas J; Wu, Jue; Cook, Philip A; Gee, James C
2011-12-01
We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs ( http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool.
NASA Astrophysics Data System (ADS)
Yuan, X.; Wang, L.; Zhang, M.
2017-12-01
Rainfall deficit in the crop growing seasons is usually accompanied by heat waves. Abnormally high temperature increases evapotranspiration and decreases soil moisture rapidly, and ultimately results in a type of drought with a rapid onset, short duration but devastating impact, which is called "Flash drought". With the increase in global temperature, flash drought is expected to occur more frequently. However, there is no consensus on the definition of flash drought so far. Moreover, large uncertainty exists in the estimation of the flash drought and its trend, and the underlying mechanism for its long-term change is not clear. In this presentation, a parametric multivariate drought index that characterizes the joint probability distribution of key variables of flash drought will be developed, and the historical changes in flash drought over China will be analyzed. In addition, a set of land surface model simulations driven by IPCC CMIP5 models with different forcings and future scenarios, will be used for the detection and attribution of flash drought change. This study is targeted at quantifying the influences of natural and anthropogenic climate change on the flash drought change, projecting its future change as well as the corresponding uncertainty, and improving our understanding of the variation of flash drought and its underlying mechanism in a changing climate.
Cabrieto, Jedelyn; Tuerlinckx, Francis; Kuppens, Peter; Grassmann, Mariel; Ceulemans, Eva
2017-06-01
Change point detection in multivariate time series is a complex task, since next to the mean, the correlation structure of the monitored variables may also alter when change occurs. DeCon was recently developed to detect such changes in mean and/or correlation by combining a moving windows approach and robust PCA. However, in the literature, several other methods have been proposed that employ other non-parametric tools: E-divisive, Multirank, and KCP. Since these methods use different statistical approaches, two issues need to be tackled. First, applied researchers may find it hard to appraise the differences between the methods. Second, a direct comparison of the relative performance of all these methods for capturing change points signaling correlation changes is still lacking. Therefore, we present the basic principles behind DeCon, E-divisive, Multirank, and KCP and the corresponding algorithms, to make them more accessible to readers. We further compared their performance through extensive simulations using the settings of Bulteel et al. (Biological Psychology, 98 (1), 29-42, 2014), implying changes in mean and in correlation structure, and those of Matteson and James (Journal of the American Statistical Association, 109 (505), 334-345, 2014), implying different numbers of (noise) variables. KCP emerged as the best method in almost all settings. However, in case of more than two noise variables, only DeCon performed adequately in detecting correlation changes.
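The moving-windows intuition can be conveyed in a few lines. This naive sketch flags candidate change points where adjacent windows disagree most in mean and correlation; none of the four published algorithms is reproduced here, and the data are simulated:

import numpy as np

def window_discrepancy(X, w=50):
    # Score each time point by mean + correlation change between flanking windows
    n = len(X)
    scores = np.full(n, np.nan)
    for t in range(w, n - w):
        left, right = X[t - w:t], X[t:t + w]
        d_mean = np.linalg.norm(left.mean(0) - right.mean(0))
        d_corr = np.linalg.norm(np.corrcoef(left.T) - np.corrcoef(right.T))
        scores[t] = d_mean + d_corr
    return scores

rng = np.random.default_rng(5)
a = rng.multivariate_normal([0, 0], [[1, 0.0], [0.0, 1]], 200)
b = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], 200)  # correlation shifts at t=200
X = np.vstack([a, b])
print(np.nanargmax(window_discrepancy(X)))  # should land near 200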
NASA Astrophysics Data System (ADS)
Papalexiou, Simon Michael
2018-05-01
Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
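The backbone of the scheme can be sketched in a few lines: simulate an autocorrelated parent-Gaussian series and back-transform its probabilities through the target marginal (a gamma distribution here, as an assumed example). The paper's correlation transformation functions, which correct the parent autocorrelation so the target correlation is hit exactly, are omitted:

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, rho = 10000, 0.8                  # length and parent lag-1 autocorrelation

# Parent Gaussian AR(1) process
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.normal()

# Back-transform: Phi(z) is uniform, then apply the target marginal's quantile function
u = stats.norm.cdf(z)
x = stats.gamma.ppf(u, a=2.0, scale=3.0)  # e.g., a skewed, discharge-like marginal

print(stats.gamma.fit(x, floc=0)[0],      # shape parameter recovered near 2
      np.corrcoef(x[:-1], x[1:])[0, 1])   # lag-1 correlation, slightly below rho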
Predicting Production Costs for Advanced Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Bao, Han P.; Samareh, J. A.; Weston, R. P.
2002-01-01
For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and are nowhere near the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This paper outlines the development of a process-based cost model in which the physical elements of the vehicle are costed according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed to provide an overall estimate of the total production cost for a design configuration. This capability to directly link any design configuration to a realistic cost estimate is a key requirement for high-payoff MDO problems. Another important consideration in this paper is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool.
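The roll-up the paper describes (elemental, geometry-driven costs scaled by a complexity factor and summed) reduces to a very small computation. A schematic sketch with entirely hypothetical element names, base costs, and moduli:

```python
# Process-based cost roll-up: each element's geometry-driven base cost is
# scaled by a "cost modulus" capturing material, size, shape, precision and
# equipment effects, then summed. All values are illustrative only.
elements = [
    # (name, base process cost ($), cost modulus)
    ("wing skin",  1.20e6, 1.4),   # e.g., composite material -> higher modulus
    ("fuselage",   2.10e6, 1.1),
    ("bulkheads",  0.65e6, 1.8),   # e.g., tight tolerances -> higher modulus
]

total = sum(base * modulus for _, base, modulus in elements)
print(f"estimated production cost: ${total / 1e6:.2f}M")
```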
Gras, Laure-Lise; Mitton, David; Crevier-Denoix, Nathalie; Laporte, Sébastien
2012-01-01
Most recent finite element models that represent muscles are generic or subject-specific models that use complex constitutive laws. Identification of the parameters of such complex constitutive laws can be an important limitation for subject-specific approaches. The aim of this study was to assess the possibility of modelling muscle behaviour in compression with a parametric model and a simple constitutive law. A quasi-static compression test was performed on the muscles of dogs. A parametric finite element model was designed using a linear elastic constitutive law. A multivariate analysis was performed to assess the effects of geometry on muscle response. An inverse method was used to define Young's modulus. The non-linear response of the muscles was obtained using a subject-specific geometry and a linear elastic law. Thus, a simple muscle model can be used to obtain a bio-faithful biomechanical response.
Parametric Cost and Schedule Modeling for Early Technology Development
2018-04-02
Recipient of the Best Paper in the Analysis Methods Category and 2017 Best Paper Overall awards; also presented at the 2017 NASA Cost and Schedule Symposium. [Table-of-contents and figure-list residue removed.] ... contribute to the lack of data, objective models, and methods that can be broadly applied in early planning stages. ...
The report gives results of a recent analysis showing that cost-effective indoor radon reduction technology is required for houses with initial radon concentrations < 4 pCi/L, because 78-86% of the national lung cancer risk due to radon is associated with those houses. ...
Eigenvalue assignment by minimal state-feedback gain in LTI multivariable systems
NASA Astrophysics Data System (ADS)
Ataei, Mohammad; Enshaee, Ali
2011-12-01
In this article, an improved method for eigenvalue assignment via state feedback in linear time-invariant multivariable systems is proposed. The method is based on elementary similarity operations and mainly involves the use of vector companion forms, and thus is very simple and easy to implement on a digital computer. In addition to controllable systems, the proposed method can be applied to stabilisable ones and to systems with linearly dependent inputs. Moreover, two types of state-feedback gain matrices can be obtained by this method: (1) a numerical one, which is unique, and (2) a parametric one, whose parameters are determined so as to achieve a gain matrix with minimum Frobenius norm. Numerical examples are presented to demonstrate the advantages of the proposed method.
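For comparison, eigenvalue assignment by state feedback is available off the shelf in SciPy. The sketch below places poles for a double integrator; note that scipy.signal.place_poles optimizes robustness rather than the minimum-Frobenius-norm criterion proposed in the article:

```python
import numpy as np
from scipy.signal import place_poles

# Double integrator: x' = A x + B u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

res = place_poles(A, B, [-2.0, -3.0])  # desired closed-loop eigenvalues
K = res.gain_matrix                    # state feedback u = -K x

print(np.linalg.eigvals(A - B @ K))    # approximately [-2, -3]
```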
Robust Machine Learning Variable Importance Analyses of Medical Conditions for Health Care Spending.
Rose, Sherri
2018-03-11
Objective: to propose nonparametric double robust machine learning for variable importance analyses of medical conditions for health spending. Data: 2011-2012 Truven MarketScan database. I evaluate how much more, on average, commercially insured enrollees with each of 26 of the most prevalent medical conditions cost per year after controlling for demographics and other medical conditions. This is accomplished within the nonparametric targeted learning framework, which incorporates ensemble machine learning. Previous literature studying the impact of medical conditions on health care spending has almost exclusively focused on parametric risk adjustment; thus, I compare my approach to parametric regression. My results demonstrate that multiple sclerosis, congestive heart failure, severe cancers, major depression and bipolar disorders, and chronic hepatitis are the most costly medical conditions on average per individual. These findings differed from those obtained using parametric regression. The literature may be underestimating the spending contributions of several medical conditions, which is a potentially critical oversight. If current methods are not capturing the true incremental effect of medical conditions, undesirable incentives related to care may remain. Further work is needed to directly study these issues in the context of federal formulas.
Multivariate Regression Analysis and Slaughter Livestock,
Keywords: agriculture, economics; meat, production; multivariate analysis; regression analysis; animals; weight; costs; predictions; stability; mathematical models; storage; beef; pork; food; statistical data; accuracy.
Guidance, navigation, and control trades for an Electric Orbit Transfer Vehicle
NASA Astrophysics Data System (ADS)
Zondervan, K. P.; Bauer, T. A.; Jenkin, A. B.; Metzler, R. A.; Shieh, R. A.
The USAF Space Division initiated the Electric Insertion Transfer Experiment (ELITE) in the fall of 1988. The ELITE space mission is planned for the mid-1990s and will demonstrate technological readiness for the development of operational solar-powered electric orbit transfer vehicles (EOTVs). To minimize the cost of ground operations, autonomous flight is desirable; thus, the guidance, navigation, and control (GNC) functions of an EOTV should reside on board. In order to define GNC requirements for ELITE, parametric trades must be performed for an operational solar-powered EOTV so that a clearer understanding of the performance aspects is obtained. Parametric trades for the GNC subsystems have provided insight into the relationship between pointing accuracy, transfer time, and propellant utilization. Additional trades need to be performed, taking into account weight, cost, and degree of autonomy.
Astronomy sortie mission definition study. Addendum: Follow-on analyses
NASA Technical Reports Server (NTRS)
1973-01-01
Results of design analyses, trade studies, and planning data of the Astronomy Sortie Mission Definition Study are presented. An in-depth analysis of UV instruments, nondeployed solar payload, and on-orbit access is presented. Planning data are considered, including the cost and schedules associated with the astronomy instruments and/or support hardware. Costs are presented in a parametric fashion.
Parametric versus Cox's model: an illustrative analysis of divorce in Canada.
Balakrishnan, T R; Rao, K V; Krotki, K J; Lapierre-adamcyk, E
1988-06-01
Recent demographic literature clearly recognizes the importance of survival models in the analysis of cross-sectional event histories. Of the various survival models, Cox's (1972) semi-parametric model has been very popular due to its simplicity and readily available computer software for estimation, sometimes at the cost of precision and parsimony of the model. This paper focuses on parametric failure time models for event history analysis, such as the Weibull, lognormal, loglogistic, and exponential models. The authors also test the goodness of fit of these parametric models versus Cox's proportional hazards model, taking the Kaplan-Meier estimate as the baseline. As an illustration, the authors reanalyze the Canadian Fertility Survey data on first-marriage dissolution with parametric models. Though the parametric model estimates were not very different from each other, the loglogistic model seemed to fit slightly better. When eight covariates were used in the analysis, the coefficients were similar across the models, and the overall conclusions about the relative risks would not have been different. The findings reveal that, in marriage dissolution, the differences according to demographic and socioeconomic characteristics may be far more important than is generally found in many studies. Therefore, one should not treat the population as homogeneous in analyzing survival probabilities of marriages, other than for a cursory analysis of overall trends.
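In Python, the same comparison can be run with the lifelines package (assuming it is installed; synthetic data below, not the Canadian Fertility Survey). Parametric AFT fits can be compared on AIC, while the Cox model's partial likelihood is inspected separately:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter, LogLogisticAFTFitter

rng = np.random.default_rng(3)
n = 500
x = rng.binomial(1, 0.5, n)                       # a single binary covariate
t = 10 * rng.weibull(1.5, n) * np.exp(-0.5 * x)   # true Weibull failure times
c = rng.uniform(0, 15, n)                         # random censoring times
df = pd.DataFrame({"duration": np.minimum(t, c),
                   "observed": (t <= c).astype(int), "x": x})

cox = CoxPHFitter().fit(df, duration_col="duration", event_col="observed")
wei = WeibullAFTFitter().fit(df, duration_col="duration", event_col="observed")
llog = LogLogisticAFTFitter().fit(df, duration_col="duration", event_col="observed")

print(wei.AIC_, llog.AIC_)   # compare the parametric fits
print(cox.params_)           # Cox coefficient for the same covariate
```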
Yu, Hao; Dick, Andrew W
2012-10-01
Given the rapid growth of health care costs, some experts were concerned with erosion of employment-based private insurance (EBPI). This empirical analysis aims to quantify the concern. Using the National Health Account, we generated a cost index to represent state-level annual cost growth. We merged it with the 1996-2003 Medical Expenditure Panel Survey. The unit of analysis is the family. We conducted both bivariate and multivariate logistic analyses. The bivariate analysis found a significant inverse association between the cost index and the proportion of families receiving an offer of EBPI. The multivariate analysis showed that the cost index was significantly negatively associated with the likelihood of receiving an EBPI offer for the entire sample and for families in the first, second, and third quartiles of income distribution. The cost index was also significantly negatively associated with the proportion of families with EBPI for the entire year for each family member (EBPI-EYEM). The multivariate analysis confirmed significance of the relationship for the entire sample, and for families in the second and third quartiles of income distribution. Among the families with EBPI-EYEM, there was a positive relationship between the cost index and this group's likelihood of having out-of-pocket expenditures exceeding 10 percent of family income. The multivariate analysis confirmed significance of the relationship for the entire group and for families in the second and third quartiles of income distribution. Rising health costs reduce EBPI availability and enrollment, and the financial protection provided by it, especially for middle-class families.
NASA Air Force Cost Model (NAFCOM): Capabilities and Results
NASA Technical Reports Server (NTRS)
McAfee, Julie; Culver, George; Naderi, Mahmoud
2011-01-01
NAFCOM is a parametric estimating tool for space hardware. It uses cost estimating relationships (CERs), which correlate historical costs to mission characteristics, to predict new project costs. It is based on historical NASA and Air Force space projects and is intended to be used in the very early phases of a development project. NAFCOM can be used at the subsystem or component levels and estimates development and production costs. It is applicable to various types of missions (crewed spacecraft, uncrewed spacecraft, and launch vehicles). There are two versions of the model: a restricted government version and a contractor-releasable version.
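The CER mechanics are compactly illustrated by the classic power-law form, cost = a * weight^b, fit by least squares in log-log space (all numbers below are made up, not NAFCOM data):

```python
import numpy as np

# Hypothetical historical subsystem data: weight (kg) vs. cost ($M).
weight = np.array([120.0, 250.0, 400.0, 800.0, 1500.0])
cost = np.array([14.0, 25.0, 36.0, 60.0, 98.0])

# Fit cost = a * weight**b via OLS on the logs.
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)
print(f"CER: cost = {a:.2f} * weight^{b:.2f}")

# Apply the CER to a new 600 kg subsystem.
print(f"estimate for 600 kg: ${a * 600**b:.1f}M")
```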
Keywords: systems analysis; water supplies; mathematical models; optimization; economics; linear programming; hydrology; regions; allocations; restraint; rivers; evaporation; lakes; Utah; salvage; mines (excavations).
NASA Technical Reports Server (NTRS)
Leininger, G.; Jutila, S.; King, J.; Muraco, W.; Hansell, J.; Lindeen, J.; Franckowiak, E.; Flaschner, A.
1975-01-01
Appendices are presented which include discussions of interest formulas, factors in regionalization, parametric modeling of discounted benefit-sacrifice streams, engineering economic calculations, and product innovation. For Volume 1, see .
NASA Technical Reports Server (NTRS)
1979-01-01
Cost data generated for the selected evolutionary power module concepts are reported. The initial acquisition costs (design, development, and protoflight unit test costs) were defined and modeled for the baseline 25 kW power module configurations. By building a parametric model of this initial building block, the costs of the 50 kW and 100 kW power modules were derived by defining only their configuration and programmatic differences from the 25 kW baseline module. Variations in cost for the quantities needed to fulfill the mission scenarios were derived by applying appropriate learning curves.
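Applying a learning curve of the usual unit-cost form is a one-liner: the n-th unit costs T1 * n^b with b = log2(slope). A minimal sketch with hypothetical numbers:

```python
import numpy as np

def unit_cost(t1, n, slope=0.9):
    """Cost of the n-th unit under a classic learning curve: each doubling
    of cumulative quantity multiplies unit cost by `slope` (e.g., 90%)."""
    return t1 * n ** np.log2(slope)

t1 = 100.0                          # hypothetical first-unit cost ($M)
units = np.arange(1, 5)
print(unit_cost(t1, units))         # declining unit costs
print(unit_cost(t1, units).sum())   # cumulative cost of four units
```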
Schnitzler, Mark A; Johnston, Karissa; Axelrod, David; Gheorghian, Adrian; Lentine, Krista L
2011-06-27
Improved early kidney transplant outcomes limit the contemporary utility of standard clinical endpoints. Quantifying the relationship of renal function at 1 year after transplant with subsequent clinical outcomes and healthcare costs may facilitate cost-benefit evaluations among transplant recipients. Data for Medicare-insured kidney-only transplant recipients (1995-2003) were drawn from the United States Renal Data System. Associations of estimated glomerular filtration rate (eGFR) level at the first transplant anniversary with subsequent death-censored graft failure and patient death in posttransplant years 1 to 3 and 4 to 7 were examined by parametric survival analysis. Associations of eGFR with total health care costs defined by Medicare payments were assessed with multivariate linear regression. Among 38,015 participants, first-anniversary eGFR level demonstrated graded associations with subsequent outcomes. Compared with patients with 12-month eGFR ≥60 mL/min/1.73 m², the adjusted relative risk of death-censored graft failure in years 1 to 3 was 31% greater for eGFR 45 to 59 mL/min/1.73 m² (P<0.0001) and 622% greater for eGFR 15 to 30 mL/min/1.73 m² (P<0.0001). Associations of first-anniversary eGFR level with graft failure and mortality remained significant in years 4 to 7. The proportions of recipients expected to return to dialysis or die attributable to eGFR less than 60 mL/min/1.73 m² over 10 years were 23.1% and 9.4%, respectively, and were significantly higher than the proportions attributable to delayed graft function or acute rejection. Reduced eGFR was associated with graded and significant increases in health care spending during years 2 and 3 after transplant (P<0.0001). eGFR is strongly associated with clinical and economic outcomes after kidney transplantation.
Single-stage-to-orbit versus two-stage-two-orbit: A cost perspective
NASA Astrophysics Data System (ADS)
Hamaker, Joseph W.
1996-03-01
This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLVs). The analysis addresses the issue parametrically, such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs and, finally, makes a case for the economic preference of SSTO over TSTO.
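The boundary-complexity calculation can be sketched as a root-finding problem: find the TSTO relative-complexity factor at which the two life-cycle costs are equal. The cost functions and numbers below are placeholders, not the paper's models:

```python
from scipy.optimize import brentq

LCC_SSTO = 18.0                      # hypothetical SSTO life-cycle cost ($B)

def lcc_tsto(k):
    """Hypothetical TSTO life-cycle cost ($B), scaled by a relative
    complexity factor k, plus a fixed two-stage integration overhead."""
    dev, ops = 10.0, 5.0
    return k * (dev + ops) + 4.0

# Boundary complexity: the k at which both options cost the same.
k_star = brentq(lambda k: lcc_tsto(k) - LCC_SSTO, 0.1, 3.0)
print(f"TSTO is cheaper only if its relative complexity k < {k_star:.2f}")
```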
Gas engine heat pump cycle analysis. Volume 1: Model description and generic analysis
NASA Astrophysics Data System (ADS)
Fischer, R. D.
1986-10-01
This task prepared performance and cost information to assist in evaluating the selection of heating, ventilating, and air-conditioning (HVAC) components, values for component design variables, and system configurations and operating strategy. A steady-state computer model for performance simulation of engine-driven and electrically driven heat pumps was prepared and used effectively for parametric and seasonal performance analyses. Parametric analysis showed the effect of variables associated with the design of recuperators, brine coils, the domestic hot water heat exchanger, compressor size, engine efficiency, and insulation on exhaust and brine piping. Seasonal performance data were prepared for residential and commercial units in six cities with system configurations closely related to existing or contemplated hardware of the five GRI engine contractors. Similar data were prepared for an advanced variable-speed electric unit for comparison purposes. The effect of domestic hot water production on operating costs was determined. Four fan-operating strategies and two brine loop configurations were explored.
Application of selection and estimation regular vine copula on go public company share
NASA Astrophysics Data System (ADS)
Hasna Afifah, R.; Noviyanti, Lienda; Bachrudin, Achmad
2018-03-01
The accuracy of financial risk management involving a large number of assets is needed, but dependencies among assets cannot be adequately analyzed with standard tools. To analyze dependencies among a number of assets, several extensions have been added to the standard multivariate copula; however, these tools have not been adequate in applications with higher dimensions. The bivariate parametric copula families can be used to solve this: a multivariate copula can be built from bivariate parametric copulas connected by a graphical representation, yielding Pair Copula Constructions (PCCs), or vine copulas. C-vine and D-vine copulas have been applied in some research, but their use is more restrictive than that of the R-vine copula. Therefore, this study uses the R-vine copula, which provides flexibility for modeling complex dependencies in high dimensions. Since the copula is a static model, while stock values change over time, the copula is combined with an ARMA-GARCH model for modeling the movement of shares (volatility). The objective of this paper is to select and estimate an R-vine copula for analyzing PT Jasa Marga (Persero) Tbk (JSMR), PT Waskita Karya (Persero) Tbk (WSKT), and PT Bank Mandiri (Persero) Tbk (BMRI) from August 31, 2014 to August 31, 2017. The selected copulas for the two edges of the first tree are the survival Gumbel, and the copula for the edge of the second tree is the Gaussian.
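The first steps of any vine-copula fit (transform each marginal to pseudo-observations, then estimate a pair-copula parameter) can be sketched with SciPy alone. Here a Gaussian pair-copula parameter is recovered from Kendall's tau; the returns are synthetic stand-ins for GARCH-filtered residuals:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
r1 = 0.01 * rng.standard_t(5, 1000)               # hypothetical return series 1
r2 = 0.6 * r1 + 0.008 * rng.standard_t(5, 1000)   # dependent series 2

# Pseudo-observations: ranks rescaled to the open unit interval.
u1 = stats.rankdata(r1) / (len(r1) + 1)
u2 = stats.rankdata(r2) / (len(r2) + 1)

# Kendall's-tau inversion for elliptical copulas: rho = sin(pi * tau / 2).
tau, _ = stats.kendalltau(u1, u2)
rho = np.sin(np.pi * tau / 2)
print(f"tau = {tau:.3f} -> Gaussian pair-copula rho = {rho:.3f}")
```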
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times.
Hohn, M. Ed; Nuhfer, E.B.; Vinopal, R.J.; Klanderman, D.S.
1980-01-01
Classifying very fine-grained rocks through fabric elements provides information about depositional environments, but is subject to the biases of visual taxonomy. To evaluate the statistical significance of an empirical classification of very fine-grained rocks, samples from Devonian shales in four cored wells in West Virginia and Virginia were measured for 15 variables: quartz, illite, pyrite and expandable clays determined by X-ray diffraction; total sulfur, organic content, inorganic carbon, matrix density, bulk density, porosity, and silt; as well as density, sonic travel time, resistivity, and γ-ray response measured from well logs. The four lithologic types comprised: (1) sharply banded shale, (2) thinly laminated shale, (3) lenticularly laminated shale, and (4) nonbanded shale. Univariate and multivariate analyses of variance showed that the lithologic classification reflects significant differences in the variables measured, differences that can be detected independently of stratigraphic effects. Little-known statistical methods found useful in this work included: the multivariate analysis of variance with more than one effect, simultaneous plotting of samples and variables on canonical variates, and the use of parametric ANOVA and MANOVA on ranked data.
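The "MANOVA on ranked data" trick is worth spelling out: rank-transform each response, then run the ordinary parametric MANOVA on the ranks. A sketch with statsmodels on synthetic lithotype data (not the Devonian shale measurements):

```python
import numpy as np
import pandas as pd
from scipy.stats import rankdata
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(5)
n = 80
df = pd.DataFrame({
    "lithotype": rng.choice(["banded", "laminated", "nonbanded"], n),
    "quartz": rng.normal(30, 5, n),
    "illite": rng.normal(20, 4, n),
    "porosity": rng.normal(8, 2, n),
})

# Rank-transform the responses, then apply the usual MANOVA machinery.
for col in ("quartz", "illite", "porosity"):
    df[col] = rankdata(df[col])

fit = MANOVA.from_formula("quartz + illite + porosity ~ lithotype", data=df)
print(fit.mv_test())
```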
Kral, L
2007-05-01
We present a complex stabilization and control system for a commercially available optical parametric oscillator. The system is able to stabilize the oscillator's output wavelength at a narrow spectral line of atomic iodine with subpicometer precision, allowing utilization of this solid-state parametric oscillator as a front end of a high-power photodissociation laser chain formed by iodine gas amplifiers. In such setup, a precise wavelength matching between the front end and the amplifier chain is necessary due to extremely narrow spectral lines of the gaseous iodine (approximately 20 pm). The system is based on a personal computer, a heated iodine cell, and a few other low-cost components. It automatically identifies the proper peak within the iodine absorption spectrum, and then keeps the oscillator tuned to this peak with high precision and reliability. The use of the solid-state oscillator as the front end allows us to use the whole iodine laser system as a pump laser for the optical parametric chirped pulse amplification, as it enables precise time synchronization with a signal Ti:sapphire laser.
Parametric study of potential early commercial power plants Task 3-A MHD cost analysis
NASA Technical Reports Server (NTRS)
1983-01-01
The development of costs for an MHD power plant and the comparison of these costs to those of a conventional coal-fired power plant are reported. The program is divided into three activities: (1) code of accounts review; (2) MHD versus pulverized-coal power plant cost comparison; and (3) operating and maintenance cost estimates. The scope of each NASA code of accounts item was defined to assure that the recently completed Task 3 capital cost estimates are consistent with the code of accounts scope. Confidence in MHD plant capital cost estimates is improved by identifying comparability with conventional pulverized-coal-fired (PCF) power plant systems. The basis for estimating the MHD plant operating and maintenance costs of electricity is verified.
Structural cost optimization of photovoltaic central power station modules and support structure
NASA Technical Reports Server (NTRS)
Sutton, P. D.; Stolte, W. J.; Marsh, R. O.
1979-01-01
The results of a comprehensive study of photovoltaic module structural support concepts for photovoltaic central power stations and their associated costs are presented. The objective of the study was the identification of structural cost drivers. Parametric structural design and cost analyses of complete array systems consisting of modules, primary support structures, and foundations were performed. Area-related module cost was found to be constant with design, size, and loading. A curved glass module concept was evaluated and found to have the potential to significantly reduce panel structural costs. Conclusions of the study are: array costs do not vary greatly among the designs evaluated; panel and array costs are strongly dependent on design loading; and the best support configuration is load dependent.
A Semi-parametric Multivariate Gap-filling Model for Eddy Covariance Latent Heat Flux
NASA Astrophysics Data System (ADS)
Li, M.; Chen, Y.
2010-12-01
Quantitative descriptions of latent heat fluxes are important for studying the water and energy exchanges between terrestrial ecosystems and the atmosphere. Eddy covariance approaches have been recognized as the most reliable technique for measuring surface fluxes over time scales ranging from hours to years. However, unfavorable micrometeorological conditions, instrument failures, and measurement limitations cause inevitable gaps in time series data. Development and application of suitable gap-filling techniques are crucial for estimating long-term fluxes. In this study, a semi-parametric multivariate gap-filling model was developed to fill latent heat flux gaps in eddy covariance measurements. Our approach combines the advantages of a multivariate statistical analysis (principal component analysis, PCA) and a nonlinear interpolation technique (K-nearest neighbors, KNN). The PCA method was first used to resolve the multicollinearity among various hydrometeorological factors, such as radiation, soil moisture deficit, LAI, and wind speed. The KNN method was then applied as a nonlinear interpolation tool to estimate a flux gap as the weighted sum of the latent heat fluxes of the K nearest neighbors in the PC domain. Two years, 2008 and 2009, of eddy covariance and hydrometeorological data from a subtropical mixed evergreen forest (the Lien-Hua-Chih site) were collected to calibrate and validate the proposed approach with artificial gaps after standard QC/QA procedures. The optimal K values and weighting factors were determined by a maximum likelihood test. The gap-filled latent heat fluxes show that the developed model successfully preserves energy balance at daily, monthly, and yearly time scales. Annual evapotranspiration from this study forest was 747 mm in 2008 and 708 mm in 2009. Nocturnal evapotranspiration was estimated with the filled gaps, and the results are comparable with other studies. Seasonal and daily variability of latent heat fluxes is also discussed.
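The PCA-plus-KNN pipeline maps directly onto scikit-learn: decorrelate the drivers with PCA, train a distance-weighted KNN regressor on the observed fluxes, and predict the gaps. A sketch on synthetic data (hypothetical driver names, not the Lien-Hua-Chih series):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(6)
n = 1000
drivers = rng.normal(size=(n, 4))   # radiation, SMD, LAI, wind (synthetic)
latent_heat = drivers @ np.array([80.0, -30.0, 25.0, 10.0]) + rng.normal(0, 20, n)

gaps = rng.random(n) < 0.2                         # 20% artificial gaps
pcs = PCA(n_components=3).fit_transform(drivers)   # decorrelated drivers

knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
knn.fit(pcs[~gaps], latent_heat[~gaps])            # train on observed rows
filled = knn.predict(pcs[gaps])                    # fill gaps in PC space

print(np.corrcoef(filled, latent_heat[gaps])[0, 1])  # recovery skill
```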
Space biology initiative program definition review. Trade study 4: Design modularity and commonality
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Davidson, William L.; Herbert, Frank J.; Bilodeau, James W.; Stoval, J. Michael; Sutton, Terry
1989-01-01
The relative cost impacts (up or down) of developing Space Biology hardware using design modularity and commonality are studied. Recommendations are provided for how hardware development should be accomplished to meet optimum design modularity requirements for Life Science investigation hardware. In addition, the relative cost impacts of implementing commonality across all Space Biology hardware are defined. Cost analysis and supporting recommendations for levels of modularity and commonality are presented. A mathematical/statistical cost analysis method capable of supporting the incorporation of design modularity and commonality impacts into parametric cost analysis is provided.
Integrated-circuit balanced parametric amplifier
NASA Technical Reports Server (NTRS)
Dickens, L. E.
1975-01-01
Amplifier, fabricated on single dielectric substrate, has pair of Schottky barrier varactor diodes mounted on single semiconductor chip. Circuit includes microstrip transmission line and slot line section to conduct signals. Main features of amplifier are reduced noise output and low production cost.
Integral abutment bridges under thermal loading : numerical simulations and parametric study.
DOT National Transportation Integrated Search
2016-06-01
Integral abutment bridges (IABs) have become of interest due to their decreased construction and maintenance costs in comparison to conventional jointed bridges. Most prior IAB research was related to substructure behavior, and, as a result, most ...
A 5-year perspective over robotic general surgery: indications, risk factors and learning curves.
Sgarbură, O; Tomulescu, V; Blajut, C; Popescu, I
2013-01-01
Robotic surgery has opened a new era in several specialties, but the diffusion of medical innovation is slower in digestive surgery than in urology due to considerations related to cost and cost-efficiency. Studies often discuss the launching of a robotic program as well as the technical or clinical data related to specific procedures, but there are very few articles evaluating already existing robotic programs. The aims of the present study are to evaluate the results of a five-year robotic program and to assess the evolution of indications in a center with expertise in a wide range of thoracic and abdominal robotic surgery. All consecutive robotic surgery cases performed in our center since the beginning of the program and prior to the 31st of December 2012 were included in this study, summing up to 734 cases throughout five years of experience in the field. Demographic, clinical, surgical and postoperative variables were recorded and analyzed. Comparative parametric and non-parametric tests, univariate and multivariate analyses and CUSUM analysis were performed. In this group, the average age was 50.31 years; 60.9% of patients were female and 39.1% male. 55.3% of all interventions were indicated for oncological disease. 36% of all cases of either benign or malignant etiology were pelvic conditions, whilst 15.4% were esogastric conditions. Conversion was performed in 18 cases (2.45%). Mean operative time was 179.4±86.06 min. Mean docking time was 11.16±2.82 min. The mean hospital length of stay was 8.54 (±5.1) days. There were 26.2% complications of all Clavien subtypes, but major complications (Clavien III-V) represented only 6.2%. Male sex, age over 65 years, oncological cases and robotic suturing were identified as risk factors for unfavorable outcomes. The present data support the feasibility of different and complex procedures in a general surgery department as well as the ascending evolution of a well-designed and well-conducted robotic program. From the large variety of surgical interventions, we think that a robotic program could be focused on solving oncologic cases and different types of pelvic and gastroesophageal junction conditions, especially rectal, cervical and endometrial cancer, achalasia and complicated or redo hiatal hernia.
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.
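Models of this family typically scale a size-driven nominal effort by a product of multipliers derived from the questionnaire answers. A COCOMO-flavored sketch (hypothetical coefficients and multipliers; not the actual JPL/DSN model):

```python
import math

def effort_person_months(ksloc, multipliers, a=2.8, b=1.12):
    """Nominal effort grows as a power of size (KSLOC); questionnaire
    responses map to effort multipliers that adjust it up or down."""
    return a * ksloc**b * math.prod(multipliers)

answers = {
    "task difficulty": 1.15,    # harder than nominal
    "environment": 0.95,        # good tooling and practices
    "team experience": 0.90,    # experienced staff
}
print(f"{effort_person_months(32, answers.values()):.0f} person-months")
```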
Parametric study of helicopter aircraft systems costs and weights
NASA Technical Reports Server (NTRS)
Beltramo, M. N.
1980-01-01
Weight estimating relationships (WERs) and recurring production cost estimating relationships (CERs) were developed for helicopters at the system level. The WERs estimate system-level weight based on performance or design characteristics that are available during concept formulation or the preliminary design phase. The CER (or CERs in some cases) for each system utilizes weight (either actual or estimated using the appropriate WER) and production quantity as the key parameters.
Improving the Parametric Method of Cost Estimating Relationships of Naval Ships
2014-06-01
... a useful tool since the total cost of the ship is broken down into smaller parts as defined by the WBS. The Navy currently uses the Expanded Ship Work Breakdown Structure (ESWBS), which groups costs by function: for example, the 200-series Propulsion Plant group includes boilers, reactors, turbines, gears, shafting, propellers, steam piping, lube oil piping, and radiation shielding; the 600-series Outfit and Furnishings group includes spaces, ladders, storerooms, laundry, and workshops; and the 700-series Armament group includes guns, missile launchers, ammunition handling and stowage, torpedo tubes, and depth charges.
Manned Mars mission cost estimate
NASA Technical Reports Server (NTRS)
Hamaker, Joseph; Smith, Keith
1986-01-01
The potential costs of several options of a manned Mars mission are examined. A cost estimating methodology based primarily on existing Marshall Space Flight Center (MSFC) parametric cost models is summarized. These models include the MSFC Space Station Cost Model and the MSFC Launch Vehicle Cost Model as well as other modes and techniques. The ground rules and assumptions of the cost estimating methodology are discussed and cost estimates presented for six potential mission options which were studied. The estimated manned Mars mission costs are compared to the cost of the somewhat analogous Apollo Program cost after normalizing the Apollo cost to the environment and ground rules of the manned Mars missions. It is concluded that a manned Mars mission, as currently defined, could be accomplished for under $30 billion in 1985 dollars excluding launch vehicle development and mission operations.
Cai, Li
2006-02-01
A permutation test typically requires fewer assumptions than does a comparable parametric counterpart. The multi-response permutation procedure (MRPP) is a class of multivariate permutation tests of group difference useful for the analysis of experimental data. However, psychologists seldom make use of the MRPP in data analysis, in part because the MRPP is not implemented in popular statistical packages that psychologists use. A set of SPSS macros implementing the MRPP test is provided in this article. The use of the macros is illustrated by analyzing example data sets.
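Outside SPSS, the MRPP is only a few lines of NumPy: compare the observed group-size-weighted mean within-group distance (delta) with its permutation distribution. A minimal self-contained version:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mrpp(X, labels, n_perm=2000, seed=0):
    """MRPP: p-value is the share of label permutations whose weighted mean
    within-group pairwise distance (delta) is <= the observed delta."""
    rng = np.random.default_rng(seed)
    D = squareform(pdist(X))

    def delta(lab):
        d = 0.0
        for g in np.unique(lab):
            idx = np.flatnonzero(lab == g)
            within = D[np.ix_(idx, idx)][np.triu_indices(len(idx), 1)]
            d += (len(idx) / len(lab)) * within.mean()
        return d

    obs = delta(labels)
    null = np.array([delta(rng.permutation(labels)) for _ in range(n_perm)])
    return obs, float((null <= obs).mean())

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(1.5, 1, (20, 2))])
print(mrpp(X, np.repeat([0, 1], 20)))   # (observed delta, p-value)
```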
Scattering amplitudes from multivariate polynomial division
NASA Astrophysics Data System (ADS)
Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano
2012-11-01
We show that the evaluation of scattering amplitudes can be formulated as a problem of multivariate polynomial division, with the components of the integration-momenta as indeterminates. We present a recurrence relation which, independently of the number of loops, leads to the multi-particle pole decomposition of the integrands of the scattering amplitudes. The recursive algorithm is based on the weak Nullstellensatz theorem and on the division modulo the Gröbner basis associated to all possible multi-particle cuts. We apply it to dimensionally regulated one-loop amplitudes, recovering the well-known integrand-decomposition formula. Finally, we focus on the maximum-cut, defined as a system of on-shell conditions constraining the components of all the integration-momenta. By means of the Finiteness Theorem and of the Shape Lemma, we prove that the residue at the maximum-cut is parametrized by a number of coefficients equal to the number of solutions of the cut itself.
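The core operation (division of a polynomial modulo the Gröbner basis of an ideal of cut conditions) can be reproduced in SymPy on toy polynomials; the constraints below merely play the role of on-shell conditions and are not actual loop integrands:

```python
from sympy import Rational, groebner, reduced, symbols

x, y = symbols("x y")

# Toy "cut conditions" generating the ideal.
cuts = [x**2 + y**2 - 1, x * y - Rational(1, 2)]
G = groebner(cuts, x, y, order="lex")

# Divide a toy "integrand" f modulo the basis: f = sum_i q_i g_i + r.
f = x**3 * y + y**2
quotients, remainder = reduced(f, list(G.exprs), x, y, order="lex")
print(remainder)   # the part of f not contained in the cut ideal
```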
Bayesian Local Contamination Models for Multivariate Outliers
Page, Garritt L.; Dunson, David B.
2013-01-01
In studies where data are generated from multiple locations or sources it is common for there to exist observations that are quite unlike the majority. Motivated by the application of establishing a reference value in an inter-laboratory setting when outlying labs are present, we propose a local contamination model that is able to accommodate unusual multivariate realizations in a flexible way. The proposed method models the process level of a hierarchical model using a mixture with a parametric component and a possibly nonparametric contamination. Much of the flexibility in the methodology is achieved by allowing varying random subsets of the elements in the lab-specific mean vectors to be allocated to the contamination component. Computational methods are developed and the methodology is compared to three other possible approaches using a simulation study. We apply the proposed method to a NIST/NOAA sponsored inter-laboratory study which motivated the methodological development.
Assessment of benthic changes during 20 years of monitoring the Mexican Salina Cruz Bay.
González-Macías, C; Schifter, I; Lluch-Cota, D B; Méndez-Rodríguez, L; Hernández-Vázquez, S
2009-02-01
In this work, a non-parametric multivariate analysis was used to assess the impact of metals and organic compounds on the macroinfaunal component of the benthic mollusk community, using surface sediment data from several monitoring programs collected over 20 years in Salina Cruz Bay, Mexico. The data on benthic mollusk community characteristics (richness, abundance and diversity) were linked to multivariate environmental patterns, using the Alternating Conditional Expectations method to correlate the biological measurements of the mollusk community with the physicochemical properties of water and sediments. Mollusk community variation is related to environmental characteristics as well as lead content. Surface deposit feeders are increasing their relative density, while subsurface deposit feeders are decreasing with time; the latter are more closely tied to the sediment and are therefore expected to be more affected by its quality. However, predatory carnivorous gastropods as well as chemosymbiotic deposit-feeding bivalves have maintained their relative densities over time.
[Multivariate Adaptive Regression Splines (MARS), an alternative for the analysis of time series].
Vanegas, Jairo; Vásquez, Fabián
Multivariate Adaptive Regression Splines (MARS) is a non-parametric modelling method that extends the linear model, incorporating nonlinearities and interactions between variables. It is a flexible tool that automates the construction of predictive models: selecting relevant variables, transforming the predictor variables, processing missing values and preventing overfitting by means of a self-test. It is also able to predict, taking into account structural factors that might influence the outcome variable, thereby generating hypothetical models. The end result can identify relevant cut-off points in data series. It is rarely used in health research, so it is proposed here as a tool for the evaluation of relevant public health indicators. For demonstrative purposes, data series on the mortality of children under 5 years of age in Costa Rica over the period 1978-2008 were used.
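MARS builds its models from mirrored pairs of hinge functions placed at knots; a tiny NumPy illustration of one such basis fit by least squares (a fixed knot is assumed, whereas MARS searches knots automatically):

```python
import numpy as np

def hinge(v):
    return np.maximum(0.0, v)

rng = np.random.default_rng(8)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 4, 2 * x, 8 + 0.5 * (x - 4)) + rng.normal(0, 0.5, 200)

# MARS-style basis at knot t = 4: intercept plus the mirrored hinge pair.
t = 4.0
B = np.column_stack([np.ones_like(x), hinge(x - t), hinge(t - x)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print(coef)   # approx [8, 0.5, -2]: the piecewise-linear pieces around the knot
```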
Pérez, Concepción; Navarro, Ana; Saldaña, María T; Wilson, Koo; Rejas, Javier
2015-03-01
The aim of the present analysis was to model the association and predictive value of pain intensity on cost and resource utilization in patients with chronic peripheral neuropathic pain (PNP) treated in routine clinical practice settings in Spain. We performed a secondary economic analysis based on data from a multicenter, observational, and prospective cost-of-illness study in patients with chronic PNP that is refractory to prior treatment. Pain intensity was measured using the Short-Form McGill Pain Questionnaire. Univariate and multivariate linear regression models were fitted to identify independent predictors of cost and health care/non-health care resource utilization. A total of 1703 patients were included in the current analysis. Pain intensity was an independent predictor of total costs (total costs = 35.6 × pain intensity + 214.5; coefficient of determination R² = 0.19, P < 0.001), direct costs (direct costs = 10.8 × pain intensity + 257.7; R² = 0.06, P < 0.001), and indirect costs (indirect costs = 24.8 × pain intensity − 43.4; R² = 0.20, P < 0.001) related to chronic PNP in the univariate analysis. Pain intensity remained significantly associated with total, direct, and indirect costs after adjustment for other covariates in the multivariate analysis (P < 0.001). None of the other variables considered in the multivariate analysis were predictors of resource utilization. Pain intensity predicts the health care and non-health care resource utilization and costs related to chronic PNP. Management of patients with drugs associated with a greater reduction of pain intensity may have a greater impact on the economic burden of that condition.
Parametric Cost Study of AC-DC Wayside Power Systems
DOT National Transportation Integrated Search
1975-09-01
The wayside power system provides all the power requirements of an electric vehicle operating on a fixed guideway. For a given set of specifications there are numerous wayside power supply configurations which will be satisfactory from a technical st...
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Hirschi, M.; Spirig, C.
2014-12-01
To quantify the impact of climate change on a specific pest (or any weather-dependent process) at a specific site, we may use a site-calibrated pest (or other) model and compare its outputs obtained with site-specific weather data representing present versus perturbed climates. The input weather data may be produced by a stochastic weather generator. Apart from the quality of the pest model, the reliability of the results obtained in such an experiment depends on the ability of the generator to represent the statistical structure of real-world weather series, and on the sensitivity of the pest model to possible imperfections of the generator. This contribution deals with the multivariate HOWGH weather generator, which is based on a combination of parametric and non-parametric statistical methods. Here, HOWGH is used to generate synthetic hourly series of three weather variables (solar radiation, temperature and precipitation) required by the dynamic pest model SOPRA to simulate the development of codling moth. The contribution presents results of the direct and indirect validation of HOWGH. In the direct validation, the synthetic series generated by HOWGH (under various settings of its underlying model) are validated in terms of multiple climatic characteristics, focusing on subdaily wet/dry and hot/cold spells. In the indirect validation, we assess the generator in terms of characteristics derived from the outputs of the SOPRA model fed by the observed versus synthetic series. The weather generator may be used to produce weather series representing present and future climates. In the latter case, the parameters of the generator may be modified by climate change scenarios based on global or regional climate models. To demonstrate this feature, results of codling moth simulations for a future climate will be shown. Acknowledgements: The weather generator is developed and validated within the frame of projects WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES1102 action).
Cost-Aware Design of a Discrimination Strategy for Unexploded Ordnance Cleanup
2011-02-25
Acronyms: ANN, Artificial Neural Network; AUC, Area Under the Curve; BRAC, Base Realignment And Closure; DLRT, Distance Likelihood Ratio Test; EER, ... [Method-comparison table residue: a discriminative, aggregate, nonparametric method [25] and a discriminative, aggregate, parametric artificial neural network (ANN) [33]. Results and Discussion, Task #1.]
Covariate analysis of bivariate survival data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators, which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values, where the expected values are determined from a specified parametric distribution. The model estimation is based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models are compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
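In the parametric variant, the conditional expectation that replaces a right-censored value has a closed form for common models; for a normal distribution it is the inverse Mills ratio formula E[T | T > c] = mu + sigma * phi(a) / (1 - Phi(a)) with a = (c - mu) / sigma. A minimal check with SciPy:

```python
from scipy.stats import norm

def expected_given_exceeds(c, mu, sigma):
    """E[T | T > c] for T ~ Normal(mu, sigma), via the inverse Mills ratio."""
    a = (c - mu) / sigma
    return mu + sigma * norm.pdf(a) / norm.sf(a)

# A failure time censored at c = 5 under a fitted Normal(4, 2) model would
# be replaced by this conditional expectation before refitting.
print(expected_given_exceeds(5.0, mu=4.0, sigma=2.0))
```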
Brain signal variability is parametrically modifiable.
Garrett, Douglas D; McIntosh, Anthony R; Grady, Cheryl L
2014-11-01
Moment-to-moment brain signal variability is a ubiquitous neural characteristic, yet remains poorly understood. Evidence indicates that heightened signal variability can index and aid efficient neural function, but it is not known whether signal variability responds to precise levels of environmental demand, or instead whether variability is relatively static. Using multivariate modeling of functional magnetic resonance imaging-based parametric face processing data, we show here that within-person signal variability level responds to incremental adjustments in task difficulty, in a manner entirely distinct from results produced by examining mean brain signals. Using mixed modeling, we also linked parametric modulations in signal variability with modulations in task performance. We found that difficulty-related reductions in signal variability predicted reduced accuracy and longer reaction times within-person; mean signal changes were not predictive. We further probed the various differences between signal variance and signal means by examining all voxels, subjects, and conditions; this analysis of over 2 million data points failed to reveal any notable relations between voxel variances and means. Our results suggest that brain signal variability provides a systematic task-driven signal of interest from which we can understand the dynamic function of the human brain, and in a way that mean signals cannot capture.
Hamilton's rule and the causes of social evolution.
Bourke, Andrew F G
2014-05-19
Hamilton's rule is a central theorem of inclusive fitness (kin selection) theory and predicts that social behaviour evolves under specific combinations of relatedness, benefit and cost. This review provides evidence for Hamilton's rule by presenting novel syntheses of results from two kinds of study in diverse taxa, including cooperatively breeding birds and mammals and eusocial insects. These are, first, studies that empirically parametrize Hamilton's rule in natural populations and, second, comparative phylogenetic analyses of the genetic, life-history and ecological correlates of sociality. Studies parametrizing Hamilton's rule are not rare and demonstrate quantitatively that (i) altruism (net loss of direct fitness) occurs even when sociality is facultative, (ii) in most cases, altruism is under positive selection via indirect fitness benefits that exceed direct fitness costs and (iii) social behaviour commonly generates indirect benefits by enhancing the productivity or survivorship of kin. Comparative phylogenetic analyses show that cooperative breeding and eusociality are promoted by (i) high relatedness and monogamy and, potentially, by (ii) life-history factors facilitating family structure and high benefits of helping and (iii) ecological factors generating low costs of social behaviour. Overall, the focal studies strongly confirm the predictions of Hamilton's rule regarding conditions for social evolution and their causes.
Computer aided system for parametric design of combination die
NASA Astrophysics Data System (ADS)
Naranje, Vishal G.; Hussein, H. M. A.; Kumar, S.
2017-09-01
In this paper, a computer-aided system for the parametric design of combination dies is presented. The system is developed using the knowledge-based system technique of artificial intelligence. It is capable of designing combination dies for the production of sheet metal parts involving punching and cupping operations. The system is coded in Visual Basic and interfaced with AutoCAD software. Its low cost will help die designers in small and medium scale sheet metal industries design combination dies for similar types of products. The proposed system can reduce the design time and effort of die designers for combination dies.
Modeling Personnel Turnover in the Parametric Organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A primary issue in organizing a new parametric cost analysis function is to determine the skill mix and number of personnel required. The skill mix can be obtained by a functional decomposition of the tasks required within the organization and a matrixed correlation with educational or experience backgrounds. The number of personnel is a function of the skills required to cover all tasks, personnel skill background and cross-training, the intensity of the workload for each task, migration through various tasks by personnel along a career path, personnel hiring limitations imposed by management and the applicant marketplace, personnel training limitations imposed by management and personnel capability, and the rate at which personnel leave the organization for whatever reason. Faced with the task of relating all of these organizational facets in order to grow a parametric cost analysis (PCA) organization from scratch, it was decided that a dynamic model was required to account for the obvious dynamics of the forming organization. The challenge was to create a simple model that would remain credible during all phases of organizational development. The model development process was broken down into the activities of determining the tasks required for PCA, determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the dynamic model, implementing the dynamic model, and testing the dynamic model.
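The skeleton of such a dynamic model is a small stock-and-flow simulation: capped hiring feeds a trainee pool, trainees mature into skilled analysts, and attrition drains the stock. All rates below are hypothetical placeholders, not the paper's calibrated values:

```python
def simulate(months=36, hire_cap=2.0, train_months=6.0, attrition=0.01):
    """Discrete-time personnel flow: hires -> trainees -> skilled analysts,
    with a fixed fraction of skilled staff leaving each month."""
    trainees, skilled = 0.0, 0.0
    history = []
    for _ in range(months):
        graduating = trainees / train_months   # trainees maturing this month
        trainees += hire_cap - graduating      # management-capped hiring
        skilled += graduating - attrition * skilled
        history.append(skilled)
    return history

print(f"skilled analysts after 3 years: {simulate()[-1]:.1f}")
```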
Latest NASA Instrument Cost Model (NICM): Version VI
NASA Technical Reports Server (NTRS)
Mrozinski, Joe; Habib-Agahi, Hamid; Fox, George; Ball, Gary
2014-01-01
The NASA Instrument Cost Model, NICM, is a suite of tools that allows for probabilistic cost estimation of NASA's space-flight instruments at both the system and subsystem level. NICM also includes the ability to perform cost estimation by analogy as well as joint confidence level (JCL) analysis. The latest version of NICM, Version VI, was released in Spring 2014. This paper focuses on the new features released with NICM VI, which include: 1) the NICM-E cost estimating relationship, which is applicable to instruments flying on Explorer-like class missions; 2) a new cluster analysis capability which, alongside the results of the parametric cost estimation for the user's instrument, provides a visualization of that instrument's similarity to previously flown instruments; and 3) new cost estimating relationships for in-situ instruments.
Statistical Analysis of Complexity Generators for Cost Estimation
NASA Technical Reports Server (NTRS)
Rowell, Ginger Holmes
1999-01-01
Predicting the cost of cutting edge new technologies involved with spacecraft hardware can be quite complicated. A new feature of the NASA Air Force Cost Model (NAFCOM), called the Complexity Generator, is being developed to model the complexity factors that drive the cost of space hardware. This parametric approach is also designed to account for the differences in cost, based on factors that are unique to each system and subsystem. The cost driver categories included in this model are weight, inheritance from previous missions, technical complexity, and management factors. This paper explains the Complexity Generator framework, the statistical methods used to select the best model within this framework, and the procedures used to find the region of predictability and the prediction intervals for the cost of a mission.
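The "region of predictability and prediction intervals" step corresponds to standard regression machinery; with statsmodels, a prediction interval for a new point is one call (toy single-driver data below, not the NAFCOM database):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
weight = rng.uniform(100, 1500, 40)          # hypothetical cost driver (kg)
cost = 0.05 * weight + rng.normal(0, 8, 40)  # hypothetical cost ($M)

fit = sm.OLS(cost, sm.add_constant(weight)).fit()

# 95% prediction interval for a new 900 kg system.
x_new = np.column_stack([np.ones(1), [900.0]])
frame = fit.get_prediction(x_new).summary_frame(alpha=0.05)
print(frame[["mean", "obs_ci_lower", "obs_ci_upper"]])
```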
NASA Technical Reports Server (NTRS)
Shaw, Eric J.
2001-01-01
This paper will report on the activities of the IAA Launcher Systems Economics Working Group in preparation for its Launcher Systems Development Cost Behavior Study. The Study goals include: improve launcher system and other space system parametric cost analysis accuracy; improve launcher system and other space system cost analysis credibility; and provide launcher system and technology development program managers and other decision-makers with useful information on the development cost impacts of their decisions. The Working Group plans to explore at least the following five areas in the Study: define and explain development cost behavior terms and concepts for use in the Study; identify and quantify sources of development cost and cost estimating uncertainty; identify and quantify significant influences on development cost behavior; identify common barriers to development cost understanding and reduction; and recommend practical, realistic strategies to accomplish reductions in launcher system development cost.
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
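A minimal sketch of two of the compared specifications follows, on synthetic skewed cost data: linear regression on square-root-transformed costs, and a GLM with square-root link and Poisson distribution. The variables and data are illustrative stand-ins, not the NHS data used in the paper, and the statsmodels link class name may vary by version.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
age = rng.uniform(20, 90, n)
X = sm.add_constant(age)
true_mean = (2.0 + 0.05 * age) ** 2                  # cost rises nonlinearly with age
cost = rng.gamma(shape=2.0, scale=true_mean / 2.0)   # skewed, non-negative costs

est, val = slice(0, n // 2), slice(n // 2, n)        # estimation / validation split

# (a) OLS on sqrt-transformed costs; naive squaring of forecasts is biased for the
#     conditional mean without a smearing-type retransformation correction
ols = sm.OLS(np.sqrt(cost[est]), X[est]).fit()
ols_mean = ols.predict(X[val]) ** 2

# (b) GLM with Poisson variance and square-root link (quasi-likelihood for costs);
#     'Sqrt' is the link class in recent statsmodels releases
glm = sm.GLM(cost[est], X[est],
             family=sm.families.Poisson(link=sm.families.links.Sqrt())).fit()
glm_mean = glm.predict(X[val])

for name, pred in [("sqrt-OLS", ols_mean), ("GLM sqrt-Poisson", glm_mean)]:
    print(name, "mean abs error:", np.mean(np.abs(pred - cost[val])))
```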
Prospects for reduced energy transports: A preliminary analysis
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Harper, M.; Smith, C. L.; Waters, M. H.; Williams, L. J.
1974-01-01
The recent energy crisis and subsequent substantial increase in fuel prices have provided increased incentive to reduce the fuel consumption of civil transport aircraft. At the present time many changes in operational procedures have been introduced to decrease fuel consumption of the existing fleet. In the future, however, it may become desirable or even necessary to introduce new fuel-conservative aircraft designs. This paper reports the results of a preliminary study of new near-term fuel-conservative aircraft. A parametric study was made to determine the effects of cruise Mach number and fuel cost on the optimum configuration characteristics and on economic performance. For each design, the wing geometry was optimized to give maximum return on investment at a particular fuel cost. Based on the results of the parametric study, a nominal reduced energy configuration was selected. Compared with existing transport designs, the reduced energy design has a higher aspect ratio wing with lower sweep, and cruises at a lower Mach number. It has about 30% less fuel consumption on a seat-mile basis.
Bayesian multivariate Poisson abundance models for T-cell receptor data.
Greene, Joshua; Birtwistle, Marc R; Ignatowicz, Leszek; Rempala, Grzegorz A
2013-06-07
A major feature of an adaptive immune system is its ability to generate B- and T-cell clones capable of recognizing and neutralizing specific antigens. These clones recognize antigens with the help of surface molecules, called antigen receptors, acquired individually during the clonal development process. In order to ensure a response to a broad range of antigens, the number of different receptor molecules is extremely large, resulting in a huge clonal diversity of both B- and T-cell receptor populations and making their experimental comparisons statistically challenging. To facilitate such comparisons, we propose a flexible parametric model of multivariate count data and illustrate its use in a simultaneous analysis of multiple antigen receptor populations derived from mammalian T-cells. The model relies on a representation of the observed receptor counts as a multivariate Poisson abundance mixture (mPAM). A Bayesian parameter fitting procedure is proposed, based on the complete posterior likelihood, rather than the conditional one used typically in similar settings. The new procedure is shown to be considerably more efficient than its conditional counterpart (as measured by the Fisher information) in the regions of mPAM parameter space relevant to modeling T-cell data.
Network structure of multivariate time series.
Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito
2015-10-21
Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exist, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows information on a high-dimensional dynamical system to be extracted through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow one to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
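The flavor of such a series-to-network mapping can be illustrated with the horizontal visibility graph, one common construction in this literature: each component of the multivariate series becomes one layer of a multiplex network. The sketch below (coupled logistic maps with invented parameters) is an assumption about the general approach, not the authors' exact algorithm.

```python
import numpy as np

def horizontal_visibility_edges(series):
    """Edges of the horizontal visibility graph of a 1-D series: i and j are
    linked if every value strictly between them is lower than both endpoints."""
    edges = []
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                edges.append((i, j))
            if series[j] >= series[i]:
                break  # a node at least as tall as i blocks everything beyond j
    return edges

# Multivariate series -> multiplex network: one visibility layer per component.
T = 300
x = np.empty((T, 2))
x[0] = 0.4, 0.3
f = lambda u: 4 * u * (1 - u)                           # chaotic logistic map
for t in range(T - 1):
    x[t + 1, 0] = f(x[t, 0])
    x[t + 1, 1] = 0.7 * f(x[t, 1]) + 0.3 * f(x[t, 0])   # one-way coupling

for layer in range(2):
    edges = horizontal_visibility_edges(x[:, layer])
    print(f"layer {layer}: mean degree {2 * len(edges) / T:.2f}")
```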
Air Brayton Solar Receiver, phase 1
NASA Technical Reports Server (NTRS)
Zimmerman, D. K.
1979-01-01
A six-month analysis and conceptual design study of an open-cycle Air Brayton Solar Receiver (ABSR) for use on a tracking, parabolic solar concentrator is discussed. The ABSR, which includes a buffer storage system, is designed to provide inlet air to a power conversion unit. Parametric analyses, conceptual design, interface requirements, and production cost estimates are described. The design features were optimized to yield a zero-maintenance, low-cost, high-efficiency concept that will provide a 30-year operational life.
Optimization of space manufacturing systems
NASA Technical Reports Server (NTRS)
Akin, D. L.
1979-01-01
Four separate analyses are detailed: transportation to low earth orbit, orbit-to-orbit optimization, parametric analysis of SPS logistics based on earth and lunar source locations, and an overall program option optimization implemented with linear programming. It is found that smaller vehicles are favored for earth launch, with the current Space Shuttle being right at optimum payload size. Fully reusable launch vehicles represent a savings of 50% over the Space Shuttle; increased reliability with less maintenance could further double the savings. An optimization of orbit-to-orbit propulsion systems using lunar oxygen for propellants shows that ion propulsion is preferable by a 3:1 cost margin over a mass driver reaction engine at optimum values; however, ion engines cannot yet operate in the lower exhaust velocity range where the optimum lies, and total program costs between the two systems are ambiguous. Heavier payloads favor the use of a MDRE. A parametric model of a space manufacturing facility is proposed, and used to analyze recurring costs, total costs, and net present value discounted cash flows. Parameters studied include productivity, effects of discounting, materials source tradeoffs, economic viability of closed-cycle habitats, and effects of varying degrees of nonterrestrial SPS materials needed from earth. Finally, candidate optimal scenarios are chosen and implemented in a linear program with external constraints, to arrive at an optimum blend of SPS production strategies that maximizes returns.
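A toy version of the final linear-programming step might look like the following; every cost, power, and launch-capacity coefficient is invented purely to show the structure of the optimization.

```python
from scipy.optimize import linprog

# Choose a mix of three SPS production strategies to minimize cost subject to a
# delivered-power floor and a launch-capacity ceiling (all coefficients invented).
cost = [5.0, 4.2, 6.1]                    # $B per unit of each strategy
A_ub = [[-10, -9, -12],                   # minus GW delivered per unit (>= 50 GW total)
        [3, 5, 2]]                        # launch mass per unit (<= 20 capacity units)
b_ub = [-50, 20]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print("optimal mix:", res.x, "total cost ($B):", res.fun)
```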
Personality traits of a group of young adults from different family structures.
Du Toit, J; Nel, E M; Steel, H R
1992-07-01
The impact of parental divorce and remarriage and young adults' gender on second-order personality traits, such as extraversion, anxiety, tough poise and independence, was examined. The responses of 227 young adults on the Sixteen Personality Factor Questionnaire (16PF; Cattell, Eber, & Tatsuoka, 1970) were subjected to a parametric multivariate analysis of variance. Results revealed significant differences between the anxiety scores of the young men and women as well as between those of the three different family-structure groups, but divorce and remarriage was not associated with either positive or negative personality development in this sample.
R-parametrization and its role in classification of linear multivariable feedback systems
NASA Technical Reports Server (NTRS)
Chen, Robert T. N.
1988-01-01
A classification of all the compensators that stabilize a given general plant in a linear, time-invariant, multi-input, multi-output feedback system is developed. This classification, along with the associated necessary and sufficient conditions for stability of the feedback system, is achieved through the introduction of a new parameterization, referred to as R-parameterization, which is a dual of the familiar Q-parameterization. The classification is related to the stability conditions of the compensators and the plant by themselves, and the necessary and sufficient conditions are based on the stability of Q and R themselves.
Melchior, Maria; Touchette, Évelyne; Prokofyeva, Elena; Chollet, Aude; Fombonne, Eric; Elidemir, Gulizar; Galéra, Cédric
2014-01-01
Background: Common negative events can precipitate the onset of internalizing symptoms. We studied whether their occurrence in childhood is associated with mental health trajectories over the course of development. Methods: Using data from the TEMPO study, a French community-based cohort study of youths, we studied the association between negative events in 1991 (when participants were aged 4–16 years) and internalizing symptoms, assessed by the ASEBA family of instruments in 1991, 1999, and 2009 (n = 1503). Participants' trajectories of internalizing symptoms were estimated with semi-parametric regression methods (PROC TRAJ). Data were analyzed using multinomial regression models controlled for participants' sex, age, parental family status, socio-economic position, and parental history of depression. Results: Negative childhood events were associated with an increased likelihood of concurrent internalizing symptoms which sometimes persisted into adulthood (multivariate ORs associated with ≥3 negative events, respectively: high and decreasing internalizing symptoms: 5.54, 95% CI: 3.20–9.58; persistently high internalizing symptoms: 8.94, 95% CI: 2.82–28.31). Specific negative events most strongly associated with youths' persistent internalizing symptoms included: school difficulties (multivariate OR: 5.31, 95% CI: 2.24–12.59), parental stress (multivariate OR: 4.69, 95% CI: 2.02–10.87), serious illness/health problems (multivariate OR: 4.13, 95% CI: 1.76–9.70), and social isolation (multivariate OR: 2.24, 95% CI: 1.00–5.08). Conclusions: Common negative events can contribute to the onset of children's lasting psychological difficulties. PMID:25485875
Studies on the Parametric Effects of Plasma Arc Welding of 2205 Duplex Stainless Steel
NASA Astrophysics Data System (ADS)
Selva Bharathi, R.; Siva Shanmugam, N.; Murali Kannan, R.; Arungalai Vendan, S.
2018-03-01
This research study attempts to create an optimized parametric window by employing the Taguchi algorithm for Plasma Arc Welding (PAW) of 2 mm thick 2205 duplex stainless steel. The parameters considered for experimentation and optimization are the welding current, welding speed, and pilot arc length. The experimentation involves varying these parameters over 60-70 A, 250-300 mm/min, and 1-2 mm, respectively, and recording the resulting depth of penetration and bead width. Design of experiments is used for the experimental trials. Back-propagation neural network, genetic algorithm, and Taguchi techniques are used for predicting the bead width and depth of penetration, and the predictions are validated against experimentally achieved results, with which they were in good agreement. Additionally, micro-structural characterizations are carried out to examine the weld quality. The extrapolation of these optimized parametric values yields enhanced weld strength with cost and time reduction.
NASA Astrophysics Data System (ADS)
Alfieri, Luisa
2015-12-01
Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate on sensitive loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid with static converters, which leads to a reduction of the PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the standard International Electrotechnical Commission (IEC) method, some parametric methods (such as Prony and Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.
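For readers unfamiliar with ESPRIT, a compact subspace implementation is sketched below on a synthetic distorted waveform (a 50 Hz fundamental plus a converter-like 5th harmonic); the sampling rate, amplitudes, and noise level are assumptions, not values from the paper.

```python
import numpy as np

def esprit_freqs(x, n_exp, m=50):
    """Estimate complex-exponential frequencies (cycles/sample) in x via ESPRIT."""
    N = len(x)
    H = np.array([x[i:i + m] for i in range(N - m + 1)]).T  # m x (N-m+1) snapshots
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :n_exp]                                       # signal subspace
    # Rotational invariance: the subspace shifted by one sample differs by Phi,
    # whose eigenvalues are exp(j*omega) for each component frequency omega
    phi = np.linalg.pinv(Us[:-1]) @ Us[1:]
    return np.angle(np.linalg.eigvals(phi)) / (2 * np.pi)

fs = 3200.0                                   # Hz, assumed sampling rate
t = np.arange(1024) / fs
x = (np.sin(2 * np.pi * 50 * t)               # fundamental
     + 0.08 * np.sin(2 * np.pi * 250 * t)     # converter-like 5th harmonic
     + 0.01 * np.random.default_rng(3).normal(size=t.size))
# Each real tone contributes a +/- frequency pair, hence n_exp = 4.
print(np.sort(np.abs(esprit_freqs(x, n_exp=4))) * fs)  # ~ [50, 50, 250, 250]
```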
Organizing Space Shuttle parametric data for maintainability
NASA Technical Reports Server (NTRS)
Angier, R. C.
1983-01-01
A model of organization and management of Space Shuttle data is proposed. Shuttle avionics software is parametrically altered by a reconfiguration process for each flight. As the flight rate approaches an operational level, current methods of data management would become increasingly complex. An alternative method is introduced, using modularized standard data, and its implications for data collection, integration, validation, and reconfiguration processes are explored. Information modules are cataloged for later use, and may be combined in several levels for maintenance. For each flight, information modules can then be selected from the catalog at a high level. These concepts take advantage of the reusability of Space Shuttle information to reduce the cost of reconfiguration as flight experience increases.
NASA Technical Reports Server (NTRS)
Thomas, J. M.; Hawk, J. D.
1975-01-01
A generalized concept for cost-effective structural design is introduced. It is assumed that decisions affecting the cost effectiveness of aerospace structures fall into three basic categories: design, verification, and operation. Within these basic categories, certain decisions concerning items such as design configuration, safety factors, testing methods, and operational constraints are to be made. All or some of the variables affecting these decisions may be treated probabilistically. Bayesian statistical decision theory is used as the tool for determining the cost optimum decisions. A special case of the general problem is derived herein, and some very useful parametric curves are developed and applied to several sample structures.
Experiment in multiple-criteria energy policy analysis
NASA Astrophysics Data System (ADS)
Ho, J. K.
1980-07-01
An international panel of energy analysts participated in an experiment to use HOPE (holistic preference evaluation): an interactive parametric linear programming method for multiple criteria optimization. The criteria of cost, environmental effect, crude oil, and nuclear fuel were considered, according to BESOM: an energy model for the US in the year 2000.
Coal-Fired Boilers at Navy Bases, Navy Energy Guidance Study, Phase II and III.
1979-05-01
Conceptual design and parametric cost studies of steam and power generation systems using coal-fired stoker boilers and stack gas scrubbers in several sizes were performed. Central plants containing four equal-sized boilers and central flue gas desulfurization facilities were shown to be less...
DOT National Transportation Integrated Search
1975-03-01
A parametric variation of demand density was used to compare the service level and cost of two alternative systems for providing low density feeder service. Supply models for fixed route and flexible route service were developed and applied to determine ra...
Modeling absolute differences in life expectancy with a censored skew-normal regression approach
Clough-Gorr, Kerri; Zwahlen, Marcel
2015-01-01
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use the skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544
A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins
Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.
2013-01-01
This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186
Commercial launch systems: A risky investment?
NASA Astrophysics Data System (ADS)
Dupnick, Edwin; Skratt, John
1996-03-01
A myriad of evolutionary paths connect the current state of government-dominated space launch operations to true commercial access to space. Every potential path requires the investment of private capital sufficient to fund the commercial venture with a perceived risk/return ratio acceptable to the investors. What is the private sector willing to invest? Does government participation reduce financial risk? How viable is a commercial launch system without government participation and support? We examine the interplay between various forms of government participation in commercial launch system development, alternative launch system designs, life cycle cost estimates, and typical industry risk aversion levels. The boundaries of this n-dimensional envelope are examined with an ECON-developed business financial model which provides for the parametric assessment and interaction of SSTO design variables (including various operational scenarios) with financial variables (including debt/equity assumptions and commercial enterprise burden rates on various functions). We overlay this structure with observations from previous ECON research which characterize financial risk aversion levels for selected industrial sectors in terms of acceptable initial lump-sum investments, cumulative investments, probability of failure, payback periods, and ROI. The financial model allows the construction of parametric tradeoffs based on ranges of variables which can be said to actually encompass the "true" cost of operations and determine what level of "true" costs can be tolerated by private capitalization.
Wilson, Iain; Paul Barrett, Michael; Sinha, Ashish; Chan, Shirley
2014-11-01
Elderly patients are often judged to be fit for emergency surgery based on age alone. This study identified risk factors predictive of in-hospital mortality amongst octogenarians undergoing emergency general surgery. A retrospective review of octogenarians undergoing emergency general surgery over 3 years was performed. Parametric survival analysis using a Cox multivariate regression model was used to identify risk factors predictive of in-hospital mortality. Hazard ratios (HR) and corresponding 95% confidence intervals were calculated. Seventy-three patients with a median age of 84 years were identified. Twenty-eight (38%) patients died post-operatively. Multivariate analysis identified ASA grade (ASA 5 HR 23.4, 95% CI 2.38-230, p = 0.007) and chronic obstructive pulmonary disease (COPD) (HR 3.35, 95% CI 1.15-9.69, p = 0.026) to be the only significant predictors of in-hospital mortality. Identification of high-risk surgical patients should be based on physiological fitness for surgery rather than chronological age.
Multivariate decoding of brain images using ordinal regression.
Doyle, O M; Ashburner, J; Zelaya, F O; Williams, S C R; Mehta, M A; Marquand, A F
2013-11-01
Neuroimaging data are increasingly being used to predict potential outcomes or groupings, such as clinical severity, drug dose response, and transitional illness states. In these examples, the variable (target) we want to predict is ordinal in nature. Conventional classification schemes assume that the targets are nominal and hence ignore their ranked nature, whereas parametric and/or non-parametric regression models enforce a metric notion of distance between classes. Here, we propose a novel, alternative multivariate approach that overcomes these limitations: whole-brain probabilistic ordinal regression using a Gaussian process framework. We applied this technique to two data sets of pharmacological neuroimaging data from healthy volunteers. The first study was designed to investigate the effect of ketamine on brain activity and its subsequent modulation with two compounds, lamotrigine and risperidone. The second study investigates the effect of scopolamine on cerebral blood flow and its modulation using donepezil. We compared ordinal regression to multi-class classification schemes and metric regression. Considering the modulation of ketamine with lamotrigine, we found that ordinal regression significantly outperformed multi-class classification and metric regression in terms of accuracy and mean absolute error. However, for risperidone ordinal regression significantly outperformed metric regression but performed similarly to multi-class classification both in terms of accuracy and mean absolute error. For the scopolamine data set, ordinal regression was found to outperform both multi-class and metric regression techniques considering the regional cerebral blood flow in the anterior cingulate cortex. Ordinal regression was thus the only method that performed well in all cases. Our results indicate the potential of an ordinal regression approach for neuroimaging data while providing a fully probabilistic framework with elegant approaches for model selection.
NASA Astrophysics Data System (ADS)
Scradeanu, D.; Pagnejer, M.
2012-04-01
The purpose of this work is to evaluate the uncertainty of the hydrodynamic model for a multilayered geological structure, a potential trap for carbon dioxide storage. The hydrodynamic model is based on a conceptual model of the multilayered hydrostructure with three components: 1) a spatial model; 2) a parametric model; and 3) an energy model. The data necessary to achieve the three components of the conceptual model are obtained from 240 boreholes explored by geophysical logging and seismic investigation, for the first two components, and an experimental water injection test for the last one. The hydrodynamic model is a finite difference numerical model based on a 3D stratigraphic model with nine stratigraphic units (Badenian and Oligocene) and a 3D multiparameter model (porosity, permeability, hydraulic conductivity, storage coefficient, leakage, etc.). The uncertainty of the two 3D models was evaluated using multivariate geostatistical tools: a) the cross-semivariogram for structural analysis, especially the study of anisotropy, and b) cokriging to reduce estimation variances in the specific situation where there is a cross-correlation between a variable and one or more variables that are undersampled. Important differences were identified between univariate and bivariate anisotropy. The minimised uncertainty of the parametric model (by cokriging) was transferred to the hydrodynamic model. The uncertainty distribution of the pressures generated by the water injection test was additionally filtered by the sensitivity of the numerical model. The obtained relative errors of the pressure distribution in the hydrodynamic model are 15-20%. The scientific research was performed in the frame of the European FP7 project "A multiple space and time scale approach for the quantification of deep saline formations for CO2 storage (MUSTANG)".
SHIPS: Spectral Hierarchical Clustering for the Inference of Population Structure in Genetic Studies
Bouaziz, Matthieu; Paccard, Caroline; Guedj, Mickael; Ambroise, Christophe
2012-01-01
Inferring the structure of populations has many applications for genetic research. In addition to providing information for evolutionary studies, it can be used to account for the bias induced by population stratification in association studies. To this end, many algorithms have been proposed to cluster individuals into genetically homogeneous sub-populations. The parametric algorithms, such as Structure, are very popular, but their underlying complexity and their high computational cost led to the development of faster parametric alternatives such as Admixture. Alternatives to these methods are the non-parametric approaches. Among this category, AWclust has proven efficient but fails to properly identify population structure for complex datasets. We present in this article a new clustering algorithm called Spectral Hierarchical clustering for the Inference of Population Structure (SHIPS), based on a divisive hierarchical clustering strategy, allowing a progressive investigation of population structure. This method takes genetic data as input to cluster individuals into homogeneous sub-populations and, with the use of the gap statistic, estimates the optimal number of such sub-populations. SHIPS was applied to a set of simulated discrete and admixed datasets and to real SNP datasets from the HapMap and Pan-Asian SNP consortia. The programs Structure, Admixture, AWclust and PCAclust were also investigated in a comparison study. SHIPS and the parametric approach Structure were the most accurate when applied to simulated datasets, both in terms of individual assignments and estimation of the correct number of clusters. The analysis of the results on the real datasets highlighted that the clusterings of SHIPS were the most consistent with the population labels or those produced by the Admixture program. The performance of SHIPS when applied to SNP data, along with its relatively low computational cost and its ease of use, makes this method a promising solution to infer fine-scale genetic patterns. PMID:23077494
NASA Astrophysics Data System (ADS)
Kamal Chowdhury, AFM; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony; Parana Manage, Nadeeka
2016-04-01
Stochastic simulation of rainfall is often required in the simulation of streamflow and reservoir levels for water security assessment. As reservoir water levels generally vary on monthly to multi-year timescales, it is important that these rainfall series accurately simulate the multi-year variability. However, the underestimation of multi-year variability is a well-known issue in daily rainfall simulation. Focusing on this issue, we developed a hierarchical Markov Chain (MC) model in a traditional two-part MC-Gamma distribution modelling structure, but with a new parameterization technique. We used the two parameters of a first-order MC process (transition probabilities of wet-to-wet and dry-to-dry days) to simulate wet and dry days, and two parameters of the Gamma distribution (mean and standard deviation of wet-day rainfall) to simulate wet-day rainfall depths. We found that the use of deterministic Gamma parameter values results in underestimation of the multi-year variability of rainfall depths. Therefore, we calculated the Gamma parameters for each month of each year from the observed data. Then, for each month, we fitted a multivariate normal distribution to the calculated Gamma parameter values. In the model, we stochastically sample these two Gamma parameters from the multivariate normal distribution for each month of each year and use them to generate rainfall depths on wet days using the Gamma distribution. In another study, Mehrotra and Sharma (2007) proposed a semi-parametric Markov model. They also used a first-order MC process for rainfall occurrence simulation, but the MC parameters were modified by an additional factor to incorporate the multi-year variability. Generally, the additional factor is analytically derived from the rainfall over a pre-specified past period (e.g. the last 30, 180, or 360 days). They used a non-parametric kernel density process to simulate the wet-day rainfall depths. In this study, we have compared the performance of our hierarchical MC model with the semi-parametric model in preserving rainfall variability at daily, monthly, and multi-year scales. To calibrate the parameters of both models and assess their ability to preserve observed statistics, we used ground-based data from 15 raingauge stations around Australia, which cover a wide range of climate zones, including coastal, monsoonal, and arid climate characteristics. In preliminary results, both models show comparable performance in preserving the multi-year variability of rainfall depth and occurrence. However, the semi-parametric model shows a tendency to overestimate the mean rainfall depth, while our model shows a tendency to overestimate the number of wet days. The relative merits of both models for hydrological simulation will be discussed further in the presentation.
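A minimal sketch of the hierarchical step described follows, with illustrative January parameters rather than values fitted to the Australian raingauge data: the monthly Gamma mean and standard deviation are drawn from a multivariate normal before the daily simulation runs.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_month(p_ww, p_dd, mu_sigma_mvn, n_days=30):
    """One month of daily rainfall from a two-part Markov chain-Gamma model.
    Gamma mean/sd are drawn once per month from a fitted multivariate normal
    (the hierarchical step that restores multi-year variability)."""
    mu, sd = rng.multivariate_normal(*mu_sigma_mvn)
    mu, sd = max(mu, 0.1), max(sd, 0.1)          # guard against negative draws
    shape, scale = (mu / sd) ** 2, sd ** 2 / mu  # moment-matched Gamma parameters
    wet = rng.random() < 0.5
    rain = np.zeros(n_days)
    for d in range(n_days):
        wet = rng.random() < (p_ww if wet else 1 - p_dd)
        if wet:
            rain[d] = rng.gamma(shape, scale)
    return rain

# Illustrative parameters: mean/sd of wet-day rainfall (mm) and their
# year-to-year covariance, assumed here rather than estimated from data.
mvn = ([8.0, 10.0], [[4.0, 3.0], [3.0, 6.0]])
totals = np.array([simulate_month(p_ww=0.6, p_dd=0.8, mu_sigma_mvn=mvn).sum()
                   for _ in range(1000)])
print(f"monthly total: mean {totals.mean():.1f} mm, sd {totals.std():.1f} mm")
```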
NASA Technical Reports Server (NTRS)
Harney, A. G.; Raphael, L.; Warren, S.; Yakura, J. K.
1972-01-01
A systematic and standardized procedure is presented for estimating the life cycle costs of solid rocket motor (SRM) booster configurations. The model consists of clearly defined cost categories and appropriate cost equations in which cost is related to program and hardware parameters. Cost estimating relationships are generally based on analogous experience. In this model the experience drawn on is from estimates prepared by the study contractors. Contractors' estimates are derived by means of engineering estimates for some predetermined level of detail of the SRM hardware and program functions of the system life cycle. This method is frequently referred to as bottom-up. A parametric cost analysis is a useful technique when rapid estimates are required. This is particularly true during the planning stages of a system, when hardware designs and program definition are conceptual and constantly changing as the selection process, which includes cost comparisons or trade-offs, is performed. The use of cost estimating relationships also facilitates the performance of cost sensitivity studies in which relative and comparable cost comparisons are significant.
NASA Technical Reports Server (NTRS)
1973-01-01
The general goal of this task, STDN Antenna and Preamplifier G/T Study, was to determine cost-effective combinations of antennas and preamplifiers for several sets of conditions for frequency, antenna elevation angle, and rain. The output of the study includes design curves and tables which indicate the best choice of antenna size and preamplifier type to provide a given G/T performance. The report indicates how to evaluate the cost effectiveness of proposed improvements to a given station. Certain parametric variations are presented to emphasize the improvement available by reducing RF losses and improving the antenna feed.
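The underlying trade reduces to a standard figure-of-merit calculation; the helper below, with an assumed aperture efficiency and system noise temperature, shows how antenna size and preamplifier choice combine into G/T. The numbers are illustrative, not values from the study.

```python
import math

def g_over_t_db(diameter_m, freq_ghz, efficiency, t_sys_k):
    """Figure of merit G/T in dB/K for a parabolic antenna plus receiver chain."""
    wavelength = 0.3 / freq_ghz                        # metres (c ~ 3e8 m/s)
    gain = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain) - 10 * math.log10(t_sys_k)

# e.g. a 9 m dish at S-band with a cooled preamplifier (numbers are illustrative);
# a lower-noise preamp reduces t_sys_k and raises G/T directly
print(f"{g_over_t_db(9.0, 2.25, 0.55, 120.0):.1f} dB/K")
```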
Predicting Market Impact Costs Using Nonparametric Machine Learning Models
Park, Saerom; Lee, Jaewook; Son, Youngdoo
2016-01-01
Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed the state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single transaction data of the US stock market from Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving prediction performance. PMID:26926235
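The shape of such a comparison can be sketched with scikit-learn on synthetic order data. The linear baseline below merely stands in for the parametric benchmark (the actual I-star model is nonlinear), and all variables and coefficients are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 400
size = rng.uniform(0.001, 0.1, n)        # order size / daily volume
vol = rng.uniform(0.01, 0.05, n)         # daily volatility
spread = rng.uniform(1, 10, n)           # bid-ask spread, bps
X = np.column_stack([size, vol, spread])
impact = 30 * vol * np.sqrt(size) + 0.3 * spread + rng.normal(0, 1, n)  # bps

train, test = slice(0, 300), slice(300, n)
models = {
    "linear baseline": LinearRegression(),
    "support vector regression": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "Gaussian process": make_pipeline(StandardScaler(),
                                      GaussianProcessRegressor(alpha=1.0)),
}
for name, model in models.items():
    model.fit(X[train], impact[train])
    rmse = np.sqrt(np.mean((model.predict(X[test]) - impact[test]) ** 2))
    print(f"{name}: RMSE {rmse:.2f} bps")
```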
Team X Report #1401: Exoplanet Coronagraph STDT Study 2013-06
NASA Technical Reports Server (NTRS)
Warfield, Keith
2013-01-01
This document is intended to stimulate discussion of the topic described. All technical and cost analyses are preliminary. This document is not a commitment to work, but is a precursor to a formal proposal if it generates sufficient mutual interest. The data contained in this document may not be modified in any way. Cost estimates described or summarized in this document were generated as part of a preliminary, first-order cost class identification as part of an early trade space study, are based on JPL-internal parametric cost modeling, assume a JPL in-house build, and do not constitute a commitment on the part of JPL or Caltech. JPL and Team X add cost reserves for development and operations. Unadjusted estimate totals and cost reserve allocations would be revised as needed in future more-detailed studies as appropriate for the specific cost-risks for a given mission concept.
Linking the Weather Generator with Regional Climate Model
NASA Astrophysics Data System (ADS)
Dubrovsky, Martin; Farda, Ales; Skalak, Petr; Huth, Radan
2013-04-01
One of the downscaling approaches, which transform the raw outputs from the climate models (GCMs or RCMs) into data with more realistic structure, is based on linking the stochastic weather generator with the climate model output. The present contribution, in which the parametric daily surface weather generator (WG) M&Rfi is linked to the RCM output, follows two aims: (1) Validation of the new simulations of the present climate (1961-1990) made by the ALADIN-Climate Regional Climate Model at 25 km resolution. The WG parameters are derived from the RCM-simulated surface weather series and compared to those derived from weather series observed in 125 Czech meteorological stations. The set of WG parameters will include statistics of the surface temperature and precipitation series (including probability of wet day occurrence). (2) Presenting a methodology for linking the WG with RCM output. This methodology, which is based on merging information from observations and RCM, may be interpreted as a downscaling procedure, whose product is a gridded WG capable of producing realistic synthetic multivariate weather series for weather-ungauged locations. In this procedure, WG is calibrated with RCM-simulated multi-variate weather series in the first step, and the grid specific WG parameters are then de-biased by spatially interpolated correction factors based on comparison of WG parameters calibrated with gridded RCM weather series and spatially scarcer observations. The quality of the weather series produced by the resultant gridded WG will be assessed in terms of selected climatic characteristics (focusing on characteristics related to variability and extremes of surface temperature and precipitation). Acknowledgements: The present experiment is made within the frame of projects ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).
When Unified Teacher Pay Scales Meet Differential Alternative Returns
ERIC Educational Resources Information Center
Walsh, Patrick
2014-01-01
This paper quantifies the extent to which unified teacher pay scales and differential alternatives produce opportunity costs that are asymmetric in math and verbal skills. Data from the Baccalaureate and Beyond 1997 and 2003 follow-ups are used to estimate a fully parametric, selection-corrected wage equation for nonteachers, which is then used to…
Manufacturing information system
NASA Astrophysics Data System (ADS)
Allen, D. K.; Smith, P. R.; Smart, M. J.
1983-12-01
The size and cost of manufacturing equipment have made it extremely difficult to perform realistic modeling and simulation of the manufacturing process in university research laboratories. Likewise, the size and cost factors, coupled with many uncontrolled variables of the production situation, have made it difficult to perform adequate manufacturing research in the industrial setting. Only the largest companies can afford manufacturing research laboratories; research results are often held proprietary and seldom find their way into the university classroom to aid in the education and training of new manufacturing engineers. It is the purpose of this research to continue the development of miniature prototype equipment suitable for use in an integrated CAD/CAM laboratory. The equipment being developed is capable of actually performing production operations (e.g. drilling, milling, turning, punching, etc.) on metallic and non-metallic workpieces. The integrated CAD/CAM Mini-Lab integrates high-resolution computer graphics, parametric design, parametric N/C parts programming, CNC machine control, and automated storage and retrieval with robotic materials handling. The availability of miniature CAD/CAM laboratory equipment will provide the basis for intensive laboratory research on manufacturing information systems.
Reese, Jared C; Karsy, Michael; Twitchell, Spencer; Bisson, Erica F
2018-04-11
Examining the costs of single- and multilevel anterior cervical discectomy and fusion (ACDF) is important for identifying cost drivers and potentially reducing patient costs. A novel tool at our institution provides direct costs for the identification of potential drivers. To assess perioperative healthcare costs for patients undergoing an ACDF. Patients who underwent an elective ACDF between July 2011 and January 2017 were identified retrospectively. Factors adding to total cost were placed into subcategories to identify the most significant contributors, and potential drivers of total cost were evaluated using a multivariable linear regression model. A total of 465 patients (mean age 53 ± 12 yr; 54% male) met the inclusion criteria for this study. The distribution of total cost was broken down into supplies/implants (39%), facility utilization (37%), physician fees (14%), pharmacy (7%), imaging (2%), and laboratory studies (1%). A multivariable linear regression analysis showed that total cost was significantly affected by the number of levels operated on, operating room time, and length of stay. Costs also showed a narrow distribution with few outliers and did not vary significantly over time. These results suggest that facility utilization and supplies/implants are the predominant cost contributors, accounting for 76% of the total cost of ACDF procedures. Efforts at lowering costs within these categories should make the most impact on providing more cost-effective care.
NASA Astrophysics Data System (ADS)
Flores, Robert Joseph
Distributed generation can provide many benefits over traditional central generation such as increased reliability and efficiency while reducing emissions. Despite these potential benefits, distributed generation is generally not purchased unless it reduces energy costs. Economic dispatch strategies can be designed such that distributed generation technologies reduce overall facility energy costs. In this thesis, a microturbine generator is dispatched using different economic control strategies, reducing the cost of energy to the facility. Several industrial and commercial facilities are simulated using acquired electrical, heating, and cooling load data. Industrial and commercial utility rate structures are modeled after Southern California Edison and Southern California Gas Company tariffs and used to find energy costs for the simulated buildings and corresponding microturbine dispatch. Using these control strategies, building models, and utility rate models, a parametric study examining various generator characteristics is performed. An economic assessment of the distributed generation is then performed for both the microturbine generator and parametric study. Without the ability to export electricity to the grid, the economic value of distributed generation is limited to reducing the individual costs that make up the cost of energy for a building. Any economic dispatch strategy must be built to reduce these individual costs. While the ability of distributed generation to reduce cost depends on factors such as electrical efficiency and operations and maintenance cost, the building energy demand being serviced has a strong effect on cost reduction. Buildings with low load factors can accept distributed generation with higher operating costs (low electrical efficiency and/or high operations and maintenance cost) due to the value of demand reduction. As load factor increases, lower operating cost generators are desired due to a larger portion of the building load being met in an effort to reduce demand. In addition, buildings with large thermal demand have access to the least expensive natural gas, lowering the cost of operating distributed generation. Recovery of exhaust heat from DG reduces cost only if the building's thermal demand coincides with the electrical demand. Capacity limits exist where annual savings from operation of distributed generation decrease if further generation is installed. For low operating cost generators, the approximate limit is the average building load. This limit decreases as operating costs increase. In addition, a high capital cost of distributed generation can be accepted if generator operating costs are low. As generator operating costs increase, capital cost must decrease if a positive economic performance is desired.
Estimating the Life Cycle Cost of Space Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
A space system's Life Cycle Cost (LCC) includes design and development, launch and emplacement, and operations and maintenance. Each of these cost factors is usually estimated separately. NASA uses three different parametric models for the design and development cost of crewed space systems; the commercial PRICE-H space hardware cost model, the NASA-Air Force Cost Model (NAFCOM), and the Advanced Missions Cost Model (AMCM). System mass is an important parameter in all three models. System mass also determines the launch and emplacement cost, which directly depends on the cost per kilogram to launch mass to Low Earth Orbit (LEO). The launch and emplacement cost is the cost to launch to LEO the system itself and also the rockets, propellant, and lander needed to emplace it. The ratio of the total launch mass to payload mass depends on the mission scenario and destination. The operations and maintenance costs include any material and spares provided, the ground control crew, and sustaining engineering. The Mission Operations Cost Model (MOCM) estimates these costs as a percentage of the system development cost per year.
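Putting the three factors together, a hedged sketch of a top-level LCC roll-up might look like the following. This is not any of the named models (whose cost estimating relationships are not reproduced here); every input value is an illustrative assumption.

```python
def life_cycle_cost(dev_cost, sys_mass_kg, launch_cost_per_kg,
                    gear_ratio, ops_fraction_per_year, years):
    """Sum the three LCC factors described above (all inputs are assumptions).
    gear_ratio: total launch mass (system plus rockets, propellant, and lander)
    divided by payload mass, which depends on the mission destination."""
    emplacement = sys_mass_kg * gear_ratio * launch_cost_per_kg
    operations = ops_fraction_per_year * dev_cost * years   # MOCM-style percentage
    return dev_cost + emplacement + operations

# Illustrative: a 1000 kg system, $100M development, $5k/kg to LEO,
# gear ratio 4 for the chosen destination, 10 years of operations.
print(f"${life_cycle_cost(100e6, 1000, 5000, 4, 0.05, 10) / 1e6:.0f}M")
```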
Topics in the two-dimensional sampling and reconstruction of images. [in remote sensing
NASA Technical Reports Server (NTRS)
Schowengerdt, R.; Gray, S.; Park, S. K.
1984-01-01
Mathematical analysis of image sampling and interpolative reconstruction is summarized and extended to two dimensions for application to data acquired from satellite sensors such as the Thematic Mapper and SPOT. It is shown that sample-scene phase influences the reconstruction of sampled images, adds a considerable blur to the average system point spread function, and decreases the average system modulation transfer function. It is also determined that the parametric bicubic interpolator with alpha = -0.5 is more radiometrically accurate than the conventional bicubic interpolator with alpha = -1, at no additional cost. Finally, the parametric bicubic interpolator is found to be suitable for adaptive implementation by relating the alpha parameter to the local frequency content of an image.
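The parametric interpolator referred to is commonly realized as the cubic convolution (Keys) kernel with free parameter alpha; a one-dimensional sketch follows, bicubic being its separable two-dimensional application. The sample data are arbitrary.

```python
import numpy as np

def cubic_kernel(s, alpha=-0.5):
    """Parametric cubic convolution (Keys) kernel used in bicubic interpolation."""
    s = np.abs(s)
    out = np.zeros_like(s)
    inner = s <= 1
    outer = (s > 1) & (s < 2)
    out[inner] = (alpha + 2) * s[inner]**3 - (alpha + 3) * s[inner]**2 + 1
    out[outer] = alpha * (s[outer]**3 - 5 * s[outer]**2 + 8 * s[outer] - 4)
    return out

def interp1d_cubic(samples, x, alpha=-0.5):
    """Interpolate unit-spaced samples at positions x from the 4 nearest samples."""
    idx = np.floor(x).astype(int)
    result = np.zeros_like(x, dtype=float)
    for k in range(-1, 3):
        j = np.clip(idx + k, 0, len(samples) - 1)   # clamp at image borders
        result += samples[j] * cubic_kernel(x - (idx + k), alpha)
    return result

samples = np.sin(0.5 * np.arange(16))
print(interp1d_cubic(samples, np.array([4.25, 7.5]), alpha=-0.5))
```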
Near-term hybrid vehicle program, phase 1. Appendix D: Sensitivity analysis report
NASA Technical Reports Server (NTRS)
1979-01-01
Parametric analyses, using a hybrid vehicle synthesis and economics program (HYVELD) are described investigating the sensitivity of hybrid vehicle cost, fuel usage, utility, and marketability to changes in travel statistics, energy costs, vehicle lifetime and maintenance, owner use patterns, internal combustion engine (ICE) reference vehicle fuel economy, and drive-line component costs and type. The lowest initial cost of the hybrid vehicle would be $1200 to $1500 higher than that of the conventional vehicle. For nominal energy costs ($1.00/gal for gasoline and 4.2 cents/kWh for electricity), the ownership cost of the hybrid vehicle is projected to be 0.5 to 1.0 cents/mi less than the conventional ICE vehicle. To attain this ownership cost differential, the lifetime of the hybrid vehicle must be extended to 12 years and its maintenance cost reduced by 25 percent compared with the conventional vehicle. The ownership cost advantage of the hybrid vehicle increases rapidly as the price of fuel increases from $1 to $2/gal.
Le, Quang A; Bae, Yuna H; Kang, Jenny H
2016-10-01
The EMILIA trial demonstrated that trastuzumab emtansine (T-DM1) significantly increased the median progression-free and overall survival relative to combination therapy with lapatinib plus capecitabine (LC) in patients with HER2-positive advanced breast cancer (ABC) previously treated with trastuzumab and a taxane. We performed an economic analysis of T-DM1 as a second-line therapy compared to LC and monotherapy with capecitabine (C) from both the US payer and societal perspectives. We developed four possible Markov models for ABC to compare the projected lifetime costs and outcomes of T-DM1, LC, and C. Model transition probabilities were estimated from the EMILIA and EGF100151 clinical trials. Direct costs of the therapies, major adverse events, laboratory tests, and disease progression, indirect costs (productivity losses due to morbidity and mortality), and health utilities were obtained from published sources. The models used a 3% discount rate and are reported in 2015 US dollars. Probabilistic sensitivity analysis and model averaging were used to account for model parametric and structural uncertainty. When incorporating both parametric and structural uncertainty, the resulting incremental cost-effectiveness ratios (ICERs) comparing T-DM1 to LC and T-DM1 to C were $183,828 per quality-adjusted life year (QALY) and $126,001/QALY from the societal perspective, respectively. From the payer's perspective, the ICERs were $220,385/QALY (T-DM1 vs. LC) and $168,355/QALY (T-DM1 vs. C). From both the US payer and societal perspectives, T-DM1 is not cost-effective compared with the LC combination therapy at a willingness-to-pay threshold of $150,000/QALY. T-DM1 might have a better chance of being cost-effective compared to capecitabine monotherapy from the US societal perspective.
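The structure of such a Markov cost-effectiveness comparison can be sketched as below; every transition probability, cost, and utility weight is invented purely for illustration and does not reproduce the EMILIA-based models.

```python
import numpy as np

def markov_ce(p_matrix, costs, utilities, cycles=120, discount=0.03):
    """Discounted cost and QALYs for a 3-state cohort model
    (progression-free, progressed, dead), run in monthly cycles."""
    state = np.array([1.0, 0.0, 0.0])          # whole cohort starts progression-free
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        df = (1 + discount) ** (-(t / 12))     # annual discount rate, monthly cycles
        total_cost += df * state @ costs
        total_qaly += df * state @ utilities / 12   # utility weights accrue per year
        state = state @ p_matrix
    return total_cost, total_qaly

# Invented monthly transition probabilities, prices, and utilities.
P_new = np.array([[0.95, 0.04, 0.01], [0.0, 0.96, 0.04], [0.0, 0.0, 1.0]])
P_cmp = np.array([[0.93, 0.055, 0.015], [0.0, 0.955, 0.045], [0.0, 0.0, 1.0]])
utils = np.array([0.78, 0.65, 0.0])

c1, q1 = markov_ce(P_new, np.array([9000.0, 4000.0, 0.0]), utils)
c0, q0 = markov_ce(P_cmp, np.array([6000.0, 4000.0, 0.0]), utils)
print(f"ICER: ${(c1 - c0) / (q1 - q0):,.0f}/QALY")
```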
Parametric vs. non-parametric daily weather generator: validation and comparison
NASA Astrophysics Data System (ADS)
Dubrovsky, Martin
2016-04-01
As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. The stochastic weather generators are among the most favourite downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. To name their advantages, the generators may (i) produce arbitrarily long multivariate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run in various time steps and for multiple weather variables (the generators reproduce the correlations among variables), and (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a 1st-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of duration of hot/cold/dry/wet spells), and (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series from several European stations available from the ECA&D database.
Parametric evaluation of the cost effectiveness of Shuttle payload vibroacoustic test plans
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.; Keegan, W. B.; Young, J. P.
1978-01-01
Consideration is given to alternate vibroacoustic test plans for sortie and free flyer Shuttle payloads. Statistical decision models for nine test plans provide a viable method of evaluating the cost effectiveness of alternate vibroacoustic test plans and the associated test levels. The methodology is a major step toward the development of a useful tool for the quantitative tailoring of vibroacoustic test programs to sortie and free flyer payloads. A broader application of the methodology is now possible by the use of the OCTAVE computer code.
A parametric determination of transport aircraft price
NASA Technical Reports Server (NTRS)
Anderson, J. L.
1975-01-01
Cost per unit weight and other airframe and engine cost relations are given. Power equations representing these relations are presented for six airplane groups: general aircraft, turboprop transports, small jet transports, conventional jet transports, wide-body transports, supersonic transports, and for reciprocating, turboshaft, and turbothrust engines. Market prices calculated for a number of aircraft by use of the equations together with the aircraft characteristics are in reasonably good agreement with actual prices. Such price analyses are of value in the assessment of new aircraft devices and designs and potential research and development programs.
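A power equation of the kind described can be fitted by linear regression in log-log space; the weights and prices below are synthetic stand-ins, not data from the study.

```python
import numpy as np

# Fit a power equation price = a * W^b by linear regression in log-log space;
# W is airframe weight in kg and price is in $M (both invented for illustration).
weight = np.array([12e3, 35e3, 60e3, 110e3, 180e3, 250e3])
price = np.array([8.0, 22.0, 38.0, 65.0, 95.0, 130.0])

b, log_a = np.polyfit(np.log(weight), np.log(price), 1)   # slope, intercept
a = np.exp(log_a)
print(f"price ~ {a:.4f} * W^{b:.2f}")
print(f"predicted price for an 80 t airframe: {a * 80e3**b:.1f} $M")
```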
Research on AutoCAD secondary development and function expansion based on VBA technology
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Gu, Yehuan
2017-06-01
AutoCAD is the most widely used drawing tool among similar design drawing products. Producing different types of design drawings of the same product involves a great deal of repetitive, monotonous work, and the traditional manual approach to drawing in AutoCAD is inefficient, error-prone, and costly. To solve these problems, a parametric drawing system for hot-rolled I-beam (steel beam) cross-sections is designed using the VBA secondary development tool together with the Access database software for large-capacity data storage, and the functional extension of plane drawing and parametric drawing design is analyzed in this paper. Through this secondary development of AutoCAD functions, the drawing work is simplified and work efficiency is greatly improved. Introducing parametric design into an AutoCAD drawing system in this way supports the industrial mass production of standard products, such as hot-rolled I-beams, and the economic growth of related industries.
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction in the model output mean and variance by operating on the variances of model inputs. Unbiased and progressively unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost is free of input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure to achieve a targeted 50% reduction of the model output variance.
Reducing numerical costs for core wide nuclear reactor CFD simulations by the Coarse-Grid-CFD
NASA Astrophysics Data System (ADS)
Viellieber, Mathias; Class, Andreas G.
2013-11-01
Traditionally, complete nuclear reactor core simulations are performed with subchannel analysis codes that rely on experimental and empirical input. The Coarse-Grid-CFD (CGCFD) intends to replace the experimental or empirical input with CFD data. The reactor core consists of repetitive flow patterns, allowing the general approach of creating a parametrized model for one segment and composing many of these to obtain the entire reactor simulation. The method is based on a detailed, well-resolved CFD simulation of one representative segment. From this simulation we extract so-called parametrized volumetric forces which close an otherwise strongly under-resolved, coarsely meshed model of a complete reactor setup. While the formulation so far accounts for forces created internally in the fluid, others, e.g. obstruction and flow deviation through spacers and wire wraps, still need to be accounted for if the geometric details are not represented in the coarse mesh. These are modelled with an Anisotropic Porosity Formulation (APF). This work focuses on the application of the CGCFD to a complete reactor core setup and on accomplishing the parametrization of the volumetric forces.
The cost of doing business: cost structure of electronic immunization registries.
Fontanesi, John M; Flesher, Don S; De Guire, Michelle; Lieberthal, Allan; Holcomb, Kathy
2002-10-01
To predict the true cost of developing and maintaining an electronic immunization registry, and to set the framework for developing future cost-effectiveness and cost-benefit analyses. Primary data collected at three immunization registries located in California, accounting for 90 percent of all immunization records in registries in the state during the study period. A parametric cost analysis compared registry development and maintenance expenditures to registry performance requirements. Data were collected at each registry through interviews, reviews of expenditure records, technical accomplishments, development schedules, and immunization coverage rates. The cost of building immunization registries is predictable and independent of the hardware/software combination employed. The effort requires four man-years of technical effort or approximately $250,000 in 1998 dollars. Costs for maintaining a registry were approximately $5,100 per end user per three-year period. There is a predictable cost structure for both developing and maintaining immunization registries. The cost structure can be used as a framework for examining the cost-effectiveness and cost-benefits of registries. The greatest factor effecting improvement in coverage rates was ongoing, user-based administrative investment.
Bulsei, Julie; Darlington, Meryl; Durand-Zaleski, Isabelle; Azizi, Michel
2018-04-01
Whilst much uncertainty exists as to the efficacy of renal denervation (RDN), the positive results of the DENERHTN study in France confirmed the value of an economic evaluation to assess the efficiency of RDN and inform local decision makers about the costs and benefits of this intervention. The uncertainty surrounding both the outcomes and the costs can be described using health economic methods such as the non-parametric bootstrap. Internationally, numerous health economic studies using a cost-effectiveness model to assess the impact of RDN in terms of cost and effectiveness compared to antihypertensive medical treatment have been conducted. The DENERHTN cost-effectiveness study was the first health economic evaluation specifically designed to assess the cost-effectiveness of RDN using individual data. Using the DENERHTN results as an example, we provide here a summary of the principal methods used to perform a cost-effectiveness analysis.
NASA Astrophysics Data System (ADS)
Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald
2017-07-01
The radio sources within the most recent celestial reference frame (CRF) catalog, ICRF2, are represented by a single, time-invariant coordinate pair. The datum sources were chosen mainly according to certain statistical properties of their position time series. Yet such statistics are not applicable unconditionally, and they are also ambiguous. Moreover, ignoring systematics in the positions of the datum sources inevitably leads to a degradation of the quality of the frame and, therefore, also of derived quantities such as the Earth orientation parameters. One possible approach to overcome these deficiencies is to extend the parametrization of the source positions, similarly to what is done for station positions. We decided to use the multivariate adaptive regression splines (MARS) algorithm to parametrize the source coordinates. It allows a great deal of automation by combining recursive partitioning and spline fitting in an optimal way: the algorithm autonomously finds the ideal knot positions for the splines and, thus, the best number of polynomial pieces to fit the data. With that we can correct the ICRF2 a priori coordinates for our analysis and eliminate the systematics in the position estimates. This also allows us to introduce special handling sources into the datum definition, leading to on average 30% more sources in the datum. We find that not only can the celestial pole offsets (CPO) be improved by more than 10% due to the improved geometry, but the station positions, especially in the early years of VLBI, also benefit greatly.
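For intuition about the spline-with-automatic-knots idea, here is a minimal, hypothetical forward pass in the style of MARS on synthetic 1-D data; the real algorithm also builds interaction terms and adds a backward pruning step (e.g., via generalized cross-validation).

```python
# Stripped-down MARS-style forward pass: greedily add the hinge pair
# max(0, t - k), max(0, k - t) whose knot k most reduces the residual sum of
# squares. Data are synthetic, not ICRF2 source positions.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 300)
y = np.where(t < 0.4, 2.0 * t, 0.8 - 1.5 * (t - 0.4)) + rng.normal(0, 0.02, t.size)

def fit_sse(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return beta, float(r @ r)

X = np.ones((t.size, 1))                      # start from a constant model
for _ in range(3):                            # add up to 3 hinge pairs
    best = None
    for k in t[10:-10:5]:                     # candidate knots
        Xk = np.column_stack([X, np.maximum(0, t - k), np.maximum(0, k - t)])
        _, sse = fit_sse(Xk, y)
        if best is None or sse < best[0]:
            best = (sse, k, Xk)
    sse, knot, X = best
    print(f"added knot at t = {knot:.3f}, SSE = {sse:.4f}")
```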
NASA Technical Reports Server (NTRS)
Prakash, OM, II
1991-01-01
Three linear controllers are designed to regulate the end effector of the Space Shuttle Remote Manipulator System (SRMS) operating in Position Hold Mode. In this mode of operation, jet firings of the Orbiter can be treated as disturbances while the controller tries to keep the end effector stationary in an Orbiter-fixed reference frame. The three design techniques used are the Linear Quadratic Regulator (LQR), H2 optimization, and H-infinity optimization. The nonlinear SRMS is linearized by modelling the effects of the significant nonlinearities as uncertain parameters. Each regulator design is evaluated for robust stability in light of the parametric uncertainties using both the small gain theorem with an H-infinity norm and the less conservative mu-analysis test. All three regulator designs offer significant improvement over the current system on the nominal plant. Unfortunately, even after dropping performance requirements and designing exclusively for robust stability, robust stability cannot be achieved: the SRMS suffers from lightly damped poles with real parametric uncertainties, and such a system renders the mu-analysis test, which allows for complex perturbations, too conservative.
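A minimal sketch of the first of the three techniques, LQR, on a stand-in double-integrator plant (the SRMS models themselves are not given in the abstract):

```python
# LQR design by solving the continuous-time algebraic Riccati equation.
# The plant and weights below are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # double-integrator plant (assumption)
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])            # state weighting: penalize position error
R = np.array([[0.1]])               # control effort weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P      # optimal state-feedback gain, u = -K x
print("LQR gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```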
Forensic discrimination of copper wire using trace element concentrations.
Dettman, Joshua R; Cassabaum, Alyssa A; Saunders, Christopher P; Snyder, Deanna L; Buscaglia, JoAnn
2014-08-19
Copper may be recovered as evidence in high-profile cases such as thefts and improvised explosive device incidents; comparison of copper samples from the crime scene and those associated with the subject of an investigation can provide probative associative evidence and investigative support. A solution-based inductively coupled plasma mass spectrometry method for measuring trace element concentrations in high-purity copper was developed using standard reference materials. The method was evaluated for its ability to use trace element profiles to statistically discriminate between copper samples considering the precision of the measurement and manufacturing processes. The discriminating power was estimated by comparing samples chosen on the basis of the copper refining and production process to represent the within-source (samples expected to be similar) and between-source (samples expected to be different) variability using multivariate parametric- and empirical-based data simulation models with bootstrap resampling. If the false exclusion rate is set to 5%, >90% of the copper samples can be correctly determined to originate from different sources using a parametric-based model and >87% with an empirical-based approach. These results demonstrate the potential utility of the developed method for the comparison of copper samples encountered as forensic evidence.
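A hedged sketch of the parametric-simulation-plus-threshold idea described above; the multivariate normal profiles, the simple Euclidean comparison score, and the covariances are illustrative assumptions, not the paper's method details.

```python
# Simulate within-source and between-source trace element comparisons, set the
# match threshold at a 5% false-exclusion rate, and read off the fraction of
# different-source pairs correctly discriminated.
import numpy as np

rng = np.random.default_rng(2)
p = 5                                       # number of trace elements (assumption)
within_sd = 0.05                            # measurement + within-lot spread
mu_a = rng.normal(1.0, 0.3, p)              # source A mean profile
mu_b = mu_a + rng.normal(0.0, 0.15, p)      # a different source B

n_boot = 20_000
a1 = rng.normal(mu_a, within_sd, (n_boot, p))
a2 = rng.normal(mu_a, within_sd, (n_boot, p))
b1 = rng.normal(mu_b, within_sd, (n_boot, p))
same = np.linalg.norm(a1 - a2, axis=1)      # within-source comparison scores
diff = np.linalg.norm(a1 - b1, axis=1)      # between-source comparison scores

threshold = np.quantile(same, 0.95)         # fix false exclusions at 5%
print(f"threshold = {threshold:.3f}")
print(f"different-source pairs discriminated: {(diff > threshold).mean():.1%}")
```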
Comparison of System Identification Techniques for the Hydraulic Manipulator Test Bed (HMTB)
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1996-01-01
In this thesis linear, dynamic, multivariable state-space models for three joints of the ground-based Hydraulic Manipulator Test Bed (HMTB) are identified. HMTB, housed at the NASA Langley Research Center, is a ground-based version of the Dexterous Orbital Servicing System (DOSS), a representative space station manipulator. The dynamic models of the HMTB manipulator will first be estimated by applying nonparametric identification methods to determine each joint's response characteristics using various input excitations. These excitations include sum of sinusoids, pseudorandom binary sequences (PRBS), bipolar ramping pulses, and chirp input signals. Next, two different parametric system identification techniques will be applied to identify the best dynamical description of the joints. The manipulator is localized about a representative space station orbital replacement unit (ORU) task allowing the use of linear system identification methods. Comparisons, observations, and results of both parametric system identification techniques are discussed. The thesis concludes by proposing a model reference control system to aid in astronaut ground tests. This approach would allow the identified models to mimic on-orbit dynamic characteristics of the actual flight manipulator thus providing astronauts with realistic on-orbit responses to perform space station tasks in a ground-based environment.
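To make the parametric-identification step concrete, here is a small hypothetical ARX least-squares fit driven by a PRBS-like input on a stand-in second-order plant; the HMTB joint dynamics are not reproduced.

```python
# Simulate a second-order discrete plant under binary excitation, then recover
# the ARX coefficients by least squares.
import numpy as np

rng = np.random.default_rng(3)
N = 2000
u = np.sign(rng.standard_normal(N))          # pseudorandom binary input
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5         # y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2]

y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1 * y[k-1] - a2 * y[k-2] + b1 * u[k-1] + b2 * u[k-2] \
           + 0.01 * rng.standard_normal()    # small measurement noise

# Least-squares ARX fit: regress y[k] on past outputs and inputs.
Phi = np.column_stack([-y[1:N-1], -y[0:N-2], u[1:N-1], u[0:N-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:N], rcond=None)
print("estimated [a1 a2 b1 b2]:", np.round(theta, 3))   # expect ~[-1.5 0.7 1.0 0.5]
```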
Parametric analysis of ATT configurations.
NASA Technical Reports Server (NTRS)
Lange, R. H.
1972-01-01
This paper describes the results of a Lockheed parametric analysis of the performance, environmental factors, and economics of an advanced commercial transport envisioned for operation in the post-1985 time period. The design parameters investigated include cruise speeds from Mach 0.85 to Mach 1.0, passenger capacities from 200 to 500, ranges of 2800 to 5500 nautical miles, and noise level criteria. NASA high performance configurations and alternate configurations are operated over domestic and international route structures. Indirect and direct costs and return on investment are determined for approximately 40 candidate aircraft configurations. The candidate configurations are input to an aircraft sizing and performance program which includes a subroutine for noise criteria. Comparisons are made between preferred configurations on the basis of maximum return on investment as a function of payload, range, and design cruise speed.
The reduced basis method for the electric field integral equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fares, M., E-mail: fares@cerfacs.f; Hesthaven, J.S., E-mail: Jan_Hesthaven@Brown.ed; Maday, Y., E-mail: maday@ann.jussieu.f
We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two-step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
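A toy sketch of the offline/online split for an affinely parametrized linear system (the BEM/EFIE operator is far richer; this only shows the mechanics of snapshot collection, basis orthonormalization, and cheap Galerkin-projected online solves).

```python
# Reduced basis method on A(mu) = A0 + mu*A1: expensive full solves offline,
# then many cheap reduced solves online. Matrices are random stand-ins.
import numpy as np

rng = np.random.default_rng(4)
n = 400
A0 = np.eye(n) + 0.01 * rng.standard_normal((n, n))
A1 = 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Offline (done once): full solves at snapshot parameters, orthonormalized.
snapshots = [np.linalg.solve(A0 + mu * A1, b) for mu in np.linspace(0.0, 1.0, 8)]
V, _ = np.linalg.qr(np.array(snapshots).T)           # n x 8 orthonormal basis

# Online (many queries): Galerkin projection onto the reduced space.
A0r, A1r, br = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b   # 8 x 8 reduced operators
for mu in (0.23, 0.61, 0.97):
    u_rb = V @ np.linalg.solve(A0r + mu * A1r, br)
    u_full = np.linalg.solve(A0 + mu * A1, b)        # reference, for the error only
    err = np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full)
    print(f"mu = {mu:.2f}: relative RB error {err:.2e}")
```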
Efficient scheme for parametric fitting of data in arbitrary dimensions.
Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching
2008-07-01
We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
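For comparison, NumPy's built-in Legendre least-squares fit achieves the same kind of fit numerically (the paper derives explicit expressions instead of calling a generic solver):

```python
# Discrete least-squares fit in the Legendre basis via numpy's legfit/legval.
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-1.0, 1.0, 201)        # Legendre polynomials' natural interval
y = np.exp(x) * np.cos(2.0 * x)        # smooth data to be fitted (assumption)

coeffs = L.legfit(x, y, deg=8)         # least-squares Legendre coefficients
resid = y - L.legval(x, coeffs)
print("max fit error:", np.abs(resid).max())
```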
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with that parameter. The authors compared the parametric images reconstructed using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%-29% and 32%-70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms; it yields better estimates of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges
2013-10-01
Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with that parameter. The authors compared the parametric images reconstructed using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%-29% and 32%-70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms; it yields better estimates of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.
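A generic preconditioned conjugate gradient sketch with a diagonal preconditioner, standing in for the paper's sensitivity-based preconditioner (the PET forward model and penalty are not reproduced):

```python
# PCG for a quadratic cost 0.5*x'Ax - b'x with a Jacobi (diagonal) preconditioner.
import numpy as np

rng = np.random.default_rng(5)
n = 200
M = rng.standard_normal((n, n))
A = M.T @ M + n * np.eye(n)          # SPD Hessian of the quadratic cost
b = rng.standard_normal(n)
Minv = 1.0 / np.diag(A)              # diagonal preconditioner (applied elementwise)

x = np.zeros(n)
r = b - A @ x
z = Minv * r
p = z.copy()
for it in range(500):
    Ap = A @ p
    alpha = (r @ z) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10 * np.linalg.norm(b):
        print(f"converged in {it + 1} iterations")
        break
    z_new = Minv * r_new
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p
    r, z = r_new, z_new
```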
Minimization of transmission cost in decentralized control systems
NASA Technical Reports Server (NTRS)
Wang, S.-H.; Davison, E. J.
1978-01-01
This paper considers the problem of stabilizing a linear time-invariant multivariable system by using local feedback controllers and some limited information exchange among local stations. The problem of achieving a given degree of stability with minimum transmission cost is solved.
Ku, Li-Jung Elizabeth; Pai, Ming-Chyi; Shih, Pei-Yu
2016-01-01
Given the shortage of cost-of-illness studies in dementia outside of the Western population, the current study estimated the annual cost of dementia in Taiwan and assessed whether different categories of care costs vary by severity using multiple disease-severity measures. This study included 231 dementia patient-caregiver dyads in a dementia clinic at a national university hospital in southern Taiwan. Three disease measures including cognitive, functional, and behavioral disturbances were obtained from patients based on medical history. A societal perspective was used to estimate the total costs of dementia according to three cost sub-categories. The association between dementia severity and cost of care was examined through bivariate and multivariate analyses. Total costs of care for moderate dementia patients were 1.4 times the costs for mild dementia and doubled from mild to severe dementia among our community-dwelling dementia sample. Multivariate analysis indicated that functional decline had a greater impact on all cost outcomes than behavioral disturbance, which showed no impact on any costs. Informal care costs accounted for the greatest share of the total cost of care for both mild (42%) and severe (43%) dementia patients. Since the total costs of dementia increased with severity, providing care to delay disease progression, with a focus on maintaining patient physical function, may reduce the overall cost of dementia. The greater contribution of informal care to total costs, as opposed to social care, also suggests a need for more publicly funded long-term care services to assist family caregivers of dementia patients in Taiwan.
Episiotomy increases perineal laceration length in primiparous women.
Nager, C W; Helliwell, J P
2001-08-01
The aim of this study was to determine the clinical factors that contribute to posterior perineal laceration length. A prospective observational study was performed in 80 consenting, mostly primiparous women with term pregnancies. Posterior perineal lacerations were measured immediately after delivery. Numerous maternal, fetal, and operator variables were evaluated against laceration length and degree of tear. Univariate and multivariate regression analyses were performed to evaluate laceration length and parametric clinical variables. Nonparametric clinical variables were evaluated against laceration length by the Mann-Whitney U test. A multivariate stepwise linear regression equation revealed that episiotomy adds nearly 3 cm to perineal lacerations. Tear length was highly associated with the degree of tear (R = 0.86, R² = 0.73) and the risk of recognized anal sphincter disruption. None of 35 patients without an episiotomy had a recognized anal sphincter disruption, but 6 of 27 patients with an episiotomy did (P < .001). Body mass index was the only maternal or fetal variable that showed even a slight correlation with laceration length (R = 0.30, P = .04). Episiotomy is the overriding determinant of perineal laceration length and recognized anal sphincter disruption.
Comparing and combining biomarkers as principal surrogates for time-to-event clinical endpoints.
Gabriel, Erin E; Sachs, Michael C; Gilbert, Peter B
2015-02-10
Principal surrogate endpoints are useful as targets for phase I and II trials. In many recent trials, multiple post-randomization biomarkers are measured. However, few statistical methods exist for comparison of or combination of biomarkers as principal surrogates, and none of these methods to our knowledge utilize time-to-event clinical endpoint information. We propose a Weibull model extension of the semi-parametric estimated maximum likelihood method that allows for the inclusion of multiple biomarkers in the same risk model as multivariate candidate principal surrogates. We propose several methods for comparing candidate principal surrogates and evaluating multivariate principal surrogates. These include the time-dependent and surrogate-dependent true and false positive fraction, the time-dependent and the integrated standardized total gain, and the cumulative distribution function of the risk difference. We illustrate the operating characteristics of our proposed methods in simulations and outline how these statistics can be used to evaluate and compare candidate principal surrogates. We use these methods to investigate candidate surrogates in the Diabetes Control and Complications Trial. Copyright © 2014 John Wiley & Sons, Ltd.
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merlon M.
2004-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merion M.
2002-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merlon M.
2003-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
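A hedged sketch of the kind of relation the CEM abstracts describe, labor hours as a low-order polynomial in thrust level; the sample points are purely illustrative placeholders, not Stennis data.

```python
# Fit a third-order polynomial (as the CEM abstracts mention) to hypothetical
# (thrust, labor-hours) pairs and use it for prediction.
import numpy as np

thrust = np.array([5.0, 25.0, 60.0, 120.0, 250.0, 500.0])   # klbf (illustrative)
hours = np.array([310.0, 420.0, 610.0, 980.0, 1900.0, 4200.0])  # (illustrative)

p3 = np.polyfit(thrust, hours, deg=3)        # third-order fit
print("cubic coefficients (high -> low):", np.round(p3, 4))
print("predicted hours at 300 klbf:", round(np.polyval(p3, 300.0)))
```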
An Affordability Comparison Tool (ACT) for Space Transportation
NASA Technical Reports Server (NTRS)
McCleskey, C. M.; Bollo, T. R.; Garcia, J. L.
2012-01-01
NASA has recently emphasized the importance of affordability for the Commercial Crew Development Program (CCDP), the Space Launch System (SLS), and the Multi-Purpose Crew Vehicle (MPCV). System architects and designers are challenged to come up with architectures and designs that do not bust the budget. This paper describes the Affordability Comparison Tool (ACT), which analyzes different systems or architecture configurations for affordability and allows for a comparison of total life cycle cost; annual recurring costs; affordability figures-of-merit, such as cost per pound, cost per seat, and cost per flight; and productivity measures, such as payload throughput. Although ACT is not a deterministic model, the paper develops algorithms and parametric factors that use characteristics of the architectures or systems being compared to produce important system outcomes (figures-of-merit). Example applications of outcome figures-of-merit are also documented to provide the designer with information on the relative affordability and productivity of different space transportation applications.
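The affordability figures-of-merit named above reduce to simple arithmetic; the sketch below uses hypothetical field names and values, not ACT's internal algorithms.

```python
# Compute cost-per-flight, cost-per-kg, cost-per-seat, and throughput for a
# hypothetical architecture. All inputs are illustrative assumptions.
def figures_of_merit(annual_cost, flights_per_year, payload_kg, seats):
    per_flight = annual_cost / flights_per_year
    return {
        "cost_per_flight": per_flight,
        "cost_per_kg": per_flight / payload_kg,
        "cost_per_seat": per_flight / seats,
        "payload_throughput_kg_per_year": payload_kg * flights_per_year,
    }

fom = figures_of_merit(annual_cost=1.2e9, flights_per_year=6,
                       payload_kg=20_000, seats=4)
for name, value in fom.items():
    print(f"{name}: {value:,.0f}")
```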
NASA Astrophysics Data System (ADS)
Dragan, Laurentiu; Watt, Stephen M.
Computer algebra in scientific computation squarely faces the dilemma of natural mathematical expression versus efficiency. While higher-order programming constructs and parametric polymorphism provide a natural and expressive language for mathematical abstractions, they can come at a considerable cost. We investigate how deeply nested type constructions may be optimized to achieve performance similar to that of hand-tuned code written in lower-level languages.
Out-of-pocket fertility patient expense: data from a multicenter prospective infertility cohort.
Wu, Alex K; Odisho, Anobel Y; Washington, Samuel L; Katz, Patricia P; Smith, James F
2014-02-01
The high costs of fertility care may deter couples from seeking care. Urologists often are asked about the costs of these treatments. To our knowledge previous studies have not addressed the direct out-of-pocket costs to couples. We characterized these expenses in patients seeking fertility care. Couples were prospectively recruited from 8 community and academic reproductive endocrinology clinics. Each participating couple completed face-to-face or telephone interviews and cost diaries at study enrollment, and 4, 10 and 18 months of care. We determined overall out-of-pocket costs, in addition to relationships between out-of-pocket costs and treatment type, clinical outcomes and socioeconomic characteristics on multivariate linear regression analysis. A total of 332 couples completed cost diaries and had data available on treatment and outcomes. Average age was 36.8 and 35.6 years in men and women, respectively. Of this cohort 19% received noncycle based therapy, 4% used ovulation induction medication only, 22% underwent intrauterine insemination and 55% underwent in vitro fertilization. The median overall out-of-pocket expense was $5,338 (IQR 1,197-19,840). Couples using medication only had the lowest median out-of-pocket expenses at $912 while those using in vitro fertilization had the highest at $19,234. After multivariate adjustment the out-of-pocket expense was not significantly associated with successful pregnancy. On multivariate analysis couples treated with in vitro fertilization spent an average of $15,435 more than those treated with intrauterine insemination. Couples spent about $6,955 for each additional in vitro fertilization cycle. These data provide real-world estimates of out-of-pocket costs, which can be used to help couples plan for expenses that they may incur with treatment. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
The 20 kW battery study program
NASA Technical Reports Server (NTRS)
1971-01-01
Six battery configurations were selected for detailed study and these are described. A computer program was modified for use in estimation of the weights, costs, and reliabilities of each of the configurations, as a function of several important independent variables, such as system voltage, battery voltage ratio (battery voltage/bus voltage), and the number of parallel units into which each of the components of the power subsystem was divided. The computer program was used to develop the relationship between the independent variables alone and in combination, and the dependent variables: weight, cost, and availability. Parametric data, including power loss curves, are given.
Royston, Patrick; Sauerbrei, Willi
2016-01-01
In a recent article, Royston (2015, Stata Journal 15: 275-291) introduced the approximate cumulative distribution (acd) transformation of a continuous covariate x as a route toward modeling a sigmoid relationship between x and an outcome variable. In this article, we extend the approach to multivariable modeling by modifying the standard Stata program mfp. The result is a new program, mfpa, that has all the features of mfp plus the ability to fit a new model for user-selected covariates that we call fp1(p1, p2). The fp1(p1, p2) model comprises the best-fitting combination of a dimension-one fractional polynomial (fp1) function of x and an fp1 function of acd(x). We describe a new model-selection algorithm called the function-selection procedure with acd transformation, which uses significance testing to attempt to simplify an fp1(p1, p2) model to a submodel, an fp1 or linear model in x or in acd(x). The function-selection procedure with acd transformation is related in concept to the fsp (fp function-selection procedure), which is an integral part of mfp and which is used to simplify a dimension-two (fp2) function. We describe the mfpa command and give univariable and multivariable examples with real data to demonstrate its use.
NASA Astrophysics Data System (ADS)
Tsao, Sinchai; Gajawelli, Niharika; Zhou, Jiayu; Shi, Jie; Ye, Jieping; Wang, Yalin; Lepore, Natasha
2014-03-01
Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications for decisions concerning treatment strategy. To this end we combine a predictive multi-task machine learning method [1] with a novel MR-based multivariate morphometric surface map of the hippocampus [2] to predict future cognitive scores of patients. Previous work by Zhou et al. [1] has shown that a multi-task learning framework that performs prediction of all future time points (or tasks) simultaneously can be used to encode both sparsity and temporal smoothness. They showed that this can be used in predicting cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-based baseline MRI features, MMSE score, demographic information, and ApoE status. While volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful in predictive modeling of AD. To this end, we applied Shi et al.'s [2] recently developed multivariate tensor-based morphometry (mTBM) parametric surface analysis method to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampus surface, we are able to significantly improve the predictive performance of ADAS cognitive scores 6, 12, 24, 36, and 48 months from baseline.
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1994-01-01
NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that decisions can be made accurately. Because of the long lead times required to develop space hardware, the cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in conceptual phases of a program is usually only vaguely defined, and the technology used is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost, such as weight, quantity, development culture, design inheritance, and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.
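A minimal weight-based CER sketch of the weight-cost relationship discussed above: fit cost = a · weight^b by least squares in log-log space. The numbers are placeholders, not the historical database mentioned in the abstract.

```python
# Power-law cost estimating relationship fitted in log-log space.
import numpy as np

weight = np.array([150.0, 420.0, 900.0, 1800.0, 3500.0])   # kg (hypothetical)
cost = np.array([12.0, 27.0, 46.0, 95.0, 150.0])           # $M (hypothetical)

b, log_a = np.polyfit(np.log(weight), np.log(cost), deg=1)
a = np.exp(log_a)
print(f"CER: cost ~= {a:.3f} * weight^{b:.3f}")
print(f"estimate for a 1200 kg system: ${a * 1200.0 ** b:.1f}M")
```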
State Pupil Transportation Funding: Equity and Efficiency.
ERIC Educational Resources Information Center
Zeitlin, Laurie S.
1990-01-01
Explores the influences state departments of education have on the cost and quality of pupil transportation. Evaluates the following state funding methodologies: (1) actual costs incurred; (2) a flat rate per unit; or (3) a multivariate calculation in providing service efficiently and equitably between districts. (MLF)
Phase mismatched optical parametric generation in semiconductor magnetoplasma
NASA Astrophysics Data System (ADS)
Dubey, Swati; Ghosh, S.; Jain, Kamal
2017-05-01
Optical parametric generation involves the interaction of pump, signal, and idler waves satisfying the law of conservation of energy. The phase mismatch parameter plays an important role in the spatial distribution of the field along the medium. In this paper, instead of an exactly matched wave vector, a small mismatch is admitted, with a degree of phase velocity mismatch between these waves; hence the medium must possess a certain finite coherence length. This wave-mixing process is well explained by coupled mode theory and a one-dimensional hydrodynamic model. Based on this scheme, expressions for the threshold pump field and transmitted intensity have been derived. It is observed that the threshold pump intensity and transmitted intensity can be manipulated by varying doping concentration and magnetic field under the phase mismatched condition. A compound semiconductor crystal of n-InSb at 77 K is assumed to be shined by a 10.6 μm CO2 laser with photon energy well below the band gap energy of the crystal, so that only free charge carriers influence the optical properties of the medium for the I.R. parametric generation in a semiconductor plasma medium. Favorable parameters were explored to incite the said process, keeping in mind the cost effectiveness and conversion efficiency of the process.
The detection of pleural effusion using a parametric EIT technique.
Arad, M; Zlochiver, S; Davidson, T; Shoenfeld, Y; Adunsky, A; Abboud, S
2009-04-01
The bioimpedance technique provides a safe, low-cost and non-invasive alternative for routine monitoring of lung fluid levels in patients. In this study we have investigated the feasibility of bioimpedance measurements to monitor pleural effusion (PE) patients. The measurement system (eight-electrode thoracic belt, opposite sequential current injections, 3 mA, 20 kHz) employed a parametric reconstruction algorithm to assess the left and right lung resistivity values. Bioimpedance measurements were taken before and after the removal of pleural fluids, while the patient was sitting at rest during tidal respiration in order to minimize movements of the thoracic cavity. The mean resistivity difference between the lung on the side with PE and the lung on the other side was -48 Ω cm. A high correlation was found between the mean lung resistivity value before the removal of the fluids and the volume of pleural fluids removed, with a sensitivity of -0.17 Ω cm ml⁻¹ (linear regression, R = 0.53). The present study further supports the feasibility and applicability of the bioimpedance technique, and specifically the approach of parametric left and right lung resistivity reconstruction, in monitoring lung patients.
NASA Astrophysics Data System (ADS)
Rana, Narender; Zhang, Yunlin; Wall, Donald; Dirahoui, Bachir; Bailey, Todd C.
2015-03-01
Integrated circuit (IC) technology is going through multiple changes in terms of patterning techniques (multiple patterning, EUV, and DSA), device architectures (FinFET, nanowire, graphene), and patterning scale (a few nanometers). These changes require tight controls on processes and measurements to achieve the required device performance, and they challenge metrology and process control in terms of capability and quality. Multivariate data with complex nonlinear trends and correlations generally cannot be described well by mathematical or parametric models but can be relatively easily learned by computing machines and used to predict or extrapolate. This paper introduces the predictive metrology approach, which has been applied to three different applications. Machine learning and predictive analytics have been leveraged to accurately predict dimensions of EUV resist patterns down to 18 nm half pitch by exploiting resist shrinkage patterns; these patterns could not be directly and accurately measured due to metrology tool limitations. Machine learning has also been applied to predict electrical performance early in the process pipeline for deep trench capacitance and metal line resistance. As the wafer goes through various processes, its associated cost multiplies, and it may take days to weeks to get the electrical performance readout. Predicting the electrical performance early on can be very valuable in enabling timely actionable decisions such as rework, scrap, or feeding forward or back the predicted information (or information derived from it) to improve or monitor processes. This paper provides a general overview of machine learning and advanced analytics applications in advanced semiconductor development and manufacturing.
The Cost of Doing Business: Cost Structure of Electronic Immunization Registries
Fontanesi, John M; Flesher, Don S; De Guire, Michelle; Lieberthal, Allan; Holcomb, Kathy
2002-01-01
Objective To predict the true cost of developing and maintaining an electronic immunization registry, and to set the framework for developing future cost-effectiveness and cost-benefit analyses. Data Sources/Study Setting Primary data collected at three immunization registries located in California, accounting for 90 percent of all immunization records in registries in the state during the study period. Study Design A parametric cost analysis compared registry development and maintenance expenditures to registry performance requirements. Data Collection/Extraction Methods Data were collected at each registry through interviews, reviews of expenditure records, technical accomplishments, development schedules, and immunization coverage rates. Principal Findings The cost of building immunization registries is predictable and independent of the hardware/software combination employed. The effort requires four man-years of technical effort or approximately $250,000 in 1998 dollars. Costs for maintaining a registry were approximately $5,100 per end user per three-year period. Conclusions There is a predictable cost structure for both developing and maintaining immunization registries. The cost structure can be used as a framework for examining the cost-effectiveness and cost-benefits of registries. The greatest factor effecting improvement in coverage rates was ongoing, user-based administrative investment. PMID:12479497
Stirling heat pump external heat systems - An appliance perspective
NASA Astrophysics Data System (ADS)
Vasilakis, Andrew D.; Thomas, John F.
A major issue facing the Stirling Engine Heat Pump is system cost, and, in particular, the cost of the External Heat System (EHS). The need for high temperature at the heater head (600 C to 700 C) results in low combustion system efficiencies unless efficient heat recovery is employed. The balance between energy efficiency and use of costly high temperature materials is critical to design and cost optimization. Blower power consumption and NO(x) emissions are also important. A new approach to the design and cost optimization of the EHS was taken by viewing the system from a natural gas-fired appliance perspective. To develop a design acceptable to gas industry requirements, American National Standards Institute (ANSI) code considerations were incorporated into the design process and material selections. A parametric engineering design and cost model was developed to perform the analysis, including the impact of design on NO(x) emissions. Analysis results and recommended EHS design and material choices are given.
Stirling heat pump external heat systems: An appliance perspective
NASA Astrophysics Data System (ADS)
Vasilakis, A. D.; Thomas, J. F.
1992-08-01
A major issue facing the Stirling Engine Heat Pump is system cost, and, in particular, the cost of the External Heat System (EHS). The need for high temperature at the heater head (600 C to 700 C) results in low combustion system efficiencies unless efficient heat recovery is employed. The balance between energy efficiency and use of costly high temperature materials is critical to design and cost optimization. Blower power consumption and NO(x) emissions are also important. A new approach to the design and cost optimization of the EHS system was taken by viewing the system from a natural gas-fired appliance perspective. To develop a design acceptable to gas industry requirements, American National Standards Institute (ANSI) code considerations were incorporated into the design process and material selections. A parametric engineering design and cost model was developed to perform the analysis, including the impact of design on NO(x) emissions. Analysis results and recommended EHS design and material choices are given.
NASA Technical Reports Server (NTRS)
1985-01-01
Fundamentally, the volumes of the oxidizer and fuel propellant scavenged from the orbiter and external tank determine the size and weight of the scavenging system. The optimization of system dimensions and weights is motivated by the requirement to minimize the use of partial length of the orbiter payload bay. Thus, the cost estimates begin with weights established for the optimum design. Both the design, development, test, and evaluation (DDT&E) costs and the theoretical first unit hardware production costs are estimated from parametric cost-weight scaling relations for four subsystems. For cryogenic propellants, the widely differing characteristics of the oxidizer and the fuel lead to two separate tank subsystems, in addition to the electrical and instrumentation subsystems. Hardware costs also involve quantity as an independent variable, since the number of production scavenging systems is not firm. For storable propellants, since the tankage volumes of the oxidizer and fuel are equal, the costs for developing and producing these systems are lower than for cryogenic propellants.
Process Cost Modeling for Multi-Disciplinary Design Optimization
NASA Technical Reports Server (NTRS)
Bao, Han P.; Freeman, William (Technical Monitor)
2002-01-01
For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and are nowhere near the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This report outlines the development of a process-based cost model in which the physical elements of the vehicle are costed according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed up to provide an overall estimation of the total production cost for a design configuration. This capability to directly link any design configuration to realistic cost estimation is a key requirement for high payoff MDO problems. Another important consideration in this report is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the development of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool. In successive sections, the report addresses the issues of cost modeling as follows. First, an introduction is presented to provide the background for the research work. Next, a quick review of cost estimation techniques is made with the intention to highlight their inappropriateness for what is really needed at the conceptual phase of the design process. The First-Order Process Velocity Cost Model (FOPV) is discussed at length in the next section. This is followed by an application of the FOPV cost model to a generic wing. For designs that have no precedence as far as acquisition costs are concerned, cost data derived from the FOPV cost model may not be accurate enough because of new requirements for shape complexity, material, equipment and precision/tolerance. The concept of Cost Modulus is introduced at this point to compensate for these new burdens on the basic processes. This is treated in section 5. The cost of a design must be conveniently linked to its CAD representation. The interfacing of CAD models and spreadsheets containing the cost equations is the subject of the next section, section 6. The last section of the report is a summary of the progress made so far, and the anticipated research work to be achieved in the future.
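A toy sketch of a first-order process velocity idea with a complexity modulus, under the assumption that the processing rate ramps toward a steady-state value v_ss with time constant tau, v(t) = v_ss(1 − e^(−t/τ)); the specific law, the solver, and all constants below are illustrative assumptions, not the report's calibrated model.

```python
# Elemental cost = labor rate x processing time x complexity modulus, where the
# processing time follows from integrating a first-order velocity law.
import numpy as np
from scipy.optimize import brentq

def process_time(s, v_ss, tau):
    """Solve integral_0^T v_ss*(1 - exp(-t/tau)) dt = s for T."""
    dist = lambda T: v_ss * (T - tau * (1.0 - np.exp(-T / tau))) - s
    return brentq(dist, 1e-9, 1e6)

def element_cost(s, v_ss, tau, rate_per_hr, modulus=1.0):
    """Cost modulus > 1 penalizes complex shapes, materials, or tight tolerances."""
    return rate_per_hr * process_time(s, v_ss, tau) * modulus

# Sum elemental costs over a few hypothetical wing features (s, v_ss, tau, modulus).
features = [(12.0, 3.0, 0.5, 1.0), (4.0, 1.5, 0.8, 1.6), (25.0, 6.0, 0.4, 1.2)]
total = sum(element_cost(s, v, t, rate_per_hr=95.0, modulus=m)
            for s, v, t, m in features)
print(f"total production cost estimate: ${total:,.0f}")
```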
Bridge maintenance to enhance corrosion resistance and performance of steel girder bridges
NASA Astrophysics Data System (ADS)
Moran Yanez, Luis M.
The integrity and efficiency of any national highway system rely on the condition of its various components. Bridges are fundamental elements of a highway system, representing an important investment and a strategic link that facilitates the transport of persons and goods. The cost to rehabilitate or replace a highway bridge represents an important expenditure for the owner, who needs to evaluate the correct time to assume that cost. Among the several factors that affect the condition of steel highway bridges, corrosion is identified as the main problem; in the USA, corrosion is the primary cause of structurally deficient steel bridges. The benefits of regular high-pressure superstructure washing and spot painting were evaluated as effective maintenance activities to reduce the corrosion process. The effectiveness of steel girder washing was assessed by developing corrosion deterioration models of composite steel girders and analyzing steel coupons in the laboratory under atmospheric corrosion for two alternatives: when high-pressure washing was performed and when it was not. The effectiveness of spot painting was assessed by analyzing corrosion on steel coupons with small damage, both unprotected and protected by spot painting. A parametric analysis of corroded steel girder bridges was performed, with emphasis on two alternatives: (a) when steel bridge girder washing is performed according to a particular frequency, and (b) when no bridge washing is performed on the girders. The reduction of structural capacity was observed for both alternatives over the structure's service life, estimated at 100 years. An economic analysis using the Life-Cycle Cost Analysis method demonstrated that it is more cost-effective to perform steel girder washing as a scheduled maintenance activity than to forgo washing.
Efficient solution of a multi objective fuzzy transportation problem
NASA Astrophysics Data System (ADS)
Vidhya, V.; Ganesan, K.
2018-04-01
In this paper we present a methodology for the solution of the multi-objective fuzzy transportation problem in which all the cost and time coefficients are trapezoidal fuzzy numbers and the supply and demand are crisp numbers. Using a new fuzzy arithmetic on the parametric form of trapezoidal fuzzy numbers and a new ranking method, all efficient solutions are obtained. The proposed method is illustrated with an example.
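A small sketch of trapezoidal fuzzy arithmetic and a centroid-style ranking score; the addition rule is standard, but the paper's parametric-form arithmetic and ranking method are not reproduced here.

```python
# Trapezoidal fuzzy numbers (a, b, c, d): support [a, d], core [b, c].
from dataclasses import dataclass

@dataclass
class TrapFN:
    a: float
    b: float
    c: float
    d: float

    def __add__(self, other):
        # Standard componentwise addition of trapezoidal fuzzy numbers.
        return TrapFN(self.a + other.a, self.b + other.b,
                      self.c + other.c, self.d + other.d)

    def rank(self):
        # Simple defuzzified score for comparison (an illustrative choice).
        return (self.a + self.b + self.c + self.d) / 4.0

route1 = TrapFN(2, 3, 4, 6) + TrapFN(1, 2, 2, 3)   # fuzzy cost of two legs
route2 = TrapFN(3, 4, 5, 7) + TrapFN(0, 1, 1, 2)
best = min((route1, route2), key=TrapFN.rank)
print("preferred route cost:", best, "score:", best.rank())
```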
The report gives results of a series of computer runs using the DOE-2.1E building energy model, simulating a small office in a hot, humid climate (Miami). These simulations assessed the energy and relative humidity (RH) penalties when the outdoor air (OA) ventilation rate is inc...
Ku, Li-Jung Elizabeth; Pai, Ming-Chyi; Shih, Pei-Yu
2016-01-01
Objective Given the shortage of cost-of-illness studies in dementia outside of the Western population, the current study estimated the annual cost of dementia in Taiwan and assessed whether different categories of care costs vary by severity using multiple disease-severity measures. Methods This study included 231 dementia patient-caregiver dyads in a dementia clinic at a national university hospital in southern Taiwan. Three disease measures including cognitive, functional, and behavioral disturbances were obtained from patients based on medical history. A societal perspective was used to estimate the total costs of dementia according to three cost sub-categories. The association between dementia severity and cost of care was examined through bivariate and multivariate analyses. Results Total costs of care for moderate dementia patients were 1.4 times the costs for mild dementia and doubled from mild to severe dementia among our community-dwelling dementia sample. Multivariate analysis indicated that functional decline had a greater impact on all cost outcomes than behavioral disturbance, which showed no impact on any costs. Informal care costs accounted for the greatest share of the total cost of care for both mild (42%) and severe (43%) dementia patients. Conclusions Since the total costs of dementia increased with severity, providing care to delay disease progression, with a focus on maintaining patient physical function, may reduce the overall cost of dementia. The greater contribution of informal care to total costs, as opposed to social care, also suggests a need for more publicly-funded long-term care services to assist family caregivers of dementia patients in Taiwan. PMID:26859891
Orthognathic cases: what are the surgical costs?
Kumar, Sanjay; Williams, Alison C; Ireland, Anthony J; Sandy, Jonathan R
2008-02-01
This multicentre, retrospective study assessed the cost, and factors influencing the cost, of combined orthodontic and surgical treatment for dentofacial deformity. The sample, from a single region in England, comprised 352 subjects treated in 11 hospital orthodontic units who underwent orthognathic surgery between 1 January 1995 and 31 March 2000. Statistical analysis of the data was undertaken using non-parametric tests (Spearman and Wilcoxon signed rank). The average total treatment cost for the tax year from 6 April 2000 to 5 April 2001 was €6360.19, with costs ranging from €3835.90 to €12 150.55. The average operating theatre cost was €2189.54 and the average cost of inpatient care (including the cost of the intensive care unit and ward stay) was €1455.20. Joint clinic costs comprised, on average, 10 per cent of the total cost, whereas appointments in other specialities, apart from orthodontics, comprised 2 per cent of the total costs. Differences in the observed costings between the units were unexplained but may reflect surgical difficulties, differences in clinical practice, or efficiency of patient care. These indicators need to be considered in future outcome studies for orthognathic patients.
NASA Technical Reports Server (NTRS)
Prince, Frank A.
2017-01-01
Building a parametric cost model is hard work. The data is noisy and often does not behave like we want it to. We need statistics to give us an indication of the goodness of our models, but statistics can be manipulated and can mislead. On top of all of that, our own very human biases can lead us astray, causing us to see patterns in the noise and draw false conclusions from the data. Yet it is the data itself that is the foundation for making better cost estimates and cost models. I believe the mistake we often make is believing that our models are representative of the data; that our models summarize the experiences, the knowledge, and the stories contained in the data. However, it is the opposite that is true. Our models are but imitations of reality. They give us trends, but not truth. The experiences, the knowledge, and the stories that we need in order to make good cost estimates are bound up in the data. You cannot separate good cost estimating from a knowledge of the historical data. One final thought: it is our attempts to make sense out of the randomness that lead us astray. In order to make progress as cost modelers and cost estimators, we must accept that there are real limitations on our ability to model the past and predict the future. I do not believe we should throw up our hands and say this is the best we can do. Rather, to see real improvement we must first recognize these limitations, avoid the easy but misleading solutions, and seek to find ways to better model the world we live in. I don't have any simple solutions. Perhaps the answers lie in better data or in a totally different approach to simulating how the world works. All I know is that we must do our best to speak truth to ourselves and our customers. Misleading ourselves and our customers will, in the end, result in an inability to have a positive impact on those we serve.
Direct and indirect costs among employees with diabetic retinopathy in the United States.
Lee, Lauren J; Yu, Andrew P; Cahill, Kevin E; Oglesby, Alan K; Tang, Jackson; Qiu, Ying; Birnbaum, Howard G
2008-05-01
To examine, from the employer perspective, the direct (healthcare) and indirect (work-loss) costs of employees with diabetic retinopathy (DR) compared to control non-DR employees with diabetes, and within DR subgroups. Annual costs were compared using claims data from 17 large companies (1999-2004). 'DR employees' (n = 2098) had ≥1 DR diagnosis (International Classification of Diseases, 9th Revision [ICD-9]); DR subgroups included employees with diabetic macular edema (DME), proliferative DR (PDR), and employees receiving photocoagulation or vitrectomy procedures. Descriptive and multivariate tests were performed. DR employee annual direct costs were $18,218 (indirect: $3548) compared to $11,898 (indirect: $2374) for controls (Δ = $2032, adjusted; p < 0.0001). Cost differences were larger across DR employee subgroups: DME/non-DME ($28,606/$16,363); PDR/non-PDR ($30,135/$13,445; p < 0.0001); DR with/without photocoagulation ($34,539/$16,041; p < 0.0001); and DR with/without vitrectomy ($63,933/$17,239; p < 0.0001). This study examined the incremental costs of treating DR employees, which may be higher than the incremental costs of DR itself. Some measures of diabetes severity (e.g., duration of diabetes) were not available in the claims data and were therefore not included in the multivariate models. The costs associated with photocoagulation and vitrectomy pertain to the individuals who underwent these procedures, not to the procedures themselves. DR employees had significantly higher costs than controls, and larger differences existed within DR subgroups. Indirect costs accounted for about 20% of the total cost.
Timmins, Kate A; Hulme, Claire; Cade, Janet E
2015-01-01
To describe the diet costs of adults in the National Diet and Nutrition Survey (NDNS) and explore patterns in costs according to sociodemographic indicators. Cross-sectional diet diary information was matched to a database of food prices to assign a cost to each food or non-alcoholic beverage consumed. Daily diet costs were calculated, as well as costs per 10 MJ to improve comparability across differing energy requirements. Costs were compared between categories of sociodemographic variables and health behaviours. Multivariable regression assessed the effects of each variable on diet costs after adjustment. The NDNS is a rolling dietary survey, recruiting a representative UK sample each year. The study features data from 2008-2010. Adults aged 19 years or over were included. The sample consisted of 1014 participants. The geometric mean daily diet cost was £2·89 (95 % CI £2·81, £2·96). Energy intake and daily diet cost were strongly associated. The mean energy-adjusted cost was £4·09 (95 % CI £4·01, £4·18) per 10 MJ. Energy-adjusted costs differed significantly between many subgroups, including by sex and household income. Multivariable regression found significant effects of sex, qualifications and occupation (costs per 10 MJ only), as well as equivalized household income, BMI and fruit and vegetable consumption on diet costs. This is the first time that monetary costs have been applied to the diets of NDNS adults. The findings suggest that certain subgroups in the UK - for example those on lower incomes - consume diets of lower monetary value. Observed differences were mostly in the directions anticipated.
Spatial hydrological drought characteristics in Karkheh River basin, southwest Iran using copulas
NASA Astrophysics Data System (ADS)
Dodangeh, Esmaeel; Shahedi, Kaka; Shiau, Jenq-Tzong; MirAkbari, Maryam
2017-08-01
Investigation of drought characteristics such as severity, duration, and frequency is crucial for water resources planning and management in a river basin. While the methodology for multivariate drought frequency analysis using copulas is well established, the effect of the choice of parameter estimation method on the obtained results has not yet been investigated. This research conducts a comparative analysis between the parametric maximum likelihood (ML) method and the non-parametric Kendall τ method for copula parameter estimation. The methods were employed to study joint severity-duration probability and recurrence intervals in the Karkheh River basin (southwest Iran), which is facing severe water-deficit problems. Daily streamflow data at three hydrological gauging stations (Tang Sazbon, Huleilan, and Polchehr) near the Karkheh dam were used to draw flow duration curves (FDCs) for the three stations. The Q_{75} index extracted from the FDCs was set as the threshold level for extracting drought characteristics such as drought duration and severity on the basis of run theory. Drought duration and severity were separately modeled using univariate probability distributions, and gamma-GEV, LN2-exponential, and LN2-gamma were selected as the best paired drought severity-duration inputs for the copulas according to the Akaike Information Criterion (AIC), Kolmogorov-Smirnov, and chi-square tests. The Archimedean Clayton and Frank copulas and the extreme-value Gumbel copula were employed to construct joint cumulative distribution functions (JCDFs) of droughts for each station. The Frank copula at Tang Sazbon and the Gumbel copula at the Huleilan and Polchehr stations were identified as the best copulas based on performance evaluation criteria including AIC, BIC, log-likelihood, and root mean square error (RMSE) values. Based on the RMSE values, the non-parametric Kendall τ method is preferred to the parametric maximum likelihood estimation method. The results showed greater drought return periods from the parametric ML method in comparison to the non-parametric Kendall τ method. The results also showed that the stations located on tributaries (Huleilan and Polchehr) have close return periods, while the station along the main river (Tang Sazbon) has smaller return periods for drought events with identical drought duration and severity.
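The τ-to-θ conversion at the heart of the non-parametric Kendall approach is compact enough to sketch. Below is a minimal Python illustration with made-up duration/severity pairs standing in for the run-theory drought series; the Gumbel and Clayton relations (θ = 1/(1−τ) and θ = 2τ/(1−τ)) and the "AND" joint return period formula are the standard ones for Archimedean copulas, and μ (the mean drought interarrival time) is an assumed value.

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical paired drought series (duration in days, severity as cumulative deficit).
duration = np.array([12, 25, 8, 40, 18, 30, 15, 22])
severity = np.array([3.5, 8.0, 1.9, 9.2, 4.8, 14.2, 5.5, 6.0])

# Non-parametric Kendall-tau estimate of the copula parameter.
tau, _ = kendalltau(duration, severity)
theta_gumbel = 1.0 / (1.0 - tau)         # Gumbel copula:  tau = 1 - 1/theta
theta_clayton = 2.0 * tau / (1.0 - tau)  # Clayton copula: tau = theta / (theta + 2)

def gumbel_cdf(u, v, theta):
    """Joint CDF C(u, v) of the Gumbel copula."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Joint "AND" return period for a drought exceeding both marginal quantiles,
# with mu = mean interarrival time of drought events in years (assumed here).
mu = 1.5
u, v = 0.9, 0.9  # marginal non-exceedance probabilities F_D(d), F_S(s)
T_and = mu / (1.0 - u - v + gumbel_cdf(u, v, theta_gumbel))
print(f"tau={tau:.2f}, theta={theta_gumbel:.2f}, T_and={T_and:.1f} years")
```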
Alternate Propulsion Subsystem Concepts
NASA Technical Reports Server (NTRS)
Levack, Daniel J. H.
2000-01-01
The Alternate Propulsion Subsystem Concepts contract had seven tasks defined that are reported under this contract deliverable. The tasks were: F-1A Restart Study, J-2S Restart Study, Propulsion Database Development, SSME Upper Stage Use, CERs for Liquid Propellant Rocket Engines, Advanced Low Cost Engines, and Tripropellant Comparison Study. The two restart studies, F-1A and J-2S, generated program plans for restarting production of each engine. Special emphasis was placed on determining changes to individual parts due to obsolete materials, changes in OSHA and environmental concerns, new processes available, and any configuration changes to the engines. The Propulsion Database Development task developed a database structure and format which is easy to use and modify while also being comprehensive in the level of detail available. The database structure included extensive engine information and allows for parametric data generation for conceptual engine concepts. The SSME Upper Stage Use task examined the changes needed or desirable to use the SSME as an upper stage engine, both in a second stage and in a translunar injection stage. The CERs for Liquid Engines task developed qualitative parametric cost estimating relationships at the engine and major subassembly level for estimating development and production costs of chemical propulsion liquid rocket engines. The Advanced Low Cost Engines task examined propulsion systems for SSTO applications, including engine concept definition, mission analysis, trade studies, operating point selection, turbomachinery alternatives, life cycle cost, weight definition, and point design conceptual drawings and component design. The task concentrated on bipropellant engines, but also examined tripropellant engines. The Tripropellant Comparison Study task provided an unambiguous comparison among various tripropellant implementation approaches and cycle choices, and then compared them to similarly designed bipropellant engines in the SSTO mission. This volume overviews each of the tasks, giving its objectives, main results, and conclusions. More detailed Final Task Reports are available on each individual task.
Cost of Contralateral Prophylactic Mastectomy
Deshmukh, Ashish A.; Cantor, Scott B.; Crosby, Melissa A.; Dong, Wenli; Shen, Yu; Bedrosian, Isabelle; Peterson, Susan K.; Parker, Patricia A.; Brewster, Abenaa M.
2014-01-01
Purpose To compare the health care costs of women with unilateral breast cancer who underwent contralateral prophylactic mastectomy (CPM) with those of women who did not. Methods We conducted a retrospective study of 904 women treated for stage I–III breast cancer with or without CPM. Women were matched according to age, year at diagnosis, stage, and receipt of chemotherapy. We included healthcare costs starting from the date of surgery to 24 months. We identified whether care was immediate or delayed (CPM within 6 months or 6–24 months after initial surgery, respectively). Costs were converted to approximate Medicare reimbursement values and adjusted for inflation. Multivariable regression analysis was performed to evaluate the effect of CPM on total breast cancer care costs adjusting for patient characteristics and accounting for matched pairs. Results The mean difference between the CPM and no-CPM matched groups was $3,573 (standard error [SE]=$455) for professional costs, $4,176 (SE=$1,724) for technical costs, and $7,749 (SE=$2,069) for total costs. For immediate and delayed CPM, the mean difference for total costs was $6,528 (SE =$2,243) and $16,744 (SE=$5,017), respectively. In multivariable analysis, the CPM group had a statistically significant increase of 16.9% in mean total costs compared to the no-CPM group (P<0.0001). HER-2/neu-positive status, receipt of radiation, and reconstruction were associated with increases in total costs. Conclusions CPM significantly increases short-term healthcare costs for women with unilateral breast cancer. These patient-level cost results can be used for future studies that evaluate the influence of costs of CPM on decision making. PMID:24809301
Cost comparison of transcatheter and operative closures of ostium secundum atrial septal defects
O’Byrne, Michael L.; Gillespie, Matthew J.; Shinohara, Russell T.; Dori, Yoav; Rome, Jonathan J.; Glatz, Andrew C.
2015-01-01
Background Clinical outcomes for transcatheter and operative closures of atrial septal defects (ASDs) are similar. Economic cost for each method has not been well described. Methods A single-center retrospective cohort study of children and adults <30 years of age undergoing closure for single secundum ASD from January 1, 2007, to April 1, 2012, was performed to measure differences in inflation-adjusted cost of operative and transcatheter closures of ASD. A propensity score weight-adjusted multivariate regression model was used in an intention-to-treat analysis. Costs for reintervention and crossover admissions were included in primary analysis. Results A total of 244 subjects were included in the study (64% transcatheter and 36% operative), of which 2% (n = 5) were ≥18 years. Crossover rate from transcatheter to operative group was 3%. Risk of reintervention (P = .66) and 30-day mortality (P = .37) were not significantly different. In a multivariate model, adjusted cost of operative closure was 2012 US $60,992 versus 2012 US $55,841 for transcatheter closure (P < .001). Components of total cost favoring transcatheter closure were length of stay, medications, and follow-up radiologic and laboratory testing, overcoming higher costs of procedure and echocardiography. Professional costs did not differ. The rate of 30-day readmission was greater in the operative cohort, further increasing the cost advantage of transcatheter closure. Sensitivity analyses demonstrated that costs of follow-up visits influenced relative cost but that device closure remained favorable over a broad range of crossover and reintervention rates. Conclusion For single secundum ASD, cost comparison analysis favors transcatheter closure over the short term. The cost of follow-up regimens influences the cost advantage of transcatheter closure. PMID:25965721
Fixed order dynamic compensation for multivariable linear systems
NASA Technical Reports Server (NTRS)
Kramer, F. S.; Calise, A. J.
1986-01-01
This paper considers the design of fixed order dynamic compensators for multivariable time invariant linear systems, minimizing a linear quadratic performance cost functional. Attention is given to robustness issues in terms of multivariable frequency domain specifications. An output feedback formulation is adopted by suitably augmenting the system description to include the compensator states. Either a controller or observer canonical form is imposed on the compensator description to reduce the number of free parameters to its minimal number. The internal structure of the compensator is prespecified by assigning a set of ascending feedback invariant indices, thus forming a Brunovsky structure for the nominal compensator.
Parametric Testing of Launch Vehicle FDDR Models
NASA Technical Reports Server (NTRS)
Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar
2011-01-01
For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and to initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we describe how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we use multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
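The abstract does not publish the PT generator itself, but the combination it names, Monte Carlo sampling plus n-factor combinatorial exploration, can be sketched for n = 2. In this hypothetical fragment, `levels` and the parameter names are invented stand-ins for ERIS fault-injection inputs; every pair of levels for every pair of parameters is enumerated while the remaining parameters are filled with random draws.

```python
import itertools
import random

# Hypothetical fault-injection parameters for a launch-vehicle failure simulation.
levels = {
    "fault_type":  ["none", "sensor_bias", "valve_stuck"],
    "inject_time": [10.0, 60.0, 120.0],   # seconds after liftoff
    "severity":    [0.1, 0.5, 1.0],
}

def pairwise_runs(levels):
    """Enumerate every value pair for every pair of parameters (2-factor
    coverage), filling the remaining parameters randomly (Monte Carlo)."""
    names = list(levels)
    runs = []
    for a, b in itertools.combinations(names, 2):
        for va, vb in itertools.product(levels[a], levels[b]):
            run = {n: random.choice(levels[n]) for n in names}
            run[a], run[b] = va, vb
            runs.append(run)
    return runs

random.seed(0)
for run in pairwise_runs(levels)[:5]:
    print(run)
```

This brute-force enumeration is not a minimal covering array, but it guarantees that every 2-factor combination appears at least once, which is the coverage property the abstract describes.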
Non-parametric causality detection: An application to social media and financial data
NASA Astrophysics Data System (ADS)
Tsapeli, Fani; Musolesi, Mirco; Tino, Peter
2017-10-01
According to behavioral finance, stock market returns are influenced by emotional, social and psychological factors. Several recent works support this theory by providing evidence of correlation between stock market prices and collective sentiment indexes measured using social media data. However, a pure correlation analysis is not sufficient to prove that stock market returns are influenced by such emotional factors since both stock market prices and collective sentiment may be driven by a third unmeasured factor. Controlling for factors that could influence the study by applying multivariate regression models is challenging given the complexity of stock market data. False assumptions about the linearity or non-linearity of the model and inaccuracies on model specification may result in misleading conclusions. In this work, we propose a novel framework for causal inference that does not require any assumption about a particular parametric form of the model expressing statistical relationships among the variables of the study and can effectively control a large number of observed factors. We apply our method in order to estimate the causal impact that information posted in social media may have on stock market returns of four big companies. Our results indicate that social media data not only correlate with stock market returns but also influence them.
New opportunities for future small civil turbine engines: Overviewing the GATE studies
NASA Technical Reports Server (NTRS)
Strack, W. C.
1979-01-01
An overview of four independent studies forecasts the potential impact of advanced-technology turbine engines in the post-1988 market and identifies important aircraft and missions, desirable engine sizes, engine performance, and cost goals. Parametric evaluations of various engine cycles, configurations, design features, and advanced technology elements defined baseline conceptual engines for each of the important missions identified by the market analysis. Both fixed-wing and helicopter aircraft, and turboshaft, turboprop, and turbofan engines were considered. Sizable performance gains (e.g., a 20% decrease in SFC) and engine cost reductions of sufficient magnitude to challenge the reciprocating engine in the 300-500 SHP class are predicted.
Minimum noise impact aircraft trajectories
NASA Technical Reports Server (NTRS)
Jacobson, I. D.; Melton, R. G.
1981-01-01
Numerical optimization is used to compute the optimum flight paths, based upon a parametric form that implicitly includes some of the problem restrictions. The other constraints are formulated as penalties in the cost function. Various aircraft on multiple trajectories (landing and takeoff) can be considered. The modular design employed allows for the substitution of alternate models of the population distribution, aircraft noise, flight paths, and annoyance, or for the addition of other features (e.g., fuel consumption) in the cost function. A reduction in the required amount of searching over local minima was achieved by exploiting the statistical lateral dispersion present in the flight paths.
Research on the Applicable Method of Valuation of Pure Electric Used vehicles
NASA Astrophysics Data System (ADS)
Cai, Yun; Tan, Zhengping; Wang, Yidong; Mao, Pan
2018-03-01
With the rapid growth in the ownership of pure electric vehicles, research on the valuation of used electric vehicles has become key to the development of the pure electric used vehicle market. The paper analyzes the application of three value assessment methods, the current market price method, the capitalized earnings method, and the replacement cost method, to pure electric used vehicles, and concludes that the replacement cost method is the most suitable for pure electric used cars. The article also conducts an exploratory study of parametric corrections, aimed at the characteristics of pure electric vehicles and the constituent factors of replacement cost. Through the analysis of the applicability parameters of physical devaluation, functional devaluation, and economic devaluation, the revised replacement cost method can be used for the valuation of pure electric used vehicles for private use.
Hastrup, J L; Johnson, C A; Hotchkiss, A P; Kraemer, D L
1986-11-01
Fowles (1983), citing evidence from separate studies, suggests that both incentive and response cost paradigms increase heart rate and should be subsumed under Gray's (1975) 'appetitive motivational system'. Shock avoidance and loss of reward (response cost) contingencies, while aversive, appear to evoke this motivational system; consequently both should elicit heart rate increases independent of anxiety. The present investigation compared magnitude of heart rate changes observed under conditions of winning and losing money. Results showed: no differences between incentive and response cost conditions; no effect of state anxiety on heart rate in these conditions, despite an elevation of state anxiety on the task day relative to a subsequent relaxation day assessment; and some evidence for the presence under both such appetitive conditions of cardiovascular hyperresponsivity among offspring of hypertensive parents. The results suggest a need for systematic parametric studies of experimental conditions.
Costing Hospital Surgery Services: The Method Matters
Mercier, Gregoire; Naro, Gerald
2014-01-01
Background Accurate hospital costs are required for policy-makers, hospital managers and clinicians to improve efficiency and transparency. However, different methods are used to allocate direct costs, and their agreement is poorly understood. The aim of this study was to assess the agreement between bottom-up and top-down unit costs of a large sample of surgical operations in a French tertiary centre. Methods Two thousand one hundred and thirty consecutive procedures performed between January and October 2010 were analysed. Top-down costs were based on pre-determined weights, while bottom-up costs were calculated through an activity-based costing (ABC) model. The agreement was assessed using correlation coefficients and the Bland and Altman method. Variables associated with the difference between methods were identified with bivariate and multivariate linear regressions. Results The correlation coefficient amounted to 0.73 (95%CI: 0.72; 0.76). The overall agreement between methods was poor. In a multivariate analysis, the cost difference was independently associated with age (Beta = −2.4; p = 0.02), ASA score (Beta = 76.3; p<0.001), RCI (Beta = 5.5; p<0.001), staffing level (Beta = 437.0; p<0.001) and intervention duration (Beta = −10.5; p<0.001). Conclusions The ability of the current method to provide relevant information to managers, clinicians and payers is questionable. As in other European countries, a shift towards time-driven activity-based costing should be advocated. PMID:24817167
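For readers unfamiliar with the agreement analysis named here, the distinction between correlation and Bland-Altman agreement is easy to demonstrate. The sketch below uses invented per-procedure costs, not the study's data; it computes the correlation coefficient and then the bias and 95% limits of agreement in the Bland and Altman style.

```python
import numpy as np

# Hypothetical per-procedure unit costs (euros) from the two allocation methods.
top_down  = np.array([1200.0, 950.0, 3100.0, 780.0, 2200.0, 1500.0])
bottom_up = np.array([1100.0, 990.0, 2700.0, 820.0, 2500.0, 1400.0])

r = np.corrcoef(top_down, bottom_up)[0, 1]   # correlation: association, not agreement

diff = top_down - bottom_up
bias = diff.mean()                           # systematic difference between methods
loa = 1.96 * diff.std(ddof=1)                # half-width of 95% limits of agreement

print(f"r = {r:.2f}")
print(f"bias = {bias:.0f}, limits of agreement = [{bias - loa:.0f}, {bias + loa:.0f}]")
```

Two methods can correlate strongly yet disagree badly on individual cases, which is exactly the pattern the study reports (r = 0.73 but poor agreement).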
Improving the quality of pressure ulcer care with prevention: a cost-effectiveness analysis.
Padula, William V; Mishra, Manish K; Makic, Mary Beth F; Sullivan, Patrick W
2011-04-01
In October 2008, Centers for Medicare and Medicaid Services discontinued reimbursement for hospital-acquired pressure ulcers (HAPUs), thus placing stress on hospitals to prevent incidence of this costly condition. To evaluate whether prevention methods are cost-effective compared with standard care in the management of HAPUs. A semi-Markov model simulated the admission of patients to an acute care hospital from the time of admission through 1 year using the societal perspective. The model simulated health states that could potentially lead to an HAPU through either the practice of "prevention" or "standard care." Univariate sensitivity analyses, threshold analyses, and Bayesian multivariate probabilistic sensitivity analysis using 10,000 Monte Carlo simulations were conducted. Cost per quality-adjusted life-years (QALYs) gained for the prevention of HAPUs. Prevention was cost saving and resulted in greater expected effectiveness compared with the standard care approach per hospitalization. The expected cost of prevention was $7276.35, and the expected effectiveness was 11.241 QALYs. The expected cost for standard care was $10,053.95, and the expected effectiveness was 9.342 QALYs. The multivariate probabilistic sensitivity analysis showed that prevention resulted in cost savings in 99.99% of the simulations. The threshold cost of prevention was $821.53 per day per person, whereas the cost of prevention was estimated to be $54.66 per day per person. This study suggests that it is more cost effective to pay for prevention of HAPUs compared with standard care. Continuous preventive care of HAPUs in acutely ill patients could potentially reduce incidence and prevalence, as well as lead to lower expenditures.
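The probabilistic sensitivity analysis described here, repeated Monte Carlo draws over uncertain costs and QALYs, can be miniaturized as follows. The distributions and their spreads are assumptions for illustration only (the published point estimates are reused as means); the actual model was a semi-Markov simulation, not the direct draws shown here.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # Monte Carlo draws (the study used 10,000)

# Hypothetical parameter distributions loosely echoing the reported point
# estimates: costs via gamma distributions, QALYs via normal distributions.
cost_prev = rng.gamma(shape=100.0, scale=7276.35 / 100.0, size=n)
cost_std  = rng.gamma(shape=100.0, scale=10053.95 / 100.0, size=n)
qaly_prev = rng.normal(11.241, 0.5, size=n)
qaly_std  = rng.normal(9.342, 0.5, size=n)

d_cost = cost_prev - cost_std
d_qaly = qaly_prev - qaly_std

# Prevention "dominates" when it is both cheaper and more effective.
p_dominant = np.mean((d_cost < 0) & (d_qaly > 0))
print(f"prevention cost-saving and more effective in {p_dominant:.2%} of draws")
```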
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thayer, G.R.; Williamson, K.D. Jr.; Ramirez, O.
The authors compare the competitive position of peat for energy with coal, oil, and cogenerative systems in gasifiers and solid-fuel boilers. They also explore the possibility for peat use in industry. To identify the major factors, they analyze costs using a Los Alamos levelized cost code, and they study parametric costs, comparing peat production in constant dollars with interest rates and return on investment. They consider costs of processing plant construction, sizes and kinds of boilers, retrofitting, peat drying, and mining methods. They examine mining requirements for Moin, Changuinola, and El Cairo and review wet mining and dewatering methods. Peat can, indeed, be competitive with other energy sources, but this depends on the ratio of fuel costs to boiler costs. This ratio is nearly constant in comparison with cogeneration in a steam-only production system. For grate boilers using Costa Rican high-ash peat, and for small nonautomatic boilers now used in Costa Rica, the authors recommend combustion tests. An appendix contains a preliminary mining plan and cost estimate for the El Cairo peat deposit. 8 refs., 43 figs., 19 tabs.
Fast Conceptual Cost Estimating of Aerospace Projects Using Historical Information
NASA Technical Reports Server (NTRS)
Butts, Glenn
2007-01-01
Accurate estimates can be created in less than a minute by applying powerful techniques and algorithms to create an Excel-based parametric cost model. In five easy steps you will learn how to normalize your company's historical cost data to the new project parameters. This paper provides a complete, easy-to-understand, step-by-step how-to guide. Such a guide does not seem to currently exist. Over 2,000 hours of research, data collection, and trial and error, and thousands of lines of Excel Visual Basic for Applications (VBA) code were invested in developing these methods. While VBA is not required to use this information, it increases the power and aesthetics of the model. Implementing all of the steps described, while not required, will increase the accuracy of the results.
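The paper's five-step recipe is not reproduced in this abstract, so the following is only a generic sketch of the normalization idea: escalate each historical cost to a common base year, then scale it to the new project's parameters. The inflation index, the mass driver, and the 0.7 scaling exponent are all placeholder assumptions, not values from the paper.

```python
# Hypothetical inflation index, normalized so the 2007 base year equals 1.00.
inflation_index = {2000: 0.78, 2003: 0.84, 2007: 1.00}

def normalize(cost, year, hist_mass_kg, new_mass_kg, exponent=0.7):
    """Escalate a historical cost to base-year dollars, then scale it to the
    new project's mass driver; the 0.7 exponent is a placeholder assumption."""
    escalated = cost * (inflation_index[2007] / inflation_index[year])
    return escalated * (new_mass_kg / hist_mass_kg) ** exponent

# Example: a $12M item built in 2000 at 450 kg, scaled to a 600 kg design.
print(f"${normalize(12e6, 2000, 450, 600):,.0f}")
```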
Software cost/resource modeling: Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. J.
1980-01-01
A parametric software cost estimation model prepared for JPL deep space network (DSN) data systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models, such as those of the General Research Corporation, Doty Associates, IBM (Walston-Felix), Rome Air Force Development Center, University of Maryland, and Rayleigh-Norden-Putnam. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software lifecycle statistics. The estimation model output scales a standard DSN work breakdown structure skeleton, which is then input to a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
Technology needs for lunar and Mars space transfer systems
NASA Technical Reports Server (NTRS)
Woodcock, Gordon R.; Cothran, Bradley C.; Donahue, Benjamin; Mcghee, Jerry
1991-01-01
The determination of appropriate space transportation technologies and operating modes is discussed with respect to both lunar and Mars missions. Three levels of activity are set forth to examine the sensitivity of transportation preferences including 'minimum,' 'full science,' and 'industrialization and settlement' categories. High-thrust-profile missions for lunar and Mars transportation are considered in terms of their relative advantages, and transportation options are defined in terms of propulsion and braking technologies. Costs and life-cycle cost estimates are prepared for the transportation preferences by using a parametric cost model, and a return-on-investment summary is given. Major technological needs for the programs are listed and include storable propulsion systems; cryogenic engines and fluids management; aerobraking; and nuclear thermal, nuclear electric, electric, and solar electric propulsion technologies.
Prediction of the Main Engine Power of a New Container Ship at the Preliminary Design Stage
NASA Astrophysics Data System (ADS)
Cepowski, Tomasz
2017-06-01
The paper presents mathematical relationships for forecasting the estimated main engine power of new container ships, based on data concerning vessels built in 2005-2015. The presented approximations allow the engine power to be estimated from the length between perpendiculars and the number of containers the ship will carry. The approximations were developed using simple linear regression and multivariate linear regression analysis. The presented relations have practical application for estimating the container ship engine power needed in preliminary parametric design of the ship. The analysis shows that multiple linear regression predicts the main engine power of a container ship more accurately than simple linear regression.
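As a concrete illustration of the preliminary-design use described above, the sketch below fits a multivariate linear model P = a0 + a1·LBP + a2·TEU by least squares. The ship data are invented, not the paper's 2005-2015 sample, so the fitted coefficients are illustrative only.

```python
import numpy as np

# Hypothetical design data: length between perpendiculars LBP [m],
# container capacity [TEU], and installed main engine power [kW].
lbp = np.array([210.0, 250.0, 283.0, 320.0, 350.0, 366.0])
teu = np.array([2500, 4300, 6500, 8800, 11000, 13000], dtype=float)
p_kw = np.array([21000, 32000, 45000, 57000, 68000, 72000], dtype=float)

# Multivariate linear model P = a0 + a1*LBP + a2*TEU via least squares.
X = np.column_stack([np.ones_like(lbp), lbp, teu])
coef, *_ = np.linalg.lstsq(X, p_kw, rcond=None)
a0, a1, a2 = coef

# Preliminary-design estimate for a notional 300 m, 7500 TEU ship.
print(f"P = {a0 + a1 * 300 + a2 * 7500:,.0f} kW")
```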
Memon, Aftab Hameed; Rahman, Ismail Abdul
2014-01-01
This study uncovered inhibiting factors to cost performance in large construction projects of Malaysia. Questionnaire survey was conducted among clients and consultants involved in large construction projects. In the questionnaire, a total of 35 inhibiting factors grouped in 7 categories were presented to the respondents for rating significant level of each factor. A total of 300 questionnaire forms were distributed. Only 144 completed sets were received and analysed using advanced multivariate statistical software of Structural Equation Modelling (SmartPLS v2). The analysis involved three iteration processes where several of the factors were deleted in order to make the model acceptable. The result of the analysis found that the R² value of the model is 0.422, which indicates that the developed model has a substantial impact on cost performance. Based on the final form of the model, contractor's site management category is the most prominent in exhibiting effect on cost performance of large construction projects. This finding is validated using advanced techniques of power analysis. This vigorous multivariate analysis has explicitly found the significant category which consists of several causative factors to poor cost performance in large construction projects. This will benefit all parties involved in construction projects for controlling cost overrun. PMID:24693227
Impact of aggressive management and palliative care on cancer costs in the final month of life.
Cheung, Matthew C; Earle, Craig C; Rangrej, Jagadish; Ho, Thi H; Liu, Ning; Barbera, Lisa; Saskin, Refik; Porter, Joan; Seung, Soo Jin; Mittmann, Nicole
2015-09-15
A significant share of the cost of cancer care is concentrated in the end-of-life period. Although quality measures of aggressive treatment may guide optimal care during this timeframe, little is known about whether these metrics affect costs of care. This study used population data to identify a cohort of patients who died of cancer in Ontario, Canada (2005-2009). Individuals were categorized as having or not having received aggressive end-of-life care according to quality measures related to acute institutional care or chemotherapy administration in the end-of-life period. Costs (2009 Canadian dollars) were collected over the last month of life through the linkage of health system administrative databases. Multivariate quantile regression was used to identify predictors of increased costs. Among 107,253 patients, the mean per-patient cost over the final month was $18,131 for patients receiving aggressive care and $12,678 for patients receiving nonaggressive care (P < .0001). Patients who received chemotherapy in the last 2 weeks of life also sustained higher costs than those who did not (P < .0001). For individuals receiving end-of-life care in the highest cost quintile, early and repeated palliative care consultation was associated with reduced mean per-patient costs. In a multivariate analysis, chemotherapy in the last 2 weeks of life remained predictive of increased costs (median increase, $536; P < .0001), whereas access to palliation remained predictive of lower costs (median decrease, $418; P < .0001). Cancer patients who receive aggressive end-of-life care incur 43% higher costs than those managed nonaggressively. Palliative consultation may partially offset these costs and offer resultant savings. © 2015 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
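Multivariate quantile regression of costs, as used in this study, can be set up in a few lines with statsmodels. The toy data frame below is hypothetical; `aggressive` and `palliative` are invented indicator names standing in for the study's quality measures, and the fitted coefficients bear no relation to the published estimates.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data standing in for linked administrative records.
df = pd.DataFrame({
    "cost":       [9500, 21000, 12800, 30500, 8700, 18900, 26400, 11200],
    "aggressive": [0, 1, 0, 1, 0, 1, 1, 0],   # met an aggressive-care measure
    "palliative": [1, 0, 1, 0, 1, 0, 0, 1],   # early palliative-care consult
})

# Median (q = 0.5) regression of last-month cost on the care indicators,
# mirroring the multivariate quantile regression named in the abstract.
fit = smf.quantreg("cost ~ aggressive + palliative", df).fit(q=0.5)
print(fit.params)
```

Unlike ordinary least squares, quantile regression models the median (or any other quantile) of the cost distribution, which is less sensitive to the heavy right tail typical of healthcare cost data.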
Fast Multiscale Algorithms for Wave Propagation in Heterogeneous Environments
2016-01-07
[Extraction fragments from the report's front matter survive here: reference entries (including "Nonlinear solvers for high-intensity focused ultrasound with application to cancer treatment", AIMS, Palo Alto, 2012) and a figure caption, "Density µ(t) at mode 0 for scattering of a plane Gaussian pulse from a sphere".] Two crucial components of the highly efficient, general-purpose wave simulator we envision are reliable, low-cost methods for truncating
2017-03-23
solutions obtained through their proposed method to comparative instances of a generalized assignment problem with either ordinal cost components or... method flag: designates the method by which the changed/new assignment problem instance is solved. methodFlag = 0: SMAWarmstart returns a matching...of randomized perturbations. We examine the contrasts between these methods in the context of assigning Army Officers among a set of identified
Adaptive multibeam phased array design for a Spacelab experiment
NASA Technical Reports Server (NTRS)
Noji, T. T.; Fass, S.; Fuoco, A. M.; Wang, C. D.
1977-01-01
The parametric tradeoff analyses and design for an Adaptive Multibeam Phased Array (AMPA) for a Spacelab experiment are described. This AMPA Experiment System was designed with particular emphasis on maximizing channel capacity and minimizing implementation and cost impacts for future austere maritime and aeronautical users, operating with a low-gain hemispherical-coverage antenna element, low effective radiated power, and low antenna gain-to-system noise temperature ratio.
Propulsion Study for Small Transport Aircraft Technology (STAT)
NASA Technical Reports Server (NTRS)
Gill, J. C.; Earle, R. V.; Staton, D. V.; Stolp, P. C.; Huelster, D. S.; Zolezzi, B. A.
1980-01-01
Propulsion requirements were determined for 0.5 and 0.7 Mach aircraft. Sensitivity studies were conducted on both these aircraft to determine parametrically the influence of propulsion characteristics on aircraft size and direct operating cost (DOC). Candidate technology elements and design features were identified and parametric studies conducted to select the STAT advanced engine cycle. Trade-off studies were conducted to determine those advanced technologies and design features that would offer a reduction in DOC for operation of the STAT engines. These features were incorporated in the two STAT engines. A benefit assessment was conducted comparing the STAT engines to current technology engines of the same power and to 1985 derivatives of the current technology engines. Research and development programs were recommended as part of an overall technology development plan to ensure that full commercial development of the STAT engines could be initiated in 1988.
NASA Astrophysics Data System (ADS)
Zhou, J. X.; Zhang, L.
2005-01-01
Incremental harmonic balance (IHB) formulations are derived for general multiple-degree-of-freedom (d.o.f.) non-linear autonomous systems. These formulations are developed for a four-d.o.f. aircraft wheel shimmy system with combined Coulomb and velocity-squared damping. A multi-harmonic analysis is performed and amplitudes of limit cycles are predicted. Within a large range of parametric variations with respect to aircraft taxi velocity, the IHB method can, at much lower computational cost, give results with high accuracy as compared with numerical results given by a parametric continuation method. In particular, the IHB method avoids the stiff problems emanating from numerical treatment of the aircraft wheel shimmy system equations. The development is applicable to other vibration control systems that include commonly used dry friction devices or velocity-squared hydraulic dampers.
Parametric study on laminar flow for finite wings at supersonic speeds
NASA Technical Reports Server (NTRS)
Garcia, Joseph Avila
1994-01-01
Laminar flow control has been identified as a key element in the development of the next generation of High Speed Transports. Extending the amount of laminar flow over an aircraft will increase range, payload, and altitude capabilities as well as lower fuel requirements, skin temperature, and therefore the overall cost. A parametric study to predict the extent of laminar flow for finite wings at supersonic speeds was conducted using a computational fluid dynamics (CFD) code coupled with a boundary layer stability code. The parameters investigated in this study were Reynolds number, angle of attack, and sweep. The results showed that an increase in angle of attack for specific Reynolds numbers can actually delay transition. Therefore, higher lift capability, caused by the increased angle of attack, as well as a reduction in viscous drag, due to the delay in transition, can be expected simultaneously. This results in larger payload and range.
Silva, R; Dow, P; Dubay, R; Lissandrello, C; Holder, J; Densmore, D; Fiering, J
2017-09-01
Acoustic manipulation has emerged as a versatile method for microfluidic separation and concentration of particles and cells. Most recent demonstrations of the technology use piezoelectric actuators to excite resonant modes in silicon or glass microchannels. Here, we focus on acoustic manipulation in disposable, plastic microchannels in order to enable a low-cost processing tool for point-of-care diagnostics. Unfortunately, the performance of resonant acoustofluidic devices in plastic is hampered by the lack of a predictive model. In this paper, we build and test a plastic blood-bacteria separation device informed by a design-of-experiments approach, parametric rapid prototyping, and screening by image processing. We demonstrate that the new device geometry can separate bacteria from blood while operating at a 275% greater flow rate and an 82% lower power requirement, while maintaining separation performance and resolution equivalent to the previously published plastic acoustofluidic separation device.
User data dissemination concepts for earth resources
NASA Technical Reports Server (NTRS)
Davies, R.; Scott, M.; Mitchell, C.; Torbett, A.
1976-01-01
Domestic data dissemination networks for earth-resources data in the 1985-1995 time frame were evaluated. The following topics were addressed: (1) earth-resources data sources and expected data volumes, (2) future user demand in terms of data volume and timeliness, (3) space-to-space and earth point-to-point transmission link requirements and implementation, (4) preprocessing requirements and implementation, (5) network costs, and (6) technological development to support this implementation. This study was parametric in that the data input (supply) was varied by a factor of about fifteen while the user request (demand) was varied by a factor of about nineteen. Correspondingly, the time from observation to delivery to the user was varied. This parametric evaluation was performed by a computer simulation that was based on network alternatives and resulted in preliminary transmission and preprocessing requirements. The earth-resource data sources considered were: shuttle sorties, synchronous satellites (e.g., SEOS), aircraft, and satellites in polar orbits.
Stey, Anne M; Brook, Robert H; Needleman, Jack; Hall, Bruce L; Zingmond, David S; Lawson, Elise H; Ko, Clifford Y
2015-02-01
This study aims to describe the magnitude of hospital costs among patients undergoing elective colectomy, cholecystectomy, and pancreatectomy, determine whether these costs relate as expected to duration of care, patient case-mix severity and comorbidities, and whether risk-adjusted costs vary significantly by hospital. Correctly estimating the cost of production of surgical care may help decision makers design mechanisms to improve the efficiency of surgical care. Patient data from 202 hospitals in the ACS-NSQIP were linked to Medicare inpatient claims. Patient charges were mapped to cost center cost-to-charge ratios in the Medicare cost reports to estimate costs. The association of patient case-mix severity and comorbidities with cost was analyzed using mixed effects multivariate regression. Cost variation among hospitals was quantified by estimating risk-adjusted hospital cost ratios and 95% confidence intervals from the mixed effects multivariate regression. There were 21,923 patients from 202 hospitals who underwent an elective colectomy (n = 13,945), cholecystectomy (n = 5,569), or pancreatectomy (n = 2,409). Median cost was lowest for cholecystectomy ($15,651) and highest for pancreatectomy ($37,745). Room and board costs accounted for the largest proportion (49%) of costs and were correlated with length of stay, R = 0.89, p < 0.001. The patient case-mix severity and comorbidity variables most associated with cost were American Society of Anesthesiologists (ASA) class IV (estimate 1.72, 95% CI 1.57 to 1.87) and fully dependent functional status (estimate 1.63, 95% CI 1.53 to 1.74). After risk-adjustment, 66 hospitals had significantly lower costs than the average hospital and 57 hospitals had significantly higher costs. The hospital cost estimates appear to be consistent with clinical expectations of hospital resource use and differ significantly among 202 hospitals after risk-adjustment for preoperative patient characteristics and procedure type. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross
2016-06-01
To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time activity curves (GAUSS). dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region-of-interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
Robinson, James C; Brown, Timothy T
2014-09-01
To quantify the potential reduction in hospital costs from adoption of best local practices in supply chain management and discharge planning. We performed multivariate statistical analyses of the association between total variable cost per procedure and medical device price and length of stay, controlling for patient and hospital characteristics. Ten hospitals in 1 major metropolitan area supplied patient-level administrative data on 9778 patients undergoing joint replacement, spine fusion, or cardiac rhythm management (CRM) procedures in 2008 and 2010. The impact on each hospital of matching lowest local market device prices and lowest patient length of stay (LOS) was calculated using multivariate regression analysis controlling for patient demographics, diagnoses, comorbidities, and complications. Average variable costs ranged from $11,315 for joint replacement to $16,087 for CRM and $18,413 for spine fusion. Implantable medical devices accounted for a large share of each procedure's variable costs: 44% for joint replacement, 39% for spine fusion, and 59% for CRM. Device prices and patient length of stay exhibited wide variation across hospitals. Total potential hospital cost savings from achieving best local practices in device prices and patient length of stay are 14.5% for joint replacement, 18.8% for spine fusion, and 29.1% for CRM. Hospitals have opportunities for cost reduction from adoption of best local practices in supply chain management and discharge planning.
Potentialities of TEC topping: A simplified view of parametric effects
NASA Technical Reports Server (NTRS)
Morris, J. F.
1980-01-01
The benefits of thermionic-energy-conversion (TEC)-topped power plants and methods of increasing conversion efficiency are discussed. Reductions in the cost of TEC modules yield direct decreases in the cost of electricity (COE) from TEC-topped central station power plants. Simplified COE versus overall-efficiency charts are presented to illustrate this trend. Additional capital-cost diminution results from designing more compact furnaces with the considerably increased heat transfer rates allowable and desirable for high temperature TEC and heat pipes. Such improvements can evolve from the protection against hot corrosion and slag, as well as the thermal expansion compatibility, offered by silicon-carbide clads on TEC heating surfaces. Greater efficiencies and far fewer modules are possible with high-temperature, high-power-density TEC: this decreases capital and fuel costs much more and substantially increases electric power outputs for fixed fuel inputs. In addition to more electricity, less pollution, and lower costs, TEC topping used directly in coal-combustion products contributes balance-of-payments gains.
Application of Risk within Net Present Value Calculations for Government Projects
NASA Technical Reports Server (NTRS)
Grandl, Paul R.; Youngblood, Alisha D.; Componation, Paul; Gholston, Sampson
2007-01-01
In January 2004, President Bush announced a new vision for space exploration. This included retirement of the current Space Shuttle fleet by 2010 and the development of a new set of launch vehicles. The President's vision did not include significant increases in the NASA budget, so these development programs need to be cost-conscious. Current trade study procedures address factors such as performance, reliability, safety, manufacturing, maintainability, operations, and costs. It would be desirable, however, to have increased insight into the cost factors behind each of the proposed system architectures. This paper reports on a set of component trade studies completed on the upper stage engine for the new launch vehicles. Increased insight into architecture costs was developed by including a Net Present Value (NPV) method and applying a set of associated risks to the base parametric cost data. The use of the NPV method along with the risks was found to add fidelity to the trade study and provide additional information to support the selection of a more robust design architecture.
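A minimal sketch of the paper's idea, layering risk onto parametric base costs before computing NPV, might look like the following. The cash-flow profile, the 3% discount rate, and the triangular risk-factor distribution are all assumptions for illustration, not values from the trade studies themselves.

```python
import numpy as np

def npv(rate, cash_flows):
    """Net present value of cash flows indexed by year (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical architecture: parametric base costs per year (negative = outlay),
# inflated by a multiplicative risk factor drawn from a triangular distribution.
base = np.array([-120e6, -260e6, -310e6, -180e6, -90e6])
rng = np.random.default_rng(7)
risk = rng.triangular(1.0, 1.15, 1.5, size=(10_000, base.size))

npvs = np.array([npv(0.03, base * r) for r in risk])
print(f"mean NPV = ${npvs.mean() / 1e6:,.0f}M, "
      f"5th-95th pct = ${np.percentile(npvs, 5) / 1e6:,.0f}M to "
      f"${np.percentile(npvs, 95) / 1e6:,.0f}M")
```

Comparing the resulting NPV distributions, rather than single point estimates, is what lets a trade study distinguish a cheap-but-risky architecture from a robust one.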
Heuristic-driven graph wavelet modeling of complex terrain
NASA Astrophysics Data System (ADS)
Cioacǎ, Teodor; Dumitrescu, Bogdan; Stupariu, Mihai-Sorin; Pǎtru-Stupariu, Ileana; Nǎpǎrus, Magdalena; Stoicescu, Ioana; Peringer, Alexander; Buttler, Alexandre; Golay, François
2015-03-01
We present a novel method for building a multi-resolution representation of large digital surface models. The surface points coincide with the nodes of a planar graph which can be processed using a critically sampled, invertible lifting scheme. To drive the lazy wavelet node partitioning, we employ an attribute aware cost function based on the generalized quadric error metric. The resulting algorithm can be applied to multivariate data by storing additional attributes at the graph's nodes. We discuss how the cost computation mechanism can be coupled with the lifting scheme and examine the results by evaluating the root mean square error. The algorithm is experimentally tested using two multivariate LiDAR sets representing terrain surface and vegetation structure with different sampling densities.
Improving multivariate Horner schemes with Monte Carlo tree search
NASA Astrophysics Data System (ADS)
Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.
2013-11-01
Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
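The greedy baseline that the MCTS approach improves on is simple to state and implement: repeatedly factor out the variable occurring in the most terms. The toy implementation below operates on a dictionary of exponent tuples and prints a nested Horner form; it illustrates the heuristic only, not the MCTS search itself.

```python
from collections import Counter

# Polynomial as {exponent tuple: coefficient}; variables are x0, x1, ...
# The example below encodes x0^2*x1 + 3*x0*x1 + 5*x1 + 7.
poly = {(2, 1): 1, (1, 1): 3, (0, 1): 5, (0, 0): 7}

def horner(p, nvars):
    """Greedy multivariate Horner: repeatedly factor out the variable that
    occurs in the most terms (most-occurring-variable-first heuristic)."""
    p = {e: c for e, c in p.items() if c != 0}
    if not p:
        return "0"
    counts = Counter(i for e in p for i in range(nvars) if e[i] > 0)
    if not counts:                       # only a constant term remains
        return str(p[(0,) * nvars])
    v, _ = counts.most_common(1)[0]
    inner, rest = {}, {}                 # terms with / without variable x_v
    for e, c in p.items():
        if e[v] > 0:
            e2 = e[:v] + (e[v] - 1,) + e[v + 1:]   # divide out one power of x_v
            inner[e2] = c
        else:
            rest[e] = c
    h = horner(inner, nvars)
    expr = f"x{v}" if h == "1" else f"x{v}*({h})"
    return f"{expr} + {horner(rest, nvars)}" if rest else expr

print(horner(poly, 2))   # -> x1*(x0*(x0 + 3) + 5) + 7
```

The nested form evaluates the example with two multiplications by x0 and one by x1, versus the five multiplications of the expanded form; the MCTS contribution of the paper is searching over better variable orderings than the greedy count.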
Hospital costs of nosocomial multi-drug resistant Pseudomonas aeruginosa acquisition.
Morales, Eva; Cots, Francesc; Sala, Maria; Comas, Mercè; Belvis, Francesc; Riu, Marta; Salvadó, Margarita; Grau, Santiago; Horcajada, Juan P; Montero, Maria Milagro; Castells, Xavier
2012-05-23
We aimed to assess the hospital economic costs of nosocomial multi-drug resistant Pseudomonas aeruginosa acquisition. A retrospective study of all hospital admissions between January 1, 2005, and December 31, 2006 was carried out in a 420-bed, urban, tertiary-care teaching hospital in Barcelona (Spain). All patients with a first positive clinical culture for P. aeruginosa more than 48 h after admission were included. Patient and hospitalization characteristics were collected from hospital and microbiology laboratory computerized records. According to antibiotic susceptibility, isolates were classified as non-resistant, resistant and multi-drug resistant. Cost estimation was based on a full-costing cost accounting system and on the criteria of clinical Activity-Based Costing methods. Multivariate analyses were performed using generalized linear models of log-transformed costs. Cost estimations were available for 402 nosocomial incident P. aeruginosa positive cultures. Their distribution by antibiotic susceptibility pattern was 37.1% non-resistant, 29.6% resistant and 33.3% multi-drug resistant. The total mean economic cost per admission of patients with multi-drug resistant P. aeruginosa strains was higher than that for non-resistant strains (15,265 vs. 4,933 Euros). In multivariate analysis, resistant and multi-drug resistant strains were independently predictive of an increased hospital total cost in compared with non-resistant strains (the incremental increase in total hospital cost was more than 1.37-fold and 1.77-fold that for non-resistant strains, respectively). P. aeruginosa multi-drug resistance independently predicted higher hospital costs with a more than 70% increase per admission compared with non-resistant strains. Prevention of the nosocomial emergence and spread of antimicrobial resistant microorganisms is essential to limit the strong economic impact.
Carney, Patricia I; Yao, Jianying; Lin, Jay; Law, Amy
2017-05-01
This study evaluated healthcare costs of index procedures and during a 6-month follow-up of women who had hysteroscopic sterilization (HS) versus laparoscopic bilateral tubal ligation (LBTL). Women (18-49 years) with claims for HS and LBTL procedures were identified from the MarketScan commercial claims database (January 1, 2010, to December 31, 2012) and placed into separate cohorts. Demographics, characteristics, index procedure costs, and 6-month total healthcare costs and sterilization procedure-related costs were compared. Multivariable regression analyses were used to examine the impact of HS versus LBTL on costs. Among the study population, 12,031 had HS (mean age: 37.0 years) and 7286 had LBTL (mean age: 35.8 years). The majority (80.9%) who had HS underwent the procedure in a physician's office setting. Fewer women who had HS versus LBTL received the procedure in an inpatient setting (0.5% vs. 2.1%), an ambulatory surgical center setting (5.0% vs. 23.8%), or a hospital outpatient setting (13.4% vs. 71.9%). Mean total cost for the index sterilization procedure was lower for HS than for LBTL ($3964 vs. $5163, p < 0.0001). During the 6-month follow-up, total medical and prescription costs for all causes ($7093 vs. $7568, p < 0.0001) and sterilization procedure-related costs ($4971 vs. $5407, p < 0.0001) were lower for women who had HS versus LBTL. Multivariable regression results confirmed that costs were lower for women who had HS versus LBTL. Among commercially insured women in the United States, HS versus LBTL is associated with lower average costs for the index procedure and lower total healthcare and procedure-related costs during 6 months after the sterilization procedure.
Power enhancement via multivariate outlier testing with gene expression arrays.
Asare, Adam L; Gao, Zhong; Carey, Vincent J; Wang, Richard; Seyfert-Margolis, Vicki
2009-01-01
As the use of microarrays in human studies continues to increase, stringent quality assurance is necessary to ensure accurate experimental interpretation. We present a formal approach for microarray quality assessment that is based on dimension reduction of established measures of signal and noise components of expression followed by parametric multivariate outlier testing. We applied our approach to several data resources. First, as a negative control, we found that the Affymetrix and Illumina contributions to MAQC data were free from outliers at a nominal outlier flagging rate of alpha=0.01. Second, we created a tunable framework for artificially corrupting intensity data from the Affymetrix Latin Square spike-in experiment to allow investigation of sensitivity and specificity of quality assurance (QA) criteria. Third, we applied the procedure to 507 Affymetrix microarray GeneChips processed with RNA from human peripheral blood samples. We show that exclusion of arrays by this approach substantially increases inferential power, or the ability to detect differential expression, in large clinical studies. http://bioconductor.org/packages/2.3/bioc/html/arrayMvout.html and http://bioconductor.org/packages/2.3/bioc/html/affyContam.html affyContam (credentials: readonly/readonly)
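The generic shape of the procedure, dimension reduction followed by a parametric multivariate outlier test, can be sketched with a Mahalanobis-distance test against a chi-square cutoff. This is an illustration of the statistical idea only, not the arrayMvout implementation; the QA matrix is simulated.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
# Hypothetical QA matrix: 50 arrays x 8 signal/noise quality measures.
X = rng.normal(size=(50, 8))
X[3] += 6.0  # corrupt one array so it should be flagged

# Dimension reduction: project onto the top 3 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T

# Parametric outlier test: squared Mahalanobis distance ~ chi-square(3 df)
# under multivariate normality; flag arrays beyond the alpha = 0.01 quantile.
S_inv = np.linalg.inv(np.cov(Z, rowvar=False))
mu = Z.mean(axis=0)
d2 = np.einsum("ij,jk,ik->i", Z - mu, S_inv, Z - mu)
flagged = np.where(d2 > chi2.ppf(0.99, df=3))[0]
print("flagged arrays:", flagged)
```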
Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division
NASA Astrophysics Data System (ADS)
Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano
2013-04-01
We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.
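The core algebraic operation here, division of an integrand numerator modulo a Gröbner basis of the cut conditions, can be shown on a toy polynomial system; the polynomials below are generic illustrations, not an actual amplitude numerator.

```python
# Toy illustration of multivariate polynomial division modulo a Groebner basis.
from sympy import symbols, groebner, reduced, expand

x, y = symbols("x y")
cut_polys = [x**2 + y**2 - 1, x*y - 1]        # toy "cut condition" ideal
G = list(groebner(cut_polys, x, y, order="lex"))

numerator = x**3 + 2*x*y**2 + y
quotients, residue = reduced(numerator, G, x, y, order="lex")

# numerator == sum_i quotients[i]*G[i] + residue; the remainder plays the
# role of the polynomial residue parametrizing the cut.
check = expand(sum(q * g for q, g in zip(quotients, G)) + residue - numerator)
print(residue, check)  # check should print 0
```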
Multivariate Analysis of Conformational Changes Induced by Macromolecular Interactions
NASA Astrophysics Data System (ADS)
Mitra, Indranil; Alexov, Emil
2009-11-01
Understanding protein-protein binding and associated conformational changes is critical both for understanding the thermodynamics of protein interactions and for successful drug discovery. Our study focuses on computational analysis of plausible correlations between induced conformational changes and a set of biophysical characteristics of interacting monomers. This was done by comparing the 3D structures of unbound and bound monomers to calculate the RMSD, which is used as a measure of the structural change induced by binding. We correlate RMSD with the volumetric and interfacial charge of the monomers, the amino acid composition, the energy of binding, and the type of amino acids at the interface as predictors. The data set was analyzed with an SVM in R and SPSS, trained on a combination of a new robust evolutionary conservation signal and the monomeric properties to predict the induced RMSD. The goal of this study is to apply parametric tests and hierarchical cluster and discriminant multivariate analyses to find key predictors, which will be used to develop an algorithm predicting the magnitude of conformational changes from the structures of the interacting monomers. Results indicate that the most promising predictor is the net charge of the monomers; however, other parameters, such as the type of amino acids at the interface, contribute significantly as well.
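As a rough illustration of the regression step, the sketch below trains a support vector machine to predict binding-induced RMSD from monomer-level features; it uses scikit-learn rather than the R/SPSS tooling named above, and the three features and all data are hypothetical stand-ins.

```python
# Hypothetical sketch: SVM regression of induced RMSD on monomer properties.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 3))    # e.g., net charge, interface composition, binding energy
rmsd = 0.8 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=0.2, size=120)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, rmsd)
print(model.predict(X[:5]))      # predicted RMSD for the first five monomers
```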
NASA Astrophysics Data System (ADS)
Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration
2017-01-01
Software tools developed for the purpose of modeling CERN LHC pp collision data to aid in its interpretation are presented. Some measurements are not adequately described by a Gaussian distribution; thus an interpretation assuming Gaussian uncertainties will inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurement of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate probability distribution functions of μ as multivariate Gaussians, where the standard deviation (σ) and correlation coefficients (ρ) are parametrized. There was good success with modeling 1-D likelihood contours of μ, and the multi-dimensional distributions were well modeled within 1σ, but the model began to diverge beyond 2σ due to unmerited assumptions in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package will also be discussed. NSF International Research Experiences for Students
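The parametrization described amounts to approximating the joint likelihood of the measured μ values with a correlated multivariate Gaussian. A sketch follows; every number (the μ, σ, and ρ values) is invented for illustration.

```python
# Sketch: a multivariate Gaussian approximation to the likelihood of five
# correlated signal-strength measurements. All values are placeholders.
import numpy as np
from scipy.stats import multivariate_normal

mu_hat = np.array([1.1, 0.9, 1.0, 1.2, 0.8])    # measured signal strengths (invented)
sigma = np.array([0.2, 0.25, 0.15, 0.3, 0.2])   # parametrized standard deviations
rho = 0.3                                       # assumed common correlation

# Equicorrelated covariance: diagonal sigma_i^2, off-diagonal rho*sigma_i*sigma_j.
cov = np.outer(sigma, sigma) * (rho + (1 - rho) * np.eye(5))
approx = multivariate_normal(mean=mu_hat, cov=cov)

print(approx.logpdf(np.ones(5)))  # approximate log-likelihood at the SM point mu = 1
```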
NASA Astrophysics Data System (ADS)
Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin
2017-11-01
Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on a high-frequency-resolution analysis of the stator current. Compared with a discrete Fourier transformation, the parametric spectrum estimation technique has a higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to the large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least squares problem solved through singular value decomposition is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor; the results indicate the effectiveness and efficiency of the proposed approach.
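The final estimation step is easy to sketch: once the min-norm stage has supplied the component frequencies, amplitudes and phases follow from an SVD-based linear least squares fit. The sampling rate, frequencies, and signal below are synthetic placeholders.

```python
# Sketch: least squares amplitude/phase estimation at known frequencies.
import numpy as np

fs = 1000.0                      # sampling rate, Hz (assumed)
t = np.arange(2000) / fs
f_fund, f_fault = 50.0, 46.0     # fundamental and fault characteristic frequencies
rng = np.random.default_rng(3)
signal = (10 * np.cos(2 * np.pi * f_fund * t + 0.4)
          + 0.05 * np.cos(2 * np.pi * f_fault * t - 1.0)
          + 0.01 * rng.normal(size=t.size))

# Model: sum_k a_k*cos(2*pi*f_k*t + phi_k), linearized into cos/sin columns;
# np.linalg.lstsq solves the system via singular value decomposition.
A = np.column_stack([np.cos(2 * np.pi * f * t) for f in (f_fund, f_fault)]
                    + [np.sin(2 * np.pi * f * t) for f in (f_fund, f_fault)])
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
amps = np.hypot(coef[:2], coef[2:])
phases = np.arctan2(-coef[2:], coef[:2])
print(amps, phases)
```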
Quantitative evaluation of a thrust vector controlled transport at the conceptual design phase
NASA Astrophysics Data System (ADS)
Ricketts, Vincent Patrick
The impetus to innovate, to push the bounds and break the molds of evolutionary design trends, often comes from competition but sometimes requires catalytic political legislature. For this research endeavor, the 'catalyzing legislation' comes in response to the rise in cost of fossil fuels and the request put forth by NASA that aircraft manufacturers show reduced aircraft fuel consumption of 60% within 30 years. This necessitates that novel technologies be considered to achieve these values of improved performance. One such technology is thrust vector control (TVC). The beneficial characteristic of thrust vector control technology applied to the traditional tail-aft configuration (TAC) commercial transport is its ability to retain the operational advantages of this highly evolved aircraft type, such as cabin evacuation, ground operation, safety, and certification. This study explores whether the TVC transport concept offers improved flight performance by synergistically reducing the traditional empennage size, resulting overall in reduced weight and drag and therefore reduced aircraft fuel consumption. In particular, this study explores whether TVC technology in combination with the reduced-empennage methodology enables the TAC aircraft to evolve synergistically while complying with current safety and certification regulations. This research utilizes the multi-disciplinary parametric sizing software AVD Sizing, developed by the Aerospace Vehicle Design (AVD) Laboratory. The sizing software is responsible for visualizing the total system solution space via parametric trades and is capable of determining if the TVC technology can enable the TAC aircraft to evolve synergistically. This study indicates that the TVC plus reduced-empennage methodology shows marked improvements in performance and cost.
Radcliff, Tiffany A.; Bobroff, Linda B.; Lutes, Lesley D.; Durning, Patricia E.; Daniels, Michael J.; Limacher, Marian C.; Janicke, David M.; Martin, A. Daniel; Perri, Michael G.
2012-01-01
Background A major challenge following successful weight loss is continuing the behaviors required for long-term weight maintenance. This challenge may be exacerbated in rural areas with limited local support resources. Objective This study describes and compares program costs and cost-effectiveness for 12-month extended care lifestyle maintenance programs following an initial 6-month weight loss program. Design A 1-year prospective randomized controlled clinical trial. Participants/Setting The study included 215 female participants aged 50 or older from rural areas who completed an initial 6-month lifestyle program for weight loss. The study was conducted from June 1, 2003, to May 31, 2007. Intervention The intervention was delivered through local Cooperative Extension Service offices in rural Florida. Participants were randomly assigned to a 12-month extended care program using either individual telephone counseling (n=67), group face-to-face counseling (n=74), or a mail/control group (n=74). Main Outcome Measures Program delivery costs, weight loss, and self-reported health status were directly assessed through questionnaires and program activity logs. Costs were estimated across a range of enrollment sizes to allow inferences beyond the study sample. Statistical Analyses Performed Non-parametric and parametric tests of differences across groups for program outcomes were combined with direct program cost estimates and expected value calculations to determine which scales of operation favored alternative formats for lifestyle maintenance. Results Median weight regain during the intervention year was 1.7 kg for participants in the face-to-face format, 2.1 kg for the telephone format, and 3.1 kg for the mail/control format. For a typical group size of 13 participants, the face-to-face format had higher fixed costs, which translated into higher overall program costs ($420 per participant) when compared to individual telephone counseling ($268 per participant) and control ($226 per participant) programs. While the net weight lost after the 12-month maintenance program was higher for the face-to-face and telephone programs compared to the control group, the average cost per expected kilogram of weight lost was higher for the face-to-face program ($47/kg) compared to the other two programs (approximately $33/kg for telephone and control). Conclusions Both the scale of operations and local demand for programs are important considerations in selecting a delivery format for lifestyle maintenance. In this study, the telephone format had a lower cost but similar outcomes compared to the face-to-face format. PMID:22818246
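A quick consistency check on the figures reported above: dividing each format's program cost per participant by its cost per expected kilogram lost recovers the implied expected net weight loss (the telephone and control $/kg values are approximate in the source, so these are rough numbers).

```python
# Back-of-envelope check using only the numbers reported in the abstract.
cost_per_participant = {"face_to_face": 420, "telephone": 268, "control": 226}
cost_per_kg = {"face_to_face": 47, "telephone": 33, "control": 33}  # approx.

for fmt, cost in cost_per_participant.items():
    print(f"{fmt}: implied expected net loss ~ {cost / cost_per_kg[fmt]:.1f} kg")
```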
Botteman, M F; Meijboom, M; Foley, I; Stephens, J M; Chen, Y M; Kaura, S
2011-12-01
The use of zoledronic acid (ZOL) has recently been shown to significantly reduce the risk of new skeletal-related events (SREs) in renal cell carcinoma (RCC) patients with bone metastases. The present exploratory study assessed the cost-effectiveness of ZOL in this population, adopting a French, German, and United Kingdom (UK) government payer perspective. This cost-effectiveness model was based on a post hoc retrospective analysis of a subset of patients with RCC who were included in a larger randomized clinical trial of patients with bone metastases secondary to a variety of cancers. In the trial, patients were randomized to receive ZOL (n = 27) or placebo (n = 19) with concomitant antineoplastic therapy every 3 weeks for 9 months (core study) plus 12 months during a study extension. Since the trial did not collect costs or data on the quality-adjusted life years (QALYs) of the patients, these outcomes had to be assumed via modeling exercises. The costs of SREs were estimated using hospital DRG tariffs. These estimates were supplemented with literature-based costs where possible. Drug, administration, and supply costs were obtained from published and internet sources. Consistent with similar economic analyses, patients were assumed to experience quality-of-life decrements lasting 1 month for each SRE. Uncertainty surrounding outcomes was addressed via multivariate sensitivity analyses. Patients receiving ZOL experienced 1.07 fewer SREs than patients on placebo. Patients on ZOL experienced a gain in discounted QALYs of approximately 0.1563 in France and Germany and 0.1575 in the UK. Discounted SRE-related costs were substantially lower among ZOL than placebo patients (−€4,196 in France, −€3,880 in Germany, and −€3,355 in the UK). After taking into consideration the drug therapy costs, ZOL saved €1,358, €1,223, and €719 in France, Germany, and the UK, respectively. In the multivariate sensitivity analyses, therapy with ZOL saved costs in 67-77% of simulations, depending on the country. The cost per QALY gained for ZOL versus placebo was below the €30,000 per QALY threshold in approximately 93-94% of multivariate sensitivity analysis simulations. The present analysis suggests that ZOL saves costs and increases QALYs compared to placebo in French, German, and UK RCC patients with bone metastases. Additional prospective research may be needed to confirm these results in a larger sample of patients.
Patel, Anik R; Kessler, Jason; Braithwaite, R Scott; Nucifora, Kimberly A; Thirumurthy, Harsha; Zhou, Qinlian; Lester, Richard T; Marra, Carlo A
2017-02-01
A surge in mobile phone availability has fueled low cost short messaging service (SMS) adherence interventions. Multiple systematic reviews have concluded that some SMS-based interventions are effective at improving antiretroviral therapy (ART) adherence, and they are hypothesized to improve retention in care. The objective of this study was to evaluate the cost-effectiveness of SMS-based adherence interventions and explore the added value of retention benefits. We evaluated the cost-effectiveness of weekly SMS interventions compared to standard care among HIV+ individuals initiating ART for the first time in Kenya. We used an individual level micro-simulation model populated with data from two SMS-intervention trials, an East-African HIV+ cohort and published literature. We estimated average quality adjusted life years (QALY) and lifetime HIV-related costs from a healthcare perspective. We explored a wide range of scenarios and assumptions in one-way and multivariate sensitivity analyses. We found that SMS-based adherence interventions were cost-effective by WHO standards, with an incremental cost-effectiveness ratio (ICER) of $1,037/QALY. In the secondary analysis, potential retention benefits improved the cost-effectiveness of SMS intervention (ICER = $864/QALY). In multivariate sensitivity analyses, the interventions remained cost-effective in most analyses, but the ICER was highly sensitive to intervention costs, effectiveness and average cohort CD4 count at ART initiation. SMS interventions remained cost-effective in a test and treat scenario where individuals were assumed to initiate ART upon HIV detection. Effective SMS interventions would likely increase the efficiency of ART programs by improving HIV treatment outcomes at relatively low costs, and they could facilitate achievement of the UNAIDS goal of 90% viral suppression among those on ART by 2020.
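For readers unfamiliar with the metric, the ICER reported above is simply incremental cost divided by incremental QALYs. The sketch below uses invented cost and QALY inputs chosen only so the arithmetic reproduces the headline $1,037/QALY figure; they are not the model's actual inputs.

```python
# Minimal ICER sketch with placeholder inputs.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical lifetime costs and QALYs for SMS intervention vs. standard care:
print(icer(cost_new=12_500, qaly_new=11.0, cost_old=11_463, qaly_old=10.0))  # 1037.0
```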
Costs of treatment and complications of adult type 1 diabetes.
Franciosi, M; Lucisano, G; Amoretti, R; Capani, F; Bruttomesso, D; Di Bartolo, P; Girelli, A; Leonetti, F; Morviducci, L; Vitacolonna, E; Nicolucci, A
2013-07-01
Costs associated with diabetes represent a large burden for patients and the health-care system. However, few studies have examined the costs of diabetes treatment in adults with type 1 diabetes (T1DM). This analysis was aimed at assessing the costs of treatment associated with T1DM among adults in Italy from the national health-care system perspective. Data were collected using a questionnaire assessing resource consumption retrospectively (drugs, visits, diagnostics, hospitalisations and self-monitoring of blood glucose (SMBG)). One-year costs were calculated for the 12 months preceding the survey. Cost estimation, referred to 2006, was carried out using univariate and multivariate Poisson regression models. Fifty-eight centres enrolled 1193 patients (49.5% women; aged between 18 and 55 years; average diabetes duration 16.1 ± 9.8 years). The average annual cost for an adult patient with T1DM was €2450 (95% confidence interval (CI): 2358-2544). Insulin therapy and SMBG together accounted for 71.2% of total costs (35.6% and 35.6%, respectively); the remainder was shared by hospitalisations (18%), visits (4.0%), diagnostics (3.9%) and other drugs (2.9%). Univariate analyses showed that the presence of complications was associated with excess costs, mainly related to hospitalisation and drugs. Multivariate analyses confirmed these results, showing that the presence of microvascular plus macrovascular complications doubles the cost of treatment. Strategies of care for T1DM that can improve disease management and prevent or delay the onset of complications could represent the most important tool to reduce costs in the long term while improving clinical outcomes and quality of life. Copyright © 2012 Elsevier B.V. All rights reserved.
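A minimal sketch of the costing regression named above, a Poisson GLM of annual cost on complication status, follows; patient records are simulated, and the effect sizes are chosen only to echo the reported pattern of complications roughly doubling cost.

```python
# Sketch: quasi-Poisson regression of annual treatment cost on complications.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1193
df = pd.DataFrame({
    "micro": rng.integers(0, 2, size=n),   # microvascular complications (simulated)
    "macro": rng.integers(0, 2, size=n),   # macrovascular complications (simulated)
})
mean_cost = 2450 * np.exp(0.4 * df["micro"] + 0.35 * df["macro"])
df["annual_cost"] = rng.poisson(mean_cost)

fit = smf.glm("annual_cost ~ micro + macro", data=df,
              family=sm.families.Poisson()).fit(scale="X2")  # Pearson scale for overdispersion
print(np.exp(fit.params))  # multiplicative effects on annual cost
```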
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.
1977-01-01
A preliminary assessment of vibroacoustic test plan optimization for free-flyer STS payloads is presented, and the effects of the number of missions on alternate test plans for Spacelab sortie payloads are also examined. The component vibration failure probability and the number of components in the housekeeping subassemblies are provided. Decision models are used to evaluate the cost effectiveness of seven alternate test plans using protoflight hardware.
NASA Technical Reports Server (NTRS)
1972-01-01
The conceptual designs of four useful tilt-rotor aircraft for the 1975 to 1980 time period are presented. Parametric studies leading to design point selection are described, and the characteristics and capabilities of each configuration are presented. An assessment is made of current technology status, and additional tilt-rotor research programs are recommended to minimize the time, cost, and risk of development of these vehicles.
Wrong Signs in Regression Coefficients
NASA Technical Reports Server (NTRS)
McGee, Holly
1999-01-01
When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.
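A small, self-contained demonstration of one of these causes: when two cost drivers are nearly collinear, ordinary least squares can assign one of them a negative, wrong-sign coefficient even though both truly increase cost. The variables and data are invented for illustration.

```python
# Demonstration: multicollinearity producing an unstable, possibly wrong-sign
# coefficient. Both mass and power truly increase cost in this simulation.
import numpy as np

rng = np.random.default_rng(5)
n = 30
mass = rng.uniform(100, 1000, size=n)
power = 0.5 * mass + rng.normal(scale=1.0, size=n)   # nearly collinear with mass
cost = 2.0 * mass + 3.0 * power + rng.normal(scale=50, size=n)

X = np.column_stack([np.ones(n), mass, power])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print(beta)                              # the mass or power coefficient may come out negative
print(np.corrcoef(mass, power)[0, 1])    # correlation near 1 flags multicollinearity
```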
Non-Parametric Model Drift Detection
2016-07-01
Hurtaud, Aline; Donnadieu, Anne; Escalup, Laurence; Cottu, Paul H; Baffert, Sandrine
2016-12-01
There is no standard recommendation for metastatic breast cancer (MBC) treatment after two chemotherapy regimens. Eribulin (Halaven®) has shown a significant improvement in overall survival (OS) in this setting. Its use may, however, be hampered by its cost, which is up to three times the cost of other standard drugs. We report the clinical outcomes and health care costs of a large series of consecutive MBC patients treated with eribulin. A monocentric retrospective study was conducted at Institut Curie over 1 year (August 2012 to August 2013). Data from patients' medical records were extracted to estimate treatment and outcome patterns, and direct medical costs until the end of treatment were measured. Factors affecting cost variability were identified by multiple linear regressions and factors linked to OS by a multivariate Cox model. We included 87 MBC patients. The median OS was 10.7 months (95% CI = 8.0-13.3). By multivariate Cox analysis, independent factors of poor prognosis were an Eastern Cooperative Oncology Group (ECOG) performance status of 3, a number of metastatic sites ≥ 4 and the need for hospitalization. Per-patient costs over the whole treatment were €18,694 [95% CI: 16,028-21,360], or €2,581 [95% CI: 2,226-3,038] per month. Eribulin administration contributed 79% of per-patient costs. Innovative and expensive drugs often appear to be the main cost drivers in cancer treatment, particularly for MBC. There is an urgent need to assess clinical practice benefits. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Baker, Dorothy I.; Bice, Thomas W.
1995-01-01
A retrospective cohort design is used to estimate the effect of urinary incontinence (UI) on the public costs of home care services to elderly individuals. Multivariate analyses controlling for other individual, household, and supply characteristics demonstrate that those with UI generate significantly greater public costs for home care services.…
Heilmann, Romy M; Grellet, Aurélien; Grützner, Niels; Cranford, Shannon M; Suchodolski, Jan S; Chastant-Maillard, Sylvie; Steiner, Jörg M
2018-04-17
Previous data suggest that fecal S100A12 has clinical utility as a biomarker of chronic gastrointestinal inflammation (idiopathic inflammatory bowel disease) in both people and dogs, but the effect of gastrointestinal pathogens on fecal S100A12 concentrations is largely unknown. The role of S100A12 in parasite and viral infections is also difficult to study in traditional animal models due to the lack of S100A12 expression in rodents. Thus, the aim of this study was to evaluate fecal S100A12 concentrations in a cohort of puppies with intestinal parasites (Cystoisospora spp., Toxocara canis, Giardia sp.) and viral agents that are frequently encountered and known to cause gastrointestinal signs in dogs (coronavirus, parvovirus) as a comparative model. Spot fecal samples were collected from 307 puppies [median age (range): 7 (4-13) weeks; 29 different breeds] in French breeding kennels, and fecal scores (semiquantitative system; scores 1-13) were assigned. Fecal samples were tested for Cystoisospora spp. (C. canis and C. ohioensis), Toxocara canis, Giardia sp., as well as canine coronavirus (CCV) and parvovirus (CPV). S100A12 concentrations were measured in all fecal samples using an in-house radioimmunoassay. Statistical analyses were performed using non-parametric 2-group or multiple-group comparisons, non-parametric correlation analysis, association testing between nominal variables, and construction of a multivariate mixed model. Fecal S100A12 concentrations ranged from < 24-14,363 ng/g. Univariate analysis only showed increased fecal S100A12 concentrations in dogs shedding Cystoisospora spp. (P = 0.0384) and in dogs infected with parvovirus (P = 0.0277), whereas dogs infected with coronavirus had decreased fecal S100A12 concentrations (P = 0.0345). However, shedding of any single enteropathogen did not affect fecal S100A12 concentrations in multivariate analysis (all P > 0.05) in this study. Only fecal score and breed size had an effect on fecal S100A12 concentrations in multivariate analysis (P < 0.0001). An infection with any single enteropathogen tested in this study is unlikely to alter fecal S100A12 concentrations, and these preliminary data are important for further studies evaluating fecal S100A12 concentrations in dogs or when using fecal S100A12 concentrations as a biomarker in patients with chronic idiopathic gastrointestinal inflammation.
Hyponatremia in Guillain-Barré Syndrome.
Rumalla, Kavelin; Reddy, Adithi Y; Letchuman, Vijay; Mittal, Manoj K
2017-06-01
To evaluate incidence, risk factors, and in-hospital outcomes associated with hyponatremia in patients hospitalized for Guillain-Barré Syndrome (GBS). We identified adult patients with GBS in the Nationwide Inpatient Sample (2002-2011). Univariate and multivariable analyses were used. Among 54,778 patients hospitalized for GBS, the incidence of hyponatremia was 11.8% (compared with 4.0% in non-GBS patients) and increased from 6.9% in 2002 to 13.5% in 2011 (P < 0.0001). Risk factors associated with hyponatremia in multivariable analysis included advanced age, deficiency anemia, alcohol abuse, hypertension, and intravenous immunoglobulin (all P < 0.0001). Hyponatremia was associated with prolonged length of stay (16.07 vs. 10.41 days), increased costs ($54,001 vs. $34,125), and mortality (20.5% vs. 11.6%) (all P < 0.0001). In multivariable analysis, hyponatremia was independently associated with adverse discharge disposition (odds ratio: 2.07, 95% confidence interval, 1.91-2.25, P < 0.0001). Hyponatremia is prevalent in GBS and is detrimental to patient-centered outcomes and health care costs. Sodium levels should be carefully monitored in high-risk patients.
Analysis and assessment of STES technologies
NASA Astrophysics Data System (ADS)
Brown, D. R.; Blahnik, D. E.; Huber, H. D.
1982-12-01
Technical and economic assessments completed in FY 1982 in support of the Seasonal Thermal Energy Storage (STES) segment of the Underground Energy Storage Program included: (1) a detailed economic investigation of the cost of heat storage in aquifers, (2) documentation for AQUASTOR, a computer model for analyzing aquifer thermal energy storage (ATES) coupled with district heating or cooling, and (3) a technical and economic evaluation of several ice storage concepts. This paper summarizes the research efforts and main results of each of these three activities. In addition, a detailed economic investigation of the cost of chill storage in aquifers is currently in progress. The work parallels that done for ATES heat storage with technical and economic assumptions being varied in a parametric analysis of the cost of ATES delivered chill. The computer model AQUASTOR is the principal analytical tool being employed.
An economics systems analysis of land mobile radio telephone services
NASA Technical Reports Server (NTRS)
Leroy, B. E.; Stevenson, S. M.
1980-01-01
The economic interaction of the terrestrial and satellite systems is considered. Parametric equations are formulated to allow examination of necessary user thresholds and growth rates as a function of system costs. Conversely, first order allowable systems costs are found as a function of user thresholds and growth rates. Transitions between satellite and terrestrial service systems are examined. User growth rate density (user/year/sq km) is shown to be a key parameter in the analysis of systems compatibility. The concept of system design matching the price/demand curves is introduced and examples are given. The role of satellite systems is critically examined and the economic conditions necessary for the introduction of satellite service are identified.
Johnston, Stephen S; Juday, Timothy; Esker, Stephen; Espindle, Derek; Chu, Bong-Chul; Hebden, Tony; Uy, Jonathan
2013-01-01
This is the first study to compare the incidence and health care costs of medically attended adverse effects in atazanavir- and darunavir-based antiretroviral therapy (ART) among U.S. Medicaid patients receiving routine HIV care. This was a retrospective study using Medicaid administrative health care claims from 15 states. Subjects were HIV patients aged 18 to 64 years initiating atazanavir- or darunavir-based ART from January 1, 2003, to July 1, 2010, with continuous enrollment for 6 months before (baseline) and 6 months after (evaluation period) ART initiation and 1 or more evaluation period medical claim. Outcomes were incidence and health care costs of the following medically attended (International Classification of Diseases, Ninth Revision, Clinical Modification-coded or treated) adverse effects during the evaluation period: gastrointestinal, lipid abnormalities, diabetes/hyperglycemia, rash, and jaundice. All-cause health care costs were also determined. Patients treated with atazanavir and darunavir were propensity score matched (ratio = 3:1) by using demographic and clinical covariates. Multivariable models adjusted for covariates lacking postmatch statistical balance. Propensity-matched study sample included 1848 atazanavir- and 616 darunavir-treated patients (mean age 41 years, 50% women, 69% black). Multivariable-adjusted hazard ratios (HRs) (for darunavir, reference = atazanavir) and per-patient-per-month health care cost differences (darunavir minus atazanavir) were as follows: gastrointestinal, HR = 1.25 (P = 0.04), $43 (P = 0.13); lipid abnormalities, HR = 1.38 (P = 0.07), $3 (P = 0.88); diabetes/hyperglycemia, HR = 0.84 (P = 0.55), $13 (P = 0.69); and rash, HR = 1.11 (P = 0.23), $0 (P = 0.76); all-cause health care costs were $1086 (P<0.001). Too few instances of jaundice (11 in atazanavir and 1 in darunavir) occurred to support multivariable modeling. Medication tolerability can be critical to the success or failure of ART. Compared with darunavir-treated patients, atazanavir-treated patients had significantly fewer instances of medically attended gastrointestinal issues and more instances of jaundice and incurred significantly lower health care costs. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
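A hedged sketch of the cohort-construction step described above: estimate a propensity score from patient covariates, then match each darunavir initiator to the three atazanavir initiators nearest in score. Covariates and data are simulated placeholders, and matching here is with replacement for simplicity.

```python
# Sketch: propensity score estimation plus 3:1 nearest-neighbor matching.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
n = 3000
X = rng.normal(size=(n, 4))              # demographic/clinical covariates (simulated)
treated = rng.integers(0, 2, size=n)     # 1 = darunavir, 0 = atazanavir (simulated)

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

atv_idx = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=3).fit(ps[atv_idx].reshape(-1, 1))
_, matches = nn.kneighbors(ps[treated == 1].reshape(-1, 1))
matched_atv = atv_idx[matches]           # (n_darunavir, 3) matched atazanavir patients
print(matched_atv.shape)
```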
Predictors of health-related quality of life and costs in adults with epilepsy: a systematic review.
Taylor, Rod S; Sander, Josemir W; Taylor, Rebecca J; Baker, Gus A
2011-12-01
Given the high burden of epilepsy on both health-related quality of life (HRQoL) and costs, identification of factors that are predictive of either reduced HRQoL or increased expenditure is central to the better future targeting and optimization of existing and emerging interventions and management strategies for epilepsy. Medline, Embase, and the Cochrane Library were searched (up to July 2010) to identify studies examining the association between demographic, psychosocial, and condition-related factors and HRQoL, resource utilization, or costs in adults with epilepsy. For each study, predictor factor associations were summarized on the basis of statistical significance and direction; the results were then combined across studies. Ninety-three HRQoL and 16 resource utilization/cost studies were included. Increases in seizure frequency, seizure severity, level of depression, and level of anxiety and the presence of comorbidity were strongly associated with reduced HRQoL. The majority of studies were cross-sectional in design and had an overall methodologic quality that was judged to be "moderate" for HRQoL studies and "poor" for health care resource or cost studies. In the 53 multivariate studies, age, gender, marital status, type of seizure, age at diagnosis, and duration of epilepsy did not appear to be associated with HRQoL, whereas the predictive influence of educational and employment status, number of antiepileptic drugs (AEDs), and AED side effects was unclear. The association between predictive factors and HRQoL appeared to be consistent across individuals whether their seizures were refractory or controlled or managed by AEDs. There were insufficient multivariate studies (five) to reliably comment on the predictors of resource utilization or cost in epilepsy. In addition to seizure control, effective epilepsy management requires the early detection of those most at risk of psychological dysfunction and comorbidity, and the targeting of appropriate interventions. There is need for more rigorous studies with appropriate multivariate statistical methods that prospectively investigate the predictors of HRQoL, resource utilization, and costs in epilepsy. Wiley Periodicals, Inc. © 2011 International League Against Epilepsy.
Determinants of elevated healthcare utilization in patients with COPD.
Simon-Tuval, Tzahit; Scharf, Steven M; Maimon, Nimrod; Bernhard-Scharf, Barbara J; Reuveni, Haim; Tarasiuk, Ariel
2011-01-13
Chronic obstructive pulmonary disease (COPD) imparts a substantial economic burden on western health systems. Our objective was to analyze the determinants of elevated healthcare utilization among patients with COPD in a single-payer health system. Three hundred eighty-nine adults with COPD were matched 1:3 to controls by age, gender and area of residency. Total healthcare cost in the 5 years prior to recruitment and the presence of comorbidities were obtained from a computerized database. Health-related quality of life (HRQoL) indices were obtained using validated questionnaires in a subsample of 177 patients. Healthcare utilization was 3.4-fold higher among COPD patients compared with controls (p < 0.001). The "most costly" upper 25% of COPD patients (n = 98) consumed 63% of all costs. Multivariate analysis revealed that independent determinants of being in the "most costly" group were (OR; 95% CI): age-adjusted Charlson Comorbidity Index (1.09; 1.01-1.2), and a history of myocardial infarct (2.87; 1.5-5.5), congestive heart failure (3.52; 1.9-6.4), mild liver disease (3.83; 1.3-11.2) or diabetes (2.02; 1.1-3.6). Bivariate analysis revealed that cost increased as HRQoL declined and severity of airflow obstruction increased, but these were not independent determinants in multivariate analysis. Comorbidity burden determines elevated utilization for COPD patients. Decision makers should prioritize scarce health care resources to better care management of the "most costly" patients.
Mohr, Nicholas M; Harland, Karisa K; Shane, Dan M; Ahmed, Azeemuddin; Fuller, Brian M; Torner, James C
2016-12-01
The objective of this study was to evaluate the impact of regionalization on sepsis survival, to describe the role of inter-hospital transfer in rural sepsis care, and to measure the cost of inter-hospital transfer in a predominantly rural state. Observational case-control study using statewide administrative claims data from 2005 to 2014 in a predominantly rural Midwestern state. Mortality and marginal costs were estimated with multivariable generalized estimating equations models and with instrumental variables models. A total of 18 246 patients were included, of which 59% were transferred between hospitals. Transferred patients had higher mortality and longer hospital length-of-stay than non-transferred patients. Using a multivariable generalized estimating equations (GEE) model to adjust for potentially confounding factors, inter-hospital transfer was associated with increased mortality (aOR 1.7, 95% CI 1.5-1.9). Using an instrumental variables model, transfer was associated with a 9.2% increased risk of death. Transfer was associated with additional costs of $6897 (95% CI $5769-8024). Even when limiting to only those patients who received care in the largest hospitals, transfer was still associated with $5167 (95% CI $3696-6638) in additional cost. The majority of rural sepsis patients are transferred, and these transferred patients have higher mortality and significantly increased cost of care. Copyright © 2016 Elsevier Inc. All rights reserved.
Risk-Assessment Score and Patient Optimization as Cost Predictors for Ventral Hernia Repair.
Saleh, Sherif; Plymale, Margaret A; Davenport, Daniel L; Roth, John Scott
2018-04-01
Ventral hernia repair (VHR) is associated with complications that significantly increase healthcare costs. This study explores the associations between hospital costs for VHR and surgical complication risk-assessment scores, need for cardiac or pulmonary evaluation, and smoking or obesity counseling. An IRB-approved retrospective study of patients having undergone open VHR over 3 years was performed. Ventral Hernia Risk Score (VHRS) for surgical site occurrence and surgical site infection, and the Ventral Hernia Working Group grade were calculated for each case. Also recorded were preoperative cardiology or pulmonary evaluations, smoking cessation and weight reduction counseling, and patient goal achievement. Hospital costs were obtained from the cost accounting system for the VHR hospitalization stratified by major clinical cost drivers. Univariate regression analyses were used to compare the predictive power of the risk scores. Multivariable analysis was performed to develop a cost prediction model. The mean cost of index VHR hospitalization was $20,700. Total and operating room costs correlated with increasing CDC wound class, VHRS surgical site infection score, VHRS surgical site occurrence score, American Society of Anesthesiologists class, and Ventral Hernia Working Group (all p < 0.01). The VHRS surgical site infection scores correlated negatively with contribution margin (-280; p < 0.01). Multivariable predictors of total hospital costs for the index hospitalization included wound class, hernia defect size, age, American Society of Anesthesiologists class 3 or 4, use of biologic mesh, and 2+ mesh pieces; explaining 73% of the variance in costs (p < 0.001). Weight optimization significantly reduced direct and operating room costs (p < 0.05). Cardiac evaluation was associated with increased costs. Ventral hernia repair hospital costs are more accurately predicted by CDC wound class than VHR risk scores. A straightforward 6-factor model predicted most cost variation for VHR. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Subak, Leslee L; Goode, Patricia S; Brubaker, Linda; Kusek, John W; Schembri, Michael; Lukacz, Emily S; Kraus, Stephen R; Chai, Toby C; Norton, Peggy; Tennstedt, Sharon L
2014-08-01
The objective of the study was to estimate the effect of Burch and fascial sling surgery on out-of-pocket urinary incontinence (UI) management costs at 24 months postoperatively and to identify predictors of change in cost among women enrolled in a randomized trial comparing these procedures. Resources used for UI management (supplies, laundry, dry cleaning) were self-reported by 491 women at baseline and 24 months after surgery, and total out-of-pocket costs for UI management (in 2012 US dollars) were estimated. Data from the 2 surgical groups were combined to examine the change in cost for UI management over 24 months. Univariate and bivariate changes in cost were analyzed using the Wilcoxon signed rank test. Predictors of change in cost were examined using multivariate mixed models. At baseline, the mean (±SD) age of participants was 53 ± 10 years, and the frequency of weekly UI episodes was 23 ± 21. Weekly UI episodes decreased by 86% at 24 months (P < .001). The mean weekly cost was $16.60 ± $27.00 (median $9.39) at baseline and $4.57 ± $15.00 (median $0.10) at 24 months (P < .001), a decrease of 72%. In multivariate analyses, cost decreased by $3.38 ± $0.77 per week for each decrease of 1 UI episode per day (P < .001) and was strongly associated with greater improvement in Urogenital Distress Inventory and Incontinence Impact Questionnaire scores (P < .001) and decreased 24-hour pad weight (P < .02). Following Burch or fascial sling surgery, the UI management cost at 24 months decreased by 72% ($625 per woman per year) and was strongly associated with decreasing UI frequency. Reduced out-of-pocket expenses may be a benefit of these established urinary incontinence procedures. Copyright © 2014. Published by Mosby, Inc.
Analysis of National Rates, Cost, and Sources of Cost Variation in Adult Spinal Deformity.
Zygourakis, Corinna C; Liu, Caterina Y; Keefe, Malla; Moriates, Christopher; Ratliff, John; Dudley, R Adams; Gonzales, Ralph; Mummaneni, Praveen V; Ames, Christopher P
2018-03-01
Several studies suggest significant variation in cost for spine surgery, but there has been little research in this area for spinal deformity. To determine the utilization, cost, and factors contributing to cost for spinal deformity surgery. The cohort comprised 55,599 adults who underwent spinal deformity fusion in the 2001 to 2013 National Inpatient Sample database. Patient variables included age, gender, insurance, median income of zip code, county population, severity of illness, mortality risk, number of comorbidities, length of stay, and elective vs nonelective case. Hospital variables included bed size, wage index, hospital type (rural, urban nonteaching, urban teaching), and geographical region. The outcome was total hospital cost for deformity surgery. Statistics included univariate and multivariate regression analyses. The number of spinal deformity cases increased from 1803 in 2001 (rate: 4.16 per 100,000 adults) to 6728 in 2013 (rate: 13.9 per 100,000). Utilization of interbody fusion devices increased steadily during this time period, while bone morphogenic protein usage peaked in 2010 and declined thereafter. The mean inflation-adjusted case cost rose from $32,671 to $43,433 over the same time period. Multivariate analyses showed the following patient factors were associated with cost: age, race, insurance, severity of illness, length of stay, and elective admission (P < .01). Hospitals in the western United States and those with higher wage indices or smaller bed sizes were significantly more expensive (P < .05). The rate of adult spinal deformity surgery and the mean case cost increased from 2001 to 2013, exceeding the rate of inflation. Both patient and hospital factors are important contributors to cost variation for spinal deformity surgery. Copyright © 2017 by the Congress of Neurological Surgeons
Mohanty, Sanjay K; Srivastava, Akanksha
2013-10-01
Large scale investment in the National Rural Health Mission is expected to increase the utilization and reduce the cost of maternal care in public health centres in India. The objective of this paper is to examine recent trends in the utilization and cost of hospital-based delivery care in the Empowered Action Group (EAG) states of India. The unit data from the District Level Household Survey 3, 2007-2008 are used in the analyses. The coverage and the cost of hospital-based delivery at constant prices are analyzed for five consecutive years preceding the survey. Descriptive and multivariate analyses are used to understand the socio-economic differentials in cost and utilization of delivery care. During 2004-2008, the utilization of delivery care from public health centres increased in all eight EAG states. Adjusting for inflation, the household cost of delivery care declined for the poor, the less educated, and in public health centres in the EAG states. The cost of delivery care in private health centres has not shown any significant changes across the states. Results of the multivariate analyses suggest that time, state, place of residence, economic status, educational attainment, and delivery characteristics of the mother are significant predictors of hospital-based delivery care in India. The study demonstrates the utility of public spending on health care and provides a thrust to the ongoing debate on universal health coverage in India.
Regional cost and experience, not size or hospital inclusion, helps predict ACO success.
Schulz, John; DeCamp, Matthew; Berkowitz, Scott A
2017-06-01
The Medicare Shared Savings Program (MSSP) continues to expand and now includes 434 accountable care organizations (ACOs) serving more than 7 million beneficiaries. During 2014, 86 of these ACOs earned over $300 million in shared savings payments by promoting higher-quality patient care at a lower cost. Whether organizational characteristics, regional cost of care, or experience in the MSSP are associated with the ability to achieve shared savings remains uncertain. Using financial results from 2013 and 2014, we examined all 339 MSSP ACOs with a 2012, 2013, or 2014 start date. We used a cross-sectional analysis to examine all ACOs and a multivariate logistic model to predict the probability of achieving shared savings. Experience, as measured by years in the MSSP, was associated with success, and the ability to earn shared savings varied regionally. This variation was strongly associated with differences in regional Medicare fee-for-service per capita costs: ACOs in high-cost regions were more likely to earn savings. In the multivariate model, the number of ACO beneficiaries and the inclusion of a hospital or involvement of an academic medical center were not associated with the likelihood of earning shared savings, after accounting for regional baseline cost variation. These results suggest ACOs are learning and improving from their experience. Additionally, the results highlight regional differences in ACO success and the strong association with variation in regional per capita costs, which can inform CMS policy to help promote ACO success nationwide.
Walter, Dawn; Tousimis, Eleni; Hayes, Mary Katherine
2018-01-01
A new breast cancer treatment, brachytherapy-based accelerated partial breast radiotherapy (RT), was adopted before long-term effectiveness evidence, potentially increasing morbidity and costs compared with whole breast RT. The aim of this study was to estimate complication rates and RT-specific and 1-year costs for a cohort of female Medicare beneficiaries diagnosed with breast cancer (N = 47,969). We analyzed 2005-2007 Medicare claims using multivariable logistic regression for complications and generalized linear models (log link, gamma distribution) for costs. Overall, 11% (n = 5296) underwent brachytherapy-based RT; 9.4% had complications. Odds of any complication were higher (odds ratio [OR]: 1.62; 95% confidence interval [CI]: 1.49-1.76) for brachytherapy versus whole breast RT, and similarly for seroma (OR: 2.85; 95% CI: 1.97-4.13), wound complication/infection (OR: 1.72; 95% CI: 1.52-1.95), cellulitis (OR: 1.48; 95% CI: 1.27-1.73), and necrosis (OR: 2.07; 95% CI: 1.55-2.75). Mean RT-specific and 1-year total costs for whole breast RT were $6,375 and $19,917, respectively ($4,886 and $4,803 lower than brachytherapy; P < .0001). Multivariable analyses indicated brachytherapy yielded 76% higher RT costs (risk ratio: 1.76; 95% CI: 1.74-1.78, P < .0001) compared with whole breast RT. Brachytherapy had higher complications and costs before long-term evidence proved its effectiveness. Policies should require treatment registries with reimbursement incentives to capture surveillance data for new technologies. PMID:29502466
Cheung, Li C; Pan, Qing; Hyun, Noorie; Schiffman, Mark; Fetterman, Barbara; Castle, Philip E; Lorey, Thomas; Katki, Hormuzd A
2017-09-30
For cost-effectiveness and efficiency, many large-scale general-purpose cohort studies are being assembled within large health-care providers who use electronic health records. Two key features of such data are that incident disease is interval-censored between irregular visits and there can be pre-existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We consider prevalent disease as a point mass at time zero for clinical applications where there is no interest in the time of prevalent disease onset. We demonstrate that the naive Kaplan-Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval-censored incident disease that we call prevalence-incidence models. Parameters for parametric prevalence-incidence models, such as the logistic regression and Weibull survival (logistic-Weibull) model, are estimated by direct likelihood maximization or by EM algorithm. Non-parametric methods are proposed to calculate cumulative risks for cases without covariates. We compare naive Kaplan-Meier, logistic-Weibull, and non-parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan-Meier provided poor estimates while the logistic-Weibull model was a close fit to the non-parametric estimates. Our findings support our use of logistic-Weibull models to develop the risk estimates that underlie current US risk-based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
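A minimal sketch of the prevalence-incidence idea follows: cumulative risk is a point mass p at time zero (undiagnosed prevalent disease) plus a Weibull component for incident disease, R(t) = p + (1 − p)(1 − exp(−(t/λ)^k)). For brevity the sketch fits current-status data (one screen per subject) by direct likelihood maximization; the paper's models additionally handle interval censoring and logistic covariates for p.

```python
# Sketch: fitting a logistic-Weibull-style prevalence-incidence mixture.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(7)
n = 5000
p_true, lam_true, k_true = 0.05, 8.0, 1.5
prevalent = rng.random(n) < p_true                       # point mass at t = 0
onset = np.where(prevalent, 0.0, lam_true * rng.weibull(k_true, size=n))
screen_time = rng.uniform(0.5, 10.0, size=n)             # one screening visit each
diseased = onset <= screen_time                          # current-status outcome

def negloglik(theta):
    p, lam, k = expit(theta[0]), np.exp(theta[1]), np.exp(theta[2])
    risk = p + (1 - p) * (1 - np.exp(-(screen_time / lam) ** k))
    return -np.sum(np.where(diseased, np.log(risk), np.log(1 - risk)))

fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
print(expit(fit.x[0]), np.exp(fit.x[1]), np.exp(fit.x[2]))  # estimates of p, lam, k
```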
NASA Technical Reports Server (NTRS)
Maynard, O. E.; Brown, W. C.; Edwards, A.; Haley, J. T.; Meltz, G.; Howell, J. M.; Nathan, A.
1975-01-01
The microwave rectifier technology, approaches to the receiving antenna, topology of rectenna circuits, assembly and construction, and ROM cost estimates are discussed. Analyses and cost estimates are presented for the equipment required to transmit the ground power to an external user. Noise and harmonic considerations are presented for both the amplitron and klystron, and interference limits are identified and evaluated. A risk assessment is presented wherein technology risks are rated and ranked with regard to their importance in impacting the microwave power transmission system. The system analyses and evaluation include parametric studies of system relationships pertaining to geometry, materials, specific cost, specific weight, efficiency, converter packing, frequency selection, power distribution, power density, power output magnitude, power source, transportation and assembly. Capital costs per kW and energy costs as a function of rate of return, power source and transportation costs, as well as build cycle time, are presented. The critical technology and ground test program are discussed along with ROM costs and schedule. The orbital test program, with associated critical technology and a ground-based program based on full implementation of the defined objectives, is also discussed.
NASA Software Cost Estimation Model: An Analogy Based Estimation Model
NASA Technical Reports Server (NTRS)
Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James
2015-01-01
The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown in terms of software size. Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
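A minimal sketch of analogy-based estimation in the spirit described above: predict effort for a new project as the average of its k nearest historical analogues in feature space. The feature set (size in KSLOC, team size, heritage flag) and all data are assumed placeholders, not the NASA model's actual inputs.

```python
# Sketch: k-nearest-neighbor (analogy-based) software effort estimation.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
X = np.column_stack([
    rng.uniform(5, 2000, size=40),   # size, KSLOC (simulated historical projects)
    rng.uniform(3, 60, size=40),     # team size
    rng.integers(0, 2, size=40),     # 1 = new design, 0 = inherited
])
effort = 3.0 * X[:, 0] ** 0.9 * (1 + 0.5 * X[:, 2]) + rng.normal(scale=50, size=40)

model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=3))
model.fit(X, effort)
print(model.predict([[500.0, 25.0, 1.0]]))  # estimate for a hypothetical new project
```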
Brennan, Aline; Jackson, Arthur; Horgan, Mary; Bergin, Colm J; Browne, John P
2015-04-03
It is anticipated that demands on ambulatory HIV services will increase in coming years as a consequence of the increased life expectancy of HIV patients on highly active anti-retroviral therapy (HAART). Accurate cost data are needed to enable evidence-based policy decisions to be made about new models of service delivery, new technologies and new medications. A micro-costing study was carried out in an HIV outpatient clinic in a single regional centre in the south of Ireland. The costs of individual appointment types were estimated based on staff grade and time. Hospital resources used by HIV patients who attended the ambulatory care service in 2012 were identified and extracted from existing hospital systems. Associations between patient characteristics and costs per patient month, in 2012 euros, were examined using univariate and multivariate analyses. The average cost of providing ambulatory HIV care was found to be €973 (95% confidence interval €938-€1008) per patient month in 2012. Sensitivity analysis, varying the base-case staff time estimates by 20% and diagnostic testing costs by 60%, estimated the average cost to vary from a low of €927 per patient month to a high of €1019 per patient month. The vast majority of costs were due to the cost of HAART. In multivariate analysis, women were found to have significantly higher HAART costs per patient month, while patients over 50 years of age had significantly lower HAART costs. This study provides the estimated cost of ambulatory care in a regional HIV centre in Ireland. These data are valuable for planning services at a local level, and the identification of patient factors, such as age and gender, associated with resource use is of interest both nationally and internationally for the long-term planning of HIV care provision.
Khorgami, Zhamak; Aminian, Ali; Shoar, Saeed; Andalib, Amin; Saber, Alan A; Schauer, Philip R; Brethauer, Stacy A; Sclabas, Guido M
2017-08-01
In the current healthcare environment, bariatric surgery centers need to be cost-effective while maintaining quality. The aim of this study was to evaluate national cost of bariatric surgery to identify the factors associated with a higher cost. A retrospective analysis of 2012-2013 Healthcare Cost and Utilization Project - Nationwide Inpatient Sample (HCUP-NIS). We included all patients with a diagnosis of morbid obesity (ICD9 278.01) and a Diagnosis Related Group code related to procedures for obesity, who underwent Roux-en-Y gastric bypass (RYGB), sleeve gastrectomy (SG), or adjustable gastric banding (AGB) as their primary procedure. We converted "hospital charges" to "cost," using hospital specific cost-to-charge ratio. Inflation was adjusted using the annual consumer price index. Increased cost was defined as the top 20th percentile of the expenditure and its associated factors were analyzed using the logistic regression multivariate analysis. A total of 45,219 patients (20,966 RYGBs, 22,380 SGs, and 1,873 AGBs) were included. The median (interquartile range) calculated costs for RYGB, SG, and AGB were $12,543 ($9,970-$15,857), $10,531 ($8,248-$13,527), and $9,219 ($7,545-$12,106), respectively (P<.001). Robotic-assisted procedures had the highest impact on the cost (odds ratio 3.6, 95% confidence interval 3.2-4). Hospital cost of RYGB and SG increased linearly with the length of hospital stay and almost doubled after 7 days. Furthermore, multivariate analysis showed that certain co-morbidities and concurrent procedures were associated with an increased cost. Factors contributing to the cost variation of bariatric procedures include co-morbidities, robotic platform, complexity of surgery, and hospital length of stay. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.
Shah, Anoop D.; Bartlett, Jonathan W.; Carpenter, James; Nicholas, Owen; Hemingway, Harry
2014-01-01
Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The “true” imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001–2010) with complete data on all covariates. Variables were artificially made “missing at random,” and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data. PMID:24589914
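The comparison can be sketched with scikit-learn's IterativeImputer, which implements chained-equation imputation with a pluggable regressor; this is a stand-in for the MICE implementations used in the paper, and the data below are simulated with a deliberately nonlinear dependence.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = x1 ** 2 + rng.normal(scale=0.3, size=n)   # nonlinear dependence on x1
X = np.column_stack([x1, x2])

X_miss = X.copy()
X_miss[rng.random(n) < 0.3, 1] = np.nan        # 30% of x2 missing at random

param_imp = IterativeImputer(estimator=BayesianRidge(), random_state=0)
rf_imp = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50,
                                                          random_state=0),
                          random_state=0)

for name, imp in [("parametric", param_imp), ("random forest", rf_imp)]:
    X_hat = imp.fit_transform(X_miss)
    rmse = np.sqrt(np.mean((X_hat[:, 1] - X[:, 1]) ** 2))
    print(f"{name}: RMSE of imputed x2 = {rmse:.3f}")
```

Because the true relationship is quadratic, the forest-based imputer should recover the missing values with smaller error, mirroring the paper's finding for nonlinear missingness.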
Bossard, N; Descotes, F; Bremond, A G; Bobin, Y; De Saint Hilaire, P; Golfier, F; Awada, A; Mathevet, P M; Berrerd, L; Barbier, Y; Estève, J
2003-11-01
The prognostic value of cathepsin D has recently been recognized but, as with many quantitative tumor markers, its clinical use remains unclear, partly because of methodological issues in defining cut-off values. Guidelines have been proposed for analyzing quantitative prognostic factors, underlining the need to keep data continuous instead of categorizing them. Flexible approaches, parametric and non-parametric, have been proposed in order to improve knowledge of the functional form relating a continuous factor to the risk. We studied the prognostic value of cathepsin D in a retrospective hospital cohort of 771 patients with breast cancer, and focused our overall survival analysis, based on Cox regression, on two flexible approaches: smoothing splines and fractional polynomials. We also determined a cut-off value from the maximum likelihood estimate of a threshold model. These different approaches complemented each other for (1) identifying the functional form relating cathepsin D to the risk and obtaining a cut-off value, and (2) optimizing the adjustment for complex covariates like age at diagnosis in the final multivariate Cox model. We found a significant increase in the death rate, reaching 70% with a doubling of the level of cathepsin D, beyond the threshold of 37.5 pmol/mg. The independent prognostic impact of this marker could be confirmed, and a methodology providing appropriate ways to use markers in clinical practice was proposed.
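A minimal sketch of keeping a marker continuous in a Cox model via a flexible basis, here a cubic regression spline from patsy fed to lifelines' CoxPHFitter; the data are simulated, and the spline basis is an assumption standing in for the study's smoothing splines and fractional polynomials.

```python
import numpy as np
import pandas as pd
from patsy import dmatrix
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "cathepsin_d": rng.gamma(shape=2.0, scale=20.0, size=n),  # pmol/mg, simulated
    "age": rng.uniform(35, 80, size=n),
})
# simulate survival with a threshold-like effect of the marker
hazard = 0.02 * np.exp(0.7 * (df["cathepsin_d"] > 37.5))
df["time"] = rng.exponential(1.0 / hazard)
df["event"] = 1

# cubic regression spline basis keeps the marker continuous in the model
spline = dmatrix("cr(cathepsin_d, df=4) - 1", df, return_type="dataframe")
spline.columns = [f"cd_s{i}" for i in range(spline.shape[1])]
data = pd.concat([spline, df[["age", "time", "event"]]], axis=1)

cph = CoxPHFitter()
cph.fit(data, duration_col="time", event_col="event")
cph.print_summary()
```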
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design, such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components, have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and results of the parametric studies performed are presented.
Joish, Vijay N; Spilsbury-Cantalupo, Monica; Operschall, Elisabeth; Luong, Ba; Boklage, Susan
2013-06-01
Recent estimates suggest the prevalence of non-cystic fibrosis bronchiectasis (NCFB) may be increasing in the US. The objective of this study was to determine the current economic burden of NCFB compared with non-NCFB controls in the first year after diagnosis within a commercially enrolled US population. A retrospective matched cross-sectional case-control (1:3) study design was used. Data were derived from the MarketScan(®) Commercial Claims and Encounters Database, which captures all patient-level demographic data and all medical and pharmacy claims during the period 1 January 2005 to 31 December 2009. NCFB patients were identified using ICD-9 codes 494.0 and 494.1. Individuals with medical claims for cystic fibrosis or chronic obstructive pulmonary disease were excluded. The incremental burden of NCFB was estimated for overall and respiratory-related expenditures using multivariate regression models which adjusted for baseline characteristics and healthcare resource utilization. All demographic characteristics and economic outcomes were ascertained in the 12 months before (baseline period) and 12 months after (follow-up) the index event, which was defined as the first bronchiectasis-related medical event. A non-parametric bootstrap technique was used to calculate the 95% confidence limits for the adjusted estimate. All costs are inflation-adjusted to a baseline year of 2009 using the consumer price index. All statistical tests were conducted using SAS 9.2 and STATA 12.0. The study sample used for healthcare burden analyses had 9,146 cases and 27,438 matched controls. The majority of the sample was between 45 and 64 years of age, and 64% were female. A greater proportion of cases than controls had an increase from baseline to follow-up in both total (49 vs 40%) and respiratory-related costs (57 vs 25%). The average increase in overall and respiratory-related costs compared with controls, after adjusting for differences in baseline characteristics, was US$2,319 (95% CI 1,872-2,765) and US$1,607 (95% CI 1,406-1,809), respectively. The primary driver for this increment was an increase in outpatient visits of approximately 2 overall and 1.6 respiratory-related visits per patient per year, which translated to US$1,730 (95% CI 1,332-2,127) and US$1,253 (95% CI 1,097-1,408), respectively. This study found that the cost of managing NCFB in the first year within a commercially enrolled population may be burdensome. Compared with previously published estimates in the literature, the burden of NCFB may also be increasing.
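The percentile bootstrap used above for the confidence limits can be sketched in a few lines; the gamma-distributed costs below are simulated placeholders, not MarketScan data.

```python
import numpy as np

rng = np.random.default_rng(42)
cases = rng.gamma(2.0, 1200.0, size=500)      # simulated follow-up costs, cases
controls = rng.gamma(2.0, 1000.0, size=1500)  # simulated costs, matched controls

def boot_diff(c, k, rng):
    """One bootstrap replicate of the mean cost difference."""
    bc = rng.choice(c, size=c.size, replace=True)
    bk = rng.choice(k, size=k.size, replace=True)
    return bc.mean() - bk.mean()

boot = np.array([boot_diff(cases, controls, rng) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"incremental cost: {cases.mean() - controls.mean():.0f} "
      f"(95% CI {lo:.0f}-{hi:.0f})")
```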
COMPASS Final Report: Low Cost Robotic Lunar Lander
NASA Technical Reports Server (NTRS)
McGuire, Melissa L.; Oleson, Steven R.
2010-01-01
The COllaborative Modeling for the Parametric Assessment of Space Systems (COMPASS) team designed a robotic lunar lander to deliver an unspecified payload (greater than zero) to the lunar surface for the lowest cost in this 2006 design study. The purpose of the low-cost lunar lander design was to investigate how much payload an inexpensive chemical or electric propulsion (EP) system could deliver to the Moon's surface. The baseline spacecraft from this study was a solar-powered robotic lander, launched on a Minotaur V launch vehicle on a direct injection trajectory to the lunar surface. A Star 27 solid rocket motor performs lunar capture and 88 percent of the descent burn. The robotic lunar lander soft-lands using a hydrazine propulsion system to perform the last 10 percent of the landing maneuver, ending the descent at a near-zero, but not exactly zero, terminal velocity. This low-cost robotic lander delivers 10 kg of science payload instruments to the lunar surface.
Parametric sensitivity study for solar-assisted heat-pump systems
NASA Astrophysics Data System (ADS)
White, N. M.; Morehouse, J. H.
1981-07-01
The engineering and economic parameters affecting life-cycle costs for solar-assisted heat pump systems are investigated. The change in energy usage resulting from each engineering parameter varied was developed from computer simulations and is compared with results from a stand-alone heat pump system. Three geographical locations are considered: Washington, DC; Fort Worth, TX; and Madison, WI. Results indicate that most engineering changes to the systems studied do not provide significant energy savings. The most promising parameters to vary are the solar collector parameters τ and U_L, the heat pump capacity at the design point, and the minimum utilizable evaporator temperature. Costs associated with each change are estimated, and life-cycle costs are computed both for the engineering parameters and for economic variations in interest rate, discount rate, tax credits, fuel unit costs, and fuel inflation rates. Results indicate that none of the feasible engineering changes for the system configuration studied will make these systems economically competitive with the stand-alone heat pump without a considerable tax credit.
Storage requirement definition study
NASA Technical Reports Server (NTRS)
Stacy, L. E.; Wesling, G. C.; Zimmerman, W. F.
1980-01-01
A dish Stirling solar receiver (DSSR) and a heat pipe solar receiver (HPSR) with thermal energy storage (TES) for a 25 kWe dish Stirling solar power system are described. The thermal performance and cost effectiveness of each are analyzed minute by minute over the equivalent of one year of solar insolation. Existing designs of these two systems were used as a basis for the study; TES concepts for the DSSR and alternative TES concepts for the HPSR are presented. Parametric performance and cost studies were performed to determine the operating and cost characteristics of these systems. Data are reported for systems (1) without TES and with varying amounts of TES, (2) with and without a fossil fuel combustor, (3) with varying solar to fossil power input, and (4) with different system control assumptions. The principal effects of TES duration, collector area, engine efficiency, and fuel cost sensitivity are indicated. Development needs for each of the systems are discussed, and the need for and nature of possible future TES solar modular experiments are presented and discussed.
Velasco, Cesar; Pérez, Inaki; Podzamczer, Daniel; Llibre, Josep Maria; Domingo, Pere; González-García, Juan; Puig, Inma; Ayala, Pilar; Martín, Mayte; Trilla, Antoni; Lázaro, Pablo; Gatell, Josep Maria
2016-03-01
The financing of antiretroviral therapy (ART) is generally determined by the cost incurred in the previous year, the number of patients on treatment, and the evidence-based recommendations, but not by the clinical characteristics of the population. The aim was to establish a score relating the cost of ART to patient clinical complexity, in order to understand the costing differences between hospitals in the region that could be explained by the clinical complexity of their populations. Retrospective analysis of patients receiving ART in a tertiary hospital between 2009 and 2011. Factors potentially associated with a higher cost of ART were assessed by bivariate and multivariate analysis. Two predictive models of "high cost" were developed. The normalized estimated costs (adjusted for the complexity scores) were calculated and compared with the normalized real costs. In the index hospital, 631 (16.8%) of the 3758 patients receiving ART made up the "high-cost" subgroup, defined as those accounting for the highest 25% of spending on ART. Baseline variables that were significant predictors of high cost in the Clinic-B model in the multivariate analysis were: route of transmission of HIV, AIDS criteria, Spanish nationality, year of initiation of ART, CD4+ lymphocyte count nadir, and number of hospital admissions. The Clinic-B score ranged from 0 to 13, and the mean value (5.97) was lower than the overall mean value of the four hospitals (6.16). The clinical complexity of the HIV patient influences the cost of ART. The Clinic-B and Clinic-BF scores predicted patients with a high cost of ART and could be used to compare and allocate costs corrected for patient clinical complexity. Copyright © 2015 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
Chiong, Jun R; Kim, Sonnie; Lin, Jay; Christian, Rudell; Dasta, Joseph F
2012-01-01
The Efficacy of Vasopressin Antagonism in Heart Failure Outcome Study with Tolvaptan (EVEREST) trial showed that tolvaptan use improved heart failure (HF) signs and symptoms without serious adverse events. The objective was to evaluate the potential cost savings associated with tolvaptan usage among hospitalized hyponatremic HF patients. The Healthcare Cost and Utilization Project (HCUP) 2008 Nationwide Inpatient Sample (NIS) database was used to estimate hospital cost and length of stay (LOS) for diagnosis-related group (DRG) hospitalizations of adult (age ≥18 years) HF patients with complications and comorbidities or major complications and comorbidities. EVEREST trial data for patients with hyponatremia were used to estimate tolvaptan-associated LOS reductions. A cost offset model was constructed to evaluate the impact of tolvaptan on hospital cost and LOS, with univariate and multivariate Monte Carlo sensitivity analyses. Tolvaptan use among hyponatremic EVEREST trial HF patients was associated with a shorter hospital LOS than placebo (9.72 vs 11.44 days); 688,336 hospitalizations for HF DRGs were identified from the HCUP NIS database, with a mean LOS of 5.4 days and mean total hospital costs of $8415. Using an inpatient tolvaptan treatment duration of 4 days with a wholesale acquisition cost of $250 per day, the cost offset model estimated a LOS reduction among HF hospitalizations of 0.81 days and an estimated total cost saving of $265 per admission. Univariate and multivariate sensitivity analyses demonstrated that the cost reduction associated with tolvaptan usage is consistent across variations of the model variables. The estimated LOS reduction and cost savings projected by the cost offset model suggest a clinical and economic benefit to tolvaptan use in hyponatremic HF patients. The EVEREST trial data may not generalize well to the US population. Clinical trial patient profiles and relative LOS reductions may not be applicable to real-world patient populations.
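The headline saving can be reconstructed almost exactly from the figures quoted in the abstract: the EVEREST LOS ratio gives a roughly 15% relative reduction, which, applied to the NIS mean LOS and per-diem cost and net of four days of drug at wholesale acquisition cost, yields about $265. A back-of-envelope sketch (rounding conventions assumed):

```python
# all inputs below are quoted in the abstract
los_tolvaptan, los_placebo = 9.72, 11.44        # EVEREST hospital LOS, days
relative_reduction = 1 - los_tolvaptan / los_placebo   # ~15%

mean_los, mean_cost = 5.4, 8415.0               # HCUP NIS heart-failure DRGs
per_diem = mean_cost / mean_los                 # ~$1558 per hospital day

los_saved = mean_los * relative_reduction       # ~0.81 days
drug_cost = 4 * 250.0                           # 4 days at $250 WAC
net_saving = los_saved * per_diem - drug_cost   # ~$265 per admission
print(f"LOS saved {los_saved:.2f} d, net saving ${net_saving:.0f}")
```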
Impact of robotic technique and surgical volume on the cost of radical prostatectomy.
Hyams, Elias S; Mullins, Jeffrey K; Pierorazio, Phillip M; Partin, Alan W; Allaf, Mohamad E; Matlaga, Brian R
2013-03-01
Our present understanding of the effect of robotic surgery and surgical volume on the cost of radical prostatectomy (RP) is limited. Given the increasing pressures placed on healthcare resource utilization, such determinations of healthcare value are becoming increasingly important. Therefore, we performed a study to define the effect of robotic technology and surgical volume on the cost of RP. The state of Maryland mandates that all acute-care hospitals report encounter-level and hospital discharge data to the Health Service Cost Review Commission (HSCRC). The HSCRC was queried for men undergoing RP between 2008 and 2011 (the period during which robot-assisted laparoscopic radical prostatectomy [RALRP] was coded separately). High-volume hospitals were defined as >60 cases per year, and high-volume surgeons were defined as >40 cases per year. Multivariate regression analysis was performed to evaluate whether robotic technique and high surgical volume impacted the cost of RP. There were 1499 patients who underwent RALRP and 2565 who underwent radical retropubic prostatectomy (RRP) during the study period. The total cost for RALRP was higher than for RRP ($14,000 vs $10,100; P<0.001), based primarily on operating room charges and supply charges. Multivariate regression demonstrated that RALRP was associated with a significantly higher cost (β coeff 4.1; P<0.001), even within high-volume hospitals (β coeff 3.3; P<0.001). High-volume surgeons and high-volume hospitals, however, were associated with a significantly lower cost for RP overall. High surgeon volume was associated with lower cost for RALRP and RRP, while high institutional volume was associated with lower cost for RALRP only. High surgical volume was associated with a lower cost of RP. Even at high surgical volume, however, the cost of RALRP still exceeded that of RRP. As robotic surgery has come to dominate the healthcare marketplace, strategies to increase the role of high-volume providers may be needed to improve the cost-effectiveness of prostate cancer surgical therapy.
Nnane, Daniel Ekane
2011-11-15
Contamination of surface waters is a pervasive threat to human health, hence the need to better understand the sources and spatio-temporal variations of contaminants within river catchments. River catchment managers are required to sustainably monitor and manage the quality of surface waters. Catchment managers therefore need cost-effective, long-term, sustainable water quality monitoring and management designs to proactively protect public health and aquatic ecosystems. Multivariate and phage-lysis techniques were used to investigate spatio-temporal variations of water quality, the main polluting chemophysical and microbial parameters, and faecal micro-organism sources, and to establish 'sentry' sampling sites in the Ouse River catchment, southeast England, UK. A total of 350 river water samples were analysed for fourteen chemophysical and microbial water quality parameters in conjunction with the novel human-specific phages of Bacteroides GB-124. Annual, autumn, spring, summer, and winter principal components (PCs) explained approximately 54%, 75%, 62%, 48%, and 60%, respectively, of the total variance present in the datasets. Significant loadings of Escherichia coli, intestinal enterococci, turbidity, and human-specific Bacteroides GB-124 phages were observed in all datasets. Cluster analysis successfully grouped sampling sites into five clusters. Importantly, multivariate and phage-lysis techniques were useful in determining the sources and spatial extent of water contamination in the catchment. Though human faecal contamination was significant during dry periods, the main source of contamination was non-human. Bacteroides GB-124 phages could potentially be used for routine microbial water quality monitoring in the catchment. For a cost-effective, long-term, sustainable water quality monitoring design, E. coli or intestinal enterococci, turbidity, and Bacteroides GB-124 should be monitored all year round in this river catchment. Copyright © 2011 Elsevier B.V. All rights reserved.
Johnson, Susan L; Tabaei, Bahman P; Herman, William H
2005-02-01
To simulate the outcomes of alternative strategies for screening the U.S. population 45-74 years of age for type 2 diabetes. We simulated screening with random plasma glucose (RPG) at cut points of 100, 130, and 160 mg/dl and with a multivariate equation including RPG and other variables. Over 15 years, we simulated screening at intervals of 1, 3, and 5 years. All positive screening tests were followed by a diagnostic fasting plasma glucose or an oral glucose tolerance test. Outcomes included the numbers of false-negative, true-positive, and false-positive screening tests and the direct and indirect costs. At year 15, screening every 3 years with an RPG cut point of 100 mg/dl left 0.2 million false negatives, an RPG cut point of 130 mg/dl or the equation left 1.3 million false negatives, and an RPG cut point of 160 mg/dl left 2.8 million false negatives. Over 15 years, the absolute difference between the most sensitive and most specific screening strategies was 4.5 million true positives and 476 million false positives. Strategies using an RPG cut point of 130 mg/dl or the multivariate equation every 3 years identified 17.3 million true positives; however, the equation identified fewer false positives. The total cost of the most sensitive screening strategy was $42.7 billion and that of the most specific strategy was $6.9 billion. Screening for type 2 diabetes every 3 years with an RPG cut point of 130 mg/dl or the multivariate equation provides good yield and minimizes false-positive screening tests and costs.
Longitudinal costs of caring for people with Alzheimer's disease.
Gillespie, Paddy; O'Shea, Eamon; Cullinan, John; Buchanan, Jacqui; Bobula, Joel; Lacey, Loretto; Gallagher, Damien; Mhaolain, Aine Ni; Lawlor, Brian
2015-05-01
There has been increasing interest in the relationship between severity of disease and costs in the care of people with dementia. Much of the current evidence is based on cross-sectional data, suggesting the need to examine trends over time for this important and growing cohort of the population. This paper estimates resource use and costs of care based on longitudinal data for 72 people with dementia in Ireland. Data were collected from the Enhancing Care in Alzheimer's Disease (ECAD) study at two time points: baseline and follow-up, two years later. Patients' dependence on others was measured using the Dependence Scale (DS), while patient function was measured using the Disability Assessment for Dementia (DAD) scale. Univariate and multivariate analyses were used to explore the effects of a range of variables on formal and informal care costs. Total costs of formal and informal care over six months rose from €9,266 (standard deviation (SD): €12,947) per patient at baseline to €21,266 (SD: €26,883) at follow-up, two years later. This constituted a statistically significant (p = 0.0014) increase in costs over time, driven primarily by an increase in estimated informal care costs. In the multivariate analysis, a one-point increase in the DS score, that is, a one-unit increase in the patient's dependence on others, was associated with a 19% increase in total costs (p = 0.0610). Higher levels of dependence in people with Alzheimer's disease are significantly associated with increased costs of informal care as the disease progresses. Formal care services did not respond to increased dependence in people with dementia, leaving it to families to fill the caring gap, mainly through increased supervision with the progression of the disease.
NASA Technical Reports Server (NTRS)
Cole, Stuart K.; Wallace, Jon; Schaffer, Mark; May, M. Scott; Greenberg, Marc W.
2014-01-01
As a leader in space technology research and development, NASA is continuing the development of the Technology Estimating process, initiated in 2012, for estimating the cost and schedule of low-maturity technology research and development, where the Technology Readiness Level (TRL) is below TRL 6. NASA's Technology Roadmap comprises 14 technology areas. The focus of this continuing Technology Estimating effort included four Technology Areas (TAs): TA3 Space Power and Energy Storage, TA4 Robotics, TA8 Instruments, and TA12 Materials, to confine the research to the most abundant data pool. This research report continues the technology estimating efforts completed during 2013-2014 and addresses the refinement of the parameters selected and recommended for use in the estimating process, where the parameters developed are applicable to Cost Estimating Relationships (CERs) used in parametric cost estimating analysis. This research addresses the architecture for administration of the Technology Cost and Scheduling Estimating tool, the parameters suggested for computer software adjunct to any technology area, and the identification of gaps in the Technology Estimating process.
NASA Technical Reports Server (NTRS)
McCurry, J. B.
1995-01-01
The purpose of the TA-2 contract was to provide advanced launch vehicle concept definition and analysis to assist NASA in the identification of future launch vehicle requirements. Contracted analysis activities included vehicle sizing and performance analysis, subsystem concept definition, propulsion subsystem definition (foreign and domestic), ground operations and facilities analysis, and life cycle cost estimation. The basic period of performance of the TA-2 contract was from May 1992 through May 1993. No-cost extensions were exercised on the contract from June 1993 through July 1995. This document is part of the final report for the TA-2 contract. The final report consists of three volumes: Volume 1 is the Executive Summary, Volume 2 is Technical Results, and Volume 3 is Program Cost Estimates. This document, Volume 3, provides a work breakdown structure dictionary, a user's guide for the parametric life cycle cost estimation tool, and the final report developed by ECON, Inc., under subcontract to Lockheed Martin on TA-2, for the analysis of heavy lift launch vehicle concepts.
Antenna concepts for interstellar search systems
NASA Technical Reports Server (NTRS)
Basler, R. P.; Johnson, G. L.; Vondrak, R. R.
1977-01-01
An evaluation is made of microwave receiving systems designed to search for signals from extraterrestrial intelligence. Specific design concepts are analyzed parametrically to determine whether the optimum antenna system location is on earth, in space, or on the moon. Parameters considered include the hypothesized number of transmitting civilizations, the number of stars that must be searched to give any desired probability of receiving a signal, the antenna collecting area, the search time, the search range, and the cost. This analysis suggests that (1) search systems based on the moon are not cost-competitive, (2) if the search is extended only a few hundred light years from the earth, a Cyclops-type array on earth may be the most cost-effective system, (3) for a search extending to 500 light years or more, a substantial cost and search-time advantage can be achieved with a large spherical reflector in space with multiple feeds, (4) radio frequency interference shields can be provided for space systems, and (5) cost can range from a few hundred million to tens of billions of dollars, depending on the parameter values assumed.
The cost of colorectal cancer according to the TNM stage.
Mar, Javier; Errasti, Jose; Soto-Gordoa, Myriam; Mar-Barrutia, Gilen; Martinez-Llorente, José Miguel; Domínguez, Severina; García-Albás, Juan José; Arrospide, Arantzazu
2017-02-01
The aim of this study was to measure the cost of treatment of colorectal cancer in the Basque public health system according to the clinical stage. We retrospectively collected demographic data, clinical data and resource use for a sample of 529 patients. For stages I to III, the initial and follow-up costs were measured. The calculation of cost for stage IV combined generalized linear models, relating cost to the duration of follow-up, with parametric survival analysis. Unit costs were obtained from the analytical accounting system of the Basque Health Service. The sample included 110 patients with stage I, 171 with stage II, 158 with stage III and 90 with stage IV colorectal cancer. The initial total cost per patient was €8,644 for stage I, €12,675 for stage II and €13,034 for stage III. The main component was hospitalization cost. Mean survival for stage IV, calculated by extrapolation, was 1.27 years. Its average annual cost was €22,403, and €24,509 to death. The total annual cost of colorectal cancer, extrapolated to the whole Spanish health system, was €623.9 million. The economic burden of colorectal cancer is important and should be taken into account in decision-making. The combination of generalized linear models and survival analysis allows estimation of the cost of the metastatic stage. Copyright © 2017 AEC. Publicado por Elsevier España, S.L.U. All rights reserved.
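A minimal sketch of the stage IV costing logic: fit a parametric survival model, take its mean survival, and multiply by an annual cost. The Weibull choice and the simulated durations are assumptions; only the €22,403 annual cost is taken from the study.

```python
import numpy as np
from lifelines import WeibullFitter
from scipy.special import gamma as gamma_fn

rng = np.random.default_rng(7)
t = rng.weibull(1.2, size=90) * 1.1          # simulated stage IV survival, years
observed = rng.random(90) < 0.8              # some patients censored

wf = WeibullFitter().fit(t, event_observed=observed)
# mean of a Weibull with scale lambda_ and shape rho_: lambda * Gamma(1 + 1/rho)
mean_survival = wf.lambda_ * gamma_fn(1 + 1 / wf.rho_)

annual_cost = 22403.0                        # euros/year, from the study
print(f"mean survival {mean_survival:.2f} y, "
      f"cost to death ~ {annual_cost * mean_survival:.0f} EUR")
```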
NASA Astrophysics Data System (ADS)
DePrince, A. Eugene; Mazziotti, David A.
2010-01-01
The parametric variational two-electron reduced-density-matrix (2-RDM) method is applied to computing electronic correlation energies of medium-to-large molecular systems by exploiting the spatial locality of electron correlation within the framework of the cluster-in-molecule (CIM) approximation [S. Li et al., J. Comput. Chem. 23, 238 (2002); J. Chem. Phys. 125, 074109 (2006)]. The 2-RDMs of individual molecular fragments within a molecule are determined, and selected portions of these 2-RDMs are recombined to yield an accurate approximation to the correlation energy of the entire molecule. In addition to extending CIM to the parametric 2-RDM method, we (i) suggest a more systematic selection of atomic-orbital domains than that presented in previous CIM studies and (ii) generalize the CIM method for open-shell quantum systems. The resulting method is tested with a series of polyacetylene molecules, water clusters, and diazobenzene derivatives in minimal and nonminimal basis sets. Calculations show that the computational cost of the method scales linearly with system size. We also compute hydrogen-abstraction energies for a series of hydroxyurea derivatives. Abstraction of hydrogen from hydroxyurea is thought to be a key step in its treatment of sickle cell anemia; the design of hydroxyurea derivatives that oxidize more rapidly is one approach to devising more effective treatments.
Parametric study of transport aircraft systems cost and weight
NASA Technical Reports Server (NTRS)
Beltramo, M. N.; Trapp, D. L.; Kimoto, B. W.; Marsh, D. P.
1977-01-01
The results of a NASA study to develop production cost estimating relationships (CERs) and weight estimating relationships (WERs) for commercial and military transport aircraft at the system level are presented. The systems considered correspond to the standard weight groups defined in Military Standard 1374 and are listed. These systems make up a complete aircraft exclusive of engines. The CER for each system (or CERs, in several cases) utilizes weight as the key parameter. Weights may be determined from detailed weight statements, if available, or by using the WERs developed, which are based on technical and performance characteristics generally available during preliminary design. The CERs that were developed provide a very useful tool for making preliminary estimates of the production cost of an aircraft. Likewise, the WERs provide a useful tool for making preliminary estimates of aircraft weight based on conceptual design information.
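A weight-based CER of the familiar power-law form cost = a·W^b can be fitted by ordinary least squares in log-log space, as sketched below with invented weight-cost pairs; the power-law form is a conventional choice for illustration, not necessarily the exact functional form of the study's CERs.

```python
import numpy as np

weight = np.array([120., 340., 560., 900., 1500.])   # system weight, lb (invented)
cost = np.array([0.8, 1.9, 2.7, 4.1, 6.2])           # production cost, $M (invented)

# fit log(cost) = log(a) + b*log(W); polyfit returns [slope, intercept]
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)
print(f"CER: cost = {a:.4f} * W^{b:.3f}")
print("predicted cost at W = 700 lb:", a * 700.0 ** b)
```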
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethuraman, Latha; Fingersh, Lee J; Dykes, Katherine L
As wind turbine blade diameters and tower heights increase to capture more energy in the wind, higher structural loads result in more structural support material, increasing the cost of scaling. Weight reductions in the generator translate to overall cost savings for the system. Additive manufacturing facilitates a design-for-functionality approach, thereby removing traditional manufacturing constraints and labor costs. The most feasible additive manufacturing technology identified for large, direct-drive generators in this study is powder-binder jetting of a sand cast mold. A parametric finite element analysis optimization study is performed, optimizing for mass and deformation. Topology optimization is also employed for each parameter-optimized design. The optimized U-beam spoked web design results in a 24 percent reduction in structural mass of the rotor and a 60 percent reduction in radial deflection.
Functional Groups Based on Leaf Physiology: Are they Spatially and Temporally Robust?
NASA Technical Reports Server (NTRS)
Foster, Tammy E.; Brooks, J. Renee
2004-01-01
The functional grouping hypothesis, which suggests that complexity in ecosystem function can be simplified by grouping species with similar responses, was tested in the Florida scrub habitat. Functional groups were identified based on how species in fire-maintained Florida scrub regulate exchange of carbon and water with the atmosphere, as indicated by both instantaneous gas exchange measurements and integrated measures of function (%N, delta C-13, delta N-15, C-N ratio). Using cluster analysis, five distinct physiologically-based functional groups were identified in the fire-maintained scrub. These functional groups were tested to determine whether they were robust spatially, temporally, and with management regime. Analysis of Similarities (ANOSIM), a non-parametric multivariate analysis, indicated that these five physiologically-based groupings were not altered by plot differences (R = -0.115, p = 0.893) or by the three different management regimes: prescribed burn, mechanically treated and burned, and fire-suppressed (R = 0.018, p = 0.349). The physiological groupings also remained robust between the two climatically different years 1999 and 2000 (R = -0.027, p = 0.725). Easy-to-measure morphological characteristics indicating functional groups would be more practical for scaling and modeling ecosystem processes than detailed gas-exchange measurements; therefore we tested a variety of morphological characteristics as functional indicators. A combination of non-parametric multivariate techniques (hierarchical cluster analysis, non-metric multi-dimensional scaling, and ANOSIM) was used to compare the ability of life form, leaf thickness, and specific leaf area classifications to identify the physiologically-based functional groups. Life form classifications (ANOSIM; R = 0.629, p = 0.001) were able to depict the physiological groupings more adequately than either specific leaf area (ANOSIM; R = 0.426, p = 0.001) or leaf thickness (ANOSIM; R = 0.344, p = 0.001). The ability of life forms to depict the physiological groupings was improved by separating the parasitic Ximenia americana from the shrub category (ANOSIM; R = 0.794, p = 0.001). Therefore, a life form classification including parasites was determined to be a good indicator of the physiological processes of scrub species, and would be a useful method of grouping for scaling physiological processes to the ecosystem level.
Parametric geometric model and shape optimization of an underwater glider with blended-wing-body
NASA Astrophysics Data System (ADS)
Sun, Chunya; Song, Baowei; Wang, Peng
2015-11-01
Underwater gliders, a new kind of autonomous underwater vehicle, have many merits such as long range, extended duration, and low cost. The shape of an underwater glider is an important factor in determining its hydrodynamic efficiency. In this paper, a high lift-to-drag ratio configuration, the Blended-Wing-Body (BWB), is used to design a small civilian underwater glider. In the parametric geometric model of the BWB underwater glider, the planform is defined with Bezier curves and straight lines, and the section is defined with the symmetrical NACA 0012 airfoil. Computational investigations are carried out to study the hydrodynamic performance of the glider using the commercial Computational Fluid Dynamics (CFD) code Fluent. A Kriging-based genetic algorithm, called Efficient Global Optimization (EGO), is applied to the hydrodynamic design optimization. The results demonstrate that the BWB underwater glider has excellent hydrodynamic performance, and that the lift-to-drag ratio of the initial design is increased by 7% in the EGO process.
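The planform parametrization can be sketched with a cubic Bezier curve, as below; the control points are invented for illustration, and the NACA 0012 section is not modeled here.

```python
import numpy as np

def bezier(P, t):
    """Evaluate a cubic Bezier curve at parameters t; P is a 4x2 array of control points."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
            + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])

# hypothetical control points for a leading edge (chordwise x, spanwise y), metres
P = np.array([[0.0, 0.0], [0.3, 0.4], [0.5, 0.8], [1.1, 1.0]])
pts = bezier(P, np.linspace(0.0, 1.0, 50))
print(pts[:3])  # first few points of the leading-edge outline
```

Moving the control points changes the planform smoothly, which is what makes such a parametrization convenient inside an optimization loop like EGO.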
Parametric Loop Division for 3D Localization in Wireless Sensor Networks
Ahmad, Tanveer
2017-01-01
Localization in Wireless Sensor Networks (WSNs) has been an active topic for more than two decades. A variety of algorithms have been proposed to improve localization accuracy. However, they are either limited to two-dimensional (2D) space or require specific sensor deployments for proper operation. In this paper, we propose a three-dimensional (3D) localization scheme for WSNs based on the well-known parametric Loop division (PLD) algorithm. The proposed scheme localizes a sensor node in a region bounded by a network of anchor nodes. By iteratively shrinking that region towards its center point, the proposed scheme provides better localization accuracy than existing schemes. Furthermore, it is cost-effective and independent of environmental irregularity. We provide an analytical framework for the proposed scheme and find its lower-bound accuracy. Simulation results show that the proposed algorithm provides an average localization accuracy of 0.89 m with a standard deviation of 1.2 m. PMID:28737714
Parametric Study of HTS Coil Quench Protection Strategies
NASA Astrophysics Data System (ADS)
Seibert, Joseph; Zarnstorff, Michael; Zhai, Yuhu
2016-10-01
Next-generation fusion devices require high magnetic fields to adequately contain burning plasmas. Use of high temperature superconducting (HTS) coils to generate these magnetic fields would lower the energy cost of operation and increase the stability of the superconducting state compared to low temperature superconducting coils. However, the use of HTS coils requires developing quench protection strategies to prevent damage to the coils. One technique utilizes copper discs and other conductors mutually coupled to the HTS coil to quickly extract the current from the coil. Another technique allows conduction between HTS turns to reduce the current in the coil during quench. This project describes a parametric study of the HTS coil and resistive-conductor setup to determine limiting cases of the geometry, in an attempt to optimize current extraction and coil protection during quench scenarios. This work was supported in part by the U.S. Department of Energy, Office of Science, Office of Workforce Development for Teachers and Scientists (WDTS) under the Science Undergraduate Laboratory Internship (SULI) program.
A label-free immunoassay for Flavivirus detection by the Reflective Phantom Interface technology.
Tagliabue, Giovanni; Faoro, Valentina; Rizzo, Serena; Sblattero, Daniele; Saccani, Andrea; Riccio, Gabriele; Bellini, Tommaso; Salina, Matteo; Buscaglia, Marco; Marcello, Alessandro
2017-10-28
Flaviviruses are widespread and cause clinically relevant arboviral diseases that have an impact both locally and as imported travel-related infections. Direct detection of viraemia is of limited use, as viraemia is typically undetectable at the onset of symptoms. Therefore, diagnosis is primarily based on serology, which is complicated by high cross-reactivity across different species. The overlapping geographical distribution of the vectors in areas with weak healthcare systems, the increase in international travel, and the similarity of symptoms highlight the need for rapid and reliable multi-parametric diagnostic tests in point-of-care formats. To this end we developed a bi-parametric serological microarray using recombinant NS1 proteins from Tick-borne encephalitis virus and West Nile virus, coupled to a low-cost, label-free detection device based on the Reflective Phantom Interface (RPI) principle. Specific sequential detection of antibodies in solution demonstrates the feasibility of the approach for the surveillance and diagnosis of Flaviviruses. Copyright © 2017 Elsevier Inc. All rights reserved.
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for color images is developed to recover edges and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactorily restored color images under different blurring conditions.
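Alternating minimization can be illustrated with a toy 1-D blind deconvolution: alternate gradient steps on the signal and on the blur for a quadratic data term with Tikhonov regularization. This is a conceptual sketch only, far simpler than the paper's color-image scheme with its interchannel and spectral regularizers; all data and step sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 128, 9
pad = m // 2

x_true = np.zeros(n); x_true[30] = 1.0; x_true[80] = -0.7   # sparse "edges"
k_true = np.exp(-0.5 * ((np.arange(m) - pad) / 1.5) ** 2)
k_true /= k_true.sum()
y = np.convolve(x_true, k_true, mode="same") + 0.01 * rng.normal(size=n)

x = y.copy()                    # initialize image estimate with the blurred data
k = np.ones(m) / m              # initialize blur with a flat kernel
lam = 1e-3                      # Tikhonov regularization weight
idx = np.arange(n)

for _ in range(200):
    # image step: gradient of ||k*x - y||^2 w.r.t. x is the residual
    # correlated with k, i.e. convolution with the flipped kernel
    r = np.convolve(x, k, mode="same") - y
    x -= 0.5 * (np.convolve(r, k[::-1], mode="same") + lam * x)

    # blur step: gradient w.r.t. k_j is sum_i r_i * x_{i - j + pad}
    r = np.convolve(x, k, mode="same") - y
    xp = np.pad(x, pad)
    gk = np.array([r @ xp[idx - j + 2 * pad] for j in range(m)]) + lam * k
    k -= 0.01 * gk
    k = np.clip(k, 0.0, None); k /= k.sum()   # keep the blur nonnegative, unit-sum

print("data misfit:", np.linalg.norm(np.convolve(x, k, mode="same") - y))
```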
Willis, Michael; Asseburg, Christian; Nilsson, Andreas; Johnsson, Kristina; Kartman, Bernt
2017-03-01
Type 2 diabetes mellitus (T2DM) is chronic and progressive, and the cost-effectiveness of new treatment interventions must be established over long time horizons. Given the limited durability of drugs, assumptions regarding downstream rescue medication can drive results. Especially for insulin, for which treatment effects and adverse events are known to depend on patient characteristics, this can be problematic for health economic evaluation involving modeling. The objective was to estimate parsimonious multivariate equations of treatment effects and hypoglycemic event risks for use in parameterizing insulin rescue therapy in model-based cost-effectiveness analysis. Clinical evidence for insulin use in T2DM was identified in PubMed and from published reviews and meta-analyses. Study and patient characteristics and treatment effects and adverse event rates were extracted, and the data were used to estimate parsimonious treatment effect and hypoglycemic event risk equations using multivariate regression analysis. Data from 91 studies featuring 171 usable study arms were identified, mostly for premix and basal insulin types. Multivariate prediction equations for glycated hemoglobin A1c lowering and weight change were estimated separately for insulin-naive and insulin-experienced patients. Goodness of fit (R^2) for both outcomes was generally good, ranging from 0.44 to 0.84. Multivariate prediction equations for symptomatic, nocturnal, and severe hypoglycemic events were also estimated, though considerable heterogeneity in definitions limits their usefulness. Parsimonious and robust multivariate prediction equations were estimated for glycated hemoglobin A1c and weight change, separately for insulin-naive and insulin-experienced patients. Using these in economic simulation modeling in T2DM can improve realism and flexibility in modeling insulin rescue medication. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Häggström, Ida, E-mail: haeggsti@mskcc.org; Beattie, Bradley J.; Schmidtlein, C. Ross
2016-06-15
Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations, as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3 percentage points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
Economic analysis and assessment of syngas production using a modeling approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hakkwan; Parajuli, Prem B.; Yu, Fei
Economic analysis and modeling are essential for the development of current feedstock and process technology for bio-gasification. The objective of this study was to develop an economic model and apply it to predict the unit cost of syngas production from a micro-scale bio-gasification facility. The economic model was programmed in the C++ computer programming language and developed using a parametric cost approach, which included processes to calculate the total capital costs and the total operating costs. The model used measured economic data from the bio-gasification facility at Mississippi State University. The modeling results showed that the unit cost of syngas production was $1.217 for a 60 Nm³/h capacity bio-gasifier. The operating cost was the major part of the total production cost. The equipment purchase cost and the labor cost were the largest parts of the total capital cost and the total operating cost, respectively. Sensitivity analysis indicated that labor cost ranked highest, followed by equipment cost, loan life, feedstock cost, interest rate, utility cost, and waste treatment cost. The unit cost of syngas production increased with increases in all parameters with the exception of loan life. The annual cost with respect to equipment, labor, feedstock, waste treatment, and utilities showed a linear relationship with percent changes, while loan life and annual interest rate showed a non-linear relationship. This study provides useful information for economic analysis and assessment of syngas production using a modeling approach.
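The parametric cost structure described, total capital cost annualized and added to annual operating cost, then divided by annual output, can be sketched as below. The study's own model was written in C++; this Python sketch uses placeholder inputs except for the 60 Nm³/h capacity.

```python
def annualized_capital(capital, rate, years):
    """Spread total capital cost over the loan life via the capital recovery factor."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return capital * crf

capital = 250_000.0        # equipment, installation, etc. ($) - placeholder
operating = 60_000.0       # labor, feedstock, utilities, waste treatment ($/yr) - placeholder
rate, years = 0.06, 10     # interest rate and loan life - placeholders
output = 60 * 8000.0       # 60 Nm3/h times assumed operating hours per year

unit_cost = (annualized_capital(capital, rate, years) + operating) / output
print(f"unit cost of syngas: ${unit_cost:.3f} per Nm3")
```

The sensitivity findings follow directly from this structure: costs enter the numerator linearly, while loan life and interest rate enter through the nonlinear capital recovery factor.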
Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F
2015-01-01
Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource-consuming than the field expedition itself. In such systems, an increasingly large sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, this paper seeks (1) to determine the minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based community ecology research, and (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness results in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., a small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.
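The subsampling logic can be sketched by rarefying a simulated community matrix at increasing effort and asking when sample-to-sample distances stabilize; Bray-Curtis distances and a simple correlation with an "oversampled" reference stand in for the paper's ordination-based comparisons, and all data are simulated.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(11)
n_samples, n_taxa = 20, 30
true_p = rng.dirichlet(np.ones(n_taxa), size=n_samples)   # per-sample taxon mix

def community(counts_per_sample):
    """Draw taxon counts for every sample at a given sampling effort."""
    return np.stack([rng.multinomial(counts_per_sample, p) for p in true_p])

ref = pdist(community(2000), metric="braycurtis")          # oversampled reference
for n in [20, 40, 58, 100, 200]:
    d = pdist(community(n), metric="braycurtis")
    corr = np.corrcoef(ref, d)[0, 1]                       # Mantel-style agreement
    print(f"n = {n:4d}: correlation with reference distances = {corr:.3f}")
```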
ERIC Educational Resources Information Center
Sung, Kyung Hee; Noh, Eun Hee; Chon, Kyong Hee
2017-01-01
With increased use of constructed response items in large scale assessments, the cost of scoring has been a major consideration (Noh et al. in KICE Report RRE 2012-6, 2012; Wainer and Thissen in "Applied Measurement in Education" 6:103-118, 1993). In response to the scoring cost issues, various forms of automated system for scoring…
Optimized and Automated design of Plasma Diagnostics for Additive Manufacture
NASA Astrophysics Data System (ADS)
Stuber, James; Quinley, Morgan; Melnik, Paul; Sieck, Paul; Smith, Trevor; Chun, Katherine; Woodruff, Simon
2016-10-01
Despite having mature designs, diagnostics are usually custom-designed for each experiment. Most of the design can now be automated to reduce costs (engineering labor and capital cost). We present results from scripted physics modeling and parametric engineering design for common optical and mechanical components found in many plasma diagnostics. We also outline the process for automated design optimization, which employs scripts to communicate data from online forms through proprietary and open-source CAD and FE codes to produce a design that can be sent directly to a printer. As a demonstration of design automation, an optical beam dump, a baffle, and optical components are designed via an automated process and printed. Supported by DOE SBIR Grant DE-SC0011858.
Application of Climate Impact Metrics to Rotorcraft Design
NASA Technical Reports Server (NTRS)
Russell, Carl; Johnson, Wayne
2013-01-01
Multiple metrics are applied to the design of large civil rotorcraft, integrating minimum cost and minimum environmental impact. The design mission is passenger transport with similar range and capacity to a regional jet. Separate aircraft designs are generated for minimum empty weight, fuel burn, and environmental impact. A metric specifically developed for the design of aircraft is employed to evaluate emissions. The designs are generated using the NDARC rotorcraft sizing code, and rotor analysis is performed with the CAMRAD II aeromechanics code. Design and mission parameters such as wing loading, disk loading, and cruise altitude are varied to minimize both cost and environmental impact metrics. This paper presents the results of these parametric sweeps as well as the final aircraft designs.
Application of Climate Impact Metrics to Civil Tiltrotor Design
NASA Technical Reports Server (NTRS)
Russell, Carl R.; Johnson, Wayne
2013-01-01
Multiple metrics are applied to the design of a large civil tiltrotor, integrating minimum cost and minimum environmental impact. The design mission is passenger transport with similar range and capacity to a regional jet. Separate aircraft designs are generated for minimum empty weight, fuel burn, and environmental impact. A metric specifically developed for the design of aircraft is employed to evaluate emissions. The designs are generated using the NDARC rotorcraft sizing code, and rotor analysis is performed with the CAMRAD II aeromechanics code. Design and mission parameters such as wing loading, disk loading, and cruise altitude are varied to minimize both cost and environmental impact metrics. This paper presents the results of these parametric sweeps as well as the final aircraft designs.
Path integral learning of multidimensional movement trajectories
NASA Astrophysics Data System (ADS)
André, João; Santos, Cristina; Costa, Lino
2013-10-01
This paper explores the use of path integral methods, particularly several variants of the recent Path Integral Policy Improvement (PI2) algorithm, in learning parametrized policies for multidimensional movement. We rely on Dynamic Movement Primitives (DMPs) to codify discrete and rhythmic trajectories, and apply the PI2-CMA and PIBB methods in the learning of optimal policy parameters, according to different cost functions that inherently encode movement objectives. Additionally, we merge both of these variants and propose the PIBB-CMA algorithm, comparing all of them with the vanilla version of PI2. From the obtained results we conclude that PIBB-CMA surpasses all other methods in terms of convergence speed and final cost, which leads to an increased interest in its application to more complex robotic problems.
Information transfer satellite concept study. Volume 4: computer manual
NASA Technical Reports Server (NTRS)
Bergin, P.; Kincade, C.; Kurpiewski, D.; Leinhaupel, F.; Millican, F.; Onstad, R.
1971-01-01
The Satellite Telecommunications Analysis and Modeling Program (STAMP) provides the user with a flexible and comprehensive tool for the analysis of ITS system requirements. While obtaining minimum cost design points, the program enables the user to perform studies over a wide range of user requirements and parametric demands. The program utilizes a total system approach wherein the ground uplink and downlink, the spacecraft, and the launch vehicle are simultaneously synthesized. A steepest descent algorithm is employed to determine the minimum total system cost design subject to the fixed user requirements and imposed constraints. In the process of converging to the solution, the pertinent subsystem tradeoffs are resolved. This report documents STAMP through a technical analysis and a description of the principal techniques employed in the program.
The GATE studies: Assessing the potential of future small general aviation turbine engines
NASA Technical Reports Server (NTRS)
Strack, W. C.
1979-01-01
Four studies were completed that explore the opportunities for future general aviation turbine engines (GATE) in the 150-1000 SHP class. These studies forecast the potential impact of advanced-technology turbine engines in the post-1988 market and identified important aircraft and missions, desirable engine sizes, and engine performance and cost goals. Parametric evaluations of various engine cycles, configurations, design features, and advanced technology elements defined baseline conceptual engines for each of the important missions identified by the market analysis. Both fixed-wing and helicopter aircraft, and turboshaft, turboprop, and turbofan engines were considered. Sizable performance gains (e.g., a 20% decrease in SFC) and engine cost reductions large enough to challenge the reciprocating engine in the 300-500 SHP class were predicted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, Alasdair; Thomsen, Edwin; Reed, David
2016-04-20
A chemistry-agnostic cost-performance model is described for a nonaqueous flow battery. The model predicts flow battery performance by estimating the active reaction zone thickness at each electrode as a function of current density, state of charge, and flow rate, using measured data for electrode kinetics, electrolyte conductivity, and electrode-specific surface area. Validation of the model is conducted using 4-kW stack data at various current densities and flow rates. The model is then used to estimate the performance of a nonaqueous flow battery with electrode and electrolyte properties taken from the literature. The optimized cost for this system is estimated for various power and energy levels using component costs provided by vendors. The model allows optimization of design parameters such as electrode thickness, area, and flow path design, and of operating parameters such as power density, flow rate, and operating SOC range for various application duty cycles. A parametric analysis is done to identify the components and electrode/electrolyte properties with the highest impact on system cost for various application durations. A pathway to $100/kWh for the storage system is identified.
Design of plywood and paper flywheel rotors
NASA Astrophysics Data System (ADS)
Erdman, A. G.; Hagen, D. L.; Gaff, S. A.
1982-05-01
Technical and economic design factors of cellulosic rotors are compared with conventional materials for stationary flywheel energy storage systems. Wood species, operation in a vacuum, assembly and costs of rotors are evaluated. Wound kraft paper, twine and plywood rotors are examined. Two hub attachments are designed. Support stiffness is shown to be constrained by the material strength, rotor configuration and speed ratio. Preliminary duration-of-load tests were performed on vacuum-dried hexagonal birch plywood. Dynamic and static rotor hub fatigue equipment is designed. Moisture loss rates while vacuum drying plywood cylinders were measured, and the radial and axial diffusion coefficients were evaluated. Diffusion coefficients of epoxy-coated plywood cylinders were also obtained. The economics of cellulosic and conventional rotors were examined. Plywood rotor manufacturing costs were evaluated. The optimum economic shape for laminated rotors is shown to be cylindrical. Vacuum container costs are parametrically derived and based on material properties and costs. Containment costs are significant and are included in comparisons. The optimum design stress and wound rotor configuration are calculated for seventeen examples. Plywood rotors appear to be marginally competitive with steel hose-wire or E-glass rotors. High-performance oriented kraft paper rotors potentially provide the lowest energy storage costs in stationary systems.
Vieux, Florent; Dubois, Christophe; Allegre, Laëtitia; Mandon, Lionel; Ciantar, Laurent; Darmon, Nicole
2013-01-01
To assess the impact on food-related cost of meals to fulfill the new compulsory dietary standards for primary schools in France. A descriptive study assessed the relationship between the level of compliance with the standards of observed school meals and their food-related cost. An analytical study assessed the cost of series of meals published in professional journals, and complying or not with new dietary standards. The costs were based on prices actually paid for food used to prepare school meals. Food-related cost of meals. Parametric and nonparametric tests from a total of 42 and 120 series of 20 meals in the analytical and descriptive studies, respectively. The descriptive study indicated that meeting the standards was not related to cost. The analytical study showed that fulfilling the frequency guidelines increased the cost, whereas fulfilling the portion sizes criteria decreased it. Series of meals fully respecting the standards (ie, frequency and portion sizes) cost significantly less (-0.10 €/meal) than series not fulfilling them, because the standards recommend smaller portion sizes. Introducing portion sizes rules in dietary standards for school catering may help increase dietary quality without increasing the food cost of meals. Copyright © 2013 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Galván-Tejada, Carlos E.; Zanella-Calzada, Laura A.; Galván-Tejada, Jorge I.; Celaya-Padilla, José M.; Gamboa-Rosales, Hamurabi; Garza-Veloz, Idalia; Martinez-Fierro, Margarita L.
2017-01-01
Breast cancer is an important global health problem, and the most common type of cancer among women. Late diagnosis significantly decreases the survival rate of the patient; however, using mammography for early detection has been demonstrated to be a very important tool increasing the survival rate. The purpose of this paper is to obtain a multivariate model to classify benign and malignant tumor lesions using a computer-assisted diagnosis with a genetic algorithm in training and test datasets from mammography image features. A multivariate search was conducted to obtain predictive models with different approaches, in order to compare and validate results. The multivariate models were constructed using: Random Forest, Nearest centroid, and K-Nearest Neighbor (K-NN) strategies as cost function in a genetic algorithm applied to the features in the BCDR public databases. Results suggest that the two texture descriptor features obtained in the multivariate model have a similar or better prediction capability to classify the data outcome compared with the multivariate model composed of all the features, according to their fitness value. This model can help to reduce the workload of radiologists and present a second opinion in the classification of tumor lesions. PMID:28216571
Galván-Tejada, Carlos E; Zanella-Calzada, Laura A; Galván-Tejada, Jorge I; Celaya-Padilla, José M; Gamboa-Rosales, Hamurabi; Garza-Veloz, Idalia; Martinez-Fierro, Margarita L
2017-02-14
Breast cancer is an important global health problem, and the most common type of cancer among women. Late diagnosis significantly decreases the survival rate of the patient; however, using mammography for early detection has been demonstrated to be a very important tool increasing the survival rate. The purpose of this paper is to obtain a multivariate model to classify benign and malignant tumor lesions using a computer-assisted diagnosis with a genetic algorithm in training and test datasets from mammography image features. A multivariate search was conducted to obtain predictive models with different approaches, in order to compare and validate results. The multivariate models were constructed using: Random Forest, Nearest centroid, and K-Nearest Neighbor (K-NN) strategies as cost function in a genetic algorithm applied to the features in the BCDR public databases. Results suggest that the two texture descriptor features obtained in the multivariate model have a similar or better prediction capability to classify the data outcome compared with the multivariate model composed of all the features, according to their fitness value. This model can help to reduce the workload of radiologists and present a second opinion in the classification of tumor lesions.
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.
2009-05-01
Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability means that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
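A plug-in sketch of the BJP building blocks, per-site Box-Cox transforms plus a joint multivariate normal, is given below. The paper's MCMC inference, predictor conditioning, and missing-data handling are omitted, and the data are synthetic.

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

def fit_bjp_like(flows):
    """Box-Cox each site's flows to near-normality, then fit a joint
    multivariate normal (plug-in MLEs instead of the paper's MCMC)."""
    z, lmb = zip(*(boxcox(col) for col in flows.T))   # per-site transform
    z = np.column_stack(z)
    return z.mean(0), np.cov(z, rowvar=False), np.array(lmb)

def sample_joint(mu, cov, lmb, n=1000):
    """Draw joint samples and back-transform to the flow scale."""
    z = np.random.multivariate_normal(mu, cov, n)
    return np.column_stack([inv_boxcox(z[:, j], lmb[j]) for j in range(len(lmb))])

# Toy usage: skewed, positively correlated "streamflows" at 3 sites
rng = np.random.default_rng(0)
flows = np.exp(rng.multivariate_normal(np.zeros(3), 0.3 + 0.7 * np.eye(3), 500))
mu, cov, lmb = fit_bjp_like(flows)
sims = sample_joint(mu, cov, lmb)
```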
Rio, Daniel E.; Rawlings, Robert R.; Woltz, Lawrence A.; Gilman, Jodi; Hommer, Daniel W.
2013-01-01
A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function. PMID:23840281
Rio, Daniel E; Rawlings, Robert R; Woltz, Lawrence A; Gilman, Jodi; Hommer, Daniel W
2013-01-01
A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function.
Patient acceptance of non-invasive testing for fetal aneuploidy via cell-free fetal DNA.
Vahanian, Sevan A; Baraa Allaf, M; Yeh, Corinne; Chavez, Martin R; Kinzler, Wendy L; Vintzileos, Anthony M
2014-01-01
To evaluate factors associated with patient acceptance of noninvasive prenatal testing for trisomy 21, 18 and 13 via cell-free fetal DNA. This was a retrospective study of all patients who were offered noninvasive prenatal testing at a single institution from 1 March 2012 to 2 July 2012. Patients were identified through our perinatal ultrasound database; demographic information, testing indication and insurance coverage were compared between patients who accepted the test and those who declined. Parametric and nonparametric tests were used as appropriate. Significant variables were assessed using multivariate logistic regression. A p value < 0.05 was considered significant. Two hundred thirty-five patients were offered noninvasive prenatal testing. Ninety-three patients (40%) accepted testing and 142 (60%) declined. Women who accepted noninvasive prenatal testing were more commonly white, had private insurance and had more than one testing indication. There was no statistical difference in the number or the type of testing indications. Multivariable logistic regression analysis was then used to assess individual variables. After controlling for race, patients with public insurance were 83% less likely to accept noninvasive prenatal testing than those with private insurance (3% vs. 97%, adjusted RR 0.17, 95% CI 0.05-0.62). In our population, having public insurance was the factor most strongly associated with declining noninvasive prenatal testing.
Current status of nuclear cardiology: a limited review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Botvinick, E.H.; Dae, M.; Hattner, R.S.
1985-11-01
To summarize the current status of nuclear cardiology, the authors focus on areas that emphasize the specific advantages of nuclear cardiology methods: (a) their benign, noninvasive nature, (b) their pathophysiologic nature, and (c) the ease of their computer manipulation and analysis, permitting quantitative evaluation. The areas covered include: (a) blood pool scintigraphy and parametric imaging, (b) pharmacologic intervention for the diagnosis of ischemic heart disease, (c) scintigraphic studies for the diagnosis and prognosis of coronary artery disease, and (d) considerations of cost effectiveness.
Analytical investigation of thermal barrier coatings on advanced power generation gas turbines
NASA Technical Reports Server (NTRS)
Amos, D. J.
1977-01-01
An analytical investigation of present and advanced gas turbine power generation cycles incorporating thermal barrier turbine component coatings was performed. Approximately 50 parametric points considering simple, recuperated, and combined cycles (including gasification) with gas turbine inlet temperatures from current levels through 1644K (2500 F) were evaluated. The results indicated that thermal barriers would be an attractive means to improve performance and reduce cost of electricity for these cycles. A recommended thermal barrier development program has been defined.
NASA Technical Reports Server (NTRS)
Duffy, James B.
1992-01-01
The report describes the work breakdown structure (WBS) and its associated WBS dictionary for task area 1 of contract NAS8-39207, advanced transportation system studies (ATSS). This WBS format is consistent with the preliminary design level of detail employed by both task area 1 and task area 4 in the ATSS study and is intended to provide an estimating structure for parametric cost estimates.
Study of thermal management for space platform applications
NASA Technical Reports Server (NTRS)
Oren, J. A.
1980-01-01
Techniques for the management of the thermal energy of large space platforms using many hundreds of kilowatts over a 10-year life span were evaluated. Concepts for heat rejection, heat transport within the vehicle, and interfacing were analyzed and compared. The heat rejection systems were parametrically weight-optimized for both heat-pipe and pumped-fluid approaches. Two approaches to achieving reliability were compared in terms of performance, weight, volume, projected area, reliability, cost, and operational characteristics. Technology needs are assessed and technology advancement recommendations are made.
Solar heating and cooling technical data and systems analysis
NASA Technical Reports Server (NTRS)
Christensen, D. L.
1976-01-01
The acquisition and processing of selected parametric data for inclusion in a computerized Data Base using the Marshall Information Retrieval and Data System (MIRADS) developed by NASA-MSFC is discussed. This data base provides extensive technical and socioeconomic information related to solar energy heating and cooling on a national scale. A broadly based research approach was used to assist in the support of program management and the application of a cost-effective program for solar energy development and demonstration.
McCullagh, Laura; Schmitz, Susanne; Barry, Michael; Walsh, Cathal
2017-11-01
In Ireland, all new drugs for which reimbursement by the healthcare payer is sought undergo a health technology assessment by the National Centre for Pharmacoeconomics. The National Centre for Pharmacoeconomics estimate expected value of perfect information but not partial expected value of perfect information (owing to computational expense associated with typical methodologies). The objective of this study was to examine the feasibility and utility of estimating partial expected value of perfect information via a computationally efficient, non-parametric regression approach. This was a retrospective analysis of evaluations on drugs for cancer that had been submitted to the National Centre for Pharmacoeconomics (January 2010 to December 2014 inclusive). Drugs were excluded if cost effective at the submitted price. Drugs were excluded if concerns existed regarding the validity of the applicants' submission or if cost-effectiveness model functionality did not allow required modifications to be made. For each included drug (n = 14), value of information was estimated at the final reimbursement price, at a threshold equivalent to the incremental cost-effectiveness ratio at that price. The expected value of perfect information was estimated from probabilistic analysis. Partial expected value of perfect information was estimated via a non-parametric approach. Input parameters with a population value at least €1 million were identified as potential targets for research. All partial estimates were determined within minutes. Thirty parameters (across nine models) each had a value of at least €1 million. These were categorised. Collectively, survival analysis parameters were valued at €19.32 million, health state utility parameters at €15.81 million and parameters associated with the cost of treating adverse effects at €6.64 million. Those associated with drug acquisition costs and with the cost of care were valued at €6.51 million and €5.71 million, respectively. This research demonstrates that the estimation of partial expected value of perfect information via this computationally inexpensive approach could be considered feasible as part of the health technology assessment process for reimbursement purposes within the Irish healthcare system. It might be a useful tool in prioritising future research to decrease decision uncertainty.
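The computationally cheap, regression-based partial EVPI estimator (in the spirit of the non-parametric regression approach of Strong and colleagues) can be sketched as follows. A polynomial smoother stands in for the non-parametric regression used in practice, and the PSA sample is a toy, not study data.

```python
import numpy as np

def evppi_regression(theta, inb, deg=3):
    """Regress incremental net benefit (inb) from a PSA on the parameter
    of interest, then compare 'decide after learning theta' with
    'decide now'. Two-option case; inb > 0 favours the new drug."""
    fitted = np.polyval(np.polyfit(theta, inb, deg), theta)
    value_with_info = np.mean(np.maximum(fitted, 0.0))   # best per theta
    value_now = max(np.mean(inb), 0.0)                   # best on average
    return value_with_info - value_now

# Toy PSA: uncertainty in one survival parameter drives the decision
rng = np.random.default_rng(5)
theta = rng.normal(0.0, 1.0, 10_000)
inb = 5_000 * theta + rng.normal(0, 20_000, 10_000)      # per-patient INB
print(f"per-patient EVPPI = {evppi_regression(theta, inb):,.0f}")
```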
Vasa previa screening strategies: a decision and cost-effectiveness analysis.
Sinkey, R G; Odibo, A O
2018-05-22
The aim of this study is to perform a decision and cost-effectiveness analysis comparing four screening strategies for the antenatal diagnosis of vasa previa among singleton pregnancies. A decision-analytic model was constructed comparing vasa previa screening strategies. Published probabilities and costs were applied to four transvaginal screening scenarios which occurred at the time of mid-trimester ultrasound: no screening, ultrasound-indicated screening, screening pregnancies conceived by in vitro fertilization (IVF), and universal screening. Ultrasound-indicated screening was defined as performing a transvaginal ultrasound at the time of routine anatomy ultrasound in response to one of the following sonographic findings associated with an increased risk of vasa previa: low-lying placenta, marginal or velamentous cord insertion, or bilobed or succenturiate lobed placenta. The primary outcome was cost per quality adjusted life years (QALY) in U.S. dollars. The analysis was from a healthcare system perspective with a willingness to pay (WTP) threshold of $100,000 per QALY selected. One-way and multivariate sensitivity analyses (Monte-Carlo simulation) were performed. This decision-analytic model demonstrated that screening pregnancies conceived by IVF was the most cost-effective strategy with an incremental cost effectiveness ratio (ICER) of $29,186.50 / QALY. Ultrasound-indicated screening was the second most cost-effective with an ICER of $56,096.77 / QALY. These data were robust to all one-way and multivariate sensitivity analyses performed. Within our baseline assumptions, transvaginal ultrasound screening for vasa previa appears to be most cost-effective when performed among IVF pregnancies. However, both IVF and ultrasound-indicated screening strategies fall within contemporary willingness-to-pay thresholds, suggesting that both strategies may be appropriate to apply in clinical practice. This article is protected by copyright. All rights reserved.
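The decision metric throughout this abstract is the ICER, which reduces to a one-line computation against a willingness-to-pay threshold. The numbers in the sketch are invented, not the study's inputs.

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Illustrative only: a screening strategy vs. no screening, WTP $100k/QALY
wtp = 100_000
value = icer(cost_new=1_250, qaly_new=29.962, cost_ref=1_100, qaly_ref=29.957)
print(f"ICER = ${value:,.0f}/QALY -> {'adopt' if value < wtp else 'reject'}")
```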
Dor, Avi; Luo, Qian; Gerstein, Maya Tuchman; Malveaux, Floyd; Mitchell, Herman; Markus, Anne Rossier
We present an incremental cost-effectiveness analysis of an evidence-based childhood asthma intervention (Community Healthcare for Asthma Management and Prevention of Symptoms [CHAMPS]) to usual management of childhood asthma in community health centers. Data used in the analysis include household surveys, Medicaid insurance claims, and community health center expenditure reports. We combined our incremental cost-effectiveness analysis with a difference-in-differences multivariate regression framework. We found that CHAMPS reduced symptom days by 29.75 days per child-year and was cost-effective (incremental cost-effectiveness ratio: $28.76 per symptom-free days). Most of the benefits were due to reductions in direct medical costs. Indirect benefits from increased household productivity were relatively small.
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties in conducting experiments using existing experimental procedures, for two reasons. First, existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment; however, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle, and the existing procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new procedure is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, performing function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges above, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement of a parametric model at the beginning of an experiment; design optimization selects experimental designs on the fly, based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without assuming a parametric model as the proxy of the latent data structure, whereas existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
Greenberg, Jeffrey D; Palmer, Jacqueline B; Li, Yunfeng; Herrera, Vivian; Tsang, Yuen; Liao, Minlei
2016-01-01
Direct costs of ankylosing spondylitis (AS) and psoriatic arthritis (PsA) have not been well characterized in the United States. This study assessed healthcare resource use and direct cost of AS and PsA, and identified predictors of all-cause medical and pharmacy costs. Adults aged ≥ 18 with a diagnosis of AS and PsA were identified in the MarketScan databases between October 1, 2011, and September 30, 2012. Patients were continuously enrolled with medical and pharmacy benefits for 12 months before and after the index date (first diagnosis). Baseline demographics and comorbidities were identified. Direct costs included hospitalizations, emergency room and office visits, and pharmacy costs. Multivariable regression was used to determine whether baseline covariates were associated with direct costs. Patients with AS were younger and mostly men compared with patients with PsA. Hypertension and hyperlipidemia were the most common comorbidities in both cohorts. A higher percentage of patients with PsA used biologics and nonbiologic disease-modifying drugs (61.1% and 52.4%, respectively) compared with patients with AS (52.5% and 21.8%, respectively). Office visits were the most commonly used resource by patients with AS and PsA (∼11 visits). Annual direct medical costs [all US dollars, mean (SD)] for patients with AS and PsA were $6514 ($32,982) and $5108 ($22,258), respectively. Prescription drug costs were higher for patients with PsA [$14,174 ($15,821)] compared with patients with AS [$11,214 ($14,249)]. Multivariable regression analysis showed higher all-cause direct costs were associated with biologic use, age, and increased comorbidities in patients with AS or PsA (all p < 0.05). Biologic use, age, and comorbidities were major determinants of all-cause direct costs in patients with AS and PsA.
Workers' compensation costs among construction workers: a robust regression analysis.
Friedman, Lee S; Forst, Linda S
2009-11-01
Workers' compensation data are an important source for evaluating costs associated with construction injuries. We describe the characteristics of injured construction workers filing claims in Illinois between 2000 and 2005 and the factors associated with compensation costs using a robust regression model. In the final multivariable model, the cumulative percent temporary and permanent disability-measures of severity of injury-explained 38.7% of the variance of cost. Attorney costs explained only 0.3% of the variance of the dependent variable. The model used in this study clearly indicated that percent disability was the most important determinant of cost, although the method and uniformity of percent impairment allocation could be better elucidated. There is a need to integrate analytical methods that are suitable for skewed data when analyzing claim costs.
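A robust regression of claim cost on percent disability, in the spirit of what the study describes, can be sketched with statsmodels' Huber M-estimator. The data below are synthetic with heavy-tailed noise; the coefficient values mean nothing beyond the illustration.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic skewed claims: cost rises with percent disability, with
# heavy-tailed noise that would distort an ordinary least-squares fit.
rng = np.random.default_rng(1)
pct_disability = rng.uniform(0, 60, 300)
cost = 2_000 + 450 * pct_disability + rng.standard_t(2, 300) * 3_000

X = sm.add_constant(pct_disability)
fit = sm.RLM(cost, X, M=sm.robust.norms.HuberT()).fit()
print(fit.params)   # intercept and $/percent-disability slope
```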
Associations between host characteristics and antimicrobial resistance of Salmonella Typhimurium.
Ruddat, I; Tietze, E; Ziehm, D; Kreienbrock, L
2014-10-01
A collection of Salmonella Typhimurium isolates obtained from sporadic salmonellosis cases in humans from Lower Saxony, Germany between June 2008 and May 2010 was used to perform an exploratory risk-factor analysis on antimicrobial resistance (AMR) using comprehensive host information on sociodemographic attributes, medical history, food habits and animal contact. Multivariate resistance profiles of minimum inhibitory concentrations for 13 antimicrobial agents were analysed using a non-parametric approach with multifactorial models adjusted for phage types. Statistically significant associations were observed for consumption of antimicrobial agents, region type and three factors on egg-purchasing behaviour, indicating that besides antimicrobial use the proximity to other community members, health consciousness and other lifestyle-related attributes may play a role in the dissemination of resistances. Furthermore, a statistically significant increase in AMR from the first study year to the second year was observed.
A survey of kernel-type estimators for copula and their applications
NASA Astrophysics Data System (ADS)
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structure. Applications of copulas include areas such as finance, insurance, hydrology, and rainfall modeling, to name but a few. The flexibility of copulas allows researchers to model dependence structure beyond the Gaussian distribution. Basically, a copula is a function that couples multivariate distribution functions to their one-dimensional marginal distribution functions. In general, there are three approaches to copula estimation: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas, such as the mirror reflection kernel, the beta kernel, the transformation method and the local likelihood transformation method. We then apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, despite variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
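Of the surveyed estimators, the transformation method is the easiest to sketch: rank-transform to pseudo-observations, kernel-smooth on the normal-score scale, and change variables back to the unit cube. The data and evaluation grid below are toys, and the bandwidth is scipy's default rather than anything tuned.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde, rankdata

def copula_density_transform(data, u_grid):
    """Transformation-method kernel copula density: Gaussian KDE on
    normal scores, divided by the product of normal marginals."""
    n, d = data.shape
    u = np.column_stack([rankdata(col) / (n + 1) for col in data.T])
    kde = gaussian_kde(norm.ppf(u).T)
    zg = norm.ppf(u_grid)
    return kde(zg.T) / norm.pdf(zg).prod(axis=1)

# Toy usage: dependence between two correlated "index returns"
rng = np.random.default_rng(2)
returns = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], 800)
grid = np.column_stack([np.full(5, 0.5), np.linspace(0.1, 0.9, 5)])
print(copula_density_transform(returns, grid))
```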
NASA Astrophysics Data System (ADS)
Evin, Guillaume; Favre, Anne-Catherine; Hingray, Benoit
2018-02-01
We present a multi-site stochastic model for the generation of average daily temperature, which includes a flexible parametric distribution and a multivariate autoregressive process. Different versions of this model are applied to a set of 26 stations located in Switzerland. The importance of specific statistical characteristics of the model (seasonality, marginal distributions of standardized temperature, spatial and temporal dependence) is discussed. In particular, the proposed marginal distribution is shown to improve the reproduction of extreme temperatures (minima and maxima). We also demonstrate that the frequency and duration of cold spells and heat waves are dramatically underestimated when the autocorrelation of temperature is not taken into account in the model. An adequate representation of these characteristics can be crucial depending on the field of application, and we discuss potential implications in different contexts (agriculture, forestry, hydrology, human health).
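The multivariate autoregressive core of such a generator can be sketched as a lag-1 vector autoregression fitted by moments (the classical Matalas scheme). The paper's seasonal standardization and flexible marginal distribution are omitted here, and the toy data are synthetic.

```python
import numpy as np

def fit_var1(z):
    """Fit z_t = A z_{t-1} + B eps_t to standardized anomalies z."""
    c0 = np.cov(z.T)
    c1 = (z[1:].T @ z[:-1]) / (len(z) - 1)        # lag-1 cross-covariance
    a = c1 @ np.linalg.inv(c0)
    bbt = c0 - a @ c1.T
    b = np.linalg.cholesky(bbt + 1e-8 * np.eye(len(c0)))
    return a, b

def simulate(a, b, n, rng):
    d = a.shape[0]
    z = np.zeros((n, d))
    for t in range(1, n):
        z[t] = a @ z[t - 1] + b @ rng.standard_normal(d)
    return z

# Toy usage: 3 "stations" with persistence 0.7 and spatial correlation
rng = np.random.default_rng(3)
obs = simulate(0.7 * np.eye(3), np.linalg.cholesky(0.5 * np.eye(3) + 0.2), 2000, rng)
a_hat, b_hat = fit_var1(obs - obs.mean(0))
```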
Spatial estimation from remotely sensed data via empirical Bayes models
NASA Technical Reports Server (NTRS)
Hill, J. R.; Hinkley, D. V.; Kostal, H.; Morris, C. N.
1984-01-01
Multichannel satellite image data, available as LANDSAT imagery, are recorded as a multivariate time series (four channels, multiple passovers) in two spatial dimensions. The application of parametric empirical Bayes theory to classifying, and estimating the probability of, each crop type at each of a large number of pixels is considered. This theory involves both the probability distribution of imagery data, conditional on crop types, and the prior spatial distribution of crop types. For the latter, Markov models indexed by estimable parameters are used. A broad outline of the general theory reveals several questions for further research. Some detailed results are given for the special case of two crop types when only a line transect is analyzed. Finally, the estimation of an underlying continuous process on the lattice, which would be applicable to such quantities as crop yield, is discussed.
Moriarty, James P; Branda, Megan E; Olsen, Kerry D; Shah, Nilay D; Borah, Bijan J; Wagie, Amy E; Egginton, Jason S; Naessens, James M
2012-03-01
To provide the simultaneous 7-year estimates of incremental costs of smoking and obesity among employees and dependents in a large health care system. We used a retrospective cohort aged 18 years or older with continuous enrollment during the study period. Longitudinal multivariate cost analyses were performed using generalized estimating equations with demographic adjustments. The annual incremental mean costs of smoking by age group ranged from $1274 to $1401. The incremental costs of morbid obesity II by age group ranged from $5467 to $5530. These incremental costs drop substantially when comorbidities are included. Obesity and smoking have large long-term impacts on health care costs of working-age adults. Controlling comorbidities impacted incremental costs of obesity but may lead to underestimation of the true incremental costs because obesity is a risk factor for developing chronic conditions.
Fluoxetine and imipramine: are there differences in cost-utility for depression in primary care?
Serrano-Blanco, Antoni; Suárez, David; Pinto-Meza, Alejandra; Peñarrubia, Maria T; Haro, Josep Maria
2009-02-01
Depressive disorders generate severe personal burden and high economic costs. Cost-utility analyses of the different therapeutical options are crucial to policy-makers and clinicians. Previous cost-utility studies, comparing selective serotonin reuptake inhibitors and tricyclic antidepressants, have used modelling techniques or have not included indirect costs in the economic analyses. To determine the cost-utility of fluoxetine compared with imipramine for treating depressive disorders in primary care. A 6-month randomized prospective naturalistic study comparing fluoxetine with imipramine was conducted in three primary care centres in Spain. One hundred and three patients requiring antidepressant treatment for a DSM-IV depressive disorder were included in the study. Patients were randomized either to fluoxetine (53 patients) or to imipramine (50 patients) treatment. Patients were treated with antidepressants according to their general practitioner's usual clinical practice. Outcome measures were the quality of life tariff of the European Quality of Life Questionnaire: EuroQoL-5D (five domains), direct costs, indirect costs and total costs. Subjects were evaluated at the beginning of treatment and after 1, 3 and 6 months. Incremental cost-utility ratios (ICUR) were obtained. To address uncertainty in the ICUR's sampling distribution, non-parametric bootstrapping was carried out. Taking into account adjusted total costs and incremental quality of life gained, imipramine dominated fluoxetine with 81.5% of the bootstrap replications in the dominance quadrant. Imipramine seems to be a better cost-utility antidepressant option for treating depressive disorders in primary care.
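The non-parametric bootstrap used to address the ICUR's sampling uncertainty can be sketched as patient-level resampling. Arm sizes below mirror the study (53 and 50) but all data are invented, and resampling keeps each patient's cost and utility together to preserve their correlation.

```python
import numpy as np

def bootstrap_icur(cost_a, qol_a, cost_b, qol_b, n_boot=5000, seed=0):
    """Resample patients with replacement and record where each
    replication lands on the cost-effectiveness plane."""
    rng = np.random.default_rng(seed)
    d_cost, d_qol = np.empty(n_boot), np.empty(n_boot)
    for i in range(n_boot):
        ia = rng.integers(0, len(cost_a), len(cost_a))   # joint (cost, QoL)
        ib = rng.integers(0, len(cost_b), len(cost_b))
        d_cost[i] = cost_b[ib].mean() - cost_a[ia].mean()
        d_qol[i] = qol_b[ib].mean() - qol_a[ia].mean()
    dominance = np.mean((d_cost < 0) & (d_qol > 0))      # B cheaper and better
    return d_cost, d_qol, dominance

# Toy arms: treatment B is cheaper with a similar utility gain
rng = np.random.default_rng(1)
ca, qa = rng.gamma(4, 250, 53), rng.normal(0.05, 0.1, 53)
cb, qb = rng.gamma(4, 180, 50), rng.normal(0.06, 0.1, 50)
_, _, dom = bootstrap_icur(ca, qa, cb, qb)
print(f"{100 * dom:.1f}% of replications in the dominance quadrant")
```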
Work Plan for an Inpatient Rehabilitation Prospective Payment System
2000-01-01
[Garbled front-matter and table-of-contents fragment; recoverable section headings include Nontraumatic Spinal Cord, Neurological, Hip Fracture, Teaching Costs, Low Income Patients, Other Factors Affecting Cost, Multivariate Regression Analysis, and Simulations and Impact Analyses, along with a citation to "Outcomes and Costs After Hip Fracture and Stroke: A Comparison".]
Marmarelis, Vasilis Z.; Berger, Theodore W.
2009-01-01
Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform presynaptic signals into postsynaptic signals. In order to use the two approaches synergistically, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input-output data. Results show that the non-parametric models accurately and efficiently replicate the input-output transformations of the parametric models. Volterra kernels thus provide a general and quantitative representation of STP. PMID:18506609
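The non-parametric side of this pairing, a discrete second-order Volterra model, is easy to sketch. The kernels and spike train below are invented for illustration, not estimates from the paper.

```python
import numpy as np

def volterra_response(x, k0, k1, k2):
    """Second-order discrete Volterra series: y(n) = k0 + k1.x_past
    + x_past.k2.x_past, with k1 of shape (M,) and k2 of shape (M, M)
    over memory length M."""
    m = len(k1)
    y = np.full(len(x), k0, dtype=float)
    for n in range(len(x)):
        past = x[max(0, n - m + 1):n + 1][::-1]       # x[n], x[n-1], ...
        past = np.pad(past, (0, m - len(past)))       # zero-fill early steps
        y[n] += k1 @ past + past @ k2 @ past
    return y

# Toy usage: a facilitating first-order kernel with a depressive
# second-order interaction, driven by a random presynaptic spike train
rng = np.random.default_rng(4)
spikes = (rng.random(200) < 0.1).astype(float)
k1 = np.exp(-np.arange(20) / 5.0)
k2 = -0.2 * np.outer(k1, k1)
psp = volterra_response(spikes, 0.0, k1, k2)
```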
Strategies that reduce 90-day readmissions and inpatient costs after liver transplantation.
Zeidan, Joseph H; Levi, David M; Pierce, Ruth; Russo, Mark W
2018-04-25
Liver transplantation is hospital-resource intensive and associated with high rates of readmission. We have previously shown a reduction in 30-day readmission rates by implementing a specifically designed protocol to increase access to outpatient care. To determine if strategies that reduce 30-day readmission after liver transplant were effective in also reducing 90-day readmission rates and costs. A protocol was developed to reduce inpatient readmissions after liver transplant that expanded outpatient services and provided alternatives to readmission. The 90-day readmission rates and costs were compared before and after implementing the strategies outlined in the protocol. Multivariable analysis was used to control for potential confounding factors. Over the study period, 304 adult primary liver transplants were performed on patients with a median biologic MELD of 22. 112 (37%) patients were readmitted within 90 days of transplant. The readmission rates before and after implementation of the protocol were 53% and 26%, respectively, p<0.001. The most common reason for readmission was elevated liver tests/rejection (24%). In multivariable analysis, the protocol remained associated with avoiding readmission, OR=0.33, [95% CI 0.20, 0.55], p<0.001. The median length of stay after transplant preprotocol and postprotocol was 8 and 7 days, respectively. A greater proportion of patients were discharged to hospital lodging postprotocol, 10% versus 19%, p=0.03. The 90-day readmission costs were reduced by 55%, but total 90-day costs by only 2.7%, owing to higher outpatient and index admission costs. 90-day readmission rates and readmission costs can be reduced by improving access to outpatient services and local hospital lodging. Total 90-day costs were similar between the two groups because of higher outpatient costs after the protocol was introduced. This article is protected by copyright. All rights reserved. © 2018 by the American Association for the Study of Liver Diseases.
Linking the Weather Generator with Regional Climate Model: Effect of Higher Resolution
NASA Astrophysics Data System (ADS)
Dubrovsky, Martin; Huth, Radan; Farda, Ales; Skalak, Petr
2014-05-01
This contribution builds on our last year's EGU contribution, which had two aims: (i) validation of the simulations of the present climate made by the ALADIN-Climate Regional Climate Model (RCM) at 25 km resolution, and (ii) presentation of a methodology for linking the parametric weather generator (WG) with RCM output (aiming to calibrate a gridded WG capable of producing realistic synthetic multivariate weather series for weather-ungauged locations). New higher-resolution (6.25 km) simulations with the same RCM are now available, and the main topic of this contribution is an answer to the following question: what is the effect of using a higher spatial resolution on the quality of the simulated surface weather characteristics? In the first part, the high-resolution RCM simulation of the present climate is validated in terms of selected WG parameters, which are derived from the RCM-simulated surface weather series and compared to those derived from weather series observed at 125 Czech meteorological stations. The set of WG parameters includes statistics of the surface temperature and precipitation series. When comparing the WG parameters from the two sources (RCM vs. observations), we interpolate the RCM-based parameters to the station locations while accounting for the effect of altitude. In the second part, we discuss the effect of using the higher resolution: the results of the validation tests are compared with those obtained with the lower-resolution RCM. Acknowledgements: The present experiment is made within the frame of the projects ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).
Parametric study of different contributors to tumor thermal profile
NASA Astrophysics Data System (ADS)
Tepper, Michal; Gannot, Israel
2014-03-01
Treating cancer is one of the major challenges of modern medicine. There is great interest in assessing tumor development in in vivo animal and human models, as well as in in vitro experiments. Existing methods are limited either by cost and availability or by their low accuracy and reproducibility. Thermography holds the potential of being a noninvasive, low-cost, radiation-free and easy-to-use method for tumor monitoring. Tumors can be detected in thermal images due to their relatively higher or lower temperature compared to the temperature of the healthy skin surrounding them. Extensive research has been performed to show the validity of thermography as an efficient method for tumor detection and the possibility of extracting tumor properties from thermal images, with promising results. However, deducing from one type of experiment to others is difficult due to the differences in tumor properties, especially between different types of tumors or different species. There is a need for research linking different types of tumor experiments. In this research, a parametric analysis of possible contributors to tumor thermal profiles was performed. The effect of tumor geometric, physical and thermal properties was studied, both independently and together, in phantom model experiments and computer simulations. Theoretical and experimental results were cross-correlated to validate the models used and increase the accuracy of simulated complex tumor models. The contribution of different parameters in various tumor scenarios was estimated and the implication of these differences for the observed thermal profiles was studied. The correlation between animal and human models is discussed.
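One common way to simulate such thermal profiles, not necessarily the authors' implementation, is a Pennes bioheat balance. The sketch below is a deliberately simple 1-D explicit finite-difference version with rough literature-style property values; the tumor depth, perfusion and metabolic numbers are placeholders.

```python
import numpy as np

# 1-D Pennes bioheat sketch: a buried, highly perfused and metabolically
# active tumor region perturbs the temperature field near the skin.
L, nx, dt, t_end = 0.02, 101, 0.05, 600.0           # 2 cm of tissue, 10 min
dx = L / (nx - 1)
k, rho, c = 0.5, 1050.0, 3600.0                      # tissue conduction terms
wb, rhob, cb, Ta = 0.0005, 1060.0, 3800.0, 37.0      # baseline perfusion
q = np.full(nx, 420.0)                               # metabolic heat (W/m^3)
w = np.full(nx, wb)
tumor = slice(40, 60)                                # tumor ~0.8-1.2 cm deep
q[tumor], w[tumor] = 4200.0, 0.002                   # hotter, better perfused

T = np.full(nx, 37.0)
for _ in range(int(t_end / dt)):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T += dt / (rho * c) * (k * lap + w * rhob * cb * (Ta - T) + q)
    T[0], T[-1] = 33.0, 37.0                         # cooled skin, body core

print(f"near-surface temperature: {T[1]:.3f} C")     # tumor shifts this value
```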
Cost analysis of a coal-fired power plant using the NPV method
NASA Astrophysics Data System (ADS)
Kumar, Ravinder; Sharma, Avdhesh Kr.; Tewari, P. C.
2015-12-01
The present study investigates the impact of various factors affecting the economics of a 210 MW subcritical coal-fired power plant situated in north India and used for electricity generation. In this paper, the cost data of the various units of the thermal power plant, in terms of power output capacity, have been fitted using a power law with the help of data collected from a literature search. To obtain a realistic estimate, it is necessary to include the latest costs of the primary components and equipment. The cost analysis of the plant was carried out on the basis of total capital investment, operating cost and revenue. The total capital investment includes the total direct plant cost and the total indirect plant cost. The total direct plant cost involves the cost of equipment (i.e. boiler, steam turbine, condenser, generator and auxiliary equipment including condensate extraction pump, feed water pump, etc.) and other costs associated with piping, electrical, civil works, direct installation cost, auxiliary services, instrumentation and controls, and site preparation. The total indirect plant cost includes the cost of engineering and set-up. The net present value method was adopted for the present study. The work presented in this paper is an endeavour to study the influence of some of the important parameters on the lifetime costs of a coal-fired power plant. For this purpose, a parametric study, with and without escalation rates, was carried out for a 35-year plant life. The results show that plant economics is highly sensitive to plant life, interest rate and escalation rate in comparison with the other factors under study.
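The NPV calculation at the core of such an analysis reduces to discounting a cashflow series. All numbers in the sketch are illustrative, not the plant's data; the 35-year life and the escalation-rate idea mirror the abstract.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is year 0 (e.g., the negative
    capital cost), later entries are yearly net revenue minus
    operating cost, optionally escalated."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Illustrative: capital cost C, net yearly revenue R escalating at rate e,
# discount (interest) rate i, over a 35-year plant life
C, R, i, e, life = 1_000.0, 95.0, 0.08, 0.03, 35
flows = [-C] + [R * (1 + e) ** t for t in range(1, life + 1)]
print(f"NPV = {npv(i, flows):.1f}")   # positive -> economically attractive
```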
Mehra, Tarun; Moos, Rudolf M; Seifert, Burkhardt; Bopp, Matthias; Senn, Oliver; Simmen, Hans-Peter; Neuhaus, Valentin; Ciritsis, Bernhard
2017-12-01
The assessment of structural and economic factors determining the cost, treatment type, and inpatient mortality of traumatic hip fractures is an important health policy issue. We showed that insurance status and treatment in university hospitals were significantly associated with treatment type (i.e., primary hip replacement), cost, and lower inpatient mortality, respectively. The purpose of this study was to determine the influence of the structural level of hospital care and patient insurance type on treatment, hospitalization cost, and inpatient mortality in cases with traumatic hip fractures in Switzerland. The Swiss national medical statistic 2011-2012 was screened for adults with hip fracture as primary diagnosis. Gender, age, insurance type, year of discharge, hospital infrastructure level, length-of-stay, case weight, reason for discharge, and all coded diagnoses and procedures were extracted. Descriptive statistics and multivariate logistic regression with treatment by primary hip replacement as well as inpatient mortality as dependent variables were performed. We obtained 24,678 inpatient case records from the medical statistic. Hospitalization costs were calculated from a second dataset, the Swiss national cost statistic (7528 cases with hip fractures, discharged in 2012). Average inpatient costs per case were the highest for discharges from university hospitals (US$21,471, SD US$17,015) and the lowest in basic coverage hospitals (US$18,291, SD US$12,635). Controlling for other variables, higher costs for hip fracture treatment at university hospitals were significant in multivariate regression (p < 0.001). University hospitals had a lower inpatient mortality rate than full and basic care providers (2.8% vs. both 4.0%), a result confirmed in our multivariate logistic regression analysis (odds ratio (OR) 1.434, 95% confidence interval (CI) 1.127-1.824 and OR 1.459, 95% CI 1.139-1.870 for full and basic coverage hospitals vs. university hospitals, respectively). The proportion of privately insured varied between 16.0% in university hospitals and 38.9% in specialized hospitals. Private insurance had an OR of 1.419 (95% CI 1.306-1.542) in predicting treatment of a hip fracture with primary hip replacement. The seeming importance of insurance type for hip fracture treatment and the large inequity in the distribution of privately insured between provider types would be worth a closer look by the regulatory authorities. Better outcomes, i.e., lower mortality rates, for hip fracture treatment in hospitals with a higher structural care level advocate centralization of care.
A review of parametric approaches specific to aerodynamic design process
NASA Astrophysics Data System (ADS)
Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li
2018-04-01
Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches cover a large design space with few variables. Parametric methods in common use are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which have wide application in airfoil design. This survey compares their capabilities in the design of the airfoil, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is more limited, and the most popular is the Free-form deformation method. Methods extended from two-dimensional parametric methods show promising prospects for aircraft modeling. Since different parametric methods differ in their characteristics, a real design process requires a flexible choice among them to suit the subsequent optimization procedure.
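As a concrete instance of one of the surveyed parametrizations, here is a minimal sketch of the Class/Shape function transformation for an airfoil surface. The Bernstein weights are illustrative values, not a fitted airfoil.

```python
import numpy as np
from math import comb

def cst_airfoil(a_upper, a_lower, n_pts=101, dz=0.0):
    """CST surface: y(x) = x^0.5 (1-x) * sum_i A_i K_i x^i (1-x)^(n-i)
    + x*dz, with a_upper/a_lower the Bernstein weights per surface."""
    x = 0.5 * (1 - np.cos(np.linspace(0, np.pi, n_pts)))   # cosine spacing
    c = np.sqrt(x) * (1 - x)                               # class function
    def shape(a):
        n = len(a) - 1
        return sum(ai * comb(n, i) * x**i * (1 - x)**(n - i)
                   for i, ai in enumerate(a))
    return x, c * shape(a_upper) + x * dz, c * shape(a_lower) + x * dz

# Roughly airfoil-like weights (illustrative only)
x, y_up, y_lo = cst_airfoil([0.17, 0.16, 0.15], [-0.17, -0.10, -0.06])
```

Three weights per surface already span a useful family of smooth shapes, which is the "large design space with few variables" property the abstract emphasizes.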
Survival advantage in black versus white men with CKD: effect of estimated GFR and case mix.
Kovesdy, Csaba P; Quarles, L Darryl; Lott, Evan H; Lu, Jun Ling; Ma, Jennie Z; Molnar, Miklos Z; Kalantar-Zadeh, Kamyar
2013-08-01
Black dialysis patients have significantly lower mortality compared with white patients, in contradistinction to the higher mortality seen in blacks in the general population. It is unclear whether a similar paradox exists in patients with non-dialysis-dependent chronic kidney disease (CKD), and if it does, what its underlying reasons are. Historical cohort. 518,406 white and 52,402 black male US veterans with non-dialysis-dependent CKD stages 3-5. Black race. We examined overall and CKD stage-specific all-cause mortality using parametric survival models. The effect of sociodemographic characteristics, comorbid conditions, and laboratory characteristics on the observed differences was explored in multivariable models. During a median follow-up of 4.7 years, 172,093 patients died (mortality rate, 71.0 [95% CI, 70.6-71.3] per 1,000 patient-years). Black race was associated with significantly lower crude mortality (HR, 0.95; 95% CI, 0.94-0.97; P < 0.001). The survival advantage was attenuated after adjustment for age (HR, 1.14; 95% CI, 1.12-1.16), but was magnified after full multivariable adjustment (HR, 0.72; 95% CI, 0.70-0.73; P < 0.001). The unadjusted survival advantage of blacks was more prominent in those with more advanced stages of CKD, but CKD stage-specific differences were attenuated by multivariable adjustment. Exclusively male patients. Black patients with CKD have lower mortality compared with white patients. The survival advantage seen in blacks is accentuated in patients with more advanced stages of CKD, which may be explained by changes in case-mix and laboratory characteristics occurring during the course of kidney disease. Published by Elsevier Inc. on behalf of the National Kidney Foundation, Inc.
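The parametric survival modeling named here can be sketched with the third-party lifelines package (a Weibull accelerated-failure-time model is one common parametric choice; the paper does not specify its exact form). The synthetic data, effect sizes and column names below are invented for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter   # third-party: pip install lifelines

# Synthetic cohort: survival depends on age and race; the "black"
# coefficient is set so that black patients have a lower hazard.
rng = np.random.default_rng(7)
n = 2_000
df = pd.DataFrame({
    "black": rng.integers(0, 2, n),
    "age": rng.normal(70, 8, n),
})
risk = np.exp(0.03 * (df.age - 70) - 0.3 * df.black)
df["t"] = rng.weibull(1.2, n) * 5 / risk             # follow-up in years
df["died"] = (df.t < 6).astype(int)
df["t"] = df.t.clip(lower=1e-3, upper=6.0)           # administrative censoring

# All remaining columns are treated as covariates by lifelines
aft = WeibullAFTFitter().fit(df, duration_col="t", event_col="died")
aft.print_summary()
```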
Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models
NASA Astrophysics Data System (ADS)
Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.
2007-01-01
Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention is generally paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment, we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem, while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and for analysing error distributions.
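The SOM clustering stage can be illustrated with a tiny from-scratch implementation. The grid size, learning schedule and toy data below are arbitrary choices, not the paper's configuration, and the paper's second stage (MDS validation) is omitted.

```python
import numpy as np

def train_som(data, rows=6, cols=6, n_iter=2000, lr0=0.5, sigma0=3.0, rng=None):
    """Minimal self-organising map: for each sample, find the
    best-matching unit and pull its grid neighbourhood toward the
    sample, with learning rate and neighbourhood width decaying."""
    rng = rng or np.random.default_rng()
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    w = rng.normal(size=(rows * cols, data.shape[1]))
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))       # best-matching unit
        frac = t / n_iter
        lr, sig = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sig ** 2))
        w += lr * h[:, None] * (x - w)                    # pull neighbourhood
    return w

# Toy usage: cluster multivariate "model output" vectors from 3 regimes
rng = np.random.default_rng(6)
fields = np.vstack([rng.normal(m, 0.3, (200, 4)) for m in (-2, 0, 2)])
codebook = train_som(fields, rng=rng)
```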
Chrispin, Jonathan; Ipek, Esra Gucuk; Habibi, Mohammadali; Yang, Eunice; Spragg, David; Marine, Joseph E; Ashikaga, Hiroshi; Rickard, John; Berger, Ronald D; Zimmerman, Stefan L; Calkins, Hugh; Nazarian, Saman
2017-03-01
This study aims to examine the association of clinical co-morbidities with the presence of left atrial (LA) late gadolinium enhancement (LGE) on cardiac magnetic resonance (CMR). Previous studies have established the severity of LA LGE to be associated with atrial fibrillation (AF) recurrence following AF ablation. We sought to determine whether baseline clinical characteristics were associated with LGE extent among patients presenting for an initial AF ablation. The cohort consisted of 179 consecutive patients with no prior cardiac ablation procedures who underwent pre-procedure LGE-CMR. The extent of LA LGE for each patient was calculated using the image intensity ratio, normalized to the mean blood pool intensity, corresponding to a bipolar voltage ≤0.3 mV. The association of LGE extent with baseline clinical characteristics was examined using non-parametric and multivariable models. The mean age of the cohort was 60.9 ± 9.6 years and 128 (72%) were male. In total, 56 (31%) patients had persistent AF. The mean LA volume was 118.4 ± 41.6 mL, and the mean LA LGE extent was 14.1 ± 10.4%. There was no association with any clinical variables with LGE extent by quartiles in the multivariable model. Extent of LGE as a continuous variable was positively, but weakly associated with LA volume in a multivariable model adjusting for age, body mass index, AF persistence, and left ventricular ejection fraction (1.5% scar/mL, P = 0.038). In a cohort of patients presenting for initial AF ablation, the presence of pre-ablation LA LGE extent was weakly, but positively associated with increasing LA volume. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.
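The image-intensity-ratio normalization described here reduces to a short computation over segmented voxels. In the sketch below the 1.2 cutoff is a placeholder, since the study derived its threshold from the correspondence with bipolar voltage <= 0.3 mV, and the function name is invented.

```python
import numpy as np

def lge_extent(la_wall_intensities, blood_pool_intensities, iir_cutoff=1.2):
    """Image intensity ratio (IIR): normalise each LA-wall voxel by the
    mean blood-pool intensity, then report the percentage of wall
    voxels above the enhancement cutoff."""
    iir = np.asarray(la_wall_intensities) / np.mean(blood_pool_intensities)
    return 100.0 * np.mean(iir > iir_cutoff)

# Toy usage with synthetic voxel intensities
rng = np.random.default_rng(8)
wall = rng.normal(1000, 250, 5000)
blood = rng.normal(900, 100, 20000)
print(f"LGE extent = {lge_extent(wall, blood):.1f}% of the LA wall")
```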
Survival Advantage in Black Versus White Men With CKD: Effect of Estimated GFR and Case Mix
Kovesdy, Csaba P.; Quarles, L. Darryl; Lott, Evan H.; Lu, Jun Ling; Ma, Jennie Z.; Molnar, Miklos Z.; Kalantar-Zadeh, Kamyar
2013-01-01
Background: Black dialysis patients have significantly lower mortality compared to white patients, in contradistinction to the higher mortality seen in blacks in the general population. It is unclear if a similar paradox exists in non-dialysis-dependent CKD, and if it does, what its underlying reasons are. Study Design: Historical cohort. Setting & Participants: 518,406 white and 52,402 black male US veterans with non-dialysis-dependent CKD stages 3-5. Predictor: Black race. Outcomes & Measurements: We examined overall and CKD stage-specific all-cause mortality using parametric survival models. The effect of sociodemographic characteristics, comorbidities and laboratory characteristics on the observed differences was explored in multivariable models. Results: Over a median follow-up of 4.7 years, 172,093 patients died (mortality rate, 71.0 [95% CI, 70.6-71.3] per 1000 patient-years). Black race was associated with significantly lower crude mortality (HR, 0.95; 95% CI, 0.94-0.97; p<0.001). The survival advantage was attenuated after adjustment for age (HR, 1.14; 95% CI, 1.12-1.16), but was even magnified after full multivariable adjustment (HR, 0.72; 95% CI, 0.70-0.73; p<0.001). The unadjusted survival advantage of blacks was more prominent in those with more advanced stages of CKD, but CKD stage-specific differences were attenuated by multivariable adjustment. Limitations: Exclusively male patients. Conclusions: Black patients with CKD have lower mortality compared to white patients. The survival advantage seen in blacks is accentuated in patients with more advanced stages of CKD, which may be explained by changes in case mix and laboratory characteristics occurring during the course of kidney disease. PMID:23369826
Quantifying the hidden healthcare cost of diabetes mellitus in Australian hospital patients.
Karahalios, Amalia; Somarajah, Gowri; Hamblin, Peter S; Karunajeewa, Harin; Janus, Edward D
2018-03-01
Diabetes mellitus in hospital inpatients is most commonly present as a comorbidity rather than as the primary diagnosis. In some hospitals, the prevalence of comorbid diabetes mellitus across all inpatients exceeds 30%, which could add to the complexity of care and resource utilisation. However, whether and to what extent comorbid diabetes mellitus contributes indirectly to greater hospitalisation costs is ill-defined. To determine the attributable effect of comorbid diabetes mellitus on hospital resource utilisation in a General Internal Medical service in Melbourne, Australia. We extracted data from a database of all General Internal Medical discharge episodes from July 2012 to June 2013. We fitted multivariable regression models to compare patients with diabetes mellitus to those without diabetes mellitus with respect to hospitalisation cost, length of stay, admissions per year and inpatient mortality. Of 4657 patients, 1519 (33%) had diabetes mellitus, for whom the average hospitalisation cost (AUD9910) was higher than for those without diabetes mellitus (AUD7805). In multivariable analysis, this corresponded to a 1.22-fold (95% confidence interval (CI) 1.12-1.33, P < 0.001) higher cost. Mean length of stay for those with diabetes was 8.2 days versus 6.8 days for those without diabetes, with an adjusted 1.19-fold greater odds (95% CI 1.06-1.33, P = 0.001) of staying an additional day. Number of admissions and mortality were similar. Comorbid diabetes mellitus adds significantly to hospitalisation duration and costs in medical inpatients. Moreover, diabetes mellitus patients with chronic complications had a still greater cost and hospitalisation duration compared to those without diabetes mellitus. © 2017 Royal Australasian College of Physicians.
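A sketch of how such a multiplicative cost ratio is commonly obtained is a gamma GLM with log link; note the model family is an assumption (the abstract does not name one), and the data below are synthetic:

```python
# Hypothetical sketch: gamma GLM with log link yields a multiplicative cost
# ratio for diabetes vs. no diabetes, analogous to the abstract's 1.22-fold.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 4657  # sample size from the abstract; everything else is invented
df = pd.DataFrame({
    "diabetes": rng.integers(0, 2, n),
    "age": rng.normal(75, 12, n),
})
mu = np.exp(8.9 + 0.20 * df["diabetes"] + 0.01 * (df["age"] - 75))
df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)  # right-skewed costs

glm = smf.glm("cost ~ diabetes + age", data=df,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(np.exp(glm.params["diabetes"]))  # adjusted cost ratio, ~1.22 here
```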
Lafeuille, Marie-Hélène; Grittner, Amanda Melina; Fortier, Jonathan; Muser, Erik; Fasteneau, John; Duh, Mei Sheng; Lefebvre, Patrick
2015-03-01
Comparative data on rehospitalization patterns and associated institutional costs after inpatient treatment with paliperidone palmitate or oral antipsychotic therapy are reported. A retrospective cohort study was conducted using discharge and billing records from a large hospital database. Selected clinical and cost outcomes were compared in a cohort of adult patients who received the long-acting antipsychotic paliperidone palmitate during a schizophrenia-related index hospital stay and a cohort of patients who received oral antipsychotic therapy during their index admission. Inverse probability-of-treatment weights based on propensity scores were used to reduce confounding. Rates of all-cause and schizophrenia-related rehospitalization and emergency room (ER) use in the two cohorts over periods of up to 12 months were analyzed using a multivariate Cox proportional hazard model. Institutional costs for the evaluated postdischarge events were compared via multivariate linear regression analysis. In the first 12 months after index hospital discharge, the risk of all-cause rehospitalization and ER use was significantly lower in the paliperidone palmitate cohort than in the oral antipsychotic cohort (hazard ratio, 0.61; 95% confidence interval [CI], 0.59-0.63; p < 0.0001); institutional costs during the first 6 months after discharge were significantly lower in the paliperidone palmitate cohort than in the comparator group (adjusted mean monthly cost difference -$404; 95% CI, -$781 to -$148; p < 0.0001). The use of paliperidone palmitate therapy during patients' index hospital admission for schizophrenia was associated with a reduced risk of hospital readmission or ER use and lower postdischarge institutional costs. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
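The abstract describes inverse probability-of-treatment weighting (IPTW) from propensity scores followed by a weighted Cox model. A minimal sketch of that pipeline, with invented variable names and synthetic data rather than the study's hospital records:

```python
# Sketch of IPTW + weighted Cox regression for time to rehospitalization.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 3000
df = pd.DataFrame({
    "age": rng.normal(42, 12, n),
    "prior_admissions": rng.poisson(1.2, n),
    "treated": rng.integers(0, 2, n),  # 1 = paliperidone palmitate, 0 = oral
})
df["time"] = rng.exponential(12, n).clip(max=12.0)  # months, admin. censoring
df["event"] = rng.integers(0, 2, n)                 # rehospitalized yes/no

# 1) propensity score for treatment given baseline covariates
ps = LogisticRegression(max_iter=1000).fit(
    df[["age", "prior_admissions"]], df["treated"]
).predict_proba(df[["age", "prior_admissions"]])[:, 1]

# 2) stabilized inverse probability-of-treatment weights
p_treat = df["treated"].mean()
df["iptw"] = np.where(df["treated"] == 1, p_treat / ps, (1 - p_treat) / (1 - ps))

# 3) weighted Cox model for the treatment hazard ratio
cph = CoxPHFitter().fit(df[["time", "event", "treated", "iptw"]],
                        duration_col="time", event_col="event",
                        weights_col="iptw", robust=True)
print(cph.hazard_ratios_)
```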
Schiff, J H; Frankenhauser, S; Pritsch, M; Fornaschon, S A; Snyder-Ramos, S A; Heal, C; Schmidt, K; Martin, E; Böttiger, B W; Motsch, J
2010-07-01
Anesthetic preoperative evaluation clinics (APECs) are relatively new institutions. Although cost-effective, APECs have not been universally adopted in Europe. The aim of this study was to compare preoperative anesthetic assessment on the ward with assessment in an APEC, in terms of time, information gain, patient satisfaction and secondary costs. Two hundred and seven inpatients were randomized to be assessed at the APEC or on the ward by the same two senior anesthetists. The outcomes measured were the length of time for each consultation, the amount of information passed on to patients and the level of patient satisfaction. The consultation time was used to calculate the impact on direct costs. A multivariate analysis was conducted to detect confounding variables. Ninety-four patients were seen in the APEC, and 78 were seen on the ward. The total time for the consultation was shorter for the APEC (by a mean of 8.4 minutes [P<0.01]), and we calculated savings of 6.4 Euro per patient. More information was passed on to the patients seen in the APEC (P<0.01). The general satisfaction scores were comparable between groups. A multivariate analysis found that the consultation time was significantly influenced by the type of anesthesia, the magnitude of the operation and the location of the consultation. Gain in information was significantly influenced by age, education and the location of the visit. The APEC reduced consultation times and costs and had a positive impact on patient education. The cost savings are related to personnel costs and, therefore, are independent of other potential savings of an APEC, whereas global patient satisfaction remains unaltered.
NASA Astrophysics Data System (ADS)
Garagnani, S.; Manferdini, A. M.
2013-02-01
Since their introduction, modeling tools aimed at architectural design have evolved into today's "digital multi-purpose drawing boards" based on enhanced parametric elements able to originate whole buildings within virtual environments. Semantic splitting and element topology are features that allow objects to be "intelligent" (i.e. self-aware of what kind of element they are and with whom they can interact), in this way representing the basics of Building Information Modeling (BIM), a coordinated, consistent and always up-to-date workflow improved in order to reach higher quality, reliability and cost reductions all over the design process. Even if BIM was originally intended for new architectures, its capacity to store semantically inter-related information can be successfully applied to existing buildings as well, especially if they deserve particular care such as Cultural Heritage sites. BIM engines can easily manage simple parametric geometries, collapsing them to standard primitives connected through hierarchical relationships: however, when components are generated from existing morphologies, for example by acquiring point clouds with digital photogrammetry or laser scanning equipment, complex abstractions have to be introduced while remodeling elements by hand, since automatic feature extraction in available software is still not effective. In order to introduce a methodology destined to process point cloud data in a BIM environment with high accuracy, this paper describes some experiences on monumental sites documentation, generated through a plug-in written for Autodesk Revit and codenamed GreenSpider after its capability to lay out points in space as if they were nodes of an ideal cobweb.
NASA Astrophysics Data System (ADS)
Sadegh, M.; Vrugt, J. A.; Gupta, H. V.; Xu, C.
2016-04-01
The flow duration curve (FDC) is a signature catchment characteristic that depicts graphically the relationship between the exceedance probability of streamflow and its magnitude. This curve is relatively easy to create and interpret, and is used widely for hydrologic analysis, water quality management, and the design of hydroelectric power plants (among others). Several mathematical expressions have been proposed to mimic the FDC. Yet, these efforts have not been particularly successful, in large part because available functions are not flexible enough to portray accurately the functional shape of the FDC for a large range of catchments and contrasting hydrologic behaviors. Here, we extend the work of Vrugt and Sadegh (2013) and introduce several commonly used models of the soil water characteristic as a new class of closed-form parametric expressions for the flow duration curve. These soil water retention functions are relatively simple to use, contain two or three parameters, and mimic closely the empirical FDCs of 430 catchments of the MOPEX data set. We then relate the calibrated parameter values of these models to physical and climatological characteristics of the watershed using multivariate linear regression analysis, and evaluate the regionalization potential of our proposed models against those of the literature. If quality of fit is of main importance, then the 3-parameter van Genuchten model is preferred, whereas the 2-parameter lognormal, 3-parameter GEV and generalized Pareto models show greater promise for regionalization.
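To make the idea concrete, here is a hedged sketch of fitting one van Genuchten-type expression to an empirical FDC by least squares; the exact functional form used by the authors may differ from this adaptation:

```python
# Assumed adaptation of the van Genuchten retention shape to the FDC:
# normalized flow as a function of exceedance probability p.
import numpy as np
from scipy.optimize import curve_fit

def vg_fdc(p, alpha, n):
    """Two-parameter van Genuchten-type curve, m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * p) ** n) ** (-m)

# synthetic 'observed' FDC standing in for a MOPEX catchment
p_obs = np.linspace(0.01, 0.99, 50)
q_obs = vg_fdc(p_obs, alpha=4.0, n=1.8) \
        + np.random.default_rng(4).normal(0, 0.01, 50)

(alpha_hat, n_hat), _ = curve_fit(vg_fdc, p_obs, q_obs, p0=[1.0, 1.5])
print(alpha_hat, n_hat)  # calibrated parameters, ready for regionalization
```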
Lörincz, András; Póczos, Barnabás
2003-06-01
In optimization, the dimension of the problem may severely, sometimes exponentially, increase optimization time. Parametric function approximators (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA), is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape. That is, CCA converts the optimization problem to density estimation. The structure of the induced density is searched by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems, and (ii) separating (partitioning) the original optimization problem into subproblems may serve interpretation. Most importantly, (iii) CCA may give rise to high gains in optimization time. Numerical simulations illustrate the working of the algorithm.
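A very loose sketch of the CCA recipe as described (Boltzmann resampling of the search space, then ICA on the samples); the energy landscape, temperature and all sizes are invented for illustration:

```python
# Step 1: draw samples from exp(-E(x)/T) with a Metropolis walker.
# Step 2: run ICA on the samples so each component can be optimized separately.
import numpy as np
from sklearn.decomposition import FastICA

def energy(x):
    # toy landscape that becomes separable after a rotation
    R = np.array([[0.8, -0.6], [0.6, 0.8]])
    z = R @ x
    return z[0] ** 2 + 0.1 * z[1] ** 4

rng = np.random.default_rng(5)
T, x = 0.5, np.zeros(2)
samples = []
for _ in range(20000):  # Metropolis sampling of the Boltzmann distribution
    prop = x + rng.normal(0, 0.3, 2)
    if rng.random() < np.exp((energy(x) - energy(prop)) / T):
        x = prop
    samples.append(x.copy())

S = FastICA(n_components=2, random_state=0).fit_transform(np.array(samples)[5000:])
print(S.shape)  # each ICA component can now be searched independently
```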
An economic systems analysis of land mobile radio telephone services
NASA Technical Reports Server (NTRS)
Leroy, B. E.; Stevenson, S. M.
1980-01-01
This paper deals with the economic interaction of the terrestrial and satellite land-mobile radio service systems. The cellular, trunked and satellite land-mobile systems are described. Parametric equations are formulated to allow examination of necessary user thresholds and growth rates as functions of system costs. Conversely, first-order allowable system costs are found as a function of user thresholds and growth rates. Transitions between satellite and terrestrial service systems are examined. User growth rate density (users/year/km²) is shown to be a key parameter in the analysis of systems compatibility. The concept of system design matching the price demand curves is introduced and examples are given. The role of satellite systems is critically examined and the economic conditions necessary for the introduction of satellite service are identified.
Zhai, Haibo; Rubin, Edward S
2013-03-19
This study investigates the feasibility of polymer membrane systems for postcombustion carbon dioxide (CO2) capture at coal-fired power plants. Using newly developed performance and cost models, our analysis shows that membrane systems configured with multiple stages or steps are capable of meeting capture targets of 90% CO2 removal efficiency and 95+% product purity. A combined driving force design using both compressors and vacuum pumps is most effective for reducing the cost of CO2 avoided. Further reductions in the overall system energy penalty and cost can be obtained by recycling a portion of CO2 via a two-stage, two-step membrane configuration with air sweep to increase the CO2 partial pressure of the feed flue gas. For a typical plant with carbon capture and storage, this yielded a 15% lower cost per metric ton of CO2 avoided compared to a plant using a current amine-based capture system. A series of parametric analyses also is undertaken to identify paths for enhancing the viability of membrane-based capture technology.
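The comparison metric named here, cost of CO2 avoided, has a standard definition; a minimal sketch with placeholder plant numbers (not values from the study):

```python
# Standard cost-of-CO2-avoided metric for comparing capture options.
def cost_of_co2_avoided(coe_capture, coe_ref, em_ref, em_capture):
    """
    coe_*: levelized cost of electricity ($/MWh) with and without capture
    em_*:  CO2 emission rate (t CO2/MWh) with and without capture
    Returns $ per metric ton of CO2 avoided.
    """
    return (coe_capture - coe_ref) / (em_ref - em_capture)

# placeholder numbers for illustration only
print(cost_of_co2_avoided(coe_capture=95.0, coe_ref=60.0,
                          em_ref=0.80, em_capture=0.10))  # ~$50/t CO2 avoided
```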
SATCOM simulator speeds MSS deployment and lowers costs
NASA Technical Reports Server (NTRS)
Carey, Tim; Hassun, Roland; Koberstein, Dave
1993-01-01
Mobile satellite systems (MSS) are being proposed and licensed at an accelerating rate. How can the design, manufacture, and performance of these systems be optimized at costs that allow a reasonable return on investment? The answer is the use of system simulation techniques beginning early in the system design and continuing through integration, pre- and post-launch monitoring, and in-orbit monitoring. This paper focuses on using commercially available, validated simulation instruments to deliver accurate, repeatable, and cost-effective measurements throughout the life of a typical mobile satellite system. A satellite communications test set is discussed that provides complete parametric test capability with a significant improvement in measurement speed for manufacturing, integration, and pre-launch and in-orbit testing. The test set can simulate actual uplink and downlink traffic conditions to evaluate the effects of system impairments, propagation and multipath on bit error rate (BER), channel capacity, and transponder and system load balancing. Using a standard set of commercial instruments to deliver accurate, verifiable measurements anywhere in the world speeds deployment, generates measurement confidence, and lowers total system cost.
Adopting a plant-based diet minimally increased food costs in WHEL Study.
Hyder, Joseph A; Thomson, Cynthia A; Natarajan, Loki; Madlensky, Lisa; Pu, Minya; Emond, Jennifer; Kealey, Sheila; Rock, Cheryl L; Flatt, Shirley W; Pierce, John P
2009-01-01
To assess the cost of adopting a plant-based diet. Breast cancer survivors randomized to dietary intervention (n=1109) or comparison (n=1145) group; baseline and 12-month data on diet and grocery costs. At baseline, both groups reported similar food costs and dietary intake. At 12 months, only the intervention group changed their diet (vegetable-fruit: 6.3 to 8.9 servings/day; fiber: 21.6 to 29.8 g/day; fat: 28.2 to 22.3% of energy). The intervention change was associated with a significant increase of $1.22/person/week (multivariate model, P=0.027). A major change to a plant-based diet was associated with a minimal increase in grocery costs.
Hastrup, Lene Halling; Kronborg, Christian; Bertelsen, Mette; Jeppesen, Pia; Jorgensen, Per; Petersen, Lone; Thorup, Anne; Simonsen, Erik; Nordentoft, Merete
2013-01-01
Information about the cost-effectiveness of early intervention programmes for first-episode psychosis is limited. To evaluate the cost-effectiveness of an intensive early-intervention programme (called OPUS) (trial registration NCT00157313) consisting of enriched assertive community treatment, psychoeducational family treatment and social skills training for individuals with first-episode psychosis compared with standard treatment. An incremental cost-effectiveness analysis of a randomised controlled trial, adopting a public sector perspective, was undertaken. The mean total costs of OPUS over 5 years (€123,683, s.e. = €8,970) were not significantly different from those of standard treatment (€148,751, s.e. = €13,073). At 2-year follow-up the mean Global Assessment of Functioning (GAF) score in the OPUS group (55.16, s.d. = 15.15) was significantly higher than in the standard treatment group (51.13, s.d. = 15.92). However, the mean GAF did not differ significantly between the groups at 5-year follow-up (55.35 (s.d. = 18.28) and 54.16 (s.d. = 18.41), respectively). Cost-effectiveness planes based on non-parametric bootstrapping showed that OPUS was less costly and more effective in 70% of the replications. For a willingness-to-pay of up to €50,000, the probability that OPUS was cost-effective was more than 80%. The incremental cost-effectiveness analysis showed that there was a high probability of OPUS being cost-effective compared with standard treatment.
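The bootstrap logic behind a cost-effectiveness plane and the quoted probabilities can be sketched as follows; the costs and GAF effects below are synthetic stand-ins, not OPUS trial data:

```python
# Non-parametric bootstrap for a cost-effectiveness plane and a CEAC point.
import numpy as np

rng = np.random.default_rng(6)
n = 200  # invented per-arm sample size
cost_tx = rng.normal(123_683, 40_000, n)   # intervention arm costs
cost_std = rng.normal(148_751, 45_000, n)  # standard treatment costs
eff_tx = rng.normal(55.2, 15.2, n)         # GAF, intervention
eff_std = rng.normal(51.1, 15.9, n)        # GAF, standard

B, wtp = 5000, 50_000
inc_cost, inc_eff = np.empty(B), np.empty(B)
for b in range(B):
    i = rng.integers(0, n, n)              # resample each arm with replacement
    j = rng.integers(0, n, n)
    inc_cost[b] = cost_tx[i].mean() - cost_std[j].mean()
    inc_eff[b] = eff_tx[i].mean() - eff_std[j].mean()

dominant = np.mean((inc_cost < 0) & (inc_eff > 0))    # cheaper and better
p_ce = np.mean(wtp * inc_eff - inc_cost > 0)          # net benefit > 0 at WTP
print(f"dominant in {dominant:.0%} of replications; "
      f"P(cost-effective at WTP) = {p_ce:.0%}")
```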
McConnell, Eileen Diaz
2013-01-01
Housing costs are a substantial component of U.S. household expenditures. Those who allocate a large proportion of their income to housing often have to make difficult financial decisions with significant short-term and long-term implications for adults and children. This study employs cross-sectional data from the first wave of the Los Angeles Family and Neighborhood Survey (L.A.FANS) collected between 2000 and 2002 to examine the most common U.S. standard of housing affordability, the likelihood of spending thirty percent or more of income on shelter costs. Multivariate analyses of a low-income sample of U.S. born Latinos, Whites, African Americans, authorized Latino immigrants and unauthorized Latino immigrants focus on baseline and persistent differences in the likelihood of being cost burdened by race, nativity and legal status. Nearly half or more of each group of low-income respondents experience housing affordability problems. The results suggest that immigrants’ legal status is the primary source of disparities among those examined, with the multivariate analyses revealing large and persistent disparities for unauthorized Latino immigrants relative to most other groups. Moreover, the higher odds of housing cost burden observed for unauthorized immigrants compared with their authorized immigrant counterparts remains substantial, accounting for traditional indicators of immigrant assimilation. These results are consistent with emerging scholarship regarding the role of legal status in shaping immigrant outcomes in the United States. PMID:24077641
Rapid and Simultaneous Prediction of Eight Diesel Quality Parameters through ATR-FTIR Analysis.
Nespeca, Maurilio Gustavo; Hatanaka, Rafael Rodrigues; Flumignan, Danilo Luiz; de Oliveira, José Eduardo
2018-01-01
Quality assessment of diesel fuel is highly necessary for society, but the costs and time spent are very high while using standard methods. Therefore, this study aimed to develop an analytical method capable of simultaneously determining eight diesel quality parameters (density; flash point; total sulfur content; distillation temperatures at 10% (T10), 50% (T50), and 85% (T85) recovery; cetane index; and biodiesel content) through attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy and the multivariate regression method, partial least squares (PLS). For this purpose, the quality parameters of 409 samples were determined using standard methods, and their spectra were acquired in the range of 4000–650 cm⁻¹. The use of the multivariate filters, generalized least squares weighting (GLSW) and orthogonal signal correction (OSC), was evaluated to improve the signal-to-noise ratio of the models. Likewise, four variable selection approaches were tested: manual exclusion, forward interval PLS (FiPLS), backward interval PLS (BiPLS), and genetic algorithm (GA). The multivariate filters and variable selection algorithms generated better-fitted and more accurate PLS models. According to the validation, the FTIR/PLS models presented accuracy comparable to the reference methods and, therefore, the proposed method can be applied in routine diesel monitoring to significantly reduce costs and analysis time.
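The chemometric core (PLS regression from spectra to a quality parameter, assessed by cross-validation) can be sketched as follows; the spectra here are simulated stand-ins for the 409 real samples:

```python
# Minimal PLS-regression sketch: ATR-FTIR spectra -> one quality parameter.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
n_samples, n_wavenumbers = 409, 600          # stand-in spectral grid
X = rng.normal(size=(n_samples, n_wavenumbers))
true_w = rng.normal(size=n_wavenumbers)
y = X @ true_w * 0.01 + 0.84 + rng.normal(0, 0.002, n_samples)  # e.g. density

pls = PLSRegression(n_components=10)          # component count by CV in practice
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))    # cross-validated error
print(f"RMSECV = {rmsecv:.4f}")
```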
Rephasing invariant parametrization of flavor mixing
NASA Astrophysics Data System (ADS)
Lee, Tae-Hun
A new rephasing invariant parametrization for the 3 × 3 CKM matrix, called the (x, y) parametrization, is introduced and the properties and applications of the parametrization are discussed. The overall phase condition leads this parametrization to have only six rephasing invariant parameters and two constraints. Its simplicity and regularity become apparent when it is applied to the one-loop RGE (renormalization group equations) for the Yukawa couplings. The implications of this parametrization for unification of the Yukawa couplings are also explored.
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
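The abstract names the analytic hierarchy process (AHP) for weighing design alternatives against cost-risk, performance and schedule criteria. As a minimal sketch, assuming a made-up 3x3 pairwise comparison matrix, the standard principal-eigenvector weighting and consistency check look like this:

```python
# AHP sketch: derive criterion weights from a pairwise comparison matrix.
# The matrix (cost-risk vs. performance vs. schedule) is a made-up example.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                       # criterion weights from principal eigenvector

lam_max = vals.real[k]
n = A.shape[0]
ci = (lam_max - n) / (n - 1)       # consistency index
cr = ci / 0.58                     # random index for n = 3
print(w, cr)                       # CR < 0.1 is conventionally acceptable
```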
NASA Technical Reports Server (NTRS)
Sterk, Steve; Chesley, Stephan
2008-01-01
The upcoming retirement of the Baby Boomers will leave a workforce age gap between the younger generation (the future NASA decision makers) and the gray beards. This paper will reflect on the average age of the workforce across NASA Centers, the Aerospace Industry and other Government Agencies, like DoD. This paper will dig into Productivity and Realization Factors and how they are applied to bi-monthly (payroll) data for true full-time equivalent (FTE) calculations that could be used at each of the NASA Centers and in other business systems that are at the forefront of implementation. This paper offers some comparative cost analyses and solutions, from simple FTE cost-estimating relationships (CERs) to CERs for monthly time-phasing activities for small research projects that start and get completed within a government fiscal year. This paper will present the results of a parametric study investigating the cost-effectiveness of alternative performance-based CERs and how they are applied in the Center's forward pricing rate proposals (FPRP). True CERs based on the relationship of a younger-aged workforce will have some effects on labor rates used in both commercial cost models and other internal home-grown cost models, which may impact the productivity factors for future NASA missions.
Feasibility study of modern airships, phase 1. Volume 3: Historical overview (task 1)
NASA Technical Reports Server (NTRS)
Faurote, G. L.
1975-01-01
The history of lighter-than-air vehicles is reviewed in terms of providing a background for the mission analysis and parametric analysis tasks. Data from past airships and airship operations are presented in the following areas: (1) parameterization of design characteristics; (2) markets, missions, costs, and operating procedures; (3) indices of efficiency for comparison; (4) identification of critical design and operational characteristics; and (5) definition of the 1930 state-of-the-art and the 1974 state-of-the-art from a technical and economic standpoint.
1981-03-01
An advanced iso-parametric element is also being developed specifically for the analysis of disbonds and internal flaws in composite laminates. [Remainder of this record is unrecoverable report-form residue and figure captions: NOMAD structural fatigue test; failed NOMAD strut upper end fitting; fracture faces of failed specimen.]
Evaluation of automobiles with alternative fuels utilizing multicriteria techniques
NASA Astrophysics Data System (ADS)
Brey, J. J.; Contreras, I.; Carazo, A. F.; Brey, R.; Hernández-Díaz, A. G.; Castro, A.
This work applies the non-parametric technique of Data Envelopment Analysis (DEA) to conduct a multicriteria comparison of some existing and under-development technologies in the automotive sector. The results indicate that some of the technologies under development, such as hydrogen fuel cell vehicles, can be classified as efficient when evaluated as a function of environmental and economic criteria, with greater importance being given to the environmental criteria. The article also demonstrates the need to improve hydrogen-based technology, in comparison with the others, in aspects such as vehicle sale costs and fuel price.
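For the DEA step, a minimal sketch of an input-oriented CCR efficiency score computed by linear programming; the input/output matrix is an invented placeholder (e.g. cost inputs and environmental outputs per technology):

```python
# Input-oriented CCR DEA efficiency for one decision-making unit ("DMU 0").
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 6.0, 5.0, 9.0],    # inputs  (rows) x DMUs (cols), invented
              [3.0, 2.0, 4.0, 5.0]])
Y = np.array([[2.0, 3.0, 2.5, 4.0]])   # outputs (rows) x DMUs (cols), invented
j0 = 0                                  # DMU under evaluation

m, n = X.shape
s = Y.shape[0]
c = np.r_[1.0, np.zeros(n)]                  # decision vars: [theta, lambda_1..n]
A_in = np.c_[-X[:, [j0]], X]                 # X @ lam <= theta * x0
A_out = np.c_[np.zeros((s, 1)), -Y]          # Y @ lam >= y0
A_ub = np.r_[A_in, A_out]
b_ub = np.r_[np.zeros(m), -Y[:, j0]]
bounds = [(0, None)] * (n + 1)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x[0])   # efficiency score theta in (0, 1]; 1 means efficient
```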
Concept definition study of small Brayton cycle engines for dispersed solar electric power systems
NASA Technical Reports Server (NTRS)
Six, L. D.; Ashe, T. L.; Dobler, F. X.; Elkins, R. T.
1980-01-01
Three first-generation Brayton cycle engine types were studied for solar application: a near-term open cycle (configuration A), a near-term closed cycle (configuration B), and a longer-term open cycle (configuration C). A parametric performance analysis was carried out to select engine designs for the three configurations. The interface requirements for the Brayton cycle engine/generator and solar receivers were determined. A technology assessment was then carried out to define production costs, durability, and growth potential for the selected engine types.
Resource costs for asthma-related care among pediatric patients in managed care.
Gendo, Karna; Sullivan, Sean D; Lozano, Paula; Finkelstein, Jonathan A; Fuhlbrigge, Anne; Weiss, Kevin B
2003-09-01
In 1998, the economic burden of asthma in the United States was estimated to be $12.7 billion. Yet few studies have examined the relationship between the total costs of asthma-related care and measures of asthma morbidity. Understanding the relationship between total costs of asthma-related care and morbidity can assist in designing the most cost-effective asthma care strategies to improve patient outcomes and minimize total costs. To investigate correlates of asthma costs for children with mild-to-moderate persistent asthma and, specifically, to characterize how closely the percentage of predicted forced expiratory volume in 1 second (FEV1) and symptom days were correlated with costs of illness. A total of 638 parents and children with mild-to-moderate persistent asthma in 4 managed care delivery systems in 3 different US geographic regions were enrolled. Symptom burden and annual resource utilization were determined from reports of physician visits, hospitalizations, emergency department visits, medication use, and parental missed workdays. Spirometry was conducted on children who were 5 years and older. To characterize the relationship between symptom days and the percentage of predicted FEV1 with costs, we specified a multivariate regression model. The median total annual asthma-related cost for the group was $564 (interquartile range [IQR], $131-$1602). Indirect costs represented 54.6% of total costs. Medicines accounted for 52.6% of direct costs. The mean percentage of predicted FEV1 was 101.6% (range, 39.3%-183.5%; IQR, 91.6%-111.3%), with 91.4% of patients having a percentage of predicted FEV1 of more than 80%. Based on multivariate modeling, increasing asthma severity, use of peak expiratory flow rate meters, younger age, low-income status and nonwhite race, and longer duration of asthma were significantly associated with increasing cost. Symptom days (P < 0.001) predicted annual costs better than percentage of predicted FEV1 (P < 0.16) in this group of children. For the large number of children with mild-to-moderate persistent asthma and normal or near-normal lung function, symptom days are predictive of health care costs. For these insured children receiving care from 3 large managed care providers, low-income status and nonwhite race were the strongest correlates for increased asthma-related costs.
NASA Technical Reports Server (NTRS)
Eder, D.
1992-01-01
Parametric models were constructed for Earth-based laser-powered electric orbit transfer from low Earth orbit to geosynchronous orbit. These models were used to carry out performance, cost/benefit, and sensitivity analyses of laser-powered transfer systems, including end-to-end life cycle cost analyses for complete systems. Comparisons with conventional orbit transfer systems were made, indicating large potential cost savings for laser-powered transfer. Approximate optimization was done to determine the best parameter values for the systems. Orbit transfer flight simulations were conducted to explore effects of parameters not practical to model with a spreadsheet. The simulations considered view factors that determine when power can be transferred from ground stations to an orbit transfer vehicle, and conducted sensitivity analyses for numbers of ground stations, Isp (including dual-Isp transfers), and plane change profiles. Optimal steering laws were used for simultaneous altitude and plane change. Viewing geometry and low-thrust orbit raising were simultaneously simulated. A very preliminary investigation of relay mirrors was made.
Applications of a High-Altitude Powered Platform (HAPP)
NASA Technical Reports Server (NTRS)
Kuhner, M. B.; Earhart, R. W.; Madigan, J. A.; Ruck, G. T.
1977-01-01
A list of potential uses for the HAPP and conceptual system designs for a small subset of the most promising applications were investigated. The method was to postulate a scenario for each application, specifying a user, a set of system requirements and the most likely competitor among conventional aircraft and satellite systems. As part of the study of remote sensing applications, a parametric cost comparison was made between aircraft and HAPPs. For most remote sensing applications, aircraft can supply the same data as HAPPs at substantially lower cost. The critical parameters in determining the relative costs of the two systems are the sensor field of view and the required frequency of the observations being made. The HAPP is only competitive with an airplane when sensors having a very wide field of view are appropriate and when the phenomenon being observed must be viewed at least once per day. This eliminates the majority of remote sensing applications from any further consideration.
NASA Technical Reports Server (NTRS)
Hals, F. A.
1981-01-01
Plants with a nominal output of 200 and 500 MWe and conforming to the same design configuration as the Task II plant were investigated. This information is intended to permit an assessment of the competitiveness of first-generation MHD/steam plants with conventional steam plants over the range of 200 to 1000 MWe. The results show that the net plant efficiency of the MHD plant is significantly higher than that of a conventional steam plant of corresponding size. The cost of electricity is also less for the MHD plant over the entire plant size range. As expected, the cost differential is greater for the larger plants and decreases with decreasing plant size. Even at the 200 MWe capacity, however, the differential in COE between the MHD plant and the conventional plant is sufficiently attractive to warrant serious consideration. Escalating fuel costs will enhance the competitive position of MHD plants because they can utilize fuel more efficiently than conventional steam plants.
Final report on evaluation of cyclocraft support of oil and gas operations in wetland areas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eggington, W.J.; Stevens, P.M.; John, C.J.
1994-10-01
The cyclocraft is a proven hybrid aircraft, capable of VTOL, lifting heavy and bulky loads, highly controllable, having high safety characteristics and low operating costs. Mission Research Corporation (MRC), under Department of Energy sponsorship, is evaluating the potential use of cyclocraft in the transport of drill rigs, mud, pipes and other materials and equipment, in a cost-effective and environmentally safe manner, to support oil and gas drilling, production, and transportation operations in wetland areas. Based upon the results of an earlier parametric study, a cyclocraft design having a payload capacity of 45 tons, designated the H.1 Cyclocraft, was selected for further study, including the preparation of a preliminary design and a development plan, and the determination of operating costs. This report contains all of the results derived from the program to evaluate the use of cyclocraft in the support of oil and gas drilling and production operations in wetland areas.
Development of a solar-powered residential air conditioner: Screening analysis
NASA Technical Reports Server (NTRS)
1975-01-01
Screening analyses aimed at the definition of an optimum configuration of a Rankine cycle solar-powered air conditioner designed for residential application were conducted. Initial studies revealed that system performance and cost were extremely sensitive to condensing temperature and to the type of condenser used in the system. Consequently, the screening analyses were concerned with the generation of parametric design data for different condenser approaches, i.e., (1) an ambient air condenser, (2) a humidified ambient air condenser, (3) an evaporative condenser, and (4) a water condenser (with a cooling tower). All systems feature a high-performance turbocompressor and a single refrigerant (R-11) for the power and refrigeration loops. Data were obtained by computerized methods developed to permit system characterization over a broad range of operating and design conditions. The criteria used for comparison of the candidate system approaches were (1) overall system COP (refrigeration effect/solar heat input), (2) auxiliary electric power for fans and pumps, and (3) system installed cost or cost to the user.
Numerical modeling and model updating for smart laminated structures with viscoelastic damping
NASA Astrophysics Data System (ADS)
Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan
2018-07-01
This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented utilizing the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that a good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.
Parametric study of potential early commercial MHD power plants
NASA Technical Reports Server (NTRS)
Hals, F. A.
1979-01-01
Three different reference power plant configurations were considered with parametric variations of the various design parameters for each plant. Two of the reference plant designs were based on the use of high temperature regenerative air preheaters separately fired by a low Btu gas produced from a coal gasifier which was integrated with the power plant. The third reference plant design was based on the use of oxygen enriched combustion air preheated to a more moderate temperature in a tubular type metallic recuperative heat exchanger which is part of the bottoming plant heat recovery system. Comparative information was developed on plant performance and economics. The highest net plant efficiency of about 45 percent was attained by the reference plant design with the use of a high temperature air preheater separately fired with the advanced entrained bed gasifier. The use of oxygen enrichment of the combustion air yielded the lowest cost of generating electricity at a slightly lower plant efficiency. Both of these two reference plant designs are identified as potentially attractive for early MHD power plant applications.
Femtosecond OPO based on MgO:PPLN synchronously pumped by a 532 nm fiber laser
NASA Astrophysics Data System (ADS)
Cao, Jianjun; Shen, Dongyi; Zheng, Yuanlin; Feng, Yaming; Kong, Yan; Wan, Wenjie
2017-05-01
With the rapid progress in fiber technologies, femtosecond fiber lasers, which are compact, cost-effective and stable, have been developed and are commercially available. Studies of optical parametric oscillators (OPOs) pumped by this type of laser are therefore in demand. Here we report a femtosecond OPO at a 79.6 MHz repetition rate based on MgO-doped periodically poled LiNbO3 (MgO:PPLN), synchronously pumped by the integrated second harmonic radiation of a femtosecond fiber laser at 532 nm. The signal delivered by the singly resonant OPO is continuously tunable from 757 to 797 nm by tuning the crystal temperature in a poling period of 7.7 μm. The output signal shows good beam quality with a TEM00 mode profile and a pulse duration of 206 fs at 771 nm. A maximum output signal power of 71 mW is obtained for a pump power of 763 mW, and a low pumping threshold of 210 mW is measured. Moreover, grating tuning and cavity length tuning of the signal wavelength are also investigated.
Parametric analysis of parameters for electrical-load forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael
1997-04-01
Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
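As a rough sketch of the experimental setup described above (a small neural network fed the basic reference inputs plus candidate inputs such as day of week and dew point), assuming simulated data and sklearn's MLP rather than the authors' network:

```python
# Toy load-forecasting sketch: candidate inputs -> hourly load prediction.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 5000
temp = rng.normal(25, 8, n)                 # ambient temperature
dew = temp - rng.uniform(2, 10, n)          # dew point temperature
dow = rng.integers(0, 7, n)                 # day of week
prev_load = rng.normal(1000, 150, n)        # recent total load
load = 0.6 * prev_load + 12 * np.maximum(temp - 22, 0) \
       - 40 * (dow >= 5) + 2 * dew + rng.normal(0, 30, n)

X = np.c_[temp, dew, dow, prev_load]
X_tr, X_te, y_tr, y_te = train_test_split(X, load, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print(mlp.fit(X_tr, y_tr).score(X_te, y_te))  # R^2 on held-out hours
```

Dropping a column from X and refitting gives a crude measure of how much that input improves the forecast, which is the spirit of the parametric study.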
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆
López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
Tan, Ziwen; Qin, Guoyou; Zhou, Haibo
2016-01-01
Outcome-dependent sampling (ODS) designs have been well recognized as a cost-effective way to enhance study efficiency in both the statistical literature and biomedical and epidemiologic studies. A partially linear additive model (PLAM) is widely applied in real problems because it allows for a flexible specification of the dependence of the response on some covariates in a linear fashion and on other covariates in a nonlinear, non-parametric fashion. Motivated by an epidemiological study investigating the effect of prenatal polychlorinated biphenyls exposure on children's intelligence quotient (IQ) at age 7 years, we propose a PLAM in this article to investigate a more flexible non-parametric inference on the relationships among the response and covariates under the ODS scheme. We propose the estimation method and establish the asymptotic properties of the proposed estimator. Simulation studies are conducted to show the improved efficiency of the proposed ODS estimator for the PLAM compared with that from a traditional simple random sampling design with the same sample size. The data from the above-mentioned study are analyzed to illustrate the proposed method. PMID:27006375
The Nimrod computational workbench: a case study in desktop metacomputing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abramson, D.; Sosic, R.; Foster, I.
The coordinated use of geographically distributed computers, or metacomputing, can in principle provide more accessible and cost-effective supercomputing than conventional high-performance systems. However, we lack evidence that metacomputing systems can be made easily usable, or that there exist large numbers of applications able to exploit metacomputing resources. In this paper, we present work that addresses both these concerns. The basis for this work is a system called Nimrod that provides a desktop problem-solving environment for parametric experiments. We describe how Nimrod has been extended to support the scheduling of computational resources located in a wide-area environment, and report on an experiment in which Nimrod was used to schedule a large parametric study across the Australian Internet. The experiment provided both new scientific results and insights into Nimrod capabilities. We relate the results of this experiment to lessons learned from the I-WAY distributed computing experiment, and draw conclusions as to how Nimrod and I-WAY-like computing environments should be developed to support desktop metacomputing.
Castillo, Maria Isabel; Larsen, Emily; Cooke, Marie; Marsh, Nicole M; Wallis, Marianne C; Finucane, Julie; Brown, Peter; Mihala, Gabor; Carr, Peter J; Byrnes, Joshua; Walker, Rachel; Cable, Prudence; Zhang, Li; Sear, Candi; Jackson, Gavin; Rowsome, Anna; Ryan, Alison; Humphries, Julie C; Sivyer, Susan; Flanigan, Kathy; Rickard, Claire M
2018-05-14
Peripheral intravenous catheters (PIVCs) are frequently used in hospitals. However, PIVC complications are common, with failures leading to treatment delays, additional procedures, patient pain and discomfort, increased clinician workload and substantially increased healthcare costs. Recent evidence suggests integrated PIVC systems may be more effective than traditional non-integrated PIVC systems in reducing phlebitis, infiltration and costs and increasing functional dwell time. The study aim is to determine the efficacy, cost-utility and acceptability to patients and professionals of an integrated PIVC system compared with a non-integrated PIVC system. Two-arm, multicentre, randomised controlled superiority trial of integrated versus non-integrated PIVC systems to compare effectiveness on clinical and economic outcomes. Recruitment of 1560 patients over 2 years, with randomisation by a centralised service ensuring allocation concealment. Primary outcome: catheter failure (composite endpoint) for reasons of occlusion, infiltration/extravasation, phlebitis/thrombophlebitis, dislodgement, or localised or catheter-associated bloodstream infections. Secondary outcomes: first-time insertion success, types of PIVC failure, device colonisation, insertion pain, functional dwell time, adverse events, mortality, cost-utility and consumer acceptability. One PIVC per patient will be included, with intention-to-treat analysis. Baseline group comparisons will be made for potentially clinically important confounders. The proportional hazards assumption will be checked, and Cox regression will test the effect of group, patient, device and clinical variables on failure. An as-treated analysis will assess the effect of protocol violations. Kaplan-Meier survival curves with log-rank tests will compare failure by group over time. Secondary endpoints will be compared between groups using parametric/non-parametric techniques. Ethical approval from the Royal Brisbane and Women's Hospital Human Research Ethics Committee (HREC/16/QRBW/527), Griffith University Human Research Ethics Committee (Ref No. 2017/002) and the South Metropolitan Health Services Human Research Ethics Committee (Ref No. 2016-239). Results will be published in peer-reviewed journals. ACTRN12617000089336. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
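The planned primary analysis (Kaplan-Meier curves by arm with log-rank tests) can be sketched as follows, on simulated rather than trial data:

```python
# Kaplan-Meier + log-rank sketch for catheter failure over dwell time.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(9)
n = 780  # invented per-arm size
t_int = rng.exponential(120, n).clip(max=168)   # hours to failure, integrated
t_std = rng.exponential(90, n).clip(max=168)    # non-integrated
e_int = (t_int < 168).astype(int)               # 0 = censored at end of dwell
e_std = (t_std < 168).astype(int)

km = KaplanMeierFitter()
km.fit(t_int, e_int, label="integrated PIVC")
print(km.median_survival_time_)                 # median failure-free dwell time

res = logrank_test(t_int, t_std, event_observed_A=e_int, event_observed_B=e_std)
print(res.p_value)
```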
Advanced Technology Lifecycle Analysis System (ATLAS)
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.; Mankins, John C.
2004-01-01
Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts in system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric cost model to determine the costs. Also, the integrator estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is satisfied with the system configurations, technology portfolios, and deployment strategies, he or she can present the concepts to a team, which will conduct a detailed, discipline-oriented analysis within a CEE. An analog to this approach is the music industry, where a songwriter creates the lyrics and music before entering a recording studio.
Testing the causality of Hawkes processes with time reversal
NASA Astrophysics Data System (ADS)
Cordi, Marcus; Challet, Damien; Muni Toke, Ioane
2018-03-01
We show that univariate and symmetric multivariate Hawkes processes are only weakly causal: the true log-likelihoods of real and reversed event time vectors are almost equal, thus parameter estimation via maximum likelihood only weakly depends on the direction of the arrow of time. In ideal (synthetic) conditions, tests of goodness of parametric fit unambiguously reject backward event times, which implies that inferring kernels from time-symmetric quantities, such as the autocovariance of the event rate, only rarely produce statistically significant fits. Finally, we find that fitting financial data with many-parameter kernels may yield significant fits for both arrows of time for the same event time vector, sometimes favouring the backward time direction. This goes to show that a significant fit of Hawkes processes to real data with flexible kernels does not imply a definite arrow of time unless one tests it.
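The core computation, the Hawkes log-likelihood evaluated on forward and time-reversed event vectors, can be sketched for the univariate exponential kernel; the parameters and event times below are synthetic:

```python
# Exponential-kernel Hawkes log-likelihood (Ogata-style recursion), evaluated
# on an event-time vector and on its time-reversed copy.
import numpy as np

def hawkes_loglik(t, mu, alpha, beta, T):
    """log-likelihood for lambda(s) = mu + alpha * sum_{t_i < s} exp(-beta(s-t_i))."""
    A, ll, prev = 0.0, 0.0, None
    for ti in t:
        if prev is not None:
            A = np.exp(-beta * (ti - prev)) * (1.0 + A)  # recursive excitation
        ll += np.log(mu + alpha * A)
        prev = ti
    # compensator: integral of the intensity over [0, T]
    comp = mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - t)))
    return ll - comp

rng = np.random.default_rng(10)
t = np.sort(rng.uniform(0, 100, 300))   # stand-in event times
t_rev = np.sort(100 - t)                # time-reversed events
for ev in (t, t_rev):
    print(hawkes_loglik(ev, mu=1.0, alpha=0.5, beta=1.0, T=100))
```

Comparing the two printed values for fitted parameters is the paper's weak-causality test: for Hawkes data the forward and backward log-likelihoods come out nearly equal.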
A probabilistic framework to infer brain functional connectivity from anatomical connections.
Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel
2011-01-01
We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.
Design and experiment of data-driven modeling and flutter control of a prototype wing
NASA Astrophysics Data System (ADS)
Lum, Kai-Yew; Xu, Cai-Lin; Lu, Zhenbo; Lai, Kwok-Leung; Cui, Yongdong
2017-06-01
This paper presents an approach for data-driven modeling of aeroelasticity and its application to flutter control design of a wind-tunnel wing model. Modeling is centered on system identification of unsteady aerodynamic loads using computational fluid dynamics data, and adopts a nonlinear multivariable extension of the Hammerstein-Wiener system. The formulation is in modal coordinates of the elastic structure, and yields a reduced-order model of the aeroelastic feedback loop that is parametrized by airspeed. Flutter suppression is thus cast as a robust stabilization problem over uncertain airspeed, for which a low-order H∞ controller is computed. The paper discusses in detail parameter sensitivity and observability of the model, the former to justify the chosen model structure, and the latter to provide a criterion for physical sensor placement. Wind tunnel experiments confirm the validity of the modeling approach and the effectiveness of the control design.
A cost-consequences analysis of an adherence focused pharmacist-led medication review service.
Desborough, James A; Sach, Tracey; Bhattacharya, Debi; Holland, Richard C; Wright, David J
2012-02-01
The aim of this project was to conduct an economic evaluation of the Norfolk Medicines Support Service (NMSS), a pharmacist-led medication review service for patients identified in primary care as non-adherent. The cost-consequences analysis was based on a before and after evaluation of the NMSS. Participants completed a self-reported adherence and health-related quality of life questionnaire prior to the review, at 6 weeks and 6 months. Service provision, prescribing and secondary care costs were considered and the mean cost before and after the intervention was calculated. One-hundred and seventeen patients were included in the evaluation. The mean cost per patient of prescribing and hospital admissions in the 6 months prior to the intervention was £2190 and in the 6 months after intervention £1883. This equates to a mean cost saving of £307 per patient (parametric 95% confidence interval: £1269 to £655). The intervention reduced emergency hospital admissions and increased medication adherence but no significant change in health-related quality of life was observed. The costs of providing this medication review service were offset by the reduction in emergency hospital admissions and savings in medication cost, assuming the findings of the evaluation were real and the regression to the mean phenomenon was not involved. This cost-consequences approach provides a transparent descriptive summary for decision-makers to use as the basis for resource allocation decisions. © 2011 The Authors. IJPP © 2011 Royal Pharmaceutical Society.
An Application of Discriminant Analysis to the Selection of Software Cost Estimating Models.
1984-09-01
[Fragmentary record] ...the PRICE S User's Manual (29:111-25) was used with a slight modification, based on the experience and advice of Captain Joe Dean, Electronic System... this study, and EXP is the expansion factor listed in the PRICE S User's Manual. Another important factor needing explanation is development cost... coefficients and a unique constant. According to the SPSS manual (26:445), "Under the assumption of a multivariate normal distribution..."
Henke, Rachel M; Carls, Ginger S; Short, Meghan E; Pei, Xiaofei; Wang, Shaohung; Moley, Susan; Sullivan, Mark; Goetzel, Ron Z
2010-05-01
To evaluate relationships between modifiable health risks and costs and to measure potential cost savings from risk reduction programs. Health risk information from active Pepsi Bottling Group employees who completed health risk assessments between 2004 and 2006 (N = 11,217) was linked to medical care, workers' compensation, and short-term disability cost data. Ten health risks were examined. Multivariate analyses were performed to estimate costs associated with having high risk, holding demographics and other risks constant. Potential savings from risk reduction were estimated. High risk for weight, blood pressure, glucose, and cholesterol had the greatest impact on total costs. A one-percentage-point annual reduction in the health risks assessed would yield annual per capita savings of $83.02 to $103.39. Targeted programs that address modifiable health risks are expected to produce substantial cost reductions in multiple benefit categories.
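A minimal sketch of the kind of risk-to-cost regression described, using synthetic data and illustrative variable names (not the study's actual covariates), might look like this:

```python
# Hedged sketch: regress annual cost on a high-risk indicator while
# holding demographics constant. Data and coefficients are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
age = rng.integers(20, 65, n)
female = rng.integers(0, 2, n)
high_bp = rng.integers(0, 2, n)                      # one of the ten risks
cost = 2000 + 40 * age + 300 * female + 900 * high_bp \
       + rng.gamma(2.0, 500.0, n)                    # skewed cost noise

X = sm.add_constant(np.column_stack([age, female, high_bp]))
fit = sm.OLS(cost, X).fit()
print(fit.params[-1])   # adjusted incremental cost of the high-BP risk
```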
Lahiri, Supriya; Tempesti, Tommaso; Gangopadhyay, Somnath
2016-02-01
To estimate cost-effectiveness ratios and net costs of a training intervention to reduce morbidity among porters who carry loads without mechanical assistance in a developing-country informal sector setting. Pre- and post-intervention survey data (n = 100) were collected in a prospective study: differences in physical/mental composite scores and pain scale scores were computed. Costs and economic benefits of the intervention were monetized with a net-cost model. Significant changes in physical composite scores (2.5), mental composite scores (3.2), and pain scale scores (-1.0) led to cost-effectiveness ratios of $6.97, $5.41, and $17.91, respectively. Multivariate analysis showed that program adherence enhanced effectiveness. The net cost of the intervention was -$5979.00 due to a reduction in absenteeism. Workplace ergonomic training is cost-effective and should be implemented where other engineering-control interventions are precluded by infrastructural constraints.
NASA Technical Reports Server (NTRS)
Sterk, Steve; Chesley, Stephen
2008-01-01
The upcoming retirement of the Baby Boomers will leave a performance gap between the younger generation (the future NASA decision makers) and the graybeards. This paper reflects on the average age of the workforce across NASA Centers, the aerospace industry, and other government agencies such as DoD. It digs into productivity and realization factors and how they are applied to bimonthly payroll data for true full-time equivalent (FTE) calculations that could be used at each of the NASA Centers and in other business systems now being implemented. The paper offers some comparative cost solutions, from simple FTE cost estimating relationships (CERs) to complex CERs for monthly time-phasing of small research projects that start and are completed within a single government fiscal year. It presents the results of a parametric study investigating the cost-effectiveness of alternative performance-based CERs and how they feed into a Center's forward pricing rate proposals (FPRP). True CERs that reflect a younger workforce will affect the labor rates used in both commercial cost models and internal home-grown cost models, which may impact the productivity factors for future NASA missions.
Evaluation of natural mandibular shape asymmetry: an approach by using elliptical Fourier analysis.
Niño-Sandoval, Tania C; Morantes Ariza, Carlos F; Infante-Contreras, Clementina; Vasconcelos, Belmiro Ce
2018-04-05
The purpose of this study was to demonstrate that asymmetry is a naturally occurring phenomenon in mandibular shape by using elliptical Fourier analysis. 164 digital orthopantomographs from Colombian patients of both sexes aged 18 to 25 years were collected. Curves from the left and right hemimandibles were digitized, and an elliptical Fourier analysis was performed with 20 harmonics. For general sexual dimorphism, a principal component analysis (PCA) and a Hotelling T² test in the multivariate warp space were employed. Exploratory analysis of general asymmetry and sexual dimorphism by side was made with a Procrustes fit. A non-parametric multivariate analysis of variance (MANOVA) was applied to assess differentiation of skeletal classes for each hemimandible, and a Procrustes analysis of variance (ANOVA) was applied to search for any relation between skeletal class and side in both sexes. Significant values were found for general asymmetry, general sexual dimorphism, dimorphism by side (p < 0.0001), asymmetry by sex, and differences between Classes I, II, and III (p < 0.005). However, no relation between skeletal class and side was found. Mandibular shape asymmetry is present in all patients and should not be attributed exclusively to pathological processes; therefore, along with sexual dimorphism and differences between skeletal classes, it must be taken into account to improve mandibular prediction systems.
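As a hedged illustration of the core technique, the snippet below computes 20-harmonic elliptical Fourier descriptors for a synthetic closed outline using the pyefd package; the package choice and contour are assumptions, since the authors' software is not stated.

```python
# Sketch of elliptical Fourier analysis on a digitized outline, assuming
# the pyefd package. The contour is a synthetic closed curve standing in
# for a hemimandible outline.
import numpy as np
from pyefd import elliptic_fourier_descriptors

t = np.linspace(0, 2 * np.pi, 200)
contour = np.column_stack([np.cos(t) * (1 + 0.2 * np.cos(3 * t)),
                           np.sin(t) * (1 + 0.2 * np.cos(3 * t))])

# 20 harmonics, as in the study; normalization removes size/rotation.
coeffs = elliptic_fourier_descriptors(contour, order=20, normalize=True)
print(coeffs.shape)   # (20, 4): a_n, b_n, c_n, d_n per harmonic
```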
D'Ambrosio, Alessandro; Pagani, Elisabetta; Riccitelli, Gianna C; Colombo, Bruno; Rodegher, Mariaemma; Falini, Andrea; Comi, Giancarlo; Filippi, Massimo; Rocca, Maria A
2017-08-01
To investigate the role of cerebellar sub-regions in motor and cognitive performance in multiple sclerosis (MS) patients. Whole and sub-regional cerebellar volumes, brain volumes, T2 hyperintense lesion volumes (LV), and motor performance scores were obtained from 95 relapse-onset MS patients and 32 healthy controls (HC). MS patients also underwent an evaluation of working memory and processing speed functions. Cerebellar anterior and posterior lobes were segmented using the Spatially Unbiased Infratentorial Toolbox (SUIT) from Statistical Parametric Mapping (SPM12). Multivariate linear regression models assessed the relationship between magnetic resonance imaging (MRI) measures and motor/cognitive scores. Compared to HC, only secondary progressive multiple sclerosis (SPMS) patients had lower cerebellar volumes (total and posterior cerebellum). In MS patients, lower anterior cerebellar volume and brain T2 LV predicted worse motor performance, whereas lower posterior cerebellar volume and brain T2 LV predicted poor cognitive performance. Global measures of brain volume and infratentorial T2 LV were not selected by the final multivariate models. Cerebellar volumetric abnormalities are likely to make an important contribution to explaining motor and cognitive performance in MS patients. Consistent with functional mapping studies, cerebellar posterior-inferior volume accounted for variance in cognitive measures, whereas anterior cerebellar volume accounted for variance in motor performance, supporting the assessment of cerebellar damage at the sub-regional level.
A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions
NASA Technical Reports Server (NTRS)
Foreman, Veronica; Le Moigne, Jacqueline; de Weck, Oliver
2016-01-01
Satellite constellations present unique capabilities and opportunities to Earth orbiting and near-Earth scientific and communications missions, but also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the shortcomings of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one-at-a-time, but much of the complexity in constellation and DSM cost modeling arises from constellation systems level considerations that have not yet been examined. This paper constitutes a survey of the current state-of-the-art in cost estimating techniques with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority weaknesses within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications. To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities must exist; this paper offers insights critical to the future development and implementation of DSM cost estimating tools.
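To make the parametric-CER arithmetic concrete, here is a notional sketch of a mass-based first-unit cost combined with a learning curve over constellation units; the coefficients are placeholders, not values from any validated model.

```python
# Illustrative parametric-CER sketch: theoretical first-unit cost as a
# power law in mass, plus a unit-theory learning curve for n identical
# spacecraft. All coefficients are assumptions for illustration.
import numpy as np

def first_unit_cost(mass_kg, a=1.0, b=0.85):
    """Notional CER: theoretical first-unit cost ~ a * mass^b ($M)."""
    return a * mass_kg**b

def constellation_cost(mass_kg, n_units, learning=0.90):
    """Total recurring cost of n identical units under a learning curve."""
    t1 = first_unit_cost(mass_kg)
    slope = np.log2(learning)               # unit-theory cost slope
    units = np.arange(1, n_units + 1)
    return t1 * np.sum(units**slope)        # sum of per-unit costs

print(constellation_cost(150.0, n_units=24))
```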
Kincaid, D Lawrence; Do, Mai Phuong
2006-01-01
Cost-effectiveness analysis is based on a simple formula: a dollar estimate of the total cost to conduct a program is divided by the number of people estimated to have been affected by it in terms of some intended outcome. The direct, total costs of most communication campaigns are usually available. Estimating the amount of effect that can be attributed to the communication alone, however, is problematic in full-coverage, mass media campaigns where a randomized control group design is not feasible. Single-equation, multiple regression analysis controls for confounding variables but does not adequately address the issue of causal attribution. In this article, multivariate causal attribution (MCA) methods are applied to data from a sample survey of 1,516 married women in the Philippines to obtain a valid measure of the number of new adopters of modern contraceptives that can be causally attributed to a national mass media campaign and to calculate its cost-effectiveness. The MCA analysis uses structural equation modeling to test the causal pathways and to test for endogeneity, biprobit analysis to test for direct effects of the campaign and endogeneity, and propensity score matching to create a statistically equivalent, matched control group that approximates the results that would have been obtained from a randomized control group design. The MCA results support the conclusion that the observed 6.4 percentage point increase in modern contraceptive use can be attributed to the national mass media campaign and to its indirect effects on attitudes toward contraceptives. This net increase represented 348,695 new adopters in the population of married women, at a cost of U.S. $1.57 per new adopter.
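One ingredient of the MCA approach, propensity score matching, can be sketched as follows on synthetic data (the covariates and outcome handling are assumptions, not the survey's specification):

```python
# Hedged sketch of propensity score matching: model exposure probability,
# then match each exposed respondent to the nearest unexposed score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n = 1516
X = rng.standard_normal((n, 4))               # covariates (age, education, ...)
exposed = (X[:, 0] + rng.standard_normal(n) > 0).astype(int)  # campaign recall

# 1. Propensity score: probability of exposure given covariates.
ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]

# 2. Match each exposed woman to the nearest unexposed propensity score.
treated = ps[exposed == 1].reshape(-1, 1)
control = ps[exposed == 0].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(control)
_, idx = nn.kneighbors(treated)

# 3. Adoption rates would then be compared in the matched groups.
print(f"matched {len(treated)} exposed to {len(np.unique(idx))} controls")
```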
NASA Astrophysics Data System (ADS)
Zhang, Lei; Yang, Si-Gang; Wang, Xiao-Jian; Gou, Dou-Dou; Chen, Hong-Wei; Chen, Ming-Hua; Xie, Shi-Zhong
2014-01-01
We report the experimental demonstration of the optical parametric gain generation in the 1 μm regime based on a photonic crystal fiber (PCF) with a zero group velocity dispersion (GVD) wavelength of 1062 nm pumped by a homemade tunable picosecond mode-locked ytterbium-doped fiber laser. A broad parametric gain band is obtained by pumping the PCF in the anomalous GVD regime with a relatively low power. Two separated narrow parametric gain bands are observed by pumping the PCF in the normal GVD regime. The peak of the parametric gain profile can be tuned from 927 to 1038 nm and from 1099 to 1228 nm. This widely tunable parametric gain band can be used for a broad band optical parametric amplifier, large span wavelength conversion or a tunable optical parametric oscillator.
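For context, the standard single-pump fiber OPA relations (textbook theory, not expressions quoted from the paper) connect the gain band to pump power and the GVD-controlled phase mismatch:

```latex
% Standard single-pump fiber OPA relations (textbook theory; notation
% assumed, not taken from the paper): the signal power gain over length L is
\[
  G_s = 1 + \left(\frac{\gamma P_0}{g}\,\sinh(gL)\right)^{2},
  \qquad
  g = \sqrt{(\gamma P_0)^2 - (\kappa/2)^2},
  \qquad
  \kappa = \Delta\beta + 2\gamma P_0 ,
\]
% where \gamma is the fiber nonlinearity, P_0 the pump power, and
% \Delta\beta the linear phase mismatch set by the dispersion about the
% zero-GVD wavelength; pumping in the normal vs anomalous regime changes
% the sign of \Delta\beta and hence the shape of the gain bands.
```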
Rickard, Claire M; Marsh, Nicole M; Webster, Joan; Gavin, Nicole C; Chan, Raymond J; McCarthy, Alexandra L; Mollee, Peter; Ullman, Amanda J; Kleidon, Tricia; Chopra, Vineet; Zhang, Li; McGrail, Matthew R; Larsen, Emily; Choudhury, Md Abu; Keogh, Samantha; Alexandrou, Evan; McMillan, David J; Mervin, Merehau Cindy; Paterson, David L; Cooke, Marie; Ray-Barruel, Gillian; Castillo, Maria Isabel; Hallahan, Andrew; Corley, Amanda; Geoffrey Playford, E
2017-06-15
Around 30% of peripherally inserted central catheters (PICCs) fail from vascular, infectious or mechanical complications. Patients with cancer are at highest risk, and this increases morbidity, mortality and costs. Effective PICC dressing and securement may prevent PICC failure; however, no large randomised controlled trial (RCT) has compared alternative approaches. We designed this RCT to assess the clinical and cost-effectiveness of dressing and securements to prevent PICC failure. Pragmatic, multicentre, 2×2 factorial, superiority RCT of (1) dressings (chlorhexidine gluconate disc (CHG) vs no disc) and (2) securements (integrated securement dressing (ISD) vs securement device (SED)). A qualitative evaluation using a knowledge translation framework is included. Recruitment of 1240 patients will occur over 3 years with allocation concealment until randomisation by a centralised service. For the dressing hypothesis, we hypothesise CHG discs will reduce catheter-associated bloodstream infection (CABSI) compared with no CHG disc. For the securement hypothesis, we hypothesise that ISD will reduce composite PICC failure (infection (CABSI/local infection), occlusion, dislodgement or thrombosis) compared with SED. Secondary outcomes are: types of PICC failure; safety; costs; dressing/securement failure; dwell time; microbial colonisation; reversible PICC complications; and consumer acceptability. Relative incidence rates of CABSI and PICC failure per 100 devices and per 1000 PICC days (with 95% CIs) will summarise treatment impact. Kaplan-Meier survival curves (and the log-rank Mantel-Haenszel test) will compare outcomes over time. Secondary end points will be compared between groups using parametric/non-parametric techniques; p values <0.05 will be considered statistically significant. Ethical approval was obtained from Queensland Health (HREC/15/QRCH/241) and Griffith University (Ref. No. 2016/063). Results will be published. Trial registration number: ACTRN12616000315415. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rachid B. Slimane; Francis S. Lau; Javad Abbasian
2000-10-01
The objective of this program is to develop an economical process for hydrogen production, with no additional carbon dioxide emission, through the thermal decomposition of hydrogen sulfide (H2S) in H2S-rich waste streams to high-purity hydrogen and elemental sulfur. The novel feature of the process being developed is the superadiabatic combustion (SAC) of part of the H2S in the waste stream to provide the thermal energy required for the decomposition reaction, such that no additional energy is required. The program is divided into two phases. In Phase 1, detailed thermochemical and kinetic modeling of the SAC reactor with H2S-rich fuel gas and air/enriched-air feeds is undertaken to evaluate the effects of operating conditions on exit gas products and conversion efficiency, and to identify key process parameters. Preliminary modeling results are used as a basis to conduct a thorough evaluation of SAC process design options, including reactor configuration, operating conditions, and product separation schemes, with respect to potential product yields, thermal efficiency, capital and operating costs, and reliability, ultimately leading to the preparation of a design package and cost estimate for a bench-scale reactor testing system to be assembled and tested in Phase 2 of the program. A detailed parametric testing plan was also developed for process design optimization and model verification in Phase 2. During Phase 2 of this program, IGT, UIC, and industry advisors UOP and BP Amoco will validate the SAC concept through construction of the bench-scale unit and parametric testing. The computer model developed in Phase 1 will be updated with the experimental data and used in future scale-up efforts. The process design will be refined and the cost estimate updated. Market survey and assessment will continue so that a commercial demonstration project can be identified.
Appolloni, L; Sandulli, R; Vetrano, G; Russo, G F
2018-05-15
Marine Protected Areas are considered key tools for the conservation of coastal ecosystems. However, many reserves suffer from problems mainly related to inadequate zoning that often fails to protect areas of high biodiversity and propagule supply while, at the same time, precluding zones of economic importance to local interests. The Gulf of Naples is employed here as a study area to assess the effects of including different conservation features and costs in the reserve design process. In particular, eight scenarios are developed using graph theory to identify propagule source patches, with fishing and exploitation activities as costs-in-use for the local population. Scenarios elaborated by MARXAN, software commonly used for marine conservation planning, are compared using multivariate analyses (MDS, PERMANOVA and PERMDISP) in order to assess which input data have the greatest effects on protected-area selection. MARXAN is heuristic software able to give a number of different correct results, all of them near the best solution. Its outputs show that the most important areas to be protected, in order to ensure long-term habitat persistence and adequate propagule supply, are mainly located around the Gulf islands. In addition, the statistical analyses allowed us to show that different choices of conservation features lead to statistically different scenarios. The presence of propagule supply patches forces MARXAN to select almost the same areas to protect, decreasing the differences among MARXAN results and, thus, narrowing the choices for reserve area selection. The multivariate analyses applied here to marine spatial planning proved very helpful, allowing us to identify (i) how different scenario input data affect MARXAN and (ii) which features have to be taken into account in study areas characterized by particular biological and economic interests. Copyright © 2018 Elsevier Ltd. All rights reserved.
Cost-Performance Parametrics for Transporting Small Packages to the Mars Vicinity
NASA Technical Reports Server (NTRS)
McCleskey, C.; Lepsch, Roger A.; Martin, J.; Popescu, M.
2015-01-01
This paper explores the costs and performance required to deliver a small payload package (CubeSat-sized, for instance) to various transportation nodes en route to Mars and near-Mars destinations (such as the Mars moons, Phobos and Deimos). What is needed is a contemporary assessment and summary compilation of transportation metrics that factor in both the performance and the affordability of modern and emerging delivery capabilities. The paper brings together: (a) the required mass transport gear ratios in delivering payload from Earth's surface to the Mars vicinity, (b) the cyclical energy required for delivery, and (c) the affordability and availability of various means of transporting material across the Earth-Moon vicinity and near-Mars vicinity nodes relevant to Mars transportation. Examples for unit deliveries are computed and tabulated, using a CubeSat as a unit, for periodic near-Mars delivery campaign scenarios.
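A small sketch of the gear-ratio arithmetic such a compilation rests on: the rocket equation gives the initial-to-final mass ratio per propulsive leg, and chaining legs multiplies the ratios. The delta-v and Isp numbers below are rough assumptions for illustration, not the paper's tabulated values.

```python
# Illustrative "gear ratio" arithmetic: Tsiolkovsky mass ratio per leg,
# chained multiplicatively across legs. Values are assumed, not the
# paper's data.
import math

def mass_ratio(delta_v_ms, isp_s, g0=9.80665):
    """Initial/final mass ratio for one propulsive leg."""
    return math.exp(delta_v_ms / (isp_s * g0))

legs = {"LEO departure": 3600.0, "Mars orbit insertion": 2100.0}  # m/s, assumed
isp = 320.0                                                       # chemical, s
total = math.prod(mass_ratio(dv, isp) for dv in legs.values())
print(f"chained gear ratio ~ {total:.1f} kg in LEO per kg delivered")
```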
Climate change and vector-borne diseases: an economic impact analysis of malaria in Africa.
Egbendewe-Mondzozo, Aklesso; Musumba, Mark; McCarl, Bruce A; Wu, Ximing
2011-03-01
A semi-parametric econometric model is used to study the relationship between malaria cases and climatic factors in 25 African countries. Results show that a marginal change in temperature and precipitation levels would lead to a significant change in the number of malaria cases for most countries by the end of the century. Consistent with existing biophysical malaria model results, the projected effects of climate change are mixed: our model projects that some countries will see an increase in malaria cases while others will see a decrease. We estimate projected malaria inpatient and outpatient treatment costs as a proportion of annual 2000 health expenditures per 1,000 people, and find that even under a minimal climate change scenario, some countries may see their inpatient treatment cost of malaria increase by more than 20%.
Modeling integrated water user decisions in intermittent supply systems
NASA Astrophysics Data System (ADS)
Rosenberg, David E.; Tarawneh, Tarek; Abdel-Khaleq, Rania; Lund, Jay R.
2007-07-01
We apply systems analysis to estimate household water use in an intermittent supply system, considering numerous interdependent water user behaviors. Some 39 household actions include conservation; improving local storage or water quality; and accessing sources having variable costs, availabilities, reliabilities, and qualities. A stochastic optimization program with recourse decisions identifies the infrastructure investments and short-term coping actions a customer can adopt to cost-effectively respond to a probability distribution of piped water availability. Monte Carlo simulations show effects for a population of customers. Model calibration reproduces the distribution of billed residential water use in Amman, Jordan. Parametric analyses suggest economic and demand responses to increased availability and alternative pricing; they also suggest the potential market penetration of conservation actions, the associated water savings, and the subsidies needed to entice further adoption. We discuss new insights to size, target, and finance conservation.
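A toy version of the two-stage stochastic program with recourse (illustrative costs, demand and availability scenarios; not the calibrated Amman model) can be posed as a single LP:

```python
# Hedged sketch: choose storage capacity now (first stage), then buy
# coping water per availability scenario (recourse). All numbers assumed.
import numpy as np
from scipy.optimize import linprog

demand = 10.0                         # m^3/week, assumed
avail = np.array([8.0, 5.0, 2.0])     # piped availability scenarios
prob = np.array([0.5, 0.3, 0.2])      # scenario probabilities
c_store, c_cope = 1.0, 4.0            # $/m^3: storage vs tanker water

# Variables: [x, y1, y2, y3]; minimize c_store*x + sum_s p_s*c_cope*y_s.
c = np.concatenate([[c_store], prob * c_cope])
# Shortfall constraints: x + y_s >= demand - avail_s for each scenario s.
A_ub = np.hstack([-np.ones((3, 1)), -np.eye(3)])
b_ub = -(demand - avail)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x)   # optimal storage and per-scenario coping volumes
```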
A study on technical efficiency of a DMU (review of literature)
NASA Astrophysics Data System (ADS)
Venkateswarlu, B.; Mahaboob, B.; Subbarami Reddy, C.; Sankar, J. Ravi
2017-11-01
In this research paper, the concept of technical efficiency (due to Farrell) [1] of a decision making unit (DMU) is introduced, and measures of technical and cost efficiency are derived. Timmer's [2] deterministic approach to estimating the Cobb-Douglas production frontier is proposed, along with an extension of Timmer's [2] method to any production frontier that is linear in its parameters. The estimation of the parameters of the Cobb-Douglas production frontier by a linear programming approach is discussed. Mark et al. [3] proposed a non-parametric method to assess efficiency. Nuti et al. [4] investigated the relationships among technical efficiency scores, weighted per capita cost, and overall performance. Gahe Zing Samuel Yank et al. [5] used data envelopment analysis to assess technical efficiency in the banking sector.
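A minimal sketch of the Timmer-style linear programming estimation mentioned above, on synthetic data: fit a Cobb-Douglas frontier that bounds all observations from above while minimizing total slack.

```python
# Hedged sketch: estimate ln y <= b0 + b1 ln x1 + b2 ln x2 by LP,
# minimizing the summed slack above the observations (Timmer-style).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n = 50
lnx = rng.uniform(0.0, 2.0, (n, 2))
lny = 0.5 + 0.6 * lnx[:, 0] + 0.3 * lnx[:, 1] - rng.exponential(0.2, n)

Z = np.column_stack([np.ones(n), lnx])      # design matrix [1, ln x1, ln x2]
c = Z.sum(axis=0)                           # minimize sum_i (Z_i @ beta - lny_i)
A_ub, b_ub = -Z, -lny                       # frontier: Z_i @ beta >= lny_i
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (0, None), (0, None)])
efficiency = np.exp(lny - Z @ res.x)        # Farrell-type technical efficiency
print(res.x, efficiency.mean())
```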
Lacny, Sarah; Zarrabi, Mahmood; Martin-Misener, Ruth; Donald, Faith; Sketris, Ingrid; Murphy, Andrea L; DiCenso, Alba; Marshall, Deborah A
2016-09-01
To examine the cost-effectiveness of a nurse practitioner-family physician model of care compared with family physician-only care in a Canadian nursing home. As demand for long-term care increases, alternative care models including nurse practitioners are being explored. Cost-effectiveness analysis using a controlled before-after design. The study included an 18-month 'before' period (2005-2006) and a 21-month 'after' time period (2007-2009). Data were abstracted from charts from 2008-2010. We calculated incremental cost-effectiveness ratios comparing the intervention (nurse practitioner-family physician model; n = 45) to internal (n = 65), external (n = 70) and combined internal/external family physician-only control groups, measured as the change in healthcare costs divided by the change in emergency department transfers/person-month. We assessed joint uncertainty around costs and effects using non-parametric bootstrapping and cost-effectiveness acceptability curves. Point estimates of the incremental cost-effectiveness ratio demonstrated the nurse practitioner-family physician model dominated the internal and combined control groups (i.e. was associated with smaller increases in costs and emergency department transfers/person-month). Compared with the external control, the intervention resulted in a smaller increase in costs and larger increase in emergency department transfers. Using a willingness-to-pay threshold of $1000 CAD/emergency department transfer, the probability the intervention was cost-effective compared with the internal, external and combined control groups was 26%, 21% and 25%. Due to uncertainty around the distribution of costs and effects, we were unable to make a definitive conclusion regarding the cost-effectiveness of the nurse practitioner-family physician model; however, these results suggest benefits that could be confirmed in a larger study. © 2016 John Wiley & Sons Ltd.
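A hedged sketch of the bootstrap/CEAC machinery described, on synthetic cost and effect data (all numbers are placeholders, not the study's records):

```python
# Non-parametric bootstrap of incremental costs and effects, and the
# probability of cost-effectiveness at a $1000/transfer willingness-to-pay.
import numpy as np

rng = np.random.default_rng(5)
cost_int, eff_int = rng.normal(900, 300, 45), rng.normal(-0.05, 0.1, 45)
cost_ctl, eff_ctl = rng.normal(1100, 300, 65), rng.normal(-0.02, 0.1, 65)

wtp, n_boot, ce = 1000.0, 2000, 0
for _ in range(n_boot):
    bi, bc = rng.choice(45, 45), rng.choice(65, 65)   # resample with replacement
    d_cost = cost_int[bi].mean() - cost_ctl[bc].mean()
    d_eff = eff_int[bi].mean() - eff_ctl[bc].mean()   # change in ED transfers
    # Positive net monetary benefit <=> cost-effective at this WTP.
    ce += (wtp * (-d_eff) - d_cost) > 0
print(f"P(cost-effective at ${wtp:.0f}/transfer averted) = {ce / n_boot:.2f}")
```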
Hrifach, Abdelbaste; Brault, Coralie; Couray-Targe, Sandrine; Badet, Lionel; Guerre, Pascale; Ganne, Christell; Serrier, Hassan; Labeye, Vanessa; Farge, Pierre; Colin, Cyrille
2016-12-01
The costing method used can change the results of economic evaluations. Choosing the appropriate method to assess the cost of organ recovery is an issue of considerable interest to health economists, hospitals, financial managers and policy makers in most developed countries. The main objective of this study was to compare a mixed method, combining top-down microcosting and bottom-up microcosting versus full top-down microcosting to assess the cost of organ recovery in a French hospital group. The secondary objective was to describe the cost of kidney, liver and pancreas recovery from French databases using the mixed method. The resources consumed for each donor were identified and valued using the proposed mixed method and compared to the full top-down microcosting approach. Data on kidney, liver and pancreas recovery were collected from a medico-administrative French database for the years 2010 and 2011. Related cost data were recovered from the hospital cost accounting system database for 2010 and 2011. Statistical significance was evaluated at P < 0.05. All the median costs for organ recovery differ significantly between the two costing methods (non-parametric test method; P < 0.01). Using the mixed method, the median cost for recovering kidneys was found to be €5155, liver recovery was €2528 and pancreas recovery was €1911. Using the full top-down microcosting method, median costs were found to be 21-36% lower than with the mixed method. The mixed method proposed appears to be a trade-off between feasibility and accuracy for the identification and valuation of cost components when calculating the cost of organ recovery in comparison to the full top-down microcosting approach.
Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam
2012-01-01
Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a non-parametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a quasi-Poisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using logistic regression and RF imputation. Because of the more homogeneous conditions on private forest lands, snag density by decay class was predicted with higher accuracy on private forest lands than on public lands, while the presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags there. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for the unequal sizes of the presence and absence classes is more straightforward for logistic regression than for RF imputation. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.
Duncan, N; Roberson, C; Lail, A; Donfield, S; Shapiro, A
2014-07-01
The high cost of clotting factor concentrate (CFC) used to treat haemophilia and von Willebrand disease (VWD) attracts health plans' attention for cost management strategies such as disease management programmes (DMPs). In 2004, Indiana's high risk insurance health plan, the Indiana Comprehensive Health Insurance Association, in partnership with the Indiana Hemophilia and Thrombosis Center developed and implemented a DMP for beneficiaries with bleeding disorders. This report evaluates the effectiveness of the DMP 5 years post implementation, with specific emphasis on the cost of CFC and other medical expenditures by severity of disease. A pre/post analysis was used. The main evaluation measures were total cost, total outpatient CFC IU dispensed and adjusted total outpatient CFC cost. Summary statistics and mean and median plots were calculated. Overall, 1000 non-parametric bootstrap replicates were created and percentile confidence limits for 95% confidence intervals (CI) are reported. Mean emergency department (ED) visits and mean and median duration of hospitalizations are also reported. The DMP was associated with a significant decrease in mean annualized total cost including decreased CFC utilization and cost in most years in the overall group, and specifically in patients with severe haemophilia. Patients with mild and moderate haemophilia contributed little to overall programme expenditures. This specialty health care provider-administered DMP exemplifies the success of targeted interventions developed and implemented through a health care facility expert in the disease state to curb the cost of specialty pharmaceuticals in conditions when their expenditures represent a significant portion of total annual costs of care. © 2014 John Wiley & Sons Ltd.
Englesbe, Michael J; Grenda, Dane R; Sullivan, June A; Derstine, Brian A; Kenney, Brooke N; Sheetz, Kyle H; Palazzolo, William C; Wang, Nicholas C; Goulson, Rebecca L; Lee, Jay S; Wang, Stewart C
2017-06-01
The Michigan Surgical Home and Optimization Program is a structured, home-based, preoperative training program targeting physical, nutritional, and psychological guidance. The purpose of this study was to determine if participation in this program was associated with reduced hospital duration of stay and health care costs. We conducted a retrospective, single center, cohort study evaluating patients who participated in the Michigan Surgical Home and Optimization Program and subsequently underwent major elective general and thoracic operative care between June 2014 and December 2015. Propensity score matching was used to match program participants to a control group who underwent operative care prior to program implementation. Primary outcome measures were hospital duration of stay and payer costs. Multivariate regression was used to determine the covariate-adjusted effect of program participation. A total of 641 patients participated in the program; 82% were actively engaged in the program, recording physical activity at least 3 times per week for the majority of the program; 182 patients were propensity matched to patients who underwent operative care prior to program implementation. Multivariate analysis demonstrated that participation in the Michigan Surgical Home and Optimization Program was associated with a 31% reduction in hospital duration of stay (P < .001) and 28% lower total costs (P < .001) after adjusting for covariates. A home-based, preoperative training program decreased hospital duration of stay, lowered costs of care, and was well accepted by patients. Further efforts will focus on broader implementation and linking participation to postoperative complications and rigorous patient-reported outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.
Analysis Of Navy Hornet Squadron Mishap Costs With Regard To Previously Flown Flight Hours
2017-06-01
mishaps occur more frequently in a squadron when flight hours are reduced. This thesis correlates F/A-18 Hornet and Super Hornet squadron previously... correlated to the flight hours flown during the previous three and six months. A linear multivariate model was developed and used to analyze a dataset... hours are reduced. This thesis correlates F/A-18 Hornet and Super Hornet squadron previously flown flight hours with mishap costs. It uses a macro
Shen, Yu-Chu; Melnick, Glenn
2004-01-01
We conducted multivariate analyses to examine whether high health maintenance organization (HMO) penetration and large share of for-profit health plans in a market reduced hospital cost and revenue growth rates between 1989 and 1998. We found that hospitals in high HMO areas experienced revenue and cost growth rates that were 21 and 18 percentage points, respectively, below hospitals in low HMO areas. We also found that, conditional on overall HMO penetration level, hospitals in areas with high for-profit HMO penetration experienced revenue and cost growth rates that were 10 percentage points below hospitals in areas with low for-profit penetration areas; the difference was especially evident within high HMO penetration areas.
Hyperbolic and semi-parametric models in finance
NASA Astrophysics Data System (ADS)
Bingham, N. H.; Kiesel, Rüdiger
2001-02-01
The benchmark Black-Scholes-Merton model of mathematical finance is parametric, based on the normal/Gaussian distribution. Its principal parametric competitor, the hyperbolic model of Barndorff-Nielsen, Eberlein and others, is briefly discussed. Our main theme is the use of semi-parametric models, incorporating the mean vector and covariance matrix as in the Markowitz approach, plus a non-parametric part, a scalar function incorporating features such as tail-decay. Implementation is also briefly discussed.
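One standard way to write such a semi-parametric family, assumed here for illustration rather than quoted from the paper, is as an elliptical density with parametric location and scatter and a non-parametric generator:

```latex
% An elliptical semi-parametric model (notation assumed, not quoted):
\[
  f(x) \;=\; |\Sigma|^{-1/2}\, g\!\left((x-\mu)^{\top}\Sigma^{-1}(x-\mu)\right),
  \qquad x \in \mathbb{R}^d ,
\]
% with the parametric part (\mu, \Sigma) playing the Markowitz role of
% mean vector and covariance matrix, and the scalar density generator g
% left non-parametric to capture tail decay: the Gaussian case is
% g(t) \propto e^{-t/2}, while heavier-tailed markets give slower decay.
```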
The BODECOST Index (BCI): a composite index for assessing the impact of COPD in real life.
Dal Negro, Roberto W; Celli, Bartolome R
2016-01-01
Chronic Obstructive Pulmonary Disease (COPD) is a progressive condition characterized by a dramatic socio-economic impact. Several indices have been extensively investigated for assessing mortality risk in COPD, but the utilization of health care resources was never included in their calculation. The aim of this study was to assess the predictive value of the annual cost of care on COPD mortality at three years, and to develop a comprehensive index for easy calculation of mortality risk in real life. COPD patients were anonymously and automatically selected from the local institutional database. Selection criteria were: COPD diagnosis; both genders; age ≥ 40 years; and availability of at least one complete clinical record per year, including history; clinical signs; complete lung function; therapeutic strategy; BODE index; Charlson Comorbidity Index; and outcomes, collected at the first visit and over the following 3 years. At the first visit, the annual health cost of care was calculated for each patient for the previous 12 months, and the survival rate was measured over the following 3 years. The hospitalization and exacerbation rates were added to the BODE index, and the novel index thus obtained was called the BODECOST index (BCI), ranging from 0 to 10 points. The mean cost for each BCI step was calculated and then compared to the corresponding patients' survival duration. Parametric and non-parametric tests and linear regression were used; p < 0.05 was accepted as the limit of significance. At the first visit, the 275 selected patients were well matched for all variables by gender. Overall mortality over the 3-year survey was 40.4% (n = 111/275). When compared to that of the BODE index (r = 0.22), the total annual cost of care and the number of exacerbations showed the highest regression values versus survival time (r = 0.58 and r = 0.44, respectively). The BCI score proved strictly proportional to both the cost of care and the survival time in our sample of COPD patients. BCI originates from supplementing the BODE index with the two main components of the annual cost of care, namely the number of hospitalizations and exacerbations occurring yearly in COPD patients and their corresponding economic impact. In other words, the higher the BCI score, the shorter the survival and the higher the cost, these trends being strictly linked. BCI is a novel composite index which helps in predicting the impact of COPD at 3 years in real life, both in terms of patients' survival and of COPD economic burden.
Nguyen, Kim Thuy; Khuat, Oanh Thi Hai; Pham, Duc Cuong; Khuat, Giang Thi Hong
2012-01-01
We applied an alternative conceptual framework for analyzing health insurance and financial protection grounded in the health capability paradigm. Through an original survey of 706 households in Dai Dong, Vietnam, we examined the impact of Vietnamese health insurance schemes on inpatient and outpatient health care access, costs, and health outcomes using bivariate and multivariable regression analyses. Insured respondents had lower outpatient and inpatient treatment costs and longer hospital stays but fewer days of missed work or school than the uninsured. Insurance reform reduced household vulnerability to high health care costs through direct reduction of medical costs and indirect reduction of income lost to illness. However, from a normative perspective, out-of-pocket costs are still too high, and accessibility issues persist; a comprehensive insurance package and additional health system reforms are needed.
Yaldo, Avin; Seal, Brian S; Lage, Maureen J
2014-08-01
Examine the incremental impact of absenteeism and short-term disability associated with colorectal cancer (CRC). Absenteeism and short-term disability data were used for a case-control analysis of a healthy cohort (controls) compared with CRC patients (cases). Cases were matched to controls on the basis of age, sex, and region of residence. Multivariate regression models examined the costs of absenteeism and short-term disability, controlling for patient characteristics, prior medical costs, and patient general health. Compared with controls, CRC patients experience significantly higher short-term disability costs (mean, $45,716 vs $7367 [P < 0.0001]; median, $35,827 vs $7365 [P < 0.0001]), as well as significantly higher absenteeism costs (mean, $8841 vs $4596 [P < 0.0001]; median, $9971 vs $4795 [P < 0.0001]) in the 1 year after diagnosis of CRC. Colorectal cancer is associated with significant work-related productivity loss costs in the first year after diagnosis.
Cost-effectiveness of prevention strategies for American tegumentary leishmaniasis in Argentina.
Orellano, Pablo Wenceslao; Vazquez, Nestor; Salomon, Oscar Daniel
2013-12-01
The aim of this study was to estimate the cost-effectiveness of reducing tegumentary leishmaniasis transmission using insecticide-impregnated clothing and curtains, and implementing training programs for early diagnosis. A societal perspective was adopted, with outcomes assessed in terms of costs per disability adjusted life years (DALY). Simulation was structured as a Markov model and costs were expressed in American dollars (US$). The incremental cost-effectiveness ratio of each strategy was calculated. One-way and multivariate sensitivity analyses were performed. The incremental cost-effectiveness ratio for early diagnosis strategy was estimated at US$ 156.46 per DALY averted, while that of prevention of transmission with insecticide-impregnated curtains and clothing was US$ 13,155.52 per DALY averted. Both strategies were more sensitive to the natural incidence of leishmaniasis, to the effectiveness of mucocutaneous leishmaniasis treatment and to the cost of each strategy. Prevention of vectorial transmission and early diagnosis have proved to be cost-effective measures.
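A toy Markov cohort sketch of the ICER calculation described (state names, transition probabilities, costs and disability weights are all illustrative assumptions, not the study's calibrated inputs):

```python
# Hedged Markov cohort sketch: simulate costs and DALYs under a baseline
# and an early-diagnosis variant, then form the ICER per DALY averted.
import numpy as np

states = ["susceptible", "cutaneous", "mucocutaneous", "recovered"]
P = np.array([[0.97, 0.03, 0.00, 0.00],      # annual transitions (assumed)
              [0.00, 0.60, 0.05, 0.35],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])
daly_weight = np.array([0.0, 0.05, 0.15, 0.0])
cost_per_year = np.array([0.0, 120.0, 450.0, 0.0])   # US$, assumed

def run_cohort(P, years=20, n=1000):
    pop = np.array([n, 0.0, 0.0, 0.0])
    cost = dalys = 0.0
    for _ in range(years):
        cost += pop @ cost_per_year
        dalys += pop @ daly_weight
        pop = pop @ P
    return cost, dalys

c0, d0 = run_cohort(P)                                # baseline
P_early = P.copy()
P_early[1, 2], P_early[1, 3] = 0.02, 0.38             # early diagnosis helps
c1, d1 = run_cohort(P_early)
c1 += 5000.0                                          # assumed programme cost
print(f"ICER = ${(c1 - c0) / (d0 - d1):.2f} per DALY averted")
```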
Constraining geostatistical models with hydrological data to improve prediction realism
NASA Astrophysics Data System (ADS)
Demyanov, V.; Rojas, T.; Christie, M.; Arnold, D.
2012-04-01
Geostatistical models reproduce spatial correlation based on the available on-site data and on more general concepts about the modelled patterns, e.g. training images. One of the problems in modelling natural systems with geostatistics is maintaining realistic spatial features so that they agree with the physical processes in nature. Tuning the model parameters to the data may lead to geostatistical realisations with unrealistic spatial patterns which nevertheless still honour the data. Such a model would give poor predictions even though it fits the available data well. Conditioning the model to a wider range of relevant data provides a remedy that avoids producing unrealistic features in spatial models. For instance, there are vast amounts of information about the geometries of river channels that can be used in describing a fluvial environment. Relations between the geometrical channel characteristics (width, depth, wavelength, amplitude, etc.) are complex and non-parametric and exhibit a great deal of uncertainty, which is important to propagate rigorously into the predictive model. These relations can be described within a Bayesian approach as multi-dimensional prior probability distributions. We propose a way to constrain multi-point statistics models with intelligent priors obtained from analysing a vast collection of contemporary river patterns based on previously published works. We applied machine learning techniques, namely neural networks and support vector machines, to extract multivariate non-parametric relations between the geometrical characteristics of fluvial channels from the available data. An example demonstrates how ensuring geological realism helps to deliver more reliable predictions of a subsurface oil reservoir in a fluvial depositional environment.
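As a hedged sketch of the machine-learning step described, the fragment below fits a support vector regression of channel depth on width from a synthetic catalogue; the power-law data generator and hyperparameters are assumptions, not the authors' training set.

```python
# Illustrative SVR fit of a non-parametric width-depth relation; the
# residual spread around the fit could then seed a prior distribution.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
width = rng.uniform(10.0, 500.0, 300)                        # channel width, m
depth = 0.3 * width**0.6 * np.exp(rng.normal(0, 0.2, 300))   # noisy power law

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(width.reshape(-1, 1), np.log(depth))               # fit in log space

print(np.exp(model.predict(np.array([[150.0]]))))            # predicted depth, m
```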