Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single-variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
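The cost drivers listed above can be read as a single-variable CER with a technology-discount term. Below is a minimal Python sketch under stated assumptions: the normalization k, the diameter exponent, and the reference year are illustrative placeholders; only the 50%-per-17-years technology rate comes from the abstract.

```python
def ota_cost(aperture_m, year, k=1.0, diameter_exp=1.8,
             ref_year=2000, halving_period_yr=17.0):
    """Single-variable CER sketch. k, diameter_exp and ref_year are
    illustrative placeholders; the ~50% cost reduction per 17 years of
    technology advance is the rate quoted in the abstract."""
    tech = 0.5 ** ((year - ref_year) / halving_period_yr)
    return k * aperture_m ** diameter_exp * tech

# Doubling the aperture raises cost by ~2**1.8 (about 3.5x), so cost per
# square meter of collecting area falls, as the abstract notes.
print(ota_cost(2.4, 2010) / ota_cost(1.2, 2010))
```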
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2010-01-01
A study is in process to develop a multivariable parametric cost model for space telescopes. Cost and engineering parametric data have been collected on 30 different space telescopes. Statistical correlations have been developed between 19 of the 59 variables sampled. Single-variable and multi-variable cost estimating relationships have been developed. Results are being published.
Parametric modelling of cost data in medical studies.
Nixon, R M; Thompson, S G
2004-04-30
The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BCa bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
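To make the modelling choices concrete, here is a minimal sketch with SciPy on synthetic right-skewed costs: two-parameter gamma and log-normal fits by maximum likelihood, model-based estimates of the population mean, and a simple percentile bootstrap for comparison (the paper itself uses the BCa bootstrap with MCMC checks; all data below are synthetic).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
costs = rng.gamma(shape=1.5, scale=2000.0, size=200)  # synthetic skewed costs

# Two-parameter fits by maximum likelihood (floc=0 pins the location).
gamma_params = stats.gamma.fit(costs, floc=0)
lognorm_params = stats.lognorm.fit(costs, floc=0)

# Model-based estimates of the population mean cost.
print("gamma mean:    ", stats.gamma(*gamma_params).mean())
print("lognormal mean:", stats.lognorm(*lognorm_params).mean())

# Simple percentile bootstrap CI for the mean (the paper uses BCa).
boot = [rng.choice(costs, size=costs.size, replace=True).mean()
        for _ in range(2000)]
print("bootstrap 95% CI:", np.percentile(boot, [2.5, 97.5]))
```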
Update on Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2011-01-01
Since the June 2010 Astronomy Conference, an independent review of our cost database discovered some inaccuracies and inconsistencies which can modify our previously reported results. This paper will review changes to the database, our confidence in those changes, and their effect on various parametric cost models.
Ground-Based Telescope Parametric Cost Model
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
X-1 to X-Wings: Developing a Parametric Cost Model
NASA Technical Reports Server (NTRS)
Sterk, Steve; McAtee, Aaron
2015-01-01
In today's cost-constrained environment, NASA needs an X-Plane database and parametric cost model that can quickly provide rough-order-of-magnitude predictions of cost from initial concept to first flight of potential X-Plane aircraft. This paper takes a look at the steps taken in developing such a model and reports the results. The challenges encountered in the collection of historical data and recommendations for future database management are discussed. A step-by-step discussion of the development of Cost Estimating Relationships (CERs) is then covered.
Multivariable Parametric Cost Model for Ground Optical Telescope Assembly
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia
2005-01-01
A parametric cost model for ground-based telescopes is developed using multivariable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature are examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, L.T.; Hickey, M.
This paper summarizes the progress to date by CH2M HILL and the UKAEA in development of a parametric modelling capability for estimating the costs of large nuclear decommissioning projects in the United Kingdom (UK) and Europe. The ability to successfully apply parametric cost estimating techniques will be a key factor to commercial success in the UK and European multi-billion dollar waste management, decommissioning and environmental restoration markets. The most useful parametric models will be those that incorporate individual components representing major elements of work: reactor decommissioning, fuel cycle facility decommissioning, waste management facility decommissioning and environmental restoration. Models must be sufficiently robust to estimate indirect costs and overheads, permit pricing analysis and adjustment, and accommodate the intricacies of international monetary exchange, currency fluctuations and contingency. The development of a parametric cost estimating capability is also a key component in building a forward estimating strategy. The forward estimating strategy will enable the preparation of accurate and cost-effective out-year estimates, even when work scope is poorly defined or as yet indeterminate. Preparation of cost estimates for work outside the organization's current sites, for which detailed measurement is not possible and historical cost data does not exist, will also be facilitated. (authors)
Multivariable Parametric Cost Model for Ground Optical: Telescope Assembly
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter were derived.
Developing integrated parametric planning models for budgeting and managing complex projects
NASA Technical Reports Server (NTRS)
Etnyre, Vance A.; Black, Ken U.
1988-01-01
The applicability of integrated parametric models for the budgeting and management of complex projects is investigated. Methods for building a very flexible, interactive prototype for a project planning system, and software resources available for this purpose, are discussed and evaluated. The prototype is required to be sensitive to changing objectives, changing target dates, changing cost relationships, and changing budget constraints. To achieve the integration of costs and project and task durations, parametric cost functions are defined by a process of trapezoidal segmentation, where the total cost for the project is the sum of the various project cost segments, and each project cost segment is the integral of a linearly segmented cost loading function over a specific interval. The cost can thus be expressed algebraically. The prototype was designed using Lotus-123 as the primary software tool. This prototype implements a methodology for interactive project scheduling that provides a model of a system that meets most of the goals for the first phase of the study and some of the goals for the second phase.
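A minimal sketch of the trapezoidal-segmentation idea, assuming a piecewise-linear cost loading function sampled at segment breakpoints; the loading profile values are hypothetical.

```python
def project_cost(times, loading):
    """Total cost as the sum of integrals of a linearly segmented
    cost-loading function over its intervals; the trapezoid rule is
    exact for piecewise-linear loading."""
    total = 0.0
    for t0, t1, c0, c1 in zip(times, times[1:], loading, loading[1:]):
        total += 0.5 * (c0 + c1) * (t1 - t0)  # exact integral of one linear segment
    return total

# Hypothetical loading profile in k$/month: ramp up, hold, ramp down.
print(project_cost([0, 3, 9, 12], [0, 400, 400, 0]))  # -> 3600.0 k$
```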
Cost model validation: a technical and cultural approach
NASA Technical Reports Server (NTRS)
Hihn, J.; Rosenberg, L.; Roust, K.; Warfield, K.
2001-01-01
This paper summarizes how JPL's parametric mission cost model (PMCM) has been validated using both formal statistical methods and a variety of peer and management reviews in order to establish organizational acceptance of the cost model estimates.
Constellation Program Life-cycle Cost Analysis Model (LCAM)
NASA Technical Reports Server (NTRS)
Prince, Andy; Rose, Heidi; Wood, James
2008-01-01
The Constellation Program (CxP) is NASA's effort to replace the Space Shuttle, return humans to the moon, and prepare for a human mission to Mars. The major elements of the Constellation Lunar sortie design reference mission architecture are shown. Unlike the Apollo Program of the 1960s, affordability is a major concern of United States policy makers and NASA management. To measure Constellation affordability, a total ownership cost life-cycle parametric cost estimating capability is required. This capability is being developed by the Constellation Systems Engineering and Integration (SE&I) Directorate, and is called the Lifecycle Cost Analysis Model (LCAM). The requirements for LCAM are based on the need to have a parametric estimating capability in order to do top-level program analysis, evaluate design alternatives, and explore options for future systems. By estimating the total cost of ownership within the context of the planned Constellation budget, LCAM can provide Program and NASA management with the cost data necessary to identify the most affordable alternatives. LCAM is also a key component of the Integrated Program Model (IPM), an SE&I developed capability that combines parametric sizing tools with cost, schedule, and risk models to perform program analysis. LCAM is used in the generation of cost estimates for system level trades and analyses. It draws upon the legacy of previous architecture level cost models, such as the Exploration Systems Mission Directorate (ESMD) Architecture Cost Model (ARCOM) developed for Simulation Based Acquisition (SBA), and ATLAS. LCAM is used to support requirements and design trade studies by calculating changes in cost relative to a baseline option cost. Estimated costs are generally low fidelity to accommodate available input data and available cost estimating relationships (CERs). LCAM is capable of interfacing with the Integrated Program Model to provide the cost estimating capability for that suite of tools.
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs using parametric cost estimating data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
Cost Modeling for low-cost planetary missions
NASA Technical Reports Server (NTRS)
Kwan, Eric; Habib-Agahi, Hamid; Rosenberg, Leigh
2005-01-01
This presentation will provide an overview of the JPL parametric cost models used to estimate flight science spacecraft and instruments. This material will emphasize the cost model approaches to estimate low-cost flight hardware, sensors, and instrumentation, and to perform cost-risk assessments. This presentation will also discuss JPL approaches to perform cost modeling and the methodologies and analyses used to capture low-cost vs. key cost drivers.
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1995-01-01
Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.
Preliminary Multivariable Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2010-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. Previously, the authors published two single-variable cost models based on 19 flight missions. The current paper presents the development of a multi-variable space telescope cost model. The validity of previously published models is tested. Cost estimating relationships which are and are not significant cost drivers are identified. And, interrelationships between variables are explored.
NASA Technical Reports Server (NTRS)
Rosenberg, Leigh; Hihn, Jairus; Roust, Kevin; Warfield, Keith
2000-01-01
This paper presents an overview of a parametric cost model that has been built at JPL to estimate costs of future, deep space, robotic science missions. Due to the recent dramatic changes in JPL business practices brought about by an internal reengineering effort known as develop new products (DNP), high-level historic cost data is no longer considered analogous to future missions. Therefore, the historic data is of little value in forecasting costs for projects developed using the DNP process. This has led to the development of an approach for obtaining expert opinion and also for combining actual data with expert opinion to provide a cost database for future missions. In addition, the DNP cost model uses a maximum of objective cost drivers, which reduces the likelihood of model input error. Version 2 is now under development which expands the model capabilities, links it more tightly with key design technical parameters, and is grounded in more rigorous statistical techniques. The challenges faced in building this model will be discussed, as well as its background, development approach, status, validation, and future plans.
Preliminary Multi-Variable Parametric Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd
2010-01-01
This slide presentation reviews the creation of a preliminary multi-variable cost model for the contract costs of making a space telescope. There is discussion of the methodology for collecting the data, definition of the statistical analysis methodology, single-variable model results, testing of historical models, and an introduction of the multi-variable models.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a "bundle" of similar model parametrizations replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
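One plausible reading of bundling, sketched below: cluster the sampled parametrizations, run the expensive forward model once per bundle representative, and let every member inherit its bundle's likelihood value. This is an illustration of the idea, not the authors' MAD implementation; the Gaussian error model and the k-means grouping are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def bundled_loglik(param_samples, forward_model, observed, sigma, n_bundles=20):
    """Evaluate the expensive forward model once per bundle of similar
    parametrizations; members inherit their bundle's log-likelihood.
    Illustrative reading of 'bundling', not the MAD implementation."""
    km = KMeans(n_clusters=n_bundles, n_init=10, random_state=0).fit(param_samples)
    preds = np.array([forward_model(c) for c in km.cluster_centers_])
    resid = (preds - observed) / sigma          # Gaussian error model (assumed)
    loglik = -0.5 * np.sum(resid ** 2, axis=1)
    return loglik[km.labels_]                   # one value per original sample

# Toy usage with a cheap stand-in forward model.
fm = lambda p: np.array([p[0] + p[1], p[0] * p[1]])
samples = np.random.default_rng(1).normal(size=(500, 2))
ll = bundled_loglik(samples, fm, observed=np.array([1.0, 0.2]), sigma=0.5)
```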
A Parametric Regression of the Cost of Base Realignment Action (COBRA) Model
1993-09-20
Hardman, Douglas D., Captain, USAF; Nelson, Michael S., Captain, USAF. AFIT/GEE/ENS/93S-03. Approved for public release; distribution unlimited. Master of Science in Engineering and Environmental Management, September 1993.
Applying Statistical Models and Parametric Distance Measures for Music Similarity Search
NASA Astrophysics Data System (ADS)
Lukashevich, Hanna; Dittmar, Christian; Bastuck, Christoph
Automatic deriving of similarity relations between music pieces is an inherent field of music information retrieval research. Due to the nearly unrestricted amount of musical data, real-world similarity search algorithms have to be highly efficient and scalable. A possible solution is to represent each music excerpt with a statistical model (e.g., a Gaussian mixture model) and thus to reduce the computational costs by applying parametric distance measures between the models. In this paper we discuss combinations of different parametric modelling techniques and distance measures and weigh the benefits of each one against the others.
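As a concrete stand-in for the GMM-plus-parametric-distance pipeline, the sketch below models each excerpt's feature frames with a single Gaussian and compares excerpts with the closed-form symmetrized KL divergence; real systems would use full GMMs and an approximate divergence, so the single-Gaussian model here is a simplifying assumption.

```python
import numpy as np

def fit_gauss(frames):
    """Single-Gaussian stand-in for a GMM over feature frames
    (shape: n_frames x n_dims)."""
    return frames.mean(axis=0), np.cov(frames, rowvar=False)

def kl(mA, SA, mB, SB):
    """KL( N(mA,SA) || N(mB,SB) ), closed form for Gaussians."""
    d, PB, dm = mA.size, np.linalg.inv(SB), mB - mA
    return 0.5 * (np.trace(PB @ SA) + dm @ PB @ dm - d
                  + np.log(np.linalg.det(SB) / np.linalg.det(SA)))

def distance(g1, g2):
    """Symmetrized KL: a parametric distance between excerpt models."""
    return kl(*g1, *g2) + kl(*g2, *g1)

rng = np.random.default_rng(0)
g_a = fit_gauss(rng.normal(0.0, 1.0, size=(500, 4)))  # excerpt A features (toy)
g_b = fit_gauss(rng.normal(0.3, 1.2, size=(500, 4)))  # excerpt B features (toy)
print(distance(g_a, g_b))
```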
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
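The nesting idea can be sketched as follows (an illustration, not the authors' algorithm): each outer iteration performs one expensive update involving the system matrix, then many cheap inner sub-iterations refit the linear kinetic parameters against the temporal basis. Plain gradient steps stand in for the EM/optimization-transfer updates, and all matrices are toy stand-ins.

```python
import numpy as np

def nested_direct_recon(y, A, B, n_outer=50, n_inner=10, lr_img=0.1, lr_par=0.05):
    """Illustrative nested scheme for direct linear parametric reconstruction.
    y: (n_det, n_frames) data, A: (n_det, n_pix) system matrix,
    B: (n_frames, n_basis) temporal basis. Each outer pass makes one costly
    image-domain update using A, then cheap inner sub-iterations refit the
    linear parameters theta (A never appears in the inner loop)."""
    n_pix, n_basis = A.shape[1], B.shape[1]
    theta = np.zeros((n_pix, n_basis))
    for _ in range(n_outer):
        x = theta @ B.T                               # dynamic images from parameters
        x_target = x + lr_img * (A.T @ (y - A @ x))   # one expensive tomographic step
        for _ in range(n_inner):                      # cheap linear-model sub-iterations
            theta -= lr_par * (theta @ B.T - x_target) @ B
    return theta

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 30)) / 8.0     # toy system matrix
B = rng.normal(size=(12, 3))            # toy temporal basis functions
y = A @ (rng.normal(size=(30, 3)) @ B.T)
theta_hat = nested_direct_recon(y, A, B)
```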
A convolution model for computing the far-field directivity of a parametric loudspeaker array.
Shi, Chuang; Kajikawa, Yoshinobu
2015-02-01
This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity and Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. The computed directivity of a PLA using the proposed convolution model achieves significantly improved agreement with measured directivity at a negligible computational cost.
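A sketch of the proposed computation under stated assumptions: an 8-element half-wavelength array, a squared array factor standing in for the product directivity of the two primary beams, and a commonly quoted closed form for Westervelt's directivity whose aggregated parameter is a placeholder, not a value from the paper.

```python
import numpy as np

theta_deg = np.linspace(-90, 90, 721)
th = np.deg2rad(theta_deg)

# Product directivity of the two primary beams of an 8-element,
# half-wavelength-spaced PLA (illustrative array geometry).
N, d_over_lam = 8, 0.5
psi = np.pi * d_over_lam * np.sin(th)
af = np.abs(np.sinc(N * psi / np.pi) / np.sinc(psi / np.pi))  # sin(N*psi)/(N*sin(psi))
product_dir = af ** 2

# Westervelt directivity in a commonly quoted closed form; the
# aggregated parameter k/alpha is an assumed placeholder value.
k_over_alpha = 20.0
westervelt = 1.0 / np.sqrt(1.0 + (k_over_alpha * np.tan(th / 2) ** 2) ** 2)

# The paper's proposal: convolve the two patterns instead of
# using the product directivity alone.
far_field = np.convolve(product_dir, westervelt, mode="same")
far_field /= far_field.max()
```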
Cost Modeling for Space Telescope
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2011-01-01
Parametric cost models are an important tool for planning missions, comparing concepts and justifying technology investments. This paper presents on-going efforts to develop single-variable and multi-variable cost models for the space telescope optical telescope assembly (OTA). These models are based on data collected from historical space telescope missions. Standard statistical methods are used to derive CERs for OTA cost versus aperture diameter and mass. The results are compared with previously published models.
Modeling personnel turnover in the parametric organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A model is developed for simulating the dynamics of a newly formed organization, credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job function transition, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict required staffing profiles to meet functional needs at the desired time. The model can be extended by revisions of the state and transition structure to provide refinements in functional definition for the parametric and extended organization.
Cost Estimation of Naval Ship Acquisition.
1983-12-01
Two models were developed: one a 9-subsystem model, the other a single total cost model. The models were developed using the linear least squares regression technique. Keywords: cost estimation; acquisition; parametric cost estimate; linear regression.
Update on Multi-Variable Parametric Cost Models for Ground and Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2012-01-01
Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper reports on recent revisions and improvements to our ground telescope cost model and refinements of our understanding of space telescope cost models. One interesting observation is that while space telescopes are 50X to 100X more expensive than ground telescopes, their respective scaling relationships are similar. Another interesting speculation is that the role of technology development may be different between ground and space telescopes. For ground telescopes, the data indicates that technology development tends to reduce cost by approximately 50% every 20 years. But for space telescopes, there appears to be no such cost reduction because we do not tend to re-fly similar systems. Thus, instead of reducing cost, 20 years of technology development may be required to enable a doubling of space telescope capability. Other findings include: mass should not be used to estimate cost; spacecraft and science instrument costs account for approximately 50% of total mission cost; and, integration and testing accounts for only about 10% of total mission cost.
Towards a Multi-Variable Parametric Cost Model for Ground and Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd
2016-01-01
Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ∝ X · D^(1.75 ± 0.05) · λ^(−0.5 ± 0.25) · T^(−0.25) · e^(−0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advance and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e., multiple aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost savings does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).
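Taken at face value, the hypothesized CER is straightforward to evaluate. In the sketch below, the normalization X and the reference epoch defining Y are unspecified in the abstract and chosen arbitrarily; note that e^(−0.04 Y) halves cost every ln(2)/0.04 ≈ 17 years, consistent with the "50% every 20 years" finding.

```python
import math

def ota_cost(D_m, wavelength_um, T_K, year, X=1.0, ref_year=2000):
    """Hypothesized CER from the abstract:
    cost ~ X * D^1.75 * lambda^-0.5 * T^-0.25 * exp(-0.04*Y).
    X and ref_year (which defines Y) are unspecified placeholders."""
    Y = year - ref_year
    return (X * D_m ** 1.75 * wavelength_um ** -0.5
            * T_K ** -0.25 * math.exp(-0.04 * Y))

# Doubling aperture at fixed wavelength, temperature and epoch:
print(ota_cost(8.0, 0.5, 280, 2016) / ota_cost(4.0, 0.5, 280, 2016))  # ~ 2**1.75
```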
Multivariable parametric cost model for space and ground telescopes
NASA Astrophysics Data System (ADS)
Stahl, H. Philip; Henrichs, Todd
2016-09-01
Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ∝ X · D^(1.75 ± 0.05) · λ^(−0.5 ± 0.25) · T^(−0.25) · e^(−0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advance and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e., multiple aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost savings does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).
Preliminary Multi-Variable Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd
2010-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.
Cost Modeling for Space Optical Telescope Assemblies
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2011-01-01
Parametric cost models are used to plan missions, compare concepts and justify technology investments. This paper reviews an on-going effort to develop cost models for space telescopes. This paper summarizes the methodology used to develop cost models and documents how changes to the database have changed previously published preliminary cost models. While the cost models are evolving, the previously published findings remain valid: it costs less per square meter of collecting aperture to build a large telescope than a small telescope; technology development as a function of time reduces cost; and lower areal density telescopes cost more than more massive telescopes.
NASA Technical Reports Server (NTRS)
Gerberich, Matthew W.; Oleson, Steven R.
2013-01-01
The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters through basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model with respect to past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data of completed NASA missions.
Prepositioning emergency supplies under uncertainty: a parametric optimization method
NASA Astrophysics Data System (ADS)
Bai, Xuejie; Gao, Jinwu; Liu, Yankui
2018-07-01
Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event and are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.
Preliminary Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Prince, F. Andrew; Smart, Christian; Stephens, Kyle; Henrichs, Todd
2009-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. However, great care is required. Some space telescope cost models, such as those based only on mass, lack sufficient detail to support such analysis and may lead to inaccurate conclusions. Similarly, using ground-based telescope models which include the dome cost will also lead to inaccurate conclusions. This paper reviews current and historical models. Then, based on data from 22 different NASA space telescopes, this paper tests those models and presents preliminary analysis of single and multi-variable space telescope cost models.
Weight and the Future of Space Flight Hardware Cost Modeling
NASA Technical Reports Server (NTRS)
Prince, Frank A.
2003-01-01
Weight has been used as the primary input variable for cost estimating almost as long as there have been parametric cost models. While there are good reasons for using weight, serious limitations exist. These limitations have been addressed by multi-variable equations and trend analysis in models such as NAFCOM, PRICE, and SEER; however, these models have not been able to address the significant time lags that can occur between the development of similar space flight hardware systems. These time lags make the cost analyst's job difficult because insufficient data exists to perform trend analysis, and the current set of parametric models is not well suited to accommodating process improvements in space flight hardware design, development, build and test. As a result, people of good faith can have serious disagreement over the cost for new systems. To address these shortcomings, new cost modeling approaches are needed. The most promising approach is process based (sometimes called activity) costing. Developing process based models will require a detailed understanding of the functions required to produce space flight hardware combined with innovative approaches to estimating the necessary resources. Particularly challenging will be the lack of data at the process level. One method for developing a model is to combine notional algorithms with a discrete event simulation and model changes to the total cost as perturbations to the program are introduced. Despite these challenges, the potential benefits are such that efforts should be focused on developing process based cost models.
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.
NASA Technical Reports Server (NTRS)
Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.; Brown, Richard W.
2000-01-01
This paper describes the development of parametric models for estimating operational reliability and maintainability (R&M) characteristics for reusable vehicle concepts, based on vehicle size and technology support level. An R&M analysis tool (RMAT) and response surface methods are utilized to build parametric approximation models for rapidly estimating operational R&M characteristics such as mission completion reliability. These models, which approximate RMAT, can then be utilized for fast analysis of operational requirements, for life-cycle cost estimating, and for multidisciplinary design optimization.
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2001-01-01
Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With successful reduction of computation time, the issues of accuracy and appropriateness of turbulence models will become more tractable.
NASA/Air Force Cost Model: NAFCOM
NASA Technical Reports Server (NTRS)
Winn, Sharon D.; Hamcher, John W. (Technical Monitor)
2002-01-01
The NASA/Air Force Cost Model (NAFCOM) is a parametric estimating tool for space hardware. It is based on historical NASA and Air Force space projects and is primarily used in the very early phases of a development project. NAFCOM can be used at the subsystem or component levels.
Parametric versus Cox's model: an illustrative analysis of divorce in Canada.
Balakrishnan, T R; Rao, K V; Krotki, K J; Lapierre-adamcyk, E
1988-06-01
Recent demographic literature clearly recognizes the importance of survival models in the analysis of cross-sectional event histories. Of the various survival models, Cox's (1972) semi-parametric model has been very popular due to its simplicity and readily available computer software for estimation, sometimes at the cost of precision and parsimony of the model. This paper focuses on parametric failure time models for event history analysis such as Weibull, lognormal, loglogistic, and exponential models. The authors also test the goodness of fit of these parametric models versus Cox's proportional hazards model, taking the Kaplan-Meier estimate as base. As an illustration, the authors reanalyze the Canadian Fertility Survey data on first marriage dissolution with parametric models. Though the parametric model estimates were not very different from each other, there seemed to be a slightly better fit with the loglogistic. When 8 covariates were used in the analysis, it was found that the coefficients were similar across the models, and the overall conclusions about the relative risks would not have been different. The findings reveal that in marriage dissolution, the differences according to demographic and socioeconomic characteristics may be far more important than is generally found in many studies. Therefore, one should not treat the population as homogeneous in analyzing survival probabilities of marriages, other than for cursory analysis of overall trends.
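A minimal sketch of the comparison using the third-party lifelines library on toy data (the survey data are not reproduced here, and the covariate is invented): parametric fitters ranked by AIC alongside a Cox model with one covariate.

```python
import pandas as pd
from lifelines import WeibullFitter, LogNormalFitter, LogLogisticFitter, CoxPHFitter

df = pd.DataFrame({
    "duration": [5, 8, 12, 3, 20, 15, 7, 25, 9, 18],  # years to dissolution (toy)
    "observed": [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],       # 0 = still intact (censored)
    "married_young": [1, 1, 0, 1, 0, 1, 1, 0, 0, 0],  # hypothetical covariate
})

# Compare parametric fits by AIC.
for F in (WeibullFitter, LogNormalFitter, LogLogisticFitter):
    f = F().fit(df["duration"], event_observed=df["observed"])
    print(F.__name__, round(f.AIC_, 2))

# Semi-parametric Cox model with a covariate, for contrast.
cox = CoxPHFitter().fit(df, duration_col="duration", event_col="observed")
cox.print_summary()
```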
Review of Statistical Methods for Analysing Healthcare Resources and Costs
Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G
2011-01-01
We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd.
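As an example of category (III), single-distribution GLMs, here is a gamma GLM with log link in statsmodels on synthetic skewed costs; the data-generating parameters are invented, and the exponentiated coefficient is the multiplicative effect on mean cost.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300
treated = rng.integers(0, 2, n)
# Synthetic right-skewed costs with a multiplicative treatment effect.
costs = rng.gamma(shape=2.0, scale=np.exp(7.0 + 0.3 * treated) / 2.0)

X = sm.add_constant(treated.astype(float))
# Gamma family with log link models the mean cost on a multiplicative scale.
res = sm.GLM(costs, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(res.summary())
print("multiplicative effect of treatment:", np.exp(res.params[1]))
```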
Nadal-Serrano, Jose M; Nadal-Serrano, Adolfo; Lopez-Vallejo, Marisa
2017-01-01
This paper focuses on the application of rapid prototyping techniques using additive manufacturing in combination with parametric design to create low-cost, yet accurate and reliable instruments. The methodology followed makes it possible to make instruments with a degree of customization until now available only to a narrow audience, helping democratize science. The proposal discusses a holistic design-for-manufacturing approach that comprises advanced modeling techniques, open-source design strategies, and an optimization algorithm using free parametric software for both professional and educational purposes. The design and fabrication of an instrument for scattering measurement is used as a case study to present the previous concepts.
Predicting Production Costs for Advanced Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Bao, Han P.; Samareh, J. A.; Weston, R. P.
2002-01-01
For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and are nowhere near the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This paper outlines the development of a process-based cost model in which the physical elements of the vehicle are costed according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed up to provide an overall estimation of the total production cost for a design configuration. This capability to directly link any design configuration to realistic cost estimation is a key requirement for high-payoff MDO problems. Another important consideration in this paper is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the development of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool.
Sadatsafavi, Mohsen; Marra, Carlo; Aaron, Shawn; Bryan, Stirling
2014-06-03
Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in a RCT-based CEA is the bootstrap. The objective of the present study was to further expand the bootstrap method of RCT-based CEA for the incorporation of external evidence. We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions. In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to the parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between external evidence and outcomes of interest, and is generally easier to implement. A drawback of this approach is potential computational inefficiency compared to the parametric Bayesian methods. The bootstrap method of RCT-based CEA can be extended to incorporate external evidence, while preserving its appealing features such as no requirement for parametric modeling of cost and effectiveness outcomes.
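The Bayesian reading of the bootstrap replaces multinomial resampling with Dirichlet(1,…,1) weights over patients. The sketch below shows that baseline machinery for a two-arm trial CEA; the paper's modification for pooling in external evidence is not reproduced, and all data are synthetic.

```python
import numpy as np

def bayesian_bootstrap_cea(cost_t, eff_t, cost_c, eff_c, n_draws=5000, seed=0):
    """Bayesian bootstrap for a trial-based CEA: Dirichlet(1,...,1) weights
    over patients give draws from the posterior of mean incremental cost
    and effectiveness. The external-evidence extension is not shown here."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_draws):
        wt = rng.dirichlet(np.ones(len(cost_t)))   # treatment-arm weights
        wc = rng.dirichlet(np.ones(len(cost_c)))   # control-arm weights
        out.append((wt @ cost_t - wc @ cost_c,     # incremental cost
                    wt @ eff_t - wc @ eff_c))      # incremental effect
    return np.array(out)

# Toy trial data (costs in $, effects in QALYs).
rng = np.random.default_rng(1)
draws = bayesian_bootstrap_cea(rng.gamma(2, 1500, 100), rng.normal(0.8, 0.2, 100),
                               rng.gamma(2, 1000, 100), rng.normal(0.7, 0.2, 100))
wtp = 50000  # willingness-to-pay per QALY (assumed)
print("P(net benefit > 0):", np.mean(wtp * draws[:, 1] - draws[:, 0] > 0))
```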
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.
Parametric Cost and Schedule Modeling for Early Technology Development
2018-04-02
This paper received the Best Paper in the Analysis Methods Category and 2017 Best Paper Overall awards. It was also presented at the 2017 NASA Cost and Schedule Symposium. … information contribute to the lack of data, objective models, and methods that can be broadly applied in early planning stages.
Gao, Lan; Hu, Hao; Zhao, Fei-Li; Li, Shu-Chuen
2016-01-01
Objectives: To systematically review cost-of-illness studies for schizophrenia (SC), epilepsy (EP) and type 2 diabetes mellitus (T2DM) and explore the transferability of direct medical cost across countries. Methods: A comprehensive literature search was performed to yield studies that estimated direct medical costs. A generalized linear model (GLM) with gamma distribution and log link was utilized to explore the variation in costs that is accounted for by the included factors. Both parametric (random-effects model) and non-parametric (bootstrapping) meta-analyses were performed to pool the converted raw cost data (expressed as percentage of GDP/capita of the country where the study was conducted). Results: In total, 93 articles were included (40 studies for T2DM, 34 studies for EP and 19 studies for SC). Significant variances were detected inter- and intra-disease classes for the direct medical costs. Multivariate analysis identified that GDP/capita (p<0.05) was a significant factor contributing to the large variance in the cost results. Bootstrapping meta-analysis generated more conservative estimations with slightly wider 95% confidence intervals (CI) than the parametric meta-analysis, yielding a mean (95% CI) of 16.43% (11.32, 21.54) for T2DM, 36.17% (22.34, 50.00) for SC and 10.49% (7.86, 13.41) for EP. Conclusions: Converting the raw cost data into percentage of GDP/capita of individual countries was demonstrated to be a feasible approach to transfer direct medical costs across countries. The approach from our study, obtaining an estimated direct cost value along with the size of the specific disease population in each jurisdiction, could be used for a quick check on the economic burden of a particular disease for countries without such data.
Manned Mars mission cost estimate
NASA Technical Reports Server (NTRS)
Hamaker, Joseph; Smith, Keith
1986-01-01
The potential costs of several options of a manned Mars mission are examined. A cost estimating methodology based primarily on existing Marshall Space Flight Center (MSFC) parametric cost models is summarized. These models include the MSFC Space Station Cost Model and the MSFC Launch Vehicle Cost Model as well as other models and techniques. The ground rules and assumptions of the cost estimating methodology are discussed and cost estimates presented for six potential mission options which were studied. The estimated manned Mars mission costs are compared to the cost of the somewhat analogous Apollo Program after normalizing the Apollo cost to the environment and ground rules of the manned Mars missions. It is concluded that a manned Mars mission, as currently defined, could be accomplished for under $30 billion in 1985 dollars, excluding launch vehicle development and mission operations.
Modeling Personnel Turnover in the Parametric Organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A primary issue in organizing a new parametric cost analysis function is to determine the skill mix and number of personnel required. The skill mix can be obtained by a functional decomposition of the tasks required within the organization and a matrixed correlation with educational or experience backgrounds. The number of personnel is a function of the skills required to cover all tasks, personnel skill background and cross training, the intensity of the workload for each task, migration through various tasks by personnel along a career path, personnel hiring limitations imposed by management and the applicant marketplace, personnel training limitations imposed by management and personnel capability, and the rate at which personnel leave the organization for whatever reason. Faced with the task of relating all of these organizational facets in order to grow a parametric cost analysis (PCA) organization from scratch, it was decided that a dynamic model was required in order to account for the obvious dynamics of the forming organization. The challenge was to create such a simple model which would be credible during all phases of organizational development. The model development process was broken down into the activities of determining the tasks required for PCA, determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the dynamic model, implementing the dynamic model, and testing the dynamic model.
Parametric cost estimation for space science missions
NASA Astrophysics Data System (ADS)
Lillie, Charles F.; Thompson, Bruce E.
2008-07-01
Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up," "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs for future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
NASA Air Force Cost Model (NAFCOM): Capabilities and Results
NASA Technical Reports Server (NTRS)
McAfee, Julie; Culver, George; Naderi, Mahmoud
2011-01-01
NAFCOM is a parametric estimating tool for space hardware. It uses cost estimating relationships (CERs) that correlate historical costs to mission characteristics to predict new project costs. It is based on historical NASA and Air Force space projects. It is intended to be used in the very early phases of a development project. NAFCOM can be used at the subsystem or component levels and estimates development and production costs. NAFCOM is applicable to various types of missions (crewed spacecraft, uncrewed spacecraft, and launch vehicles). There are two versions of the model: a government version that is restricted and a contractor-releasable version.
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost, such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical database of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.
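A minimal sketch of the kind of weight-to-cost relationship examined here: fit cost = a * weight^b by ordinary least squares in log-log space. The data points below are invented stand-ins for the historical database, not values from the report.

```python
import numpy as np

# Hypothetical (weight kg, cost $M) pairs standing in for the historical database.
weight = np.array([120., 340., 800., 1500., 2600.])
cost   = np.array([45.,  95.,  180., 290.,  430.])

# Fit log(cost) = log(a) + b*log(weight), i.e., cost = a * weight^b.
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)
print(f"cost ~ {a:.2f} * weight^{b:.2f}")
print(f"predicted cost at 1000 kg: ${a * 1000**b:.0f}M")
```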
Predicting Market Impact Costs Using Nonparametric Machine Learning Models.
Park, Saerom; Lee, Jaewook; Son, Youngdoo
2016-01-01
Market impact cost is the most significant portion of implicit transaction costs; reducing it can lower the overall transaction cost, but it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data from the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
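A minimal sketch of the approach, assuming synthetic data and invented input variables (the paper's actual Bloomberg-derived variables are not reproduced here): fit two of the nonparametric learners named above and compare their out-of-sample error.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic stand-ins for the paper's three inputs (e.g., normalized trade size,
# volatility, participation rate -- these names are our assumptions, not the paper's).
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 0.5 * np.sqrt(X[:, 0]) * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.01, 200)

for name, model in [("GP", GaussianProcessRegressor()), ("SVR", SVR())]:
    model.fit(X[:150], y[:150])                       # train on the first 150 trades
    rmse = np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2))
    print(f"{name}: out-of-sample RMSE = {rmse:.4f}")
```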
Statistical Analysis of Complexity Generators for Cost Estimation
NASA Technical Reports Server (NTRS)
Rowell, Ginger Holmes
1999-01-01
Predicting the cost of cutting edge new technologies involved with spacecraft hardware can be quite complicated. A new feature of the NASA Air Force Cost Model (NAFCOM), called the Complexity Generator, is being developed to model the complexity factors that drive the cost of space hardware. This parametric approach is also designed to account for the differences in cost, based on factors that are unique to each system and subsystem. The cost driver categories included in this model are weight, inheritance from previous missions, technical complexity, and management factors. This paper explains the Complexity Generator framework, the statistical methods used to select the best model within this framework, and the procedures used to find the region of predictability and the prediction intervals for the cost of a mission.
NASA's X-Plane Database and Parametric Cost Model v 2.0
NASA Technical Reports Server (NTRS)
Sterk, Steve; Ogluin, Anthony; Greenberg, Marc
2016-01-01
The NASA Armstrong Cost Engineering Team, with technical assistance from NASA HQ (SID), has gone through the full process of developing new CERs from Version #1 to Version #2. We took a step backward and reexamined all of the data collected, such as dependent and independent variables, cost, dry weight, length, wingspan, manned versus unmanned, altitude, Mach number, thrust, and skin. We used a well-known statistical analysis tool called CO$TAT instead of multiple linear regression in "R" or the "Regression" tool found in Microsoft Excel(TradeMark). We set up an array of data by adding 21 "dummy variables"; we analyzed the standard error (SE) and then determined the best fit. We have parametrically priced out several future X-planes and compared our results to those of other resources. More work needs to be done in getting accurate and traceable cost data from historical X-plane records!
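A small sketch of a CER regression with a dummy variable, of the kind described above; the data, the single dummy (manned versus unmanned), and the log-log form are illustrative assumptions, not the Version #2 CERs.

```python
import numpy as np

# Toy X-plane-style data: log(dry weight) plus one dummy (1 = manned, 0 = unmanned).
log_weight = np.array([7.2, 7.9, 8.4, 8.8, 9.3, 9.7])
manned     = np.array([0,   0,   1,   0,   1,   1  ])
log_cost   = np.array([3.1, 3.6, 4.5, 4.2, 5.1, 5.4])

# Design matrix: intercept, log(weight), manned dummy.
X = np.column_stack([np.ones_like(log_weight), log_weight, manned])
coef, res, *_ = np.linalg.lstsq(X, log_cost, rcond=None)
se = np.sqrt(res[0] / (len(log_cost) - X.shape[1])) if res.size else float("nan")
print(f"intercept={coef[0]:.2f}, weight slope={coef[1]:.2f}, "
      f"manned premium={coef[2]:.2f} (log $), SE={se:.3f}")
```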
NASA Technical Reports Server (NTRS)
1979-01-01
Cost data generated for the evolutionary power module concepts selected are reported. The initial acquisition costs (design, development, and protoflight unit test costs) were defined and modeled for the baseline 25 kW power module configurations. By building a parametric model of this initial building block, the costs of the 50 kW and 100 kW power modules were derived by defining only their configuration and programmatic differences from the 25 kW baseline module. Variations in cost for the quantities needed to fulfill the mission scenarios were derived by applying appropriate learning curves.
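The learning-curve adjustment mentioned above can be sketched with the standard unit-curve formula, cost(n) = T1 * n^(ln(slope)/ln 2); the 90% slope and first-unit cost below are hypothetical, not values from the report.

```python
import math

def unit_cost(first_unit_cost: float, n: int, curve: float = 0.90) -> float:
    """Unit learning curve: cost of unit n, assuming a 90% curve by default."""
    b = math.log(curve) / math.log(2.0)
    return first_unit_cost * n ** b

# Illustrative: protoflight unit at $100M, cumulative cost of four flight units.
total = sum(unit_cost(100.0, n) for n in range(1, 5))
print(f"4-unit production total: ${total:.1f}M")
```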
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
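For illustration, the best-for-bias model above (linear regression on square-root-transformed costs) can be sketched as follows; the data are synthetic, and the retransformation uses the fact that if sqrt(Y) ~ N(mu, s2) then E[Y] = mu^2 + s2.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic, right-skewed "hospital cost" data driven by one covariate (illustrative).
age = rng.uniform(20, 90, 500)
cost = (5 + 0.3 * age + rng.normal(0, 3, 500)) ** 2   # skewed by construction

# OLS on sqrt(cost), then retransform: if sqrt(Y) ~ N(mu, s2), E[Y] = mu^2 + s2.
X = np.column_stack([np.ones_like(age), age])
beta, res, *_ = np.linalg.lstsq(X, np.sqrt(cost), rcond=None)
s2 = res[0] / (len(cost) - 2)                         # residual variance on sqrt scale
mu = X @ beta
pred_mean_cost = mu**2 + s2
print(f"mean observed cost {cost.mean():.1f} vs mean predicted {pred_mean_cost.mean():.1f}")
```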
NASA Technical Reports Server (NTRS)
Leininger, G.; Jutila, S.; King, J.; Muraco, W.; Hansell, J.; Lindeen, J.; Franckowiak, E.; Flaschner, A.
1975-01-01
Appendices are presented which include discussions of interest formulas, factors in regionalization, parametric modeling of discounted benefit-sacrifice streams, engineering economic calculations, and product innovation. For Volume 1, see .
Estimating the Life Cycle Cost of Space Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
A space system's Life Cycle Cost (LCC) includes design and development, launch and emplacement, and operations and maintenance. Each of these cost factors is usually estimated separately. NASA uses three different parametric models for the design and development cost of crewed space systems: the commercial PRICE-H space hardware cost model, the NASA-Air Force Cost Model (NAFCOM), and the Advanced Missions Cost Model (AMCM). System mass is an important parameter in all three models. System mass also determines the launch and emplacement cost, which directly depends on the cost per kilogram to launch mass to Low Earth Orbit (LEO). The launch and emplacement cost is the cost to launch to LEO the system itself and also the rockets, propellant, and lander needed to emplace it. The ratio of the total launch mass to payload mass depends on the mission scenario and destination. The operations and maintenance costs include any material and spares provided, the ground control crew, and sustaining engineering. The Mission Operations Cost Model (MOCM) estimates these costs as a percentage of the system development cost per year.
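A minimal roll-up of the three LCC elements described above; every rate and ratio below (launch $/kg, gear ratio, MOCM-style ops fraction) is an assumed placeholder, not a value from the models cited.

```python
# Minimal LCC roll-up in the structure described above; all numbers are
# illustrative assumptions, not values from PRICE-H, NAFCOM, AMCM, or MOCM.

def life_cycle_cost(dev_cost_m: float, system_mass_kg: float,
                    launch_per_kg: float = 10_000.0,   # $/kg to LEO, assumed
                    gear_ratio: float = 3.0,           # launch mass / payload mass
                    ops_fraction: float = 0.05,        # ops cost per year as share of dev
                    years: float = 10.0) -> float:
    launch = system_mass_kg * gear_ratio * launch_per_kg / 1e6   # $M
    ops = ops_fraction * dev_cost_m * years
    return dev_cost_m + launch + ops

print(f"LCC: ${life_cycle_cost(500.0, 2000.0):.0f}M")
```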
Thin-Film Photovoltaic Solar Array Parametric Assessment
NASA Technical Reports Server (NTRS)
Hoffman, David J.; Kerslake, Thomas W.; Hepp, Aloysius F.; Jacobs, Mark K.; Ponnusamy, Deva
2000-01-01
This paper summarizes a study that had the objective to develop a model and parametrically determine the circumstances for which lightweight thin-film photovoltaic solar arrays would be more beneficial, in terms of mass and cost, than arrays using high-efficiency crystalline solar cells. Previous studies considering arrays with near-term thin-film technology for Earth orbiting applications are briefly reviewed. The present study uses a parametric approach that evaluated the performance of lightweight thin-film arrays with cell efficiencies ranging from 5 to 20 percent. The model developed for this study is described in some detail. Similar mass and cost trends for each array option were found across eight missions of various power levels in locations ranging from Venus to Jupiter. The results for one specific mission, a main belt asteroid tour, indicate that only moderate thin-film cell efficiency (approx. 12 percent) is necessary to match the mass of arrays using crystalline cells with much greater efficiency (35 percent multi-junction GaAs based and 20 percent thin-silicon). Regarding cost, a 12 percent efficient thin-film array is projected to cost about half as much as a 4-junction GaAs array. While efficiency improvements beyond 12 percent did not significantly further improve the mass and cost benefits for thin-film arrays, higher efficiency will be needed to mitigate the spacecraft-level impacts associated with large deployed array areas. A low-temperature approach to depositing thin-film cells on lightweight, flexible plastic substrates is briefly described. The paper concludes with the observation that with the characteristics assumed for this study, ultra-lightweight arrays using efficient, thin-film cells on flexible substrates may become a leading alternative for a wide variety of space missions.
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
Gas engine heat pump cycle analysis. Volume 1: Model description and generic analysis
NASA Astrophysics Data System (ADS)
Fischer, R. D.
1986-10-01
The task prepared performance and cost information to assist in evaluating the selection of heating, ventilation, and air-conditioning (HVAC) components, values for component design variables, and system configurations and operating strategy. A steady-state computer model for performance simulation of engine-driven and electrically driven heat pumps was prepared and effectively used for parametric and seasonal performance analyses. Parametric analysis showed the effect of variables associated with the design of recuperators, brine coils, the domestic hot water heat exchanger, compressor size, engine efficiency, and insulation on exhaust and brine piping. Seasonal performance data were prepared for residential and commercial units in six cities with system configurations closely related to existing or contemplated hardware of the five GRI engine contractors. Similar data were prepared for an advanced variable-speed electric unit for comparison purposes. The effect of domestic hot water production on operating costs was determined. Four fan-operating strategies and two brine loop configurations were explored.
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction in the model output mean and variance by operating on the variances of model inputs. Unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure to achieve a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
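The paper derives single-sample estimators; the brute-force Monte Carlo sketch below illustrates only the underlying concept of a variance ratio function, re-simulating a toy model's output under scaled input variances rather than using the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000

def g(x1, x2):
    """Toy nonlinear model standing in for the real performance function."""
    return x1**2 + np.sin(3.0 * x2) * x1

x2 = rng.normal(0.0, 0.5, N)
base_var = g(rng.normal(0.0, 1.0, N), x2).var()       # baseline sd of X1 is 1.0

# Variance ratio function: Var(Y | Var(X1) scaled) / Var(Y) at the baseline.
for scale in (1.0, 0.5, 0.25):
    x1 = rng.normal(0.0, np.sqrt(scale), N)
    print(f"Var(X1) x {scale:4.2f} -> variance ratio {g(x1, x2).var() / base_var:.3f}")
```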
Cost Estimation and Control for Flight Systems
NASA Technical Reports Server (NTRS)
Hammond, Walter E.; Vanhook, Michael E. (Technical Monitor)
2002-01-01
Good program management practices, cost analysis, cost estimation, and cost control for aerospace flight systems are interrelated and depend upon each other. The best cost control process cannot overcome poor design or poor systems trades that lead to the wrong approach. The project needs robust Technical, Schedule, Cost, Risk, and Cost Risk practices before it can incorporate adequate Cost Control. Cost analysis both precedes and follows cost estimation -- the two are closely coupled with each other and with Risk analysis. Parametric cost estimating relationships and computerized models are most often used. NASA has learned some valuable lessons in controlling cost problems, and recommends use of a summary Project Manager's checklist as shown here.
Software cost/resource modeling: Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. J.
1980-01-01
A parametric software cost estimation model prepared for JPL deep space network (DSN) data systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models, such as those of the General Research Corporation, Doty Associates, IBM (Walston-Felix), Rome Air Force Development Center, University of Maryland, and Rayleigh-Norden-Putnam. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software lifecycle statistics. The estimation model output scales a standard DSN work breakdown structure skeleton, which is then input to a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
Latest NASA Instrument Cost Model (NICM): Version VI
NASA Technical Reports Server (NTRS)
Mrozinski, Joe; Habib-Agahi, Hamid; Fox, George; Ball, Gary
2014-01-01
The NASA Instrument Cost Model, NICM, is a suite of tools which allow for probabilistic cost estimation of NASA's space-flight instruments at both the system and subsystem level. NICM also includes the ability to perform cost estimation by analogy as well as joint confidence level (JCL) analysis. The latest version of NICM, Version VI, was released in Spring 2014. This paper will focus on the new features released with NICM VI, which include: 1) the NICM-E cost estimating relationship, which is applicable for instruments flying on Explorer-like class missions; 2) a new cluster analysis ability which, alongside the results of the parametric cost estimation for the user's instrument, also provides a visualization of the user's instrument's similarity to previously flown instruments; and 3) new cost estimating relationships for in-situ instruments.
Le, Quang A; Bae, Yuna H; Kang, Jenny H
2016-10-01
The EMILIA trial demonstrated that trastuzumab emtansine (T-DM1) significantly increased the median progression-free and overall survival relative to combination therapy with lapatinib plus capecitabine (LC) in patients with HER2-positive advanced breast cancer (ABC) previously treated with trastuzumab and a taxane. We performed an economic analysis of T-DM1 as a second-line therapy compared with LC and monotherapy with capecitabine (C) from both the US payer and societal perspectives. We developed four possible Markov models for ABC to compare the projected lifetime costs and outcomes of T-DM1, LC, and C. Model transition probabilities were estimated from the EMILIA and EGF100151 clinical trials. Direct costs of the therapies, major adverse events, laboratory tests, and disease progression, indirect costs (productivity losses due to morbidity and mortality), and health utilities were obtained from published sources. The models used a 3% discount rate and are reported in 2015 US dollars. Probabilistic sensitivity analysis and model averaging were used to account for model parametric and structural uncertainty. When incorporating both parametric and structural uncertainty, the resulting incremental cost-effectiveness ratios (ICERs) comparing T-DM1 to LC and T-DM1 to C were $183,828 per quality-adjusted life year (QALY) and $126,001/QALY from the societal perspective, respectively. From the payer's perspective, the ICERs were $220,385/QALY (T-DM1 vs. LC) and $168,355/QALY (T-DM1 vs. C). From both the US payer and societal perspectives, T-DM1 is not cost-effective compared with the LC combination therapy at a willingness-to-pay threshold of $150,000/QALY. T-DM1 might have a better chance of being cost-effective compared with capecitabine monotherapy from the US societal perspective.
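The core ICER arithmetic underlying such comparisons is simple; the sketch below uses invented cost and QALY totals, not the model outputs reported above.

```python
def icer(cost_a: float, qaly_a: float, cost_b: float, qaly_b: float) -> float:
    """Incremental cost-effectiveness ratio of strategy A versus comparator B."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

# Illustrative numbers only -- not the EMILIA-based model outputs.
print(f"ICER: ${icer(310_000, 1.85, 180_000, 1.20):,.0f}/QALY")
```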
NASA Technical Reports Server (NTRS)
Harney, A. G.; Raphael, L.; Warren, S.; Yakura, J. K.
1972-01-01
A systematic and standardized procedure for estimating the life cycle costs of solid rocket motor booster configurations is presented. The model consists of clearly defined cost categories and appropriate cost equations in which cost is related to program and hardware parameters. Cost estimating relationships are generally based on analogous experience. In this model the experience drawn on is from estimates prepared by the study contractors. Contractors' estimates are derived by means of engineering estimates for some predetermined level of detail of the SRM hardware and program functions of the system life cycle. This method is frequently referred to as bottom-up. A parametric cost analysis is a useful technique when rapid estimates are required. This is particularly true during the planning stages of a system, when hardware designs and program definition are conceptual and constantly changing as the selection process, which includes cost comparisons or trade-offs, is performed. The use of cost estimating relationships also facilitates the performance of cost sensitivity studies in which relative and comparable cost comparisons are significant.
Process Cost Modeling for Multi-Disciplinary Design Optimization
NASA Technical Reports Server (NTRS)
Bao, Han P.; Freeman, William (Technical Monitor)
2002-01-01
For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and bear little resemblance to the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This report outlines the development of a process-based cost model in which the physical elements of the vehicle are costed according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed up to provide an overall estimation of the total production cost for a design configuration. This capability to directly link any design configuration to realistic cost estimation is a key requirement for high-payoff MDO problems. Another important consideration in this report is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the development of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool. In successive sections, the report addresses the issues of cost modeling as follows. First, an introduction is presented to provide the background for the research work. Next, a quick review of cost estimation techniques is made to highlight their inappropriateness for what is really needed at the conceptual phase of the design process. The First-Order Process Velocity Cost Model (FOPV) is discussed at length in the next section. This is followed by an application of the FOPV cost model to a generic wing. For designs that have no precedence as far as acquisition costs are concerned, cost data derived from the FOPV cost model may not be accurate enough because of new requirements for shape complexity, material, equipment, and precision/tolerance. The concept of Cost Modulus is introduced at this point to compensate for these new burdens on the basic processes. This is treated in section 5. The cost of a design must be conveniently linked to its CAD representation. The interfacing of CAD models and spreadsheets containing the cost equations is the subject of the next section, section 6. The last section of the report is a summary of the progress made so far, and the anticipated research work to be achieved in the future.
NASA Technical Reports Server (NTRS)
Prince, Frank A.
2017-01-01
Building a parametric cost model is hard work. The data is noisy and often does not behave like we want it to. We need statistics to give us an indication of the goodness of our models, but statistics can be manipulated and can mislead. On top of all of that, our own very human biases can lead us astray, causing us to see patterns in the noise and draw false conclusions from the data. Yet, it is the data itself that is the foundation for making better cost estimates and cost models. I believe the mistake we often make is we believe that our models are representative of the data; that our models summarize the experiences, the knowledge, and the stories contained in the data. However, it is the opposite that is true. Our models are but imitations of reality. They give us trends, but not truth. The experiences, the knowledge, and the stories that we need in order to make good cost estimates are bound up in the data. You cannot separate good cost estimating from a knowledge of the historical data. One final thought. It is our attempts to make sense out of the randomness that lead us astray. In order to make progress as cost modelers and cost estimators, we must accept that there are real limitations on our ability to model the past and predict the future. I do not believe we should throw up our hands and say this is the best we can do. Rather, to see real improvement we must first recognize these limitations, avoid the easy but misleading solutions, and seek to find ways to better model the world we live in. I don't have any simple solutions. Perhaps the answers lie in better data or in a totally different approach to simulating how the world works. All I know is that we must do our best to speak truth to ourselves and our customers. Misleading ourselves and our customers will, in the end, result in an inability to have a positive impact on those we serve.
Reducing numerical costs for core wide nuclear reactor CFD simulations by the Coarse-Grid-CFD
NASA Astrophysics Data System (ADS)
Viellieber, Mathias; Class, Andreas G.
2013-11-01
Traditionally, complete nuclear reactor core simulations are performed with subchannel analysis codes that rely on experimental and empirical input. The Coarse-Grid-CFD (CGCFD) intends to replace the experimental or empirical input with CFD data. The reactor core consists of repetitive flow patterns, allowing the general approach of creating a parametrized model for one segment and composing many of those to obtain the entire reactor simulation. The method is based on a detailed and well-resolved CFD simulation of one representative segment. From this simulation we extract so-called parametrized volumetric forces which close an otherwise strongly under-resolved, coarsely meshed model of a complete reactor setup. While the formulation so far accounts for forces created internally in the fluid, other effects, e.g., obstruction and flow deviation through spacers and wire wraps, still need to be accounted for if the geometric details are not represented in the coarse mesh. These are modelled with an Anisotropic Porosity Formulation (APF). This work focuses on the application of the CGCFD to a complete reactor core setup and the accomplishment of the parametrization of the volumetric forces.
Experiment in multiple-criteria energy policy analysis
NASA Astrophysics Data System (ADS)
Ho, J. K.
1980-07-01
An international panel of energy analysts participated in an experiment to use HOPE (holistic preference evaluation), an interactive parametric linear programming method for multiple-criteria optimization. The criteria of cost, environmental effect, crude oil, and nuclear fuel were considered using BESOM, an energy model for the US in the year 2000.
DOT National Transportation Integrated Search
1975-03-01
A parametric variation of demand density was used to compare the service level and cost of two alternative systems for providing low-density feeder service. Supply models for fixed-route and flexible-route service were developed and applied to determine ra...
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merlon M.
2004-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
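The thrust-to-labor-time relationship described above can be sketched as a polynomial evaluation; the third-order coefficients below are hypothetical assumptions, since the CEM's calibrated values are not published here.

```python
import numpy as np

# Hypothetical third-order polynomial mapping thrust (klbf) to labor hours.
# Coefficients are placeholders, highest order first, per np.polyval's convention.
coeffs = [0.002, -0.35, 45.0, 1200.0]

def labor_hours(thrust_klbf: float) -> float:
    return float(np.polyval(coeffs, thrust_klbf))

for t in (25, 100, 300):
    print(f"{t:4d} klbf -> {labor_hours(t):8.0f} labor hours")
```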
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e., n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e., n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
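A minimal sketch of the first (bootstrap) approach, assuming synthetic Weibull time-to-event data: resample patients with replacement and refit the distribution each time, so the correlated parameter uncertainty can be propagated into probabilistic sensitivity analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic patient-level time-to-event data (months), Weibull by construction.
times = stats.weibull_min.rvs(1.4, scale=12.0, size=200, random_state=rng)

# Non-parametric bootstrap: refit the Weibull to each resample so that the
# correlated (shape, scale) uncertainty is captured jointly.
fits = []
for _ in range(500):
    resample = rng.choice(times, size=times.size, replace=True)
    shape, _, scale = stats.weibull_min.fit(resample, floc=0.0)
    fits.append((shape, scale))

shapes, scales = np.array(fits).T
print(f"shape: {shapes.mean():.2f} (sd {shapes.std():.2f}), "
      f"scale: {scales.mean():.1f} (sd {scales.std():.1f})")
```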
NASA Software Cost Estimation Model: An Analogy Based Estimation Model
NASA Technical Reports Server (NTRS)
Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James
2015-01-01
The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown in terms of software size. Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest-neighbor prediction model performance on the same data set.
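A toy sketch of analogy-based estimation with a K-nearest-neighbor rule, as contrasted with parametric CERs above; the features, normalization, and cost values are invented for illustration, not the NASA Software Cost Model itself.

```python
import numpy as np

# Analogy-based estimate: mean cost of the k nearest historical analogues in a
# normalized feature space. Features (KSLOC, complexity) and costs are toy values.
features = np.array([[10.0, 3.0], [50.0, 5.0], [200.0, 8.0], [400.0, 9.0]])
costs = np.array([2.0, 9.0, 40.0, 85.0])                                  # $M

def knn_estimate(new: np.ndarray, k: int = 2) -> float:
    scale = features.max(axis=0)                  # crude per-feature normalization
    d = np.linalg.norm(features / scale - new / scale, axis=1)
    return float(costs[np.argsort(d)[:k]].mean())

print(f"Estimated cost: ${knn_estimate(np.array([120.0, 7.0])):.1f}M")
```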
Cost Risk Analysis Based on Perception of the Engineering Process
NASA Technical Reports Server (NTRS)
Dean, Edwin B.; Wood, Darrell A.; Moore, Arlene A.; Bogart, Edward H.
1986-01-01
In most cost estimating applications at the NASA Langley Research Center (LaRC), it is desirable to present predicted cost as a range of possible costs rather than a single predicted cost. A cost risk analysis generates a range of cost for a project and assigns a probability level to each cost value in the range. Constructing a cost risk curve requires a good estimate of the expected cost of a project. It must also include a good estimate of the expected variance of the cost. Many cost risk analyses are based upon an expert's knowledge of the cost of similar projects in the past. In a common scenario, a manager or engineer, asked to estimate the cost of a project in his area of expertise, will gather historical cost data from a similar completed project. The cost of the completed project is adjusted using the perceived technical and economic differences between the two projects. This allows errors from at least three sources. The historical cost data may be in error by some unknown amount. The manager's evaluation of the new project and its similarity to the old project may be in error. The factors used to adjust the cost of the old project may not correctly reflect the differences. Some risk analyses are based on untested hypotheses about the form of the statistical distribution that underlies the distribution of possible cost. The usual problem is not just to come up with an estimate of the cost of a project, but to predict the range of values into which the cost may fall and with what level of confidence the prediction is made. Risk analysis techniques that assume the shape of the underlying cost distribution and derive the risk curve from a single estimate plus and minus some amount usually fail to take into account the actual magnitude of the uncertainty in cost due to technical factors in the project itself. This paper addresses a cost risk method that is based on parametric estimates of the technical factors involved in the project being costed. The engineering process parameters are elicited from the engineer/expert on the project and are based on that expert's technical knowledge. These are converted by a parametric cost model into a cost estimate. The method discussed makes no assumptions about the form of the distribution of possible costs, and is not tied to the analysis of previous projects, except through the expert calibrations performed by the parametric cost analyst.
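A minimal Monte Carlo sketch of a risk curve built from parametric estimates of technical factors: sample the uncertain inputs, push each draw through a CER, and read percentiles off the empirical distribution. The triangular ranges and CER form are illustrative assumptions, not the paper's elicited parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sample uncertain technical parameters elicited as (low, mode, high) ranges,
# then propagate each draw through a simple hypothetical CER.
mass = rng.triangular(800, 1000, 1500, 20_000)        # kg
complexity = rng.triangular(0.9, 1.0, 1.4, 20_000)    # dimensionless factor
cost = 0.5 * mass**0.8 * complexity                   # $M, hypothetical CER

# The empirical percentiles trace out the cost risk curve.
for p in (20, 50, 80):
    print(f"P{p}: ${np.percentile(cost, p):,.0f}M")
```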
The report gives results of a series of computer runs using the DOE-2.1E building energy model, simulating a small office in a hot, humid climate (Miami). These simulations assessed the energy and relative humidity (RH) penalties when the outdoor air (OA) ventilation rate is inc...
Marmarelis, Vasilis Z.; Berger, Theodore W.
2009-01-01
Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform the presynaptic signals into postsynaptic signals. In order to synergistically use the two approaches, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input–output data. Results show that the non-parametric models accurately and efficiently replicate the input–output transformations of the parametric models. Volterra kernels provide a general and quantitative representation of the STP. PMID:18506609
Commercial launch systems: A risky investment?
NASA Astrophysics Data System (ADS)
Dupnick, Edwin; Skratt, John
1996-03-01
A myriad of evolutionary paths connect the current state of government-dominated space launch operations to true commercial access to space. Every potential path requires the investment of private capital sufficient to fund the commercial venture with a perceived risk/return ratio acceptable to the investors. What is the private sector willing to invest? Does government participation reduce financial risk? How viable is a commercial launch system without government participation and support? We examine the interplay between various forms of government participation in commercial launch system development, alternative launch system designs, life cycle cost estimates, and typical industry risk aversion levels. The boundaries of this n-dimensional envelope are examined with an ECON-developed business financial model which provides for the parametric assessment and interaction of SSTO design variables (including various operational scenarios) with financial variables (including debt/equity assumptions and commercial enterprise burden rates on various functions). We overlay this structure with observations from previous ECON research which characterize financial risk aversion levels for selected industrial sectors in terms of acceptable initial lump-sum investments, cumulative investments, probability of failure, payback periods, and ROI. The financial model allows the construction of parametric tradeoffs based on ranges of variables which can be said to actually encompass the "true" cost of operations, and determines what level of "true" costs can be tolerated by private capitalization.
Cheung, Li C; Pan, Qing; Hyun, Noorie; Schiffman, Mark; Fetterman, Barbara; Castle, Philip E; Lorey, Thomas; Katki, Hormuzd A
2017-09-30
For cost-effectiveness and efficiency, many large-scale general-purpose cohort studies are being assembled within large health-care providers who use electronic health records. Two key features of such data are that incident disease is interval-censored between irregular visits and there can be pre-existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We consider prevalent disease as a point mass at time zero for clinical applications where there is no interest in the time of prevalent disease onset. We demonstrate that the naive Kaplan-Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval-censored incident disease that we call prevalence-incidence models. Parameters for parametric prevalence-incidence models, such as the logistic regression and Weibull survival (logistic-Weibull) model, are estimated by direct likelihood maximization or by the EM algorithm. Non-parametric methods are proposed to calculate cumulative risks for cases without covariates. We compare naive Kaplan-Meier, logistic-Weibull, and non-parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan-Meier provided poor estimates while the logistic-Weibull model was a close fit to the non-parametric estimates. Our findings support our use of logistic-Weibull models to develop the risk estimates that underlie current US risk-based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, Alasdair; Thomsen, Edwin; Reed, David
2016-04-20
A chemistry-agnostic cost performance model is described for a nonaqueous flow battery. The model predicts flow battery performance by estimating the active reaction zone thickness at each electrode as a function of current density, state of charge, and flow rate, using measured data for electrode kinetics, electrolyte conductivity, and electrode-specific surface area. Validation of the model is conducted using data from a 4 kW stack at various current densities and flow rates. This model is used to estimate the performance of a nonaqueous flow battery with electrode and electrolyte properties taken from the literature. The optimized cost for this system is estimated for various power and energy levels using component costs provided by vendors. The model allows optimization of design parameters such as electrode thickness, area, and flow path design, and operating parameters such as power density, flow rate, and operating SOC range for various application duty cycles. A parametric analysis is done to identify components and electrode/electrolyte properties with the highest impact on system cost for various application durations. A pathway to $100/kWh for the storage system is identified.
Economic analysis and assessment of syngas production using a modeling approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hakkwan; Parajuli, Prem B.; Yu, Fei
Economic analysis and modeling are essential and important issues for the development of current feedstock and process technology for bio-gasification. The objective of this study was to develop an economic model and apply it to predict the unit cost of syngas production from a micro-scale bio-gasification facility. An economic model was programmed in the C++ computer programming language and developed using a parametric cost approach, which included processes to calculate the total capital costs and the total operating costs. The model used measured economic data from the bio-gasification facility at Mississippi State University. The modeling results showed that the unit cost of syngas production was $1.217 for a 60 Nm³/h capacity bio-gasifier. The operating cost was the major part of the total production cost. The equipment purchase cost and the labor cost were the largest parts of the total capital cost and the total operating cost, respectively. Sensitivity analysis indicated that labor cost ranks at the top, followed by equipment cost, loan life, feedstock cost, interest rate, utility cost, and waste treatment cost. The unit cost of syngas production increased with the increase of all parameters with the exception of loan life. The annual costs of equipment, labor, feedstock, waste treatment, and utilities showed a linear relationship with percent changes, while loan life and annual interest rate showed a non-linear relationship. This study provides useful information for economic analysis and assessment of syngas production using a modeling approach.
Parametric Analysis of Light Truck and Automobile Maintenance
DOT National Transportation Integrated Search
1979-05-01
Utilizing the Automotive and Light Truck Service and Repair Data Base developed in the companion report, parametric analyses were made of the relationships between maintenance costs, scheduled and unscheduled, and vehicle parameters; body class, manufa...
Parametric analysis of closed cycle magnetohydrodynamic (MHD) power plants
NASA Technical Reports Server (NTRS)
Owens, W.; Berg, R.; Murthy, R.; Patten, J.
1981-01-01
A parametric analysis of closed cycle MHD power plants was performed which studied the technical feasibility, associated capital cost, and cost of electricity for the direct combustion of coal or coal-derived fuel. Three reference plants, differing primarily in the method of coal conversion utilized, were defined. Reference Plant 1 used direct coal-fired combustion, while Reference Plants 2 and 3 employed on-site integrated gasifiers. Reference Plant 2 used a pressurized gasifier while Reference Plant 3 used a "state-of-the-art" atmospheric gasifier. Thirty plant configurations were considered by using parametric variations from the Reference Plants. Parametric variations include the type of coal (Montana Rosebud or Illinois No. 6), cleanup systems (hot or cold gas cleanup), one- or two-stage atmospheric or pressurized direct-fired coal combustors, and six different gasifier systems. Plant sizes ranged from 100 to 1000 MWe. Overall plant performance was calculated using two methodologies. In one task, the channel performance was assumed and the MHD topping cycle efficiencies were based on the assumed values. A second task involved rigorous calculations of channel performance (enthalpy extraction, isentropic efficiency, and generator output) that verified the original (task one) assumptions. Closed cycle MHD capital costs were estimated for the task one plants; task two cost estimates were made for the channel and magnet only.
Numerical modeling and model updating for smart laminated structures with viscoelastic damping
NASA Astrophysics Data System (ADS)
Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan
2018-07-01
This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented utilizing the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that a good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design, such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components, have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and results of the parametric studies performed are presented.
Organizing Space Shuttle parametric data for maintainability
NASA Technical Reports Server (NTRS)
Angier, R. C.
1983-01-01
A model of organization and management of Space Shuttle data is proposed. Shuttle avionics software is parametrically altered by a reconfiguration process for each flight. As the flight rate approaches an operational level, current methods of data management would become increasingly complex. An alternative method is introduced, using modularized standard data, and its implications for data collection, integration, validation, and reconfiguration processes are explored. Information modules are cataloged for later use, and may be combined in several levels for maintenance. For each flight, information modules can then be selected from the catalog at a high level. These concepts take advantage of the reusability of Space Shuttle information to reduce the cost of reconfiguration as flight experience increases.
NASA Astrophysics Data System (ADS)
Flores, Robert Joseph
Distributed generation can provide many benefits over traditional central generation, such as increased reliability and efficiency while reducing emissions. Despite these potential benefits, distributed generation is generally not purchased unless it reduces energy costs. Economic dispatch strategies can be designed such that distributed generation technologies reduce overall facility energy costs. In this thesis, a microturbine generator is dispatched using different economic control strategies, reducing the cost of energy to the facility. Several industrial and commercial facilities are simulated using acquired electrical, heating, and cooling load data. Industrial and commercial utility rate structures are modeled after Southern California Edison and Southern California Gas Company tariffs and used to find energy costs for the simulated buildings and corresponding microturbine dispatch. Using these control strategies, building models, and utility rate models, a parametric study examining various generator characteristics is performed. An economic assessment of the distributed generation is then performed for both the microturbine generator and the parametric study. Without the ability to export electricity to the grid, the economic value of distributed generation is limited to reducing the individual costs that make up the cost of energy for a building. Any economic dispatch strategy must be built to reduce these individual costs. While the ability of distributed generation to reduce cost depends on factors such as electrical efficiency and operations and maintenance cost, the building energy demand being serviced has a strong effect on cost reduction. Buildings with low load factors can accept distributed generation with higher operating costs (low electrical efficiency and/or high operations and maintenance cost) due to the value of demand reduction. As load factor increases, lower operating cost generators are desired due to a larger portion of the building load being met in an effort to reduce demand. In addition, buildings with large thermal demand have access to the least expensive natural gas, lowering the cost of operating distributed generation. Recovery of exhaust heat from DG reduces cost only if the building's thermal demand coincides with the electrical demand. Capacity limits exist where annual savings from operation of distributed generation decrease if further generation is installed. For low operating cost generators, the approximate limit is the average building load. This limit decreases as operating costs increase. In addition, a high capital cost of distributed generation can be accepted if generator operating costs are low. As generator operating costs increase, capital cost must decrease if a positive economic performance is desired.
Parametric evaluation of the cost effectiveness of Shuttle payload vibroacoustic test plans
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.; Keegan, W. B.; Young, J. P.
1978-01-01
Consideration is given to alternate vibroacoustic test plans for sortie and free flyer Shuttle payloads. Statistical decision models for nine test plans provide a viable method of evaluating the cost effectiveness of alternate vibroacoustic test plans and the associated test levels. The methodology is a major step toward the development of a useful tool for the quantitative tailoring of vibroacoustic test programs to sortie and free flyer payloads. A broader application of the methodology is now possible by the use of the OCTAVE computer code.
Fast Conceptual Cost Estimating of Aerospace Projects Using Historical Information
NASA Technical Reports Server (NTRS)
Butts, Glenn
2007-01-01
Accurate estimates can be created in less than a minute by applying powerful techniques and algorithms to create an Excel-based parametric cost model. In five easy steps you will learn how to normalize your company's historical cost data to the new project parameters. This paper provides a complete, easy-to-understand, step-by-step how-to guide. Such a guide does not seem to currently exist. Over 2,000 hours of research, data collection, and trial and error, and thousands of lines of Excel Visual Basic for Applications (VBA) code were invested in developing these methods. While VBA is not required to use this information, it increases the power and aesthetics of the model. Implementing all of the steps described, while not required, will increase the accuracy of the results.
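As a flavor of the normalization step, the sketch below escalates historical actuals to a common base year with an inflation index; the index values and years are placeholders, not the paper's normalization factors.

```python
# Step-one normalization: escalate historical actuals to a common year with an
# inflation index. The index values here are hypothetical placeholders.
index = {2000: 1.00, 2005: 1.13, 2010: 1.28, 2015: 1.39, 2020: 1.52}

def normalize(cost: float, from_year: int, to_year: int = 2020) -> float:
    """Convert then-year dollars to to_year dollars via the index ratio."""
    return cost * index[to_year] / index[from_year]

print(f"$250M (2005) = ${normalize(250.0, 2005):.0f}M (2020)")
```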
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiry, used to understand the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties in conducting experiments using existing experimental procedures, for the following two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for those experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize the experimental length and cost. Facing the two challenges in those experimental scenarios, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction, and design optimization on each trial. Directly addressing the challenges in those experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement of a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly during an experiment based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection is realized by a Bayesian spike-and-slab prior, reverse prediction is realized by grid search, and design optimization is realized by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without the assumption of a parametric model serving as the proxy of the latent data structure, while the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
Hyperbolic and semi-parametric models in finance
NASA Astrophysics Data System (ADS)
Bingham, N. H.; Kiesel, Rüdiger
2001-02-01
The benchmark Black-Scholes-Merton model of mathematical finance is parametric, based on the normal/Gaussian distribution. Its principal parametric competitor, the hyperbolic model of Barndorff-Nielsen, Eberlein and others, is briefly discussed. Our main theme is the use of semi-parametric models, incorporating the mean vector and covariance matrix as in the Markowitz approach, plus a non-parametric part, a scalar function incorporating features such as tail-decay. Implementation is also briefly discussed.
Parametric study of different contributors to tumor thermal profile
NASA Astrophysics Data System (ADS)
Tepper, Michal; Gannot, Israel
2014-03-01
Treating cancer is one of the major challenges of modern medicine. There is great interest in assessing tumor development in in vivo animal and human models, as well as in in vitro experiments. Existing methods are either limited by cost and availability or by their low accuracy and reproducibility. Thermography holds the potential of being a noninvasive, non-irradiative, low-cost and easy-to-use method for tumor monitoring. Tumors can be detected in thermal images due to their relatively higher or lower temperature compared to the temperature of the healthy skin surrounding them. Extensive research has been performed to show the validity of thermography as an efficient method for tumor detection and the possibility of extracting tumor properties from thermal images, with promising results. However, generalizing from one type of experiment to others is difficult due to differences in tumor properties, especially between different types of tumors or different species. There is a need for research linking different types of tumor experiments. In this research, a parametric analysis of possible contributors to tumor thermal profiles was performed. The effect of tumor geometric, physical and thermal properties was studied, both independently and together, in phantom model experiments and computer simulations. Theoretical and experimental results were cross-correlated to validate the models used and increase the accuracy of simulated complex tumor models. The contribution of different parameters in various tumor scenarios was estimated, and the implication of these differences for the observed thermal profiles was studied. The correlation between animal and human models is discussed.
Team X Report #1401: Exoplanet Coronagraph STDT Study 2013-06
NASA Technical Reports Server (NTRS)
Warfield, Keith
2013-01-01
This document is intended to stimulate discussion of the topic described. All technical and cost analyses are preliminary. This document is not a commitment to work, but is a precursor to a formal proposal if it generates sufficient mutual interest. The data contained in this document may not be modified in any way. Cost estimates described or summarized in this document were generated as part of a preliminary, first-order cost class identification as part of an early trade space study, are based on JPL-internal parametric cost modeling, assume a JPL in-house build, and do not constitute a commitment on the part of JPL or Caltech. JPL and Team X add cost reserves for development and operations. Unadjusted estimate totals and cost reserve allocations would be revised as needed in future more-detailed studies as appropriate for the specific cost-risks for a given mission concept.
Analysis and assessment of STES technologies
NASA Astrophysics Data System (ADS)
Brown, D. R.; Blahnik, D. E.; Huber, H. D.
1982-12-01
Technical and economic assessments completed in FY 1982 in support of the Seasonal Thermal Energy Storage (STES) segment of the Underground Energy Storage Program included: (1) a detailed economic investigation of the cost of heat storage in aquifers, (2) documentation for AQUASTOR, a computer model for analyzing aquifer thermal energy storage (ATES) coupled with district heating or cooling, and (3) a technical and economic evaluation of several ice storage concepts. This paper summarizes the research efforts and main results of each of these three activities. In addition, a detailed economic investigation of the cost of chill storage in aquifers is currently in progress. The work parallels that done for ATES heat storage with technical and economic assumptions being varied in a parametric analysis of the cost of ATES delivered chill. The computer model AQUASTOR is the principal analytical tool being employed.
Advanced Technology Lifecycle Analysis System (ATLAS)
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.; Mankins, John C.
2004-01-01
Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and the supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts in system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric cost model to determine the costs. The integrator also estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is satisfied with the system configurations, technology portfolios, and deployment strategies, he or she can present the concepts to a team, which will conduct a detailed, discipline-oriented analysis within a CEE. An analog to this approach is the music industry, where a songwriter creates the lyrics and music before entering a recording studio.
Optimization of space manufacturing systems
NASA Technical Reports Server (NTRS)
Akin, D. L.
1979-01-01
Four separate analyses are detailed: transportation to low earth orbit, orbit-to-orbit optimization, parametric analysis of SPS logistics based on earth and lunar source locations, and an overall program option optimization implemented with linear programming. It is found that smaller vehicles are favored for earth launch, with the current Space Shuttle being right at the optimum payload size. Fully reusable launch vehicles represent a savings of 50% over the Space Shuttle; increased reliability with less maintenance could further double the savings. An optimization of orbit-to-orbit propulsion systems using lunar oxygen for propellants shows that ion propulsion is preferable by a 3:1 cost margin over a mass driver reaction engine (MDRE) at optimum values; however, ion engines cannot yet operate in the lower exhaust velocity range where the optimum lies, and total program costs between the two systems are ambiguous. Heavier payloads favor the use of an MDRE. A parametric model of a space manufacturing facility is proposed and used to analyze recurring costs, total costs, and net-present-value discounted cash flows. Parameters studied include productivity, effects of discounting, materials source tradeoffs, economic viability of closed-cycle habitats, and the effects of varying degrees of reliance on earth-supplied versus nonterrestrial SPS materials. Finally, candidate optimal scenarios are chosen and implemented in a linear program with external constraints to arrive at an optimum blend of SPS production strategies that maximizes returns.
NASA Technical Reports Server (NTRS)
Marston, C. H.; Alyea, F. N.; Bender, D. J.; Davis, L. K.; Dellinger, T. C.; Hnat, J. G.; Komito, E. H.; Peterson, C. A.; Rogers, D. A.; Roman, A. J.
1980-01-01
The performance and cost of moderate-technology coal-fired open cycle MHD/steam power plant designs, which can be expected to require a shorter development time and have a lower development cost than the previously considered mature OCMHD/steam plants, were determined. Three base cases were considered, ranging from an indirectly-fired high temperature air heater (HTAH) subsystem delivering air at 2700 F, fired by a state-of-the-art atmospheric pressure gasifier, to a case in which the HTAH subsystem was deleted and oxygen enrichment was used to obtain the requisite MHD combustion temperature. Coal pile to bus bar efficiencies in base case 1 ranged from 41.4% to 42.9%, and its cost of electricity (COE) was the highest of the three base cases. For base case 2 the efficiency range was 42.0% to 45.6%, and COE was lowest. For base case 3 the efficiency range was 42.9% to 44.4%, and COE was intermediate. The best parametric cases in base cases 2 and 3 are recommended for conceptual design. The eventual choice between these approaches depends on further evaluation of the tradeoffs among HTAH development risk, O2 plant integration, and further refinement of comparative costs.
Parametric Cost Analysis: A Design Function
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1989-01-01
Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CERs), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
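As a minimal illustration of deriving a CER from data on analogous products, the sketch below fits the common power-law form cost = a * metric^b by ordinary least squares in log-log space. The data points are invented for the example.

```python
import numpy as np

# Illustrative metric-vs-cost data for analogous systems (made up).
metric = np.array([120.0, 260.0, 450.0, 800.0, 1500.0])   # e.g., mass in kg
cost   = np.array([35.0, 60.0, 95.0, 150.0, 240.0])       # e.g., $M

# Fit cost = a * metric^b by linear regression on the logs.
b, log_a = np.polyfit(np.log(metric), np.log(cost), 1)
a = np.exp(log_a)
print(f"CER: cost = {a:.2f} * metric^{b:.2f}")

# Apply the CER to estimate a new system from its metric value.
print(f"Predicted cost at metric=600: {a * 600.0 ** b:.1f} $M")
```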
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting the cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
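The abstract cites the analytic hierarchy process (AHP) for evaluating design alternatives. A minimal sketch of the standard AHP weighting step (not the NASA GRC/Boeing model itself): criterion weights are the normalized principal eigenvector of a pairwise comparison matrix, with a consistency-ratio check. The judgments below are illustrative.

```python
import numpy as np

# Pairwise comparison matrix for three criteria (cost-risk, performance,
# schedule); entries are illustrative Saaty-scale judgments.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP weights: normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()
print("criterion weights:", np.round(w, 3))

# Consistency check (random index RI = 0.58 for a 3x3 matrix).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
print("consistency ratio:", round(ci / 0.58, 3))   # < 0.1 is acceptable
```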
Lacny, Sarah; Zarrabi, Mahmood; Martin-Misener, Ruth; Donald, Faith; Sketris, Ingrid; Murphy, Andrea L; DiCenso, Alba; Marshall, Deborah A
2016-09-01
To examine the cost-effectiveness of a nurse practitioner-family physician model of care compared with family physician-only care in a Canadian nursing home. As demand for long-term care increases, alternative care models including nurse practitioners are being explored. Cost-effectiveness analysis using a controlled before-after design. The study included an 18-month 'before' period (2005-2006) and a 21-month 'after' time period (2007-2009). Data were abstracted from charts from 2008-2010. We calculated incremental cost-effectiveness ratios comparing the intervention (nurse practitioner-family physician model; n = 45) to internal (n = 65), external (n = 70) and combined internal/external family physician-only control groups, measured as the change in healthcare costs divided by the change in emergency department transfers/person-month. We assessed joint uncertainty around costs and effects using non-parametric bootstrapping and cost-effectiveness acceptability curves. Point estimates of the incremental cost-effectiveness ratio demonstrated the nurse practitioner-family physician model dominated the internal and combined control groups (i.e. was associated with smaller increases in costs and emergency department transfers/person-month). Compared with the external control, the intervention resulted in a smaller increase in costs and larger increase in emergency department transfers. Using a willingness-to-pay threshold of $1000 CAD/emergency department transfer, the probability the intervention was cost-effective compared with the internal, external and combined control groups was 26%, 21% and 25%. Due to uncertainty around the distribution of costs and effects, we were unable to make a definitive conclusion regarding the cost-effectiveness of the nurse practitioner-family physician model; however, these results suggest benefits that could be confirmed in a larger study. © 2016 John Wiley & Sons Ltd.
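A minimal sketch of the study's general technique, bootstrapping the joint cost-effect uncertainty and reading off one point on a cost-effectiveness acceptability curve at a willingness-to-pay threshold. The data below are simulated stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative per-patient changes in cost ($) and emergency department
# transfers/person-month (simulated, not the study's data).
cost_i = rng.normal(800, 300, 45);  eff_i = rng.normal(-0.10, 0.05, 45)   # NP-FP model
cost_c = rng.normal(1000, 350, 65); eff_c = rng.normal(-0.02, 0.05, 65)   # FP-only

wtp = 1000.0                     # willingness-to-pay per avoided transfer
boots = []
for _ in range(5000):            # non-parametric bootstrap of both groups
    i = rng.integers(0, len(cost_i), len(cost_i))
    c = rng.integers(0, len(cost_c), len(cost_c))
    d_cost = cost_i[i].mean() - cost_c[c].mean()
    d_eff = -(eff_i[i].mean() - eff_c[c].mean())   # avoided transfers = benefit
    boots.append((d_cost, d_eff))
boots = np.array(boots)

# Probability the incremental net benefit is positive at this threshold.
p_ce = np.mean(wtp * boots[:, 1] - boots[:, 0] > 0)
print(f"P(cost-effective at ${wtp:.0f}/transfer) = {p_ce:.2f}")
```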
Preliminary design, analysis, and costing of a dynamic scale model of the NASA space station
NASA Technical Reports Server (NTRS)
Gronet, M. J.; Pinson, E. D.; Voqui, H. L.; Crawley, E. F.; Everman, M. R.
1987-01-01
The difficulty of testing the next generation of large flexible space structures on the ground places an emphasis on other means for validating predicted on-orbit dynamic behavior. Scale model technology represents one way of verifying analytical predictions with ground test data. This study investigates the preliminary design, scaling and cost trades for a Space Station dynamic scale model. The scaling of nonlinear joint behavior is studied from theoretical and practical points of view. Suspension system interaction trades are conducted for the ISS Dual Keel Configuration and Build-Up Stages suspended in the proposed NASA/LaRC Large Spacecraft Laboratory. Key issues addressed are scaling laws, replication vs. simulation of components, manufacturing, suspension interactions, joint behavior, damping, articulation capability, and cost. These issues are the subject of parametric trades versus the scale model factor. The results of these detailed analyses are used to recommend scale factors for four different scale model options, each with varying degrees of replication. Potential problems in constructing and testing the scale model are identified, and recommendations for further study are outlined.
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
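A toy version of this idea, under assumptions not taken from the paper: a single-asset liquidation with linear temporary impact, a one-coefficient exponential trade schedule as the parametric strategy, Monte Carlo evaluation, and static grid optimization of expected cost plus a CVaR penalty. All market parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
X0, T, eta, sigma = 1e6, 10, 2.5e-7, 0.30   # shares, periods, impact, vol ($/share)

def schedule(kappa):
    """Exponential-decay trade schedule parametrized by one coefficient."""
    w = np.exp(-kappa * np.arange(T))
    return X0 * w / w.sum()

def sim_costs(kappa, n_paths=20000):
    """Monte Carlo implementation-shortfall costs for one parameter value."""
    n = schedule(kappa)
    x = X0 - np.cumsum(n)                     # shares still held each period
    eps = rng.normal(0, sigma, (n_paths, T))  # price innovations
    impact = eta * np.sum(n ** 2)             # temporary linear impact cost
    risk = -(eps * x).sum(axis=1)             # P&L of unexecuted inventory
    return impact + risk

def objective(kappa, lam=1.0, alpha=0.95):
    c = sim_costs(kappa)
    cvar = np.sort(c)[int(alpha * len(c)):].mean()   # mean of worst 5% costs
    return c.mean() + lam * cvar

grid = np.linspace(0.0, 1.5, 16)              # static optimization over kappa
best = min(grid, key=objective)
print("best decay parameter:", round(best, 2))
```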
Manufacturing information system
NASA Astrophysics Data System (ADS)
Allen, D. K.; Smith, P. R.; Smart, M. J.
1983-12-01
The size and cost of manufacturing equipment have made it extremely difficult to perform realistic modeling and simulation of the manufacturing process in university research laboratories. Likewise, the size and cost factors, coupled with the many uncontrolled variables of the production situation, have made it difficult to perform adequate manufacturing research in the industrial setting. Only the largest companies can afford manufacturing research laboratories; research results are often held proprietary and seldom find their way into the university classroom to aid in the education and training of new manufacturing engineers. The purpose of this research is to continue the development of miniature prototype equipment suitable for use in an integrated CAD/CAM laboratory. The equipment being developed is capable of actually performing production operations (e.g., drilling, milling, turning, punching) on metallic and non-metallic workpieces. The integrated CAD/CAM Mini-Lab integrates high-resolution computer graphics, parametric design, parametric N/C parts programming, CNC machine control, and automated storage and retrieval with robotic materials handling. The availability of miniature CAD/CAM laboratory equipment will provide the basis for intensive laboratory research on manufacturing information systems.
Parametric Modelling of As-Built Beam Framed Structure in Bim Environment
NASA Astrophysics Data System (ADS)
Yang, X.; Koehl, M.; Grussenmeyer, P.
2017-02-01
A complete documentation and conservation of a historic timber roof requires the integration of geometry modelling, management of attribute and dynamic information, and results of structural analysis. The recently developed as-built Building Information Modelling (BIM) technique has the potential to provide a uniform platform that integrates traditional geometry modelling, parametric element management and structural analysis. The main objective of the project presented in this paper is to develop a parametric modelling tool for a timber roof structure whose elements form a leaning and crossing beam frame. Since Autodesk Revit, as typical BIM software, provides a platform for parametric modelling and information management, an API plugin was developed that automatically creates the parametric beam elements and links them together with strict relationships. The plugin, introduced in this paper, can obtain the parametric beam model via the Autodesk Revit API from total station points and terrestrial laser scanning data. The results show the potential of automating parametric modelling through interactive API development in a BIM environment, integrating separate data processing steps and different platforms into the uniform Revit software.
NASA Technical Reports Server (NTRS)
Wolfe, R. W.
1976-01-01
A parametric analysis was made of three types of advanced coal-fired steam power plants in order to compare the cost of electricity they produce over a wide range of primary performance variables. Increasing the temperature and pressure of the steam above current industry levels resulted in increased energy costs because the cost of capital increased more than the fuel cost decreased. While the three plant types produced comparable energy cost levels, the pressurized fluidized bed boiler plant produced the lowest energy cost, by the small margin of 0.69 mills/MJ (2.5 mills/kWh). It is recommended that this plant be designed in greater detail to determine its cost and performance more accurately than was possible in a broad parametric study and to ascertain problem areas that will require development effort. Also considered are pollution control measures such as scrubbers and separators for particulate emissions from stack gases.
NASA Astrophysics Data System (ADS)
Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin
2018-06-01
Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
Cure modeling in real-time prediction: How much does it help?
Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F
2017-08-01
Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016, BMC Medical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intended to cover situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial, when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
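A minimal sketch of how a fitted Weibull cure-mixture model turns into event-count predictions: the survival function is S(t) = pi + (1 - pi) * exp(-(t/scale)^shape), and the expected number of additional events among event-free patients follows from the conditional event probability. The parameter values are illustrative, not RTOG 0129 estimates.

```python
import numpy as np

def surv(t, pi, shape, scale):
    """Weibull cure-mixture survival: a fraction pi is cured (never fails)."""
    return pi + (1 - pi) * np.exp(-(t / scale) ** shape)

def expected_new_events(t_now, t_pred, n_at_risk, pi, shape, scale):
    """Predicted additional events among patients event-free at t_now."""
    p_event = (surv(t_now, pi, shape, scale) - surv(t_pred, pi, shape, scale)) \
              / surv(t_now, pi, shape, scale)
    return n_at_risk * p_event

# Illustrative parameters: 30% cured, Weibull(shape=1.2, scale=24 months);
# 200 patients event-free at month 12, predicting through month 36.
print(round(expected_new_events(12, 36, 200, 0.30, 1.2, 24.0), 1), "events")
```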
Bulsei, Julie; Darlington, Meryl; Durand-Zaleski, Isabelle; Azizi, Michel
2018-04-01
Whilst much uncertainty exists as to the efficacy of renal denervation (RDN), the positive results of the DENERHTN study in France confirmed the interest of an economic evaluation to assess the efficiency of RDN and inform local decision makers about the costs and benefits of this intervention. The uncertainty surrounding both the outcomes and the costs can be described using health economic methods such as the non-parametric bootstrap. Internationally, numerous health economic studies using cost-effectiveness models to assess the impact of RDN in terms of cost and effectiveness compared to antihypertensive medical treatment have been conducted. The DENERHTN cost-effectiveness study was the first health economic evaluation specifically designed to assess the cost-effectiveness of RDN using individual data. Using the DENERHTN results as an example, we provide here a summary of the principal methods used to perform a cost-effectiveness analysis.
1980-08-01
If the mean of the response variable is denoted by Ȳ, the total sum of squares of deviations from that mean is defined by SSTO = Σ_{i=1}^{n} (Y_i − Ȳ)² (2.6), and the regression sum of squares by SSR = SSTO − SSE (2.7). A selection criterion is a rule according to which a certain model out of the 2^p possible models is labeled "best"; several such criteria are discussed next. 1. The R² Criterion. The coefficient of determination is defined by R² = 1 − SSE/SSTO (2.8). It is clear that R² is the proportion of total variation in the response explained by the regression.
Composite panel development at JPL
NASA Technical Reports Server (NTRS)
Mcelroy, Paul; Helms, Rich
1988-01-01
Parametric computer studies can be used in a cost-effective manner to determine optimized composite mirror panel designs. An InterDisciplinary computer Model (IDM) was created to aid in the development of high-precision reflector panels for LDR. The material properties, thermal responses, structural geometries, and radio/optical precision are synergistically analyzed for specific panel designs. Promising panel designs are fabricated and tested so that comparison with panel test results can be used to verify performance prediction models and accommodate design refinement. The iterative approach of computer design and model refinement, with performance testing and materials optimization, has shown good results for LDR panels.
NASA Technical Reports Server (NTRS)
Eder, D.
1992-01-01
Parametric models were constructed for Earth-based laser-powered electric orbit transfer from low Earth orbit to geosynchronous orbit. These models were used to carry out performance, cost/benefit, and sensitivity analyses of laser-powered transfer systems, including end-to-end life cycle cost analyses for complete systems. Comparisons with conventional orbit transfer systems were made, indicating large potential cost savings for laser-powered transfer. Approximate optimization was done to determine the best parameter values for the systems. Orbit transfer flight simulations were conducted to explore the effects of parameters not practical to model with a spreadsheet. The simulations considered the view factors that determine when power can be transferred from ground stations to an orbit transfer vehicle, and included sensitivity analyses for the number of ground stations, Isp (including dual-Isp transfers), and plane change profiles. Optimal steering laws were used for simultaneous altitude and plane change. Viewing geometry and low-thrust orbit raising were simultaneously simulated. A very preliminary investigation of relay mirrors was made.
Climate change and vector-borne diseases: an economic impact analysis of malaria in Africa.
Egbendewe-Mondzozo, Aklesso; Musumba, Mark; McCarl, Bruce A; Wu, Ximing
2011-03-01
A semi-parametric econometric model is used to study the relationship between malaria cases and climatic factors in 25 African countries. Results show that a marginal change in temperature and precipitation levels would lead to a significant change in the number of malaria cases for most countries by the end of the century. Consistent with existing biophysical malaria model results, the projected effects of climate change are mixed: our model projects that some countries will see an increase in malaria cases while others will see a decrease. We estimate projected malaria inpatient and outpatient treatment costs as a proportion of annual 2000 health expenditures per 1,000 people. We find that even under a minimal climate change scenario, some countries may see their inpatient treatment costs of malaria increase by more than 20%.
A data-centric approach to understanding the pricing of financial options
NASA Astrophysics Data System (ADS)
Healy, J.; Dixon, M.; Read, B.; Cai, F. F.
2002-05-01
We investigate what can be learned from a purely phenomenological study of options prices without modelling assumptions. We fitted neural net (NN) models to LIFFE "ESX" European-style FTSE 100 index options using daily data from 1992 to 1997. These non-parametric models reproduce the Black-Scholes (BS) analytic model in terms of fit and performance measures using just the usual five inputs (S, X, t, r, IV). We found that adding transaction costs (bid-ask spread) to these standard five parameters gives a comparable fit and performance. Tests show that the bid-ask spread can be a statistically significant explanatory variable for option prices. The difference in option prices between the models with transaction costs and those without ranges from about -3.0 to +1.5 index points, varying with maturity date. However, the difference depends on the moneyness (S/X), being greatest in-the-money. This suggests that use of a five-factor model can result in a pricing difference of up to £10 to £30 per call option contract compared with modelling under transaction costs. We found that the influence of transaction costs varied between different yearly subsets of the data. Open interest is also a significant explanatory variable, but volume is not.
Bayesian component separation: The Planck experience
NASA Astrophysics Data System (ADS)
Wehus, Ingunn Kathrine; Eriksen, Hans Kristian
2018-05-01
Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.
Research on simplified parametric finite element model of automobile frontal crash
NASA Astrophysics Data System (ADS)
Wu, Linan; Zhang, Xin; Yang, Changhai
2018-05-01
The modeling method and key technologies of a simplified parametric finite element (FE) model for automobile frontal crash are studied in this paper. By establishing the auto body topological structure, extracting and parameterizing the stiffness properties of substructures, and choosing appropriate material models for the substructures, a simplified parametric FE model of the M6 car is built. Comparison of the results indicates that the simplified parametric FE model can accurately calculate the automobile crash responses and the deformation of the key substructures, while the simulation time is reduced from 6 hours to 2 minutes.
Why preferring parametric forecasting to nonparametric methods?
Jabot, Franck
2015-05-07
A recent series of papers by Charles T. Perretti and collaborators have shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise for two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It is argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed through simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argumentation is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.
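A sketch of the second advantage described above: estimating forecast uncertainty by re-simulating virtual data from a fitted stochastic theta-logistic model. The "fitted" parameter values here are assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def theta_logistic(n0, r, K, theta, s, T):
    """Simulate the stochastic theta-logistic population model."""
    n = np.empty(T); n[0] = n0
    for t in range(T - 1):
        n[t + 1] = n[t] * np.exp(r * (1 - (n[t] / K) ** theta)
                                 + rng.normal(0, s))
    return n

# Forecast uncertainty from virtual data: re-simulate the fitted model
# many times and read off quantiles of the forecast distribution.
fitted = dict(n0=50.0, r=0.5, K=100.0, theta=1.0, s=0.1)  # assumed estimates
sims = np.array([theta_logistic(T=30, **fitted) for _ in range(2000)])
lo, hi = np.quantile(sims[:, -1], [0.05, 0.95])
print(f"90% forecast interval at t=30: [{lo:.1f}, {hi:.1f}]")
```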
The costs of transit fare prepayment programs : a parametric cost analysis.
DOT National Transportation Integrated Search
Despite the renewed interest in transit fare prepayment plans over the past : 10 years, few transit managers have a clear idea of how much it costs to operate : and maintain a fare prepayment program. This report provides transit managers : with the ...
An Affordability Comparison Tool (ACT) for Space Transportation
NASA Technical Reports Server (NTRS)
McCleskey, C. M.; Bollo, T. R.; Garcia, J. L.
2012-01-01
NASA has recently emphasized the importance of affordability for the Commercial Crew Development Program (CCDP), Space Launch System (SLS), and Multi-Purpose Crew Vehicle (MPCV). System architects and designers are challenged to come up with architectures and designs that do not bust the budget. This paper describes the Affordability Comparison Tool (ACT), which analyzes different systems or architecture configurations for affordability and allows for a comparison of: total life cycle cost; annual recurring costs; affordability figures-of-merit, such as cost per pound, cost per seat, and cost per flight; and productivity measures, such as payload throughput. Although ACT is not a deterministic model, the paper develops algorithms and parametric factors that use characteristics of the architectures or systems being compared to produce important system outcomes (figures-of-merit). Example applications of outcome figures-of-merit are also documented to provide the designer with information on the relative affordability and productivity of different space transportation applications.
1989-07-31
The immediate objective was to assess the feasibility of developing cost estimating relationships (CERs) based on data from the Army Operating and Support Management Information System (OSMIS). The long-range objective is to develop methods to determine total operating and support (O&S) costs within life-cycle cost.
Parametric geometric model and shape optimization of an underwater glider with blended-wing-body
NASA Astrophysics Data System (ADS)
Sun, Chunya; Song, Baowei; Wang, Peng
2015-11-01
The underwater glider, as a new kind of autonomous underwater vehicle, has many merits such as long range, extended duration and low cost. The shape of an underwater glider is an important factor in determining its hydrodynamic efficiency. In this paper, a high lift-to-drag-ratio configuration, the Blended-Wing-Body (BWB), is used to design a small civilian underwater glider. In the parametric geometric model of the BWB underwater glider, the planform is defined with a Bezier curve and a linear line, and the section is defined with the symmetrical airfoil NACA 0012. Computational investigations are carried out to study the hydrodynamic performance of the glider using the commercial Computational Fluid Dynamics (CFD) code Fluent. The Kriging-based genetic algorithm, called Efficient Global Optimization (EGO), is applied to hydrodynamic design optimization. The result demonstrates that the BWB underwater glider has excellent hydrodynamic performance, and the lift-to-drag ratio of the initial design is increased by 7% in the EGO process.
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.
1977-01-01
A preliminary assessment of vibroacoustic test plan optimization for free-flyer STS payloads is presented, and the effects of the number of missions on alternate test plans for Spacelab sortie payloads are also examined. The component vibration failure probability and the number of components in the housekeeping subassemblies are provided. Decision models are used to evaluate the cost effectiveness of seven alternate test plans using protoflight hardware.
Non-Parametric Model Drift Detection
2016-07-01
Commercial aspects of semi-reusable launch systems
NASA Astrophysics Data System (ADS)
Obersteiner, M. H.; Müller, H.; Spies, H.
2003-07-01
This paper presents a business planning model for a commercial space launch system. The financing model is based on market analyses and projections combined with market capture models. An operations model is used to derive the annual cash income. Parametric cost modeling, development and production schedules are used for quantifying the annual expenditures, the internal rate of return, the break-even point of positive cash flow, and the respective prices per launch. Alternative consortium structures, cash flow methods, capture rates and launch prices are used to examine the sensitivity of the model. The model is then applied to a promising semi-reusable launcher concept, showing the general achievability of the commercial approach and the necessary preconditions.
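A minimal sketch of the cash-flow side of such a business planning model: cumulative cash flow gives the break-even year, and the internal rate of return is found by bisection on the NPV sign change. The cash-flow profile below is invented.

```python
import numpy as np

# Illustrative annual net cash flows ($M): development outlays followed
# by launch-service revenues (made-up profile).
cash = np.array([-300.0, -450.0, -250.0, 80.0, 160.0, 220.0, 260.0, 280.0, 280.0])

# Break-even point: first year the cumulative cash flow turns positive.
cum = np.cumsum(cash)
breakeven = int(np.argmax(cum > 0)) if (cum > 0).any() else None
print("break-even year index:", breakeven)

def npv(rate, cf):
    """Net present value of a cash-flow stream at a given discount rate."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cf))

# Internal rate of return by bisection on the NPV sign change in [0, 1].
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if npv(mid, cash) > 0 else (lo, mid)
print(f"IRR ~ {lo:.1%}")
```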
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆
López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
Minimum noise impact aircraft trajectories
NASA Technical Reports Server (NTRS)
Jacobson, I. D.; Melton, R. G.
1981-01-01
Numerical optimization is used to compute the optimum flight paths, based upon a parametric form that implicitly includes some of the problem restrictions. The other constraints are formulated as penalties in the cost function. Various aircraft on multiple trajectories (landing and takeoff) can be considered. The modular design employed allows for the substitution of alternate models of the population distribution, aircraft noise, flight paths, and annoyance, or for the addition of other features (e.g., fuel consumption) in the cost function. A reduction in the required amount of searching over local minima was achieved by exploiting the statistical lateral dispersion present in the flight paths.
Turboprop cargo aircraft systems study
NASA Technical Reports Server (NTRS)
Muehlbauer, J. C.; Hewell, J. G., Jr.; Lindenbaum, S. P.; Randall, C. C.; Searle, N.; Stone, R. G., Jr.
1981-01-01
The effects of using advanced turboprop propulsion systems to reduce the fuel consumption and direct operating costs of cargo aircraft were studied, and the impact of these systems on aircraft noise and noiseprints around a terminal area was determined. Parametric variations of aircraft and propeller characteristics were investigated to determine their effects on noiseprint areas, fuel consumption, and direct operating costs. From these results, three aircraft designs were selected and subjected to design refinements and sensitivity analyses. Three competitive turbofan aircraft were also defined from parametric studies to provide a basis for comparing the two types of propulsion.
On Convergence of Development Costs and Cost Models for Complex Spaceflight Instrument Electronics
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Patel, Umeshkumar D.; Kasa, Robert L.; Hestnes, Phyllis; Brown, Tammy; Vootukuru, Madhavi
2008-01-01
Development costs of a few recent spaceflight instrument electrical and electronics subsystems have diverged from the predictions of the respective heritage cost models. The cost models used are Grass Roots, Price-H and Parametric Model. These cost models originated in the military and industry around 1970 and were successfully adopted and patched by NASA on a mission-by-mission basis for years. However, the complexity of new instruments has recently changed rapidly, by orders of magnitude. This is most obvious in the complexity of a representative spaceflight instrument electronics data system. It is now required to perform intermediate processing of digitized data apart from conventional processing of science phenomenon signals from multiple detectors. This involves on-board instrument formatting of computational operands from raw data (for example, images), multi-million operations per second on large volumes of data in reconfigurable hardware (in addition to processing on a general-purpose embedded or standalone instrument flight computer), as well as making decisions for on-board system adaptation and resource reconfiguration. The instrument data system is now tasked to perform more functions, such as forming packets and instrument-level data compression of more than one data stream, functions traditionally performed by the spacecraft command and data handling system. It is furthermore required that the electronics box for new complex instruments be developed for single-digit-watt power consumption, small size and light weight, and that it deliver super-computing capabilities. The conflict between the actual development cost of newer complex instruments and the predictions of the heritage cost models for their electronics components seems irreconcilable. This conflict, and an approach to its resolution, are addressed in this paper by determining complexity parameters and a complexity index, and by their use in an enhanced cost model.
Nixon, Richard M; Wonderling, David; Grieve, Richard D
2010-03-01
Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
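The paper's comparison can be reproduced in miniature: compute the CLT standard error and the non-parametric bootstrap standard error of a mean on skewed (log-normal) cost data. The values below are simulated, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(11)

# Skewed "cost" population (log-normal), as is typical of trial cost data.
n = 50
x = rng.lognormal(mean=7.0, sigma=1.2, size=n)

# CLT standard error of the sample mean.
se_clt = x.std(ddof=1) / np.sqrt(n)

# Non-parametric bootstrap standard error of the sample mean.
boot_means = np.array([x[rng.integers(0, n, n)].mean() for _ in range(5000)])
se_boot = boot_means.std(ddof=1)

print(f"CLT SE: {se_clt:.1f}   bootstrap SE: {se_boot:.1f}")
```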
Free-form geometric modeling by integrating parametric and implicit PDEs.
Du, Haixia; Qin, Hong
2007-01-01
Parametric PDE techniques, which use partial differential equations (PDEs) defined over a 2D or 3D parametric domain to model graphical objects and processes, can unify geometric attributes and functional constraints of the models. PDEs can also model implicit shapes defined by level sets of scalar intensity fields. In this paper, we present an approach that integrates parametric and implicit trivariate PDEs to define geometric solid models containing both geometric information and intensity distribution subject to flexible boundary conditions. The integrated formulation of second-order or fourth-order elliptic PDEs permits designers to manipulate PDE objects of complex geometry and/or arbitrary topology through direct sculpting and free-form modeling. We developed a PDE-based geometric modeling system for shape design and manipulation of PDE objects. The integration of implicit PDEs with parametric geometry offers more general and arbitrary shape blending and free-form modeling for objects with intensity attributes than pure geometric models.
Williams, Claire; Lewsey, James D; Briggs, Andrew H; Mackay, Daniel F
2017-05-01
This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modeling approach. Alongside the tutorial, we provide easy-to-use functions in the statistics package R. We argue that this multi-state modeling approach using a package such as R has advantages over approaches where models are built in a spreadsheet package. In particular, using a syntax-based approach means there is a written record of what was done and the calculations are transparent. Reproducing the analysis is straightforward as the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, which also has the option to use a state-arrival extended approach. In the state-arrival extended multi-state model, a covariate that represents patients' history is included, allowing the Markov property to be tested. We illustrate the building of multi-state survival models, making predictions from the models and assessing fits. We then proceed to perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create 2 common methods of visualizing the results-namely, cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R. It is based on adaptions to functions in the existing R package mstate to accommodate parametric multi-state modeling that facilitates extrapolation of survival curves.
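The tutorial itself is implemented in R with the mstate package; as a language-neutral illustration of the Markov decision-analytic alternative it mentions, here is a small Python cohort model with discounted costs and QALYs and an ICER. All transition probabilities, costs, and utilities are invented for the sketch.

```python
import numpy as np

# Three-state Markov cohort model (healthy, sick, dead); annual cycles.
P = {                                # rows: from-state, cols: to-state
    "treat":   np.array([[0.85, 0.10, 0.05],
                         [0.00, 0.80, 0.20],
                         [0.00, 0.00, 1.00]]),
    "control": np.array([[0.75, 0.18, 0.07],
                         [0.00, 0.70, 0.30],
                         [0.00, 0.00, 1.00]]),
}
state_cost = np.array([500.0, 4000.0, 0.0])   # care cost per cycle
state_util = np.array([0.90, 0.55, 0.0])      # QALY weights
cost_tx = {"treat": 2500.0, "control": 0.0}   # annual treatment cost

def run(arm, cycles=20, disc=0.035):
    occ = np.array([1.0, 0.0, 0.0])           # whole cohort starts healthy
    cost = qaly = 0.0
    for t in range(cycles):
        d = 1 / (1 + disc) ** t               # discount factor
        cost += d * (occ @ state_cost + occ[:2].sum() * cost_tx[arm])
        qaly += d * (occ @ state_util)
        occ = occ @ P[arm]                    # advance the cohort one cycle
    return cost, qaly

(c1, q1), (c0, q0) = run("treat"), run("control")
print(f"ICER = {(c1 - c0) / (q1 - q0):,.0f} per QALY")
```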
A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions
NASA Technical Reports Server (NTRS)
Foreman, Veronica; Le Moigne, Jacqueline; de Weck, Oliver
2016-01-01
Satellite constellations present unique capabilities and opportunities to Earth orbiting and near-Earth scientific and communications missions, but also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the shortcomings of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one-at-a-time, but much of the complexity in constellation and DSM cost modeling arises from constellation systems level considerations that have not yet been examined. This paper constitutes a survey of the current state-of-the-art in cost estimating techniques with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority weaknesses within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications. To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities must exist; this paper offers insights critical to the future development and implementation of DSM cost estimating tools.
NASA Astrophysics Data System (ADS)
Garagnani, S.; Manferdini, A. M.
2013-02-01
Since their introduction, modeling tools aimed at architectural design have evolved into today's "digital multi-purpose drawing boards" based on enhanced parametric elements able to originate whole buildings within virtual environments. Semantic splitting and element topology are features that allow objects to be "intelligent" (i.e. self-aware of what kind of element they are and with whom they can interact), representing the basics of Building Information Modeling (BIM), a coordinated, consistent and always up-to-date workflow designed to reach higher quality, reliability and cost reductions all over the design process. Even if BIM was originally intended for new architectures, its ability to store semantically inter-related information can be successfully applied to existing buildings as well, especially if they deserve particular care, such as Cultural Heritage sites. BIM engines can easily manage simple parametric geometries, collapsing them to standard primitives connected through hierarchical relationships; however, when components are generated from existing morphologies, for example by acquiring point clouds with digital photogrammetry or laser scanning equipment, complex abstractions have to be introduced while remodeling elements by hand, since automatic feature extraction in available software is still not effective. In order to introduce a methodology for processing point cloud data in a BIM environment with high accuracy, this paper describes some experiences in the documentation of monumental sites, generated through a plug-in written for Autodesk Revit and codenamed GreenSpider after its capability to lay out points in space as if they were nodes of an ideal cobweb.
Study of solid rocket motor for space shuttle booster. Volume 4: Cost
NASA Technical Reports Server (NTRS)
1972-01-01
The cost data for solid propellant rocket engines for use with the space shuttle are presented. The data are based on the selected 156 inch parallel and series burn configurations. Summary cost data are provided for the production of the 120 inch and 260 inch configurations. Graphs depicting parametric cost estimating relationships are included.
Layout design-based research on optimization and assessment method for shipbuilding workshop
NASA Astrophysics Data System (ADS)
Liu, Yang; Meng, Mei; Liu, Shuang
2013-06-01
This study examines a three-dimensional visualization approach to the layout design of a standard, discrete shipbuilding workshop, with emphasis on an improved genetic algorithm for optimization. Using a steel processing workshop as an example, the principle of minimum logistics cost is applied to obtain an ideal equipment layout and a mathematical model whose objective is to minimize the total travel distance between machines. An improved control operator is implemented to improve the iterative efficiency of the genetic algorithm and yield the relevant parameters. The Computer Aided Tri-Dimensional Interface Application (CATIA) software is applied to establish the manufacturing resource base and a parametric model of the steel processing workshop. Based on the results of the optimized planar logistics, a visual parametric model of the steel processing workshop is constructed, and qualitative and quantitative adjustments are then applied to the model. A method for evaluating the layout results is subsequently established using AHP. The optimized discrete production workshop thus provides a mode of reference for the optimization and layout of digitalized production workshops.
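A toy version of the minimum-logistics-cost layout problem with a permutation genetic algorithm (order crossover plus swap mutation), not the paper's improved operator: flows, slot coordinates, and GA settings are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative data: flow (parts moved per day) between 6 machines, and
# candidate slot coordinates on the shop floor (metres).
n = 6
flow = rng.integers(0, 20, (n, n)); flow = np.triu(flow, 1); flow += flow.T
slots = np.array([[0, 0], [6, 0], [12, 0], [0, 8], [6, 8], [12, 8]], float)
dist = np.linalg.norm(slots[:, None] - slots[None, :], axis=-1)

def logistics_cost(perm):
    """Total flow-weighted travel distance; perm[machine] = assigned slot."""
    return (flow * dist[np.ix_(perm, perm)]).sum() / 2

def crossover(a, b):
    """Order crossover keeping each layout a valid permutation."""
    cut = rng.integers(1, n)
    head = a[:cut]
    return np.concatenate([head, [g for g in b if g not in head]])

pop = [rng.permutation(n) for _ in range(40)]
for gen in range(200):
    pop.sort(key=logistics_cost)
    elite, children = pop[:10], []
    for _ in range(30):
        c = crossover(elite[rng.integers(0, 10)], elite[rng.integers(0, 10)])
        if rng.random() < 0.2:                # swap mutation
            i, j = rng.integers(0, n, 2)
            c[i], c[j] = c[j], c[i]
        children.append(c)
    pop = elite + children
best = min(pop, key=logistics_cost)
print("best layout:", best, "cost:", round(logistics_cost(best), 1))
```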
ERIC Educational Resources Information Center
Maydeu-Olivares, Albert
2005-01-01
Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…
Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross
2016-06-01
To develop and evaluate a fast and simple tool called dpetstep (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of petstep (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time activity curves (GAUSS). dpetstep was 8000 times faster than MC. Dynamic images from dpetstep had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dpetstep and MC images was the same, while GAUSS differed by 3 percentage points. Noise profiles in dpetstep images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dpetstep images and noise properties agreed better with MC. The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dpetstep to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models, it may not be suitable for studies investigating these phenomena. dpetstep can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
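To make the per-voxel simulation step concrete, the following hedged sketch generates noisy time-activity curves from a one-tissue compartment model, in the spirit of (but not reproducing) the dpetstep pipeline. The kinetic parameter maps, input function, and frame timing are illustrative assumptions.

```python
import numpy as np

t = np.linspace(0.25, 60.0, 40)            # frame mid-times in minutes (assumed)

def plasma_input(t):
    """Illustrative analytic plasma input function (not from the paper)."""
    return 100.0 * t * np.exp(-t / 1.5) + 5.0 * np.exp(-t / 40.0)

def one_tissue_tac(K1, k2, t):
    """C_T(t) = K1 * exp(-k2*t) convolved with C_p(t) (one-tissue model)."""
    dt = t[1] - t[0]
    return np.convolve(plasma_input(t), K1 * np.exp(-k2 * t))[: len(t)] * dt

# Tiny 2x2 "parametric image" of K1 and k2 values (illustrative assumptions)
K1_map = np.array([[0.1, 0.3], [0.5, 0.2]])
k2_map = np.array([[0.05, 0.10], [0.15, 0.08]])

tacs = np.stack([[one_tissue_tac(K1, k2, t)
                  for K1, k2 in zip(rK1, rk2)]
                 for rK1, rk2 in zip(K1_map, k2_map)])

# Crude Poisson-like counting noise per frame; the real tool instead pushes
# each frame through projection, scatter/randoms, attenuation and reconstruction
rng = np.random.default_rng(1)
noisy = rng.poisson(np.clip(tacs, 0.0, None) * 10.0) / 10.0
print(noisy.shape)                         # (2, 2, 40) noisy time activity curves
```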
The linear transformation model with frailties for the analysis of item response times.
Wang, Chun; Chang, Hua-Hua; Douglas, Jeffrey A
2013-02-01
The item response times (RTs) collected from computerized testing represent an underutilized source of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. In this paper, we propose a semi-parametric model for RTs, the linear transformation model with a latent speed covariate, which combines the flexibility of non-parametric modelling and the brevity as well as interpretability of parametric modelling. In this new model, the RTs, after some non-parametric monotone transformation, become a linear model with latent speed as covariate plus an error term. The distribution of the error term implicitly defines the relationship between the RT and examinees' latent speeds; whereas the non-parametric transformation is able to describe various shapes of RT distributions. The linear transformation model represents a rich family of models that includes the Cox proportional hazards model, the Box-Cox normal model, and many other models as special cases. This new model is embedded in a hierarchical framework so that both RTs and responses are modelled simultaneously. A two-stage estimation method is proposed. In the first stage, the Markov chain Monte Carlo method is employed to estimate the parametric part of the model. In the second stage, an estimating equation method with a recursive algorithm is adopted to estimate the non-parametric transformation. Applicability of the new model is demonstrated with a simulation study and a real data application. Finally, methods to evaluate the model fit are suggested. © 2012 The British Psychological Society.
Validation of two (parametric vs non-parametric) daily weather generators
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Skalak, P.
2015-12-01
As climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most favoured downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agricultural, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. Among their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run at various time steps and for multiple weather variables (the generators reproduce the correlations among variables), and (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a first-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on a nearest-neighbour resampling technique, making no assumptions about the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of duration of hot/cold/dry/wet spells); and (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series from several European stations available from the ECA&D database. Acknowledgements: The weather generator is developed and validated within the frame of the projects WG4VALUE (sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).
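A hedged, minimal sketch of the parametric-generator idea described for M&Rfi: a two-state Markov chain drives precipitation occurrence, wet-day amounts are drawn from a gamma distribution, and a first-order autoregressive process generates a temperature-like variable. All parameter values are illustrative, not calibrated ones.

```python
import numpy as np

rng = np.random.default_rng(42)
n_days = 365

# Two-state Markov chain for precipitation occurrence (probabilities assumed)
p_wet_given_dry, p_wet_given_wet = 0.25, 0.60
wet = np.zeros(n_days, dtype=bool)
for d in range(1, n_days):
    p = p_wet_given_wet if wet[d - 1] else p_wet_given_dry
    wet[d] = rng.random() < p

# Gamma-distributed amounts on wet days (shape and scale are assumptions)
precip = np.where(wet, rng.gamma(shape=0.8, scale=6.0, size=n_days), 0.0)

# First-order autoregressive process for a non-precipitation variable,
# here a temperature anomaly (phi and sigma are assumptions)
phi, sigma = 0.8, 2.0
temp = np.zeros(n_days)
for d in range(1, n_days):
    temp[d] = phi * temp[d - 1] + rng.normal(0.0, sigma)

print(f"wet-day fraction: {wet.mean():.2f}, "
      f"mean wet-day amount: {precip[wet].mean():.1f} mm, "
      f"lag-1 temperature autocorr: {np.corrcoef(temp[:-1], temp[1:])[0, 1]:.2f}")
```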
Creating A Data Base For Design Of An Impeller
NASA Technical Reports Server (NTRS)
Prueger, George H.; Chen, Wei-Chung
1993-01-01
Report describes use of Taguchi method of parametric design to create data base facilitating optimization of design of impeller in centrifugal pump. Data base enables systematic design analysis covering all significant design parameters. Reduces time and cost of parametric optimization of design: for particular impeller considered, one can cover 4,374 designs by computational simulations of performance for only 18 cases.
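The compression quoted above is easy to check. If the study varied one parameter at two levels and seven parameters at three levels (a standard configuration of Taguchi's L18 orthogonal array, assumed here since the abstract does not list the factors), the full factorial space matches the quoted figure while the array needs only 18 runs:

```latex
\underbrace{2^{1}}_{\text{one 2-level factor}} \times \underbrace{3^{7}}_{\text{seven 3-level factors}} = 2 \times 2187 = 4374 \ \text{designs}, \qquad \text{versus } 18 \ \text{orthogonal-array cases}.
```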
Technology needs for lunar and Mars space transfer systems
NASA Technical Reports Server (NTRS)
Woodcock, Gordon R.; Cothran, Bradley C.; Donahue, Benjamin; Mcghee, Jerry
1991-01-01
The determination of appropriate space transportation technologies and operating modes is discussed with respect to both lunar and Mars missions. Three levels of activity are set forth to examine the sensitivity of transportation preferences including 'minimum,' 'full science,' and 'industrialization and settlement' categories. High-thrust-profile missions for lunar and Mars transportation are considered in terms of their relative advantages, and transportation options are defined in terms of propulsion and braking technologies. Costs and life-cycle cost estimates are prepared for the transportation preferences by using a parametric cost model, and a return-on-investment summary is given. Major technological needs for the programs are listed and include storable propulsion systems; cryogenic engines and fluids management; aerobraking; and nuclear thermal, nuclear electric, electric, and solar electric propulsion technologies.
Cost and efficiency of disaster waste disposal: A case study of the Great East Japan Earthquake.
Sasao, Toshiaki
2016-12-01
This paper analyzes the cost and efficiency of waste disposal associated with the Great East Japan Earthquake. Two analyses were performed: (1) a popular parametric approach, an ordinary least squares (OLS) regression, to estimate the factors that affect disposal costs; and (2) a non-parametric approach, a two-stage data envelopment analysis (DEA), to analyze the efficiency of each municipality and identify the best performers in disaster waste management. Our results indicate that a higher recycling rate of disaster waste and a larger amount of tsunami sediments decrease the average disposal costs. Our results also indicate that area-wide management increases the average cost. In addition, the efficiency scores were observed to vary widely by municipality, and more temporary incinerators and secondary waste stocks improve the efficiency scores. However, it is likely that the radioactive contamination from the Fukushima Daiichi nuclear power station influenced the results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Graham, Christopher N; Hechmati, Guy; Fakih, Marwan G; Knox, Hediyyih N; Maglinte, Gregory A; Hjelmgren, Jonas; Barber, Beth; Schwartzberg, Lee S
2015-01-01
To compare the costs of first-line treatment with panitumumab + FOLFOX in comparison to cetuximab + FOLFIRI among patients with wild-type (WT) RAS metastatic colorectal cancer (mCRC) in the US. A cost-minimization model was developed assuming similar treatment efficacy between both regimens. The model estimated the costs associated with drug acquisition, treatment administration frequency (every 2 weeks for panitumumab, weekly for cetuximab), and incidence of infusion reactions. Average anti-EGFR doses were calculated from the ASPECCT clinical trial, and average doses of chemotherapy regimens were based on product labels. Using the medical component of the consumer price index, adverse event costs were inflated to 2014 US dollars, and all other costs were reported in 2014 US dollars. The time horizon for the model was based on average first-line progression-free survival of a WT RAS patient, estimated from parametric survival analyses of PRIME clinical trial data. Relative to cetuximab + FOLFIRI in the first-line treatment of WT RAS mCRC, the cost-minimization model demonstrated lower projected drug acquisition, administration, and adverse event costs for patients who received panitumumab + FOLFOX. The overall cost per patient for first-line treatment was $179,219 for panitumumab + FOLFOX vs $202,344 for cetuximab + FOLFIRI, resulting in a per-patient saving of $23,125 (11.4%) in favor of panitumumab + FOLFOX. From a value perspective, the cost-minimization model supports panitumumab + FOLFOX instead of cetuximab + FOLFIRI as the preferred first-line treatment of WT RAS mCRC patients requiring systemic therapy.
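The headline saving can be checked directly from the two per-patient totals reported above:

```latex
\$202{,}344 - \$179{,}219 = \$23{,}125, \qquad \frac{23{,}125}{202{,}344} \approx 0.114 = 11.4\%.
```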
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1994-01-01
NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that decisions can be made accurately. Because of the long lead times required to develop space hardware, the cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in conceptual phases of a program is usually only vaguely defined, and the technology used is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost, such as weight, quantity, development culture, design inheritance and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.
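As a hedged illustration of the weight-cost relationship examined in the paper, the sketch below fits a power-law cost estimating relationship, cost = a * W^b, by ordinary least squares in log-log space. The data points are synthetic placeholders, not the historical database the paper tests against.

```python
import numpy as np

# Synthetic (weight in kg, cost in $M) pairs -- placeholders, not real projects
weight = np.array([150.0, 400.0, 900.0, 2000.0, 4500.0, 10000.0])
cost = np.array([12.0, 25.0, 48.0, 90.0, 170.0, 330.0])

# Fit log(cost) = log(a) + b * log(weight) by least squares
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)
print(f"CER: cost ~= {a:.2f} * W^{b:.2f}")

# Predict the cost of a hypothetical 3000 kg system
print(f"predicted cost at 3000 kg: ${a * 3000**b:.0f}M")
```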
Stirling heat pump external heat systems: An appliance perspective
NASA Astrophysics Data System (ADS)
Vasilakis, Andrew D.; Thomas, John F.
1992-08-01
A major issue facing the Stirling Engine Heat Pump is system cost, and, in particular, the cost of the External Heat System (EHS). The need for high temperature at the heater head (600 C to 700 C) results in low combustion system efficiencies unless efficient heat recovery is employed. The balance between energy efficiency and use of costly high temperature materials is critical to design and cost optimization. Blower power consumption and NO(x) emissions are also important. A new approach to the design and cost optimization of the EHS was taken by viewing the system from a natural gas-fired appliance perspective. To develop a design acceptable to gas industry requirements, American National Standards Institute (ANSI) code considerations were incorporated into the design process and material selections. A parametric engineering design and cost model was developed to perform the analysis, including the impact of design on NO(x) emissions. Analysis results and recommended EHS design and material choices are given.
Li, Bin; Chen, Kan; Tian, Lianfang; Yeboah, Yao; Ou, Shanxing
2013-01-01
The segmentation and detection of various types of nodules in a Computer-aided detection (CAD) system present various challenges, especially when (1) the nodule is connected to a vessel and they have very similar intensities; and (2) the nodule with ground-glass opacity (GGO) characteristics possesses typically weak edges and intensity inhomogeneity, so that it is difficult to define the boundaries. Traditional segmentation methods may cause problems of boundary leakage and "weak" local minima. This paper addresses these problems: an improved detection method, which combines a fuzzy integrated active contour model (FIACM)-based segmentation method, a segmentation refinement method based on a Parametric Mixture Model (PMM) of juxta-vascular nodules, and a knowledge-based C-SVM (Cost-sensitive Support Vector Machines) classifier, is proposed for detecting various types of pulmonary nodules in computerized tomography (CT) images. Our approach has several novel aspects: (1) in the proposed FIACM model, edge and local region information is incorporated, and the fuzzy energy is used as the motivation power for the evolution of the active contour; (2) a hybrid PMM model of juxta-vascular nodules combining appearance and geometric information is constructed for segmentation refinement of juxta-vascular nodules. Experimental results show the desirable performance of the proposed method in detecting pulmonary nodules.
NASA Astrophysics Data System (ADS)
Meinke, I.
2003-04-01
A new method is presented to validate cloud parametrization schemes in numerical atmospheric models with satellite data of scanning radiometers. This method is applied to the regional atmospheric model HRM (High Resolution Regional Model) using satellite data from ISCCP (International Satellite Cloud Climatology Project). The limited reliability of former validations created a need for developing a new validation method: up to now, differences between simulated and measured cloud properties have mostly been declared as deficiencies of the cloud parametrization scheme without further investigation. Other uncertainties connected with the model or with the measurements have not been taken into account. Therefore, changes in the cloud parametrization scheme based on such validations might not be realistic. The new method estimates the uncertainties of both the model and the measurements. Criteria for comparisons of simulated and measured data are derived to localize deficiencies in the model. For a better specification of these deficiencies, simulated clouds are classified by their parametrization. With this classification the localized model deficiencies are allocated to a certain parametrization scheme. Applying this method to the regional model HRM, the quality of cloud property forecasts is estimated in detail. The overestimation of simulated clouds at low emissivity heights, especially during the night, is localized as a model deficiency. This is caused by subscale cloudiness. As the simulation of subscale clouds in the regional model HRM is described by a relative humidity parametrization, these deficiencies are connected with this parametrization.
The cost of colorectal cancer according to the TNM stage.
Mar, Javier; Errasti, Jose; Soto-Gordoa, Myriam; Mar-Barrutia, Gilen; Martinez-Llorente, José Miguel; Domínguez, Severina; García-Albás, Juan José; Arrospide, Arantzazu
2017-02-01
The aim of this study was to measure the cost of treatment of colorectal cancer in the Basque public health system according to the clinical stage. We retrospectively collected demographic data, clinical data and resource use of a sample of 529 patients. For stages I to III the initial and follow-up costs were measured. The calculation of cost for stage IV combined generalized linear models to relate the cost to the duration of follow-up based on parametric survival analysis. Unit costs were obtained from the analytical accounting system of the Basque Health Service. The sample included 110 patients with stage I, 171 with stage II, 158 with stage III and 90 with stage IV colorectal cancer. The initial total cost per patient was 8,644€ for stage I, 12,675€ for stage II and 13,034€ for stage III. The main component was hospitalization cost. Calculated by extrapolation, mean survival for stage IV was 1.27 years. Its average annual cost was 22,403€, and 24,509€ to death. The total annual cost for colorectal cancer extrapolated to the whole Spanish health system was 623.9 million €. The economic burden of colorectal cancer is important and should be taken into account in decision-making. The combination of generalized linear models and survival analysis allows estimation of the cost of the metastatic stage. Copyright © 2017 AEC. Publicado por Elsevier España, S.L.U. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rachid B. Slimane; Francis S. Lau; Javad Abbasian
2000-10-01
The objective of this program is to develop an economical process for hydrogen production, with no additional carbon dioxide emission, through the thermal decomposition of hydrogen sulfide (H2S) in H2S-rich waste streams to high-purity hydrogen and elemental sulfur. The novel feature of the process being developed is the superadiabatic combustion (SAC) of part of the H2S in the waste stream to provide the thermal energy required for the decomposition reaction such that no additional energy is required. The program is divided into two phases. In Phase 1, detailed thermochemical and kinetic modeling of the SAC reactor with H2S-rich fuel gas and air/enriched air feeds is undertaken to evaluate the effects of operating conditions on exit gas products and conversion efficiency, and to identify key process parameters. Preliminary modeling results are used as a basis to conduct a thorough evaluation of SAC process design options, including reactor configuration, operating conditions, and productivity-product separation schemes, with respect to potential product yields, thermal efficiency, capital and operating costs, and reliability, ultimately leading to the preparation of a design package and cost estimate for a bench-scale reactor testing system to be assembled and tested in Phase 2 of the program. A detailed parametric testing plan was also developed for process design optimization and model verification in Phase 2. During Phase 2 of this program, IGT, UIC, and industry advisors UOP and BP Amoco will validate the SAC concept through construction of the bench-scale unit and parametric testing. The computer model developed in Phase 1 will be updated with the experimental data and used in future scale-up efforts. The process design will be refined and the cost estimate updated. Market survey and assessment will continue so that a commercial demonstration project can be identified.
A review of parametric approaches specific to aerodynamic design process
NASA Astrophysics Data System (ADS)
Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li
2018-04-01
Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches provide a large design space with few variables. Parametric methods in common use are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, and all of them are widely applied in airfoil design. This survey compares their capabilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is limited, and the most popular one is the Free-form deformation method. Methods extended from two-dimensional parametric approaches have promising prospects in aircraft modeling. Since the methods differ in their characteristics, a real design process needs a flexible choice among them to suit the subsequent optimization procedure.
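As a hedged illustration of one of the surveyed methods, the sketch below implements the Class/Shape function Transformation (CST) for an airfoil upper surface: a class function fixes the round-nose/sharp-tail behaviour, and Bernstein polynomials weight the shape function. The weights used are arbitrary examples, not values from the review.

```python
import numpy as np
from math import comb

def cst_surface(x, weights, n1=0.5, n2=1.0):
    """CST airfoil surface: class function * Bernstein-weighted shape function."""
    x = np.asarray(x)
    n = len(weights) - 1
    class_fn = x**n1 * (1.0 - x)**n2          # round nose, sharp trailing edge
    shape_fn = sum(w * comb(n, i) * x**i * (1.0 - x)**(n - i)
                   for i, w in enumerate(weights))
    return class_fn * shape_fn

x = np.linspace(0.0, 1.0, 101)                # chordwise stations
weights = [0.17, 0.15, 0.20, 0.18]            # example Bernstein weights
y_upper = cst_surface(x, weights)
print(f"max thickness station: x = {x[np.argmax(y_upper)]:.2f}, "
      f"y = {y_upper.max():.4f}")
```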
NASA Technical Reports Server (NTRS)
ONeil, D. A.; Mankins, J. C.; Christensen, C. B.; Gresham, E. C.
2005-01-01
The Advanced Technology Lifecycle Analysis System (ATLAS), a spreadsheet analysis tool suite, applies parametric equations for sizing and lifecycle cost estimation. Performance, operation, and programmatic data used by the equations come from a Technology Tool Box (TTB) database. In this second TTB Technical Interchange Meeting (TIM), technologists, system model developers, and architecture analysts discussed methods for modeling technology decisions in spreadsheet models, identified specific technology parameters, and defined detailed development requirements. This Conference Publication captures the consensus of the discussions and provides narrative explanations of the tool suite, the database, and applications of ATLAS within NASA's changing environment.
Prevalence Incidence Mixture Models
The R package and webtool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling and stratified sampling (the two approaches of superpopulation and finite population are supported for target populations). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.
Andersson, Therese M L; Dickman, Paul W; Eloranta, Sandra; Lambert, Paul C
2011-06-22
When the mortality among a cancer patient group returns to the same level as in the general population, that is, the patients no longer experience excess mortality, the patients still alive are considered "statistically cured". Cure models can be used to estimate the cure proportion as well as the survival function of the "uncured". One limitation of parametric cure models is that the functional form of the survival of the "uncured" has to be specified. It can sometimes be hard to find a survival function flexible enough to fit the observed data, for example, when there is high excess hazard within a few months from diagnosis, which is common among older age groups. This has led to the exclusion of older age groups in population-based cancer studies using cure models. Here we have extended the flexible parametric survival model to incorporate cure as a special case to estimate the cure proportion and the survival of the "uncured". Flexible parametric survival models use splines to model the underlying hazard function, and therefore no parametric distribution has to be specified. We have compared the fit from standard cure models to our flexible cure model, using data on colon cancer patients in Finland. This new method gives similar results to a standard cure model, when it is reliable, and better fit when the standard cure model gives biased estimates. Cure models within the framework of flexible parametric models enables cure modelling when standard models give biased estimates. These flexible cure models enable inclusion of older age groups and can give stage-specific estimates, which is not always possible from parametric cure models. © 2011 Andersson et al; licensee BioMed Central Ltd.
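For context, here is a hedged sketch of the standard (mixture) cure model that the flexible approach is compared against: a cured fraction pi never experiences the event, and the "uncured" follow a parametric survival function, here a Weibull chosen purely for illustration.

```python
import numpy as np

def mixture_cure_survival(t, pi, lam, gamma):
    """S(t) = pi + (1 - pi) * S_u(t), with Weibull S_u (illustrative choice)."""
    s_uncured = np.exp(-(lam * np.asarray(t)) ** gamma)
    return pi + (1.0 - pi) * s_uncured

t = np.linspace(0.0, 15.0, 6)        # years since diagnosis
pi, lam, gamma = 0.55, 0.40, 1.2     # cure proportion and Weibull parameters (assumed)
for ti, si in zip(t, mixture_cure_survival(t, pi, lam, gamma)):
    print(f"t = {ti:4.1f} y  S(t) = {si:.3f}")   # S(t) -> pi as t grows
```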
Seo, Seongho; Kim, Su Jin; Lee, Dong Soo; Lee, Jae Sung
2014-10-01
Tracer kinetic modeling in dynamic positron emission tomography (PET) has been widely used to investigate the characteristic distribution patterns or dysfunctions of neuroreceptors in brain diseases. Its practical goal has progressed from regional data quantification to parametric mapping that produces images of kinetic-model parameters by fully exploiting the spatiotemporal information in dynamic PET data. Graphical analysis (GA) is a major parametric mapping technique that is independent on any compartmental model configuration, robust to noise, and computationally efficient. In this paper, we provide an overview of recent advances in the parametric mapping of neuroreceptor binding based on GA methods. The associated basic concepts in tracer kinetic modeling are presented, including commonly-used compartment models and major parameters of interest. Technical details of GA approaches for reversible and irreversible radioligands are described, considering both plasma input and reference tissue input models. Their statistical properties are discussed in view of parametric imaging.
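A hedged sketch of one classic GA method mentioned above, the Logan plot for reversible tracers with a plasma input: after a time t*, the transformed integrals fall on a line whose slope estimates the total distribution volume V_T. The TAC and input function here are synthetic placeholders.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Synthetic example: one-tissue model with K1 = 0.3, k2 = 0.1 (so V_T = 3.0)
t = np.linspace(0.1, 90.0, 200)                             # minutes
cp = 80.0 * t * np.exp(-t / 2.0) + 4.0 * np.exp(-t / 60.0)  # plasma input
dt = t[1] - t[0]
irf = 0.3 * np.exp(-0.1 * t)
ct = np.convolve(cp, irf)[: len(t)] * dt                    # tissue TAC

# Logan transform: y = int_0^t C_T / C_T(t),  x = int_0^t C_p / C_T(t)
int_ct = cumulative_trapezoid(ct, t, initial=0.0)
int_cp = cumulative_trapezoid(cp, t, initial=0.0)
y = int_ct / ct
x = int_cp / ct

# Fit the late, linear portion (t > t*, here t* = 30 min)
late = t > 30.0
slope, intercept = np.polyfit(x[late], y[late], 1)
print(f"estimated V_T = {slope:.2f} (true value 3.0)")
```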
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations.
Silva, R; Dow, P; Dubay, R; Lissandrello, C; Holder, J; Densmore, D; Fiering, J
2017-09-01
Acoustic manipulation has emerged as a versatile method for microfluidic separation and concentration of particles and cells. Most recent demonstrations of the technology use piezoelectric actuators to excite resonant modes in silicon or glass microchannels. Here, we focus on acoustic manipulation in disposable, plastic microchannels in order to enable a low-cost processing tool for point-of-care diagnostics. Unfortunately, the performance of resonant acoustofluidic devices in plastic is hampered by a lack of a predictive model. In this paper, we build and test a plastic blood-bacteria separation device informed by a design of experiments approach, parametric rapid prototyping, and screening by image-processing. We demonstrate that the new device geometry can separate bacteria from blood while operating at 275% greater flow rate as well as reduce the power requirement by 82%, while maintaining equivalent separation performance and resolution when compared to the previously published plastic acoustofluidic separation device.
Phase mismatched optical parametric generation in semiconductor magnetoplasma
NASA Astrophysics Data System (ADS)
Dubey, Swati; Ghosh, S.; Jain, Kamal
2017-05-01
Optical parametric generation involves the interaction of pump, signal, and idler waves satisfying the law of conservation of energy. The phase mismatch parameter plays an important role in the spatial distribution of the field along the medium. In this paper, instead of an exactly matched wave vector, a small mismatch is admitted, with a degree of phase velocity mismatch between these waves; hence the medium must possess a certain finite coherence length. This wave mixing process is well explained by coupled-mode theory and a one-dimensional hydrodynamic model. Based on this scheme, expressions for the threshold pump field and transmitted intensity have been derived. It is observed that the threshold pump intensity and transmitted intensity can be manipulated by varying doping concentration and magnetic field under the phase mismatched condition. A compound semiconductor crystal of n-InSb is assumed to be illuminated at 77 K by a 10.6 μm CO2 laser with photon energy well below the band gap energy of the crystal, so that only free charge carriers influence the optical properties of the medium for IR parametric generation in a semiconductor plasma medium. Favorable parameters for exciting this process were explored, keeping in mind the cost effectiveness and conversion efficiency of the process.
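The finite coherence length invoked above follows from the residual wave-vector mismatch in the usual way (one common convention is shown here; the abstract does not state which it uses). With pump, signal, and idler wave vectors k_p, k_s, k_i, useful parametric gain accumulates only over roughly

```latex
\Delta k = k_p - k_s - k_i , \qquad L_c = \frac{\pi}{\lvert \Delta k \rvert},
```

beyond which the generated fields begin to convert back toward the pump.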
First-Order Parametric Model of Reflectance Spectra for Dyed Fabrics
2016-02-19
Unclassified/unlimited distribution. This report describes a first-order parametric model of reflectance spectra for dyed fabrics, which provides for both their inverse and direct modeling. The dyes considered contain spectral features that are of interest to the U.S. Navy. An appendix presents dielectric response functions for dyes obtained by inverse analysis.
Ng, S K; McLachlan, G J
2003-04-15
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
Upatising, Benjavan; Wood, Douglas L; Kremers, Walter K; Christ, Sharon L; Yih, Yuehwern; Hanson, Gregory J; Takahashi, Paul Y
2015-01-01
From 1992 to 2008, older adults in the United States incurred more healthcare expense per capita than any other age group. Home telemonitoring has emerged as a potential solution to reduce these costs, but evidence is mixed. The primary aim of the study was to evaluate whether the mean difference in total direct medical cost between older adults receiving additional home telemonitoring care (TELE) (n=102) and those receiving usual medical care (UC) (n=103) was significant. Inpatient, outpatient, emergency department, decedent, survivor, and 30-day readmission costs were evaluated as secondary aims. Multivariate generalized linear models (GLMs) and a parametric bootstrapping method were used to model cost and to determine the significance of the cost differences. We also compared the differences in arithmetic mean costs. From the conditional GLMs, the estimated mean cost differences (TELE versus UC) for total, inpatient, outpatient, and ED were -$9,537 (p=0.068), -$8,482 (p=0.098), -$1,160 (p=0.177), and $106 (p=0.619), respectively. Mean postenrollment cost was 11% lower than the prior year for TELE versus 22% higher for UC. The ratio of mean cost for decedents to survivors was 2.1:1 (TELE) versus 12.7:1 (UC). There were no significant differences in the mean total cost between the two treatment groups. The TELE group had less variability in cost of care, a lower decedents-to-survivors cost ratio, and lower total 30-day readmission cost than the UC group.
EVA/ORU model architecture using RAMCOST
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.; Park, Eui H.; Wang, Y. M.; Bretoi, R.
1990-01-01
A parametrically driven simulation model is presented in order to provide detailed insight into the effects of various input parameters in the life testing of a modular space suit. The RAMCOST model employed is a user-oriented simulation model for studying the life-cycle costs of designs under conditions of uncertainty. The results obtained from the EVA simulation model are used to assess various mission life testing parameters such as the number of joint motions per EVA cycle time, part availability, and number of inspection requirements. RAMCOST first simulates EVA completion for NASA applications using a probabilistic PERT-like network. With the mission time heuristically determined, RAMCOST then models different orbital replacement unit policies with special application to the astronaut's space suit functional designs.
Modeling integrated water user decisions in intermittent supply systems
NASA Astrophysics Data System (ADS)
Rosenberg, David E.; Tarawneh, Tarek; Abdel-Khaleq, Rania; Lund, Jay R.
2007-07-01
We apply systems analysis to estimate household water use in an intermittent supply system considering numerous interdependent water user behaviors. Some 39 household actions include conservation; improving local storage or water quality; and accessing sources having variable costs, availabilities, reliabilities, and qualities. A stochastic optimization program with recourse decisions identifies the infrastructure investments and short-term coping actions a customer can adopt to cost-effectively respond to a probability distribution of piped water availability. Monte Carlo simulations show effects for a population of customers. Model calibration reproduces the distribution of billed residential water use in Amman, Jordan. Parametric analyses suggest economic and demand responses to increased availability and alternative pricing. It also suggests potential market penetration for conservation actions, associated water savings, and subsidies to entice further adoption. We discuss new insights to size, target, and finance conservation.
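A hedged toy version of the stochastic program with recourse described above: a household chooses a storage-tank size up front, then copes with each realization of piped-water availability by buying vendor water as recourse. All prices, demands, and availability distributions are invented placeholders, and stored water is treated as free once captured, a deliberate simplification.

```python
import numpy as np

rng = np.random.default_rng(7)

demand = 10.0                        # m^3 per week (assumed)
tank_sizes = [0.0, 2.0, 4.0, 8.0]    # candidate storage investments, m^3
tank_cost = {0.0: 0.0, 2.0: 1.5, 4.0: 2.5, 8.0: 4.0}  # $/week, annualized (assumed)
piped_price, vendor_price = 0.5, 4.0                  # $/m^3 (assumed)

# Random piped supply per week: intermittent availability (assumed distribution)
supply = rng.uniform(2.0, 12.0, size=10_000)

def expected_cost(tank):
    """First-stage tank cost plus expected second-stage (recourse) cost."""
    usable = np.minimum(supply + tank, demand)   # stored water bridges shortfalls
    piped_used = np.minimum(supply, usable)
    shortfall = demand - usable                  # met by vendor water (recourse)
    return tank_cost[tank] + np.mean(piped_price * piped_used
                                     + vendor_price * shortfall)

for s in tank_sizes:
    print(f"tank {s:3.1f} m^3: expected weekly cost ${expected_cost(s):.2f}")
print("best investment:", min(tank_sizes, key=expected_cost), "m^3")
```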
NASA Astrophysics Data System (ADS)
Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom
2018-05-01
Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed. As a special case, recently developed SISO algorithms are recovered. The proposed approach is successfully demonstrated on simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly fewer parameters compared with alternative approaches.
Phase noise suppression through parametric filtering
NASA Astrophysics Data System (ADS)
Cassella, Cristian; Strachan, Scott; Shaw, Steven W.; Piazza, Gianluca
2017-02-01
In this work, we introduce and experimentally demonstrate a parametric phase noise suppression technique, which we call "parametric phase noise filtering." This technique is based on the use of a solid-state parametric amplifier operating in its instability region and included in a non-autonomous feedback loop connected at the output of a noisy oscillator. We demonstrate that such a system behaves as a parametrically driven Duffing resonator and can operate at special points where it becomes largely immune to the phase fluctuations that affect the oscillator output signal. A prototype of a parametric phase noise filter (PFIL) was designed and fabricated to operate in the very-high-frequency range. The PFIL prototype allowed us to significantly reduce the phase noise at the output of a commercial signal generator operating around 220 MHz. Noise reduction of 16 dB (40×) and 13 dB (20×) were obtained, respectively, at 1 and 10 kHz offsets from the carrier frequency. The demonstration of this phase noise suppression technique opens up scenarios in the development of passive and low-cost phase noise cancellation circuits for any application demanding high quality frequency generation.
Conceptual design of reduced energy transports
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Harper, M.; Smith, C. L.; Waters, M. H.; Williams, L. J.
1975-01-01
This paper reports the results of a conceptual design study of new, near-term fuel-conservative aircraft. A parametric study was made to determine the effects of cruise Mach number and fuel cost on the 'optimum' configuration characteristics and on economic performance. Supercritical wing technology and advanced engine cycles were assumed. For each design, the wing geometry was optimized to give maximum return on investment at a particular fuel cost. Based on the results of the parametric study, a reduced energy configuration was selected. Compared with existing transport designs, the reduced energy design has a higher aspect ratio wing with lower sweep, and cruises at a lower Mach number. It yields about 30% more seat-miles/gal than current wide-body aircraft. At the higher fuel costs anticipated in the future, the reduced energy design has about the same economic performance as existing designs.
NASA Technical Reports Server (NTRS)
Staigner, P. J.; Abbott, J. M.
1980-01-01
Two parallel contracted studies were conducted. Each contractor investigated three base cases and parametric variations about these base cases. Each contractor concluded that two of the base cases (a plant using separate firing of an advanced high temperature regenerative air heater with fuel from an advanced coal gasifier and a plant using an intermediate temperature metallic recuperative heat exchanger to heat oxygen enriched combustion air) were comparable in both performance and cost of electricity. The contractors differed in the level of their cost estimates with the capital cost estimates for the MHD topping cycle and the magnet subsystem in particular accounting for a significant part of the difference. The impact of the study on the decision to pursue a course which leads to an oxygen enriched plant as the first commercial MHD plant is described.
Yakubu, Mahadi Lawan; Yusop, Zulkifli; Yusof, Fadhilah
2014-01-01
This paper presents modelled raindrop size parameters for the Skudai region of Johor Bahru, western Malaysia. Presently, there is no model to forecast the characteristics of the raindrop size distribution (DSD) in Malaysia, and this has implications for wet-weather pollution predictions. The climate of Skudai exhibits local variability at the regional scale. This study established five different parametric expressions describing the rain rate of Skudai; these models are idiosyncratic to the climate of the region. Sophisticated equipment that converts sound to a relevant raindrop diameter is often too expensive, and its cost sometimes overrides its attractiveness. In this study, a physical low-cost method was used to record the DSD of the study area. The Kaplan-Meier method was used to test the aptness of the data to exponential and lognormal distributions, which were subsequently used to formulate the parameterisation of the distributions. This research challenges the concept that tropical rainfall occurs exclusively as convective storms and presents new insight into the concurrent appearance of storm types. PMID:25126597
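As a hedged illustration of the distribution fitting described above, the sketch below fits an exponential drop-size distribution, N(D) proportional to exp(-Lambda*D), to a sample of drop diameters by maximum likelihood. The sample itself is simulated, not the Skudai data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated drop diameters in mm (placeholder for measured DSD data)
true_lambda = 2.1                        # slope parameter, 1/mm
diameters = rng.exponential(1.0 / true_lambda, size=500)

# MLE for an exponential distribution: lambda_hat = 1 / sample mean
lam_hat = 1.0 / diameters.mean()
print(f"fitted slope Lambda = {lam_hat:.2f} per mm (true {true_lambda})")

# Compare empirical and fitted survival functions at a few diameters
for d in (0.5, 1.0, 2.0):
    empirical = (diameters > d).mean()
    fitted = np.exp(-lam_hat * d)
    print(f"D > {d} mm: empirical {empirical:.3f}, fitted {fitted:.3f}")
```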
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM.
López, J D; Litvak, V; Espinosa, J J; Friston, K; Barnes, G R
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost function in terms of the variational Free energy, an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. © 2013. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Krugon, Seelam; Nagaraju, Dega
2017-05-01
This work proposes a two-echelon supply chain inventory system in which the manufacturer offers a credit period to the retailer under exponential price-dependent demand. Demand is expressed as an exponential function of the retailer's unit selling price. A mathematical model is formulated to determine the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. The major objective is to incorporate trade credit from the manufacturer to the retailer under this demand structure, where the retailer prefers to delay payments to the manufacturer. In the first stage, cost expressions for the retailer and the manufacturer are written as functions of ordering, carrying, and transportation costs; in the second stage, these expressions are combined. A MATLAB program is written to derive the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. Managerial insights can be drawn from the derived optimality criteria. The findings show that the total cost of the supply chain decreases as the credit period increases under exponential price-dependent demand. A parametric analysis of the model parameters is also performed with the help of a numerical example.
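A hedged toy version of the cost structure described above: demand falls exponentially in the selling price, and the annual relevant cost trades ordering cost against holding cost, offset by interest earned during the manufacturer's credit period. All parameter values, and the Goyal-type interest term itself, are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

# Hypothetical parameters, not those of the paper
a, b = 5000.0, 0.04      # demand: D(p) = a * exp(-b * p)
p = 60.0                 # retailer unit selling price, $
A = 120.0                # ordering cost per cycle, $
h = 2.5                  # holding cost, $/unit/year
i_e = 0.08               # interest rate earned during the credit period, per year

D = a * np.exp(-b * p)   # annual demand at price p

def total_relevant_cost(T, M):
    """Annual ordering + holding cost minus interest earned during the credit
    period M (a Goyal-type term, assumed for illustration; valid for M <= T)."""
    return A / T + h * D * T / 2.0 - i_e * p * D * M**2 / (2.0 * T)

for days in (0, 15, 30, 60):
    M = days / 365.0
    T_grid = np.linspace(max(M, 0.01), 1.0, 2000)  # candidate cycle times, years
    costs = total_relevant_cost(T_grid, M)
    k = np.argmin(costs)
    print(f"credit {days:2d} d: T* = {T_grid[k]:.3f} yr, "
          f"Q* = {D * T_grid[k]:.0f} units, cost = ${costs[k]:.0f}/yr")
```

Consistent with the paper's finding, the printed total relevant cost decreases as the credit period grows, under these assumed parameters.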
A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions
NASA Technical Reports Server (NTRS)
Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver L.
2016-01-01
Satellite constellations and Distributed Spacecraft Mission (DSM) architectures offer unique benefits to Earth observation scientists and unique challenges to cost estimators. The Cost and Risk (CR) module of the Tradespace Analysis Tool for Constellations (TAT-C) being developed by NASA Goddard seeks to address some of these challenges by providing a new approach to cost modeling, which aggregates existing Cost Estimating Relationships (CERs) from respected sources, cost estimating best practices, and data from existing and proposed satellite designs. Cost estimation in this tool is approached from two perspectives: parametric cost estimating relationships and analogous cost estimation techniques. The dual approach utilized within the TAT-C CR module is intended to address prevailing concerns regarding early design stage cost estimates, and offers increased transparency and fidelity by providing two preliminary perspectives on mission cost. This work outlines the existing cost model, details assumptions built into the model, and explains what measures have been taken to address the particular challenges of constellation cost estimating. The risk estimation portion of the TAT-C CR module is still in development and will be presented in future work. The cost estimate produced by the CR module is not intended to be an exact mission valuation, but rather a comparative tool to assist in the exploration of the constellation design tradespace. Previous work has noted that estimating the cost of satellite constellations is difficult given that no comprehensive model for constellation cost estimation has yet been developed, and as such, quantitative assessment of multiple spacecraft missions has many remaining areas of uncertainty. By incorporating well-established CERs with preliminary approaches to addressing these uncertainties, the CR module offers a more complete approach to constellation costing than has previously been available to mission architects or Earth scientists seeking to leverage the capabilities of multiple spacecraft working in support of a common goal.
Tan, Ziwen; Qin, Guoyou; Zhou, Haibo
2016-01-01
Outcome-dependent sampling (ODS) designs have been well recognized as a cost-effective way to enhance study efficiency in both statistical literature and biomedical and epidemiologic studies. A partially linear additive model (PLAM) is widely applied in real problems because it allows for a flexible specification of the dependence of the response on some covariates in a linear fashion and other covariates in a nonlinear non-parametric fashion. Motivated by an epidemiological study investigating the effect of prenatal polychlorinated biphenyls exposure on children's intelligence quotient (IQ) at age 7 years, we propose a PLAM in this article to investigate a more flexible non-parametric inference on the relationships among the response and covariates under the ODS scheme. We propose the estimation method and establish the asymptotic properties of the proposed estimator. Simulation studies are conducted to show the improved efficiency of the proposed ODS estimator for PLAM compared with that from a traditional simple random sampling design with the same sample size. The data from the above-mentioned study are analyzed to illustrate the proposed method. PMID:27006375
NASA Technical Reports Server (NTRS)
1972-01-01
Mission analysis is discussed, including the consolidation and expansion of mission equipment and experiment characteristics, and determination of simplified shuttle flight schedule. Parametric analysis of standard space hardware and preliminary shuttle/payload constraints analysis are evaluated, along with the cost impact of low cost standard hardware.
Towards an Empirically Based Parametric Explosion Spectral Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, S R; Walter, W R; Ruppert, S
2009-08-31
Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before been tested. The focus of our work is on the local and regional distances (< 2000 km) and phases (Pn, Pg, Sn, Lg) necessary to see small explosions. We are developing a parametric model of the nuclear explosion seismic source spectrum that is compatible with the earthquake-based geometrical spreading and attenuation models developed using the Magnitude Distance Amplitude Correction (MDAC) techniques (Walter and Taylor, 2002). The explosion parametric model will be particularly important in regions without any prior explosion data for calibration. The model is being developed using the available body of seismic data at local and regional distances for past nuclear explosions at foreign and domestic test sites. Parametric modeling is a simple and practical approach for widespread monitoring applications, prior to the capability to carry out fully deterministic modeling. The achievable goal of our parametric model development is to be able to predict observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
Combined non-parametric and parametric approach for identification of time-variant systems
NASA Astrophysics Data System (ADS)
Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz
2018-03-01
Identification of systems, structures and machines with variable physical parameters is a challenging task especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes from multi-degree-of-freedom (MDOF) non-parametric system representation are extracted in the first step with the use of time-frequency wavelet-based filters. The second step involves time-varying parametric representation of extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis based on the experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimum a priori information on the model.
NASA Astrophysics Data System (ADS)
Rebillat, Marc; Schoukens, Maarten
2018-05-01
Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret as well as to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters and thus that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but that it is also less accurate. Furthermore, the LS method needs parameters to be set in advance whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
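A hedged sketch of the linear-in-the-parameters estimation idea: each polynomial branch of a Parallel Hammerstein Model feeds an FIR filter, so stacking delayed powers of the input into a regressor lets ridge-regularized least squares recover all branch impulse responses at once. The system, orders, and regularization weight below are arbitrary examples, not the article's setup.

```python
import numpy as np

rng = np.random.default_rng(5)
N, L, P = 2000, 16, 3          # samples, FIR length, polynomial order

# Simulate a toy PHM: y = sum_p h_p * u^p (each branch is an FIR filter)
u = rng.standard_normal(N)
h_true = [rng.standard_normal(L) * 0.5**p for p in range(1, P + 1)]
y = sum(np.convolve(u**p, h, mode="full")[:N]
        for p, h in zip(range(1, P + 1), h_true))
y += 0.05 * rng.standard_normal(N)              # measurement noise

# Build the regressor: delayed versions of u, u^2, ..., u^P
def delayed(x, L):
    cols = [np.concatenate([np.zeros(k), x[: N - k]]) for k in range(L)]
    return np.stack(cols, axis=1)

Phi = np.hstack([delayed(u**p, L) for p in range(1, P + 1)])   # (N, P*L)

# Ridge-regularized least squares: theta = (Phi'Phi + lam*I)^-1 Phi'y
lam = 1e-2
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(P * L), Phi.T @ y)

h_est = theta.reshape(P, L)
err = [np.linalg.norm(h_est[p] - h_true[p]) / np.linalg.norm(h_true[p])
       for p in range(P)]
print("relative branch errors:", [f"{e:.3f}" for e in err])
```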
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%–29% and 32%–70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges
2013-10-01
Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%-29% and 32%-70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.
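The role of the preconditioner can be seen in a generic preconditioned conjugate gradient loop. The sketch below solves a linear system A x = b with a user-supplied diagonal preconditioner, showing where a diagonal such as the paper's (parameter value divided by the sensitivity of the radioactivity to that parameter) enters the iteration; the paper itself applies PCG to a penalized Poisson log-likelihood rather than a linear system, so this is a structural illustration only.

```python
import numpy as np

def pcg_solve(A, b, M_inv_diag, n_iter=50, tol=1e-10):
    """Minimal preconditioned conjugate gradient for A x = b.

    M_inv_diag is the diagonal of the inverse preconditioner,
    applied elementwise to the residual at each step.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r            # apply diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# usage on a small symmetric positive definite system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pcg_solve(A, b, M_inv_diag=1.0 / np.diag(A)))
```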
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored or discarded in model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulations that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
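The two-loop structure is straightforward to sketch. In the outline below, each replicate first draws its vital-rate parameter from a sampling distribution (parametric uncertainty in the replication loop), then runs the yearly projection with environmental noise (temporal variance in the time-step loop); every rate, variance, and threshold is an illustrative placeholder, not a piping plover estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

def pva(n_reps=1000, n_years=50, quasi_ext=50.0):
    """Two-loop PVA following the structure described in the paper."""
    extinct = 0
    for _ in range(n_reps):
        # replication loop: one draw of the mean growth rate per
        # replicate captures parametric (sampling) uncertainty
        mu = rng.normal(loc=0.0, scale=0.05)
        n = 200.0
        for _ in range(n_years):
            # time-step loop: year-to-year environmental variance
            r = rng.normal(loc=mu, scale=0.15)
            n *= np.exp(r)
        if n < quasi_ext:
            extinct += 1
    return extinct / n_reps

print("quasi-extinction probability:", pva())
```

Setting `scale=0.0` in the outer draw reproduces the comparison case that excludes parametric uncertainty.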
Direct Estimation of Kinetic Parametric Images for Dynamic PET
Wang, Guobao; Qi, Jinyi
2013-01-01
Dynamic positron emission tomography (PET) can monitor the spatiotemporal distribution of a radiotracer in vivo. The spatiotemporal information can be used to estimate parametric images of radiotracer kinetics that are of physiological and biochemical interest. Direct estimation of parametric images from raw projection data allows accurate noise modeling and has been shown to offer better image quality than conventional indirect methods, which reconstruct a sequence of PET images first and then perform tracer kinetic modeling pixel-by-pixel. Direct reconstruction of parametric images has gained increasing interest with the advances in computing hardware. Many direct reconstruction algorithms have been developed for different kinetic models. In this paper we review the recent progress in the development of direct reconstruction algorithms for parametric image estimation. Algorithms for linear and nonlinear kinetic models are described and their properties are discussed. PMID:24396500
NASA Astrophysics Data System (ADS)
Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.
2015-12-01
Land use change (LUC) models used for modelling urban growth differ in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets, whereas global models perform modelling using all the available data. Non-parametric models are data driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process; parametric models, on the other hand, have a fixed structure defined before the modelling process and are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression splines (MARS) and a global parametric model, an artificial neural network (ANN), to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.
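The ROC comparison is simple to reproduce in outline. The sketch below trains a small neural network on synthetic stand-ins for the driver variables and scores it with the area under the ROC curve; a MARS model (available in third-party packages) would be scored the same way and the two AUC values compared directly. All data and settings here are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier

# Synthetic placeholders for the nine driver layers named in the
# abstract (distances, density, slope, ...); y is 1 where a cell
# urbanized between the two image dates.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 9))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
p_ann = ann.predict_proba(X)[:, 1]
print("ANN AUC:", roc_auc_score(y, p_ann))
```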
Streamlining the Design Tradespace for Earth Imaging Constellations
NASA Technical Reports Server (NTRS)
Nag, Sreeja; Hughes, Steven P.; Le Moigne, Jacqueline J.
2016-01-01
Satellite constellations and Distributed Spacecraft Mission (DSM) architectures offer unique benefits to Earth observation scientists and unique challenges to cost estimators. The Cost and Risk (CR) module of the Tradespace Analysis Tool for Constellations (TAT-C) being developed by NASA Goddard seeks to address some of these challenges by providing a new approach to cost modeling, which aggregates existing cost estimating relationships (CERs) from respected sources, cost estimating best practices, and data from existing and proposed satellite designs. Cost estimation through this tool is approached from two perspectives: parametric cost estimating relationships and analogous cost estimation techniques. The dual approach utilized within the TAT-C CR module is intended to address prevailing concerns regarding early design stage cost estimates, and to offer increased transparency and fidelity by providing two preliminary perspectives on mission cost. This work outlines the existing cost model, details the assumptions built into the model, and explains what measures have been taken to address the particular challenges of constellation cost estimating. The risk estimation portion of the TAT-C CR module is still in development and will be presented in future work. The cost estimate produced by the CR module is not intended to be an exact mission valuation, but rather a comparative tool to assist in the exploration of the constellation design tradespace. Previous work has noted that estimating the cost of satellite constellations is difficult given that no comprehensive model for constellation cost estimation has yet been developed, and as such, quantitative assessment of multiple-spacecraft missions has many remaining areas of uncertainty. By incorporating well-established CERs with preliminary approaches to addressing these uncertainties, the CR module offers a more complete approach to constellation costing than has previously been available to mission architects or Earth scientists seeking to leverage the capabilities of multiple spacecraft working in support of a common goal.
The impact of parametrized convection on cloud feedback.
Webb, Mark J; Lock, Adrian P; Bretherton, Christopher S; Bony, Sandrine; Cole, Jason N S; Idelkadi, Abderrahmane; Kang, Sarah M; Koshiro, Tsuyoshi; Kawai, Hideaki; Ogura, Tomoo; Roehrig, Romain; Shin, Yechul; Mauritsen, Thorsten; Sherwood, Steven C; Vial, Jessica; Watanabe, Masahiro; Woelfle, Matthew D; Zhao, Ming
2015-11-13
We investigate the sensitivity of cloud feedbacks to the use of convective parametrizations by repeating the CMIP5/CFMIP-2 AMIP/AMIP + 4K uniform sea surface temperature perturbation experiments with 10 climate models which have had their convective parametrizations turned off. Previous studies have suggested that differences between parametrized convection schemes are a leading source of inter-model spread in cloud feedbacks. We find however that 'ConvOff' models with convection switched off have a similar overall range of cloud feedbacks compared with the standard configurations. Furthermore, applying a simple bias correction method to allow for differences in present-day global cloud radiative effects substantially reduces the differences between the cloud feedbacks with and without parametrized convection in the individual models. We conclude that, while parametrized convection influences the strength of the cloud feedbacks substantially in some models, other processes must also contribute substantially to the overall inter-model spread. The positive shortwave cloud feedbacks seen in the models in subtropical regimes associated with shallow clouds are still present in the ConvOff experiments. Inter-model spread in shortwave cloud feedback increases slightly in regimes associated with trade cumulus in the ConvOff experiments but is quite similar in the most stable subtropical regimes associated with stratocumulus clouds. Inter-model spread in longwave cloud feedbacks in strongly precipitating regions of the tropics is substantially reduced in the ConvOff experiments however, indicating a considerable local contribution from differences in the details of convective parametrizations. In both standard and ConvOff experiments, models with less mid-level cloud and less moist static energy near the top of the boundary layer tend to have more positive tropical cloud feedbacks. The role of non-convective processes in contributing to inter-model spread in cloud feedback is discussed. © 2015 The Authors.
The impact of parametrized convection on cloud feedback
Webb, Mark J.; Lock, Adrian P.; Bretherton, Christopher S.; Bony, Sandrine; Cole, Jason N. S.; Idelkadi, Abderrahmane; Kang, Sarah M.; Koshiro, Tsuyoshi; Kawai, Hideaki; Ogura, Tomoo; Roehrig, Romain; Shin, Yechul; Mauritsen, Thorsten; Sherwood, Steven C.; Vial, Jessica; Watanabe, Masahiro; Woelfle, Matthew D.; Zhao, Ming
2015-01-01
We investigate the sensitivity of cloud feedbacks to the use of convective parametrizations by repeating the CMIP5/CFMIP-2 AMIP/AMIP + 4K uniform sea surface temperature perturbation experiments with 10 climate models which have had their convective parametrizations turned off. Previous studies have suggested that differences between parametrized convection schemes are a leading source of inter-model spread in cloud feedbacks. We find however that ‘ConvOff’ models with convection switched off have a similar overall range of cloud feedbacks compared with the standard configurations. Furthermore, applying a simple bias correction method to allow for differences in present-day global cloud radiative effects substantially reduces the differences between the cloud feedbacks with and without parametrized convection in the individual models. We conclude that, while parametrized convection influences the strength of the cloud feedbacks substantially in some models, other processes must also contribute substantially to the overall inter-model spread. The positive shortwave cloud feedbacks seen in the models in subtropical regimes associated with shallow clouds are still present in the ConvOff experiments. Inter-model spread in shortwave cloud feedback increases slightly in regimes associated with trade cumulus in the ConvOff experiments but is quite similar in the most stable subtropical regimes associated with stratocumulus clouds. Inter-model spread in longwave cloud feedbacks in strongly precipitating regions of the tropics is substantially reduced in the ConvOff experiments however, indicating a considerable local contribution from differences in the details of convective parametrizations. In both standard and ConvOff experiments, models with less mid-level cloud and less moist static energy near the top of the boundary layer tend to have more positive tropical cloud feedbacks. The role of non-convective processes in contributing to inter-model spread in cloud feedback is discussed. PMID:26438278
Development of a Multivariable Parametric Cost Analysis for Space-Based Telescopes
NASA Technical Reports Server (NTRS)
Dollinger, Courtnay
2011-01-01
Over the past 400 years, the telescope has proven to be a valuable tool in helping humankind understand the Universe around us. The images and data produced by telescopes have revolutionized planetary, solar, stellar, and galactic astronomy and have inspired a wide range of people, from the child who dreams about the images seen on NASA websites to the most highly trained scientist. Like all scientific endeavors, astronomical research must operate within the constraints imposed by budget limitations. Hence the importance of understanding cost: to find the balance between the dreams of scientists and the restrictions of the available budget. By logically analyzing the data we have collected for over thirty different telescopes from more than 200 different sources, statistical methods, such as plotting regressions and residuals, can be used to determine what drives the cost of telescopes and to build a cost model for space-based telescopes. Previous cost models have focused their attention on ground-based telescopes due to limited data for space telescopes and the larger number and longer history of ground-based astronomy. Due to the increased availability of cost data from recent space-telescope construction, we have been able to produce and begin testing a comprehensive cost model for space telescopes, with guidance from the cost models for ground-based telescopes. By separating the variables that affect cost, such as diameter, mass, wavelength, density, data rate, and number of instruments, we advance the goal of better understanding the cost drivers of space telescopes. The use of sophisticated mathematical techniques to improve the accuracy of cost models has the potential to help society make informed decisions about proposed scientific projects. An improved knowledge of cost will allow scientists to get the maximum value returned for the money given and create a harmony between the visions of scientists and the reality of a budget.
NASA Technical Reports Server (NTRS)
Sterk, Steve; Chesley, Stephan
2008-01-01
The upcoming retirement of the Baby Boomers will leave a workforce age gap between the younger generation (the future NASA decision makers) and the gray beards. This paper will reflect on the average age of the workforce across NASA Centers, the aerospace industry, and other government agencies, like DoD. This paper will dig into productivity and realization factors and how they get applied to bi-monthly (payroll) data for true full-time equivalent (FTE) calculations that could be used at each of the NASA Centers and in other business systems now being implemented. This paper offers some comparative cost analyses and solutions, from simple FTE cost-estimating relationships (CERs) to CERs for monthly time-phasing activities for small research projects that start and get completed within a government fiscal year. This paper will present the results of a parametric study investigating the cost-effectiveness of alternative performance-based CERs and how they get applied in the Center's forward pricing rate proposals (FPRP). True CERs based on the age profile of a younger workforce will have some effect on the labor rates used in both commercial cost models and other internal home-grown cost models, which may impact the productivity factors for future NASA missions.
Application of an enhanced discrete element method to oil and gas drilling processes
NASA Astrophysics Data System (ADS)
Ubach, Pere Andreu; Arrufat, Ferran; Ring, Lev; Gandikota, Raju; Zárate, Francisco; Oñate, Eugenio
2016-03-01
The authors present results on the use of the discrete element method (DEM) for the simulation of drilling processes typical in the oil and gas exploration industry. The numerical method uses advanced DEM techniques using a local definition of the DEM parameters and combined FEM-DEM procedures. This paper presents a step-by-step procedure to build a DEM model for analysis of the soil region coupled to a FEM model for discretizing the drilling tool that reproduces the drilling mechanics of a particular drill bit. A parametric study has been performed to determine the model parameters in order to maintain accurate solutions with reduced computational cost.
A mixture model for bovine abortion and foetal survival.
Hanson, Timothy; Bedrick, Edward J; Johnson, Wesley O; Thurmond, Mark C
2003-05-30
The effect of spontaneous abortion on the dairy industry is substantial, costing the industry on the order of US $200 million per year in California alone. We analyse data from a cohort study of nine dairy herds in Central California. A key feature of the analysis is the observation that only a relatively small proportion of cows will abort (around 10-15 per cent), so that it is inappropriate to analyse the time-to-abortion (TTA) data as if it were standard censored survival data, with cows that fail to abort by the end of the study treated as censored observations. We thus broaden the scope to consider the analysis of foetal lifetime distribution (FLD) data for the cows, with the dual goals of characterizing the effects of various risk factors on (i) the likelihood of abortion and, conditional on abortion status, on (ii) the risk of early versus late abortion. A single model is developed to accomplish both goals with two sets of specific herd effects modelled as random effects. Because multimodal foetal hazard functions are expected for the TTA data, both a parametric mixture model and a non-parametric model are developed. Furthermore, the two sets of analyses are linked because of anticipated dependence between the random herd effects. All modelling and inferences are accomplished using modern Bayesian methods. Copyright 2003 John Wiley & Sons, Ltd.
Parametric Mass Modeling for Mars Entry, Descent and Landing System Analysis Study
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Komar, D. R.
2011-01-01
This paper provides an overview of the parametric mass models used for the Entry, Descent, and Landing Systems Analysis study conducted by NASA in FY2009-2010. The study examined eight unique exploration class architectures that included elements such as a rigid mid-L/D aeroshell, a lifting hypersonic inflatable decelerator, a drag supersonic inflatable decelerator, a lifting supersonic inflatable decelerator implemented with a skirt, and subsonic/supersonic retro-propulsion. Parametric models used in this study relate the component mass to vehicle dimensions and mission key environmental parameters such as maximum deceleration and total heat load. The use of a parametric mass model allows the simultaneous optimization of trajectory and mass sizing parameters.
Probing the dynamics of dark energy with divergence-free parametrizations: A global fit study
NASA Astrophysics Data System (ADS)
Li, Hong; Zhang, Xin
2011-09-01
The CPL parametrization is very important for investigating the properties of dark energy with observational data. However, the CPL parametrization only respects the past evolution of dark energy and does not address its future evolution, since w(z) diverges in the distant future. In a recent paper [J.Z. Ma, X. Zhang, Phys. Lett. B 699 (2011) 233], a robust, novel parametrization for dark energy, w(z) = w0 + w1 (ln(2+z)/(1+z) - ln 2), has been proposed, successfully avoiding the future divergence problem of the CPL parametrization. On the other hand, an oscillating parametrization (motivated by an oscillating quintom model) can also avoid the future divergence problem. In this Letter, we use the two divergence-free parametrizations to probe the dynamics of dark energy over the whole evolutionary history. In light of the data from the 7-year WMAP temperature and polarization power spectra, the matter power spectrum of SDSS DR7, and the SN Ia Union2 sample, we perform a full Markov Chain Monte Carlo exploration of the two dynamical dark energy models. We find that the best-fit dark energy model is a quintom model with the equation of state crossing -1 during the evolution. However, though the quintom model is more favored, we find that the cosmological constant still cannot be excluded.
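The divergence contrast is easy to see numerically. The sketch below evaluates the CPL form w(z) = w0 + wa z/(1+z), which blows up as z approaches -1 (the distant future), next to the divergence-free form quoted above; the parameter values are illustrative.

```python
import numpy as np

def w_cpl(z, w0, wa):
    """CPL: w(z) = w0 + wa * z/(1+z); diverges as z -> -1."""
    return w0 + wa * z / (1.0 + z)

def w_divergence_free(z, w0, w1):
    """Ma-Zhang form: w(z) = w0 + w1*(ln(2+z)/(1+z) - ln 2),
    which stays finite over the whole evolutionary history."""
    return w0 + w1 * (np.log(2.0 + z) / (1.0 + z) - np.log(2.0))

z = np.array([-0.99, -0.9, 0.0, 1.0, 10.0])
print(w_cpl(z, -1.0, 0.5))              # large excursions near z = -1
print(w_divergence_free(z, -1.0, 0.5))  # remains bounded
```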
NASA Technical Reports Server (NTRS)
Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Kaushal, D.; Al-Kinani, G.
1983-01-01
Development of a forecast of the total domestic telecommunications demand, identification of that portion of the telecommunications demand suitable for transmission by satellite systems, identification of that portion of the satellite market addressable by CPS systems, identification of that portion of the satellite market addressable by Ka-band CPS system, and postulation of a Ka-band CPS network on a nationwide and local level were achieved. The approach employed included the use of a variety of forecasting models, a parametric cost model, a market distribution model and a network optimization model. Forecasts were developed for: 1980, 1990, 2000; voice, data and video services; terrestrial and satellite delivery modes; and C, Ku and Ka-bands.
NASA Astrophysics Data System (ADS)
Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Kaushal, D.; Al-Kinani, G.
1983-08-01
Development of a forecast of the total domestic telecommunications demand, identification of that portion of the telecommunications demand suitable for transmission by satellite systems, identification of that portion of the satellite market addressable by CPS systems, identification of that portion of the satellite market addressable by Ka-band CPS system, and postulation of a Ka-band CPS network on a nationwide and local level were achieved. The approach employed included the use of a variety of forecasting models, a parametric cost model, a market distribution model and a network optimization model. Forecasts were developed for: 1980, 1990, 2000; voice, data and video services; terrestrial and satellite delivery modes; and C, Ku and Ka-bands.
Mazzotta, Laura; Cozzani, Mauro; Mutinelli, Sabrina; Castaldo, Attilio; Silvestrini-Biavati, Armando
2013-01-01
Objectives. To build a 3D parametric model to detect the shape and volume of dental roots from a panoramic radiograph (PAN) of the patient. Materials and Methods. A PAN and a cone beam computed tomography (CBCT) scan of a patient were acquired. For each tooth, various parameters were considered (coronal and root lengths and widths): these were measured from the CBCT and from the PAN. Measures were compared to evaluate the accuracy level of PAN measurements. Using CAD software, parametric models of an incisor and of a molar were constructed employing B-spline curves and free-form surfaces. PAN measures of teeth 2.1 and 3.6 were assigned to the parametric models; the same two teeth were segmented from the CBCT. The two models were superimposed to assess the accuracy of the parametric model. Results. PAN measures proved to be accurate and comparable with all other measurements. From the model superimposition, the maximum error was 1.1 mm on the incisor crown and 2 mm on the molar furcation. Conclusion. This study shows that it is possible to build a 3D parametric model starting from 2D information with a clinically valid accuracy level. This can ultimately lead to a crown-root movement simulation. PMID:23554814
NASA Technical Reports Server (NTRS)
Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.
2013-01-01
Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the appearance and mobility of a real human hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand (without the actuators) was $167, significantly lower than that of other robotic hands, which have more complex assembly processes.
Optimized and Automated design of Plasma Diagnostics for Additive Manufacture
NASA Astrophysics Data System (ADS)
Stuber, James; Quinley, Morgan; Melnik, Paul; Sieck, Paul; Smith, Trevor; Chun, Katherine; Woodruff, Simon
2016-10-01
Despite having mature designs, diagnostics are usually custom designed for each experiment. Much of the design can now be automated to reduce costs (engineering labor and capital cost). We present results from scripted physics modeling and parametric engineering design for common optical and mechanical components found in many plasma diagnostics, and outline the process for automated design optimization that employs scripts to communicate data from online forms through proprietary and open-source CAD and FE codes to produce a design that can be sent directly to a printer. As a demonstration of design automation, an optical beam dump, a baffle, and optical components were designed via an automated process and printed. Supported by DOE SBIR Grant DE-SC0011858.
Information transfer satellite concept study. Volume 4: computer manual
NASA Technical Reports Server (NTRS)
Bergin, P.; Kincade, C.; Kurpiewski, D.; Leinhaupel, F.; Millican, F.; Onstad, R.
1971-01-01
The Satellite Telecommunications Analysis and Modeling Program (STAMP) provides the user with a flexible and comprehensive tool for the analysis of ITS system requirements. While obtaining minimum cost design points, the program enables the user to perform studies over a wide range of user requirements and parametric demands. The program utilizes a total system approach wherein the ground uplink and downlink, the spacecraft, and the launch vehicle are simultaneously synthesized. A steepest descent algorithm is employed to determine the minimum total system cost design subject to the fixed user requirements and imposed constraints. In the process of converging to the solution, the pertinent subsystem tradeoffs are resolved. This report documents STAMP through a technical analysis and a description of the principal techniques employed in the program.
Upatising, Benjavan; Wood, Douglas L.; Kremers, Walter K.; Christ, Sharon L.; Yih, Yuehwern; Hanson, Gregory J.
2015-01-01
Background: From 1992 to 2008, older adults in the United States incurred more healthcare expense per capita than any other age group. Home telemonitoring has emerged as a potential solution to reduce these costs, but the evidence is mixed. The primary aim of the study was to evaluate whether the mean difference in total direct medical cost between older adults receiving additional home telemonitoring care (TELE) (n=102) and those receiving usual medical care (UC) (n=103) was significant. Inpatient, outpatient, emergency department, decedent, survivor, and 30-day readmission costs were evaluated as secondary aims. Materials and Methods: Multivariate generalized linear models (GLMs) and a parametric bootstrapping method were used to model cost and to determine the significance of the cost differences. We also compared the differences in arithmetic mean costs. Results: From the conditional GLMs, the estimated mean cost differences (TELE versus UC) for total, inpatient, outpatient, and ED costs were −$9,537 (p=0.068), −$8,482 (p=0.098), −$1,160 (p=0.177), and $106 (p=0.619), respectively. Mean post-enrollment cost was 11% lower than the prior year for TELE versus 22% higher for UC. The ratio of mean cost for decedents to survivors was 2.1:1 (TELE) versus 12.7:1 (UC). Conclusions: There were no significant differences in mean total cost between the two treatment groups. The TELE group had less variability in cost of care, a lower decedents-to-survivors cost ratio, and lower total 30-day readmission cost than the UC group. PMID:25453392
Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail
2016-11-14
We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase, which can be efficiently implemented in numerical simulations utilizing the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by a factor of the pulse stretching. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
Comparison of thawing and freezing dark energy parametrizations
NASA Astrophysics Data System (ADS)
Pantazis, G.; Nesseris, S.; Perivolaropoulos, L.
2016-05-01
Dark energy equation of state w(z) parametrizations with two parameters and given monotonicity are generically either convex or concave functions. This makes them suitable for fitting either freezing or thawing quintessence models but not both simultaneously. Fitting a data set based on a freezing model with an unsuitable (concave when increasing) w(z) parametrization [like Chevallier-Polarski-Linder (CPL)] can lead to significantly misleading features like crossing of the phantom divide line, incorrect w(z=0), incorrect slope, etc., that are not present in the underlying cosmological model. To demonstrate this fact we generate scattered cosmological data at both the level of w(z) and the luminosity distance DL(z) based on either thawing or freezing quintessence models and fit them using parametrizations of convex and of concave type. We then compare statistically significant features of the best fit w(z) with actual features of the underlying model. We thus verify that the use of unsuitable parametrizations can lead to misleading conclusions. In order to avoid these problems it is important to either use both convex and concave parametrizations and select the one with the best χ2, or use principal component analysis, thus splitting the redshift range into independent bins. In the latter case, however, significant information about the slope of w(z) at high redshifts is lost. Finally, we propose a new family of parametrizations w(z) = w0 + wa (z/(1+z))^n which generalizes the CPL and interpolates between thawing and freezing parametrizations as the parameter n increases to values larger than 1.
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares (OLS) regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. The study compares the advantages and disadvantages of different methods to estimate regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model, and a generalized linear model (GLM) with a log link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R2 and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSEs were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R2 of about 0.31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. In the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM again showed the weakest model fit. None of the differences between the RMSEs resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSEs were not significant.
Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSEs for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a lack of sample size adequate to detect important differences among the estimators employed. Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by non-parametric methods, which are robust against deviations from normality and homoscedasticity of the residuals, is a suitable alternative to transformation of the skewed dependent cost variable. Further studies with more adequate case numbers are needed to confirm the results.
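The three estimators compared in the study can be set up in a few lines. The sketch below fits a raw-scale OLS, a log-scale OLS retransformed with Duan's smearing factor (one standard retransformation bias correction; the paper compares three), and a gamma GLM with a log link, then compares in-sample RMSE; the data are synthetic placeholders, not the Leipzig sample.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 254                                        # sample size as in the study
X = sm.add_constant(rng.normal(size=(n, 2)))   # placeholder covariates
costs = np.exp(X @ np.array([8.0, 0.4, 0.3]) + rng.normal(scale=0.9, size=n))

# 1) linear OLS on raw costs
ols = sm.OLS(costs, X).fit()

# 2) OLS on log costs, retransformed with Duan's smearing factor
log_fit = sm.OLS(np.log(costs), X).fit()
smear = np.mean(np.exp(log_fit.resid))
pred_log = np.exp(log_fit.fittedvalues) * smear

# 3) GLM with gamma family and log link
glm = sm.GLM(costs, X,
             family=sm.families.Gamma(link=sm.families.links.Log())).fit()

for name, pred in [("OLS", ols.fittedvalues),
                   ("log-OLS + smearing", pred_log),
                   ("gamma GLM", glm.fittedvalues)]:
    rmse = np.sqrt(np.mean((costs - pred) ** 2))
    print(f"{name:>18}: RMSE = {rmse:,.0f}")
```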
NASA Astrophysics Data System (ADS)
Bencheikh, imane; el hajjaji, souad; abourouh, imane; Kitane, Said; Dahchour, Abdelmalek; El M'Rabet, Mohammadine
2017-04-01
Wastewater treatment has been the subject of many studies over recent decades, with continuing interest in providing cheaper and more efficient methods of treatment. Several treatment methods exist, including coagulation-flocculation, filtration, precipitation, ozonation, ion exchange, reverse osmosis, and advanced oxidation processes. The use of these methods has proved limited because of their high investment and operational cost. Adsorption can be an efficient low-cost process to remove pollutants from wastewater. This method of treatment calls for a solid adsorbent, which constitutes the purification tool, and agricultural wastes have been widely exploited in this role. Agricultural wastes are also an important source of water pollution once discharged into the aquatic environment (river, sea ...). The valorization of such wastes and their reuse prevents this problem, with economic and environmental benefits. In this context, our study aimed at testing the wastewater treatment capacity of adsorption onto holocellulose resulting from the valorization of an agricultural waste. In this study, methylene blue (MB) and methyl orange (MO) were selected as model pollutants for evaluating the adsorbent capacity of holocellulose. The kinetics of adsorption was followed using UV-visible spectroscopy. In order to study the effect of the main parameters of the adsorption process and their mutual interactions, a full factorial design (type n^k) was used. A 2^3 full factorial design analysis was performed to screen the parameters affecting dye removal efficiency. Using the experimental results, a linear mathematical model representing the influence of the different parameters and their interactions was obtained. The parametric study showed that the efficiency of the adsorption system (dyes/holocellulose) is mainly linked to pH variation. The best yields were observed for MB at pH = 10 and for MO at pH = 2. The kinetic data were analyzed using different models, namely the pseudo-first-order kinetic model, the pseudo-second-order kinetic model, and the intraparticle diffusion model. It was observed that the pseudo-second-order model was the best model describing the adsorption behavior of MB and MO onto holocellulose, suggesting that the adsorption mechanism might be a chemisorption process. In general, the results indicated that holocellulose is suitable as a sorbent material for adsorption of MO and MB from aqueous solutions owing to its high effectiveness and low cost.
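The pseudo-second-order model that best described the data is commonly fitted in its linearized form t/q_t = 1/(k2 qe^2) + t/qe, so that a plot of t/q_t against t has slope 1/qe and intercept 1/(k2 qe^2). The sketch below shows that fit with illustrative uptake values, not the measured holocellulose data.

```python
import numpy as np

# illustrative contact times (min) and uptakes q_t (mg/g)
t = np.array([5, 10, 20, 40, 60, 90, 120.0])
qt = np.array([12.1, 18.0, 24.5, 29.8, 31.9, 33.4, 34.1])

# linearized pseudo-second-order fit: t/qt versus t
slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope                    # equilibrium uptake (mg/g)
k2 = slope ** 2 / intercept         # rate constant (g/(mg*min))
print(f"qe = {qe:.1f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```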
DOE Office of Scientific and Technical Information (OSTI.GOV)
Häggström, Ida, E-mail: haeggsti@mskcc.org; Beattie, Bradley J.; Schmidtlein, C. Ross
2016-06-15
Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
Parametric models of reflectance spectra for dyed fabrics
NASA Astrophysics Data System (ADS)
Aiken, Daniel C.; Ramsey, Scott; Mayo, Troy; Lambrakos, Samuel G.; Peak, Joseph
2016-05-01
This study examines parametric modeling of NIR reflectivity spectra for dyed fabrics, which provides for both their inverse and direct modeling. The dye considered for prototype analysis is a triarylamine dye. The fabrics considered are camouflage textiles characterized by color variations. The results of this study provide validation of the constructed parametric models, within reasonable error tolerances for practical applications involving NIR spectral characteristics of camouflage textiles, for purposes of simulating NIR spectra corresponding to various dye concentrations in host fabrics, and potentially to mixtures of dyes.
Mapping the Chevallier-Polarski-Linder parametrization onto physical dark energy Models
NASA Astrophysics Data System (ADS)
Scherrer, Robert J.
2015-08-01
We examine the Chevallier-Polarski-Linder (CPL) parametrization, in the context of quintessence and barotropic dark energy models, to determine the subset of such models to which it can provide a good fit. The CPL parametrization gives the equation of state parameter w for the dark energy as a linear function of the scale factor a, namely w = w0 + wa(1 - a). In the case of quintessence models, we find that over most of the w0, wa parameter space the CPL parametrization maps onto a fairly narrow form of behavior for the potential V(ϕ), while a one-dimensional subset of parameter space, for which wa = κ(1 + w0), with κ constant, corresponds to a wide range of functional forms for V(ϕ). For barotropic models, we show that the functional dependence of the pressure on the density, up to a multiplicative constant, depends only on wi = w0 + wa and not on w0 and wa separately. Our results suggest that the CPL parametrization may not be optimal for testing either type of model.
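The combination wi = w0 + wa is the early-time limit of the CPL form; a short statement of that limit (standard algebra, not quoted from the paper) makes the barotropic result explicit:

```latex
% CPL equation of state in terms of the scale factor a (a = 1 today):
\[
  w(a) = w_0 + w_a (1 - a),
\]
% so in the early-universe limit a -> 0 the equation of state tends to
\[
  \lim_{a \to 0} w(a) = w_0 + w_a \equiv w_i ,
\]
% which is the single combination that the barotropic pressure-density
% relation depends on, up to a multiplicative constant.
```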
NASA Technical Reports Server (NTRS)
Sterk, Steve; Chesley, Stephen
2008-01-01
The upcoming retirement of the Baby Boomers will leave a performance gap between the younger generation (the future NASA decision makers) and the gray beards. This paper will reflect on the average age of the workforce across NASA Centers, the aerospace industry, and other government agencies, like DoD. This paper will dig into productivity and realization factors and how they get applied to bi-monthly (payroll) data for true full-time equivalent (FTE) calculations that could be used at each of the NASA Centers and in other business systems now being implemented. This paper offers some comparative cost solutions, from simple FTE cost estimating relationships (CERs) to complex CERs for monthly time-phasing activities for small research projects that start and get completed within a government fiscal year. This paper will present the results of a parametric study investigating the cost-effectiveness of alternative performance-based CERs and how they get applied in the Center's forward pricing rate proposals (FPRP). True CERs based on the age profile of a younger workforce will have some effect on labor rates used in both commercial cost models and internal home-grown cost models, which may impact the productivity factors for future NASA missions.
Preliminary design study of advanced multistage axial flow core compressors
NASA Technical Reports Server (NTRS)
Wisler, D. C.; Koch, C. C.; Smith, L. H., Jr.
1977-01-01
A preliminary design study was conducted to identify an advanced core compressor for use in new high-bypass-ratio turbofan engines to be introduced into commercial service in the 1980's. An evaluation of anticipated compressor and related component 1985 state-of-the-art technology was conducted. A parametric screening study covering a large number of compressor designs was conducted to determine the influence of the major compressor design features on efficiency, weight, cost, blade life, aircraft direct operating cost, and fuel usage. The trends observed in the parametric screening study were used to develop three high-efficiency, high-economic-payoff compressor designs. These three compressors were studied in greater detail to better evaluate their aerodynamic and mechanical feasibility.
A general framework for parametric survival analysis.
Crowther, Michael J; Lambert, Paul C
2014-12-30
Parametric survival models are being increasingly used as an alternative to the Cox model in biomedical research. Through direct modelling of the baseline hazard function, we can gain greater understanding of the risk profile of patients over time, obtaining absolute measures of risk. Commonly used parametric survival models, such as the Weibull, make restrictive assumptions about the baseline hazard function, such as monotonicity, which are often violated in clinical datasets. In this article, we extend the general framework of parametric survival models proposed by Crowther and Lambert (Journal of Statistical Software 53:12, 2013) to incorporate relative survival, and robust and cluster robust standard errors. We describe the general framework through three applications to clinical datasets, in particular illustrating the use of restricted cubic splines, modelled on the log hazard scale, to provide a highly flexible survival modelling framework. Through the use of restricted cubic splines, we can derive the cumulative hazard function analytically beyond the boundary knots, resulting in a combined analytic/numerical approach, which substantially improves the estimation process compared with only using numerical integration. User-friendly Stata software is provided, which significantly extends the parametric survival models available in standard software. Copyright © 2014 John Wiley & Sons, Ltd.
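As a minimal instance of the full-likelihood fitting the framework builds on, the sketch below fits a Weibull model to right-censored data by maximizing the log-likelihood directly (events contribute log f(t), censored times log S(t)); the paper's framework generalizes this with restricted cubic splines on the log hazard scale, and the data here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_nll(params, t, event):
    """Negative Weibull log-likelihood with right censoring.

    Parametrized on the log scale so the optimizer is unconstrained.
    """
    shape, scale = np.exp(params)
    z = (t / scale) ** shape
    log_f = np.log(shape) - np.log(scale) + (shape - 1) * np.log(t / scale) - z
    log_S = -z
    return -np.sum(np.where(event == 1, log_f, log_S))

# illustrative data: times in months, event = 1 for deaths, 0 censored
t = np.array([2.0, 5.0, 6.5, 8.0, 12.0, 15.0, 20.0, 24.0])
event = np.array([1, 1, 0, 1, 1, 0, 1, 0])

res = minimize(weibull_nll, x0=[0.0, np.log(t.mean())], args=(t, event))
shape, scale = np.exp(res.x)
print(f"shape = {shape:.2f}, scale = {scale:.1f}")
```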
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
Bridge maintenance to enhance corrosion resistance and performance of steel girder bridges
NASA Astrophysics Data System (ADS)
Moran Yanez, Luis M.
The integrity and efficiency of any national highway system rely on the condition of its various components. Bridges are fundamental elements of a highway system, representing an important investment and a strategic link that facilitates the transport of persons and goods. The cost to rehabilitate or replace a highway bridge represents an important expenditure for the owner, who needs to evaluate the correct time to assume that cost. Among the several factors that affect the condition of steel highway bridges, corrosion is identified as the main problem. In the USA, corrosion is the primary cause of structurally deficient steel bridges. The benefits of regular high-pressure superstructure washing and spot painting were evaluated as effective maintenance activities to reduce the corrosion process. The effectiveness of steel girder washing was assessed by developing models of corrosion deterioration of composite steel girders and analyzing steel coupons in the laboratory under atmospheric corrosion for two alternatives: when high-pressure washing was performed and when washing was not considered. The effectiveness of spot painting was assessed by analyzing the corrosion on steel coupons, with small damages, unprotected and protected by spot painting. The emphasis was placed on parametric analyses of corroded steel girder bridges under two alternatives: (a) when steel bridge girder washing is performed according to a particular frequency, and (b) when no bridge washing is performed on the girders. The reduction of structural capacity was observed for both alternatives over the structure's service life, estimated at 100 years. An economic analysis, using the Life-Cycle Cost Analysis method, demonstrated that it is more cost-effective to perform steel girder washing as a scheduled maintenance activity than to perform no washing.
NASA Astrophysics Data System (ADS)
Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin
2017-11-01
Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on high-frequency-resolution analysis of the stator current. Compared with a discrete Fourier transform, the parametric spectrum estimation technique has higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to the large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least squares problem solved through singular value decomposition is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor; the results indicate that it retains the frequency accuracy of parametric spectrum estimation while substantially reducing the computational cost.
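Once the min-norm step has fixed the frequencies, the amplitude/phase step is an ordinary linear least squares problem: the windowed current is modeled as a sum of sinusoids at the known frequencies, and the cosine/sine coefficients are solved for with numpy's SVD-based solver. The sketch below sets this up; the sampling rate, slip, and signal values are illustrative assumptions.

```python
import numpy as np

def fit_amp_phase(x, fs, freqs):
    """Least-squares amplitude/phase estimates at known frequencies.

    Model: x(t) = sum_i a_i cos(2*pi*f_i*t) + b_i sin(2*pi*f_i*t),
    linear in (a_i, b_i); np.linalg.lstsq solves it via SVD.
    """
    t = np.arange(len(x)) / fs
    cols = []
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    a, b = coef[0::2], coef[1::2]
    return np.hypot(a, b), np.arctan2(-b, a)   # amplitudes, phases

# usage: 50 Hz fundamental plus the (1 +/- 2s)f sidebands that the
# min-norm step would have located beforehand (slip s assumed 0.02)
fs, f0, s = 10_000, 50.0, 0.02
t = np.arange(4000) / fs
x = 10 * np.cos(2 * np.pi * f0 * t) + 0.3 * np.cos(2 * np.pi * (1 - 2 * s) * f0 * t)
amps, phases = fit_amp_phase(x, fs, [f0, (1 - 2 * s) * f0, (1 + 2 * s) * f0])
print(amps.round(3))
```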
Model risk for European-style stock index options.
Gençay, Ramazan; Gibson, Rajna
2007-01-01
In empirical modeling, there have been two strands for pricing in the options literature, namely parametric and nonparametric models. Often, support for the nonparametric methods is based on a benchmark such as the Black-Scholes (BS) model with constant volatility. In this paper, we study the stochastic volatility (SV) and stochastic volatility random jump (SVJ) models as parametric benchmarks against feedforward neural network (FNN) models, a class of neural network models. Our choice of FNN models is due to their well-studied universal approximation properties for an unknown function and its partial derivatives. Since the partial derivatives of an option pricing formula are risk pricing tools, an accurate estimation of the unknown option pricing function is essential for pricing and hedging. Our findings indicate that FNN models offer robust option pricing tools that outperform their sophisticated parametric counterparts in predictive settings. There are two routes to explain the superiority of FNN models over the parametric models in forecast settings: the nonnormality of return distributions and adaptive learning.
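For reference, the constant-volatility benchmark named in the abstract has a closed form; a minimal implementation of the Black-Scholes European call price:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price under constant volatility."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# example: at-the-money half-year call
print(bs_call(S=100, K=100, T=0.5, r=0.02, sigma=0.2))
```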
Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models
ERIC Educational Resources Information Center
Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum
2011-01-01
Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…
COMPASS Final Report: Low Cost Robotic Lunar Lander
NASA Technical Reports Server (NTRS)
McGuire, Melissa L.; Oleson, Steven R.
2010-01-01
The COllaborative Modeling for the Parametric Assessment of Space Systems (COMPASS) team designed a robotic lunar lander to deliver an unspecified payload (greater than zero) to the lunar surface at the lowest cost in this 2006 design study. The purpose of the low-cost lunar lander design was to investigate how much payload an inexpensive chemical or electric propulsion (EP) system can deliver to the Moon's surface. The baseline spacecraft from this study was a solar-powered robotic lander, launched on a Minotaur V launch vehicle on a direct-injection trajectory to the lunar surface. A Star 27 solid rocket motor performs lunar capture and 88 percent of the descent burn. The robotic lunar lander soft-lands using a hydrazine propulsion system to perform the last 10 percent of the landing maneuver, leaving the descent at a near-zero, but not exactly zero, terminal velocity. This low-cost robotic lander delivers 10 kg of science payload instruments to the lunar surface.
NASA Astrophysics Data System (ADS)
Perry, Dan; Nakamoto, Mark; Verghese, Nishath; Hurat, Philippe; Rouse, Rich
2007-03-01
Model-based hotspot detection and silicon-aware parametric analysis help designers optimize their chips for yield, area and performance without the high cost of applying foundries' recommended design rules. This set of DFM/recommended rules is primarily litho-driven, but cannot guarantee a manufacturable design without imposing overly restrictive design requirements. This rule-based methodology of making design decisions based on idealized polygons that no longer represent what is on silicon needs to be replaced. Using model-based simulation of the lithography, OPC, RET and etch effects, followed by electrical evaluation of the resulting shapes, leads to a more realistic and accurate analysis. This analysis can be used to evaluate intelligent design trade-offs and identify potential failures due to systematic manufacturing defects during the design phase. The successful DFM design methodology consists of three parts: 1. Achieve a more aggressive layout through limited usage of litho-related recommended design rules; a 10% to 15% area reduction is achieved by using more aggressive design rules, and DFM/recommended design rules are used only if there is no impact on cell size. 2. Identify and fix hotspots using a model-based layout printability checker; model-based litho and etch simulation are done at the cell level to identify hotspots, violations of recommended rules may cause additional hotspots, which are then fixed, and the resulting design is ready for step 3. 3. Improve timing accuracy with a process-aware parametric analysis tool for transistors and interconnect; contours of diffusion, poly and metal layers are used for parametric analysis. In this paper, we show the results of this physical and electrical DFM methodology at Qualcomm. We describe how Qualcomm was able to develop more aggressive cell designs that yielded a 10% to 15% area reduction using this methodology. Model-based shape simulation was employed during library development to validate architecture choices and to optimize cell layout. At the physical verification stage, the shape simulator was run at full-chip level to identify and fix residual hotspots on interconnect layers, on poly or metal 1 due to interaction between adjacent cells, or on metal 1 due to interaction between routing (via and via cover) and cell geometry. To determine an appropriate electrical DFM solution, Qualcomm developed an experiment to examine various electrical effects. After reporting the silicon results of this experiment, which showed sizeable delay variations due to lithography-related systematic effects, we also explain how contours of diffusion, poly and metal can be used for silicon-aware parametric analysis of transistors and interconnect at the cell-, block- and chip-level.
Gorguluarslan, Recep M; Choi, Seung-Kyum; Saldana, Christopher J
2017-07-01
A methodology is proposed for uncertainty quantification and validation to accurately predict the mechanical response of lattice structures used in the design of scaffolds. Effective structural properties of the scaffolds are characterized using a multi-level stochastic upscaling process that propagates the uncertainties quantified at the strut level to the lattice-structure level. To obtain realistic simulation models for the stochastic upscaling process and minimize the experimental cost, high-resolution finite element models of individual struts were reconstructed from micro-CT scan images of lattice structures fabricated by selective laser melting. The upscaling method facilitates the process of determining homogenized strut properties to reduce the computational cost of the detailed simulation model for the scaffold. The Bayesian Information Criterion is utilized to quantify the uncertainties with parametric distributions based on the statistical data obtained from the reconstructed strut models. A systematic validation approach that can minimize the experimental cost is also developed to assess the predictive capability of the stochastic upscaling method used at the strut level and lattice-structure level. In comparison with physical compression test results, the proposed methodology of linking uncertainty quantification with the multi-level stochastic upscaling method enabled an accurate prediction of the elastic behavior of the lattice structure with minimal experimental cost, by accounting for the uncertainties induced by the additive manufacturing process. Copyright © 2017 Elsevier Ltd. All rights reserved.
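The distribution-selection step lends itself to a compact illustration. Below is a minimal sketch: synthetic data stand in for the micro-CT-derived strut statistics, and the candidate set and fitting calls are assumptions rather than the authors' code. Several parametric distributions are fit by maximum likelihood and ranked by BIC.

```python
import numpy as np
from scipy import stats

def bic(loglik, k, n):
    """Bayesian Information Criterion: k parameters, n samples."""
    return k * np.log(n) - 2.0 * loglik

# Stand-in for measured strut properties (e.g., effective diameters);
# in the paper these come from micro-CT-reconstructed strut models.
rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.15, size=200)

candidates = {
    "normal":    stats.norm,
    "lognormal": stats.lognorm,
    "gamma":     stats.gamma,
    "weibull":   stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(data)                   # maximum-likelihood fit
    ll = np.sum(dist.logpdf(data, *params))   # log-likelihood at the MLE
    print(name, round(bic(ll, len(params), len(data)), 1))
# The candidate with the lowest BIC is chosen to represent the uncertainty.
```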
Kargarian-Marvasti, Sadegh; Rimaz, Shahnaz; Abolghasemi, Jamileh; Heydari, Iraj
2017-01-01
The Cox proportional hazards model is the most common method for analyzing the effects of several variables on survival time. However, under certain circumstances, parametric models give more precise estimates than the Cox model. The purpose of this study was to compare the performance of Cox and parametric models in a survival analysis of factors affecting the time to onset of neuropathy in patients with type 2 diabetes. This study included 371 patients with type 2 diabetes without neuropathy who were registered at the Fereydunshahr diabetes clinic. Subjects were followed up for the development of neuropathy from 2006 to March 2016. To investigate the factors influencing the time to neuropathy, variables significant in the univariate model (P < 0.20) were entered into the multivariate Cox and parametric models (P < 0.05). In addition, the Akaike information criterion (AIC) and the area under the ROC curve were used to evaluate the relative goodness of fit and the efficiency of each procedure, respectively. Statistical computing was performed using R software version 3.2.3 (UNIX platforms, Windows and MacOS). Using the Kaplan-Meier method, survival time to neuropathy was estimated at 76.6 ± 5 months after the initial diagnosis of diabetes. After multivariate analysis of the Cox and parametric models, ethnicity, high-density lipoprotein and family history of diabetes were identified as predictors of the time to neuropathy (P < 0.05). According to the AIC, the log-normal model, with the lowest Akaike value, was the best-fitted model among the Cox and parametric models. According to the comparison of survival receiver operating characteristic curves, the log-normal model was the most efficient and best-fitted model.
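Such a comparison is easy to reproduce in software. The sketch below uses the Python lifelines library (the study itself used R) on a tiny hypothetical data set; the column names and values are invented for illustration. It fits Cox, Weibull and log-normal models and compares them by AIC, noting that the Cox AIC is based on the partial likelihood and so is not strictly comparable.

```python
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter, LogNormalAFTFitter

# Hypothetical follow-up data: months to neuropathy, event indicator,
# and two of the predictors named in the abstract.
df = pd.DataFrame({
    "months": [12, 34, 76, 90, 55, 40, 88, 66],
    "event":  [1, 0, 1, 0, 1, 1, 0, 1],
    "hdl":    [38, 52, 41, 60, 45, 39, 58, 44],
    "fam_hx": [1, 0, 1, 0, 1, 1, 0, 0],
})

models = {
    "cox":       CoxPHFitter(),
    "weibull":   WeibullAFTFitter(),
    "lognormal": LogNormalAFTFitter(),
}
for name, m in models.items():
    m.fit(df, duration_col="months", event_col="event")
    # Parametric fitters expose AIC_; Cox exposes the partial-likelihood AIC.
    aic = m.AIC_partial_ if name == "cox" else m.AIC_
    print(name, round(aic, 1))
```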
Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data
Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao
2012-01-01
Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, often there is prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic variance of the estimates can be reduced significantly while keeping the asymptotic variance same as the unguided estimator. We observe the performance of our method via a simulation study and demonstrate our method by applying to a real data set on mergers and acquisitions. PMID:23645976
Sizing procedures for sun-tracking PV system with batteries
NASA Astrophysics Data System (ADS)
Nezih Gerek, Ömer; Başaran Filik, Ümmühan; Filik, Tansu
2017-11-01
Deciding the optimum number of PV panels, wind turbines and batteries (i.e. a complete renewable energy system) for minimum cost and complete energy balance is a challenging and interesting problem. In the literature, rough data models or limited recorded data, together with low-resolution hourly averaged meteorological values, are used to test sizing strategies. In this study, active sun-tracking and fixed PV solar power generation values of ready-to-serve commercial products were recorded throughout 2015-2016. Simultaneously, several outdoor parameters (solar radiation, temperature, humidity, wind speed/direction, pressure) were recorded at high resolution. The hourly energy consumption values of a standard 4-person household, constructed on our campus in Eskisehir, Turkey, were also recorded for the same period. During sizing, novel parametric random-process models for wind speed, temperature, solar radiation, energy demand and electricity generation curves are developed, and it is observed that these models provide sizing results with lower LLP (loss-of-load probability) in Monte Carlo experiments that consider average and minimum performance cases. Furthermore, another novel cost optimization strategy is adopted to show that solar-tracking PV panels provide lower costs by enabling a reduced number of installed batteries. Results are verified over real recorded data.
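The LLP criterion at the heart of such sizing studies can be estimated with a simple Monte Carlo simulation. In the sketch below, hypothetical generation and demand draws stand in for the paper's fitted random-process models, and the battery parameters are invented; the function counts the hours in which a given battery bank fails to cover demand.

```python
import numpy as np

def loss_of_load_probability(gen, load, n_batt, batt_kwh, eff=0.9):
    """Simulate battery state of charge hour by hour and count unmet load.

    gen, load : hourly PV generation and household demand (kWh), same length.
    Returns the fraction of hours in which demand could not be served (LLP).
    """
    cap = n_batt * batt_kwh
    soc, unmet = 0.5 * cap, 0
    for g, d in zip(gen, load):
        balance = g - d
        if balance >= 0:                      # surplus -> charge
            soc = min(cap, soc + eff * balance)
        else:                                 # deficit -> discharge
            need = -balance / eff
            if soc >= need:
                soc -= need
            else:
                soc, unmet = 0.0, unmet + 1
    return unmet / len(load)

# Hypothetical stochastic draws standing in for the fitted parametric models.
rng = np.random.default_rng(7)
hours = 24 * 365
gen = np.clip(rng.normal(0.8, 0.6, hours), 0, None)     # kWh per hour
load = np.clip(rng.normal(0.6, 0.3, hours), 0.05, None)
for n in (2, 4, 6):
    llp = loss_of_load_probability(gen, load, n, 2.4)
    print(n, "batteries -> LLP =", round(llp, 3))
```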
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Davidson, William L.; Herbert, Frank J.; Bilodeau, James W.; Stoval, J. Michael; Sutton, Terry
1989-01-01
The optimum hardware miniaturization level with the lowest cost impact for space biology hardware was determined. Space biology hardware and/or components/subassemblies/assemblies that are the most likely candidates for miniaturization were defined, and the relative cost impacts of such miniaturization were analyzed. A mathematical or statistical analysis method with the capability to support the development of parametric cost analysis impacts for levels of production design miniaturization was provided.
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for the image is developed to recover the edges of color images and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of a range of parametric blur structures, and this information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
NASA Astrophysics Data System (ADS)
Sardet, Laure; Patilea, Valentin
When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem well-adapted to capture the skewness, the long tails and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically with two or three components. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data, and of the quantiles of the individual claims distribution in a non-life insurance application.
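The transform-smooth-backtransform pipeline can be sketched in a few lines. In the example below, the claims are synthetic, a two-component lognormal mixture is fitted as a Gaussian mixture in log space, and a fixed bandwidth replaces the paper's bandwidth rule; all of these are assumptions made for illustration.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
claims = np.concatenate([rng.lognormal(7.0, 0.5, 800),
                         rng.lognormal(9.0, 0.8, 200)])  # synthetic claims

# 1) Parsimonious two-component lognormal mixture (Gaussian in log space).
gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(claims)[:, None])
w = gm.weights_
mu = gm.means_.ravel()
sd = np.sqrt(gm.covariances_).ravel()

def mix_cdf(y):
    return sum(wk * stats.norm.cdf((np.log(y) - mk) / sk)
               for wk, mk, sk in zip(w, mu, sd))

def mix_pdf(y):
    return sum(wk * stats.norm.pdf((np.log(y) - mk) / sk) / (sk * y)
               for wk, mk, sk in zip(w, mu, sd))

# 2) Transform to the unit interval and smooth with a beta kernel.
u = mix_cdf(claims)
b = 0.02  # bandwidth; the paper derives its own rule, this value is arbitrary
def beta_kernel_density(x):
    return np.mean(stats.beta.pdf(u, x / b + 1, (1 - x) / b + 1))

# 3) Back-transform: g(y) = f_U(F(y)) * f_mix(y).
y = 3000.0
print(beta_kernel_density(mix_cdf(y)) * mix_pdf(y))
```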
McCullagh, Laura; Schmitz, Susanne; Barry, Michael; Walsh, Cathal
2017-11-01
In Ireland, all new drugs for which reimbursement by the healthcare payer is sought undergo a health technology assessment by the National Centre for Pharmacoeconomics. The National Centre for Pharmacoeconomics estimates expected value of perfect information but not partial expected value of perfect information (owing to the computational expense associated with typical methodologies). The objective of this study was to examine the feasibility and utility of estimating partial expected value of perfect information via a computationally efficient, non-parametric regression approach. This was a retrospective analysis of evaluations of drugs for cancer that had been submitted to the National Centre for Pharmacoeconomics (January 2010 to December 2014 inclusive). Drugs were excluded if cost-effective at the submitted price. Drugs were also excluded if concerns existed regarding the validity of the applicants' submission or if cost-effectiveness model functionality did not allow the required modifications to be made. For each included drug (n = 14), value of information was estimated at the final reimbursement price, at a threshold equivalent to the incremental cost-effectiveness ratio at that price. The expected value of perfect information was estimated from probabilistic analysis. Partial expected value of perfect information was estimated via a non-parametric approach. Input parameters with a population value of at least €1 million were identified as potential targets for research. All partial estimates were determined within minutes. Thirty parameters (across nine models) each had a value of at least €1 million. These were categorised. Collectively, survival analysis parameters were valued at €19.32 million, health state utility parameters at €15.81 million and parameters associated with the cost of treating adverse effects at €6.64 million. Those associated with drug acquisition costs and with the cost of care were valued at €6.51 million and €5.71 million, respectively. This research demonstrates that the estimation of partial expected value of perfect information via this computationally inexpensive approach could be considered feasible as part of the health technology assessment process for reimbursement purposes within the Irish healthcare system. It might be a useful tool in prioritising future research to decrease decision uncertainty.
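The regression trick that makes partial EVPI cheap is worth seeing in miniature. The sketch below uses a hypothetical two-strategy decision model with invented numbers, and a cubic polynomial stands in for the GAM or Gaussian-process smoothers typically used in this literature; it estimates per-patient partial EVPI from probabilistic sensitivity analysis draws.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 10000

# Probabilistic sensitivity analysis draws (hypothetical two-strategy model):
theta = rng.normal(0.7, 0.1, n)   # parameter of interest, e.g. a survival HR
other = rng.normal(0.0, 1.0, n)   # all other parameters, independent noise here
wtp = 45000.0                     # willingness-to-pay threshold per QALY
nb_new = wtp * (2.1 - theta) - 30000 + 2000 * other  # net benefit, new drug
nb_std = wtp * 1.30 - 10000 + 1500 * other           # net benefit, comparator
nb = np.column_stack([nb_std, nb_new])

# Regress each strategy's net benefit on theta; the fitted values estimate
# the conditional expected net benefit given theta.
fitted = np.column_stack([
    np.polyval(np.polyfit(theta, nb[:, d], deg=3), theta) for d in range(2)
])

# Partial EVPI = E[max_d E(NB_d | theta)] - max_d E(NB_d).
evppi = np.mean(fitted.max(axis=1)) - fitted.mean(axis=0).max()
print("per-patient partial EVPI:", round(evppi, 1))
# Multiply by the effective population to compare against the EUR 1m threshold.
```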
PRESS-based EFOR algorithm for the dynamic parametrical modeling of nonlinear MDOF systems
NASA Astrophysics Data System (ADS)
Liu, Haopeng; Zhu, Yunpeng; Luo, Zhong; Han, Qingkai
2017-09-01
In response to the identification problem for multi-degree-of-freedom (MDOF) nonlinear systems, this study presents the extended forward orthogonal regression (EFOR) algorithm based on predicted residual sums of squares (PRESS) to construct a nonlinear dynamic parametrical model. The proposed parametrical model is based on the nonlinear autoregressive with exogenous inputs (NARX) model and aims to explicitly reveal the physical design parameters of the system. The PRESS-based EFOR algorithm is proposed to identify such a model for MDOF systems. Using the algorithm, we built a common-structured model based on the fundamental concept of evaluating its generalization capability through cross-validation. The resulting model aims to prevent the over-fitting and poor generalization caused by the average error reduction ratio (AERR)-based EFOR algorithm. Then, a functional relationship is established between the coefficients of the terms and the design parameters of the unified model. Moreover, a 5-DOF nonlinear system is taken as a case to illustrate the modeling procedure of the proposed algorithm. Finally, a dynamic parametrical model of a cantilever beam is constructed from experimental data. Results indicate that the dynamic parametrical model of nonlinear systems, identified with the PRESS-based EFOR, can accurately predict the output response, thus providing a theoretical basis for the optimal design of modeling methods for MDOF nonlinear systems.
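PRESS itself has a closed form for linear-in-parameters models, which is what makes it attractive as a term-selection criterion. Below is a minimal sketch using a toy regression rather than the authors' NARX machinery; it exploits the leave-one-out hat-matrix identity so that no model is refit n times.

```python
import numpy as np

def press(X, y):
    """Predicted residual sum of squares for a linear-in-parameters model.

    Uses the leave-one-out identity e_i / (1 - h_ii), where h_ii are the
    diagonal entries of the hat matrix H = X (X'X)^-1 X'.
    """
    H = X @ np.linalg.pinv(X.T @ X) @ X.T
    resid = y - H @ y
    return np.sum((resid / (1.0 - np.diag(H))) ** 2)

# Toy NARX-style choice between two candidate term sets, scored by PRESS.
rng = np.random.default_rng(5)
u = rng.normal(size=200)
y = 0.8 * np.roll(u, 1) + 0.3 * np.roll(u, 1) ** 2 + 0.05 * rng.normal(size=200)
X_small = np.column_stack([np.roll(u, 1)])                       # linear only
X_full = np.column_stack([np.roll(u, 1), np.roll(u, 1) ** 2])    # + quadratic
print(press(X_small, y), press(X_full, y))  # the richer model should win here
```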
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2013-02-01
This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse-engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed, conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.
Z_n Baxter model: symmetries and the Belavin parametrization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richey, M.P.; Tracy, C.A.
1986-02-01
The Z_n Baxter model is an exactly solvable lattice model in the special case of the Belavin parametrization. For this parametrization the authors calculate the partition function in an antiferromagnetic region and the order parameter in a ferromagnetic region. They find that the order parameter is expressible in terms of a modular function of level n, which for n=2 is the Onsager-Yang-Baxter result. In addition they determine the symmetry group of the finite-lattice partition function for the general Z_n Baxter model.
Multi-parametric centrality method for graph network models
NASA Astrophysics Data System (ADS)
Ivanov, Sergei Evgenievich; Gorlushkina, Natalia Nikolaevna; Ivanova, Lubov Nikolaevna
2018-04-01
Graph network models are investigated to determine the centrality, weights and significance of vertices. For centrality analysis, a typical method applies any single property of the graph vertices. In graph theory, centrality is analyzed in terms of degree, closeness, betweenness, radiality, eccentricity, PageRank, status, Katz centrality and eigenvector centrality. We propose a new multi-parametric centrality method that simultaneously includes a number of basic properties of a network member. A mathematical model of the multi-parametric centrality method is developed, and its results are compared with those of the classical centrality methods. To evaluate the method, a graph model with hundreds of vertices is analyzed. The comparative analysis demonstrates the accuracy of the presented method, which simultaneously incorporates a number of basic vertex properties.
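A straightforward way to experiment with the idea is to combine several classical centralities into a single score. The sketch below uses NetworkX; the equal weights and min-max normalization are assumptions, since the abstract does not specify the paper's combination rule.

```python
import networkx as nx
import numpy as np

G = nx.erdos_renyi_graph(200, 0.05, seed=42)

# Several classical single-property centralities.
measures = {
    "degree":      nx.degree_centrality(G),
    "closeness":   nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
    "pagerank":    nx.pagerank(G),
}

# Multi-parametric score: min-max-normalize each measure, then average.
scores = np.zeros(G.number_of_nodes())
for vals in measures.values():
    v = np.array([vals[n] for n in G.nodes()])
    scores += (v - v.min()) / (v.max() - v.min() + 1e-12)
scores /= len(measures)

top = sorted(zip(G.nodes(), scores), key=lambda t: -t[1])[:5]
print(top)  # the five most central vertices under the combined criterion
```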
Parametric Modeling for Fluid Systems
NASA Technical Reports Server (NTRS)
Pizarro, Yaritzmar Rosario; Martinez, Jonathan
2013-01-01
Fluid Systems involves different projects that require parametric modeling, in which a model maintains consistent relationships between elements as it is manipulated. One of these projects is the Neo Liquid Propellant Testbed, which is part of Rocket U. As part of Rocket U (Rocket University), engineers at NASA's Kennedy Space Center in Florida have the opportunity to develop critical flight skills as they design, build and launch high-powered rockets. To build the Neo testbed, hardware from the Space Shuttle Program was repurposed. Modeling for Neo included fittings, valves, frames and tubing, among others. These models help in the review process, to make sure regulations are being followed. Another fluid systems project that required modeling is Plant Habitat's TCUI test project. Plant Habitat is a plan to develop a large growth chamber to study the effects of long-duration microgravity exposure on plants in space. Work for this project included the design and modeling of a duct vent for a flow test. Parametric modeling for these projects was done using Creo Parametric 2.0.
Housing price prediction: parametric versus semi-parametric spatial hedonic models
NASA Astrophysics Data System (ADS)
Montero, José-María; Mínguez, Román; Fernández-Avilés, Gema
2018-01-01
House price prediction is a hot topic in the economic literature; it has traditionally been approached using a-spatial linear (or intrinsically linear) hedonic models. It has been shown, however, that spatial effects are inherent in house pricing. This article considers parametric and semi-parametric spatial hedonic model variants that account for spatial autocorrelation, spatial heterogeneity and (smooth, nonparametrically specified) nonlinearities using penalized-splines methodology. The models are represented as mixed models, which allows the smoothing parameters to be estimated along with the other parameters of the model. To assess the out-of-sample performance of the models, the paper uses a database containing the price and characteristics of 10,512 homes in Madrid, Spain (Q1 2010). The results obtained suggest that nonlinear models accounting for spatial heterogeneity and flexible nonlinear relationships between some of the individual or areal characteristics of the houses and their prices are the best strategies for house price prediction.
Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion
NASA Technical Reports Server (NTRS)
Ashe, Thomas L.; Otting, William D.
1993-01-01
The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance to significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.
Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne
2012-01-01
In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882
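A cheap way to reproduce the linear-versus-RKHS contrast is with scikit-learn, treating kernel ridge regression as a frequentist stand-in for the Bayesian RKHS regression of the paper. The synthetic marker matrix and the epistatic term below are invented for illustration; the point is that the kernel method can exploit non-linearity on markers that the linear model misses.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a marker matrix: 300 lines x 1000 biallelic markers.
rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(300, 1000)).astype(float)
beta = np.zeros(1000)
beta[:50] = rng.normal(0, 0.2, 50)                 # 50 markers with effects
y = X @ beta + 0.5 * (X[:, 0] * X[:, 1]) + rng.normal(0, 0.5, 300)  # epistasis

linear = BayesianRidge()                            # linear on marker effects
rkhs = KernelRidge(kernel="rbf", gamma=1.0 / X.shape[1], alpha=1.0)

for name, model in [("bayesian ridge", linear), ("RKHS (kernel ridge)", rkhs)]:
    acc = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(name, round(acc, 3))
```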
Thick electrodes for Li-ion batteries: A model based analysis
NASA Astrophysics Data System (ADS)
Danner, Timo; Singh, Madhav; Hein, Simon; Kaiser, Jörg; Hahn, Horst; Latz, Arnulf
2016-12-01
Li-ion batteries are commonly used in portable electronic devices due to their outstanding energy and power density. A remaining issue that hinders their breakthrough in, for example, the automotive sector is the high production cost. For low-power applications, such as stationary storage, batteries with electrodes thicker than 300 μm have been suggested. High energy densities can be attained with only a few electrode layers, which reduces production time and cost. However, mass and charge transport limitations can be severe even at small C-rates, due to the long transport pathways. In this article we use a detailed 3D microstructure-resolved model to investigate the limiting factors for battery performance. The model is parametrized with data from the literature and dedicated experiments, and shows good qualitative agreement with experimental discharge curves of thick NMC-graphite Li-ion batteries. The model is used to assess the effect of inhomogeneities in the carbon black distribution and gives answers to the possible occurrence of lithium plating during battery charge. Based on our simulations we can predict optimal operation strategies and improved design concepts for future Li-ion batteries employing thick electrodes.
Williams, Claire; Lewsey, James D.; Mackay, Daniel F.; Briggs, Andrew H.
2016-01-01
Modeling of clinical-effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results. PMID:27698003
Belcour, Laurent; Pacanowski, Romain; Delahaie, Marion; Laville-Geay, Aude; Eupherte, Laure
2014-12-01
We compare the performance of various analytical retroreflecting bidirectional reflectance distribution function (BRDF) models to assess how accurately they reproduce measured data of retroreflecting materials. We introduce a new parametrization, the back vector parametrization, to analyze retroreflecting data, and we show that this parametrization better preserves the isotropy of the data. Furthermore, we update existing BRDF models to improve the representation of retroreflective data.
Validation of a Parametric Approach for 3D Fortification Modelling: Application to Scale Models
NASA Astrophysics Data System (ADS)
Jacquot, K.; Chevrier, C.; Halin, G.
2013-02-01
Parametric modelling applied to the virtual representation of cultural heritage is a field of research that has been explored for years, since it can address many limitations of digitising tools. For example, essential historical sources for fortification virtual reconstructions, like plans-reliefs, have several shortcomings when they are scanned. To overcome those problems, knowledge-based modelling can be used: knowledge models based on the analysis of the theoretical literature of a specific domain, such as bastioned fortification treatises, can be the cornerstone of a parametric library of fortification components. Implemented in Grasshopper, these components are manually adjusted to the available data (i.e. 3D surveys of plans-reliefs or scanned maps). Most of the fortification area is now modelled, and this raises the question of accuracy assessment. A specific method is used to evaluate the accuracy of the parametric components, and the results of the assessment process will allow us to validate the parametric approach. The automation of the adjustment process can then be planned. The virtual model of the fortification is part of a larger project aimed at valorising and diffusing a very unique cultural heritage item: the collection of plans-reliefs. As such, knowledge models are precious assets when automation and semantic enhancements are considered.
A Parametric Approach to Numerical Modeling of TKR Contact Forces
Lundberg, Hannah J.; Foucher, Kharma C.; Wimmer, Markus A.
2009-01-01
In vivo knee contact forces are difficult to determine using numerical methods because there are more unknown forces than equilibrium equations available. We developed parametric methods for computing contact forces across the knee joint during the stance phase of level walking. Three-dimensional contact forces were calculated at two points of contact between the tibia and the femur, one on the lateral aspect of the tibial plateau, and one on the medial side. Muscle activations were parametrically varied over their physiologic range resulting in a solution space of contact forces. The obtained solution space was reasonably small and the resulting force pattern compared well to a previous model from the literature for kinematics and external kinetics from the same patient. Peak forces of the parametric model and the previous model were similar for the first half of the stance phase, but differed for the second half. The previous model did not take into account the transverse external moment about the knee and could not calculate muscle activation levels. Ultimately, the parametric model will result in more accurate contact force inputs for total knee simulators, as current inputs are not generally based on kinematics and kinetics inputs from TKR patients. PMID:19155015
NASA Astrophysics Data System (ADS)
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes, and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric smoothing-splines technique for predicting stock prices. The MARS model is an adaptive nonparametric regression method suited to problems with high dimensions and several variables; smoothing splines are likewise a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) to predict stock prices with both approaches. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock prices with the MARS model. After fitting the semi-parametric smoothing-splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
A Semi-parametric Transformation Frailty Model for Semi-competing Risks Survival Data
Jiang, Fei; Haneuse, Sebastien
2016-01-01
In the analysis of semi-competing risks data interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ2. When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permit the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators is derived and small-sample operating characteristics evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer. PMID:28439147
Multi Response Optimization of Laser Micro Marking Process: A Grey-Fuzzy Approach
NASA Astrophysics Data System (ADS)
Shivakoti, I.; Das, P. P.; Kibria, G.; Pradhan, B. B.; Mustafa, Z.; Ghadai, R. K.
2017-07-01
The selection of an optimal parametric combination for efficient machining has always been a challenging issue for manufacturing researchers. The optimal parametric combination provides better machining, which improves productivity and product quality and subsequently reduces production cost and time. This paper presents a hybrid approach combining grey relational analysis and fuzzy logic to obtain the optimal parametric combination for better laser beam micro-marking on a gallium nitride (GaN) work material. Response surface methodology has been implemented for the design of experiments, considering three parameters at five levels each. Current, frequency and scanning speed were taken as the process parameters, and mark width, mark depth and mark intensity as the process responses.
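The grey-relational half of such a hybrid is compact enough to sketch. The example below uses invented response data and omits the fuzzy aggregation step of the paper; it normalizes three marking responses, computes grey relational coefficients against the ideal sequence, and ranks the runs by their grey relational grade.

```python
import numpy as np

# Hypothetical responses of 6 experimental runs:
# columns = mark width (smaller better), mark depth (larger better),
# mark intensity (larger better).
Y = np.array([
    [42.0, 8.1, 0.61],
    [38.5, 7.2, 0.55],
    [45.2, 9.0, 0.70],
    [40.1, 8.6, 0.66],
    [37.8, 6.9, 0.52],
    [43.6, 8.9, 0.68],
])
larger_better = [False, True, True]

# 1) Grey relational normalization per response.
Z = np.empty_like(Y)
for j in range(Y.shape[1]):
    lo, hi = Y[:, j].min(), Y[:, j].max()
    Z[:, j] = ((Y[:, j] - lo) / (hi - lo) if larger_better[j]
               else (hi - Y[:, j]) / (hi - lo))

# 2) Grey relational coefficients against the ideal sequence (all ones).
zeta = 0.5                                  # distinguishing coefficient
delta = np.abs(1.0 - Z)
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# 3) Grey relational grade = mean coefficient per run.
grade = grc.mean(axis=1)
print("best run:", int(np.argmax(grade)) + 1, "grades:", np.round(grade, 3))
```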
NASA Technical Reports Server (NTRS)
Wang, Jianzhong Jay; Datta, Koushik; Landis, Michael R. (Technical Monitor)
2002-01-01
This paper describes the development of a life-cycle cost (LCC) estimating methodology for air traffic control Decision Support Tools (DSTs) under development by the National Aeronautics and Space Administration (NASA), using a combination of parametric, analogy, and expert opinion methods. There is no one standard methodology and technique that is used by NASA or by the Federal Aviation Administration (FAA) for LCC estimation of prospective Decision Support Tools. Some of the frequently used methodologies include bottom-up, analogy, top-down, parametric, expert judgement, and Parkinson's Law. The developed LCC estimating methodology can be visualized as a three-dimensional matrix where the three axes represent coverage, estimation, and timing. This paper focuses on the three characteristics of this methodology that correspond to the three axes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.
2017-04-01
Here, we implement a variance-based distance metric (D_n) to objectively assess the skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observation and model data pairs on common spatial and temporal grids, improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The D_n metric is a gamma-distributed statistic that is more general than the χ² statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The D_n statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable, and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step toward a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning, using the D_n metric as a cost function and incorporating model parametric uncertainty in a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE), encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.
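The abstract gives enough to motivate a toy version of the idea. The sketch below is only in the spirit of D_n; the published statistic's exact form and its gamma distribution are not reproduced. It averages squared model-observation differences normalized by combined variances over a grid, and uses the result to rank two model configurations.

```python
import numpy as np

def dn_like_metric(model, obs, var_model, var_obs):
    """Variance-normalized distance between model and observation fields.

    A sketch in the spirit of the D_n metric described in the abstract:
    squared differences on a common grid, normalized by the combined
    model and observational variances, then averaged over grid cells.
    """
    d2 = (model - obs) ** 2 / (var_model + var_obs)
    return d2.mean()

# Two toy configurations of a sea ice concentration field on a 50x50 grid.
rng = np.random.default_rng(0)
truth = np.clip(rng.random((50, 50)), 0, 1)
obs = np.clip(truth + rng.normal(0, 0.05, truth.shape), 0, 1)
for bias in (0.0, 0.1):
    model = np.clip(truth + bias + rng.normal(0, 0.05, truth.shape), 0, 1)
    print(bias, round(dn_like_metric(model, obs, 0.05**2, 0.05**2), 3))
# The lower value identifies the configuration in better agreement, which is
# how such a metric can serve as a cost function for model tuning.
```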
Formation of parametric images using mixed-effects models: a feasibility study.
Huang, Husan-Ming; Shih, Yi-Yu; Lin, Chieh
2016-03-01
Mixed-effects models have been widely used in the analysis of longitudinal data. By presenting the parameters as a combination of fixed effects and random effects, mixed-effects models incorporating both within- and between-subject variations are capable of improving parameter estimation. In this work, we demonstrate the feasibility of using a non-linear mixed-effects (NLME) approach for generating parametric images from the medical imaging data of a single study. By assuming that all voxels in the image are independent, we used simulation and animal data to evaluate whether NLME can improve voxel-wise parameter estimation. For testing purposes, intravoxel incoherent motion (IVIM) diffusion parameters, including the perfusion fraction, pseudo-diffusion coefficient and true diffusion coefficient, were estimated using diffusion-weighted MR images and NLME through fitting the IVIM model. The conventional method of non-linear least squares (NLLS) was used as the standard approach for comparison of the resulting parametric images. In the simulated data, NLME provides more accurate and precise estimates of diffusion parameters compared with NLLS. Similarly, we found that NLME has the ability to improve the signal-to-noise ratio of parametric images obtained from rat brain data. These data have shown that it is feasible to apply NLME in parametric image generation, and that parametric image quality can be improved accordingly with the use of NLME. With the flexibility to be adapted to other models or modalities, NLME may become a useful tool to improve parametric image quality in the future. Copyright © 2015 John Wiley & Sons, Ltd.
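The IVIM model referred to here is the standard bi-exponential signal equation S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D). The sketch below fits it to one synthetic voxel with the conventional NLLS baseline (the NLME extension, which pools voxels through fixed and random effects, is beyond a few lines); the b-values and true parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, f, d_star, d):
    """Bi-exponential IVIM signal model for diffusion-weighted MRI."""
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

b = np.array([0, 10, 20, 40, 80, 150, 300, 500, 800.0])  # s/mm^2
true = dict(s0=1.0, f=0.12, d_star=0.02, d=0.0008)
sig = ivim(b, **true) + 0.01 * np.random.default_rng(4).normal(size=b.size)

# Conventional NLLS fit for one voxel; NLME would instead share information
# across voxels by splitting parameters into fixed and random effects.
p0 = [1.0, 0.1, 0.01, 0.001]
bounds = ([0, 0, 0.003, 0], [2, 0.5, 0.1, 0.003])
popt, _ = curve_fit(ivim, b, sig, p0=p0, bounds=bounds)
print(dict(zip(["S0", "f", "D*", "D"], np.round(popt, 4))))
```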
Coupled oscillators in identification of nonlinear damping of a real parametric pendulum
NASA Astrophysics Data System (ADS)
Olejnik, Paweł; Awrejcewicz, Jan
2018-01-01
A damped parametric pendulum with friction is identified twice, by means of a precise and an imprecise mathematical model. A laboratory test stand designed for experimental investigation of nonlinear effects, determined by viscous resistance and the stick-slip phenomenon, serves as the model mechanical system. The influence of the accuracy of the mathematical modeling on the time variability of the nonlinear damping coefficient of the oscillator is demonstrated. The free decay response of a precisely and an imprecisely modeled physical pendulum depends on two different time-varying coefficients of damping. The coefficients of the analyzed parametric oscillator are identified with a new semi-empirical method based on a coupled-oscillators approach, utilizing the fractional-order derivative of the discrete measurement series treated as an input to the numerical model. Results of applying the proposed method to identify the nonlinear coefficients of the damped parametric oscillator are illustrated and extensively discussed.
Modeling the directivity of parametric loudspeaker
NASA Astrophysics Data System (ADS)
Shi, Chuang; Gan, Woon-Seng
2012-09-01
The emerging applications of the parametric loudspeaker, such as 3D audio, demand accurate directivity control at the audible frequency (i.e. the difference frequency). Though delay-and-sum beamforming has been proven adequate to adjust the steering angles of the parametric loudspeaker, accurate prediction of the mainlobe and sidelobes remains a challenging problem. This is mainly because of the approximations used to derive the directivity of the difference frequency from the directivity of the primary frequencies, and the mismatches between the theoretical and the measured directivity caused by system errors incurred at different stages of the implementation. In this paper, we propose a directivity model of the parametric loudspeaker. The directivity model consists of two tuning vectors corresponding to the spacing error and the weight error for the primary frequency. The directivity model adopts a modified form of the product directivity principle for the difference frequency to further improve the modeling accuracy.
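The baseline that the paper's modified model improves on is the plain product directivity principle, which approximates the difference-frequency directivity as the product of the two primary-frequency directivities. Below is a minimal sketch for a circular-piston source; the geometry and frequencies are invented, and the paper's tuning vectors are not modeled.

```python
import numpy as np
from scipy.special import j1

def piston_directivity(theta, k, a):
    """Far-field directivity of a baffled circular piston of radius a."""
    x = k * a * np.sin(theta)
    with np.errstate(divide="ignore", invalid="ignore"):
        d = np.where(np.abs(x) < 1e-9, 1.0, 2.0 * j1(x) / x)
    return np.abs(d)

c, a = 343.0, 0.05                 # speed of sound (m/s), emitter radius (m)
f1, f2 = 40000.0, 41000.0          # primary ultrasonic frequencies (Hz)
theta = np.linspace(-np.pi / 3, np.pi / 3, 721)

d1 = piston_directivity(theta, 2 * np.pi * f1 / c, a)
d2 = piston_directivity(theta, 2 * np.pi * f2 / c, a)
d_diff = d1 * d2                   # product directivity at the 1 kHz difference

mask = d_diff >= d_diff.max() / np.sqrt(2)   # -3 dB (half-power) region
print("difference-frequency -3 dB beamwidth (deg):",
      round(float(np.ptp(np.degrees(theta[mask]))), 2))
```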
Orbit transfer vehicle engine study. Volume 2: Technical report
NASA Technical Reports Server (NTRS)
1980-01-01
The orbit transfer vehicle (OTV) engine study provided parametric performance, engine programmatic, and cost data on the complete propulsive spectrum that is available for a variety of high energy, space maneuvering missions. Candidate OTV engines from the near term RL 10 (and its derivatives) to advanced high performance expander and staged combustion cycle engines were examined. The RL 10/RL 10 derivative performance, cost and schedule data were updated and provisions defined which would be necessary to accommodate extended low thrust operation. Parametric performance, weight, envelope, and cost data were generated for advanced expander and staged combustion OTV engine concepts. A prepoint design study was conducted to optimize thrust chamber geometry and cooling, engine cycle variations, and controls for an advanced expander engine. Operation at low thrust was defined for the advanced expander engine and the feasibility and design impact of kitting was investigated. An analysis of crew safety and mission reliability was conducted for both the staged combustion and advanced expander OTV engine candidates.
Modeling OPC complexity for design for manufacturability
NASA Astrophysics Data System (ADS)
Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong
2005-11-01
Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help preserve feature fidelity in silicon, but increase mask complexity and cost. The growth in data volume with rising mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of the features printed on the mask. Aggressive RET increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified as a solution for reducing mask complexity and cost in several recent works. The goal of design-aware OPC is to relax the OPC tolerances of layout features to minimize mask cost, without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires. Design optimization should be aware of the impact of OPC correction levels on the mask cost and performance of the design. This work introduces mask cost characterization (MCC), which quantifies OPC complexity, measured in terms of the fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present an MCC methodology that provides models of the fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers, as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data from OPC and mask data preparation runs, we build models of FC as a function of OPC tolerances and layout parameters.
NASA Astrophysics Data System (ADS)
Trivailo, O.; Sippel, M.; Şekercioğlu, Y. A.
2012-08-01
The primary purpose of this paper is to review currently existing cost estimation methods, models, tools and resources applicable to the space sector. While key space-sector methods are outlined, a specific focus is placed on hardware cost estimation at the system level, particularly for early mission phases during which specifications and requirements are not yet crystallised and information is limited. For the space industry, cost engineering within the systems engineering framework is an integral discipline. The cost of any space program now constitutes a stringent design criterion, which must be considered and carefully controlled during the entire program life cycle. A first step in any program budget is a representative cost estimate, which usually hinges on a particular estimation approach, or methodology. Appropriate selection of specific cost models, methods and tools is therefore paramount, a difficult task given the highly variable nature, scope, and scientific and technical requirements applicable to each program. Numerous methods, models and tools exist. However, new ways are needed to address very early, pre-Phase 0 cost estimation during the initial program research and establishment phase, when system specifications are limited but the available research budget needs to be established and defined. For largely unprecedented vehicles such as reusable launchers with a manned capability, the lack of historical data means that both the classic heuristic approach, such as parametric cost estimation based on underlying CERs, and the analogy approach are, by definition, limited. This review identifies prominent cost estimation models applied to the space sector, and their underlying cost-driving parameters and factors. Strengths, weaknesses, and suitability to specific mission types and classes are also highlighted. Current approaches which strategically amalgamate various cost estimation strategies, both for formulation and validation of an estimate, and techniques and/or methods to attain representative and justifiable cost estimates, are consequently discussed. Ultimately, the aim of the paper is to establish a baseline for the development of a non-commercial, low-cost, transparent cost estimation methodology to be applied during very early program research phases at the complete vehicle system level, for largely unprecedented manned launch vehicles in the future. This paper takes the first step towards achieving this through the identification, analysis and understanding of established, existing techniques, models, tools and resources relevant within the space sector.
NASA Astrophysics Data System (ADS)
Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania
2017-03-01
Multiresponse semiparametric regression is a simultaneous-equation regression model that fuses parametric and nonparametric components. The regression model comprises several equations, and each equation has two components, one parametric and one nonparametric. The model used here has a linear function as the parametric component and a truncated polynomial spline as the nonparametric component. The model can handle both linear and nonlinear relationships between the responses and the sets of predictor variables. The aim of this paper is to demonstrate the application of the regression model to modeling the effect of regional socio-economic factors on the use of information technology. More specifically, the response variables are the percentage of households with internet access and the percentage of households with a personal computer, while the predictor variables are the percentage of literate people, the percentage of electrification and the percentage of economic growth. Based on identification of the relationships between the responses and the predictors, economic growth is treated as the nonparametric predictor and the others as parametric predictors. The results show that multiresponse semiparametric regression applies well here, as indicated by the high coefficient of determination of 90 percent.
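The truncated-spline component is simple to build by hand. The sketch below fits one response only, with synthetic data and invented knots; the paper's estimator for the full multiresponse system is not reproduced. A linear-plus-truncated-spline design matrix is assembled and solved by least squares.

```python
import numpy as np

def truncated_spline_basis(x, knots, degree=1):
    """Design columns x, ..., x^p and (x - k)_+^p for each knot k."""
    cols = [x ** d for d in range(1, degree + 1)]
    cols += [np.maximum(x - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(9)
n = 150
lit = rng.uniform(80, 100, n)        # literacy (parametric, linear)
elec = rng.uniform(50, 100, n)       # electrification (parametric, linear)
growth = rng.uniform(0, 10, n)       # economic growth (nonparametric)
g = np.piecewise(growth, [growth < 5],
                 [lambda v: 2 * v, lambda v: 10 + 0.2 * (v - 5)])
y = 0.4 * lit + 0.2 * elec + g + rng.normal(0, 1.0, n)  # one response shown

# Semiparametric design: intercept + linear terms + truncated spline in growth.
X = np.column_stack([np.ones(n), lit, elec,
                     truncated_spline_basis(growth, knots=[3.0, 5.0, 7.0])])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ coef
ss_res = np.sum((y - fitted) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("R^2 =", round(1 - ss_res / ss_tot, 3))
```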
Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob
2016-08-01
The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low-order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimating directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions, and to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Anurose, T. J.; Subrahamanyam, D. Bala
2013-06-01
We discuss the impact of the differential treatment of the roughness lengths for momentum and heat (z_{0m} and z_{0h}) in the flux parametrization scheme of the high-resolution regional model (HRM) for a heterogeneous terrain centred around Thiruvananthapuram, India (8.5°N, 76.9°E). The magnitudes of sensible heat flux (H) obtained from HRM simulations using the original parametrization scheme differed drastically from the concurrent in situ observations. With a view to improving the performance of this parametrization scheme, two distinct modifications are incorporated: (1) in the first method, a constant value of 100 is assigned to the z_{0m}/z_{0h} ratio; (2) in the second approach, this ratio is treated as a function of time. Both modifications in the HRM model showed significant improvements in the H simulations for Thiruvananthapuram and its adjoining regions. Results obtained from the present study provide a first-ever comparison of H simulations using the modified parametrization scheme in the HRM model with in situ observations for the Indian coastal region, and suggest a differential treatment of z_{0m} and z_{0h} in the flux parametrization scheme.
Regression analysis on the variation in efficiency frontiers for prevention stage of HIV/AIDS.
Kamae, Maki S; Kamae, Isao; Cohen, Joshua T; Neumann, Peter J
2011-01-01
To investigate how the cost effectiveness of preventing HIV/AIDS varies across possible efficiency frontiers (EFs) by taking into account potentially relevant external factors, such as prevention stage, and how the EFs can be characterized using regression analysis given uncertainty of the QALY-cost estimates. We reviewed cost-effectiveness estimates for the prevention and treatment of HIV/AIDS published from 2002-2007 and catalogued in the Tufts Medical Center Cost-Effectiveness Analysis (CEA) Registry. We constructed efficiency frontier (EF) curves by plotting QALYs against costs, using methods used by the Institute for Quality and Efficiency in Health Care (IQWiG) in Germany. We stratified the QALY-cost ratios by prevention stage, country of study, and payer perspective, and estimated EF equations using log and square-root models. A total of 53 QALY-cost ratios were identified for HIV/AIDS in the Tufts CEA Registry. Plotted ratios stratified by prevention stage were visually grouped into a cluster consisting of primary/secondary prevention measures and a cluster consisting of tertiary measures. Correlation coefficients for each cluster were statistically significant. For each cluster, we derived two EF equations - one based on the log model, and one based on the square-root model. Our findings indicate that stratification of HIV/AIDS interventions by prevention stage can yield distinct EFs, and that the correlation and regression analyses are useful for parametrically characterizing EF equations. Our study has certain limitations, such as the small number of included articles and the potential for study populations to be non-representative of countries of interest. Nonetheless, our approach could help develop a deeper appreciation of cost effectiveness beyond the deterministic approach developed by IQWiG.
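As an illustration of how such EF equations can be characterized parametrically, the sketch below fits the two functional forms named in the abstract, QALY = a + b·log(cost) and QALY = a + b·√cost, to hypothetical (cost, QALY) pairs; the data points and resulting numbers are invented for demonstration only:

```python
import numpy as np

# Hypothetical (cost, QALY) pairs for one prevention-stage cluster
cost = np.array([1_000, 5_000, 12_000, 30_000, 60_000, 120_000], dtype=float)
qaly = np.array([0.8, 1.9, 2.6, 3.4, 3.9, 4.5])

def fit_frontier(transform, cost, qaly):
    """Least-squares fit of QALY = a + b * f(cost) for a monotone transform f."""
    X = np.column_stack([np.ones_like(cost), transform(cost)])
    (a, b), *_ = np.linalg.lstsq(X, qaly, rcond=None)
    sse = np.sum((qaly - X @ np.array([a, b]))**2)
    return (a, b), sse

log_ab, log_sse = fit_frontier(np.log, cost, qaly)
sqrt_ab, sqrt_sse = fit_frontier(np.sqrt, cost, qaly)
print("log model   a,b =", log_ab, " SSE =", round(log_sse, 3))
print("sqrt model  a,b =", sqrt_ab, " SSE =", round(sqrt_sse, 3))
```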
Siciliani, Luigi
2006-01-01
Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. Highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
NASA Astrophysics Data System (ADS)
Franzini, Guilherme Rosa; Santos, Rebeca Caramêz Saraiva; Pesce, Celso Pupo
2017-12-01
This paper aims to numerically investigate the effects of parametric instability on piezoelectric energy harvesting from the transverse galloping of a square prism. A two degrees-of-freedom reduced-order model for this problem is proposed and numerically integrated. A usual quasi-steady galloping model is applied, where the transverse force coefficient is adopted as a cubic polynomial function of the angle of attack. Time-histories of nondimensional prism displacement, electric voltage and power dissipated at both the dashpot and the electrical resistance are obtained as functions of the reduced velocity. Both oscillation amplitude and electric voltage increased with the reduced velocity for all parametric excitation conditions tested. For low values of reduced velocity, 2:1 parametric excitation enhances the electric voltage. On the other hand, for higher reduced velocities, a 1:1 parametric excitation (i.e., at the natural frequency) enhances both oscillation amplitude and electric voltage. It was also found that, depending on the parametric excitation frequency, the harvested electrical power can be amplified by 70% when compared to the case with no parametric excitation.
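A minimal sketch of this kind of reduced-order model is given below: one structural mode with time-periodic (parametric) stiffness, a cubic quasi-steady galloping force, and a first-order harvesting-circuit equation, integrated with SciPy. All coefficients are illustrative placeholders, not the paper's values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative nondimensional parameters (not taken from the paper)
zeta, eps, Omega = 0.01, 0.10, 2.0   # damping, parametric amplitude, excitation frequency
chi, lam, kappa = 0.05, 0.5, 0.5     # electromechanical coupling, 1/RC, circuit coupling
U, a1, a3 = 8.0, 2.3, -18.0          # reduced velocity, cubic galloping coefficients

def rhs(t, s):
    y, ydot, v = s
    # Quasi-steady galloping force, cubic polynomial in the angle of attack ~ ydot/U
    f_gallop = (a1 * (ydot / U) + a3 * (ydot / U) ** 3) * U**2 / 100.0
    # Parametric excitation enters as a time-periodic stiffness (1 + eps*cos(Omega*t))
    yddot = -2 * zeta * ydot - (1 + eps * np.cos(Omega * t)) * y + chi * v + f_gallop
    vdot = -lam * v - kappa * ydot   # harvesting circuit equation
    return [ydot, yddot, vdot]

sol = solve_ivp(rhs, (0, 500), [0.01, 0.0, 0.0], max_step=0.05)
y, v = sol.y[0], sol.y[2]
print("steady-state amplitude ~", np.max(np.abs(y[-2000:])))
print("steady-state voltage   ~", np.max(np.abs(v[-2000:])))
```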
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: (i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR and PARMA), together with disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; (ii) nonparametric models (examples are bootstrap/kernel-based methods such as the k-nearest neighbour (k-NN) and matched block bootstrap (MABB)), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws, as well as non-parametric disaggregation models; and (iii) hybrid models, which blend parametric and non-parametric models advantageously to model the streamflows effectively. Despite these developments in the stochastic modeling of streamflows over the last four decades, accurate prediction of the storage and critical drought characteristics has posed a persistent challenge to the stochastic modeler. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation), and the efficacy of the models is subsequently validated based on the accuracy of prediction of the water-use characteristics, which requires a large number of trial simulations and the inspection of many plots and tables; even then, accurate prediction of the storage and critical drought characteristics may not be ensured. In this study, a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric PAR(1) model and the matched block bootstrap (MABB)) based on explicit objective functions minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space involving simultaneous exploration of the parametric (PAR(1)) and non-parametric (MABB) components. This is achieved using an efficient evolutionary search based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps reduce the drudgery involved in manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model (in which both parametric and non-parametric components are explored simultaneously) yields a much better prediction of the storage capacity than the MLE-based hybrid models (in which hybrid model selection is done in two stages, probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales.
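To illustrate the parametric half of such a hybrid, here is a compact sketch of fitting and simulating a PAR(1) model on a years-by-seasons flow matrix; the flow record is synthetic and the routine is a simplified stand-in for the full hybrid PAR(1)/MABB scheme:

```python
import numpy as np

def fit_par1(q):
    """Fit a periodic AR(1) model to a (years x seasons) streamflow matrix."""
    n_years, n_seasons = q.shape
    mu, sigma = q.mean(axis=0), q.std(axis=0, ddof=1)
    z = (q - mu) / sigma                       # seasonally standardized flows
    phi = np.empty(n_seasons)
    for s in range(n_seasons):
        # Lag-1 correlation with the previous season (wrapping to last season of prior year)
        prev = z[:, s - 1] if s > 0 else np.roll(z[:, -1], 1)
        phi[s] = np.corrcoef(prev[1:], z[1:, s])[0, 1]
    return mu, sigma, phi

def generate_par1(mu, sigma, phi, n_years, rng):
    n_seasons = len(mu)
    z = np.zeros((n_years, n_seasons))
    z_prev = 0.0
    for y in range(n_years):
        for s in range(n_seasons):
            z_prev = phi[s] * z_prev + np.sqrt(max(1 - phi[s]**2, 0.0)) * rng.normal()
            z[y, s] = z_prev
    return mu + sigma * z

rng = np.random.default_rng(1)
observed = np.abs(rng.normal(100, 30, size=(50, 12)))   # placeholder 50-year monthly record
synthetic = generate_par1(*fit_par1(observed), n_years=1000, rng=rng)
```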
The report gives results of a recent analysis showing that cost-effective indoor radon reduction technology is required for houses with initial radon concentrations < 4 pCi/L, because 78-86% of the national lung cancer risk due to radon is associated with those houses. …
Probing kinematics and fate of the Universe with linearly time-varying deceleration parameter
NASA Astrophysics Data System (ADS)
Akarsu, Özgür; Dereli, Tekin; Kumar, Suresh; Xu, Lixin
2014-02-01
The parametrizations q = q_0 + q_1 z and q = q_0 + q_1(1 - a/a_0) (Chevallier-Polarski-Linder parametrization) of the deceleration parameter, which are linear in cosmic redshift z and scale factor a, have been frequently utilized in the literature to study the kinematics of the Universe. In this paper, we follow a strategy that leads to these two well-known parametrizations of the deceleration parameter as well as an additional new parametrization, q = q_0 + q_1(1 - t/t_0), which is linear in cosmic time t. We study the features of this linearly time-varying deceleration parameter in contrast with the other two linear parametrizations. We investigate in detail the kinematics of the Universe by confronting the three models with the latest observational data. We further study the dynamics of the Universe by considering the linearly time-varying deceleration parameter model in comparison with the standard ΛCDM model. We also discuss the future of the Universe in the context of the models under consideration.
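For reference, the three linear parametrizations can be written compactly alongside the standard definition of the deceleration parameter (the definition is textbook material, not taken from the abstract):

$$
q \equiv -\frac{\ddot{a}\,a}{\dot{a}^2}, \qquad
q(z) = q_0 + q_1 z, \qquad
q(a) = q_0 + q_1\!\left(1 - \frac{a}{a_0}\right), \qquad
q(t) = q_0 + q_1\!\left(1 - \frac{t}{t_0}\right),
$$

where a_0 and t_0 denote the present-day scale factor and cosmic age, and q_0, q_1 are fit parameters.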
Latent component-based gear tooth fault detection filter using advanced parametric modeling
NASA Astrophysics Data System (ADS)
Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.
2009-10-01
In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most appropriate latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the filter output for detecting faults of the gearbox. The filter parameters are estimated using LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox for detection of simulated gear faults. In addition, the method is used for quality inspection of the production Nissan-Junior vehicle gearbox through gear profile error detection on an industrial test bed. For evaluation purposes, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is taken as the filter output and Yule-Walker and Kalman filters are implemented to estimate the parameters. The results confirm the high performance of the newly proposed fault detection method.
Single-arm phase II trial design under parametric cure models.
Wu, Jianrong
2015-01-01
The current practice of designing single-arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single-arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.
2015-12-01
Accurate and reliable streamflow prediction is essential to mitigate the social and economic damage from water-related disasters such as floods and droughts. Sequential data assimilation (DA) may facilitate improved streamflow prediction by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. However, if the parametric uncertainty associated with routing and runoff components is not incorporated properly, the predictive uncertainty of the model ensemble may be insufficient to capture the dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate the impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to consider the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach can be more stable with limited ensemble members and has potential for operational use. To consider the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation will focus on the gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.
NASA Astrophysics Data System (ADS)
Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu
2018-06-01
Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter trade-off, arising from the simultaneous variations of different physical parameters, which increases the nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parametrization and acquisition arrangement. An appropriate choice of model parametrization is important to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parametrizations in isotropic-elastic FWI with walk-away vertical seismic profile (W-VSP) data for unconventional heavy oil reservoir characterization. Six model parametrizations are considered: velocity-density (α, β, ρ′), modulus-density (κ, μ, ρ), Lamé-density (λ, μ′, ρ‴), impedance-density (I_P, I_S, ρ″), velocity-impedance-I (α′, β′, I_P′) and velocity-impedance-II (α″, β″, I_S′). We begin analysing the interparameter trade-off by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. We discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter trade-offs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter trade-offs for the various model parametrizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parametrization, the inverted density profile can be overestimated, underestimated or spatially distorted. Among the six cases, only the velocity-density parametrization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.
Modelling and multi-parametric control for delivery of anaesthetic agents.
Dua, Pinky; Dua, Vivek; Pistikopoulos, Efstratios N
2010-06-01
This article presents model predictive controllers (MPCs) and multi-parametric model-based controllers for delivery of anaesthetic agents. The MPC can take into account constraints on drug delivery rates and state of the patient but requires solving an optimization problem at regular time intervals. The multi-parametric controller has all the advantages of the MPC and does not require repetitive solution of optimization problem for its implementation. This is achieved by obtaining the optimal drug delivery rates as a set of explicit functions of the state of the patient. The derivation of the controllers relies on using detailed models of the system. A compartmental model for the delivery of three drugs for anaesthesia is developed. The key feature of this model is that mean arterial pressure, cardiac output and unconsciousness of the patient can be simultaneously regulated. This is achieved by using three drugs: dopamine (DP), sodium nitroprusside (SNP) and isoflurane. A number of dynamic simulation experiments are carried out for the validation of the model. The model is then used for the design of model predictive and multi-parametric controllers, and the performance of the controllers is analyzed.
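The practical appeal of the multi-parametric controller is that the online computation reduces to a region lookup followed by an affine map. The sketch below shows that evaluation step for a hypothetical pre-computed partition; the regions, gains, and state interpretation are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical pre-computed explicit MPC law: u = K_i @ x + g_i on the polyhedral
# region {x : A_i @ x <= b_i}. Regions and gains here are illustrative placeholders.
regions = [
    {"A": np.array([[ 1.0, 0.0], [0.0,  1.0]]), "b": np.array([0.0, 0.0]),
     "K": np.array([[-0.8, -0.3]]), "g": np.array([0.1])},
    {"A": np.array([[-1.0, 0.0], [0.0, -1.0]]), "b": np.array([0.0, 0.0]),
     "K": np.array([[-0.5, -0.2]]), "g": np.array([0.0])},
]

def explicit_mpc(x):
    """Evaluate the piecewise-affine control law: a region lookup plus an affine map."""
    for r in regions:
        if np.all(r["A"] @ x <= r["b"] + 1e-9):
            return r["K"] @ x + r["g"]
    raise ValueError("state outside the pre-computed partition")

# e.g. deviations of mean arterial pressure and depth of anaesthesia from setpoints
x = np.array([-0.4, -0.2])
print("drug delivery rate u =", explicit_mpc(x))
```

Because no optimization problem is solved online, this lookup can run on very modest hardware at the bedside, which is the advantage the abstract attributes to the multi-parametric controller.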
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is: what new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods? Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based in hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating the reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
Quantifying parametric uncertainty in the Rothermel model
S. Goodrick
2008-01-01
The purpose of the present work is to quantify parametric uncertainty in the Rothermel wildland fire spread model, as implemented in fire spread software used in the United States. This model consists of a non-linear system of equations that relates environmental variables (input parameter groups…
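A standard way to quantify such parametric uncertainty is Monte Carlo propagation of input distributions through the spread model. The sketch below does this with a simple surrogate response function standing in for the actual Rothermel equations; the functional form and the input distributions are illustrative assumptions:

```python
import numpy as np

def rate_of_spread(wind, moisture, slope):
    """Stand-in for the Rothermel model: a nonlinear response to environmental inputs.
    (Illustrative surrogate only, not the actual Rothermel equations.)"""
    return 0.3 * (1 + 0.02 * wind**1.5) * np.exp(-4.0 * moisture) * (1 + 0.5 * slope**2)

rng = np.random.default_rng(42)
n = 10_000
# Sample uncertain inputs around nominal values
wind     = rng.normal(10.0, 2.0, n)      # km/h
moisture = rng.uniform(0.05, 0.15, n)    # fuel moisture fraction
slope    = rng.normal(0.2, 0.05, n)      # rise/run

ros = rate_of_spread(wind, moisture, slope)
print("mean ROS:", ros.mean(), " 5-95% band:", np.percentile(ros, [5, 95]))
```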
Simulation of parametric model towards the fixed covariate of right censored lung cancer data
NASA Astrophysics Data System (ADS)
Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila
2017-09-01
In this study, a simulation procedure was applied to assess the effect of a fixed covariate on right-censored data using a parametric survival model. The scale and shape parameters were varied to differentiate the analyses of the parametric regression survival model. Bias, mean bias and coverage probability were used as evaluation criteria. Different sample sizes of 50, 100, 150 and 200 were employed to distinguish the impact of the parametric regression model on right-censored data. The R statistical software was used to develop the simulation code for right-censored data. The final simulated right-censored model was then compared with right-censored lung cancer data from Malaysia. It was found that varying the shape and scale parameters across different sample sizes helps improve the simulation strategy for right-censored data, and that the Weibull regression survival model provides a suitable fit to the survival data of lung cancer patients in Malaysia.
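The core computation behind such a simulation study is maximum likelihood estimation of a parametric survival model on right-censored data. Below is a minimal sketch for the Weibull case, with synthetic data; censored observations contribute the log-survival term rather than the log-density:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n, shape_true, scale_true = 200, 1.5, 10.0
t_event = scale_true * rng.weibull(shape_true, n)   # latent event times
t_cens = rng.uniform(5, 25, n)                      # random right-censoring times
t = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(float)           # 1 = observed, 0 = censored

def neg_loglik(params):
    k, lam = np.exp(params)                         # log-parametrize for positivity
    z = (t / lam) ** k
    # Events contribute the log-pdf, censored observations the log-survival
    logpdf = np.log(k / lam) + (k - 1) * np.log(t / lam) - z
    logsurv = -z
    return -np.sum(event * logpdf + (1 - event) * logsurv)

res = minimize(neg_loglik, x0=np.log([1.0, np.median(t)]), method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
print(f"shape: true {shape_true}, est {k_hat:.2f}; scale: true {scale_true}, est {lam_hat:.2f}")
```

Repeating this fit over many simulated replicates, with the shape and scale varied, gives exactly the bias and coverage summaries the abstract describes.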
Robust Machine Learning Variable Importance Analyses of Medical Conditions for Health Care Spending.
Rose, Sherri
2018-03-11
To propose nonparametric double robust machine learning in variable importance analyses of medical conditions for health spending. 2011-2012 Truven MarketScan database. I evaluate how much more, on average, commercially insured enrollees with each of 26 of the most prevalent medical conditions cost per year after controlling for demographics and other medical conditions. This is accomplished within the nonparametric targeted learning framework, which incorporates ensemble machine learning. Previous literature studying the impact of medical conditions on health care spending has almost exclusively focused on parametric risk adjustment; thus, I compare my approach to parametric regression. My results demonstrate that multiple sclerosis, congestive heart failure, severe cancers, major depression and bipolar disorders, and chronic hepatitis are the most costly medical conditions on average per individual. These findings differed from those obtained using parametric regression. The literature may be underestimating the spending contributions of several medical conditions, which is a potentially critical oversight. If current methods are not capturing the true incremental effect of medical conditions, undesirable incentives related to care may remain. Further work is needed to directly study these issues in the context of federal formulas. © Health Research and Educational Trust.
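As a loose illustration of the contrast drawn here, the sketch below estimates the average incremental spending attributed to a condition flag under both a parametric linear model and a machine-learning ensemble, on synthetic data (a toy comparison only, not the targeted learning estimator used in the study):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n = 5000
age = rng.integers(18, 65, n)
condition = rng.binomial(1, 0.1, n)   # e.g. a chronic-condition indicator
other = rng.binomial(1, 0.2, n)
# Nonlinear, interaction-heavy spending process (synthetic)
spend = 2000 + 50*age + condition*(3000 + 80*age) + 1500*other + rng.gamma(2, 1000, n)

X = np.column_stack([age, condition, other])

for model in (LinearRegression(), GradientBoostingRegressor(random_state=0)):
    model.fit(X, spend)
    X1, X0 = X.copy(), X.copy()
    X1[:, 1], X0[:, 1] = 1, 0
    # Average incremental spending attributed to the condition
    delta = (model.predict(X1) - model.predict(X0)).mean()
    print(type(model).__name__, "incremental spend ~", round(delta))
```

When the true spending process involves interactions, the flexible learner captures the condition's incremental cost more faithfully than the misspecified linear model, which is the pattern the abstract reports at scale.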
Linking Physical Climate Research and Economic Assessments of Mitigation Policies
NASA Astrophysics Data System (ADS)
Stainforth, David; Calel, Raphael
2017-04-01
Evaluating climate change policies requires economic assessments which balance the costs and benefits of climate action. A certain class of Integrated Assessment Models (IAMs) is widely used for this type of analysis; DICE, PAGE and FUND are three of the most influential. In the economics community there has been much discussion and debate about the economic assumptions implemented within these models. Two aspects in particular have gained much attention: (i) the costs of damages resulting from climate change, the so-called damage function, and (ii) the choice of discount rate applied to future costs and benefits. There has, however, been rather little attention given to the consequences of the choices made in the physical climate models within these IAMs. Here we discuss the practical aspects of the implementation of the physical models in these IAMs, as well as the implications of choices made in these physical science components for economic assessments [1]. We present a simple breakdown of how these IAMs differently represent the climate system as a consequence of differing underlying physical models, different parametric assumptions (for parameters representing, for instance, feedbacks and ocean heat uptake) and different numerical approaches to solving the models. We present the physical and economic consequences of these differences and reflect on how we might better incorporate the latest physical science understanding in economic models of this type. [1] Calel, R. and Stainforth, D.A., "On the Physics of Three Integrated Assessment Models", Bulletin of the American Meteorological Society, in press.
Parametric modeling studies of turbulent non-premixed jet flames with thin reaction zones
NASA Astrophysics Data System (ADS)
Wang, Haifeng
2013-11-01
The Sydney piloted jet flame series (Flames L, B, and M) features thinner reaction zones and hence imposes greater challenges to modeling than the Sandia piloted jet flames (Flames D, E, and F). Recently, the Sydney flames have received renewed interest due to these challenges, and several new modeling efforts have emerged. However, no systematic parametric modeling studies have been reported for the Sydney flames. A large set of modeling computations of the Sydney flames is presented here using the coupled large eddy simulation (LES)/probability density function (PDF) method. Parametric studies are performed to gain insight into the model performance, its sensitivity, and the effect of numerics.
Parametric estimation for reinforced concrete relief shelter for Aceh cases
NASA Astrophysics Data System (ADS)
Atthaillah; Saputra, Eri; Iqbal, Muhammad
2018-05-01
This paper was a work in progress (WIP) to discover a rapid parametric framework for post-disaster permanent shelters' material estimation. The intended shelters were of reinforced concrete construction with brick walls. Inevitably, in post-disaster cases, design variations were needed to suit victims' conditions, and it seemed impossible to satisfy beneficiaries with a satisfactory design using the conventional method. This study offered a parametric framework to overcome the issue of slow construction-material estimation under design variations. Further, this work integrated the parametric tool Grasshopper to establish algorithms that simultaneously model, visualize, calculate and write the calculated data to a spreadsheet in real time. Some customized Grasshopper components were created using GHPython scripting for a more optimized algorithm. The result of this study was a partial framework that successfully performed modeling, visualization, calculation and writing of the calculated data simultaneously. This meant that design alterations did not escalate the time needed for modeling, visualization, and material estimation. The future development of the parametric framework will be made open source.
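The kind of algorithm described can be sketched in ordinary Python (standing in for GHPython): a function maps design parameters to material quantities, and the design variants are written straight to a spreadsheet file. All section sizes and brick dimensions below are illustrative assumptions, not values from the paper:

```python
import csv

def estimate_materials(length_m, width_m, height_m,
                       brick_dims=(0.22, 0.10, 0.065), mortar_gap=0.01):
    """Rough quantity take-off for a rectangular RC-frame, brick-walled shelter."""
    perimeter = 2 * (length_m + width_m)
    wall_area = perimeter * height_m
    bl, _, bh = brick_dims                      # single-brick-thick wall assumed
    bricks = wall_area / ((bl + mortar_gap) * (bh + mortar_gap))
    # Illustrative RC take-off: 100 mm slab plus assumed 200x300 mm perimeter beams
    concrete = length_m * width_m * 0.10 + perimeter * 0.2 * 0.3
    return {"length": length_m, "width": width_m, "height": height_m,
            "bricks": round(bricks), "concrete_m3": round(concrete, 2)}

# Each design variant is re-estimated instantly, mirroring the real-time workflow
variants = [estimate_materials(6, 4, 3), estimate_materials(7, 5, 3),
            estimate_materials(5, 4, 2.8)]
with open("shelter_materials.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=variants[0].keys())
    writer.writeheader()
    writer.writerows(variants)
```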
NASA Astrophysics Data System (ADS)
Durmaz, Murat; Karslioglu, Mahmut Onur
2015-04-01
There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines which is a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
NASA Technical Reports Server (NTRS)
Oleson, Steven R.; McGuire, Melissa L.
2011-01-01
The COllaborative Modeling and Parametric Assessment of Space Systems (COMPASS) team was approached by the NASA Glenn Research Center (GRC) In-Space Project to perform a design session to develop Radioisotope Electric Propulsion (REP) spacecraft conceptual designs (with cost, risk, and reliability) for missions of three different classes: New Frontiers class Centaur orbiter (with Trojan flyby), Flagship, and Discovery. The designs will allow trading of current and future propulsion systems. The results will directly support technology development decisions. The results of the Flagship mission design are reported in this document.
Geometric Model for a Parametric Study of the Blended-Wing-Body Airplane
NASA Technical Reports Server (NTRS)
Mastin, C. Wayne; Smith, Robert E.; Sadrehaghighi, Ideen; Wiese, Michael R.
1996-01-01
A parametric model is presented for the blended-wing-body airplane, one concept being proposed for the next generation of large subsonic transports. The model is defined in terms of a small set of parameters which facilitates analysis and optimization during the conceptual design process. The model is generated from a preliminary CAD geometry. From this geometry, airfoil cross sections are cut at selected locations and fitted with analytic curves. The airfoils are then used as boundaries for surfaces defined as the solution of partial differential equations. Both the airfoil curves and the surfaces are generated with free parameters selected to give a good representation of the original geometry. The original surface is compared with the parametric model, and solutions of the Euler equations for compressible flow are computed for both geometries. The parametric model is a good approximation of the CAD model and the computed solutions are qualitatively similar. An optimal NURBS approximation is constructed and can be used by a CAD model for further refinement or modification of the original geometry.
NASA Astrophysics Data System (ADS)
Liao, Meng; To, Quy-Dong; Léonard, Céline; Monchiet, Vincent
2018-03-01
In this paper, we use the molecular dynamics simulation method to study gas-wall boundary conditions. Discrete scattering information of gas molecules at the wall surface is obtained from collision simulations. The collision data can be used to identify the accommodation coefficients for parametric wall models such as Maxwell and Cercignani-Lampis scattering kernels. Since these scattering kernels are based on a limited number of accommodation coefficients, we adopt non-parametric statistical methods to construct the kernel to overcome these issues. Different from parametric kernels, the non-parametric kernels require no parameter (i.e. accommodation coefficients) and no predefined distribution. We also propose approaches to derive directly the Navier friction and Kapitza thermal resistance coefficients as well as other interface coefficients associated with moment equations from the non-parametric kernels. The methods are applied successfully to systems composed of CH4 or CO2 and graphite, which are of interest to the petroleum industry.
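For the parametric kernels, the accommodation coefficient can be identified directly from the collision data. The sketch below estimates an energy accommodation coefficient from synthetic incident/reflected energies using the standard definition α = (E_in − E_out)/(E_in − E_wall); the collision data and the underlying value of 0.7 are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
kB, T_wall, m = 1.380649e-23, 300.0, 2.66e-26   # J/K, K, kg (~ mass of CH4)

# Hypothetical collision data: incident and reflected molecular kinetic energies (J)
E_in = 0.5 * m * rng.normal(1500, 100, 1000) ** 2   # a hot incident beam
E_wall = 2 * kB * T_wall                # mean reflected energy at full thermal accommodation
E_out = E_in + 0.7 * (E_wall - E_in) + rng.normal(0, 1e-22, E_in.size)

# Energy accommodation coefficient: fraction of the way reflected molecules
# relax toward the wall temperature
alpha = (E_in.mean() - E_out.mean()) / (E_in.mean() - E_wall)
print("estimated energy accommodation coefficient:", round(alpha, 3))
```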
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
NASA Astrophysics Data System (ADS)
Hajicek, Joshua J.; Selesnick, Ivan W.; Henin, Simon; Talmadge, Carrick L.; Long, Glenis R.
2018-05-01
Stimulus frequency otoacoustic emissions (SFOAEs) were evoked and estimated using swept-frequency tones with and without the use of swept suppressor tones. SFOAEs were estimated using a least-squares fitting procedure. The estimated SFOAEs for the two paradigms (with- and without-suppression) were similar in amplitude and phase. The fitting procedure minimizes the square error between a parametric model of total ear-canal pressure (with unknown amplitudes and phases) and ear-canal pressure acquired during each paradigm. Modifying the parametric model to allow SFOAE amplitude and phase to vary over time revealed additional amplitude and phase fine structure in the without-suppressor, but not the with-suppressor paradigm. The use of a time-varying parametric model to estimate SFOAEs without-suppression may provide additional information about cochlear mechanics not available when using a with-suppressor paradigm.
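The least-squares step can be illustrated on a single fixed-frequency tone: the ear-canal pressure is modeled as A·cos(2πf₀t) + B·sin(2πf₀t) and the amplitudes are solved in closed form. This recovers the total component at f₀ on synthetic data; separating the SFOAE itself requires the with/without-suppressor comparison described in the abstract, which this sketch does not attempt:

```python
import numpy as np

fs, dur, f0 = 44_100, 0.5, 1_000.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
# Synthetic "ear-canal" recording: stimulus tone plus a weak delayed component and noise
p = (1.0 * np.cos(2*np.pi*f0*t)
     + 0.01 * np.cos(2*np.pi*f0*t - 2.0)
     + 1e-3 * rng.normal(size=t.size))

# Parametric model p(t) ~ A*cos(2*pi*f0*t) + B*sin(2*pi*f0*t); solve for A, B
X = np.column_stack([np.cos(2*np.pi*f0*t), np.sin(2*np.pi*f0*t)])
(A, B), *_ = np.linalg.lstsq(X, p, rcond=None)
amp, phase = np.hypot(A, B), np.arctan2(B, A)   # phase relative to the cosine
print(f"total component at f0: amplitude {amp:.4f}, phase {phase:.3f} rad")
```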
NASA Astrophysics Data System (ADS)
Jiang, Jin-Wu
2015-08-01
We propose parametrizing the Stillinger-Weber potential for covalent materials starting from the valence force-field model. All geometrical parameters in the Stillinger-Weber potential are determined analytically according to the equilibrium condition for each individual potential term, while the energy parameters are derived from the valence force-field model. This parametrization approach transfers the accuracy of the valence force-field model to the Stillinger-Weber potential. Furthermore, the resulting Stillinger-Weber potential supports stable molecular dynamics simulations, as each potential term is at an energy-minimum state separately at the equilibrium configuration. We employ this procedure to parametrize Stillinger-Weber potentials for single-layer MoS2 and black phosphorus. The obtained Stillinger-Weber potentials predict an accurate phonon spectrum and mechanical behaviors. We also provide input scripts of these Stillinger-Weber potentials for use with publicly available simulation packages including GULP and LAMMPS.
Performance and Cost Evaluation of Cryogenic Solid Propulsion Systems
NASA Astrophysics Data System (ADS)
Adirim, Harry; Lo, Roger; Knecht, Thomas; Reinbold, Georg-Friedrich; Poller, Sascha
2002-01-01
Under the sponsorship of the German Aerospace Center DLR, Cryogenic Solid Propulsion (CSP) is now in its 6th year of R&D. The development proceeds as a joint international university, small business, space industry and professional research effort (Berlin University of Technology / AI: Aerospace Institute, Berlin / Bauman Moscow State Technical University, Russia / ASTRIUM GmbH, Bremen / Fraunhofer Institute for Chemical Technology, Berghausen). This paper aims at introducing CSP as a novel type of chemical propellant that uses frozen liquids such as oxygen (SOX) or hydrogen peroxide (SH2O2) inside a coherent solid hydrocarbon (PE, PU or HTPB) matrix in solid rocket motors. Theoretically, any conceivable chemical rocket propellant combination (including any environmentally benign "green propellant") can be used in solid propellant rocket motors if the definition of solids is not restricted to "solid at ambient temperature". The CSP concept includes all suitable high-energy propellant combinations, but is not limited to them. Any liquid or hybrid bipropellant combination is (Isp-wise) superior to any conventional solid propellant formulation. While CSPs share some of the disadvantages of solid propulsion (e.g. lack of cooling fluid and a preset thrust-time function), they definitely share one of its most attractive advantages: the low number of components that is the basis for high reliability and low structural cost. In this respect, CSPs are superior to liquid propellant rocket motors, with which they share the high Isp performance. High-performance, low-cost, low-pollution CSP technology could bring about a near-term improvement for chemical Earth-to-orbit high-thrust propulsion. In the long run it could surpass conventional chemical propulsion because it is better suited to applying High Energy Density Matter (HEDM) than any other mode of propulsion. So far, ongoing preliminary analyses have not shown any insuperable problems in areas of concern, such as cooling equipment and its operation during fabrication and launch; neither were there problems with the thrust-to-weight ratio of un-cooled but insulated cryogenic solid motors, which ascend into their trajectory while leaving the cooling equipment at the launch pad. In performance calculations for new launchers with CSP replacements of boosters or existing stages, ARIANE 5 and a 3-stage launcher with a CSP first stage into GTO serve as examples. To keep payload capacity in the reference orbit constant, the modeling of a rocket system essentially requires a process of iteration, in which the propellant mass is varied as the central parameter and, with the help of a CSP mass model, all other dimensions of the booster are derived from mass models accordingly. The process is repeated until the payload resulting from GTO trajectory optimization corresponds with that of the ARIANE 5 model in sufficient approximation. Under the assumptions made, the application of cryogenic motors leads to a clear reduction of the launch mass, caused primarily by the lower propellant mass and secondarily by the reduced structure mass. Finally, cost calculations were made by ASTRIUM and demonstrated the cost-saving potential of CSP propulsion. For estimating development, production, ground facilities, and operating costs, a parametric cost modeling tool was used in combination with Cost Estimating Relationships (CERs).
Parametric cost models only allow comparative analyses; therefore, ARIANE 5 in its current (P1) configuration has been estimated using the same mission model as for the CSP launcher. The conclusion of this cost assessment is that the utilization of cryogenic solid propulsion could offer considerable cost savings. Academic and industrial cooperation is crucial for the challenging R&D work required. It will take the combined capacities of all experts involved to unlock the promise of clean, high-Isp CSP propulsion for chemical Earth-to-orbit transportation in the next 10 to 15 years.
Understanding the Sustainability of Retail Food Recovery
Phillips, Caleb; Hoenigman, Rhonda; Higbee, Becky; Reed, Tom
2013-01-01
In this paper we study the simultaneous problems of food waste and hunger in the context of food (waste) rescue and redistribution as a means for mitigating hunger. To this end, we develop an empirical model that can be used in Monte Carlo simulations to study the dynamics of the underlying problem. Our model's parameters are derived from a data set provided by a large food bank and food rescue organization in north central Colorado. We find that food supply is a non-parametric heavy-tailed process that is well modeled with an extreme value peaks over threshold model. Although the underlying process is stochastic, the basic approach of food rescue and redistribution to meet hunger demand appears to be feasible. The ultimate sustainability of this model is intimately tied to the rate at which food expires and hence the ability to preserve and quickly transport and redistribute food. The cost of the redistribution is related to the number and density of participating suppliers. The results show that costs can be reduced (and supply increased) simply by recruiting additional donors to participate. With sufficient funding and manpower, a significant amount of food can be rescued from the waste stream and used to feed the hungry. PMID:24130716
Parametric instabilities of rotor-support systems with application to industrial ventilators
NASA Technical Reports Server (NTRS)
Parszewski, Z.; Krodkiemski, T.; Marynowski, K.
1980-01-01
The interaction of rotor-support systems with parametric excitation is considered for both unequal principal shaft stiffness (generators) and offset disc rotors (ventilators). Instability regions and types of instability are computed in the first case, and parametric resonances in the second case. Computed and experimental results are compared for laboratory machine models. A field case study of parametric vibrations in industrial ventilators is reported. Computed parametric resonances are confirmed in field measurements, and some industrial failures are explained. The dynamic influence and gyroscopic effect of supporting structures are also shown and computed.
Parametric robust control and system identification: Unified approach
NASA Technical Reports Server (NTRS)
Keel, Leehyun
1994-01-01
Despite significant advancement in the area of robust parametric control, the synthesis of such controllers remains a wide open problem, and we attempt to give a solution to this important problem. Our approach captures the parametric uncertainty as an H(sub infinity) unstructured uncertainty so that H(sub infinity) synthesis techniques are applicable. Although these techniques cannot cope with the exact parametric uncertainty, they give a reasonable guideline for modeling the unstructured uncertainty that contains the parametric uncertainty. An additional loop-shaping technique is also introduced to relax its conservatism.
Cardiac-gated parametric images from 82Rb PET from dynamic frames and direct 4D reconstruction.
Germino, Mary; Carson, Richard E
2018-02-01
Cardiac perfusion PET data can be reconstructed as a dynamic sequence with kinetic modeling performed to quantify myocardial blood flow, or reconstructed as static gated images to quantify function. Parametric images from dynamic PET are conventionally not gated, to allow use of all events with lower noise. An alternative method for dynamic PET is to incorporate the kinetic model into the reconstruction algorithm itself, bypassing the generation of a time series of emission images and directly producing parametric images. So-called "direct reconstruction" can produce parametric images with lower noise than the conventional method because the noise distribution is more easily modeled in projection space than in image space. In this work, we develop direct reconstruction of cardiac-gated parametric images for 82Rb PET with an extension of the Parametric Motion compensation OSEM List mode Algorithm for Resolution-recovery reconstruction for the one-tissue model (PMOLAR-1T). PMOLAR-1T was extended to accommodate model terms accounting for spillover from the left and right ventricles into the myocardium. The algorithm was evaluated on a 4D simulated 82Rb dataset, including a perfusion defect, as well as a human 82Rb list mode acquisition. The simulated list mode was subsampled into replicates, each with counts comparable to one gate of a gated acquisition. Parametric images were produced by the indirect method (separate reconstructions and modeling) and the direct method for each of eight low-count and eight normal-count replicates of the simulated data, and for each of eight cardiac gates of the human data. For the direct method, two initialization schemes were tested: uniform initialization, and initialization with the filtered iteration-1 result of the indirect method. For the human dataset, event-by-event respiratory motion compensation was included. The indirect and direct methods were compared on the simulated dataset in terms of bias and coefficient of variation as a function of iteration. Convergence of direct reconstruction was slow with uniform initialization; lower bias was achieved in fewer iterations by initializing with the filtered indirect iteration-1 images. For most parameters and regions evaluated, the direct method achieved the same or lower absolute bias at matched iteration as the indirect method, with 23%-65% lower noise. Additionally, the direct method gave better contrast between the perfusion defect and surrounding normal tissue than the indirect method. Gated parametric images from the human dataset showed comparable relative performance of the indirect and direct methods in terms of mean parameter values per iteration. Changes in myocardial wall thickness and blood pool size across gates were readily visible in the gated parametric images, with higher contrast between myocardium and left ventricle blood pool in parametric images than in gated SUV images. Direct reconstruction can produce parametric images with less noise than the indirect method, opening the potential utility of gated parametric imaging for perfusion PET. © 2017 American Association of Physicists in Medicine.
Huang, Qiongyu; Swatantran, Anu; Dubayah, Ralph; Goetz, Scott J
2014-01-01
Avian diversity is under increasing pressures. It is thus critical to understand the ecological variables that contribute to large-scale spatial distribution of avian species diversity. Traditionally, studies have relied primarily on two-dimensional habitat structure to model broad-scale species richness. Vegetation vertical structure is increasingly used at local scales. However, the spatial arrangement of vegetation height has never been taken into consideration. Our goal was to examine the efficacy of three-dimensional forest structure, particularly the spatial heterogeneity of vegetation height, in improving avian richness models across forested ecoregions in the U.S. We developed novel habitat metrics to characterize the spatial arrangement of vegetation height using the National Biomass and Carbon Dataset for the year 2000 (NBCD). The height-structured metrics were compared with other habitat metrics for statistical association with richness of three forest breeding bird guilds across Breeding Bird Survey (BBS) routes: a broadly grouped woodland guild, and two forest breeding guilds with preferences for forest edge and for interior forest. Parametric and non-parametric models were built to examine the improvement of predictability. Height-structured metrics had the strongest associations with species richness, yielding improved predictive ability for the woodland guild richness models (r² ≈ 0.53 for the parametric models, 0.63 for the non-parametric models) and the forest edge guild models (r² ≈ 0.34 for the parametric models, 0.47 for the non-parametric models). All but one of the linear models incorporating height-structured metrics showed significantly higher adjusted r² values than their counterparts without the additional metrics. The interior forest guild richness showed a consistently low association with height-structured metrics. Our results suggest that height heterogeneity, beyond canopy height alone, supplements habitat characterization and richness models of forest bird species. The metrics and models derived in this study demonstrate practical examples of utilizing three-dimensional vegetation data for improved characterization of spatial patterns in species richness.
Guidance, navigation, and control trades for an Electric Orbit Transfer Vehicle
NASA Astrophysics Data System (ADS)
Zondervan, K. P.; Bauer, T. A.; Jenkin, A. B.; Metzler, R. A.; Shieh, R. A.
The USAF Space Division initiated the Electric Insertion Transfer Experiment (ELITE) in the fall of 1988. The ELITE space mission is planned for the mid 1990s and will demonstrate technological readiness for the development of operational solar-powered electric orbit transfer vehicles (EOTVs). To minimize the cost of ground operations, autonomous flight is desirable. Thus, the guidance, navigation, and control (GNC) functions of an EOTV should reside on board. In order to define GNC requirements for ELITE, parametric trades must be performed for an operational solar-powered EOTV so that a clearer understanding of the performance aspects is obtained. Parametric trades for the GNC subsystems have provided insight into the relationship between pointing accuracy, transfer time, and propellant utilization. Additional trades need to be performed, taking into account weight, cost, and degree of autonomy.
NASA Astrophysics Data System (ADS)
Goger, Brigitta; Rotach, Mathias W.; Gohm, Alexander; Fuhrer, Oliver; Stiperski, Ivana; Holtslag, Albert A. M.
2018-02-01
The correct simulation of the atmospheric boundary layer (ABL) is crucial for reliable weather forecasts in truly complex terrain. However, common assumptions for model parametrizations are only valid for horizontally homogeneous and flat terrain. Here, we evaluate the turbulence parametrization of the numerical weather prediction model COSMO with a horizontal grid spacing of Δ x = 1.1 km for the Inn Valley, Austria. The long-term, high-resolution turbulence measurements of the i-Box measurement sites provide a useful data pool of the ABL structure in the valley and on slopes. We focus on days and nights when ABL processes dominate and a thermally-driven circulation is present. Simulations are performed for case studies with both a one-dimensional turbulence parametrization, which only considers the vertical turbulent exchange, and a hybrid turbulence parametrization, also including horizontal shear production and advection in the budget of turbulence kinetic energy (TKE). We find a general underestimation of TKE by the model with the one-dimensional turbulence parametrization. In the simulations with the hybrid turbulence parametrization, the modelled TKE has a more realistic structure, especially in situations when the TKE production is dominated by shear related to the afternoon up-valley flow, and during nights, when a stable ABL is present. The model performance also improves for stations on the slopes. An estimation of the horizontal shear production from the observation network suggests that three-dimensional effects are a relevant part of TKE production in the valley.
NASA Technical Reports Server (NTRS)
Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.
1981-01-01
The initial phase of a program to determine the best interpretation strategy and sensor configuration for a radar remote sensing system for geologic applications is discussed. In this phase, terrain modeling and radar image simulation were used to perform parametric sensitivity studies. A relatively simple computer-generated terrain model is presented, and the data base, backscatter file, and transfer function for digital image simulation are described. Sets of images are presented that simulate the results obtained with an X-band radar from an altitude of 800 km and at three different terrain-illumination angles. The simulations include power maps, slant-range images, ground-range images, and ground-range images with statistical noise incorporated. It is concluded that digital image simulation and computer modeling provide cost-effective methods for evaluating terrain variations and sensor parameter changes, for predicting results, and for defining optimum sensor parameters.
Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers
NASA Technical Reports Server (NTRS)
Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.
2010-01-01
This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
ABALUCK, JASON
2017-01-01
We explore the in- and out-of-sample robustness of tests for choice inconsistencies based on parameter restrictions in parametric models, focusing on tests proposed by Ketcham, Kuminoff and Powers (KKP). We argue that their non-parametric alternatives are inherently conservative with respect to detecting mistakes. We then show that our parametric model is robust to KKP's suggested specification checks, and that comprehensive goodness-of-fit measures perform better with our model than with the expected utility model. Finally, we explore the robustness of our 2011 results to alternative normative assumptions, highlighting the role of brand fixed effects and unobservable characteristics. PMID:29170561
Nonrelativistic approaches derived from point-coupling relativistic models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lourenco, O.; Dutra, M.; Delfino, A.
2010-03-15
We construct nonrelativistic versions of relativistic nonlinear hadronic point-coupling models, based on new normalized spinor wave functions after small-component reduction. These expansions give us energy density functionals that can be compared to their relativistic counterparts. We show that the agreement between the nonrelativistic limit approach and the Skyrme parametrizations becomes strongly dependent on the incompressibility of each model. We also show that the particular case A = B = 0 (Walecka model) leads to the same energy density functional as the Skyrme parametrizations SV and ZR2, while the truncation scheme, up to order ρ³, leads to parametrizations for which σ = 1.
CuBe: parametric modeling of 3D foveal shape using cubic Bézier
Yadav, Sunil Kumar; Motamedi, Seyedamirhosein; Oberwahrenbrock, Timm; Oertel, Frederike Cosima; Polthier, Konrad; Paul, Friedemann; Kadas, Ella Maria; Brandt, Alexander U.
2017-01-01
Optical coherence tomography (OCT) allows three-dimensional (3D) imaging of the retina, and is commonly used for assessing pathological changes of the fovea and macula in many diseases. Many neuroinflammatory conditions are known to cause modifications to the fovea shape. In this paper, we propose a method for parametric modeling of the foveal shape. Our method exploits invariant features of the macula from OCT data and applies a cubic Bézier polynomial along with a least-squares optimization to produce a best-fit parametric model of the fovea. Additionally, we provide several parameters of the foveal shape based on the proposed 3D parametric modeling. Our quantitative and visual results show that the proposed model is not only able to reconstruct important features from the foveal shape, but also produces less error compared to the state-of-the-art methods. Finally, we apply the model in a comparison of healthy control eyes and eyes from patients with neuroinflammatory central nervous system disorders and optic neuritis, and show that several derived model parameters show significant differences between the two groups. PMID:28966857
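The paper's full pipeline is not reproduced here; as a minimal sketch of the core fitting step, the Python snippet below fits the four control points of a cubic Bézier curve to ordered samples by linear least squares under a chord-length parametrization (the dip-shaped profile is synthetic, not OCT data):

```python
import numpy as np

def bernstein_matrix(t):
    """Cubic Bernstein basis evaluated at parameter values t; shape (len(t), 4)."""
    t = np.asarray(t)
    return np.stack([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3], axis=1)

def fit_cubic_bezier(points):
    """Least-squares fit of 4 control points to ordered samples (N x dim)."""
    # Chord-length parametrization assigns each sample a t in [0, 1].
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = d / d[-1]
    control, *_ = np.linalg.lstsq(bernstein_matrix(t), points, rcond=None)
    return control  # rows are the 4 control points

# Synthetic dip-shaped profile standing in for a foveal cross-section
x = np.linspace(-1.0, 1.0, 50)
profile = np.c_[x, x ** 2 + 0.01 * np.random.default_rng(0).standard_normal(50)]
print(fit_cubic_bezier(profile))
```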
Model Adaptation in Parametric Space for POD-Galerkin Models
NASA Astrophysics Data System (ADS)
Gao, Haotian; Wei, Mingjun
2017-11-01
The development of low-order POD-Galerkin models is largely motivated by the expectation that a model developed with a set of parameters at their native values can predict the dynamic behaviors of the same system under different parametric values; in other words, a successful model adaptation in parametric space. However, most of the time, even a small deviation of parameters from their original values may lead to large deviations or unstable results. It has been shown that adding more information (e.g. a steady state, the mean value of a different unsteady state, or an entirely different set of POD modes) may improve the prediction of flow at other parametric states. For a simple case of the flow passing a fixed cylinder, an orthogonal mean mode at a different Reynolds number may stabilize the POD-Galerkin model when the Reynolds number is changed. For a more complicated case of the flow passing an oscillatory cylinder, a global POD-Galerkin model is first applied to handle the moving boundaries, then more information (e.g. more POD modes) is required to predict the flow under different oscillatory frequencies. Supported by ARL.
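For context, the POD modes on which such Galerkin models are built are commonly obtained from a singular value decomposition of mean-subtracted snapshots. A minimal, generic sketch with synthetic data (not the authors' flow solver):

```python
import numpy as np

# Synthetic snapshot matrix: each column is one flow state in time.
rng = np.random.default_rng(0)
n_space, n_time = 500, 80
snapshots = rng.standard_normal((n_space, 2)) @ rng.standard_normal((2, n_time))

# POD modes are the left singular vectors of the mean-subtracted snapshots;
# singular values rank how much fluctuation energy each mode carries.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(fluct, full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)

# Retain the leading modes capturing 99% of the energy; the Galerkin model
# then evolves only the coefficients of these few modes.
r = int(np.searchsorted(np.cumsum(energy), 0.99)) + 1
pod_modes = U[:, :r]
print(f"{r} modes retain 99% of the fluctuation energy")
```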
Astronomy sortie mission definition study. Addendum: Follow-on analyses
NASA Technical Reports Server (NTRS)
1973-01-01
Results of design analyses, trade studies, and planning data of the Astronomy Sortie Mission Definition Study are presented. An in-depth analysis of UV instruments, nondeployed solar payload, and on-orbit access is presented. Planning data are considered, including the cost and schedules associated with the astronomy instruments and/or support hardware. Costs are presented in a parametric fashion.
High-level PC-based laser system modeling
NASA Astrophysics Data System (ADS)
Taylor, Michael S.
1991-05-01
Since the inception of the Strategic Defense Initiative (SDI), there have been a multitude of comparison studies attempting to evaluate the effectiveness and relative sizes of complementary, and sometimes competitive, laser weapon systems. It became increasingly apparent that what the systems analyst needed was not only a fast, but also a cost-effective, way to perform high-level trade studies. In the present investigation, a general procedure is presented for the development of PC-based algorithmic systems models for laser systems. This procedure points out all of the major issues that should be addressed in the design and development of such a model. Issues addressed include defining the problem to be modeled, defining a strategy for development, and finally, effective use of the model once developed. Being a general procedure, it will allow a systems analyst to develop a model to meet specific needs. To illustrate this method of model development, a description of the Strategic Defense Simulation - Design To (SDS-DT) model developed and used by Science Applications International Corporation (SAIC) is presented. SDS-DT is a menu-driven, fast-executing, PC-based program that can be used either to calculate performance, weight, volume, and cost values for a particular design or, alternatively, to run parametrics on particular system parameters to perhaps optimize a design.
Parametric, nonparametric and parametric modelling of a chaotic circuit time series
NASA Astrophysics Data System (ADS)
Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.
2000-09-01
The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model or by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and, if they are rejected, suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit where we obtain an extremely accurate reconstruction of the observed attractor.
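Bock's multiple shooting subdivides the fit interval to tame the divergence of chaotic trajectories; the sketch below shows only the simpler single-shooting variant of parametric estimation on a short window, with a Rössler-type system standing in for the circuit (all values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rossler(t, y, a, b, c):
    """Rossler system, standing in for the circuit's differential equation."""
    x, v, z = y
    return [-v - z, x + a * v, b + z * (x - c)]

def residuals(params, t_obs, x_obs, y0):
    sol = solve_ivp(rossler, (t_obs[0], t_obs[-1]), y0, args=tuple(params),
                    t_eval=t_obs, rtol=1e-8, atol=1e-10)
    return sol.y[0] - x_obs  # misfit in the single observed coordinate

# Synthetic "measurement" generated with known parameters, then re-estimated.
true_params = (0.2, 0.2, 5.7)
y0 = [1.0, 1.0, 0.0]
t_obs = np.linspace(0.0, 5.0, 200)
x_obs = solve_ivp(rossler, (0.0, 5.0), y0, args=true_params,
                  t_eval=t_obs, rtol=1e-8, atol=1e-10).y[0]

fit = least_squares(residuals, x0=[0.3, 0.3, 5.0], args=(t_obs, x_obs, y0))
print(fit.x)  # should recover approximately (0.2, 0.2, 5.7)
```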
Zhai, Haibo; Rubin, Edward S
2013-03-19
This study investigates the feasibility of polymer membrane systems for postcombustion carbon dioxide (CO2) capture at coal-fired power plants. Using newly developed performance and cost models, our analysis shows that membrane systems configured with multiple stages or steps are capable of meeting capture targets of 90% CO2 removal efficiency and 95+% product purity. A combined driving force design using both compressors and vacuum pumps is most effective for reducing the cost of CO2 avoided. Further reductions in the overall system energy penalty and cost can be obtained by recycling a portion of CO2 via a two-stage, two-step membrane configuration with air sweep to increase the CO2 partial pressure of the feed flue gas. For a typical plant with carbon capture and storage, this yielded a 15% lower cost per metric ton of CO2 avoided compared to a plant using a current amine-based capture system. A series of parametric analyses also is undertaken to identify paths for enhancing the viability of membrane-based capture technology.
Zhang, Y M; Huang, G; Lu, H W; He, Li
2015-08-15
A key issue facing integrated water resources management and water pollution control is how to address vague parametric information. A full credibility-based chance-constrained programming (FCCP) method is thus developed by introducing the new concept of credibility into the modeling framework. FCCP can not only deal with fuzzy parameters appearing concurrently in the objective and on both sides of the constraints of the model, but also provide a credibility level indicating how much confidence one can place in the optimal modeling solutions. The method is applied to the Heshui River watershed in south-central China for demonstration. Results from the case study showed that groundwater would make up for the water shortage caused by shrinking surface water supplies and rising water demand, and that the optimized total pumpage of groundwater from both alluvial and karst aquifers would exceed 90% of its maximum allowable level when the credibility level is higher than or equal to 0.9. It is also indicated that an increase in credibility level would induce a reduction in the cost of surface water acquisition, a rise in the cost of groundwater withdrawal, and negligible variation in the cost of water pollution control. Copyright © 2015 Elsevier B.V. All rights reserved.
Sail Plan Configuration Optimization for a Modern Clipper Ship
NASA Astrophysics Data System (ADS)
Gerritsen, Margot; Doyle, Tyler; Iaccarino, Gianluca; Moin, Parviz
2002-11-01
We investigate the use of gradient-based and evolutionary algorithms for sail shape optimization. We present preliminary results for the optimization of sheeting angles for the rig of the future three-masted clipper yacht Maltese Falcon. This yacht will be equipped with square-rigged masts made up of yards of circular arc cross sections. This design is especially attractive for megayachts because it provides a large sail area while maintaining aerodynamic and structural efficiency. The rig remains almost rigid in a large range of wind conditions and therefore a simple geometrical model can be constructed without accounting for the true flying shape. The sheeting angle optimization studies are performed using both gradient-based cost function minimization and evolutionary algorithms. The fluid flow is modeled by the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model. Unstructured non-conforming grids are used to increase robustness and computational efficiency. The optimization process is automated by integrating the system components (geometry construction, grid generation, flow solver, force calculator, optimization). We compare the optimization results to those obtained previously from user-controlled parametric studies using simple cost functions and user intuition. We also investigate the effectiveness of various cost functions in the optimization (driving force maximization, maximization of the ratio of driving force to heeling force).
2010-02-01
Several important issues associated with the proposed parametric model are discussed; in addition, model order selection, training screening, and time-series-based whitening procedures are developed for the NS-AR model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frolov, S A; Trunov, V I; Pestryakov, Efim V
2013-05-31
We have developed a technique for investigating the evolution of spatial inhomogeneities in high-power laser systems based on multi-stage parametric amplification. A linearised model of the inhomogeneity development is first devised for parametric amplification with small-scale self-focusing taken into account. It is shown that the application of this model gives results consistent (with high accuracy and in a wide range of inhomogeneity parameters) with the calculation without approximations. Using the linearised model, we have analysed the development of spatial inhomogeneities in a petawatt laser system based on multi-stage parametric amplification, developed at the Institute of Laser Physics, Siberian Branch of the Russian Academy of Sciences (ILP SB RAS).
Schwalenberg, Simon
2005-06-01
The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.
Determining the optimal model for role-substitution in NHS dental services in the United Kingdom.
Brocklehurst, Paul; Birch, Stephen; McDonald, Ruth; Tickle, Martin
2013-09-24
Role-substitution describes a model of dental care where Dental Care Professionals (DCPs) provide some of the clinical activity previously undertaken by General Dental Practitioners. This has the potential to increase technical efficiency, the capacity to care and reduce costs. Technical efficiency is defined as the production of the maximum amount of output from a given amount of input so that the service operates at the production frontier i.e. optimal level of productivity. Academic research into technical efficiency is becoming increasingly utilised in health care, although no studies have investigated the efficiency of NHS dentistry or role-substitution in high-street dental practices. The aim of this study is to examine the barriers and enablers that exist for role-substitution in general dental practices in the NHS and to determine the most technically efficient model for role-substitution. A screening questionnaire will be sent to DCPs to determine the type and location of role-substitutive models employed in NHS dental practices in the United Kingdom (UK). Semi-structured interviews will then be conducted with practice owners, DCPs and patients at selected sites identified by the questionnaire. Detail will be recorded about the organisational structure of the dental team, the number of NHS hours worked and the clinical activity undertaken. The interviews will continue until saturation and will record the views and attitudes of the members of the dental team. Final numbers of interviews will be determined by saturation.The second work-stream will examine the technical efficiency of the selected practices using Data Envelopment Analysis and Stochastic Frontier Modeling. The former is a non-parametric technique and is considered to be a highly flexible approach for applied health applications. The latter is parametric and is based on frontier regression models that estimate a conventional cost function. Maximising health for a given level and mix of resources is an ethical imperative for health service planners. This study will determine the technical efficiency of role-substitution and so address one of the key recommendations of the Independent Review of NHS dentistry in England.
Revisiting dark energy models using differential ages of galaxies
NASA Astrophysics Data System (ADS)
Rani, Nisha; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha; Biesiada, Marek
2017-03-01
In this work, we use a test based on the differential ages of galaxies for distinguishing between dark energy models. As proposed by Jimenez and Loeb in [1], relative ages of galaxies can be used to put constraints on various cosmological parameters. In the same vein, we reconstruct H₀ dt/dz and its derivative (H₀ d²t/dz²) using a model-independent technique called non-parametric smoothing. Basically, dt/dz is the change in the age of the object as a function of redshift, which is directly linked with the Hubble parameter. Hence, for the reconstruction of this quantity, we use the most recent H(z) data. Further, we calculate H₀ dt/dz and its derivative for several models: Phantom, Einstein de Sitter (EdS), ΛCDM, the Chevallier-Polarski-Linder (CPL) parametrization, the Jassal-Bagla-Padmanabhan (JBP) parametrization and the Feng-Shen-Li-Li (FSLL) parametrization. We check the consistency of these models with the results of the reconstruction obtained in a model-independent way from the data. It is observed that H₀ dt/dz as a tool is not able to distinguish between the ΛCDM, CPL, JBP and FSLL parametrizations but, as expected, the EdS and Phantom models show noticeable deviation from the reconstructed results. Further, the derivative of H₀ dt/dz for the various dark energy models is more sensitive at low redshift. It is found that the FSLL model is not consistent with the reconstructed results; however, the ΛCDM model is in concordance with the 3σ region of the reconstruction at redshift z ≥ 0.3.
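A minimal sketch of the kind of model-independent reconstruction described above, using Gaussian-kernel smoothing of hypothetical H(z) data and the identity dt/dz = -1/((1+z)H(z)) (the data values and kernel width are assumptions, not the paper's):

```python
import numpy as np

def kernel_smooth(z_grid, z_data, h_data, width=0.3):
    """Gaussian-kernel smoothing of H(z) data: a simple stand-in for the
    model-independent reconstruction described above."""
    w = np.exp(-0.5 * ((z_grid[:, None] - z_data[None, :]) / width) ** 2)
    return (w @ h_data) / w.sum(axis=1)

# Hypothetical differential-age measurements of H(z) in km/s/Mpc
z_data = np.array([0.1, 0.2, 0.35, 0.5, 0.7, 0.9, 1.2, 1.5])
h_data = np.array([69.0, 73.0, 77.0, 83.0, 92.0, 100.0, 113.0, 128.0])

z = np.linspace(0.1, 1.5, 50)
H = kernel_smooth(z, z_data, h_data)

# For any expansion history, dt/dz = -1 / ((1 + z) H(z)), so the diagnostic is:
H0 = 70.0
h0_dtdz = -H0 / ((1.0 + z) * H)
print(h0_dtdz[:5])
```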
40 CFR Appendix C to Part 75 - Missing Data Estimation Procedures
Code of Federal Regulations, 2010 CFR
2010-07-01
... certification of a parametric, empirical, or process simulation method or model for calculating substitute data... available process simulation methods and models. 1.2 Petition Requirements Continuously monitor, determine... desulfurization, a corresponding empirical correlation or process simulation parametric method using appropriate...
Historic Bim: a New Repository for Structural Health Monitoring
NASA Astrophysics Data System (ADS)
Banfi, F.; Barazzetti, L.; Previtali, M.; Roncoroni, F.
2017-05-01
Recent developments in Building Information Modelling (BIM) technologies are facilitating the management of historic complex structures through new applications. This paper proposes a generative method combining the morphological and typological aspects of historic buildings (H-BIM) with a set of monitoring information. This combination of 3D digital survey, parametric modelling and monitoring datasets allows for the development of a system for archiving and visualizing structural health monitoring (SHM) data. The availability of a BIM database allows one to integrate different kinds of data, stored in different ways (e.g. reports, tables, graphs, etc.), with a representation directly connected to the 3D model of the structure at appropriate levels of detail (LoD). Data can be interactively accessed by selecting specific objects of the BIM, i.e. connecting the 3D position of the installed sensors with additional digital documentation. Such innovative BIM objects, which form a new BIM family for SHM, can then be reused in other projects, facilitating data archiving and the exploitation of the data acquired and processed. The application of advanced modeling techniques allows for a reduction of the time and costs of the generation process, and supports cooperation between different disciplines using a central workspace. However, it also reveals new challenges for parametric software and exchange formats. The case study presented is the medieval bridge Azzone Visconti in Lecco (Italy), in which multi-temporal vertical movements during load testing were integrated into H-BIM.
Modeling and Visualization Process of the Curve of Pen Point by GeoGebra
ERIC Educational Resources Information Center
Aktümen, Muharem; Horzum, Tugba; Ceylan, Tuba
2013-01-01
This study describes the mathematical construction of a real-life model by means of parametric equations, as well as the two- and three-dimensional visualization of the model using the software GeoGebra. The model was initially considered as "determining the parametric equation of the curve formed on a plane by the point of a pen, positioned…
Moore, Julia L; Remais, Justin V
2014-03-01
Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though simple and easy to use, structural and parametric issues can influence the outputs of such models, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
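As a minimal illustration of the daily average method that the review finds consistently accurate, the sketch below clamps each day's mean temperature between hypothetical base and upper thresholds before accumulating heat units:

```python
import numpy as np

def degree_days_avg(tmin, tmax, base=10.0, upper=30.0):
    """Daily average method: clamp each day's mean temperature to
    [base, upper] and accumulate heat units above the base threshold."""
    tmean = (np.asarray(tmin, float) + np.asarray(tmax, float)) / 2.0
    return float(np.sum(np.clip(tmean, base, upper) - base))

# One week of hypothetical daily minima and maxima (deg C)
tmin = [8, 12, 15, 9, 11, 14, 16]
tmax = [18, 25, 31, 20, 24, 29, 33]
print(degree_days_avg(tmin, tmax), "degree-days accumulated")
```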
Single-stage-to-orbit versus two-stage-to-orbit: A cost perspective
NASA Astrophysics Data System (ADS)
Hamaker, Joseph W.
1996-03-01
This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLV's). The analysis parametrically addresses the issue such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs, and finally, makes a case for the economic preference of SSTO over TSTO.
Orun, A B; Seker, H; Uslan, V; Goodyer, E; Smith, G
2017-06-01
The textural structure of 'skin age'-related subskin components enables us to identify and analyse their unique characteristics, thus making substantial progress towards establishing an accurate skin age model. This is achieved by a two-stage process. First by the application of textural analysis using laser speckle imaging, which is sensitive to textural effects within the λ = 650 nm spectral band region. In the second stage, a Bayesian inference method is used to select attributes from which a predictive model is built. This technique enables us to contrast different skin age models, such as the laser speckle effect against the more widely used normal light (LED) imaging method, whereby it is shown that our laser speckle-based technique yields better results. The method introduced here is non-invasive, low cost and capable of operating in real time; having the potential to compete against high-cost instrumentation such as confocal microscopy or similar imaging devices used for skin age identification purposes. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald
2007-05-01
(R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling, due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).
Marginally specified priors for non-parametric Bayesian estimation
Kessler, David C.; Hoff, Peter D.; Dunson, David B.
2014-01-01
Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813
Fluoxetine and imipramine: are there differences in cost-utility for depression in primary care?
Serrano-Blanco, Antoni; Suárez, David; Pinto-Meza, Alejandra; Peñarrubia, Maria T; Haro, Josep Maria
2009-02-01
Depressive disorders generate severe personal burden and high economic costs. Cost-utility analyses of the different therapeutical options are crucial to policy-makers and clinicians. Previous cost-utility studies, comparing selective serotonin reuptake inhibitors and tricyclic antidepressants, have used modelling techniques or have not included indirect costs in the economic analyses. To determine the cost-utility of fluoxetine compared with imipramine for treating depressive disorders in primary care. A 6-month randomized prospective naturalistic study comparing fluoxetine with imipramine was conducted in three primary care centres in Spain. One hundred and three patients requiring antidepressant treatment for a DSM-IV depressive disorder were included in the study. Patients were randomized either to fluoxetine (53 patients) or to imipramine (50 patients) treatment. Patients were treated with antidepressants according to their general practitioner's usual clinical practice. Outcome measures were the quality of life tariff of the European Quality of Life Questionnaire: EuroQoL-5D (five domains), direct costs, indirect costs and total costs. Subjects were evaluated at the beginning of treatment and after 1, 3 and 6 months. Incremental cost-utility ratios (ICUR) were obtained. To address uncertainty in the ICUR's sampling distribution, non-parametric bootstrapping was carried out. Taking into account adjusted total costs and incremental quality of life gained, imipramine dominated fluoxetine with 81.5% of the bootstrap replications in the dominance quadrant. Imipramine seems to be a better cost-utility antidepressant option for treating depressive disorders in primary care.
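A minimal sketch of the non-parametric bootstrap used to address ICUR sampling uncertainty, with hypothetical per-patient costs and QALYs (the arm sizes mirror the study; the distributions are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_deltas(cost_a, qaly_a, cost_b, qaly_b, n_boot=5000):
    """Non-parametric bootstrap of incremental cost and QALYs (arm A minus
    arm B), resampling patients with replacement within each arm."""
    d_cost = np.empty(n_boot)
    d_qaly = np.empty(n_boot)
    for i in range(n_boot):
        ia = rng.integers(0, len(cost_a), len(cost_a))
        ib = rng.integers(0, len(cost_b), len(cost_b))
        d_cost[i] = cost_a[ia].mean() - cost_b[ib].mean()
        d_qaly[i] = qaly_a[ia].mean() - qaly_b[ib].mean()
    return d_cost, d_qaly

# Hypothetical per-patient 6-month costs (right-skewed) and QALY gains
cost_imi = rng.gamma(2.0, 350.0, 50); qaly_imi = rng.normal(0.06, 0.02, 50)
cost_flx = rng.gamma(2.0, 400.0, 53); qaly_flx = rng.normal(0.05, 0.02, 53)

d_cost, d_qaly = bootstrap_deltas(cost_imi, qaly_imi, cost_flx, qaly_flx)
dominant = np.mean((d_cost < 0) & (d_qaly > 0))  # cheaper AND more effective
print(f"ICUR point estimate: {d_cost.mean() / d_qaly.mean():.0f} per QALY")
print(f"replications in the dominance quadrant: {dominant:.1%}")
```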
Parametric analysis of ATM solar array.
NASA Technical Reports Server (NTRS)
Singh, B. K.; Adkisson, W. B.
1973-01-01
The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
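A minimal sketch of the polynomial-fit step analogous to the third and fourth programs described above; the cell characteristic values are invented for illustration:

```python
import numpy as np

# Hypothetical open-circuit voltage of a cell measured at several temperatures
temp_c = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])   # deg C
voc = np.array([0.62, 0.60, 0.58, 0.55, 0.53, 0.50, 0.48])     # volts

# Analogue of the third program: fit polynomial coefficients of a cell
# characteristic versus temperature.
voc_model = np.poly1d(np.polyfit(temp_c, voc, deg=2))

# Analogue of the fourth program: evaluate the polynomial at an arbitrary
# operating temperature to generate parametric performance data.
print(voc_model(35.0))
```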
NASA Astrophysics Data System (ADS)
Ines, A. V. M.; Han, E.; Baethgen, W.
2017-12-01
Advances in seasonal climate forecasts (SCFs) during the past decades have brought great potential to improve management of the agricultural climate risks associated with inter-annual climate variability. In spite of the popular use of crop simulation models in addressing climate risk problems, the models cannot readily ingest seasonal climate predictions issued in the format of tercile probabilities of the most likely rainfall categories (i.e., below-, near- and above-normal). When a skillful SCF is linked with crop simulation models, the informative climate information can be translated into actionable agronomic terms and thus better support strategic and tactical decisions. In other words, crop modeling connected with a given SCF allows one to simulate "what-if" scenarios with different crop choices or management practices and better inform decision makers. In this paper, we present a decision support tool, called CAMDT (Climate Agriculture Modeling and Decision Tool), which seamlessly integrates probabilistic SCFs with the DSSAT-CSM-Rice model to guide decision-makers in adopting appropriate crop and agricultural water management practices for given climatic conditions. The CAMDT has functionality to disaggregate a probabilistic SCF into daily weather realizations (by either a parametric or a non-parametric disaggregation method) and to run DSSAT-CSM-Rice with the disaggregated weather realizations. The convenient graphical user interface allows easy implementation of several "what-if" scenarios by non-technical users and visualizes the results of the scenario runs. In addition, the CAMDT translates crop model outputs into economic terms once the user provides the expected crop price and cost. The CAMDT is a practical tool for real-world applications, specifically for agricultural climate risk management in the Bicol region, Philippines, and is flexible enough to be adapted to other crops or regions of the world. CAMDT GitHub: https://github.com/Agro-Climate/CAMDT
Hall, Peter S; McCabe, Christopher; Stein, Robert C; Cameron, David
2012-01-04
Multi-parameter genomic tests identify patients with early-stage breast cancer who are likely to derive little benefit from adjuvant chemotherapy. These tests can potentially spare patients the morbidity from unnecessary chemotherapy and reduce costs. However, the costs of the test must be balanced against the health benefits and cost savings produced. This economic evaluation compared genomic test-directed chemotherapy using the Oncotype DX 21-gene assay with chemotherapy for all eligible patients with lymph node-positive, estrogen receptor-positive early-stage breast cancer. We performed a cost-utility analysis using a state transition model to calculate expected costs and benefits over the lifetime of a cohort of women with estrogen receptor-positive lymph node-positive breast cancer from a UK perspective. Recurrence rates for Oncotype DX-selected risk groups were derived from parametric survival models fitted to data from the Southwest Oncology Group 8814 trial. The primary outcome was the incremental cost-effectiveness ratio, expressed as the cost (in 2011 GBP) per quality-adjusted life-year (QALY). Confidence in the incremental cost-effectiveness ratio was expressed as a probability of cost-effectiveness and was calculated using Monte Carlo simulation. Model parameters were varied deterministically and probabilistically in sensitivity analysis. Value of information analysis was used to rank priorities for further research. The incremental cost-effectiveness ratio for Oncotype DX-directed chemotherapy using a recurrence score cutoff of 18 was £5529 (US $8852) per QALY. The probability that test-directed chemotherapy is cost-effective was 0.61 at a willingness-to-pay threshold of £30 000 per QALY. Results were sensitive to the recurrence rate, long-term anthracycline-related cardiac toxicity, quality of life, test cost, and the time horizon. The highest priority for further research identified by value of information analysis is the recurrence rate in test-selected subgroups. There is substantial uncertainty regarding the cost-effectiveness of Oncotype DX-directed chemotherapy. It is particularly important that future research studies to inform cost-effectiveness-based decisions collect long-term outcome data.
Kral, L
2007-05-01
We present a complex stabilization and control system for a commercially available optical parametric oscillator. The system is able to stabilize the oscillator's output wavelength at a narrow spectral line of atomic iodine with subpicometer precision, allowing utilization of this solid-state parametric oscillator as a front end of a high-power photodissociation laser chain formed by iodine gas amplifiers. In such setup, a precise wavelength matching between the front end and the amplifier chain is necessary due to extremely narrow spectral lines of the gaseous iodine (approximately 20 pm). The system is based on a personal computer, a heated iodine cell, and a few other low-cost components. It automatically identifies the proper peak within the iodine absorption spectrum, and then keeps the oscillator tuned to this peak with high precision and reliability. The use of the solid-state oscillator as the front end allows us to use the whole iodine laser system as a pump laser for the optical parametric chirped pulse amplification, as it enables precise time synchronization with a signal Ti:sapphire laser.
Sgr A* Emission Parametrizations from GRMHD Simulations
NASA Astrophysics Data System (ADS)
Anantua, Richard; Ressler, Sean; Quataert, Eliot
2018-06-01
Galactic Center emission near the vicinity of the central black hole, Sagittarius (Sgr) A*, is modeled using parametrizations involving the electron temperature, which is found from general relativistic magnetohydrodynamic (GRMHD) simulations to be highest in the disk-outflow corona. Jet-motivated prescriptions generalizing equipartition of particle and magnetic energies, e.g., by scaling relativistic electron energy density to powers of the magnetic field strength, are also introduced. GRMHD jet (or outflow)/accretion disk/black hole (JAB) simulation postprocessing codes IBOTHROS and GRMONTY are employed in the calculation of images and spectra. Various parametric models reproduce spectral and morphological features, such as the sub-mm spectral bump in electron temperature models and asymmetric photon rings in equipartition-based models. The Event Horizon Telescope (EHT) will provide unprecedentedly high-resolution 230+ GHz observations of the "shadow" around Sgr A*'s supermassive black hole, which the synthetic models presented here will reverse-engineer. Both electron temperature and equipartition-based models can be constructed to be compatible with EHT size constraints for the emitting region of Sgr A*. This program sets the groundwork for devising a unified emission parametrization flexible enough to model disk, corona and outflow/jet regions with a small set of parameters including electron heating fraction and plasma beta.
A Parametric Computational Model of the Action Potential of Pacemaker Cells.
Ai, Weiwei; Patel, Nitish D; Roop, Partha S; Malik, Avinash; Andalam, Sidharta; Yip, Eugene; Allen, Nathan; Trew, Mark L
2018-01-01
A flexible, efficient, and verifiable pacemaker cell model is essential to the design of real-time virtual hearts that can be used for closed-loop validation of cardiac devices. A new parametric model of pacemaker action potential is developed to address this need. The action potential phases are modeled using hybrid automaton with one piecewise-linear continuous variable. The model can capture rate-dependent dynamics, such as action potential duration restitution, conduction velocity restitution, and overdrive suppression by incorporating nonlinear update functions. Simulated dynamics of the model compared well with previous models and clinical data. The results show that the parametric model can reproduce the electrophysiological dynamics of a variety of pacemaker cells, such as sinoatrial node, atrioventricular node, and the His-Purkinje system, under varying cardiac conditions. This is an important contribution toward closed-loop validation of cardiac devices using real-time heart models.
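A minimal sketch, not the authors' fitted model: a three-phase hybrid automaton with one piecewise-linear continuous variable, where each phase flows linearly until a guard threshold fires a jump to the next phase. The slopes and thresholds below are illustrative placeholders:

```python
import numpy as np

# Phases of a hybrid automaton with one piecewise-linear continuous variable v.
# Slopes and thresholds are illustrative placeholders, not fitted values.
PHASES = {
    "resting":      {"slope": 0.05, "threshold": 1.0,  "next": "upstroke"},
    "upstroke":     {"slope": 5.0,  "threshold": 10.0, "next": "repolarizing"},
    "repolarizing": {"slope": -0.8, "threshold": 0.0,  "next": "resting"},
}

def simulate(t_end=200.0, dt=0.1):
    v, phase, trace = 0.0, "resting", []
    for _ in range(int(t_end / dt)):
        p = PHASES[phase]
        v += p["slope"] * dt  # linear flow inside the current phase
        crossed = (p["slope"] > 0 and v >= p["threshold"]) or \
                  (p["slope"] < 0 and v <= p["threshold"])
        if crossed:
            phase = p["next"]  # guard fires: jump to the next phase
        trace.append(v)
    return np.array(trace)

print(simulate().max())  # peak of the repeating action-potential-like waveform
```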
Fitting C² Continuous Parametric Surfaces to Frontiers Delimiting Physiologic Structures
Bayer, Jason D.
2014-01-01
We present a technique to fit C² continuous parametric surfaces to scattered geometric data points forming frontiers delimiting physiologic structures in segmented images. Such a mathematical representation is interesting because it facilitates a large number of operations in modeling. While the fitting of C² continuous parametric curves to scattered geometric data points is quite trivial, the fitting of C² continuous parametric surfaces is not. The difficulty comes from the fact that each scattered data point should be assigned a unique parametric coordinate, and the fit is quite sensitive to their distribution on the parametric plane. We present a new approach where a polygonal (quadrilateral or triangular) surface is extracted from the segmented image. This surface is subsequently projected onto a parametric plane in a manner that ensures a one-to-one mapping. The resulting polygonal mesh is then regularized for area and edge length. Finally, from this point, surface fitting is relatively trivial. The novelty of our approach lies in the regularization of the polygonal mesh. Process performance is assessed with the reconstruction of a geometric model of mouse heart ventricles from a computerized tomography scan. Our results show an excellent reproduction of the geometric data with surfaces that are C² continuous. PMID:24782911
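The paper's mesh-regularization step is the novelty and is not reproduced here; assuming the one-to-one (u, v) parametrization has already been obtained, a hedged analogue of the final fitting step with scattered synthetic data:

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Once each scattered point carries parametric coordinates (u, v), a smooth
# bicubic spline surface can be fitted per spatial coordinate by least squares;
# cubic B-spline pieces give C2 continuity across simple interior knots.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 400)
v = rng.uniform(0.0, 1.0, 400)
z = np.sin(np.pi * u) * np.cos(np.pi * v) + 0.01 * rng.standard_normal(400)

surf = SmoothBivariateSpline(u, v, z, kx=3, ky=3, s=0.5)
print(surf(0.5, 0.5))  # evaluate the fitted surface at one parametric point
```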
Parametric study of potential early commercial power plants: Task 3-A MHD cost analysis
NASA Technical Reports Server (NTRS)
1983-01-01
The development of costs for an MHD power plant and the comparison of these costs to those of a conventional coal-fired power plant are reported. The program is divided into three activities: (1) code of accounts review; (2) MHD/pulverized-coal power plant cost comparison; (3) operating and maintenance cost estimates. The scope of each NASA code of account item was defined to assure that the recently completed Task 3 capital cost estimates are consistent with the code of account scope. Confidence in MHD plant capital cost estimates is improved by identifying comparability with conventional pulverized-coal-fired (PCF) power plant systems. The basis for estimating the MHD plant operating and maintenance costs of electricity is verified.
Structural cost optimization of photovoltaic central power station modules and support structure
NASA Technical Reports Server (NTRS)
Sutton, P. D.; Stolte, W. J.; Marsh, R. O.
1979-01-01
The results of a comprehensive study of photovoltaic module structural support concepts for photovoltaic central power stations and their associated costs are presented. The objective of the study has been the identification of structural cost drivers. Parametric structural design and cost analyses of complete array systems consisting of modules, primary support structures, and foundations were performed. Area-related module cost was found to be constant with design, size, and loading. A curved glass module concept was evaluated and found to have the potential to significantly reduce panel structural costs. The conclusions of the study are: array costs do not vary greatly among the designs evaluated; panel and array costs are strongly dependent on design loading; and the best support configuration is load-dependent.
Multi-parametric variational data assimilation for hydrological forecasting
NASA Astrophysics Data System (ADS)
Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.
2017-12-01
Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and consequently to support better-informed decisions. A common practice in probabilistic streamflow forecasting is to force a deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims at representing meteorological uncertainty but neglects the uncertainty of the hydrological model as well as of its initial conditions. Complementary approaches use probabilistic data assimilation techniques to derive a variety of initial states, or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends a variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes into account the parametric model uncertainty in the data assimilation. The assimilation technique is applied to the uppermost area of the River Main in Germany. We use different parametric pools, each of them with five parameter sets, to assimilate streamflow data as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation on the lead-time performance of perfect forecasts (i.e. observed data as forcing variables) as well as of deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% in CRPS performance and approximately 20% in Brier Skill Scores with respect to the deterministic approach. It also improves the skill of the forecast in terms of the rank histogram and produces a narrower ensemble spread.
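For reference, the CRPS quoted above can be estimated directly from ensemble members; a minimal sketch with a hypothetical streamflow ensemble:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS estimator for a single forecast-observation pair:
    mean|X - y| - 0.5 * mean|X - X'|. Lower is better; 0 is perfect."""
    x = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# Hypothetical streamflow ensemble (m^3/s) against the observed value
ensemble = [102.0, 95.0, 110.0, 99.0, 106.0, 92.0, 104.0]
print(crps_ensemble(ensemble, obs=100.0))
```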
Optimal control of epidemic information dissemination over networks.
Chen, Pin-Yu; Cheng, Shin-Ming; Chen, Kwang-Cheng
2014-12-01
Information dissemination control is of crucial importance to facilitate reliable and efficient data delivery, especially in networks consisting of time-varying links or heterogeneous links. Since the abstraction of information dissemination much resembles the spread of epidemics, epidemic models are utilized to characterize the collective dynamics of information dissemination over networks. From a systematic point of view, we aim to explore the optimal control policy for information dissemination given that the control capability is a function of its distribution time, which is a more realistic model in many applications. The main contributions of this paper are to provide an analytically tractable model for information dissemination over networks, to solve the optimal control signal distribution time for minimizing the accumulated network cost via dynamic programming, and to establish a parametric plug-in model for information dissemination control. In particular, we evaluate its performance in mobile and generalized social networks as typical examples.
NASA Astrophysics Data System (ADS)
Braun, David J.; Sutas, Andrius; Vijayakumar, Sethu
2017-01-01
Theory predicts that parametrically excited oscillators, tuned to operate under resonant conditions, are capable of the large-amplitude oscillation useful in diverse applications, such as signal amplification, communication, and analog computation. However, due to amplitude saturation caused by nonlinearity, lack of robustness to model uncertainty, and limited sensitivity to parameter modulation, these oscillators require fine-tuning and strong modulation to generate robust large-amplitude oscillation. Here we present a principle of self-tuning parametric feedback excitation that alleviates the above-mentioned limitations. This is achieved using a minimalistic control implementation that performs (i) self-tuning (slow parameter adaptation) and (ii) feedback pumping (fast parameter modulation), without sophisticated signal processing of past observations. The proposed approach provides near-optimal amplitude maximization without requiring model-based control computation, previously perceived as inevitable for implementing optimal control principles in practical applications. Experimental implementation of the theory shows that the oscillator tunes itself near to the onset of dynamic bifurcation to achieve extreme sensitivity to small resonant parametric perturbations. As a result, it achieves large-amplitude oscillations by capitalizing on the effect of nonlinearity, despite substantial model uncertainties and strong unforeseen external perturbations. We envision the present finding to provide an effective and robust approach to parametric excitation when it comes to real-world applications.
Juricke, Stephan; Jung, Thomas
2014-01-01
The influence of a stochastic sea ice strength parametrization on the mean climate is investigated in a coupled atmosphere–sea ice–ocean model. The results are compared with an uncoupled simulation with a prescribed atmosphere. It is found that the stochastic sea ice parametrization causes an effective weakening of the sea ice. In the uncoupled model this leads to an Arctic sea ice volume increase of about 10–20% after an accumulation period of approximately 20–30 years. In the coupled model, no such increase is found. Rather, the stochastic perturbations lead to a spatial redistribution of the Arctic sea ice thickness field. A mechanism involving a slightly negative atmospheric feedback is proposed that can explain the different responses in the coupled and uncoupled system. Changes in integrated Antarctic sea ice quantities caused by the stochastic parametrization are generally small, as memory is lost during the melting season because of an almost complete loss of sea ice. However, stochastic sea ice perturbations affect regional sea ice characteristics in the Southern Hemisphere, both in the uncoupled and coupled model. Remote impacts of the stochastic sea ice parametrization on the mean climate of non-polar regions were found to be small. PMID:24842027
Implicit Priors in Galaxy Cluster Mass and Scaling Relation Determinations
NASA Technical Reports Server (NTRS)
Mantz, A.; Allen, S. W.
2011-01-01
Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.
Space biology initiative program definition review. Trade study 4: Design modularity and commonality
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Davidson, William L.; Herbert, Frank J.; Bilodeau, James W.; Stoval, J. Michael; Sutton, Terry
1989-01-01
The relative cost impacts (up or down) of developing Space Biology hardware using design modularity and commonality are studied. Recommendations are provided for how hardware development should be accomplished to meet optimum design modularity requirements for Life Science investigation hardware. In addition, the relative cost impacts of implementing commonality across all Space Biology hardware are defined. Cost analysis and supporting recommendations for levels of modularity and commonality are presented. A mathematical or statistical cost analysis method is provided, capable of supporting the incorporation of design modularity and commonality impacts into parametric cost analysis.
A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit
NASA Technical Reports Server (NTRS)
Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.
2016-01-01
Suboptimal suit fit is a known risk factor for crewmember shoulder injury. Suit fit assessment is, however, prohibitively time consuming and cannot be generalized across wide variations of body shapes and poses. In this work, we have developed a new design tool based on the statistical analysis of body shape scans. This tool is aimed at predicting the skin deformation and shape variations for any body size and shoulder pose for a target population. This new process, when incorporated with CAD software, will enable virtual suit fit assessments, predictively quantifying the contact volume and clearance between the suit and body surface at reduced time and cost.
Bounded Parametric Model Checking for Elementary Net Systems
NASA Astrophysics Data System (ADS)
Knapik, Michał; Szreter, Maciej; Penczek, Wojciech
Bounded Model Checking (BMC) is an efficient verification method for reactive systems. BMC has been applied so far to verification of properties expressed in (timed) modal logics, but never to their parametric extensions. In this paper we show, for the first time, that BMC can be extended to PRTECTL - a parametric extension of the existential version of CTL. To this aim we define a bounded semantics and a translation from PRTECTL to SAT. The implementation of the algorithm for Elementary Net Systems is presented, together with some experimental results.
Elgart, Jorge Federico; Prestes, Mariana; Gonzalez, Lorena; Rucci, Enzo; Gagliardino, Juan Jose
2017-01-01
Despite the frequent association of obesity with type 2 diabetes (T2D), the effect of the former on the cost of drug treatment of the latter has not been specifically addressed. We studied the association of overweight/obesity with the cost of drug treatment of hyperglycemia, hypertension and dyslipidemia in a population with T2D. This observational study utilized data from the QUALIDIAB database on 3,099 T2D patients seen in Diabetes Centers in Argentina, Chile, Colombia, Peru, and Venezuela. Data were grouped according to body mass index (BMI) as Normal (18.5≤BMI<25), Overweight (25≤BMI<30), and Obese (BMI≥30). Thereafter, we assessed clinical and metabolic data and cost of drug treatment in each category. Statistical analyses included group comparisons for continuous variables (parametric or non-parametric tests), Chi-square tests for differences between proportions, and multivariable regression analysis to assess the association between BMI and monthly cost of drug treatment. Although all groups showed a comparable degree of glycometabolic control (FBG, HbA1c), we found significant differences in other metabolic control indicators. Total cost of drug treatment of hyperglycemia and associated cardiovascular risk factors (CVRF) increased significantly (p<0.001) with increasing BMI. Hyperglycemia treatment cost increased significantly with BMI, whereas hypertension and dyslipidemia treatment costs did not. Despite different values and percentages of increase, this growing cost profile was reproduced in every participating country. BMI significantly and independently affected hyperglycemia treatment cost. Our study shows for the first time that BMI significantly increases total expenditure on drugs for T2D and its associated CVRF treatment in Latin America.
Parametric nanomechanical amplification at very high frequency.
Karabalin, R B; Feng, X L; Roukes, M L
2009-09-01
Parametric resonance and amplification are important in both fundamental physics and technological applications. Here we report very high frequency (VHF) parametric resonators and mechanical-domain amplifiers based on nanoelectromechanical systems (NEMS). Compound mechanical nanostructures patterned by multilayer, top-down nanofabrication are read out by a novel scheme that parametrically modulates longitudinal stress in doubly clamped beam NEMS resonators. Parametric pumping and signal amplification are demonstrated for VHF resonators up to approximately 130 MHz and provide useful enhancement of both resonance signal amplitude and quality factor. We find that Joule heating and reduced thermal conductance in these nanostructures ultimately impose an upper limit to device performance. We develop a theoretical model to account for both the parametric response and nonequilibrium thermal transport in these composite nanostructures. The results closely conform to our experimental observations, elucidate the frequency and threshold-voltage scaling in parametric VHF NEMS resonators and sensors, and establish the ultimate sensitivity limits of this approach.
Integrated-circuit balanced parametric amplifier
NASA Technical Reports Server (NTRS)
Dickens, L. E.
1975-01-01
Amplifier, fabricated on single dielectric substrate, has pair of Schottky barrier varactor diodes mounted on single semiconductor chip. Circuit includes microstrip transmission line and slot line section to conduct signals. Main features of amplifier are reduced noise output and low production cost.
Integral abutment bridges under thermal loading : numerical simulations and parametric study.
DOT National Transportation Integrated Search
2016-06-01
Integral abutment bridges (IABs) have become of interest due to their decreased construction and maintenance costs in comparison to conventional jointed bridges. Most prior IAB research was related to substructure behavior, and, as a result, most ...
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
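To make the quadrature scheme named in the abstract concrete (this is an illustrative sketch, not the authors' Stata implementation), here is a minimal marginal log-likelihood for the simplest case: an exponential proportional-hazards model with one normally distributed random intercept per cluster, integrated out by Gauss-Hermite quadrature. The toy data and starting values are invented:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def marginal_loglik(params, clusters, n_quad=15):
    """Marginal log-likelihood of an exponential PH model with a normally
    distributed random intercept b per cluster, with b integrated out by
    Gauss-Hermite quadrature.

    params = (beta0, log_sigma); clusters = list of (times, events) arrays.
    """
    beta0, log_sigma = params
    sigma = np.exp(log_sigma)
    x, w = hermgauss(n_quad)              # nodes/weights for weight exp(-x^2)
    b = np.sqrt(2.0) * sigma * x          # change of variables: b ~ N(0, sigma^2)
    total = 0.0
    for times, events in clusters:
        # cluster log-likelihood at each quadrature node (constant hazard
        # exp(beta0 + b); censored-data exponential likelihood h^d exp(-h t))
        log_h = beta0 + b[:, None]
        ll = np.sum(events * log_h - np.exp(log_h) * times, axis=1)
        total += np.log(np.sum(w * np.exp(ll)) / np.sqrt(np.pi))
    return total

# Hypothetical toy data: 3 clusters of (event times, event indicators)
rng = np.random.default_rng(0)
clusters = [(rng.exponential(1.0, 20), np.ones(20)) for _ in range(3)]
print(marginal_loglik((0.0, np.log(0.5)), clusters))
```

In practice this objective would be handed to a maximizer, and the Weibull, Gompertz, AFT and flexible spline-based variants described in the abstract change only the per-subject log-likelihood inside the loop.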
Revisiting dark energy models using differential ages of galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rani, Nisha; Mahajan, Shobhit; Mukherjee, Amitabha
In this work, we use a test based on the differential ages of galaxies for distinguishing dark energy models. As proposed by Jimenez and Loeb in [1], relative ages of galaxies can be used to put constraints on various cosmological parameters. In the same vein, we reconstruct H₀ dt/dz and its derivative (H₀ d²t/dz²) using a model-independent technique called non-parametric smoothing. Basically, dt/dz is the change in the age of the object as a function of redshift, which is directly linked with the Hubble parameter. Hence, for reconstruction of this quantity, we use the most recent H(z) data. Further, we calculate H₀ dt/dz and its derivative for several models: Phantom, Einstein-de Sitter (EdS), ΛCDM, and the Chevallier-Polarski-Linder (CPL), Jassal-Bagla-Padmanabhan (JBP) and Feng-Shen-Li-Li (FSLL) parametrizations. We check the consistency of these models with the results of reconstruction obtained in a model-independent way from the data. It is observed that H₀ dt/dz as a tool is not able to distinguish between the ΛCDM, CPL, JBP and FSLL parametrizations but, as expected, the EdS and Phantom models show noticeable deviation from the reconstructed results. Further, the derivative of H₀ dt/dz for various dark energy models is more sensitive at low redshift. It is found that the FSLL model is not consistent with the reconstructed results; however, the ΛCDM model is in concordance with the 3σ region of the reconstruction at redshift z ≥ 0.3.
Parametric reduced models for the nonlinear Schrödinger equation
NASA Astrophysics Data System (ADS)
Harlim, John; Li, Xiantao
2015-05-01
Reduced models for the (defocusing) nonlinear Schrödinger equation are developed. In particular, we develop reduced models that only involve the low-frequency modes given noisy observations of these modes. The ansatz of the reduced parametric models is obtained by employing a rational approximation and a colored-noise approximation, respectively, on the memory terms and the random noise of a generalized Langevin equation that is derived from the standard Mori-Zwanzig formalism. The parameters in the resulting reduced models are inferred from noisy observations with a recently developed ensemble Kalman filter-based parametrization method. The forecasting skill across different temperature regimes is verified by comparing the moments up to order four, two-time correlation function statistics, and marginal densities of the coarse-grained variables.
Benchmark dose analysis via nonparametric regression modeling
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
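For a concrete sense of the nonparametric alternative (a sketch under stated assumptions, not the authors' code), the snippet below fits an isotonic dose-response curve with scikit-learn and inverts it at a benchmark response; the quantal data and the 10% extra-risk BMR are invented, and the confidence-limit step (the paper's bootstrap) is omitted:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical quantal dose-response data: dose, number affected, group size
dose = np.array([0.0, 12.5, 25.0, 50.0, 100.0])
affected = np.array([2, 4, 7, 12, 19])
n = np.array([50, 50, 50, 50, 50])

# Isotonic (monotone non-decreasing) fit of risk vs dose, weighted by group size
iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
risk = iso.fit_transform(dose, affected / n, sample_weight=n)

# BMD at a benchmark response (BMR) of 0.10 extra risk over background,
# found by inverse interpolation of the monotone fit (assumes the fitted
# risk increases between groups; flat segments would need extra care)
bmr = 0.10
p0 = risk[0]
target = p0 + bmr * (1.0 - p0)           # "extra risk" definition
bmd = np.interp(target, risk, dose)
print(f"BMD estimate: {bmd:.1f}")
```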
Parametric study of helicopter aircraft systems costs and weights
NASA Technical Reports Server (NTRS)
Beltramo, M. N.
1980-01-01
Weight estimating relationships (WERs) and recurring production cost estimating relationships (CERs) were developed for helicopters at the system level. The WERs estimate system level weight based on performance or design characteristics which are available during concept formulation or the preliminary design phase. The CER (or CERs in some cases) for each system utilizes weight (either actual or estimated using the appropriate WER) and production quantity as the key parameters.
Improving the Parametric Method of Cost Estimating Relationships of Naval Ships
2014-06-01
...tool since the total cost of the ship is broken down into smaller parts as defined by the WBS. The Navy currently uses the Expanded Ship Work Breakdown... Includes boilers, reactors, turbines, gears, shafting, propellers, steam piping, lube oil piping, and radiation. 300 Electric Plant: Includes ship... spaces, ladders, storerooms, laundry, and workshops. 700 Armament: Includes guns, missile launchers, ammunition handling and stowage, torpedo tubes, depth...
Bim Automation: Advanced Modeling Generative Process for Complex Structures
NASA Astrophysics Data System (ADS)
Banfi, F.; Fai, S.; Brumana, R.
2017-08-01
The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms, morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational Basis Splines (NURBS) with multiple levels of detail (Mixed and Reverse LoD) based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure for the courtyard of the West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.
A parametric ribcage geometry model accounting for variations among the adult population.
Wang, Yulong; Cao, Libo; Bai, Zhonghao; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen
2016-09-06
The objective of this study is to develop a parametric ribcage model that can account for morphological variations among the adult population. Ribcage geometries, including 12 pairs of ribs, sternum, and thoracic spine, were collected from CT scans of 101 adult subjects through image segmentation, landmark identification (1016 for each subject), symmetry adjustment, and template mesh mapping (26,180 elements for each subject). Generalized procrustes analysis (GPA), principal component analysis (PCA), and regression analysis were used to develop a parametric ribcage model, which can predict nodal locations of the template mesh according to age, sex, height, and body mass index (BMI). Two regression models, a quadratic model for estimating the ribcage size and a linear model for estimating the ribcage shape, were developed. The results showed that the ribcage size was dominated by height (p=0.000) and the age-sex interaction (p=0.007), and the ribcage shape was significantly affected by age (p=0.0005), sex (p=0.0002), height (p=0.0064) and BMI (p=0.0000). Along with proper assignment of cortical bone thickness, material properties and failure properties, this parametric ribcage model can directly serve as the mesh of finite element ribcage models for quantifying effects of human characteristics on thoracic injury risks. Copyright © 2016 Elsevier Ltd. All rights reserved.
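A minimal sketch of the PCA-plus-regression pipeline described above, with random stand-in arrays since the CT-derived meshes and subject covariates are not available here (the GPA alignment step is assumed done upstream, and a linear model is used where the paper also fits a quadratic size model):

```python
import numpy as np

# Hypothetical training data: flattened node coordinates (subjects x 3*nodes)
# and subject covariates [age, sex, height, BMI]
rng = np.random.default_rng(1)
X = rng.normal(size=(101, 3 * 500))          # stand-in for 101 mapped meshes
covars = rng.normal(size=(101, 4))           # stand-in for age/sex/stature/BMI

# PCA via SVD of mean-centred shapes
mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
n_pc = 10
scores = U[:, :n_pc] * s[:n_pc]              # per-subject PC scores

# Linear regression of PC scores on covariates (the "shape model" step)
A = np.column_stack([np.ones(len(covars)), covars])
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)

def predict_shape(age, sex, height, bmi):
    """Predict mesh node coordinates for given (standardized) covariates."""
    pred_scores = np.array([1.0, age, sex, height, bmi]) @ coef
    return mean_shape + pred_scores @ Vt[:n_pc]

print(predict_shape(0.5, 1.0, -0.2, 0.3).shape)   # (1500,)
```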
NASA Astrophysics Data System (ADS)
Reynerson, Charles Martin
This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.
Nonparametric autocovariance estimation from censored time series by Gaussian imputation.
Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K
2009-02-01
One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
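To make the imputation idea concrete, here is a deliberately crude single-imputation sketch: censored values are drawn from a truncated normal fitted to the observed margin, and the sample autocovariance is then computed. The paper's method instead draws from the conditional Gaussian distribution given neighbouring observations (and iterates), so this is only a simplified stand-in with simulated data:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(2)

# Simulate an AR(1) series and censor it below a detection limit c
n, phi, c = 500, 0.7, -0.5
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
censored = x < c
y = np.where(censored, c, x)

# Crude single imputation: draw censored values from the marginal normal
# truncated to (-inf, c), with moments estimated from the observed part
# (these moments are biased upward; the paper's conditional scheme is better)
mu, sd = y[~censored].mean(), y[~censored].std()
b = (c - mu) / sd
y_imp = y.copy()
y_imp[censored] = truncnorm.rvs(-np.inf, b, loc=mu, scale=sd,
                                size=censored.sum(), random_state=rng)

def autocov(z, max_lag=10):
    """Sample autocovariance function up to max_lag."""
    z = z - z.mean()
    return np.array([np.dot(z[:len(z) - k], z[k:]) / len(z)
                     for k in range(max_lag + 1)])

print(np.round(autocov(y_imp, 5), 3))
```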
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
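As a sketch of how such a formal likelihood can look in code, the snippet below implements only the Gaussian special case of the error model (lag-1 autocorrelation plus a variance that grows linearly with the simulated value); the full SEP density with skewness and kurtosis parameters, and the variance model itself, are simplifying assumptions of the example:

```python
import numpy as np

def log_likelihood(residuals, sims, phi, sigma0, sigma1):
    """Log-likelihood of lag-1 autocorrelated, heteroscedastic errors.

    Gaussian special case of the SEP error model named in the abstract:
    e_t = phi * e_{t-1} + eta_t,  eta_t ~ N(0, (sigma0 + sigma1*sims_t)^2).
    """
    e = residuals
    eta = e[1:] - phi * e[:-1]              # decorrelated innovations
    sd = sigma0 + sigma1 * sims[1:]         # heteroscedastic std. deviation
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - 0.5 * (eta / sd) ** 2)

# Hypothetical usage inside an MCMC step: obs/sim from a watershed model run
rng = np.random.default_rng(3)
sim = np.abs(rng.normal(5.0, 1.0, 200))     # e.g. a simulated nitrate series
obs = sim + 0.2 * sim * rng.normal(size=200)
print(log_likelihood(obs - sim, sim, phi=0.3, sigma0=0.1, sigma1=0.2))
```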
A Computational Model of Multidimensional Shape
Liu, Xiuwen; Shi, Yonggang; Dinov, Ivo
2010-01-01
We develop a computational model of shape that extends existing Riemannian models of curves to multidimensional objects of general topological type. We construct shape spaces equipped with geodesic metrics that measure how costly it is to interpolate two shapes through elastic deformations. The model employs a representation of shape based on the discrete exterior derivative of parametrizations over a finite simplicial complex. We develop algorithms to calculate geodesics and geodesic distances, as well as tools to quantify local shape similarities and contrasts, thus obtaining a formulation that accounts for regional differences and integrates them into a global measure of dissimilarity. The Riemannian shape spaces provide a common framework to treat numerous problems such as the statistical modeling of shapes, the comparison of shapes associated with different individuals or groups, and modeling and simulation of shape dynamics. We give multiple examples of geodesic interpolations and illustrations of the use of the models in brain mapping, particularly, the analysis of anatomical variation based on neuroimaging data. PMID:21057668
Distributed support modelling for vertical track dynamic analysis
NASA Astrophysics Data System (ADS)
Blanco, B.; Alonso, A.; Kari, L.; Gil-Negrete, N.; Giménez, J. G.
2018-04-01
The finite length nature of rail-pad supports is characterised by a Timoshenko beam element formulation over an elastic foundation, giving rise to the distributed support element. The new element is integrated into a vertical track model, which is solved in frequency and time domain. The developed formulation is obtained by solving the governing equations of a Timoshenko beam for this particular case. The interaction between sleeper and rail via the elastic connection is considered in an analytical, compact and efficient way. The modelling technique results in realistic amplitudes of the 'pinned-pinned' vibration mode and, additionally, it leads to a smooth evolution of the contact force temporal response and to reduced amplitudes of the rail vertical oscillation, as compared to the results from concentrated support models. Simulations are performed for both parametric and sinusoidal roughness excitation. The model of support proposed here is compared with a previous finite length model developed by other authors, coming to the conclusion that the proposed model gives accurate results at a reduced computational cost.
Comparison of Survival Models for Analyzing Prognostic Factors in Gastric Cancer Patients
Habibi, Danial; Rafiei, Mohammad; Chehrei, Ali; Shayan, Zahra; Tafaqodi, Soheil
2018-03-27
Objective: There are a number of models for determining risk factors for survival of patients with gastric cancer. This study was conducted to select the model showing the best fit with available data. Methods: Cox regression and parametric models (Exponential, Weibull, Gompertz, Log normal, Log logistic and Generalized Gamma) were utilized in unadjusted and adjusted forms to detect factors influencing mortality of patients. Comparisons were made with the Akaike Information Criterion (AIC) by using STATA 13 and R 3.1.3 software. Results: The results of this study indicated that all parametric models outperform the Cox regression model. The Log normal, Log logistic and Generalized Gamma provided the best performance in terms of AIC values (179.2, 179.4 and 181.1, respectively). On unadjusted analysis, the results of the Cox regression and parametric models indicated stage, grade, largest diameter of metastatic nest, largest diameter of LM, number of involved lymph nodes and the largest ratio of metastatic nests to lymph nodes to be variables influencing the survival of patients with gastric cancer. On adjusted analysis, according to the best model (log normal), grade was found to be the significant variable. Conclusion: The results suggested that all parametric models outperform the Cox model. The log normal model provides the best fit and is a good substitute for Cox regression. Creative Commons Attribution License
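An analogous AIC comparison can be sketched in Python with the lifelines library, whose univariate parametric fitters expose an AIC_ attribute after fitting. This only mirrors the workflow, not the study: the data below are simulated, and lifelines has no Gompertz fitter, so that distribution is omitted:

```python
import numpy as np
from lifelines import (ExponentialFitter, WeibullFitter, LogNormalFitter,
                       LogLogisticFitter, GeneralizedGammaFitter)

# Hypothetical survival data: follow-up times (months) and death indicators
rng = np.random.default_rng(4)
T = rng.weibull(1.3, 200) * 24
E = rng.random(200) < 0.7

fitters = [ExponentialFitter(), WeibullFitter(), LogNormalFitter(),
           LogLogisticFitter(), GeneralizedGammaFitter()]
for f in fitters:
    f.fit(T, event_observed=E)    # GeneralizedGamma may need good start values
    print(f"{f.__class__.__name__:24s} AIC = {f.AIC_:.1f}")
# Lower AIC indicates the better-fitting distribution
```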
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation
NASA Astrophysics Data System (ADS)
Negri, Federico; Manzoni, Andrea; Amsallem, David
2015-12-01
In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
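For readers unfamiliar with the discrete variant, the core of (M)DEIM is a greedy selection of interpolation indices from a POD basis of snapshots of the nonaffine term. A minimal sketch with random stand-in snapshots follows; for MDEIM proper, the snapshots would be vectorized parameter-dependent operator matrices rather than generic vectors:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection from a basis U (n x m).

    Returns m interpolation indices; the (M)DEIM approximation of a
    nonaffine term f is then U @ solve(U[p, :], f[p]).
    """
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # residual of the next basis vector w.r.t. interpolation at points p
        c = np.linalg.solve(U[p, :l], U[p, l])
        r = U[:, l] - U[:, :l] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Hypothetical snapshots of a parameter-dependent nonaffine vector
rng = np.random.default_rng(5)
snapshots = rng.normal(size=(400, 30))
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :8]                      # POD basis of the nonaffine term
p = deim_indices(basis)

f = snapshots[:, 0]                   # a "new" vector to approximate
f_deim = basis @ np.linalg.solve(basis[p, :], f[p])
print("relative error:", np.linalg.norm(f - f_deim) / np.linalg.norm(f))
```

The payoff is that only the entries f[p] ever need to be evaluated online, which is what restores an affine, ROM-friendly structure.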
Parametric Cost Study of AC-DC Wayside Power Systems
DOT National Transportation Integrated Search
1975-09-01
The wayside power system provides all the power requirements of an electric vehicle operating on a fixed guideway. For a given set of specifications there are numerous wayside power supply configurations which will be satisfactory from a technical st...
Parametric study of extended end-plate connection using finite element modeling
NASA Astrophysics Data System (ADS)
Mureşan, Ioana Cristina; Bâlc, Roxana
2017-07-01
End-plate connections with preloaded high strength bolts represent a convenient, fast and accurate solution for beam-to-column joints. The behavior of framework joints built up with this type of connection is sensitively dependent on the geometrical and material characteristics of the connected elements. This paper presents results of parametric analyses of the behavior of a bolted extended end-plate connection using the finite element modeling program Abaqus. This connection was experimentally tested in the Laboratory of the Faculty of Civil Engineering from Cluj-Napoca and the results are briefly reviewed in this paper. The numerical model of the studied connection was described in detail in [1] and provides data for this parametric study.
NASA Astrophysics Data System (ADS)
Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2018-03-01
We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes.
Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D
2016-10-01
This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. © The Author 2016. Published by Oxford University Press.
Lee, Y; Tien, J M
2001-01-01
We present mathematical models that determine the optimal parameters for strategically routing multidestination traffic in an end-to-end network setting. Multidestination traffic refers to a traffic type that can be routed to any one of a multiple number of destinations. A growing number of communication services is based on multidestination routing. In this parameter-driven approach, a multidestination call is routed to one of the candidate destination nodes in accordance with predetermined decision parameters associated with each candidate node. We present three different approaches: (1) a link utilization (LU) approach, (2) a network cost (NC) approach, and (3) a combined parametric (CP) approach. The LU approach provides the solution that would result in an optimally balanced link utilization, whereas the NC approach provides the least expensive way to route traffic to destinations. The CP approach, on the other hand, provides multiple solutions that help leverage link utilization and cost. The LU approach has in fact been implemented by a long distance carrier, resulting in a considerable efficiency improvement in its international direct services, as summarized herein.
A generalized Jaynes-Cummings model: The relativistic parametric amplifier and a single trapped ion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ojeda-Guillén, D., E-mail: dojedag@ipn.mx; Mota, R. D.; Granados, V. D.
2016-06-15
We introduce a generalization of the Jaynes-Cummings model and study some of its properties. We obtain the energy spectrum and eigenfunctions of this model by using the tilting transformation and the squeezed number states of the one-dimensional harmonic oscillator. As physical applications, we connect this new model to two important and novel problems: the relativistic parametric amplifier and the quantum simulation of a single trapped ion.
Parametrization of Drag and Turbulence for Urban Neighbourhoods with Trees
NASA Astrophysics Data System (ADS)
Krayenhoff, E. S.; Santiago, J.-L.; Martilli, A.; Christen, A.; Oke, T. R.
2015-08-01
Urban canopy parametrizations designed to be coupled with mesoscale models must predict the integrated effect of urban obstacles on the flow at each height in the canopy. To assess these neighbourhood-scale effects, results of microscale simulations may be horizontally-averaged. Obstacle-resolving computational fluid dynamics (CFD) simulations of neutrally-stratified flow through canopies of blocks (buildings) with varying distributions and densities of porous media (tree foliage) are conducted, and the spatially-averaged impacts on the flow of these building-tree combinations are assessed. The accuracy with which a one-dimensional (column) model with a one-equation (k-ℓ) turbulence scheme represents spatially-averaged CFD results is evaluated. Individual physical mechanisms by which trees and buildings affect flow in the column model are evaluated in terms of relative importance. For the treed urban configurations considered, the effects of buildings and trees may be considered independently. Building drag coefficients and length scale effects need not be altered due to the presence of tree foliage; therefore, parametrization of spatially-averaged flow through urban neighbourhoods with trees is greatly simplified. The new parametrization includes only source and sink terms significant for the prediction of spatially-averaged flow profiles: momentum drag due to buildings and trees (and the associated wake production of turbulent kinetic energy), modification of length scales by buildings, and enhanced dissipation of turbulent kinetic energy due to the small scale of tree foliage elements. Coefficients for the Santiago and Martilli (Boundary-Layer Meteorol 137: 417-439, 2010) parametrization of building drag coefficients and length scales are revised. Inclusion of foliage terms from the new parametrization in addition to the Santiago and Martilli building terms reduces the root-mean-square difference (RMSD) of the column model streamwise velocity component and turbulent kinetic energy relative to the CFD model by 89 % in the canopy and 71 % above the canopy on average for the highest leaf area density scenarios tested. RMSD values with the new parametrization are less than 20 % of mean layer magnitude for the streamwise velocity component within and above the canopy, and for above-canopy turbulent kinetic energy; RMSD values for within-canopy turbulent kinetic energy are negligible for most scenarios. The foliage-related portion of the new parametrization is required for scenarios with tree foliage of equal or greater height than the buildings, and for scenarios with foliage below roof height for building plan area densities less than approximately 0.25.
Bim and Gis: when Parametric Modeling Meets Geospatial Data
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Banfi, F.
2017-12-01
Geospatial data have a crucial role in several projects related to infrastructures and land management. GIS software is able to perform advanced geospatial analyses, but it lacks several instruments and tools for parametric modelling typically available in BIM. At the same time, BIM software designed for buildings has limited tools to handle geospatial data. As things stand at the moment, BIM and GIS could appear as complementary solutions, although research work is currently under way to ensure a better level of interoperability, especially at the scale of the building. On the other hand, the transition from the local (building) scale to the infrastructure scale (where geospatial data cannot be neglected) has already demonstrated that parametric modelling integrated with geoinformation is a powerful tool to simplify and speed up some phases of the design workflow. This paper reviews such mixed approaches with both simulated and real examples, demonstrating that integration is already a reality at specific scales, which are not dominated by "pure" GIS or BIM. The paper will also demonstrate that some traditional operations carried out with GIS software are also available in parametric modelling software for BIM, such as transformation between reference systems, DEM generation, feature extraction, and geospatial queries. A real case study is illustrated and discussed to show the advantage of a combined use of both technologies. BIM and GIS integration can generate greater usage of geospatial data in the AECOO (Architecture, Engineering, Construction, Owner and Operator) industry, as well as new solutions for parametric modelling with additional geoinformation.
Parametric excitation of tire-wheel assemblies by a stiffness non-uniformity
NASA Astrophysics Data System (ADS)
Stutts, D. S.; Krousgrill, C. M.; Soedel, W.
1995-01-01
A simple model of the effect of a concentrated radial stiffness non-uniformity in a passenger car tire is presented. The model treats the tread band of the tire as a rigid ring supported on a viscoelastic foundation. The distributed radial stiffness is lumped into equivalent horizontal (fore-and-aft) and vertical stiffnesses. The concentrated radial stiffness non-uniformity is modeled by treating the tread band as fixed, and the stiffness non-uniformity as rotating around it at the nominal angular velocity of the wheel. Due to loading, the center of mass of the tread band ring model is displaced upward with respect to the wheel spindle and, therefore, the rotating stiffness non-uniformity is alternately compressed and stretched through one complete rotation. This stretching and compressing of the stiffness non-uniformity results in force transmission to the wheel spindle at twice the nominal angular velocity in frequency, and therefore, would excite a given resonance at one-half the nominal angular wheel velocity that a mass unbalance would. The forcing produced by the stiffness non-uniformity is parametric in nature, thus creating the possibility of parametric resonance. The basic theory of the parametric resonance is explained, and a parameter study using derived lumped parameters based on a typical passenger car tire is performed. This study revealed that parametric resonance in passenger car tires, although possible, is unlikely at normal highway speeds as predicted by this model unless the tire is partially deflated.
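The twice-per-revolution stiffness modulation described above can be illustrated with a damped Mathieu-type oscillator. In the sketch below all parameter values are invented (this is not the paper's tread-band model), and principal parametric resonance appears when the modulation frequency 2*Omega approaches twice the natural frequency:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative spindle mode with natural frequency w0, damping ratio zeta,
# and a small stiffness modulation of depth eps at frequency 2*Omega
# (twice the wheel speed, as in the abstract). Values are made up.
w0, zeta, eps = 2 * np.pi * 15.0, 0.01, 0.10

def rhs(t, y, Omega):
    x, v = y
    k = w0**2 * (1.0 + eps * np.cos(2.0 * Omega * t))  # modulated stiffness
    return [v, -2.0 * zeta * w0 * v - k * x]

# Principal Mathieu resonance is expected when 2*Omega ~ 2*w0; with this
# eps and zeta the instability threshold (roughly eps > 4*zeta) is exceeded
for Omega in (0.8 * w0, 1.0 * w0, 1.2 * w0):
    sol = solve_ivp(rhs, (0, 20), [1e-3, 0.0], args=(Omega,), max_step=1e-3)
    print(f"Omega/w0 = {Omega / w0:.1f}: max |x| = {np.abs(sol.y[0]).max():.2e}")
```

Only the run near Omega = w0 grows, which mirrors the paper's point that a stiffness non-uniformity excites a given resonance at half the wheel speed a mass unbalance would.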
Parametrically excited oscillation of stay cable and its control in cable-stayed bridges.
Sun, Bing-nan; Wang, Zhi-gang; Ko, J M; Ni, Y Q
2003-01-01
This paper presents a nonlinear dynamic model for simulation and analysis of a kind of parametrically excited vibration of stay cable caused by support motion in cable-stayed bridges. The sag, inclination angle of the stay cable are considered in the model, based on which, the oscillation mechanism and dynamic response characteristics of this kind of vibration are analyzed through numerical calculation. It is noted that parametrically excited oscillation of a stay cable with certain sag, inclination angle and initial static tension force may occur in cable-stayed bridges due to deck vibration under the condition that the natural frequency of a cable approaches to about half of the first model frequency of the bridge deck system. A new vibration control system installed on the cable anchorage is proposed as a possible damping system to suppress the cable parametric oscillation. The numerical calculation results showed that with the use of this damping system, the cable oscillation due to the vibration of the deck and/or towers will be considerably reduced.
Changing space and sound: Parametric design and variable acoustics
NASA Astrophysics Data System (ADS)
Norton, Christopher William
This thesis examines the potential for parametric design software to create performance-based design using acoustic metrics as the design criteria. A former soundstage at the University of Southern California used by the Thornton School of Music is used as a case study for a multiuse space for orchestral, percussion, master class and recital use. The criteria used for each programmatic use include reverberation time, bass ratio, and the early energy ratios of the clarity index and objective support. Using a panelized ceiling as a design element to vary the parameters of volume, panel orientation and type of absorptive material, the relationships between these parameters and the design criteria are explored. These relationships and subsequently derived equations are applied to the Grasshopper parametric modeling software for Rhino 3D (a NURBS modeling package). Using the target reverberation time and bass ratio for each programmatic use as input for the parametric model, the genetic optimization function of Grasshopper - Galapagos - is run to identify the optimum ceiling geometry and material distribution.
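A minimal sketch of the kind of acoustic objective such a parametric loop evaluates, using the classical Sabine relation RT60 = 0.161 V / A. The room volume, surface areas, absorption coefficients, and target value are all invented here, and the actual thesis drives Grasshopper geometry rather than a single scalar parameter:

```python
import numpy as np

def sabine_rt(volume_m3, surfaces):
    """RT60 = 0.161 * V / A, with A = sum(area_i * alpha_i) in metric sabins."""
    A = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / A

def objective(panel_fraction_open):
    """Penalty between achieved and target RT as ceiling panels tilt open."""
    volume = 4000.0 + 800.0 * panel_fraction_open      # volume gained above panels
    surfaces = [
        (900.0, 0.05),                                  # walls/floor, reflective
        (500.0 * panel_fraction_open, 0.80),            # absorptive batting exposed
        (500.0 * (1 - panel_fraction_open), 0.10),      # closed hard panels
    ]
    return abs(sabine_rt(volume, surfaces) - 1.9)       # target RT for orchestra

# Brute-force stand-in for the Galapagos evolutionary search
best = min(np.linspace(0, 1, 101), key=objective)
print(f"best panel opening fraction: {best:.2f}")
```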
Definition of NASTRAN sets by use of parametric geometry
NASA Technical Reports Server (NTRS)
Baughn, Terry V.; Tiv, Mehran
1989-01-01
Many finite element preprocessors describe finite element model geometry with points, lines, surfaces and volumes. One method for describing these basic geometric entities is by use of parametric cubics which are useful for representing complex shapes. The lines, surfaces and volumes may be discretized for follow on finite element analysis. The ability to limit or selectively recover results from the finite element model is extremely important to the analyst. Equally important is the ability to easily apply boundary conditions. Although graphical preprocessors have made these tasks easier, model complexity may not lend itself to easily identify a group of grid points desired for data recovery or application of constraints. A methodology is presented which makes use of the assignment of grid point locations in parametric coordinates. The parametric coordinates provide a convenient ordering of the grid point locations and a method for retrieving the grid point ID's from the parent geometry. The selected grid points may then be used for the generation of the appropriate set and constraint cards.
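A hypothetical sketch of the selection idea (the data structure, helper function, and card formatting below are illustrative inventions, not the paper's implementation): each grid point carries the parametric coordinates of its parent geometry, so sets for output requests or constraints can be selected by (u, v) ranges instead of graphically:

```python
# Each grid point stores the parametric (u, v) coordinates of its parent
# surface, assigned at mesh generation time (hypothetical data structure).
grids = {
    101: {"surface": "S1", "u": 0.00, "v": 0.50},
    102: {"surface": "S1", "u": 0.25, "v": 0.50},
    103: {"surface": "S1", "u": 1.00, "v": 0.50},   # edge u = 1 of surface S1
    104: {"surface": "S1", "u": 1.00, "v": 1.00},
}

def select_ids(grids, surface, u_range=(0.0, 1.0), v_range=(0.0, 1.0)):
    """Return sorted grid IDs on a surface within parametric ranges."""
    return sorted(gid for gid, g in grids.items()
                  if g["surface"] == surface
                  and u_range[0] <= g["u"] <= u_range[1]
                  and v_range[0] <= g["v"] <= v_range[1])

# e.g. constrain all DOFs along the u = 1 edge via an SPC1 bulk data card
edge = select_ids(grids, "S1", u_range=(1.0, 1.0))
print("SPC1,1,123456," + ",".join(map(str, edge)))
```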
NASA Astrophysics Data System (ADS)
Lazeroms, Werner M. J.; Jenkins, Adrian; Hilmar Gudmundsson, G.; van de Wal, Roderik S. W.
2018-01-01
Basal melting below ice shelves is a major factor in mass loss from the Antarctic Ice Sheet, which can contribute significantly to possible future sea-level rise. Therefore, it is important to have an adequate description of the basal melt rates for use in ice-dynamical models. Most current ice models use rather simple parametrizations based on the local balance of heat between ice and ocean. In this work, however, we use a recently derived parametrization of the melt rates based on a buoyant meltwater plume travelling upward beneath an ice shelf. This plume parametrization combines a non-linear ocean temperature sensitivity with an inherent geometry dependence, which is mainly described by the grounding-line depth and the local slope of the ice-shelf base. For the first time, this type of parametrization is evaluated on a two-dimensional grid covering the entire Antarctic continent. In order to apply the essentially one-dimensional parametrization to realistic ice-shelf geometries, we present an algorithm that determines effective values for the grounding-line depth and basal slope in any point beneath an ice shelf. Furthermore, since detailed knowledge of temperatures and circulation patterns in the ice-shelf cavities is sparse or absent, we construct an effective ocean temperature field from observational data with the purpose of matching (area-averaged) melt rates from the model with observed present-day melt rates. Our results qualitatively replicate large-scale observed features in basal melt rates around Antarctica, not only in terms of average values, but also in terms of the spatial pattern, with high melt rates typically occurring near the grounding line. The plume parametrization and the effective temperature field presented here are therefore promising tools for future simulations of the Antarctic Ice Sheet requiring a more realistic oceanic forcing.
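The dependence structure described above can be sketched schematically. The quadratic thermal-forcing sensitivity and the linearized in-situ freezing-point relation are standard in this literature, but the melt-rate scale M0 and the specific geometric factor below are placeholders, not the paper's calibrated expressions:

```python
import numpy as np

# Approximate freezing-point coefficients (salinity and depth dependence);
# M0 is an invented melt-rate scale for illustration only.
lam1, lam2, lam3 = -0.0573, 0.0832, 7.61e-4   # T_f = lam1*S + lam2 + lam3*z
M0 = 10.0                                      # [m yr^-1 K^-2], placeholder

def melt_rate(T_ocean, salinity, z_gl, basal_slope):
    """Schematic melt rate ~ geometry factor * (thermal forcing)^2.

    z_gl is the (negative) grounding-line depth; basal_slope the local
    slope of the ice-shelf base, the two effective geometric inputs the
    paper's algorithm determines for every point under the shelf.
    """
    T_f = lam1 * salinity + lam2 + lam3 * z_gl   # freezing point at depth z_gl
    forcing = T_ocean - T_f
    geom = np.sin(basal_slope)                   # steeper base -> stronger plume
    return M0 * geom * forcing * np.abs(forcing) # signed: allows refreezing

print(f"{melt_rate(0.5, 34.5, -1200.0, 0.01):.2f} m/yr (illustrative)")
```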
NASA Astrophysics Data System (ADS)
Gryanik, Vladimir M.; Lüpkes, Christof
2018-02-01
In climate and weather prediction models the near-surface turbulent fluxes of heat and momentum and related transfer coefficients are usually parametrized on the basis of Monin-Obukhov similarity theory (MOST). To avoid iteration, required for the numerical solution of the MOST equations, many models apply parametrizations of the transfer coefficients based on an approach relating these coefficients to the bulk Richardson number Rib. However, the parametrizations that are presently used in most climate models are valid only for weaker stability and larger surface roughnesses than those documented during the Surface Heat Budget of the Arctic Ocean campaign (SHEBA). The latter delivered a well-accepted set of turbulence data in the stable surface layer over polar sea-ice. Using stability functions based on the SHEBA data, we solve the MOST equations applying a new semi-analytic approach that results in transfer coefficients as a function of Rib and roughness lengths for momentum and heat. It is shown that the new coefficients reproduce the coefficients obtained by the numerical iterative method with a good accuracy in the most relevant range of stability and roughness lengths. For small Rib, the new bulk transfer coefficients are similar to the traditional coefficients, but for large Rib they are much smaller than currently used coefficients. Finally, a possible adjustment of the latter and the implementation of the new proposed parametrizations in models are discussed.
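The iterative reference solution that such non-iterative bulk parametrizations are designed to replace can be sketched as a fixed-point loop. The simple log-linear stability functions psi = -5*zeta used below are illustrative only; the paper instead uses functions based on the SHEBA data:

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def psi(zeta):
    """Illustrative stable-regime stability function (log-linear form)."""
    return -5.0 * zeta

def transfer_coefficients(Rib, z, z0, zt, n_iter=50):
    """Iterative MOST solution for C_D, C_H given the bulk Richardson
    number Rib and roughness lengths z0 (momentum) and zt (heat)."""
    zeta = 0.0
    for _ in range(n_iter):
        lm = np.log(z / z0) - psi(zeta) + psi(zeta * z0 / z)
        lh = np.log(z / zt) - psi(zeta) + psi(zeta * zt / z)
        zeta = Rib * lm**2 / lh          # fixed-point update of zeta = z/L
    CD = KAPPA**2 / lm**2
    CH = KAPPA**2 / (lm * lh)
    return CD, CH

for Rib in (0.01, 0.05, 0.1):
    CD, CH = transfer_coefficients(Rib, z=10.0, z0=1e-3, zt=1e-4)
    print(f"Rib={Rib:.2f}: CD={CD:.2e}, CH={CH:.2e}")
```

The semi-analytic approach of the abstract delivers CD(Rib, z0, zt) and CH(Rib, z0, zt) directly, avoiding this loop at every grid point and time step.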
Towards the generation of a parametric foot model using principal component analysis: A pilot study.
Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan
2016-06-01
There have been many recent developments in patient-specific models, with their potential to provide more information on the human pathophysiology and the increase in computational power. However, they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, results demonstrated that the method adopted herein constitutes a first step towards the realization of a parametric foot model for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features in the foot, and evaluate differences in the shape of healthy and diabetic subjects. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suitable for model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
A physiology-based parametric imaging method for FDG-PET data
NASA Astrophysics Data System (ADS)
Scussolini, Mara; Garbarino, Sara; Sambuceti, Gianmario; Caviglia, Giacomo; Piana, Michele
2017-12-01
Parametric imaging is a compartmental approach that processes nuclear imaging data to estimate the spatial distribution of the kinetic parameters governing tracer flow. The present paper proposes a novel and efficient computational method for parametric imaging which is potentially applicable to several compartmental models of diverse complexity and which is effective in the determination of the parametric maps of all kinetic coefficients. We consider applications to [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) data and analyze the two-compartment catenary model describing the standard FDG metabolization by an homogeneous tissue and the three-compartment non-catenary model representing the renal physiology. We show uniqueness theorems for both models. The proposed imaging method starts from the reconstructed FDG-PET images of tracer concentration and preliminarily applies image processing algorithms for noise reduction and image segmentation. The optimization procedure solves pixel-wise the non-linear inverse problem of determining the kinetic parameters from dynamic concentration data through a regularized Gauss-Newton iterative algorithm. The reliability of the method is validated against synthetic data, for the two-compartment system, and experimental real data of murine models, for the renal three-compartment system.
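A pixel-level version of the fitting step can be sketched as follows. The input function, noise level, and the use of SciPy's trust-region least-squares solver (standing in for the paper's regularized Gauss-Newton scheme) are assumptions of this example:

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 60, 121)                       # minutes
dt = t[1] - t[0]
Cp = (t / 0.5) * np.exp(1 - t / 0.5)              # hypothetical input function

def tissue_tac(params):
    """Two-compartment irreversible FDG model via discrete convolution:
    dC1/dt = K1*Cp - (k2+k3)*C1,  dC2/dt = k3*C1,  C_T = C1 + C2."""
    K1, k2, k3 = params
    C1 = K1 * dt * np.convolve(Cp, np.exp(-(k2 + k3) * t))[: len(t)]
    C2 = k3 * np.cumsum(C1) * dt
    return C1 + C2

# Synthetic "pixel" time-activity curve with noise, then a bounded fit
rng = np.random.default_rng(6)
truth = (0.1, 0.15, 0.05)
y = tissue_tac(truth) + rng.normal(0, 0.005, len(t))

fit = least_squares(lambda p: tissue_tac(p) - y, x0=(0.05, 0.1, 0.1),
                    bounds=(0, 1), method="trf")
print("estimated (K1, k2, k3):", np.round(fit.x, 3))
```

Repeating this fit over every segmented pixel yields the parametric maps of the kinetic coefficients.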
A parametric model order reduction technique for poroelastic finite element models.
Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico
2017-10-01
This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced order basis generated with the proper orthogonal decomposition instead of standard modal approaches. This has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results of the literature; in the second, the reduced order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.
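The projection backbone of such a reduced-order model can be sketched with plain POD on frequency-response snapshots. The matrices below are random stand-ins, not a Biot-Allard discretization, and hysteretic damping is added so the frequency sweep stays well conditioned:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
K = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.normal(size=(n, n))
K = 0.5 * (K + K.T)                         # stand-in stiffness-like matrix
M = np.eye(n)                               # stand-in mass matrix
f = rng.normal(size=n)

# Frequency-dependent system with hysteretic damping (illustrative)
A = lambda w: (1 + 0.1j) * K - w**2 * M

# Snapshots over training frequencies, then a POD basis by SVD truncation
freqs = np.linspace(0.5, 3.0, 15)
S = np.column_stack([np.linalg.solve(A(w), f) for w in freqs])
U, s, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, s / s[0] > 1e-8]                   # POD basis (singular-value cutoff)

# Galerkin projection and evaluation at a new frequency
Ar = lambda w: V.conj().T @ A(w) @ V
fr = V.conj().T @ f
w_test = 1.7
x_rom = V @ np.linalg.solve(Ar(w_test), fr)
x_ref = np.linalg.solve(A(w_test), f)
print("ROM relative error:", np.linalg.norm(x_rom - x_ref) / np.linalg.norm(x_ref))
```

The affine frequency representation emphasized in the abstract is what lets the projected matrices be assembled once offline and reused across the whole sweep.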
Cost-Aware Design of a Discrimination Strategy for Unexploded Ordnance Cleanup
2011-02-25
Acronyms: ANN: Artificial Neural Network; AUC: Area Under the Curve; BRAC: Base Realignment And Closure; DLRT: Distance Likelihood Ratio Test; EER... Discriminative, Aggregate, Nonparametric [25]; Artificial Neural Network (ANN): Discriminative, Aggregate, Parametric [33]. Results and Discussion, Task #1...
A Hybrid Wind-Farm Parametrization for Mesoscale and Climate Models
NASA Astrophysics Data System (ADS)
Pan, Yang; Archer, Cristina L.
2018-04-01
To better understand the potential impact of wind farms on weather and climate at the regional to global scales, a new hybrid wind-farm parametrization is proposed for mesoscale and climate models. The proposed parametrization is a hybrid model because it is not based on physical processes or conservation laws, but on the multiple linear regression of the results of large-eddy simulations (LES) with the geometric properties of the wind-farm layout (e.g., the blockage ratio and blockage distance). The innovative aspect is that each wind turbine is treated individually based on its position in the farm and on the wind direction by predicting the velocity upstream of each turbine. The turbine-induced forces and added turbulence kinetic energy (TKE) are first derived analytically and then implemented in the Weather Research and Forecasting model. Idealized simulations of the offshore Lillgrund wind farm are conducted. The wind-speed deficit and TKE predicted with the hybrid model are in excellent agreement with those from the LES results, while the wind-power production estimated with the hybrid model is within 10% of that observed. Three additional wind farms with larger inter-turbine spacing than at Lillgrund are also considered, and a similar agreement with LES results is found, proving that the hybrid parametrization works well with any wind farm regardless of the spacing between turbines. These results indicate the wind-turbine position, wind direction, and added TKE are essential in accounting for the wind-farm effects on the surroundings, for which the hybrid wind-farm parametrization is a promising tool.
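A toy version of the regression step at the heart of the hybrid model (all predictor values and the synthetic "LES" response below are invented; the real parametrization is trained on LES fields and folds in the wind-direction dependence):

```python
import numpy as np

# Regress the upstream velocity at each turbine on geometric layout
# predictors (blockage ratio and blockage distance), then reuse the fit.
rng = np.random.default_rng(8)
n_turb = 48
blockage_ratio = rng.uniform(0.0, 0.6, n_turb)   # fraction of rotor blocked upstream
blockage_dist = rng.uniform(3.0, 12.0, n_turb)   # distance to blockers [rotor diam.]
u_upstream = (1.0 - 0.5 * blockage_ratio + 0.02 * blockage_dist
              + rng.normal(0, 0.02, n_turb))     # synthetic "LES" velocity / freestream

X = np.column_stack([np.ones(n_turb), blockage_ratio, blockage_dist])
beta, *_ = np.linalg.lstsq(X, u_upstream, rcond=None)
print("regression coefficients:", np.round(beta, 3))

# The fitted per-turbine upstream velocity then sets each turbine's thrust,
# power, and added-TKE source in the mesoscale model's wind-farm drag terms.
u_hat = X @ beta
```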
NASA Astrophysics Data System (ADS)
Zhang, Chuan; Wang, Xingyuan; Luo, Chao; Li, Junqiu; Wang, Chunpeng
2018-03-01
In this paper, we focus on the robust outer synchronization problem between two nonlinear complex networks with parametric disturbances and mixed time-varying delays. Firstly, a general complex network model is proposed. Besides the nonlinear couplings, the network model in this paper can possess parametric disturbances, internal time-varying delay, discrete time-varying delay and distributed time-varying delay. Then, according to the robust control strategy, linear matrix inequality and Lyapunov stability theory, several outer synchronization protocols are strictly derived. Simple linear matrix controllers are designed to drive the response network to synchronize with the drive network. Additionally, our results can be applied to complex networks without parametric disturbances. Finally, by utilizing the delayed Lorenz chaotic system as the dynamics of all nodes, simulation examples are given to demonstrate the effectiveness of our theoretical results.
Zeng, Xiaohui; Li, Jianhe; Peng, Liubao; Wang, Yunhua; Tan, Chongqing; Chen, Gannong; Wan, Xiaomin; Lu, Qiong; Yi, Lidan
2014-01-01
Maintenance gefitinib significantly prolonged progression-free survival (PFS) compared with placebo in patients from eastern Asia with locally advanced/metastatic non-small-cell lung cancer (NSCLC) after four chemotherapy cycles (21 days per cycle) of first-line platinum-based combination chemotherapy without disease progression. The objective of the current study was to evaluate the cost-effectiveness of maintenance gefitinib therapy after four cycles of standard first-line platinum-based chemotherapy for patients with locally advanced or metastatic NSCLC with unknown EGFR mutations, from a Chinese health care system perspective. A semi-Markov model was designed to evaluate the cost-effectiveness of the maintenance gefitinib treatment. Two-parameter Weibull and log-logistic distributions were fitted to the PFS and overall survival curves independently. One-way and probabilistic sensitivity analyses were conducted to assess the stability of the model. The base-case analysis suggested that maintenance gefitinib would increase benefits over a 1-, 3-, 6- or 10-year time horizon, at an incremental cost of $184,829, $19,214, $19,328, and $21,308 per quality-adjusted life-year (QALY) gained, respectively. The most sensitive variable in the cost-effectiveness analysis was the utility of PFS plus rash, followed by the utility of PFS plus diarrhoea, utility of progressed disease, price of gefitinib, cost of follow-up treatment in the progressed survival state, and utility of PFS on oral therapy. The price of gefitinib is the parameter that could most reduce the incremental cost per QALY. Probabilistic sensitivity analysis indicated that the probability of maintenance gefitinib being cost-effective was zero under a willingness-to-pay (WTP) threshold of $16,349 (3 × per-capita gross domestic product of China). The sensitivity analyses all suggested that the model was robust. Maintenance gefitinib following first-line platinum-based chemotherapy for patients with locally advanced/metastatic NSCLC with unknown EGFR mutations is not cost-effective. Decreasing the price of gefitinib may be a preferential choice for meeting wide treatment demands in China.
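The headline decision rule reduces to comparing an incremental cost-effectiveness ratio (ICER) with the WTP threshold. A back-of-envelope sketch with invented placeholder costs and QALYs (only the $16,349 threshold is taken from the abstract):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

wtp = 16349.0   # 3 x per-capita GDP threshold used in the study
ratio = icer(cost_new=38000.0, qaly_new=1.35,   # hypothetical strategy totals
             cost_old=20000.0, qaly_old=1.10)
verdict = "cost-effective" if ratio <= wtp else "not cost-effective"
print(f"ICER = ${ratio:,.0f}/QALY -> {verdict}")
```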
NASA Astrophysics Data System (ADS)
Sun, Z.; Cao, Y. K.
2015-08-01
The paper focuses on the versatility of data-processing workflows ranging from BIM-based survey to structural analysis and reverse modeling. In China today, a large number of historic buildings are in need of restoration, reinforcement and renovation, but architects are not prepared for the shift from the booming AEC industry to architectural preservation. As surveyors working with architects on such projects, we have to develop an efficient, low-cost digital survey workflow that is robust to various types of architecture, and to process the captured data for architects. Although laser scanning yields high accuracy in architectural heritage documentation and its workflow is quite straightforward, its cost and limited portability hinder its use in projects where budget and efficiency are of prime concern. We integrate Structure from Motion techniques with UAV and total station in data acquisition. The captured data are processed for various purposes, illustrated with three case studies: the first is an as-built BIM for a historic building based on point clouds registered to Ground Control Points; the second concerns structural analysis of a damaged bridge using Finite Element Analysis software; the last relates to parametric automated feature extraction from captured point clouds for reverse modeling and fabrication.
NASA Astrophysics Data System (ADS)
Bortolotti, P.; Adolphs, G.; Bottasso, C. L.
2016-09-01
This work is concerned with the development of an optimization methodology for the composite materials used in wind turbine blades. The goal of the approach is to guide designers in the selection of the different materials of the blade, while providing indications to composite manufacturers on optimal trade-offs between mechanical properties and material costs. The method works by using a parametric material model and including its free parameters amongst the design variables of a multi-disciplinary wind turbine optimization procedure. The proposed method is tested on the structural redesign of a conceptual 10 MW wind turbine blade, with its spar cap and shell skin laminates subjected to optimization. The procedure identifies an optimum with a new spar cap laminate characterized by a higher longitudinal Young's modulus and a higher cost than the initial one, which however induces both cost and mass savings at the blade level. In terms of shell skin, the adoption of a laminate with properties intermediate between a bi-axial and a tri-axial one also leads to slight structural improvements.
Hamilton's rule and the causes of social evolution
Bourke, Andrew F. G.
2014-01-01
Hamilton's rule is a central theorem of inclusive fitness (kin selection) theory and predicts that social behaviour evolves under specific combinations of relatedness, benefit and cost. This review provides evidence for Hamilton's rule by presenting novel syntheses of results from two kinds of study in diverse taxa, including cooperatively breeding birds and mammals and eusocial insects. These are, first, studies that empirically parametrize Hamilton's rule in natural populations and, second, comparative phylogenetic analyses of the genetic, life-history and ecological correlates of sociality. Studies parametrizing Hamilton's rule are not rare and demonstrate quantitatively that (i) altruism (net loss of direct fitness) occurs even when sociality is facultative, (ii) in most cases, altruism is under positive selection via indirect fitness benefits that exceed direct fitness costs and (iii) social behaviour commonly generates indirect benefits by enhancing the productivity or survivorship of kin. Comparative phylogenetic analyses show that cooperative breeding and eusociality are promoted by (i) high relatedness and monogamy and, potentially, by (ii) life-history factors facilitating family structure and high benefits of helping and (iii) ecological factors generating low costs of social behaviour. Overall, the focal studies strongly confirm the predictions of Hamilton's rule regarding conditions for social evolution and their causes. PMID:24686934
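The rule the studies above parametrize is the inequality rb - c > 0, where r is relatedness, b the benefit to the recipient, and c the cost to the actor. A one-line check with illustrative values:

```python
# Hamilton's rule: altruism is favoured when r*b - c > 0.
def altruism_favoured(r, b, c):
    return r * b - c > 0

print(altruism_favoured(r=0.5, b=3.0, c=1.0))  # True: full-sib helping pays
```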
Computer aided system for parametric design of combination die
NASA Astrophysics Data System (ADS)
Naranje, Vishal G.; Hussein, H. M. A.; Kumar, S.
2017-09-01
In this paper, a computer aided system for the parametric design of combination dies is presented. The system is developed using the knowledge-based system technique of artificial intelligence. The system is capable of designing combination dies for the production of sheet metal parts involving punching and cupping operations. The system is coded in Visual Basic and interfaced with AutoCAD software. The low cost of the proposed system will help die designers in small- and medium-scale sheet metal industries design combination dies for similar types of products. The proposed system can reduce the design time and effort of die designers for combination dies.
Optimal Design of Experiments by Combining Coarse and Fine Measurements
NASA Astrophysics Data System (ADS)
Lee, Alpha A.; Brenner, Michael P.; Colwell, Lucy J.
2017-11-01
In many contexts, it is extremely costly to perform enough high-quality experimental measurements to accurately parametrize a predictive quantitative model. However, it is often much easier to carry out large numbers of experiments that indicate whether each sample is above or below a given threshold. Can many such categorical or "coarse" measurements be combined with a much smaller number of high-resolution or "fine" measurements to yield accurate models? Here, we demonstrate an intuitive strategy, inspired by statistical physics, wherein the coarse measurements are used to identify the salient features of the data, while the fine measurements determine the relative importance of these features. A linear model is inferred from the fine measurements, augmented by a quadratic term that captures the correlation structure of the coarse data. We illustrate our strategy by considering the problems of predicting the antimalarial potency and aqueous solubility of small organic molecules from their 2D molecular structure.
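A minimal sketch of the coarse/fine idea follows: many cheap binary ("above/below threshold") labels pick out salient feature directions, and a few precise measurements fit a linear model within them. Using PCA of the coarse "hits" and a ridge fit are our own illustrative choices, not the authors' exact scheme.

```python
# Coarse labels identify salient directions; fine measurements fit a
# small model restricted to those directions. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
X_coarse = rng.normal(size=(500, 20))          # many cheap samples
hits = X_coarse @ rng.normal(size=20) > 0.5    # binary above-threshold labels

# Salient directions: top principal components of the coarse "hits".
centered = X_coarse[hits] - X_coarse[hits].mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:3].T                                # keep 3 directions

X_fine = rng.normal(size=(15, 20))              # few expensive samples
y_fine = X_fine @ rng.normal(size=20) + 0.1 * rng.normal(size=15)

# Ridge regression in the reduced space spanned by the salient directions.
Z = X_fine @ basis
w = np.linalg.solve(Z.T @ Z + 0.1 * np.eye(3), Z.T @ y_fine)
predict = lambda X: (X @ basis) @ w
print(predict(X_fine[:2]))
```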
Evaluation of a low-cost, 3D-printed model for bronchoscopy training.
Parotto, Matteo; Jiansen, Joshua Qua; AboTaiban, Ahmed; Ioukhova, Svetlana; Agzamov, Alisher; Cooper, Richard; O'Leary, Gerald; Meineri, Massimiliano
2017-01-01
Flexible bronchoscopy is a fundamental procedure in anaesthesia and critical care medicine. Although learning this procedure is a complex task, the use of simulation-based training provides significant advantages, such as enhanced patient safety. Access to bronchoscopy simulators may be limited in low-resource settings. We have developed a low-cost 3D-printed bronchoscopy training model. A parametric airway model was obtained from an online medical model repository and fabricated using a low-cost 3D printer. The participating physicians had no prior bronchoscopy experience. Participants received a 30-minute lecture on flexible bronchoscopy and were administered a 15-item pre-test questionnaire on bronchoscopy. Afterwards, participants were instructed to perform a series of predetermined bronchoscopy tasks on the 3D printed simulator on 4 consecutive occasions. The time needed to perform the tasks and the quality of task performance (identification of bronchial anatomy, technique, dexterity, lack of trauma) were recorded. Upon completion of the simulator tests, participants were administered the 15-item questionnaire (post-test) once again. Participant satisfaction data on the perceived usefulness and accuracy of the 3D model were collected. A statistical analysis was performed using the t-test. Data are reported as mean values (± standard deviation). The time needed to complete all tasks was 152.9 ± 71.5 sec on the 1st attempt vs. 98.7 ± 40.3 sec on the 4th attempt (P = 0.03). Likewise, the quality of performance score improved from 8.3 ± 6.7 to 18.2 ± 2.5 (P < 0.0001). The average number of correct answers in the questionnaire was 6.8 ± 1.9 pre-test and 13.3 ± 3.1 post-test (P < 0.0001). Participants reported a high level of satisfaction with the perceived usefulness and accuracy of the model. We developed a 3D-printed model for bronchoscopy training. This model improved trainee performance and may represent a valid, low-cost bronchoscopy training tool.
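The pre-test/post-test questionnaire comparison reported above is naturally a paired design; a minimal sketch of the corresponding t-test, with invented per-participant scores, follows.

```python
# Paired t-test on pre/post questionnaire scores (data invented).
import numpy as np
from scipy import stats

pre = np.array([6, 7, 5, 8, 9, 6, 7, 6])            # correct answers before
post = np.array([12, 14, 11, 15, 16, 12, 13, 13])   # after simulator practice

t, p = stats.ttest_rel(pre, post)
print(f"t = {t:.2f}, p = {p:.4f}")
```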
Parametric Modeling as a Technology of Rapid Prototyping in Light Industry
NASA Astrophysics Data System (ADS)
Tomilov, I. N.; Grudinin, S. N.; Frolovsky, V. D.; Alexandrov, A. A.
2016-04-01
The paper deals with a parametric modeling method for virtual mannequins for the purposes of design automation in the clothing industry. The described approach includes the steps of generating the basic model based on the initial one (obtained by 3D scanning), its parameterization and deformation. The complex surfaces are represented by a wireframe model. The modeling results are evaluated with a set of similarity factors: deformed models are compared with their virtual prototypes, and the agreement is estimated by the standard deviation factor.
Business intelligence modeling in launch operations
NASA Astrophysics Data System (ADS)
Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.
2005-05-01
The future of business intelligence in space exploration will focus on the intelligent system-of-systems real-time enterprise. In present business intelligence, a number of technologies that are most relevant to space exploration are experiencing the greatest change. Emerging patterns organized around sets of processes, rather than organizational units, leading to end-to-end automation are becoming a major objective of enterprise information technology. The cost element is a leading factor of future exploration systems. This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise-level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on the recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The system-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long-term sustainability. A planning and analysis test bed is needed for evaluation of enterprise-level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation-based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB, integrating operations, process models, systems and environment models, and cost models as a comprehensive, disciplined enterprise analysis environment. Significant emphasis is being placed on adapting root cause analysis from existing Shuttle operations to exploration. Technical challenges include cost model validation, integration of parametric models with discrete-event process and systems simulations, and large-scale simulation integration. An enterprise architecture is required for coherent integration of systems models, and it will also require a plan for evolution over the life of the program. The proposed technology will produce long-term benefits in support of the NASA objectives for simulation-based acquisition, will improve the ability to assess architectural options versus safety/risk for future exploration systems, and will facilitate incorporation of operability as a systems design consideration, reducing overall life cycle cost for future systems.
Business Intelligence Modeling in Launch Operations
NASA Technical Reports Server (NTRS)
Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.
2005-01-01
This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise-level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on the recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The system-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long-term sustainability. A planning and analysis test bed is needed for evaluation of enterprise-level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation-based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB, integrating operations, process models, systems and environment models, and cost models as a comprehensive, disciplined enterprise analysis environment. Significant emphasis is being placed on adapting root cause analysis from existing Shuttle operations to exploration. Technical challenges include cost model validation, integration of parametric models with discrete-event process and systems simulations, and large-scale simulation integration. An enterprise architecture is required for coherent integration of systems models, and it will also require a plan for evolution over the life of the program. The proposed technology will produce long-term benefits in support of the NASA objectives for simulation-based acquisition, will improve the ability to assess architectural options versus safety/risk for future exploration systems, and will facilitate incorporation of operability as a systems design consideration, reducing overall life cycle cost for future systems. The future of business intelligence in space exploration will focus on the intelligent system-of-systems real-time enterprise. In present business intelligence, a number of technologies that are most relevant to space exploration are experiencing the greatest change. Emerging patterns organized around sets of processes, rather than organizational units, leading to end-to-end automation are becoming a major objective of enterprise information technology. The cost element is a leading factor of future exploration systems.
Visual Literacy and the Integration of Parametric Modeling in the Problem-Based Curriculum
ERIC Educational Resources Information Center
Assenmacher, Matthew Benedict
2013-01-01
This quasi-experimental study investigated the application of visual literacy skills in the form of parametric modeling software in relation to traditional forms of sketching. The study included two groups of high school technical design students. The control and experimental groups involved in the study consisted of two randomly selected groups…
NASA Technical Reports Server (NTRS)
Shaw, Eric J.
2001-01-01
This paper will report on the activities of the IAA Launcher Systems Economics Working Group in preparation for its Launcher Systems Development Cost Behavior Study. The Study goals include: improving launcher system and other space system parametric cost analysis accuracy; improving launcher system and other space system cost analysis credibility; and providing launcher system and technology development program managers and other decision-makers with useful information on the development cost impacts of their decisions. The Working Group plans to explore at least the following five areas in the Study: defining and explaining development cost behavior terms and concepts for use in the Study; identifying and quantifying sources of development cost and cost-estimating uncertainty; identifying and quantifying significant influences on development cost behavior; identifying common barriers to development cost understanding and reduction; and recommending practical, realistic strategies to accomplish reductions in launcher system development cost.
Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process
NASA Astrophysics Data System (ADS)
Turner, Douglas C.; Ladde, Gangaram S.
2018-03-01
Analytical solutions, discretization schemes and simulation results are presented for the time-delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended to include stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models illustrate changes in the behavior of the recently developed stochastic model of Hazra et al.
Comparison of radiation parametrizations within the HARMONIE-AROME NWP model
NASA Astrophysics Data System (ADS)
Rontu, Laura; Lindfors, Anders V.
2018-05-01
Downwelling shortwave radiation at the surface (SWDS, the global solar radiation flux), given by three different parametrization schemes, was compared to observations in HARMONIE-AROME numerical weather prediction (NWP) model experiments over Finland in spring 2017. Simulated fluxes agreed well with each other and with the observations in clear-sky cases. In cloudy-sky conditions, all schemes tended to underestimate SWDS at the daily level compared to the measurements. Large local and temporal differences between the model results and observations were seen, related to the variations and uncertainty of the predicted cloud properties. The results suggest a possibility to benefit from the use of different radiative transfer parametrizations in an NWP model to obtain perturbations for fine-resolution ensemble prediction systems. In addition, we recommend using global radiation observations in the standard validation of NWP models.
Latest astronomical constraints on some non-linear parametric dark energy models
NASA Astrophysics Data System (ADS)
Yang, Weiqiang; Pan, Supriya; Paliathanasis, Andronikos
2018-04-01
We consider non-linear redshift-dependent equation-of-state parameters as dark energy models in a spatially flat Friedmann-Lemaître-Robertson-Walker universe. To depict the expansion history of the universe in such cosmological scenarios, we take into account the large-scale behaviour of such parametric models and fit them using a set of the latest observational data of distinct origins, including cosmic microwave background radiation, Type Ia supernovae, baryon acoustic oscillations, redshift-space distortion, weak gravitational lensing, Hubble parameter measurements from cosmic chronometers, and finally the local Hubble constant from the Hubble Space Telescope. The fitting uses the publicly available code Cosmological Monte Carlo (COSMOMC) to extract the cosmological information from these parametric dark energy models. From our analysis, it follows that these models can describe the late-time accelerating phase of the universe, while they are distinguished from Λ-cosmology.
NASA Technical Reports Server (NTRS)
Coverse, G. L.
1984-01-01
A turbine modeling technique has been developed which enables the user to obtain consistent and rapid off-design performance from design point input. This technique is applicable to both axial and radial flow turbines with flow sizes ranging from about one pound per second to several hundred pounds per second. The axial flow turbines may or may not include variable geometry in the first stage nozzle. A user-specified option also permits the calculation of design point cooling flow levels and corresponding changes in efficiency for the axial flow turbines. The modeling technique has been incorporated into a time-sharing program in order to facilitate its use. Because this report contains a description of the input/output data, values of typical inputs, and example cases, it is suitable as a user's manual. This report is the second of a three-volume set. The titles of the three volumes are as follows: (1) Volume 1: CMGEN User's Manual (Parametric Compressor Generator); (2) Volume 2: PART User's Manual (Parametric Turbine); (3) Volume 3: MODFAN User's Manual (Parametric Modulation Flow Fan).
Prospects for reduced energy transports: A preliminary analysis
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Harper, M.; Smith, C. L.; Waters, M. H.; Williams, L. J.
1974-01-01
The recent energy crisis and subsequent substantial increase in fuel prices have provided increased incentive to reduce the fuel consumption of civil transport aircraft. At the present time many changes in operational procedures have been introduced to decrease fuel consumption of the existing fleet. In the future, however, it may become desirable or even necessary to introduce new fuel-conservative aircraft designs. This paper reports the results of a preliminary study of new near-term fuel conservative aircraft. A parametric study was made to determine the effects of cruise Mach number and fuel cost on the optimum configuration characteristics and on economic performance. For each design, the wing geometry was optimized to give maximum return on investment at a particular fuel cost. Based on the results of the parametric study, a nominal reduced energy configuration was selected. Compared with existing transport designs, the reduced energy design has a higher aspect ratio wing with lower sweep, and cruises at a lower Mach number. It has about 30% less fuel consumption on a seat-mile basis.
Parametric Sensitivity Analysis of Oscillatory Delay Systems with an Application to Gene Regulation.
Ingalls, Brian; Mincheva, Maya; Roussel, Marc R
2017-07-01
A parametric sensitivity analysis for periodic solutions of delay-differential equations is developed. Because phase shifts cause the sensitivity coefficients of a periodic orbit to diverge, we focus on sensitivities of the extrema, from which amplitude sensitivities are computed, and of the period. Delay-differential equations are often used to model gene expression networks. In these models, the parametric sensitivities of a particular genotype define the local geometry of the evolutionary landscape. Thus, sensitivities can be used to investigate directions of gradual evolutionary change. An oscillatory protein synthesis model whose properties are modulated by RNA interference is used as an example. This model consists of a set of coupled delay-differential equations involving three delays. Sensitivity analyses are carried out at several operating points. Comments on the evolutionary implications of the results are offered.
Parametric Study of Reactive Melt Infiltration
NASA Technical Reports Server (NTRS)
Nelson, Emily S.; Colella, Phillip
2000-01-01
Reactive melt infiltration is viewed as a promising means of achieving near-net shape manufacturing with quick processing time and at low cost. Since the reactants and products are, in general, of varying density, overall conservation of mass dictates that there is a force related to chemical conversion which can directly influence infiltration behavior. In effect, the driving pressure forces may compete with the forces from chemical conversion, affecting the advancement of the front. We have developed a two-dimensional numerical code to examine these effects, using reaction-formed silicon carbide as a model system for this process. We have examined a range of initial porosities, pore radii, and reaction rates in order to investigate their effects on infiltration dynamics.
Using Survival Analysis to Improve Estimates of Life Year Gains in Policy Evaluations.
Meacock, Rachel; Sutton, Matt; Kristensen, Søren Rud; Harrison, Mark
2017-05-01
Policy evaluations taking a lifetime horizon have converted estimated changes in short-term mortality to expected life year gains using general population life expectancy. However, the life expectancy of the affected patients may differ from the general population. In trials, survival models are commonly used to extrapolate life year gains. The objective was to demonstrate the feasibility and materiality of using parametric survival models to extrapolate future survival in health care policy evaluations. We used our previous cost-effectiveness analysis of a pay-for-performance program as a motivating example. We first used the cohort of patients admitted prior to the program to compare 3 methods for estimating remaining life expectancy. We then used a difference-in-differences framework to estimate the life year gains associated with the program using general population life expectancy and survival models. Patient-level data from Hospital Episode Statistics was utilized for patients admitted to hospitals in England for pneumonia between 1 April 2007 and 31 March 2008 and between 1 April 2009 and 31 March 2010, and linked to death records for the period from 1 April 2007 to 31 March 2011. In our cohort of patients, using parametric survival models rather than general population life expectancy figures reduced the estimated mean life years remaining by 30% (9.19 v. 13.15 years, respectively). However, the estimated mean life year gains associated with the program are larger using survival models (0.380 years) compared to using general population life expectancy (0.154 years). Using general population life expectancy to estimate the impact of health care policies can overestimate life expectancy but underestimate the impact of policies on life year gains. Using a longer follow-up period improved the accuracy of estimated survival and program impact considerably.
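The extrapolation step described above reduces to integrating a fitted survival function: mean remaining life years equal the area under S(t). A minimal sketch with an invented Weibull fit follows; a real analysis would fit to censored patient-level data.

```python
# Remaining life expectancy as the integral of a fitted survival curve.
import numpy as np
from scipy.stats import weibull_min
from scipy.integrate import quad

shape, scale = 1.2, 11.0                    # illustrative Weibull fit (years)
S = lambda t: weibull_min.sf(t, shape, scale=scale)

mean_life_years, _ = quad(S, 0, np.inf)     # E[T] = integral of S(t)
print(f"model-based life expectancy: {mean_life_years:.2f} years")

# Contrast with a general-population figure, as in the paper's comparison:
print(f"difference vs. the 13.15-year population figure: "
      f"{13.15 - mean_life_years:.2f} years")
```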
Ionescu, Crina-Maria; Geidl, Stanislav; Svobodová Vařeková, Radka; Koča, Jaroslav
2013-10-28
We focused on the parametrization and evaluation of empirical models for fast and accurate calculation of conformationally dependent atomic charges in proteins. The models were based on the electronegativity equalization method (EEM), and the parametrization procedure was tailored to proteins. We used large protein fragments as reference structures and fitted the EEM model parameters using atomic charges computed by three population analyses (Mulliken, Natural, iterative Hirshfeld), at the Hartree-Fock level with two basis sets (6-31G*, 6-31G**) and in two environments (gas phase, implicit solvation). We parametrized and successfully validated 24 EEM models. When tested on insulin and ubiquitin, all models reproduced quantum mechanics level charges well and were consistent with respect to population analysis and basis set. Specifically, the models showed on average a correlation of 0.961, RMSD 0.097 e, and average absolute error per atom 0.072 e. The EEM models can be used with the freely available EEM implementation EEM_SOLVER.
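The EEM calculation itself reduces to a small linear solve: each atom's electronegativity is equalized subject to charge conservation. The sketch below shows that structure; the A/B/kappa parameters and geometry are placeholders, not the fitted parameter sets from the paper.

```python
# EEM sketch: solve A_i + B_i*q_i + kappa*sum_{j!=i} q_j/r_ij = chi_bar
# for all atoms, together with sum_i q_i = Q (total molecular charge).
import numpy as np

def eem_charges(coords, A, B, kappa, total_charge=0.0):
    n = len(coords)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, i] = B[i]
        for j in range(n):
            if i != j:
                M[i, j] = kappa / np.linalg.norm(coords[i] - coords[j])
        M[i, n] = -1.0          # column for the unknown chi_bar
        rhs[i] = -A[i]
    M[n, :n] = 1.0              # charge-conservation row
    rhs[n] = total_charge
    sol = np.linalg.solve(M, rhs)
    return sol[:n]              # charges; sol[n] is the equalized chi_bar

# Placeholder water-like geometry (angstroms) and parameters:
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.96], [0.93, 0.0, -0.26]])
print(eem_charges(coords, A=[8.5, 4.5, 4.5], B=[11.0, 13.8, 13.8], kappa=0.5))
```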
NASA Astrophysics Data System (ADS)
Susyanto, Nanang
2017-12-01
We propose a simple derivation of the Cramer-Rao Lower Bound (CRLB) of parameters under equality constraints from the CRLB without constraints in regular parametric models. When a regular parametric model and an equality constraint of the parameter are given, a parametric submodel can be defined by restricting the parameter under that constraint. The tangent space of this submodel is then computed with the help of the implicit function theorem. Finally, the score function of the restricted parameter is obtained by projecting the efficient influence function of the unrestricted parameter on the appropriate inner product spaces.
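The object being derived is the standard constrained CRLB: with Fisher information F and constraint Jacobian G, a basis U of the null space of G (the tangent space of the submodel) gives the bound U(U^T F U)^{-1} U^T. A numerical sketch with an illustrative two-mean example:

```python
# Constrained CRLB sketch: two Gaussian means constrained to be equal.
import numpy as np
from scipy.linalg import null_space

F = np.diag([4.0, 9.0])         # Fisher information (illustrative)
G = np.array([[1.0, -1.0]])     # gradient of the constraint theta1 = theta2

U = null_space(G)               # basis of the constraint's tangent space
crlb_constrained = U @ np.linalg.inv(U.T @ F @ U) @ U.T
print(crlb_constrained)
print(np.linalg.inv(F))         # unconstrained CRLB, for comparison
```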
Experimentally validated 3D MD model for AFM-based tip-based nanomanufacturing
NASA Astrophysics Data System (ADS)
Promyoo, Rapeepan
In order to control AFM-based TBN to produce precise nano-geometry efficiently, there is a need for a more focused study of the effects of different parameters, such as feed, speed, and depth of cut, on the process performance and outcome. This is achieved by experimentally validating an MD simulation model of nanomachining and using it to conduct parametric studies to guide AFM-based TBN. A 3D MD model with a larger domain size was developed and used to gain unique insight into the nanoindentation and nanoscratching processes, such as the effect of tip speed (e.g., the effect of tip speed on indentation force above 10 nm of indentation depth). The model also supported a more comprehensive parametric study (than other published work) in terms of the number of parameters and ranges of values investigated, as well as a more cost-effective design of experiments. The model was also used to predict material properties at the nanoscale (e.g., the hardness of gold was predicted within 6% error). In parallel, a comprehensive experimental parametric study was conducted to produce a database used to select proper machining conditions for guiding the fabrication of nanochannels (e.g., scratch rate = 0.996 Hz and trigger threshold = 1 V for achieving a nanochannel depth of 50 nm in the case of a gold device). Similar trends were found for the variation of indentation force with depth of cut and for the pattern of material pile-up around the indentation mark or scratched groove. The parametric studies conducted using both MD model simulations and AFM experiments showed the following: normal forces for both nanoindentation and nanoscratching increase as the depth of cut increases; the indentation depth increases with tip speed, but the depth of scratch decreases with increasing tip speed; the width and depth of the scratched groove also depend on the scratch angle, with a recommended scratch angle of 90°; the surface roughness increases with step-over, especially when the step-over is larger than the tip radius; and the depth of cut also increases as the step-over decreases. An additional study was conducted using the MD model to understand the effect of crystal structure and defects in the material when subjected to stress. Several types of defects, including vacancies and Shockley partial dislocation loops, were observed during the MD simulations for gold, copper and aluminum. Finally, AFM-based TBN is used with photolithography to fabricate a nano-fluidic device for a medical application. The photolithography process is used to create microchannels on top of a silicon wafer, and AFM-based TBN is applied to fabricate nanochannels between the microchannels that connect to the reservoirs. A fluid flow test was conducted on the devices to ensure that the nanochannels were open and the bonding sealed.
Air Brayton Solar Receiver, phase 1
NASA Technical Reports Server (NTRS)
Zimmerman, D. K.
1979-01-01
A six-month analysis and conceptual design study of an open-cycle Air Brayton Solar Receiver (ABSR) for use on a tracking parabolic solar concentrator is discussed. The ABSR, which includes a buffer storage system, is designed to provide inlet air to a power conversion unit. Parametric analyses, conceptual design, interface requirements, and production cost estimates are described. The design features were optimized to yield a zero-maintenance, low-cost, high-efficiency concept that will provide a 30-year operational life.
NASA Astrophysics Data System (ADS)
Gosselin, Jeremy M.; Dosso, Stan E.; Cassidy, John F.; Quijano, Jorge E.; Molnar, Sheri; Dettmer, Jan
2017-10-01
This paper develops and applies a Bernstein-polynomial parametrization to efficiently represent general, gradient-based profiles in nonlinear geophysical inversion, with application to ambient-noise Rayleigh-wave dispersion data. Bernstein polynomials provide a stable parametrization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the geophysical parameter profile. A fully nonlinear Bayesian inversion methodology is applied to estimate shear wave velocity (VS) profiles and uncertainties from surface wave dispersion data extracted from ambient seismic noise. The Bayesian information criterion is used to determine the appropriate polynomial order consistent with the resolving power of the data. Data error correlations are accounted for in the inversion using a parametric autoregressive model. The inversion solution is defined in terms of marginal posterior probability profiles for VS as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is applied to synthetic dispersion data as well as data processed from passive array recordings collected on the Fraser River Delta in British Columbia, Canada. Results from this work are in good agreement with previous studies, as well as with co-located invasive measurements. The approach considered here is better suited than `layered' modelling approaches in applications where smooth gradients in geophysical parameters are expected, such as soil/sediment profiles. Further, the Bernstein polynomial representation is more general than smooth models based on a fixed choice of gradient type (e.g. power-law gradient) because the form of the gradient is determined objectively by the data, rather than by a subjective parametrization choice.
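The parametrization at the heart of this approach is simple to state: the profile is a weighted sum of Bernstein basis polynomials over a normalized depth coordinate, with the coefficients as the inversion unknowns. A minimal sketch, with invented coefficient values:

```python
# Bernstein-polynomial profile: V_S(x) = sum_k c_k * B_{k,n}(x), x in [0, 1].
import numpy as np
from scipy.special import comb

def bernstein_profile(coeffs, x):
    n = len(coeffs) - 1
    x = np.asarray(x)
    basis = np.array([comb(n, k) * x**k * (1 - x)**(n - k)
                      for k in range(n + 1)])
    return np.asarray(coeffs) @ basis

depth = np.linspace(0.0, 1.0, 5)                       # normalized depth
vs = bernstein_profile([150.0, 220.0, 300.0, 420.0], depth)  # m/s, invented
print(vs)
```

A small perturbation of any coefficient perturbs the profile only smoothly, which is the stability property noted above.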
Parametric model of the scala tympani for haptic-rendered cochlear implantation.
Todd, Catherine; Naghdy, Fazel
2005-01-01
A parametric model of the human scala tympani has been designed for use in a haptic-rendered computer simulation of cochlear implant surgery. It will be the first surgical simulator of this kind. A geometric model of the scala tympani has been derived from measured data for this purpose. The model is compared with two existing descriptions of the cochlear spiral. A first approximation of the basilar membrane is also produced. The structures are imported into a force-rendering software application for system development.
Parametric Modeling in the CAE Process: Creating a Family of Models
NASA Technical Reports Server (NTRS)
Brown, Christopher J.
2011-01-01
This presentation is meant as an example: it gives ideas of approaches to use and highlights the significant benefit of parametric, geometry-based modeling and the importance of planning before you build. It showcases several NX capabilities, including mesh controls, associativity, divide face, and offset surface. As a reminder, the model only had to be built once and can be used for any cabinet in that "family," which saves a lot of time if pre-planned and allows re-use in the future.
GEE-Smoothing Spline in Semiparametric Model with Correlated Nominal Data
NASA Astrophysics Data System (ADS)
Ibrahim, Noor Akma; Suliadi
2010-11-01
In this paper we propose a GEE smoothing spline for the estimation of semiparametric models with correlated nominal data. The method can be seen as an extension of parametric generalized estimating equations to semiparametric models. The nonparametric component is estimated using a smoothing spline, specifically the natural cubic spline. We use a profile algorithm in the estimation of both the parametric and nonparametric components. The properties of the estimators are evaluated using simulation studies.
Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G
2009-09-01
The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
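The random-regression design step described above can be sketched compactly: days in milk are mapped to [-1, 1] and evaluated on orthogonal Legendre polynomials, giving the covariates on which the additive genetic effects are regressed. The lactation range below is a common convention, not taken from the paper.

```python
# Legendre covariates for a random regression model over days in milk.
import numpy as np
from numpy.polynomial import legendre as L

def legendre_covariates(dim, order, dim_min=5, dim_max=305):
    """Rows = records, columns = Legendre polynomials of degree 0..order."""
    x = 2 * (np.asarray(dim) - dim_min) / (dim_max - dim_min) - 1
    return np.column_stack([L.legval(x, np.eye(order + 1)[k])
                            for k in range(order + 1)])

print(legendre_covariates([5, 100, 200, 305], order=4))
```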
ERIC Educational Resources Information Center
Steinhauer, H. M.
2012-01-01
Engineering graphics has historically been viewed as a challenging course to teach as students struggle to grasp and understand the fundamental concepts and then to master their proper application. The emergence of stable, fast, affordable 3D parametric modeling platforms such as CATIA, Pro-E, and AutoCAD while providing several pedagogical…
Quantum Treatment of Two Coupled Oscillators in Interaction with a Two-Level Atom:
NASA Astrophysics Data System (ADS)
Khalil, E. M.; Abdalla, M. Sebawe; Obada, A. S.-F.
In this communication we handle a modified model representing the interaction between a two-level atom and two modes of the electromagnetic field in a cavity. The interaction between the modes is assumed to be of the parametric amplifier type. The model consists of two different systems: one represents the Jaynes-Cummings model (atom-field interaction) and the other represents the two-mode parametric amplifier model (field-field interaction). After some canonical transformations the constants of the motion are obtained and used to derive the time evolution operator. The wave function in the Schrödinger picture is constructed and employed to discuss some statistical properties related to the system. Further discussion of the statistical properties of some physical quantities is given, where we take into account an initially correlated pair-coherent state for the modes. We concentrate our examination on the system behavior that results from variation of the parametric amplifier coupling parameter as well as the detuning parameter. It is shown that the parametric amplifier interaction term increases the revival period and consequently lengthens the period of strong interaction between the atom and the fields.
Tsamados, Michel; Feltham, Daniel; Petty, Alek; Schroeder, David; Flocco, Daniela
2015-10-13
We present a modelling study of processes controlling the summer melt of the Arctic sea ice cover. We perform a sensitivity study and focus our interest on the thermodynamics at the ice-atmosphere and ice-ocean interfaces. We use the Los Alamos community sea ice model CICE, and additionally implement and test three new parametrization schemes: (i) a prognostic mixed layer; (ii) a three equation boundary condition for the salt and heat flux at the ice-ocean interface; and (iii) a new lateral melt parametrization. Recent additions to the CICE model are also tested, including explicit melt ponds, a form drag parametrization and a halodynamic brine drainage scheme. The various sea ice parametrizations tested in this sensitivity study introduce a wide spread in the simulated sea ice characteristics. For each simulation, the total melt is decomposed into its surface, bottom and lateral melt components to assess the processes driving melt and how this varies regionally and temporally. Because this study quantifies the relative importance of several processes in driving the summer melt of sea ice, this work can serve as a guide for future research priorities. © 2015 The Author(s).
Effect of Monovalent Ion Parameters on Molecular Dynamics Simulations of G-Quadruplexes.
Havrila, Marek; Stadlbauer, Petr; Islam, Barira; Otyepka, Michal; Šponer, Jiří
2017-08-08
G-quadruplexes (GQs) are key noncanonical DNA and RNA architectures stabilized by desolvated monovalent cations present in their central channels. We analyze extended atomistic molecular dynamics simulations (∼580 μs in total) of GQs with 11 monovalent cation parametrizations, assessing GQ overall structural stability, dynamics of internal cations, and distortions of the G-tetrad geometries. The majority of simulations were executed with the SPC/E water model; however, test simulations with the TIP3P and OPC water models are also reported. The identity and parametrization of the ions strongly affect the behavior of a tetramolecular d[GGG]4 GQ, which is unstable with several ion parametrizations. The remaining studied RNA and DNA GQs are structurally stable, though the G-tetrad geometries are always deformed by bifurcated H-bonding in a parametrization-specific manner. Thus, basic 10-μs-scale simulations of fully folded GQs can be safely done with a number of cation parametrizations. However, there are parametrization-specific differences and basic force-field errors affecting the quantitative description of ion-tetrad interactions, which may significantly affect studies of ion-binding processes and the description of the GQ folding landscape. Our d[GGG]4 simulations indirectly suggest that such studies will also be sensitive to the water models. During exchanges with bulk water, the Na+ ions move inside the GQs in a concerted manner, while larger relocations of the K+ ions are typically separated. We suggest that the Joung-Cheatham SPC/E K+ parameters represent a safe choice in simulation studies of GQs, though variation of the ion parameters can be used for specific simulation goals.
Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.
2015-01-01
Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208
NASA Technical Reports Server (NTRS)
Bradford, D. F.; Kelejian, H. H.; Brusch, R.; Gross, J.; Fishman, H.; Feenberg, D.
1974-01-01
The value of improving information for forecasting future crop harvests was investigated. Emphasis was placed upon establishing practical evaluation procedures firmly based in economic theory. The analysis was applied to the case of U.S. domestic wheat consumption. Estimates for a cost of storage function and a demand function for wheat were calculated. A model of market determinations of wheat inventories was developed for inventory adjustment. The carry-over horizon is computed by the solution of a nonlinear programming problem, and related variables such as spot and future price at each stage are determined. The model is adaptable to other markets. Results are shown to depend critically on the accuracy of current and proposed measurement techniques. The quantitative results are presented parametrically, in terms of various possible values of current and future accuracies.
Minimum-dissipation scalar transport model for large-eddy simulation of turbulent flows
NASA Astrophysics Data System (ADS)
Abkar, Mahdi; Bae, Hyun J.; Moin, Parviz
2016-08-01
Minimum-dissipation models are a simple alternative to the Smagorinsky-type approaches to parametrize the subfilter turbulent fluxes in large-eddy simulation. A recently derived model of this type for subfilter stress tensor is the anisotropic minimum-dissipation (AMD) model [Rozema et al., Phys. Fluids 27, 085107 (2015), 10.1063/1.4928700], which has many desirable properties. It is more cost effective than the dynamic Smagorinsky model, it appropriately switches off in laminar and transitional flows, and it is consistent with the exact subfilter stress tensor on both isotropic and anisotropic grids. In this study, an extension of this approach to modeling the subfilter scalar flux is proposed. The performance of the AMD model is tested in the simulation of a high-Reynolds-number rough-wall boundary-layer flow with a constant and uniform surface scalar flux. The simulation results obtained from the AMD model show good agreement with well-established empirical correlations and theoretical predictions of the resolved flow statistics. In particular, the AMD model is capable of accurately predicting the expected surface-layer similarity profiles and power spectra for both velocity and scalar concentration.
Chaotic Lagrangian models for turbulent relative dispersion.
Lacorata, Guglielmo; Vulpiani, Angelo
2017-04-01
A deterministic multiscale dynamical system is introduced and discussed as a prototype model for relative dispersion in stationary, homogeneous, and isotropic turbulence. Unlike stochastic diffusion models, here trajectory transport and mixing properties are entirely controlled by Lagrangian chaos. The anomalous "sweeping effect," a known drawback common to kinematic simulations, is removed through the use of quasi-Lagrangian coordinates. Lagrangian dispersion statistics of the model are accurately analyzed by computing the finite-scale Lyapunov exponent (FSLE), which is the optimal measure of the scaling properties of dispersion. FSLE scaling exponents provide a severe test to decide whether model simulations are in agreement with theoretical expectations and/or observation. The results of our numerical experiments cover a wide range of "Reynolds numbers" and show that chaotic deterministic flows can be very efficient, and numerically low-cost, models of turbulent trajectories in stationary, homogeneous, and isotropic conditions. The mathematics of the model is relatively simple, and, in a geophysical context, potential applications may regard small-scale parametrization issues in general circulation models, mixed layer, and/or boundary layer turbulence models as well as Lagrangian predictability studies.
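The FSLE diagnostic used above can be sketched in a few lines: for each separation scale delta, record the time tau for a pair's distance to grow from delta to r*delta; the FSLE is ln(r)/<tau>. The sketch below assumes a precomputed time series of pair separations and averages over a single pair for brevity.

```python
# Finite-scale Lyapunov exponent from a trajectory-pair separation series.
import numpy as np

def fsle(pair_distance, dt, deltas, r=np.sqrt(2)):
    lam = []
    for d in deltas:
        start = np.argmax(pair_distance >= d)        # first crossing of delta
        grown = pair_distance[start:] >= r * d
        tau = (np.argmax(grown) if grown.any() else np.nan) * dt
        lam.append(np.log(r) / tau if tau and not np.isnan(tau) else np.nan)
    return np.array(lam)

t = np.arange(0, 10, 0.01)
dist = 1e-3 * np.exp(0.8 * t)                        # exponential test pair
print(fsle(dist, dt=0.01, deltas=[1e-3, 1e-2, 1e-1]))  # ~0.8 at all scales
```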
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis
2016-04-01
There have been tremendous improvements in distributed hydrologic modeling (DHM), which have made process-based simulation with high spatiotemporal resolution applicable on large spatial scales. Despite increasing information on the heterogeneous properties of a catchment, DHM is still subject to uncertainties inherent in model structure, parameters and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is, however, often ignored, mainly due to practical limitations of the methodology in specifying modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of DHM may be insufficient to capture the dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much model performance. The MPR method, incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm), can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach to incorporate the parametric uncertainty of DHM in DA to improve streamflow predictions. To effectively represent and control the uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated with DA. Lagged particle filtering is utilized to account for the response times and non-Gaussian characteristics of internal hydrologic processes. Hindcasting experiments are implemented to evaluate the impacts of the proposed DA method on streamflow predictions in multiple European river basins with different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach is stable with limited ensemble members and viable for practical use.
Martina, R; Kay, R; van Maanen, R; Ridder, A
2015-01-01
Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
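A minimal sketch of the modelling framework advocated above follows: a negative binomial regression of episode counts on treatment, with the treatment effect reported as a rate ratio. Data are simulated, and the dispersion value is illustrative.

```python
# Negative binomial regression for overdispersed counts; the treatment
# effect exponentiates to a rate ratio.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
treatment = np.repeat([0, 1], 100)
# Simulated overdispersed counts with a treated rate ratio of exp(-0.35).
mu = np.exp(1.5 - 0.35 * treatment)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))

X = sm.add_constant(treatment)
model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(model.summary())
print("rate ratio:", np.exp(model.params[1]))
```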
Parametric regression model for survival data: Weibull regression model as an example
2016-01-01
The Weibull regression model is one of the most popular forms of parametric regression model, in that it provides an estimate of the baseline hazard function as well as coefficients for covariates. Because of technical difficulties, the Weibull regression model is seldom used in the medical literature compared to the semi-parametric proportional hazards model. To make clinical investigators familiar with the Weibull regression model, this article introduces some basic knowledge of the model and then illustrates how to fit it with the R software. The SurvRegCensCov package is useful in converting estimated coefficients to clinically relevant statistics such as the hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by categorical variables. The eha package provides an alternative method to fit the Weibull regression model, and its check.dist() function helps to assess the goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using the anova() function. Alternatively, backward elimination starting from a full model is an efficient way to develop the model. Visualizing the Weibull regression model after model development is worthwhile, as it provides another way to report findings. PMID:28149846
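The article demonstrates the workflow in R; a rough Python counterpart, assuming the lifelines package, looks like this (data invented). In the Weibull AFT parametrization, exponentiated coefficients are event time ratios.

```python
# Weibull accelerated-failure-time model with lifelines (data invented).
import pandas as pd
from lifelines import WeibullAFTFitter

df = pd.DataFrame({
    "time": [5, 8, 12, 3, 9, 14, 7, 11],   # follow-up times (months)
    "event": [1, 1, 0, 1, 1, 0, 1, 1],     # 1 = event observed, 0 = censored
    "treated": [0, 1, 1, 0, 1, 1, 0, 0],
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="time", event_col="event")
aft.print_summary()   # the exp(coef) column gives the event time ratio (ETR)
```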
Volterra model of the parametric array loudspeaker operating at ultrasonic frequencies.
Shi, Chuang; Kajikawa, Yoshinobu
2016-11-01
The parametric array loudspeaker (PAL) is an application of the parametric acoustic array in air, which can be applied to transmit a narrow audio beam from an ultrasonic emitter. However, nonlinear distortion is very perceptible in the audio beam. Modulation methods to reduce the nonlinear distortion are available for on-axis far-field applications. For other applications, preprocessing techniques are wanting. In order to develop a preprocessing technique with general applicability to a wide range of operating conditions, the Volterra filter is investigated as a nonlinear model of the PAL in this paper. Limitations of the standard audio-to-audio Volterra filter are elaborated. An improved ultrasound-to-ultrasound Volterra filter is proposed and empirically demonstrated to be a more generic Volterra model of the PAL.
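The model structure under investigation is a second-order discrete Volterra filter: a linear kernel plus a quadratic kernel acting on delayed input samples. A minimal sketch with arbitrary illustrative kernels:

```python
# Second-order Volterra model:
# y[n] = sum_k h1[k]*x[n-k] + sum_{k1,k2} h2[k1,k2]*x[n-k1]*x[n-k2].
import numpy as np

def volterra2(x, h1, h2):
    M = len(h1)
    xpad = np.concatenate([np.zeros(M - 1), x])
    y = np.zeros(len(x))
    for n in range(len(x)):
        w = xpad[n : n + M][::-1]        # [x[n], x[n-1], ..., x[n-M+1]]
        y[n] = h1 @ w + w @ h2 @ w       # linear + quadratic kernel terms
    return y

x = np.sin(2 * np.pi * 0.05 * np.arange(64))   # test input
h1 = np.array([1.0, 0.4, 0.1])                 # illustrative linear kernel
h2 = 0.05 * np.eye(3)                          # illustrative quadratic kernel
print(volterra2(x, h1, h2)[:5])
```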
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
NASA Astrophysics Data System (ADS)
Giammichele, N.; Charpinet, S.; Fontaine, G.; Brassard, P.
2017-01-01
We present a prescription for parametrizing the chemical profile in the core of white dwarfs in light of the recent discovery that pulsation modes may sometimes be deeply confined in some cool pulsating white dwarfs. Such modes may be used as unique probes of the complicated chemical stratification that results from several processes that occurred in previous evolutionary phases of intermediate-mass stars. This effort is part of our ongoing quest for more credible and realistic seismic models of white dwarfs using static, parametrized equilibrium structures. Inspired by successful techniques developed in design optimization fields (such as aerodynamics), we exploit Akima splines for the tracing of the chemical profile of oxygen (carbon) in the core of a white dwarf model. A series of tests are then presented to better seize the precision and significance of the results that can be obtained in an asteroseismological context. We also show that the new parametrization passes an essential basic test, as it successfully reproduces the chemical stratification of a full evolutionary model.
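The Akima-spline idea is directly available in common numerical libraries; a minimal sketch traces a core oxygen profile through a few parametrized control points (the control-point values below are invented for illustration, not a seismic model):

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator

# Hypothetical control points for an oxygen mass fraction X_O(m),
# with m the normalized mass coordinate in the white dwarf core.
m_ctrl = np.array([0.0, 0.2, 0.4, 0.55, 0.7, 0.85, 1.0])
xo_ctrl = np.array([0.75, 0.74, 0.70, 0.45, 0.20, 0.05, 0.0])

profile = Akima1DInterpolator(m_ctrl, xo_ctrl)

m = np.linspace(0.0, 1.0, 200)
xo = profile(m)  # smooth, without the overshoot cubic splines can produce
```

Moving the control points then sweeps out the family of candidate chemical stratifications that the asteroseismological fit explores.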
NASA Astrophysics Data System (ADS)
Pan, X. G.; Wang, J. Q.; Zhou, H. Y.
2013-05-01
A variance component estimation (VCE) method based on a semi-parametric estimator with a data-depth weighting matrix is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of space and ground combined TT&C (Telemetry, Tracking and Command) technology. The uncertain model error is estimated with the semi-parametric estimator model, and outliers are suppressed with the data-depth weighting matrix. With the model error and outliers restrained, the improved VCE can be used to estimate the weighting matrix for observation data affected by uncertain model errors or outliers. A simulation experiment was carried out under space and ground combined TT&C conditions. The results show that the new VCE based on model error compensation determines rational weights for the multi-source heterogeneous data and restrains outlying data.
Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems.
Wolf, Elizabeth Skubak; Anderson, David F
2015-01-21
Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.
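The hybrid estimators themselves are not reproduced here; the sketch below only illustrates what a parametric sensitivity is on a toy birth-death CTMC, using a plain finite difference with common random numbers (a simpler, finite-difference-biased estimator than the paper's unbiased methods):

```python
import numpy as np

def gillespie_final_count(k_prod, k_deg, t_end, rng):
    """SSA for 0 -> X at rate k_prod and X -> 0 at rate k_deg * X."""
    t, x = 0.0, 0
    while True:
        rates = np.array([k_prod, k_deg * x])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x
        x += 1 if rng.uniform() * total < rates[0] else -1

# Central difference of E[X(T)] in k_prod, coupling runs by a shared seed.
h, T, runs = 0.1, 10.0, 2000
means = []
for sign in (+1, -1):
    rng = np.random.default_rng(42)   # same seed -> common random numbers
    vals = [gillespie_final_count(2.0 + sign * h, 1.0, T, rng)
            for _ in range(runs)]
    means.append(np.mean(vals))
print("sensitivity estimate:", (means[0] - means[1]) / (2 * h),
      "(near-stationary exact value: 1/k_deg = 1.0)")
```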
Cosmological implications of quantum mechanics parametrization of dark energy
NASA Astrophysics Data System (ADS)
Szydłowski, Marek; Stachowski, Aleksander; Urbanowski, Krzysztof
2017-08-01
We consider cosmology with running dark energy, where the parametrization of the dark energy is derived from the quantum process of transition from the false vacuum state to the true vacuum state. This model is a generalized interacting ΛCDM model. We consider a parametrization of the dark energy density given by the Breit-Wigner energy distribution function. The idea of treating the process as a quantum mechanical decay of unstable states was formulated by Krauss and Dent, and we use it in our considerations. In this model there is an energy transfer in the dark sector. In this evolutionary scenario the universe starts from the false vacuum state and evolves to the true vacuum state of the present-day universe, passing through an intermediate regime between the two. In this way an approach to the cosmological constant problem can be attempted. We estimate the cosmological parameters for this model. The model is in good agreement with the astronomical data and is practically indistinguishable from the ΛCDM model.
Möller, Jörgen; Nicklasson, Lars; Murthy, Ananthram
2011-01-01
To estimate the cost-effectiveness (cost per additional life-year [LY] and quality-adjusted life-year [QALY] gained) of lenalidomide plus dexamethasone (LEN/DEX) compared with bortezomib for the treatment of relapsed-refractory multiple myeloma (rrMM) in Norway. A discrete-event simulation model was developed to predict patients' disease course using patient data, best response, and efficacy levels obtained from the LEN/DEX MM-009/-010 trials and the published bortezomib (APEX) clinical trial. Predictive equations for time-to-progression (TTP) and post-progression survival (PPS) were developed by identifying the best-fitting parametric survival distributions and selecting the most significant predictors. Disease and adverse event management data were obtained via a survey of Norwegian experts. Costs, derived from official Norwegian pricing databases, included drug, administration, monitoring, and adverse event management costs. Complete or partial response rates were 65% for LEN/DEX compared to 43% for bortezomib. Derived median TTP was 11.45 months for LEN/DEX compared to 5.15 months for bortezomib. LYs and QALYs were higher for LEN/DEX (4.06 and 2.95, respectively) than for bortezomib (3.11 and 2.19, respectively). The incremental costs per QALY and LY gained from LEN/DEX were NOK 247,978 and NOK 198,714, respectively, compared to bortezomib. Multiple sensitivity analyses indicated the findings were stable; the parameters with the greatest impact were the 4-year time horizon (NOK 441,457/QALY) and the upper-bound confidence intervals for PPS (NOK 118,392). The model analyzed two therapies not compared in head-to-head trials and predicted results using an equation incorporating patient-level characteristics, so it is a limited estimation of the costs and outcomes in a Norwegian setting. The simulation model showed that treatment with LEN/DEX leads to greater LYs and QALYs than bortezomib in the treatment of rrMM patients. The incremental cost-effectiveness ratio indicated treatment with LEN/DEX to be cost-effective and was the basis of the reimbursement approval of LEN/DEX in Norway.
Studies on the Parametric Effects of Plasma Arc Welding of 2205 Duplex Stainless Steel
NASA Astrophysics Data System (ADS)
Selva Bharathi, R.; Siva Shanmugam, N.; Murali Kannan, R.; Arungalai Vendan, S.
2018-03-01
This research study attempts to create an optimized parametric window by employing the Taguchi algorithm for plasma arc welding (PAW) of 2 mm thick 2205 duplex stainless steel. The parameters considered for experimentation and optimization are the welding current, welding speed and pilot arc length. The experiments involve varying these parameters and recording the resulting depth of penetration and bead width; the welding current is varied between 60 and 70 A, the welding speed between 250 and 300 mm/min, and the pilot arc length between 1 and 2 mm. Design of experiments is used for the experimental trials. A back-propagation neural network, a genetic algorithm and Taguchi techniques are used to predict the bead width and depth of penetration, and the predictions agree well with the experimentally achieved results. Additionally, micro-structural characterizations are carried out to examine the weld quality. Extrapolation of these optimized parametric values yields enhanced weld strength with cost and time reduction.
NASA Astrophysics Data System (ADS)
Alfieri, Luisa
2015-12-01
Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate for sensitive loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid through static converters, which lowers the PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the standard International Electrotechnical Commission (IEC) method, some parametric methods (such as Prony and the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.
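As a sketch of what a parametric (model-based) spectral method buys over the standard FFT-based approach, a basic Prony estimate recovers damped-sinusoid frequencies from a short sampled waveform (synthetic signal; this is neither the IEC nor the ESPRIT implementation compared in the paper):

```python
import numpy as np

def prony(x, p, dt):
    """Estimate p complex poles of a sum of damped sinusoids (Prony)."""
    N = len(x)
    # Linear prediction: x[n] = -(a1*x[n-1] + ... + ap*x[n-p]).
    A = np.column_stack([x[p - k:N - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(A, -x[p:N], rcond=None)
    poles = np.roots(np.concatenate(([1.0], a)))
    freqs = np.angle(poles) / (2 * np.pi * dt)   # Hz
    damps = np.log(np.abs(poles)) / dt           # 1/s, negative = decaying
    return freqs, damps

fs = 5000.0
t = np.arange(0, 0.2, 1 / fs)
# A 50 Hz fundamental plus a decaying 250 Hz component, as might follow
# a converter switching event (values hypothetical).
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.exp(-20 * t) * np.sin(2 * np.pi * 250 * t)
freqs, damps = prony(x, p=4, dt=1 / fs)
print(np.sort(np.abs(freqs)))  # expect conjugate pairs near 50 and 250 Hz
```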
Connock, Martin; Hyde, Chris; Moore, David
2011-10-01
The UK National Institute for Health and Clinical Excellence (NICE) has used its Single Technology Appraisal (STA) programme to assess several drugs for cancer. Typically, the evidence submitted by the manufacturer comes from one short-term randomized controlled trial (RCT) demonstrating improvement in overall survival and/or in delay of disease progression, and these are the pre-eminent drivers of cost effectiveness. We draw attention to key issues encountered in assessing the quality and rigour of the manufacturers' modelling of overall survival and disease progression. Our examples are two recent STAs: sorafenib (Nexavar®) for advanced hepatocellular carcinoma, and azacitidine (Vidaza®) for higher-risk myelodysplastic syndromes (MDS). The choice of parametric model had a large effect on the predicted treatment-dependent survival gain. Logarithmic models (log-Normal and log-logistic) delivered double the survival advantage that was derived from Weibull models. Both submissions selected the logarithmic fits for their base-case economic analyses and justified selection solely on Akaike Information Criterion (AIC) scores. AIC scores in the azacitidine submission failed to match the choice of the log-logistic over Weibull or exponential models, and the modelled survival in the intervention arm lacked face validity. AIC scores for sorafenib models favoured log-Normal fits; however, since there is no statistical method for comparing AIC scores, and differences may be trivial, it is generally advised that the plausibility of competing models should be tested against external data and explored in diagnostic plots. Function fitting to observed data should not be a mechanical process validated by a single crude indicator (AIC). Projective models should show clear plausibility for the patients concerned and should be consistent with other published information. Multiple rather than single parametric functions should be explored and tested with diagnostic plots. When trials have survival curves with long tails exhibiting few events then the robustness of extrapolations using information in such tails should be tested.
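The extrapolation sensitivity discussed above is easy to demonstrate with a minimal sketch, here using the Python lifelines package on simulated, heavily censored trial data (lifelines' parametric univariate fitters expose an AIC_ attribute; all numbers are hypothetical):

```python
import numpy as np
from lifelines import WeibullFitter, LogNormalFitter, LogLogisticFitter

# Simulated trial: true Weibull event times, follow-up truncated at 24 months.
rng = np.random.default_rng(7)
t_true = 18.0 * rng.weibull(1.2, 300)
observed = t_true <= 24.0
t = np.minimum(t_true, 24.0)

for f in (WeibullFitter(), LogNormalFitter(), LogLogisticFitter()):
    f.fit(t, event_observed=observed)
    s60 = f.survival_function_at_times(60.0).iloc[0]  # beyond follow-up
    print(f"{type(f).__name__:>18}: AIC = {f.AIC_:7.1f}, "
          f"S(60 months) = {s60:.3f}")
```

Similar AIC values can coexist with very different extrapolated S(60), which is exactly the warning the assessment above raises about relying on AIC alone.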
Classification of Company Performance using Weighted Probabilistic Neural Network
NASA Astrophysics Data System (ADS)
Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi
2018-05-01
Company performance can be classified by looking at financial status, whether it is in a good or bad state. The classification can be achieved by parametric or non-parametric approaches; the neural network is one of the non-parametric methods. One Artificial Neural Network (ANN) model is the Probabilistic Neural Network (PNN), which consists of four layers: the input layer, pattern layer, addition layer, and output layer. The distance function used is the Euclidean distance, and each class shares the same values as its weights. This study uses a PNN modified in the weighting process between the pattern layer and the addition layer by incorporating the Mahalanobis distance. The model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling the company's performance with the WPNN model achieves a very high accuracy, reaching 100%.
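A minimal sketch of the WPNN idea as read from the abstract, with class-wise Mahalanobis distances replacing the Euclidean distance in the pattern layer (an illustrative reconstruction, not the authors' code):

```python
import numpy as np

def wpnn_predict(X_train, y_train, X_test, sigma=1.0):
    """PNN scoring with a Mahalanobis-weighted pattern layer."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            VI = np.linalg.pinv(np.cov(Xc, rowvar=False))  # class covariance
            diff = Xc - x
            d2 = np.einsum("ij,jk,ik->i", diff, VI, diff)  # Mahalanobis^2
            scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))  # addition
        preds.append(classes[np.argmax(scores)])           # output layer
    return np.array(preds)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
y = np.repeat([0, 1], 30)   # e.g. 0 = bad, 1 = good financial state
print(wpnn_predict(X, y, np.array([[0.1, 0.0], [2.1, 1.9]])))  # -> [0 1]
```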
Optimal Energy Management for Microgrids
NASA Astrophysics Data System (ADS)
Zhao, Zheng
Microgrid is a recent novel concept in part of the development of smart grid. A microgrid is a low voltage and small scale network containing both distributed energy resources (DERs) and load demands. Clean energy is encouraged to be used in a microgrid for economic and sustainable reasons. A microgrid can have two operational modes, the stand-alone mode and grid-connected mode. In this research, a day-ahead optimal energy management for a microgrid under both operational modes is studied. The objective of the optimization model is to minimize fuel cost, improve energy utilization efficiency and reduce gas emissions by scheduling generations of DERs in each hour on the next day. Considering the dynamic performance of battery as Energy Storage System (ESS), the model is featured as a multi-objectives and multi-parametric programming constrained by dynamic programming, which is proposed to be solved by using the Advanced Dynamic Programming (ADP) method. Then, factors influencing the battery life are studied and included in the model in order to obtain an optimal usage pattern of battery and reduce the correlated cost. Moreover, since wind and solar generation is a stochastic process affected by weather changes, the proposed optimization model is performed hourly to track the weather changes. Simulation results are compared with the day-ahead energy management model. At last, conclusions are presented and future research in microgrid energy management is discussed.
NASA Astrophysics Data System (ADS)
Wu, Bifen; Zhao, Xinyu
2018-06-01
The effects of radiation of water mists in a fire-inspired environment are numerically investigated for different complexities of radiative media in a three-dimensional cubic enclosure. A Monte Carlo ray tracing (MCRT) method is employed to solve the radiative transfer equation (RTE). The anisotropic scattering behavior of water mists is modeled by a combination of the Mie theory and the Henyey-Greenstein relation. A tabulation method considering the size and wavelength dependencies is established for water droplets, to reduce the computational cost associated with the evaluation of the nongray spectral properties of water mists. Validation and verification of the coupled MCRT solver are performed using a one-dimensional slab with gray gas, in comparison with the analytical solutions. Parametric studies are then performed using a three-dimensional cubic box to examine radiation in two monodispersed and one polydispersed water mist systems. The tabulation method reduces the computational cost by a factor of one hundred. Results obtained without any scattering model conform better with results from the anisotropic model than with those from the isotropic scattering model when a highly directional emissive source is applied. For isotropic emissive sources, isotropic and anisotropic scattering models predict comparable results. The addition of different volume fractions of soot shows that soot may have a negative impact on the effectiveness of water mists in absorbing radiation when its volume fraction exceeds a certain threshold.
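The Henyey-Greenstein phase function has a closed-form inverse CDF, which is what an MCRT solver samples at each droplet scattering event; a minimal sketch of the standard formula (not the authors' solver):

```python
import numpy as np

def sample_hg_cos_theta(g, u):
    """Sample scattering-angle cosines from the Henyey-Greenstein function.

    g: asymmetry parameter in (-1, 1); u: uniform(0, 1) deviates.
    """
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0                      # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

rng = np.random.default_rng(3)
mu = sample_hg_cos_theta(0.85, rng.uniform(size=100_000))
print("mean cosine:", mu.mean())  # ~0.85: E[cos(theta)] equals g
```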
Parameterization models for pesticide exposure via crop consumption.
Fantke, Peter; Wieland, Peter; Juraske, Ronnie; Shaddick, Gavin; Itoiz, Eva Sevigné; Friedrich, Rainer; Jolliet, Olivier
2012-12-04
An approach for estimating human exposure to pesticides via consumption of six important food crops is presented that can be used to extend multimedia models applied in health risk and life cycle impact assessment. We first assessed the variation of model output (pesticide residues per kg applied) as a function of model input variables (substance, crop, and environmental properties), including their possible correlations, using matrix algebra. We identified five key parameters responsible for between 80% and 93% of the variation in pesticide residues, namely the time between substance application and crop harvest, degradation half-lives in crops and on crop surfaces, overall residence times in soil, and substance molecular weight. Partition coefficients also play an important role for fruit trees and tomato (Kow), potato (Koc), and lettuce (Kaw, Kow). Focusing on these parameters, we develop crop-specific models by parametrizing a complex fate and exposure assessment framework. The parametric models thereby reflect the framework's physical and chemical mechanisms and predict pesticide residues in harvest using linear combinations of crop, crop surface, and soil compartments. Parametric model results correspond well with results from the complex framework for 1540 substance-crop combinations, with total deviations between a factor of 4 (potato) and a factor of 66 (lettuce). Predicted residues also correspond well with experimental data previously used to evaluate the complex framework. Pesticide mass in harvest can finally be combined with reduction factors accounting for food processing to estimate human exposure from crop consumption. All parametric models can be easily implemented into existing assessment frameworks.
Direct health care costs associated with obesity in Chinese population in 2011.
Shi, Jingcheng; Wang, Yao; Cheng, Wenwei; Shao, Hui; Shi, Lizheng
2017-03-01
Overweight and obesity are established major risk factors for type 2 diabetes and major public health concerns in China. This study aims to assess the economic burden associated with overweight and obesity in the Chinese population aged 45 and older. The Chinese Health and Retirement Longitudinal Study (CHARLS) in 2011 included 13,323 respondents aged 45 and older living in 450 rural and urban communities across China. Demographic information, height, weight, and direct health care costs for outpatient visits, hospitalization, and medications for self-care were extracted from the CHARLS database. Health care costs were calculated in 2011 Chinese currency. The body mass index (BMI) was used to categorize underweight, normal-weight, overweight, and obese groups. Descriptive analyses and a two-part regression model were performed to investigate the association of BMI with health care costs. To account for the non-normality of the cost data, we applied a non-parametric bootstrap approach using the percentile method to estimate 95% confidence intervals (95% CIs). The overweight and obese groups had significantly higher total direct health care costs (RMB 2246.4 and RMB 2050.7, respectively) compared with the normal-weight group (RMB 1886.0). When controlling for demographic characteristics, overweight and obese adults were 15.0% and 35.9% more likely to incur total health care costs, and obese individuals had 14.2% higher total health care costs compared with the normal-weight group. Compared with their normal-weight counterparts, annual total direct health care costs were significantly higher among obese adults in China. Copyright © 2016 Elsevier Inc. All rights reserved.
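The two-part structure used for such zero-heavy, right-skewed cost data separates the probability of any use from the cost level among users; a hedged sketch with simulated data (hypothetical names and effect sizes, statsmodels assumed):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated costs: many zero users plus skewed positive costs by BMI group.
rng = np.random.default_rng(11)
n = 1000
bmi_grp = rng.integers(0, 3, n)               # 0 normal, 1 overweight, 2 obese
p_any = 1.0 / (1.0 + np.exp(-(-0.5 + 0.3 * bmi_grp)))
any_use = rng.uniform(size=n) < p_any
cost = np.where(any_use, rng.gamma(1.5, 1000.0 * np.exp(0.1 * bmi_grp)), 0.0)
df = pd.DataFrame({"cost": cost, "any": any_use.astype(int), "bmi_grp": bmi_grp})

# Part 1: probability of incurring any cost (logistic regression).
part1 = smf.logit("any ~ C(bmi_grp)", data=df).fit(disp=0)
# Part 2: mean cost among users (gamma GLM, log link, handles skewness).
part2 = smf.glm("cost ~ C(bmi_grp)", data=df[df["any"] == 1],
                family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Expected cost = Pr(cost > 0) * E[cost | cost > 0]; CIs would come from
# a percentile bootstrap over the whole two-part pipeline.
newd = pd.DataFrame({"bmi_grp": [0, 1, 2]})
print(part1.predict(newd).values * part2.predict(newd).values)
```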
Zhu, Xiang; Zhang, Dianwen
2013-01-01
We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on graphics processing unit for high performance scalable parallel model fitting processing. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for the applications in superresolution localization microscopy and fluorescence lifetime imaging microscopy. PMID:24130785
Miller, S W; Dennis, R G
1996-12-01
A parametric model was developed to describe the relationship between muscle moment arm and joint angle. The model was applied to the dorsiflexor muscle group in mice, for which the moment arm was determined as a function of ankle angle. The moment arm was calculated from the torque measured about the ankle upon application of a known force along the line of action of the dorsiflexor muscle group. The dependence of the dorsiflexor moment arm on ankle angle was modeled as r = R sin(a + delta), where r is the moment arm calculated from the measured torque and a is the joint angle. A least-squares curve fit yielded values for R, the maximum moment arm, and delta, the angle at which the maximum moment arm occurs as offset from 90 degrees. Parametric models were developed for two strains of mice, and no differences were found between the moment arms determined for each strain. Values for the maximum moment arm, R, for the two different strains were 0.99 and 1.14 mm, in agreement with the limited data available from the literature. While in some cases moment arm data may be better fitted by a polynomial, use of the parametric model provides a moment arm relationship with meaningful anatomical constants, allowing for the direct comparison of moment arm characteristics between different strains and species.
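The fit described can be reproduced in outline with a standard nonlinear least-squares routine (the measurements below are invented for illustration, not the mouse data):

```python
import numpy as np
from scipy.optimize import curve_fit

def moment_arm(a, R, delta):
    """r = R * sin(a + delta); angles in radians, r and R in mm."""
    return R * np.sin(a + delta)

# Hypothetical dorsiflexor moment arms over a range of ankle angles.
a_obs = np.deg2rad([60.0, 75.0, 90.0, 105.0, 120.0, 135.0])
r_obs = np.array([0.86, 0.95, 1.00, 0.98, 0.90, 0.74])

(R, delta), _ = curve_fit(moment_arm, a_obs, r_obs, p0=(1.0, 0.0))
print(f"R = {R:.2f} mm, delta = {np.rad2deg(delta):.1f} deg")
```

R and delta then carry the anatomical meaning described above: the maximum moment arm and its angular offset from 90 degrees.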
Assessing the agricultural costs of climate change: Combining results from crop and economic models
NASA Astrophysics Data System (ADS)
Howitt, R. E.
2016-12-01
Any perturbation to a resource system used by humans elicits both technical and behavioral changes. For agricultural production, economic criteria and their associated models are usually good predictors of human behavior. Estimation of the agricultural costs of climate change requires careful downscaling of global climate models to the level of agricultural regions. Plant growth models for the dominant crops are required to accurately show the full range of trade-offs and adaptation mechanisms needed to minimize the cost of climate change. Faced with shifts in the fundamental resource base of agriculture, human behavior can either exacerbate or offset the impact of climate change on agriculture. In addition, agriculture can be an important source of increased carbon sequestration; however, the effectiveness and timing of this sequestration depend on agricultural practices and farmer behavior. Plant growth models and economic models have been shown to interact in two broad fashions. First, there is the direct embedding of a parametric representation of plant growth simulations in the economic model's production function. A second and more general approach is to have plant growth and crop process models interact with economic models as they are simulated. The development of more general wrapper programs that transfer information between models rapidly and efficiently will encourage this approach, although it does introduce complications in matching disparate scales, in both time and space, between models. Another characteristic behavioral response of agricultural production is the distinction between the intensive margin, which considers the quantity of a resource (for example, fertilizer) used for a given crop, and the extensive margin of adjustment, which measures how farmers adjust their crop proportions in response to climate change. Ideally, economic models will measure the response on both of these margins of adjustment simultaneously. The paper will briefly discuss some examples of the direct embedding of results from plant growth models in economic models.
NASA Technical Reports Server (NTRS)
Thomas, J. M.; Hawk, J. D.
1975-01-01
A generalized concept for cost-effective structural design is introduced. It is assumed that decisions affecting the cost effectiveness of aerospace structures fall into three basic categories: design, verification, and operation. Within these basic categories, certain decisions concerning items such as design configuration, safety factors, testing methods, and operational constraints are to be made. All or some of the variables affecting these decisions may be treated probabilistically. Bayesian statistical decision theory is used as the tool for determining the cost optimum decisions. A special case of the general problem is derived herein, and some very useful parametric curves are developed and applied to several sample structures.
Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data
George, Brandon; Aban, Inmaculada
2014-01-01
Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy change over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure, as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to be able to inflate the Type I error or give an overly conservative test size, which corresponded to decreased power. An example with clinical data illustrates how the covariance structure selection can be done in practice, and how the choice of covariance structure can change inferences about fixed effects. PMID:25293361
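A separable structure is simply a Kronecker product of a temporal and a spatial correlation matrix; a minimal sketch with an exponential spatial and AR(1) temporal component (hypothetical parameter values):

```python
import numpy as np

def exp_spatial_cov(coords, sigma2=1.0, phi=2.0):
    """Exponential spatial covariance: C(d) = sigma2 * exp(-d / phi)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / phi)

def ar1_temporal_corr(n_t, rho=0.7):
    """AR(1) temporal correlation: C(|s - t|) = rho ** |s - t|."""
    lags = np.abs(np.subtract.outer(np.arange(n_t), np.arange(n_t)))
    return rho ** lags

# 4 spatial locations observed at 3 visits, ordered time-major
# (all locations at visit 1, then visit 2, ...): Cov = T (x) S.
coords = np.random.default_rng(5).uniform(0, 10, size=(4, 2))
V = np.kron(ar1_temporal_corr(3), exp_spatial_cov(coords))
print(V.shape)  # (12, 12)
```

This V would then enter the linear model as the residual covariance, with candidate spatial and temporal families swapped in and compared by information criteria.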
Coal-Fired Boilers at Navy Bases, Navy Energy Guidance Study, Phase II and III.
1979-05-01
…several sizes were performed. Central plants containing four equal-sized boilers and central flue gas desulfurization facilities were shown to be less… Conceptual design and parametric cost studies of steam and power generation systems using coal-fired stoker boilers and stack gas scrubbers in…
System Advisor Model, SAM 2011.12.2: General Description
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilman, P.; Dobos, A.
2012-02-01
This document describes the capabilities of the U.S. Department of Energy and National Renewable Energy Laboratory's System Advisor Model (SAM), Version 2011.12.2, released on December 2, 2011. SAM is software that models the cost and performance of renewable energy systems. Project developers, policy makers, equipment manufacturers, and researchers use graphs and tables of SAM results in the process of evaluating financial, technology, and incentive options for renewable energy projects. SAM simulates the performance of solar, wind, geothermal, biomass, and conventional power systems. The financial model can represent financing structures for projects that either buy and sell electricity at retail rates (residential and commercial) or sell electricity at a price determined in a power purchase agreement (utility). Advanced analysis options facilitate parametric, sensitivity, and statistical analyses, and allow for interfacing SAM with Microsoft Excel or with other computer programs. SAM is available as a free download at http://sam.nrel.gov. Technical support and more information about the software are available on the website.
Vehicle Sketch Pad: a Parametric Geometry Modeler for Conceptual Aircraft Design
NASA Technical Reports Server (NTRS)
Hahn, Andrew S.
2010-01-01
The conceptual aircraft designer is faced with a dilemma, how to strike the best balance between productivity and fidelity? Historically, handbook methods have required only the coarsest of geometric parameterizations in order to perform analysis. Increasingly, there has been a drive to upgrade analysis methods, but these require considerably more precise and detailed geometry. Attempts have been made to use computer-aided design packages to fill this void, but their cost and steep learning curve have made them unwieldy at best. Vehicle Sketch Pad (VSP) has been developed over several years to better fill this void. While no substitute for the full feature set of computer-aided design packages, VSP allows even novices to quickly become proficient in defining three-dimensional, watertight aircraft geometries that are adequate for producing multi-disciplinary meta-models for higher order analysis methods, wind tunnel and display models, as well as a starting point for animation models. This paper will give an overview of the development and future course of VSP.
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as required in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying the data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data, and derive the length necessary for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
Parametric FEM for geometric biomembranes
NASA Astrophysics Data System (ADS)
Bonito, Andrea; Nochetto, Ricardo H.; Sebastian Pauletti, M.
2010-05-01
We consider geometric biomembranes governed by an L2-gradient flow for bending energy subject to area and volume constraints (Helfrich model). We give a concise derivation of a novel vector formulation, based on shape differential calculus, and corresponding discretization via parametric FEM using quadratic isoparametric elements and a semi-implicit Euler method. We document the performance of the new parametric FEM with a number of simulations leading to dumbbell, red blood cell and toroidal equilibrium shapes while exhibiting large deformations.
Failure Time Distributions: Estimates and Asymptotic Results.
1980-01-01
…of the models. A parametric family of distributions is proposed for approximating life distributions whose hazard rate is bath-tub shaped… always justified. But, because of this generality, the possible limit laws for the maximum form a very large family…
ERIC Educational Resources Information Center
Reise, Steven P.; Meijer, Rob R.; Ainsworth, Andrew T.; Morales, Leo S.; Hays, Ron D.
2006-01-01
Group-level parametric and non-parametric item response theory models were applied to the Consumer Assessment of Healthcare Providers and Systems (CAHPS[R]) 2.0 core items in a sample of 35,572 Medicaid recipients nested within 131 health plans. Results indicated that CAHPS responses are dominated by within health plan variation, and only weakly…
ERIC Educational Resources Information Center
Rojano, Teresa; García-Campos, Montserrat
2017-01-01
This article reports the outcomes of a study that seeks to investigate the role of feedback, by way of an intelligent support system in natural language, in parametrized modelling activities carried out by a group of tertiary education students. With such a system, it is possible to simultaneously display on a computer screen a dialogue window and…
Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A
2017-12-01
In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations, extending the scope of 3D vascular modeling to a large portion of the cerebral arterial tree. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using the receiver operating characteristic curve. Geometric accuracy evaluation showed agreement between the constructed mesh and the raw MRA data sets, with an area under the curve of 0.87. Parametric meshing yielded, on average, 36.6% and 21.7% improvements in orthogonal and equiangular skew quality over unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree down to the level of the pial vessels. This study is the first step towards fast large-scale subject-specific hemodynamic analysis for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wave Attenuation and Gas Exchange Velocity in Marginal Sea Ice Zone
NASA Astrophysics Data System (ADS)
Bigdeli, A.; Hara, T.; Loose, B.; Nguyen, A. T.
2018-03-01
The gas transfer velocity in marginal sea ice zones exerts a strong control on the input of anthropogenic gases into the ocean interior. In this study, a sea state-dependent gas exchange parametric model is developed based on the turbulent kinetic energy dissipation rate. The model is tuned to match the conventional gas exchange parametrization in fetch-unlimited, fully developed seas. Next, fetch limitation is introduced in the model and results are compared to fetch limited experiments in lakes, showing that the model captures the effects of finite fetch on gas exchange with good fidelity. Having validated the results in fetch limited waters such as lakes, the model is next applied in sea ice zones using an empirical relation between the sea ice cover and the effective fetch, while accounting for the sea ice motion effect that is unique to sea ice zones. The model results compare favorably with the available field measurements. Applying this parametric model to a regional Arctic numerical model, it is shown that, under the present conditions, gas flux into the Arctic Ocean may be overestimated by 10% if a conventional parameterization is used.
Analysis of survival in breast cancer patients by using different parametric models
NASA Astrophysics Data System (ADS)
Enera Amran, Syahila; Asrul Afendi Abdullah, M.; Kek, Sie Long; Afiqah Muhamad Jamil, Siti
2017-09-01
In biomedical applications and clinical trials, right censoring often arises when studying time-to-event data: some individuals are still alive at the end of the study or are lost to follow-up at a certain time. It is important to handle censored data properly in order to prevent bias in the analysis. Therefore, this study was carried out to analyze right-censored data with three different parametric models: the exponential, Weibull and log-logistic models. Data on breast cancer patients from Hospital Sultan Ismail, Johor Bahru, from 30 December 2008 until 15 February 2017 were used to illustrate right censoring. The variables included are the survival time t of each breast cancer patient, the patient's age X1 and the treatment given X2. In order to determine the best parametric model for analysing the survival of breast cancer patients, the performance of each model was compared based on the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the log-likelihood value using the statistical software R. When analysing the breast cancer data, all three distributions appeared consistent with the data, with the line graph of the cumulative hazard function resembling a straight line through the origin. As a result, the log-logistic model was the best-fitting parametric model compared with the exponential and Weibull models, since it had the smallest AIC and BIC values and the largest log-likelihood.
Bredbenner, Todd L.; Eliason, Travis D.; Francis, W. Loren; McFarland, John M.; Merkle, Andrew C.; Nicolella, Daniel P.
2014-01-01
Cervical spinal injuries are a significant concern in all trauma injuries. Recent military conflicts have demonstrated the substantial risk of spinal injury for the modern warfighter. Finite element models used to investigate injury mechanisms often fail to examine the effects of variation in geometry or material properties on mechanical behavior. The goals of this study were to model geometric variation for a set of cervical spines, to extend this model to a parametric finite element model, and, as a first step, to validate the parametric model against experimental data for low-loading conditions. Individual finite element models were created using cervical spine (C3–T1) computed tomography data for five male cadavers. Statistical shape modeling (SSM) was used to generate a parametric finite element model incorporating variability of spine geometry, and soft-tissue material property variation was also included. The probabilistic loading response of the parametric model was determined under flexion-extension, axial rotation, and lateral bending and validated by comparison to experimental data. Based on qualitative and quantitative comparison of the experimental loading response and model simulations, we suggest that the model performs adequately under relatively low-level loading conditions in multiple loading directions. In conclusion, SSM methods coupled with finite element analyses within a probabilistic framework, along with the ability to statistically validate the overall model performance, provide innovative and important steps toward describing the differences in vertebral morphology, spinal curvature, and variation in material properties. We suggest that these methods, with additional investigation and validation under injurious loading conditions, will lead to understanding and mitigating the risks of injury in the spine and other musculoskeletal structures. PMID:25506051
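Statistical shape modeling of this kind usually comes down to a principal component analysis of corresponded geometries; a minimal sketch (random numbers stand in for the aligned cadaver landmark data):

```python
import numpy as np

# shapes: (n_subjects, n_landmarks * 3) matrix of corresponded, aligned
# landmark coordinates; random data stands in for the C3-T1 geometries.
rng = np.random.default_rng(2)
shapes = rng.normal(size=(5, 300))

mean_shape = shapes.mean(axis=0)
_, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
modes = Vt                                  # principal shape modes
stdevs = s / np.sqrt(len(shapes) - 1)       # mode standard deviations

# A new parametric instance: mean plus modes weighted in stdev units.
b = np.array([1.0, -0.5, 0.0, 0.0])
new_shape = mean_shape + (b * stdevs[:4]) @ modes[:4]
```

Sampling the mode weights b (and, analogously, the material properties) is what turns a single-subject finite element model into a probabilistic, population-level one.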
Binquet, C; Abrahamowicz, M; Mahboubi, A; Jooste, V; Faivre, J; Bonithon-Kopp, C; Quantin, C
2008-12-30
Flexible survival models, which avoid assumptions about hazards proportionality (PH) or linearity of continuous covariates effects, bring the issues of model selection to a new level of complexity. Each 'candidate covariate' requires inter-dependent decisions regarding (i) its inclusion in the model, and representation of its effects on the log hazard as (ii) either constant over time or time-dependent (TD) and, for continuous covariates, (iii) either loglinear or non-loglinear (NL). Moreover, 'optimal' decisions for one covariate depend on the decisions regarding others. Thus, some efficient model-building strategy is necessary.We carried out an empirical study of the impact of the model selection strategy on the estimates obtained in flexible multivariable survival analyses of prognostic factors for mortality in 273 gastric cancer patients. We used 10 different strategies to select alternative multivariable parametric as well as spline-based models, allowing flexible modeling of non-parametric (TD and/or NL) effects. We employed 5-fold cross-validation to compare the predictive ability of alternative models.All flexible models indicated significant non-linearity and changes over time in the effect of age at diagnosis. Conventional 'parametric' models suggested the lack of period effect, whereas more flexible strategies indicated a significant NL effect. Cross-validation confirmed that flexible models predicted better mortality. The resulting differences in the 'final model' selected by various strategies had also impact on the risk prediction for individual subjects.Overall, our analyses underline (a) the importance of accounting for significant non-parametric effects of covariates and (b) the need for developing accurate model selection strategies for flexible survival analyses. Copyright 2008 John Wiley & Sons, Ltd.
Long-range parametric amplification of THz wave with absorption loss exceeding parametric gain.
Wang, Tsong-Dong; Huang, Yen-Chieh; Chuang, Ming-Yun; Lin, Yen-Hou; Lee, Ching-Han; Lin, Yen-Yin; Lin, Fan-Yi; Kitaeva, Galiya Kh
2013-01-28
Optical parametric mixing is a popular scheme for generating an idler wave at THz frequencies, although the THz wave is often strongly absorbed in the nonlinear optical material. It is widely suggested that the useful material length for co-directional parametric mixing with strong THz-wave absorption is comparable to the THz-wave absorption length in the material. Here we show that, even in the limit of the absorption loss exceeding the parametric gain, the THz idler wave can grow monotonically under optical parametric amplification over a much longer distance in a nonlinear optical material, until pump depletion. The coherent production of the non-absorbed signal wave can assist the growth of the strongly absorbed idler wave. We also show that, for the case of an equal input pump and signal in difference frequency generation, the quick saturation of the THz idler wave predicted by a much simplified and yet popular plane-wave model fails when fast diffraction of the THz wave from the co-propagating optical mixing waves is considered.
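The monotonic growth claimed here can be checked with a toy undepleted-pump integration in which the idler (THz) loss rate deliberately exceeds the parametric gain (coefficients are hypothetical; the phase-matched plane-wave equations below are a textbook simplification, not the paper's full model):

```python
import numpy as np

# dAs/dz = 1j*kappa*conj(Ai);  dAi/dz = -alpha/2*Ai + 1j*kappa*conj(As)
kappa, alpha = 0.5, 3.0            # gain and power absorption, 1/cm
dz, z_end = 1e-3, 20.0
As, Ai, z = 1.0 + 0j, 0.0 + 0j, 0.0
while z < z_end:                   # simple explicit stepping
    dAs = 1j * kappa * np.conj(Ai)
    dAi = -0.5 * alpha * Ai + 1j * kappa * np.conj(As)
    As, Ai, z = As + dz * dAs, Ai + dz * dAi, z + dz

# Even with alpha/2 > kappa the idler power keeps rising: the asymptotic
# field growth rate is the positive root of lam**2 + (alpha/2)*lam - kappa**2.
lam = (-alpha / 2 + np.sqrt((alpha / 2) ** 2 + 4 * kappa ** 2)) / 2
print(f"|Ai|^2 at z_end: {abs(Ai)**2:.3g}, asymptotic rate lam = {lam:.3f}")
```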
Grating lobe elimination in steerable parametric loudspeaker.
Shi, Chuang; Gan, Woon-Seng
2011-02-01
In the past two decades, the majority of research on the parametric loudspeaker has concentrated on the nonlinear modeling of acoustic propagation and pre-processing techniques to reduce nonlinear distortion in sound reproduction. There are, however, very few studies on directivity control of the parametric loudspeaker. In this paper, we propose an equivalent circular Gaussian source array that approximates the directivity characteristics of the linear ultrasonic transducer array. By using this approximation, the directivity of the sound beam from the parametric loudspeaker can be predicted by the product directivity principle. New theoretical results, which are verified through measurements, are presented to show the effectiveness of the delay-and-sum beamsteering structure for the parametric loudspeaker. Unlike the conventional loudspeaker array, where the spacing between array elements must be less than half the wavelength to avoid spatial aliasing, the parametric loudspeaker can take advantage of grating lobe elimination to extend the spacing of ultrasonic transducer array to more than 1.5 wavelengths in a typical application.
Parametric Studies for Scenario Earthquakes: Site Effects and Differential Motion
NASA Astrophysics Data System (ADS)
Panza, G. F.; Panza, G. F.; Romanelli, F.
2001-12-01
In the presence of strong lateral heterogeneities, the generation of local surface waves and local resonance can give rise to a complicated pattern in the spatial ground-shaking scenario. For any object of the built environment with dimensions greater than the characteristic length of the ground motion, different parts of its foundation can experience severe non-synchronous seismic input. In order to perform an accurate estimate of site effects, and of differential motion, in realistic geometries, it is necessary to make a parametric study that takes into account the complex combination of source and propagation parameters. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models, allows the construction of damage scenarios that are out of reach of stochastic models. Synthetic signals, to be used as seismic input in a subsequent engineering analysis, e.g. for the design of earthquake-resistant structures or for the estimation of differential motion, can be produced at a very low cost/benefit ratio. We illustrate the work done in the framework of a large international cooperation following the guidelines of the UNESCO IUGS IGCP Project 414 "Realistic Modeling of Seismic Input for Megacities and Large Urban Areas" and show the very recent numerical experiments carried out within the EC project "Advanced methods for assessing the seismic vulnerability of existing motorway bridges" (VAB) to assess the importance of non-synchronous seismic excitation of long structures. http://www.ictp.trieste.it/www_users/sand/projects.html
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Hirschi, M.; Spirig, C.
2014-12-01
To quantify impact of the climate change on a specific pest (or any weather-dependent process) in a specific site, we may use a site-calibrated pest (or other) model and compare its outputs obtained with site-specific weather data representing present vs. perturbed climates. The input weather data may be produced by the stochastic weather generator. Apart from the quality of the pest model, the reliability of the results obtained in such experiment depend on an ability of the generator to represent the statistical structure of the real world weather series, and on the sensitivity of the pest model to possible imperfections of the generator. This contribution deals with the multivariate HOWGH weather generator, which is based on a combination of parametric and non-parametric statistical methods. Here, HOWGH is used to generate synthetic hourly series of three weather variables (solar radiation, temperature and precipitation) required by a dynamic pest model SOPRA to simulate the development of codling moth. The contribution presents results of the direct and indirect validation of HOWGH. In the direct validation, the synthetic series generated by HOWGH (various settings of its underlying model are assumed) are validated in terms of multiple climatic characteristics, focusing on the subdaily wet/dry and hot/cold spells. In the indirect validation, we assess the generator in terms of characteristics derived from the outputs of SOPRA model fed by the observed vs. synthetic series. The weather generator may be used to produce weather series representing present and future climates. In the latter case, the parameters of the generator may be modified by the climate change scenarios based on Global or Regional Climate Models. To demonstrate this feature, the results of codling moth simulations for future climate will be shown. Acknowledgements: The weather generator is developed and validated within the frame of projects WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR), and VALUE (COST ES 1102 action).
Lindholm, C; Gustavsson, A; Jönsson, L; Wimo, A
2013-05-01
Because the prevalence of many brain disorders rises with age, and brain disorders are costly, the economic burden of brain disorders will increase markedly during the next decades. The purpose of this study is to analyze how the costs to society vary with different levels of functioning and with the presence of a brain disorder. Resource utilization and costs from a societal viewpoint were analyzed versus cognition, activities of daily living (ADL), instrumental activities of daily living (IADL), brain disorder diagnosis and age in a population-based cohort of people aged 65 years and older in Nordanstig in Northern Sweden. Descriptive statistics, non-parametric bootstrapping and a generalized linear model (GLM) were used for the statistical analyses. Most people were zero users of care. Societal costs of dementia were by far the highest, ranging from SEK 262,000 (mild) to SEK 519,000 per year (severe dementia). In univariate analysis, all measures of functioning were significantly related to costs. When controlling for ADL and IADL in the multivariate GLM, cognition did not have a statistically significant effect on total cost. The presence of a brain disorder did not impact total cost when controlling for function. The greatest shift in costs was seen when comparing no dependency in ADL and dependency in one basic ADL function. It is the level of functioning, rather than the presence of a brain disorder diagnosis, which predicts costs. ADLs are better explanatory variables of costs than Mini mental state examination. Most people in a population-based cohort are zero users of care. Copyright © 2012 John Wiley & Sons, Ltd.
Ieva, Francesca; Jackson, Christopher H; Sharples, Linda D
2017-06-01
In chronic diseases like heart failure (HF), the disease course and associated clinical event histories for the patient population vary widely. To improve understanding of the prognosis of patients and enable health care providers to assess and manage resources, we wish to jointly model disease progression, mortality and their relation with patient characteristics. We show how episodes of hospitalisation for disease-related events, obtained from administrative data, can be used as a surrogate for disease status. We propose flexible multi-state models for serial hospital admissions and death in HF patients, that are able to accommodate important features of disease progression, such as multiple ordered events and competing risks. Fully parametric and semi-parametric semi-Markov models are implemented using freely available software in R. The models were applied to a dataset from the administrative data bank of the Lombardia region in Northern Italy, which included 15,298 patients who had a first hospitalisation ending in 2006 and 4 years of follow-up thereafter. This provided estimates of the associations of age and gender with rates of hospital admission and length of stay in hospital, and estimates of the expected total time spent in hospital over five years. For example, older patients and men were readmitted more frequently, though the total time in hospital was roughly constant with age. We also discuss the relative merits of parametric and semi-parametric multi-state models, and model assessment and comparison.
Model selection criterion in survival analysis
NASA Astrophysics Data System (ADS)
Karabey, Uǧur; Tutkun, Nihal Ata
2017-07-01
Survival analysis deals with the time until occurrence of an event of interest, such as death, recurrence of an illness, the failure of equipment, or divorce. There are various survival models, with semi-parametric or parametric approaches, used in the medical, natural and social sciences. The decision on the most appropriate model for the data is an important part of the analysis. In the literature, the Akaike information criterion or the Bayesian information criterion is used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
Scalability of the muscular action in a parametric 3D model of the index finger.
Sancho-Bru, Joaquín L; Vergara, Margarita; Rodríguez-Cervantes, Pablo-Jesús; Giurintano, David J; Pérez-González, Antonio
2008-01-01
A method for scaling the muscle action is proposed and used to achieve a 3D inverse dynamic model of the human finger with all its components scalable. This method is based on scaling the physiological cross-sectional area (PCSA) in a Hill muscle model. Different anthropometric parameters and maximal grip force data have been measured, and their correlations have been analyzed and used for scaling the PCSA of each muscle. A linear relationship between the normalized PCSA and the product of the length and breadth of the hand has finally been used for scaling, with a slope of 0.01315 cm⁻², the length and breadth of the hand being expressed in centimeters. The parametric muscle model has been included in a parametric finger model previously developed by the authors, and it has been validated by reproducing the results of an experiment in which subjects from different population groups exerted maximal voluntary forces with their index finger in a controlled posture.
NASA Astrophysics Data System (ADS)
Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.
2017-11-01
Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of temporal evolution of two key risk indices, volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the correspondent remediation costs.
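The second, low-cost model the authors describe is in the spirit of a polynomial surrogate; a minimal sketch fits a cheap response surface to a handful of full-model runs and then reuses it for large Monte Carlo risk estimates (the "expensive model" below is a stand-in function, not their physics-based solver):

```python
import numpy as np

def expensive_model(theta):
    """Stand-in for a physics-based contamination run (hypothetical)."""
    k, phi = theta                        # e.g. log-conductivity, porosity
    return 50.0 * np.exp(0.8 * k) / phi   # contaminated soil volume

rng = np.random.default_rng(4)
lo, hi = np.array([-1.0, 0.2]), np.array([1.0, 0.5])
train = rng.uniform(lo, hi, size=(40, 2))             # collocation points
y = np.array([expensive_model(t) for t in train])

def design(T):                                        # quadratic surface
    k, phi = T[:, 0], T[:, 1]
    return np.column_stack([np.ones(len(T)), k, phi, k * phi, k**2, phi**2])

coef, *_ = np.linalg.lstsq(design(train), y, rcond=None)

# Surrogate Monte Carlo: a million evaluations at negligible cost.
samples = rng.uniform(lo, hi, size=(1_000_000, 2))
vol = design(samples) @ coef
print("P(contaminated volume > 150):", np.mean(vol > 150.0))
```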
Iodice, Pierpaolo; Ferrante, Claudio; Brunetti, Luigi; Cabib, Simona; Protasi, Feliciano; Walton, Mark E; Pezzulo, Giovanni
2017-04-03
During decisions, animals balance goal achievement and effort management. Although physical exercise and fatigue significantly affect the level of effort that an animal exerts to obtain a reward, their role in effort-based choice and the underlying neurochemistry are incompletely known. In particular, it is unclear whether fatigue influences decision (cost-benefit) strategies flexibly or only post-decision action execution and learning. To answer this question, we trained mice on a T-maze task in which they chose between a high-cost, high-reward arm (HR), which included a barrier, and a low-cost, low-reward arm (LR), with no barrier. The animals were parametrically fatigued immediately before the behavioural tasks by running on a treadmill. We report a sharp choice reversal, from the HR to the LR arm, at 80% of peak workload (PW), which was temporary and specific, as the mice returned to choosing the HR arm when subsequently tested at 60% PW or in a two-barrier task. These rapid reversals are signatures of flexible choice. We also observed increased subcortical dopamine levels in fatigued mice: a marker of individual bias to use model-based control in humans. Our results indicate that fatigue levels can be incorporated in flexible cost-benefit computations that improve foraging efficiency.
NASA Technical Reports Server (NTRS)
1986-01-01
Over the past two decades, fiber optics has emerged as a highly practical and cost-efficient communications technology. Its competitiveness vis-a-vis other transmission media, especially satellite, has become a critical question. This report studies the likely evolution and application of fiber optic networks in the United States to the end of the century. The outlook for the technology of fiber systems is assessed and forecast, scenarios of the evolution of fiber optic network development are constructed, and costs to provide service are determined and examined parametrically as a function of network size and traffic carried. Volume 1 consists of the Executive Summary. Volume 2 focuses on fiber optic technology and long distance fiber optic networks. Volume 3 develops a traffic and financial model of a nationwide long distance transmission network. Among the study's most important conclusions are: revenue requirements per circuit for LATA-to-LATA fiber optic links are less than one cent per call minute; multiplex equipment, which is likely to be required in any competing system, is the largest contributor to circuit costs; the potential capacity of fiber optic cable is very large and as yet undefined; and fiber optic transmission combined with other network optimization schemes can lead to even lower costs than those identified in this study.
NASA Astrophysics Data System (ADS)
Cohen-Tannoudji, G.; El Hassouni, A.; Mantrach, A.; Oudrhiri-Safiani, E. G.
1982-09-01
We propose a simple parametrization of the nucleon valence structure functions at all x, all p⊥ and all Q². We use the DTU parton model to fix the parametrization at a reference point (Q₀² = 3 GeV²) and we mimic the QCD evolution by replacing the dimensioned parameters of the DTU parton model by functions depending on Q². Excellent agreement is obtained with existing data.
Model-free estimation of the psychometric function
Żychaluk, Kamila; Foster, David H.
2009-01-01
A subject's response to the strength of a stimulus is described by the psychometric function, from which summary measures, such as a threshold or slope, may be derived. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not. In practice, the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data, except that the function is smooth. The critical role of the bandwidth is identified, and its optimum value estimated by a cross-validation procedure. As a demonstration, seven vision and hearing data sets were fitted by the local linear method and by several parametric models. The local linear method frequently performed better and never worse than the parametric ones. Supplemental materials for this article can be downloaded from app.psychonomic-journals.org/content/supplemental. PMID:19633355
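As an illustration of the nonparametric idea, the following Python sketch implements a simplified local linear (Gaussian-kernel) fit to proportion-correct data with a leave-one-out cross-validated bandwidth. It is a bare-bones stand-in for the authors' method, which additionally handles binomial weighting and link functions; the data are invented.

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)       # stimulus levels
p = np.array([0.05, 0.1, 0.3, 0.55, 0.8, 0.92, 0.98])  # proportion correct

def local_linear(x0, xs, ys, h):
    """Weighted linear fit around x0 with a Gaussian kernel; value at x0."""
    w = np.exp(-0.5 * ((xs - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(xs), xs - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ ys)
    return beta[0]

def loocv_score(h):
    """Leave-one-out cross-validation error for bandwidth h."""
    errs = [(p[i] - local_linear(x[i], np.delete(x, i), np.delete(p, i), h)) ** 2
            for i in range(len(x))]
    return np.mean(errs)

hs = np.linspace(0.5, 3.0, 26)
h_opt = hs[np.argmin([loocv_score(h) for h in hs])]

grid = np.linspace(1, 7, 61)
psi = np.array([local_linear(g, x, p, h_opt) for g in grid])
threshold = grid[np.argmin(np.abs(psi - 0.5))]          # e.g. the 50% point
print(f"optimal bandwidth {h_opt:.2f}, threshold approx. {threshold:.2f}")
```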
Parametric laws to model urban pollutant dispersion with a street network approach
NASA Astrophysics Data System (ADS)
Soulhac, L.; Salizzoni, P.; Mejean, P.; Perkins, R. J.
2013-03-01
This study discusses the reliability of the street network approach for pollutant dispersion modelling in urban areas. The approach is essentially based on a box model, with parametric relations that explicitly model the main phenomena contributing to street canyon ventilation: the mass exchanges between the street and the atmosphere, the pollutant advection along the street axes and the pollutant transfer at street intersections. In the first part of the paper the focus is on the development of a model for the bulk street/atmosphere transfer, which represents the main ventilation mechanism for wind directions that are almost perpendicular to the axis of the street. We then discuss the role of advective transfer along the street axis in street ventilation, depending on the length of the street and the direction of the external wind. Finally we evaluate the performance of a box model integrating parametric exchange laws for these transfer phenomena. To that end we compare the predictions of the model to wind tunnel experiments of pollutant dispersion within a street canyon placed in an idealised urban district.
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs, where the time to failure of a specific experimental unit may be right-, left-, interval- or partly interval-censored (PIC). In this paper, analysis was conducted using a parametric Cox model for PIC data. Moreover, several imputation techniques were used: midpoint, left and right point, random, mean, and median imputation. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with those from existing models, namely the Turnbull and Cox models, based on clinical trial data (breast cancer data), which demonstrated the validity of the proposed model. The results indicated that the parametric Cox model was superior in terms of estimation of survival functions, likelihood ratio tests, and their p-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median imputations gave better results with respect to the estimation of the survival function.
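A minimal sketch of three of the imputation schemes named above, followed by a simple exponential fit to the imputed exact times; the intervals are invented, and the real analysis would fit the full parametric Cox model instead.

```python
# Midpoint / right-point / random imputation for interval-censored
# observations, each followed by a plain exponential MLE.
import numpy as np

rng = np.random.default_rng(3)
# Each observation: (L, R), the event known only to lie in [L, R].
intervals = np.array([[2.0, 5.0], [1.0, 3.0], [4.0, 9.0],
                      [0.5, 2.5], [3.0, 7.0]])

def impute(ivals, how):
    L, R = ivals[:, 0], ivals[:, 1]
    if how == "midpoint":
        return (L + R) / 2
    if how == "right":
        return R
    if how == "random":
        return rng.uniform(L, R)
    raise ValueError(how)

for how in ("midpoint", "right", "random"):
    t = impute(intervals, how)
    rate = len(t) / t.sum()        # exponential MLE on imputed exact times
    print(f"{how:8s}: imputed mean {t.mean():.2f}, fitted rate {rate:.3f}")
```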
Cost-effectiveness of cerebrospinal biomarkers for the diagnosis of Alzheimer's disease.
Lee, Spencer A W; Sposato, Luciano A; Hachinski, Vladimir; Cipriano, Lauren E
2017-03-16
Accurate and timely diagnosis of Alzheimer's disease (AD) is important for prompt initiation of treatment in patients with AD and to avoid inappropriate treatment of patients with false-positive diagnoses. Using a Markov model, we estimated the lifetime costs and quality-adjusted life-years (QALYs) of cerebrospinal fluid biomarker analysis in a cohort of patients referred to a neurologist or memory clinic with suspected AD who remained without a definitive diagnosis of AD or another condition after neuroimaging. Parametric values were estimated from previous health economic models and the medical literature. Extensive deterministic and probabilistic sensitivity analyses were performed to evaluate the robustness of the results. At a 12.7% pretest probability of AD, biomarker analysis after normal neuroimaging findings has an incremental cost-effectiveness ratio (ICER) of $11,032 per QALY gained. Results were sensitive to the pretest prevalence of AD, and the ICER increased to over $50,000 per QALY when the prevalence of AD fell below 9%. Results were also sensitive to patient age (biomarkers are less cost-effective in older cohorts), treatment uptake and adherence, biomarker test characteristics, and the degree to which patients with suspected AD who do not have AD benefit from AD treatment when they are falsely diagnosed. The cost-effectiveness of biomarker analysis depends critically on the prevalence of AD in the tested population. In general practice, where the prevalence of AD after clinical assessment and normal neuroimaging findings may be low, biomarker analysis is unlikely to be cost-effective at a willingness-to-pay threshold of $50,000 per QALY gained. However, when at least 1 in 11 patients has AD after normal neuroimaging findings, biomarker analysis is likely cost-effective. Specifically, for patients referred to memory clinics with memory impairment who do not present neuroimaging evidence of medial temporal lobe atrophy, pretest prevalence of AD may exceed 15%. Biomarker analysis is a potentially cost-saving diagnostic method and should be considered for adoption in high-prevalence centers.
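For orientation, the mechanics of a cohort Markov cost-effectiveness model and the ICER calculation can be sketched as below. Every state, transition probability, cost and utility here is an illustrative placeholder, not a value from the study.

```python
# Cohort Markov model with two strategies and an ICER computation.
import numpy as np

P_usual = np.array([[0.85, 0.10, 0.05],     # states: mild, severe, dead
                    [0.00, 0.85, 0.15],     # annual transitions, no test
                    [0.00, 0.00, 1.00]])
P_biom  = np.array([[0.88, 0.08, 0.04],     # earlier treatment after
                    [0.00, 0.87, 0.13],     # biomarker diagnosis
                    [0.00, 0.00, 1.00]])

cost = np.array([8000.0, 25000.0, 0.0])     # annual cost per state ($)
util = np.array([0.75, 0.45, 0.0])          # annual utility per state (QALY)

def run(P, upfront, years=30, disc=0.03):
    x = np.array([1.0, 0.0, 0.0])           # cohort starts in "mild"
    c, q = upfront, 0.0
    for t in range(years):
        d = 1.0 / (1 + disc) ** t           # discount factor
        c += d * x @ cost
        q += d * x @ util
        x = x @ P                           # advance the cohort one cycle
    return c, q

c0, q0 = run(P_usual, upfront=0.0)
c1, q1 = run(P_biom, upfront=1500.0)        # illustrative biomarker test cost
print(f"ICER = ${(c1 - c0) / (q1 - q0):,.0f} per QALY gained")
```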
Huang, Min; Lou, Yanyan; Pellissier, James; Burke, Thomas; Liu, Frank Xiaoqing; Xu, Ruifeng; Velcheti, Vamsidhar
2017-08-01
Our objectives were to evaluate the cost effectiveness of pembrolizumab compared with standard-of-care (SoC) platinum-based chemotherapy as first-line treatment in patients with metastatic non-small-cell lung cancer (NSCLC) that expresses high levels of programmed death ligand-1 (PD-L1) [tumour proportion score (TPS) ≥50%], from a US third-party public healthcare payer perspective. We developed a partitioned-survival model with a cycle length of 1 week and a base-case time horizon of 20 years. Parametric models were fitted to Kaplan-Meier estimates of time on treatment, progression-free survival and overall survival from the KEYNOTE-024 randomized clinical trial (patients aged ≥18 years with stage IV NSCLC, TPS ≥50%, without epidermal growth factor receptor (EGFR)-activating mutations or anaplastic lymphoma kinase (ALK) translocations who received no prior systemic chemotherapy) and validated with long-term registry data. Quality-adjusted life-years (QALYs) were calculated based on EuroQoL-5 Dimensions (EQ-5D) utility data collected in the trial. Costs ($US, year 2016 values) for drug acquisition/administration, adverse events and clinical management were included. Costs and outcomes were discounted at 3% per year. A series of deterministic and probabilistic sensitivity analyses were performed to test the robustness of the results. In the base-case scenario, pembrolizumab resulted in an expected gain of 1.31 life-years (LYs) and 1.05 QALYs and an incremental cost of $US102,439 compared with SoC. The incremental cost per QALY gain was $US97,621/QALY and the incremental cost per LY gain was $US78,344/LY. Pembrolizumab is projected to be a cost-effective option compared with SoC platinum-based chemotherapy as first-line treatment in adults with metastatic NSCLC expressing high levels of PD-L1.
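A partitioned-survival model differs from a Markov model in that state occupancy is read directly off the OS and PFS curves. The sketch below shows the bookkeeping with invented Weibull parameters, utilities and weekly drug costs; none of these are the fitted KEYNOTE-024 values.

```python
import numpy as np

dt = 1.0 / 52.0                               # weekly cycles, in years
t = np.arange(0, 20, dt)                      # 20-year horizon

def weibull_surv(t, shape, scale):
    return np.exp(-(t / scale) ** shape)

# Per arm: (PFS shape/scale, OS shape/scale, weekly drug cost) -- invented.
arms = {"immunotherapy": ((1.1, 1.4), (0.9, 3.0), 3600.0),
        "chemotherapy":  ((1.2, 0.7), (1.0, 1.6), 1200.0)}

out = {}
for arm, (pfs_par, os_par, weekly_cost) in arms.items():
    pfs = weibull_surv(t, *pfs_par)
    os_ = np.maximum(weibull_surv(t, *os_par), pfs)   # enforce OS >= PFS
    prog = os_ - pfs                                  # progressed-but-alive
    disc = 1.03 ** (-t)                               # 3% annual discounting
    qaly = np.sum((0.78 * pfs + 0.69 * prog) * disc) * dt  # placeholder utilities
    cost = np.sum(weekly_cost * 52 * pfs * disc) * dt      # treat while prog-free
    out[arm] = (cost, qaly)

(c1, q1), (c0, q0) = out["immunotherapy"], out["chemotherapy"]
print(f"ICER = ${(c1 - c0) / (q1 - q0):,.0f} per QALY gained")
```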
1987-03-01
would be transcribed as L = AX − V, where L, X, and V are the vectors of constant terms, parametric corrections, and residuals, respectively. ... Just as du′ represents the parametric corrections in tensor notation, the associated metric tensor a′ corresponds to the variance ... observations, n residuals, and n parametric corrections to X (an initial set of parameters), respectively. The vector L is formed as ...
Medvigy, David; Moorcroft, Paul R
2012-01-19
Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2)-structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species-composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2015-05-01
Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
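The paper's central distinction can be demonstrated in a few lines. This Python sketch, on synthetic smooth trajectories rather than the paper's datasets, contrasts the pointwise 0D-style interval with a simultaneous 1D bootstrap band built from the resampled maximum standardized deviation over the whole trajectory.

```python
import numpy as np

rng = np.random.default_rng(4)
n_subj, n_nodes = 10, 101
q = np.linspace(0, 1, n_nodes)
# Synthetic smooth trajectories: common signal plus smooth random noise.
basis = np.array([np.cos(k * np.pi * q) for k in range(8)])
y = np.sin(2 * np.pi * q) + 0.3 * rng.normal(0, 1, (n_subj, 8)) @ basis

mean = y.mean(axis=0)
se = y.std(axis=0, ddof=1) / np.sqrt(n_subj)

# 1D bootstrap: resample subjects and track the MAXIMUM standardized
# deviation over the trajectory, so the band covers the whole 1D mean.
boot_max = []
for _ in range(2000):
    yb = y[rng.integers(0, n_subj, n_subj)]
    se_b = yb.std(axis=0, ddof=1) / np.sqrt(n_subj)
    boot_max.append(np.max(np.abs(yb.mean(axis=0) - mean) / se_b))
z_star = np.percentile(boot_max, 95)

ci_0d = (mean - 1.96 * se, mean + 1.96 * se)        # pointwise, biased for 1D
ci_1d = (mean - z_star * se, mean + z_star * se)    # simultaneous 1D band
print(f"pointwise z = 1.96, simultaneous bootstrap z* = {z_star:.2f}")
```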
SHIPS: Spectral Hierarchical Clustering for the Inference of Population Structure in Genetic Studies
Bouaziz, Matthieu; Paccard, Caroline; Guedj, Mickael; Ambroise, Christophe
2012-01-01
Inferring the structure of populations has many applications for genetic research. In addition to providing information for evolutionary studies, it can be used to account for the bias induced by population stratification in association studies. To this end, many algorithms have been proposed to cluster individuals into genetically homogeneous sub-populations. Parametric algorithms, such as Structure, are very popular, but their underlying complexity and high computational cost led to the development of faster parametric alternatives such as Admixture. Alternatives to these methods are the non-parametric approaches. Among this category, AWclust has proven efficient but fails to properly identify population structure for complex datasets. We present in this article a new clustering algorithm called Spectral Hierarchical clustering for the Inference of Population Structure (SHIPS), based on a divisive hierarchical clustering strategy, allowing a progressive investigation of population structure. This method takes genetic data as input to cluster individuals into homogeneous sub-populations and, using the gap statistic, estimates the optimal number of such sub-populations. SHIPS was applied to a set of simulated discrete and admixed datasets and to real SNP datasets, namely data from the HapMap and Pan-Asian SNP consortia. The programs Structure, Admixture, AWclust and PCAclust were also investigated in a comparison study. SHIPS and the parametric approach Structure were the most accurate when applied to simulated datasets, both in terms of individual assignments and estimation of the correct number of clusters. The analysis of the results on the real datasets highlighted that the clusterings of SHIPS were the most consistent with the population labels or with those produced by the Admixture program. The performance of SHIPS when applied to SNP data, along with its relatively low computational cost and ease of use, makes this method a promising solution to infer fine-scale genetic patterns. PMID:23077494
NASA Astrophysics Data System (ADS)
Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.
2018-01-01
This paper addresses a variance parameterization method. Variance, or dimensional, parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated in a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for machine tooling used in manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method significantly reduces tooling design time when a part's geometric parameters change. The method can also reduce the time needed to design and engineer preproduction, in particular the development of control programs for CNC equipment and coordinate measuring machines, and can automate the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
Neugebauer, Romain; Fireman, Bruce; Roy, Jason A; Raebel, Marsha A; Nichols, Gregory A; O'Connor, Patrick J
2013-08-01
Clinical trials are unlikely to ever be launched for many comparative effectiveness research (CER) questions. Inferences from hypothetical randomized trials may however be emulated with marginal structural modeling (MSM) using observational data, but success in adjusting for time-dependent confounding and selection bias typically relies on parametric modeling assumptions. If these assumptions are violated, inferences from MSM may be inaccurate. In this article, we motivate the application of a data-adaptive estimation approach called super learning (SL) to avoid reliance on arbitrary parametric assumptions in CER. Using the electronic health records data from adults with new-onset type 2 diabetes, we implemented MSM with inverse probability weighting (IPW) estimation to evaluate the effect of three oral antidiabetic therapies on the worsening of glomerular filtration rate. Inferences from IPW estimation were noticeably sensitive to the parametric assumptions about the associations between both the exposure and censoring processes and the main suspected source of confounding, that is, time-dependent measurements of hemoglobin A1c. SL was successfully implemented to harness flexible confounding and selection bias adjustment from existing machine learning algorithms. Erroneous IPW inference about clinical effectiveness because of arbitrary and incorrect modeling decisions may be avoided with SL. Copyright © 2013 Elsevier Inc. All rights reserved.
PARAMETRIC DISTANCE WEIGHTING OF LANDSCAPE INFLUENCE ON STREAMS
We present a parametric model for estimating the areas within watersheds whose land use best predicts indicators of stream ecological condition. We regress a stream response variable on the distance-weighted proportion of watershed area that has a specific land use, such as agric...
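A hedged sketch of the general idea on synthetic data: land use in distance bands is weighted by an exponential-decay kernel, the stream response is regressed on the weighted proportion, and the decay scale is chosen to minimize the residual sum of squares. The exponential kernel is an assumption for illustration, not necessarily the parametric form used in the study.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n_sites, n_bands = 60, 12
dist = np.linspace(0.25, 6.0, n_bands)            # distance-band midpoints (km)
ag = rng.uniform(0, 1, (n_sites, n_bands))        # agriculture fraction per band
true_w = np.exp(-dist / 1.5)                      # synthetic "true" decay
resp = 2.0 - 3.0 * (ag @ true_w) / true_w.sum() + rng.normal(0, 0.2, n_sites)

def sse(lam):
    """Residual sum of squares of the regression for decay scale lam."""
    w = np.exp(-dist / lam)
    x = (ag @ w) / w.sum()                        # distance-weighted proportion
    X = np.column_stack([np.ones(n_sites), x])
    beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
    return np.sum((resp - X @ beta) ** 2)

fit = minimize_scalar(sse, bounds=(0.1, 10.0), method="bounded")
print(f"estimated decay scale approx. {fit.x:.2f} km (true 1.5 km)")
```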
NASA Astrophysics Data System (ADS)
Bereau, Tristan; Wang, Zun-Jing; Deserno, Markus
2014-03-01
Interfacial systems are at the core of fascinating phenomena in many disciplines, such as biochemistry, soft-matter physics, and food science. However, the parametrization of accurate, reliable, and consistent coarse-grained (CG) models for systems at interfaces remains a challenging endeavor. In the present work, we explore to what extent two independently developed solvent-free CG models of peptides and lipids—of different mapping schemes, parametrization methods, target functions, and validation criteria—can be combined by only tuning the cross-interactions. Our results show that the cross-parametrization can reproduce a number of structural properties of membrane peptides (for example, tilt and hydrophobic mismatch), in agreement with existing peptide-lipid CG force fields. We find encouraging results for two challenging biophysical problems: (i) membrane pore formation mediated by the cooperative action of several antimicrobial peptides, and (ii) the insertion and folding of the helix-forming peptide WALP23 in the membrane.
Parametric instability of shaft with discs
NASA Astrophysics Data System (ADS)
Wahab, A. M. Abdul; Rasid, Z. A.; Abu, A.; Rudin, N. F. Mohd Noor
2017-12-01
The occurrence of resonance is a major criterion to be considered in the design of shafts. While forced resonance occurs only when the natural frequency of the rotor system equals the speed of the shaft, parametric resonance or parametric instability can occur at excitation speeds that are integer multiples or sub-multiples of the natural frequency of the rotor. This makes the study of parametric resonance crucial. The parametric instability of a shaft system consisting of a shaft and disks has been investigated in this study. The finite element formulation of the Mathieu-Hill equation that represents the parametric instability problem of the shaft is developed based on Timoshenko's beam theory and Nelson's finite element method (FEM) model, which considers the effect of torsional motion on the problem. Bolotin's method is used to determine the regions of instability and the Strutt-Ince diagram. Validation shows that the results of this study are in close agreement with past results. It is found that a disk of larger radius makes the shaft more unstable than a smaller one of similar weight. Furthermore, the effect of torsional motion on the parametric instability of the shaft is significant at higher rotating speeds.
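The flavour of such an analysis can be shown on a damped Mathieu equation, a one-degree-of-freedom stand-in for the FEM shaft model: integrate the fundamental solutions over one excitation period and inspect the eigenvalues of the monodromy matrix. This is generic Floquet theory rather than the authors' Bolotin implementation, and all coefficients are illustrative.

```python
# Stability of x'' + c x' + (delta + eps*cos(t)) x = 0 via Floquet theory.
import numpy as np
from scipy.integrate import solve_ivp

def stable(delta, eps, c=0.02, T=2 * np.pi):
    def rhs(t, y):
        return [y[1], -c * y[1] - (delta + eps * np.cos(t)) * y[0]]
    M = np.empty((2, 2))
    for i, y0 in enumerate(np.eye(2)):        # two fundamental solutions
        sol = solve_ivp(rhs, (0, T), y0, rtol=1e-9, atol=1e-12)
        M[:, i] = sol.y[:, -1]                # monodromy matrix column
    return np.max(np.abs(np.linalg.eigvals(M))) <= 1.0

# Sweep the (delta, eps) plane to sketch a Strutt-Ince-type stability chart.
for eps in (0.1, 0.4, 0.8):
    deltas = np.linspace(0.0, 1.0, 21)
    row = "".join("S" if stable(d, eps) else "U" for d in deltas)
    print(f"eps={eps:.1f}: {row}")   # U marks the parametric-instability tongues
```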
Direct adaptive robust tracking control for 6 DOF industrial robot with enhanced accuracy.
Yin, Xiuxing; Pan, Li
2018-01-01
A direct adaptive robust tracking control is proposed for trajectory tracking of a 6 DOF industrial robot in the presence of parametric uncertainties, external disturbances and uncertain nonlinearities. The controller is designed based on the dynamic characteristics in the working space of the end-effector of the 6 DOF robot. The controller includes a robust control term and a model compensation term that is developed directly from the input reference or desired motion trajectory. A projection-type parametric adaptation law is also designed to compensate for parametric estimation errors in the adaptive robust control. The feasibility and effectiveness of the proposed direct adaptive robust control law and the associated projection-type parametric adaptation law have been comparatively evaluated on two 6 DOF industrial robots. The test results demonstrate that the proposed control maintains the desired trajectory tracking better than a PD controller and a nonlinear controller, even in the presence of large parametric uncertainties and external disturbances. The parametric estimates also eventually converge to the real values along with the convergence of the tracking errors, which further validates the effectiveness of the proposed parametric adaptation law. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
3D Product Development for Loose-Fitting Garments Based on Parametric Human Models
NASA Astrophysics Data System (ADS)
Krzywinski, S.; Siegmund, J.
2017-10-01
Researchers and commercial suppliers worldwide pursue the objective of achieving a more transparent garment construction process that is computationally linked to a virtual body, in order to save development costs over the long term. The current aim is not to transfer the complete pattern-making step to a 3D design environment, but to work out basic constructions in 3D that provide excellent fit due to their accurate construction and morphological pattern grading (automatic change of sizes in 3D) with respect to sizes and body types. After a computer-aided derivation of 2D pattern parts, these can be made available to the industry as a basis on which to create more fashionable variations.
Robust H∞ output-feedback control for path following of autonomous ground vehicles
NASA Astrophysics Data System (ADS)
Hu, Chuan; Jing, Hui; Wang, Rongrong; Yan, Fengjun; Chadli, Mohammed
2016-03-01
This paper presents a robust H∞ output-feedback control strategy for the path following of autonomous ground vehicles (AGVs). Considering that the vehicle lateral velocity is usually hard to measure with low-cost sensors, a robust H∞ static output-feedback controller based on the mixed genetic algorithm (GA)/linear matrix inequality (LMI) approach is proposed to realize path following without information on the lateral velocity. The proposed controller is robust to parametric uncertainties and external disturbances, the parameters including the tire cornering stiffness, vehicle longitudinal velocity, yaw rate and road curvature. Simulation results based on a CarSim-Simulink joint platform using a high-fidelity, full-car model have verified the effectiveness of the proposed control approach.
Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard
2014-01-01
Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, putting building energy modeling out of reach for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
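The surrogate idea can be sketched briefly: train a fast regressor on (parameter, simulated-energy) pairs so that calibration can query the agent instead of re-running the simulator. The `energyplus_stub` below is a hypothetical stand-in, not EnergyPlus, and the parameter names and ranges are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

def energyplus_stub(params):
    """Hypothetical stand-in for an EnergyPlus run (not the real simulator)."""
    insul, infil, setp = params.T
    return (1e4 / insul + 3e3 * infil + 120.0 * np.abs(setp - 21.0)
            + rng.normal(0, 50, len(params)))

params = np.column_stack([rng.uniform(1, 10, 5000),     # insulation R-value
                          rng.uniform(0.1, 2.0, 5000),  # infiltration ACH
                          rng.uniform(18, 26, 5000)])   # setpoint (deg C)
energy = energyplus_stub(params)

Xtr, Xte, ytr, yte = train_test_split(params, energy, test_size=0.2,
                                      random_state=0)
agent = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
print(f"surrogate R^2 on held-out runs: {agent.score(Xte, yte):.3f}")
```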
Cost-effective conservation of an endangered frog under uncertainty.
Rose, Lucy E; Heard, Geoffrey W; Chee, Yung En; Wintle, Brendan A
2016-04-01
How should managers choose among conservation options when resources are scarce and there is uncertainty regarding the effectiveness of actions? Well-developed tools exist for prioritizing areas for one-time and binary actions (e.g., protect vs. not protect), but methods for prioritizing incremental or ongoing actions (such as habitat creation and maintenance) remain uncommon. We devised an approach that combines metapopulation viability and cost-effectiveness analyses to select among alternative conservation actions while accounting for uncertainty. In our study, cost-effectiveness is the ratio between the benefit of an action and its economic cost, where benefit is the change in metapopulation viability. We applied the approach to the case of the endangered growling grass frog (Litoria raniformis), which is threatened by urban development. We extended a Bayesian model to predict metapopulation viability under 9 urbanization and management scenarios and incorporated the full probability distribution of possible outcomes for each scenario into the cost-effectiveness analysis. This allowed us to discern between cost-effective alternatives that were robust to uncertainty and those with a relatively high risk of failure. We found a relatively high risk of extinction following urbanization if the only action was reservation of core habitat; habitat creation actions performed better than enhancement actions; and the cost-effectiveness ranking changed when uncertainty was considered. Our results suggest that creation and maintenance of wetlands dedicated to L. raniformis is the only cost-effective action likely to result in a sufficiently low risk of extinction. To our knowledge, this is the first study to use Bayesian metapopulation viability analysis to explicitly incorporate parametric and demographic uncertainty into a cost-effectiveness evaluation of conservation actions. The approach offers guidance to decision makers aiming to achieve cost-effective conservation under uncertainty. © 2015 Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Bravo, Teresa; Maury, Cédric
2018-07-01
Enhancing the attenuation or the absorption of low-frequency noise using lightweight bulk-reacting liners is still a demanding task in surface and air transport systems. The aim of this study is to understand the physical mechanisms involved in the attenuation and absorption properties of partitions made up of a thin micro-perforated panel (MPP) rigidly backed by a cavity filled with anisotropic fibrous material. Such a layout is denoted as a MPPF partition. Analytical models are formulated in the flow and no-flow cases to predict the axial damping of the least attenuated wave in a MPPF partition as well as the plane wave absorption coefficient. They account for a rigid or an elastic MPP facing a bulk-reacting fully-anisotropic material. A cost-efficient solution of the propagation constant for the least attenuated mode is obtained using a simulated annealing search method as well as a low-frequency approximation to the axial attenuation. The normal incidence absorption model is assessed in the no-flow case against pressure-velocity measurements of the surface impedance over a MPPF partition filled with fibreglass material. A parametric study is conducted to evaluate the MPP and cavity constitutive parameters that most enhance the axial attenuation and sound absorption properties, with special interest in the MPP airframe relative velocity. This sensitivity study provides guidelines that could be used to further reduce the search space in parametric or impedance optimization studies.
Dalle Carbonare, S; Folli, F; Patrini, E; Giudici, P; Bellazzi, R
2013-01-01
The increasing demand for health care services and the complexity of health care delivery require Health Care Organizations (HCOs) to approach clinical risk management with proper methods and tools. An important aspect of risk management is to exploit the analysis of medical injury compensation claims in order to reduce adverse events and, at the same time, to optimize the costs of health insurance policies. This work provides a probabilistic method to estimate the risk level of an HCO by computing quantitative risk indexes from medical injury compensation claims. Our method is based on the estimate of a loss probability distribution from compensation claims data through parametric and non-parametric modeling and Monte Carlo simulations. The loss distribution can be estimated both on the whole dataset and, thanks to the application of a Bayesian hierarchical model, on stratified data. The approach allows the risk structure of the HCO to be quantitatively assessed by analyzing the loss distribution and deriving its expected value and percentiles. We applied the proposed method to 206 cases of injuries with compensation requests, collected from 1999 to the first semester of 2007 by the HCO of Lodi, in the northern part of Italy. We computed the risk indexes taking into account the different clinical departments and the different hospitals involved. The approach proved useful for understanding the HCO risk structure in terms of frequency, severity, and expected and unexpected loss related to adverse events.
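The core computation can be sketched compactly: fit a parametric severity distribution to claim amounts, assume a Poisson claim frequency, and Monte Carlo the aggregate annual loss to read off the expected value and tail percentiles. The sketch below uses synthetic claims and a plain lognormal fit; the paper additionally considers non-parametric and Bayesian hierarchical variants.

```python
import numpy as np

rng = np.random.default_rng(7)
claims = rng.lognormal(mean=9.5, sigma=1.2, size=206)  # synthetic severities
years = 8.5                                            # observation window
lam = len(claims) / years                              # claims per year

# Lognormal severity MLE from the log-amounts.
mu, sigma = np.log(claims).mean(), np.log(claims).std(ddof=1)

# Monte Carlo the aggregate annual loss: Poisson count, lognormal severities.
n_sim = 20_000
annual_loss = np.array([rng.lognormal(mu, sigma, rng.poisson(lam)).sum()
                        for _ in range(n_sim)])

print(f"expected annual loss: {annual_loss.mean():,.0f}")
print(f"95th / 99th percentile (unexpected loss): "
      f"{np.percentile(annual_loss, 95):,.0f} / "
      f"{np.percentile(annual_loss, 99):,.0f}")
```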
NASA Technical Reports Server (NTRS)
1973-01-01
The general goal of this task, STDN Antenna and Preamplifier G/T Study, was to determine cost-effective combinations of antennas and preamplifiers for several sets of conditions for frequency, antenna elevation angle, and rain. The output of the study includes design curves and tables which indicate the best choice of antenna size and preamplifier type to provide a given G/T performance. The report indicates how to evaluate the cost effectiveness of proposed improvements to a given station. Certain parametric variations are presented to emphasize the improvement available by reducing RF losses and improving the antenna feed.
Convergence optimization of parametric MLEM reconstruction for estimation of Patlak plot parameters.
Angelis, Georgios I; Thielemans, Kris; Tziortzi, Andri C; Turkheimer, Federico E; Tsoumpas, Charalampos
2011-07-01
In dynamic positron emission tomography many researchers have attempted to exploit kinetic models within reconstruction such that parametric images are estimated directly from the measurements. This work studies a direct parametric maximum likelihood expectation maximization (MLEM) algorithm applied to [18F]DOPA data using a reference-tissue input function. We use a modified version for direct reconstruction with a gradually descending scheme of subsets (i.e. 18-6-1), initialized with the FBP parametric image for faster convergence and higher accuracy. The results, compared with analytic reconstructions, show quantitative robustness (i.e. minimal bias) and clinical reproducibility across six human acquisitions in the region of clinical interest. Bland-Altman plots for all the studies showed sufficient quantitative agreement between the directly reconstructed parametric maps and the indirect FBP maps (−0.035x + 0.48×10⁻⁵). Copyright © 2011 Elsevier Ltd. All rights reserved.
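For orientation, the kinetic model being estimated is the Patlak plot: after a quasi-equilibrium time, C_t(t)/C_ref(t) becomes linear in the normalised integral of the reference curve, with slope Ki. The sketch below shows the indirect, post-reconstruction version of the fit on synthetic time-activity curves; in the paper the same parameters are instead estimated inside the MLEM reconstruction.

```python
import numpy as np

t = np.linspace(1, 90, 30)                    # frame mid-times (minutes)
c_ref = 100 * np.exp(-t / 40) + 20            # synthetic reference-tissue TAC
x = np.cumsum(c_ref) * (t[1] - t[0]) / c_ref  # normalised "Patlak time"
Ki_true, V0 = 0.012, 0.9
c_t = (Ki_true * x + V0) * c_ref              # synthetic target-tissue TAC

y = c_t / c_ref
start = 10                                    # use late, linear frames only
Ki, V = np.polyfit(x[start:], y[start:], 1)
print(f"estimated Ki = {Ki:.4f} /min (true {Ki_true})")
```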
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
Artificial neural networks for the performance prediction of heat pump hot water heaters
NASA Astrophysics Data System (ADS)
Mathioulakis, E.; Panaras, G.; Belessiotis, V.
2018-02-01
The rapid growth in the use of heat pumps, driven by decreasing equipment costs together with the favourable economics of the consumed electrical energy, has been accompanied by the wide dissemination of air-to-water heat pumps (AWHPs) in the residential sector. The entry of these systems into the commercial sector has made modelling of the underlying processes important. In this work, the suitability of artificial neural networks (ANNs) for the modelling of AWHPs is investigated. The ambient air temperature at the evaporator inlet and the water temperature at the condenser inlet have been selected as the input variables; energy performance indices and quantities characterising the operation of the system have been selected as output variables. The results verify that the easy-to-implement trained ANN can be an effective tool for predicting AWHP performance under various operating conditions and for the parametric investigation of AWHP behaviour.
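A minimal sketch of such a network: a small multilayer perceptron mapping the two selected inputs to a performance index (here COP). The training data are synthetic and only loosely shaped like a typical AWHP performance map; the architecture and operating ranges are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
t_air = rng.uniform(-10, 35, 800)            # evaporator-inlet air temp (C)
t_w = rng.uniform(25, 55, 800)               # condenser-inlet water temp (C)
cop = 6.0 + 0.06 * t_air - 0.07 * t_w + rng.normal(0, 0.1, 800)  # synthetic map

X = np.column_stack([t_air, t_w])
scaler = StandardScaler().fit(X)
ann = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000,
                   random_state=0).fit(scaler.transform(X), cop)

query = scaler.transform([[7.0, 35.0]])      # rating-point-like conditions
print(f"predicted COP at A7/W35: {ann.predict(query)[0]:.2f}")
```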
Hemodynamics of a Patient-Specific Aneurysm Model with Proper Orthogonal Decomposition
NASA Astrophysics Data System (ADS)
Han, Suyue; Chang, Gary Han; Modarres-Sadeghi, Yahya
2017-11-01
Wall shear stress (WSS) and oscillatory shear index (OSI) are two of the most widely studied hemodynamic quantities in cardiovascular systems; both have been shown to elicit biological responses of the arterial wall, which could be used to predict aneurysm development and rupture. In this study, a reduced-order model (ROM) of the hemodynamics of a patient-specific cerebral aneurysm is studied. The snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases of the flow using a CFD training set with known inflow parameters. It was shown that the area of low WSS and high OSI is correlated with the higher POD modes. The resulting ROM can reproduce both WSS and OSI for future parametric studies at significantly lower computational cost. Agreement was observed between the WSS and OSI values obtained using direct CFD results and ROM results.
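Snapshot POD reduces to a singular value decomposition of the mean-subtracted snapshot matrix, as the following sketch on synthetic data shows; the left singular vectors are the POD modes and a truncated basis gives the reduced-order reconstruction.

```python
import numpy as np

rng = np.random.default_rng(9)
n_points, n_snaps = 2000, 120
# Synthetic snapshot matrix: a few coherent structures plus noise.
modes_true = rng.normal(size=(n_points, 3))
coeffs = rng.normal(size=(3, n_snaps)) * np.array([[10.0], [3.0], [1.0]])
snapshots = modes_true @ coeffs + 0.05 * rng.normal(size=(n_points, n_snaps))

mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

energy = s**2 / np.sum(s**2)
r = int(np.searchsorted(np.cumsum(energy), 0.99)) + 1  # modes for 99% energy
print(f"{r} POD modes capture 99% of the fluctuation energy")

# Reduced-order reconstruction of one snapshot from r modes.
a = U[:, :r].T @ (snapshots[:, [0]] - mean_flow)       # modal coefficients
recon = mean_flow + U[:, :r] @ a
err = (np.linalg.norm(recon - snapshots[:, [0]])
       / np.linalg.norm(snapshots[:, [0]]))
print(f"relative reconstruction error: {err:.3f}")
```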
Study of a Tracking and Data Acquisition System (TDAS) in the 1990's
NASA Technical Reports Server (NTRS)
1981-01-01
Progress in concept definition studies, operational assessments, and technology demonstrations for the Tracking and Data Acquisition System (TDAS) is reported. The proposed TDAS will be the follow-on to the Tracking and Data Relay Satellite System and will function as a key element of the NASA End-to-End Data System, providing the tracking and data acquisition interface between user-accessible data ports on Earth and the user's spaceborne equipment. Technical activities of the "spacecraft data system architecture" task and the "communication mission model" task are emphasized. The objective of the first task is to provide technology forecasts for sensor data handling, navigation and communication systems, and to estimate corresponding costs. The second task is concerned with developing a parametric description of the required communication channels. Other tasks with significant activity include the "frequency plan and radio interference model" and the "Viterbi decoder/simulator study".