Robust design of configurations and parameters of adaptable products
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua
2014-03-01
An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing environmental impact by replacing multiple different products with single adaptable ones. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and the relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration of the adaptable product and its parameter values. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.
Computer Center CDC Libraries/NSRD (Subprograms).
1984-06-01
VALUES; Y - ARRAY OF CORRESPONDING Y-VALUES; N - NUMBER OF VALUES. CM REQUIRED: IOOB. ERROR MESSAGE: 'L=XXXXX, X=X.XXXXXXX E+YY, X NOT MONOTONE' STOP. SELF ... PARAMETERS (SUBSEQUENT REPORTS MAY BE UNSOLICITED). PCRTP1 - REQUEST TERMINAL PARAMETERS (SUBSEQUENT REPORTS ONLY IN RESPONSE TO HOST REQUEST). DA - REQUEST ...
Black hole complementarity in gravity's rainbow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gim, Yongwan; Kim, Wontae, E-mail: yongwan89@sogang.ac.kr, E-mail: wtkim@sogang.ac.kr
2015-05-01
To see how gravity's rainbow works for black hole complementarity, we evaluate the energy required for duplication of information in the context of black hole complementarity by calculating the critical value of the rainbow parameter for a certain class of rainbow Schwarzschild black holes. The resultant energy can be written as a well-defined limit for the vanishing rainbow parameter, which characterizes the deformation of the relativistic dispersion relation in the freely falling frame. It shows that the duplication of information in quantum mechanics could not be allowed below a certain critical value of the rainbow parameter; however, it might be possible above the critical value of the rainbow parameter, so that a consistent formulation of our model requires additional constraints or some other resolution for the latter case.
Procedures for establishing geotechnical design parameters from two data sources.
DOT National Transportation Integrated Search
2013-07-01
The Missouri Department of Transportation (MoDOT) recently adopted new provisions for geotechnical design that require that the mean value and the coefficient of variation (COV) for the mean value of design parameters be established in order to d...
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters, termed hyper-parameters, must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
Reference values of clinical chemistry and hematology parameters in rhesus monkeys (Macaca mulatta).
Chen, Younan; Qin, Shengfang; Ding, Yang; Wei, Lingling; Zhang, Jie; Li, Hongxia; Bu, Hong; Lu, Yanrong; Cheng, Jingqiu
2009-01-01
Rhesus monkey models are valuable for studies of human biology. Reference values for clinical chemistry and hematology parameters of rhesus monkeys are required for proper data interpretation. Whole blood was collected from 36 healthy Chinese rhesus monkeys (Macaca mulatta) of either sex, 3 to 5 yr old. Routine chemistry and hematology parameters, and some special coagulation parameters including thromboelastography and the activities of coagulation factors, were tested. We present here the baseline values of clinical chemistry and hematology parameters in normal Chinese rhesus monkeys. These data may provide valuable information for veterinarians and investigators using rhesus monkeys in experimental studies.
NASA Technical Reports Server (NTRS)
Palmer, Michael T.; Abbott, Kathy H.
1994-01-01
This study identifies improved methods to present system parameter information for detecting abnormal conditions and identifying system status. Two workstation experiments were conducted. The first experiment determined if including expected-value-range information in traditional parameter display formats affected subject performance. The second experiment determined if using a nontraditional parameter display format, which presented relative deviation from expected value, was better than traditional formats with expected-value ranges included. The inclusion of expected-value-range information in traditional parameter formats was found to have essentially no effect. However, subjective results indicated support for including this information. The nontraditional column deviation parameter display format resulted in significantly fewer errors compared with traditional formats with expected-value ranges included. In addition, error rates for the column deviation parameter display format remained stable as the scenario complexity increased, whereas error rates for the traditional parameter display formats with expected-value ranges increased. Subjective results also indicated that the subjects preferred this new format and thought that their performance was better with it. The column deviation parameter display format is recommended for display applications that require rapid recognition of out-of-tolerance conditions, especially for a large number of parameters.
Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values, that is, values of the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships have been developed th...
NASA Astrophysics Data System (ADS)
Bag, S.; de, A.
2010-09-01
Transport-phenomena-based heat transfer and fluid flow calculations in the weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and viscosity in the weld pool are some of these parameters, whose values are rarely known and are difficult to assign a priori based on scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real-number-based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated with measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
NASA Astrophysics Data System (ADS)
Post, Hanna; Vrugt, Jasper A.; Fox, Andrew; Vereecken, Harry; Hendricks Franssen, Harrie-Jan
2017-03-01
The Community Land Model (CLM) contains many parameters whose values are uncertain and thus require careful estimation for model application at individual sites. Here we used Bayesian inference with the DiffeRential Evolution Adaptive Metropolis (DREAM(zs)) algorithm to estimate eight CLM v.4.5 ecosystem parameters using 1 year records of half-hourly net ecosystem CO2 exchange (NEE) observations at four central European sites with different plant functional types (PFTs). The posterior CLM parameter distributions of each site were estimated per individual season and on a yearly basis. These estimates were then evaluated using NEE data from an independent evaluation period and data from "nearby" FLUXNET sites at 600 km distance from the original sites. Latent variables (multipliers) were used to explicitly treat uncertainty in the initial carbon-nitrogen pools. The posterior parameter estimates were superior to their default values in their ability to track and explain the measured NEE data of each site. The seasonal parameter values reduced the bias in the simulated NEE values by more than 50% (averaged over all sites). The most consistent performance of CLM during the evaluation period was found for the posterior parameter values of the forest PFTs, and, contrary to the C3-grass and C3-crop sites, the latent variables of the initial pools further enhanced the quality-of-fit. The carbon sink function of the forest PFTs significantly increased with the posterior parameter estimates. We thus conclude that land surface model predictions of carbon stocks and fluxes require careful consideration of uncertain ecological parameters and initial states.
Investigating the Metallicity–Mixing-length Relation
NASA Astrophysics Data System (ADS)
Viani, Lucas S.; Basu, Sarbani; Joel Ong J., M.; Bonaca, Ana; Chaplin, William J.
2018-05-01
Stellar models typically use the mixing-length approximation as a way to implement convection in a simplified manner. While conventionally the value of the mixing-length parameter, α, used is the solar-calibrated value, many studies have shown that other values of α are needed to properly model stars. This uncertainty in the value of the mixing-length parameter is a major source of error in stellar models and isochrones. Using asteroseismic data, we determine the value of the mixing-length parameter required to properly model a set of about 450 stars ranging in log g, T_eff, and [Fe/H]. The relationship between the value of α required and the properties of the star is then investigated. For Eddington-atmosphere, non-diffusion models, we find that the value of α can be approximated by a linear model of the form α/α_⊙ = 5.426 - 0.101 log(g) - 1.071 log(T_eff) + 0.437 [Fe/H]. This process is repeated using a variety of model physics, as well as compared with previous studies and results from 3D convective simulations.
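As a quick illustration of how the fitted relation above can be applied, the following Python sketch evaluates the linear model for a given star; the example input values are hypothetical, and only the coefficients come from the abstract (Eddington-atmosphere, non-diffusion models).

import math

def mixing_length_ratio(log_g, t_eff, fe_h):
    # alpha/alpha_sun from the linear relation quoted above
    # (Eddington-atmosphere, non-diffusion model grid).
    return 5.426 - 0.101 * log_g - 1.071 * math.log10(t_eff) + 0.437 * fe_h

# Hypothetical example: a slightly metal-poor subgiant.
print(mixing_length_ratio(log_g=3.8, t_eff=5100.0, fe_h=-0.3))  # about 0.94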
40 CFR 98.35 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. Whenever a quality-assured value of a required parameter is... substitute data value for the missing parameter shall be used in the calculations. (a) For all units subject...
40 CFR 98.35 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. Whenever a quality-assured value of a required parameter is... substitute data value for the missing parameter shall be used in the calculations. (a) For all units subject...
40 CFR 98.35 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. Whenever a quality-assured value of a required parameter is... substitute data value for the missing parameter shall be used in the calculations. (a) For all units subject...
40 CFR 98.35 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. Whenever a quality-assured value of a required parameter is... substitute data value for the missing parameter shall be used in the calculations. (a) For all units subject...
40 CFR 98.35 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. Whenever a quality-assured value of a required parameter is... substitute data value for the missing parameter shall be used in the calculations. (a) For all units subject...
Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment
NASA Astrophysics Data System (ADS)
Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin
2017-10-01
Mobile payment is becoming more and more popular; however, traditional public-key encryption algorithms place high demands on hardware and are not well suited to mobile terminals with limited computing resources. In addition, these public-key encryption algorithms are not resistant to quantum computing. This paper studies the quantum-resistant public-key encryption algorithm NTRU by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve the probability of generating a reasonable signature value: first, increase the value of the parameter q; second, during the signature phase, add an authentication condition that the reasonable-signature requirements are met. Experimental results show that the proposed signature scheme achieves zero leakage of the private-key information in the signature value and increases the probability of generating a reasonable signature value. It also improves the signing rate and avoids the propagation of invalid signatures in the network, but the scheme places certain restrictions on parameter selection.
Stotts, Steven A; Koch, Robert A
2017-08-01
In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.
Outdoor ground impedance models.
Attenborough, Keith; Bashir, Imran; Taherzadeh, Shahram
2011-05-01
Many models for the acoustical properties of rigid-porous media require knowledge of parameter values that are not available for outdoor ground surfaces. The relationship used between tortuosity and porosity for stacked spheres results in five characteristic impedance models that require not more than two adjustable parameters. These models and hard-backed-layer versions are considered further through numerical fitting of 42 short range level difference spectra measured over various ground surfaces. For all but eight sites, slit-pore, phenomenological and variable porosity models yield lower fitting errors than those given by the widely used one-parameter semi-empirical model. Data for 12 of 26 grassland sites and for three beech wood sites are fitted better by hard-backed-layer models. Parameter values obtained by fitting slit-pore and phenomenological models to data for relatively low flow resistivity grounds, such as forest floors, porous asphalt, and gravel, are consistent with values that have been obtained non-acoustically. Three impedance models yield reasonable fits to a narrow band excess attenuation spectrum measured at short range over railway ballast but, if extended reaction is taken into account, the hard-backed-layer version of the slit-pore model gives the most reasonable parameter values.
Regan, R. Steven; Markstrom, Steven L.; Hay, Lauren E.; Viger, Roland J.; Norton, Parker A.; Driscoll, Jessica M.; LaFontaine, Jacob H.
2018-01-08
This report documents several components of the U.S. Geological Survey National Hydrologic Model of the conterminous United States for use with the Precipitation-Runoff Modeling System (PRMS). It provides descriptions of the (1) National Hydrologic Model, (2) Geospatial Fabric for National Hydrologic Modeling, (3) PRMS hydrologic simulation code, (4) parameters and estimation methods used to compute spatially and temporally distributed default values as required by PRMS, (5) National Hydrologic Model Parameter Database, and (6) model extraction tool named Bandit. The National Hydrologic Model Parameter Database contains values for all PRMS parameters used in the National Hydrologic Model. The methods and national datasets used to estimate all the PRMS parameters are described. Some parameter values are derived from characteristics of topography, land cover, soils, geology, and hydrography using traditional Geographic Information System methods. Other parameters are set to long-established default values or computed initial values. Additionally, methods (statistical, sensitivity, calibration, and algebraic) were developed to compute parameter values on the basis of a variety of nationally consistent datasets. Values in the National Hydrologic Model Parameter Database can periodically be updated on the basis of new parameter estimation methods and as additional national datasets become available. A companion ScienceBase resource provides a set of static parameter values as well as images of spatially distributed parameters associated with PRMS states and fluxes for each Hydrologic Response Unit across the conterminous United States.
The Value of Certainty (Invited)
NASA Astrophysics Data System (ADS)
Barkstrom, B. R.
2009-12-01
It is clear that Earth science data are valued, in part, for their ability to provide some certainty about the past state of the Earth and about its probable future states. We can sharpen this notion by using seven categories of value: ● Warning Service, requiring latency of three hours or less, as well as uninterrupted service ● Information Service, requiring latency less than about two weeks, as well as uninterrupted service ● Process Information, requiring the ability to distinguish between alternative processes ● Short-term Statistics, requiring the ability to construct a reliable record of the statistics of a parameter for an interval of five years or less, e.g. crop insurance ● Mid-term Statistics, requiring the ability to construct a reliable record of the statistics of a parameter for an interval of twenty-five years or less, e.g. power plant siting ● Long-term Statistics, requiring the ability to construct a reliable record of the statistics of a parameter for an interval of a century or less, e.g. one hundred year flood planning ● Doomsday Statistics, requiring the ability to construct a reliable statistical record that is useful for reducing the impact of 'doomsday' scenarios. While the first two of these categories place high value on having an uninterrupted flow of information, and the third places value on contributing to our understanding of physical processes, it is notable that the last four may be placed on a common footing by considering the ability of observations to reduce uncertainty. Quantitatively, we can often identify metrics for parameters of interest that are fairly simple. For example: ● Detection of change in the average value of a single parameter, such as global temperature ● Detection of a trend, whether linear or nonlinear, such as the trend in cloud forcing known as cloud feedback ● Detection of a change in extreme value statistics, such as flood frequency or drought severity. For such quantities, we can quantify uncertainty in terms of the entropy, which is calculated by creating a set of discrete bins for the value and then using error estimates to assign a probability, p_i, to each bin. The entropy, H, is simply H = Σ_i p_i log2(1/p_i). The value of a new set of observations is the information gain, I, which is I = H_prior - H_posterior. The probability distributions that appear in this calculation depend on rigorous evaluation of errors in the observations. While direct estimates of the monetary value of data that could be used in budget prioritizations may not capture the value of data to the scientific community, it appears that the information gain may be a useful start in providing a 'common currency' for evaluating projects that serve very different communities. In addition, from the standpoint of governmental accounting, it appears reasonable to assume that much of the expense for scientific data becomes a sunk cost shortly after operations begin and that the real, long-term value is created by the effort scientists expend in creating the software that interprets the data and in the effort expended in calibration and validation. These efforts are the ones that directly contribute to the information gain that provides the value of these data.
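A minimal Python sketch of the entropy and information-gain bookkeeping described above; the binning and the example probabilities are hypothetical placeholders, and only the two formulas come from the abstract.

import numpy as np

def entropy_bits(probabilities):
    # H = sum_i p_i * log2(1/p_i), ignoring empty bins.
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]
    return float(np.sum(p * np.log2(1.0 / p)))

def information_gain(prior_probs, posterior_probs):
    # I = H_prior - H_posterior.
    return entropy_bits(prior_probs) - entropy_bits(posterior_probs)

# Hypothetical example: new observations narrow a parameter from 8 equally
# likely bins to being concentrated in 2 of them.
prior = np.full(8, 1.0 / 8.0)
posterior = np.array([0.0, 0.05, 0.45, 0.45, 0.05, 0.0, 0.0, 0.0])
print(information_gain(prior, posterior))  # about 1.5 bits of uncertainty removed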
Automatic detection of malaria parasite in blood images using two parameters.
Kim, Jong-Dae; Nam, Kyeong-Min; Park, Chan-Young; Kim, Yu-Seop; Song, Hye-Jeong
2015-01-01
Malaria must be diagnosed quickly and accurately at the initial infection stage and treated early to cure it properly. The malaria diagnosis method using a microscope requires much labor and time from a skilled expert, and the diagnosis results vary greatly between individual diagnosticians. Therefore, to measure malaria parasite infection quickly and accurately, studies have been conducted on automated classification techniques using various parameters. In this study, by measuring classification performance according to changes in two parameters, the parameter values that best distinguish normal from plasmodium-infected red blood cells were determined. To reduce the stain deviation of the acquired images, a principal component analysis (PCA) grayscale conversion method was used, and as parameters, we used the malaria-infected area and a threshold value used in binarization. The parameter values with the best classification performance were determined by selecting the malaria threshold value (72) corresponding to the lowest error rate, given a cell threshold value of 128, for detecting plasmodium-infected red blood cells.
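A rough Python sketch of the two-parameter pipeline described above (PCA grayscale conversion, binarization with a cell threshold of 128 and a malaria threshold of 72, and an infected-area measurement); the final decision rule on the infected-area fraction is an assumption for illustration and is not specified in the abstract.

import numpy as np

def pca_grayscale(rgb):
    # Project RGB pixels onto their first principal component to reduce stain variation.
    pixels = rgb.reshape(-1, 3).astype(float)
    centered = pixels - pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    gray = centered @ vt[0]
    gray = 255.0 * (gray - gray.min()) / (gray.ptp() + 1e-9)  # rescale to 0-255
    return gray.reshape(rgb.shape[:2])

def classify_cell(rgb, cell_threshold=128, malaria_threshold=72, area_fraction=0.02):
    # Label a single red-blood-cell image as infected or normal (illustrative rule).
    gray = pca_grayscale(rgb)
    cell_mask = gray < cell_threshold                        # pixels assumed to belong to the cell
    parasite_mask = cell_mask & (gray < malaria_threshold)   # darker, stained regions
    infected_area = parasite_mask.sum() / max(cell_mask.sum(), 1)
    return "infected" if infected_area > area_fraction else "normal"

# Hypothetical usage with a synthetic 64x64 RGB patch.
patch = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(classify_cell(patch))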
NASA Astrophysics Data System (ADS)
Ghorbanpour Arani, A.; Zamani, M. H.
2018-06-01
The present work deals with the bending behavior of a nanocomposite beam resting on a two-parameter modified Vlasov model foundation (MVMF), with consideration of agglomeration and distribution of carbon nanotubes (CNTs) in the beam matrix. An equivalent fiber based on the Eshelby-Mori-Tanaka approach is employed to determine the influence of CNT agglomeration on the elastic properties of the CNT-reinforced beam. The governing equations are deduced using the principle of minimum potential energy under the assumptions of Euler-Bernoulli beam theory. The MVMF requires estimation of the γ parameter; for this purpose, a unique iterative technique based on variational principles is utilized to compute the value of γ, and subsequently the fourth-order differential equation is solved analytically. Eventually, the transverse displacements and bending stresses are obtained and compared for different agglomeration parameters and various boundary conditions simultaneously, and for a variant elastic foundation without the requirement to specify values for the foundation parameters.
Schumacher, Carsten; Eismann, Hendrik; Sieg, Lion; Friedrich, Lars; Scheinichen, Dirk; Vondran, Florian W R; Johanning, Kai
2018-01-01
Liver transplantation is a complex intervention, and early anticipation of personnel and logistic requirements is of great importance. Early identification of high-risk patients could prove useful. We therefore evaluated the prognostic value of recipient parameters commonly available in the early preoperative stage regarding postoperative 30- and 90-day outcomes and intraoperative transfusion requirements in liver transplantation. All adult patients undergoing first liver transplantation at Hannover Medical School between January 2005 and December 2010 were included in this retrospective study. Demographic, clinical, and laboratory data as well as clinical courses were recorded. Prognostic values regarding 30- and 90-day outcomes were evaluated by uni- and multivariate statistical tests. Identified risk parameters were used to calculate risk scores. There were 426 patients (40.4% female) included with a mean age of 48.6 (11.9) years. The absolute 30-day mortality rate was 9.9%, and the absolute 90-day mortality rate was 13.4%. Preoperative leukocyte count >5200/μL, platelet count <91 000/μL, and creatinine values ≥77 μmol/L were relevant risk factors for both observation periods (P < .05, respectively). A score based on these factors significantly differentiated between groups of varying postoperative outcomes and intraoperative transfusion requirements (P < .05, respectively). A score based on preoperative creatinine, leukocyte, and platelet values allowed early estimation of postoperative 30- and 90-day outcomes and intraoperative transfusion requirements in liver transplantation. Results might help to improve timely logistic and personnel strategies.
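A small Python sketch of the kind of bedside score the abstract describes, counting one point for each preoperative risk factor (leukocytes > 5200/μL, platelets < 91 000/μL, creatinine ≥ 77 μmol/L); the equal weighting is an assumption, since the abstract does not give the exact scoring rule.

def preop_risk_score(leukocytes_per_ul, platelets_per_ul, creatinine_umol_l):
    # Count the preoperative risk factors identified in the study (assumed 1 point each).
    score = 0
    if leukocytes_per_ul > 5200:
        score += 1
    if platelets_per_ul < 91_000:
        score += 1
    if creatinine_umol_l >= 77:
        score += 1
    return score  # 0 (lowest risk) to 3 (highest risk)

# Hypothetical patient.
print(preop_risk_score(leukocytes_per_ul=6400, platelets_per_ul=85_000, creatinine_umol_l=70))  # -> 2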
NASA Astrophysics Data System (ADS)
Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric
2014-08-01
In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
NASA Astrophysics Data System (ADS)
Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.
2011-11-01
The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effect of the pseudo-value for the allocation fraction κ is reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) requires data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. We recognise this as the reason why two very different parameter sets must exist that fit most data sets reasonably well, and give arguments why, in most cases, the set with the large value of κ should be preferred. The continued development of a parameter database through the estimation procedures described here will provide a strong basis for understanding evolutionary patterns in metabolic organisation across the diversity of life.
Recommended Parameter Values for GENII Modeling of Radionuclides in Routine Air and Water Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Sandra F.; Arimescu, Carmen; Napier, Bruce A.
The GENII v2 code is used to estimate dose to individuals or populations from the release of radioactive materials into air or water. Numerous parameter values are required for input into this code. User-defined parameters cover the spectrum from chemical data, meteorological data, agricultural data, and behavioral data. This document is a summary of parameter values that reflect conditions in the United States. Reasonable regional and age-dependent data are summarized. Data availability and quality vary. The set of parameters described addresses scenarios for chronic air emissions or chronic releases to public waterways. Considerations for the special tritium and carbon-14 models are briefly addressed. GENII v2.10.0 is the current software version that this document supports.
Quantifying Groundwater Model Uncertainty
NASA Astrophysics Data System (ADS)
Hill, M. C.; Poeter, E.; Foglia, L.
2007-12-01
Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.
40 CFR 98.315 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... measured parameters used in the GHG emissions calculations is required (e.g., carbon content values, etc... such estimates. (a) For each missing value of the monthly carbon content of calcined petroleum coke the substitute data value shall be the arithmetic average of the quality-assured values of carbon contents for...
An Analysis of Control Requirements and Control Parameters for Direct-Coupled Turbojet Engines
NASA Technical Reports Server (NTRS)
Novik, David; Otto, Edward W.
1947-01-01
Requirements of an automatic engine control, as affected by engine characteristics, have been analyzed for a direct-coupled turbojet engine. Control parameters for various conditions of engine operation are discussed. A hypothetical engine control is presented to illustrate the use of these parameters. An adjustable speed governor was found to offer a desirable method of over-all engine control. The selection of a minimum value of fuel flow was found to offer a means of preventing unstable burner operation during steady-state operation. Until satisfactory high-temperature-measuring devices are developed, air-fuel ratio is considered to be a satisfactory acceleration-control parameter for the attainment of the maximum acceleration rates consistent with safe turbine temperatures. No danger of unstable burner operation exists during acceleration if a temperature-limiting acceleration control is assumed to be effective. Deceleration was found to be accompanied by the possibility of burner blow-out even if a minimum fuel-flow control that prevents burner blow-out during steady-state operation is assumed to be effective. Burner blow-out during deceleration may be eliminated by varying the value of minimum fuel flow as a function of compressor-discharge pressure, but in no case should the fuel flow be allowed to fall below the value required for steady-state burner operation.
Sensitivity of NTCP parameter values against a change of dose calculation algorithm.
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-01
Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
Sensitivity of NTCP parameter values against a change of dose calculation algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-15
Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations, the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
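A simplified Python sketch of the estimation idea described above: the arterial (characteristic) resistance is held fixed, candidate values of compliance and peripheral resistance are drawn at random, and the pair that minimizes the error between measured and model-predicted aortic pressure is kept. The three-element Windkessel equations and the search ranges used here are standard textbook forms and assumptions, not the authors' exact implementation.

import numpy as np

def windkessel_pressure(q, dt, zc, c, rp, p0=80.0):
    # Three-element Windkessel: P = P_c + Zc*Q, with C*dP_c/dt = Q - P_c/Rp (forward Euler).
    pc = np.empty_like(q)
    pc[0] = p0
    for i in range(1, len(q)):
        pc[i] = pc[i - 1] + dt * (q[i - 1] - pc[i - 1] / rp) / c
    return pc + zc * q

def estimate_parameters(q, p_measured, dt, zc, n_samples=5000, seed=0):
    # Monte Carlo search over (C, Rp); Zc (arterial resistance) is kept constant.
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_samples):
        c = rng.uniform(0.1, 3.0)    # compliance, mL/mmHg (hypothetical search range)
        rp = rng.uniform(0.5, 3.0)   # peripheral resistance, mmHg*s/mL (hypothetical range)
        err = np.mean((windkessel_pressure(q, dt, zc, c, rp) - p_measured) ** 2)
        if best is None or err < best[0]:
            best = (err, c, rp)
    return best  # (mse, compliance, peripheral resistance)

# Hypothetical usage: q (mL/s) and p_measured (mmHg) sampled every dt seconds over steady beats.
# mse, c_hat, rp_hat = estimate_parameters(q, p_measured, dt=0.005, zc=0.05)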
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
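A condensed Python sketch of the Metropolis-style chain described above: candidate sets of (mass, CdA, Crr) are proposed, modeled road load from a road-load equation is compared with measured load, and acceptance uses the probability ratio to the current state. The specific road-load form (flat road), the Gaussian noise model, and the step sizes are assumptions for illustration.

import numpy as np

RHO, G = 1.2, 9.81  # air density (kg/m^3), gravitational acceleration (m/s^2)

def road_load(theta, v, a):
    # F = m*a + 0.5*rho*CdA*v^2 + Crr*m*g (grade term omitted for simplicity).
    m, cda, crr = theta
    return m * a + 0.5 * RHO * cda * v ** 2 + crr * m * G

def log_likelihood(theta, v, a, f_measured, sigma=200.0):
    resid = f_measured - road_load(theta, v, a)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(v, a, f_measured, theta0, steps=20000, step_scale=(50.0, 0.05, 0.0005), seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    logp = log_likelihood(theta, v, a, f_measured)
    chain = []
    for _ in range(steps):
        proposal = theta + rng.normal(0.0, step_scale)
        logp_new = log_likelihood(proposal, v, a, f_measured)
        if np.log(rng.uniform()) < logp_new - logp:  # accept with the probability ratio
            theta, logp = proposal, logp_new
        chain.append(theta.copy())
    return np.array(chain)  # chain history = distribution of (mass, CdA, Crr) sets

# Hypothetical usage: v (m/s), a (m/s^2), f_measured (N) logged during standard operation.
# chain = metropolis(v, a, f_measured, theta0=(15000.0, 6.0, 0.007))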
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because minimum information required in regression on chemical data for the estimation of model parameters by regression is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters when the initial sets of parameter values substantially deviated from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front near a time chosen by adding the inverse of an hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
The 4-parameter Compressible Packing Model (CPM) including a critical cavity size ratio
NASA Astrophysics Data System (ADS)
Roquier, Gerard
2017-06-01
The 4-parameter Compressible Packing Model (CPM) has been developed to predict the packing density of mixtures of bidisperse spherical particles. The four parameters are: the wall effect and loosening effect coefficients, the compaction index, and a critical cavity size ratio. The two geometrical interactions have been studied theoretically on the basis of a spherical cell centered on a secondary-class bead. For the loosening effect, a critical cavity size ratio, below which a fine particle can be inserted into a small cavity created by touching coarser particles, is introduced. This is the only parameter which requires adaptation to extend the model to other types of particles. The 4-parameter CPM demonstrates its efficiency on frictionless glass beads (300 values), numerically simulated spherical particles (20 values), round natural particles (125 values), and crushed particles (335 values), with correlation coefficients of 99.0%, 98.7%, 97.8%, and 96.4%, respectively, and mean deviations of 0.007, 0.006, 0.007, and 0.010, respectively.
Parameter extraction and transistor models
NASA Technical Reports Server (NTRS)
Rykken, Charles; Meiser, Verena; Turner, Greg; Wang, QI
1985-01-01
Using specified mathematical models of the MOSFET device, the optimal values of the model-dependent parameters were extracted from data provided by the Jet Propulsion Laboratory (JPL). Three MOSFET models, all one-dimensional, were used. One of the models took into account diffusion (as well as convection) currents. The sensitivity of the models was assessed for variations of the parameters from their optimal values. Lines of future inquiry are suggested on the basis of the behavior of the devices, the limitations of the proposed models, and the complexity of the required numerical investigations.
Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny
2015-01-01
Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
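The following Python sketch shows the general shape of a regression-based EVSI calculation of the kind described above: starting from a PSA sample of parameter draws and net benefits, a plausible data set is generated for each draw, reduced to a summary statistic, and a smooth regression of net benefit on that summary replaces the inner Monte Carlo loop. The toy decision model, the choice of summary statistic, and the polynomial smoother are all assumptions for illustration, not the authors' exact algorithm.

import numpy as np

rng = np.random.default_rng(1)

# Probabilistic sensitivity analysis (PSA) sample from a toy two-option model.
n = 10_000
theta = rng.normal(0.1, 0.05, n)                # uncertain treatment effect
nb = np.column_stack([np.zeros(n),              # net benefit of option 0 (baseline)
                      20_000 * theta - 1_000])  # net benefit of option 1

# Proposed study: for each PSA draw, simulate the summary of the data it could produce.
n_patients = 50
data_mean = rng.normal(theta, 0.2 / np.sqrt(n_patients))  # summary statistic T(D)

# Regress each option's net benefit on the data summary (polynomial smoother).
fitted = np.column_stack([
    np.polyval(np.polyfit(data_mean, nb[:, d], deg=3), data_mean)
    for d in range(nb.shape[1])
])

# EVSI = E_T[max_d E(NB_d | T)] - max_d E(NB_d).
evsi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"EVSI estimate for the proposed study: {evsi:.1f}")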
Code of Federal Regulations, 2014 CFR
2014-07-01
..., or to temperature simulation devices. (vi) Conduct a visual inspection of each sensor every quarter... sensor values with electronic signal simulations or via relative accuracy testing. (v) Perform accuracy... values with electronic signal simulations or with values obtained via relative accuracy testing. (vi...
Code of Federal Regulations, 2013 CFR
2013-07-01
..., or to temperature simulation devices. (vi) Conduct a visual inspection of each sensor every quarter... sensor values with electronic signal simulations or via relative accuracy testing. (v) Perform accuracy... values with electronic signal simulations or with values obtained via relative accuracy testing. (vi...
Code of Federal Regulations, 2012 CFR
2012-07-01
..., or to temperature simulation devices. (vi) Conduct a visual inspection of each sensor every quarter... sensor values with electronic signal simulations or via relative accuracy testing. (v) Perform accuracy... values with electronic signal simulations or with values obtained via relative accuracy testing. (vi...
The estimation of parameter compaction values for pavement subgrade stabilized with lime
NASA Astrophysics Data System (ADS)
Lubis, A. S.; Muis, Z. A.; Simbolon, C. A.
2018-02-01
The type of soil material, field control, maintenance, and availability of funds are several factors that must be considered in compaction of the pavement subgrade. Determining the compaction parameters in the laboratory requires considerable materials, time, and funds, as well as reliable laboratory operators. If soil classification values can be used to estimate the compaction parameters of a subgrade material, it would save time, energy, materials, and cost in the execution of this work. This also serves as a clarification (cross check) of the work that has been done by technicians in the laboratory. The study aims to estimate the compaction parameter values, i.e., maximum dry unit weight (γd max) and optimum water content (Wopt), of soil subgrade stabilized with lime. The tests conducted in the soil mechanics laboratory determined the index properties (fines and Liquid Limit/LL) and the Standard Compaction Test. Thirty soil samples with a Plasticity Index (PI) > 10% were prepared with an additional 3% lime. Using the Goswami equation, the compaction parameter values can be estimated by γd max = -0.1686 log G + 1.8434 and Wopt = 2.9178 log G + 17.086. From the validation calculation, there was a significant positive correlation between the laboratory compaction parameter values and the estimated compaction parameter values, with a 95% confidence interval indicating a strong relationship.
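A small Python sketch of the Goswami estimation equations quoted above; base-10 logarithms and the units are assumed, and the group parameter G itself must be computed from the soil classification data (fines content and liquid limit) following the Goswami procedure, which this abstract does not restate.

import math

def compaction_parameters(G):
    # Estimate maximum dry unit weight and optimum water content from the Goswami parameter G.
    gamma_d_max = -0.1686 * math.log10(G) + 1.8434  # assumed g/cm^3
    w_opt = 2.9178 * math.log10(G) + 17.086         # assumed percent
    return gamma_d_max, w_opt

# Hypothetical value of G for a lime-stabilized subgrade sample.
print(compaction_parameters(G=20.0))  # roughly (1.62, 20.9)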
Estimates of the ionization association and dissociation constant (pKa) are vital to modeling the pharmacokinetic behavior of chemicals in vivo. Methodologies for the prediction of compound sequestration in specific tissues using partition coefficients require a parameter that ch...
40 CFR 80.92 - Baseline auditor requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Baseline auditor requirements. 80.92... (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Anti-Dumping § 80.92 Baseline auditor requirements. (a... determination methodology, resulting baseline fuel parameter, volume and emissions values verified by an auditor...
Yan, Xu; Zhou, Minxiong; Ying, Lingfang; Yin, Dazhi; Fan, Mingxia; Yang, Guang; Zhou, Yongdi; Song, Fan; Xu, Dongrong
2013-01-01
Diffusion kurtosis imaging (DKI) is a new method of magnetic resonance imaging (MRI) that provides non-Gaussian information that is not available in conventional diffusion tensor imaging (DTI). DKI requires data acquisition at multiple b-values for parameter estimation; this process is usually time-consuming. Therefore, fewer b-values are preferable to expedite acquisition. In this study, we carefully evaluated various acquisition schemas using different numbers and combinations of b-values. The optimized acquisition schemas sampled b-values distributed toward the two ends of the range. Compared to conventional schemas using equally spaced b-values (ESB), optimized schemas require fewer b-values to minimize fitting errors in parameter estimation and may thus significantly reduce scanning time. Following a ranked list of optimized schemas resulting from the evaluation, we recommend the 3b schema based on its estimation accuracy and time efficiency, which needs data from only 3 b-values at 0, around 800, and around 2600 s/mm2, respectively. Analyses using voxel-based analysis (VBA) and region-of-interest (ROI) analysis with human DKI datasets support the use of the optimized 3b (0, 1000, 2500 s/mm2) DKI schema in practical clinical applications. PMID:23735303
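As an illustration of why three well-spread b-values suffice, the Python sketch below solves the standard diffusion-kurtosis signal representation ln S(b) = ln S0 - b*D + (b^2/6)*D^2*K exactly from three measurements; that signal equation is the usual DKI form and is assumed here, since the abstract does not restate it.

import numpy as np

def fit_dki_3b(b_values, signals):
    # Solve ln S(b) = ln S0 - b*D + (b^2/6)*D^2*K from exactly three b-values.
    b = np.asarray(b_values, dtype=float)
    design = np.column_stack([np.ones_like(b), -b, b ** 2 / 6.0])
    c0, c1, c2 = np.linalg.solve(design, np.log(signals))
    D = c1            # apparent diffusivity (mm^2/s)
    K = c2 / D ** 2   # apparent kurtosis (dimensionless)
    return np.exp(c0), D, K

# Hypothetical signals generated at the recommended b-values (s/mm^2).
s0, d_true, k_true = 1000.0, 1.0e-3, 0.9
b = np.array([0.0, 800.0, 2600.0])
s = s0 * np.exp(-b * d_true + (b ** 2 / 6.0) * d_true ** 2 * k_true)
print(fit_dki_3b(b, s))  # recovers (1000.0, 0.001, 0.9)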
Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values, that is, values of the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships have been developed t...
Restoration of acidic mine spoils with sewage sludge: II measurement of solids applied
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stucky, D.J.; Zoeller, A.L.
1980-01-01
Sewage sludge was incorporated into acidic strip mine spoils at rates equivalent to 0, 224, 336, and 448 dry metric tons (dmt)/ha and placed in pots in a greenhouse. Spoil parameters were determined 48 hours after sludge incorporation, Time Planting (P), and five months after orchardgrass (Dactylis glomerata L.) was planted in the pots, Time Harvest (H). The parameters measured were: pH, organic matter content (OM), cation exchange capacity (CEC), electrical conductivity (EC), and yield. Values for each parameter were significantly different at the two sampling times. Correlation coefficient values were calculated for all parameters versus rates of applied sewage sludge and for all parameters versus each other. Multiple regressions were performed, stepwise, for all parameters versus rates of applied sewage sludge. Equations to predict amounts of sewage sludge incorporated in spoils were derived for individual and multiple parameters. Generally, measurements made at Time P achieved the highest correlation coefficient and multiple correlation coefficient values; therefore, the authors concluded that data from Time P had the greatest predictive value. The most important measured value for predicting the rate of applied sewage sludge was pH, and some additional accuracy was obtained by including CEC in the equation. This experiment indicated that soil properties can be used to estimate amounts of sewage sludge solids required to reclaim acidic mine spoils and to estimate quantities incorporated.
TU-FG-201-09: Predicting Accelerator Dysfunction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Able, C; Nguyen, C; Baydush, A
Purpose: To develop an integrated statistical process control (SPC) framework using digital performance and component data accumulated within the accelerator system that can detect dysfunction prior to unscheduled downtime. Methods: Seven digital accelerators were monitored for 12 to 18 months. The accelerators were operated in a ‘run to failure mode’ with the individual institutions determining when service would be initiated. Institutions were required to submit detailed service reports. Trajectory and text log files resulting from a robust daily VMAT QA delivery were decoded and evaluated using Individual and Moving Range (I/MR) control charts. The SPC evaluation was presented in a customized dashboard interface that allows the user to review 525 monitored parameters (480 MLC parameters). Chart limits were calculated using a hybrid technique that includes the standard SPC 3σ limits and an empirical factor based on the parameter/system specification. The individual (I) grand mean values and control limit ranges of the I/MR charts of all accelerators were compared using statistical (ranked analysis of variance (ANOVA)) and graphical analyses to determine consistency of operating parameters. Results: When an alarm or warning was directly connected to field service, process control charts predicted dysfunction consistently on beam generation related parameters (BGP) – RF Driver Voltage, Gun Grid Voltage, and Forward Power (W); beam uniformity parameters – angle and position steering coil currents; and Gantry position accuracy parameter: cross correlation max-value. Control charts for individual MLC – cross correlation max-value/position detected 50% to 60% of MLCs serviced prior to dysfunction or failure. In general, non-random changes were detected 5 to 80 days prior to a service intervention. The ANOVA comparison of BGP determined that each accelerator parameter operated at a distinct value. Conclusion: The SPC framework shows promise. Long term monitoring coordinated with service will be required to definitively determine the effectiveness of the model. Varian Medical System, Inc. provided funding in support of the research presented.
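A minimal sketch of the Individual/Moving Range control limits mentioned above, using the conventional SPC constants (2.66 for the I chart, 3.267 for the MR chart); the hybrid specification-based widening described in the abstract and the example readings are assumptions, not data from the study.

```python
import numpy as np

def imr_limits(x):
    """Standard Individuals / Moving Range (I/MR) chart limits.

    Uses the conventional SPC constants (E2 = 2.66 for the I chart,
    D4 = 3.267 for the MR chart); the empirical specification-based
    widening described in the abstract is not included here.
    """
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))                   # moving ranges of successive points
    x_bar, mr_bar = x.mean(), mr.mean()
    i_chart = (x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar)
    mr_chart = (0.0, mr_bar, 3.267 * mr_bar)
    return i_chart, mr_chart

# Example: hypothetical daily readings of one monitored parameter (e.g. RF driver voltage)
readings = [7.02, 7.05, 7.01, 7.04, 7.03, 7.06, 7.02, 7.20]   # last point drifting
(i_lcl, i_cl, i_ucl), _ = imr_limits(readings[:-1])
print("I-chart limits:", round(i_lcl, 3), round(i_ucl, 3))
print("New reading out of control?", not (i_lcl <= readings[-1] <= i_ucl))
```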
USDA-ARS?s Scientific Manuscript database
Field scale water infiltration and soil-water and solute transport models require spatially-averaged “effective” soil hydraulic parameters to represent the average flux and storage. The values of these effective parameters vary for different conditions, processes, and component soils in a field. For...
40 CFR 63.751 - Monitoring requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) National Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.751 Monitoring... specific monitoring procedures; (ii) Set the operating parameter value, or range of values, that... monitoring data. (1) The data may be recorded in reduced or nonreduced form (e.g., parts per million (ppm...
40 CFR 63.751 - Monitoring requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) National Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.751 Monitoring... specific monitoring procedures; (ii) Set the operating parameter value, or range of values, that... monitoring data. (1) The data may be recorded in reduced or nonreduced form (e.g., parts per million (ppm...
40 CFR 63.751 - Monitoring requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) National Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.751 Monitoring... specific monitoring procedures; (ii) Set the operating parameter value, or range of values, that... monitoring data. (1) The data may be recorded in reduced or nonreduced form (e.g., parts per million (ppm...
40 CFR 63.751 - Monitoring requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) National Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.751 Monitoring... specific monitoring procedures; (ii) Set the operating parameter value, or range of values, that... monitoring data. (1) The data may be recorded in reduced or nonreduced form (e.g., parts per million (ppm...
40 CFR 63.751 - Monitoring requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) National Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.751 Monitoring... specific monitoring procedures; (ii) Set the operating parameter value, or range of values, that... monitoring data. (1) The data may be recorded in reduced or nonreduced form (e.g., parts per million (ppm...
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
NASA Astrophysics Data System (ADS)
Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang
2015-03-01
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, the effects of which on organ dose per CTDIvol were analyzed. Our study showed that when errors in half value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer by 4 cm than the true value, dose errors of up to 160% were found. The results answer the important question: to what level of accuracy each input parameter needs to be determined in order to obtain accurate organ dose results.
Channel Characterization for Free-Space Optical Communications
2012-07-01
parameters. From the path-average parameters, a Cn2 profile model, called the HAP model, was constructed so that the entire channel from air to ground...(SR), both of which are required to estimate the Power in the Bucket (PIB) and Power in the Fiber (PIF) associated with the FOENEX data beam. UCF was...of the path-average values of Cn2, the resulting HAP Cn2 profile model led to values of ground level Cn2 that compared very well with actual
Empirical Bayes estimation of proportions with application to cowbird parasitism rates
Link, W.A.; Hahn, D.C.
1996-01-01
Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
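A minimal beta-binomial sketch of the empirical Bayes shrinkage described above, assuming a method-of-moments fit of the beta prior and invented parasitism counts; the study's actual hyperparameter estimation and subgroup analysis are not reproduced.

```python
import numpy as np

def eb_beta_binomial(successes, trials):
    """Empirical Bayes shrinkage of observed proportions toward the group mean.

    A beta prior Beta(alpha, beta) is estimated from the data by the method of
    moments; each posterior mean is (alpha + k_i) / (alpha + beta + n_i), so
    small-sample proportions are pulled strongly toward the overall mean.
    """
    k = np.asarray(successes, float)
    n = np.asarray(trials, float)
    p = k / n
    m, v = p.mean(), p.var(ddof=1)
    # Method-of-moments hyperparameters (crude; ignores unequal n_i)
    common = m * (1 - m) / v - 1.0
    alpha, beta = m * common, (1 - m) * common
    return (alpha + k) / (alpha + beta + n), (alpha, beta)

# Hypothetical parasitism counts: (parasitized nests, total nests) per species
k = [2, 15, 1, 30]
n = [4, 40, 20, 45]
post, (a, b) = eb_beta_binomial(k, n)
print("raw proportions      :", np.round(np.array(k) / np.array(n), 2))
print("shrunken EB estimates:", np.round(post, 2))
```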
Pulsed Electromagnetic Acceleration of Plasmas
NASA Technical Reports Server (NTRS)
Thio, Y. C. Francis; Cassibry, Jason T.; Markusic, Tom E.; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
A major shift in paradigm in driving pulsed plasma thrusters is necessary if the original goal of accelerating a plasma sheet efficiently to high velocities as a plasma "slug" is to be realized. Firstly, the plasma interior needs to be highly collisional so that it can be dammed by the plasma edge layer not (upstream) adjacent to the driving 'vacuum' magnetic field. Secondly, the plasma edge layer needs to be strongly magnetized so that its Hall parameter is of the order of unity in this region to ensure excellent coupling of the Lorentz force to the plasma. Thirdly, to prevent and/or suppress the occurrence of secondary arcs or restrike behind the plasma, the region behind the plasma needs to be collisionless and extremely magnetized with sufficiently large Hall parameter. This places a vacuum requirement on the bore conditions prior to the shot. These requirements are quantified in the paper and lead to the introduction of three new design parameters corresponding to these three plasma requirements. The first parameter, labeled in the paper as γ1, pertains to the permissible ratio of the diffusive excursion of the plasma during the course of the acceleration to the plasma longitudinal dimension. The second parameter is the required Hall parameter of the edge plasma region, and the third parameter the required Hall parameter of the region behind the plasma. Experimental research is required to quantify the values of these design parameters. Based upon fundamental theory of the transport processes in plasma, some theoretical guidance on the choice of these parameters is provided to help design the necessary experiments to acquire these data.
Impulse Current Waveform Compliance with IEC 60060-1
NASA Astrophysics Data System (ADS)
Sato, Shuji; Harada, Tatsuya; Yokoyama, Taizou; Sakaguchi, Sumiko; Ebana, Takao; Saito, Tatsunori
After numerous simulations, the authors were unable to design an impulse current calibrator whose output's time parameters (front time, T1, and time to half the peak, T2) are close to the ideals defined in IEC 60060-1. An investigation of the failed trial was carried out. Using the normalized damped oscillating waveform e^(-t)·sin(ωt), the relationship between the ratio T2/T1 and the undershoot value was studied over all possible values of ω. From this relationship it is derived that 1) one cannot generate an ideal waveform unless a certain margin for the two parameters is accepted, and 2) even with the allowable margin, a waveform can be generated only in the case where the value of T1 is smaller and T2 is larger than the standard values. In the paper, the possible time parameter combinations that fulfil the IEC 60060-1 requirements are illustrated for a calibrator design.
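The following sketch evaluates the time parameters of the normalized damped oscillation numerically. The front-time definition T1 = 1.25·(t90 − t10), the virtual-origin construction, and the undershoot measure are assumptions following common impulse-current conventions; consult IEC 60060-1 for the exact definitions used by the authors.

```python
import numpy as np

def time_parameters(omega, n=200000, t_max=30.0):
    """Front time T1, time to half value T2, and undershoot of e^(-t)*sin(omega*t).

    T1 is taken as 1.25 * (t90 - t10) on the rising front and T2 as the time
    from the virtual origin to 50 % of the peak on the tail (assumed conventions).
    """
    t = np.linspace(0.0, t_max, n)
    i = np.exp(-t) * np.sin(omega * t)
    ip = i.max()
    k = i.argmax()
    # 10 % and 90 % crossings on the (monotonically rising) front
    t10 = np.interp(0.1 * ip, i[:k + 1], t[:k + 1])
    t90 = np.interp(0.9 * ip, i[:k + 1], t[:k + 1])
    t1 = 1.25 * (t90 - t10)
    # virtual origin: straight line through the 10 %/90 % points extrapolated to zero
    t0 = t10 - 0.1 * ip * (t90 - t10) / (0.8 * ip)
    # 50 % crossing on the tail (first sample after the peak below half value)
    tail = np.where(i[k:] <= 0.5 * ip)[0][0] + k
    t2 = t[tail] - t0
    undershoot = -i.min() / ip            # relative undershoot
    return t1, t2, undershoot

for omega in (0.3, 0.5, 1.0):
    t1, t2, u = time_parameters(omega)
    print(f"omega={omega:4.1f}  T2/T1={t2 / t1:6.2f}  undershoot={100 * u:5.1f} %")
```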
Influences on cocaine tolerance assessed under a multiple conjunctive schedule of reinforcement.
Yoon, Jin Ho; Branch, Marc N
2009-11-01
Under multiple schedules of reinforcement, previous research has generally observed tolerance to the rate-decreasing effects of cocaine that has been dependent on schedule-parameter size in the context of fixed-ratio (FR) schedules, but not in the context of fixed-interval (FI) schedules of reinforcement. The current experiment examined the effects of cocaine on key-pecking responses of White Carneau pigeons maintained under a three-component multiple conjunctive FI (10 s, 30 s, & 120 s) FR (5 responses) schedule of food presentation. Dose-effect curves representing the effects of presession cocaine on responding were assessed in the context of (1) acute administration of cocaine, (2) chronic administration of cocaine, and (3) daily administration of saline. Chronic administration of cocaine generally resulted in tolerance to the response-rate decreasing effects of cocaine, and that tolerance was generally independent of relative FI value, as measured by changes in ED50 values. Daily administration of saline decreased ED50 values to those observed when cocaine was administered acutely. The results show that adding a FR requirement to FI schedules is not sufficient to produce schedule-parameter-specific tolerance. Tolerance to cocaine was generally independent of FI-parameter under the present conjunctive schedules, indicating that a ratio requirement, per se, is not sufficient for tolerance to be dependent on FI parameter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. Gross
2004-09-01
The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) Sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) Sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) Sampled values of ground motion time history and metal to metal and metal to rock friction coefficient for analysis of waste package and drip shield damage to vibratory motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the Total System Performance Assessment for the License Application (TSPA-LA). The results from this scientific analysis also address project requirements related to parameter uncertainty, as specified in the acceptance criteria in ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274]). This document was prepared under the direction of ''Technical Work Plan for: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 170528]) which directed the work identified in work package ARTM05. This document was prepared under procedure AP-SIII.9Q, ''Scientific Analyses''. There are no specific known limitations to this analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatayama, Ariyoshi; Ogasawara, Masatada; Yamauchi, Michinori
1994-08-01
Plasma size and other basic performance parameters for 1000-MW(electric) power production are calculated with the blanket energy multiplication factor, the M value, as a parameter. The calculational model is based on the International Thermonuclear Experimental Reactor (ITER) physics design guidelines and includes overall plant power flow. Plasma size decreases as the M value increases. However, the improvement in the plasma compactness and other basic performance parameters, such as the total plant power efficiency, becomes saturated above the M = 5 to 7 range. Thus, a value in the M = 5 to 7 range is a reasonable choice for 1000-MW(electric) hybrids. Typical plasma parameters for 1000-MW(electric) hybrids with a value of M = 7 are a major radius of R = 5.2 m, minor radius of a = 1.7 m, plasma current of Ip = 15 MA, and toroidal field on the axis of Bo = 5 T. The concept of a thermal fission blanket that uses light water as a coolant is selected as an attractive candidate for electricity-producing hybrids. An optimization study is carried out for this blanket concept. The result shows that a compact, simple structure with a uniform fuel composition for the fissile region is sufficient to obtain optimal conditions for suppressing the thermal power increase caused by fuel burnup. The maximum increase in the thermal power is +3.2%. The M value estimated from the neutronics calculations is approximately 7.0, which is confirmed to be compatible with the plasma requirement. These studies show that it is possible to use a tokamak fusion core with design requirements similar to those of ITER for a 1000-MW(electric) power reactor that uses existing thermal reactor technology for the blanket. 30 refs., 22 figs., 4 tabs.
Method for Predicting and Optimizing System Parameters for Electrospinning System
NASA Technical Reports Server (NTRS)
Wincheski, Russell A. (Inventor)
2011-01-01
An electrospinning system using a spinneret and a counter electrode is first operated for a fixed amount of time at known system and operational parameters to generate a fiber mat having a measured fiber mat width associated therewith. Next, acceleration of the fiberizable material at the spinneret is modeled to determine values of mass, drag, and surface tension associated with the fiberizable material at the spinneret output. The model is then applied in an inversion process to generate predicted values of an electric charge at the spinneret output and an electric field between the spinneret and electrode required to fabricate a selected fiber mat design. The electric charge and electric field are indicative of design values for system and operational parameters needed to fabricate the selected fiber mat design.
Opieliński, Krzysztof J; Gudra, Tadeusz
2002-05-01
The effective radiation of ultrasonic energy into the air by piezoelectric transducers requires multilayer matching systems with accurately selected acoustic impedances and thicknesses of the particular layers. This problem is of particular importance in the case of ultrasonic transducers working at frequencies above 1 MHz. Because the possibilities of choosing a material with the required acoustic impedance are limited (the calculated values cannot always be realised and applied in practice), it is necessary to correct the differences between the theoretical values and the acoustic impedances that can be achieved in practice. Such a correction can be done by manipulating other parameters of the matching layers (e.g. by changing their thickness). The efficiency of energy transmission from the piezoceramic transducer through layers of different thicknesses was analysed by computer, showing that non-ideal real impedance values can be compensated by changing the layer thickness. The conclusion of this analysis is that, from the technological point of view, producing a layer with a defined thickness is easier and faster than developing a new material with the required acoustic parameters.
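A lossless transfer-matrix sketch of the thickness-compensation idea: the transmitted power through a single matching layer is computed as its thickness is varied around a quarter wavelength. The impedance and sound-speed values are illustrative assumptions, not measured data.

```python
import numpy as np

def power_transmission(z_source, z_load, layers, freq):
    """Power transmission through lossless matching layers via transfer matrices.

    Each layer is (impedance [Rayl], sound speed [m/s], thickness [m]). For a
    lossless stack, the transmitted power fraction is 1 - |R|^2, where R is the
    pressure reflection coefficient at the input.
    """
    m = np.eye(2, dtype=complex)
    for z, c, d in layers:
        phi = 2.0 * np.pi * freq * d / c            # phase thickness k*d
        m = m @ np.array([[np.cos(phi), 1j * z * np.sin(phi)],
                          [1j * np.sin(phi) / z, np.cos(phi)]])
    a, b = m[0]
    c_, d_ = m[1]
    z_in = (a * z_load + b) / (c_ * z_load + d_)    # input impedance seen by the source
    r = (z_in - z_source) / (z_in + z_source)
    return 1.0 - abs(r) ** 2

# Illustrative values (assumed): PZT transducer into air at 1 MHz through one
# quarter-wave layer whose impedance is sqrt(Z_pzt * Z_air), with the thickness
# deviating from the ideal quarter wavelength.
f = 1.0e6
z_pzt, z_air = 30e6, 415.0
z_layer = np.sqrt(z_pzt * z_air)
c_layer = 2000.0
quarter_wave = c_layer / (4.0 * f)
for scale in (1.0, 0.9, 1.1):                       # thickness correction factor
    t = power_transmission(z_pzt, z_air, [(z_layer, c_layer, scale * quarter_wave)], f)
    print(f"thickness = {scale:.1f} * lambda/4 -> transmission = {t:.3f}")
```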
Application of Artificial Neural Network to Optical Fluid Analyzer
NASA Astrophysics Data System (ADS)
Kimura, Makoto; Nishida, Katsuhiko
1994-04-01
A three-layer artificial neural network has been applied to the presentation of optical fluid analyzer (OFA) raw data, and the accuracy of oil fraction determination has been significantly improved compared to previous approaches. To apply the artificial neural network approach to solving a problem, the first step is training to determine the appropriate weight set for calculating the target values. This involves using a series of data sets (each comprising a set of input values and an associated set of output values that the artificial neural network is required to determine) to tune artificial neural network weighting parameters so that the output of the neural network to the given set of input values is as close as possible to the required output. The physical model used to generate the series of learning data sets was the effective flow stream model, developed for OFA data presentation. The effectiveness of the training was verified by reprocessing the same input data as were used to determine the weighting parameters and then by comparing the results of the artificial neural network to the expected output values. The standard deviation of the expected and obtained values was approximately 10% (two sigma).
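A minimal three-layer (single hidden layer) network sketch in the spirit of the training and verification steps described above, using scikit-learn and a synthetic stand-in for the OFA data; the effective flow stream model is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for OFA raw data: 6 synthetic optical-channel inputs per sample and a
# target "oil fraction" generated by an assumed (not the actual) forward model.
X = rng.uniform(0.0, 1.0, size=(500, 6))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 - 0.2 * X[:, 2] * X[:, 3] + 0.05 * rng.normal(size=500)

# Three-layer network: input layer, one hidden layer of 10 units, output layer.
net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X[:400], y[:400])                      # "training" step that tunes the weights

# Verification as in the abstract: reprocess data and compare with expected outputs.
pred = net.predict(X[400:])
resid = pred - y[400:]
print("2-sigma spread of (predicted - expected):", 2 * resid.std())
```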
NASA Astrophysics Data System (ADS)
Xiao, Shou-Ne; Wang, Ming-Meng; Hu, Guang-Zhong; Yang, Guang-Wu
2017-09-01
It is difficult to accurately grasp the range of influence of top-level vehicle design requirements on the underlying design parameters and the paths along which that influence is transmitted. Applying a directed-weighted complex network to the product parameter model is an important method that can clarify the relationships between product parameters and establish the top-down design of a product. The relationships of the product parameters of each node are calculated via a simple path-searching algorithm, and the main design parameters are extracted by analysis and comparison. A uniform definition of the index formula for out-in degree can be provided based on the analysis of the out-in-degree width and depth and the control strength of train carriage body parameters. Vehicle gauge, axle load, crosswind and other parameters with higher values of the out-degree index are the most important boundary conditions; the most considerable performance indices are the parameters that have higher values of the out-in-degree index, including torsional stiffness, maximum testing speed, service life of the vehicle, and so on; the main design parameters contain train carriage body weight, train weight per extended metre, train height and other parameters with higher values of the in-degree index. The network not only provides theoretical guidance for exploring the relationships among design parameters, but also further enriches the application of the forward design method to high-speed trains.
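A toy directed-weighted parameter network illustrating the out-degree/in-degree indices and path searching described above; the edges, weights, and most parameter choices are assumptions for illustration, not the paper's model.

```python
import networkx as nx

# Toy directed-weighted parameter network; edge u -> v means "u constrains v",
# with an assumed influence weight (the real model and weights are not public).
g = nx.DiGraph()
g.add_weighted_edges_from([
    ("vehicle gauge", "carriage body width", 0.9),
    ("vehicle gauge", "carriage body height", 0.8),
    ("axle load", "carriage body weight", 0.7),
    ("crosswind", "torsional stiffness", 0.6),
    ("maximum testing speed", "torsional stiffness", 0.5),
    ("torsional stiffness", "carriage body weight", 0.4),
    ("carriage body weight", "train weight per extended metre", 0.9),
])

out_deg = dict(g.out_degree(weight="weight"))   # how strongly a parameter drives others
in_deg = dict(g.in_degree(weight="weight"))     # how strongly a parameter is driven

for node in g.nodes:
    print(f"{node:32s} out={out_deg[node]:.1f}  in={in_deg[node]:.1f}")

# Influence (transmission) paths from a top-level requirement to a low-level parameter
print(list(nx.all_simple_paths(g, "axle load", "train weight per extended metre")))
```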
40 CFR 63.704 - Compliance and monitoring requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... nonregenerative carbon adsorber is used to comply with § 63.703(c)(1), the site-specific operating parameter value... compliance with § 63.703(c), (e)(1)(i), or (f)(1)(i), as appropriate. (5) For each nonregenerative carbon... site-specific operating parameter the carbon replacement time interval, as determined by the maximum...
40 CFR 63.704 - Compliance and monitoring requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... nonregenerative carbon adsorber is used to comply with § 63.703(c)(1), the site-specific operating parameter value... compliance with § 63.703(c), (e)(1)(i), or (f)(1)(i), as appropriate. (5) For each nonregenerative carbon... site-specific operating parameter the carbon replacement time interval, as determined by the maximum...
40 CFR 63.704 - Compliance and monitoring requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... nonregenerative carbon adsorber is used to comply with § 63.703(c)(1), the site-specific operating parameter value... compliance with § 63.703(c), (e)(1)(i), or (f)(1)(i), as appropriate. (5) For each nonregenerative carbon... site-specific operating parameter the carbon replacement time interval, as determined by the maximum...
40 CFR 63.704 - Compliance and monitoring requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... nonregenerative carbon adsorber is used to comply with § 63.703(c)(1), the site-specific operating parameter value... compliance with § 63.703(c), (e)(1)(i), or (f)(1)(i), as appropriate. (5) For each nonregenerative carbon... site-specific operating parameter the carbon replacement time interval, as determined by the maximum...
40 CFR 63.704 - Compliance and monitoring requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... nonregenerative carbon adsorber is used to comply with § 63.703(c)(1), the site-specific operating parameter value... compliance with § 63.703(c), (e)(1)(i), or (f)(1)(i), as appropriate. (5) For each nonregenerative carbon... site-specific operating parameter the carbon replacement time interval, as determined by the maximum...
Support vector machines-based modelling of seismic liquefaction potential
NASA Astrophysics Data System (ADS)
Pal, Mahesh
2006-08-01
This paper investigates the potential of a support vector machines (SVM)-based classification approach to assess liquefaction potential from actual standard penetration test (SPT) and cone penetration test (CPT) field data. SVMs are based on statistical learning theory and have been found to work well in comparison to neural networks in several other applications. Both CPT and SPT field data sets are used with SVMs for predicting the occurrence and non-occurrence of liquefaction based on different input parameter combinations. With the SPT and CPT test data sets, highest accuracies of 96% and 97%, respectively, were achieved with SVMs. This suggests that SVMs can effectively be used to model the complex relationship between different soil parameters and the liquefaction potential. Several other combinations of input variables were used to assess the influence of different input parameters on liquefaction potential. The proposed approach suggests that neither the normalized cone resistance value with CPT data nor the calculation of the standardized SPT value is required. Further, SVMs require few user-defined parameters and provide better performance in comparison to the neural network approach.
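A small scikit-learn sketch of the SVM classification approach, using a synthetic stand-in for an SPT case-history table (blow count and cyclic stress ratio); the real CPT/SPT datasets and the paper's chosen kernel settings are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in for an SPT-based case history table:
# columns = [SPT blow count N, cyclic stress ratio CSR]; label 1 = liquefied.
n = 300
spt_n = rng.uniform(2, 40, n)
csr = rng.uniform(0.05, 0.45, n)
liquefied = (csr > 0.01 * spt_n + 0.05 + rng.normal(0, 0.03, n)).astype(int)
X = np.column_stack([spt_n, csr])

X_tr, X_te, y_tr, y_te = train_test_split(X, liquefied, test_size=0.3, random_state=0)

# RBF-kernel SVM with only two user-chosen parameters (C and gamma)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("hold-out accuracy:", round(model.score(X_te, y_te), 3))
```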
40 CFR Table 1 to Subpart Hhhh of... - Minimum Requirements for Monitoring and Recordkeeping
Code of Federal Regulations, 2010 CFR
2010-07-01
...-hour block averages. 2. Other process or control device parameters specified in your OMM b plan. As... value for each product manufactured during the operating day. 6. UF-to-latex ratio in the binder c For... Required if a thermal oxidizer is used to control formaldehyde emissions. b Required if process...
Theoretical performance analysis of doped optical fibers based on pseudo parameters
NASA Astrophysics Data System (ADS)
Karimi, Maryam; Seraji, Faramarz E.
2010-09-01
Characterization of doped optical fibers (DOFs) is an essential primary stage for the design of DOF-based devices. This paper presents the design of novel measurement techniques to determine DOF parameters using mono-beam propagation in a low-loss medium by generating pseudo parameters for the DOFs. The designed techniques are able to characterize simultaneously the absorption and emission cross-sections (ACS and ECS) and the dopant concentration of DOFs. In both of the proposed techniques, we assume pseudo parameters for the DOFs instead of their actual values and show that the choice of these pseudo parameter values for the design of DOF-based devices, such as erbium-doped fiber amplifiers (EDFAs), is appropriate and that the resulting error is quite negligible when compared with the actual parameter values. Utilization of pseudo ACS and ECS values in the design procedure of EDFAs does not require the measurement of the background loss coefficient (BLC) and makes the rate equation of the DOFs simple. It is shown that by using the pseudo parameter values obtained by the proposed techniques, the error in the gain of a designed EDFA with a BLC of about 1 dB/km is about 0.08 dB. It is further indicated that the same scenario holds good for BLC lower than 5 dB/m and higher than 12 dB/m. The proposed characterization techniques have simple procedures and are low cost, which can be advantageous in the manufacturing of DOFs.
Constitutive parameter measurements of lossy materials
NASA Technical Reports Server (NTRS)
Dominek, A.; Park, A.
1989-01-01
The electrical constitutive parameters of lossy materials are considered. A discussion of the NRL arch for lossy coatings is presented involving analytical analyses of the reflected field using the geometrical theory of diffraction (GTD) and physical optics (PO). The actual values for these parameters can be obtained through a traditional transmission technique which is examined from an error analysis standpoint. Alternate sample geometries are suggested for this technique to reduce sample tolerance requirements for accurate parameter determination. The performance for one alternate geometry is given.
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60 bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
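NLINEAR itself is a Fortran 77 program; the sketch below shows the same statistically weighted nonlinear least-squares idea with scipy, reporting best-fit parameters, their standard errors, and the reduced chi-square as a goodness-of-fit indicator. The fitting function and data are assumed examples, not NLINEAR's.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    """User-specified fitting function (an assumed example, not NLINEAR's)."""
    return a * np.exp(-b * x) + c

# Synthetic data with per-point uncertainties (statistical weights)
rng = np.random.default_rng(2)
x = np.linspace(0, 5, 40)
sigma = 0.05 * np.ones_like(x)
y = model(x, 2.0, 1.3, 0.5) + rng.normal(0, sigma)

# Weighted fit: minimizes chi^2 = sum(((y - model)/sigma)^2)
p0 = [1.0, 1.0, 0.0]                      # meaningful initial estimates are required
popt, pcov = curve_fit(model, x, y, p0=p0, sigma=sigma, absolute_sigma=True)

chi2 = np.sum(((y - model(x, *popt)) / sigma) ** 2)
dof = len(x) - len(popt)
print("best-fit parameters :", popt)
print("parameter std errors:", np.sqrt(np.diag(pcov)))
print("reduced chi-square  :", chi2 / dof)       # goodness-of-fit indicator
```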
A trade-off solution between model resolution and covariance in surface-wave inversion
Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.
2010-01-01
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
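A generic numerical sketch of choosing the regularization level from a singular value plot and forming the corresponding truncated solution, resolution matrix, and unit covariance matrix; the matrix below stands in for the linearized surface-wave problem and is not the authors' Jacobian.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generic ill-conditioned linear system G m = d standing in for the linearized
# surface-wave problem (the real Jacobian from the last iteration is not used).
G = rng.normal(size=(30, 8))
G[:, -2:] *= 1e-6                      # make the last singular values nearly zero
m_true = rng.normal(size=8)
d = G @ m_true + 0.01 * rng.normal(size=30)

U, s, Vt = np.linalg.svd(G, full_matrices=False)
print("singular values (large to small):", np.round(s, 6))

# Choose the regularization level where the singular values approach zero
keep = s > 1e-3 * s[0]
p = keep.sum()

# Truncated-SVD solution and its model resolution / unit covariance matrices
s_inv = np.where(keep, 1.0 / s, 0.0)
G_pinv = (Vt.T * s_inv) @ U.T
m_est = G_pinv @ d
R = G_pinv @ G                          # model resolution matrix
unit_cov = G_pinv @ G_pinv.T            # unit covariance matrix
print("kept", p, "of", len(s), "singular values")
print("diag(resolution):", np.round(np.diag(R), 2))
print("error bars (unit cov):", np.round(np.sqrt(np.diag(unit_cov)), 3))
```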
Inverse models: A necessary next step in ground-water modeling
Poeter, E.P.; Hill, M.C.
1997-01-01
Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
Perceiving while producing: Modeling the dynamics of phonological planning
Roon, Kevin D.; Gafos, Adamantios I.
2016-01-01
We offer a dynamical model of phonological planning that provides a formal instantiation of how the speech production and perception systems interact during online processing. The model is developed on the basis of evidence from an experimental task that requires concurrent use of both systems, the so-called response-distractor task in which speakers hear distractor syllables while they are preparing to produce required responses. The model formalizes how ongoing response planning is affected by perception and accounts for a range of results reported across previous studies. It does so by explicitly addressing the setting of parameter values in representations. The key unit of the model is that of the dynamic field, a distribution of activation over the range of values associated with each representational parameter. The setting of parameter values takes place by the attainment of a stable distribution of activation over the entire field, stable in the sense that it persists even after the response cue in the above experiments has been removed. This and other properties of representations that have been taken as axiomatic in previous work are derived by the dynamics of the proposed model. PMID:27440947
Giordano, Anna; Barresi, Antonello A; Fissore, Davide
2011-01-01
The aim of this article is to show a procedure to build the design space for the primary drying of a pharmaceutical lyophilization process. Mathematical simulation of the process is used to identify the operating conditions that allow preserving product quality and meeting operating constraints posed by the equipment. In fact, product temperature has to be maintained below a limit value throughout the operation, and the sublimation flux has to be lower than the maximum value allowed by the capacity of the condenser, besides avoiding choking flow in the duct connecting the drying chamber to the condenser. Few experimental runs are required to get the values of the parameters of the model: the dynamic parameters estimation algorithm, an advanced tool based on the pressure rise test, is used for this purpose. A simple procedure is proposed to take into account parameter uncertainty and, thus, it is possible to find the recipes that allow fulfilling the process constraints within the required uncertainty range. The same approach can be effective to take into account the heterogeneity of the batch when designing the freeze-drying recipe. Copyright © 2010 Wiley-Liss, Inc. and the American Pharmacists Association
14 CFR 125.228 - Flight data recorders: filtered data.
Code of Federal Regulations, 2014 CFR
2014-01-01
... when an original sensor signal has been changed in any way, other than changes necessary to: (1... sensor. (b) An original sensor signal for any flight recorder parameter required to be recorded under... original sensor signal value can be reconstructed from the recorded data. This demonstration requires that...
14 CFR 135.156 - Flight data recorders: filtered data.
Code of Federal Regulations, 2013 CFR
2013-01-01
... when an original sensor signal has been changed in any way, other than changes necessary to: (1... sensor. (b) An original sensor signal for any flight recorder parameter required to be recorded under... original sensor signal value can be reconstructed from the recorded data. This demonstration requires that...
14 CFR 125.228 - Flight data recorders: filtered data.
Code of Federal Regulations, 2012 CFR
2012-01-01
... when an original sensor signal has been changed in any way, other than changes necessary to: (1... sensor. (b) An original sensor signal for any flight recorder parameter required to be recorded under... original sensor signal value can be reconstructed from the recorded data. This demonstration requires that...
14 CFR 135.156 - Flight data recorders: filtered data.
Code of Federal Regulations, 2012 CFR
2012-01-01
... when an original sensor signal has been changed in any way, other than changes necessary to: (1... sensor. (b) An original sensor signal for any flight recorder parameter required to be recorded under... original sensor signal value can be reconstructed from the recorded data. This demonstration requires that...
14 CFR 135.156 - Flight data recorders: filtered data.
Code of Federal Regulations, 2014 CFR
2014-01-01
... when an original sensor signal has been changed in any way, other than changes necessary to: (1... sensor. (b) An original sensor signal for any flight recorder parameter required to be recorded under... original sensor signal value can be reconstructed from the recorded data. This demonstration requires that...
14 CFR 125.228 - Flight data recorders: filtered data.
Code of Federal Regulations, 2013 CFR
2013-01-01
... when an original sensor signal has been changed in any way, other than changes necessary to: (1... sensor. (b) An original sensor signal for any flight recorder parameter required to be recorded under... original sensor signal value can be reconstructed from the recorded data. This demonstration requires that...
40 CFR 63.467 - Recordkeeping requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... maintain records in written or electronic form specified in paragraphs (a)(1) through (7) of this section... idling emission rate and values of the monitoring parameters measured during the test. (5) Records of the... the weekly monitoring required by § 63.466(a)(3) for visual inspection and the length of continuous...
Equal Area Logistic Estimation for Item Response Theory
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li
2009-08-01
Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for logistic function parameters that best fit an empirical data set. However, success in obtaining such a set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the possibility of estimating parameters. In this study, an equal-area-based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting the data by the logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
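For reference, the two-parameter logistic IRF has the form P(θ) = 1/(1 + exp(−a(θ − b))). The sketch below fits that curve to synthetic response proportions with ordinary least squares as a stand-in; the equal-area estimation algorithm itself is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def irf_2pl(theta, a, b):
    """Two-parameter logistic IRF: P(theta) = 1 / (1 + exp(-a*(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Synthetic empirical proportions of correct responses at a grid of ability levels
theta = np.linspace(-3, 3, 13)
rng = np.random.default_rng(4)
p_obs = irf_2pl(theta, a=1.4, b=0.3) + rng.normal(0, 0.02, theta.size)

# Ordinary least-squares fit of the 2PL curve (a stand-in for the equal-area estimator)
(a_hat, b_hat), _ = curve_fit(irf_2pl, theta, p_obs, p0=[1.0, 0.0])
print(f"discrimination a = {a_hat:.2f}, difficulty b = {b_hat:.2f}")
```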
On the Explicit Determination of the Chapman-Jouguet Parameters for an Explosive Compound
2014-11-19
relations were tested for the very well characterised explosives PETN, HMX, RDX, TATB, TNT and the calculated values obtained for the C-J parameters...Cyclotrimethylenetrinitramine (RDX), Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), Pentaerythritol tetranitrate (PETN) and Triamino...the Chapman-Jouguet parameters of PETN, HMX, RDX and TATB. Table 1 below provides a summary of the relations in order of requirement to obtain the C
Comparison of z-known GRBs with the Main Groups of Bright BATSE Events
NASA Technical Reports Server (NTRS)
Mitrofanov, Igor G.; Sanin, Anton B.; Anfimov, Dmitrij S.; Litvak, Maxim L.; Briggs, Michael S.; Paciesas, William S.; Pendleton, Geoffrey N.; Preece, Robert D.; Meegan, Charles A.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
The small reference sample of six BATSE gamma-ray bursts with known redshifts from optical afterglows is compared with a comparison group of the 218 brightest BATSE bursts. These two groups are shown to be consistent both with respect to the distributions of the spectral peak parameter in the observer's frame and also with respect to the distributions of the frame-independent cosmological invariant parameter (CIP). Using the known values of the redshifts z for the reference sample, the rest-frame distribution of spectral parameters is built. The de-redshifted distribution of the spectral parameters of the reference sample is compared with distribution of these parameters for the comparison group after de-redshifting by the factor 1/(1+z), with z a free parameter. Requiring consistency between these two distributions produces a collective estimation of the best fitting redshifts z for the comparison group, z=1.8--3.6. These values can be considered as the average cosmological redshift of the sources of the brightest BATSE bursts. The most probable value of the peak energy of the spectrum in the rest frame is 920 keV, close to the rest mass of an electron-positron pair.
Multiple robustness in factorized likelihood models.
Molina, J; Rotnitzky, A; Sued, M; Robins, J M
2017-09-01
We consider inference under a nonparametric or semiparametric model with likelihood that factorizes as the product of two or more variation-independent factors. We are interested in a finite-dimensional parameter that depends on only one of the likelihood factors and whose estimation requires the auxiliary estimation of one or several nuisance functions. We investigate general structures conducive to the construction of so-called multiply robust estimating functions, whose computation requires postulating several dimension-reducing models but which have mean zero at the true parameter value provided one of these models is correct.
Numerical weather prediction model tuning via ensemble prediction system
NASA Astrophysics Data System (ADS)
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tune predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterizations schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infra-structure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding-back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameters values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is published.
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10^N when N>1.
Protein dielectric constants determined from NMR chemical shift perturbations.
Kukic, Predrag; Farrell, Damien; McIntosh, Lawrence P; García-Moreno E, Bertrand; Jensen, Kristine Steen; Toleikis, Zigmantas; Teilum, Kaare; Nielsen, Jens Erik
2013-11-13
Understanding the connection between protein structure and function requires a quantitative understanding of electrostatic effects. Structure-based electrostatic calculations are essential for this purpose, but their use has been limited by a long-standing discussion on which value to use for the dielectric constants (ε(eff) and ε(p)) required in Coulombic and Poisson-Boltzmann models. The currently used values for ε(eff) and ε(p) are essentially empirical parameters calibrated against thermodynamic properties that are indirect measurements of protein electric fields. We determine optimal values for ε(eff) and ε(p) by measuring protein electric fields in solution using direct detection of NMR chemical shift perturbations (CSPs). We measured CSPs in 14 proteins to get a broad and general characterization of electric fields. Coulomb's law reproduces the measured CSPs optimally with a protein dielectric constant (ε(eff)) from 3 to 13, with an optimal value across all proteins of 6.5. However, when the water-protein interface is treated with finite difference Poisson-Boltzmann calculations, the optimal protein dielectric constant (ε(p)) ranged from 2 to 5 with an optimum of 3. It is striking how similar this value is to the dielectric constant of 2-4 measured for protein powders and how different it is from the ε(p) of 6-20 used in models based on the Poisson-Boltzmann equation when calculating thermodynamic parameters. Because the value of ε(p) = 3 is obtained by analysis of NMR chemical shift perturbations instead of thermodynamic parameters such as pK(a) values, it is likely to describe only the electric field and thus represent a more general, intrinsic, and transferable ε(p) common to most folded proteins.
Obtaining high g-values with low degree expansion of the phasefunction
NASA Astrophysics Data System (ADS)
Rinzema, Kees; ten Bosch, Jaap J.; Ferwerda, Hedzer A.; Hoenders, Bernhard J.
1994-02-01
Analytic theory of anisotropic random flight requires the expansion of phase-functions in spherical harmonics. The number of terms should be limited while a g value should be obtained that is as high as possible. We describe how such a phase function can be constructed for a given number N of spherical components of the phasefunction, while obtaining a maximum value of the asymmetry parameter g.
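A small numerical sketch of evaluating a truncated expansion p(μ) = Σ_l (2l+1)·χ_l·P_l(μ) (a common normalization in which g = χ_1) and checking its asymmetry parameter and positivity; the Henyey-Greenstein moments χ_l = g^l are used only as a convenient example, not the construction proposed in the paper.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def trapz(y, x):
    """Simple trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def truncated_phase_function(mu, chi):
    """p(mu) = sum_l (2l+1) * chi_l * P_l(mu), with chi_0 = 1 for normalization."""
    coeffs = [(2 * l + 1) * c for l, c in enumerate(chi)]
    return legval(mu, coeffs)

mu = np.linspace(-1.0, 1.0, 4001)

# Example moments: Henyey-Greenstein values chi_l = g^l, truncated at N terms.
# (An illustrative choice only, not the optimized construction of the paper.)
for g_target, n_terms in [(0.6, 4), (0.9, 4), (0.9, 10)]:
    chi = [g_target ** l for l in range(n_terms + 1)]
    p = truncated_phase_function(mu, chi)
    norm = 0.5 * trapz(p, mu)                    # should be 1
    g = 0.5 * trapz(mu * p, mu)                  # asymmetry parameter
    print(f"g_target={g_target}, N={n_terms}: g={g:.3f}, "
          f"norm={norm:.3f}, min p={p.min():.3f} (negative => unphysical)")
```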
ZERODUR: deterministic approach for strength design
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2012-12-01
There is an increasing demand for zero expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems. The data sets were too small to obtain distribution parameters with sufficient accuracy and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher load applications seemed not to be feasible. New data have been collected with numbers per set large enough to enable tests on the applicability of the three-parameter Weibull distribution. This distribution was revealed to provide a much better fit to the data. Moreover, it delivers a lower threshold value, which means a minimum value for breakage stress, allowing statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations taken from the theory of fracture mechanics, which have been proven to be reliable in proof-test qualifications of delicate structures made from brittle materials, enable including fatigue due to stress corrosion in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution approach and are no longer subject to statistical uncertainty.
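A sketch of the three-parameter Weibull failure probability F(σ) = 1 − exp(−((σ − σ_u)/σ_0)^m) for σ > σ_u, with invented parameter values; it shows how the threshold σ_u makes the predicted failure probability exactly zero below a minimum stress, which is what permits a deterministic design strength.

```python
import numpy as np

def weibull3_failure_probability(stress, m, sigma0, sigma_u):
    """Three-parameter Weibull CDF: F = 1 - exp(-((s - s_u)/s_0)^m) for s > s_u, else 0."""
    s = np.asarray(stress, dtype=float)
    excess = np.clip(s - sigma_u, 0.0, None)
    return np.where(s > sigma_u, 1.0 - np.exp(-(excess / sigma0) ** m), 0.0)

# Illustrative parameters for one surface condition (assumed values, in MPa):
m, sigma0, sigma_u = 5.0, 60.0, 50.0

stresses = np.array([45.0, 50.0, 55.0, 80.0, 120.0])
probs = weibull3_failure_probability(stresses, m, sigma0, sigma_u)
for s, p in zip(stresses, probs):
    print(f"stress {s:5.1f} MPa -> failure probability {p:.4f}")

# Below the threshold sigma_u the predicted failure probability is exactly zero,
# which is what allows a deterministic (non-statistical) design strength.
```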
Influence of different dose calculation algorithms on the estimate of NTCP for lung complications.
Hedin, Emma; Bäck, Anna
2013-09-06
Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose-volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient-specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm-specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction-based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman-Kutcher-Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm-specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types.
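A sketch of the standard LKB formulation referred to above: NTCP is the normal CDF of t = (gEUD − TD50)/(m·TD50), with gEUD computed from a differential DVH. The DVH and the parameter values n, m, TD50 are illustrative assumptions, not the recalibrated algorithm-specific values of the paper.

```python
import numpy as np
from math import erf, sqrt

def lkb_ntcp(doses, volumes, n, m, td50):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.

    gEUD = (sum_i v_i * D_i**(1/n))**n  with v_i the fractional volumes,
    t    = (gEUD - TD50) / (m * TD50),
    NTCP = standard normal CDF of t.
    """
    d = np.asarray(doses, float)
    v = np.asarray(volumes, float)
    v = v / v.sum()                                # normalize fractional volumes
    geud = (np.sum(v * d ** (1.0 / n))) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0))), geud

# Toy lung DVH (dose bins in Gy, fractional volume per bin) and illustrative
# pneumonitis parameters n=1 (mean-dose organ), m=0.4, TD50=30 Gy (assumed).
doses = [2, 6, 10, 14, 18, 22, 26]
volumes = [0.35, 0.20, 0.15, 0.12, 0.08, 0.06, 0.04]
ntcp, geud = lkb_ntcp(doses, volumes, n=1.0, m=0.4, td50=30.0)
print(f"gEUD = {geud:.1f} Gy, NTCP = {100 * ntcp:.1f} %")
```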
NASA Technical Reports Server (NTRS)
Schulte, Peter Z.; Moore, James W.
2011-01-01
The Crew Exploration Vehicle Parachute Assembly System (CPAS) project conducts computer simulations to verify that flight performance requirements on parachute loads and terminal rate of descent are met. Design of Experiments (DoE) provides a systematic method for variation of simulation input parameters. When implemented and interpreted correctly, a DoE study of parachute simulation tools indicates values and combinations of parameters that may cause requirement limits to be violated. This paper describes one implementation of DoE that is currently being developed by CPAS, explains how DoE results can be interpreted, and presents the results of several preliminary studies. The potential uses of DoE to validate parachute simulation models and verify requirements are also explored.
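A minimal full-factorial sketch of systematically varying simulation inputs, with placeholder factor names, levels, and a fake simulation; CPAS's actual DoE implementation, factors, and requirement limits are not reproduced.

```python
from itertools import product

# Hypothetical simulation input parameters and their low/nominal/high levels
# (names and values are placeholders, not CPAS data).
factors = {
    "drag_coefficient": (0.75, 0.85, 0.95),
    "suspended_mass_kg": (8000, 9000, 10000),
    "deploy_altitude_m": (6000, 7500, 9000),
}

def run_simulation(case):
    """Placeholder for the parachute simulation; returns a fake peak load (N)."""
    return 1.0e5 * case["drag_coefficient"] * case["suspended_mass_kg"] / 9000.0

# Full factorial design: every combination of factor levels
names = list(factors)
design = [dict(zip(names, levels)) for levels in product(*factors.values())]
print(f"{len(design)} runs in the full factorial design")

# Flag combinations that would violate a (made-up) peak load requirement
LIMIT = 1.02e5
violations = [case for case in design if run_simulation(case) > LIMIT]
print(f"{len(violations)} parameter combinations exceed the assumed load limit")
for case in violations[:3]:
    print(case)
```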
Optical fiber designs for beam shaping
NASA Astrophysics Data System (ADS)
Farley, Kevin; Conroy, Michael; Wang, Chih-Hao; Abramczyk, Jaroslaw; Campbell, Stuart; Oulundsen, George; Tankala, Kanishka
2014-03-01
A large number of power delivery applications for optical fibers require beams with very specific output intensity profiles; in particular, applications that require a focused high-intensity beam typically image the near-field (NF) intensity distribution at the exit surface of an optical fiber. In this work we discuss optical fiber designs that shape the output beam profile to correspond more closely to what is required in many real-world industrial applications. Specifically, we present results demonstrating the ability to transform Gaussian beams into the shapes required for industrial applications and how that relates to system parameters such as beam parameter product (BPP) values. We report on how different waveguide structures perform in the NF and show results on how to achieve flat-top beams with circular outputs.
A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.
Faya, Paul; Stamey, James D; Seaman, John W
2017-01-01
For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known D_T, z, and F_0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
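For context, the classical point-estimate calculations that the Bayesian treatment above builds on can be written in a few lines: an F_0 value from a measured temperature profile (assuming the usual 121.1 °C reference temperature and a log-linear z model) and a D value from a survivor curve. The data below are made up for illustration.

```python
import numpy as np

def f0(temps_c, dt_min, z=10.0, t_ref=121.1):
    # Classical F0: equivalent minutes at 121.1 C, assuming a log-linear z model.
    temps_c = np.asarray(temps_c, dtype=float)
    return float(np.sum(dt_min * 10.0 ** ((temps_c - t_ref) / z)))

def d_value(times_min, survivors):
    # Point estimate of D from a survivor curve: D = -1 / slope of log10(N) vs t.
    slope, _ = np.polyfit(np.asarray(times_min, float),
                          np.log10(np.asarray(survivors, float)), 1)
    return -1.0 / slope

# Illustrative temperature profile sampled every 0.5 min, and a survivor curve.
profile = [110, 115, 118, 121, 121.5, 121.5, 120, 115]
print("F0 =", round(f0(profile, dt_min=0.5), 2), "min")
print("D  =", round(d_value([0, 2, 4, 6], [1e6, 1e5, 1e4, 1e3]), 2), "min")
```

The Bayesian method of the paper replaces these point estimates with posterior distributions over the same quantities.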
Utility of a routine urinalysis in children who require clean intermittent catheterization.
Forster, C S; Haslam, D B; Jackson, E; Goldstein, S L
2017-10-01
Children who require clean intermittent catheterization (CIC) frequently have positive urine cultures. However, diagnosing a urinary tract infection (UTI) can be difficult, as there are no standardized criteria. Routine urinalysis (UA) has good predictive accuracy for UTI in the general pediatric population, but data are limited on the utility of routine UA in the population of children who require CIC. To determine the utility of UA parameters (e.g. leukocyte esterase, nitrites, and pyuria) to predict UTI in children who require CIC, and identify a composite UA that has maximal predictive accuracy for UTI. A cross-sectional study of 133 children who required CIC, and had a UA and urine culture sent as part of standard of care. Patients in the no-UTI group all had UA and urine cultures sent as part of routine urodynamics, and were asymptomatic. Patients included in the UTI group had growth of ≥50,000 colony-forming units/ml of a known uropathogen on urine culture, in addition to two or more of the following symptoms: fever, abdominal pain, back pain, foul-smelling urine, new or worse incontinence, and pain with catheterization. Categorical data were compared using Chi-squared test, and continuous data were compared with Student's t-test. Sensitivity, specificity, and positive and negative predictive values were calculated for individual UA parameters, as well as the composite UA. Logistic regression was performed on potential composite UA models to identify the model that best fit the data. There was a higher proportion of patients in the no-UTI group with negative leukocyte esterase compared with the UTI group. There was a higher proportion of patients with UTI who had large leukocyte esterase and positive nitrites compared with the no-UTI group (Summary Figure). There was no between-group difference in urinary white blood cells. Positive nitrites were the most specific (84.4%) for UTI. None of the parameters had a high positive predictive value, while all had high negative predictive values. The composite model with the best Akaike information criterion was >10 urinary white blood cells and either moderate or large leukocyte esterase, which had a positive predictive value of 33.3 and a negative predictive value of 90.4. Routine UA had limited sensitivity, but moderate specificity, in predicting UTI in children who required CIC. The composite UA and moderate or large leukocyte esterase both had good negative predictive values for the outcome of UTI. Copyright © 2017 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Parker, C. D.; Tommerdahl, J. B.
1972-01-01
The instrumentation requirements for a regenerative life support system were studied to provide the earliest possible indication of a malfunction that would permit degradation of the environment. Four categories of parameters were investigated: environmental parameters that directly and immediately influence the health and safety of the cabin crew; subsystems' inputs to the cabin that directly maintain the cabin environmental parameters; indications for maintenance or repair; and parameters useful as diagnostic indicators. A data averager concept is introduced which provides a moving average of parameter values that is not influenced by spurious changes, and is convenient for detecting parameter rates of change. A system is included to provide alarms at preselected parameter levels.
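A minimal sketch of the "data averager" idea is given below: a windowed estimate of a parameter value (here a median rather than a plain mean, so that a single spurious sample does not shift it), a crude rate-of-change estimate, and an alarm at a preselected level. Window length, readings, and the alarm threshold are illustrative assumptions, not values from the study.

```python
from collections import deque
from statistics import median

class DataAverager:
    """Windowed parameter estimate that a single spurious sample barely moves,
    plus a simple rate-of-change estimate and a preset alarm level."""

    def __init__(self, window=5, alarm_level=25.0):
        self.samples = deque(maxlen=window)
        self.alarm_level = alarm_level

    def update(self, value):
        self.samples.append(value)
        return self.value

    @property
    def value(self):
        # Median of the window: one outlier reading does not shift it.
        return median(self.samples)

    def rate_of_change(self, dt):
        if len(self.samples) < 2:
            return 0.0
        return (self.samples[-1] - self.samples[0]) / (dt * (len(self.samples) - 1))

    def alarm(self):
        return self.value > self.alarm_level

avg = DataAverager()
for reading in [20.9, 20.9, 25.0, 21.0, 21.1]:   # 25.0 is a spurious spike
    smoothed = avg.update(reading)
print(smoothed, avg.rate_of_change(dt=1.0), avg.alarm())
```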
Tomblin Murphy, Gail; Birch, Stephen; MacKenzie, Adrian; Rigby, Janet
2016-12-12
As part of efforts to inform the development of a global human resources for health (HRH) strategy, a comprehensive methodology for estimating HRH supply and requirements was described in a companion paper. The purpose of this paper is to demonstrate the application of that methodology, using data publicly available online, to simulate the supply of and requirements for midwives, nurses, and physicians in the 32 high-income member countries of the Organisation for Economic Co-operation and Development (OECD) up to 2030. A model combining a stock-and-flow approach to simulate the future supply of each profession in each country (adjusted according to levels of HRH participation and activity) and a needs-based approach to simulate future HRH requirements was used. Most of the data to populate the model were obtained from the OECD's online indicator database. Other data were obtained from targeted internet searches and documents gathered as part of the companion paper. Relevant recent measures for each model parameter were found for at least one of the included countries. In total, 35% of the desired current data elements were found; assumed values were used for the other current data elements. Multiple scenarios were used to demonstrate the sensitivity of the simulations to different assumed future values of model parameters. Depending on the assumed future values of each model parameter, the simulated HRH gaps across the included countries could range from shortfalls of 74 000 midwives, 3.2 million nurses, and 1.2 million physicians to surpluses of 67 000 midwives, 2.9 million nurses, and 1.0 million physicians by 2030. Despite important gaps in the data publicly available online and the short time available to implement it, this paper demonstrates the basic feasibility of a more comprehensive, population needs-based approach to estimating HRH supply and requirements than most of those currently being used. HRH planners in individual countries, working with their respective stakeholder groups, would have more direct access to data on the relevant planning parameters and would thus be in an even better position to implement such an approach.
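The supply/requirements logic described above can be illustrated with a toy stock-and-flow projection; every number below (stock, entrants, exit rate, participation, activity, need per capita, services per FTE, population growth) is invented for illustration and is not an OECD estimate.

```python
def project_supply(stock, entrants, exit_rate, participation, activity, years):
    # Stock-and-flow supply: each year add entrants, remove exits, then scale
    # the headcount by participation and activity (full-time-equivalent) rates.
    supply = []
    for _ in range(years):
        stock = stock + entrants - stock * exit_rate
        supply.append(stock * participation * activity)
    return supply

def project_requirement(population, need_per_capita, services_per_fte, years, growth=0.01):
    # Needs-based requirement: services needed by the population divided by
    # the services one full-time provider can deliver per year.
    req = []
    for _ in range(years):
        population *= (1 + growth)
        req.append(population * need_per_capita / services_per_fte)
    return req

supply = project_supply(stock=300_000, entrants=9_000, exit_rate=0.03,
                        participation=0.92, activity=0.85, years=15)
requirement = project_requirement(population=60e6, need_per_capita=4.2,
                                  services_per_fte=900, years=15)
print(round(supply[-1] - requirement[-1]))   # negative value = projected shortfall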
McGill, L A; Ferreira, P F; Scott, A D; Nielles-Vallespin, S; Giannakidis, A; Kilner, P J; Gatehouse, P D; de Silva, R; Firmin, D N; Pennell, D J
2016-01-06
In vivo cardiac diffusion tensor imaging (cDTI) is uniquely capable of interrogating laminar myocardial dynamics non-invasively. A comprehensive dataset of quantitative parameters and comparison with subject anthropometrics is required. cDTI was performed at 3T with a diffusion weighted STEAM sequence. Data were acquired from the mid left ventricle in 43 subjects during the systolic and diastolic pauses. Global and regional values were determined for fractional anisotropy (FA), mean diffusivity (MD), helix angle gradient (HAg, degrees/%depth) and the secondary eigenvector angulation (E2A). Regression analysis was performed between global values and subject anthropometrics. All cDTI parameters displayed regional heterogeneity. The RR interval had a significant but clinically small effect on systolic values for FA, HAg and E2A. Male sex and increasing left ventricular end diastolic volume were associated with increased systolic HAg. Diastolic HAg and systolic E2A were both directly related to left ventricular mass and body surface area. There was an inverse relationship between E2A mobility and both age and ejection fraction. Future interpretations of quantitative cDTI data should take into account anthropometric variations observed with patient age, body surface area and left ventricular measurements. Further work determining the impact of technical factors such as strain and SNR is required.
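For reference, two of the scalar cDTI measures named above are computed from the diffusion tensor eigenvalues with the standard formulas; the sketch below does so for an illustrative set of eigenvalues, not values from the study.

```python
import numpy as np

def fa_md(eigenvalues):
    # Standard diffusion-tensor scalars: mean diffusivity (MD) and fractional
    # anisotropy (FA) from the three tensor eigenvalues.
    l1, l2, l3 = eigenvalues
    md = (l1 + l2 + l3) / 3.0
    fa = np.sqrt(0.5 * ((l1 - l2)**2 + (l2 - l3)**2 + (l1 - l3)**2)
                 / (l1**2 + l2**2 + l3**2))
    return fa, md

# Illustrative eigenvalues in units of 10^-3 mm^2/s.
print(fa_md((1.7, 0.9, 0.6)))
```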
40 CFR 270.24 - Specific part B information requirements for process vents.
Code of Federal Regulations, 2011 CFR
2011-07-01
... emission reductions must be made using operating parameter values (e.g., temperatures, flow rates, or..., schematics, and piping and instrumentation diagrams based on the appropriate sections of “APTI Course 415...
40 CFR 270.24 - Specific part B information requirements for process vents.
Code of Federal Regulations, 2013 CFR
2013-07-01
... emission reductions must be made using operating parameter values (e.g., temperatures, flow rates, or..., schematics, and piping and instrumentation diagrams based on the appropriate sections of “APTI Course 415...
40 CFR 270.24 - Specific part B information requirements for process vents.
Code of Federal Regulations, 2012 CFR
2012-07-01
... emission reductions must be made using operating parameter values (e.g., temperatures, flow rates, or..., schematics, and piping and instrumentation diagrams based on the appropriate sections of “APTI Course 415...
40 CFR 270.24 - Specific part B information requirements for process vents.
Code of Federal Regulations, 2014 CFR
2014-07-01
... emission reductions must be made using operating parameter values (e.g., temperatures, flow rates, or..., schematics, and piping and instrumentation diagrams based on the appropriate sections of “APTI Course 415...
SU-D-12A-06: A Comprehensive Parameter Analysis for Low Dose Cone-Beam CT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, W; Southern Medical University, Guangzhou; Yan, H
Purpose: There is always a parameter in compressive sensing based iterative reconstruction (IR) methods for low dose cone-beam CT (CBCT) which controls the weight of regularization relative to data fidelity. A clear understanding of the relationship between image quality and parameter values is important. The purpose of this study is to investigate this subject based on experimental data and a representative advanced IR algorithm using tight-frame (TF) regularization. Methods: Three data sets of a Catphan phantom acquired at low, regular and high dose levels are used. For each test, 90 projections covering a 200-degree scan range are used for reconstruction. Three regions-of-interest (ROIs) of different contrasts are used to calculate contrast-to-noise ratios (CNR) for contrast evaluation. A single point structure is used to measure the modulation transfer function (MTF) for spatial-resolution evaluation. Finally, we analyze CNRs and MTFs to study the relationship between image quality and parameter selections. Results: It was found that: 1) There is no universal optimal parameter; the optimal parameter value depends on the specific task and dose level. 2) There is a clear trade-off between CNR and resolution; the parameter for the best CNR is always smaller than that for the best resolution. 3) Optimal parameters are also dose-specific. Data acquired under a high dose protocol require less regularization, yielding smaller optimal parameter values. 4) Compared with conventional FDK images, TF-based CBCT images are better under certain optimally selected parameters; the advantages are more obvious for low dose data. Conclusion: We have investigated the relationship between image quality and parameter values in the TF-based IR algorithm. Preliminary results indicate that optimal parameters are specific to both the task type and dose level, providing guidance for selecting parameters in advanced IR algorithms. This work is supported in part by NIH (1R01CA154747-01).
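A minimal sketch of the evaluation loop implied above: compute the CNR between two ROIs for reconstructions obtained at several regularization weights and keep the best-scoring weight. The reconstruct function here is a dummy noise model used only to make the example executable; it is not the tight-frame solver of the study, and the phantom and ROIs are invented.

```python
import numpy as np

def cnr(image, roi_signal, roi_background):
    # Contrast-to-noise ratio between two regions of interest (boolean masks).
    s, b = image[roi_signal], image[roi_background]
    return abs(s.mean() - b.mean()) / b.std()

def pick_parameter(projections, mus, roi_s, roi_b, reconstruct):
    # Sweep the regularization weight mu and keep the value with the best CNR.
    scores = {mu: cnr(reconstruct(projections, mu), roi_s, roi_b) for mu in mus}
    return max(scores, key=scores.get), scores

# Dummy stand-in for the reconstruction: signal plus noise whose level shrinks
# as the regularization weight grows.
rng = np.random.default_rng(0)
def dummy_recon(proj, mu):
    return proj + rng.normal(0.0, 1.0 / (1.0 + mu), proj.shape)

phantom = np.zeros((64, 64)); phantom[16:32, 16:32] = 10.0
roi_s = np.zeros_like(phantom, dtype=bool); roi_s[18:30, 18:30] = True
roi_b = np.zeros_like(phantom, dtype=bool); roi_b[40:60, 40:60] = True
print(pick_parameter(phantom, [0.1, 1.0, 10.0], roi_s, roi_b, dummy_recon))
```

In a real study the sweep would also score resolution (MTF), which is where the CNR/resolution trade-off described in the results appears.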
Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model
NASA Astrophysics Data System (ADS)
Prakash, Shashi; Kumar, Nitish; Kumar, Subrata
2016-09-01
CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly(methyl methacrylate)). PMMA directly vaporizes when subjected to a high intensity focused CO2 laser beam. This process results in a clean cut and an acceptable surface finish on the microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. When fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. A few analytical models are available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. A number of variants of transparent PMMA are available on the market with different values of thermophysical properties. Therefore, to apply such analytical models, the values of these thermophysical properties are required to be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the thermophysical properties of PMMA. The unavailability of exact values of these property parameters restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty for different powers and scanning speeds has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
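The Monte Carlo propagation step can be sketched as follows: thermophysical properties are sampled from assumed distributions and pushed through a depth model, and the spread of the predicted depth is reported. The property distributions and the simple energy-balance depth model below (which ignores losses and therefore overestimates depth) are placeholders for illustration, not the model or data of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Assumed property distributions for a generic PMMA grade (illustrative only).
rho = rng.normal(1180.0, 20.0, N)       # density, kg/m^3
cp  = rng.normal(1466.0, 60.0, N)       # specific heat, J/(kg K)
Lv  = rng.normal(1.0e6, 5.0e4, N)       # effective vaporization energy, J/kg
T_v, T_0 = 623.0, 300.0                 # assumed vaporization and ambient temperatures, K

P, v, w = 20.0, 0.1, 200e-6             # laser power (W), scan speed (m/s), beam width (m)

# Placeholder energy-balance model: all absorbed power heats and vaporizes the
# removed channel cross-section, so depth = P / (rho * w * v * (cp*dT + Lv)).
depth = P / (rho * w * v * (cp * (T_v - T_0) + Lv))

print(f"depth = {depth.mean()*1e6:.0f} um +/- {depth.std()*1e6:.0f} um (1 sigma)")
```

Varying P and v over a grid and repeating the sampling gives the "propagation of uncertainty with different power and scanning speed" described in the abstract, and holding all but one property fixed gives a crude sensitivity measure.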
Testable solution of the cosmological constant and coincidence problems
NASA Astrophysics Data System (ADS)
Shaw, Douglas J.; Barrow, John D.
2011-02-01
We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, Λ, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified, to a field that can take many possible values. The observed value of Λ ≈ (9.3 Gyrs)^-2 [≈ 10^-120 in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvature of Ω_k0 = -0.0056 (ζ_b/0.5), where ζ_b ~ 1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given Λ. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between t_Λ = Λ^(-1/2) and the age of the Universe, t_U, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different Λ values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters and does not require the introduction of new dynamical scalar fields or modifications to general relativity, and it can be tested by astronomical observations in the near future.
Genetic Algorithm Optimizes Q-LAW Control Parameters
NASA Technical Reports Server (NTRS)
Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard
2008-01-01
A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When the good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performances of the Q-law control parameters are evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method that assigns a better fitness value to solutions that are dominated by fewer other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
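The non-dominated sorting step mentioned above can be illustrated in a few lines: rank hypothetical (flight time, propellant mass) pairs by how many other solutions dominate them, so that rank 0 is the current Pareto front. The numbers below are invented.

```python
def dominates(a, b):
    # a dominates b if it is no worse in both objectives and strictly better in
    # at least one (both flight time and propellant mass are minimized).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(points):
    # Fitness used by non-dominated sorting: the number of solutions that
    # dominate each point (rank 0 = Pareto front).
    return [sum(dominates(q, p) for q in points if q is not p) for p in points]

# Hypothetical (flight time [days], propellant mass [kg]) pairs.
population = [(120, 310), (150, 250), (130, 290), (160, 260), (140, 330)]
ranks = pareto_rank(population)
print([p for p, r in zip(population, ranks) if r == 0])
```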
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. 
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
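The estimation machinery described above (a weighted least-squares objective, perturbation sensitivities, and modified Gauss-Newton updates) can be sketched generically as follows. This is a simplified illustration with an invented toy process model and simple damping, not UCODE_2005 itself.

```python
import numpy as np

def gauss_newton(simulate, observed, weights, p0, steps=50, rel_perturb=0.01):
    """Weighted least-squares estimation with forward-difference sensitivities
    and damped Gauss-Newton updates (a generic sketch of the approach)."""
    p = np.array(p0, dtype=float)
    W = np.diag(weights)
    for _ in range(steps):
        r = observed - simulate(p)                     # residuals
        J = np.empty((r.size, p.size))
        for j in range(p.size):                        # perturbation sensitivities
            dp = np.zeros_like(p)
            dp[j] = rel_perturb * max(abs(p[j]), 1e-8)
            J[:, j] = (simulate(p + dp) - simulate(p)) / dp[j]
        step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)  # normal equations
        p += 0.5 * step                                   # simple damping
    return p

# Toy process model (hypothetical): response = a * (1 - exp(-b * t)).
t = np.linspace(1, 10, 8)
model = lambda p: p[0] * (1 - np.exp(-p[1] * t))
obs = model(np.array([2.0, 0.4]))                       # synthetic observations
print(gauss_newton(model, obs, np.ones_like(t), p0=[1.0, 1.0]))
```

The forward-difference loop is the "less accurate" perturbation option mentioned in the report; process models that can supply analytical sensitivities would replace it.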
Cycle 24 HST+COS Target Acquisition Monitor Summary
NASA Astrophysics Data System (ADS)
Penton, Steven V.; White, James
2018-06-01
HST/COS calibration program 14847 (P14857) was designed to verify that all three COS Target Acquisition (TA) modes were performing nominally during Cycle 24. The program was designed not only to determine if any of the COS TA flight software (FSW) patchable constants need updating but also to determine the values of any required parameter updates. All TA modes were determined to be performing nominally during the Cycle 24 calendar period of October 1, 2016 - October 1, 2017. No COS SIAF, TA subarray, or FSW parameter updates were required as a result of this program.
ERIC Educational Resources Information Center
Pinkston, Jonathan W.; Branch, Marc N.
2004-01-01
Daily administration of cocaine often results in the development of tolerance to its effects on responding maintained by fixed-ratio schedules. Such effects have been observed to be greater when the ratio value is small, whereas less or no tolerance has been observed at large ratio values. Similar schedule-parameter-dependent tolerance, however,…
Hughes, Douglas A.
2006-04-04
A method and system are provided for determining the torque required to launch a vehicle having a hybrid drive-train that includes at least two independently operable prime movers. The method includes the steps of determining the value of at least one control parameter indicative of a vehicle operating condition, determining the torque required to launch the vehicle from the at least one determined control parameter, comparing the torque available from the prime movers to the torque required to launch the vehicle, and controlling operation of the prime movers to launch the vehicle in response to the comparing step. The system of the present invention includes a control unit configured to perform the steps of the method outlined above.
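The comparison and control steps recited above reduce to a small decision rule; the sketch below uses an engine-first torque split chosen purely for illustration, and all torque numbers are hypothetical.

```python
def launch_control(required_torque, engine_avail, motor_avail):
    # Compare available torque from the two prime movers against the torque
    # required to launch, then allocate commands (engine-first split assumed).
    if required_torque > engine_avail + motor_avail:
        return {"launch": False, "engine": 0.0, "motor": 0.0}
    engine_cmd = min(required_torque, engine_avail)
    return {"launch": True, "engine": engine_cmd, "motor": required_torque - engine_cmd}

print(launch_control(required_torque=420.0, engine_avail=300.0, motor_avail=200.0))
```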
Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal
2017-12-01
Mathematical modelling is fundamental to understand the dynamic behavior and regulation of the biochemical metabolisms and pathways that are found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved in biological systems. Computational approaches are required to estimate these parameters. The estimation is converted into multimodal optimization problems that require a global optimization algorithm that can avoid local solutions. These local solutions can lead to a bad fit when calibrating with a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) to improve the efficiency of a global optimum (the best set of kinetic parameter values) search. The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated based on two aspartate pathways that were obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. Nevertheless, the proposed algorithm is only expected to work well in small scale systems. In addition, the results of this study can be used to estimate kinetic parameter values in the stage of model selection for different experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
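To illustrate the kind of global search involved, the sketch below fits a toy Michaelis-Menten model with a plain particle swarm optimizer. This is not the IPSOGSA hybrid of the paper; the bounds, swarm settings, and synthetic data are assumptions made for the example.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Plain particle swarm optimization over box-bounded parameters.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Toy kinetic-parameter fit: Michaelis-Menten data with hidden Vmax=10, Km=2.
s = np.linspace(0.1, 10, 20)
rate = 10 * s / (2 + s)
sse = lambda p: float(np.sum((p[0] * s / (p[1] + s) - rate) ** 2))
print(pso(sse, bounds=[(0.1, 50), (0.01, 20)]))
```

Real pathway models replace the one-line rate law with an ODE simulation, which is where local minima and the need for stronger global search strategies arise.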
VERIFICATION AND VALIDATION OF THE SPARC MODEL
Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values--that is, the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships that allow estimation of some ...
A theoretical study of potentially observable chirality-sensitive NMR effects in molecules.
Garbacz, Piotr; Cukras, Janusz; Jaszuński, Michał
2015-09-21
Two recently predicted nuclear magnetic resonance effects, the chirality-induced rotating electric polarization and the oscillating magnetization, are examined for several experimentally available chiral molecules. We discuss in detail the requirements for experimental detection of chirality-sensitive NMR effects of the studied molecules. These requirements are related to two parameters: the shielding polarizability and the antisymmetric part of the nuclear magnetic shielding tensor. The dominant second contribution has been computed for small molecules at the coupled cluster and density functional theory levels. It was found that DFT calculations using the KT2 functional and the aug-cc-pCVTZ basis set adequately reproduce the CCSD(T) values obtained with the same basis set. The largest values of parameters, thus most promising from the experimental point of view, were obtained for the fluorine nuclei in 1,3-difluorocyclopropene and 1,3-diphenyl-2-fluoro-3-trifluoromethylcyclopropene.
Scholz, Norman; Behnke, Thomas; Resch-Genger, Ute
2018-01-01
Micelles are of increasing importance as versatile carriers for hydrophobic substances and nanoprobes for a wide range of pharmaceutical, diagnostic, medical, and therapeutic applications. A key parameter indicating the formation and stability of micelles is the critical micelle concentration (CMC). In this respect, we determined the CMC of common anionic, cationic, and non-ionic surfactants fluorometrically using different fluorescent probes and fluorescence parameters for signal detection and compared the results with conductometric and surface tension measurements. Based upon these results, requirements, advantages, and pitfalls of each method are discussed. Our study underlines the versatility of fluorometric methods that do not impose specific requirements on surfactants and are especially suited for the quantification of very low CMC values. Conductivity and surface tension measurements yield smaller uncertainties particularly for high CMC values, yet are more time- and substance consuming and not suitable for every surfactant.
Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O
2013-03-01
Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij . In this work, we developed a semi-empirical correlation for kij partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
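For context, the quantities such a correlation feeds into are the standard Peng-Robinson pure-component parameters and the classical van der Waals mixing rules. The sketch below evaluates them for an illustrative methane/n-butane mixture with an assumed kij; the paper's correlation itself is not reproduced here.

```python
import numpy as np

R = 8.314  # J/(mol K)

def pr_pure_a_b(Tc, Pc, omega, T):
    # Standard Peng-Robinson pure-component parameters.
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return a, b

def vdw_mixing(x, a, b, kij):
    # Classical van der Waals one-fluid mixing rules with binary interaction kij.
    x, a, b = map(np.asarray, (x, a, b))
    n = len(x)
    a_mix = sum(x[i] * x[j] * np.sqrt(a[i] * a[j]) * (1 - kij[i][j])
                for i in range(n) for j in range(n))
    return float(a_mix), float(x @ b)

# Methane(1)/n-butane(2) at 300 K; kij = 0.02 is an assumed illustrative value.
a1, b1 = pr_pure_a_b(190.6, 45.99e5, 0.012, 300.0)
a2, b2 = pr_pure_a_b(425.1, 37.96e5, 0.200, 300.0)
print(vdw_mixing([0.4, 0.6], [a1, a2], [b1, b2], kij=[[0.0, 0.02], [0.02, 0.0]]))
```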
Systems identification using a modified Newton-Raphson method: A FORTRAN program
NASA Technical Reports Server (NTRS)
Taylor, L. W., Jr.; Iliff, K. W.
1972-01-01
A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization which typically requires several iterations. A starting technique is used which insures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
NASA Astrophysics Data System (ADS)
Glover, Paul W. J.
2016-07-01
When scientists apply Archie's first law they often include an extra parameter a, which was introduced about 10 years after the equation's first publication by Winsauer et al. (1952), and which is sometimes called the "tortuosity" or "lithology" parameter. This parameter is not, however, theoretically justified. Paradoxically, the Winsauer et al. (1952) form of Archie's law often performs better than the original, more theoretically correct version. The difference in the cementation exponent calculated from these two forms of Archie's law is important, and can lead to a misestimation of reserves by at least 20 % for typical reservoir parameter values. We have examined the apparent paradox, and conclude that while the theoretical form of the law is correct, the data that we have been analysing with Archie's law have been in error. There are at least three types of systematic error that are present in most measurements: (i) a porosity error, (ii) a pore fluid salinity error, and (iii) a temperature error. Each of these systematic errors is sufficient to ensure that a non-unity value of the parameter a is required in order to fit the electrical data well. Fortunately, the inclusion of this parameter in the fit has compensated for the presence of the systematic errors in the electrical and porosity data, leading to a value of cementation exponent that is correct. The exceptions are those cementation exponents that have been calculated for individual core plugs. We make a number of recommendations for reducing the systematic errors that contribute to the problem and suggest that the value of the parameter a may now be used as an indication of data quality.
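The compensation effect described above is easy to reproduce numerically: the sketch below generates synthetic formation-factor data with m = 2, adds an assumed systematic porosity error of one porosity unit, and fits Archie's law (log10 F = log10 a - m log10 φ) with and without the free parameter a.

```python
import numpy as np

rng = np.random.default_rng(1)
phi_true = rng.uniform(0.05, 0.30, 40)
F_true = phi_true ** -2.0                 # Archie's first law with m = 2, a = 1

phi_meas = phi_true + 0.01                # assumed systematic porosity error (+1 p.u.)

x, y = np.log10(phi_meas), np.log10(F_true)

# Winsauer form: free intercept, i.e. a is fitted.
slope, intercept = np.polyfit(x, y, 1)
print(f"free a : m = {-slope:.3f}, a = {10**intercept:.3f}")

# Original form: a forced to 1, so the line is forced through the origin.
m_forced = -(x @ y) / (x @ x)
print(f"a = 1  : m = {m_forced:.3f}")
```

With the biased porosities, the free-a fit recovers a cementation exponent close to 2 (with a absorbing the error), while the a = 1 fit returns a distorted exponent, which is the behaviour the abstract describes.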
Entanglement-Assisted Weak Value Amplification
NASA Astrophysics Data System (ADS)
Pang, Shengshi; Dressel, Justin; Brun, Todd A.
2014-07-01
Large weak values have been used to amplify the sensitivity of a linear response signal for detecting changes in a small parameter, which has also enabled a simple method for precise parameter estimation. However, producing a large weak value requires a low postselection probability for an ancilla degree of freedom, which limits the utility of the technique. We propose an improvement to this method that uses entanglement to increase the efficiency. We show that by entangling and postselecting n ancillas, the postselection probability can be increased by a factor of n while keeping the weak value fixed (compared to n uncorrelated attempts with one ancilla), which is the optimal scaling with n that is expected from quantum metrology. Furthermore, we show the surprising result that the quantum Fisher information about the detected parameter can be almost entirely preserved in the postselected state, which allows the sensitive estimation to approximately saturate the relevant quantum Cramér-Rao bound. To illustrate this protocol we provide simple quantum circuits that can be implemented using current experimental realizations of three entangled qubits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Overton, J.H.; Jarabek, A.M.
1989-01-01
The U.S. EPA advocates the assessment of health-effects data and calculation of inhaled reference doses as benchmark values for gauging systemic toxicity of inhaled gases. The assessment often requires an inter- or intra-species dose extrapolation from no observed adverse effect level (NOAEL) exposure concentrations in animals to human equivalent NOAEL exposure concentrations. To achieve this, a dosimetric extrapolation procedure was developed based on the form or type of equations that describe the uptake and disposition of inhaled volatile organic compounds (VOCs) in physiologically-based pharmacokinetic (PB-PK) models. The procedure assumes allometric scaling of most physiological parameters and that the value of the time-integrated human arterial-blood concentration must be limited to no more than that of experimental animals. The scaling assumption replaces the need for most parameter values and allows the derivation of a simple formula for dose extrapolation of VOCs that gives equivalent or more-conservative exposure concentration values than those that would be obtained using a PB-PK model in which scaling was assumed.
Nondimensional parameter for conformal grinding: combining machine and process parameters
NASA Astrophysics Data System (ADS)
Funkenbusch, Paul D.; Takahashi, Toshio; Gracewski, Sheryl M.; Ruckman, Jeffrey L.
1999-11-01
Conformal grinding of optical materials with CNC (Computer Numerical Control) machining equipment can be used to achieve precise control over complex part configurations. However, complications can arise due to the need to fabricate complex geometrical shapes at reasonable production rates. For example, high machine stiffness is essential, but the need to grind 'inside' small or highly concave surfaces may require the use of tooling with less than ideal stiffness characteristics. If grinding generates loads sufficient for significant tool deflection, the programmed removal depth will not be achieved. Moreover, since grinding load is a function of the volumetric removal rate, the amount of load deflection can vary with location on the part, potentially producing complex figure errors. In addition to machine/tool stiffness and removal rate, load generation is a function of the process parameters. For example, by reducing the feed rate of the tool into the part, both the load and the resultant deflection/removal error can be decreased. However, this must be balanced against the need for part throughput. In this paper, a simple model that permits the combination of machine stiffness and process parameters into a single non-dimensional parameter is adapted for a conformal grinding geometry. Errors in removal can be minimized by maintaining this parameter above a critical value. Moreover, since the value of this parameter depends on the local part geometry, it can be used to optimize process settings during grinding. For example, it may be used to guide adjustment of the feed rate as a function of location on the part to eliminate figure errors while minimizing the total grinding time required.
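A toy version of the idea, under assumptions that are not the paper's derivation (grinding load proportional to volumetric removal rate, tool deflection equal to load over stiffness), shows how a nondimensional group links removal error to feed rate; all numbers below are illustrative.

```python
def removal_error_fraction(k_stiffness, depth, load_per_mrr, mrr):
    # Assumed toy model: force F = load_per_mrr * MRR, deflection = F / k, and
    # the nondimensional group Pi = k * depth / F sets the fractional error 1/Pi.
    force = load_per_mrr * mrr
    pi = k_stiffness * depth / force
    return 1.0 / pi, pi

def feed_for_error_budget(k_stiffness, depth, load_per_mrr, width, max_error=0.02):
    # Largest feed velocity that keeps Pi above 1/max_error for a channel of
    # cross-section width * depth (feed = MRR / (width * depth)).
    pi_crit = 1.0 / max_error
    mrr_max = k_stiffness * depth / (pi_crit * load_per_mrr)
    return mrr_max / (width * depth)

err, pi = removal_error_fraction(k_stiffness=2e6, depth=20e-6, load_per_mrr=5e8, mrr=2e-9)
print(f"Pi = {pi:.0f}, removal error = {100*err:.1f}% of programmed depth")
print(f"max feed = {feed_for_error_budget(2e6, 20e-6, 5e8, width=5e-3)*1e3:.1f} mm/s")
```

Evaluating the group locally over the part and throttling the feed rate wherever it drops toward the critical value is the kind of location-dependent adjustment the abstract describes.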
Prediction of quantitative intrathoracic fluid volume to diagnose pulmonary oedema using LabVIEW.
Urooj, Shabana; Khan, M; Ansari, A Q; Lay-Ekuakille, Aimé; Salhan, Ashok K
2012-01-01
Pulmonary oedema is a life-threatening disease that requires special attention in the area of research and clinical diagnosis. Computer-based techniques are rarely used to quantify the intrathoracic fluid volume (IFV) for diagnostic purposes. This paper discusses a software program developed to detect and diagnose pulmonary oedema using LabVIEW. The software operates on anthropometric dimensions and physiological parameters, mainly transthoracic electrical impedance (TEI). This technique is accurate and faster than existing manual techniques. The LabVIEW software was used to compute the parameters required to quantify IFV. An equation relating per cent control and IFV was obtained. The results of predicted TEI and measured TEI were compared with previously reported data to validate the developed program. It was found that the predicted values of TEI obtained from the computer-based technique were much closer to the measured values of TEI. Six new subjects were enrolled to measure and predict transthoracic impedance and hence to quantify IFV. A similar difference was also observed in the measured and predicted values of TEI for the new subjects.
Parameter Analysis of the VPIN (Volume synchronized Probability of Informed Trading) Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Jung Heon; Wu, Kesheng; Simon, Horst D.
2014-03-01
VPIN (Volume synchronized Probability of Informed trading) is a leading indicator of liquidity-induced volatility. It is best known for having produced a signal well before the Flash Crash of 2010. On that day, the market saw the biggest one-day point decline in the Dow Jones Industrial Average, which culminated in $1 trillion of market value disappearing, only to recover those losses twenty minutes later (Lauricella 2010). The computation of VPIN requires the user to set up a handful of free parameters. The values of these parameters significantly affect the effectiveness of VPIN as measured by the false positive rate (FPR). An earlier publication reported that a brute-force search of simple parameter combinations yielded a number of parameter combinations with an FPR of 7%. This work is a systematic attempt to find an optimal parameter set using an optimization package, NOMAD (Nonlinear Optimization by Mesh Adaptive Direct Search) by Audet, Le Digabel, and Tribes (2009) and Le Digabel (2011). We have implemented a number of techniques to reduce the computation time with NOMAD. Tests show that we can reduce the FPR to only 2%. To better understand the parameter choices, we have conducted a series of sensitivity analyses via uncertainty quantification on the parameter spaces using UQTK (Uncertainty Quantification Toolkit). Results have shown the dominance of two parameters in the computation of FPR. Using the outputs from NOMAD optimization and sensitivity analysis, we recommend a range of values for each of the free parameters that perform well on a large set of futures trading records.
NASA Astrophysics Data System (ADS)
Kvinnsland, Yngve; Muren, Ludvig Paul; Dahl, Olav
2004-08-01
Calculations of normal tissue complication probability (NTCP) values for the rectum are difficult because it is a hollow, non-rigid organ. Finding the true cumulative dose distribution for a number of treatment fractions requires a CT scan before each treatment fraction. This is labour intensive, and several surrogate distributions have therefore been suggested, such as dose wall histograms, dose surface histograms and histograms for the solid rectum, with and without margins. In this study, a Monte Carlo method is used to investigate the relationships between the cumulative dose distributions based on all treatment fractions and the above-mentioned histograms that are based on one CT scan only, in terms of equivalent uniform dose. Furthermore, the effect of a specific choice of histogram on estimates of the volume parameter of the probit NTCP model was investigated. It was found that the solid rectum and the rectum wall histograms (without margins) gave equivalent uniform doses with an expected value close to the values calculated from the cumulative dose distributions in the rectum wall. With the number of patients available in this study, the standard deviations of the estimates of the volume parameter were large, and it was not possible to decide which volume gave the best estimates of the volume parameter, but there were distinct differences in the mean values obtained.
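For reference, one common way to collapse a dose distribution into an equivalent uniform dose and feed it to a probit dose-response model is sketched below (the generalized EUD of Niemierko together with an LKB-type probit). The DVH, the volume-effect exponent, and the D50/m values are assumptions made for illustration, not the study's data.

```python
import numpy as np
from math import erf, sqrt

def geud(doses_gy, volumes, a):
    # Generalized EUD (Niemierko); a is the volume-effect parameter (a = 1/n in
    # LKB notation), and volumes are relative volumes of the dose bins.
    v = np.asarray(volumes, float); v = v / v.sum()
    return float((v @ np.asarray(doses_gy, float) ** a) ** (1.0 / a))

def probit_ntcp(eud, d50, m):
    # Probit (LKB-type) dose response: NTCP = Phi((EUD - D50) / (m * D50)).
    t = (eud - d50) / (m * d50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Hypothetical differential DVH for the rectal wall (bin doses in Gy, relative volumes).
doses = [10, 30, 50, 65, 72]
vols  = [0.30, 0.25, 0.20, 0.15, 0.10]
eud = geud(doses, vols, a=8.0)           # exponent assumed for illustration
print(round(eud, 1), round(probit_ntcp(eud, d50=80.0, m=0.15), 3))
```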
Optimization of the fiber laser parameters for local high-temperature impact on metal
NASA Astrophysics Data System (ADS)
Yatsko, Dmitrii S.; Polonik, Marina V.; Dudko, Olga V.
2016-11-01
This paper presents the local laser heating process of the surface layer of a metal sample. The aim is to create a molten pool with the required depth by laser thermal treatment. During heating, the metal temperature at any point of the molten zone should not reach the boiling point of the base material. The laser power, exposure time and spot size of the laser beam are selected as the variable parameters. A mathematical model for heat transfer in a semi-infinite body, applicable to a finite slab, is used for a preliminary theoretical estimate of acceptable parameter values for the laser thermal treatment. The optimization problem is solved using an algorithm based on scanning the search space (a zero-order method of constrained optimization). The calculated values of the parameters (the optimal set of "laser radiation power - exposure time - spot radius") are used to conduct a series of physical experiments to obtain a molten pool with the required depth. Each two-stage experiment consists of a local laser treatment of a metal plate (steel) followed by examination of a microsection of the laser-irradiated region. The experimental results allow the adequacy of the calculations within the selected models to be judged.
Analysis and Sizing for Transient Thermal Heating of Insulated Aerospace Vehicle Structures
NASA Technical Reports Server (NTRS)
Blosser, Max L.
2012-01-01
An analytical solution was derived for the transient response of an insulated structure subjected to a simplified heat pulse. The solution is solely a function of two nondimensional parameters. Simpler functions of these two parameters were developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective thermal properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Equations were also developed for the minimum mass required to maintain the inner, unheated surface below a specified temperature. In the course of the derivation, two figures of merit were identified. Required insulation masses calculated using the approximate equation were shown to typically agree with finite element results within 10%-20% over the relevant range of parameters studied.
Vandenhove, H; Gil-García, C; Rigol, A; Vidal, M
2009-09-01
Predicting the transfer of radionuclides in the environment for normal release, accidental, disposal or remediation scenarios in order to assess exposure requires a large number of generic parameter values. One of the key parameters in environmental assessment is the solid-liquid distribution coefficient, K(d), which is used to predict radionuclide-soil interaction and subsequent radionuclide transport in the soil column. This article presents a review of K(d) values for uranium, radium, lead, polonium and thorium based on an extensive literature survey, including recent publications. The K(d) estimates are presented per soil group, defined by texture and organic matter content (Sand, Loam, Clay and Organic), although the texture class seemed not to significantly affect K(d). Where relevant, other K(d) classification systems are proposed and correlations with soil parameters are highlighted. The K(d) values obtained in this compilation are compared with earlier review data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosking, Jonathan R. M.; Natarajan, Ramesh
The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value. Determining that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values. Determining one or more utility parameter values that corresponds to the range of weather parameter values. Creating a model which correlates the received and the determined utility parameter values with the corresponding weather parameters values.
Nie, H. T.; Wan, Y. J.; You, J. H.; Wang, Z. Y.; Lan, S.; Fan, Y. X.; Wang, F.
2015-01-01
This research aimed to define the energy requirements of Dorper and Hu hybrid F1 ewes from 20 to 50 kg of body weight, and furthermore to study how energy requirements change with age and to evaluate the effect of age on energy requirement parameters. In the comparative slaughter trial, thirty animals were divided into three dry matter intake treatments (ad libitum, n = 18; low restricted, n = 6; high restricted, n = 6), and were slaughtered in baseline, intermediate, and final slaughter groups to calculate body chemical components and retained energy. In the digestibility trial, twelve ewes were housed in individual metabolic cages and randomly assigned to three feeding treatments in accordance with the design of the comparative slaughter trial, to evaluate dietary energetic values at different feed intake levels. The combined data indicated that, with increasing age, the net energy requirement for maintenance (NEm) decreased from 260.62±13.21 to 250.61±11.79 kJ/kg^0.75 of shrunk body weight (SBW)/d, and the metabolizable energy requirement for maintenance (MEm) decreased from 401.99±20.31 to 371.23±17.47 kJ/kg^0.75 of SBW/d. The partial efficiency of ME utilization for maintenance (km, 0.65 vs 0.68) and growth (kg, 0.42 vs 0.41) did not differ (p>0.05) with age. At a similar average daily gain, the net energy requirements for growth (NEg) and metabolizable energy requirements for growth (MEg) of ewes during the late fattening period were 23% and 25% greater than the corresponding values during the early fattening period. In conclusion, the effects of age on energy requirement parameters in the present study were similar in tendency to previous recommendations, and the energy requirements for growth (NEg and MEg) of Dorper and Hu crossbred female lambs ranged between the NRC (2007) recommendations for early- and late-maturing growing sheep. PMID:26104522
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
Tietze, Anna; Mouridsen, Kim; Mikkelsen, Irene Klærke
2015-06-01
Accurate quantification of hemodynamic parameters using dynamic contrast enhanced (DCE) MRI requires a measurement of tissue T1 prior to contrast injection. We evaluate (i) T1 estimation using the variable flip angle (VFA) and the saturation recovery (SR) techniques and (ii) investigate whether accurate estimation of DCE parameters outperforms a time-saving approach with a predefined T1 value when differentiating high- from low-grade gliomas. The accuracy and precision of T1 measurements, acquired by VFA and SR, were investigated by computer simulations and in glioma patients using an equivalence test (p > 0.05 showing significant difference). The permeability measure, Ktrans, the cerebral blood flow (CBF), and the plasma volume, Vp, were calculated in 42 glioma patients, using a fixed T1 of 1500 ms or an individual T1 measurement using SR. The areas under the receiver operating characteristic curves (AUCs) were used as measures of accuracy for differentiating tumor grade. The T1 values obtained by VFA showed larger variation compared to those obtained using SR, both in the digital phantom and in the human data (p > 0.05). Although a fixed T1 introduced a bias into the DCE calculation, this had only a minor impact on the accuracy of differentiating high-grade from low-grade gliomas (AUCfix = 0.906 and AUCind = 0.884 for Ktrans; AUCfix = 0.863 and AUCind = 0.856 for Vp; p for AUC comparison > 0.05). T1 measurements by VFA were less precise, and the SR method is preferable when accurate parameter estimation is required. Semiquantitative DCE values, based on predefined T1 values, were sufficient to perform tumor grading in our study.
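For context, the VFA (DESPOT1-type) T1 estimate discussed above follows from linearizing the spoiled gradient-echo signal equation; the sketch below recovers T1 from synthetic two-angle data. TR, the flip angles, and the true T1 are assumed values for the example.

```python
import numpy as np

def t1_from_vfa(signals, flip_deg, tr_ms):
    # DESPOT1 linearization of the SPGR signal equation
    #   S = M0 sin(a) (1 - E1) / (1 - E1 cos(a)),  E1 = exp(-TR/T1):
    # plotting S/sin(a) against S/tan(a) gives a line with slope E1.
    a = np.deg2rad(np.asarray(flip_deg, float))
    s = np.asarray(signals, float)
    slope, _ = np.polyfit(s / np.tan(a), s / np.sin(a), 1)
    return -tr_ms / np.log(slope)

# Synthetic two-angle acquisition with TR = 5 ms and a true T1 of 1500 ms.
TR, T1, M0 = 5.0, 1500.0, 1000.0
E1 = np.exp(-TR / T1)
angles = [3.0, 15.0]
sig = [M0 * np.sin(np.deg2rad(a)) * (1 - E1) / (1 - E1 * np.cos(np.deg2rad(a))) for a in angles]
print(round(t1_from_vfa(sig, angles, TR), 1))   # recovers ~1500 ms
```

In practice the precision penalty discussed in the abstract comes from B1 inhomogeneity and noise amplification in this linearization, which the SR approach largely avoids.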
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
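A minimal sketch of the randomized SVD step described above; the dictionary size, rank, and oversampling choices below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def randomized_svd(D, rank, n_oversample=10, n_iter=2, seed=0):
    """Approximate the top `rank` singular triplets of a large dictionary D
    (entries x time points) without forming its full SVD."""
    rng = np.random.default_rng(seed)
    k = rank + n_oversample
    # Random projection captures the dominant column space of D.
    Omega = rng.standard_normal((D.shape[1], k))
    Y = D @ Omega
    # Power iterations sharpen the subspace estimate for slowly decaying spectra.
    for _ in range(n_iter):
        Y = D @ (D.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # SVD of the small projected matrix gives the approximate factors.
    B = Q.T @ D
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Illustrative sizes only: 20,000 dictionary entries, 500 time points, rank 25.
D = np.random.randn(20_000, 500).astype(np.float32)
U, s, Vt = randomized_svd(D, rank=25)
D_compressed = D @ Vt.T   # project the dictionary (or measured signals) into the rank-25 subspace
```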
Mateu-de Antonio, Javier; Florit-Sureda, Marta
2016-07-01
Hypertriglyceridemia is a frequent metabolic complication associated with fat administration in parenteral nutrition (PN). No clear guidelines have been published on how to proceed once hypertriglyceridemia has been detected. A new strategy could be to substitute the initial fat emulsion with another emulsion with faster clearance. Our objective was to determine the effectiveness in reducing triglyceridemia values, maintaining the caloric intake, and improving nutrition parameters in patients who had moderate hypertriglyceridemia during PN when an olive oil-based fat emulsion (OOFE) was substituted with a multiple-source oil fat emulsion (MOFE). We also assessed the safety of this substitution in hepatic and glycemic profiles. We performed a retrospective, observational study that included 38 adult patients to whom OOFE in PN was substituted with MOFE when moderate hypertriglyceridemia (≥250-400 mg/dL) was detected. Triglyceridemia values decreased in 36 (94.7%) patients. The mean reduction was 71 (88-22) mg/dL. Fat load was slightly reduced after substitution (-0.14 [-0.23 to 0] g/kg/d; P < .001), but total caloric intake increased from 22.5 (19.7-25.1) to 23.1 (19.8-26.8) kcal/kg/d (P = .053). After substitution, nutrition parameters improved, liver parameters remained unchanged, and insulin requirements increased. The substitution of OOFE with MOFE in patients with moderate hypertriglyceridemia during PN resulted in a reduction in triglyceridemia values of about 70 mg/dL. That allowed maintaining the caloric intake and improved nutrition parameters without affecting the hepatic profile. For some patients, insulin requirements increased moderately. © 2014 American Society for Parenteral and Enteral Nutrition.
Rotor design for maneuver performance
NASA Technical Reports Server (NTRS)
Berry, John D.; Schrage, Daniel
1986-01-01
A method of determining the sensitivity of helicopter maneuver performance to changes in basic rotor design parameters is developed. Maneuver performance is measured by the time required, based on a simplified rotor/helicopter performance model, to perform a series of specified maneuvers. This method identifies parameter values which result in minimum time quickly because of the inherent simplicity of the rotor performance model used. For the specific case studied, this method predicts that the minimum time required is obtained with a low disk loading and a relatively high rotor solidity. The method was developed as part of the winning design effort for the American Helicopter Society student design competition for 1984/1985.
Hobbs, Brian P.; Chandler, Adam G.; Anderson, Ella F.; Herron, Delise H.; Charnsangavej, Chusilp; Yao, James
2013-01-01
Purpose: To assess the effects of acquisition duration on computed tomographic (CT) perfusion parameter values in neuroendocrine liver metastases and normal liver tissue. Materials and Methods: This retrospective study was institutional review board approved, with waiver of informed consent. CT perfusion studies in 16 patients (median age, 57.5 years; range, 42.0–69.7 years), including six men (median, 54.1 years; range, 42.0–69.7 years) and 10 women (median, 59.3 years; range, 43.6–66.3 years), with neuroendocrine liver metastases were analyzed by means of distributed parametric modeling to determine tissue blood flow, blood volume, mean transit time, permeability, and hepatic arterial fraction for tumors and normal liver tissue. Analyses were undertaken with acquisition times of 12–590 seconds. Nonparametric regression analyses were used to evaluate the functional relationships between CT perfusion parameters and acquisition duration. Evidence for time invariance was evaluated for each parameter at multiple time points by inferring the fitted derivative to assess its proximity to zero as a function of acquisition time, using equivalence tests with three levels of confidence (20%, 70%, and 90%). Results: CT perfusion parameter values varied, approaching stable values with increasing acquisition duration. Acquisition duration greater than 160 seconds was required to obtain at least low-confidence stability in any of the CT perfusion parameters. At 160 seconds of acquisition, all five CT perfusion parameters stabilized with low confidence in tumor and normal tissues, with the exception of hepatic arterial fraction in tumors. After 220 seconds of acquisition, there was stabilization with moderate confidence for blood flow, blood volume, and hepatic arterial fraction in tumors and normal tissue, and for mean transit time in tumors; however, permeability values did not satisfy the moderate stabilization criteria in both tumors and normal tissue until 360 seconds of acquisition. Blood flow, mean transit time, permeability, and hepatic arterial fraction were significantly different between tumor and normal tissue at 360 seconds (P < .001). Conclusion: CT perfusion parameter values are affected by acquisition duration and approach progressively stable values with increasing acquisition times. © RSNA, 2013 Online supplemental material is available for this article. PMID:23824990
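A minimal sketch of the kind of stability check described (fit a smooth curve of parameter value versus acquisition duration and flag the first duration at which the fitted derivative falls within a small bound); the smoothing choice, tolerance, and data are assumptions for illustration, not the study's statistical procedure.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Illustrative data: blood flow estimates (mL/100 g/min) at increasing acquisition durations (s).
t = np.array([12, 30, 60, 90, 120, 160, 220, 300, 360, 450, 590], dtype=float)
bf = np.array([95, 80, 68, 62, 58, 55, 53, 52, 51.5, 51.2, 51.0])

spline = UnivariateSpline(t, bf, k=3, s=1.0)   # smooth fit of parameter vs. duration
slope = spline.derivative()(t)                  # fitted derivative at each sampled duration

# Equivalence-style check: first acquisition duration at which |slope| lies within the bound.
bound = 0.02   # assumed tolerance, parameter units per second
stable_from = next((ti for ti, sl in zip(t, slope) if abs(sl) < bound), None)
print("parameter appears stable from ~%s s of acquisition" % stable_from)
```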
Diurnal variations in blood gases and metabolites for draught Zebu and Simmental oxen.
Zanzinger, J; Hoffmann, I; Becker, K
1994-01-01
In previous articles it has been shown that blood parameters may be useful to assess physical fitness in draught cattle. The aim of the present study was to detect possible variations in baseline values for the key metabolites: lactate and free fatty acids (FFA), and for blood gases in samples drawn from a catheterized jugular vein. Sampling took place immediately after venipuncture at intervals of 3 min for 1 hr in Simmental oxen (N = 6) and during a period of 24 hr at intervals of 60 min for Zebu (N = 4) and Simmental (N = 6) oxen. After puncture of the vein, plasma FFA and oxygen (pvO2) were elevated for approximately 15 min. All parameters returned to baseline values within 1 hr of the catheter being inserted. Twenty-four-hour mean baseline values for all measured parameters were significantly different (P < or = 0.001) between Zebu and Simmental. All parameters elicited diurnal variations which were mainly related to feed intake. The magnitude of these variations is comparable to the responses to light draught work. It is concluded that a strict standardization of blood sampling, at least in respect of time after feeding, is required for a reliable interpretation of endurance-indicating blood parameters measured under field conditions.
Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation
NASA Astrophysics Data System (ADS)
Lychak, Oleh V.; Holyns'kiy, Ivan S.
2016-03-01
The use of the Williams' series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of random errors of the Williams' series parameters obtained from the measured components of the stress field. A criterion for choosing the optimal number of terms in the truncated Williams' series, so that the parameters are derived with minimal errors, is also proposed. The method was used for the evaluation of the Williams' parameters obtained from data measured by the digital image correlation technique in testing a three-point bending specimen.
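As an illustration of how standard deviations of series coefficients can be propagated from measurement noise in a linear least-squares fit, here is a generic sketch (not the authors' specific estimator); the basis matrix and noise level are placeholders.

```python
import numpy as np

# Hypothetical setup: stress values y measured at m field points, modelled as a
# truncated series y ≈ A @ a, where column j of A holds the j-th Williams-type
# basis function evaluated at the measurement points (here a random placeholder).
rng = np.random.default_rng(1)
m, n_terms = 200, 5
A = rng.standard_normal((m, n_terms))
a_true = np.array([2.0, 0.5, -0.3, 0.1, 0.05])
sigma_noise = 0.1
y = A @ a_true + sigma_noise * rng.standard_normal(m)

# Ordinary least squares estimate of the coefficients.
a_hat, res, *_ = np.linalg.lstsq(A, y, rcond=None)

# Estimated noise variance and coefficient covariance: cov = s^2 (A^T A)^{-1}.
dof = m - n_terms
s2 = np.sum((y - A @ a_hat) ** 2) / dof
cov = s2 * np.linalg.inv(A.T @ A)
std_a = np.sqrt(np.diag(cov))   # standard deviation of each fitted coefficient
print(np.round(std_a, 4))
```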
An extension of the standard model with a single coupling parameter
NASA Astrophysics Data System (ADS)
Atance, Mario; Cortés, José Luis; Irastorza, Igor G.
1997-02-01
We show that it is possible to find an extension of the matter content of the standard model with a unification of gauge and Yukawa couplings reproducing their known values. The perturbative renormalizability of the model with a single coupling and the requirement to accommodate the known properties of the standard model fix the masses and couplings of the additional particles. The implications on the parameters of the standard model are discussed.
NASA Astrophysics Data System (ADS)
Maslenikov, I.; Useinov, A.; Birykov, A.; Reshetov, V.
2017-10-01
The instrumented indentation method requires the sample surface to be flat and smooth; thus, hardness and elastic modulus values are affected by roughness. A model that accounts for isotropic surface roughness and can be used to correct the data in two limiting cases is proposed. The suggested approach requires the surface roughness parameters to be known.
Analysing the 21 cm signal from the epoch of reionization with artificial neural networks
NASA Astrophysics Data System (ADS)
Shimabukuro, Hayato; Semelin, Benoit
2017-07-01
The 21 cm signal from the epoch of reionization should be observed within the next decade. While a simple statistical detection is expected with Square Kilometre Array (SKA) pathfinders, the SKA will hopefully produce a full 3D mapping of the signal. To extract from the observed data constraints on the parameters describing the underlying astrophysical processes, inversion methods must be developed. For example, the Markov Chain Monte Carlo method has been successfully applied. Here, we test another possible inversion method: artificial neural networks (ANNs). We produce a training set that consists of 70 individual samples. Each sample is made of the 21 cm power spectrum at different redshifts produced with the 21cmFast code, plus the values of the three parameters used in the seminumerical simulations to describe astrophysical processes. Using this set, we train the network to minimize the error between the parameter values it produces as an output and the true values. We explore the impact of the architecture of the network on the quality of the training. Then we test the trained network on a new set of 54 test samples with different values of the parameters. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameters at a given redshift, that including thermal noise and sample variance decreases the quality of the reconstruction, and that using the power spectrum at several redshifts as an input to the ANN improves the quality of the reconstruction. We conclude that ANNs are a viable inversion method whose main strength is that they require only a sparse exploration of the parameter space and thus should be usable with full numerical simulations.
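A minimal sketch of this style of inversion with a feed-forward network; the sample counts, toy spectrum generator, architecture, and scikit-learn usage here are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: each sample pairs a "power spectrum" feature vector with 3 parameters.
n_train, n_test, n_features, n_params = 70, 54, 60, 3
W = rng.standard_normal((n_params, n_features))            # stand-in forward mapping
theta_train = rng.uniform(0, 1, size=(n_train, n_params))
theta_test = rng.uniform(0, 1, size=(n_test, n_params))
X_train = np.tanh(theta_train @ W) + 0.01 * rng.standard_normal((n_train, n_features))
X_test = np.tanh(theta_test @ W) + 0.01 * rng.standard_normal((n_test, n_features))

scaler = StandardScaler().fit(X_train)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(scaler.transform(X_train), theta_train)   # minimize error between output and true parameters

theta_pred = net.predict(scaler.transform(X_test))
rmse = np.sqrt(np.mean((theta_pred - theta_test) ** 2, axis=0))
print("per-parameter RMSE:", np.round(rmse, 3))
```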
A score for the differential diagnosis of bradykinin- and histamine-induced head and neck swellings.
Lenschow, M; Bas, M; Johnson, F; Wirth, M; Strassen, U
2018-05-02
Acute edema of the head and neck region may lead to life-threatening dyspnea and requires quick and targeted treatment. Such swellings can be subdivided into bradykinin- and histamine-mediated swellings, which require treatment with different classes of pharmaceuticals. Clinical pathways for the differential diagnosis do not exist so far, although it is known that early treatment is decisive for faster symptom relief and reduced expression of the swellings. The aim of the study was the creation of a clinical algorithm for identification of bradykinin-mediated angioedema. 188 patients who presented to our outpatient department between 2010 and 2016 with an acute, non-inflammatory swelling of the head and neck region were included in our retrospective study. All available anamnestic and clinical parameters were obtained from patient files. Parameters showing significant differences between the two groups were included in our score. Use of Youden's index allowed determination of an optimal cut-off value. 76 patients could be assigned to the histamine group and 112 patients to the bradykinin group. The following parameters were included in our score: age, dyspnea, itching or erythema, glucocorticoid response, and intake of ACEi/AT-II blockers. The cut-off value is set at three points. The proposed score yielded a sensitivity for identification of bradykinin-mediated angioedema of 96%, a specificity of 84%, a positive predictive value of 91%, and a negative predictive value of 93%. Use of the proposed score allows quick and reliable assignment of patients to the correct subgroup and thereby reduces time to treatment.
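For readers unfamiliar with the cut-off selection step, a minimal sketch of choosing a score threshold by Youden's index (J = sensitivity + specificity - 1); the score values and labels below are made up for illustration.

```python
import numpy as np

# Hypothetical data: summed score points per patient and true group (1 = bradykinin-mediated).
scores = np.array([1, 2, 2, 3, 4, 5, 0, 1, 3, 4, 2, 5, 1, 4])
labels = np.array([0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1])

best_j, best_cutoff = -1.0, None
for cutoff in np.unique(scores):
    pred = scores >= cutoff                 # classify as bradykinin-mediated at/above the cut-off
    sens = np.mean(pred[labels == 1])       # sensitivity
    spec = np.mean(~pred[labels == 0])      # specificity
    j = sens + spec - 1.0                   # Youden's index
    if j > best_j:
        best_j, best_cutoff = j, cutoff

print("optimal cut-off:", best_cutoff, "Youden J:", round(best_j, 2))
```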
Nurse Scheduling by Cooperative GA with Effective Mutation Operator
NASA Astrophysics Data System (ADS)
Ohki, Makoto
In this paper, we propose an effective mutation operator for Cooperative Genetic Algorithm (CGA) to be applied to a practical Nurse Scheduling Problem (NSP). Nurse scheduling is a very difficult task, because NSP is a complex combinatorial optimization problem for which many requirements must be considered. In real hospitals, the schedule changes frequently. Changes of the shift schedule yield various problems, for example, a fall in the nursing level. We describe a technique for reoptimization of the nurse schedule in response to a change. The conventional CGA is superior in its ability for local search by means of its crossover operator, but it often stagnates in an unfavorable situation because it is inferior in its ability for global search. When the optimization stagnates for a long generation cycle, the searching point, the population in this case, is likely caught in a wide local minimum area. To escape such a local minimum area, a small change in the population is required. Based on this consideration, we propose a mutation operator activated depending on the optimization speed. When the optimization stagnates, in other words, when the optimization speed decreases, the mutation yields small changes in the population. The population is then able to escape from a local minimum area by means of the mutation. However, this mutation operator requires two well-defined parameters, which means that the user has to choose the values of these parameters carefully. To solve this problem, we propose a periodic mutation operator which has only one parameter to define it. This simplified mutation operator is effective over a wide range of the parameter value.
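A minimal sketch of the simpler periodic variant: every fixed number of generations, a small random perturbation is applied to the population. The schedule encoding, mutation rate, and period are placeholders, not the paper's formulation.

```python
import random

def periodic_mutation(population, generation, period=50, rate=0.02):
    """Every `period` generations, flip a small fraction of genes to perturb the population."""
    if generation % period != 0:
        return population
    mutated = []
    for chromo in population:
        genes = list(chromo)
        for i in range(len(genes)):
            if random.random() < rate:
                genes[i] = random.choice("DEN")   # e.g. Day / Evening / Night shift codes
        mutated.append("".join(genes))
    return mutated

# Toy usage: 20 nurses' weekly schedules encoded as 7-character strings.
random.seed(0)
population = ["".join(random.choice("DEN") for _ in range(7)) for _ in range(20)]
for gen in range(1, 201):
    # ... crossover / selection steps of the cooperative GA would go here ...
    population = periodic_mutation(population, gen)
```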
Computational tools for fitting the Hill equation to dose-response curves.
Gadagkar, Sudhindra R; Call, Gerald B
2015-01-01
Many biological response curves commonly assume a sigmoidal shape that can be approximated well by means of the 4-parameter nonlinear logistic equation, also called the Hill equation. However, estimation of the Hill equation parameters requires access to commercial software or the ability to write computer code. Here we present two user-friendly and freely available computer programs to fit the Hill equation - a Solver-based Microsoft Excel template and a stand-alone GUI-based "point and click" program, called HEPB. Both computer programs use the iterative method to estimate two of the Hill equation parameters (EC50 and the Hill slope), while constraining the values of the other two parameters (the minimum and maximum asymptotes of the response variable) to fit the Hill equation to the data. In addition, HEPB draws the prediction band at a user-defined confidence level, and determines the EC50 value for each of the limits of this band to give boundary values that help objectively delineate sensitive, normal and resistant responses to the drug being tested. Both programs were tested by analyzing twelve datasets that varied widely in data values, sample size and slope, and were found to yield estimates of the Hill equation parameters that were essentially identical to those provided by commercial software such as GraphPad Prism and by nls, the nonlinear least-squares fitting routine in the programming language R. The Excel template provides a means to estimate the parameters of the Hill equation and plot the regression line in a familiar Microsoft Office environment. HEPB, in addition to providing the above results, also computes the prediction band for the data at a user-defined level of confidence, and determines objective cut-off values to distinguish among response types (sensitive, normal and resistant). Both programs are found to yield estimated values that are essentially the same as those from standard software such as GraphPad Prism and the R-based nls. Furthermore, HEPB also has the option to simulate 500 response values based on the range of values of the dose variable in the original data and the fit of the Hill equation to that data. Copyright © 2014. Published by Elsevier Inc.
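A minimal sketch of fitting the 4-parameter Hill equation with off-the-shelf tools; this is a generic curve fit, not the Excel/HEPB implementation described above, and the dose-response data are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(x, bottom, top, ec50, hill_slope):
    """4-parameter logistic (Hill) equation."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill_slope)

# Hypothetical dose-response data (dose in µM, response in % of control).
dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([3.0, 6.0, 14.0, 35.0, 60.0, 82.0, 93.0, 97.0])

p0 = [resp.min(), resp.max(), 1.0, 1.0]      # starting guesses for bottom, top, EC50, slope
popt, pcov = curve_fit(hill, dose, resp, p0=p0)
perr = np.sqrt(np.diag(pcov))                # standard errors of the estimates
print("EC50 = %.3f µM, Hill slope = %.2f" % (popt[2], popt[3]))
```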
UCODE, a computer code for universal inverse modeling
Poeter, E.P.; Hill, M.C.
1999-01-01
This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated from values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating system: it consists of algorithms programmed in Perl, a freeware language designed for text manipulation, and in Fortran 90, which efficiently performs the numerical calculations.
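A minimal sketch of the core numerical idea (modified Gauss-Newton on a weighted least-squares objective with forward-difference sensitivities); this is a generic illustration, not UCODE itself, and the model function is a placeholder for the application model.

```python
import numpy as np

def model(params, x):
    """Placeholder application model: exponential decay with two parameters."""
    a, k = params
    return a * np.exp(-k * x)

def gauss_newton(obs, x, weights, p0, n_iter=20, fd_step=1e-6, damping=1e-8):
    p = np.array(p0, dtype=float)
    W = np.diag(weights)
    for _ in range(n_iter):
        r = obs - model(p, x)
        # Forward-difference sensitivities (Jacobian of simulated values w.r.t. parameters).
        J = np.empty((len(x), len(p)))
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = fd_step * max(1.0, abs(p[j]))
            J[:, j] = (model(p + dp, x) - model(p, x)) / dp[j]
        # Normal equations of the weighted problem, with slight damping for stability.
        A = J.T @ W @ J + damping * np.eye(len(p))
        g = J.T @ W @ r
        p += np.linalg.solve(A, g)
    return p

x = np.linspace(0, 5, 30)
rng = np.random.default_rng(3)
obs = model([2.0, 0.7], x) + 0.02 * rng.standard_normal(x.size)
print(gauss_newton(obs, x, weights=np.ones_like(x), p0=[1.5, 0.5]))
```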
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
NASA Technical Reports Server (NTRS)
Vajingortin, L. D.; Roisman, W. P.
1991-01-01
The problem of ensuring the required quality of products and/or technological processes often becomes more difficult due to the fact that there is no general theory for determining the optimal sets of values of the primary factors, i.e., of the output parameters of the parts and units comprising an object, that ensure the correspondence of the object's parameters to the quality requirements. This is the main reason for the amount of time taken to finish complex vital articles. To create this theory, one has to overcome a number of difficulties and to solve the following tasks: the creation of reliable and stable mathematical models showing the influence of the primary factors on the output parameters; finding a new technique of assigning tolerances for primary factors with regard to economic, technological, and other criteria, the technique being based on the solution of the main problem; and well-reasoned assignment of nominal values for primary factors which serve as the basis for creating tolerances. Each of the above listed tasks is of independent importance. An attempt is made to give solutions for this problem. The above problem of quality assurance, in its mathematically formalized aspect, is called the multiple inverse problem.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operating parameter value and corrective action taken. (6) For each continuous monitoring system, records... operator may retain records on microfilm, computer disks, magnetic tape, or microfiche; and (3) The owner or operator may report required information on paper or on a labeled computer disk using commonly...
A Weight of Evidence Framework for Environmental Assessments: Inferring Quantities
Environmental assessments require the generation of quantitative parameters such as degradation rates and assessment products may be quantities such as criterion values or magnitudes of effects. When multiple data sets or outputs of multiple models are available, it may be appro...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blais, AR; Dekaban, M; Lee, T-Y
2014-08-15
Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k3, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k3. Furthermore, these linearized models are solved with a non-negative least squares algorithm and together they provide other advantages including: 1) only one possible solution and no required choice of starting parameter values, 2) parameter estimates comparable in accuracy to those from nonlinear models, 3) significantly reduced computational time. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k3 estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k3 from noisy dynamic PET data.
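A minimal sketch of solving a linearized kinetic model with non-negative least squares; the design matrix here is a generic stand-in, not the specific compartment-model operator used in the abstract.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)

# Hypothetical linearized problem: time-activity data y modelled as A @ theta with theta >= 0,
# where the columns of A would be built from integrals of the measured input/tissue curves.
n_frames, n_params = 40, 3
A = np.abs(rng.standard_normal((n_frames, n_params)))
theta_true = np.array([0.15, 0.05, 0.3])      # e.g. uptake, k3-related, and blood-volume terms
y = A @ theta_true + 0.01 * rng.standard_normal(n_frames)

theta_hat, residual_norm = nnls(A, y)         # unique solution, no starting values needed
print(np.round(theta_hat, 3), round(residual_norm, 4))
```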
Investigation on Effect of Material Hardness in High Speed CNC End Milling Process.
Dhandapani, N V; Thangarasu, V S; Sureshkannan, G
2015-01-01
This research paper analyzes the effects of material properties on surface roughness, material removal rate, and tool wear on high speed CNC end milling process with various ferrous and nonferrous materials. The challenge of material specific decision on the process parameters of spindle speed, feed rate, depth of cut, coolant flow rate, cutting tool material, and type of coating for the cutting tool for required quality and quantity of production is addressed. Generally, decision made by the operator on floor is based on suggested values of the tool manufacturer or by trial and error method. This paper describes effect of various parameters on the surface roughness characteristics of the precision machining part. The prediction method suggested is based on various experimental analysis of parameters in different compositions of input conditions which would benefit the industry on standardization of high speed CNC end milling processes. The results show a basis for selection of parameters to get better results of surface roughness values as predicted by the case study results.
Preliminary Investigation of Ice Shape Sensitivity to Parameter Variations
NASA Technical Reports Server (NTRS)
Miller, Dean R.; Potapczuk, Mark G.; Langhals, Tammy J.
2005-01-01
A parameter sensitivity study was conducted at the NASA Glenn Research Center's Icing Research Tunnel (IRT) using a 36 in. chord (0.91 m) NACA-0012 airfoil. The objective of this preliminary work was to investigate the feasibility of using ice shape feature changes to define requirements for the simulation and measurement of SLD icing conditions. It was desired to identify the minimum change (threshold) in a parameter value, which yielded an observable change in the ice shape. Liquid Water Content (LWC), drop size distribution (MVD), and tunnel static temperature were varied about a nominal value, and the effects of these parameter changes on the resulting ice shapes were documented. The resulting differences in ice shapes were compared on the basis of qualitative and quantitative criteria (e.g., mass, ice horn thickness, ice horn angle, icing limits, and iced area). This paper will provide a description of the experimental method, present selected experimental results, and conclude with an evaluation of these results, followed by a discussion of recommendations for future research.
NASA Astrophysics Data System (ADS)
Kuz'michev, V. S.; Filinov, E. P.; Ostapyuk, Ya. A.
2018-01-01
This article describes how the thrust level influences the turbojet architecture (the types of turbomachines that provide the maximum efficiency) and its working process parameters (turbine inlet temperature (TIT) and overall pressure ratio (OPR)). Functional gas-dynamic and strength constraints were included; the total mass of fuel and engine required for the mission and the specific fuel consumption (SFC) were considered the optimization criteria. Radial and axial turbines and compressors were considered. The results show that as the engine thrust decreases, optimal values of the working process parameters decrease too, and the regions of compromise shrink. Optimal engine architecture and values of the working process parameters are suggested for turbojets with thrust varying from 100 N to 100 kN. The results show that for thrust below 25 kN the engine scale factor should be taken into account, as the low flow rates begin to influence the efficiency of the engine elements substantially.
Kevill, Dennis N.; Koyoshi, Fumie; D’Souza, Malcolm J.
2007-01-01
Additional specific rates of solvolysis are determined for phenyl chloroformate. These values are combined with literature values to give a total of 49 data points, which are used within simple and extended Grunwald-Winstein treatments. Literature values are also brought together to allow treatments in more solvents than previously for three N-aryl-N-methylcarbamoyl chlorides, phenyl chlorothionoformate, phenyl chlorodithioformate, and N,N-diphenylcarbamoyl chloride. For the last two listed, moderately strong evidence for a meaningful inclusion of a term governed by the aromatic ring parameter (I) was indicated. No evidence was found requiring inclusion of this parameter for ionization reactions with only one aromatic ring on the nitrogen of carbamoyl chlorides or for the solvolyses of the chloroformate or chlorothionoformate proceeding by an addition-elimination (association-dissociation) mechanism.
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref. 1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to those from the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
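To make the three-point idea concrete, here is a small sketch using the conventional PERT-style convention for building a beta distribution from minimum, most likely, and maximum values (standard deviation roughly one sixth of the range). This illustrates the general approach only, not the specific in-house NASA Glenn method described above.

```python
import numpy as np
from scipy import stats

def beta_from_three_points(lo, mode, hi, lam=4.0):
    """PERT-style Beta(alpha, beta) on [lo, hi] from min, most-likely, and max values.
    Uses the conventional weighting lam=4; this is an illustrative convention only."""
    alpha = 1.0 + lam * (mode - lo) / (hi - lo)
    beta = 1.0 + lam * (hi - mode) / (hi - lo)
    return alpha, beta

lo, mode, hi = 10.0, 14.0, 22.0          # hypothetical design parameter (e.g. a load in kN)
a, b = beta_from_three_points(lo, mode, hi)
dist = stats.beta(a, b, loc=lo, scale=hi - lo)
print("alpha=%.2f beta=%.2f mean=%.2f std=%.2f" % (a, b, dist.mean(), dist.std()))
```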
A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times
Heath, Tracy A.
2012-01-01
In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
Understanding identifiability as a crucial step in uncertainty assessment
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.
2016-12-01
The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
NASA Astrophysics Data System (ADS)
Mohan, N. S.; Kulkarni, S. M.
2018-01-01
Polymer based composites have marked their valuable presence in the areas of the aerospace, defense, and automotive industries. Components made of composites are assembled to the main structure by fasteners, which require accurate, precise, high quality holes to be drilled. Drilling holes in composites with accuracy requires control over various process parameters, viz., speed, feed, drill bit size, and thickness of the specimen. A TRIAC VMC machining center is used to drill the holes and to relate the cutting and machining parameters to the torque. MINITAB 14 software is used to analyze the collected data. As a function of cutting and specimen parameters, this method could be useful for predicting torque. The purpose of this work is to investigate the effect of drilling parameters to obtain a low torque value. Results show that thickness of the specimen and drill bit size are the significant parameters influencing the torque, while spindle speed and feed rate have the least influence; an overlaid plot indicates a feasible, low-torque region for medium to large sized drill bits over the range of spindle speeds selected. Response surface contour plots indicate the sensitivity of the torque to drill size and specimen thickness.
Tempel, Zachary J; Gandhoke, Gurpreet S; Bolinger, Bryan D; Khattar, Nicolas K; Parry, Philip V; Chang, Yue-Fang; Okonkwo, David O; Kanter, Adam S
2017-06-01
Annual incidence of symptomatic adjacent level disease (ALD) following lumbar fusion surgery ranges from 0.6% to 3.9% per year. Sagittal malalignment may contribute to the development of ALD. To describe the relationship between pelvic incidence-lumbar lordosis (PI-LL) mismatch and the development of symptomatic ALD requiring revision surgery following single-level transforaminal lumbar interbody fusion for degenerative lumbar spondylosis and/or low-grade spondylolisthesis. All patients who underwent a single-level transforaminal lumbar interbody fusion at either L4/5 or L5/S1 between July 2006 and December 2012 were analyzed for pre- and postoperative spinopelvic parameters. Using univariate and logistic regression analysis, we compared the spinopelvic parameters of those patients who required revision surgery against those patients who did not develop symptomatic ALD. We calculated the predictive value of PI-LL mismatch. One hundred fifty-nine patients met the inclusion criteria. The results showed that, for a 1° increase in PI-LL mismatch (preoperative and postoperative), the odds of developing ALD requiring surgery increased by 1.3- and 1.4-fold, respectively, both statistically significant increases. Based on our analysis, a PI-LL mismatch of >11° had a positive predictive value of 75% for the development of symptomatic ALD requiring revision surgery. A high PI-LL mismatch is strongly associated with the development of symptomatic ALD requiring revision lumbar spine surgery. The development of ALD may represent a global disease process as opposed to a focal condition. Spine surgeons may wish to consider assessment of spinopelvic parameters in the evaluation of degenerative lumbar spine pathology. Copyright © 2017 by the Congress of Neurological Surgeons
Musings on cosmological relaxation and the hierarchy problem
NASA Astrophysics Data System (ADS)
Jaeckel, Joerg; Mehta, Viraf M.; Witkowski, Lukas T.
2016-03-01
Recently Graham, Kaplan and Rajendran proposed cosmological relaxation as a mechanism for generating a hierarchically small Higgs vacuum expectation value. Inspired by this we collect some thoughts on steps towards a solution to the electroweak hierarchy problem and apply them to the original model of cosmological relaxation [Phys. Rev. Lett. 115, 221801 (2015)]. To do so, we study the dynamics of the model and determine the relation between the fundamental input parameters and the electroweak vacuum expectation value. Depending on the input parameters the model exhibits three qualitatively different regimes, two of which allow for hierarchically small Higgs vacuum expectation values. One leads to standard electroweak symmetry breaking whereas in the other regime electroweak symmetry is mainly broken by a Higgs source term. While the latter is not acceptable in a model based on the QCD axion, in non-QCD models this may lead to new and interesting signatures in Higgs observables. Overall, we confirm that cosmological relaxation can successfully give rise to a hierarchically small Higgs vacuum expectation value if (at least) one model parameter is chosen sufficiently small. However, we find that the required level of tuning for achieving this hierarchy in relaxation models can be much more severe than in the Standard Model.
Artificial intelligence in mitral valve analysis.
Jeganathan, Jelliffe; Knio, Ziyad; Amador, Yannis; Hai, Ting; Khamooshian, Arash; Matyal, Robina; Khabbaz, Kamal R; Mahmood, Feroze
2017-01-01
Echocardiographic analysis of mitral valve (MV) has become essential for diagnosis and management of patients with MV disease. Currently, the various software used for MV analysis require manual input and are prone to interobserver variability in the measurements. The aim of this study is to determine the interobserver variability in an automated software that uses artificial intelligence for MV analysis. Retrospective analysis of intraoperative three-dimensional transesophageal echocardiography data acquired from four patients with normal MV undergoing coronary artery bypass graft surgery in a tertiary hospital. Echocardiographic data were analyzed using the eSie Valve Software (Siemens Healthcare, Mountain View, CA, USA). Three examiners analyzed three end-systolic (ES) frames from each of the four patients. A total of 36 ES frames were analyzed and included in the study. A multiple mixed-effects ANOVA model was constructed to determine if the examiner, the patient, and the loop had a significant effect on the average value of each parameter. A Bonferroni correction was used to correct for multiple comparisons, and P = 0.0083 was considered to be significant. Examiners did not have an effect on any of the six parameters tested. Patient and loop had an effect on the average parameter value for each of the six parameters as expected (P < 0.0083 for both). We were able to conclude that using automated analysis, it is possible to obtain results with good reproducibility, which only requires minimal user intervention.
Nedorezov, Lev V; Löhr, Bernhard L; Sadykova, Dinara L
2008-10-07
The applicability of discrete mathematical models for the description of diamondback moth (DBM) (Plutella xylostella L.) population dynamics was investigated. The parameter values for several well-known discrete time models (Skellam, Moran-Ricker, Hassell, Maynard Smith-Slatkin, and discrete logistic models) were estimated for an experimental time series from a highland cabbage-growing area in eastern Kenya. For all sets of parameters, boundaries of confidence domains were determined. Maximum calculated birth rates varied between 1.086 and 1.359 when empirical values were used for parameter estimation. After fitting of the models to the empirical trajectory, all birth rate values were considerably higher (1.742-3.526). The carrying capacity was determined to be between 13.0 and 39.9 DBM/plant; after fitting of the models these values declined to 6.48-9.3, all values well within the range encountered empirically. The application of the Durbin-Watson criterion for comparison of theoretical and experimental population trajectories produced negative correlations with all models. A test of residual value groupings for randomness showed that their distribution is non-stochastic. In consequence, we conclude that DBM dynamics cannot be explained as a result of intra-population self-regulative mechanisms only (i.e., by any of the models tested) and that more comprehensive models are required for the explanation of DBM population dynamics.
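As an illustration of fitting one of the listed maps, here is a minimal sketch of least-squares estimation of the Moran-Ricker model N(t+1) = N(t) exp(r (1 - N(t)/K)) on a made-up density series (not the Kenyan data).

```python
import numpy as np
from scipy.optimize import curve_fit

def ricker(n_t, r, K):
    """Moran-Ricker map: next density as a function of current density."""
    return n_t * np.exp(r * (1.0 - n_t / K))

# Hypothetical DBM density series (insects per plant); real data would come from field counts.
series = np.array([2.0, 5.1, 9.8, 12.5, 10.2, 11.8, 9.5, 12.0, 10.8, 11.2])
n_t, n_next = series[:-1], series[1:]

(r_hat, K_hat), _ = curve_fit(ricker, n_t, n_next, p0=[0.5, 10.0])
birth_rate = np.exp(r_hat)   # maximum per-generation multiplication rate implied by the fit
print("r=%.3f, K=%.1f insects/plant, max birth rate=%.2f" % (r_hat, K_hat, birth_rate))
```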
40 CFR 63.1412 - Continuous process vent applicability assessment procedures and methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... values, and engineering assessment control applicability assessment requirements are to be determined... by using the engineering assessment procedures in paragraph (k) of this section. (f) Volumetric flow...
40 CFR 63.1412 - Continuous process vent applicability assessment procedures and methods.
Code of Federal Regulations, 2012 CFR
2012-07-01
... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... values, and engineering assessment control applicability assessment requirements are to be determined... by using the engineering assessment procedures in paragraph (k) of this section. (f) Volumetric flow...
40 CFR 63.1412 - Continuous process vent applicability assessment procedures and methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... values, and engineering assessment control applicability assessment requirements are to be determined... by using the engineering assessment procedures in paragraph (k) of this section. (f) Volumetric flow...
40 CFR 63.1412 - Continuous process vent applicability assessment procedures and methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... values, and engineering assessment control applicability assessment requirements are to be determined... by using the engineering assessment procedures in paragraph (k) of this section. (f) Volumetric flow...
40 CFR 63.1412 - Continuous process vent applicability assessment procedures and methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... values, and engineering assessment control applicability assessment requirements are to be determined... by using the engineering assessment procedures in paragraph (k) of this section. (f) Volumetric flow...
40 CFR 1042.115 - Other requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... CONTROL OF EMISSIONS FROM NEW AND IN-USE MARINE COMPRESSION-IGNITION ENGINES AND VESSELS Emission... and electronic control modules. If you broadcast a surrogate parameter for torque values, you must... that is necessary for proper operation of the engine. (e) Prohibited controls. You may not design your...
40 CFR 1042.115 - Other requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... CONTROL OF EMISSIONS FROM NEW AND IN-USE MARINE COMPRESSION-IGNITION ENGINES AND VESSELS Emission... and electronic control modules. If you broadcast a surrogate parameter for torque values, you must... that is necessary for proper operation of the engine. (e) Prohibited controls. You may not design your...
40 CFR 1042.115 - Other requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... CONTROL OF EMISSIONS FROM NEW AND IN-USE MARINE COMPRESSION-IGNITION ENGINES AND VESSELS Emission... and electronic control modules. If you broadcast a surrogate parameter for torque values, you must... that is necessary for proper operation of the engine. (e) Prohibited controls. You may not design your...
40 CFR 1042.115 - Other requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... CONTROL OF EMISSIONS FROM NEW AND IN-USE MARINE COMPRESSION-IGNITION ENGINES AND VESSELS Emission... and electronic control modules. If you broadcast a surrogate parameter for torque values, you must... that is necessary for proper operation of the engine. (e) Prohibited controls. You may not design your...
Robust linear parameter-varying control of blood pressure using vasoactive drugs
NASA Astrophysics Data System (ADS)
Luspay, Tamas; Grigoriadis, Karolos
2015-10-01
Resuscitation of emergency care patients requires fast restoration of blood pressure to a target value to achieve hemodynamic stability and vital organ perfusion. A robust control design methodology is presented in this paper for regulating the blood pressure of hypotensive patients by means of the closed-loop administration of vasoactive drugs. To this end, a dynamic first-order delay model is utilised to describe the vasoactive drug response with varying parameters that represent intra-patient and inter-patient variability. The proposed framework consists of two components: first, an online model parameter estimation is carried out using a multiple-model extended Kalman-filter. Second, the estimated model parameters are used for continuously scheduling a robust linear parameter-varying (LPV) controller. The closed-loop behaviour is characterised by parameter-varying dynamic weights designed to regulate the mean arterial pressure to a target value. Experimental data of blood pressure response of anesthetised pigs to phenylephrine injection are used for validating the LPV blood pressure models. Simulation studies are provided to validate the online model estimation and the LPV blood pressure control using phenylephrine drug injection models representing patients showing sensitive, nominal and insensitive response to the drug.
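A minimal sketch of the first stage described above (recursive estimation of a first-order drug-response model with an extended Kalman filter); the discretization, noise levels, baseline pressure, and drug-sensitivity values here are illustrative assumptions, not the paper's identified model.

```python
import numpy as np

# Discrete first-order drug-response model: dMAP/dt = (-(MAP - MAP0) + K * u) / tau,
# with the drug sensitivity K treated as an unknown, slowly varying state.
dt, tau, MAP0 = 5.0, 60.0, 55.0            # seconds, assumed time constant, baseline MAP (mmHg)

def f(x, u):
    map_k, K = x
    map_next = map_k + dt * (-(map_k - MAP0) + K * u) / tau
    return np.array([map_next, K])

def jacobian(x, u):
    return np.array([[1.0 - dt / tau, dt * u / tau],
                     [0.0,            1.0]])

Q = np.diag([0.5, 1e-3])                   # process noise (pressure, sensitivity drift)
R = np.array([[2.0]])                      # measurement noise on MAP (mmHg^2)
H = np.array([[1.0, 0.0]])                 # only MAP is measured

x = np.array([55.0, 10.0])                 # initial guess: baseline MAP, nominal sensitivity
P = np.diag([10.0, 50.0])

rng = np.random.default_rng(2)
true_K, map_true = 25.0, 55.0
for k in range(120):
    u = 0.4                                # constant vasopressor infusion rate (illustrative units)
    map_true += dt * (-(map_true - MAP0) + true_K * u) / tau
    y = map_true + rng.normal(0, np.sqrt(R[0, 0]))
    # EKF predict
    F = jacobian(x, u)
    x = f(x, u)
    P = F @ P @ F.T + Q
    # EKF update
    S = H @ P @ H.T + R
    Kg = P @ H.T @ np.linalg.inv(S)
    x = x + Kg @ (np.array([y]) - H @ x)
    P = (np.eye(2) - Kg @ H) @ P

print("estimated drug sensitivity K ≈ %.1f (true %.1f)" % (x[1], true_K))
```

In the full scheme, the estimated sensitivity would then be used to schedule the gains of the linear parameter-varying controller; that second stage is not sketched here.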
Inverse estimation of parameters for an estuarine eutrophication model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, J.; Kuo, A.Y.
1996-11-01
An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model, using the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful to address some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, degrading of the speed of convergence may occur. Two major factors which cause degradation of the speed of convergence are cross effects among parameters and the multiple scales involved in the parameter system.
Risk management for moisture related effects in dry manufacturing processes: a statistical approach.
Quiroz, Jorge; Strong, John; Zhang, Lanju
2016-03-01
A risk- and science-based approach to controlling quality in pharmaceutical manufacturing includes a full understanding of how product attributes and process parameters relate to product performance through a proactive approach in formulation and process development. For dry manufacturing, where moisture content is not directly manipulated within the process, the variability in moisture of the incoming raw materials can impact both the processability and the drug product quality attributes. A statistical approach is developed using individual raw material historical lots as a basis for the calculation of tolerance intervals for drug product moisture content, so that risks associated with excursions in moisture content can be mitigated. The proposed method is based on a model-independent approach that uses available data to estimate parameters of interest that describe the population of blend moisture content values and does not require knowledge of the individual blend moisture content values. Another advantage of the proposed tolerance intervals is that they do not require the use of tabulated values for tolerance factors. This facilitates implementation in any spreadsheet program such as Microsoft Excel. A computational example is used to demonstrate the proposed method.
3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities
NASA Astrophysics Data System (ADS)
Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir
2016-03-01
Lung boundary image segmentation is important for many tasks including, for example, the development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date, no systematic studies have been performed regarding the range of parameters that gives accurate results. The energy function in the graph-cuts algorithm requires 3 suitable parameter settings: K, a large constant for assigning seed points; c, the similarity coefficient for n-links; and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and a value of c much larger than the value of λ influenced the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used simple 6-neighborhood systems for n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 to 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter and, furthermore, that among the range of parameters tested, K=5 and λ=0.5 yielded good results.
Regression dilution in the proportional hazards model.
Hughes, M D
1993-12-01
The problem of regression dilution arising from covariate measurement error is investigated for survival data using the proportional hazards model. The naive approach to parameter estimation is considered whereby observed covariate values are used, inappropriately, in the usual analysis instead of the underlying covariate values. A relationship between the estimated parameter in large samples and the true parameter is obtained showing that the bias does not depend on the form of the baseline hazard function when the errors are normally distributed. With high censorship, adjustment of the naive estimate by the factor 1 + lambda, where lambda is the ratio of within-person variability about an underlying mean level to the variability of these levels in the population sampled, removes the bias. As censorship increases, the adjustment required increases and when there is no censorship is markedly higher than 1 + lambda and depends also on the true risk relationship.
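A minimal numeric sketch of the high-censorship correction described above (scaling the naive log hazard ratio by the factor 1 + λ); the variance components and coefficient are invented for illustration.

```python
# Correction of a naive proportional-hazards coefficient for covariate measurement error
# under high censorship, as described above: beta_corrected = beta_naive * (1 + lambda).

within_person_var = 0.04      # variability of repeat measurements about a person's mean level
between_person_var = 0.10     # variability of underlying mean levels in the sampled population
lam = within_person_var / between_person_var

beta_naive = 0.25             # log hazard ratio estimated with the observed (error-prone) covariate
beta_corrected = beta_naive * (1.0 + lam)
print("lambda = %.2f, corrected coefficient = %.3f" % (lam, beta_corrected))
```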
Effect of Biological and Mass Transfer Parameter Uncertainty on N₂O Emission Estimates from WRRFs.
Song, Kang; Harper, Willie F; Takeuchi, Yuki; Hosomi, Masaaki; Terada, Akihiko
2017-07-01
This research used the detailed activated sludge model (ASM) to investigate the effect of parameter uncertainty on nitrous oxide (N2O) emissions from biological wastewater treatment systems. Monte Carlo simulations accounted for uncertainty in the values of the microbial growth parameters and in the volumetric mass transfer coefficient for dissolved oxygen (kLaDO), and the results show that the detailed ASM predicted N2O emission of less than 4% (typically 1%) of the total influent
Sampling ARG of multiple populations under complex configurations of subdivision and admixture.
Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi
2016-04-01
Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs that dramatically improve on the time and space requirements of the classical single-population algorithm. Using the underlying random-graphs model, we also derive closed forms of the expected values of the ARG characteristics, i.e., height of the graph, number of recombinations, number of mutations and population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for the complex scenario simulations, not through trial and error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge this is the first time closed-form expressions have been computed for the ARG properties. We show through simulations that the expected values closely match the empirical values. Finally, we demonstrate that SimRA produces the ARG in compact form without compromising any accuracy. We demonstrate the compactness and accuracy through extensive experiments. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for download at https://github.com/ComputationalGenomics/SimRA CONTACT: parida@us.ibm.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
STEWB - Simplified Transient Estimation of the Water Budget
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Simmons, C. S.; Cady, R. E.; Gee, G. W.
2001-12-01
A simplified model describing the transient water budget of a shallow unsaturated soil profile is presented. This model was developed for the U.S. Nuclear Regulatory Commission to provide estimates of the time-varying net infiltration at sites containing residual levels of radioactive materials. Ease of use, computational efficiency, and use of standard parameters and available data were requirements of the model. The model's conceptualization imposes the following simplifications: a uniform soil profile, instantaneous redistribution of infiltrated water, drainage under a unit hydraulic gradient, and no drainage from the soil profile during infiltration. The model's formulation is a revision of that originally presented by Kim et al. [WRR, 32(12):3475-3484, 1996]. Daily meteorological data are required as input. Random durations for precipitation events are generated based on an estimate of the average number of exceedances per year for the specific daily rainfall depth observed. Snow accumulation and melt are described using empirical relationships. During precipitation or snowmelt, runoff is described using an infiltration equation for ponded conditions. When no water is being applied to the profile, evapotranspiration (ET) and drainage occur. The ET rate equals the potential evapotranspiration rate, PET, above a critical value of saturation, SC. Below this critical value, ET = PET*(S/SC)**p, where S is saturation and p is an empirical parameter. Drainage flux from the profile equals the hydraulic conductivity as represented by the Brooks-Corey model. The model has been implemented with an easy-to-use graphical interface and is available at http://nrc-hydro-uncert.pnl.gov/code.htm. Comparison of the model results with lysimeter measurements will be shown, including a 50-year record from the ARS-Coshocton site in Ohio. The interpretation of parameters and the sensitivity of the model to parameter values will be discussed.
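A minimal sketch of the two relations quoted above, with the Brooks-Corey conductivity written in one common (Burdine) form; parameter names and values are hypothetical and not taken from the STEWB documentation.

```python
def actual_et(pet, s, s_c, p):
    """ET equals PET above the critical saturation SC and PET*(S/SC)**p below it."""
    return pet if s >= s_c else pet * (s / s_c) ** p

def brooks_corey_drainage(k_sat, s, s_r, lam_bc):
    """Drainage flux under a unit hydraulic gradient: hydraulic conductivity from
    the Brooks-Corey model, K = Ksat * Se**(3 + 2/lambda) (one common form)."""
    se = max(0.0, (s - s_r) / (1.0 - s_r))   # effective saturation
    return k_sat * se ** (3.0 + 2.0 / lam_bc)

# Hypothetical parameter values, for illustration only
print(actual_et(pet=5.0, s=0.35, s_c=0.5, p=2.0))                  # mm/day
print(brooks_corey_drainage(k_sat=100.0, s=0.35, s_r=0.05, lam_bc=0.4))
```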
Sensitivity Analysis of the Bone Fracture Risk Model
NASA Technical Reports Server (NTRS)
Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane
2017-01-01
Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach where distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example geometry and volumetric distributions of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying the parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model needed enhancements for reducing uncertainty. Methods: The bone fracture risk model (BFxRM), originally published by Nelson et al., is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations, including environmental factors, factors associated with the fall event, mass and anthropometric values of the astronaut, BMD characteristics, characteristics of the relationship between BMD and bone strength, and bone fracture characteristics. The uncertainty in these factors is captured through the use of parameter distributions, and the fracture predictions are probability distributions with a mean value and an associated uncertainty. To determine parameter sensitivity, a correlation coefficient is found between the sample set of each model parameter and the calculated fracture probabilities. Each parameter's contribution to the variance is found by squaring the correlation coefficients, dividing by the sum of the squared correlation coefficients, and multiplying by 100. Results: Sensitivity analyses of BFxRM simulations of preflight, 0 days post-flight and 365 days post-flight falls onto the hip revealed a subset of the twelve factors within the model which cause the most variation in the fracture predictions. These factors include the spring constant used in the hip biomechanical model, the midpoint FRI parameter within the equation used to convert FRI to fracture probability, and preflight BMD values. Future work: Plans are underway to update the BFxRM by incorporating bone strength information from finite element models (FEM) into the bone strength portion of the BFxRM. Also, FEM bone strength information, along with fracture outcome data, will be incorporated into the FRI-to-fracture-probability relationship.
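A short sketch of the sensitivity calculation as described: correlate each parameter's Monte Carlo sample with the computed fracture probabilities, square, normalize, and express as a percentage. The parameter names and the toy response below are placeholders, not the BFxRM itself.

```python
import numpy as np

def variance_contributions(samples, outputs):
    """samples: dict of parameter name -> 1D array of Monte Carlo draws.
    outputs: 1D array of the corresponding fracture probabilities.
    Returns the percent contribution of each parameter to the output variance,
    following the correlation-coefficient recipe described in the abstract."""
    r2 = {name: np.corrcoef(vals, outputs)[0, 1] ** 2 for name, vals in samples.items()}
    total = sum(r2.values())
    return {name: 100.0 * v / total for name, v in r2.items()}

rng = np.random.default_rng(1)
n = 10_000
params = {
    "spring_constant": rng.normal(1.0, 0.2, n),   # hypothetical inputs
    "midpoint_FRI": rng.normal(1.0, 0.1, n),
    "preflight_BMD": rng.normal(1.0, 0.05, n),
}
# toy response standing in for the model's fracture probability
prob = 0.5 * params["spring_constant"] + 0.3 * params["midpoint_FRI"] + rng.normal(0, 0.1, n)
print(variance_contributions(params, prob))
```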
MLBCD: a machine learning tool for big clinical data.
Luo, Gang
2015-01-01
Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise. This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data. The paper describes MLBCD's design in detail. By making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care.
Comparison of in situ uranium KD values with a laboratory determined surface complexation model
Curtis, G.P.; Fox, P.; Kohler, M.; Davis, J.A.
2004-01-01
Reactive solute transport simulations in groundwater require a large number of parameters to describe hydrologic and chemical reaction processes. Appropriate methods for determining the chemical reaction parameters required for reactive solute transport simulations are still under investigation. This work compares U(VI) distribution coefficients (i.e., KD values) measured under field conditions with KD values calculated from a surface complexation model developed in the laboratory. Field studies were conducted in an alluvial aquifer at a former U mill tailings site near the town of Naturita, CO, USA, by suspending approximately 10 g samples of Naturita aquifer background sediments (NABS) in 17 wells of 5.1-cm diameter for periods of 3 to 15 months. Adsorbed U(VI) on these samples was determined by extraction with a pH 9.45 NaHCO3/Na2CO3 solution. In wells where the chemical conditions in groundwater were nearly constant, adsorbed U concentrations for samples taken after 3 months of exposure to groundwater were indistinguishable from samples taken after 15 months. Measured in situ KD values calculated from the measurements of adsorbed and dissolved U(VI) ranged from 0.50 to 10.6 mL/g, and the KD values decreased with increasing groundwater alkalinity, consistent with increased formation of soluble U(VI)-carbonate complexes at higher alkalinities. The in situ KD values were compared with KD values predicted from a surface complexation model (SCM) developed under laboratory conditions in a separate study. A good agreement between the predicted and measured in situ KD values was observed. The demonstration that the laboratory-derived SCM can predict U(VI) adsorption in the field provides a critical independent test of a submodel used in a reactive transport model. © 2004 Elsevier Ltd. All rights reserved.
Impacts of different types of measurements on estimating unsaturated flow parameters
NASA Astrophysics Data System (ADS)
Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru
2015-05-01
This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of the EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher-accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information for inferring the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
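The abstract does not give the filter equations, so the following is a generic sketch of a single EnKF analysis step on an augmented state vector (states plus parameters), the usual construction for joint state-parameter estimation; the toy observation operator and numbers are assumptions, not the study's implementation.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_std, rng):
    """One EnKF analysis step on an augmented state (states + parameters).
    ensemble: (n_members, n_state) array; obs: (n_obs,) measured values;
    obs_operator: maps one state vector to its predicted observations."""
    n_members = ensemble.shape[0]
    predicted = np.array([obs_operator(m) for m in ensemble])   # (n_members, n_obs)
    X = ensemble - ensemble.mean(axis=0)
    Y = predicted - predicted.mean(axis=0)
    P_xy = X.T @ Y / (n_members - 1)
    P_yy = Y.T @ Y / (n_members - 1) + np.diag(np.full(obs.size, obs_err_std ** 2))
    K = P_xy @ np.linalg.inv(P_yy)                              # Kalman gain
    perturbed = obs + rng.normal(0.0, obs_err_std, size=(n_members, obs.size))
    return ensemble + (perturbed - predicted) @ K.T

# Toy example: the last entry of the state vector is an unknown soil parameter
rng = np.random.default_rng(2)
ens = rng.normal([0.30, 0.25, 1.0], [0.05, 0.05, 0.3], size=(100, 3))
updated = enkf_update(ens, obs=np.array([0.28]), obs_operator=lambda m: m[:1],
                      obs_err_std=0.01, rng=rng)
print(updated.mean(axis=0))
```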
Sensitivity of Space Station alpha joint robust controller to structural modal parameter variations
NASA Technical Reports Server (NTRS)
Kumar, Renjith R.; Cooper, Paul A.; Lim, Tae W.
1991-01-01
The photovoltaic array sun tracking control system of Space Station Freedom is described. A synthesis procedure for determining optimized values of the design variables of the control system is developed using a constrained optimization technique. The synthesis is performed to provide a given level of stability margin, to achieve the most responsive tracking performance, and to meet other design requirements. Performance of the baseline design, which is synthesized using predicted structural characteristics, is discussed, and the sensitivity of the stability margin is examined for variations of the frequencies, mode shapes and damping ratios of dominant structural modes. The design provides enough robustness to tolerate a sizeable error in the predicted modal parameters. A study was made of the sensitivity of performance indicators as the modal parameters of the dominant modes vary. The design variables are resynthesized for varying modal parameters in order to achieve the most responsive tracking performance while satisfying the design requirements. This procedure of reoptimizing the design parameters would be useful in improving the control system performance if accurate model data are provided.
14 CFR 25.1521 - Powerplant limitations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... propellers are type certificated and do not exceed the values on which compliance with any other requirement... following must be established for reciprocating engine installations: (1) Horsepower or torque, r.p.m...) Any other parameter for which a limitation has been established as part of the engine type certificate...
McCullagh, Laura; Schmitz, Susanne; Barry, Michael; Walsh, Cathal
2017-11-01
In Ireland, all new drugs for which reimbursement by the healthcare payer is sought undergo a health technology assessment by the National Centre for Pharmacoeconomics. The National Centre for Pharmacoeconomics estimates the expected value of perfect information but not the partial expected value of perfect information (owing to the computational expense associated with typical methodologies). The objective of this study was to examine the feasibility and utility of estimating the partial expected value of perfect information via a computationally efficient, non-parametric regression approach. This was a retrospective analysis of evaluations of drugs for cancer that had been submitted to the National Centre for Pharmacoeconomics (January 2010 to December 2014 inclusive). Drugs were excluded if cost effective at the submitted price. Drugs were also excluded if concerns existed regarding the validity of the applicants' submission or if the cost-effectiveness model functionality did not allow the required modifications to be made. For each included drug (n = 14), value of information was estimated at the final reimbursement price, at a threshold equivalent to the incremental cost-effectiveness ratio at that price. The expected value of perfect information was estimated from probabilistic analysis. Partial expected value of perfect information was estimated via a non-parametric approach. Input parameters with a population value of at least €1 million were identified as potential targets for research. All partial estimates were determined within minutes. Thirty parameters (across nine models) each had a value of at least €1 million. These were categorised. Collectively, survival analysis parameters were valued at €19.32 million, health state utility parameters at €15.81 million and parameters associated with the cost of treating adverse effects at €6.64 million. Those associated with drug acquisition costs and with the cost of care were valued at €6.51 million and €5.71 million, respectively. This research demonstrates that the estimation of partial expected value of perfect information via this computationally inexpensive approach could be considered feasible as part of the health technology assessment process for reimbursement purposes within the Irish healthcare system. It might be a useful tool in prioritising future research to decrease decision uncertainty.
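A sketch of how these quantities are commonly computed from probabilistic sensitivity analysis output: EVPI from the per-simulation maximum of net benefit, and a regression-based partial EVPI in the spirit of the non-parametric approach mentioned, here with a polynomial fit standing in for the smoother. All numbers are hypothetical.

```python
import numpy as np

def evpi(nb):
    """nb: (n_sims, n_strategies) net benefit from the probabilistic analysis.
    EVPI per person = E[max_d NB] - max_d E[NB]."""
    return nb.max(axis=1).mean() - nb.mean(axis=0).max()

def evppi_single_parameter(theta, nb, degree=3):
    """Regression-based partial EVPI for one parameter: regress each strategy's
    net benefit on the parameter, then apply the EVPI formula to the fitted values.
    A polynomial fit stands in here for the non-parametric smoother used in practice."""
    fitted = np.column_stack([
        np.polyval(np.polyfit(theta, nb[:, d], degree), theta)
        for d in range(nb.shape[1])
    ])
    return fitted.max(axis=1).mean() - fitted.mean(axis=0).max()

rng = np.random.default_rng(3)
n = 5000
theta = rng.normal(0.7, 0.1, n)                    # e.g. a survival-model parameter (hypothetical)
nb = np.column_stack([
    20000 * theta + rng.normal(0, 2000, n),        # new drug
    14500.0 + rng.normal(0, 500, n),               # comparator
])
print(evpi(nb), evppi_single_parameter(theta, nb))
# Population values follow by multiplying by the discounted size of the
# population affected by the decision.
```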
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may, however, have process descriptions that contain fixed, hard-coded numbers in the computer code which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the extent to which the fixed values restrict the model's agility during parameter estimation. We found 139 hard-coded values across all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially via given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. With the chosen process options, 42 standard parameters and 75 hard-coded parameters were active. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating a sub-set of, for example, only soil parameters thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
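A minimal sketch of a Sobol' analysis using the SALib package (assumed installed), with a toy function standing in for Noah-MP and illustrative parameter names; it is not the study's actual experimental setup.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for a Noah-MP flux; the names and bounds are illustrative only.
problem = {
    "num_vars": 3,
    "names": ["soil_surface_resistance", "snow_albedo_decay", "rooting_depth"],
    "bounds": [[1.0, 200.0], [0.1, 1.0], [0.2, 2.0]],
}

def toy_latent_heat(x):
    r_surf, albedo_decay, root = x
    return 100.0 / (1.0 + r_surf / 50.0) + 20.0 * albedo_decay + 5.0 * root

X = saltelli.sample(problem, 1024)              # Sobol' (Saltelli) sample
Y = np.apply_along_axis(toy_latent_heat, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))    # first-order indices
print(dict(zip(problem["names"], Si["ST"])))    # total-order indices
```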
NASA Astrophysics Data System (ADS)
Moritzer, E.; Leister, C.
2014-05-01
The industrial use of atmospheric pressure plasmas in the plastics processing industry has increased significantly in recent years. Users of this treatment process can influence the target values (e.g. bond strength or surface energy) with the help of kinematic and electrical parameters. Until now, systematic procedures with which the parameters can be adapted to the process or product requirements have existed, but they are very time-consuming. For this reason, the relationship between influencing values and target values is examined here, based on the example of a pretreatment in the bonding process, with the help of statistical experimental design. Because of the large number of parameters involved, the analysis is restricted to the kinematic and electrical parameters. In the experimental tests, the following factors are taken as parameters: gap between nozzle and substrate, treatment velocity (kinematic data), voltage and duty cycle (electrical data). The statistical evaluation shows significant relationships between the parameters and surface energy in the case of polypropylene. An increase in the voltage and duty cycle increases the polar proportion of the surface energy, while a larger gap and higher velocity lead to lower energy levels. The bond strength of the overlapping bond is also significantly influenced by the voltage, velocity and gap. The direction of their effects is identical with those on the surface energy. In addition to the kinematic influences of the motion of an atmospheric pressure plasma jet, it is therefore especially important that the parameters for the plasma production are taken into account when designing pretreatment processes.
Neuert, Mark A C; Dunning, Cynthia E
2013-09-01
Strain energy-based adaptive material models are used to predict bone resorption resulting from stress shielding induced by prosthetic joint implants. Generally, such models are governed by two key parameters: a homeostatic strain-energy state (K) and a threshold deviation from this state required to initiate bone reformation (s). A refinement procedure has been performed to estimate these parameters in the femur and glenoid; this study investigates the specific influences of these parameters on resulting density distributions in the distal ulna. A finite element model of a human ulna was created using micro-computed tomography (µCT) data, initialized to a homogeneous density distribution, and subjected to approximate in vivo loading. Values for K and s were tested, and the resulting steady-state density distribution compared with values derived from µCT images. The sensitivity of these parameters to initial conditions was examined by altering the initial homogeneous density value. The refined model parameters selected were then applied to six additional human ulnae to determine their performance across individuals. Model accuracy using the refined parameters was found to be comparable with that found in previous studies of the glenoid and femur, and gross bone structures, such as the cortical shell and medullary canal, were reproduced. The model was found to be insensitive to initial conditions; however, a fair degree of variation was observed between the six specimens. This work represents an important contribution to the study of changes in load transfer in the distal ulna following the implementation of commercial orthopedic implants.
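A sketch of a strain-energy-driven density update rule with a homeostatic setpoint K and dead-zone half-width s, in the spirit of such adaptive models; the update form, rate and constants are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def remodel_step(density, sed, K=0.004, s=0.5, rate=100.0, dt=1.0,
                 rho_min=0.01, rho_max=1.73):
    """One explicit update of a strain-energy-driven remodelling rule.
    density: element densities (g/cm^3); sed: strain energy density per element.
    Stimulus = SED / density; no change inside the dead zone (1 +/- s) * K."""
    stimulus = sed / density
    d_rho = np.zeros_like(density)
    above = stimulus > (1.0 + s) * K          # overloaded -> densify
    below = stimulus < (1.0 - s) * K          # underloaded -> resorb
    d_rho[above] = rate * (stimulus[above] - (1.0 + s) * K)
    d_rho[below] = rate * (stimulus[below] - (1.0 - s) * K)
    return np.clip(density + dt * d_rho, rho_min, rho_max)

# Hypothetical homogeneous start, as in the abstract's initialization
rho = np.full(5, 0.8)
sed = np.array([0.001, 0.003, 0.0032, 0.006, 0.009])
for _ in range(50):
    rho = remodel_step(rho, sed)
print(rho)   # elements inside the dead zone keep their initial density
```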
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Raymond H.; Truax, Ryan A.; Lankford, David A.
Solid-phase iron concentrations and generalized composite surface complexation models were used to evaluate procedures for determining uranium sorption on oxidized aquifer material at a proposed U in situ recovery (ISR) site. At the proposed Dewey Burdock ISR site in South Dakota, USA, oxidized aquifer material occurs downgradient of the U ore zones. Solid-phase Fe concentrations did not explain our batch sorption test results, though total extracted Fe appeared to be positively correlated with overall measured U sorption. Batch sorption test results were used to develop generalized composite surface complexation models that incorporated the full generic sorption potential of each sample, without detailed mineralogic characterization. The resultant models provide U sorption parameters (site densities and equilibrium constants) for reactive transport modeling. The generalized composite surface complexation sorption models were calibrated to batch sorption data from three oxidized core samples using inverse modeling, and gave larger sorption parameters than U sorption on the measured solid-phase Fe alone. These larger sorption parameters can significantly influence reactive transport modeling, potentially increasing U attenuation. Because of the limited number of calibration points, inverse modeling required reducing the number of estimated parameters by fixing two parameters. The best-fit models used fixed values for the equilibrium constants, with the sorption site densities being estimated by the inversion process. While these inverse routines did provide best-fit sorption parameters, local minima and correlated parameters might require further evaluation. Despite our limited number of proxy samples, the procedures presented provide a valuable methodology to consider for sites where metal sorption parameters are required. Furthermore, these sorption parameters can be used in reactive transport modeling to assess downgradient metal attenuation, especially when no other calibration data are available, such as at proposed U ISR sites.
NASA Astrophysics Data System (ADS)
Miyake, Shugo; Matsui, Genzou; Ohta, Hiromichi; Hatori, Kimihito; Taguchi, Kohei; Yamamoto, Suguru
2017-07-01
Thermal microscopes are a useful technology for investigating the spatial distribution of the thermal transport properties of various materials. However, for high-thermal-effusivity materials, the values of the thermophysical parameters estimated with the conventional 1D heat flow model are known to be higher than the literature values for those materials. Here, we present a new procedure to solve this problem, which calculates the theoretical temperature response with a 3D heat flow model and uses reference materials with known values of thermal effusivity and heat capacity. In general, a complicated numerical iterative method and many thermophysical parameters are required for the calculation in the 3D heat flow model. Here, we devised a simple procedure using a molybdenum (Mo) thin film with low thermal conductivity on the sample surface, enabling us to measure various materials over a wide thermal effusivity range.
SODA Repulsive Function Shaping
2017-06-16
SODA, the Swarm Orbital Dynamics Advisor, is a tool that provides the orbital maneuvers required to achieve a desired type of relative swarm motion. The SODA algorithm uses a repulsive potential that is a function of the distances between each pair of satellites. Choosing the parameters of this function is a swarm design choice, as different values can yield very different maneuvers and thus impact fuel use and mission life. This animation illustrates how the peaks of the repulsive potential function vary as certain parameters are varied.
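The description does not give the functional form, so the sketch below assumes a simple inverse-power pairwise potential purely to illustrate how strength, length-scale and exponent parameters reshape the peaks.

```python
import numpy as np

def repulsive_potential(positions, strength=1.0, length_scale=1.0, power=2):
    """Sum over satellite pairs of strength * (length_scale / d_ij)**power.
    The functional form is an assumption for illustration only; the peaks
    sharpen as `power` grows and widen as `length_scale` grows."""
    positions = np.asarray(positions, dtype=float)
    total = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            total += strength * (length_scale / d) ** power
    return total

# Three satellites in a hypothetical local frame, distances in km
swarm = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]]
for p in (1, 2, 4):
    print(p, repulsive_potential(swarm, power=p))
```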
Geographic information system/watershed model interface
Fisher, Gary T.
1989-01-01
Geographic information systems allow for the interactive analysis of spatial data related to water-resources investigations. A conceptual design for an interface between a geographic information system and a watershed model includes functions for the estimation of model parameter values. Design criteria include ease of use, minimal equipment requirements, a generic data-base management system, and use of a macro language. An application is demonstrated for a 90.1-square-kilometer subbasin of the Patuxent River near Unity, Maryland, that performs automated derivation of watershed parameters for hydrologic modeling.
NASA Astrophysics Data System (ADS)
Rodny, Marek; Nolz, Reinhard
2017-04-01
Evapotranspiration (ET) is a fundamental component of the hydrological cycle, but challenging to quantify. Lysimeter facilities, for example, can be installed and operated to determine ET, but they are costly and represent only point measurements. Therefore, lysimeter data are traditionally used to develop, calibrate, and validate models that allow calculating reference evapotranspiration (ET0) based on meteorological data, which can be measured more easily. The standardized form of the well-known FAO Penman-Monteith equation (ASCE-EWRI) is recommended as a standard procedure for estimating ET0 and subsequently plant water requirements. Applied and validated under different climatic conditions, the Penman-Monteith equation is generally known to deliver proper results. On the other hand, several studies have documented deviations between measured and calculated ET0 depending on environmental conditions. Potential reasons are, for example, differing or varying surface characteristics of the lysimeter and the location where the weather instruments are placed. Advection of sensible heat (transport of dry and hot air from surrounding areas) might be another reason for deviating ET values. However, identifying the causal processes is complex and requires comprehensive data of high quality and specific analysis techniques. In order to assess influencing factors, we correlated differences between measured and calculated ET0 with pre-selected meteorological parameters and related system parameters. Basic data were hourly ET0 values from a weighing lysimeter (ET0_lys) with a surface area of 2.85 m2 (reference crop: frequently irrigated grass), weather data (air and soil temperature, relative humidity, air pressure, wind velocity, and solar radiation), and soil water content at different depths. ET0_ref was calculated in hourly time steps according to the standardized procedure after ASCE-EWRI (2005). Deviations between both datasets were calculated as ET0_lys-ET0_ref and separated into positive and negative values. For further interpretation, we calculated daily sums of these values. The respective daily difference (positive or negative) served as the independent variable (x) in a linear correlation with a selected parameter as the dependent variable (y). Quality of correlation was evaluated by means of coefficients of determination (R2). When ET0_lys > ET0_ref, the differences were only weakly correlated with the selected parameters. Hence, the evaluation of the causal processes leading to underestimation of measured hourly ET0 seems to require a more rigorous approach. On the other hand, when ET0_lys < ET0_ref, the differences correlated considerably with the meteorological parameters and related system parameters. Interpreting the particular correlations in detail indicated different (or varying) surface characteristics between the irrigated lysimeter and the nearby (non-irrigated) meteorological station.
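A sketch of the described analysis: daily sums of the positive and of the negative hourly differences ET0_lys - ET0_ref, each regressed linearly against a candidate driver with R2 as the quality measure. The column names and the synthetic data are assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd

def daily_signed_differences(hourly):
    """hourly: DataFrame indexed by timestamp with columns 'ET0_lys' and 'ET0_ref'.
    Returns daily sums of the positive and of the negative hourly differences."""
    diff = hourly["ET0_lys"] - hourly["ET0_ref"]
    return diff.clip(lower=0.0).resample("D").sum(), diff.clip(upper=0.0).resample("D").sum()

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit y ~ a + b*x."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return 1.0 - resid.var() / y.var()

# Synthetic hourly data standing in for lysimeter and Penman-Monteith values
rng = np.random.default_rng(4)
idx = pd.date_range("2016-06-01", periods=24 * 30, freq="h")
hourly = pd.DataFrame({"ET0_ref": np.clip(rng.normal(0.2, 0.1, len(idx)), 0, None)}, index=idx)
hourly["ET0_lys"] = hourly["ET0_ref"] + rng.normal(0.0, 0.05, len(idx))
daily_pos, daily_neg = daily_signed_differences(hourly)

# Example correlation of the negative-difference days with a hypothetical daily wind speed
daily_wind = pd.Series(rng.gamma(2.0, 1.0, daily_neg.size), index=daily_neg.index)
print(r_squared(daily_neg.values, daily_wind.values))
```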
Bütof, Rebecca; Hofheinz, Frank; Zöphel, Klaus; Stadelmann, Tobias; Schmollack, Julia; Jentsch, Christina; Löck, Steffen; Kotzerke, Jörg; Baumann, Michael; van den Hoff, Jörg
2015-08-01
Despite ongoing efforts to develop new treatment options, the prognosis for patients with inoperable esophageal carcinoma is still poor, and the reliability of individual therapy outcome prediction based on clinical parameters is not convincing. The aim of this work was to investigate whether PET can provide independent prognostic information in such a patient group and whether the tumor-to-blood standardized uptake ratio (SUR) can improve the prognostic value of tracer uptake values. (18)F-FDG PET/CT was performed in 130 consecutive patients (mean age ± SD, 63 ± 11 y; 113 men, 17 women) with newly diagnosed esophageal cancer before definitive radiochemotherapy. In the PET images, the metabolically active tumor volume (MTV) of the primary tumor was delineated with an adaptive threshold method. The blood standardized uptake value (SUV) was determined by manually delineating the aorta in the low-dose CT. SUR values were computed as the ratio of tumor SUV and blood SUV. Uptake values were scan-time-corrected to 60 min after injection. Univariate Cox regression and Kaplan-Meier analysis with respect to overall survival (OS), distant metastases-free survival (DM), and locoregional tumor control (LRC) were performed. Additionally, a multivariate Cox regression including clinically relevant parameters was performed. In multivariate Cox regression with respect to OS, including T stage, N stage, and smoking state, MTV- and SUR-based parameters were significant prognostic factors for OS with similar effect size. Multivariate analysis with respect to DM revealed smoking state, MTV, and all SUR-based parameters as significant prognostic factors. The highest hazard ratios (HRs) were found for scan-time-corrected maximum SUR (HR = 3.9) and mean SUR (HR = 4.4). None of the PET parameters was associated with LRC. Univariate Cox regression with respect to LRC revealed a significant effect only for N stage greater than 0 (P = 0.048). In addition to clinical parameters, PET provides independent prognostic information for OS and DM, but not for LRC, in patients with locally advanced esophageal carcinoma treated with definitive radiochemotherapy. Among the investigated uptake-based parameters, only SUR was an independent prognostic factor for OS and DM. These results suggest that the prognostic value of tracer uptake can be improved when characterized by SUR instead of SUV. Further investigations are required to confirm these preliminary results. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
NASA Technical Reports Server (NTRS)
Parsons, C. L. (Editor)
1989-01-01
The Multimode Airborne Radar Altimeter (MARA), a flexible airborne radar remote sensing facility developed by NASA's Goddard Space Flight Center, is discussed. This volume describes the scientific justification for the development of the instrument and the translation of these scientific requirements into instrument design goals. Values for key instrument parameters are derived to accommodate these goals, and simulations and analytical models are used to estimate the developed system's performance.
Designing occupancy studies when false-positive detections occur
Clement, Matthew
2016-01-01
1. Recently, estimators have been developed to estimate occupancy probabilities when false-positive detections occur during presence-absence surveys. Some of these estimators combine different types of survey data to improve estimates of occupancy. With these estimators, there is a tradeoff between the number of sample units surveyed, and the number and type of surveys at each sample unit. Guidance on efficient design of studies when false positives occur is unavailable. 2. For a range of scenarios, I identified survey designs that minimized the mean square error of the estimate of occupancy. I considered an approach that uses one survey method and two observation states and an approach that uses two survey methods. For each approach, I used numerical methods to identify optimal survey designs when model assumptions were met and parameter values were correctly anticipated, when parameter values were not correctly anticipated, and when the assumption of no unmodelled detection heterogeneity was violated. 3. Under the approach with two observation states, false positive detections increased the number of recommended surveys, relative to standard occupancy models. If parameter values could not be anticipated, pessimism about detection probabilities avoided poor designs. Detection heterogeneity could require more or fewer repeat surveys, depending on parameter values. If model assumptions were met, the approach with two survey methods was inefficient. However, with poor anticipation of parameter values, with detection heterogeneity, or with removal sampling schemes, combining two survey methods could improve estimates of occupancy. 4. Ignoring false positives can yield biased parameter estimates, yet false positives greatly complicate the design of occupancy studies. Specific guidance for major types of false-positive occupancy models, and for two assumption violations common in field data, can conserve survey resources. This guidance can be used to design efficient monitoring programs and studies of species occurrence, species distribution, or habitat selection, when false positives occur during surveys.
NASA Technical Reports Server (NTRS)
Usry, J. W.; Whitlock, C. H.
1981-01-01
Management of water resources such as a reservoir requires using analytical models which describe such parameters as the suspended sediment field. To select or develop an appropriate model requires making many measurements to describe the distribution of this parameter in the water column. One potential method for making those measurements expeditiously is to measure light transmission or turbidity and relate that parameter to total suspended solids concentrations. An instrument which may be used for this purpose was calibrated by generating curves of transmission measurements plotted against measured values of total suspended solids concentrations and beam attenuation coefficients. Results of these experiments indicate that field measurements made with this instrument using curves generated in this study should correlate with total suspended solids concentrations and beam attenuation coefficients in the water column within 20 percent.
NASA Astrophysics Data System (ADS)
Lorite, I. J.; Mateos, L.; Fereres, E.
2005-01-01
Summary: The simulations of dynamic, spatially distributed non-linear models are impacted by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000) with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83) and (III) the whole irrigation scheme. Equally, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation on the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimations was in the scenario with the most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: a difference of 7% between aggregation levels I and III was found. Equally, it was found that temporal aggregation had a significant impact on irrigation requirement predictions only for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (especially with abundant and concentrated rain events) and in crops whose cycles coincide in part with the rainy season (garlic, winter cereals and olive). It is concluded that in this case, average, representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results within 1% of those obtained by providing spatially specific values for about 800 parcels.
Graphical User Interface for Simulink Integrated Performance Analysis Model
NASA Technical Reports Server (NTRS)
Durham, R. Caitlyn
2009-01-01
The J-2X Engine (built by Pratt & Whitney Rocketdyne) in the Upper Stage of the Ares I Crew Launch Vehicle will only start within a certain range of temperature and pressure for the Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, in order to save the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink Model, without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink Model, and get the output from the Simulink Model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI constructs a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared to other values.
Impact of orbit modeling on DORIS station position and Earth rotation estimates
NASA Astrophysics Data System (ADS)
Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav
2014-04-01
The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and ERPs derived from DORIS observations. In a series of experiments, the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in the cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting the SRP scaling parameter or fixing it to pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information or macro models, yields, for most of the monitored station parameters, accuracy comparable to the dynamical model that employs precise non-conservative force modeling. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for the x-pole and 12% for the y-pole. The experiments show that adjusting atmospheric drag scaling parameters every 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of the cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data it was, however, not possible to confirm the previously known high annual variation in the estimated geocenter z-translation series, nor its mitigation by fixing the SRP parameters to pre-defined values.
Two statistics for evaluating parameter identifiability and error reduction
Doherty, John; Hunt, Randall J.
2009-01-01
Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
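A compact sketch of the first statistic as defined above: SVD of the weighted sensitivity (Jacobian) matrix, truncation to the calibration solution space, and the direction cosine of each parameter onto that space. The toy Jacobian is invented for illustration.

```python
import numpy as np

def parameter_identifiability(jacobian, weights, n_solution):
    """jacobian: (n_obs, n_par) sensitivities dy/dp; weights: (n_obs,) observation
    weights; n_solution: dimension of the calibration solution space (number of
    retained singular values). Returns one value in [0, 1] per parameter: the
    direction cosine between the parameter axis and its projection onto the
    solution space (1 = fully identifiable, 0 = entirely in the null space)."""
    Jw = weights[:, None] * jacobian
    _, _, Vt = np.linalg.svd(Jw, full_matrices=False)
    V_sol = Vt[:n_solution, :]            # rows span the solution space
    return np.sqrt((V_sol ** 2).sum(axis=0))

# Toy example: the third parameter barely influences any observation,
# and the first two are nearly redundant with each other.
rng = np.random.default_rng(5)
base = rng.normal(size=(20, 1))
J = np.hstack([base, base + 1e-3 * rng.normal(size=(20, 1)),
               1e-6 * rng.normal(size=(20, 1))])
print(parameter_identifiability(J, np.ones(20), n_solution=1))
```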
Fully Burdened Cost of Energy Analysis: A Model for Marine Corps Systems
2013-03-01
...and the lognormal parameters are not used in the creation of the output distribution since they are not required values for a triangular distribution...
2009-03-01
Set negative pixel values = 0 (remove bad pixels): [m,n] = size(data_matrix_new); for i = 1:m, for j = 1:n, if ... everything from packaging toothpaste to high-speed fluid dynamics. While future engagements will continue to require the development of specialized...
40 CFR 1042.840 - Application requirements for remanufactured engines.
Code of Federal Regulations, 2013 CFR
2013-07-01
... other basic parameters of the engine's design and emission controls. List the fuel type on which your engines are designed to operate (for example, ultra low-sulfur diesel fuel). List each distinguishable... and the range of values for maximum engine power resulting from production tolerances, as described in...
40 CFR 1042.840 - Application requirements for remanufactured engines.
Code of Federal Regulations, 2012 CFR
2012-07-01
... other basic parameters of the engine's design and emission controls. List the fuel type on which your engines are designed to operate (for example, ultra low-sulfur diesel fuel). List each distinguishable... and the range of values for maximum engine power resulting from production tolerances, as described in...
Filatov, B N; Britanov, N G; Tochilkina, L P; Zhukov, V E; Maslennikov, A A; Ignatenko, M N; Volchek, K
2011-01-01
The threat of industrial chemical accidents and terrorist attacks requires the development of safety regulations for the cleanup of contaminated surfaces. This paper presents principles and a methodology for the development of a new toxicological parameter, "relative value unit" (RVU) as the primary decontamination standard.
Barkauskas, Kestutis J; Rajiah, Prabhakar; Ashwath, Ravi; Hamilton, Jesse I; Chen, Yong; Ma, Dan; Wright, Katherine L; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole
2014-09-11
The standard clinical acquisition for left ventricular functional parameter analysis with cardiovascular magnetic resonance (CMR) uses a multi-breathhold multi-slice segmented balanced SSFP sequence. Performing multiple long breathholds in quick succession for ventricular coverage in the short-axis orientation can lead to fatigue and is challenging in patients with severe cardiac or respiratory disorders. This study combines the encoding efficiency of a six-fold undersampled 3D stack of spirals balanced SSFP sequence with 3D through-time spiral GRAPPA parallel imaging reconstruction. This 3D spiral method requires only one breathhold to collect the dynamic data. Ten healthy volunteers were recruited for imaging at 3 T. The 3D spiral technique was compared against 2D imaging in terms of systolic left ventricular functional parameter values (Bland-Altman plots), total scan time (Welch's t-test) and qualitative image rating scores (Wilcoxon signed-rank test). Systolic left ventricular functional values were not significantly different (i.e. 3D-2D) between the methods. The 95% confidence interval for ejection fraction was -0.1 ± 1.6% (mean ± 1.96*SD). The total scan time for the 3D spiral technique was 48 s, which included one breathhold with an average duration of 14 s for the dynamic scan, plus 34 s to collect the calibration data under free-breathing conditions. The 2D method required an average of 5 min 40s for the same coverage of the left ventricle. The difference between 3D and 2D image rating scores was significantly different from zero (Wilcoxon signed-rank test, p < 0.05); however, the scores were at least 3 (i.e. average) or higher for 3D spiral imaging. The 3D through-time spiral GRAPPA method demonstrated equivalent systolic left ventricular functional parameter values, required significantly less total scan time and yielded acceptable image quality with respect to the 2D segmented multi-breathhold standard in this study. Moreover, the 3D spiral technique used just one breathhold for dynamic imaging, which is anticipated to reduce patient fatigue as part of the complete cardiac examination in future studies that include patients.
Optimization of intra-voxel incoherent motion imaging at 3.0 Tesla for fast liver examination.
Leporq, Benjamin; Saint-Jalmes, Hervé; Rabrait, Cecile; Pilleul, Frank; Guillaud, Olivier; Dumortier, Jérôme; Scoazec, Jean-Yves; Beuf, Olivier
2015-05-01
Optimization of a multi-b-value MR protocol for fast intra-voxel incoherent motion (IVIM) imaging of the liver at 3.0 Tesla. A comparison of four different acquisition protocols was carried out based on estimated IVIM parameters (DSlow, DFast, and f) and the ADC in 25 healthy volunteers. The effects of respiratory gating compared with free-breathing acquisition, of the diffusion gradient scheme (simultaneous or sequential), and of the use of weighted averaging for different b-values were assessed. An optimization study based on Cramer-Rao lower bound theory was then performed to minimize the number of b-values required for a suitable quantification. The duration-optimized protocol was evaluated on 12 patients with chronic liver diseases. No significant differences in IVIM parameters were observed between the assessed protocols. Only four b-values (0, 12, 82, and 1310 s.mm(-2)) were found necessary to perform a suitable quantification of IVIM parameters. DSlow and DFast decreased significantly between nonadvanced and advanced fibrosis (P < 0.05 and P < 0.01), whereas the perfusion fraction and ADC variations were not found to be significant. Results showed that IVIM could be performed in free breathing, with a weighted-averaging procedure, a simultaneous diffusion gradient scheme and only four optimized b-values (0, 10, 80, and 800), reducing scan duration by a factor of nine compared with a nonoptimized protocol. Preliminary results have shown that parameters such as DSlow and DFast based on the optimized IVIM protocol can be relevant biomarkers to distinguish between nonadvanced and advanced fibrosis. © 2014 Wiley Periodicals, Inc.
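A sketch of fitting the standard bi-exponential IVIM signal model to the four optimized b-values reported (0, 10, 80, 800 s/mm2); the fitting setup, starting values and bounds are illustrative and not the authors' pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, s0, f, d_fast, d_slow):
    """Bi-exponential IVIM model: perfusion fraction f with pseudo-diffusion
    coefficient d_fast, plus (1 - f) true diffusion d_slow (units mm^2/s)."""
    return s0 * (f * np.exp(-b * d_fast) + (1.0 - f) * np.exp(-b * d_slow))

b_values = np.array([0.0, 10.0, 80.0, 800.0])   # s/mm^2, optimized set from the abstract
true = dict(s0=1.0, f=0.25, d_fast=50e-3, d_slow=1.1e-3)
rng = np.random.default_rng(6)
signal = ivim_signal(b_values, **true) * (1.0 + rng.normal(0.0, 0.01, b_values.size))

popt, _ = curve_fit(
    ivim_signal, b_values, signal,
    p0=[1.0, 0.2, 20e-3, 1e-3],
    bounds=([0.0, 0.0, 1e-3, 1e-4], [2.0, 1.0, 0.5, 5e-3]),
)
print(dict(zip(["S0", "f", "Dfast", "Dslow"], popt)))
```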
Parameter Estimation of Partial Differential Equation Models.
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
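As an illustration of the conventional approach the abstract refers to — estimating a PDE parameter by repeatedly solving the PDE numerically for candidate values — a minimal Python sketch is given below; it fits a diffusion coefficient in the one-dimensional heat equation and is not the parameter cascading or Bayesian method of the paper (all values and function names are illustrative).

    import numpy as np
    from scipy.optimize import minimize_scalar

    def solve_heat(D, u0, dx, dt, n_steps):
        """Explicit finite-difference solution of u_t = D * u_xx with fixed ends."""
        u = u0.copy()
        r = D * dt / dx**2                      # stability requires r <= 0.5
        for _ in range(n_steps):
            u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        return u

    # synthetic "measurements" generated from a known diffusivity plus noise
    x = np.linspace(0.0, 1.0, 51)
    u0 = np.sin(np.pi * x)
    dx, dt, n_steps = x[1] - x[0], 1e-4, 500
    true_D = 0.8
    rng = np.random.default_rng(0)
    data = solve_heat(true_D, u0, dx, dt, n_steps) + rng.normal(0.0, 0.01, x.size)

    # conventional estimation: re-solve the PDE for every candidate parameter value
    def sse(D):
        return np.sum((solve_heat(D, u0, dx, dt, n_steps) - data) ** 2)

    D_hat = minimize_scalar(sse, bounds=(0.1, 2.0), method="bounded").x
    print(f"estimated D = {D_hat:.3f} (true value {true_D})")

Each evaluation of the objective requires a full numerical solution of the PDE, which is exactly the computational burden the basis-expansion methods of the paper are designed to avoid.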
Analysis of Indoor Environment in Classroom Based on Hygienic Requirements
NASA Astrophysics Data System (ADS)
Javorček, Miroslav; Sternová, Zuzana
2016-06-01
The article contains the analysis of experimental ventilation measurements in selected classrooms of the Elementary School Štrba. A mathematical model of a selected classroom was prepared according to in-situ measurements, and the air exchange was calculated. Interior air temperature and air quality influence the students' comfort. The evaluated data were compared to the requirements of the standard (STN EN 15251:2008) applicable to classroom indoor environments during lectures, highlighting the difference between the required ambiance quality and the actually measured values. CO2 concentration is one of the parameters indicating indoor environment quality.
Grosse, Constantino
2014-04-01
The description and interpretation of dielectric spectroscopy data usually require the use of analytical functions, which include unknown parameters that must be determined iteratively by means of a fitting procedure. This is not a trivial task and much effort has been spent to find the best way to accomplish it. While the theoretical approach based on the Levenberg-Marquardt algorithm is well known, no freely available program specifically adapted to the dielectric spectroscopy problem exists to the best of our knowledge. Moreover, even the more general commercial packages usually fail on the following aspects: (1) allowing some of the parameters to be kept temporarily fixed, (2) allowing the uncertainty values for each data point to be freely specified, (3) checking that parameter values fall within prescribed bounds during the fitting process, and (4) allowing either the real part, the imaginary part, or both parts of the complex permittivity to be fitted simultaneously. A program that satisfies all these requirements and allows fitting any superposition of the Debye, Cole-Cole, Cole-Davidson, and Havriliak-Negami dispersions plus a conductivity term to measured dielectric spectroscopy data is presented. It is available on request from the author. Copyright © 2013 Elsevier Inc. All rights reserved.
Noniterative estimation of a nonlinear parameter
NASA Technical Reports Server (NTRS)
Bergstroem, A.
1973-01-01
An algorithm is described which solves for the parameters X = (x1, x2, ..., xm) and p in an approximation problem Ax ≈ y(p), where the parameter p occurs nonlinearly in y. Instead of linearization methods, which require an approximate value of p to be supplied as a priori information and which may lead to the finding of local minima, the proposed algorithm finds the global minimum by permitting the use of series expansions of arbitrary order, exploiting the a priori knowledge that the addition of a particular function, corresponding to a new column in A, will not improve the goodness of the approximation.
NASA Astrophysics Data System (ADS)
Ashat, Ali; Pratama, Heru Berian
2017-12-01
The successful assessment of the Ciwidey-Patuha geothermal field size required integrated analysis of data from all aspects to determine the optimum capacity to be installed. Resource assessment involves significant uncertainty in subsurface information and multiple development scenarios for this field. Therefore, this paper applies an experimental design approach to the geothermal numerical simulation of Ciwidey-Patuha to generate a probabilistic resource assessment. This process assesses the impact of the evaluated parameters affecting resources and the interactions between these parameters. This methodology successfully estimated the maximum resources with a polynomial function covering the entire range of possible values of the important reservoir parameters.
Comprehensive monitoring of drinking well water quality in Seoul metropolitan city, Korea.
Kim, Ki-Hyun; Susaya, Janice P; Park, Chan Goo; Uhm, Jung-Hoon; Hur, Jin
2013-08-01
In this research, the quality of drinking well waters from 14 districts around Seoul metropolitan city, Korea was assessed by measuring a number of parameters with established guidelines (e.g., arsenic, fluoride, nitrate nitrogen, benzene, 1,2-dichloroethene, dichloromethane, copper, and lead) and without such criteria (e.g., hardness, chloride ion, sulfate ion, ammonia nitrogen, aluminum, iron, manganese, and zinc). Physical parameters such as evaporation residue (or total dissolved solids) and turbidity were also measured. The importance of each parameter in well waters was examined in terms of the magnitude and exceedance frequency of guideline values established by international (and national) health agencies. The results of this study indicate that, among the eight parameters with well-established guidelines (e.g., WHO), arsenic and lead (guideline value of 0.01 mg L(-1) for both) recorded the highest exceedance frequencies of 18 and 16 well samples, ranging over 0.06-136 and 2-9 mg L(-1), respectively. As such, a number of water quality parameters measured from many well waters in this urban area were at critical levels which require immediate attention for treatment and continuous monitoring.
A meta-cognitive learning algorithm for a Fully Complex-valued Relaxation Network.
Savitha, R; Suresh, S; Sundararajan, N
2012-08-01
This paper presents a meta-cognitive learning algorithm for a single hidden layer complex-valued neural network called the "Meta-cognitive Fully Complex-valued Relaxation Network (McFCRN)". McFCRN has two components: a cognitive component and a meta-cognitive component. A Fully Complex-valued Relaxation Network (FCRN) with a fully complex-valued Gaussian-like activation function (sech) in the hidden layer and an exponential activation function in the output layer forms the cognitive component. The meta-cognitive component contains a self-regulatory learning mechanism which controls the learning ability of FCRN by deciding what-to-learn, when-to-learn and how-to-learn from a sequence of training data. The input parameters of the cognitive component are chosen randomly and the output parameters are estimated by minimizing a logarithmic error function. The problem of explicit minimization of magnitude and phase errors in the logarithmic error function is converted to a system of linear equations, and the output parameters of FCRN are computed analytically. McFCRN starts with zero hidden neurons and builds up the number of neurons required to approximate the target function. The meta-cognitive component selects the best learning strategy for FCRN to acquire the knowledge from the training data and also adapts the learning strategies to implement the best human learning components. Performance studies on function approximation and real-valued classification problems show that the proposed McFCRN performs better than the existing results reported in the literature. Copyright © 2012 Elsevier Ltd. All rights reserved.
Application of modern radiative transfer tools to model laboratory quartz emissivity
NASA Astrophysics Data System (ADS)
Pitman, Karly M.; Wolff, Michael J.; Clayton, Geoffrey C.
2005-08-01
Planetary remote sensing of regolith surfaces requires use of theoretical models for interpretation of constituent grain physical properties. In this work, we review and critically evaluate past efforts to strengthen numerical radiative transfer (RT) models with comparison to a trusted set of nadir incidence laboratory quartz emissivity spectra. By first establishing a baseline statistical metric to rate successful model-laboratory emissivity spectral fits, we assess the efficacy of hybrid computational solutions (Mie theory + numerically exact RT algorithm) to calculate theoretical emissivity values for micron-sized α-quartz particles in the thermal infrared (2000-200 cm-1) wave number range. We show that Mie theory, a widely used but poor approximation to irregular grain shape, fails to produce the single scattering albedo and asymmetry parameter needed to arrive at the desired laboratory emissivity values. Through simple numerical experiments, we show that corrections to single scattering albedo and asymmetry parameter values generated via Mie theory become more necessary with increasing grain size. We directly compare the performance of diffraction subtraction and static structure factor corrections to the single scattering albedo, asymmetry parameter, and emissivity for dense packing of grains. Through these sensitivity studies, we provide evidence that, assuming RT methods work well given sufficiently well-quantified inputs, assumptions about the scatterer itself constitute the most crucial aspect of modeling emissivity values.
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A. R.
2018-01-01
Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied to cases in which the transformation parameters are generally large, for which no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can be provided directly. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
NASA Astrophysics Data System (ADS)
Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin
2018-06-01
Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
Algorithmic detectability threshold of the stochastic block model
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
Adaptive control based on retrospective cost optimization
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S. (Inventor); Santillo, Mario A. (Inventor)
2012-01-01
A discrete-time adaptive control law for stabilization, command following, and disturbance rejection that is effective for systems that are unstable, MIMO, and/or nonminimum phase. The adaptive control algorithm includes guidelines concerning the modeling information needed for implementation. This information includes the relative degree, the first nonzero Markov parameter, and the nonminimum-phase zeros. Except when the plant has nonminimum-phase zeros whose absolute value is less than the plant's spectral radius, the required zero information can be approximated by a sufficient number of Markov parameters. No additional information about the poles or zeros need be known. Numerical examples are presented to illustrate the algorithm's effectiveness in handling systems with errors in the required modeling data, unknown latency, sensor noise, and saturation.
Artificial Intelligence in Mitral Valve Analysis
Jeganathan, Jelliffe; Knio, Ziyad; Amador, Yannis; Hai, Ting; Khamooshian, Arash; Matyal, Robina; Khabbaz, Kamal R; Mahmood, Feroze
2017-01-01
Background: Echocardiographic analysis of mitral valve (MV) has become essential for diagnosis and management of patients with MV disease. Currently, the various software used for MV analysis require manual input and are prone to interobserver variability in the measurements. Aim: The aim of this study is to determine the interobserver variability in an automated software that uses artificial intelligence for MV analysis. Settings and Design: Retrospective analysis of intraoperative three-dimensional transesophageal echocardiography data acquired from four patients with normal MV undergoing coronary artery bypass graft surgery in a tertiary hospital. Materials and Methods: Echocardiographic data were analyzed using the eSie Valve Software (Siemens Healthcare, Mountain View, CA, USA). Three examiners analyzed three end-systolic (ES) frames from each of the four patients. A total of 36 ES frames were analyzed and included in the study. Statistical Analysis: A multiple mixed-effects ANOVA model was constructed to determine if the examiner, the patient, and the loop had a significant effect on the average value of each parameter. A Bonferroni correction was used to correct for multiple comparisons, and P = 0.0083 was considered to be significant. Results: Examiners did not have an effect on any of the six parameters tested. Patient and loop had an effect on the average parameter value for each of the six parameters as expected (P < 0.0083 for both). Conclusion: We were able to conclude that using automated analysis, it is possible to obtain results with good reproducibility, which only requires minimal user intervention. PMID:28393769
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
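The Monte-Carlo divergence estimate at the heart of SURE-based parameter selection can be sketched for a simple real-valued denoiser as follows (a hedged illustration only: the paper applies a weighted k-space SURE to complex-valued non-Cartesian MRI reconstructions, whereas this toy example tunes a soft-threshold parameter; all names and values are illustrative).

    import numpy as np

    def soft_threshold(y, lam):
        """Toy 'reconstruction': soft-thresholding denoiser."""
        return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

    def mc_sure(y, sigma, denoise, eps=1e-3, rng=None):
        """Monte-Carlo SURE estimate of the MSE of denoise(y) (real-valued sketch)."""
        rng = rng or np.random.default_rng(0)
        n = y.size
        b = rng.choice([-1.0, 1.0], size=n)               # random probe vector
        fy = denoise(y)
        div = b @ (denoise(y + eps * b) - fy) / eps        # divergence estimate
        return np.sum((fy - y) ** 2) / n - sigma**2 + 2.0 * sigma**2 * div / n

    # pick the threshold that minimizes the SURE estimate (no ground truth needed)
    rng = np.random.default_rng(1)
    x_true = np.concatenate([np.zeros(900), rng.normal(0.0, 3.0, 100)])
    sigma = 1.0
    y = x_true + rng.normal(0.0, sigma, x_true.size)
    lams = np.linspace(0.0, 4.0, 41)
    sure_vals = [mc_sure(y, sigma, lambda v, l=l: soft_threshold(v, l)) for l in lams]
    mse_vals = [np.mean((soft_threshold(y, l) - x_true) ** 2) for l in lams]
    print("lambda minimizing SURE     :", lams[int(np.argmin(sure_vals))])
    print("lambda minimizing true MSE :", lams[int(np.argmin(mse_vals))])

The key property is that the SURE surrogate is computed from the data and the reconstruction algorithm's output alone, so the regularization parameter can be tuned without access to the ground-truth image.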
Lomnitz, Jason G.; Savageau, Michael A.
2016-01-01
Mathematical models of biochemical systems provide a means to elucidate the link between the genotype, environment, and phenotype. A subclass of mathematical models, known as mechanistic models, quantitatively describe the complex non-linear mechanisms that capture the intricate interactions between biochemical components. However, the study of mechanistic models is challenging because most are analytically intractable and involve large numbers of system parameters. Conventional methods to analyze them rely on local analyses about a nominal parameter set and they do not reveal the vast majority of potential phenotypes possible for a given system design. We have recently developed a new modeling approach that does not require estimated values for the parameters initially and inverts the typical steps of the conventional modeling strategy. Instead, this approach relies on architectural features of the model to identify the phenotypic repertoire and then predict values for the parameters that yield specific instances of the system that realize desired phenotypic characteristics. Here, we present a collection of software tools, the Design Space Toolbox V2 based on the System Design Space method, that automates (1) enumeration of the repertoire of model phenotypes, (2) prediction of values for the parameters for any model phenotype, and (3) analysis of model phenotypes through analytical and numerical methods. The result is an enabling technology that facilitates this radically new, phenotype-centric, modeling approach. We illustrate the power of these new tools by applying them to a synthetic gene circuit that can exhibit multi-stability. We then predict values for the system parameters such that the design exhibits 2, 3, and 4 stable steady states. In one example, inspection of the basins of attraction reveals that the circuit can count between three stable states by transient stimulation through one of two input channels: a positive channel that increases the count, and a negative channel that decreases the count. This example shows the power of these new automated methods to rapidly identify behaviors of interest and efficiently predict parameter values for their realization. These tools may be applied to understand complex natural circuitry and to aid in the rational design of synthetic circuits. PMID:27462346
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inversion simulation has been widely applied in groundwater modeling. Compared to traditional forward modeling, inverse modeling offers more room for study. Zonation and cell-by-cell inverse modeling are the conventional methods, and the pilot point method lies between them. The traditional inverse modeling method often uses software that divides the model into several zones with only a few parameters to be inverted. However, such a distribution is usually too simple, and the simulation results deviate. Inverting cell by cell would, in theory, recover the most realistic parameter distribution, but it requires great computational effort and a large quantity of survey data for geostatistical simulation of the area. Compared to these methods, the pilot point approach distributes a set of points throughout the different model domains for parameter estimation. Property values are assigned to model cells by Kriging to ensure the heterogeneity of the parameters within geological units. It reduces the requirements on geostatistical data for the simulation area and bridges the gap between the above methods. Pilot points can save calculation time and improve the goodness of fit, and they also reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we use pilot points in a field whose structural formation heterogeneity and hydraulic parameters were unknown. We compare the inverse modeling results of the zonation and pilot point methods. Through comparative analysis, we explore the characteristics of pilot points in groundwater inversion modeling. First, the modeler generates an initial spatially correlated field given a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. Second, Kriging is used to obtain the values of the field functions over the model domain on the basis of their values at measurement and pilot point locations (hydraulic conductivity); pilot points are then assigned to the interpolated field, which has been divided into 4 zones, and a range of disturbance values is added to the inversion targets to calculate the hydraulic conductivity. Third, after the inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum values of the parameters is an optimization problem. After the inversion modeling, the following major conclusions can be drawn: (1) In a field whose structural formation is heterogeneous, the results of the pilot point method are more realistic: better fitting of the parameters and more stable numerical simulation (stable residual distribution). Compared to zonation, it better reflects the heterogeneity of the study field. (2) The pilot point method ensures that each parameter is sensitive and not entirely dependent on other parameters, which guarantees the relative independence and authenticity of the parameter estimation results. However, it costs more calculation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
Impact of moisture content in AAC on its heat insulation properties
NASA Astrophysics Data System (ADS)
Rubene, S.; Vilnitis, M.
2017-10-01
One of the most popular trends in the construction industry is sustainable construction. Therefore, the application of construction materials with high insulation characteristics has increased significantly during the past decade. The application of construction materials with high insulation parameters is driven not only by energy saving and the idea of sustainable construction but also by legislative requirements. Autoclaved aerated concrete (AAC) is a load-bearing construction material with high heat insulation parameters. However, if the AAC masonry construction has a high moisture content, the heat insulation properties of the material decrease significantly. This fact leads to the necessity of on-site control of the moisture content in AAC in order to avoid inconsistency between the designed and actual thermal resistivity values of external delimiting constructions. Research on the impact of moisture content in AAC on its heat insulation properties is presented in this paper.
Detail design of empennage of an unmanned aerial vehicle
NASA Astrophysics Data System (ADS)
Sarker, Md. Samad; Panday, Shoyon; Rasel, Md; Salam, Md. Abdus; Faisal, Kh. Md.; Farabi, Tanzimul Hasan
2017-12-01
In order to maintain the operational continuity of air defense systems, an autonomous or remotely controlled unmanned aerial vehicle (UAV) plays a great role as a target for anti-aircraft weapons. The aerial vehicle must comply with the requirements of high speed, remotely controlled tracking and navigational aids, operational sustainability and sufficient loiter time. It can also be used for aerial reconnaissance, ground surveillance and other intelligence operations. This paper aims to develop a complete tail design of an unmanned aerial vehicle using a Systems Engineering approach. The design fulfils the requirements of longitudinal and directional trim, stability and control provided by the horizontal and vertical tail. Tail control surfaces are designed to provide sufficient control of the aircraft in critical conditions. Design parameters obtained from the wing design are utilized in the tail design process as required. Through chronological calculations and successive iterations, optimum values of 26 tail design parameters are determined.
The Value of Information in Decision-Analytic Modeling for Malaria Vector Control in East Africa.
Kim, Dohyeong; Brown, Zachary; Anderson, Richard; Mutero, Clifford; Miranda, Marie Lynn; Wiener, Jonathan; Kramer, Randall
2017-02-01
Decision analysis tools and mathematical modeling are increasingly emphasized in malaria control programs worldwide to improve resource allocation and address ongoing challenges with sustainability. However, such tools require substantial scientific evidence, which is costly to acquire. The value of information (VOI) has been proposed as a metric for gauging the value of reduced model uncertainty. We apply this concept to an evidence-based Malaria Decision Analysis Support Tool (MDAST) designed for application in East Africa. In developing MDAST, substantial gaps in the scientific evidence base were identified regarding insecticide resistance in malaria vector control and the effectiveness of alternative mosquito control approaches, including larviciding. We identify four entomological parameters in the model (two for insecticide resistance and two for larviciding) that involve high levels of uncertainty and to which outputs in MDAST are sensitive. We estimate and compare a VOI for combinations of these parameters in evaluating three policy alternatives relative to a status quo policy. We find that having perfect information on the uncertain parameters could improve program net benefits by up to 5-21%, with the highest VOI associated with jointly eliminating uncertainty about the reproductive speed of malaria-transmitting mosquitoes and the initial efficacy of larviciding at reducing the emergence of new adult mosquitoes. Future research on parameter uncertainty in decision analysis of malaria control policy should investigate the VOI with respect to other aspects of malaria transmission (such as antimalarial resistance), the costs of reducing uncertainty in these parameters, and the extent to which imperfect information about these parameters can improve payoffs. © 2016 Society for Risk Analysis.
Perandini, Alessio; Perandini, Simone; Montemezzi, Stefania; Bonin, Cecilia; Bellini, Gaia; Bergamini, Valentino
2018-02-01
Deep endometriosis of the rectum is a highly challenging disease, and a surgical approach is often needed to restore anatomy and function. Two kinds of surgery may be performed: radical, with segmental bowel resection, or conservative, without resection. Most patients undergo magnetic resonance imaging (MRI) before surgery, but there is currently no method to predict whether conservative surgery is feasible or whether bowel resection is required. The aim of this study was to create an algorithm that could predict bowel resection using MRI images, that was easy to apply and could be useful in a clinical setting, in order to adequately discuss informed consent with the patient and plan an appropriate and efficient surgical session. We collected medical records from 2010 to 2016 and reviewed the MRI results of 52 patients to detect any parameters that could predict bowel resection. Parameters that were reproducible and significantly correlated with radical surgery were investigated by statistical regression and combined into an algorithm to give the best prediction of resection. The calculation of two parameters in MRI, impact angle and lesion size, and their use in a mathematical algorithm permit us to predict bowel resection with a positive predictive value of 87% and a negative predictive value of 83%. MRI could be of value in predicting the need for bowel resection in deep endometriosis of the rectum. Further research is required to assess the possibility of a wider application of this algorithm outside our single-center study. © 2017 Japan Society of Obstetrics and Gynecology.
Statistical inference involving binomial and negative binomial parameters.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2009-05-01
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
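A small Monte-Carlo sketch of the sampling scheme described above — negative binomial (geometric) sampling before the first success and binomial sampling after it — is given below; the naive estimators and the bias observation are illustrative and are not the tests derived in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.3                      # common success probability under the null
    n_after = 30                 # binomial sample size after the first success
    reps = 10000
    est_before, est_after = [], []
    for _ in range(reps):
        # negative binomial (geometric) sampling: trials until the first success
        n_trials = rng.geometric(p)
        est_before.append(1.0 / n_trials)          # MLE of p from geometric sampling
        # binomial sampling after the first success
        est_after.append(rng.binomial(n_after, p) / n_after)

    print("mean estimate before first success:", np.mean(est_before))  # biased upward
    print("mean estimate after first success :", np.mean(est_after))   # ~unbiased

The upward bias of the pre-success estimator is one reason dedicated tests, rather than a naive comparison of the two point estimates, are needed.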
PV systems photoelectric parameters determining for field conditions and real operation conditions
NASA Astrophysics Data System (ADS)
Shepovalova, Olga V.
2018-05-01
In this work, research experience and reference documentation related to the determination of the photoelectric parameters of PV systems (PV array output parameters) have been generalized. The basic method is presented that makes it possible to determine photoelectric parameters with state-of-the-art reliability and repeatability. This method provides an effective tool for the comparison of PV systems and for the evaluation of the compliance of the PV system parameters that the end user will obtain in the course of real operation with those stipulated in the reference documentation. The method takes into consideration all parameters that may affect photoelectric performance and that are supported by sufficiently valid procedures for testing their values. Test conditions, requirements for the equipment subject to tests, and test preparations have been established, and the test procedure for a fully equipped PV system in field tests and in real operating conditions has been described.
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
ERIC Educational Resources Information Center
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
Code of Federal Regulations, 2013 CFR
2013-07-01
... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and after every deviation. Accuracy audit methods include comparisons of sensor values with electronic signal simulations or...
Code of Federal Regulations, 2014 CFR
2014-07-01
... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and after every deviation. Accuracy audit methods include comparisons of sensor values with electronic signal simulations or...
Code of Federal Regulations, 2012 CFR
2012-07-01
... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and after every deviation. Accuracy audit methods include comparisons of sensor values with electronic signal simulations or...
Code of Federal Regulations, 2012 CFR
2012-07-01
...) Locate the temperature sensor in a position that provides a representative temperature. (ii) Use a temperature sensor with a measurement sensitivity of 4 degrees Fahrenheit or 0.75 percent of the temperature value, whichever is larger. (iii) Shield the temperature sensor system from electromagnetic interference...
Code of Federal Regulations, 2013 CFR
2013-07-01
...) Locate the temperature sensor in a position that provides a representative temperature. (ii) Use a temperature sensor with a measurement sensitivity of 4 degrees Fahrenheit or 0.75 percent of the temperature value, whichever is larger. (iii) Shield the temperature sensor system from electromagnetic interference...
40 CFR 98.255 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... during unit operation or if a required fuel sample is not taken), a substitute data value for the missing...
40 CFR 98.255 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... during unit operation or if a required fuel sample is not taken), a substitute data value for the missing...
40 CFR 98.255 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... during unit operation or if a required fuel sample is not taken), a substitute data value for the missing...
40 CFR 98.255 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... during unit operation or if a required fuel sample is not taken), a substitute data value for the missing...
40 CFR 98.255 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... during unit operation or if a required fuel sample is not taken), a substitute data value for the missing...
Calculation of the Poisson cumulative distribution function
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.
1990-01-01
A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
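One standard way to avoid underflow and overflow is to accumulate the Poisson probability mass terms through a log-space recursion; the following Python sketch illustrates that idea together with a simple bisection for the inverse problem mentioned in the abstract (it is not necessarily the algorithm of the report; names and tolerances are illustrative).

    import math

    def poisson_cdf(k, lam):
        """P(X <= k) for X ~ Poisson(lam), summing pmf terms via log-space recursion."""
        if k < 0:
            return 0.0
        # log pmf at 0 is -lam; successive terms obey log p(i) = log p(i-1) + log(lam/i)
        log_term = -lam
        total = math.exp(log_term)
        for i in range(1, k + 1):
            log_term += math.log(lam) - math.log(i)
            total += math.exp(log_term)
        return min(total, 1.0)

    def poisson_lambda_for_cdf(k, target, lo=1e-9, hi=1e6, tol=1e-10):
        """Find lam such that P(X <= k) equals a specified cdf value (bisection)."""
        while hi - lo > tol * max(1.0, hi):
            mid = 0.5 * (lo + hi)
            if poisson_cdf(k, mid) > target:   # the cdf decreases as lam grows
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    print(poisson_cdf(10, 5.0))                 # ~0.986
    print(poisson_lambda_for_cdf(10, 0.95))     # lam giving P(X <= 10) = 0.95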
Structure of thermal pair clouds around gamma-ray-emitting black holes
NASA Technical Reports Server (NTRS)
Liang, Edison P.
1991-01-01
Using certain simplifying assumptions, the general structure of a quasi-spherical thermal pair-balanced cloud surrounding an accreting black hole is derived from first principles. Pair-dominated hot solutions exist only for a restricted range of the viscosity parameter. These results are applied as examples to the 1979 HEAO 3 gamma-ray data of Cygnus X-1 and the Galactic center. Values are obtained for the viscosity parameter lying in the range of about 0.1-0.01. Since the lack of synchrotron soft photons requires the magnetic field to be typically less than 1 percent of the equipartition value, a magnetic field cannot be the main contributor to the viscous stress of the inner accretion flow, at least during the high gamma-ray states.
Design optimum frac jobs using virtual intelligence techniques
NASA Astrophysics Data System (ADS)
Mohaghegh, Shahab; Popa, Andrei; Ameri, Sam
2000-10-01
Designing optimal frac jobs is a complex and time-consuming process. It usually involves the use of a two- or three-dimensional computer model. For the computer models to perform as intended, a wealth of input data is required. The input data includes wellbore configuration and reservoir characteristics such as porosity, permeability, stress and thickness profiles of the pay layers as well as the overburden layers. Other essential information required for the design process includes fracturing fluid type and volume, proppant type and volume, injection rate, proppant concentration and the frac job schedule. Some of the parameters, such as fluid and proppant types, have discrete possible choices. Other parameters, such as fluid and proppant volume, on the other hand, assume values from within a range of minimum and maximum values. A potential frac design for a particular pay zone is a combination of all of these parameters. Finding the optimum combination is not a trivial process. It usually requires an experienced engineer and a considerable amount of time to tune the parameters in order to achieve a desirable outcome. This paper introduces a new methodology that integrates two virtual intelligence techniques, namely artificial neural networks and genetic algorithms, to automate and simplify the optimum frac job design process. This methodology requires little input from the engineer beyond the reservoir characterization and wellbore configuration. The software tool that has been developed based on this methodology uses the reservoir characteristics and an optimization criterion indicated by the engineer, for example a certain propped frac length, and provides the details of the optimum frac design that will satisfy the specified criterion. An ensemble of neural networks is trained to mimic the two- or three-dimensional frac simulator. Once successfully trained, these networks are capable of providing instantaneous results in response to any set of input parameters. These networks are used as the fitness function for a genetic algorithm routine that searches for the best combination of the design parameters for the frac job. The genetic algorithm searches through the entire solution space and identifies the optimal combination of parameters to be used in the design process. Considering the complexity of this task, this methodology converges relatively fast, providing the engineer with several near-optimum scenarios for the frac job design. These scenarios, which can be obtained in just a minute or two, can be valuable initial points for the engineer to start his/her design job and save him/her hours of runs on the simulator.
NASA Technical Reports Server (NTRS)
Tatnall, Chistopher R.
1998-01-01
The counter-rotating pair of wake vortices shed by flying aircraft can pose a threat to ensuing aircraft, particularly on landing approach. To allow adequate time for the vortices to disperse/decay, landing aircraft are required to maintain certain fixed separation distances. The Aircraft Vortex Spacing System (AVOSS), under development at NASA, is designed to prescribe safe aircraft landing approach separation distances appropriate to the ambient weather conditions. A key component of the AVOSS is a ground sensor, to ensure safety by making wake observations to verify predicted behavior. This task requires knowledge of a flowfield strength metric which gauges the severity of disturbance an encountering aircraft could potentially experience. Several proposed strength metric concepts are defined and evaluated for various combinations of metric parameters and sensor line-of-sight elevation angles. Representative populations of generating and following aircraft types are selected, and their associated wake flowfields are modeled using various wake geometry definitions. Strength metric candidates are then rated and compared based on the correspondence of their computed values to associated aircraft response values, using basic statistical analyses.
Strategies for Efficient Computation of the Expected Value of Partial Perfect Information
Madan, Jason; Ades, Anthony E.; Price, Malcolm; Maitland, Kathryn; Jemutai, Julie; Revill, Paul; Welton, Nicky J.
2014-01-01
Expected value of information methods evaluate the potential health benefits that can be obtained from conducting new research to reduce uncertainty in the parameters of a cost-effectiveness analysis model, hence reducing decision uncertainty. Expected value of partial perfect information (EVPPI) provides an upper limit to the health gains that can be obtained from conducting a new study on a subset of parameters in the cost-effectiveness analysis and can therefore be used as a sensitivity analysis to identify parameters that most contribute to decision uncertainty and to help guide decisions around which types of study are of most value to prioritize for funding. A common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples are obtained, and incorrect results can be obtained if correlations between parameters are not dealt with appropriately. In this article, we set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterization of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations. For each method, we set out the generalized functional form that net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model in which approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria. PMID:24449434
Nakamura, N; Inaba, Y; Aota, Y; Oba, M; Machida, J; N Aida; Kurosawa, K; Saito, T
2016-12-01
To determine the normal values and usefulness of the C1/4 space available for spinal cord (SAC) ratio and the C1 inclination angle, which are new radiological parameters for assessing atlantoaxial instability in children with Down syndrome. We recruited 272 children with Down syndrome (including 14 who underwent surgical treatment), and 141 children in the control group. All were aged between two and 11 years. The C1/4 SAC ratio, C1 inclination angle, atlas-dens interval (ADI), and SAC were measured in those with Down syndrome, and the C1/4 SAC ratio and C1 inclination angle were measured in the control group. The mean C1/4 SAC ratios in children with Down syndrome requiring surgery, children with Down syndrome not requiring surgery, and controls were 0.63 (standard deviation (sd) 0.1), 1.15 (sd 0.13) and 1.29 (sd 0.14), respectively, and the mean C1 inclination angles were -3.1° (sd 10.7°), 15.8° (sd 7.3°) and 17.2° (sd 7.3°) in these three groups, respectively. The mean ADI and SAC in those with Down syndrome requiring surgery and those with Down syndrome not requiring surgery were 9.8 mm (sd 2.8) and 4.3 mm (sd 1.0), and 11.1 mm (sd 2.6) and 18.5 mm (sd 2.4), respectively. The normal values of the C1/4 SAC ratio and the C1 inclination angle were found to be about 1.2 and 15°, respectively. Cite this article: Bone Joint J 2016;98-B:1704-10. ©2016 The British Editorial Society of Bone & Joint Surgery.
Methods of Optimizing X-Ray Optical Prescriptions for Wide-Field Applications
NASA Technical Reports Server (NTRS)
Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.; Weisskopf, M. C.
2010-01-01
We are working on the development of a method for optimizing wide-field x-ray telescope mirror prescriptions, including polynomial coefficients, mirror shell relative displacements, and (assuming 4 focal plane detectors) detector placement and tilt, that does not require a search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough that second-order expansions are valid, we show that the performance at the detector surface can be expressed as a quadratic function of the parameters with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The best values for the parameters are found by solving the linear system of equations created by setting the derivatives of this function with respect to each parameter to zero. We describe the present status of this development effort.
Limits of Predictability in Commuting Flows in the Absence of Data for Calibration
Yang, Yingxiang; Herrera, Carlos; Eagle, Nathan; González, Marta C.
2014-01-01
The estimation of commuting flows at different spatial scales is a fundamental problem for different areas of study. Many current methods rely on parameters requiring calibration from empirical trip volumes. Their values are often not generalizable to cases without calibration data. To solve this problem we develop a statistical expression to calculate commuting trips with a quantitative functional form to estimate the model parameter when empirical trip data is not available. We calculate commuting trip volumes at scales from within a city to an entire country, introducing a scaling parameter α to the recently proposed parameter free radiation model. The model requires only widely available population and facility density distributions. The parameter can be interpreted as the influence of the region scale and the degree of heterogeneity in the facility distribution. We explore in detail the scaling limitations of this problem, namely under which conditions the proposed model can be applied without trip data for calibration. On the other hand, when empirical trip data is available, we show that the proposed model's estimation accuracy is as good as other existing models. We validated the model in different regions in the U.S., then successfully applied it in three different countries. PMID:25012599
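For reference, the parameter-free radiation model that the abstract extends can be sketched as follows; the scaling parameter α introduced by the authors is not implemented here because its exact functional form is specified in the paper, and the toy populations and trip counts are purely illustrative.

    import numpy as np

    def radiation_flows(coords, pop, out_trips):
        """Parameter-free radiation model T_ij on point locations (brute force)."""
        n = len(pop)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        T = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                # s_ij: population strictly inside the circle of radius d_ij around i,
                # excluding the origin (distance 0) and the destination (distance d_ij)
                s = pop[d[i] < d[i, j]].sum() - pop[i]
                T[i, j] = out_trips[i] * pop[i] * pop[j] / (
                    (pop[i] + s) * (pop[i] + pop[j] + s))
        return T

    # toy example: 5 locations with random coordinates and populations
    rng = np.random.default_rng(0)
    coords = rng.uniform(0.0, 100.0, size=(5, 2))
    pop = rng.integers(1_000, 50_000, size=5).astype(float)
    out_trips = 0.1 * pop                      # assume 10% of residents commute out
    print(radiation_flows(coords, pop, out_trips).round(1))

The model requires only population (or facility) densities and trip productions, which is what makes it applicable where no empirical trip data exist for calibration.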
Koski, Antti; Tossavainen, Timo; Juhola, Martti
2004-01-01
Electrocardiogram (ECG) signals are the most prominent biomedical signal type used in clinical medicine. Their compression is important and widely researched in the medical informatics community. In the previous literature compression efficacy has been investigated only in the context of how much known or developed methods reduced the storage required by compressed forms of original ECG signals. Sometimes statistical signal evaluations based on, for example, root mean square error were studied. In previous research we developed a refined method for signal compression and tested it jointly with several known techniques for other biomedical signals. Our method of so-called successive approximation quantization used with wavelets was one of the most successful in those tests. In this paper, we studied to what extent these lossy compression methods altered values of medical parameters (medical information) computed from signals. Since the methods are lossy, some information is lost due to the compression when a high enough compression ratio is reached. We found that ECG signals sampled at 400 Hz could be compressed to one fourth of their original storage space, but the values of their medical parameters changed less than 5% due to compression, which indicates reliable results.
Effect of electron-hole asymmetry on optical conductivity in 8 -P m m n borophene
NASA Astrophysics Data System (ADS)
Verma, Sonu; Mawrie, Alestin; Ghosh, Tarun Kanti
2017-10-01
We present a detailed theoretical study of the Drude weight and optical conductivity of 8-P m m n borophene having tilted anisotropic Dirac cones. We provide exact analytical expressions of x x and y y components of the Drude weight as well as maximum optical conductivity. We also obtain exact analytical expressions of the minimum energy (ɛ1) required to trigger the optical transitions and energy (ɛ2) needed to attain maximum optical conductivity. We find that the Drude weight and optical conductivity are highly anisotropic as a consequence of the anisotropic Dirac cone. The optical conductivities have a nonmonotonic behavior with photon energy in the regime between ɛ1 and ɛ2, as a result of the tilted parameter vt. The tilted parameter can be extracted by knowing ɛ1 and ɛ2 from optical measurements. The maximum values of the components of the optical conductivity do not depend on the carrier density and the tilted parameter. The product of the maximum values of the anisotropic conductivities has the universal value (e2/4ℏ ) 2. The tilted anisotropic Dirac cones in 8-P m m n borophene can be realized by the optical conductivity measurement.
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-05-01
A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ = λdet(t) + λrnd(t), where λdet(t) is a deterministic function of time and λrnd(t) is a random function. The parameters of the functions λdet(t) and λrnd(t) were identified on the basis of statistical information on visitor flows collected from various Russian football stadiums. Statistical modeling of the NQS is carried out and the average statistical dependences are obtained: the length of the queue of requests waiting for service, the average wait time for service, and the number of visitors admitted to the stadium as functions of time. It is shown that these dependences can be characterized by the following parameters: the number of visitors who have entered by the time of the match; the time required to serve all incoming visitors; the maximum value; and the argument value at which the studied dependence reaches its maximum. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
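A minimal discrete-time simulation of such a non-stationary queuing system might look like the sketch below; the rate profile, number of gates, and service time are hypothetical stand-ins, not the values identified from the stadium data.

    import numpy as np

    rng = np.random.default_rng(0)

    # hypothetical arrival-rate profile (persons/second) over the 2 hours before kickoff
    T = 7200
    t = np.arange(T)
    lam_det = 1.5 * np.exp(-0.5 * ((t - 5400) / 1200.0) ** 2)   # deterministic surge
    lam_rnd = 0.1 * rng.standard_normal(T)                       # random component
    lam = np.clip(lam_det + lam_rnd, 0.0, None)

    n_gates, service_time = 10, 4.0            # seconds per visitor per gate
    capacity_per_step = n_gates / service_time # visitors served per one-second step

    queue, served, q_hist = 0.0, 0.0, np.zeros(T)
    for k in range(T):
        queue += rng.poisson(lam[k])           # arrivals during this one-second step
        out = min(queue, capacity_per_step)    # gates serve at most this many
        queue -= out
        served += out
        q_hist[k] = queue

    print("visitors admitted:", int(served))
    print("maximum queue length:", int(q_hist.max()), "at t =", int(q_hist.argmax()), "s")
    print("mean wait (Little's law approx.):", q_hist.mean() / max(lam.mean(), 1e-9), "s")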
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
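The bias-variance bookkeeping described above can be sketched with a toy one-dimensional reconstruction problem, where Tikhonov-regularized least squares stands in for the diffusion-model-based reconstruction (a hedged illustration; the operator, noise level, and regularization values are arbitrary).

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    x_true = np.zeros(n)
    x_true[20:40] = 1.0                                  # 1-D "image"
    A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)                    # toy blurring forward model
    sigma = 0.02

    def reconstruct(y, alpha):
        """Tikhonov-regularized least squares stands in for the model-based fit."""
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

    for alpha in [1e-4, 1e-2, 1.0]:
        recons = np.array([reconstruct(A @ x_true + rng.normal(0.0, sigma, n), alpha)
                           for _ in range(100)])         # 100 repeated noisy reconstructions
        bias = recons.mean(axis=0) - x_true
        var = recons.var(axis=0)
        print(f"alpha={alpha:g}: mean bias^2={np.mean(bias**2):.4f}, "
              f"mean var={np.mean(var):.4f}, MSE={np.mean(bias**2 + var):.4f}")

Small regularization parameters let the variance term dominate, large ones let the bias term dominate, and the image MSE = bias² + variance identifies the compromise, mirroring the behaviour reported in the abstract.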
Structural Analysis of Cubane-Type Iron Clusters
Tan, Lay Ling; Holm, R. H.; Lee, Sonny C.
2013-01-01
The generalized cluster type [M4(μ3-Q)4Ln]x contains the cubane-type [M4Q4]z core unit that can approach, but typically deviates from, perfect Td symmetry. The geometric properties of this structure have been analyzed with reference to Td symmetry by a new protocol. Using coordinates of M and Q atoms, expressions have been derived for interatomic separations, bond angles, and volumes of tetrahedral core units (M4, Q4) and the total [M4Q4] core (as a tetracapped M4 tetrahedron). Values for structural parameters have been calculated from observed average values for a given cluster type. Comparison of calculated and observed values measures the extent of deviation of a given parameter from that required in an exact tetrahedral structure. The procedure has been applied to the structures of over 130 clusters containing [Fe4Q4] (Q = S2−, Se2−, Te2−, [NPR3]−, [NR]2−) units, of which synthetic and biological sulfide-bridged clusters constitute the largest subset. General structural features and trends in structural parameters are identified and summarized. An extensive database of structural properties (distances, angles, volumes) has been compiled in Supporting Information. PMID:24072952
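The geometric bookkeeping used by such a protocol — interatomic separations, bond angles, and tetrahedron volumes computed directly from M and Q coordinates — can be illustrated with the short Python sketch below; the coordinates are a hypothetical idealized cube, not data from the structural database.

    import numpy as np
    from itertools import combinations

    def tetra_volume(p):
        """Volume of a tetrahedron from its four vertices."""
        return abs(np.linalg.det(p[1:] - p[0])) / 6.0

    def angle(a, b, c):
        """Angle a-b-c in degrees."""
        u, v = a - b, c - b
        return np.degrees(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))

    # hypothetical idealized core: M4 and Q4 as interpenetrating tetrahedra on cube vertices
    M = np.array([[0, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1]], float) * 2.7
    Q = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float) * 2.7

    print("M-M separations:", [round(np.linalg.norm(M[i] - M[j]), 3)
                               for i, j in combinations(range(4), 2)])
    print("M4 volume:", round(tetra_volume(M), 3))
    print("Q4 volume:", round(tetra_volume(Q), 3))
    print("example Q-M-Q angle:", round(angle(Q[0], M[0], Q[1]), 1), "deg")

Comparing such computed averages against the values required by exact Td symmetry is the essence of the deviation measure described in the abstract.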
Structural Analysis of Cubane-Type Iron Clusters.
Tan, Lay Ling; Holm, R H; Lee, Sonny C
2013-07-13
The generalized cluster type [M4(μ3-Q)4Ln]x contains the cubane-type [M4Q4]z core unit that can approach, but typically deviates from, perfect Td symmetry. The geometric properties of this structure have been analyzed with reference to Td symmetry by a new protocol. Using coordinates of M and Q atoms, expressions have been derived for interatomic separations, bond angles, and volumes of tetrahedral core units (M4, Q4) and the total [M4Q4] core (as a tetracapped M4 tetrahedron). Values for structural parameters have been calculated from observed average values for a given cluster type. Comparison of calculated and observed values measures the extent of deviation of a given parameter from that required in an exact tetrahedral structure. The procedure has been applied to the structures of over 130 clusters containing [Fe4Q4] (Q = S2-, Se2-, Te2-, [NPR3]-, [NR]2-) units, of which synthetic and biological sulfide-bridged clusters constitute the largest subset. General structural features and trends in structural parameters are identified and summarized. An extensive database of structural properties (distances, angles, volumes) has been compiled in Supporting Information.
Compression for an effective management of telemetry data
NASA Technical Reports Server (NTRS)
Arcangeli, J.-P.; Crochemore, M.; Hourcastagnou, J.-N.; Pin, J.-E.
1993-01-01
A Technological DataBase (T.D.B.) records all the values taken by the physical on-board parameters of a satellite since launch time. The amount of temporal data is very large (about 15 Gbytes for the satellite TDF1) and an efficient system must give users fast access to any value. This paper presents a new solution for T.D.B. management. The main feature of our new approach is the use of lossless data compression methods. Several parametrizable data compression algorithms based on substitution, relative difference and run-length encoding are available. Each of them is dedicated to a specific type of variation of the parameters' values. For each parameter, an analysis of stability is performed at decommutation time, and then the best method is chosen and run. A prototype intended to process different sorts of satellites has been developed. Its performance is well beyond the requirements and proves that data compression is both time and space efficient. For instance, the amount of data for TDF1 has been reduced to 1.05 Gbytes (a compression ratio of 1/13) and access time for a typical query has been reduced from 975 seconds to 14 seconds.
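An illustrative Python sketch of two of the lossless schemes mentioned (run-length encoding and relative-difference encoding) applied to a slowly varying telemetry parameter; this is a toy version, not the flight-data code described in the paper.

def run_length_encode(values):
    """Collapse runs of identical samples into [value, count] pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def delta_encode(values):
    """Keep the first sample, then store only the differences between neighbours."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

samples = [20, 20, 20, 21, 21, 22, 22, 22, 22, 23]   # a slowly varying parameter
print(run_length_encode(samples))   # [[20, 3], [21, 2], [22, 4], [23, 1]]
print(delta_encode(samples))        # [20, 0, 0, 1, 0, 1, 0, 0, 0, 1]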
NASA Astrophysics Data System (ADS)
Miksovsky, J.; Raidl, A.
Time-delay phase space reconstruction represents one of the useful tools of nonlinear time series analysis, enabling a number of applications. Its utilization requires the value of the time delay to be known, as well as the value of the embedding dimension. There are several methods for estimating both of these parameters. Typically, the time delay is computed first, followed by the embedding dimension. Our presented approach is slightly different - we reconstructed the phase space for various combinations of the mentioned parameters and used it for prediction by means of the nearest neighbours in the phase space. Then some measure of the prediction's success was computed (e.g., correlation or RMSE). The position of its global maximum (minimum) should indicate the suitable combination of time delay and embedding dimension. Several meteorological (particularly climatological) time series were used for the computations. We have also created an MS-Windows based program in order to implement this approach - its basic features will be presented as well.
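A rough Python sketch of the described scan over (embedding dimension, time delay) pairs: for each pair the series is embedded, a one-step nearest-neighbour forecast is made on held-back points, and the combination with the lowest RMSE is reported. The test series and parameter grids are illustrative assumptions.

import numpy as np

def embed(x, dim, tau):
    """Time-delay embedding: row j is [x(j), x(j+tau), ..., x(j+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def nn_forecast_rmse(x, dim, tau):
    """One-step nearest-neighbour forecast error for a given (dimension, delay) pair."""
    X = embed(x, dim, tau)
    y = x[(dim - 1) * tau + 1:]          # one-step-ahead targets
    X = X[:-1]
    split = len(X) // 2                  # first half as the 'library', second half as test
    errs = [y[np.argmin(np.linalg.norm(X[:split] - X[i], axis=1))] - y[i]
            for i in range(split, len(X))]
    return float(np.sqrt(np.mean(np.square(errs))))

rng = np.random.default_rng(2)
series = np.sin(0.2 * np.arange(2000)) + 0.05 * rng.standard_normal(2000)
best = min((nn_forecast_rmse(series, d, t), d, t) for d in (2, 3, 4, 5) for t in (1, 5, 10, 20))
print("lowest RMSE %.3f at embedding dimension %d, delay %d" % best)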
On configurational forces for gradient-enhanced inelasticity
NASA Astrophysics Data System (ADS)
Floros, Dimosthenis; Larsson, Fredrik; Runesson, Kenneth
2018-04-01
In this paper we discuss how configurational forces can be computed in an efficient and robust manner when a constitutive continuum model of gradient-enhanced viscoplasticity is adopted, whereby a suitably tailored mixed variational formulation in terms of displacements and micro-stresses is used. It is demonstrated that such a formulation produces sufficient regularity to overcome numerical difficulties that are notorious for a local constitutive model. In particular, no nodal smoothing of the internal variable fields is required. Moreover, the pathological mesh sensitivity that has been reported in the literature for a standard local model is no longer present. Numerical results in terms of configurational forces are shown for (1) a smooth interface and (2) a discrete edge crack. The corresponding configurational forces are computed for different values of the intrinsic length parameter. It is concluded that the convergence of the computed configurational forces with mesh refinement depends strongly on this parameter value. Moreover, the convergence behavior for the limit situation of rate-independent plasticity is unaffected by the relaxation time parameter.
Technical note: Design flood under hydrological uncertainty
NASA Astrophysics Data System (ADS)
Botto, Anna; Ganora, Daniele; Claps, Pierluigi; Laio, Francesco
2017-07-01
Planning and verification of hydraulic infrastructures require a design estimate of hydrologic variables, usually provided by frequency analysis, and neglecting hydrologic uncertainty. However, when hydrologic uncertainty is accounted for, the design flood value for a specific return period is no longer a unique value, but is represented by a distribution of values. As a consequence, the design flood is no longer univocally defined, making the design process undetermined. The Uncertainty Compliant Design Flood Estimation (UNCODE) procedure is a novel approach that, starting from a range of possible design flood estimates obtained in uncertain conditions, converges to a single design value. This is obtained through a cost-benefit criterion with additional constraints that is numerically solved in a simulation framework. This paper contributes to promoting a practical use of the UNCODE procedure without resorting to numerical computation. A modified procedure is proposed by using a correction coefficient that modifies the standard (i.e., uncertainty-free) design value on the basis of sample length and return period only. The procedure is robust and parsimonious, as it does not require additional parameters with respect to the traditional uncertainty-free analysis. Simple equations to compute the correction term are provided for a number of probability distributions commonly used to represent the flood frequency curve. The UNCODE procedure, when coupled with this simple correction factor, provides a robust way to manage the hydrologic uncertainty and to go beyond the use of traditional safety factors. With all the other parameters being equal, an increase in the sample length reduces the correction factor, and thus the construction costs, while still keeping the same safety level.
Shape optimization of the modular press body
NASA Astrophysics Data System (ADS)
Pabiszczak, Stanisław
2016-12-01
This paper presents an optimization algorithm for the cross-sectional dimensions of a modular press body under the minimum-mass criterion. The wall thicknesses and the angle of their inclination relative to the base of the section are taken as the decision variables. The overall dimensions are treated as constants. The optimal parameter values were calculated numerically with the Solver tool in Microsoft Excel. The results of the optimization procedure helped reduce the body weight by 27% while maintaining the required rigidity of the body.
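The same kind of constrained mass minimization can be sketched generically with SciPy; the objective and rigidity constraint below are placeholder expressions standing in for the press-body model, not the formulas used in the paper (which were solved with Excel's Solver).

import numpy as np
from scipy.optimize import minimize

def mass(x):
    thickness, angle = x                              # wall thickness [mm], inclination [rad]
    return thickness * (1.0 + 0.5 * np.cos(angle))    # placeholder mass surrogate

def rigidity(x):
    thickness, angle = x
    return thickness ** 3 * (1.0 + np.sin(angle))     # placeholder rigidity surrogate

result = minimize(mass, x0=[10.0, 0.5],
                  bounds=[(2.0, 20.0), (0.0, np.pi / 3)],
                  constraints=[{"type": "ineq", "fun": lambda x: rigidity(x) - 500.0}])
print("optimal thickness and angle:", result.x, " mass:", mass(result.x))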
Electrochemical energy storage subsystems study, volume 1
NASA Technical Reports Server (NTRS)
Miller, F. Q.; Richardson, P. W.; Graff, C. L.; Jordan, M. V.; Patterson, V. L.
1981-01-01
The effects on life cycle costs (LCC) of major design and performance technology parameters for multi kW LEO and GEO energy storage subsystems using NiCd and NiH2 batteries and fuel cell/electrolysis cell devices were examined. Design, performance and LCC dynamic models are developed based on mission and system/subsystem requirements and existing or derived physical and cost data relationships. The models define baseline designs and costs. The major design and performance parameters are each varied to determine their influence on LCC around the baseline values.
Electrochemical Energy Storage Subsystems Study, Volume 2
NASA Technical Reports Server (NTRS)
Miller, F. Q.; Richardson, P. W.; Graff, C. L.; Jordan, M. V.; Patterson, V. L.
1981-01-01
The effects on life cycle costs (LCC) of major design and performance technology parameters for multi kW LEO and GEO energy storage subsystems using NiCd and NiH2 batteries and fuel cell/electrolysis cell devices were examined. Design, performance and LCC dynamic models are developed based on mission and system/subsystem requirements and existing or derived physical and cost data relationships. The models are exercised to define baseline designs and costs. Then the major design and performance parameters are each varied to determine their influence on LCC around the baseline values.
Decreasing Kd uncertainties through the application of thermodynamic sorption models.
Domènech, Cristina; García, David; Pękala, Marek
2015-09-15
Radionuclide retardation processes during transport are expected to play an important role in the safety assessment of subsurface disposal facilities for radioactive waste. The linear distribution coefficient (Kd) is often used to represent radionuclide retention, because analytical solutions to the classic advection-diffusion-retardation equation under simple boundary conditions are readily obtainable, and because numerical implementation of this approach is relatively straightforward. For these reasons, the Kd approach lends itself to the probabilistic calculations required in Performance Assessment (PA). However, it is widely recognised that Kd values derived from laboratory experiments generally have a narrow field of validity, and that the uncertainty of the Kd outside this field increases significantly. Mechanistic multicomponent geochemical simulators can be used to calculate Kd values under a wide range of conditions. This approach is powerful and flexible, but requires expert knowledge on the part of the user. The work presented in this paper aims to develop a simplified approach to estimating Kd values whose level of accuracy would be comparable with that obtained by fully-fledged geochemical simulators. The proposed approach consists of deriving simplified algebraic expressions by combining relevant mass action equations. This approach was applied to three distinct geochemical systems involving surface complexation and ion-exchange processes. Within bounds imposed by model simplifications, the presented approach allows radionuclide Kd values to be estimated as a function of key system-controlling parameters, such as the pH and mineralogy. This approach could be used by PA professionals to assess the impact of key geochemical parameters on the variability of radionuclide Kd values. Moreover, the presented approach could be relatively easily implemented in existing codes to represent the influence of temporal and spatial changes in geochemistry on Kd values. Copyright © 2015 Elsevier B.V. All rights reserved.
Nelson, Stacy; English, Shawn; Briggs, Timothy
2016-05-06
Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process as the described flexural characterization was used for model validation.
Robust determination of surface relaxivity from nuclear magnetic resonance DT2 measurements
NASA Astrophysics Data System (ADS)
Luo, Zhi-Xiang; Paulsen, Jeffrey; Song, Yi-Qiao
2015-10-01
Nuclear magnetic resonance (NMR) is a powerful tool to probe geological materials such as hydrocarbon reservoir rocks and groundwater aquifers. It is unique in its ability to obtain in situ the fluid type and the pore size distributions (PSD). The T1 and T2 relaxation times are closely related to the pore geometry through the parameter called surface relaxivity. This parameter is critical for converting the relaxation time distribution into the PSD and so is key to accurately predicting permeability. The conventional way to determine the surface relaxivity ρ2 has required independent laboratory measurements of the pore size. Recently Zielinski et al. proposed a restricted diffusion model to extract the surface relaxivity from the NMR diffusion-T2 relaxation (DT2) measurement. Although this method significantly improved the ability to directly extract surface relaxivity from a pure NMR measurement, there are inconsistencies with their model and it relies on a number of preset parameters. Here we propose an improved signal model to incorporate a scalable LT and extend their method to extract the surface relaxivity by analyzing multiple DT2 maps with varied diffusion observation times. With multiple diffusion observation times, the apparent diffusion coefficient correctly describes the restricted diffusion behavior in samples with wide PSDs, and the new method does not require predetermined parameters, such as the bulk diffusion coefficient and tortuosity. Laboratory experiments on glass bead packs with bead diameters ranging from 50 μm to 500 μm are used to validate the new method. The extracted diffusion parameters are consistent with their known values and the determined surface relaxivity ρ2 agrees with the expected value within ±7%. This method is further successfully applied to a Berea sandstone core and yields a surface relaxivity ρ2 consistent with the literature.
NASA Technical Reports Server (NTRS)
Weatherford, Charles A.
1993-01-01
One version of the multichannel theory for electron-target scattering based on the Schwinger variational principle, the SMC method, requires the introduction of a projection parameter. The role of the projection parameter a is investigated and it is shown that the principal-value operator in the SMC equation is Hermitian regardless of the value of a as long as it is real and nonzero. In a basis that is properly orthonormalizable, the matrix representation of this operator is also Hermitian. The use of such a basis is consistent with the Schwinger variational principle because the Lippmann-Schwinger equation automatically builds in the correct boundary conditions. Otherwise, an auxiliary condition needs to be introduced, and Takatsuka and McKoy's original value of a is one of the three possible ways to achieve Hermiticity. In all cases but one, a can be uncoupled from the Hermiticity condition and becomes a free parameter. An equation for a based on the variational stability of the scattering amplitude is derived; its solution has the interesting property that the scattering amplitude from a converged SMC calculation is independent of the choice of a even though the SMC operator itself is a-dependent. This property provides a sensitive test of the convergence of the calculation. For a static-exchange calculation, the convergence requirement only depends on the completeness of the one-electron basis, but for a general multichannel case, the a-invariance in the scattering amplitude requires both the one-electron basis and the (N+1)-electron basis to be complete. The role of a in the SMC equation and the convergence property are illustrated using two examples: e-CO elastic scattering in the static-exchange approximation, and a two-state treatment of the e-H2 X ¹Σg⁺ → b ³Σu⁺ excitation.
TRL - A FORMAL TEST REPRESENTATION LANGUAGE AND TOOL FOR FUNCTIONAL TEST DESIGNS
NASA Technical Reports Server (NTRS)
Hops, J. M.
1994-01-01
A Formal Test Representation Language and Tool for Functional Test Designs (TRL) is an automatic tool and a formal language that is used to implement the Category-Partition Method and produce the specification of test cases in the testing phase of software development. The Category-Partition Method is particularly useful in defining the inputs, outputs and purpose of the test design phase and combines the benefits of choosing normal cases with error exposing properties. Traceability can be maintained quite easily by creating a test design for each objective in the test plan. The effort to transform the test cases into procedures is simplified by using an automatic tool to create the cases based on the test design. The method allows the rapid elimination of undesired test cases from consideration, and easy review of test designs by peer groups. The first step in the category-partition method is functional decomposition, in which the specification and/or requirements are decomposed into functional units that can be tested independently. A secondary purpose of this step is to identify the parameters that affect the behavior of the system for each functional unit. The second step, category analysis, carries the work done in the previous step further by determining the properties or sub-properties of the parameters that would make the system behave in different ways. The designer should analyze the requirements to determine the features or categories of each parameter and how the system may behave if the category were to vary its value. If the parameter undergoing refinement is a data-item, then categories of this data-item may be any of its attributes, such as type, size, value, units, frequency of change, or source. After all the categories for the parameters of the functional unit have been determined, the next step is to partition each category's range space into mutually exclusive values that the category can assume. In choosing partition values, all possible kinds of values should be included, especially the ones that will maximize error detection. The purpose of the final step, partition constraint analysis, is to refine the test design specification so that only the technically effective and economically feasible test cases are implied. TRL is written in C-language to be machine independent. It has been successfully implemented on an IBM PC compatible running MS DOS, a Sun4 series computer running SunOS, an HP 9000/700 series workstation running HP-UX, a DECstation running DEC RISC ULTRIX, and a DEC VAX series computer running VMS. TRL requires 1Mb of disk space and a minimum of 84K of RAM. The documentation is available in electronic form in Word Perfect format. The standard distribution media for TRL is a 5.25 inch 360K MS-DOS format diskette. Alternate distribution media and formats are available upon request. TRL was developed in 1993 and is a copyrighted work with all copyright vested in NASA.
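The combinatorial core of the Category-Partition Method - crossing the partitioned categories and discarding frames that violate constraints - can be sketched in Python as follows; the categories and the constraint are made-up examples, not taken from TRL.

from itertools import product

categories = {                         # hypothetical categories and their partitions
    "file_size": ["empty", "small", "huge"],
    "permissions": ["readable", "unreadable"],
    "format": ["valid", "corrupt"],
}

def allowed(frame):
    # Example constraint: an empty file cannot be corrupt.
    return not (frame["file_size"] == "empty" and frame["format"] == "corrupt")

names = list(categories)
frames = [dict(zip(names, combo)) for combo in product(*categories.values())]
test_cases = [f for f in frames if allowed(f)]
print(len(frames), "raw frames ->", len(test_cases), "test cases after constraint analysis")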
Han, Xu; Suo, Shiteng; Sun, Yawen; Zu, Jinyan; Qu, Jianxun; Zhou, Yan; Chen, Zengai; Xu, Jianrong
2017-03-01
To compare four methods of region-of-interest (ROI) placement for apparent diffusion coefficient (ADC) measurements in distinguishing low-grade gliomas (LGGs) from high-grade gliomas (HGGs). Two independent readers measured ADC parameters using four ROI methods (single-slice [single-round, five-round and freehand] and whole-volume) on 43 patients (20 LGGs, 23 HGGs) who had undergone 3.0 Tesla diffusion-weighted imaging; the time required for each method of ADC measurement was recorded. Intraclass correlation coefficients (ICCs) were used to assess interobserver variability of ADC measurements. Mean and minimum ADC values and time required were compared using paired Student's t-tests. All ADC parameters (mean/minimum ADC values of the three single-slice methods; mean/minimum/standard deviation/skewness/kurtosis/10th and 25th percentiles/median/maximum of the whole-volume method) were correlated with tumor grade (low versus high) by unpaired Student's t-tests. Discriminative ability was determined by receiver operating characteristic curves. All ADC measurements except the minimum, skewness, and kurtosis of the whole-volume ROI differed significantly between LGGs and HGGs (all P < 0.05). The mean ADC value of the single-round ROI had the highest effect size (0.72) and the greatest area under the curve (0.872). The three single-slice methods had good to excellent ICCs (0.67-0.89) and the whole-volume method fair to excellent ICCs (0.32-0.96). Minimum ADC values differed significantly between whole-volume and single-round ROIs (P = 0.003) and between whole-volume and five-round ROIs (P = 0.001). The whole-volume method took significantly longer than all single-slice methods (all P < 0.001). ADC measurements are influenced by the ROI determination method. Whole-volume histogram analysis did not yield better results than single-slice methods and took longer. The mean ADC value derived from a single-round ROI is the optimal parameter for differentiating LGGs from HGGs. J. Magn. Reson. Imaging 2017;45:722-730. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Rudowicz, C.
2000-06-01
Electron magnetic resonance (EMR) studies of paramagnetic species with spin S ≥ 1 at orthorhombic symmetry sites require an axial zero-field splitting (ZFS) parameter and a rhombic one of the second order (k = 2), whereas at triclinic sites all five ZFS (k = 2) parameters are expressed in the crystallographic axis system. For spin S ≥ 2 the higher-order ZFS terms must also be considered. In the principal axis system, instead of the five ZFS (k = 2) parameters, the two principal ZFS values can be used, as for orthorhombic symmetry; however, the orientation of the principal axes with respect to the crystallographic axis system must then be provided. Recently three serious cases of incorrect relations between the extended Stevens ZFS parameters and the conventional ones have been identified in the literature. The first case concerns a controversy over the second-order rhombic ZFS parameters and was found to have led to misinterpretation, in a review article, of several values of either E or b22 published earlier. The second case concerns the set of five relations between the extended Stevens ZFS parameters bkq and the conventional ones Dij for triclinic symmetry, four of which turn out to be incorrect. The third case concerns the omission of the scaling factors fk for the extended Stevens ZFS parameters bkq. In all cases the incorrect relations in question have been published in spite of the earlier existence of the correct relations in the literature. The incorrect relations are likely to lead to further misinterpretation of published values of the ZFS parameters for orthorhombic and lower symmetry. The purpose of this paper is to make spectroscopists working in the area of EMR (including EPR and ESR) and related spectroscopies aware of the problem and to reduce proliferation of the incorrect relations.
Bringing metabolic networks to life: convenience rate law and thermodynamic constraints
Liebermeister, Wolfram; Klipp, Edda
2006-01-01
Background Translating a known metabolic network into a dynamic model requires rate laws for all chemical reactions. The mathematical expressions depend on the underlying enzymatic mechanism; they can become quite involved and may contain a large number of parameters. Rate laws and enzyme parameters are still unknown for most enzymes. Results We introduce a simple and general rate law called "convenience kinetics". It can be derived from a simple random-order enzyme mechanism. Thermodynamic laws can impose dependencies on the kinetic parameters. Hence, to facilitate model fitting and parameter optimisation for large networks, we introduce thermodynamically independent system parameters: their values can be varied independently, without violating thermodynamical constraints. We achieve this by expressing the equilibrium constants either by Gibbs free energies of formation or by a set of independent equilibrium constants. The remaining system parameters are mean turnover rates, generalised Michaelis-Menten constants, and constants for inhibition and activation. All parameters correspond to molecular energies, for instance, binding energies between reactants and enzyme. Conclusion Convenience kinetics can be used to translate a biochemical network – manually or automatically - into a dynamical model with plausible biological properties. It implements enzyme saturation and regulation by activators and inhibitors, covers all possible reaction stoichiometries, and can be specified by a small number of parameters. Its mathematical form makes it especially suitable for parameter estimation and optimisation. Parameter estimates can be easily computed from a least-squares fit to Michaelis-Menten values, turnover rates, equilibrium constants, and other quantities that are routinely measured in enzyme assays and stored in kinetic databases. PMID:17173669
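A minimal sketch of the convenience rate law for the uni-uni case A ⇌ B as described above (saturable terms in numerator and denominator); parameter names and values are illustrative, and thermodynamic consistency would additionally tie them to the equilibrium constant via a Haldane-type relation.

def convenience_rate(a, b, enzyme=1.0, kcat_f=10.0, kcat_r=2.0, Ka=0.5, Kb=1.5):
    """Convenience rate law for A <=> B (uni-uni case); all values are illustrative.
    Thermodynamic consistency would relate kcat_f, kcat_r, Ka and Kb to the
    equilibrium constant through a Haldane-type relation."""
    a_, b_ = a / Ka, b / Kb
    return enzyme * (kcat_f * a_ - kcat_r * b_) / (1.0 + a_ + b_)

print(convenience_rate(a=2.0, b=0.1))   # net forward rate for the example concentrations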
Eye aberration analysis with Zernike polynomials
NASA Astrophysics Data System (ADS)
Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.
1998-06-01
New horizons for accurate photorefractive sight correction, afforded by novel flying-spot technologies, require adequate measurements of the photorefractive properties of an eye. Proposed techniques of eye refraction mapping present measurement results for a finite number of points of the eye aperture, requiring these data to be approximated by a 3D surface. A technique of wavefront approximation with Zernike polynomials is described, using optimization of the number of polynomial coefficients. The optimization criterion is the closest proximity of the resulting continuous surface to the values calculated at the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of transverse aberrations; in particular, the maximal coefficient indices of the Zernike polynomials are varied consecutively, the coefficients recalculated, and the RMSD value computed. Optimization is finished at the minimal value of RMSD. Formulas are given for computing ametropia and the size of the spot of light on the retina caused by spherical aberration, coma, and astigmatism. Results are illustrated by experimental data that could be of interest for other applications where detailed evaluation of eye parameters is needed.
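A hedged sketch of the coefficient-count optimization described: wavefront samples are fitted with an increasing number of basis terms and the size giving the smallest RMSD is kept. A simple polynomial basis and a held-out subset of points stand in for the Zernike terms and the transverse-aberration RMSD used in the paper.

import numpy as np

rng = np.random.default_rng(3)
x, y = rng.uniform(-1.0, 1.0, (2, 200))               # measured aperture points
w = 0.3 * (2 * (x**2 + y**2) - 1) + 0.1 * x * y + 0.02 * rng.standard_normal(200)

def basis(x, y, n_terms):
    """Placeholder low-order polynomial basis (stands in for the Zernike terms)."""
    terms = [np.ones_like(x), x, y, x * y, x**2 + y**2, x**2 - y**2,
             x**3, y**3, x * (x**2 + y**2), y * (x**2 + y**2)]
    return np.column_stack(terms[:n_terms])

fit, val = slice(0, 100), slice(100, 200)              # fitted / held-out points
best = None
for n in range(1, 11):
    coeffs, *_ = np.linalg.lstsq(basis(x[fit], y[fit], n), w[fit], rcond=None)
    rmsd = np.sqrt(np.mean((basis(x[val], y[val], n) @ coeffs - w[val]) ** 2))
    if best is None or rmsd < best[0]:
        best = (rmsd, n)
print("minimum RMSD %.4f reached with %d basis terms" % best)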
Model Adaptation in Parametric Space for POD-Galerkin Models
NASA Astrophysics Data System (ADS)
Gao, Haotian; Wei, Mingjun
2017-11-01
The development of low-order POD-Galerkin models is largely motivated by the expectation that a model developed with a set of parameters at their native values can predict the dynamic behavior of the same system at different parameter values - in other words, a successful model adaptation in parametric space. However, most of the time, even a small deviation of the parameters from their original values may lead to large deviations or unstable results. It has been shown that adding more information (e.g., a steady state, the mean value of a different unsteady state, or an entirely different set of POD modes) may improve the prediction of the flow at other parametric states. For the simple case of flow past a fixed cylinder, an orthogonal mean mode at a different Reynolds number may stabilize the POD-Galerkin model when the Reynolds number is changed. For the more complicated case of flow past an oscillating cylinder, a global POD-Galerkin model is first applied to handle the moving boundaries; then more information (e.g., more POD modes) is required to predict the flow at different oscillation frequencies. Supported by ARL.
NASA Astrophysics Data System (ADS)
Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim
2013-02-01
Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in order to attempt to determine the formation ages and hence residence times of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter used in the comminution approach, we use two suggested methods (weighted geometric and surface area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination used and the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and determine the individual and combined comminution age uncertainties associated with each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable associated uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty on comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on the assumptions for input factors such as the fraction of α-recoil-lost 234Th and the initial (234U/238U) ratio of the source material. In order to be able to directly compare calculated comminution ages produced by different research groups, the standardisation of pre-treatment procedures, recoil loss factor estimation and assumed input parameter values is required. We suggest a set of input parameter values for such a purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.
Theoretical study of the hyperfine parameters of OH
NASA Technical Reports Server (NTRS)
Chong, Delano P.; Langhoff, Stephen R.; Bauschlicher, Charles W., Jr.
1991-01-01
In the present study of the hyperfine parameters of O-17H as a function of the one- and n-particle spaces, all of the parameters except oxygen's spin density, bF(O), are sufficiently tractable to allow concentration on the computational requirements for accurate determination of bF(O). Full configuration-interaction (FCI) calculations in six Gaussian basis sets yield unambiguous results for (1) the effect of uncontracting the O s and p basis sets; (2) that of adding diffuse s and p functions; and (3) that of adding polarization functions to O. The size-extensive modified coupled-pair functional method yields bF values which are in fair agreement with FCI results.
Angular radiation models for Earth-atmosphere system. Volume 1: Shortwave radiation
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Green, R. N.; Minnis, P.; Smith, G. L.; Staylor, W. F.; Wielicki, B. A.; Walker, I. J.; Young, D. F.; Taylor, V. R.; Stowe, L. L.
1988-01-01
Presented are shortwave angular radiation models which are required for analysis of satellite measurements of Earth radiation, such as those from the Earth Radiation Budget Experiment (ERBE). The models consist of both bidirectional and directional parameters. The bidirectional parameters are the anisotropic function, the standard deviation of mean radiance, and the shortwave-longwave radiance correlation coefficient. The directional parameters are mean albedo as a function of Sun zenith angle and mean albedo normalized to overhead Sun. Derivation of these models from the Nimbus 7 ERB (Earth Radiation Budget) and Geostationary Operational Environmental Satellite (GOES) data sets is described. Tabulated values and computer-generated plots are included for the bidirectional and directional models.
Ochi, Takehiro; Yamada, Azusa; Naganuma, Yuki; Nishina, Noriko; Koyama, Hironari
2016-06-01
To determine the effect of long-distance (approximately 600 km) road transportation on the blood biochemistry of laboratory animals, we investigated the changes in serum biochemical parameters in healthy cynomolgus monkeys and beagle dogs transported by truck from Osaka to Tsukuba, Japan. The concentrations of serum cortisol, total bilirubin and aspartate aminotransferase in monkeys increased during transportation. Serum cortisol and total bilirubin levels in dogs also increased during transportation, but serum triglyceride decreased. Serum parameter values in truck-transported monkeys and dogs returned to baseline levels within two weeks following arrival. Taken together, these results suggest that a two-week acclimation period is the minimum duration required for adaptation following road transportation.
Evaluation of Potential Evapotranspiration from a Hydrologic Model on a National Scale
NASA Astrophysics Data System (ADS)
Hakala, K. A.; Hay, L.; Markstrom, S. L.
2014-12-01
The US Geological Survey has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development and facilitate the application of simulations on the scale of the continental US. The NHM has a consistent geospatial fabric for modeling, consisting of over 100,000 hydrologic response units (HRUs). Each HRU requires accurate parameter estimates, some of which are attained from automated calibration. However, improved calibration can be achieved by initially utilizing as many parameters as possible from national data sets. This presentation investigates the effectiveness of calculating potential evapotranspiration (PET) parameters based on mean monthly values from the NOAA PET Atlas. Additional PET products are then used to evaluate the PET parameters. Effectively utilizing existing national-scale data sets can simplify the effort in establishing a robust NHM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dirian, Yves; Foffa, Stefano; Kunz, Martin
We study the cosmological predictions of two recently proposed non-local modifications of General Relativity. Both models have the same number of parameters as ΛCDM, with a mass parameter m replacing the cosmological constant. We implement the cosmological perturbations of the non-local models into a modification of the CLASS Boltzmann code, and we make a full comparison to CMB, BAO and supernova data. We find that the non-local models fit these datasets very well, at the same level as ΛCDM. Among the vast literature on modified gravity models, this is, to our knowledge, the only example which fits data as well as ΛCDM without requiring any additional parameter. For both non-local models parameter estimation using Planck+JLA+BAO data gives a value of H0 slightly higher than in ΛCDM.
Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.
2000-01-01
This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
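The weighted least-squares Gauss-Newton iteration at the heart of the Parameter-Estimation Process can be sketched generically as below; the two-parameter model, finite-difference sensitivities and fixed damping factor are illustrative simplifications (MODFLOW-2000 itself computes sensitivities with the sensitivity-equation method and uses a modified Gauss-Newton scheme).

import numpy as np

def model(p, x):
    return p[0] * np.exp(-p[1] * x)                  # hypothetical simulated values

def jacobian(p, x, h=1e-6):
    """Finite-difference sensitivities df/dp (a stand-in for the sensitivity-equation method)."""
    J = np.empty((x.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (model(p + dp, x) - model(p - dp, x)) / (2.0 * h)
    return J

x = np.linspace(0.0, 5.0, 30)
obs = model(np.array([2.0, 0.7]), x) + 0.02 * np.random.default_rng(4).standard_normal(30)
w = np.ones_like(obs)                                # observation weights
p = np.array([1.0, 0.3])                             # starting parameter values
for _ in range(10):
    r = obs - model(p, x)                            # weighted least-squares residuals
    J = jacobian(p, x)
    step = np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * r))
    p = p + 0.9 * step                               # damped Gauss-Newton update
print("estimated parameters:", p)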
Criteria for the use of regression analysis for remote sensing of sediment and pollutants
NASA Technical Reports Server (NTRS)
Whitlock, C. H.; Kuo, C. Y.; Lecroy, S. R.
1982-01-01
An examination of the limitations, requirements, and precision of the linear multiple-regression technique for quantification of marine environmental parameters is conducted. Both environmental and optical physics conditions have been defined for which an exact solution to the signal response equations is of the same form as the multiple regression equation. Various statistical parameters are examined to define a criterion for selection of an unbiased fit when upwelled radiance values contain error and are correlated with each other. Field experimental data are examined to define data smoothing requirements in order to satisfy the criteria of Daniel and Wood (1971). Recommendations are made concerning improved selection of ground-truth locations to maximize variance and to minimize physical errors associated with the remote sensing experiment.
NASA Astrophysics Data System (ADS)
Wiemker, Rafael; Bülow, Thomas; Opfer, Roland; Kabus, Sven; Dharaiya, Ekta
2008-03-01
We present an effective and intuitive visualization of the macro-vasculature of a selected nodule or tumor in three-dimensional image data (e.g. CT, MR, US). For the differential diagnosis of nodules the possible distortion of adjacent vessels is one important clinical criterion. Surface renderings of vessel and tumor segmentations depend critically on the chosen parameter and threshold values for the underlying segmentation. Therefore we use rotating Maximum Intensity Projections (MIPs) of a volume of interest (VOI) around the selected tumor. The MIP does not require specific parameters, and allows much quicker visual inspection in comparison to slicewise navigation, while the rotation gives depth cues to the viewer. Of the vessel network within the VOI, however, not all vessels are connected to the selected tumor, and it is tedious to sort out which adjacent vessels are in fact connected and which are overlaid only by projection. Therefore we suggest a simple transformation of the original image values into connectivity values. In the derived connectedness image each voxel value corresponds to the lowest image value encountered on the highest possible pathway from the tumor to the voxel. The advantage of the visualization is that no implicit binary decision is made as to whether a certain vessel is connected to the tumor or not; rather, the degree of connectedness is visualized as the brightness of the vessel. Non-connected structures disappear, feebly connected structures appear faint, and strongly connected structures remain in their original brightness. The visualization does not depend on delicate threshold values. Promising results have been achieved for pulmonary nodules in CT.
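Under our reading of the abstract, the connectedness image (each voxel receives the lowest intensity along its best path from the tumor) is a max-min path problem that can be computed with a Dijkstra-style propagation; a small 2-D Python sketch with an illustrative seed and image follows.

import heapq
import numpy as np

def connectedness(image, seed):
    """conn[v] = max over paths from seed to v of the minimum intensity along the path."""
    conn = np.full(image.shape, -np.inf)
    conn[seed] = image[seed]
    heap = [(-image[seed], seed)]
    while heap:
        neg, (i, j) = heapq.heappop(heap)
        if -neg < conn[i, j]:
            continue                                   # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < image.shape[0] and 0 <= nj < image.shape[1]:
                candidate = min(conn[i, j], image[ni, nj])
                if candidate > conn[ni, nj]:
                    conn[ni, nj] = candidate
                    heapq.heappush(heap, (-candidate, (ni, nj)))
    return conn

img = np.array([[5, 5, 1, 4],
                [1, 5, 1, 4],
                [1, 5, 5, 4],
                [1, 1, 1, 4]], dtype=float)
print(connectedness(img, (0, 0)))    # strongly connected voxels keep 5, weakly connected ones drop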
Thermalization threshold in models of 1D fermions
NASA Astrophysics Data System (ADS)
Mukerjee, Subroto; Modak, Ranjan; Ramswamy, Sriram
2013-03-01
The question of how isolated quantum systems thermalize is an interesting and open one. In this study we equate thermalization with non-integrability to try to answer this question. In particular, we study the effect of system size on the integrability of 1D systems of interacting fermions on a lattice. We find that for a finite-sized system, a non-zero value of an integrability breaking parameter is required to make an integrable system appear non-integrable. Using exact diagonalization and diagnostics such as energy level statistics and the Drude weight, we find that the threshold value of the integrability breaking parameter scales to zero as a power law with system size. We find the exponent to be the same for different models with its value depending on the random matrix ensemble describing the non-integrable system. We also study a simple analytical model of a non-integrable system with an integrable limit to better understand how a power law emerges.
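One of the level-statistics diagnostics mentioned can be illustrated with the adjacent-gap ratio computed from sorted eigenvalues; the reference values of roughly 0.39 (Poisson, integrable-like) and 0.53 (GOE, non-integrable) are commonly quoted approximations and are used here only as benchmarks.

import numpy as np

def mean_gap_ratio(energies):
    """Average of r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}) over adjacent spacings s."""
    s = np.diff(np.sort(energies))
    return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

rng = np.random.default_rng(5)
poisson_like = np.cumsum(rng.exponential(size=5000))        # uncorrelated (integrable-like) spectrum
H = rng.standard_normal((500, 500)); H = (H + H.T) / 2.0    # GOE-like random matrix
print("Poisson-like mean gap ratio:", round(mean_gap_ratio(poisson_like), 3))           # ~0.39
print("GOE-like mean gap ratio:    ", round(mean_gap_ratio(np.linalg.eigvalsh(H)), 3))  # ~0.53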
NASA Astrophysics Data System (ADS)
Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.
2018-02-01
Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is physically natural but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low-probability positron production and high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach that modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true coincidence count-rates with high random fractions corresponding to the typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new method with standard reconstruction algorithms, NEG-ML, and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust to any count level without requiring parameter tuning.
Effects of molecular and particle scatterings on the model parameter for remote-sensing reflectance.
Lee, ZhongPing; Carder, Kendall L; Du, KePing
2004-09-01
For optically deep waters, remote-sensing reflectance (r(rs)) is traditionally expressed as the ratio of the backscattering coefficient (b(b)) to the sum of absorption and backscattering coefficients (a + b(b)), multiplied by a model parameter (g, or the so-called f'/Q). Parameter g is further expressed as a function of b(b)/(a + b(b)) (or b(b)/a) to account for its variation that is due to multiple scattering. With such an approach, the same g value will be derived for different a and b(b) values that provide the same ratio. Because g is partially a measure of the angular distribution of upwelling light, and the angular distribution from molecular scattering is quite different from that of particle scattering, g values are expected to vary with different scattering distributions even if the b(b)/a ratios are the same. In this study, after numerically demonstrating the effects of molecular and particle scatterings on the values of g, an innovative r(rs) model is developed. This new model expresses r(rs) in two separate terms: one governed by the phase function of molecular scattering and one governed by the phase function of particle scattering, with a model parameter introduced for each term. In this way the phase-function effects from molecular and particle scatterings are explicitly separated and accounted for. This new model provides an analytical tool to understand and quantify the phase-function effects on r(rs), and a platform to calculate the r(rs) spectrum quickly and accurately, as required for remote-sensing applications.
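For reference, the traditional one-term form discussed above can be written as r(rs) = g(u)·u with u = b(b)/(a + b(b)); a tiny Python sketch with a commonly used quadratic g(u) follows, where the coefficient values are illustrative assumptions and the paper's two-term molecular/particle model is not reproduced.

def rrs_traditional(a, bb, g0=0.09, g1=0.12):
    """Traditional one-term form: rrs = (g0 + g1 * u) * u with u = bb / (a + bb).
    g0 and g1 are illustrative placeholder coefficients, not values from the paper."""
    u = bb / (a + bb)
    return (g0 + g1 * u) * u

print(rrs_traditional(a=0.3, bb=0.01))   # example: moderately absorbing water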
Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo
2014-11-01
One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts derived from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as extra-brain sources. This method requires EOG data for an EOG forward model that describes the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values regarding current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from the simulated MEG data. The performance of this method was demonstrated to be better than conventional approaches, such as principal component analysis and independent component analysis, which use only statistical properties of MEG signals. Furthermore, we applied our proposed method to measured MEG data during covert pursuit of a smoothly moving target and confirmed its effectiveness. Copyright © 2014 Elsevier Inc. All rights reserved.
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameter uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residuals rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the Unscented transform to compute the residuals rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the body diffusion parameter uncertainty. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
Finding Top-k Unexplained Activities in Video
2012-03-09
...parameters that define an UAP instance affect the running time by varying the values of each parameter while keeping the others fixed to a default value.
Runtime of Top-k TUA. Table 1 reports the values we considered for each parameter along with the corresponding default value.

Parameter   Values                 Default value
k           1, 2, 5, All           All
τ           0.4, 0.6, 0.8          0.6
L           160, 200, 240, 280     200
# worlds    7E+04, 4E+05, 2E+07    2E+07

TABLE 1: Parameter values used in ...
van de Geijn, J; Fraass, B A
1984-01-01
The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
Net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR
DOE Office of Scientific and Technical Information (OSTI.GOV)
van de Geijn, J.; Fraass, B.A.
The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
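A hedged reading of the stated definition - the NFD is the FDD with the inverse-square factor removed - for a beam with source-surface distance and reference depth of maximum dose; the convention (and the sign of the correction) should be checked against the paper.

def net_fractional_depth_dose(fdd, depth, d_ref, ssd):
    """Assumed convention: NFD(d) = FDD(d) * ((ssd + depth) / (ssd + d_ref))**2,
    i.e. the FDD with the inverse-square factor divided out."""
    return fdd * ((ssd + depth) / (ssd + d_ref)) ** 2

# illustrative numbers only (FDD of 65% at 10 cm depth, d_ref = 1.5 cm, SSD = 100 cm)
print(net_fractional_depth_dose(fdd=0.65, depth=10.0, d_ref=1.5, ssd=100.0))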
Evaluation of weather-based rice yield models in India.
Sudharsan, D; Adinarayana, J; Reddy, D Raji; Sreenivas, G; Ninomiya, S; Hirafuji, M; Kiura, T; Tanaka, K; Desai, U B; Merchant, S N
2013-01-01
The objective of this study was to compare two different rice simulation models--standalone (Decision Support System for Agrotechnology Transfer [DSSAT]) and web-based (SImulation Model for RIce-Weather relations [SIMRIW])--with agrometeorological data and agronomic parameters for estimation of rice crop production in the southern semi-arid tropics of India. Studies were carried out on the BPT5204 rice variety to evaluate the two crop simulation models. Long-term experiments were conducted in a research farm of Acharya N G Ranga Agricultural University (ANGRAU), Hyderabad, India. Initially, the results were obtained using 4 years (1994-1997) of data with weather parameters from a local weather station to evaluate DSSAT simulated results against observed values. Linear regression models used for the purpose showed a close relationship between DSSAT and observed yield. Subsequently, yield comparisons were also carried out with SIMRIW and DSSAT, and validated with actual observed values. As the correlation coefficients of the SIMRIW simulations were within acceptable limits, further rice experiments in the monsoon (Kharif) and post-monsoon (Rabi) agricultural seasons (2009, 2010 and 2011) were carried out with a location-specific distributed sensor network system. These proximal systems help to simulate dry weight, leaf area index and potential yield with the Java-based SIMRIW on a daily/weekly/monthly/seasonal basis. These dynamic parameters are useful to the farming community for necessary decision making in a ubiquitous manner. However, SIMRIW requires fine tuning for better results/decision making.
Lucato, Jeanette Janaina Jaber; Cunha, Thiago Marraccini Nogueira da; Reis, Aline Mela Dos; Picanço, Patricia Salerno de Almeida; Barbosa, Renata Cléia Claudino; Liberali, Joyce; Righetti, Renato Fraga
2017-01-01
To evaluate the possible changes in tidal volume, minute volume and respiratory rate caused by the use of a heat and moisture exchanger in patients receiving pressure support mechanical ventilation and to quantify the variation in pressure support required to compensate for the effect caused by the heat and moisture exchanger. Patients under invasive mechanical ventilation in pressure support mode were evaluated using heated humidifiers and heat and moisture exchangers. If the volume found using the heat and moisture exchangers was lower than that found with the heated humidifier, an increase in pressure support was initiated during the use of the heat and moisture exchanger until a pressure support value was obtained that enabled the patient to generate a value close to the initial tidal volume obtained with the heated humidifier. The analysis was performed by means of the paired t test, and incremental values were expressed as percentages of increase required. A total of 26 patients were evaluated. The use of heat and moisture exchangers increased the respiratory rate and reduced the tidal and minute volumes compared with the use of the heated humidifier. Patients required a 38.13% increase in pressure support to maintain previous volumes when using the heat and moisture exchanger. The heat and moisture exchanger changed the tidal and minute volumes and respiratory rate parameters. Pressure support was increased to compensate for these changes.
Lucato, Jeanette Janaina Jaber; da Cunha, Thiago Marraccini Nogueira; dos Reis, Aline Mela; Picanço, Patricia Salerno de Almeida; Barbosa, Renata Cléia Claudino; Liberali, Joyce; Righetti, Renato Fraga
2017-01-01
Objective To evaluate the possible changes in tidal volume, minute volume and respiratory rate caused by the use of a heat and moisture exchanger in patients receiving pressure support mechanical ventilation and to quantify the variation in pressure support required to compensate for the effect caused by the heat and moisture exchanger. Methods Patients under invasive mechanical ventilation in pressure support mode were evaluated using heated humidifiers and heat and moisture exchangers. If the volume found using the heat and moisture exchangers was lower than that found with the heated humidifier, an increase in pressure support was initiated during the use of the heat and moisture exchanger until a pressure support value was obtained that enabled the patient to generate a value close to the initial tidal volume obtained with the heated humidifier. The analysis was performed by means of the paired t test, and incremental values were expressed as percentages of increase required. Results A total of 26 patients were evaluated. The use of heat and moisture exchangers increased the respiratory rate and reduced the tidal and minute volumes compared with the use of the heated humidifier. Patients required a 38.13% increase in pressure support to maintain previous volumes when using the heat and moisture exchanger. Conclusion The heat and moisture exchanger changed the tidal and minute volumes and respiratory rate parameters. Pressure support was increased to compensate for these changes. PMID:28977257
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Stacy; English, Shawn; Briggs, Timothy
Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process as the described flexural characterization was used for model validation.
GRAM-86 - FOUR DIMENSIONAL GLOBAL REFERENCE ATMOSPHERE MODEL
NASA Technical Reports Server (NTRS)
Johnson, D.
1994-01-01
The Four-D Global Reference Atmosphere program was developed from an empirical atmospheric model which generates values for pressure, density, temperature, and winds from surface level to orbital altitudes. This program can be used to generate altitude profiles of atmospheric parameters along any simulated trajectory through the atmosphere. The program was developed for design applications in the Space Shuttle program, such as the simulation of external tank re-entry trajectories. Other potential applications would be global circulation and diffusion studies, and generating profiles for comparison with other atmospheric measurement techniques, such as satellite measured temperature profiles and infrasonic measurement of wind profiles. The program is an amalgamation of two empirical atmospheric models for the low (below 25km) and high (above 90km) atmosphere, with a newly developed latitude-longitude dependent model for the middle atmosphere. The high atmospheric region above 115km is simulated entirely by the Jacchia (1970) model. The Jacchia program sections are in separate subroutines so that other thermospheric/exospheric models could easily be adapted if required for special applications. The atmospheric region between 30km and 90km is simulated by a latitude-longitude dependent empirical model modification of the latitude dependent empirical model of Groves (1971). Between 90km and 115km a smooth transition between the modified Groves values and the Jacchia values is accomplished by a fairing technique. Below 25km the atmospheric parameters are computed by the 4-D worldwide atmospheric model of Spiegler and Fowler (1972). This data set is not included. Between 25km and 30km an interpolation scheme is used between the 4-D results and the modified Groves values. The output parameters consist of components for: (1) latitude, longitude, and altitude dependent monthly and annual means, (2) quasi-biennial oscillations (QBO), and (3) random perturbations to partially simulate the variability due to synoptic, diurnal, planetary wave, and gravity wave variations. Quasi-biennial and random variation perturbations are computed from parameters determined by various empirical studies and are added to the monthly mean values. The UNIVAC version of GRAM is written in UNIVAC FORTRAN and has been implemented on a UNIVAC 1110 under control of EXEC 8 with a central memory requirement of approximately 30K of 36 bit words. The GRAM program was developed in 1976 and GRAM-86 was released in 1986. The monthly data files were last updated in 1986. The DEC VAX version of GRAM is written in FORTRAN 77 and has been implemented on a DEC VAX 11/780 under control of VMS 4.X with a central memory requirement of approximately 100K of 8 bit bytes. The GRAM program was originally developed in 1976 and later converted to the VAX in 1986 (GRAM-86). The monthly data files were last updated in 1986.
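The abstract above notes that GRAM fairs the modified Groves values into the Jacchia values between 90 km and 115 km, but the exact fairing function is not given. The snippet below is a minimal sketch of such a blend, assuming a cosine weighting; the weighting shape and the hypothetical temperature values are illustrative, not taken from GRAM-86.

```python
import numpy as np

def faired_value(z_km, lower_model_value, upper_model_value, z_lo=90.0, z_hi=115.0):
    """Blend two atmospheric model values smoothly across a transition band.

    A generic 'fairing': the lower-altitude model is used below z_lo, the
    upper-altitude model above z_hi, and a smooth cosine weighting in between.
    The cosine form is an assumption, not necessarily the weighting GRAM uses.
    """
    if z_km <= z_lo:
        return lower_model_value
    if z_km >= z_hi:
        return upper_model_value
    # weight goes smoothly from 0 at z_lo to 1 at z_hi
    w = 0.5 * (1.0 - np.cos(np.pi * (z_km - z_lo) / (z_hi - z_lo)))
    return (1.0 - w) * lower_model_value + w * upper_model_value

# Example: blend hypothetical temperatures (K) at 100 km altitude
print(faired_value(100.0, lower_model_value=190.0, upper_model_value=210.0))
```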
Kwak, Dai Soon; Tao, Quang Bang; Todo, Mitsugu; Jeon, Insu
2012-05-01
Knee joint implants developed by western companies have been imported to Korea and used for Korean patients. However, many clinical problems occur in knee joints of Korean patients after total knee joint replacement owing to the geometric mismatch between the western implants and Korean knee joint structures. To solve these problems, a method to determine the representative dimension parameter values of Korean knee joints is introduced to aid in the design of knee joint implants appropriate for Korean patients. Measurements of the dimension parameters of 88 male Korean knee joint subjects were carried out. The distribution of the subjects versus each measured parameter value was investigated. The measured dimension parameter values of each parameter were grouped by suitable intervals called the "size group," and average values of the size groups were calculated. The knee joint subjects were grouped as the "patient group" based on "size group numbers" of each parameter. From the iterative calculations to decrease the errors between the average dimension parameter values of each "patient group" and the dimension parameter values of the subjects, the average dimension parameter values that give less than the error criterion were determined to be the representative dimension parameter values for designing knee joint implants for Korean patients.
Optimization of seismic isolation systems via harmony search
NASA Astrophysics Data System (ADS)
Melih Nigdeli, Sinan; Bekdaş, Gebrail; Alhan, Cenk
2014-11-01
In this article, the optimization of isolation system parameters via the harmony search (HS) optimization method is proposed for seismically isolated buildings subjected to both near-fault and far-fault earthquakes. To obtain optimum values of isolation system parameters, an optimization program was developed in Matlab/Simulink employing the HS algorithm. The objective was to obtain a set of isolation system parameters within a defined range that minimizes the acceleration response of a seismically isolated structure subjected to various earthquakes without exceeding a peak isolation system displacement limit. Several cases were investigated for different isolation system damping ratios and peak displacement limitations of seismic isolation devices. Time history analyses were repeated for the neighbouring parameters of optimum values and the results proved that the parameters determined via HS were true optima. The performance of the optimum isolation system was tested under a second set of earthquakes that was different from the first set used in the optimization process. The proposed optimization approach is applicable to linear isolation systems. Isolation systems composed of isolation elements that are inherently nonlinear are the subject of a future study. Investigation of the optimum isolation system parameters has been considered in parametric studies. However, obtaining the best performance of a seismic isolation system requires a true optimization by taking the possibility of both near-fault and far-fault earthquakes into account. HS optimization is proposed here as a viable solution to this problem.
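The abstract above does not include the authors' Matlab/Simulink implementation; the following is a minimal harmony search sketch in Python with a made-up stand-in objective (isolation period and damping ratio as the two design variables, with an arbitrary displacement penalty). The parameter names, bounds and constants are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Minimal harmony search sketch (not the authors' implementation).

    bounds: list of (low, high) per design variable; hms: harmony memory size;
    hmcr: memory considering rate; par: pitch adjusting rate;
    bw: bandwidth as a fraction of each variable's range.
    """
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    memory = lo + (hi - lo) * rng.random((hms, len(bounds)))
    costs = np.array([objective(x) for x in memory])
    for _ in range(iters):
        new = np.empty(len(bounds))
        for j in range(len(bounds)):
            if rng.random() < hmcr:                       # take value from memory
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                    # pitch adjustment
                    new[j] += (rng.random() - 0.5) * bw * (hi[j] - lo[j])
            else:                                         # random value in range
                new[j] = lo[j] + (hi[j] - lo[j]) * rng.random()
        new = np.clip(new, lo, hi)
        c = objective(new)
        worst = np.argmax(costs)
        if c < costs[worst]:                              # replace the worst harmony
            memory[worst], costs[worst] = new, c
    best = np.argmin(costs)
    return memory[best], costs[best]

# Toy stand-in objective: acceleration-like term plus a displacement-limit penalty
def toy_cost(x):
    period, damping = x
    return (1.0 / period) ** 2 + 100.0 * max(0.0, 0.05 * period / damping ** 0.5 - 0.3)

print(harmony_search(toy_cost, bounds=[(2.0, 5.0), (0.1, 0.4)]))
```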
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, G. Tim; Hartman, Larry; Stagich, Brooke
Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of applicant site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991 and 2010. They are being updated in this report. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, T.; Stagich, B.
Operations at the Savannah River Site (SRS) result in releases of relatively small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991, 2008, 2010, and 2016 and are being concurred with or updated in this report. These parameters include local characteristics of meat, milk, and vegetable production; river recreational activities; and meat, milk, and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.
NASA Astrophysics Data System (ADS)
Vasić, M.; Radojević, Z.
2017-08-01
One of the main disadvantages of the recently reported method, for setting up the drying regime based on the theory of moisture migration during drying, lies in the fact that it is based on a large number of isothermal experiments. In addition each isothermal experiment requires the use of different drying air parameters. The main goal of this paper was to find a way to reduce the number of isothermal experiments without affecting the quality of the previously proposed calculation method. The first task was to define the lower and upper inputs as well as the output of the “black box” which will be used in the Box-Wilkinson’s orthogonal multi-factorial experimental design. Three inputs (drying air temperature, humidity and velocity) were used within the experimental design. The output parameter of the model represents the time interval between any two chosen characteristic points on the Deff–t curve. The second task was to calculate the output parameter for each planned experiment. The final output of the model is the equation which can predict the time interval between any two chosen characteristic points as a function of the drying air parameters. This equation is valid for any value of the drying air parameters which are within the defined area designated with lower and upper limiting values.
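The abstract above describes a three-factor orthogonal design whose output is a regression equation predicting the time interval from drying air temperature, humidity and velocity. As a rough illustration, and not the authors' actual design or data, the sketch below fits a second-order response surface to hypothetical coded factor settings with ordinary least squares.

```python
import numpy as np

def quadratic_terms(X):
    """Second-order response-surface terms for three coded factors:
    1, x1, x2, x3, x1*x2, x1*x3, x2*x3, x1^2, x2^2, x3^2."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

# Hypothetical coded settings (temperature, humidity, velocity): a face-centred
# layout with 8 factorial runs, 6 axial runs and 1 centre run.
X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
              [-1, 0, 0], [1, 0, 0], [0, -1, 0], [0, 1, 0],
              [0, 0, -1], [0, 0, 1], [0, 0, 0]], dtype=float)
# Hypothetical measured time intervals between two characteristic points (min)
y = np.array([52, 38, 47, 35, 44, 31, 40, 27, 46, 33, 43, 39, 45, 36, 41], dtype=float)

coef, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)

def predict(temp_coded, hum_coded, vel_coded):
    """Predicted interval for any setting inside the coded (-1, +1) region."""
    return float(quadratic_terms(np.array([[temp_coded, hum_coded, vel_coded]])) @ coef)

print(predict(0.5, -0.3, 0.2))
```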
NASA Astrophysics Data System (ADS)
Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr
2017-12-01
There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2006-01-01
The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications, one for each said group, are run, treating other unknown parameters appearing in their regression equations as if they were known perfectly, with their values provided by recursive least squares estimation from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
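The grouped-identification idea above can be illustrated with a toy model in which the parameters appear nonlinearly together but each group is linear once the other is held fixed. The sketch below solves each group with batch least squares in an alternating loop; the model form, the parameter values and the batch (rather than recursive) solution are all assumptions made for illustration, not the patented flight implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y = a*u + a*b*v  -- nonlinear in (a, b) jointly, but linear in a for
# fixed b and linear in b for fixed a.
a_true, b_true = 2.0, 0.5
u = rng.normal(size=200)
v = rng.normal(size=200)
y = a_true * u + a_true * b_true * v + 0.01 * rng.normal(size=200)

a_hat, b_hat = 1.0, 0.0          # initial guesses
for _ in range(20):              # alternate: each group is solved linearly
    # group 1: y = a*(u + b_hat*v)  ->  solve for a with b_hat treated as known
    phi_a = u + b_hat * v
    a_hat = float(phi_a @ y / (phi_a @ phi_a))
    # group 2: y - a_hat*u = (a_hat*v)*b  ->  solve for b with a_hat treated as known
    phi_b = a_hat * v
    b_hat = float(phi_b @ (y - a_hat * u) / (phi_b @ phi_b))

print(a_hat, b_hat)              # converges near (2.0, 0.5)
```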
Middleton, John; Vaks, Jeffrey E
2007-04-01
Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization of Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process, while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value assignment processes, with a reduced number of measurements and reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
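The value-transfer chain simulated in the study above is not spelled out in the abstract, so the sketch below uses a hypothetical two-step transfer (reference material to master calibrator to product calibrator) simply to illustrate how a Monte Carlo simulation separates the uncertainty added by the transfer process from the uncertainty of the reference material itself. All numbers are invented except the 3.7% relative uncertainty quoted for the reference material.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# Hypothetical value-transfer chain (all step imprecisions are made up):
# reference material -> master calibrator -> product calibrator.
ref_value, ref_rel_u = 100.0, 0.037           # 3.7% relative standard uncertainty
transfer1_cv = 0.006                          # imprecision of the first transfer step
transfer2_cv = 0.004                          # imprecision of the second transfer step

ref = rng.normal(ref_value, ref_value * ref_rel_u, N)
master = ref * rng.normal(1.0, transfer1_cv, N)      # measured against the reference
product = master * rng.normal(1.0, transfer2_cv, N)  # measured against the master

total_rel_u = product.std() / product.mean()
added_rel_u = np.sqrt(max(total_rel_u ** 2 - ref_rel_u ** 2, 0.0))
print(f"total relative uncertainty of the assigned value: {total_rel_u:.4f}")
print(f"component added by the transfer process:          {added_rel_u:.4f}")
```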
A Study on the Basic Criteria for Selecting Heterogeneity Parameters of F18-FDG PET Images.
Forgacs, Attila; Pall Jonsson, Hermann; Dahlbom, Magnus; Daver, Freddie; D DiFranco, Matthew; Opposits, Gabor; K Krizsan, Aron; Garai, Ildiko; Czernin, Johannes; Varga, Jozsef; Tron, Lajos; Balkay, Laszlo
2016-01-01
Textural analysis might give new insights into the quantitative characterization of metabolically active tumors. More than thirty textural parameters have been investigated in former F18-FDG studies already. The purpose of the paper is to declare basic requirements as a selection strategy to identify the most appropriate heterogeneity parameters to measure textural features. Our predefined requirements were: a reliable heterogeneity parameter has to be volume independent, reproducible, and suitable for expressing quantitatively the degree of heterogeneity. Based on these criteria, we compared various suggested measures of homogeneity. A homogeneous cylindrical phantom was measured on three different PET/CT scanners using the commonly used protocol. In addition, a custom-made inhomogeneous tumor insert placed into the NEMA image quality phantom was imaged with a set of acquisition times and several different reconstruction protocols. PET data of 65 patients with proven lung lesions were retrospectively analyzed as well. Four heterogeneity parameters out of 27 were found as the most attractive ones to characterize the textural properties of metabolically active tumors in FDG PET images. These four parameters included Entropy, Contrast, Correlation, and Coefficient of Variation. These parameters were independent of delineated tumor volume (bigger than 25-30 ml), provided reproducible values (relative standard deviation < 10%), and showed high sensitivity to changes in heterogeneity. Phantom measurements are a viable way to test the reliability of heterogeneity parameters that would be of interest to nuclear imaging clinicians.
A Study on the Basic Criteria for Selecting Heterogeneity Parameters of F18-FDG PET Images
Forgacs, Attila; Pall Jonsson, Hermann; Dahlbom, Magnus; Daver, Freddie; D. DiFranco, Matthew; Opposits, Gabor; K. Krizsan, Aron; Garai, Ildiko; Czernin, Johannes; Varga, Jozsef; Tron, Lajos; Balkay, Laszlo
2016-01-01
Textural analysis might give new insights into the quantitative characterization of metabolically active tumors. More than thirty textural parameters have been investigated in former F18-FDG studies already. The purpose of the paper is to declare basic requirements as a selection strategy to identify the most appropriate heterogeneity parameters to measure textural features. Our predefined requirements were: a reliable heterogeneity parameter has to be volume independent, reproducible, and suitable for expressing quantitatively the degree of heterogeneity. Based on these criteria, we compared various suggested measures of homogeneity. A homogeneous cylindrical phantom was measured on three different PET/CT scanners using the commonly used protocol. In addition, a custom-made inhomogeneous tumor insert placed into the NEMA image quality phantom was imaged with a set of acquisition times and several different reconstruction protocols. PET data of 65 patients with proven lung lesions were retrospectively analyzed as well. Four heterogeneity parameters out of 27 were found as the most attractive ones to characterize the textural properties of metabolically active tumors in FDG PET images. These four parameters included Entropy, Contrast, Correlation, and Coefficient of Variation. These parameters were independent of delineated tumor volume (bigger than 25–30 ml), provided reproducible values (relative standard deviation < 10%), and showed high sensitivity to changes in heterogeneity. Phantom measurements are a viable way to test the reliability of heterogeneity parameters that would be of interest to nuclear imaging clinicians. PMID:27736888
NASA Technical Reports Server (NTRS)
Duval, R. W.; Bahrami, M.
1985-01-01
The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition it is desirable to identify the parameters directly from the flight data. A Maximum Likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. The rapid convergence facilitates tracking of time varying parameters of the load cell model in flight.
Element distinctness revisited
NASA Astrophysics Data System (ADS)
Portugal, Renato
2018-07-01
The element distinctness problem is the problem of determining whether the elements of a list are distinct, that is, if x=(x_1,\ldots ,x_N) is a list with N elements, we ask whether the elements of x are distinct or not. The solution in a classical computer requires N queries because it uses sorting to check whether there are equal elements. In the quantum case, it is possible to solve the problem in O(N^{2/3}) queries. There is an extension which asks whether there are k colliding elements, known as the element k-distinctness problem. This work obtains optimal values of two critical parameters of Ambainis' seminal quantum algorithm (SIAM J Comput 37(1):210-239, 2007). The first critical parameter is the number of repetitions of the algorithm's main block, which inverts the phase of the marked elements and calls a subroutine. The second parameter is the number of quantum walk steps interlaced by oracle queries. We show that, when the optimal values of the parameters are used, the algorithm's success probability is 1-O(N^{-1/(k+1)}), quickly approaching 1. The specification of the exact running time and success probability is important in practical applications of this algorithm.
Thin Film Heat Flux Sensors: Design and Methodology
NASA Technical Reports Server (NTRS)
Fralick, Gustave C.; Wrbanek, John D.
2013-01-01
Thin Film Heat Flux Sensors: Design and Methodology: (1) Heat flux is one of a number of parameters, together with pressure, temperature, flow, etc., of interest to engine designers and fluid dynamicists; (2) the measurement of heat flux is of interest in directly determining the cooling requirements of hot section blades and vanes; and (3) in addition, if the surface and gas temperatures are known, the measurement of heat flux provides a value for the convective heat transfer coefficient that can be compared with the value provided by CFD codes.
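Item (3) above amounts to Newton's law of cooling rearranged for the film coefficient; a minimal sketch with invented hot-section numbers:

```python
def convective_coefficient(q_w_per_m2, t_gas_k, t_surface_k):
    """Convective heat transfer coefficient from a measured heat flux,
    h = q / (T_gas - T_surface), in W/(m^2 K)."""
    return q_w_per_m2 / (t_gas_k - t_surface_k)

# Hypothetical values for illustration only
print(convective_coefficient(q_w_per_m2=5.0e5, t_gas_k=1600.0, t_surface_k=1150.0))
```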
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters, and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
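The general pattern, minimizing measured-minus-simulated water levels over a few lumped parameters, can be sketched with scipy.optimize.least_squares. The toy forward model below (an exponential head decay with an amplitude and a leakage length) is invented purely for illustration and is not the west-central Florida model; the Jacobian-based standard errors at the end are likewise only an approximation.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

def simulated_heads(params, x):
    """Toy stand-in for a groundwater model: heads recover away from a boundary
    with an amplitude and a leakage length (both hypothetical parameters)."""
    amplitude, leakage_length = params
    return 30.0 - amplitude * np.exp(-x / leakage_length)

x_obs = np.linspace(100.0, 5000.0, 25)                 # observation well distances (m)
true_params = np.array([5.0, 1500.0])
h_obs = simulated_heads(true_params, x_obs) + 0.05 * rng.normal(size=x_obs.size)

def residuals(params):
    """Simulated minus measured water levels."""
    return simulated_heads(params, x_obs) - h_obs

fit = least_squares(residuals, x0=[1.0, 500.0],
                    bounds=([0.1, 50.0], [20.0, 20000.0]))
print(fit.x)                                           # optimized parameter values

# Rough parameter standard errors from the Jacobian at the optimum
s = fit.fun.std(ddof=2)
cov = s ** 2 * np.linalg.inv(fit.jac.T @ fit.jac)
print(np.sqrt(np.diag(cov)))
```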
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
Reliability of diabetic patients' gait parameters in a challenging environment.
Allet, L; Armand, S; de Bie, R A; Golay, A; Monnin, D; Aminian, K; de Bruin, E D
2008-11-01
Activities of daily life require us to move about in challenging environments and to walk on varied surfaces. Irregular terrain has been shown to influence gait parameters, especially in a population at risk for falling. A precise portable measurement system would permit objective gait analysis under such conditions. The aims of this study are to (a) investigate the reliability of gait parameters measured with the Physilog in diabetic patients walking on different surfaces (tar, grass, and stones); (b) identify the measurement error (precision); (c) identify the minimal clinical detectable change. 16 patients with Type 2 diabetes were measured twice within 8 days. After clinical examination patients walked, equipped with a Physilog, on the three aforementioned surfaces. ICC for each surface was excellent for within-visit analyses (>0.938). Inter-visit ICCs (0.753) were excellent except for the knee range parameter (>0.503). The coefficient of variation (CV) was lower than 5% for most of the parameters. Bland and Altman Plots, SEM and SDC showed precise values, distributed around zero for all surfaces. Good reliability of Physilog measurements on different surfaces suggests that Physilog could facilitate the study of diabetic patients' gait in conditions close to real-life situations. Gait parameters during complex locomotor activities (e.g. stair-climbing, curbs, slopes) have not yet been extensively investigated. Good reliability, small measurement error and values of minimal clinical detectable change recommend the utilization of Physilog for the evaluation of gait parameters in diabetic patients.
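As a rough illustration of the reliability statistics named above (CV, SEM, SDC and Bland-Altman limits of agreement), the snippet below computes one common set of formulas from made-up test-retest values of a single gait parameter; the exact definitions used in the study may differ.

```python
import numpy as np

# Hypothetical test-retest values of one gait parameter (e.g. gait speed, m/s)
# for a few patients on one surface; the numbers are invented for illustration.
visit1 = np.array([1.02, 0.95, 1.10, 0.88, 1.05, 0.99])
visit2 = np.array([1.00, 0.97, 1.07, 0.90, 1.08, 0.96])

diff = visit2 - visit1
grand_mean = np.mean((visit1 + visit2) / 2.0)

cv_percent = 100.0 * diff.std(ddof=1) / grand_mean            # coefficient of variation
sem = diff.std(ddof=1) / np.sqrt(2.0)                         # standard error of measurement
sdc = 1.96 * np.sqrt(2.0) * sem                               # smallest detectable change
bias = diff.mean()                                            # Bland-Altman bias
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))  # limits of agreement

print(cv_percent, sem, sdc, bias, loa)
```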
Concurrently adjusting interrelated control parameters to achieve optimal engine performance
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-12-01
Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
Yamada, Kentaro; Abe, Yuichiro; Satoh, Shigenobu; Yanagibashi, Yasushi; Hyakumachi, Takahiko; Masuda, Takeshi
2017-08-01
No previous studies have reported the radiological features of patients requiring surgery in symptomatic lumbar foraminal stenosis (LFS). This study aims to investigate the diagnostic accuracy of a novel technique, foraminal stenotic ratio (FSR), using three-dimensional magnetic resonance imaging for LFS at L5-S by comparing patients requiring surgery, patients with successful conservative treatment, and asymptomatic patients. This is a retrospective radiological comparative study. We assessed the magnetic resonance imaging (MRI) results of 84 patients (168 L5-S foramina) aged ≥40 years without L4-L5 lumbar spinal stenosis. The foramina were divided into three groups following standardized treatment: stenosis requiring surgery (20 foramina), stenosis with successful conservative treatment (26 foramina), and asymptomatic stenotic foramen (122 foramina). Foraminal stenotic ratio was defined as the ratio of the length of the stenosis to the length of the foramen on the reconstructed oblique coronal image, referring to perineural fat obliterations in whole oblique sagittal images. We also evaluated the foraminal nerve angle and the minimum nerve diameter on reconstructed images, and the Lee classification on conventional T1 images. The differences in each MRI parameter between the groups were investigated. To predict which patients require surgery, receiver operating characteristic (ROC) curves were plotted after calculating the area under the ROC curve. The FSR showed a stepwise increase when comparing asymptomatic, conservative, and surgical groups (mean, 8.6%, 38.5%, 54.9%, respectively). Only FSR was significantly different between the surgical and conservative groups (p=.002), whereas all parameters were significantly different comparing the symptomatic and asymptomatic groups. The ROC curve showed that the area under the curve for FSR was 0.742, and the optimal cutoff value for FSR for predicting a surgical requirement in symptomatic patients was 50% (sensitivity, 75%; specificity, 80.7%). The FSR determined LFS requiring surgery among symptomatic patients, with moderate accuracy. Foramina occupied ≥50% by fat obliteration were likely to fail conservative treatment, with a positive predictive value of 75%. Copyright © 2017 Elsevier Inc. All rights reserved.
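To illustrate the ROC analysis described above (area under the curve and an optimal FSR cutoff), here is a small pure-numpy sketch using invented FSR values; the study's actual data and ROC software are not reproduced, and the Youden-index rule for the cutoff is an assumption.

```python
import numpy as np

def roc_auc_and_cutoff(scores, labels):
    """ROC area and Youden-index cutoff from raw scores (label 1 = surgery).
    Pure-numpy sketch; the study's exact ROC procedure is not specified."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    thresholds = np.unique(scores)[::-1]
    tpr = [((scores >= t) & (labels == 1)).sum() / (labels == 1).sum() for t in thresholds]
    fpr = [((scores >= t) & (labels == 0)).sum() / (labels == 0).sum() for t in thresholds]
    tpr = np.r_[0.0, tpr, 1.0]
    fpr = np.r_[0.0, fpr, 1.0]
    auc = np.trapz(tpr, fpr)                          # area under the ROC curve
    youden = tpr[1:-1] - fpr[1:-1]
    cutoff = thresholds[np.argmax(youden)]            # maximizes sensitivity + specificity - 1
    return auc, cutoff

# Hypothetical FSR values (%) for conservative (0) and surgical (1) foramina
fsr = np.array([20, 35, 42, 55, 48, 60, 30, 52, 58, 66, 70, 45])
grp = np.array([0,  0,  0,  0,  1,  1,  0,  1,  1,  1,  1,  0])
print(roc_auc_and_cutoff(fsr, grp))
```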
Determination of the Characteristic Values and Variation Ratio for Sensitive Soils
NASA Astrophysics Data System (ADS)
Milutinovici, Emilia; Mihailescu, Daniel
2017-12-01
In 2008, Romania adopted Eurocode 7, part II, on geotechnical investigations, as SR EN1997-2/2008. A previous standard already existed in Romania, and by applying mathematical statistics to the determination of calculation values, the requirements of the Eurocode can be taken into consideration. The procedure for setting the characteristic and calculation values of geotechnical parameters was finally issued in Romania at the end of 2010 as standard NP122-2010, “Norm regarding determination of the characteristic and calculation values of the geotechnical parameters”. This standard allows the use of data already known from the analysed area when setting the calculation values of geotechnical parameters. Although this possibility exists, it is not easily applied in Romania, because there is no centralized system of information from the geotechnical studies performed for various private or national objectives. Every company performing geotechnical studies tries to organize its own database, but unfortunately none of them can draw on centralized data. When determining calculation values, an important role is played by the variation ratio of the characteristic values of a geotechnical parameter. The Norm gives recommendations on the limits of the variation ratio that can be taken into account, but these values apply only to normally consolidated soils of Quaternary age with an organic content below 5%. All of the difficult soils are excluded from the Norm, even though they exist and affect construction foundations on more than half of Romania's surface. One type of difficult soil, extremely widespread on Romanian territory, is the contractile soil (with high swelling and shrinkage, very sensitive to seasonal moisture variations). This type of material underlies and influences construction foundations in over one third of Romania's territory. This work proposes a step toward determining limits of the variation ratios for the contractile soil category, for the geotechnical parameters most used in Romanian engineering practice, namely the consistency index and the cohesion.
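One common statistical route from a set of measured values to a cautious characteristic value uses the sample mean, the coefficient of variation and a factor k_n that depends on the number of samples. The sketch below shows that normal-theory form under stated assumptions; it is an illustration only, not a quotation of NP122-2010 or Eurocode 7, and the cohesion values are invented.

```python
import numpy as np
from scipy.stats import norm, t

def characteristic_value(samples, cov_known=None, alpha=0.95):
    """Characteristic (cautious mean) value of a geotechnical parameter.

    Assumed form: X_k = mean * (1 - k_n * COV), with k_n from normal theory
    (95% confidence in the mean). If the coefficient of variation is known
    a priori, a normal quantile is used; otherwise a Student-t quantile and
    the sample COV are used. These choices are assumptions for illustration.
    """
    x = np.asarray(samples, dtype=float)
    n = x.size
    mean = x.mean()
    if cov_known is not None:                    # COV known from prior experience
        k_n = norm.ppf(alpha) * np.sqrt(1.0 / n)
        cov = cov_known
    else:                                        # COV estimated from the sample
        k_n = t.ppf(alpha, df=n - 1) * np.sqrt(1.0 / n)
        cov = x.std(ddof=1) / mean
    return mean * (1.0 - k_n * cov)

# Hypothetical cohesion measurements (kPa) for a contractile clay
print(characteristic_value([38.0, 42.0, 35.0, 40.0, 37.0, 44.0]))
```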
Pope, Noah G.; Veirs, Douglas K.; Claytor, Thomas N.
1994-01-01
The specific gravity or solute concentration of a process fluid solution located in a selected structure is determined by obtaining a resonance response spectrum of the fluid/structure over a range of frequencies that are outside the response of the structure itself. A fast Fourier transform (FFT) of the resonance response spectrum is performed to form a set of FFT values. A peak value for the FFT values is determined, e.g., by curve fitting, to output a process parameter that is functionally related to the specific gravity and solute concentration of the process fluid solution. Calibration curves are required to correlate the peak FFT value over the range of expected specific gravities and solute concentrations in the selected structure.
Pope, N.G.; Veirs, D.K.; Claytor, T.N.
1994-10-25
The specific gravity or solute concentration of a process fluid solution located in a selected structure is determined by obtaining a resonance response spectrum of the fluid/structure over a range of frequencies that are outside the response of the structure itself. A fast Fourier transform (FFT) of the resonance response spectrum is performed to form a set of FFT values. A peak value for the FFT values is determined, e.g., by curve fitting, to output a process parameter that is functionally related to the specific gravity and solute concentration of the process fluid solution. Calibration curves are required to correlate the peak FFT value over the range of expected specific gravities and solute concentrations in the selected structure. 7 figs.
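The two records above describe taking an FFT of the resonance response spectrum and locating its peak by curve fitting. The sketch below fabricates a synthetic resonance comb, takes the FFT, and refines the peak location with a three-point parabolic fit; every number is invented, and the parabolic fit merely stands in for whatever curve-fitting procedure the patent actually uses.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic resonance response spectrum: a comb of resonances whose spacing
# would depend on the fluid (hypothetical numbers, for illustration only).
freq = np.linspace(0.0, 200e3, 4096)             # Hz
spacing = 7.4e3                                  # resonance spacing, Hz
spectrum = np.cos(2 * np.pi * freq / spacing) ** 8 + 0.05 * rng.normal(size=freq.size)

# FFT of the spectrum; the peak position is the parameter that would be tracked
# against calibration curves of specific gravity / solute concentration.
fft_vals = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
k = np.argmax(fft_vals[1:]) + 1                  # skip the zero bin

# Parabolic (curve-fit) refinement of the peak location
y0, y1, y2 = fft_vals[k - 1], fft_vals[k], fft_vals[k + 1]
delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
peak_bin = k + delta
print(peak_bin)    # functionally related to the resonance spacing, hence to the fluid
```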
Influence of Contact Angle, Growth Angle and Melt Surface Tension on Detached Solidification of InSb
NASA Technical Reports Server (NTRS)
Wang, Yazhen; Regel, Liya L.; Wilcox, William R.
2000-01-01
We extended the previous analysis of detached solidification of InSb based on the moving meniscus model. We found that for steady detached solidification to occur in a sealed ampoule in zero gravity, it is necessary for the growth angle to exceed a critical value, the contact angle for the melt on the ampoule wall to exceed a critical value, and the melt-gas surface tension to be below a critical value. These critical values would depend on the material properties and the growth parameters. For the conditions examined here, the sum of the growth angle and the contact angle must exceed approximately 130°, which is significantly less than required if both ends of the ampoule are open.
Thornley, John H. M.
2011-01-01
Background and Aims Plant growth and respiration still has unresolved issues, examined here using a model. The aims of this work are to compare the model's predictions with McCree's observation-based respiration equation which led to the ‘growth respiration/maintenance respiration paradigm’ (GMRP) – this is required to give the model credibility; to clarify the nature of maintenance respiration (MR) using a model which does not represent MR explicitly; and to examine algebraic and numerical predictions for the respiration:photosynthesis ratio. Methods A two-state variable growth model is constructed, with structure and substrate, applicable on plant to ecosystem scales. Four processes are represented: photosynthesis, growth with growth respiration (GR), senescence giving a flux towards litter, and a recycling of some of this flux. There are four significant parameters: growth efficiency, rate constants for substrate utilization and structure senescence, and fraction of structure returned to the substrate pool. Key Results The model can simulate McCree's data on respiration, providing an alternative interpretation to the GMRP. The model's parameters are related to parameters used in this paradigm. MR is defined and calculated in terms of the model's parameters in two ways: first during exponential growth at zero growth rate; and secondly at equilibrium. The approaches concur. The equilibrium respiration:photosynthesis ratio has the value of 0·4, depending only on growth efficiency and recycling fraction. Conclusions McCree's equation is an approximation that the model can describe; it is mistaken to interpret his second coefficient as a maintenance requirement. An MR rate is defined and extracted algebraically from the model. MR as a specific process is not required and may be replaced with an approach from which an MR rate emerges. The model suggests that the respiration:photosynthesis ratio is conservative because it depends on two parameters only whose values are likely to be similar across ecosystems. PMID:21948663
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
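The UWBCS itself is not available here, so the sketch below replaces it with a trivial stand-in "simulation" and shows the general active-learning loop: train a small neural-network surrogate (scikit-learn) on the parameter combinations evaluated so far, then spend the next batch of simulation runs only on the combinations the surrogate ranks as most promising. The pool size, batch size and acceptance rule are all invented.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)

def simulate_and_score(theta):
    """Stand-in for the simulation model: a parameter combination is 'acceptable'
    if it reproduces a hypothetical target output closely."""
    target = np.array([0.3, 0.7, 0.5])
    return np.linalg.norm(theta - target) < 0.25

pool = rng.random((20_000, 3))                      # candidate parameter combinations
evaluated = list(rng.choice(len(pool), 200, replace=False))   # initial random batch
labels = {i: simulate_and_score(pool[i]) for i in evaluated}

for _ in range(10):                                 # active-learning rounds
    X = pool[evaluated]
    y = np.array([labels[i] for i in evaluated], dtype=int)
    surrogate = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
    remaining = np.setdiff1d(np.arange(len(pool)), evaluated)
    p_good = surrogate.predict_proba(pool[remaining])[:, 1]
    batch = remaining[np.argsort(p_good)[-200:]]    # evaluate the most promising combos
    for i in batch:
        labels[i] = simulate_and_score(pool[i])
    evaluated.extend(batch.tolist())

found = sum(labels.values())
print(f"acceptable combinations found: {found} after {len(evaluated)} evaluations")
```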
Davidson, P; Bigerelle, M; Bounichane, B; Giazzon, M; Anselme, K
2010-07-01
Contact guidance is generally evaluated by measuring the orientation angle of cells. However, statistical analyses are rarely performed on these parameters. Here we propose a statistical analysis based on a new parameter σ, the orientation parameter, defined as the dispersion of the distribution of orientation angles. This parameter can be used to obtain a truncated Gaussian distribution that models the distribution of the data between −90° and +90°. We established a threshold value of the orientation parameter below which the data can be considered to be aligned within a 95% confidence interval. Applying our orientation parameter to cells on grooves and using a modelling approach, we established the relationship σ = α_meas + (52° − α_meas)/(1 + C_GDE·R), where the parameter C_GDE represents the sensitivity of cells to groove depth, and R the groove depth. The values of C_GDE obtained allowed us to compare the contact guidance of human osteoprogenitor (HOP) cells across experiments involving different groove depths, times in culture and inoculation densities. We demonstrate that HOP cells are able to identify and respond to the presence of grooves 30, 100, 200 and 500 nm deep and that the deeper the grooves, the higher the cell orientation. The evolution of the sensitivity (C_GDE) with culture time is roughly sigmoidal with an asymptote, which is a function of inoculation density. The σ parameter defined here is a universal parameter that can be applied to all orientation measurements and does not require a mathematical background or knowledge of directional statistics. Copyright 2010 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
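Given measured orientation parameters at several groove depths, the reported relationship can be fitted directly to estimate C_GDE; the depths below match those quoted in the abstract, but the σ values and starting guesses in this sketch are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigma_model(R_nm, c_gde, alpha_meas):
    """Orientation parameter versus groove depth R:
    sigma = alpha_meas + (52 - alpha_meas) / (1 + c_gde * R), in degrees."""
    return alpha_meas + (52.0 - alpha_meas) / (1.0 + c_gde * R_nm)

# Groove depths (nm) as in the study; hypothetical measured orientation parameters
R = np.array([30.0, 100.0, 200.0, 500.0])
sigma = np.array([35.0, 22.0, 15.0, 9.0])

popt, pcov = curve_fit(sigma_model, R, sigma, p0=[0.01, 5.0])
c_gde, alpha_meas = popt
print(c_gde, alpha_meas)   # estimated sensitivity to groove depth and asymptote
```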
Using Active Learning for Speeding up Calibration in Simulation Models
Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2015-01-01
Background Most cancer simulation models include unobservable parameters that determine the disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality and their values are typically estimated via lengthy calibration procedure, which involves evaluating large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs, therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190
A correlation to estimate the velocity of convective currents in boilover.
Ferrero, Fabio; Kozanoglu, Bulent; Arnaldos, Josep
2007-05-08
The mathematical model proposed by Kozanoglu et al. [B. Kozanoglu, F. Ferrero, M. Muñoz, J. Arnaldos, J. Casal, Velocity of the convective currents in boilover, Chem. Eng. Sci. 61 (8) (2006) 2550-2556] for simulating heat transfer in hydrocarbon mixtures in the process that leads to boilover requires the initial value of the convective current's velocity through the fuel layer as an adjustable parameter. Here, a correlation for predicting this parameter based on the properties of the fuel (average ebullition temperature) and the initial thickness of the fuel layer is proposed.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, V.
1988-01-01
A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.
Borole, Abhijeet P.
2015-08-25
Conversion of biomass into bioenergy is possible via multiple pathways resulting in production of biofuels, bioproducts and biopower. Efficient and sustainable conversion of biomass, however, requires consideration of many environmental and societal parameters in order to minimize negative impacts. Integration of multiple conversion technologies and inclusion of upcoming alternatives such as bioelectrochemical systems can minimize these impacts and improve conservation of resources such as hydrogen, water and nutrients via recycle and reuse. This report outlines alternate pathways integrating microbial electrolysis in biorefinery schemes to improve energy efficiency while evaluating environmental sustainability parameters.
Evaluation of Potential Evapotranspiration from a Hydrologic Model on a National Scale
NASA Astrophysics Data System (ADS)
Hakala, Kirsti; Markstrom, Steven; Hay, Lauren
2015-04-01
The U.S. Geological Survey has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development and facilitate the application of simulations on the scale of the continental U.S. The NHM has a consistent geospatial fabric for modeling, consisting of over 100,000 hydrologic response units (HRUs). Each HRU requires accurate parameter estimates, some of which are attained from automated calibration. However, improved calibration can be achieved by initially utilizing as many parameters as possible from national data sets. This presentation investigates the effectiveness of calculating potential evapotranspiration (PET) parameters based on mean monthly values from the NOAA PET Atlas. Additional PET products are then used to evaluate the PET parameters. Effectively utilizing existing national-scale data sets can simplify the effort in establishing a robust NHM.
Model-Based IN SITU Parameter Estimation of Ultrasonic Guided Waves in AN Isotropic Plate
NASA Astrophysics Data System (ADS)
Hall, James S.; Michaels, Jennifer E.
2010-02-01
Most ultrasonic systems employing guided waves for flaw detection require information such as dispersion curves, transducer locations, and expected propagation loss. Degraded system performance may result if assumed parameter values do not accurately reflect the actual environment. By characterizing the propagating environment in situ at the time of test, potentially erroneous a priori estimates are avoided and performance of ultrasonic guided wave systems can be improved. A four-part model-based algorithm is described in the context of previous work that estimates model parameters whereby an assumed propagation model is used to describe the received signals. This approach builds upon previous work by demonstrating the ability to estimate parameters for the case of single mode propagation. Performance is demonstrated on signals obtained from theoretical dispersion curves, finite element modeling, and experimental data.
Parameters for the Operation of Bacterial Thiosalt Oxidation Ponds
Silver, M.
1985-01-01
Shake flask and pH-controlled reactor tests were used to determine the mathematical parameters for a mixed-culture bacterial thiosalt treatment pond. Values determined were as follows: Km and Vmax (thiosulfate), 9.83 g/liter and 243.9 mg/liter per h, respectively; Ki (lead), 3.17 mg/liter; Ki (copper), 1.27 mg/liter; Q10 between 10 and 30°C, 1.95. From these parameters, the required biooxidation pond volume and residence time could be calculated. Soluble zinc (0.2 g/liter) and particulate mill products and by-products (0.25 g/liter) were not inhibitory. Correlation with an operating thiosalt biooxidation pond showed the parameters used to be valid for thiosalt concentrations up to at least 2 g/liter, lead concentrations of at least 10 mg/liter, and temperatures of >2°C. PMID:16346885
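As a worked illustration of how the reported kinetic constants could be combined into a pond-sizing estimate, the sketch below assumes Michaelis-Menten kinetics with a Q10 temperature correction, a noncompetitive form for the metal inhibition, a 20 °C reference temperature, and an ideally mixed (CSTR) mass balance. All of these modelling choices, and the example concentrations, are assumptions rather than statements from the paper.

```python
def thiosulfate_oxidation_rate(s_mg_l, temp_c, pb_mg_l=0.0, cu_mg_l=0.0,
                               vmax=243.9, km=9830.0, ki_pb=3.17, ki_cu=1.27,
                               q10=1.95, ref_temp_c=20.0):
    """Volumetric thiosulfate oxidation rate, mg/L per h.

    Michaelis-Menten kinetics with the reported Km (9.83 g/L = 9830 mg/L) and
    Vmax, a Q10 temperature correction, and assumed noncompetitive inhibition
    terms for lead and copper. Reference temperature is an assumption.
    """
    v = vmax * s_mg_l / (km + s_mg_l)
    v *= q10 ** ((temp_c - ref_temp_c) / 10.0)              # temperature correction
    v /= (1.0 + pb_mg_l / ki_pb) * (1.0 + cu_mg_l / ki_cu)  # metal inhibition
    return v

def required_residence_time_h(s_in_mg_l, s_out_mg_l, **kwargs):
    """Residence time of an ideally mixed pond from a steady-state mass
    balance: tau = (S_in - S_out) / v(S_out)."""
    v = thiosulfate_oxidation_rate(s_out_mg_l, **kwargs)
    return (s_in_mg_l - s_out_mg_l) / v

# Example: reduce thiosulfate from 2000 to 100 mg/L at 15 degC with 2 mg/L lead
print(required_residence_time_h(2000.0, 100.0, temp_c=15.0, pb_mg_l=2.0))
```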
Sensitivity of Austempering Heat Treatment of Ductile Irons to Changes in Process Parameters
NASA Astrophysics Data System (ADS)
Boccardo, A. D.; Dardati, P. M.; Godoy, L. A.; Celentano, D. J.
2018-06-01
Austempered ductile iron (ADI) is frequently obtained by means of a three-step austempering heat treatment. The parameters of this process play a crucial role on the microstructure of the final product. This paper considers the influence of some process parameters ( i.e., the initial microstructure of ductile iron and the thermal cycle) on key features of the heat treatment (such as minimum required time for austenitization and austempering and microstructure of the final product). A computational simulation of the austempering heat treatment is reported in this work, which accounts for a coupled thermo-metallurgical behavior in terms of the evolution of temperature at the scale of the part being investigated (the macroscale) and the evolution of phases at the scale of microconstituents (the microscale). The paper focuses on the sensitivity of the process by looking at a sensitivity index and scatter plots. The sensitivity indices are determined by using a technique based on the variance of the output. The results of this study indicate that both the initial microstructure and the thermal cycle parameters play a key role in the production of ADI. This work also provides a guideline to help selecting values of the appropriate process parameters to obtain parts with a required microstructural characteristic.
Inverse gas chromatographic determination of solubility parameters of excipients.
Adamska, Katarzyna; Voelkel, Adam
2005-11-04
The principal aim of this work was an application of inverse gas chromatography (IGC) for the estimation of the solubility parameter of pharmaceutical excipients. The retention data of a number of test solutes were used to calculate the Flory-Huggins interaction parameter (χ1,2∞) and then the solubility parameter (δ2), the corrected solubility parameter (δT) and its components (δd, δp, δh) by using different procedures. The influence of different values of the test solutes' solubility parameter (δ1) on the calculated values was estimated. The solubility parameter values obtained for all excipients from the slope, following the procedure of Guillet and co-workers, are higher than those obtained from the components according to the Voelkel and Janas procedure. It was found that the solubility parameter value of the test solutes influences, though not significantly, the values of the solubility parameter of the excipients.
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
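GRAPE's second sampling option, Latin hypercube sampling, is easy to sketch on its own: each dimension is split into equal-probability strata, one point is drawn per stratum, and the strata are shuffled independently across dimensions, so the sample count does not have to grow with the number of parameters. The two example parameter ranges below (a mass and a thrust scale factor) are hypothetical.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=None):
    """Latin hypercube sample: one point in each of n_samples equal-probability
    strata per dimension, with the strata randomly paired across dimensions."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    # one uniform draw inside each stratum, then shuffle the stratum order per dimension
    u = (rng.random((n_samples, dim)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(dim):
        u[:, j] = u[rng.permutation(n_samples), j]
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Example: 8 samples of two hypothetical model parameters (mass, thrust scale)
print(latin_hypercube(8, bounds=[(900.0, 1100.0), (0.8, 1.2)], seed=0))
```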
Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model
NASA Astrophysics Data System (ADS)
Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr
2017-10-01
Issues concerning the advanced numerical analysis of concrete building structures in sophisticated computing systems currently require the involvement of nonlinear mechanics tools. The effort to design safer, more durable and, above all, more economically efficient concrete structures is supported by the use of advanced nonlinear concrete material models and a geometrically nonlinear approach. The application of nonlinear mechanics tools undoubtedly presents another step towards approximating the real behaviour of concrete building structures within computer numerical simulations. However, the success of this application depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their parameters. The effective application of nonlinear concrete material models within computer simulations often becomes problematic because these models frequently contain parameters (material constants) whose values are difficult to obtain, yet obtaining correct parameter values is essential for the proper functioning of the chosen material model. One approach that permits a successful solution of this problem is the use of optimization algorithms for inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it seeks the parameter values of the material model for which the results of the computer simulation best approximate the experimental data. This paper is focused on the optimization-based inverse identification of the parameters of a concrete cap material model known as the Continuous Surface Cap Model. The material parameters of the model are identified through the interaction of nonlinear computer simulations, gradient-based and nature-inspired optimization algorithms, and experimental data, the latter taking the form of a load-extension curve obtained from the evaluation of uniaxial tensile test results. The aim of this research was to obtain material model parameters corresponding to quasi-static tensile loading which may be further used in research involving dynamic and high-speed tensile loading. Based on the results obtained, it can be concluded that this goal has been reached.
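The identification loop can be illustrated with a short sketch in which a cheap two-parameter closed-form curve stands in for the nonlinear FE simulation of the Continuous Surface Cap Model; the parameter names, bounds, and data are placeholders, not values from the study.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "experimental" load-extension curve (placeholder values).
ext_exp = np.linspace(0.0, 0.2, 21)                    # mm
load_exp = 3.0 * (1.0 - np.exp(-ext_exp / 0.05))       # kN

def simulate_load(params, ext):
    """Stand-in for the nonlinear FE run: a two-parameter saturating law
    used only to illustrate the identification loop."""
    stiffness, decay = params
    return stiffness * (1.0 - np.exp(-ext / decay))

def residuals(params):
    # The identification minimizes the misfit between simulated and measured curves.
    return simulate_load(params, ext_exp) - load_exp

result = least_squares(residuals, x0=[1.0, 0.1], bounds=([0.0, 1e-3], [10.0, 1.0]))
print("identified parameters:", result.x)
```

In the paper the inner simulation is a full nonlinear analysis and the search is performed by gradient-based and nature-inspired optimizers; the gradient-based branch is what the `least_squares` call loosely mimics here.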
NASA Astrophysics Data System (ADS)
Trusiak, M.; Patorski, K.; Tkaczyk, T.
2014-12-01
We propose a fast, simple and experimentally robust method for reconstructing background-rejected optically-sectioned microscopic images using a two-shot structured illumination approach. The data demodulation technique requires two grid-illumination images mutually phase shifted by π (half a grid period), but the precise phase displacement value is not critical. Upon subtraction of the two frames, an input pattern with increased grid modulation is computed. The proposed demodulation procedure comprises: (1) two-dimensional data processing based on the enhanced fast empirical mode decomposition (EFEMD) method for object spatial frequency selection (noise reduction and bias term removal), and (2) calculation of a high-contrast optically-sectioned image using the two-dimensional spiral Hilbert transform (HS). The effectiveness of the proposed algorithm is compared with the results obtained for the same input data using conventional structured-illumination (SIM) and HiLo microscopy methods. The input data were collected for studying highly scattering tissue samples in reflectance mode. In comparison with the conventional three-frame SIM technique we need one frame less, and no stringent requirement on the exact phase shift between recorded frames is imposed. The HiLo algorithm outcome depends strongly on a set of parameters chosen manually by the operator (cut-off frequencies for low-pass and high-pass filtering and the η parameter value for optically-sectioned image reconstruction), whereas the proposed method is parameter-free. Moreover, the very short processing time required to demodulate the input pattern efficiently makes the proposed method well suited for real-time in-vivo studies. The current implementation completes full processing in 0.25 s on a medium-class PC (Intel i7 2.1 GHz processor and 8 GB RAM). A simple modification that extracts only the first two BIMFs with a fixed filter window size reduces the computing time to 0.11 s (8 frames/s).
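A rough sketch of the two-shot demodulation chain is given below. A simple mean subtraction stands in for the EFEMD noise/bias rejection step, the spiral Hilbert transform is implemented as a Fourier-domain vortex filter, and the grid-illuminated frames are synthetic; none of this is the authors' code.

```python
import numpy as np

def spiral_hilbert(s):
    """2-D spiral (vortex) Hilbert transform of a zero-mean image."""
    ny, nx = s.shape
    v, u = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
    kernel = np.exp(1j * np.arctan2(v, u))
    kernel[0, 0] = 0.0                              # spiral phase undefined at DC
    return np.fft.ifft2(kernel * np.fft.fft2(s))

# Synthetic pair of grid-illumination frames shifted by half a grid period (pi).
y, x = np.mgrid[0:256, 0:256]
obj = np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / (2 * 40 ** 2))
frame1 = obj * (1 + 0.8 * np.cos(2 * np.pi * x / 16)) + 0.2
frame2 = obj * (1 + 0.8 * np.cos(2 * np.pi * x / 16 + np.pi)) + 0.2

diff = frame1 - frame2            # bias terms cancel, grid modulation is doubled
diff -= diff.mean()               # crude stand-in for EFEMD bias/noise rejection
# Envelope demodulation: the optically sectioned image is the AM envelope.
sectioned = np.sqrt(diff ** 2 + np.abs(spiral_hilbert(diff)) ** 2)
```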
The 57Fe Mössbauer parameters of pyrite and marcasite with different provenances
Evans, B.J.; Johnson, R.G.; Senftle, F.E.; Cecil, C.B.; Dulong, F.
1982-01-01
The Mössbauer parameters of pyrite and marcasite exhibit appreciable variations, which bear no simple relationship to the geological environment in which they occur but appear to be selectively influenced by impurities, especially arsenic, in the pyrite lattice. Quantitative and qualitative determinations of pyrite/marcasite mechanical mixtures are straightforward at 298 K and 77 K but do require least-squares computer fittings and are limited to accuracies ranging from ±5 to ±15 per cent by uncertainties in the parameter values of the pure phases. The methodology and results of this investigation are directly applicable to coals for which the presence and relative amounts of pyrite and marcasite could be of considerable genetic significance.
Estimation of surface temperature in remote pollution measurement experiments
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.
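The correction scheme can be sketched as follows; the base correction and the per-parameter sensitivity coefficients are hypothetical placeholders standing in for the values that the radiative transfer model calculations would provide.

```python
# Hypothetical sensitivity coefficients relating parameter deviations from the
# 'basic atmosphere' to additional temperature corrections (K per unit deviation).
SENSITIVITIES = {"water_vapor": 0.8, "surface_emissivity": -15.0}
BASE_CORRECTION = 2.1    # K, correction for the basic atmosphere itself

def surface_temperature(t_brightness, deviations):
    """Add the base correction plus one simple analytical correction per deviating parameter."""
    correction = BASE_CORRECTION
    for name, dev in deviations.items():
        correction += SENSITIVITIES[name] * dev
    return t_brightness + correction

print(surface_temperature(288.4, {"water_vapor": 0.5, "surface_emissivity": -0.01}))
```

The point of the paper is that these precomputed linear relations replace a full radiative transfer run for each observation, which is where the severalfold cost reduction comes from.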
A note on implementation of decaying product correlation structures for quasi-least squares.
Shults, Justine; Guerra, Matthew W
2014-08-30
This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable.
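One common way to write the decaying product structure is that the correlation between measurements j and k is the product of the adjacent-pair correlations between them; the sketch below builds such a matrix from a vector of adjacent-pair parameters (hypothetical values), and is only an illustration of the structure, not of the quasi-least squares estimator itself.

```python
import numpy as np

def decaying_product_corr(alphas):
    """Correlation matrix with corr(y_j, y_k) = alpha_j * alpha_{j+1} * ... * alpha_{k-1}."""
    n = len(alphas) + 1                     # number of repeated measurements
    R = np.eye(n)
    for j in range(n):
        for k in range(j + 1, n):
            R[j, k] = R[k, j] = np.prod(alphas[j:k])
    return R

# Four time points, three adjacent-pair correlation parameters (hypothetical).
print(decaying_product_corr([0.7, 0.5, 0.9]))
```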
NASA Astrophysics Data System (ADS)
Li, Chao-Ying; Liu, Shi-Fei; Fu, Jin-Xian
2015-11-01
High-order perturbation formulas for a 3d9 ion in a rhombically elongated octahedron were applied to calculate the electron paramagnetic resonance (EPR) parameters (the g factors, gi, and the hyperfine structure constants Ai, i = x, y, z) of the rhombic Cu2+ center in CoNH4PO4.6H2O. In the calculations, the required crystal-field parameters are estimated from the superposition model, which enables correlation of the crystal-field parameters, and hence the EPR parameters, with the local structure of the rhombic Cu2+ center. Based on the calculations, the ligand octahedron (i.e. the [Cu(H2O)6]2+ cluster) is found to experience local bond length variations ΔZ (≈0.213 Å) and δr (≈0.132 Å) along the axial and perpendicular directions due to the Jahn-Teller effect. Theoretical EPR parameters based on the above local structure are in good agreement with the observed values; the results are discussed.
Streeter, Ian; Cheema, Umber
2011-10-07
Understanding the basal O2 and nutrient requirements of cells is paramount when culturing cells in 3D tissue models. Any scaffold design will need to take such parameters into consideration, especially as the addition of cells introduces gradients of consumption of such molecules from the surface to the core of scaffolds. We have cultured two cell types in 3D native collagen type I scaffolds, and measured the O2 tension at specific locations within the scaffold. By changing the density of cells, we have established O2 consumption gradients within these scaffolds and using mathematical modeling have derived rates of consumption for O2. For human dermal fibroblasts the average rate constant was 1.19 × 10⁻¹⁷ mol cell⁻¹ s⁻¹, and for human bone marrow derived stromal cells the average rate constant was 7.91 × 10⁻¹⁸ mol cell⁻¹ s⁻¹. These values are lower than previously published rates for similar cells cultured in 2D, but the values established in this current study are more representative of rates of consumption measured in vivo. These values will dictate 3D culture parameters, including maximum cell-seeding density and maximum size of the constructs, for long-term viability of tissue models.
NASA Astrophysics Data System (ADS)
Knapp, Julia L. A.; Cirpka, Olaf A.
2017-06-01
The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free models gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, a task hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach, in which we use a Markov-Chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
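The nesting idea can be sketched generically: an outer Metropolis random walk explores the nonlinear in-stream parameter while an inner linear solve fits the shape-free travel-time weights. A non-negative least-squares solve stands in for the paper's Gauss-Newton local step, and the breakthrough data and basis functions are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical breakthrough curve; not the field data of the study.
t = np.linspace(0, 10, 200)
obs = np.exp(-(t - 4) ** 2) + 0.3 * np.exp(-((t - 6) / 1.5) ** 2) + rng.normal(0, 0.01, t.size)

def basis(theta):
    """Columns = responses of discrete hyporheic travel-time bins; the nonlinear
    in-stream parameter theta (a spreading scale here) shapes every column."""
    centers = np.linspace(1, 8, 15)
    return np.exp(-((t[:, None] - centers[None, :]) / theta) ** 2)

def local_step(theta):
    """Inner fit of the shape-free travel-time weights (stand-in for Gauss-Newton)."""
    w, misfit = nnls(basis(theta), obs)
    return w, misfit

sigma = 0.01                       # assumed observation error
theta = 1.0
_, misfit = local_step(theta)
chain = []
for _ in range(2000):              # outer Metropolis random walk (global search)
    prop = abs(theta + rng.normal(0, 0.05))
    _, misfit_prop = local_step(prop)
    log_accept = (misfit ** 2 - misfit_prop ** 2) / (2 * sigma ** 2)
    if np.log(rng.random()) < log_accept:
        theta, misfit = prop, misfit_prop
    chain.append(theta)
print("posterior mean of the in-stream parameter:", np.mean(chain[500:]))
```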
Technical aspects of contrast-enhanced ultrasound (CEUS) examinations: tips and tricks.
Greis, C
2014-01-01
Ultrasound contrast agents have substantially extended the clinical value of ultrasound, allowing the assessment of blood flow and distribution in real-time down to microcapillary level. Selective imaging of contrast agent signals requires a contrast-specific imaging mode on the ultrasound scanner, allowing real-time separation of tissue and contrast agent signals. The creation of a contrast image requires a specific interaction between the insonated ultrasound wave and the contrast agent microbubbles, leading to persistent oscillation of the bubbles. Several technical and procedural parameters have a significant influence on the quality of CEUS images and should be controlled carefully to obtain good image quality and a reliable diagnosis. Achieving the proper balance between the respective parameters is a matter of technical knowledge and experience. Appropriate training and education should be mandatory for every investigator performing CEUS examinations.
Optical Riblet Sensor: Beam Parameter Requirements for the Probing Laser Source.
Tschentscher, Juliane; Hochheim, Sven; Brüning, Hauke; Brune, Kai; Voit, Kay-Michael; Imlau, Mirco
2016-03-30
Beam parameters of a probing laser source in an optical riblet sensor are studied by considering the high demands on the sensor's precision and reliability for the determination of deviations of the geometrical shape of a riblet. Mandatory requirements, such as minimum intensity and light polarization, are obtained by means of detailed inspection of the optical response of the riblet using ray and wave optics; the impact of wavelength is studied. Novel measures for analyzing the riblet shape without the necessity of a measurement with a reference sample are derived; reference values for an ideal riblet structure obtained with the optical riblet sensor are given. The application of a low-cost, frequency-doubled Nd:YVO₄ laser pointer sufficient to serve as a reliable laser source in an appropriate optical riblet sensor is discussed.
Research of human kidney thermal properties for the purpose of cryosurgery
NASA Astrophysics Data System (ADS)
Ponomarev, D. E.; Pushkarev, A. V.
2017-11-01
Calculation of heat transfer is required to correctly predict the results of cryosurgery, cryopreservation, etc. Among the important initial parameters are the thermophysical properties of biological tissues. In the present study, the values of the heat capacity, cryoscopic temperature and enthalpy of the phase transition of kidney samples in vitro were obtained by differential scanning calorimetry.
NASA Astrophysics Data System (ADS)
Russo, T. A.; Devineni, N.; Lall, U.
2015-12-01
Lasting success of the Green Revolution in Punjab, India relies on continued availability of local water resources. Supplying primarily rice and wheat for the rest of India, Punjab supports crop irrigation with a canal system and groundwater, which is vastly over-exploited. The detailed data required to physically model future impacts on water supplies and agricultural production are not readily available for this region; therefore, we use Bayesian methods to estimate hydrologic properties and irrigation requirements for an under-constrained mass balance model. Using measured values of historical precipitation, total canal water delivery, crop yield, and water table elevation, we present a method using a Markov chain Monte Carlo (MCMC) algorithm to solve for a distribution of values for each unknown parameter in a conceptual mass balance model. Due to heterogeneity across the state, and the resolution of input data, we estimate model parameters at the district scale using spatial pooling. The resulting model is used to predict the impact of precipitation change scenarios on groundwater availability under multiple cropping options. Predicted groundwater declines vary across the state, suggesting that crop selection and water management strategies should be determined at a local scale. This computational method can be applied in data-scarce regions across the world, where water resource management is required to resolve competition between food security and available resources in a changing climate.
Fundamentals of undervoltage breakdown through the Townsend mechanism
NASA Astrophysics Data System (ADS)
Cooley, James E.
The conditions under which an externally supplied pulse of electrons will induce breakdown in an undervoltaged, low-gain, DC discharge gap are experimentally and theoretically explored. The phenomenon is relevant to fundamental understanding of breakdown physics, to switching applications such as triggered spark gaps and discharge initiation in pulsed-plasma thrusters, and to gas-avalanche particle counters. A dimensionless theoretical description of the phenomenon is formulated and solved numerically. It is found that a significant fraction of the charge on the plates must be injected for breakdown to be achieved at low avalanche-ionization gain, when an electron undergoes fewer than approximately 10 ionizing collisions during one gap transit. It is also found that fewer injected electrons are required as the gain due to electron-impact ionization (alpha process) is increased, or as the sensitivity of the alpha process to electric field is enhanced by decreasing the reduced electric field (electric field divided by pressure, E/p). A predicted insensitivity to ion mobility implies that breakdown is determined during the first electron avalanche when space charge distortion is greatest. A dimensionless, theoretical study of the development of this avalanche reveals a critical value of the reduced electric field to be the value at the Paschen curve minimum divided by 1.6. Below this value, the net result of the electric field distortion is to increase ionization for subsequent avalanches, making undervoltage breakdown possible. Above this value, ionization for subsequent avalanches will be suppressed and undervoltage breakdown is not possible. Using an experimental apparatus in which ultraviolet laser pulses are directed onto a photo-emissive cathode of a parallel-plate discharge gap, it is found that undervoltage breakdown can occur through a Townsend-like mechanism through the buildup of successively larger avalanche generations. The minimum number of injected electrons required to achieve breakdown is measured in argon at pd values of 3-10 Torr-m. The required electron pulse magnitude was found to scale inversely with pressure and voltage in this parameter range. When higher-power infrared laser pulses were used to heat the cathode surface, a faster, streamer-like breakdown mechanism was occasionally observed. As an example application, an investigation into the requirements for initiating discharges in Gas-fed Pulsed Plasma Thrusters (GFPPTs) is conducted. Theoretical investigations based on order-of-magnitude characterizations of previous GFPPT designs reveal that high-conductivity arc discharges are required for critically-damped matching of circuit components, and that relatively fast streamer breakdown is preferable to minimize delay between triggering and current sheet formation. The faster breakdown mechanism observed in the experiments demonstrates that such a discharge process can occur. However, in the parameter space occupied by most thrusters, achieving the phenomenon by way of a space charge distortion caused purely by an electron pulse should not be possible. Either a transient change in the distribution of gas density, through ablation or desorption, or a thruster design that occupies a different parameter space, such as one that uses higher mass bits, higher voltages, or smaller electrode spacing, is required for undervoltage breakdown to occur.
Earth resources data acquisition sensor study
NASA Technical Reports Server (NTRS)
Grohse, E. W.
1975-01-01
The minimum data collection and data processing requirements are investigated for the development of water monitoring systems, which disregard redundant and irrelevant data and process only those data predictive of the onset of significant pollution events. Two approaches are immediately suggested: (1) adaptation of a presently available ambient air monitoring system developed by TVA, and (2) consideration of an air, water, and radiological monitoring system developed by the Georgia Tech Experiment Station. In order to apply monitoring systems, threshold values and maximum allowable rates of change of critical parameters such as dissolved oxygen and temperature are required.
Quantum Metrology Assisted by Abstention
NASA Astrophysics Data System (ADS)
Gendra, B.; Ronco-Bonvehi, E.; Calsamiglia, J.; Muñoz-Tapia, R.; Bagan, E.
2013-03-01
The main goal of quantum metrology is to obtain accurate values of physical parameters using quantum probes. In this context, we show that abstention, i.e., the possibility of getting an inconclusive answer at readout, can drastically improve the measurement precision and even lead to a change in its asymptotic behavior, from the shot-noise to the Heisenberg scaling. We focus on phase estimation and quantify the required amount of abstention for a given precision. We also develop analytical tools to obtain the asymptotic behavior of the precision and required rate of abstention for arbitrary pure states.
A computer program for estimation from incomplete multinomial data
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in FORTRAN 4 language for the Control Data CYBER 170 series digital computer system with network operating system (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 seconds to 92 seconds on CYBER 175 depending on the value of the prior parameter.
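The kind of computation such a program performs can be sketched in a few lines (the original is FORTRAN; the counts and the coarsened category below are hypothetical). A standard EM treatment of partially classified multinomial counts stands in for the program's estimator; adding Dirichlet pseudo-counts in the M-step would give the Bayesian variant.

```python
import numpy as np

# Hypothetical data: three cells; `complete` counts are fully classified, while
# `partial_count` observations are known only to fall in cells 0 or 1.
complete = np.array([30.0, 20.0, 50.0])
partial_cells = np.array([0, 1])
partial_count = 25.0

p = np.full(3, 1.0 / 3.0)                  # starting value for the cell probabilities
for _ in range(200):                       # EM iterations
    # E-step: split the partially classified count in proportion to the current p.
    share = p[partial_cells] / p[partial_cells].sum()
    filled = complete.copy()
    filled[partial_cells] += partial_count * share
    # M-step: maximum likelihood update of the cell probabilities.
    p = filled / filled.sum()
print("estimated cell probabilities:", p)
```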
Design and Calibration of the X-33 Flush Airdata Sensing (FADS) System
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Cobleigh, Brent R.; Haering, Edward A.
1998-01-01
This paper presents the design of the X-33 Flush Airdata Sensing (FADS) system. The X-33 FADS uses a matrix of pressure orifices on the vehicle nose to estimate airdata parameters. The system is designed with dual-redundant measurement hardware, which produces two independent measurement paths. Airdata parameters that correspond to the measurement path with the minimum fit error are selected as the output values. This method enables a single sensor failure to occur with minimal degradation of system performance. The paper shows the X-33 FADS architecture, derives the estimating algorithms, and demonstrates a mathematical analysis of the FADS system stability. Preliminary aerodynamic calibrations are also presented here. The calibration parameters, the position error coefficient (epsilon), and flow correction terms for the angle of attack (delta alpha) and angle of sideslip (delta beta) are derived from wind tunnel data. Statistical accuracy of the calibration is evaluated by comparing the wind tunnel reference conditions to the estimated airdata parameters. This comparison is accomplished by applying the calibrated FADS algorithm to the sensed wind tunnel pressures. When the resulting accuracy estimates are compared to the accuracy requirements for the X-33 airdata, the FADS system meets these requirements.
NASA Astrophysics Data System (ADS)
Dethlefsen, Frank; Tilmann Pfeiffer, Wolf; Schäfer, Dirk
2016-04-01
Numerical simulations of hydraulic, thermal, geomechanical, or geochemical (THMC) processes in the subsurface have been conducted for decades. Often, such simulations start from a parameter set that is as realistic as possible. A base scenario is then calibrated on field observations, and finally scenario simulations can be performed, for instance to forecast the system behavior after varying the input data. In the context of subsurface energy and mass storage, however, such model calibrations based on field data are often not available, as these storage operations have not yet been carried out. Consequently, the numerical models rely solely on the initially selected parameter set, and uncertainties arising from a lack of parameter values or of process understanding may be neither perceivable nor quantifiable. Conducting THMC simulations in the context of energy and mass storage therefore deserves a particular review of the model parameterization and its input data, and such a review hardly exists to the required extent. Variability, or aleatory uncertainty, exists for geoscientific parameter values in general; parameters for which numerous data points are available, such as aquifer permeabilities, may be described statistically and thus exhibit statistical uncertainty. In this case, sensitivity analyses can be conducted to quantify the uncertainty in the simulation that results from varying such a parameter. For other parameters, the lack of data quantity and quality implies a fundamental change in the processes that occur when the parameter value is varied in numerical scenario simulations. As an example of such a scenario uncertainty, varying the capillary entry pressure, one of the multiphase flow parameters, can either allow or completely inhibit the penetration of an aquitard by gas. As a last example, the uncertainty of cap-rock fault permeabilities, and consequently of potential leakage rates of stored gases into shallow compartments, is regarded as recognized ignorance by the authors of this study, as no realistic approach exists to determine this parameter and the values used are best guesses only. In addition to these aleatory uncertainties, an equivalent classification is possible for epistemic uncertainties, which describe the degree to which processes are understood, such as the geochemical and hydraulic effects following potential gas intrusions from deeper reservoirs into shallow aquifers. As an outcome of this grouping of uncertainties, prediction errors of scenario simulations can be calculated by sensitivity analyses if the uncertainties are identified as statistical. However, if scenario uncertainties exist, or if recognized ignorance has to be attributed to a parameter or process in question, the outcome of the simulations depends mainly on the modeler's decisions in choosing parameter values or in interpreting which processes occur. In that case, the informative value of numerical simulations is limited by ambiguous simulation results, which cannot be refined without improving the geoscientific database through longer-term laboratory or field studies, so that the effects of subsurface use can be predicted realistically. This discussion, amended by a compilation of available geoscientific data for parameterizing such simulations, is presented in this study.
A mathematical model of electrolyte and fluid transport across corneal endothelium.
Fischbarg, J; Diecke, F P J
2005-01-01
To predict the behavior of a transporting epithelium by intuitive means can be complex and frustrating. As the number of parameters to be considered increases beyond a few, the task can be termed impossible. The alternative is to model epithelial behavior by mathematical means. For that to be feasible, it has been presumed that a large amount of experimental information is required, so as to be able to use known values for the majority of kinetic parameters. However, in the present case, we are modeling corneal endothelial behavior beginning with experimental values for only five of eleven parameters. The remaining parameter values are calculated assuming cellular steady state and using algebraic software. With that as base, as in preceding treatments but with a distribution of channels/transporters suited to the endothelium, temporal cell and tissue behavior are computed by a program written in Basic that monitors changes in chemical and electrical driving forces across cell membranes and the paracellular pathway. We find that the program reproduces quite well the behaviors experimentally observed for the translayer electrical potential difference and rate of fluid transport, (a) in the steady state, (b) after perturbations by changes in ambient conditions HCO3-, Na+, and Cl- concentrations), and (c) after challenge by inhibitors (ouabain, DIDS, Na+- and Cl(-)-channel inhibitors). In addition, we have used the program to compare predictions of translayer fluid transport by two competing theories, electro-osmosis and local osmosis. Only predictions using electro-osmosis fit all the experimental data.
VizieR Online Data Catalog: Opacities from the Opacity Project (Seaton+, 1995)
NASA Astrophysics Data System (ADS)
Seaton, M. J.; Yan, Y.; Mihalas, D.; Pradhan, A. K.
1997-08-01
1 CODES
1.1 Code rop.for — This code reads opacity files written in standard OP format. Its main purpose is to provide documentation on the contents of the files. This code, like the other codes provided, prompts for the name of the file (or files) to be read. The file names read in response to the prompt may have up to 128 characters.
1.2 Code opfit.for — This code reads opacity files in standard OP format, and provides for interpolation of opacities to any required values of temperature and mass-density. The method used is described in OPF. The code prompts for the name of a file giving all required control parameters. As an example, the file opfit.dat is provided (users will need to change directory names and file names). The use of opfit.for is illustrated using opfit.dat. Most users will probably want to adapt opfit.for for use as a subroutine in other codes. Timings for DEC 7000 ALPHA: 0.3 sec for data read and initialisations; then 0.0007 sec for each temperature-density point. Users who like OPAL formats should note that opfit.for has a facility to produce files of OP data in OPAL-type formats.
1.3 Code ixz.for — This code provides for interpolations to any required values of X and Z. See IXZ. It prompts for the name of a file giving all required control parameters. An example of such a file is provided, ixz.dat (the user will need to change directory and file names). The output files have names s92INT.'nnn'. The user specifies the first value of nnn, and the number of files to be produced.
2 DATA FILES
2.1 Data files for solar metal-mix — Data for solar metal-mix s92 as defined in SYMP. These files are from version 2 runs of December 1994 (see IXZ for details on Version 2). There are 213 files with names s92.'nnn', 'nnn' = 201 to 413. Each file occupies 83762 bytes. The file s92.version2 gives values of X (hydrogen mass-fraction) and Z (metals mass-fraction) for each value of 'nnn'. The user can get s92.version2, select the values of 'nnn' required, then get the required files s92.'nnn'. The user can see the file in ftp, displayed on the screen, by typing "get s92.version2 -". The files s92.'nnn' can be used with opfit.for to obtain opacities for any required value of temperature and mass density. Files for other metal-mixtures will be added in due course. Send requests to mjs@star.ucl.ac.uk.
2.2 Files for interpolation in X and Z — The data files have names s92xz.'mmm', where 'mmm' = 001 to 096. They differ from the standard OP files (such as s92.'nnn', section 2.1 above) in that they contain information giving derivatives of opacities with respect to X and Z. Each file s92xz.'mmm' occupies 148241 bytes. The interpolations to any required values of X and Z are made using ixz.for. Timings: on DEC 7000 ALPHA, 2.16 sec for each new-mixture file. For interpolations to some specified values of X and Z, one requires just 4 files s92xz.'mmm'. Most users will not require the complete set of files s92xz.'mmm'. The file s92xz.index includes a table (starting on line 3) giving values, for each 'mmm' file, of x, y, z (abundances by number-fractions) and X, Y, Z (abundances by mass-fractions). Users are advised to get the file s92xz.index, select the values of 'mmm' for the files required, then get those files. The files produced by ixz.for are in standard OP format and can be used with opfit.for to obtain opacities for any required values of temperature and mass density.
3 RECOMMENDED PROCEDURE FOR USE OF OPACITY FILES — (1) Get the file s92.version2. (2) If the values of X and Z you require are available in the files s92.'nnn', then get those files. (3) If not, get the file s92xz.index. (4) Select from s92xz.index the values of 'mmm' which cover the range of X and Z in which you are interested. Get those files and use ixz.for to generate files for your exact required values of X and Z. (5) Note that the exact abundance mixtures used are specified in each file (see rop.for). Also, each run of opfit.for produces a table of abundances. (6) If you want a metal-mix different from that of s92, contact mjs@star.ucl.ac.uk.
4 FUTURE DEVELOPMENTS — (1) Data for the calculation of radiative forces are provided as the CDS catalog
Analysis and design of a genetic circuit for dynamic metabolic engineering.
Anesiadis, Nikolaos; Kobayashi, Hideki; Cluett, William R; Mahadevan, Radhakrishnan
2013-08-16
Recent advances in synthetic biology have equipped us with new tools for bioprocess optimization at the genetic level. Previously, we have presented an integrated in silico design for the dynamic control of gene expression based on a density-sensing unit and a genetic toggle switch. In the present paper, analysis of a serine-producing Escherichia coli mutant shows that an instantaneous ON-OFF switch leads to a maximum theoretical productivity improvement of 29.6% compared to the mutant. To further the design, global sensitivity analysis is applied here to a mathematical model of serine production in E. coli coupled with a genetic circuit. The model of the quorum sensing and the toggle switch involves 13 parameters of which 3 are identified as having a significant effect on serine concentration. Simulations conducted in this reduced parameter space further identified the optimal ranges for these 3 key parameters to achieve productivity values close to the maximum theoretical values. This analysis can now be used to guide the experimental implementation of a dynamic metabolic engineering strategy and reduce the time required to design the genetic circuit components.
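A crude global screening of this kind can be sketched as follows; a simple rank-correlation measure over Monte Carlo samples stands in for the paper's global sensitivity analysis, and the 13-parameter response surface is a placeholder, not the serine production model.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_params, n_runs = 13, 2000
X = rng.random((n_runs, n_params))          # all 13 parameters scaled to [0, 1]

def serine_response(x):
    """Placeholder response surface: parameters 2, 5 and 9 are influential by construction."""
    return 3.0 * x[2] + 2.0 * x[5] * x[9] + 0.1 * x.sum()

y = np.apply_along_axis(serine_response, 1, X)

# Global screening: rank correlation between each parameter and the output.
scores = [abs(spearmanr(X[:, j], y)[0]) for j in range(n_params)]
print("most influential parameters:", np.argsort(scores)[::-1][:3])
```

Once the few influential parameters are identified, simulations can be restricted to that reduced parameter space, which is the workflow the abstract describes.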
NASA Astrophysics Data System (ADS)
Deepak, Doreswamy; Beedu, Rajendra
2017-08-01
Al-6061 is one of the most widely used materials in the manufacturing of products. The major qualities of aluminium are reasonably good strength, corrosion resistance and thermal conductivity, which have made it a suitable material for various applications. While manufacturing these products, companies strive to reduce production cost by increasing the Material Removal Rate (MRR); meanwhile, the surface quality needs to be maintained at an acceptable level. This paper aims at striking a compromise between the high MRR and low surface roughness requirements by applying Grey Relational Analysis (GRA). This article presents the selection of controllable parameters such as longitudinal feed, cutting speed and depth of cut to arrive at optimum values of MRR and surface roughness (Ra). The process parameters for the experiments were selected based on Taguchi's L9 array with two replications. Grey relational analysis, being well suited to multi-response optimization, is adopted here. The results show that feed rate is the most significant factor influencing MRR and surface finish.
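The grey relational computation itself is short; the sketch below uses hypothetical L9 responses (not the experimental data) and equal weights for the two responses.

```python
import numpy as np

# Hypothetical L9 responses: MRR (larger-the-better) and Ra (smaller-the-better).
mrr = np.array([120, 150, 180, 160, 200, 140, 170, 210, 190], dtype=float)
ra  = np.array([1.8, 1.5, 2.2, 1.2, 2.5, 1.1, 1.6, 2.8, 2.0])

def normalize(x, larger_is_better):
    return (x - x.min()) / (x.max() - x.min()) if larger_is_better \
        else (x.max() - x) / (x.max() - x.min())

def grey_relational_coefficient(z, zeta=0.5):
    delta = np.abs(1.0 - z)                 # distance from the ideal normalized value 1
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grc = np.column_stack([
    grey_relational_coefficient(normalize(mrr, True)),
    grey_relational_coefficient(normalize(ra, False)),
])
grade = grc.mean(axis=1)                    # grey relational grade, equal weights
print("best trial (1-indexed):", grade.argmax() + 1)
```

The trial with the highest grade gives the parameter combination that best balances the two responses; factor effects are then read off by averaging grades over the levels of each factor.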
Modeling multilayer x-ray reflectivity using genetic algorithms
NASA Astrophysics Data System (ADS)
Sánchez del Río, M.; Pareschi, G.; Michetschläger, C.
2000-06-01
The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thickness, density, roughness). Non-linear fitting of experimental data with simulations requires the use of initial values sufficiently close to the optimum value. This is a difficult task when the topology of the space of the variables is highly structured. We apply global optimization methods to fit multilayer reflectivity. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual. The population is a collection of individuals. Each generation is built from the parent generation by applying some operators (selection, crossover, mutation, etc.) on the members of the parent generation. The pressure of selection drives the population to include "good" individuals. For large number of generations, the best individuals will approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C and W/Si multilayers using genetic algorithms are presented. This method can also be applied to design multilayers optimized for a target application.
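A bare-bones genetic algorithm of the kind described can be sketched as follows; the reflectivity "model" is a cheap stand-in for the real multilayer simulation, and the parameter set, population size and operator settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(0.2, 2.0, 200)                     # grazing angles (arbitrary units)

def simulate_reflectivity(params, theta):
    """Stand-in for the multilayer reflectivity model (e.g. thickness, roughness)."""
    d, sigma = params
    return np.exp(-theta / d) * (1 + 0.3 * np.cos(2 * np.pi * theta * sigma))

measured = simulate_reflectivity([0.7, 3.0], theta)    # synthetic "experimental" curve

def fitness(individual):
    return -np.mean((simulate_reflectivity(individual, theta) - measured) ** 2)

bounds = np.array([[0.1, 2.0], [0.5, 6.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))    # initial population
for generation in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][-20:]                    # selection: keep the best half
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        child = np.where(rng.random(2) < 0.5, a, b)            # uniform crossover
        child += rng.normal(0, 0.02, size=2)                   # mutation
        kids.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([parents, kids])
best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best-fit parameters:", best)
```

Because no starting guess is needed, such a search copes with the highly structured parameter space mentioned above, at the cost of many forward simulations per generation.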
Nonlinear system analysis in bipolar integrated circuits
NASA Astrophysics Data System (ADS)
Fang, T. F.; Whalen, J. J.
1980-01-01
Since analog bipolar integrated circuits (IC's) have become important components in modern communication systems, the study of Radio Frequency Interference (RFI) effects in bipolar IC amplifiers is an important subject for electromagnetic compatibility (EMC) engineering. The investigation focused on using the nonlinear circuit analysis program (NCAP) to predict RF demodulation effects in broadband bipolar IC amplifiers. The audio frequency (AF) voltage at the IC amplifier output terminal caused by an amplitude modulated (AM) RF signal at the IC amplifier input terminal was calculated and compared to measured values. Two broadband IC amplifiers were investigated: (1) a cascode circuit using a CA3026 dual differential pair; (2) a unity gain voltage follower circuit using a μA741 operational amplifier (op amp). Before using NCAP for RFI analysis, the model parameters for each bipolar junction transistor (BJT) in the integrated circuit were determined. Probe measurement techniques, manufacturer's data, and other researchers' data were used to obtain the required NCAP BJT model parameter values. An important contribution of this effort is a complete set of NCAP BJT model parameters for most of the transistor types used in linear IC's.
Bayesian networks in overlay recipe optimization
NASA Astrophysics Data System (ADS)
Binns, Lewis A.; Reynolds, Greg; Rigden, Timothy C.; Watkins, Stephen; Soroka, Andrew
2005-05-01
Currently, overlay measurements are characterized by "recipe", which defines both physical parameters such as focus, illumination et cetera, and also the software parameters such as algorithm to be used and regions of interest. Setting up these recipes requires both engineering time and wafer availability on an overlay tool, so reducing these requirements will result in higher tool productivity. One of the significant challenges to automating this process is that the parameters are highly and complexly correlated. At the same time, a high level of traceability and transparency is required in the recipe creation process, so a technique that maintains its decisions in terms of well defined physical parameters is desirable. Running time should be short, given the system (automatic recipe creation) is being implemented to reduce overheads. Finally, a failure of the system to determine acceptable parameters should be obvious, so a certainty metric is also desirable. The complex, nonlinear interactions make solution by an expert system difficult at best, especially in the verification of the resulting decision network. The transparency requirements tend to preclude classical neural networks and similar techniques. Genetic algorithms and other "global minimization" techniques require too much computational power (given system footprint and cost requirements). A Bayesian network, however, provides a solution to these requirements. Such a network, with appropriate priors, can be used during recipe creation / optimization not just to select a good set of parameters, but also to guide the direction of search, by evaluating the network state while only incomplete information is available. As a Bayesian network maintains an estimate of the probability distribution of nodal values, a maximum-entropy approach can be utilized to obtain a working recipe in a minimum or near-minimum number of steps. In this paper we discuss the potential use of a Bayesian network in such a capacity, reducing the amount of engineering intervention. We discuss the benefits of this approach, especially improved repeatability and traceability of the learning process, and quantification of uncertainty in decisions made. We also consider the problems associated with this approach, especially in detailed construction of network topology, validation of the Bayesian network and the recipes it generates, and issues arising from the integration of a Bayesian network with a complex multithreaded application; these primarily relate to maintaining Bayesian network and system architecture integrity.
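A toy version of the inference step can make the idea concrete; the two-node recipe fragment, the prior probabilities and the conditional probability table below are invented for illustration and are not the production recipe model.

```python
import itertools

# Two recipe parameters with priors, plus an observed measurement-quality node.
p_focus = {"nominal": 0.6, "offset": 0.4}
p_algorithm = {"A": 0.5, "B": 0.5}
# Hypothetical CPT: P(quality = "good" | focus, algorithm).
p_good = {("nominal", "A"): 0.95, ("nominal", "B"): 0.80,
          ("offset", "A"): 0.40, ("offset", "B"): 0.20}

def posterior_focus(observed_quality="good"):
    """Posterior over the focus setting by brute-force enumeration (Bayes' rule)."""
    joint = {}
    for f, a in itertools.product(p_focus, p_algorithm):
        like = p_good[(f, a)] if observed_quality == "good" else 1 - p_good[(f, a)]
        joint[f] = joint.get(f, 0.0) + p_focus[f] * p_algorithm[a] * like
    total = sum(joint.values())
    return {f: v / total for f, v in joint.items()}

print(posterior_focus("good"))
```

In the recipe-creation setting, the same kind of posterior, evaluated while only some measurements are available, is what guides which parameter to probe next and provides the certainty metric mentioned above.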
“Stringy” coherent states inspired by generalized uncertainty principle
NASA Astrophysics Data System (ADS)
Ghosh, Subir; Roy, Pinaki
2012-05-01
Coherent States with Fractional Revival property, that explicitly satisfy the Generalized Uncertainty Principle (GUP), have been constructed in the context of Generalized Harmonic Oscillator. The existence of such states is essential in motivating the GUP based phenomenological results present in the literature which otherwise would be of purely academic interest. The effective phase space is Non-Canonical (or Non-Commutative in popular terminology). Our results have a smooth commutative limit, equivalent to Heisenberg Uncertainty Principle. The Fractional Revival time analysis yields an independent bound on the GUP parameter. Using this and similar bounds obtained here, we derive the largest possible value of the (GUP induced) minimum length scale. Mandel parameter analysis shows that the statistics is Sub-Poissonian. Correspondence Principle is deformed in an interesting way. Our computational scheme is very simple as it requires only first order corrected energy values and undeformed basis states.
Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity.
Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin
2016-01-21
An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression of the capacitance value is derived by solving a Laplace equation with Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and each contribution to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental result shows that the model (whose electrode-gap position is 10 mm from the electrode center) realizes a high sensitivity: 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ± 40°). This finding offers plenty of opportunities for various measurement requirements in addition to achieving an optimized structure in practical design.
Hachiya, Mizuki; Murata, Shin; Otao, Hiroshi; Ihara, Takehiko; Mizota, Katsuhiko; Asami, Toyoko
2015-01-01
[Purpose] This study aimed to verify the usefulness of a 50-m round walking test developed as an assessment method for walking ability in the elderly. [Subjects] The subjects were 166 elderly individuals requiring long-term care (mean age, 80.5 years). [Methods] In order to evaluate the factors that had affected falls in the subjects in the previous year, we performed the 50-m round walking test, functional reach test, one-leg standing test, and 5-m walking test and measured grip strength and quadriceps strength. [Results] The 50-m round walking test was selected as a variable indicating fall risk based on the results of multiple logistic regression analysis. The cutoff value of the 50-m round walking test for determining fall risk was 0.66 m/sec. The area under the receiver operating characteristic curve was 0.64. The sensitivity of the cutoff value was 65.7%, the specificity was 63.6%, the positive predictive value was 55.0%, the negative predictive value was 73.3%, and the accuracy was 64.5%. [Conclusion] These results suggest that the 50-m round walking test is a potentially useful parameter for the determination of fall risk in the elderly requiring long-term care.
The Maximum Likelihood Solution for Inclination-only Data
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2006-12-01
The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function towards systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with a relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and its mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
Overcoming equifinality: Leveraging long time series for stream metabolism estimation
Appling, Alison; Hall, Robert O.; Yackulic, Charles B.; Arroita, Maite
2018-01-01
The foundational ecosystem processes of gross primary production (GPP) and ecosystem respiration (ER) cannot be measured directly but can be modeled in aquatic ecosystems from subdaily patterns of oxygen (O2) concentrations. Because rivers and streams constantly exchange O2 with the atmosphere, models must either use empirical estimates of the gas exchange rate coefficient (K600) or solve for all three parameters (GPP, ER, and K600) simultaneously. Empirical measurements of K600 require substantial field work and can still be inaccurate. Three-parameter models have suffered from equifinality, where good fits to O2 data are achieved by many different parameter values, some unrealistic. We developed a new three-parameter, multiday model that ensures similar values for K600 among days with similar physical conditions (e.g., discharge). Our new model overcomes the equifinality problem by (1) flexibly relating K600 to discharge while permitting moderate daily deviations and (2) avoiding the oft-violated assumption that residuals in O2 predictions are uncorrelated. We implemented this hierarchical state-space model and several competitor models in an open-source R package, streamMetabolizer. We then tested the models against both simulated and field data. Our new model reduces error by as much as 70% in daily estimates of K600, GPP, and ER. Further, accuracy benefits of multiday data sets require as few as 3 days of data. This approach facilitates more accurate metabolism estimates for more streams and days, enabling researchers to better quantify carbon fluxes, compare streams by their metabolic regimes, and investigate controls on aquatic activity.
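The underlying process model can be sketched as a forward diel simulation of dissolved oxygen given GPP, ER, and K600; the sketch below is a generic illustration of that kind of model, not streamMetabolizer code, and it uses K600 directly as the O2 exchange coefficient and idealized light and parameter values for simplicity.

```python
import numpy as np

dt = 10.0 / (60 * 24)                      # 10-minute time step, in days
t = np.arange(0.0, 1.0, dt)
light = np.clip(np.sin(np.pi * (t - 0.25) / 0.5), 0.0, None)   # idealized PAR curve

GPP, ER, K600 = 8.0, -7.0, 15.0            # g O2 m^-2 d^-1, g O2 m^-2 d^-1, d^-1
depth, o2_sat = 0.4, 9.0                   # m, mg L^-1

o2 = np.empty(t.size)
o2[0] = 8.5                                # mg L^-1 at midnight
for i in range(t.size - 1):
    production = GPP * light[i] / (light.sum() * dt) / depth   # GPP spread over light hours
    respiration = ER / depth
    reaeration = K600 * (o2_sat - o2[i])   # K600 used directly as the O2 exchange rate
    o2[i + 1] = o2[i] + dt * (production + respiration + reaeration)
print("simulated diel O2 range (mg/L):", o2.min(), o2.max())
```

Fitting inverts this forward model against observed O2; the hierarchical trick described above is to let K600 vary only moderately around a discharge-dependent mean across days, which is what breaks the equifinality among GPP, ER, and K600.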
NASA Astrophysics Data System (ADS)
Mondal, Santanu; Chakrabarti, Sandip K.; Nagarkoti, Shreeram; Arévalo, Patricia
2017-11-01
In a two-component advective flow around a compact object, a high-viscosity Keplerian disk is flanked by a low angular momentum and low-viscosity flow that forms a centrifugal, pressure-supported shock wave close to the black hole. The post-shock region that behaves like a Compton cloud becomes progressively smaller during the outburst as the spectra change from the hard state (HS) to the soft state (SS), in order to satisfy the Rankine-Hugoniot relation in the presence of cooling. The resonance oscillation of the shock wave that causes low-frequency quasi-periodic oscillations (QPOs) also allows us to obtain the shock location from each observed QPO frequency. Applying the theory of transonic flow, along with Compton cooling and viscosity, we obtain the viscosity parameter α_SK required for the shock to form at those places in the low-Keplerian component. When we compare the evolution of α_SK for each outburst, we arrive at a major conclusion: in each source, the advective flow component typically requires a very similar value of α_SK when transiting from one spectral state to another (e.g., from HS to SS through intermediate states, and the other way around in the declining phase). Most importantly, these α_SK values in the low angular momentum advective component are fully self-consistent in the sense that they remain below the critical value α_cr required to form a Keplerian disk. For a further consistency check, we compute the α_K of the Keplerian component, and find that in each of the objects, α_SK < α_cr < α_K.
Value-Based Caching in Information-Centric Wireless Body Area Networks
Al-Turjman, Fadi M.; Imran, Muhammad; Vasilakos, Athanasios V.
2017-01-01
We propose a resilient cache replacement approach based on a Value of sensed Information (VoI) policy. To resolve and fetch content when the origin is not available due to isolated in-network nodes (fragmentation) and harsh operational conditions, we exploit a content caching approach. Our approach depends on four functional parameters in sensory Wireless Body Area Networks (WBANs). These four parameters are: age of data based on periodic request, popularity of on-demand requests, communication interference cost, and the duration for which the sensor node is required to operate in active mode to capture the sensed readings. These parameters are considered together to assign a value to the cached data to retain the most valuable information in the cache for prolonged time periods. The higher the value, the longer the duration for which the data will be retained in the cache. This caching strategy provides significant availability for most valuable and difficult to retrieve data in the WBANs. Extensive simulations are performed to compare the proposed scheme against other significant caching schemes in the literature while varying critical aspects in WBANs (e.g., data popularity, cache size, publisher load, connectivity-degree, and severe probabilities of node failures). These simulation results indicate that the proposed VoI-based approach is a valid tool for the retrieval of cached content in disruptive and challenging scenarios, such as the one experienced in WBANs, since it allows the retrieval of content for a long period even while experiencing severe in-network node failures.
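A value-driven replacement policy of this kind can be sketched in a few lines; the scoring form, the weights, and the cache entries below are illustrative assumptions, not the paper's exact formulation.

```python
def voi_score(age, popularity, interference, active_time,
              weights=(0.4, 0.3, 0.2, 0.1)):
    """Combine the four parameters into a single cache value (inputs normalized)."""
    w_age, w_pop, w_intf, w_act = weights
    freshness = 1.0 / (1.0 + age)          # periodic readings lose value as they age
    return (w_age * freshness + w_pop * popularity
            + w_intf * interference        # hard-to-refetch content is worth keeping
            + w_act * active_time)         # long acquisition time -> keep the reading

def evict(cache, capacity):
    """Drop the lowest-value entries until the cache fits its capacity."""
    while len(cache) > capacity:
        victim = min(cache, key=lambda name: voi_score(**cache[name]))
        del cache[victim]
    return cache

cache = {
    "ecg":  dict(age=0.1, popularity=0.9, interference=0.7, active_time=0.4),
    "temp": dict(age=5.0, popularity=0.2, interference=0.1, active_time=0.1),
    "spo2": dict(age=0.5, popularity=0.6, interference=0.5, active_time=0.3),
}
print(sorted(evict(cache, capacity=2)))
```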
Spatial interpolation of monthly mean air temperature data for Latvia
NASA Astrophysics Data System (ADS)
Aniskevich, Svetlana
2016-04-01
Temperature data with high spatial resolution are essential for an appropriate and high-quality analysis of local characteristics. The surface observation station network in Latvia currently consists of 22 stations recording daily air temperature; thus, in order to analyze very specific and local features in the spatial distribution of temperature values across the whole of Latvia, a high quality spatial interpolation method is required. Until now, inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This method made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, with auxiliary parameters such as 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, the biggest lakes and rivers, and population density. Based on a complex analysis of the situation, mean elevation and continentality were chosen as the most appropriate of these parameters. In order to validate the interpolation results, several statistical indicators of the differences between predicted and actually observed values were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.
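Kriging with external drift amounts to solving a small linear system per prediction point; the sketch below solves that system directly with elevation as the single drift variable, using an assumed exponential covariance model and invented station data rather than the Latvian network.

```python
import numpy as np

def exp_cov(h, sill=1.0, range_km=80.0):
    """Exponential covariance model (hypothetical sill and range)."""
    return sill * np.exp(-h / range_km)

def kriging_external_drift(xy, temps, elev, xy0, elev0):
    """Predict temperature at xy0 from station temps, with elevation as external drift."""
    n = len(temps)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = exp_cov(d)
    F = np.column_stack([np.ones(n), elev])        # intercept + drift variable(s)
    A = np.block([[C, F], [F.T, np.zeros((2, 2))]])
    c0 = exp_cov(np.linalg.norm(xy - xy0, axis=1))
    b = np.concatenate([c0, [1.0, elev0]])
    weights = np.linalg.solve(A, b)[:n]            # last entries are Lagrange multipliers
    return weights @ temps

# Hypothetical stations: coordinates (km), monthly mean temperature (deg C), elevation (m).
xy = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 80.0], [90.0, 60.0]])
temps = np.array([-3.2, -2.8, -4.1, -3.6])
elev = np.array([10.0, 40.0, 120.0, 90.0])
print(kriging_external_drift(xy, temps, elev, np.array([40.0, 40.0]), 60.0))
```

Adding continentality or distance-to-sea would simply add further columns to the drift matrix, which is how the additional auxiliary parameters enter the interpolation.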
Evaluation of weather-based rice yield models in India
NASA Astrophysics Data System (ADS)
Sudharsan, D.; Adinarayana, J.; Reddy, D. Raji; Sreenivas, G.; Ninomiya, S.; Hirafuji, M.; Kiura, T.; Tanaka, K.; Desai, U. B.; Merchant, S. N.
2013-01-01
The objective of this study was to compare two different rice simulation models—standalone (Decision Support System for Agrotechnology Transfer [DSSAT]) and web based (SImulation Model for RIce-Weather relations [SIMRIW])—with agrometeorological data and agronomic parameters for estimation of rice crop production in the southern semi-arid tropics of India. Studies were carried out on the BPT5204 rice variety to evaluate the two crop simulation models. Long-term experiments were conducted in a research farm of Acharya N G Ranga Agricultural University (ANGRAU), Hyderabad, India. Initially, the results were obtained using 4 years (1994-1997) of data with weather parameters from a local weather station to evaluate DSSAT simulated results against observed values. Linear regression models used for this purpose showed a close relationship between DSSAT and observed yield. Subsequently, yield comparisons were also carried out with SIMRIW and DSSAT, and validated with actual observed values. As the correlation coefficients of the SIMRIW simulations were within acceptable limits, further rice experiments in the monsoon (Kharif) and post-monsoon (Rabi) agricultural seasons (2009, 2010 and 2011) were carried out with a location-specific distributed sensor network system. These proximal systems help to simulate dry weight, leaf area index and potential yield with the Java-based SIMRIW on a daily/weekly/monthly/seasonal basis. These dynamic parameters are useful to the farming community for necessary decision making in a ubiquitous manner. However, SIMRIW requires fine tuning for better results/decision making.
Otani, Kyoko; Nakazono, Akemi; Salgo, Ivan S; Lang, Roberto M; Takeuchi, Masaaki
2016-10-01
Echocardiographic determination of left heart chamber volumetric parameters by using manual tracings during multiple beats is tedious in atrial fibrillation (AF). The aim of this study was to determine the usefulness of fully automated left chamber quantification software with single-beat three-dimensional transthoracic echocardiographic data sets in patients with AF. Single-beat full-volume three-dimensional transthoracic echocardiographic data sets were prospectively acquired during consecutive multiple cardiac beats (≥10 beats) in 88 patients with AF. In protocol 1, left ventricular volumes, left ventricular ejection fraction, and maximal left atrial volume were validated using automated quantification against the manual tracing method in identical beats in 10 patients. In protocol 2, automated quantification-derived averaged values from multiple beats were compared with the corresponding values obtained from the indexed beat in all patients. Excellent correlations of left chamber parameters between automated quantification and the manual method were observed (r = 0.88-0.98) in protocol 1. The time required for the analysis with the automated quantification method (5 min) was significantly less compared with the manual method (27 min) (P < .0001). In protocol 2, there were excellent linear correlations between the averaged left chamber parameters and the corresponding values obtained from the indexed beat (r = 0.94-0.99), and test-retest variability of left chamber parameters was low (3.5%-4.8%). Three-dimensional transthoracic echocardiography with fully automated quantification software is a rapid and reliable way to measure averaged values of left heart chamber parameters during multiple consecutive beats. Thus, it is a potential new approach for left chamber quantification in patients with AF in daily routine practice. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
Advective transport in heterogeneous aquifers: Are proxy models predictive?
NASA Astrophysics Data System (ADS)
Fiori, A.; Zarlenga, A.; Gotovac, H.; Jankovic, I.; Volpi, E.; Cvetkovic, V.; Dagan, G.
2015-12-01
We examine the prediction capability of two approximate models (Multi-Rate Mass Transfer (MRMT) and Continuous Time Random Walk (CTRW)) of non-Fickian transport, by comparison with accurate 2-D and 3-D numerical simulations. Both nonlocal-in-time approaches circumvent the need to solve the flow and transport equations by using proxy models for advection, providing the breakthrough curves (BTC) at control planes at any x, depending on a vector of five unknown parameters. Although underlain by different mechanisms, the two models have an identical structure in the Laplace transform domain and have the Markovian property of independent transitions. We show that the numerical BTCs also enjoy the Markovian property. Following the procedure recommended in the literature, from a practitioner's perspective, we first calibrate the parameter values by a best fit with the numerical BTC at a control plane at x_1, close to the injection plane, and subsequently use them for prediction at further control planes for a few values of σ_Y^2 ≤ 8. Due to their similar structure and Markovian property, the two methods perform equally well in matching the numerical BTC. The identified parameters are generally not unique, making their identification somewhat arbitrary. The inverse Gaussian model and the recently developed Multi-Indicator Model (MIM), which does not require any fitting as it relates the BTC to the permeability structure, are also discussed. The application of the proxy models for prediction requires carrying out transport field tests of large plumes for a long duration.
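As a sketch of the practitioner procedure described here (calibrate at a near-field control plane, then predict downstream), the inverse Gaussian travel-time model mentioned in the abstract can be fitted to a numerical BTC with a standard least-squares routine. The synthetic data, parameter names, and starting values below are illustrative assumptions, not the authors' simulations:

```python
import numpy as np
from scipy.optimize import curve_fit

def inverse_gaussian_btc(t, mu, lam):
    """Inverse Gaussian first-passage-time density, a common proxy for an advective BTC."""
    t = np.asarray(t, float)
    return np.sqrt(lam / (2.0 * np.pi * t**3)) * np.exp(-lam * (t - mu)**2 / (2.0 * mu**2 * t))

# Synthetic "numerical" BTC at the calibration control plane x_1 (illustrative only)
rng = np.random.default_rng(0)
t = np.linspace(0.05, 10.0, 200)
btc_numerical = inverse_gaussian_btc(t, mu=2.0, lam=5.0) * (1.0 + 0.02 * rng.standard_normal(t.size))

# Calibrate the proxy-model parameters by best fit at x_1; the fitted values would then
# be carried forward for prediction at control planes farther downstream.
(mu_fit, lam_fit), _ = curve_fit(inverse_gaussian_btc, t, btc_numerical,
                                 p0=(1.0, 1.0), bounds=(1e-6, np.inf))
print(mu_fit, lam_fit)
```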
A high deuterium abundance at redshift z = 0.7.
Webb, J K; Carswell, R F; Lanzetta, K M; Ferlet, R; Lemoine, M; Vidal-Madjar, A; Bowen, D V
1997-07-17
Of the light elements, the primordial abundance of deuterium relative to hydrogen, (D/H)_p, provides the most sensitive diagnostic for the cosmological mass density parameter, Ω_B. Recent high-redshift D/H measurements are highly discrepant, although this may reflect observational uncertainties. The larger primordial D/H values imply a low Ω_B (requiring the Universe to be dominated by non-baryonic matter), and cause problems for galactic chemical evolution models, which have difficulty in reproducing the steep decline in D/H to the present-day values. Conversely, the lower D/H values measured at high redshift imply an Ω_B greater than that derived from ⁷Li and ⁴He abundance measurements, and may require a deuterium-abundance evolution that is too low to easily explain. Here we report the first measurement of D/H at intermediate redshift (z = 0.7010), in a gas cloud selected to minimize observational uncertainties. Our analysis yields a value of D/H = (2.0 ± 0.5) × 10⁻⁴, which is at the upper end of the range of values measured at high redshifts. This finding, together with other independent observations, suggests that there may be inhomogeneity in (D/H)_p of at least a factor of ten.
An improved method to estimate reflectance parameters for high dynamic range imaging
NASA Astrophysics Data System (ADS)
Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro
2008-01-01
Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture input images. Neither method needs to segment color areas on an image, or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for specific separation of diffuse and specular reflection components. For the latter method, diffuse and specular reflectance parameters are estimated separately, using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out using both methods, with simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and the second method, with spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.
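A compact sketch of the two-step least-squares idea in the second method: treat all samples first as diffuse-only to estimate the Lambertian coefficient, subtract that prediction, then fit the log-transformed Torrance-Sparrow specular lobe. The model forms, variable names, and synthetic data are simplified assumptions, not the authors' exact equations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic geometry and measurements (illustrative only).
cos_i = rng.uniform(0.3, 1.0, 200)     # cosine of the incidence angle
cos_v = rng.uniform(0.3, 1.0, 200)     # cosine of the viewing angle
alpha = rng.uniform(0.0, 0.5, 200)     # angle between normal and half-vector [rad]
kd_true, ks_true, sigma_true = 0.6, 0.3, 0.15
I_obs = kd_true * cos_i + ks_true / cos_v * np.exp(-alpha**2 / (2 * sigma_true**2))

# Step 1: assume diffuse-only reflection and estimate the diffuse coefficient by least squares.
kd_est = np.linalg.lstsq(cos_i[:, None], I_obs, rcond=None)[0][0]

# Step 2: subtract the diffuse prediction; keep samples with a positive specular residual.
resid = I_obs - kd_est * cos_i
mask = resid > 1e-6

# Log-transformed (simplified) Torrance-Sparrow lobe is linear in [1, alpha^2]:
#   ln(I_spec * cos_v) = ln(ks) - alpha^2 / (2 sigma^2)
A = np.column_stack([np.ones(mask.sum()), alpha[mask] ** 2])
b = np.log(resid[mask] * cos_v[mask])
(c0, c1), *_ = np.linalg.lstsq(A, b, rcond=None)
ks_est, sigma_est = np.exp(c0), np.sqrt(-1.0 / (2.0 * c1))
print(kd_est, ks_est, sigma_est)
```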
Acceptable Tolerances for Matching Icing Similarity Parameters in Scaling Applications
NASA Technical Reports Server (NTRS)
Anderson, David N.
2003-01-01
This paper reviews past work and presents new data to evaluate how changes in similarity parameters affect ice shapes and how closely scale values of the parameters should match reference values. Experimental ice shapes presented are from tests by various researchers in the NASA Glenn Icing Research Tunnel. The parameters reviewed are the modified inertia parameter (which determines the stagnation collection efficiency), accumulation parameter, freezing fraction, Reynolds number, and Weber number. It was demonstrated that a good match of scale and reference ice shapes could sometimes be achieved even when values of the modified inertia parameter did not match precisely. Consequently, there can be some flexibility in setting scale droplet size, which is the test condition determined from the modified inertia parameter. A recommended guideline is that the modified inertia parameter be chosen so that the scale stagnation collection efficiency is within 10 percent of the reference value. The scale accumulation parameter and freezing fraction should also be within 10 percent of their reference values. The Weber number based on droplet size and water properties appears to be a more important scaling parameter than one based on model size and air properties. Scale values of both the Reynolds and Weber numbers need to be in the range of 60 to 160 percent of the corresponding reference values. The effects of variations in other similarity parameters have yet to be established.
Hartman, Jessica H.; Cothren, Steven D.; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A.; Miller, Grover P.
2013-01-01
Cytochromes P450 (CYP for isoforms) play a central role in biological processes, especially the metabolism of chiral molecules; thus, development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified the most optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing the distribution of the corresponding catalytic parameters (kcat, Km, and kcat/Km), and determining the best topology for networks to make predictions. Among different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and the inclusion of hydrogens yielded the most optimal artificial neural networks. Their training also required resolution of poorly distributed output catalytic parameters using a Box-Cox transformation. End-point leave-one-out cross correlations of the best neural networks revealed that predictions for individual catalytic parameters (kcat and Km) were more consistent with experimental values than those for catalytic efficiency (kcat/Km). Lastly, the neural networks correctly predicted enantioselectivity and comparable catalytic parameters measured in this study for previously uncharacterized CYP2C19 substrates, R- and S-propranolol. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds. PMID:23673224
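One concrete piece of the training pipeline described above is the Box-Cox transformation used to normalize poorly distributed catalytic parameters before network training; a minimal sketch with made-up, right-skewed kcat values (scipy's boxcox chooses the transform exponent by maximum likelihood):

```python
import numpy as np
from scipy.stats import boxcox

# Hypothetical, right-skewed catalytic parameters; not the study's data.
kcat = np.array([0.2, 0.5, 0.8, 1.1, 2.3, 4.7, 9.8, 21.0, 45.0, 120.0])

# Box-Cox requires strictly positive data; it returns the transformed values
# together with the exponent lambda chosen by maximum likelihood.
kcat_transformed, lam = boxcox(kcat)

def inv_boxcox(y, lam):
    """Map network predictions back to the original scale."""
    return np.exp(y) if lam == 0 else (lam * y + 1.0) ** (1.0 / lam)

print(lam, kcat_transformed.round(3))
```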
Computing the structural influence matrix for biological systems.
Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco
2016-06-01
We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
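The idea of the influence matrix can be illustrated with a naive parameter-sampling check (not the paper's finite-test algorithm): for each entry, apply a small constant input to variable j across many random parameter draws, integrate to steady state, and record whether the sign of the output variation in variable i is consistent. The toy model, parameter ranges, and tolerances are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, p, u):
    """Toy two-species network: production, activation, degradation; u is a constant input."""
    a, b, k1, k2 = p
    return [a - k1 * x[0] + u[0],
            b * x[0] - k2 * x[1] + u[1]]

def steady_state(p, u):
    sol = solve_ivp(rhs, (0.0, 500.0), [1.0, 1.0], args=(p, u), rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

def influence_sign(i, j, n_samples=20, eps=1e-3, tol=1e-6, rng=np.random.default_rng(1)):
    """Return +1, -1, 0, or None (indeterminate) for the structural influence of j on i."""
    signs = set()
    for _ in range(n_samples):
        p = rng.uniform(0.1, 2.0, size=4)           # random feasible parameter draw
        u = np.zeros(2); base = steady_state(p, u)
        u[j] = eps;      pert = steady_state(p, u)
        delta = pert[i] - base[i]
        signs.add(0 if abs(delta) < tol else int(np.sign(delta)))
    return signs.pop() if len(signs) == 1 else None  # determinate only if the sign never changes

print([[influence_sign(i, j) for j in range(2)] for i in range(2)])
```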
NASA Astrophysics Data System (ADS)
da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho
2018-04-01
A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying parameters having the most influence facilitates establishing the best values for parameters of models, providing useful implications in species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, comparisons were made in model performance by altering one parameter value at a time, in comparison to the best-fit parameter values. Parameters that were found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index has a major change through upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters were shown to have the greatest sensitivity, dependent on the suitability categories of each species in the study. Results enhance user understanding of which climatic factors had a greater impact on both species distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed by higher or lower values, compared to the best-fit parameter values. Thus, the sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables that are most sensitive.
Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C
2015-06-08
Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.
Atomic Radius and Charge Parameter Uncertainty in Biomolecular Solvation Energy Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiu; Lei, Huan; Gao, Peiyuan
Atomic radii and charges are two major parameters used in implicit solvent electrostatics and energy calculations. The optimization problem for charges and radii is under-determined, leading to uncertainty in the values of these parameters and in the results of solvation energy calculations using these parameters. This paper presents a method for quantifying this uncertainty in solvation energies using surrogate models based on generalized polynomial chaos (gPC) expansions. There are relatively few atom types used to specify radii parameters in implicit solvation calculations; therefore, surrogate models for these low-dimensional spaces could be constructed using least-squares fitting. However, there are many more types of atomic charges; therefore, construction of surrogate models for the charge parameter space required compressed sensing combined with an iterative rotation method to enhance problem sparsity. We present results for the uncertainty in small molecule solvation energies based on these approaches. Additionally, we explore the correlation between uncertainties due to radii and charges which motivates the need for future work in uncertainty quantification methods for high-dimensional parameter spaces.
Adjoint-Based Climate Model Tuning: Application to the Planet Simulator
NASA Astrophysics Data System (ADS)
Lyu, Guokun; Köhl, Armin; Matei, Ion; Stammer, Detlef
2018-01-01
The adjoint method is used to calibrate the medium complexity climate model "Planet Simulator" through parameter estimation. Identical twin experiments demonstrate that this method can retrieve default values of the control parameters when using a long assimilation window of the order of 2 months. Chaos synchronization through nudging, required to overcome limits in the temporal assimilation window in the adjoint method, is employed successfully to reach this assimilation window length. When assimilating ERA-Interim reanalysis data, the observations of air temperature and the radiative fluxes are the most important data for adjusting the control parameters. The global mean net longwave fluxes at the surface and at the top of the atmosphere are significantly improved by tuning two model parameters controlling the absorption of clouds and water vapor. The global mean net shortwave radiation at the surface is improved by optimizing three model parameters controlling cloud optical properties. The optimized parameters improve the free model (without nudging terms) simulation in a way similar to that in the assimilation experiments. Results suggest a promising way for tuning uncertain parameters in nonlinear coupled climate models.
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not from its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) working with CSS/PCC can be more awkward, since sensitivity and interdependence are considered separately, and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
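A minimal sketch of the two process-model parameter statistics discussed here, computed from a sensitivity (Jacobian) matrix with observation weights. The formulas follow the standard regression-calibration definitions (dimensionless scaled sensitivities aggregated into CSS; PCC taken from the parameter variance-covariance matrix), and the small matrices are invented for illustration:

```python
import numpy as np

def css_and_pcc(J, b, w):
    """Composite scaled sensitivities (CSS) and parameter correlation coefficients (PCC).

    J : (n_obs, n_par) sensitivity matrix dy_i/db_j
    b : (n_par,) parameter values used for scaling
    w : (n_obs,) observation weights
    """
    J, b, w = np.asarray(J, float), np.asarray(b, float), np.asarray(w, float)
    n_obs = J.shape[0]
    # Dimensionless scaled sensitivities and their root-mean-square per parameter
    dss = J * np.abs(b)[None, :] * np.sqrt(w)[:, None]
    css = np.sqrt((dss ** 2).sum(axis=0) / n_obs)
    # Parameter variance-covariance matrix from the weighted normal equations,
    # then correlations between parameter pairs
    cov = np.linalg.inv(J.T @ (w[:, None] * J))
    d = np.sqrt(np.diag(cov))
    pcc = cov / np.outer(d, d)
    return css, pcc

# Invented 4-observation, 3-parameter example
J = np.array([[0.8, 0.1, 0.5],
              [0.6, 0.2, 0.4],
              [0.1, 0.9, 0.1],
              [0.2, 0.7, 0.2]])
b = np.array([10.0, 0.5, 2.0])
w = np.ones(4)
css, pcc = css_and_pcc(J, b, w)
print(css.round(3))
print(pcc.round(2))   # a PCC near +/-1 flags an intercorrelated parameter pair
```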
A simple respirogram-based approach for the management of effluent from an activated sludge system.
Li, Zhi-Hua; Zhu, Yuan-Mo; Yang, Cheng-Jian; Zhang, Tian-Yu; Yu, Han-Qing
2018-08-01
Managing wastewater treatment plant (WWTP) operation based on respirometric analysis is a new and promising field. In this study, a multi-dimensional respirogram space was constructed, and an important index R_es/t (the ratio of the in-situ respiration rate to the maximum respiration rate) was derived as an alarm signal for effluent quality control. A smaller R_es/t value suggests better effluent. The critical R'_es/t value used for determining whether the effluent meets the regulation depends on operational conditions, which were characterized by temperature and the biomass ratio of heterotrophs to autotrophs. For given operational conditions, the critical R'_es/t value can be calculated from the respirogram space and the effluent conditions required by the discharge regulation, with no requirement for calibration of parameters or any additional measurements. Since it is simple, easy to use, and can be readily implemented online, this approach holds great promise for applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
Review of probabilistic analysis of dynamic response of systems with random parameters
NASA Technical Reports Server (NTRS)
Kozin, F.; Klosner, J. M.
1989-01-01
The various methods that have been studied in the past to allow probabilistic analysis of dynamic response for systems with random parameters are reviewed. Dynamic response may have been obtained deterministically if the variations about the nominal values were small; however, for space structures which require precise pointing, the variations about the nominal values of the structural details and of the environmental conditions are too large to be considered negligible. These uncertainties are accounted for in terms of probability distributions about their nominal values. The quantities of concern for describing the response of the structure include displacements, velocities, and the distributions of natural frequencies. The exact statistical characterization of the response would yield joint probability distributions for the response variables. Since the random quantities will appear as coefficients, determining the exact distributions will be difficult at best. Thus, certain approximations will have to be made. A number of available techniques are discussed, even in the nonlinear case. The methods described are: (1) Liouville's equation; (2) perturbation methods; (3) mean square approximate systems; and (4) nonlinear systems with approximation by linear systems.
Ding, Xiaoshuai; Cao, Jinde; Alsaedi, Ahmed; Alsaadi, Fuad E; Hayat, Tasawar
2017-06-01
This paper is concerned with the fixed-time synchronization for a class of complex-valued neural networks in the presence of discontinuous activation functions and parameter uncertainties. Fixed-time synchronization not only claims that the considered master-slave system realizes synchronization within a finite time segment, but also requires a uniform upper bound for such time intervals for all initial synchronization errors. To accomplish the target of fixed-time synchronization, a novel feedback control procedure is designed for the slave neural networks. By means of the Filippov discontinuity theories and Lyapunov stability theories, some sufficient conditions are established for the selection of control parameters to guarantee synchronization within a fixed time, while an upper bound of the settling time is acquired as well, which allows to be modulated to predefined values independently on initial conditions. Additionally, criteria of modified controller for assurance of fixed-time anti-synchronization are also derived for the same system. An example is included to illustrate the proposed methodologies. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ahmad, Bashir; Ashiq, Muhammad Naeem; Mumtaz, Saleem; Ali, Irshad; Najam-Ul-Haq, Muhmmad; Sadiq, Imran
2018-04-01
This article reports the fabrication of Ni-Ti doped derivatives of Sr2Co2Fe12-2xO22 by the economical sol-gel method. The room-temperature X-ray diffraction (XRD) pattern of the powder was obtained after sintering at 1050 °C. The XRD analysis revealed the formation of a pure Sr-Y hexaferrite phase. It was found that the observed values of the dielectric parameters decreased with increasing Ni-Ti substitution. The higher values of the dielectric constant and dielectric loss factor at lower frequency were due to surface charge polarization. Resonance peaks were also observed in all the samples. The observed room-temperature DC electrical resistivity was found to increase from 1.8×10⁶ to 4.9×10⁹ Ω cm. The observed activation energy values of the fabricated materials lie in the 0.52-0.82 eV range. The decrease in dielectric parameters and increase in resistivity of the fabricated samples with substitution suggest that these materials have worthwhile applications in microwave devices, as such devices require highly resistive materials.
Vermicomposting of food waste: assessing the stability and maturity
2012-01-01
Vermicompost was produced from food waste using earthworms (Eisenia fetida), and chemical parameters (EC, pH, carbon to nitrogen contents (C/N)) and a germination bioassay were examined in order to assess stability and maturity indicators during the vermicomposting process. The seed used in the germination bioassay was cress. The ranges of EC, pH, C/N and germination index were 7.5-4.9 mS/cm, 5.6-7.53, 30.13-14.32% and 12.8-58.4%, respectively. The germination index (GI) value revealed that the vermicompost was moderately phytotoxic to cress seed. The Pearson correlation coefficient was used to evaluate the relationships between the parameters. A highly significant correlation coefficient was found between the GI value and EC in the vermicompost at the 99% confidence level. The C/N value showed that the vermicompost was stable. As a result of these observations, a stability test alone was not able to ensure high vermicompost quality. Therefore, it appears that determining vermicompost quality requires the simultaneous use of maturity and stability tests. PMID:23369642
Oxidative stress parameters in localized scleroderma patients.
Kilinc, F; Sener, S; Akbaş, A; Metin, A; Kirbaş, S; Neselioglu, S; Erel, O
2016-11-01
Localized scleroderma (LS, morphea) is a chronic inflammatory skin disease of unknown cause that progresses with sclerosis of the skin and/or subcutaneous tissues. Its pathogenesis is not completely understood. Oxidative stress is suggested to have a role in the pathogenesis of localized scleroderma. We aimed to determine the relationship of morphea lesions with oxidative stress. The total oxidant capacity (TOC), total antioxidant capacity (TAC), and the paraoxonase (PON) and arylesterase (ARES) activity parameters of the PON 1 enzyme were investigated in the serum of 13 LS patients (generalized and plaque type) and 13 healthy controls. TOC values of the patient group were higher than those of the control group (p < 0.01). ARES values of the patient group were also higher than those of the control group (p < 0.0001). The oxidative stress index (OSI) was significantly higher in the patient group compared with the control group (p < 0.005). Oxidative stress therefore appears to play a role in the pathogenesis. ARES levels are increased in morphea patients, presumably in relation to oxidative stress and its reduction. Further controlled studies on wider series are required.
A function approximation approach to anomaly detection in propulsion system test data
NASA Technical Reports Server (NTRS)
Whitehead, Bruce A.; Hoyt, W. A.
1993-01-01
Ground test data from propulsion systems such as the Space Shuttle Main Engine (SSME) can be automatically screened for anomalies by a neural network. The neural network screens data after being trained with nominal data only. Given the values of 14 measurements reflecting external influences on the SSME at a given time, the neural network predicts the expected nominal value of a desired engine parameter at that time. We compared the ability of three different function-approximation techniques to perform this nominal value prediction: a novel neural network architecture based on Gaussian bar basis functions, a conventional back propagation neural network, and linear regression. These three techniques were tested with real data from six SSME ground tests containing two anomalies. The basis function network trained more rapidly than back propagation. It yielded nominal predictions with a confidence interval tight enough to distinguish anomalous deviations from the nominal fluctuations in an engine parameter. Since the function-approximation approach requires nominal training data only, it is capable of detecting unknown classes of anomalies for which training data are not available.
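A minimal stand-in for this function-approximation approach, using ordinary linear regression (the paper's simplest baseline) rather than the Gaussian-bar basis-function network: fit the nominal relation between the 14 external-influence measurements and an engine parameter, then flag samples whose prediction residual falls outside a confidence band learned from nominal data. All arrays and thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal training data: 14 external-influence measurements -> one engine parameter.
X_train = rng.normal(size=(500, 14))
true_w = rng.normal(size=14)
y_train = X_train @ true_w + 0.1 * rng.normal(size=500)

# Fit the nominal model (with an intercept column) by least squares.
A = np.column_stack([np.ones(len(X_train)), X_train])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Confidence band: residual standard deviation on nominal data.
resid_std = np.std(y_train - A @ coef)

def is_anomalous(x_row, y_measured, k=4.0):
    """Flag a measurement whose deviation from the nominal prediction exceeds k sigma."""
    y_nominal = coef[0] + x_row @ coef[1:]
    return abs(y_measured - y_nominal) > k * resid_std

x_new = rng.normal(size=14)
print(is_anomalous(x_new, x_new @ true_w + 2.0))   # large deviation -> flagged
```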
Miri-Dashe, Timzing; Osawe, Sophia; Tokdung, Monday; Daniel, Monday Tokdung Nenbammun; Daniel, Nenbammun; Choji, Rahila Pam; Mamman, Ille; Deme, Kurt; Damulak, Dapus; Abimiku, Alash'le
2014-01-01
Interpretation of laboratory test results with appropriate diagnostic accuracy requires reference or cutoff values. This study is a comprehensive determination of reference values for hematology and clinical chemistry in apparently healthy voluntary non-remunerated blood donors and pregnant women. Consenting clients were clinically screened and counseled before testing for HIV, hepatitis B, hepatitis C and syphilis. The standard national blood donors' questionnaire was administered to consenting blood donors. Blood from qualified volunteers was used for measurement of complete hematology and chemistry parameters. Blood samples were analyzed from a total of 383 participants: 124 (32.4%) males, 125 (32.6%) non-pregnant females and 134 (35.2%) pregnant females, with a mean age of 31 years. Our results showed that the red blood cell count (RBC), hemoglobin (HB) and hematocrit (HCT) had significant gender differences (p = 0.000), but not the total white blood cell count (p > 0.05), which was only significantly higher in pregnant versus non-pregnant women (p = 0.000). Hemoglobin and hematocrit values were lower in pregnancy (p = 0.000). Platelets were significantly higher in females than in males (p = 0.001) but lower in pregnant women (p = 0.001), with a marked difference across the gestational period. For the clinical chemistry parameters, there was no significant difference for sodium, potassium and chloride (p > 0.05), but gender differences exist for bicarbonate (HCO3), urea nitrogen, creatinine as well as the lipids (p < 0.05). Total bilirubin was significantly higher in males than in females (p = 0.000). Significant differences exist for all chemistry parameters between pregnant and non-pregnant women in this study (p < 0.05), except amylase and total cholesterol (p > 0.05). The hematological and clinical chemistry reference ranges established in this study showed significant gender differences. Pregnant women also differed from non-pregnant females, and values changed during pregnancy. This is the first such comprehensive study to establish reference values among adult Nigerians, and the differences observed underscore the need to establish reference values for different populations.
A uniform Tauberian theorem in dynamic games
NASA Astrophysics Data System (ADS)
Khlopin, D. V.
2018-01-01
Antagonistic dynamic games including games represented in normal form are considered. The asymptotic behaviour of value in these games is investigated as the game horizon tends to infinity (Cesàro mean) and as the discounting parameter tends to zero (Abel mean). The corresponding Abelian-Tauberian theorem is established: it is demonstrated that in both families the game value uniformly converges to the same limit, provided that at least one of the limits exists. Analogues of one-sided Tauberian theorems are obtained. An example shows that the requirements are essential even for control problems. Bibliography: 31 titles.
2017-01-01
Modeling of microbial inactivation by high hydrostatic pressure (HHP) requires a plot of the log microbial count or survival ratio versus time under constant pressure and temperature. However, at low pressure and temperature values, very long holding times are needed to obtain measurable inactivation. Since the holding time has a significant effect on the cost of HHP processing, it may be reasonable to fix the time at an appropriate value and quantify the inactivation with respect to pressure. Such a plot is called a dose-response curve, and it may be more beneficial than traditional inactivation modeling, since short holding times with different pressure values can be selected and used for the modeling of HHP inactivation. For this purpose, 49 dose-response curves (with at least 4 log10 reduction and ≥5 data points including the atmospheric pressure value (P = 0.1 MPa), and with holding time ≤10 min) for HHP inactivation of microorganisms obtained from published studies were fitted with four different models, namely the Discrete model, Shoulder model, Fermi equation, and Weibull model, and the pressure value needed for 5 log10 inactivation (P5) was calculated for all of the models above. The Shoulder model and Fermi equation produced exactly the same parameter and P5 values, while the Discrete model produced similar or sometimes exactly the same parameter values as the Fermi equation. The Weibull model produced the worst fit (the lowest adjusted determination coefficient (R²_adj) and highest mean square error (MSE) values), while the Fermi equation had the best fit (the highest R²_adj and lowest MSE values). The parameters of the models and the P5 values of each model can be useful for the further experimental design of HHP processing and also for comparing the pressure resistance of different microorganisms. Further experiments can be done to verify the P5 values at given conditions. The procedure given in this study can also be extended to enzyme inactivation by HHP. PMID:28880255
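A sketch of fitting one of the dose-response models named above, the Fermi (shoulder-type) equation, to log10 survival-ratio data versus pressure, and then reading off the pressure needed for a 5 log10 reduction (P5). The survival data below are invented; for the Fermi form S(P) = 1/(1 + exp((P - Pc)/k)), P5 follows in closed form from log10 S = -5:

```python
import numpy as np
from scipy.optimize import curve_fit

def fermi_log10_survival(P, Pc, k):
    """log10 survival ratio for the Fermi (shoulder) equation S = 1/(1 + exp((P - Pc)/k))."""
    # logaddexp avoids overflow: log10(1 + exp(x)) = logaddexp(0, x) / ln(10)
    return -np.logaddexp(0.0, (P - Pc) / k) / np.log(10.0)

# Invented dose-response data: pressure (MPa) vs log10(N/N0) at a fixed short holding time.
pressure = np.array([0.1, 100.0, 200.0, 300.0, 400.0, 500.0, 600.0])
log10_survival = np.array([0.0, -0.1, -0.6, -1.8, -3.4, -5.1, -6.9])

(Pc_fit, k_fit), _ = curve_fit(fermi_log10_survival, pressure, log10_survival,
                               p0=(300.0, 50.0), bounds=([0.0, 1e-6], [1000.0, 500.0]))

# Pressure required for a 5-log10 reduction, from log10 S = -5 solved for P.
P5 = Pc_fit + k_fit * np.log(10.0 ** 5 - 1.0)
print(round(Pc_fit, 1), round(k_fit, 1), round(P5, 1))
```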
Scale Control and Quality Management of Printed Image Parameters
NASA Astrophysics Data System (ADS)
Novoselskaya, O. A.; Kolesnikov, V. L.; Solov'eva, T. V.; Nagornova, I. V.; Babluyk, E. B.; Trapeznikova, O. V.
2017-06-01
The article compares the main evaluation techniques for a regulated printability parameter of offset paper according to the current standards GOST 24356 and ISO 3783:2006. The results of the development and implementation of a complex test scale for managing and controlling the quality of printed production are presented. The estimation scale is introduced. It includes the normalized parameters of print optical density, print uniformity, picking speed, dot gain value and print contrast, with the added criteria of microtext minimization, paper slip, resolution threshold and the effusing ability of the paper surface. The results of the analysis allow the surface properties of the substrate to be formed in a directed way, making it easier to achieve the required quality of the printed image parameters, i.e., print optical density at a predetermined level of not less than 1.3 and print uniformity with a dot-gain deviation on the order of 10 percent.
Gutierrez-Villalobos, Jose M.; Rodriguez-Resendiz, Juvenal; Rivas-Araiza, Edgar A.; Martínez-Hernández, Moisés A.
2015-01-01
Three-phase induction motor drives require high accuracy in high-performance industrial processes. Field oriented control (FOC), one of the most widely employed control schemes for induction motors, bases its operation on the estimation of electrical parameters of the motor. Errors in these parameters make an electrical machine drive work improperly, since the electrical parameter values change at low speeds, with temperature, and especially with load and duty changes. The focus of this paper is real-time, on-line estimation of the electrical parameters using a CMAC-ADALINE block added to the standard FOC scheme to improve the IM drive performance and extend the lifetime of the drive and the induction motor. Two kinds of neural network structures are used: one to estimate the rotor speed and the other to estimate the rotor resistance of the induction motor. PMID:26131677
NASA Technical Reports Server (NTRS)
Weisskopf, M. C.; Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.
2010-01-01
We present a progress report on the various endeavors we are undertaking at MSFC in support of the Wide Field X-Ray Telescope development. In particular, we discuss assembly and alignment techniques, in-situ polishing corrections, and the results of our efforts to optimize mirror prescriptions including polynomial coefficients, relative shell displacements, detector placements and tilts. This optimization does not require a blind search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough so that second-order expansions are valid, we show that the performance at the detector can be expressed as a quadratic function with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The optimal values for the parameters are found by solving the linear system of equations created by setting the derivatives of this function with respect to each parameter to zero.
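The optimization step described here reduces to minimizing a quadratic form in the alignment parameters; a minimal numerical sketch (invented coefficients, not the actual ray-trace-derived ones) of setting the gradient to zero and solving the resulting linear system:

```python
import numpy as np

# Performance metric approximated as f(p) = 0.5 p^T A p + b^T p + c, where A and b would,
# in the real application, come from ray traces through the Wolter I optic (values invented).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])   # symmetric positive definite
b = np.array([-1.0, 0.2, 0.7])
c = 5.0

# Setting df/dp = A p + b = 0 gives the optimal parameters from one linear solve,
# with no blind search through the multi-dimensional parameter space.
p_opt = np.linalg.solve(A, -b)
f_opt = 0.5 * p_opt @ A @ p_opt + b @ p_opt + c
print(p_opt.round(4), round(f_opt, 4))
```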
The DPAC Compensation Model: An Introductory Handbook.
1987-04-01
introductory and advanced economics courses at the US Air Force Academy, he served for four years as an analyst and action officer in the ... introduces new users to the ACOL framework and provides some guidelines for choosing reasonable values for the four long-run parameters required to run the ... regression coefficients for ACOL and the civilian unemployment rate; for pilots, the number of "new" pilot
Robust stability of linear systems: Some computational considerations
NASA Technical Reports Server (NTRS)
Laub, A. J.
1979-01-01
The cases of both additive and multiplicative perturbations were discussed and a number of relationships between the two cases were given. A number of computational aspects of the theory were also discussed, including a proposed new method for evaluating general transfer or frequency response matrices. The new method is numerically stable and efficient, requiring only operations to update for new values of the frequency parameter.
Inexpensive automated paging system for use at remote research sites
Sargent, S.L.; Dey, W.S.; Keefer, D.A.
1998-01-01
The use of a flow-activated automatic sampler at a remote research site required personnel to periodically visit the site to collect samples and reset the automatic sampler. To reduce site visits, a cellular telephone was modified for activation by a datalogger. The purpose of this study was to demonstrate the use and benefit of the modified telephone. Both the power switch and the speed-dial button on the telephone were bypassed and wired to a relay driver. The datalogger was programmed to compare values of a monitored environmental parameter with a target value. When the target value was reached or exceeded, the datalogger pulsed a relay driver, activating power to the telephone. A separate relay activated the speed dial, dialing the number of a tone-only pager. The use of this system has saved time and reduced travel costs by reducing the number of trips to the site, without the loss of any data.
Kuang, Yun; Zhang, Ran-Ran; Pei, Qi; Tan, Hong-Yi; Guo, Cheng-Xian; Huang, Jie; Xiang, Yu-Xia; Ouyang, Wen; Duan, Kai-Ming; Wang, Sai-Ying; Yang, Guo-Ping
2015-12-01
The application of dexmedetomidine for patient sedation is generally accepted, though its clinical use is limited by the lack of information detailing its specific properties in diverse patient populations. The aim of this study was to compare the pharmacokinetic and pharmacodynamic characteristics of dexmedetomidine between elderly and young patients during spinal anesthesia. Thirty-four subjects (elderly group: n = 15; young group: n = 19) undergoing spinal anesthesia were enrolled in the present study according to the inclusion/exclusion criteria detailed below. All subjects received an intravenous infusion of dexmedetomidine with a loading dose of 0.5 µg x kg⁻¹ for 10 minutes and a maintenance dose of 0.5 µg x kg⁻¹ x h⁻¹ for 50 minutes. Plasma concentrations of dexmedetomidine were determined by the HPLC-MS/MS method and pharmacokinetic parameters were calculated using WinNonlin software. There was no significant difference between the elderly and young subjects in the major pharmacokinetic parameters. There was a marked difference between genders in Cmax (peak plasma concentration) and tmax (time to reach Cmax) in elderly subjects, though in this cohort the other pharmacokinetic parameters were not significantly different. In the young subjects there were no noteworthy differences between genders in pharmacokinetic parameters. There was no significant difference between the two groups in BIS-AUC(0-t) (the area under the bispectral index-time curve from time 0 to t hours), BISmin (the minimum value of the bispectral index after drug delivery), or tmin-BIS (the time to reach the minimum bispectral index value). SBP (systolic blood pressure), DBP (diastolic blood pressure), HR (heart rate), and SpO₂ (pulse oxygen saturation) developed substantive differences in a time-dependent manner, but there were no statistically significant differences in these four indicators for the time*group interaction at three time points (1 hour, 2 hours, and 3 hours after drug administration); while SBP was significantly different between the groups, this differential declined in a time-dependent manner, and there were no significant differences in the D-value. The observed values and D-values of DBP and HR were similar in the two groups, but the observed value and D-value of SpO₂ did differ. There were 14 drug-related adverse events in the young group and 26 drug-related adverse events in the elderly group, a 46% differential. The percentage of patients who required intervention during surgery was 68.75% (11/16) in the elderly group and 36.84% (7/19) in the young group, with no significant difference between the two groups once age was factored in (p = 0.06). None of the pharmacodynamic indices, however, correlated with the key pharmacokinetic parameters (Cmax, AUC(0→t), AUC(0→∞)) of dexmedetomidine. The clearance of dexmedetomidine in elderly patients showed a declining trend compared to young patients. Interventions in the elderly group were more frequent than in the young group, and the elderly group showed significant adverse effects. It is suggested that elderly patients who use dexmedetomidine may benefit from a different dose. However, further research with a larger population is required to confirm these findings.
NASA Astrophysics Data System (ADS)
Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.
2012-08-01
A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning method of model parameters based on a systematic evaluation of the performance of the correction. The latter was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very low illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations are depending on the site, sun elevation and azimuth and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
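A schematic of the iterative tuning loop described above, using a generic semi-empirical C-correction as a stand-in for ATCOR3 and scoring each candidate parameter value by (i) the mismatch between mean corrected reflectance on sunlit and shaded slopes and (ii) the number of statistical outliers among weakly illuminated pixels. The synthetic data, correction form, thresholds, and score weights are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic scene: cosine of the local illumination angle and uncorrected reflectance.
cos_i = rng.uniform(0.05, 1.0, 5000)
rho = 0.25 * cos_i / 0.7 + 0.01 * rng.normal(size=5000)   # brightness tied to illumination
cos_sza = 0.7                                              # cosine of the solar zenith angle

def c_correction(rho, cos_i, cos_sza, c):
    """Generic semi-empirical C-correction (a stand-in for the ATCOR3 correction)."""
    return rho * (cos_sza + c) / (cos_i + c)

def score(c, sunlit=0.7, shaded=0.3, low=0.15):
    """Lower is better: sunlit/shaded mean mismatch plus outlier count at low illumination."""
    corr = c_correction(rho, cos_i, cos_sza, c)
    mismatch = abs(corr[cos_i > sunlit].mean() - corr[cos_i < shaded].mean())
    outliers = np.sum(np.abs(corr[cos_i < low] - corr.mean()) > 3 * corr.std())
    return mismatch + 0.001 * outliers

candidates = np.linspace(0.05, 2.0, 40)
best_c = min(candidates, key=score)       # iterative tuning: keep the best-performing value
print(best_c)
```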
[NUTRITIONAL STATUS BY ANTHROPOMETRIC AND BIOCHEMICAL PARAMETERS OF COLLEGE BASKETBALL PLAYERS].
Godoy-Cumillaf, Andrés Esteban Roberto; Cárcamo-Araneda, Cristian Rodolfo; Hermosilla-Rodríguez, Freddy Patricio; Oyarzún-Ruiz, Jean Pierre; Viveros-Herrera, José Francisco Javier
2015-12-01
The circumstances of the student population (class schedules, hours of study, budget shortages, among others) do not allow them to maintain good eating habits and lead to sedentary behaviour. University sports teams operate within this context and must deal with the above. The aim was to determine the nutritional status of a group of college basketball players (BU) by anthropometric and biochemical parameters. The research design was non-experimental, descriptive and cross-sectional, with a quantitative approach; the sample was selected on a non-probabilistic basis and included 12 players. The anthropometric parameters assessed were body mass index (BMI), somatotype and body composition; the biochemical parameters were glucose, triglycerides and cholesterol. The players have a BMI of 24.6 kg/m², are classified as endomesomorphic (5.5-4.3-1.2), and have 39.9% fat mass and 37.8% muscle mass; glucose values are 68.7 mg/dl, triglycerides 128 mg/dl and cholesterol 189 mg/dl. The BU have normal values for BMI and the biochemical parameters, but a closer look reveals a greater amount of adipose tissue, as reported by body composition and somatotype, a situation that could be related to poor eating habits; however, further study is required to reach a categorical conclusion. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Bhatt, Darshak R; Maheria, Kalpana C; Parikh, Jigisha K
2015-12-30
A simple and new approach in cloud point extraction (CPE) method was developed for removal of picric acid (PA) by the addition of N,N,N,N',N',N'-hexaethyl-ethane-1,2-diammonium dibromide ionic liquid (IL) in non-ionic surfactant Triton X-114 (TX-114). A significant increase in extraction efficiency was found upon the addition of dicationic ionic liquid (DIL) at both nearly neutral and high acidic pH. The effects of different operating parameters such as pH, temperature, time, concentration of surfactant, PA and DIL on extraction of PA were investigated and optimum conditions were established. The extraction mechanism was also proposed. A developed Langmuir isotherm was used to compute the feed surfactant concentration required for the removal of PA up to an extraction efficiency of 90%. The effects of temperature and concentration of surfactant on various thermodynamic parameters were examined. It was found that the values of ΔG° increased with temperature and decreased with surfactant concentration. The values of ΔH° and ΔS° increased with surfactant concentration. The developed approach for DIL mediated CPE has proved to be an efficient and green route for extraction of PA from water sample. Copyright © 2015 Elsevier B.V. All rights reserved.
Enhanced Elliptic Grid Generation
NASA Technical Reports Server (NTRS)
Kaul, Upender K.
2007-01-01
An enhanced method of elliptic grid generation has been invented. Whereas prior methods require user input of certain grid parameters, this method provides for these parameters to be determined automatically. "Elliptic grid generation" signifies generation of generalized curvilinear coordinate grids through solution of elliptic partial differential equations (PDEs). Usually, such grids are fitted to bounding bodies and used in numerical solution of other PDEs like those of fluid flow, heat flow, and electromagnetics. Such a grid is smooth and has continuous first and second derivatives (and possibly also continuous higher-order derivatives), grid lines are appropriately stretched or clustered, and grid lines are orthogonal or nearly so over most of the grid domain. The source terms in the grid-generating PDEs (hereafter called "defining" PDEs) make it possible for the grid to satisfy requirements for clustering and orthogonality properties in the vicinity of specific surfaces in three dimensions or in the vicinity of specific lines in two dimensions. The grid parameters in question are decay parameters that appear in the source terms of the inhomogeneous defining PDEs. The decay parameters are characteristic lengths in exponential- decay factors that express how the influences of the boundaries decrease with distance from the boundaries. These terms govern the rates at which distance between adjacent grid lines change with distance from nearby boundaries. Heretofore, users have arbitrarily specified decay parameters. However, the characteristic lengths are coupled with the strengths of the source terms, such that arbitrary specification could lead to conflicts among parameter values. Moreover, the manual insertion of decay parameters is cumbersome for static grids and infeasible for dynamically changing grids. In the present method, manual insertion and user specification of decay parameters are neither required nor allowed. Instead, the decay parameters are determined automatically as part of the solution of the defining PDEs. Depending on the shape of the boundary segments and the physical nature of the problem to be solved on the grid, the solution of the defining PDEs may provide for rates of decay to vary along and among the boundary segments and may lend itself to interpretation in terms of one or more physical quantities associated with the problem.
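A compact sketch of homogeneous elliptic (Winslow-type) grid generation on a simple two-dimensional domain, without the source terms and automatically determined decay parameters that are the subject of the invention: it relaxes the transformed Laplace equations with a Gauss-Seidel sweep. The boundary shape, grid size, and iteration count are illustrative:

```python
import numpy as np

ni, nj, n_iter = 41, 21, 200

# Boundary-fitted initial grid on a channel with a bumped lower wall (illustrative geometry).
xi = np.linspace(0.0, 1.0, ni)
y_lower = 0.1 * np.exp(-((xi - 0.5) / 0.1) ** 2)     # bump on the lower boundary
y_upper = np.ones(ni)
x = np.repeat(xi[:, None], nj, axis=1)
y = np.array([np.linspace(y_lower[i], y_upper[i], nj) for i in range(ni)])

# Gauss-Seidel relaxation of the homogeneous Winslow equations (unit steps in xi, eta):
#   alpha f_xixi - 2 beta f_xieta + gamma f_etaeta = 0   for f = x and f = y
for _ in range(n_iter):
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            x_xi = 0.5 * (x[i + 1, j] - x[i - 1, j]); y_xi = 0.5 * (y[i + 1, j] - y[i - 1, j])
            x_eta = 0.5 * (x[i, j + 1] - x[i, j - 1]); y_eta = 0.5 * (y[i, j + 1] - y[i, j - 1])
            alpha = x_eta**2 + y_eta**2
            beta = x_xi * x_eta + y_xi * y_eta
            gamma = x_xi**2 + y_xi**2
            for f in (x, y):
                cross = f[i + 1, j + 1] - f[i + 1, j - 1] - f[i - 1, j + 1] + f[i - 1, j - 1]
                f[i, j] = (alpha * (f[i + 1, j] + f[i - 1, j])
                           + gamma * (f[i, j + 1] + f[i, j - 1])
                           - 0.5 * beta * cross) / (2.0 * (alpha + gamma))

print(x.shape, y.shape)   # interior points relaxed toward a smooth, nearly orthogonal grid
```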
Chamber transport for heavy ion fusion
NASA Astrophysics Data System (ADS)
Olson, Craig L.
2014-01-01
A brief review is given of research on chamber transport for HIF (heavy ion fusion) dating from the first HIF Workshop in 1976 to the present. Chamber transport modes are categorized into ballistic transport modes and channel-like modes. Four major HIF reactor studies are summarized (HIBALL-II, HYLIFE-II, Prometheus-H, OSIRIS), with emphasis on the chamber transport environment. In general, many beams are used to provide the required symmetry and to permit focusing to the required small spots. Target parameters are then discussed, with a summary of the individual heavy ion beam parameters required for HIF. The beam parameters are then classified as to their line charge density and perveance, with special emphasis on the perveance limits for radial space charge spreading, for the space charge limiting current, and for the magnetic (Alfven) limiting current. The major experiments on ballistic transport (SFFE, Sabre beamlets, GAMBLE II, NTX, NDCX) are summarized, with specific reference to the axial electron trapping limit for charge neutralization. The major experiments on channel-like transport (GAMBLE II channel, GAMBLE II self-pinch, LBNL channels, GSI channels) are discussed. The status of current research on HIF chamber transport is summarized, and the value of future NDCX-II transport experiments for the future of HIF is noted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boozer, Allen H., E-mail: ahb17@columbia.edu
2015-03-15
The plasma current in ITER cannot be allowed to transfer from thermal to relativistic electron carriers. The potential for damage is too great. Before the final design is chosen for the mitigation system to prevent such a transfer, it is important that the parameters that control the physics be understood. Equations that determine these parameters and their characteristic values are derived. The mitigation benefits of injecting impurities with the highest possible atomic number Z, and of slowing plasma cooling during halo current mitigation to ≳40 ms in ITER, are discussed. The highest possible Z increases the poloidal flux consumption required for each e-fold in the number of relativistic electrons and reduces the number of high energy seed electrons from which exponentiation builds. Slow cooling of the plasma during halo current mitigation also reduces the electron seed. Existing experiments could test physics elements required for mitigation but cannot carry out an integrated demonstration. ITER itself cannot carry out an integrated demonstration without excessive danger of damage unless the probability of successful mitigation is extremely high. The probability of success depends on the reliability of the theory. Equations required for a reliable Monte Carlo simulation are derived.
Robust fixed-time synchronization of delayed Cohen-Grossberg neural networks.
Wan, Ying; Cao, Jinde; Wen, Guanghui; Yu, Wenwu
2016-01-01
The fixed-time master-slave synchronization of Cohen-Grossberg neural networks with parameter uncertainties and time-varying delays is investigated. Compared with finite-time synchronization, where the convergence time relies on the initial synchronization errors, the settling time of fixed-time synchronization can be adjusted to desired values regardless of initial conditions. A novel synchronization control strategy for the slave neural network is proposed. By utilizing Filippov's theory of discontinuous systems and Lyapunov stability theory, sufficient criteria are provided for selecting the control parameters to ensure synchronization within the required convergence time in the presence of parameter uncertainties. Corresponding criteria for tuning control inputs are also derived for finite-time synchronization. Finally, two numerical examples are given to illustrate the validity of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hexagonal boron nitride and water interaction parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Yanbin; Aluru, Narayana R., E-mail: aluru@illinois.edu; Wagner, Lucas K.
2016-04-28
The study of hexagonal boron nitride (hBN) in microfluidic and nanofluidic applications at the atomic level requires accurate force field parameters to describe the water-hBN interaction. In this work, we begin with benchmark quality first principles quantum Monte Carlo calculations on the interaction energy between water and hBN, which are used to validate random phase approximation (RPA) calculations. We then proceed with RPA to derive force field parameters, which are used to simulate water contact angle on bulk hBN, attaining a value within the experimental uncertainties. This paper demonstrates that end-to-end multiscale modeling, starting at detailed many-body quantum mechanics and ending with macroscopic properties, with the approximations controlled along the way, is feasible for these systems.
Earthquake ground motion: Chapter 3
Luco, Nicolas; Kircher, Charles A.; Crouse, C. B.; Charney, Finley; Haselton, Curt B.; Baker, Jack W.; Zimmerman, Reid; Hooper, John D.; McVitty, William; Taylor, Andy
2016-01-01
Most of the effort in seismic design of buildings and other structures is focused on structural design. This chapter addresses another key aspect of the design process—characterization of earthquake ground motion into parameters for use in design. Section 3.1 describes the basis of the earthquake ground motion maps in the Provisions and in ASCE 7 (the Standard). Section 3.2 has examples for the determination of ground motion parameters and spectra for use in design. Section 3.3 describes site-specific ground motion requirements and provides example site-specific design and MCER response spectra and example values of site-specific ground motion parameters. Section 3.4 discusses and provides an example for the selection and scaling of ground motion records for use in various types of response history analysis permitted in the Standard.
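As a hedged illustration of how two mapped ground motion parameters define a design spectrum, the sketch below evaluates a generic ASCE 7-style design response spectrum from assumed short-period (SDS) and 1-s (SD1) design spectral accelerations; the numerical values and the long-period transition period are hypothetical, not taken from the chapter.

```python
def design_spectrum(T, S_DS, S_D1, T_L=8.0):
    """Generic ASCE 7-style design response spectrum ordinate Sa(T), in g.
    S_DS, S_D1: short-period and 1-s design spectral accelerations (assumed values).
    T_L: long-period transition period in seconds (region dependent; assumed here)."""
    T0 = 0.2 * S_D1 / S_DS
    Ts = S_D1 / S_DS
    if T < T0:
        return S_DS * (0.4 + 0.6 * T / T0)   # rising branch
    if T <= Ts:
        return S_DS                          # constant-acceleration plateau
    if T <= T_L:
        return S_D1 / T                      # constant-velocity branch
    return S_D1 * T_L / T**2                 # constant-displacement branch

# Hypothetical site values, for illustration only
for T in (0.0, 0.2, 0.5, 1.0, 2.0):
    print(T, round(design_spectrum(T, S_DS=1.0, S_D1=0.6), 3))
```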
The frequency response of dynamic friction: Enhanced rate-and-state models
NASA Astrophysics Data System (ADS)
Cabboi, A.; Putelat, T.; Woodhouse, J.
2016-07-01
The prediction and control of friction-induced vibration requires a sufficiently accurate constitutive law for dynamic friction at the sliding interface: for linearised stability analysis, this requirement takes the form of a frictional frequency response function. Systematic measurements of this frictional frequency response function are presented for small samples of nylon and polycarbonate sliding against a glass disc. Previous efforts to explain such measurements from a theoretical model have failed, but an enhanced rate-and-state model is presented which is shown to match the measurements remarkably well. The tested parameter space covers a range of normal forces (10-50 N), of sliding speeds (1-10 mm/s) and frequencies (100-2000 Hz). The key new ingredient in the model is the inclusion of contact stiffness to take into account elastic deformations near the interface. A systematic methodology is presented to discriminate among possible variants of the model, and then to identify the model parameter values.
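The enhanced model of the paper is not reproduced here, but a minimal sketch of the classical rate-and-state framework it builds on (Dieterich "aging" state evolution, without the contact-stiffness term the paper introduces) may help fix ideas; all parameter values are illustrative.

```python
import numpy as np

def rate_state_mu(v, dt, mu0=0.6, a=0.01, b=0.015, Dc=5e-6, v0=1e-3):
    """Friction coefficient history for an imposed slip-rate history v(t), using
        mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/Dc),  dtheta/dt = 1 - v*theta/Dc
    (Dieterich aging law). Parameter values are illustrative only."""
    theta = Dc / v[0]                         # start at steady state for the initial rate
    mu = np.empty_like(v)
    for i, vi in enumerate(v):
        mu[i] = mu0 + a * np.log(vi / v0) + b * np.log(v0 * theta / Dc)
        theta += dt * (1.0 - vi * theta / Dc) # explicit Euler state update
    return mu

# Velocity-step test: 1 mm/s -> 10 mm/s, within the sliding-speed range quoted above
dt = 1e-5
v = np.concatenate([np.full(2000, 1e-3), np.full(2000, 1e-2)])
mu = rate_state_mu(v, dt)
```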
Investigation of ultrashort pulse laser ablation of the cornea and hydrogels for eye microsurgery
NASA Astrophysics Data System (ADS)
Girard, Guillaume; Zhou, Sheng; Bigaouette, Nicolas; Brunette, Isabelle; Chaker, Mohamed; Germain, Lucie; Lavertu, Pierre-Luc; Martin, François; Olivié, Gilles; Ozaki, Tsuneyuki; Parent, Mireille; Vidal, François; Kieffer, Jean-Claude
2004-10-01
The femtosecond laser is a very promising tool for performing accurate dissection in various cornea layers. Clearly, the development of this application requires basic knowledge about laser-tissue interaction. One of the most significant parameters in laser applications is the ablation threshold, defined as the minimal laser energy per unit surface required for ablation. This paper investigates the ablation threshold as a function of the laser pulse duration for two corneal layers (endothelium and epithelium) as well as for hydrogels with different degrees of hydration. The measured ablation thresholds behave very differently as a function of the pulse duration for the various materials investigated, although the values obtained for the shortest laser pulses are quite similar. Our experimental results are fitted with a simple model of laser-matter interaction in order to determine some intrinsic physical parameters characterizing each target.
LigParGen web server: an automatic OPLS-AA parameter generator for organic ligands
Dodda, Leela S.
2017-01-01
The accurate calculation of protein/nucleic acid–ligand interactions or condensed phase properties by force field-based methods requires a precise description of the energetics of intermolecular interactions. Despite the progress made in force fields, small molecule parameterization remains an open problem due to the magnitude of the chemical space; the most critical issue is the estimation of a balanced set of atomic charges with the ability to reproduce experimental properties. The LigParGen web server provides an intuitive interface for generating OPLS-AA/1.14*CM1A(-LBCC) force field parameters for organic ligands, in the formats of commonly used molecular dynamics and Monte Carlo simulation packages. This server is of high value for researchers interested in studying any phenomena based on intermolecular interactions with ligands via molecular mechanics simulations. It is free and open to all at jorgensenresearch.com/ligpargen, and has no login requirements. PMID:28444340
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, W.C.; Barrett, D.M.; Sampayan, S.E.
1990-08-06
In this paper we discuss system issues and modeling requirements within the context of energy sweep in an electron linear induction accelerator. When needed, particular parameter values are taken from the ETA-II linear induction accelerator at Lawrence Livermore National Laboratory. For this paper, the most important parameter is energy sweep during a pulse. It is important to have low energy sweep to satisfy the FEL resonance condition and to limit the beam corkscrew motion. It is desired to achieve ΔE/E = ±1% for a 50-ns flattop, whereas the present level of performance is ΔE/E = ±1% in 10 ns. To improve this situation we will identify a number of areas in which modeling could help increase understanding and improve our ability to design linear induction accelerators.
Model implementation for dynamic computation of system cost
NASA Astrophysics Data System (ADS)
Levri, J.; Vaccari, D.
The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed, based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into an optimization search (non-derivative) algorithm to predict parameter combinations that result in reduced objective function values.
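A minimal static ESM calculation of the kind the dynamic model generalizes is sketched below; the equivalency factors and subsystem numbers are placeholders, not values from the paper.

```python
def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                           crewtime_hr_per_yr, duration_yr,
                           v_eq=66.7, p_eq=237.0, c_eq=60.0, ct_eq=0.465):
    """Static ESM estimate (kg): ESM = M + V*V_eq + P*P_eq + C*C_eq + CT*D*CT_eq.
    Equivalency factors (kg/m^3, kg/kW, kg/kW, kg/crew-hr) are placeholders;
    mission-specific values would be used in practice."""
    return (mass_kg
            + volume_m3 * v_eq
            + power_kw * p_eq
            + cooling_kw * c_eq
            + crewtime_hr_per_yr * duration_yr * ct_eq)

# Hypothetical subsystem: 500 kg, 2 m^3, 1.5 kW power, 1.5 kW cooling,
# 100 crew-hours/year over a 1.5-year mission
print(equivalent_system_mass(500, 2, 1.5, 1.5, 100, 1.5))
```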
PyDREAM: high-dimensional parameter inference for biological models in python.
Shockley, Erin M; Vrugt, Jasper A; Lopez, Carlos F; Valencia, Alfonso
2018-02-15
Biological models contain many parameters whose values are difficult to measure directly via experimentation and therefore require calibration against experimental data. Markov chain Monte Carlo (MCMC) methods are suitable to estimate multivariate posterior model parameter distributions, but these methods may exhibit slow or premature convergence in high-dimensional search spaces. Here, we present PyDREAM, a Python implementation of the (Multiple-Try) Differential Evolution Adaptive Metropolis [DREAM(ZS)] algorithm developed by Vrugt and ter Braak (2008) and Laloy and Vrugt (2012). PyDREAM achieves excellent performance for complex, parameter-rich models and takes full advantage of distributed computing resources, facilitating parameter inference and uncertainty estimation of CPU-intensive biological models. PyDREAM is freely available under the GNU GPLv3 license from the Lopez lab GitHub repository at http://github.com/LoLab-VU/PyDREAM. c.lopez@vanderbilt.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
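PyDREAM's own interface is not shown here; instead, the sketch below illustrates the underlying idea of MCMC-based parameter calibration with a minimal random-walk Metropolis sampler on a toy linear model. DREAM(ZS) uses multiple chains with differential-evolution proposals, which this sketch does not attempt to reproduce.

```python
import numpy as np

def metropolis(log_posterior, theta0, n_steps=5000, step=0.1, seed=0):
    """Minimal random-walk Metropolis sampler for posterior parameter inference."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        logp_new = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_new - logp:   # accept/reject step
            theta, logp = proposal, logp_new
        chain[i] = theta
    return chain

# Toy model: y = a*x + b with Gaussian noise; flat (improper) priors assumed
x = np.linspace(0, 1, 20)
y_obs = 2.0 * x + 1.0 + 0.1 * np.random.default_rng(1).standard_normal(x.size)
log_post = lambda th: -0.5 * np.sum((y_obs - (th[0] * x + th[1]))**2 / 0.1**2)
samples = metropolis(log_post, theta0=[0.0, 0.0])
```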
Rendering of HDR content on LDR displays: an objective approach
NASA Astrophysics Data System (ADS)
Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick
2015-09-01
Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is, however, non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.
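As a hedged illustration of the kind of content-dependent parameter an objective selection method would tune, the sketch below applies a simple global Reinhard-style operator with a single "key" parameter to a synthetic HDR luminance map; it is not the method of the paper.

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18, eps=1e-6):
    """Global Reinhard-style tone mapping of an HDR luminance map.
    'key' is the content-dependent parameter a selection method would tune."""
    log_mean = np.exp(np.mean(np.log(luminance + eps)))   # log-average luminance
    scaled = key * luminance / log_mean
    return scaled / (1.0 + scaled)                        # compress to [0, 1)

# Hypothetical HDR luminance map with a 10^4:1 dynamic range
rng = np.random.default_rng(0)
hdr = np.exp(rng.uniform(np.log(1e-2), np.log(1e2), size=(64, 64)))
ldr = reinhard_tonemap(hdr, key=0.18)
```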
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wong, T. E.; Noone, D. C.; Kleiber, W.
2014-12-01
The single largest uncertainty in climate model energy balance is the surface latent heating over tropical land. Furthermore, the partitioning of the total latent heat flux into contributions from surface evaporation and plant transpiration is of great importance, but notoriously poorly constrained. Resolving these issues will require better exploiting information which lies at the interface between observations and advanced modeling tools, both of which are imperfect. There are remarkably few observations which can constrain these fluxes, placing strict requirements on developing statistical methods to maximize the use of limited information to best improve models. Previous work has demonstrated the power of incorporating stable water isotopes into land surface models for further constraining ecosystem processes. We present results from a stable water isotopically-enabled land surface model (iCLM4), including model experiments partitioning the latent heat flux into contributions from plant transpiration and surface evaporation. It is shown that the partitioning results are sensitive to the parameterization of kinetic fractionation used. We discuss and demonstrate an approach to calibrating select model parameters to observational data in a Bayesian estimation framework, requiring Markov Chain Monte Carlo sampling of the posterior distribution, which is shown to constrain uncertain parameters as well as inform relevant values for operational use. Finally, we discuss the application of the estimation scheme to iCLM4, including entropy as a measure of information content and specific challenges which arise in calibrating models with a large number of parameters.
NASA Astrophysics Data System (ADS)
Hrinivich, W. Thomas; Gibson, Eli; Gaed, Mena; Gomez, Jose A.; Moussa, Madeleine; McKenzie, Charles A.; Bauman, Glenn S.; Ward, Aaron D.; Fenster, Aaron; Wong, Eugene
2014-03-01
Purpose: T2 weighted and diffusion weighted magnetic resonance imaging (MRI) show promise in isolating prostate tumours. Dynamic contrast enhanced (DCE)-MRI has also been employed as a component in multi-parametric tumour detection schemes. Model-based parameters such as Ktrans are conventionally used to characterize DCE images and require arterial contrast agent (CR) concentration. A robust parameter map that does not depend on arterial input may be more useful for target volume delineation. We present a dimensionless parameter (Wio) that characterizes CR wash-in and washout rates without requiring arterial CR concentration. Wio is compared to Ktrans in terms of ability to discriminate cancer in the prostate, as demonstrated via comparison with histology. Methods: Three subjects underwent DCE-MRI using gadolinium contrast and 7 s imaging temporal resolution. A pathologist identified cancer on whole-mount histology specimens, and slides were deformably registered to MR images. The ability of Wio maps to discriminate cancer was determined through receiver operating characteristic curve (ROC) analysis. Results: There is a trend that Wio shows greater area under the ROC curve (AUC) than Ktrans with median AUC values of 0.74 and 0.69 respectively, but the difference was not statistically significant based on a Wilcoxon signed-rank test (p = 0.13). Conclusions: Preliminary results indicate that Wio shows potential as a tool for Ktrans QA, showing similar ability to discriminate cancer in the prostate as Ktrans without requiring arterial CR concentration.
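A small, self-contained example of the ROC analysis step (comparing a voxel-wise parameter map against histology-derived labels) is sketched below with synthetic data and scikit-learn; it does not reproduce the Wio or Ktrans computations themselves.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Synthetic voxel-wise data: 1 = cancer (per registered histology), 0 = benign
labels = rng.integers(0, 2, size=5000)
# A hypothetical wash-in/washout parameter that is larger, on average, in cancer
param_map = rng.normal(loc=1.0 + 0.8 * labels, scale=1.0)

auc = roc_auc_score(labels, param_map)
fpr, tpr, thresholds = roc_curve(labels, param_map)
print(f"AUC = {auc:.3f}")
```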
Earthquake number forecasts testing
NASA Astrophysics Data System (ADS)
Kagan, Yan Y.
2017-10-01
We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are required in particular to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for the Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study higher statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness and kurtosis both tend to zero for large earthquake rates: for the Gaussian law, these values are identically zero. A calculation of the NBD skewness and kurtosis based on the values of the first two statistical moments of the distribution shows a rapid increase of these higher moments. However, the observed catalogue values of skewness and kurtosis rise even faster. This means that for small time intervals, the earthquake number distribution is even more heavy-tailed than the NBD predicts. Therefore, for small time intervals, we propose using appropriately smoothed empirical number distributions for testing forecasted earthquake numbers.
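The following sketch compares the skewness and excess kurtosis of Poisson and negative-binomial distributions with a matched mean using scipy.stats; the mean and dispersion values are illustrative, not catalogue estimates.

```python
from scipy import stats

def number_distribution_moments(mean_rate, dispersion):
    """Skewness and excess kurtosis of a Poisson distribution and a negative
    binomial distribution (NBD) with the same mean. 'dispersion' is the NBD
    size parameter r; smaller r means stronger overdispersion."""
    p = dispersion / (dispersion + mean_rate)          # NBD success probability
    pois = stats.poisson(mean_rate).stats(moments="mvsk")
    nbd = stats.nbinom(dispersion, p).stats(moments="mvsk")
    return pois, nbd

# Illustrative values: a mean of 20 events per time interval
for r in (1.0, 5.0, 50.0):
    pois, nbd = number_distribution_moments(20.0, r)
    print(f"r={r}: Poisson skew/kurt = {float(pois[2]):.2f}/{float(pois[3]):.2f}, "
          f"NBD skew/kurt = {float(nbd[2]):.2f}/{float(nbd[3]):.2f}")
```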
Uncertainty Quantification of Equilibrium Climate Sensitivity in CCSM4
NASA Astrophysics Data System (ADS)
Covey, C. C.; Lucas, D. D.; Tannahill, J.; Klein, R.
2013-12-01
Uncertainty in the global mean equilibrium surface warming due to doubled atmospheric CO2, as computed by a "slab ocean" configuration of the Community Climate System Model version 4 (CCSM4), is quantified using 1,039 perturbed-input-parameter simulations. The slab ocean configuration reduces the model's e-folding time when approaching an equilibrium state to ~5 years. This time is much less than for the full ocean configuration, consistent with the shallow depth of the upper well-mixed layer of the ocean represented by the "slab." Adoption of the slab ocean configuration requires the assumption of preset values for the convergence of ocean heat transport beneath the upper well-mixed layer. A standard procedure for choosing these values maximizes agreement with the full ocean version's simulation of the present-day climate when input parameters assume their default values. For each new set of input parameter values, we computed the change in ocean heat transport implied by a "Phase 1" model run in which sea surface temperatures and sea ice concentrations were set equal to present-day values. The resulting total ocean heat transport (= standard value + change implied by Phase 1 run) was then input into "Phase 2" slab ocean runs with varying values of atmospheric CO2. Our uncertainty estimate is based on Latin Hypercube sampling over expert-provided uncertainty ranges of N = 36 adjustable parameters in the atmosphere (CAM4) and sea ice (CICE4) components of CCSM4. Two-dimensional projections of our sampling distribution for the N(N-1)/2 possible pairs of input parameters indicate full coverage of the N-dimensional parameter space, including edges. We used a machine learning-based support vector regression (SVR) statistical model to estimate the probability density function (PDF) of equilibrium warming. This fitting procedure produces a PDF that is qualitatively consistent with the raw histogram of our CCSM4 results. Most of the values from the SVR statistical model are within ~0.1 K of the raw results, well below the inter-decile range inferred below. Independent validation of the fit indicates residual errors that are distributed about zero with a standard deviation of 0.17 K. Analysis of variance shows that the equilibrium warming in CCSM4 is mainly linear in parameter changes. Thus, in accord with the Central Limit Theorem of statistics, the PDF of the warming is approximately Gaussian, i.e. symmetric about its mean value (3.0 K). Since SVR allows for highly nonlinear fits, the symmetry is not an artifact of the fitting procedure. The 10-90 percentile range of the PDF is 2.6-3.4 K, consistent with earlier estimates from CCSM4 but narrower than estimates from other models, which sometimes produce a high-temperature asymmetric tail in the PDF. This work was performed under auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and was funded by LLNL's Uncertainty Quantification Strategic Initiative (Laboratory Directed Research and Development Project 10-SI-013).
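A toy version of the workflow (Latin hypercube sampling of a parameter space, an SVR emulator fitted to the ensemble, and a PDF range estimated from emulator predictions) is sketched below, with a synthetic response function standing in for the CCSM4 runs; scipy and scikit-learn are assumed to be available.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_params, n_runs = 6, 300                      # toy stand-ins for N=36 parameters, 1039 runs

# Latin hypercube sample of the (normalized) parameter space
X = qmc.LatinHypercube(d=n_params, seed=0).random(n_runs)

# Toy "model": mostly linear response plus noise, standing in for the climate model runs
true_weights = rng.normal(size=n_params)
y = 3.0 + 0.3 * (X @ true_weights) + 0.1 * rng.standard_normal(n_runs)

# SVR emulator fitted to the ensemble, then used to estimate the response distribution
emulator = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
X_dense = qmc.LatinHypercube(d=n_params, seed=1).random(20000)
predictions = emulator.predict(X_dense)
p10, p90 = np.percentile(predictions, [10, 90])
print(f"10-90 percentile range of the emulated response: {p10:.2f} to {p90:.2f}")
```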
Natural parameter values for generalized gene adjacency.
Yang, Zhenyu; Sankoff, David
2010-09-01
Given the gene orders in two modern genomes, it may be difficult to decide if some genes are close enough in both genomes to infer some ancestral proximity or some functional relationship. Current methods all depend on arbitrary parameters. We explore a class of gene proximity criteria and find two kinds of natural values for their parameters. One kind has to do with the parameter value where the expected information contained in two genomes about each other is maximized. The other kind of natural value has to do with parameter values beyond which all genes are clustered. We analyze these using combinatorial and probabilistic arguments as well as simulations.
Ascent trajectory dispersion analysis for WTR heads-up space shuttle trajectory
NASA Technical Reports Server (NTRS)
1986-01-01
The results of a Space Transportation System ascent trajectory dispersion analysis are discussed. The purpose is to provide critical trajectory parameter values for assessing the Space Shuttle in a heads-up configuration launched from the Western Test Range (WTR). This analysis was conducted using a trajectory profile based on a launch from the WTR in December. The analysis consisted of the following steps: (1) nominal trajectories were simulated under the conditions specified by baseline reference mission guidelines; (2) dispersion trajectories were simulated using predetermined parametric variations; (3) requirements for a system-related composite trajectory were determined by a root-sum-square (RSS) analysis of the positive deviations between values of the aerodynamic heating indicator (AHI) generated by the dispersion and nominal trajectories; (4) using the RSS assessment as a guideline, the system-related composite trajectory was simulated by combinations of dispersion parameters which represented major contributors; (5) an assessment of environmental perturbations via an RSS analysis was made by the combination of plus or minus 2 sigma atmospheric density variation and 95% directional design wind dispersions; (6) maximum aerodynamic heating trajectories were simulated by variation of dispersion parameters which would emulate the summation of the system-related RSS and environmental RSS values of AHI. The maximum aerodynamic heating trajectories were simulated consistent with the directional winds used in the environmental analysis.
NASA Astrophysics Data System (ADS)
Özer, Hatice; Delice, Özgür
2018-03-01
Two different ways of generalizing Einstein’s general theory of relativity with a cosmological constant to Brans–Dicke type scalar–tensor theories are investigated in the linearized field approximation. In the first case a cosmological constant term is coupled to a scalar field linearly, whereas in the second case an arbitrary potential plays the role of a variable cosmological term. We see that the former configuration leads to a massless scalar field whereas the latter leads to a massive scalar field. General solutions of these linearized field equations for both cases are obtained corresponding to a static point mass. Geodesics of these solutions are also presented, and solar system effects such as the advance of the perihelion, deflection of light rays and gravitational redshift are discussed. In general relativity a cosmological constant has no role in these phenomena. We see that for the Brans–Dicke theory, the cosmological constant also has no effect on these phenomena. This is because solar system observations require very large values of the Brans–Dicke parameter, and for such large values the correction terms to these phenomena become identical to those of GR. This result is also observed for the theory with an arbitrary potential if the mass of the scalar field is very light. For a very heavy scalar field, however, there is no such limit on the value of this parameter, and there are ranges of this parameter where these contributions may become relevant on these scales. Galactic and intergalactic dynamics are also discussed for these theories in the latter part of the paper, with similar conclusions.
NASA Astrophysics Data System (ADS)
Perdana, B. P.; Setiawan, Y.; Prasetyo, L. B.
2018-02-01
Recently, highway development has been required to link regions and support their economic development. Although the availability of highways has positive impacts, it also has negative impacts, especially related to the changes of vegetated lands. This study aims to determine the change of vegetation coverage in the Jagorawi corridor Jakarta-Bogor during 37 years, and to analyze landscape patterns in the corridor based on the distance factor from Jakarta to Bogor. In this study, we used a long series of Landsat images taken by Landsat 2 MSS (1978), Landsat 5 TM (1988, 1995, and 2005) and Landsat 8 OLI/TIRS (2015). Analysis of landscape metrics was conducted through a patch analysis approach to determine the change of landscape patterns in the Jagorawi corridor Jakarta-Bogor. Several landscape metric parameters were used: Number of Patches (NumP), Mean Patch Size (MPS), Mean Shape Index (MSI), and Edge Density (ED). These parameters can be used to provide information on the structural elements of the landscape and on composition and spatial distribution in the corridor. The results indicated that vegetation coverage in the Jagorawi corridor Jakarta-Bogor decreased by about 48% over 35 years. Moreover, the NumP value increased and the MPS value decreased, indicating a higher level of fragmentation as patches become smaller. Meanwhile, the increase in the ED parameter indicates that vegetated land is degraded annually. The MSI parameter shows a decrease every year, which also indicates degradation of vegetated land. This indicates that the declining value of MSI is associated with ongoing land degradation.
NASA Astrophysics Data System (ADS)
Bilge, Gonca; Sezer, Banu; Boyaci, Ismail Hakki; Eseller, Kemal Efe; Berberoglu, Halil
2018-07-01
Liquid analysis using LIBS is a complicated process due to difficulties encountered during the collection of light and the formation of plasma in liquid. To avoid these difficulties, approaches such as aerosol formation and transforming the liquid into the solid state are applied. However, the performance of LIBS on liquid samples still remains a challenging issue. In this study, performance evaluation of LIBS and parameter optimization in liquid- and solid-phase samples were performed. For this purpose, milk was chosen as the model sample; milk powder was used as the solid sample, and milk was used as the liquid sample in the experiments. Different experimental setups were constructed for each sampling technique, and optimizations were performed to determine suitable parameters such as delay time, laser energy, repetition rate and speed of the rotary table for the solid sampling technique, and flow rate of the carrier gas for the liquid sampling technique. The target element was Ca, which is a critically important element in milk for determining its nutritional value and detecting Ca addition. At the optimum parameters, limit of detection (LOD), limit of quantification (LOQ) and relative standard deviation (RSD) values were calculated as 0.11%, 0.36% and 8.29%, respectively, for milk powder samples, while LOD, LOQ and RSD values were calculated as 0.24%, 0.81% and 10.93%, respectively, for milk samples. It can be said that LIBS is an applicable method for both liquid and solid samples with suitable systems and parameters. However, liquid analysis requires much more developed systems for more accurate results.
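A minimal example of an LOD/LOQ calculation, using the common 3.3·σ/slope and 10·σ/slope conventions on a hypothetical calibration curve, is sketched below; the data are synthetic and the convention is assumed rather than taken from the paper.

```python
import numpy as np

def lod_loq(concentrations, signals):
    """LOD and LOQ from a linear calibration curve using the common
    3.3*sigma/slope and 10*sigma/slope conventions, where sigma is the
    standard deviation of the regression residuals."""
    slope, intercept = np.polyfit(concentrations, signals, 1)
    residuals = signals - (slope * concentrations + intercept)
    sigma = np.std(residuals, ddof=2)
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical Ca calibration data (concentration in %, arbitrary signal units)
conc = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
signal = np.array([102.0, 205.0, 297.0, 410.0, 498.0, 601.0])
lod, loq = lod_loq(conc, signal)
print(f"LOD = {lod:.2f}%, LOQ = {loq:.2f}%")
```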
Evaluating Carbonate System Algorithms in a Nearshore System: Does Total Alkalinity Matter?
Sweet, Julia; Brzezinski, Mark A.; McNair, Heather M.; Passow, Uta
2016-01-01
Ocean acidification is a threat to many marine organisms, especially those that use calcium carbonate to form their shells and skeletons. The ability to accurately measure the carbonate system is the first step in characterizing the drivers behind this threat. Due to logistical realities, regular carbonate system sampling is not possible in many nearshore ocean habitats, particularly in remote, difficult-to-access locations. The ability to autonomously measure the carbonate system in situ relieves many of the logistical challenges; however, it is not always possible to measure the two required carbonate parameters autonomously. Observed relationships between sea surface salinity and total alkalinity can frequently provide a second carbonate parameter thus allowing for the calculation of the entire carbonate system. Here, we assessed the rigor of estimating total alkalinity from salinity at a depth <15 m by routinely sampling water from a pier in southern California for several carbonate system parameters. Carbonate system parameters based on measured values were compared with those based on estimated TA values. Total alkalinity was not predictable from salinity or from a combination of salinity and temperature at this site. However, dissolved inorganic carbon and the calcium carbonate saturation state of these nearshore surface waters could both be estimated within on average 5% of measured values using measured pH and salinity-derived or regionally averaged total alkalinity. Thus we find that the autonomous measurement of pH and salinity can be used to monitor trends in coastal changes in DIC and saturation state and be a useful method for high-frequency, long-term monitoring of ocean acidification. PMID:27893739
Experimental and numerical determination of the static critical pressure in ferrofluid seals
NASA Astrophysics Data System (ADS)
Horak, W.; Szczęch, M.
2013-02-01
Ferrofluids have various engineering applications; one of them is magnetic fluid seals for rotating shafts. There are various constructions of this type of seal, but the main difference among them is the number of sealing stages. The development of such a seal is a complex process which requires knowledge of the ferrofluid's physical and rheological properties and of the magnetic field distribution inside the sealing gap. One of the most important parameters of ferrofluid seals is the critical (burst) pressure. It is the pressure value at which a leak will occur. This study presents results of numerical simulation of the magnetic field distribution inside the seal gap and calculations of the critical pressure value. The obtained pressure values were verified by experiments.
Model-based Bayesian inference for ROC data analysis
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Bae, K. Ty
2013-03-01
This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a binary (zero-one) covariate to express binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by a Markov Chain Monte Carlo (MCMC) method carried out with Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed by the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and the system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternate models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations; the number of observations and estimated parameters; and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that it may be expected that one parameter value will be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA.
Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
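A minimal sketch of the default discrimination criteria and their conversion to posterior model probabilities is given below, assuming least-squares calibration with Gaussian errors; KIC is omitted because it requires the Fisher information matrix, and all numbers are hypothetical.

```python
import numpy as np

def information_criteria(sse, n_obs, n_params):
    """AIC, AICc, and BIC for a least-squares model assuming Gaussian errors.
    sse: sum of squared (weighted) residuals; n_obs: observations;
    n_params: estimated parameters (the error variance is not counted here)."""
    k, n = n_params, n_obs
    aic = n * np.log(sse / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * np.log(sse / n) + k * np.log(n)
    return aic, aicc, bic

def model_probabilities(criteria):
    """Posterior model probabilities from any criterion, via exp(-delta/2) weights."""
    delta = np.asarray(criteria) - np.min(criteria)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Three hypothetical calibrated models of one system, same 50 observations
models = {"A": (120.0, 4), "B": (95.0, 7), "C": (92.0, 12)}   # (SSE, parameters)
aicc_values = [information_criteria(sse, 50, k)[1] for sse, k in models.values()]
print(dict(zip(models, np.round(model_probabilities(aicc_values), 3))))
```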
Chaos control of Hastings-Powell model by combining chaotic motions.
Danca, Marius-F; Chattopadhyay, Joydev
2016-04-01
In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings-Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average value of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite of "chaos"), then by switching the parameter value in the HP system within two values which generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
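A rough illustration of the parameter-switching idea is sketched below on the Lorenz system (not the Hastings-Powell model itself), whose vector field is linear in the switched parameter; periodic switching between two values is compared with integration at their average. The integrator, parameter values and comparison are illustrative only.

```python
import numpy as np

def lorenz_step(state, rho, dt=0.001, sigma=10.0, beta=8.0 / 3.0):
    """One explicit Euler step of the Lorenz system (rho enters linearly)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(rho_sequence, n_steps=100000, state=(1.0, 1.0, 1.0)):
    """Integrate while cycling the control parameter through rho_sequence."""
    state = np.array(state, dtype=float)
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        state = lorenz_step(state, rho_sequence[i % len(rho_sequence)])
        traj[i] = state
    return traj

# Switch rho between two values vs. integrating with their average value
switched = integrate([28.0, 32.0])
averaged = integrate([30.0])
# After discarding transients, the two attractors should be comparable (e.g. in their means)
print(switched[20000:].mean(axis=0), averaged[20000:].mean(axis=0))
```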
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
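A small synthetic example of two-component decay fitting, contrasting a fully free fit with a fit in which the lifetimes are fixed to known values (the free-parameter reduction discussed above), is sketched below using scipy.optimize.curve_fit; instrument response convolution and the SPCImage/SLIM Curve pipelines are not modeled, and all values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, tau2, amplitude):
    """Two-component decay: amplitude * (a1*exp(-t/tau1) + (1-a1)*exp(-t/tau2))."""
    return amplitude * (a1 * np.exp(-t / tau1) + (1 - a1) * np.exp(-t / tau2))

rng = np.random.default_rng(0)
t = np.linspace(0, 12.5, 42)                     # ns; 42 time bins, as in the text
true = dict(a1=0.7, tau1=0.4, tau2=2.5, amplitude=200.0)
counts = rng.poisson(biexp(t, **true))           # roughly 700 photons across the bins

# All four parameters free
p_free, _ = curve_fit(biexp, t, counts, p0=[0.5, 0.5, 3.0, counts.max()],
                      bounds=([0, 0.05, 0.5, 0], [1, 1.5, 6.0, np.inf]))

# Lifetimes fixed to known values: only a1 and the amplitude remain free
fixed = lambda t, a1, amplitude: biexp(t, a1, true["tau1"], true["tau2"], amplitude)
p_fixed, _ = curve_fit(fixed, t, counts, p0=[0.5, counts.max()], bounds=([0, 0], [1, np.inf]))
print("free fit a1 =", round(p_free[0], 3), "fixed-lifetime fit a1 =", round(p_fixed[0], 3))
```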
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include deviations from reference values of aerodynamic coefficients, inertia, center-of-gravity location, atmospheric winds, etc. Propulsion parameter state elements have been included not as options, as just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are non-linear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
Sanz-Peláez, O; Angel-Moreno, A; Tapia-Martín, M; Conde-Martel, A; Carranza-Rodríguez, C; Carballo-Rastrilla, S; Soria-López, A; Pérez-Arellano, J L
2008-09-01
The progressive increase in the number of immigrants to Spain in recent years has made it necessary for health-care professionals to be aware of the specific characteristics of this population. An attempt is made in this study to define the normal range of common laboratory values in healthy sub-Saharan adults. Common laboratory values (blood cell counts, clotting tests and blood biochemistry values) were measured in 150 sub-Saharan immigrants previously defined as healthy according to a complete health evaluation that included a clinical history, physical examination, serologic tests and study of stool parasites. These results were compared to those from a control group consisting of 81 age- and sex-matched healthy blood donors taken from the native Spanish population. Statistically significant differences were obtained for the following values: mean corpuscular volume (MCV), red cell distribution width (RDW), total leukocytes, and serum levels of creatinine, uric acid, total protein content, creatine kinase (CK), aspartate aminotransferase (AST), gamma-glutamyl-transpeptidase (GGT), immunoglobulin G (IgG) and M (IgM). If evaluated according to the normal values in native people, a considerable percentage of healthy sub-Saharan immigrants would present
Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying
2018-01-01
Ecological process models are powerful tools for studying the terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are taken for these parameters has an important impact on the simulation results. In the past, the sensitivity and the optimization of model parameters were analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameters has received less attention. In this paper, the BIOME-BGC model was used as an example. For evergreen broad-leaved forest, deciduous broad-leaved forest and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. The objective function was constructed by using the simulated annealing algorithm combined with the flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index and a combined temporal and spatial heterogeneity judgment index to quantitatively analyze the temporal and spatial heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of the BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters of the BIOME-BGC model mostly presented temporal and spatial heterogeneity to different degrees, which varied with vegetation type. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity. In addition, the temporal heterogeneity of the optimal values of the model's sensitive parameters showed a significant linear correlation with the spatial heterogeneity under the three vegetation types. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model could be classified in order to adopt different parameter strategies in practical applications. These conclusions could help in deeply understanding the parameters and the optimal values of ecological process models, and provide a way or reference for obtaining reasonable parameter values in model applications.
Batstone, D J; Torrijos, M; Ruiz, C; Schmidt, J E
2004-01-01
The model structure in anaerobic digestion has been clarified following publication of the IWA Anaerobic Digestion Model No. 1 (ADM1). However, parameter values are not well known, and the uncertainty and variability in the parameter values given are almost unknown. Additionally, the platforms for identification of parameters, namely continuous-flow laboratory digesters and batch tests, suffer from disadvantages such as long run times and difficulty in defining initial conditions, respectively. Anaerobic sequencing batch reactors (ASBRs) are sequenced into fill-react-settle-decant phases and offer promising possibilities for estimation of parameters, as they are by nature dynamic in behaviour, and allow repeatable behaviour to establish initial conditions and evaluate parameters. In this study, we estimated parameters describing winery wastewater (most COD as ethanol) degradation using data from sequencing operation, and validated these parameters using unsequenced pulses of ethanol and acetate. The model used was the ADM1, with an extension for ethanol degradation. Parameter confidence spaces were found by non-linear, correlated analysis of the two main Monod parameters: the maximum uptake rate (k_m) and the half saturation concentration (K_S). These parameters could be estimated together using only the measured acetate concentration (20 points per cycle). From interpolating the single-cycle acetate data to multiple cycles, we estimate that a practical "optimal" identifiability could be achieved after two cycles for the acetate parameters, and three cycles for the ethanol parameters. The parameters found performed well in the short term, and represented the pulses of acetate and ethanol (within 4 days of the winery-fed cycles) very well. The main discrepancy was poor prediction of pH dynamics, which could be due to an unidentified buffer with an overall influence the same as a weak base (possibly CaCO3). Based on this work, ASBR systems are effective for parameter estimation, especially for comparative wastewater characterisation. The main disadvantages are heavy computational requirements for multiple cycles, and difficulty in establishing the correct biomass concentration in the reactor, though the latter is also a disadvantage for continuous fixed-film reactors and, especially, batch tests.
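Not the full ADM1, but a minimal batch simulation of Monod-type substrate uptake, the process governed by the k_m and K_S parameters being estimated, is sketched below with illustrative values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def monod_batch(t, y, k_m=6.0, K_S=0.5, Y=0.05):
    """Batch uptake of a single substrate S by biomass X with Monod kinetics:
    rho = k_m * S/(K_S + S) * X. Units and parameter values are illustrative only."""
    S, X = y
    rho = k_m * S / (K_S + S) * X
    return [-rho, Y * rho]

sol = solve_ivp(monod_batch, (0.0, 2.0), y0=[3.0, 1.0], dense_output=True)
t = np.linspace(0.0, 2.0, 21)
S, X = sol.sol(t)
# In a calibration, k_m and K_S would be adjusted so that S(t) matches the measured acetate data
```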
Joe J. Landsberg; Kurt H. Johnsen; Timothy J. Albaugh; H. Lee Allen; Steven E. McKeand
2001-01-01
3-PG is a simple process-based model that requires few parameter values and only readily available input data. We tested the structure of the model by calibrating it against loblolly pine data from the control treatment of the SETRES experiment in Scotland County, NC, then altered the fertility rating to simulate the effects of fertilization. There was excellent...
Long-term solar-terrestrial observations
NASA Technical Reports Server (NTRS)
1988-01-01
The results of an 18-month study of the requirements for long-term monitoring and archiving of solar-terrestrial data are presented. The value of long-term solar-terrestrial observations is discussed together with parameters, associated measurements, and observational problem areas in each of the solar-terrestrial links (the sun, the interplanetary medium, the magnetosphere, and the thermosphere-ionosphere). Some recommendations are offered for coordinated planning for long-term solar-terrestrial observations.
Analysis of Cryogenic Cycle with Process Modeling Tool: Aspen HYSYS
NASA Astrophysics Data System (ADS)
Joshi, D. M.; Patel, H. K.
2015-10-01
Cryogenic engineering deals with the development and improvement of low temperature techniques, processes and equipment. A process simulator such as Aspen HYSYS, for the design, analysis, and optimization of process plants, has features that accommodate the special requirements and therefore can be used to simulate most cryogenic liquefaction and refrigeration processes. Liquefaction is the process of cooling or refrigerating a gas to a temperature below its critical temperature so that liquid can be formed at some suitable pressure below the critical pressure. Cryogenic processes require special attention to the integration of various components such as heat exchangers, Joule-Thomson valves, turboexpanders and compressors. Here, Aspen HYSYS, a process modeling tool, is used to understand the behavior of the complete plant. This paper presents the analysis of an air liquefaction plant based on the Linde cryogenic cycle, performed using the Aspen HYSYS process modeling tool. It covers the technique used to find the optimum values for obtaining maximum liquefaction in the plant, considering constraints on the other parameters. The analysis results give a clear idea for deciding various parameter values before implementation of the actual plant in the field. They also give an idea of the productivity and profitability of the given plant configuration, which leads to the design of an efficient, productive plant.
NASA Astrophysics Data System (ADS)
Schuster, Norbert; Franks, John
2011-06-01
In the 8-12 micron waveband, Focal Plane Arrays (FPAs) with a 17 micron pixel pitch are available in different array sizes (e.g. 512 x 480 pixels and 320 x 240 pixels) and with excellent electrical properties. Many applications become possible using this new type of IR detector, which will become the future standard in uncooled technology. Lenses with an f-number faster than f/1.5 minimize the diffraction impact on the spatial resolution and guarantee a high thermal resolution for uncooled cameras. Both effects will be quantified. The distinction between the Traditional f-number (TF) and the Radiometric f-number (RF) is discussed. Lenses with different focal lengths are required for applications in a variety of markets. They are classified by their horizontal field of view (HFOV). Respecting the requirements of high-volume markets, several two-lens solutions will be discussed. A commonly accepted parameter of spatial resolution is the Modulation Transfer Function (MTF) value at the Nyquist frequency of the detector (here 30 cy/mm). This parameter of resolution will be presented versus field of view. Wide-angle and super-wide-angle lenses are susceptible to low relative illumination in the corner of the detector. Measures to reduce this drop to an acceptable value are presented.
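As a back-of-the-envelope check of the f-number argument above, the sketch below evaluates the diffraction-limited MTF of an ideal circular-aperture lens at the 30 cy/mm detector Nyquist frequency for several f-numbers. The 10 micron reference wavelength and the list of f-numbers are assumptions chosen for illustration only.

```python
# Minimal sketch (not from the paper) of the diffraction-limited MTF of an
# ideal lens at the detector Nyquist frequency (30 cy/mm), illustrating why
# f-numbers faster than about f/1.5 help in the 8-12 micron band.
import math

def diffraction_mtf(nu, wavelength_mm, f_number):
    """Diffraction-limited MTF of a circular aperture at spatial frequency nu."""
    nu_c = 1.0 / (wavelength_mm * f_number)   # optical cutoff frequency, cy/mm
    if nu >= nu_c:
        return 0.0
    x = nu / nu_c
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

nyquist = 30.0        # cy/mm for a 17 micron pixel pitch (1 / (2 * 0.017 mm))
wavelength = 0.010    # 10 micron reference wavelength, in mm
for N in (1.0, 1.2, 1.5, 2.0):
    print("f/%.1f  MTF(30 cy/mm) = %.2f" % (N, diffraction_mtf(nyquist, wavelength, N)))
```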
Towards simplification of hydrologic modeling: Identification of dominant processes
Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.
2016-01-01
The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized to process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the model complexity apparent to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
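The study uses the Fourier amplitude sensitivity test; the sketch below illustrates the same underlying idea, a first-order variance-based sensitivity index, using a much cruder binned Monte Carlo estimator and a hypothetical three-parameter toy model. It is meant only to show what "fraction of output variance attributable to one parameter" means, not to reproduce the FAST procedure.

```python
# Minimal sketch of a variance-based first-order sensitivity index, estimated
# by binning Monte Carlo samples (a crude stand-in for FAST; the model and
# parameters are hypothetical).
import numpy as np

rng = np.random.default_rng(1)
n = 20000
# Three hypothetical model parameters sampled uniformly on [0, 1].
params = rng.uniform(size=(n, 3))

def model(p):
    """Toy response: strongly driven by p0, weakly by p1, not at all by p2."""
    return np.sin(2.0 * np.pi * p[:, 0]) + 0.3 * p[:, 1] ** 2 + 0.0 * p[:, 2]

y = model(params)
var_y = y.var()

def first_order_index(x, y, bins=40):
    """S_i = Var_x( E[y | x] ) / Var(y), estimated by binning on x."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    which = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[which == b].mean() for b in range(bins)])
    return cond_means.var() / var_y

for i in range(3):
    print("parameter %d: S1 ~ %.2f" % (i, first_order_index(params[:, i], y)))
```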
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi
2010-10-01
The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrients, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables. PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the use of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.
NASA Astrophysics Data System (ADS)
Barsuk, Alexandr A.; Paladi, Florentin
2018-04-01
The dynamic behavior of a thermodynamic system, described by one order parameter and one control parameter, in a small neighborhood of ordinary and bifurcation equilibrium values of the system parameters is studied. Using the general methods of investigating the branching (bifurcations) of solutions of nonlinear equations, we performed an exhaustive analysis of the dependence of the order parameter on the control parameter in a small vicinity of the equilibrium values of the parameters, including a stability analysis of the equilibrium states and the asymptotic behavior of the order parameter as a function of the control parameter (bifurcation diagrams). The peculiarities of the transition to an unstable state of the system are discussed, and estimates of the transition time to the unstable state in the neighborhood of ordinary and bifurcation equilibrium values of the parameters are given. The influence of an external field on the dynamic behavior of the thermodynamic system is analyzed, and the peculiarities of the system's dynamic behavior near the ordinary and bifurcation equilibrium values of the parameters in the presence of an external field are discussed. The dynamic process of magnetization of a ferromagnet is discussed using the general methods of bifurcation and stability analysis presented in the paper.
Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
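A highly simplified sketch of the sizing idea is given below: bisection on the scale factor of a hyper-rectangular uncertainty set, accepting a scale whenever a brute-force, sampled worst case of the constraint function stays non-positive. The constraint, nominal point and half-widths are hypothetical, and the sampled inner search is only a stand-in for the rigorous treatment developed in the paper.

```python
# Minimal, illustrative sketch (not the paper's method) of sizing the largest
# hyper-rectangular uncertainty set for which a hard constraint g(p) <= 0
# holds.  Constraint, nominal point and half-widths are hypothetical, and the
# worst-case search uses grid sampling rather than a rigorous bound.
import numpy as np

p_nominal = np.array([1.0, 2.0])     # nominal parameter value
half_width = np.array([0.5, 0.5])    # unit hyper-rectangle half-widths

def g(p):
    """Hard constraint: must be <= 0 for every admissible parameter value."""
    return p[..., 0] ** 2 + 0.5 * p[..., 1] - 2.5

def worst_case(alpha, n_grid=41):
    """Approximate max of g over the box scaled by alpha (grid sampling)."""
    axes = [np.linspace(c - alpha * w, c + alpha * w, n_grid)
            for c, w in zip(p_nominal, half_width)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    return g(grid).max()

# Bisection on the box scale factor: largest alpha with worst-case g <= 0.
lo, hi = 0.0, 4.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if worst_case(mid) <= 0.0 else (lo, mid)
print("largest feasible box scale factor ~ %.3f" % lo)
```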
Physiological Parameter Response to Variation of Mental Workload.
Marinescu, Adrian Cornelius; Sharples, Sarah; Ritchie, Alastair Campbell; Sánchez López, Tomas; McDowell, Michael; Morvan, Hervé P
2018-02-01
To examine the relationship between experienced mental workload and physiological response by noninvasive monitoring of physiological parameters. Previous studies have examined how individual physiological measures respond to changes in mental demand and subjective reports of workload. This study explores the response of multiple physiological parameters and quantifies their added value when estimating the level of demand. The study was conducted under laboratory conditions and required participants to perform a visual-motor task that imposed varying levels of demand. The data collected consisted of physiological measurements (heart interbeat intervals, breathing rate, pupil diameter, facial thermography), subjective ratings of workload (Instantaneous Self-Assessment Workload Scale [ISA] and NASA Task Load Index), and task performance. Facial thermography and pupil diameter were demonstrated to be good candidates for noninvasive workload measurement: for seven out of 10 participants, pupil diameter showed a strong correlation (R values between .61 and .79 at a significance value of .01) with mean normalized ISA values. Facial thermography measures added on average 47.7% to the amount of variability in task performance explained by a regression model. As with the ISA ratings, the relationship between the physiological measures and performance showed strong interparticipant differences, with some individuals demonstrating a much stronger relationship between workload and performance measures than others. The results presented in this paper demonstrate that facial thermography and pupil diameter can be used for noninvasive real-time measurement of workload. The methods presented in this article, with current technological capabilities, are better suited for workplaces where the person is seated, offering the possibility of being applied to pilots and air traffic controllers.
Parsimony and goodness-of-fit in multi-dimensional NMR inversion
NASA Astrophysics Data System (ADS)
Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos
2017-01-01
Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for the study of the molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inverse algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability of multiple solutions and the selection of the most appropriate solution can be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy-to-interpret and unique NMR distribution with a finite number of principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony, which guarantees an inversion with the least number of parameter values. We suggest performing the inversion of NMR data using the forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on the Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
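The sketch below illustrates the parsimony idea on a toy one-dimensional relaxation problem: forward stepwise selection over a dictionary of exponential decays, adding components only while the Gaussian AIC improves. The dictionary, signal and noise level are hypothetical, and the example ignores the regularisation and non-negativity details of a real NMR inversion.

```python
# Minimal sketch of forward stepwise selection driven by AIC, in the spirit of
# the parsimony-based inversion described above (dictionary and data are
# hypothetical).
import numpy as np

def aic(y, y_hat, k):
    """Gaussian AIC: n*log(RSS/n) + 2k, with k selected components."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(2)
t = np.linspace(0.01, 2.0, 200)                       # acquisition times
T2_grid = np.logspace(-2, 1, 60)                      # candidate relaxation times
A = np.exp(-np.outer(t, 1.0 / T2_grid))               # dictionary of decays
y = 0.7 * np.exp(-t / 0.05) + 0.3 * np.exp(-t / 1.0)  # two-component "signal"
y += rng.normal(0.0, 0.005, t.size)

selected, best_aic = [], np.inf
while True:
    best_candidate = None
    for j in (set(range(T2_grid.size)) - set(selected)):
        cols = selected + [j]
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        score = aic(y, A[:, cols] @ coef, len(cols))
        if score < best_aic:
            best_aic, best_candidate = score, j
    if best_candidate is None:          # no candidate improves AIC: stop
        break
    selected.append(best_candidate)
print("selected T2 values:", np.round(T2_grid[selected], 3))
```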
DOE Office of Scientific and Technical Information (OSTI.GOV)
El-Farhan, Y.H.; Scow, K.M.; Fan, S.
Trichloroethylene (TCE) biodegradation in soil under aerobic conditions requires the presence of another compound, such as toluene, to support growth of microbial populations and enzyme induction. The biodegradation kinetics of TCE and toluene were examined by conducting three groups of experiments in soil: toluene only, toluene combined with low TCE concentrations, and toluene with TCE concentrations similar to or higher than toluene. The biodegradation of TCE and toluene and their interrelationships were modeled using a combination of several biodegradation functions. In the model, the pollutants were described as existing in the solid, liquid, and gas phases of soil, with biodegradation occurring only in the liquid phase. The distribution of the chemicals between the solid and liquid phase was described by a linear sorption isotherm, whereas liquid-vapor partitioning was described by Henry's law. Results from 12 experiments with toluene only could be described by a single set of kinetic parameters. The same set of parameters could describe toluene degradation in 10 experiments where low TCE concentrations were present. From these 10 experiments a set of parameters describing TCE cometabolism induced by toluene also was obtained. The complete set of parameters was used to describe the biodegradation of both compounds in 15 additional experiments, where significant TCE toxicity and inhibition effects were expected. Toluene parameters were similar to values reported for pure culture systems. Parameters describing the interaction of TCE with toluene and biomass were different from reported values for pure cultures, suggesting that the presence of soil may have affected the cometabolic ability of the indigenous soil microbial populations.
Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic
NASA Astrophysics Data System (ADS)
Haag, T.; Herrmann, J.; Hanss, M.
2010-10-01
For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out due to the fact that it only uses feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output. An inversion of the system equations is not necessary. The advancement of the method presented in this paper consists of the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based on the output data of the transformation method only. For a frequency response function of a mechanical system, this criterion allows a restriction of the identification process to some special range of frequency where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system by the corresponding uncertain FE model in a conservative way.
Guidelines for the Selection of Near-Earth Thermal Environment Parameters for Spacecraft Design
NASA Technical Reports Server (NTRS)
Anderson, B. J.; Justus, C. G.; Batts, G. W.
2001-01-01
Thermal analysis and design of Earth orbiting systems requires specification of three environmental thermal parameters: the direct solar irradiance, Earth's local albedo, and outgoing longwave radiance (OLR). In the early 1990s, data sets from the Earth Radiation Budget Experiment were analyzed on behalf of the Space Station Program to provide an accurate description of these parameters as a function of averaging time along the orbital path. This information, documented in SSP 30425 and, in more generic form, in NASA/TM-4527, enabled the specification of the proper thermal parameters for systems of various thermal response time constants. However, working with the engineering community and the SSP-30425 and TM-4527 products over a number of years revealed difficulties in interpretation and application of this material. For this reason it was decided to develop this guidelines document to help resolve these issues of practical application. In the process, the data were extensively reprocessed and a new computer code, the Simple Thermal Environment Model (STEM), was developed to simplify the process of selecting the parameters for input into extreme hot and cold thermal analyses and design specifications. In doing so, greatly improved cold-case OLR values for high-inclination orbits were derived. Thermal parameters for satellites in low-, medium-, and high-inclination low-Earth orbit and with various system thermal time constants are recommended for analysis of extreme hot and cold conditions. Practical information as to the interpretation and application of the information and an introduction to the STEM are included. Complete documentation for STEM is found in the user's manual, in preparation.
Verdurmen, Kim M J; Warmerdam, Guy J J; Lempersz, Carlijn; Hulsenboom, Alexandra D J; Renckens, Joris; Dieleman, Jeanne P; Vullings, Rik; van Laar, Judith O E H; Oei, S Guid
2018-04-01
Betamethasone is widely used to enhance fetal lung maturation in cases of threatened preterm labour. Fetal heart rate variability is one of the most important parameters to assess in fetal monitoring, since it is a reliable indicator of fetal distress. To describe the effect of betamethasone on fetal heart rate variability, by applying spectral analysis to non-invasive fetal electrocardiogram recordings. Prospective cohort study. Patients requiring betamethasone, with a gestational age of 24 weeks onwards. Fetal heart rate variability parameters on days 1, 2, and 3 after betamethasone administration are compared to a reference measurement. Following 68 inclusions, 12 patients remained with complete series of measurements and sufficient data quality. During day 1, an increase in absolute fetal heart rate variability values was seen. During day 2, a decrease in these values was seen. All trends indicate a return to pre-medication values on day 3. Normalised high- and low-frequency power show little change during the study period. The changes in fetal heart rate variability following betamethasone administration show the same pattern when calculated by spectral analysis of the fetal electrocardiogram as when calculated by cardiotocography. Since normalised spectral values show little change, the influence of autonomic modulation seems minor. Copyright © 2018 Elsevier B.V. All rights reserved.
Software Would Largely Automate Design of Kalman Filter
NASA Technical Reports Server (NTRS)
Chuang, Jason C. H.; Negast, William J.
2005-01-01
Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
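The inner tuning loop described above can be pictured with the following minimal sketch: for a fixed state selection, a scalar filter parameter (the process-noise spectral density q) is swept and the value giving the lowest RMS position error on a simulated run is retained. The constant-velocity model, noise levels and candidate q values are assumptions for illustration; ENFAD's actual optimization is not reproduced here.

```python
# Minimal sketch (not ENFAD) of sweeping a single Kalman-filter tuning
# parameter and scoring each value by RMS position error on simulated data.
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.1, 500
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity state model
H = np.array([[1.0, 0.0]])                  # position-only measurement
r_meas = 0.5 ** 2                           # measurement noise variance

# Simulate a "true" trajectory with process noise, plus noisy measurements.
x_true = np.zeros((n, 2))
for k in range(1, n):
    x_true[k] = F @ x_true[k - 1] + np.array([0.0, rng.normal(0.0, 0.2)])
z = x_true[:, 0] + rng.normal(0.0, np.sqrt(r_meas), n)

def run_filter(q):
    x, P = np.zeros(2), np.eye(2)
    err = []
    for k in range(n):
        x, P = F @ x, F @ P @ F.T + q * np.array([[dt**3 / 3, dt**2 / 2],
                                                  [dt**2 / 2, dt]])
        S = H @ P @ H.T + r_meas
        K = P @ H.T / S
        x = x + (K * (z[k] - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        err.append(x[0] - x_true[k, 0])
    return np.sqrt(np.mean(np.square(err)))

for q in (1e-3, 1e-2, 1e-1, 1.0):
    print("q = %-6g  RMS position error = %.3f" % (q, run_filter(q)))
```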
Direct computation of stochastic flow in reservoirs with uncertain parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dainton, M.P.; Nichols, N.K.; Goldwater, M.H.
1997-01-15
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one-phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte-Carlo approach where many such solutions are required. The procedure is easily applied to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data. 14 refs., 14 figs., 3 tabs.
Jeong, Yeseul; Jang, Nulee; Yasin, Muhammad; Park, Shinyoung; Chang, In Seop
2016-02-01
This study determines and compares the intrinsic kinetic parameters (Ks and Ki) of selected Thermococcus onnurineus NA1 strains (wild-type (WT) and mutants MC01, MC02, and WTC156T) using the substrate inhibition model. Ks and Ki values were used to find the optimum dissolved CO (CL) conditions inside the reactor. The results showed that, in terms of the maximum specific CO consumption rate (qCO(max)), the optimum activities of WT, MC01, MC02, and WTC156T can be achieved by maintaining the CL levels at 0.56 mM, 0.52 mM, 0.58 mM, and 0.75 mM, respectively. The qCO(max) value of WTC156T at 0.75 mM was found to be 1.5-fold higher than that of the WT strain, confirming its superiority. Kinetic modeling was then used to predict the conditions required to maintain the optimum CL levels and high cell concentrations in the reactor, based on the kinetic parameters of the WTC156T strain. Copyright © 2015 Elsevier Ltd. All rights reserved.
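The substrate inhibition model referred to above is commonly written in the Haldane form q = q_max·CL/(Ks + CL + CL²/Ki), whose maximum lies at CL = sqrt(Ks·Ki). The sketch below evaluates this relation with purely illustrative parameter values, not those reported for the NA1 strains.

```python
# Minimal sketch of a substrate-inhibition (Haldane-type) rate form of the
# kind underlying the optimum dissolved-CO levels reported above; the
# parameter values here are illustrative only.
import math

def q_co(CL, q_max, Ks, Ki):
    """Specific CO consumption rate under substrate inhibition."""
    return q_max * CL / (Ks + CL + CL ** 2 / Ki)

q_max, Ks, Ki = 100.0, 0.20, 2.0     # illustrative values only
CL_opt = math.sqrt(Ks * Ki)          # rate is maximal at sqrt(Ks*Ki)
print("optimum dissolved CO  ~ %.2f mM" % CL_opt)
print("q at optimum          ~ %.1f"    % q_co(CL_opt, q_max, Ks, Ki))
print("q at 3x optimum       ~ %.1f"    % q_co(3 * CL_opt, q_max, Ks, Ki))
```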
Applications of a general random-walk theory for confined diffusion.
Calvo-Muñoz, Elisa M; Selvan, Myvizhi Esai; Xiong, Ruichang; Ojha, Madhusudan; Keffer, David J; Nicholson, Donald M; Egami, Takeshi
2011-01-01
A general random walk theory for diffusion in the presence of nanoscale confinement is developed and applied. The random-walk theory contains two parameters describing confinement: a cage size and a cage-to-cage hopping probability. The theory captures the correct nonlinear dependence of the mean square displacement (MSD) on observation time for intermediate times. Because of its simplicity, the theory also has modest computational requirements and is thus able to simulate systems with very low diffusivities for sufficiently long times to reach the infinite-time-limit regime where the Einstein relation can be used to extract the self-diffusivity. The theory is applied to three practical cases in which the degree of order in confinement varies. The three systems include diffusion of (i) polyatomic molecules in metal organic frameworks, (ii) water in proton exchange membranes, and (iii) liquid and glassy iron. For all three cases, the comparison between theory and the results of molecular dynamics (MD) simulations indicates that the theory can describe the observed diffusion behavior with a small fraction of the computational expense. The confined-random-walk theory fit to the MSDs of very short MD simulations is capable of accurately reproducing the MSDs of much longer MD simulations. Furthermore, the values of the parameter for cage size correspond to the physical dimensions of the systems and the cage-to-cage hopping probability corresponds to the activation barrier for diffusion, indicating that the two parameters in the theory are not simply fitted values but correspond to real properties of the physical system.
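A minimal one-dimensional caricature of the confined random walk is sketched below: each walker rattles inside a cage and occasionally hops to a neighbouring cage, the two theory parameters being the cage size and the hopping probability. All numerical values are illustrative; the MSD plateaus near the cage scale at intermediate times and grows linearly once hopping dominates.

```python
# Minimal 1D sketch of a confined random walk with a cage size and a
# cage-to-cage hopping probability (parameter values are illustrative).
import numpy as np

rng = np.random.default_rng(4)
n_walkers, n_steps = 2000, 4000
cage_size, p_hop = 1.0, 0.002

cage_index = np.zeros(n_walkers)                 # which cage each walker is in
local_pos = rng.uniform(-0.5, 0.5, n_walkers)    # position within the cage
msd = np.zeros(n_steps)
x0 = cage_index * cage_size + local_pos * cage_size

for step in range(n_steps):
    # rattle inside the cage (confined between +/- cage_size/2)
    local_pos = np.clip(local_pos + rng.normal(0.0, 0.1, n_walkers), -0.5, 0.5)
    # occasional hop to a neighbouring cage
    hops = rng.random(n_walkers) < p_hop
    cage_index[hops] += rng.choice([-1.0, 1.0], hops.sum())
    x = cage_index * cage_size + local_pos * cage_size
    msd[step] = np.mean((x - x0) ** 2)

for t in (10, 100, 1000, n_steps - 1):
    print("MSD after %4d steps: %.3f" % (t, msd[t]))
```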
de Moura Bell, Juliana M L N; Aquino, Leticia F M C; Liu, Yan; Cohen, Joshua L; Lee, Hyeyoung; de Melo Silva, Vitor L; Rodrigues, Maria I; Barile, Daniela
2016-08-01
Enzymatic hydrolysis of lactose has been shown to improve the efficiency and selectivity of membrane-based separations toward the recovery of bioactive oligosaccharides. Achieving maximum lactose hydrolysis requires intrinsic process optimization for each specific substrate, but the effects of those processing conditions on the target oligosaccharides are not well understood. Response surface methodology was used to investigate the effects of pH (3.25-8.25), temperature (35-55°C), reaction time (6 to 58 min), and amount of enzyme (0.05-0.25%) on the efficiency of lactose hydrolysis by β-galactosidase and on the preservation of biologically important sialyloligosaccharides (3'-sialyllactose, 6'-sialyllactose, and 6'-sialyl-N-acetyllactosamine) naturally present in bovine colostrum whey permeate. A central composite rotatable design was used. In general, β-galactosidase activity was favored at pH values ranging from 3.25 to 5.75, with other operational parameters having a less pronounced effect. A pH of 4.5 allowed for the use of a shorter reaction time (19 min), lower temperature (40°C), and reduced amount of enzyme (0.1%), but complete hydrolysis at a higher pH (5.75) required greater values for these operational parameters. The total amount of sialyloligosaccharides was not significantly altered by the reaction parameters evaluated, suggesting specificity of β-galactosidase from Aspergillus oryzae toward lactose as well as the stability of the oligosaccharides at the pH, temperature, and reaction times evaluated. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping
2016-02-11
Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among the serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also employed for comparison, and were found to be linearly correlated with the SMART-predicted parameters, suggesting the potential applications of the SMART approach among different instrumental platforms. This approach was further validated by applying it to the simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.
2014-12-01
Seismic moment tensor is one of the most important source parameters, defining the earthquake dimension and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia mainshock is a representative event, since it is reported in the literature with moment magnitude (Mw) values spanning between 5.63 and 6.12. An uncertainty of ~0.5 units in magnitude leads to a controversial knowledge of the real size of the event. The uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution for seismic hazard assessment, Coulomb stress transfer determination and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions depending on the number, epicentral distance and azimuth of the stations used. We stress that the estimate of seismic moment from moment tensor solutions, as well as the estimates of the other kinematic source parameters, cannot be considered absolute values and must be reported with their related uncertainties, within a reproducible framework characterized by disclosed assumptions and explicit processing workflows.
Minsley, Burke J.
2011-01-01
A meaningful interpretation of geophysical measurements requires an assessment of the space of models that are consistent with the data, rather than just a single, ‘best’ model which does not convey information about parameter uncertainty. For this purpose, a trans-dimensional Bayesian Markov chain Monte Carlo (MCMC) algorithm is developed for assessing frequency-domain electromagnetic (FDEM) data acquired from airborne or ground-based systems. By sampling the distribution of models that are consistent with measured data and any prior knowledge, valuable inferences can be made about parameter values such as the likely depth to an interface, the distribution of possible resistivity values as a function of depth and non-unique relationships between parameters. The trans-dimensional aspect of the algorithm allows the number of layers to be a free parameter that is controlled by the data, where models with fewer layers are inherently favoured, which provides a natural measure of parsimony and a significant degree of flexibility in parametrization. The MCMC algorithm is used with synthetic examples to illustrate how the distribution of acceptable models is affected by the choice of prior information, the system geometry and configuration and the uncertainty in the measured system elevation. An airborne FDEM data set that was acquired for the purpose of hydrogeological characterization is also studied. The results compare favorably with traditional least-squares analysis, borehole resistivity and lithology logs from the site, and also provide new information about parameter uncertainty necessary for model assessment.
Agreement in cardiovascular risk rating based on anthropometric parameters
Dantas, Endilly Maria da Silva; Pinto, Cristiane Jordânia; Freitas, Rodrigo Pegado de Abreu; de Medeiros, Anna Cecília Queiroz
2015-01-01
Objective To investigate the agreement in evaluation of risk of developing cardiovascular diseases based on anthropometric parameters in young adults. Methods The study included 406 students; weight, height, and waist and neck circumferences were measured, and the waist-to-height ratio and the conicity index were calculated. The kappa coefficient was used to assess agreement in risk classification for cardiovascular diseases. The positive and negative specific agreement values were calculated as well. The Pearson chi-square (χ2) test was used to assess associations between categorical variables (p<0.05). Results The majority of the parameters assessed (44%) showed slight (k=0.21 to 0.40) and/or poor agreement (k<0.20), with low values of negative specific agreement. The best agreement was observed between waist circumference and waist-to-height ratio, both for the general population (k=0.88) and between sexes (k=0.93 to 0.86). There was a significant association (p<0.001) between the risk of cardiovascular diseases and females when using waist circumference and conicity index, and with males when using neck circumference. This resulted in a wide variation in the prevalence of cardiovascular disease risk (5.5%-36.5%), depending on the parameter and the sex that was assessed. Conclusion The results indicate variability in agreement in assessing risk for cardiovascular diseases based on anthropometric parameters, which also seems to be influenced by sex. Further studies in the Brazilian population are required to better understand this issue. PMID:26466060
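For readers unfamiliar with the statistics used above, the sketch below computes Cohen's kappa and the positive and negative specific agreement from a 2x2 cross-classification of two anthropometric parameters. The counts are made up for illustration and are not the study's data.

```python
# Minimal sketch of Cohen's kappa and positive/negative specific agreement
# for two parameters classifying the same subjects as risk / no-risk.
import numpy as np

#                 parameter B: risk   no-risk
table = np.array([[30,                 15],     # parameter A: risk
                  [10,                351]])    # parameter A: no-risk
n = table.sum()

po = np.trace(table) / n                               # observed agreement
pe = (table.sum(0) * table.sum(1)).sum() / n ** 2      # chance agreement
kappa = (po - pe) / (1 - pe)

a, b, c, d = table[0, 0], table[0, 1], table[1, 0], table[1, 1]
ppos = 2 * a / (2 * a + b + c)     # positive specific agreement
pneg = 2 * d / (2 * d + b + c)     # negative specific agreement

print("kappa = %.2f, positive agreement = %.2f, negative agreement = %.2f"
      % (kappa, ppos, pneg))
```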
2012-01-01
Background Pulsed wave (PW) Doppler echocardiography has become a routine non-invasive cardiac diagnostic tool in most species. However, evaluation of intracardiac blood flow requires reference values, which are poorly documented in goats. The aim of this study was to test the repeatability and variability of PW measurements, and to establish their reference values, in healthy adult Saanen goats. Using a standardised PW Doppler echocardiographic protocol, 10 healthy adult unsedated female Saanen goats were investigated three times at one-day intervals by the same observer. Mitral, tricuspid, aortic and pulmonary flows were measured from a right parasternal view, and mitral and aortic flows were also measured from a left parasternal view. The difference between left and right side measurements and the intra-observer inter-day repeatability were tested, and then the reference values of PW Doppler echocardiographic parameters in healthy adult female Saanen goats were established. Results As documented in other species, all caprine PW Doppler parameters demonstrated poor inter-day repeatability and moderate variability. Tricuspid and pulmonary flows were best evaluated on the right side whereas mitral and aortic flows were best obtained on the left side, and reference values are reported for healthy adult Saanen goats. Conclusions PW Doppler echocardiography allows the measurement of intracardiac blood flow indices in goats. The establishment of reference values will help in interpreting these indices of cardiac function in clinical cardiac cases and in developing animal models for human cardiology research. PMID:23067875
qPIPSA: Relating enzymatic kinetic parameters and interaction fields
Gabdoulline, Razif R; Stein, Matthias; Wade, Rebecca C
2007-01-01
Background The simulation of metabolic networks in quantitative systems biology requires the assignment of enzymatic kinetic parameters. Experimentally determined values are often not available and therefore computational methods to estimate these parameters are needed. It is possible to use the three-dimensional structure of an enzyme to perform simulations of a reaction and derive kinetic parameters. However, this is computationally demanding and requires detailed knowledge of the enzyme mechanism. We have therefore sought to develop a general, simple and computationally efficient procedure to relate protein structural information to enzymatic kinetic parameters that allows consistency between the kinetic and structural information to be checked and estimation of kinetic constants for structurally and mechanistically similar enzymes. Results We describe qPIPSA: quantitative Protein Interaction Property Similarity Analysis. In this analysis, molecular interaction fields, for example, electrostatic potentials, are computed from the enzyme structures. Differences in molecular interaction fields between enzymes are then related to the ratios of their kinetic parameters. This procedure can be used to estimate unknown kinetic parameters when enzyme structural information is available and kinetic parameters have been measured for related enzymes or were obtained under different conditions. The detailed interaction of the enzyme with substrate or cofactors is not modeled and is assumed to be similar for all the proteins compared. The protein structure modeling protocol employed ensures that differences between models reflect genuine differences between the protein sequences, rather than random fluctuations in protein structure. Conclusion Provided that the experimental conditions and the protein structural models refer to the same protein state or conformation, correlations between interaction fields and kinetic parameters can be established for sets of related enzymes. Outliers may arise due to variation in the importance of different contributions to the kinetic parameters, such as protein stability and conformational changes. The qPIPSA approach can assist in the validation as well as estimation of kinetic parameters, and provide insights into enzyme mechanism. PMID:17919319
Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Chen, Bailiang; De Verbizier, Jacques; Beaumont, Marine; Badr, Sammy; Cotten, Anne; Blum, Alain
2017-12-01
To evaluate intra-tumour and striated muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal based and gadolinium concentration based on fixed and variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1% respectively). Area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different and more heterogeneous than normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. • T1 value variation in musculoskeletal tumours is considerable. • T1 values in muscle and tumours are significantly different. • Patient-specific T1 estimation is needed for comparison of inter-patient perfusion parameters. • Technical variation is higher in permeability than semiquantitative perfusion parameters.
NASA Astrophysics Data System (ADS)
Gravestijn, R. M.; Drake, J. R.; Hedqvist, A.; Rachlew, E.
2004-01-01
A loop voltage is required to sustain the reversed-field pinch (RFP) equilibrium. The configuration is characterized by redistribution of magnetic helicity but with the condition that the total helicity is maintained constant. The magnetic field shell penetration time, τs, has a critical role in the stability and performance of the RFP. Confinement in the EXTRAP device has been studied with two values of τs, first (EXTRAP-T2) with τs of the order of the typical relaxation cycle timescale and then (EXTRAP-T2R) with τs much longer than the relaxation cycle timescale, but still much shorter than the pulse length. Plasma parameters show significant improvements in confinement in EXTRAP-T2R. The typical loop voltage required to sustain comparable electron poloidal beta values is a factor of 3 lower in the EXTRAP-T2R device. The improvement is attributed to reduced magnetic turbulence.
Finding the bottom and using it
Sandoval, Ruben M.; Wang, Exing; Molitoris, Bruce A.
2014-01-01
Maximizing 2-photon parameters used in acquiring images for quantitative intravital microscopy, especially when high sensitivity is required, remains an open area of investigation. Here we present data on correctly setting the black level of the photomultiplier tube amplifier by adjusting the offset to allow for accurate quantitation of low-intensity processes. When the black level is set too high, some low-intensity pixel values become zero and a nonlinear degradation in sensitivity occurs, rendering otherwise quantifiable low-intensity values virtually undetectable. Initial studies using a series of increasing offsets for a sequence of concentrations of fluorescent albumin in vitro revealed a loss of sensitivity at higher offsets for lower albumin concentrations. A similar decrease in sensitivity, and therefore in the ability to correctly determine the glomerular permeability coefficient of albumin, occurred in vivo at higher offsets. Finding the offset that yields accurate and linear data is essential for quantitative analysis when high sensitivity is required. PMID:25313346
A Mass Computation Model for Lightweight Brayton Cycle Regenerator Heat Exchangers
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
2010-01-01
Based on a theoretical analysis of convective heat transfer across large internal surface areas, this paper discusses the design implications for generating lightweight gas-gas heat exchanger designs by packaging such areas into compact three-dimensional shapes. Allowances are made for hot and cold inlet and outlet headers for assembly of completed regenerator (or recuperator) heat exchanger units into closed cycle gas turbine flow ducting. Surface area and resulting volume and mass requirements are computed for a range of heat exchanger effectiveness values and internal heat transfer coefficients. Benefit cost curves show the effect of increasing heat exchanger effectiveness on Brayton cycle thermodynamic efficiency on the plus side, while also illustrating the cost in heat exchanger required surface area, volume, and mass requirements as effectiveness is increased. The equations derived for counterflow and crossflow configurations show that as effectiveness values approach unity, or 100 percent, the required surface area, and hence heat exchanger volume and mass tend toward infinity, since the implication is that heat is transferred at a zero temperature difference. To verify the dimensional accuracy of the regenerator mass computational procedure, calculation of a regenerator specific mass, that is, heat exchanger weight per unit working fluid mass flow, is performed in both English and SI units. Identical numerical values for the specific mass parameter, whether expressed in lb/(lb/sec) or kg/(kg/sec), show the dimensional consistency of overall results.
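The area blow-up described above follows directly from the effectiveness-NTU relation; the sketch below evaluates the balanced counterflow case (NTU = ε/(1−ε)) and converts NTU to a required surface area under an assumed overall heat-transfer coefficient, mass flow and specific heat, all illustrative values rather than design numbers from the paper.

```python
# Minimal sketch: for a balanced counterflow regenerator (capacity-rate ratio
# of one), required NTU -- and hence area, volume and mass -- grows without
# bound as effectiveness approaches unity.  All numbers are illustrative.
def ntu_counterflow_balanced(effectiveness):
    """NTU = eps / (1 - eps) for a counterflow exchanger with Cr = 1."""
    return effectiveness / (1.0 - effectiveness)

c_p = 1005.0         # J/(kg K), working-fluid specific heat (roughly air)
m_dot = 1.0          # kg/s, working-fluid mass flow
u_overall = 50.0     # W/(m^2 K), assumed overall heat-transfer coefficient

for eps in (0.80, 0.90, 0.95, 0.99):
    ntu = ntu_counterflow_balanced(eps)
    area = ntu * m_dot * c_p / u_overall     # A = NTU * C_min / U
    print("effectiveness %.2f -> NTU %.1f -> area %.0f m^2" % (eps, ntu, area))
```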
Advanced Life Support System Value Metric
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Arnold, James O. (Technical Monitor)
1999-01-01
The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have reached a consensus. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different system designs are exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is then set accordingly. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross-cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, [SVM + TRL]/ESM, with appropriate weighting and scaling. The total value is the sum of SVM and TRL. Cost is represented by ESM. The paper provides a detailed description and example application of the suggested System Value Metric.
AAA gunner model based on observer theory. [predicting a gunner's tracking response]
NASA Technical Reports Server (NTRS)
Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.
1978-01-01
The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller and a remnant element. An important feature of the model is that the structure is simple; hence, a computer simulation requires only a short execution time. A parameter identification program based on the least-squares curve-fitting method and the Gauss-Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
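The observer element of such a model can be pictured with the minimal discrete-time Luenberger observer sketched below, which reconstructs position and rate from a noisy position measurement. The system matrices, observer gain and noise level are assumptions chosen for illustration, not the parameter values identified in the paper.

```python
# Minimal sketch of a discrete-time Luenberger observer: the estimate is
# corrected by the output error through an assumed gain L.
import numpy as np

dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])    # target kinematics (position, rate)
C = np.array([[1.0, 0.0]])               # only position is observed
L = np.array([[0.5], [1.0]])             # observer gain (assumed, not identified)

rng = np.random.default_rng(5)
x = np.array([1.0, 0.2])                 # true state
x_hat = np.zeros(2)                      # observer estimate

for k in range(200):
    y = C @ x + rng.normal(0.0, 0.05)            # noisy measurement
    # observer update: prediction plus output-error correction
    x_hat = A @ x_hat + (L @ (y - C @ x_hat)).ravel()
    x = A @ x                                    # true dynamics (no input here)

print("true state:       ", np.round(x, 3))
print("observer estimate:", np.round(x_hat, 3))
```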
Optimization of single photon detection model based on GM-APD
NASA Astrophysics Data System (ADS)
Chen, Yu; Yang, Yi; Hao, Peiyu
2017-11-01
High-precision laser ranging over distances of one hundred kilometers requires a detector with very strong detection capability for extremely weak light. At present, the Geiger-Mode Avalanche Photodiode (GM-APD) is widely used; it has high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system requirements is of great importance for improving the photon detection efficiency. Design optimization requires a good model. In this paper, we examine the existing Poisson distribution model and account for the important detector parameters of dark count rate, dead time, quantum efficiency and so on. We improve and optimize the detection model and select appropriate parameters to achieve optimal photon detection efficiency. The simulation is carried out using Matlab and compared with actual test results. The rationality of the model is verified, and it has definite reference value for engineering applications.
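A minimal sketch of the Poisson detection picture discussed above is given below: the probability that the GM-APD fires within a range gate as a function of the mean signal photon number, detection efficiency and dark-count rate (dead time is ignored here). All numbers are illustrative, not design values from the paper.

```python
# Minimal sketch of GM-APD detection and false-alarm probabilities under a
# Poisson model; quantum efficiency, dark-count rate and gate are assumptions.
import math

def detection_probability(n_signal, eta, dark_count_rate, gate_s):
    """P(at least one avalanche) for Poisson-distributed primary events."""
    mean_primaries = eta * n_signal + dark_count_rate * gate_s
    return 1.0 - math.exp(-mean_primaries)

def false_alarm_probability(dark_count_rate, gate_s):
    """P(avalanche when no signal photons arrive at all)."""
    return 1.0 - math.exp(-dark_count_rate * gate_s)

eta = 0.3                 # photon (avalanche) detection efficiency
dcr = 5.0e3               # dark-count rate, counts per second
gate = 1.0e-6             # range-gate width, seconds

for n_sig in (0.1, 0.5, 1.0, 3.0):
    print("mean signal photons %.1f -> Pd = %.3f (Pfa = %.4f)"
          % (n_sig, detection_probability(n_sig, eta, dcr, gate),
             false_alarm_probability(dcr, gate)))
```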
NASA Technical Reports Server (NTRS)
Scalzo, F.
1983-01-01
Sensor redundancy management (SRM) requires a system which will detect failures and reconstruct the avionics accordingly. A probability density function for determining false alarm rates was generated using an algorithmic approach. Microcomputer software was developed to print out tables of values for the cumulative probability of being in the domain of failure; the system reliability; and the false alarm probability, given that a signal is in the domain of failure. The microcomputer software was applied to the sensor output data for various AFTI F-16 flights and sensor parameters. Practical recommendations for further research were made.
NASA Astrophysics Data System (ADS)
Raju, C.; Vidya, R.
2017-11-01
The Chain Sampling Plan (ChSP-1) is widely used whenever a small-sample attributes plan is required for situations involving destructive testing of products coming out of a continuous production process [1, 2]. This paper presents a procedure for the construction and selection of a ChSP-1 plan for inspection by attributes based on membership functions [3]. A search-technique procedure is developed for obtaining the parameters of a single sampling plan for a given set of AQL and LQL values. A sample of tables providing ChSP-1 plans for various combinations of AQL and LQL values is presented [4].
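The search over plan parameters works against the ChSP-1 operating characteristic, which for binomial sampling is Pa(p) = P0 + P1·P0^i; the sketch below evaluates it for an illustrative (n, i) pair and a few fractions nonconforming, none of which are taken from the paper's tables.

```python
# Minimal sketch of the ChSP-1 operating-characteristic function; the (n, i)
# values and quality levels below are illustrative only.
from math import comb

def chsp1_pa(p, n, i):
    """ChSP-1 acceptance probability: Pa = P0 + P1 * P0**i (binomial sampling)."""
    p0 = (1.0 - p) ** n
    p1 = comb(n, 1) * p * (1.0 - p) ** (n - 1)
    return p0 + p1 * p0 ** i

n, i = 10, 2                          # sample size and number of preceding samples
for p in (0.005, 0.01, 0.05, 0.10):   # fraction nonconforming
    print("p = %.3f  Pa = %.3f" % (p, chsp1_pa(p, n, i)))
```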
Determination of power system component parameters using nonlinear dead beat estimation method
NASA Astrophysics Data System (ADS)
Kolluru, Lakshmi
Power systems are considered among the most complex man-made systems in existence today. In order to effectively supply the ever-increasing demands of consumers, power systems are required to remain stable at all times. Stability and monitoring of these complex systems are achieved by strategically placed computerized control centers. State and parameter estimation is an integral part of these facilities, as it deals with identifying the unknown states and/or parameters of the systems. Advancements in measurement technologies and the introduction of phasor measurement units (PMUs) provide detailed and dynamic information on all measurements. The accurate availability of dynamic measurements gives engineers the opportunity to expand and explore various possibilities in power system dynamic analysis and control. This thesis discusses the development of a parameter determination algorithm for nonlinear power systems, using dynamic data obtained from local measurements. The proposed algorithm was developed by observing the dead beat estimator used in state-space estimation of linear systems. The dead beat estimator is considered very effective, as it is capable of obtaining the required results in a fixed number of steps. The number of steps required is related to the order of the system and the number of parameters to be estimated. The proposed algorithm uses the idea of the dead beat estimator and nonlinear finite difference methods to create an algorithm that is user-friendly and can determine the parameters fairly accurately and effectively. The proposed algorithm is based on a deterministic approach, which uses dynamic data and mathematical models of power system components to determine the unknown parameters. The effectiveness of the algorithm is tested by implementing it to identify the unknown parameters of a synchronous machine. The MATLAB environment is used to create three test cases for dynamic analysis of the system with assumed known parameters. Faults are introduced into the virtual test systems and the dynamic data obtained in each case are analyzed and recorded. Ideally, actual measurements would be provided to the algorithm; as such measurements are not readily available, the data obtained from simulations are fed into the determination algorithm as inputs. The obtained results are then compared to the original (or assumed) values of the parameters. The results suggest that the algorithm is able to determine the parameters of a synchronous machine when crisp data are available.
Chaos control of Hastings–Powell model by combining chaotic motions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danca, Marius-F., E-mail: danca@rist.ro; Chattopadhyay, Joydev, E-mail: joydev@isical.ac.in
2016-04-15
In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings–Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: “losing + losing = winning.” If “losing” is replaced with “chaos” and “winning” with “order” (as the opposite of “chaos”), then by switching the parameter value in the HP system between two values that generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write “chaos + chaos = regular.” Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
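The mechanics of the PS idea can be sketched as follows for the Hastings-Powell equations: the parameter b1 is switched periodically between two values during integration, and the resulting long-time behaviour is compared with that obtained using the averaged b1. The HP parameter values below are commonly used nondimensional ones, taken here as assumptions rather than from the paper, and whether the averaged b1 yields a periodic window or a chaotic band depends on where it sits in the HP bifurcation diagram.

```python
# Minimal sketch of parameter switching applied to the Hastings-Powell model;
# parameter values and switching choices are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, b2, d1, d2 = 5.0, 0.1, 2.0, 0.4, 0.01

def hp_rhs(t, u, b1):
    x, y, z = u
    return [x * (1 - x) - a1 * x * y / (1 + b1 * x),
            a1 * x * y / (1 + b1 * x) - a2 * y * z / (1 + b2 * y) - d1 * y,
            a2 * y * z / (1 + b2 * y) - d2 * z]

def integrate_switched(b1_values, u0, t_end=400.0, dt=0.5):
    """Integrate the HP system, switching b1 every dt in a periodic cycle."""
    u, t, k, z_tail = np.array(u0, float), 0.0, 0, []
    while t < t_end - 1e-9:
        b1 = b1_values[k % len(b1_values)]
        sol = solve_ivp(hp_rhs, (t, t + dt), u, args=(b1,), rtol=1e-8)
        u, t, k = sol.y[:, -1], t + dt, k + 1
        if t > 0.5 * t_end:                 # keep z samples from the tail only
            z_tail.append(u[2])
    return np.array(z_tail)

z_sw = integrate_switched([2.8, 3.2], [0.8, 0.2, 9.0])     # average b1 = 3.0
z_av = integrate_switched([3.0], [0.8, 0.2, 9.0])          # plain  b1 = 3.0
print("z range, switched b1 in {2.8, 3.2}: %.2f - %.2f" % (z_sw.min(), z_sw.max()))
print("z range, averaged b1 = 3.0:         %.2f - %.2f" % (z_av.min(), z_av.max()))
```

Comparing ranges (or Poincare sections) of the trajectory tail is more meaningful than comparing instantaneous end states, since two trajectories on the same attractor generally differ in phase.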
2011-01-01
Background Electronic patient records are generally coded using extensive sets of codes but the significance of the utilisation of individual codes may be unclear. Item response theory (IRT) models are used to characterise the psychometric properties of items included in tests and questionnaires. This study asked whether the properties of medical codes in electronic patient records may be characterised through the application of item response theory models. Methods Data were provided by a cohort of 47,845 participants from 414 family practices in the UK General Practice Research Database (GPRD) with a first stroke between 1997 and 2006. Each eligible stroke code, out of a set of 202 OXMIS and Read codes, was coded as either recorded or not recorded for each participant. A two-parameter IRT model was fitted using marginal maximum likelihood estimation. Estimated parameters from the model were considered to characterise each code with respect to the latent trait of stroke diagnosis. The location parameter is referred to as a calibration parameter, while the slope parameter is referred to as a discrimination parameter. Results There were 79,874 stroke code occurrences available for analysis. Utilisation of codes varied between family practices with intraclass correlation coefficients of up to 0.25 for the most frequently used codes. IRT analyses were restricted to 110 Read codes. Calibration and discrimination parameters were estimated for 77 (70%) codes that were endorsed for 1,942 stroke patients. Parameters were not estimated for the remaining more frequently used codes. Discrimination parameter values ranged from 0.67 to 2.78, while calibration parameter values ranged from 4.47 to 11.58. The two-parameter model gave a better fit to the data than either the one- or three-parameter models. However, high chi-square values for about a fifth of the stroke codes were suggestive of poor item fit. Conclusion The application of item response theory models to coded electronic patient records might potentially contribute to identifying medical codes that offer poor discrimination or low calibration. This might indicate the need for improved coding sets or a requirement for improved clinical coding practice. However, in this study estimates were only obtained for a small proportion of participants and there was some evidence of poor model fit. There was also evidence of variation in the utilisation of codes between family practices, raising the possibility that, in practice, properties of codes may vary for different coders. PMID:22176509
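The two-parameter model referred to above is the two-parameter logistic item response function; the sketch below evaluates the probability that a code is recorded as a function of the latent trait for a strongly and a weakly discriminating code. The (a, b) values are made up, chosen only to sit roughly on the same scale as the calibration range reported.

```python
# Minimal sketch of the two-parameter logistic (2PL) item response function;
# the code names and (a, b) values are hypothetical.
import math

def p_endorse(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

codes = {
    "highly discriminating code": (2.5, 6.0),   # (a, b) -- illustrative only
    "poorly discriminating code": (0.7, 6.0),
}
for name, (a, b) in codes.items():
    probs = [p_endorse(th, a, b) for th in (4.0, 6.0, 8.0)]
    print("%s: P(recorded) at theta 4/6/8 = %.2f / %.2f / %.2f"
          % (name, *probs))
```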
Investigation of statistical iterative reconstruction for dedicated breast CT
Makeev, Andrey; Glick, Stephen J.
2013-01-01
Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: the hyperbolic potential and the anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model the attenuation properties of the uncompressed breast in the pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low noise images with high contrast microcalcifications preserved. In terms of the numerical observer detectability index, the PML method with optimal parameters yielded substantially improved performance (by a factor of greater than 10) compared to FBP. The hyperbolic prior was also observed to be superior to the TV norm. A few of the best-performing parameter pairs for the PML method also demonstrated superior performance at various radiation doses. In fact, using PML with certain parameter values results in better images, acquired using a 2 mGy dose, than FBP-reconstructed images acquired using a 6 mGy dose. Conclusions: A range of optimal free parameters for the PML algorithm with hyperbolic and TV norm-based potentials is presented for the microcalcification detection task in dedicated breast CT. The reported values can be used as starting values of the free parameters when SIR techniques are used for image reconstruction. Significant improvement in image quality can be achieved by using PML with an optimal combination of parameters, as compared to FBP. Importantly, these results suggest that improved detection of microcalcifications can be obtained by using PML with a lower radiation dose to the patient than using FBP with a higher dose. PMID:23927318
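A minimal sketch of the kind of edge-preserving roughness penalty described above: a hyperbolic potential applied to nearest-neighbour differences, weighted by the penalty strength and controlled by an edge-preservation threshold. The functional form is the commonly used one; the study's exact parametrisation, system matrix, and likelihood term are not reproduced here, and the numeric values are arbitrary.

```python
import numpy as np

def hyperbolic_potential(t, delta):
    """Edge-preserving hyperbolic potential: approximately quadratic for |t| << delta
    and approximately linear for |t| >> delta (a common form; the exact
    parametrisation used in the study may differ)."""
    return delta**2 * (np.sqrt(1.0 + (t / delta)**2) - 1.0)

def roughness_penalty(image, beta, delta):
    """Penalty beta * sum of potentials over nearest-neighbour differences of a
    2D image -- a minimal sketch of the regulariser that is added to the
    (negative) log-likelihood term in a PML objective."""
    dx = np.diff(image, axis=0)
    dy = np.diff(image, axis=1)
    return beta * (hyperbolic_potential(dx, delta).sum()
                   + hyperbolic_potential(dy, delta).sum())

# beta (penalty weight) and delta (edge-preservation threshold) are the two free
# parameters whose tuning the study investigates; the values here are arbitrary.
img = np.random.rand(64, 64)
print(roughness_penalty(img, beta=0.01, delta=1e-3))
```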
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, F. J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed in applying this technique to gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
Suspension parameter estimation in the frequency domain using a matrix inversion approach
NASA Astrophysics Data System (ADS)
Thite, A. N.; Banvidi, S.; Ibicek, T.; Bennett, L.
2011-12-01
The dynamic lumped parameter models used to optimise the ride and handling of a vehicle require base values of the suspension parameters. These parameters are generally identified experimentally. The accuracy of the identified parameters can depend on the measurement noise and the validity of the model used. The existing publications on suspension parameter identification are generally based on the time domain and use models with a limited number of degrees of freedom. Further, the data used are either from a simulated 'experiment' or from a laboratory test on an idealised quarter- or half-car model. In this paper, a method is developed in the frequency domain which effectively accounts for the measurement noise. Additional dynamic constraining equations are incorporated, and the proposed formulation results in a matrix inversion approach. The nonlinearities in damping are, however, estimated using a time-domain approach. Full-scale 4-post rig test data of a vehicle are used. The variations in the results are discussed using the modal resonant behaviour. Further, a method is implemented to show how the results can be improved when the matrix inverted is ill-conditioned. The case study shows a good agreement between the estimates based on the proposed frequency-domain approach and measurable physical parameters.
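One standard remedy for an ill-conditioned inversion of the kind mentioned above is a truncated-SVD (regularised) pseudo-inverse; the sketch below shows that idea on a synthetic least-squares problem. This is an assumed illustration of the general technique, not the conditioning treatment implemented in the paper.

```python
import numpy as np

def regularised_inverse(H, rel_tol=1e-3):
    """Truncated-SVD pseudo-inverse: discard singular values below
    rel_tol * (largest singular value) before inverting. One standard remedy
    for an ill-conditioned matrix inversion; the paper's own treatment may differ."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    keep = s > rel_tol * s[0]
    return (Vt[keep].T / s[keep]) @ U[:, keep].T

# Illustrative use: recover parameters p from responses y = H p + noise, where two
# columns of H are nearly linearly dependent (the ill-conditioned case).
rng = np.random.default_rng(2)
H = rng.standard_normal((40, 6))
H[:, 5] = H[:, 4] * (1 + 1e-6)
p_true = np.array([1.0, -2.0, 0.5, 3.0, 0.2, 0.2])
y = H @ p_true + 1e-3 * rng.standard_normal(40)
print(regularised_inverse(H) @ y)
```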
Selection of regularization parameter for l1-regularized damage detection
NASA Astrophysics Data System (ADS)
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
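A minimal sketch of the discrepancy-principle strategy: solve the l1-regularised problem for a grid of regularization parameters (here with a generic ISTA solver) and keep the value whose residual variance best matches the assumed measurement-noise variance. The problem data, solver, and parameter grid are illustrative assumptions, not the structural model or measurements used in the paper.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax-b||^2 + lam*||x||_1
    (a generic l1 solver, used here only to illustrate parameter selection)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

def select_lambda(A, b, noise_var, lambdas):
    """Discrepancy-principle selection: pick the lambda whose residual variance
    is closest to the (assumed known) measurement-noise variance."""
    best, best_gap = None, np.inf
    for lam in lambdas:
        r = A @ ista(A, b, lam) - b
        gap = abs(r.var() - noise_var)
        if gap < best_gap:
            best, best_gap = lam, gap
    return best

# Synthetic sparse-damage example (illustrative only): two "damaged" elements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[[5, 40]] = [0.8, -0.5]
noise_var = 0.01 ** 2
b = A @ x_true + rng.normal(0.0, np.sqrt(noise_var), 60)
print(select_lambda(A, b, noise_var, np.logspace(-3, 0, 10)))
```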
Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.
Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis
2008-10-01
We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
Development of a new score to estimate clinical East Coast Fever in experimentally infected cattle.
Schetters, Th P M; Arts, G; Niessen, R; Schaap, D
2010-02-10
East Coast Fever is a tick-transmitted disease in cattle caused by Theileria parva protozoan parasites. Quantification of the clinical disease can be done by determining a number of variables derived from parasitological, haematological and rectal temperature measurements, as described by Rowlands et al. (2000). From a total of 13 parameters a single ECF-score is calculated that allows categorization of infected cattle into five different classes that correlate with the severity of clinical signs. This score is complicated not only because it requires estimation of 13 parameters but also because of the subsequent mathematics. The fact that the values are normalised over a range of 0-10 for each experiment makes it impossible to compare results from different experiments. Here we present an alternative score based on the packed cell volume and the number of piroplasms in the circulation, calculated using a simple equation: ECF-score = PCV(relday0)/log(PE+10). In this equation the packed cell volume is expressed as a value relative to that of the day of infection (PCV(relday0)) and the number of piroplasms is expressed as the logarithmic value of the number of infected red blood cells (=PE) in a total of 1000 red blood cells. To allow PE to be 0, +10 is added in the denominator. We analysed a data set of 54 cattle from a previous experiment and found a statistically significant linear correlation between the ECF-score value reached during the post-infection period and the Rowlands' score value. The new score is much more practical than the Rowlands score as it only requires daily blood sampling. From these blood samples both the PCV and the number of piroplasms can be determined, and the score can be calculated daily. This allows monitoring the development of ECF after infection, which was hitherto not possible. In addition, the new score allows for easy comparison of results from different experiments.
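The new score is simple enough to compute directly from two daily measurements; a sketch is given below. The abstract does not state the logarithm base, so base 10 is assumed here, the relative PCV is taken as a simple ratio, and the example numbers are invented for illustration.

```python
import math

def ecf_score(pcv_today, pcv_day0, infected_per_1000_rbc):
    """Alternative ECF score from the abstract: PCV relative to the value on the
    day of infection, divided by log(PE + 10), where PE is the number of infected
    red blood cells per 1000 RBC. Base-10 log and a simple PCV ratio are assumed."""
    pcv_rel = pcv_today / pcv_day0
    return pcv_rel / math.log10(infected_per_1000_rbc + 10)

# Invented example values: a falling PCV and a rising parasitaemia both lower the score.
print(ecf_score(pcv_today=30.0, pcv_day0=35.0, infected_per_1000_rbc=0))    # early infection
print(ecf_score(pcv_today=20.0, pcv_day0=35.0, infected_per_1000_rbc=150))  # severe disease
```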
Computational substrates of social value in interpersonal collaboration.
Fareri, Dominic S; Chang, Luke J; Delgado, Mauricio R
2015-05-27
Decisions to engage in collaborative interactions require enduring considerable risk, yet provide the foundation for building and maintaining relationships. Here, we investigate the mechanisms underlying this process and test a computational model of social value to predict collaborative decision making. Twenty-six participants played an iterated trust game and chose to invest more frequently with their friends compared with a confederate or computer despite equal reinforcement rates. This behavior was predicted by our model, which posits that people receive a social value reward signal from reciprocation of collaborative decisions conditional on the closeness of the relationship. This social value signal was associated with increased activity in the ventral striatum and medial prefrontal cortex, which significantly predicted the reward parameters from the social value model. Therefore, we demonstrate that the computation of social value drives collaborative behavior in repeated interactions and provide a mechanistic account of reward circuit function instantiating this process.
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.
2012-02-01
This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.
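A minimal sketch of the first-order autoregressive (AR(1)) data-error idea mentioned above: whitening the residuals through the AR recursion gives the Gaussian likelihood without ever forming or inverting a data-error covariance matrix. The AR coefficient, innovation standard deviation, and residual series below are assumptions for illustration; the paper's hierarchical treatment estimates such quantities rather than fixing them.

```python
import numpy as np

def ar1_whiten(residuals, phi):
    """Whiten serially correlated residuals with a first-order autoregressive model
    e_i = phi * e_{i-1} + w_i. Working with the innovations w_i avoids forming or
    inverting a full data-error covariance matrix."""
    w = residuals.copy()
    w[1:] = residuals[1:] - phi * residuals[:-1]
    return w

def ar1_log_likelihood(residuals, phi, sigma):
    """Gaussian log-likelihood of the innovations; the first sample is treated with
    the stationary variance sigma^2 / (1 - phi^2). A minimal sketch only."""
    w = ar1_whiten(residuals, phi)
    var = np.full_like(w, sigma ** 2)
    var[0] = sigma ** 2 / (1.0 - phi ** 2)
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + w ** 2 / var)

# Illustrative use on a correlated-looking residual series with assumed phi and sigma.
rng = np.random.default_rng(1)
res = rng.standard_normal(200).cumsum() * 0.01
print(ar1_log_likelihood(res, phi=0.7, sigma=0.05))
```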
GASPLOT - A computer graphics program that draws a variety of thermophysical property charts
NASA Technical Reports Server (NTRS)
Trivisonno, R. J.; Hendricks, R. C.
1977-01-01
A FORTRAN V computer program, written for the UNIVAC 1100 series, is used to draw a variety of precision thermophysical property charts on the Calcomp plotter. In addition to the program (GASPLOT), which requires (15 160)₁₀ storage locations, a thermophysical properties routine is needed to produce plots. The program is designed so that any two of the state variables, the derived variables, or the transport variables may be plotted as the ordinate-abscissa pair with as many as five parametric variables. The parameters may be temperature, pressure, density, enthalpy, and entropy. Each parameter may have as many as 49 values, and the range of the variables is limited only by the thermophysical properties routine.
Chandrasekaran, Srinivas Niranj; Das, Jhuma; Dokholyan, Nikolay V.; Carter, Charles W.
2016-01-01
PATH rapidly computes a path and a transition state between crystal structures by minimizing the Onsager-Machlup action. It requires input parameters whose range of values can generate different transition-state structures that cannot be uniquely compared with those generated by other methods. We outline modifications to estimate these input parameters to circumvent these difficulties and validate the PATH transition states by showing consistency between transition-states derived by different algorithms for unrelated protein systems. Although functional protein conformational change trajectories are to a degree stochastic, they nonetheless pass through a well-defined transition state whose detailed structural properties can rapidly be identified using PATH. PMID:26958584
NASA Technical Reports Server (NTRS)
Mukhopadhyay, A. K.
1978-01-01
A description is presented of six simulation cases investigating the effect of the variation of static/dynamic Coulomb friction on servo system stability/performance. The upper and lower levels of dynamic Coulomb friction that allowed operation within requirements were determined to be roughly three times and 50%, respectively, of the nominal values considered in a table. A useful application for the nonlinear time response simulation is the sensitivity analysis of the final hardware design with respect to system parameters that cannot be varied realistically or easily in the actual hardware. Parameters of the static/dynamic Coulomb friction fall in this category.
Local operators in kinetic wealth distribution
NASA Astrophysics Data System (ADS)
Andrecut, M.
2016-05-01
The statistical mechanics approach to wealth distribution is based on the conservative kinetic multi-agent model for money exchange, where the local interaction rule between the agents is analogous to the elastic particle scattering process. Here, we discuss the role of a class of conservative local operators, and we show that, depending on the values of their parameters, they can be used to generate all the relevant distributions. We also show numerically that in order to generate the power-law tail, a heterogeneous risk-aversion model is required. By changing the parameters of these operators, one can also fine-tune the resulting distributions in order to provide support for the emergence of a more egalitarian wealth distribution.
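As a concrete (assumed) instance of a conservative kinetic exchange rule with heterogeneous risk aversion, the sketch below follows the widely used saving-propensity form, in which each agent keeps a fraction of its wealth and the rest is randomly redistributed in pairwise encounters. It is meant only to illustrate the class of local operators discussed, not the paper's specific operators.

```python
import numpy as np

# Conservative kinetic exchange with agent-specific saving propensity (risk aversion).
# The update rule and parameter ranges are assumptions following the common
# saving-propensity model, not the operators defined in the paper.
rng = np.random.default_rng(0)
N, T = 1000, 100_000
wealth = np.ones(N)                      # total wealth is conserved by the update rule
lam = rng.uniform(0.0, 0.99, N)          # heterogeneous risk aversion / saving propensity

for _ in range(T):
    i, j = rng.integers(0, N, 2)
    if i == j:
        continue
    pool = (1 - lam[i]) * wealth[i] + (1 - lam[j]) * wealth[j]   # wealth put at stake
    eps = rng.random()
    wealth[i] = lam[i] * wealth[i] + eps * pool
    wealth[j] = lam[j] * wealth[j] + (1 - eps) * pool

# Heterogeneous lam is what produces the fat (power-law-like) tail of the wealth
# distribution; with a single common lam the tail is exponential.
print(np.sort(wealth)[-10:])
```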
Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.
Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S
2017-01-01
Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.
Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity
Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin
2016-01-01
An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression for the capacitance value is derived by solving a Laplace equation with a Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and the contribution of each to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental results show that the model whose electrode-gap position is 10 mm from the electrode center achieves a high sensitivity of 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ±40°). This offers flexibility for meeting various measurement requirements while achieving an optimized structure in practical designs. PMID:26805844
Optimisation of process parameters on thin shell part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.
2017-09-01
This study is carried out to focus on optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the input in order to analyse the warpage value which is the output in this study. There are some significant parameters that have been used which are melt temperature, mould temperature, packing pressure, and cooling time. A plastic part made of Polypropylene (PP) has been selected as the study part. Optimisation of process parameters is applied in Design Expert software with the aim to minimise the obtained warpage value. Response Surface Methodology (RSM) has been applied in this study together with Analysis of Variance (ANOVA) in order to investigate the interactions between parameters that are significant to the warpage value. Thus, the optimised warpage value can be obtained using the model designed using RSM due to its minimum error value. This study comes out with the warpage value improved by using RSM.