Concurrently adjusting interrelated control parameters to achieve optimal engine performance
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-12-01
Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
Acceptable Tolerances for Matching Icing Similarity Parameters in Scaling Applications
NASA Technical Reports Server (NTRS)
Anderson, David N.
2003-01-01
This paper reviews past work and presents new data to evaluate how changes in similarity parameters affect ice shapes and how closely scale values of the parameters should match reference values. Experimental ice shapes presented are from tests by various researchers in the NASA Glenn Icing Research Tunnel. The parameters reviewed are the modified inertia parameter (which determines the stagnation collection efficiency), accumulation parameter, freezing fraction, Reynolds number, and Weber number. It was demonstrated that a good match of scale and reference ice shapes could sometimes be achieved even when values of the modified inertia parameter did not match precisely. Consequently, there can be some flexibility in setting scale droplet size, which is the test condition determined from the modified inertia parameter. A recommended guideline is that the modified inertia parameter be chosen so that the scale stagnation collection efficiency is within 10 percent of the reference value. The scale accumulation parameter and freezing fraction should also be within 10 percent of their reference values. The Weber number based on droplet size and water properties appears to be a more important scaling parameter than one based on model size and air properties. Scale values of both the Reynolds and Weber numbers need to be in the range of 60 to 160 percent of the corresponding reference values. The effects of variations in other similarity parameters have yet to be established.
Perturbing engine performance measurements to determine optimal engine control settings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan
Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable, causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.
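The perturb-then-adjust loop described in the abstract above can be illustrated with a minimal finite-difference sketch. This is a hypothetical illustration of the general idea, not the patented method; all names and the Newton-style correction step are assumptions:

```python
def perturb_and_tune(measure, theta, target, gain=0.5, delta=0.05, iters=100):
    """Estimate the local slope of the performance variable by perturbing
    the control parameter, then step the parameter so the measured output
    approaches the target value."""
    for _ in range(iters):
        # Finite-difference slope of performance w.r.t. the parameter.
        grad = (measure(theta + delta) - measure(theta - delta)) / (2 * delta)
        error = measure(theta) - target
        if abs(grad) < 1e-12:
            break
        # Damped Newton-style correction toward the target.
        theta -= gain * error / grad
    return theta
```

For a monotonic performance map the loop converges geometrically; with `gain=0.5` the residual error halves on each iteration.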
Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-05-29
Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and enhanced signal-to-noise ratio.
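The spread-based selection step of the ASA idea can be sketched minimally. The inputs (per-gridpoint posterior parameter values and their ensemble spreads) and the median-spread cutoff are assumptions for illustration, not the paper's exact criterion:

```python
from statistics import mean, median

def adaptive_spatial_average(post_values, spreads):
    """Keep gridpoints whose ensemble spread is at or below the median
    spread (the 'good' estimates), then average them into one global
    uniform posterior parameter value."""
    cutoff = median(spreads)
    good = [v for v, s in zip(post_values, spreads) if s <= cutoff]
    return mean(good)
```

Points with a large spread (poorly constrained estimates) are excluded before averaging, which raises the signal-to-noise ratio of the final global value.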
Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Chen, Bailiang; De Verbizier, Jacques; Beaumont, Marine; Badr, Sammy; Cotten, Anne; Blum, Alain
2017-12-01
To evaluate intra-tumour and striated muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal based and gadolinium concentration based on fixed and variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1%, respectively). Area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different from, and more heterogeneous than, those of normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. • T1 value variation in musculoskeletal tumours is considerable. • T1 values in muscle and tumours are significantly different. • Patient-specific T1 estimation is needed for inter-patient comparison of perfusion parameters. • Technical variation is higher in permeability parameters than in semiquantitative perfusion parameters.
Kwak, Dai Soon; Tao, Quang Bang; Todo, Mitsugu; Jeon, Insu
2012-05-01
Knee joint implants developed by western companies have been imported to Korea and used for Korean patients. However, many clinical problems occur in the knee joints of Korean patients after total knee joint replacement owing to the geometric mismatch between the western implants and Korean knee joint structures. To solve these problems, a method to determine the representative dimension parameter values of Korean knee joints is introduced to aid in the design of knee joint implants appropriate for Korean patients. Measurements of the dimension parameters of 88 male Korean knee joint subjects were carried out. The distribution of the subjects versus each measured parameter value was investigated. The measured values of each dimension parameter were grouped into suitable intervals called "size groups," and the average value of each size group was calculated. The knee joint subjects were then grouped into "patient groups" based on the "size group numbers" of each parameter. Through iterative calculations that decrease the errors between the average dimension parameter values of each "patient group" and the dimension parameter values of the subjects, the average values yielding errors below the criterion were determined to be the representative dimension parameter values for designing knee joint implants for Korean patients.
NASA Astrophysics Data System (ADS)
Bag, S.; de, A.
2010-09-01
Transport phenomena based heat transfer and fluid flow calculations in a weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and viscosity in the weld pool are some of these parameters, the values of which are rarely known and difficult to assign a priori based on scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real number based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated against measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest were estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire parameter set.
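The kind of nonlinear least-squares update underlying such calibration can be sketched for a single parameter with a Gauss-Newton step. The toy model and names below are hypothetical, not the Florida aquifer model:

```python
def gauss_newton_1p(model, dmodel, xs, obs, p, iters=50):
    """One-parameter Gauss-Newton: iteratively minimise the sum of squared
    residuals between observed values and model(x, p).

    model(x, p)  -- simulated value at location x for parameter p
    dmodel(x, p) -- sensitivity d(model)/dp at (x, p)
    """
    for _ in range(iters):
        residuals = [o - model(x, p) for x, o in zip(xs, obs)]
        jac = [dmodel(x, p) for x in xs]          # sensitivity vector
        den = sum(j * j for j in jac)
        if den == 0.0:
            break
        # Normal-equations step: p += (J^T r) / (J^T J)
        p += sum(j * r for j, r in zip(jac, residuals)) / den
    return p
```

For a model that is linear in the parameter, the update recovers the exact least-squares estimate in one iteration.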
Impact of MR Acquisition Parameters on DTI Scalar Indexes: A Tractography Based Approach.
Barrio-Arranz, Gonzalo; de Luis-García, Rodrigo; Tristán-Vega, Antonio; Martín-Fernández, Marcos; Aja-Fernández, Santiago
2015-01-01
Acquisition parameters play a crucial role in Diffusion Tensor Imaging (DTI), as they have a major impact on the values of scalar measures such as Fractional Anisotropy (FA) or Mean Diffusivity (MD) that are usually the focus of clinical studies based on white matter analysis. This paper presents an analysis of the impact of the variation of several acquisition parameters on these scalar measures with a novel double focus. First, a tractography-based approach is employed, motivated by the significant number of clinical studies that are carried out using this technique. Second, the consequences of simultaneous changes in multiple parameters are analyzed: number of gradient directions, b-value and voxel resolution. Results indicate that FA is most affected by changes in the number of gradients and voxel resolution, while MD is especially influenced by variations in the b-value. Although the choice of tractography algorithm affects the numerical values of the final scalar measures, these measures evolve in parallel when acquisition parameters are modified. PMID: 26457415
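The two scalar measures discussed above follow directly from the eigenvalues of the diffusion tensor; a minimal sketch of the standard MD and FA formulas:

```python
import math

def md_fa(eigenvalues):
    """Mean Diffusivity (MD) and Fractional Anisotropy (FA) from the three
    eigenvalues of a diffusion tensor.

    MD = (l1 + l2 + l3) / 3
    FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||
    """
    l1, l2, l3 = eigenvalues
    md = (l1 + l2 + l3) / 3.0
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = math.sqrt(1.5 * num / den)
    return md, fa
```

An isotropic tensor (equal eigenvalues) gives FA = 0; a fully anisotropic one (single nonzero eigenvalue) gives FA = 1.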
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
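The progressive-sampling idea can be illustrated with a successive-halving-style sketch: evaluate candidate configurations on growing data samples and discard the worse half each round. This is a deliberate simplification for illustration, not the paper's exact Bayesian optimization procedure:

```python
def progressive_selection(configs, error_on_sample, sample_sizes):
    """Evaluate each candidate (algorithm, hyper-parameter) configuration
    on progressively larger training samples, keeping the better half of
    the candidates after each round.

    error_on_sample(config, n) -- validation error of config trained on
    a sample of n records (assumed callable, supplied by the caller).
    """
    alive = list(configs)
    for n in sample_sizes:
        ranked = sorted(alive, key=lambda c: error_on_sample(c, n))
        alive = ranked[: max(1, len(ranked) // 2)]
    return alive[0]
```

Cheap small-sample evaluations prune most candidates early, so expensive full-data training is only spent on the survivors.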
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
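For a single genetic parameter (here, the recombination fraction), the LOD score and its maximum over a grid of parameter values can be sketched in the standard textbook form; this is a generic illustration, not the paper's simulation code:

```python
import math

def lod(theta, k, n):
    """LOD score for recombination fraction theta, given k recombinants
    out of n informative meioses (null hypothesis: theta = 0.5)."""
    likelihood = theta ** k * (1.0 - theta) ** (n - k)
    return math.log10(likelihood / 0.5 ** n)

def max_lod(k, n, grid=None):
    """Maximum LOD over a grid of candidate parameter values in (0, 0.5);
    this maximized statistic needs a higher critical value than a LOD
    computed at one fixed theta."""
    grid = grid or [i / 100.0 for i in range(1, 50)]
    return max(lod(t, k, n) for t in grid)
```

At theta = 0.5 the LOD is zero by construction, while tight linkage (few recombinants) drives the maximized score up; LOD > 3 is the classical evidence threshold.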
NASA Astrophysics Data System (ADS)
Norton, P. A., II
2015-12-01
The U.S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds and is used for the NHM application. For PRMS, each watershed is divided into hydrologic response units (HRUs); by default, each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF), a database containing initial parameter values for input to PRMS, was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets such as streamflow, snow water equivalent (SWE), and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g., the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.
NASA Astrophysics Data System (ADS)
Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei
2018-03-01
Forecasting skill of complex weather and climate models has been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using effective optimization methods. However, whether optimal parameter values remain valid when the model simulation conditions vary is a scientific question deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions.
The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, showing that ASMO is highly efficient for optimizing WRF model parameters.
Theoretical performance analysis of doped optical fibers based on pseudo parameters
NASA Astrophysics Data System (ADS)
Karimi, Maryam; Seraji, Faramarz E.
2010-09-01
Characterization of doped optical fibers (DOFs) is an essential primary stage in the design of DOF-based devices. This paper presents the design of novel measurement techniques to determine DOF parameters using mono-beam propagation in a low-loss medium by generating pseudo parameters for the DOFs. The designed techniques can simultaneously characterize the absorption and emission cross-sections (ACS and ECS) and the dopant concentration of DOFs. In both proposed techniques, we assume pseudo parameters for the DOFs instead of their actual values and show that the choice of these pseudo parameter values for the design of DOF-based devices, such as erbium-doped fiber amplifiers (EDFAs), is appropriate, with an error that is negligible compared with the actual parameter values. Utilization of pseudo ACS and ECS values in the design procedure of EDFAs does not require measurement of the background loss coefficient (BLC) and simplifies the rate equation of the DOFs. It is shown that by using the pseudo parameter values obtained by the proposed techniques, the error in the gain of a designed EDFA with a BLC of about 1 dB/km is about 0.08 dB. It is further indicated that the same scenario holds for BLCs lower than 5 dB/m and higher than 12 dB/m. The proposed characterization techniques have simple procedures and are low cost, which is advantageous in the manufacturing of DOFs.
Automatic parameter selection for feature-based multi-sensor image registration
NASA Astrophysics Data System (ADS)
DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan
2006-05-01
Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
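The final selection step described above, choosing the parameter combination from its ROC point, can be sketched as picking the point closest to the ideal corner (0, 1) in (FPR, TPR) space. The Euclidean-distance criterion here is one common choice, assumed for illustration rather than taken from the paper:

```python
import math

def best_roc_point(roc_points):
    """Pick the parameter combination whose ROC operating point
    (false positive rate, true positive rate) lies closest to the
    ideal corner (FPR=0, TPR=1).

    roc_points -- iterable of (params, fpr, tpr) tuples.
    """
    def distance_to_ideal(item):
        _, fpr, tpr = item
        return math.hypot(fpr, 1.0 - tpr)
    return min(roc_points, key=distance_to_ideal)[0]
```

Each candidate parameter sweep contributes one ROC point, and the sweep value attached to the winning point becomes the automatically selected setting.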
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values, and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
SU-D-12A-06: A Comprehensive Parameter Analysis for Low Dose Cone-Beam CT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, W; Southern Medical University, Guangzhou; Yan, H
Purpose: Compressive sensing based iterative reconstruction (IR) methods for low dose cone-beam CT (CBCT) always contain a parameter that controls the weight of regularization relative to data fidelity. A clear understanding of the relationship between image quality and parameter values is important. The purpose of this study is to investigate this subject based on experimental data and a representative advanced IR algorithm using Tight-frame (TF) regularization. Methods: Three data sets of a Catphan phantom acquired at low, regular and high dose levels are used. For each test, 90 projections covering a 200-degree scan range are used for reconstruction. Three different regions-of-interest (ROIs) of different contrasts are used to calculate contrast-to-noise ratios (CNR) for contrast evaluation. A single point structure is used to measure the modulation transfer function (MTF) for spatial-resolution evaluation. Finally, we analyze CNRs and MTFs to study the relationship between image quality and parameter selection. Results: It was found that: 1) there is no universal optimal parameter; the optimal parameter value depends on the specific task and dose level. 2) There is a clear trade-off between CNR and resolution; the parameter for the best CNR is always smaller than that for the best resolution. 3) Optimal parameters are also dose-specific; data acquired under a high dose protocol require less regularization, yielding smaller optimal parameter values. 4) Compared with conventional FDK images, TF-based CBCT images are better under certain optimally selected parameters, and the advantages are more obvious for low dose data. Conclusion: We have investigated the relationship between image quality and parameter values in the TF-based IR algorithm. Preliminary results indicate optimal parameters are specific to both task types and dose levels, providing guidance for selecting parameters in advanced IR algorithms.
This work is supported in part by NIH (1R01CA154747-01).
NASA Astrophysics Data System (ADS)
Kopacz, Michał
2017-09-01
The paper attempts to assess the impact of the variability of selected geological (deposit) parameters on the value and risk of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, and the results were verified for three existing bituminous coal seams. The Monte Carlo simulation was based on the nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with the use of an empirical copula. The calculations take into account the uncertainty in the parameters of the empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of volatility and correlation of deposit parameters was analyzed in two aspects, by identifying the overall effect of the correlated variability of the parameters and the individual impact of the correlation on the NPV and IRR. For this purpose, a differential approach was used, allowing the possible errors in the calculation of these measures to be determined in numerical terms. Based on the study, it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV and the IRR by up to 4.4% and 0.4 percentage points, respectively. It should be noted, however, that the differences in NPV and IRR values can vary significantly, and their interpretation depends on the likelihood of implementation. Generalizing the obtained results based on the average values, the maximum value of the risk premium under the given calculation conditions of the "X" deposit, and for correspondingly large datasets (greater than 2500), should not be higher than 2.4 percentage points.
The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be measured by the strength of correlation. In the analyzed case, the correlations limit the range of variation of the geological parameters and economic results (the empirical copula reduces the NPV and IRR in the probabilistic approach). However, this is due to the adjustment of the calculation to conditions similar to those prevailing in the deposit.
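The two measures used throughout the study above can be computed with a short sketch; bisection root-finding is one standard way to obtain the IRR, assumed here for illustration:

```python
def npv(rate, cashflows):
    """Net Present Value of a cash-flow series; cashflows[0] is at t=0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal Rate of Return: the discount rate where NPV = 0,
    found by bisection (assumes a single sign change in NPV over
    the bracket [lo, hi])."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

In a Monte Carlo setting these functions would be evaluated once per bootstrap draw of the deposit parameters, producing NPV and IRR distributions rather than single values.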
NWP model forecast skill optimization via closure parameter variations
NASA Astrophysics Data System (ADS)
Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.
2012-04-01
We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values and leads to improved forecast skill. Second, results with an ensemble prediction system emulator based on the ECHAM5 atmospheric GCM show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
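One such ensemble cycle, reduced to a single scalar parameter, might look like the following sketch. The likelihood-weighted refit of the proposal is an illustrative simplification of the published EPPES method, and all names are invented:

```python
import math
import random

def ensemble_parameter_step(mu, sigma, score, n_ens=50, seed=0):
    """One ensemble cycle in the EPPES spirit: draw an ensemble of
    parameter values from the proposal N(mu, sigma), score each member
    against verifying observations, and refit the proposal to the
    score-weighted draws.

    score(value) -- nonnegative merit (likelihood) of one ensemble member.
    """
    random.seed(seed)
    draws = [random.gauss(mu, sigma) for _ in range(n_ens)]
    weights = [score(d) for d in draws]
    total = sum(weights)
    mu_new = sum(d * w for d, w in zip(draws, weights)) / total
    var_new = sum(w * (d - mu_new) ** 2 for d, w in zip(draws, weights)) / total
    return mu_new, math.sqrt(var_new)
```

Repeating the cycle pulls the proposal mean toward well-scoring parameter values while the ensemble itself does all the forecast computation, so no extra model runs are needed.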
Optimized microsystems-enabled photovoltaics
Cruz-Campa, Jose Luis; Nielson, Gregory N.; Young, Ralph W.; Resnick, Paul J.; Okandan, Murat; Gupta, Vipin P.
2015-09-22
Technologies pertaining to designing microsystems-enabled photovoltaic (MEPV) cells are described herein. A first restriction for a first parameter of an MEPV cell is received. Subsequently, a selection of a second parameter of the MEPV cell is received. Values for a plurality of parameters of the MEPV cell are computed such that the MEPV cell is optimized with respect to the second parameter, wherein the values for the plurality of parameters are computed based at least in part upon the restriction for the first parameter.
Optimizing the availability of a buffered industrial process
Martz, Jr., Harry F.; Hamada, Michael S.; Koehler, Arthur J.; Berg, Eric C.
2004-08-24
A computer-implemented process determines optimum configuration parameters for a buffered industrial process. A population is initialized by randomly selecting a first set of design and operation values associated with subsystems and buffers of the buffered industrial process to form a set of operating parameters for each member of the population. An availability discrete event simulation (ADES) is performed on each member of the population to determine the product-based availability of each member. A new population is formed having members with a second set of design and operation values related to the first set of design and operation values through a genetic algorithm and the product-based availability determined by the ADES. Subsequent population members are then determined by iterating the genetic algorithm with product-based availability determined by ADES to form improved design and operation values from which the configuration parameters are selected for the buffered industrial process.
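The GA-plus-simulation loop can be sketched as follows, with a stand-in scoring function in place of the availability discrete event simulation. The quadratic `mock_availability` and all numeric settings are hypothetical:

```python
import random

def mock_availability(cfg):
    """Stand-in for the ADES: a quadratic score peaking when every buffer
    design value equals 5 (entirely hypothetical)."""
    return -sum((b - 5) ** 2 for b in cfg)

def ga_optimize(n_genes=3, pop_size=20, gens=30, seed=2):
    """Genetic-algorithm loop over design/operation values: rank by simulated
    availability, keep the best half, refill with crossover and mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 10) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=mock_availability, reverse=True)
        parents = pop[: pop_size // 2]         # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # occasional mutation
                child[rng.randrange(n_genes)] = rng.randint(0, 10)
            children.append(child)
        pop = parents + children
    return max(pop, key=mock_availability)
```

In the paper's setting, each call to the scoring function would be a full product-based availability simulation, which is why the GA's economical use of evaluations matters.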
Regan, R. Steven; Markstrom, Steven L.; Hay, Lauren E.; Viger, Roland J.; Norton, Parker A.; Driscoll, Jessica M.; LaFontaine, Jacob H.
2018-01-08
This report documents several components of the U.S. Geological Survey National Hydrologic Model of the conterminous United States for use with the Precipitation-Runoff Modeling System (PRMS). It provides descriptions of the (1) National Hydrologic Model, (2) Geospatial Fabric for National Hydrologic Modeling, (3) PRMS hydrologic simulation code, (4) parameters and estimation methods used to compute spatially and temporally distributed default values as required by PRMS, (5) National Hydrologic Model Parameter Database, and (6) model extraction tool named Bandit. The National Hydrologic Model Parameter Database contains values for all PRMS parameters used in the National Hydrologic Model. The methods and national datasets used to estimate all the PRMS parameters are described. Some parameter values are derived from characteristics of topography, land cover, soils, geology, and hydrography using traditional Geographic Information System methods. Other parameters are set to long-established default values or computed as initial values. Additionally, methods (statistical, sensitivity, calibration, and algebraic) were developed to compute parameter values on the basis of a variety of nationally consistent datasets. Values in the National Hydrologic Model Parameter Database can be updated periodically on the basis of new parameter estimation methods and as additional national datasets become available. A companion ScienceBase resource provides a set of static parameter values as well as images of spatially distributed parameters associated with PRMS states and fluxes for each Hydrologic Response Unit across the conterminous United States.
Reliability analysis of a sensitive and independent stabilometry parameter set
Nagymáté, Gergely; Orlovits, Zsanett
2018-01-01
Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has either not been studied in the literature or not been studied for every stance type used in stabilometry assessments, for example, single-leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54–0.79), largest SEM% = 19.2%). Frequency-type parameters and extreme-value parameters usually yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals. PMID:29664938
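As an illustration of the reliability statistics used here, a minimal sketch of ICC(2,1) with the derived SEM and MDC for a two-trial test-retest design might look like the following. The two-trial restriction and the function name are our simplifications:

```python
import math
from statistics import mean

def icc_sem_mdc(trial1, trial2, z=1.96):
    """ICC(2,1) for a two-trial test-retest design, with the derived standard
    error of measurement (SEM) and minimal detectable change (MDC).
    A minimal sketch: exactly two trials, no missing data."""
    n, k = len(trial1), 2
    rows = list(zip(trial1, trial2))
    grand = mean(trial1 + trial2)
    row_means = [mean(r) for r in rows]
    col_means = [mean(trial1), mean(trial2)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in rows for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                    # between-subjects mean square
    msc = ss_cols / (k - 1)                    # between-trials mean square
    mse = ss_err / ((n - 1) * (k - 1))         # residual mean square
    icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    sd = math.sqrt(ss_total / (n * k - 1))
    sem = sd * math.sqrt(max(1.0 - icc, 0.0))  # standard error of measurement
    mdc = z * math.sqrt(2) * sem               # minimal detectable change
    return icc, sem, mdc
```

A systematic shift between the two trials lowers ICC(2,1) even when the subject ranking is preserved, which is the agreement (rather than consistency) behaviour usually wanted in test-retest studies.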
Plenis, Alina; Rekowska, Natalia; Bączek, Tomasz
2016-01-01
This article focuses on correlating the column classification obtained from the method created at the Katholieke Universiteit Leuven (KUL), with the chromatographic resolution attained in biomedical separation. In the KUL system, each column is described with four parameters, which enables estimation of the FKUL value characterising similarity of those parameters to the selected reference stationary phase. Thus, a ranking list based on the FKUL value can be calculated for the chosen reference column, then correlated with the results of the column performance test. In this study, the column performance test was based on analysis of moclobemide and its two metabolites in human plasma by liquid chromatography (LC), using 18 columns. The comparative study was performed using traditional correlation of the FKUL values with the retention parameters of the analytes describing the column performance test. In order to deepen the comparative assessment of both data sets, factor analysis (FA) was also used. The obtained results indicated that the stationary phase classes, closely related according to the KUL method, yielded comparable separation for the target substances. Therefore, the column ranking system based on the FKUL-values could be considered supportive in the choice of the appropriate column for biomedical analysis. PMID:26805819
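The ranking-list idea, score each column by the similarity of its four KUL parameters to a chosen reference phase and then sort, can be sketched as below. Note the plain Euclidean distance is a hypothetical stand-in for the actual FKUL formula:

```python
import math

def fkul_style_ranking(columns, reference):
    """Rank stationary phases by similarity of their four column parameters to
    a chosen reference column. Each entry of `columns` is (name, params).
    The plain Euclidean distance used here is a hypothetical stand-in for
    the actual FKUL similarity formula."""
    def distance(params):
        return math.sqrt(sum((p - r) ** 2 for p, r in zip(params, reference)))
    return sorted(columns, key=lambda item: distance(item[1]))
```

The resulting ordered list plays the role of the ranking that the study correlates against the retention parameters from the column performance test.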
Determining "small parameters" for quasi-steady state
NASA Astrophysics Data System (ADS)
Goeke, Alexandra; Walcher, Sebastian; Zerz, Eva
2015-08-01
For a parameter-dependent system of ordinary differential equations we present a systematic approach to the determination of parameter values near which singular perturbation scenarios (in the sense of Tikhonov and Fenichel) arise. We call these special values Tikhonov-Fenichel parameter values. The principal application we intend is to equations that describe chemical reactions, in the context of quasi-steady state (or partial equilibrium) settings. Such equations have rational (or even polynomial) right-hand side. We determine the structure of the set of Tikhonov-Fenichel parameter values as a semi-algebraic set, and present an algorithmic approach to their explicit determination, using Groebner bases. Examples and applications (which include the irreversible and reversible Michaelis-Menten systems) illustrate that the approach is rather easy to implement.
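For concreteness, the irreversible Michaelis-Menten system mentioned above can be written with parameter tuple (e_0, k_1, k_{-1}, k_2); in this line of work, setting the initial enzyme concentration e_0 = 0 gives a Tikhonov-Fenichel parameter value underlying the classical quasi-steady-state reduction:

```latex
\begin{aligned}
\dot{s} &= -k_1\, s\,(e_0 - c) + k_{-1}\, c,\\
\dot{c} &= \phantom{-}k_1\, s\,(e_0 - c) - (k_{-1} + k_2)\, c.
\end{aligned}
```

At e_0 = 0 every point of the plane c = 0 is stationary, and for small e_0 > 0 Tikhonov's theorem yields the familiar reduced quasi-steady-state equation for s.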
Code of Federal Regulations, 2010 CFR
2010-01-01
... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...
Code of Federal Regulations, 2012 CFR
2012-01-01
... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...
Code of Federal Regulations, 2014 CFR
2014-01-01
... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...
Code of Federal Regulations, 2011 CFR
2011-01-01
... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...
Code of Federal Regulations, 2013 CFR
2013-01-01
... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...
Laboratory R-value vs. in-situ NDT methods.
DOT National Transportation Integrated Search
2006-05-01
The New Mexico Department of Transportation (NMDOT) uses the Resistance R-Value as a quantifying parameter in subgrade and base course design. The parameter represents soil strength and stiffness and ranges from 1 to 80, 80 being typical of the highe...
Models for estimating photosynthesis parameters from in situ production profiles
NASA Astrophysics Data System (ADS)
Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana
2017-12-01
The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. 
The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of photosynthesis irradiance functions and parameters for modeling in situ production profiles. In light of the results obtained in this work we argue that the choice of the primary production model should reflect the available data and these models should be data driven regarding parameter estimation.
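A skeletal version of the parameter-recovery step, prescribing a photosynthesis-irradiance function and fitting its initial slope and assimilation number to measured production, might look like the following. The Webb-type exponential form and the coarse grid-search fit are illustrative simplifications; the paper works with a suite of P-I functions and analytical solutions:

```python
import math

def pi_curve(i, alpha, pm_b):
    """Webb-type photosynthesis-irradiance function:
    P^B = P_m^B * (1 - exp(-alpha * I / P_m^B))."""
    return pm_b * (1.0 - math.exp(-alpha * i / pm_b))

def fit_pi_parameters(irradiance, production):
    """Recover the initial slope (alpha) and assimilation number (P_m^B) by a
    coarse grid search on squared error. A minimal sketch with arbitrary
    parameter ranges, not the paper's estimation machinery."""
    best = None
    for a10 in range(1, 101):              # alpha in 0.01 .. 1.00
        for pm10 in range(1, 201):         # P_m^B in 0.1 .. 20.0
            a, pm = a10 / 100.0, pm10 / 10.0
            err = sum((pi_curve(i, a, pm) - p) ** 2
                      for i, p in zip(irradiance, production))
            if best is None or err < best[0]:
                best = (err, a, pm)
    return best[1], best[2]
```

Swapping `pi_curve` for a different P-I function while keeping the fitting step fixed is one way to reproduce the paper's observation that the recovered parameter magnitudes depend on the chosen function.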
Study of parameters of the nearest neighbour shared algorithm on clustering documents
NASA Astrophysics Data System (ADS)
Mustika Rukmi, Alvida; Budi Utomo, Daryono; Imro’atus Sholikhah, Neni
2018-03-01
Document clustering is one way of automatically managing documents, extracting document topics and quickly filtering information. Preprocessing of the documents is performed by text mining and consists of: keyword extraction using Rapid Automatic Keyphrase Extraction (RAKE) and representation of each document as a concept vector using Latent Semantic Analysis (LSA). The clustering process is then carried out, based on this preprocessing, so that documents with similar topics fall in the same cluster. The Shared Nearest Neighbour (SNN) algorithm is a clustering method based on the number of shared "nearest neighbours". The parameters in the SNN algorithm are: k, the number of nearest-neighbour documents; ɛ, the number of shared nearest-neighbour documents; and MinT, the minimum number of similar documents that can form a cluster. The SNN algorithm is characterized by shared-neighbour properties: each cluster is formed by keywords that are shared by its documents, and a cluster can be built on more than one keyword if the frequency of the keywords appearing in the documents is high enough. The choice of parameter values in the SNN algorithm affects the clustering results. A higher value of k increases the number of neighbour documents for each document, so the similarity among neighbouring documents is lower and the accuracy of each cluster decreases. A higher value of ɛ causes each document to link only to neighbour documents with high similarity when building a cluster, and also leaves more documents unclassified (noise). A higher value of MinT decreases the number of clusters, since fewer than MinT similar documents cannot form a cluster. The parameters of the SNN algorithm thus determine the clustering performance and the amount of noise (unclustered documents). The Silhouette coefficient shows almost the same result in many experiments, above 0.9, which means that the SNN algorithm works well with different parameter values.
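A minimal, Jarvis-Patrick-style sketch of the SNN idea with the three parameters k, ɛ (`eps`) and MinT might look like this. The 1-D "documents", the mutual-neighbour linking rule, and the union-find components are our simplifications of the algorithm described above:

```python
def snn_clusters(points, k=3, eps=2, min_t=2):
    """Minimal Jarvis-Patrick-style shared-nearest-neighbour sketch on 1-D
    'documents': two points link when each is among the other's k nearest
    neighbours and they share at least eps of those neighbours; connected
    components with at least min_t members form clusters, the rest is noise."""
    n = len(points)

    def knn(i):
        order = sorted(range(n), key=lambda j: (abs(points[i] - points[j]), j))
        return set(order[1:k + 1])             # skip the point itself

    neigh = [knn(i) for i in range(n)]
    parent = list(range(n))                    # union-find over linked points

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if j in neigh[i] and i in neigh[j] and len(neigh[i] & neigh[j]) >= eps:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    clusters = [g for g in groups.values() if len(g) >= min_t]
    noise = [i for g in groups.values() if len(g) < min_t for i in g]
    return clusters, noise
```

Raising `eps` toward `k` makes the linking rule stricter, which in this sketch (as in the paper) produces tighter clusters but more noise points.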
NASA Astrophysics Data System (ADS)
Dehghani, H.; Ataee-Pour, M.
2012-12-01
The block economic value (EV) is one of the most important parameters in mine evaluation. This parameter affects significant factors such as the mining sequence, the final pit limit and the net present value (PV). Nowadays, the aim of open pit mine planning is to define optimum pit limits and an optimum life-of-mine production schedule that maximizes the pit value under technical and operational constraints. It is therefore necessary to calculate the block economic value correctly at the first stage of the mine planning process. Unrealistic block economic value estimation may lead mining project managers to wrong decisions and thus impose irrecoverable losses on the project. Effective parameters such as metal price, operating cost and grade are always assumed certain in conventional methods of EV calculation, although these parameters are inherently uncertain; consequently, the results of conventional methods are usually far from reality. To solve this problem, a new technique based on a multivariate binomial tree, developed in this research, is used. This method can calculate the EV and the project PV under economic uncertainty. In this paper, the EV and project PV were initially determined using the Whittle formula based on certain economic parameters, and then with a multivariate binomial tree based on economic uncertainties such as metal price and cost uncertainties; finally, the results were compared. It is concluded that incorporating the metal price and cost uncertainties makes the calculated block economic value and net present value more realistic than under the assumption of certainty.
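The binomial-tree valuation idea can be sketched as follows for a single block with an uncertain metal price. The up/down factors, probabilities and the certain cost are illustrative; the paper's multivariate tree also treats cost as uncertain:

```python
from math import comb

def binomial_block_value(price, cost, up=1.2, down=0.8, p_up=0.5,
                         rate=0.1, steps=3, tonnage=1.0):
    """Expected discounted block economic value when the metal price moves on
    a recombining binomial tree. A simplified sketch: cost is held certain,
    whereas the paper's multivariate tree also treats cost as uncertain."""
    total = 0.0
    for k in range(steps + 1):                 # k up-moves out of `steps`
        prob = comb(steps, k) * p_up ** k * (1 - p_up) ** (steps - k)
        p_t = price * up ** k * down ** (steps - k)
        total += prob * (p_t - cost) * tonnage
    return total / (1 + rate) ** steps
```

With a purely linear payoff the expectation reduces to the certain-price case; the tree becomes informative when decisions such as cutoff-grade or mine/do-not-mine choices make the block value a nonlinear function of price and cost.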
Computational tools for fitting the Hill equation to dose-response curves.
Gadagkar, Sudhindra R; Call, Gerald B
2015-01-01
Many biological response curves commonly assume a sigmoidal shape that can be approximated well by the 4-parameter nonlinear logistic equation, also called the Hill equation. However, estimation of the Hill equation parameters requires access to commercial software or the ability to write computer code. Here we present two user-friendly and freely available computer programs to fit the Hill equation - a Solver-based Microsoft Excel template and a stand-alone GUI-based "point and click" program, called HEPB. Both computer programs use an iterative method to estimate two of the Hill equation parameters (EC50 and the Hill slope), while constraining the values of the other two parameters (the minimum and maximum asymptotes of the response variable), to fit the Hill equation to the data. In addition, HEPB draws the prediction band at a user-defined confidence level, and determines the EC50 value for each of the limits of this band to give boundary values that help objectively delineate sensitive, normal and resistant responses to the drug being tested. Both programs were tested by analyzing twelve datasets that varied widely in data values, sample size and slope, and were found to yield estimates of the Hill equation parameters that were essentially identical to those provided by commercial software such as GraphPad Prism and the nls routine in the statistical programming language R. The Excel template provides a means to estimate the parameters of the Hill equation and plot the regression line in a familiar Microsoft Office environment. HEPB, in addition to providing the above results, also computes the prediction band for the data at a user-defined level of confidence, and determines objective cut-off values to distinguish among response types (sensitive, normal and resistant). Both programs yield estimated values that are essentially the same as those from standard software such as GraphPad Prism and the R-based nls.
Furthermore, HEPB also has the option to simulate 500 response values based on the range of values of the dose variable in the original data and the fit of the Hill equation to that data.
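The constrained fit the two programs perform, estimating EC50 and the Hill slope while holding the bottom and top asymptotes fixed, can be sketched with a coarse grid search. The iterative schemes in the Excel template and HEPB are more refined, and the parameter ranges here are arbitrary:

```python
import math

def hill(dose, ec50, slope, bottom=0.0, top=1.0):
    """4-parameter logistic (Hill) equation; dose must be positive."""
    return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** slope)

def fit_ec50_slope(doses, responses, bottom=0.0, top=1.0):
    """Estimate EC50 and the Hill slope with the asymptotes constrained, via a
    coarse grid search on squared error (a minimal sketch, not HEPB's method)."""
    best = None
    for e10 in range(1, 201):              # EC50 in 0.1 .. 20.0
        for s10 in range(5, 41):           # slope in 0.5 .. 4.0
            ec50, slope = e10 / 10.0, s10 / 10.0
            err = sum((hill(d, ec50, slope, bottom, top) - r) ** 2
                      for d, r in zip(doses, responses))
            if best is None or err < best[0]:
                best = (err, ec50, slope)
    return best[1], best[2]
```

Constraining the asymptotes, as the abstract describes, reduces the search to two dimensions and makes even this naive grid search workable on typical dose-response data.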
Watershed-based Morphometric Analysis: A Review
NASA Astrophysics Data System (ADS)
Sukristiyanti, S.; Maria, R.; Lestiana, H.
2018-02-01
Drainage basin/watershed analysis based on morphometric parameters is very important for watershed planning, and morphometric analysis of a watershed is the best method to identify the relationships of its various aspects. Although many technical papers have dealt with this area of study, there is no standard classification or interpretation of each parameter, which makes it difficult to evaluate the value of any given morphometric parameter. This paper deals with the meaning of the values of the various morphometric parameters, with adequate contextual information. A critical review is presented of each classification, the range of values, and their implications. Besides classification and its impact, the authors also consider the quality of the input data, both in data preparation and in the scale/level of detail of the mapping. This review aims to give a comprehensive explanation to assist upcoming research dealing with morphometric analysis.
Articular cartilage degeneration classification by means of high-frequency ultrasound.
Männicke, N; Schöne, M; Oelze, M; Raum, K
2014-10-01
To date, only single ultrasound parameters have been considered in the statistical analyses used to characterize osteoarthritic changes in articular cartilage, and the potential benefit of using parameter combinations for characterization remains unclear. Therefore, the aim of this work was to utilize feature selection and classification of a Mankin subset score (i.e., cartilage surface and cell sub-scores) using ultrasound-based parameter pairs and to investigate both classification accuracy and the sensitivity towards different degeneration stages. 40 punch biopsies of human cartilage had previously been scanned ex vivo with a 40-MHz transducer. Ultrasound-based surface parameters, as well as backscatter and envelope statistics parameters, were available. Logistic regression was performed with each unique US parameter pair as predictor and different degeneration stages as response variables. The best ultrasound-based parameter pair for each Mankin subset score value was assessed by highest classification accuracy and utilized in receiver operating characteristic (ROC) analysis. The classifications discriminating between early degenerations yielded area under the ROC curve (AUC) values of 0.94-0.99 (mean ± SD: 0.97 ± 0.03). In contrast, classifications among higher Mankin subset scores resulted in lower AUC values: 0.75-0.91 (mean ± SD: 0.84 ± 0.08). Variable sensitivities of the different ultrasound features were observed with respect to different degeneration stages. Our results strongly suggest that combinations of high-frequency ultrasound-based parameters have the potential to characterize different, particularly very early, degeneration stages of hyaline cartilage. The variable sensitivities towards different degeneration stages suggest that concurrent estimation of multiple ultrasound-based parameters is diagnostically valuable. In vivo application of the present findings is conceivable in both minimally invasive arthroscopic ultrasound and high-frequency transcutaneous ultrasound.
Local Variability of Parameters for Characterization of the Corneal Subbasal Nerve Plexus.
Winter, Karsten; Scheibe, Patrick; Köhler, Bernd; Allgeier, Stephan; Guthoff, Rudolf F; Stachs, Oliver
2016-01-01
The corneal subbasal nerve plexus (SNP) offers high potential for early diagnosis of diabetic peripheral neuropathy. Changes in subbasal nerve fibers can be assessed in vivo by confocal laser scanning microscopy (CLSM) and quantified using specific parameters. While current study results agree regarding parameter tendency, there are considerable differences in terms of absolute values. The present study set out to identify factors that might account for this high parameter variability. In three healthy subjects, we used a novel method of software-based large-scale reconstruction that provided SNP images of the central cornea, decomposed the image areas into all possible image sections corresponding to the size of a single conventional CLSM image (0.16 mm2), and calculated a set of parameters for each image section. In order to carry out a large number of virtual examinations within the reconstructed image areas, an extensive simulation procedure (10,000 runs per image) was implemented. The three analyzed images ranged in size from 3.75 mm2 to 4.27 mm2. The spatial configuration of the subbasal nerve fiber networks varied greatly across the cornea and thus caused heavily location-dependent results as well as wide value ranges for the parameters assessed. Distributions of SNP parameter values varied greatly between the three images and showed significant differences between all images for every parameter calculated (p < 0.001 in each case). The relatively small size of the conventionally evaluated SNP area is a contributory factor in high SNP parameter variability. Averaging of parameter values based on multiple CLSM frames does not necessarily result in good approximations of the respective reference values of the whole image area. This illustrates the potential for examiner bias when selecting SNP images in the central corneal area.
Computer-Based Model Calibration and Uncertainty Analysis: Terms and Concepts
2015-07-01
uncertainty analyses throughout the lifecycle of planning, designing, and operating Civil Works flood risk management projects as described in... value 95% of the time. In the frequentist approach to PE, model parameters are regarded as having true values, and their estimate is based on the... in catchment models.
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Makhoul, J.; Schwartz, R. M.; Huggins, A. W. F.
1982-04-01
The variable frame rate (VFR) transmission methodology developed, implemented, and tested in the years 1973-1978 for efficiently transmitting linear predictive coding (LPC) vocoder parameters extracted from the input speech at a fixed frame rate is reviewed. With the VFR method, parameters are transmitted only when their values have changed sufficiently over the interval since their preceding transmission. Two distinct approaches to automatic implementation of the VFR method are discussed. The first bases the transmission decisions on comparisons between the parameter values of the present frame and those of the last transmitted frame. The second, which is based on a functional perceptual model of speech, compares the parameter values of all the frames that lie in the interval between the present frame and the last transmitted frame against a linear model of parameter variation over that interval. Also considered is the application of VFR transmission to the design of narrow-band LPC speech coders with average bit rates of 2000-2400 bits/s.
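The first VFR approach, comparing the current frame's parameters against the last transmitted frame and transmitting only on sufficient change, reduces to a few lines. The threshold value and single-parameter frames are illustrative:

```python
def vfr_transmit(frames, threshold=0.1):
    """Variable-frame-rate selection (first approach in the text): transmit a
    frame only when some parameter differs from the last transmitted frame by
    more than `threshold`; returns the indices of transmitted frames."""
    sent = [0]                                 # always send the first frame
    for i, frame in enumerate(frames[1:], start=1):
        last = frames[sent[-1]]
        if any(abs(a - b) > threshold for a, b in zip(frame, last)):
            sent.append(i)
    return sent
```

The average bit rate then scales with the fraction of frames actually sent, which is how the coders described above reach 2000-2400 bits/s from a fixed-rate analysis.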
Assessing the quality of life history information in publicly available databases.
Thorson, James T; Cope, Jason M; Patrick, Wesley S
2014-01-01
Single-species life history parameters are central to ecological research and management, including the fields of macro-ecology, fisheries science, and ecosystem modeling. However, there has been little independent evaluation of the precision and accuracy of the life history values in global and publicly available databases. We therefore develop a novel method based on a Bayesian errors-in-variables model that compares database entries with estimates from local experts, and we illustrate this process by assessing the accuracy and precision of entries in FishBase, one of the largest and oldest life history databases. This model distinguishes biases among seven life history parameters, two types of information available in FishBase (i.e., published values and those estimated from other parameters), and two taxa (i.e., bony and cartilaginous fishes) relative to values from regional experts in the United States, while accounting for additional variance caused by sex- and region-specific life history traits. For published values in FishBase, the model identifies a small positive bias in natural mortality and negative bias in maximum age, perhaps caused by unacknowledged mortality caused by fishing. For life history values calculated by FishBase, the model identified large and inconsistent biases. The model also demonstrates greatest precision for body size parameters, decreased precision for values derived from geographically distant populations, and greatest between-sex differences in age at maturity. We recommend that our bias and precision estimates be used in future errors-in-variables models as a prior on measurement errors. This approach is broadly applicable to global databases of life history traits and, if used, will encourage further development and improvements in these databases.
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2014-10-28
Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
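The sensitivity-gated map update described above can be sketched as follows. All names, the dictionary-based look-up tables, and the single shared correction are hypothetical simplifications of the patented method:

```python
def update_engine_maps(map1, map2, sens1, sens2, key, correction, threshold=0.1):
    """Sensitivity-gated map adaptation sketch: apply a learned correction to
    an engine map look-up table only when the performance variable's
    sensitivity to that control parameter exceeds a threshold.
    `key` indexes an operating-condition cell (e.g. a speed/load pair)."""
    if abs(sens1) > threshold:
        map1[key] = map1.get(key, 0.0) + correction
    if abs(sens2) > threshold:
        map2[key] = map2.get(key, 0.0) + correction
    return map1, map2
```

Gating on sensitivity keeps the adaptation from writing noise into a map cell whose parameter barely influences the performance variable at that operating point.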
Principles of parametric estimation in modeling language competition
Zhang, Menghan; Gong, Tao
2013-01-01
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678
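The competition dynamics referred to above can be sketched with a standard two-species Lotka-Volterra model. The coefficients below are invented; in the paper the analogous "impact" and "inheritance rate" parameters are computed from census and language-survey data rather than chosen by hand.

```python
# Two competing languages with speaker fractions x and y; c12 and c21 play
# the role of cross-impact parameters. With c21 > 1 > c12, language 1
# competitively excludes language 2 (all parameter values are illustrative).
def step(x, y, r1=0.03, r2=0.02, K=1.0, c12=0.6, c21=1.2, dt=1.0):
    dx = r1 * x * (1 - (x + c12 * y) / K)
    dy = r2 * y * (1 - (y + c21 * x) / K)
    return x + dt * dx, y + dt * dy

x, y = 0.4, 0.4
for _ in range(5000):          # simple forward-Euler integration
    x, y = step(x, y)
```

After enough steps the dominant language approaches the carrying capacity while the other declines toward extinction, which is the qualitative pattern such models are fitted to.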
Cotten, Cameron; Reed, Jennifer L
2013-01-30
Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. 
We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.
40 CFR 98.295 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... value shall be the best available estimate(s) of the parameter(s), based on all available process data or data used for accounting purposes. (c) For each missing value collected during the performance test (hourly CO2 concentration, stack gas volumetric flow rate, or average process vent flow from mine...
NASA Astrophysics Data System (ADS)
Han, Xiao; Pearson, Erik; Pelizzari, Charles; Al-Hallaq, Hania; Sidky, Emil Y.; Bian, Junguo; Pan, Xiaochuan
2015-06-01
A kilovoltage (kV) cone-beam computed tomography (CBCT) unit mounted onto a linear accelerator treatment system, often referred to as an on-board imager (OBI), plays an increasingly important role in image-guided radiation therapy. While the FDK algorithm is currently used for reconstructing images from clinical OBI data, optimization-based reconstruction has also been investigated for OBI CBCT. An optimization-based reconstruction involves numerous parameters, which can significantly impact reconstruction properties (or utility). The success of an optimization-based reconstruction for a particular class of practical applications thus relies strongly on appropriate selection of parameter values. In this work, we focus on tailoring constrained-TV-minimization-based reconstruction, an optimization-based approach previously shown to have potential for CBCT imaging conditions of practical interest, to OBI imaging through appropriate selection of parameter values. In particular, for real phantom and patient data collected with an OBI CBCT unit, we first devise utility metrics specific to OBI quality-assurance tasks and then apply them to guide the selection of parameter values in constrained-TV-minimization-based reconstruction. The study results show that the reconstructions improve on clinical FDK reconstructions in both visual and quantitative assessments in terms of the devised utility metrics.
NASA Astrophysics Data System (ADS)
Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian
2016-12-01
Scaling of oblique ionograms plays an important role in obtaining the ionospheric structure at the midpoint of an oblique sounding path. This paper proposes an automatic scaling method to extract the trace and parameters of oblique ionograms based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 layer and the Es layer, and include the maximum observed frequency, critical frequency, and virtual height. The method adopts a quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing techniques, and echo characteristics to determine best-fit values for seven parameters and initial values for the three QP-model parameters, whose search spaces form the required input to the HGA. The HGA then searches these spaces for the three parameters' best-fit values based on the fitness between the synthesized trace and the real trace. To verify the performance of the method, 240 oblique ionograms were scaled and the results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.
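The quasi-parabolic layer mentioned above has a standard closed form. The sketch below uses the textbook QP profile with invented parameter values: electron density peaks at height rm with value nm and falls to zero at the layer base rb = rm - ym.

```python
# Quasi-parabolic (QP) electron-density profile (standard form; the numeric
# defaults here are illustrative, not values from the paper).
def qp_density(r, nm=1.0e12, rm=300.0, ym=100.0):
    rb = rm - ym                                   # layer base
    val = nm * (1.0 - ((r - rm) / ym) ** 2 * (rb / r) ** 2)
    return max(val, 0.0)                           # density cannot be negative

peak = qp_density(300.0)   # nm at the layer peak
base = qp_density(200.0)   # zero at the layer base
mid = qp_density(250.0)    # intermediate value inside the layer
```

In the paper's pipeline, profiles of this shape are what generate the synthetic trace compared against the recorded ionogram.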
Choosing the appropriate forecasting model for predictive parameter control.
Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars
2014-01-01
All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods have assumptions the time series data has to conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters with the exception of population size conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state of the art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
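The forecasting step described above can be sketched very simply. This is a hypothetical miniature of predictive parameter control: for each candidate parameter value, fit a linear trend to its recent performance history, forecast one step ahead, and derive selection probabilities from the forecasts instead of the last raw observations.

```python
import numpy as np

def forecast_next(history):
    # linear regression on the performance time series, then a
    # one-step-ahead forecast
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return slope * len(history) + intercept

# Invented performance histories for two candidate parameter values:
# one improving over time, one deteriorating.
histories = {
    0.1: [0.50, 0.55, 0.61, 0.66],
    0.9: [0.70, 0.65, 0.59, 0.55],
}
forecasts = {p: forecast_next(h) for p, h in histories.items()}
total = sum(forecasts.values())
probs = {p: f / total for p, f in forecasts.items()}  # selection probabilities
```

The improving parameter value receives the larger selection probability even though both histories overlap in absolute performance, which is the point of projecting trends rather than reusing the most recent measurement.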
Bütof, Rebecca; Hofheinz, Frank; Zöphel, Klaus; Stadelmann, Tobias; Schmollack, Julia; Jentsch, Christina; Löck, Steffen; Kotzerke, Jörg; Baumann, Michael; van den Hoff, Jörg
2015-08-01
Despite ongoing efforts to develop new treatment options, the prognosis for patients with inoperable esophageal carcinoma is still poor and the reliability of individual therapy outcome prediction based on clinical parameters is not convincing. The aim of this work was to investigate whether PET can provide independent prognostic information in such a patient group and whether the tumor-to-blood standardized uptake ratio (SUR) can improve the prognostic value of tracer uptake values. (18)F-FDG PET/CT was performed in 130 consecutive patients (mean age ± SD, 63 ± 11 y; 113 men, 17 women) with newly diagnosed esophageal cancer before definitive radiochemotherapy. In the PET images, the metabolically active tumor volume (MTV) of the primary tumor was delineated with an adaptive threshold method. The blood standardized uptake value (SUV) was determined by manually delineating the aorta in the low-dose CT. SUR values were computed as the ratio of tumor SUV and blood SUV. Uptake values were scan-time-corrected to 60 min after injection. Univariate Cox regression and Kaplan-Meier analysis with respect to overall survival (OS), distant metastases-free survival (DM), and locoregional tumor control (LRC) was performed. Additionally, a multivariate Cox regression including clinically relevant parameters was performed. In multivariate Cox regression with respect to OS, including T stage, N stage, and smoking state, MTV- and SUR-based parameters were significant prognostic factors for OS with similar effect size. Multivariate analysis with respect to DM revealed smoking state, MTV, and all SUR-based parameters as significant prognostic factors. The highest hazard ratios (HRs) were found for scan-time-corrected maximum SUR (HR = 3.9) and mean SUR (HR = 4.4). None of the PET parameters was associated with LRC. Univariate Cox regression with respect to LRC revealed a significant effect only for N stage greater than 0 (P = 0.048). 
PET provides independent prognostic information, in addition to clinical parameters, for OS and DM but not for LRC in patients with locally advanced esophageal carcinoma treated with definitive radiochemotherapy. Among the investigated uptake-based parameters, only SUR was an independent prognostic factor for OS and DM. These results suggest that the prognostic value of tracer uptake can be improved when characterized by SUR instead of SUV. Further investigations are required to confirm these preliminary results. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
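The SUR quantity itself is a simple ratio, and the abstract mentions a scan-time correction to 60 min. The sketch below assumes SUR grows roughly linearly with time post-injection, so correction is a linear rescaling; that linearity is an approximation for illustration, and the paper's exact correction may differ.

```python
def sur(tumor_suv, blood_suv):
    # tumor-to-blood standardized uptake ratio
    return tumor_suv / blood_suv

def sur_time_corrected(sur_value, scan_time_min, t_ref=60.0):
    # normalize to a 60-min reference under an assumed linear time dependence
    return sur_value * t_ref / scan_time_min

s = sur(tumor_suv=8.4, blood_suv=1.4)             # SUR of 6.0
s60 = sur_time_corrected(s, scan_time_min=75.0)   # rescaled to 60 min
```

The input SUV values are invented; in the study the blood SUV comes from an aorta delineation on the low-dose CT and the tumor SUV from the delineated metabolically active tumor volume.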
Adamska, K; Bellinghausen, R; Voelkel, A
2008-06-27
The Hansen solubility parameter (HSP) seems to be a useful tool for the thermodynamic characterization of different materials. Unfortunately, estimation of HSP values can be problematic. In this work, different procedures using inverse gas chromatography are presented for calculating the solubility parameters of pharmaceutical excipients. The newly proposed procedure, based on the Lindvig et al. methodology and using experimental Flory-Huggins interaction parameter data, can be a reasonable alternative for estimating HSP values. The advantage of this method is that the Flory-Huggins interaction parameter χ values for all test solutes are used in the calculation, so that the diverse interactions between the test solutes and the material are taken into consideration.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters), and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
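The screening-plus-bootstrap idea can be illustrated on a toy model. This sketch uses squared correlation as a crude stand-in for a variance-based sensitivity index (the paper uses proper Elementary Effects, RSA, and variance-based estimators) and bootstrap resampling to check how robustly the insensitive input falls below the screening threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model with one influential input (x1) and one insensitive input (x2).
n = 500
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
y = 5.0 * x1 + 0.01 * x2 + rng.normal(0.0, 0.1, n)

def index(x, out):
    # squared correlation as a simple sensitivity proxy
    return np.corrcoef(x, out)[0, 1] ** 2

# Bootstrap the index to assess robustness of the screening decision.
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot.append((index(x1[idx], y[idx]), index(x2[idx], y[idx])))
boot = np.array(boot)

SCREEN_THRESHOLD = 0.05
frac_screened = np.mean(boot[:, 1] < SCREEN_THRESHOLD)  # x2 flagged insensitive
```

The fraction of bootstrap replicates that screen out the insensitive input is one simple convergence criterion of the kind the study formalizes.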
Kato, Dai; Sumimoto, Michinori; Ueda, Akio; Hirono, Shigeru; Niwa, Osamu
2012-12-18
The electrokinetic parameters of all the DNA bases were evaluated using a sputter-deposited nanocarbon film electrode. It is very difficult to evaluate the electrokinetic parameters of DNA bases with conventional electrodes, and particularly those of pyrimidine bases, owing to their high oxidation potentials. Nanocarbon film formed by employing an electron cyclotron resonance sputtering method consists of a nanocrystalline sp² and sp³ mixed-bond structure that exhibits a sufficient potential window, very low adsorption of DNA molecules, and sufficient electrochemical activity to oxidize all DNA bases. A precise evaluation of rate constants (k) between all the bases and the electrodes is achieved for the first time by obtaining rotating disc electrode measurements with our nanocarbon film electrode. We found that the k value of each DNA base was dominantly dependent on the surface oxygen-containing group of the nanocarbon film electrode, which was controlled by electrochemical pretreatment. In fact, the treated electrode exhibited optimum k values for all the mononucleotides, namely, 2.0 × 10⁻², 2.5 × 10⁻¹, 2.6 × 10⁻³, and 5.6 × 10⁻³ cm s⁻¹ for GMP, AMP, TMP, and CMP, respectively. The k value of AMP was sufficiently enhanced by up to 33 times with electrochemical pretreatment. We also found the k values for pyrimidine bases to be much lower than those of purine bases although there was no large difference between their diffusion coefficient constants. Moreover, the theoretical oxidation potential values for all the bases coincided with those obtained in electrochemical experiments using our nanocarbon film electrode.
Udhayarasu, Madhanlal; Ramakrishnan, Kalpana; Periasamy, Soundararajan
2017-12-01
Periodic monitoring of renal function, specifically for subjects with a history of diabetes or hypertension, can prevent progression to chronic kidney disease (CKD). The recent increase in CKD cases, possibly due to dietary habits or lack of physical exercise, necessitates a rapid kidney-function monitoring system. At present, kidney function is assessed by evaluating the glomerular filtration rate (GFR), which depends mainly on the serum creatinine value, demographic parameters, and an ethnicity value. Here we attempt to develop an individual ethnicity parameter based on skin texture. When this value is used in the GFR computation, the results agree well with GFR obtained through the standard Modification of Diet in Renal Disease (MDRD) and CKD Epidemiology Collaboration (CKD-EPI) equations. Once the correlation between CKD and skin texture is established, a classification tool using an artificial neural network is built to categorise CKD level based on demographic values and the skin-texture parameter (without using creatinine). When tested, this network gives results almost on par with a network trained on demographic and creatinine values. The results of this Letter demonstrate the possibility of non-invasively determining kidney function, and hence of making a device that could readily assess kidney function even at home.
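For context, the conventional creatinine-based estimate the Letter compares against can be sketched with the published 2009 CKD-EPI creatinine equation. This is the standard published formula, not the skin-texture method the paper develops; the race coefficient of the original equation is omitted here, and the example inputs are invented.

```python
# 2009 CKD-EPI creatinine equation (standard coefficients; race term omitted).
def egfr_ckd_epi_2009(scr_mg_dl, age, female):
    kappa = 0.7 if female else 0.9      # sex-specific creatinine scale
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    return egfr * 1.018 if female else egfr

g = egfr_ckd_epi_2009(scr_mg_dl=1.2, age=50, female=False)  # mL/min/1.73 m^2
```

Values below roughly 60 mL/min/1.73 m² sustained over months are what CKD staging is based on, which is why a creatinine-free surrogate for the demographic/ethnicity inputs is attractive for home monitoring.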
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan
2016-07-04
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions. In our previous work, a subset of hydrological parameters was identified as having significant impact on surface energy fluxes at selected flux tower sites, based on parameter screening and sensitivity analysis, indicating that these parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of the uncertainty of the parameters being inverted, conditional on climatologically averaged latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
Content dependent selection of image enhancement parameters for mobile displays
NASA Astrophysics Data System (ADS)
Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo
2011-01-01
Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital multimedia broadcasting (T-DMB) content have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method covering sharpness, colorfulness, and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments were performed to analyze viewers' preferences. The relationship between the objective measures and the optimal values of the image control parameters is modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the image control parameters are then determined from the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.
Lothe, Anjali G; Sinha, Alok
2017-05-01
Leachate pollution index (LPI) is an environmental index that quantifies the pollution potential of leachate generated at a landfill site. Calculation of LPI is based on the concentrations of 18 parameters present in the leachate; when not all 18 parameters are available, evaluating the actual LPI value becomes difficult. In this study, a model has been developed to predict the actual LPI value when only some parameters are available. The model generates eleven equations that determine upper and lower limits on the LPI; the geometric mean of these two limits gives the LPI value. Application of this model to three landfill sites yields LPI values with an error of ±20% when the sum of the available parameter weights satisfies ∑ᵢ wᵢ ≥ 0.6. Copyright © 2016 Elsevier Ltd. All rights reserved.
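The bounding-and-geometric-mean idea can be sketched as follows. This is a schematic illustration, not the paper's eleven equations: the missing parameters' weighted contribution is set to the minimum and maximum attainable sub-index values to get lower and upper LPI limits, and the geometric mean of the limits is the point estimate. All numbers (weights, sub-index range, the aggregated "known" term) are invented.

```python
import math

def lpi_estimate(known, w_sum_known, sub_min=5.0, sub_max=100.0):
    # known: aggregated sum of w_i * p_i over the available parameters
    # w_sum_known: total weight of the available parameters (weights sum to 1)
    w_missing = 1.0 - w_sum_known
    lower = known + w_missing * sub_min    # missing terms at their minimum
    upper = known + w_missing * sub_max    # missing terms at their maximum
    return math.sqrt(lower * upper), (lower, upper)

est, (lo, hi) = lpi_estimate(known=28.0, w_sum_known=0.7)
```

Because the estimate is a geometric mean it always lies between the two limits, and the limits tighten as the available weight fraction approaches the 0.6 threshold reported in the abstract.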
Numerical weather prediction model tuning via ensemble prediction system
NASA Astrophysics Data System (ADS)
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model's tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
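Steps (i) and (ii) above can be caricatured in a few lines. This is a heavily simplified, hypothetical sketch (scalar parameter, invented score function and weighting, no actual NWP model), showing only the loop structure: draw one parameter value per ensemble member, score members against "observations", and pull the proposal distribution toward well-performing values.

```python
import numpy as np

rng = np.random.default_rng(2)
TRUE_P = 2.5                               # hidden "correct" parameter value

def forecast_score(p):
    # stand-in for verifying one member's forecast against observations
    # (lower is better); the noise mimics verification uncertainty
    return (p - TRUE_P) ** 2 + rng.normal(0.0, 0.01)

mu, sigma = 0.0, 2.0                       # proposal distribution N(mu, sigma)
for _ in range(30):
    draws = rng.normal(mu, sigma, size=50)           # one draw per member
    scores = np.array([forecast_score(p) for p in draws])
    w = np.exp(-scores / scores.std())               # likelihood-like weights
    w /= w.sum()
    mu = float(np.sum(w * draws))                    # feed back relative merits
    sigma = max(0.2, float(np.sqrt(np.sum(w * (draws - mu) ** 2))))
```

After a few dozen cycles the proposal mean has drifted toward the parameter value that verifies best, which is the behavior EPPES demonstrates first on the Lorenz-95 testbed.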
Semeraro, Oscar; Agostoni, Pierfrancesco; Verheye, Stefan; Van Langenhove, Glenn; Van den Heuvel, Paul; Convens, Carl; Van den Branden, Frank; Bruining, Nico; Vermeersch, Paul
2009-03-01
Angiographic parameters (such as late luminal loss) are common endpoints in drug-eluting stent trials, but their correlation with the neointimal process and their reliability in predicting restenosis are debated. Using quantitative coronary angiography (QCA) data (49 bare metal stent and 44 sirolimus-eluting stent lesions) and intravascular ultrasound (IVUS) data (39 bare metal stent and 34 sirolimus-eluting stent lesions) from the randomised Reduction of Restenosis In Saphenous vein grafts with Cypher stent (RRISC) trial, we analysed the "relocation phenomenon" of QCA-based in-stent minimal luminal diameter (MLD) between post-procedure and follow-up, and we correlated QCA-based and IVUS-based restenotic parameters in stented saphenous vein grafts. We expected MLD relocation for low late loss values, as the MLD can "migrate" along the stent if minimal re-narrowing occurs, while we anticipated the follow-up MLD to be located close to the post-procedural MLD position for higher late loss. QCA-based MLD relocation occurred frequently: the site of the MLD shifted from post-procedure to follow-up by an "absolute" distance of 5.8 mm [2.5-10.2] and a "relative" value of 29% [10-46]. MLD relocation failed to correlate with in-stent late loss (rho = 0.14 for "absolute" MLD relocation [p = 0.17], and rho = 0.03 for "relative" relocation [p = 0.81]). Follow-up QCA-based and IVUS-based MLD values correlated well in the overall population (rho = 0.76, p < 0.001), but QCA underestimated MLD by on average 0.55 +/- 0.49 mm, and this was mainly evident for lower MLD values. Conversely, the location of the QCA-based MLD failed to correlate with the location of the IVUS-based MLD (rho = 0.01 for "absolute" values, in mm [p = 0.91]; rho = 0.19 for "relative" values, in % [p = 0.11]).
Overall, the ability of late loss to "predict" IVUS parameters of restenosis (maximum neointimal hyperplasia diameter, neointimal hyperplasia index and maximum neointimal hyperplasia area) was moderate (rho between 0.46 and 0.54 for the 3 IVUS parameters). These findings suggest the need for a critical re-evaluation of angiographic parameters (such as late loss) as endpoints for drug-eluting stent trials and the use of more precise techniques to describe accurately and properly the restenotic process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Overton, J.H.; Jarabek, A.M.
1989-01-01
The U.S. EPA advocates the assessment of health-effects data and calculation of inhaled reference doses as benchmark values for gauging systemic toxicity of inhaled gases. The assessment often requires an inter- or intra-species dose extrapolation from no-observed-adverse-effect-level (NOAEL) exposure concentrations in animals to human-equivalent NOAEL exposure concentrations. To achieve this, a dosimetric extrapolation procedure was developed based on the form or type of equations that describe the uptake and disposition of inhaled volatile organic compounds (VOCs) in physiologically based pharmacokinetic (PB-PK) models. The procedure assumes allometric scaling of most physiological parameters and that the value of the time-integrated human arterial-blood concentration must be limited to no more than that of the experimental animals. The scaling assumption replaces the need for most parameter values and allows the derivation of a simple formula for dose extrapolation of VOCs that gives equivalent or more-conservative exposure concentration values than those that would be obtained using a PB-PK model in which scaling was assumed.
Forecasting impact injuries of unrestrained occupants in railway vehicle passenger compartments.
Xie, Suchao; Zhou, Hui
2014-01-01
In order to predict the injury parameters of the occupants corresponding to different experimental parameters and to determine impact injury indices conveniently and efficiently, a model forecasting occupant impact injury was established in this work. The work was based on finite experimental observation values obtained by numerical simulation. First, the various factors influencing the impact injuries caused by the interaction between unrestrained occupants and the compartment's internal structures were collated and the most vulnerable regions of the occupant's body were analyzed. Then, the forecast model was set up based on a genetic algorithm-back propagation (GA-BP) hybrid algorithm, which unified the individual characteristics of the back propagation-artificial neural network (BP-ANN) model and the genetic algorithm (GA). The model was well suited to studies of occupant impact injuries and allowed multiple-parameter forecasts of the occupant impact injuries to be realized assuming values for various influencing factors. Finally, the forecast results for three types of secondary collision were analyzed using forecasting accuracy evaluation methods. All of the results showed the ideal accuracy of the forecast model. When an occupant faced a table, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ± 6.0 percent and the average relative error (ARE) values did not exceed 3.0 percent. When an occupant faced a seat, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ± 5.2 percent and the ARE values did not exceed 3.1 percent. When the occupant faced another occupant, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ± 6.3 percent and the ARE values did not exceed 3.8 percent. 
The injury forecast model established in this article reduced repeat experiment times and improved the design efficiency of the internal compartment's structure parameters, and it provided a new way for assessing the safety performance of the interior structural parameters in existing, and newly designed, railway vehicle compartments.
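The GA-BP hybrid described above can be sketched in miniature: a genetic algorithm evolves the weight vector of a small feed-forward network, and a gradient-descent (BP-like) stage then refines the best individual. Everything below is a hypothetical toy, not the authors' model: the data, network size, and GA settings are invented, and the gradient is approximated by finite differences rather than true back-propagation.

```python
import math, random

random.seed(0)

# Invented toy data standing in for simulated injury observations.
xs = [i / 10 for i in range(11)]
ys = [x * x for x in xs]            # target relationship to learn

H = 4                               # hidden units of a 1-H-1 tanh network

def predict(p, x):
    w1, b1, w2, b2 = p[:H], p[H:2 * H], p[2 * H:3 * H], p[3 * H]
    return sum(w2[j] * math.tanh(w1[j] * x + b1[j]) for j in range(H)) + b2

def mse(p):
    return sum((predict(p, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# GA stage: evolve whole weight vectors (fitness = mean squared error).
pop = [[random.uniform(-1, 1) for _ in range(3 * H + 1)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=mse)
    elite = pop[:10]                # elitism: best individuals survive
    children = []
    while len(children) < 20:
        a, b = random.sample(elite, 2)
        child = [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]
        child[random.randrange(len(child))] += random.gauss(0, 0.2)  # mutation
        children.append(child)
    pop = elite + children
best = min(pop, key=mse)

# BP-like stage: descend a finite-difference gradient from the GA optimum,
# halving the step whenever it fails to improve the fit.
def grad(p, eps=1e-5):
    c0 = mse(p)
    g = []
    for i in range(len(p)):
        q = p[:]
        q[i] += eps
        g.append((mse(q) - c0) / eps)
    return g

p, lr = best[:], 0.1
for _ in range(100):
    g = grad(p)
    q = [pi - lr * gi for pi, gi in zip(p, g)]
    if mse(q) < mse(p):
        p = q
    else:
        lr *= 0.5
```

The GA supplies a good basin for the local search; the refinement stage, by construction, never worsens the fit of the GA's best individual.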
Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori
2006-06-12
The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce the link availability and may introduce burst errors, thus degrading the performance of the system. We investigate the suitability of soft-computing (SC) based tools for improving the performance of free-space optical (FSO) communications systems. The SC based tools are used to predict key parameters of an FSO communications system. Measured data collected from an experimental FSO communication system are used as training and testing data for a proposed multi-layer neural network predictor (MNNP) used to predict future parameter values. The predicted parameters are essential for reducing transmission errors by improving the accuracy with which the antenna tracks data beams. This is particularly essential during periods of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with the original measurements.
Hu, Jin; Zeng, Chunna
2017-02-01
The complex-valued Cohen-Grossberg neural network is a special kind of complex-valued neural network. In this paper, the synchronization problem for a class of complex-valued Cohen-Grossberg neural networks with known and unknown parameters is investigated. By using Lyapunov functionals and an adaptive control method based on parameter identification, adaptive feedback schemes are proposed that achieve exponential synchronization between the drive and response systems. The results obtained in this paper extend and improve previous work on adaptive synchronization of Cohen-Grossberg neural networks. Finally, two numerical examples are given to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
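As a minimal illustration of the adaptive-control idea (not the authors' complex-valued Cohen-Grossberg formulation), the sketch below synchronizes a scalar drive system x' = -x + a*tanh(x) with unknown parameter a to a response system, using error feedback plus a parameter-identification law. The system, gains, and step size are all invented for illustration.

```python
import math

# Drive system with unknown parameter a_true; response system with estimate
# ahat, error feedback u = -k*e, and adaptation ahat' = -gamma*e*tanh(y).
a_true, k, gamma, dt = 2.0, 5.0, 4.0, 1e-3
x, y, ahat = 0.5, -1.0, 0.0
for _ in range(200_000):                      # Euler integration to t = 200
    e = y - x
    dx = -x + a_true * math.tanh(x)           # drive
    dy = -y + ahat * math.tanh(y) - k * e     # response + feedback control
    dahat = -gamma * e * math.tanh(y)         # parameter identification law
    x += dt * dx
    y += dt * dy
    ahat += dt * dahat
err = abs(y - x)
```

A Lyapunov function V = e^2/2 + (ahat - a)^2/(2*gamma) decreases along trajectories for k > a - 1, so the synchronization error decays; because tanh(x) settles at a nonzero value here, the estimate ahat also converges to a_true.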
NASA Technical Reports Server (NTRS)
Giver, L. P.; Brown, L. R.; Wattson, R. B.; Spencer, M. N.; Chackerian, C., Jr.; Strawa, Anthony W. (Technical Monitor)
1995-01-01
Rotationless band intensities and Herman-Wallis parameters are listed in HITRAN tabulations for several hundred CO2 overtone-combination bands. These parameters are based on laboratory measurements when available, and on DND calculations for the unmeasured bands. The DND calculations for the Fermi interacting nv(sub 1) + v(sub 3) polyads show the a(sub 2) Herman-Wallis parameter varying smoothly from a negative value for the first member of the polyad to a positive value for the final member. Measurements of the v(sub 1) + v(sub 3) dyad are consistent with the DND calculations for the a(sub 2) parameter, as are our recent measurements of the 4v(sub 1) + v(sub 3) pentad. However, the measurement-based values in the HITRAN tables for the 2v(sub 1) + v(sub 3) triad and the 3v(sub 1) + v(sub 3) tetrad do not support the DND calculated values for the a(sub 2) parameters. We therefore decided to make new measurements to improve some of these intensity parameters. With the McMath FTS at Kitt Peak National Observatory/National Solar Observatory we recorded several spectra of the 4000 to 8000 cm(exp -1) region of pure CO2 at 0.011 cm(exp -1) resolution using the 6 meter White absorption cell. The signal/noise and absorbance of the first and fourth bands of the 3v(sub 1) + v(sub 3) tetrad of C-12O-16 were ideal on these spectra for measuring line intensities and broadening widths. Our self-broadening results agree with the HITRAN parameterization, while our measurements of the rotationless band intensities are about 15% less than the HITRAN values. We find a negative value of a(sub 2) for the 30011-00001 band and a positive value for the 30014-00001 band, whereas the HITRAN values of a(sub 2) are positive for all four tetrad bands. Our a(sub 1) and a(sub 2) Herman-Wallis parameters are closer to DND calculated values than the 1992 HITRAN values for both the 30011-00001 and the 30014-00001 band.
Absolute Isotopic Abundance Ratios and the Accuracy of Δ47 Measurements
NASA Astrophysics Data System (ADS)
Daeron, M.; Blamart, D.; Peral, M.; Affek, H. P.
2016-12-01
Conversion from raw IRMS data to clumped isotope anomalies in CO2 (Δ47) relies on four external parameters: the (13C/12C) ratio of VPDB, the (17O/16O) and (18O/16O) ratios of VSMOW (or VPDB-CO2), and the slope of the triple oxygen isotope line (λ). Here we investigate the influence that these isotopic parameters exert on measured Δ47 values, using real-world data corresponding to 7 months of measurements; simulations based on randomly generated data; precise comparisons between water-equilibrated CO2 samples and between carbonate standards believed to share quasi-identical Δ47 values; reprocessing of two carbonate calibration data sets with different slopes of Δ47 versus T. Using different sets of isotopic parameters generally produces systematic offsets as large as 0.04 ‰ in final Δ47 values. Moreover, even using a single set of isotopic parameters can produce intra- and inter-laboratory discrepancies in final Δ47 values, if some of these parameters are inaccurate. Depending on the isotopic compositions of the standards used for conversion to "absolute" values, these errors should correlate strongly with either δ13C or δ18O, or more weakly with both. Based on measurements of samples expected to display identical Δ47 values, such as 25°C water-equilibrated CO2 with different carbon and oxygen isotope compositions, or high-temperature standards ETH-1 and ETH-2, we conclude that the isotopic parameters used so far in most clumped isotope studies produce large, systematic errors controlled by the relative bulk isotopic compositions of samples and standards, which should be one of the key factors responsible for current inter-laboratory discrepancies. By contrast, the isotopic parameters of Brand et al. [2010] appear to yield accurate Δ47 values regardless of bulk isotopic composition. References: Brand, Assonov and Coplen [2010] http://dx.doi.org/10.1351/PAC-REP-09-01-05
Convergence properties of simple genetic algorithms
NASA Technical Reports Server (NTRS)
Bethke, A. D.; Zeigler, B. P.; Strauss, D. M.
1974-01-01
The essential parameters determining the behaviour of genetic algorithms were investigated. Computer runs were made while systematically varying the parameter values. Results based on the progress curves obtained from these runs are presented along with results based on the variability of the population as the run progresses.
Méndez-Cid, Francisco J; Lorenzo, José M; Martínez, Sidonia; Carballo, Javier
2017-02-15
The agreement among the results determined for the main parameters used in the evaluation of fat auto-oxidation was investigated in animal fats (butter fat, subcutaneous pig back-fat and subcutaneous ham fat). Also, graduated colour scales representing the colour change during storage/ripening were developed for the three types of fat, and the values read on these scales were correlated with the values observed for the different parameters indicating fat oxidation. In general, good correlation among the values of the different parameters was observed (e.g. TBA value correlated with the peroxide value: r=0.466 for butter and r=0.898 for back-fat). A reasonable correlation was observed between the values read on the developed colour scales and the values for the other parameters determined (e.g. values of r=0.320 and r=0.793 with peroxide value for butter and back-fat, respectively, and of r=0.767 and r=0.498 with TBA value for back-fat and ham fat, respectively). Copyright © 2016 Elsevier Ltd. All rights reserved.
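The correlation figures quoted above are plain Pearson coefficients; a self-contained sketch (with invented storage-series numbers, not the paper's data) is:

```python
import math

def pearson_r(u, v):
    # Pearson product-moment correlation coefficient of two equal-length series.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

# Invented storage series: peroxide value and TBA value at six sampling times.
peroxide = [0.8, 1.4, 2.9, 4.1, 6.0, 7.8]
tba = [0.05, 0.09, 0.16, 0.22, 0.35, 0.41]
r = pearson_r(peroxide, tba)   # close to 1: the two oxidation indices track each other
```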
Basic research on design analysis methods for rotorcraft vibrations
NASA Technical Reports Server (NTRS)
Hanagud, S.
1991-01-01
The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.
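The restricted-parameter identification idea can be sketched as follows: synthesize a "measurement" from an assumed model, then search only over the designated physical parameters (here stiffness k and damping c of a single-degree-of-freedom oscillator with known mass) rather than over raw system matrices. The model, numbers, and grid search are hypothetical stand-ins for the finite-element procedure in the abstract.

```python
import math

m = 2.0                        # known mass (kg) of a hypothetical element
k_true, c_true = 800.0, 6.0    # values used to synthesize the "measurement"

def modal(k, c):
    wn = math.sqrt(k / m)                  # natural frequency (rad/s)
    zeta = c / (2 * math.sqrt(k * m))      # damping ratio
    return wn, zeta

wn_meas, zeta_meas = modal(k_true, c_true)  # synthesized measured response

# Identify only the designated physical parameters k and c by matching the
# measured modal quantities (a grid search stands in for the real solver).
best = None
for i in range(101):
    for j in range(101):
        k, c = 600 + 5 * i, 2 + 0.1 * j
        wn, zeta = modal(k, c)
        err = ((wn - wn_meas) / wn_meas) ** 2 + ((zeta - zeta_meas) / zeta_meas) ** 2
        if best is None or err < best[0]:
            best = (err, k, c)
err_id, k_id, c_id = best
```

Because the "measurement" was synthesized from the assumed model, both parameter values are recovered exactly, mirroring the paper's first validation step.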
User-customized brain computer interfaces using Bayesian optimization
NASA Astrophysics Data System (ADS)
Bashashati, Hossein; Ward, Rabab K.; Bashashati, Ali
2016-04-01
Objective. The brain characteristics of different people are not the same. Brain computer interfaces (BCIs) should thus be customized for each individual person. In motor-imagery based synchronous BCIs, a number of parameters (referred to as hyper-parameters) including the EEG frequency bands, the channels and the time intervals from which the features are extracted should be pre-determined based on each subject’s brain characteristics. Approach. To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization. A final classifier aggregates the results of the different classifiers. Main Results. We have applied our method to 21 subjects from three BCI competition datasets. We have conducted rigorous statistical tests, and have shown the positive impact of hyper-parameter optimization in improving the accuracy of BCIs. Furthermore, we have compared our results to those reported in the literature. Significance. Unlike the best reported results in the literature, which are based on more sophisticated feature extraction and classification methods, and rely on prestudies to determine the hyper-parameter values, our method has the advantage of being fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best performing designs in the literature.
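A toy version of the sequential model-based idea (with a simple quadratic surrogate standing in for the Gaussian-process model usually used in Bayesian optimization, and an invented one-dimensional "validation error") might look like:

```python
# Invented stand-in for a BCI pipeline's validation error as a function of a
# single rescaled hyper-parameter h (e.g. a band-pass centre frequency).
def val_error(h):
    return (h - 0.62) ** 2 + 0.05 * (h - 0.62) ** 4

def fit_quadratic(pts):
    # Least-squares fit f(h) ~ a*h**2 + b*h + c via the 3x3 normal equations.
    s = [sum(h ** p for h, _ in pts) for p in range(5)]
    t = [sum(f * h ** p for h, f in pts) for p in range(3)]
    A = [[s[4], s[3], s[2]], [s[3], s[2], s[1]], [s[2], s[1], s[0]]]
    rhs = [t[2], t[1], t[0]]
    for i in range(3):                     # Gauss-Jordan elimination
        piv = A[i][i]
        A[i] = [x / piv for x in A[i]]
        rhs[i] /= piv
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [x - f * y for x, y in zip(A[j], A[i])]
                rhs[j] -= f * rhs[i]
    return rhs                             # coefficients [a, b, c]

pts = [(h, val_error(h)) for h in (0.1, 0.5, 0.9)]   # initial design
for _ in range(5):                                   # sequential refinement
    a, b, _ = fit_quadratic(pts)
    h_next = max(0.0, min(1.0, -b / (2 * a)))        # surrogate minimiser
    pts.append((h_next, val_error(h_next)))
h_best = min(pts, key=lambda p: p[1])[0]
```

Each round fits the surrogate to all evaluations so far and evaluates the expensive objective only at the surrogate's minimiser; real Bayesian optimization additionally balances exploration against exploitation via an acquisition function.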
Design values of resilient modulus of stabilized and non-stabilized base.
DOT National Transportation Integrated Search
2010-10-01
The primary objective of this research study is to determine design value ranges for typical base materials, as allowed by LADOTD specifications, through laboratory tests with respect to resilient modulus and other parameters used by pavement design ...
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
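The estimation loop described above can be sketched directly: hold the characteristic resistance fixed, draw random values for compliance and peripheral resistance, integrate the three-element Windkessel model, and keep the draw that best matches the target pressure. The flow waveform, parameter ranges, and units below are invented for illustration, not taken from the study.

```python
import math, random

random.seed(0)

dt, T = 0.01, 2.0
n = int(T / dt)
# Invented aortic flow: a half-sine ejection phase in each 1 s beat (mL/s).
Q = [300 * math.sin(math.pi * (t % 1.0) / 0.35) if (t % 1.0) < 0.35 else 0.0
     for t in (i * dt for i in range(n))]

Rc = 0.05          # characteristic (arterial) resistance, held constant

def pressure(C, Rp):
    # Three-element Windkessel: P = Pc + Rc*Q,  C*dPc/dt = Q - Pc/Rp.
    Pc, out = 80.0, []
    for q in Q:
        Pc += dt * (q - Pc / Rp) / C          # Euler step for the capacitor node
        out.append(Pc + Rc * q)
    return out

P_target = pressure(1.2, 1.1)      # stands in for the physiological pressure

best = None
for _ in range(3000):              # Monte Carlo draws of (C, Rp)
    C, Rp = random.uniform(0.5, 2.5), random.uniform(0.5, 2.0)
    P = pressure(C, Rp)
    err = sum((a - b) ** 2 for a, b in zip(P, P_target)) / n
    if best is None or err < best[0]:
        best = (err, C, Rp)
err_best, C_best, Rp_best = best
```

Fixing the resistance from the measured flow, as the abstract notes, is what makes the remaining two parameters identifiable from a single pressure trace.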
Beauchet, Olivier; Allali, Gilles; Sekhon, Harmehr; Verghese, Joe; Guilain, Sylvie; Steinmetz, Jean-Paul; Kressig, Reto W.; Barden, John M.; Szturm, Tony; Launay, Cyrille P.; Grenier, Sébastien; Bherer, Louis; Liu-Ambrose, Teresa; Chester, Vicky L.; Callisaya, Michele L.; Srikanth, Velandai; Léonard, Guillaume; De Cock, Anne-Marie; Sawa, Ryuichi; Duque, Gustavo; Camicioli, Richard; Helbostad, Jorunn L.
2017-01-01
Background: Gait disorders, a highly prevalent condition in older adults, are associated with several adverse health consequences. Gait analysis allows qualitative and quantitative assessments of gait that improves the understanding of mechanisms of gait disorders and the choice of interventions. This manuscript aims (1) to give consensus guidance for clinical and spatiotemporal gait analysis based on the recorded footfalls in older adults aged 65 years and over, and (2) to provide reference values for spatiotemporal gait parameters based on the recorded footfalls in healthy older adults free of cognitive impairment and multi-morbidities. Methods: International experts working in a network of two different consortiums (i.e., Biomathics and Canadian Gait Consortium) participated in this initiative. First, they identified items of standardized information following the usual procedure of formulation of consensus findings. Second, they merged databases including spatiotemporal gait assessments with GAITRite® system and clinical information from the “Gait, cOgnitiOn & Decline” (GOOD) initiative and the Generation 100 (Gen 100) study. Only healthy—free of cognitive impairment and multi-morbidities (i.e., ≤ 3 therapeutics taken daily)—participants aged 65 and older were selected. Age, sex, body mass index, mean values, and coefficients of variation (CoV) of gait parameters were used for the analyses. Results: Standardized systematic assessment of three categories of items, which were demographics and clinical information, and gait characteristics (clinical and spatiotemporal gait analysis based on the recorded footfalls), were selected for the proposed guidelines. Two complementary sets of items were distinguished: a minimal data set and a full data set. In addition, a total of 954 participants (mean age 72.8 ± 4.8 years, 45.8% women) were recruited to establish the reference values. 
Performance of spatiotemporal gait parameters based on the recorded footfalls declined with increasing age (mean values and CoV) and demonstrated sex differences (mean values). Conclusions: Based on an international multicenter collaboration, we propose consensus guidelines for gait assessment and spatiotemporal gait analysis based on the recorded footfalls, and reference values for healthy older adults. PMID:28824393
Karmonik, C; Anderson, J R; Beilner, J; Ge, J J; Partovi, S; Klucznik, R P; Diaz, O; Zhang, Y J; Britz, G W; Grossman, R G; Lv, N; Huang, Q
2016-07-26
To quantify the relationship and to demonstrate redundancies between hemodynamic and structural parameters before and after virtual treatment with a flow diverter device (FDD) in cerebral aneurysms. Steady computational fluid dynamics (CFD) simulations were performed for 10 cerebral aneurysms where FDD treatment with the SILK device was simulated by virtually reducing the porosity at the aneurysm ostium. Velocity and pressure values proximal and distal to and at the aneurysm ostium as well as inside the aneurysm were quantified. In addition, dome-to-neck ratios and size ratios were determined. Multiple correlation analysis (MCA) and hierarchical cluster analysis (HCA) were conducted to demonstrate dependencies between both structural and hemodynamic parameters. Velocities in the aneurysm were reduced by 0.14 m/s on average and correlated significantly (p<0.05) with velocity values in the parent artery (average correlation coefficient: 0.70). Pressure changes in the aneurysm correlated significantly with pressure values in the parent artery and aneurysm (average correlation coefficient: 0.87). MCA found statistically significant correlations between velocity values and between pressure values, respectively. HCA sorted velocity parameters, pressure parameters and structural parameters into different hierarchical clusters. HCA of aneurysms based on the parameter values yielded similar results by either including all (n=22) or only non-redundant parameters (n=2, 3 and 4). Hemodynamic and structural parameters before and after virtual FDD treatment show strong inter-correlations. Redundancy of parameters was demonstrated with hierarchical cluster analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
Modeling polyvinyl chloride Plasma Modification by Neural Networks
NASA Astrophysics Data System (ADS)
Wang, Changquan
2018-03-01
A neural network model was constructed to analyze the connection between dielectric barrier discharge parameters and the surface properties of the material. The experimental data were generated from polyvinyl chloride plasma modification using uniform design. Discharge voltage, discharge gas gap and treatment time were used as the input layer parameters of the neural network, and the measured values of the contact angle were used as the output layer parameter. A nonlinear mathematical model of the surface modification of polyvinyl chloride was developed based upon the neural networks. The optimum model parameters were obtained by simulation evaluation and error analysis. The results of the optimal model show that the predicted values are very close to the actual test values. The prediction model obtained here is useful for discharge plasma surface modification analysis.
NASA Astrophysics Data System (ADS)
Ghorbanpour Arani, A.; Zamani, M. H.
2018-06-01
The present work deals with the bending behavior of a nanocomposite beam resting on a two-parameter modified Vlasov model foundation (MVMF), with consideration of the agglomeration and distribution of carbon nanotubes (CNTs) in the beam matrix. An equivalent fiber based on the Eshelby-Mori-Tanaka approach is employed to determine the influence of CNT aggregation on the elastic properties of the CNT-reinforced beam. The governing equations are deduced using the principle of minimum potential energy under the assumptions of the Euler-Bernoulli beam theory. The MVMF requires estimation of the γ parameter; to this end, a unique iterative technique based on variational principles is utilized to compute the value of γ, and subsequently the fourth-order differential equation is solved analytically. Eventually, the transverse displacements and bending stresses are obtained and compared for different agglomeration parameters, various boundary conditions, and different elastic foundations, without the need to prescribe values for the foundation parameters.
Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model
NASA Astrophysics Data System (ADS)
Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr
2017-10-01
Issues concerning the advanced numerical analysis of concrete building structures in sophisticated computing systems currently require the involvement of nonlinear mechanics tools. The efforts to design safer, more durable and, mainly, more economically efficient concrete structures are supported via the use of advanced nonlinear concrete material models and the geometrically nonlinear approach. The application of nonlinear mechanics tools undoubtedly presents another step towards the approximation of the real behaviour of concrete building structures within the framework of computer numerical simulations. However, the success of this application depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their material parameters. The effective application of nonlinear concrete material models within computer simulations often becomes very problematic because these material models very often contain parameters (material constants) whose values are difficult to obtain. However, obtaining correct values of the material parameters is very important to ensure the proper function of the concrete material model used. Today, one possibility which permits successful solution of this problem is the use of optimization algorithms for optimization-based inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it tries to find parameter values of the material model such that the data obtained from the computer simulation best approximate the experimental data. This paper is focused on the optimization-based inverse identification of the parameters of a concrete cap material model known as the Continuous Surface Cap Model.
Within this paper, material parameters of the model are identified on the basis of interaction between nonlinear computer simulations, gradient based and nature inspired optimization algorithms and experimental data, the latter of which take the form of a load-extension curve obtained from the evaluation of uniaxial tensile test results. The aim of this research was to obtain material model parameters corresponding to the quasi-static tensile loading which may be further used for the research involving dynamic and high-speed tensile loading. Based on the obtained results it can be concluded that the set goal has been reached.
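A stripped-down analogue of this optimization-based inverse identification, using an invented two-parameter load-extension law in place of the Continuous Surface Cap Model and a (1+1) evolution strategy as the nature-inspired optimizer, might look like:

```python
import math, random

random.seed(2)

# Invented softening load-extension law standing in for the cap-model response:
# F(u) = Fmax * (u / u0) * exp(1 - u / u0), peak force Fmax at extension u0.
def force(u, Fmax, u0):
    return Fmax * (u / u0) * math.exp(1 - u / u0)

us = [0.01 * i for i in range(1, 31)]
F_exp = [force(u, 3.5, 0.08) for u in us]     # synthetic "experimental" curve

def cost(p):
    return sum((force(u, p[0], p[1]) - f) ** 2 for u, f in zip(us, F_exp))

# (1+1) evolution strategy: mutate both parameters, keep improvements only.
p = [1.0, 0.2]                                # deliberately poor initial guess
c = cost(p)
for _ in range(2000):
    q = [max(1e-3, pi + random.gauss(0, 0.1 * pi)) for pi in p]
    cq = cost(q)
    if cq < c:
        p, c = q, cq
Fmax_id, u0_id = p
```

The loop mirrors the paper's setup: repeated simulation of the material response, a misfit measure against the experimental curve, and an optimizer that drives the misfit down.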
VizieR Online Data Catalog: PCA-based inversion of stellar parameters (Gebran+, 2016)
NASA Astrophysics Data System (ADS)
Gebran, M.; Farah, W.; Paletou, F.; Monier, R.; Watson, V.
2016-03-01
Inverted effective temperatures, surface gravities, projected rotational velocities, metallicities, and radial velocities for the selected A stars. The "closest" values are those found in VizieR catalogues closest to our inverted parameters, while the "median" values are the medians of the catalogue values. Outliers are marked as "1" in the "outliers" column (see Sect. 6) (1 data file).
Applications of Monte Carlo method to nonlinear regression of rheological data
NASA Astrophysics Data System (ADS)
Kim, Sangmo; Lee, Junghaeng; Kim, Sihyun; Cho, Kwang Soo
2018-02-01
In rheological studies, it is often necessary to determine the parameters of rheological models from experimental data. Since both the rheological data and the parameter values vary on a logarithmic scale and the number of parameters is quite large, conventional methods of nonlinear regression such as the Levenberg-Marquardt (LM) method are usually ineffective. A gradient-based method such as LM is apt to be caught in local minima, which give unphysical parameter values whenever the initial guess is far from the global optimum. Although this problem can be solved by simulated annealing (SA), such Monte Carlo (MC) methods need adjustable parameters that must be determined in an ad hoc manner. We suggest a simplified version of SA, a kind of MC method, which yields effective values of the parameters of complicated rheological models such as the Carreau-Yasuda model of steady shear viscosity, the discrete relaxation spectrum, and the zero-shear viscosity as a function of concentration and molecular weight.
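A simulated-annealing sketch in that spirit, fitting a Carreau model (the Carreau-Yasuda model with a = 2 and zero infinite-shear viscosity) to synthetic data, with both the parameters and the residuals handled in log scale as the abstract recommends, could read:

```python
import math, random

random.seed(3)

# Carreau model: eta(g) = eta0 * (1 + (lam*g)**2) ** ((n - 1) / 2)
def eta(g, eta0, lam, n):
    return eta0 * (1 + (lam * g) ** 2) ** ((n - 1) / 2)

gdots = [10 ** (k / 4) for k in range(-8, 13)]       # log-spaced shear rates
data = [eta(g, 1e4, 2.0, 0.4) for g in gdots]        # synthetic measurements

# Both the data and the parameters span decades, so the residual and the
# search moves are taken in log scale.
def cost(p):
    e0, lam, n = 10 ** p[0], 10 ** p[1], p[2]
    return sum(math.log10(eta(g, e0, lam, n) / d) ** 2
               for g, d in zip(gdots, data))

p = [2.0, 0.0, 0.8]          # [log10(eta0), log10(lam), n]: poor initial guess
c, T = cost(p), 1.0
for _ in range(4000):
    q = p[:]
    q[random.randrange(3)] += random.gauss(0, 0.1)   # one-coordinate move
    cq = cost(q)
    if cq < c or random.random() < math.exp(-(cq - c) / T):
        p, c = q, cq                                 # Metropolis acceptance
    T *= 0.999                                       # geometric cooling
```

Early on the high temperature lets the walk escape poor basins; as T decays the acceptance rule becomes essentially greedy and the parameters settle near the true values.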
Perco, Paul; Heinzel, Andreas; Leierer, Johannes; Schneeberger, Stefan; Bösmüller, Claudia; Oberhuber, Rupert; Wagner, Silvia; Engler, Franziska; Mayer, Gert
2018-05-03
Donor organ quality affects long term outcome after renal transplantation. A variety of prognostic molecular markers is available, yet their validity often remains undetermined. A network-based molecular model reflecting donor kidney status based on transcriptomics data and molecular features reported in scientific literature to be associated with chronic allograft nephropathy was created. Significantly enriched biological processes were identified and representative markers were selected. An independent kidney pre-implantation transcriptomics dataset of 76 organs was used to predict estimated glomerular filtration rate (eGFR) values twelve months after transplantation using available clinical data and marker expression values. The best-performing regression model solely based on the clinical parameters donor age, donor gender, and recipient gender explained 17% of variance in post-transplant eGFR values. The five molecular markers EGF, CD2BP2, RALBP1, SF3B1, and DDX19B representing key molecular processes of the constructed renal donor organ status molecular model in addition to the clinical parameters significantly improved model performance (p-value = 0.0007) explaining around 33% of the variability of eGFR values twelve months after transplantation. Collectively, molecular markers reflecting donor organ status significantly add to prediction of post-transplant renal function when added to the clinical parameters donor age and gender.
Homeostatic enhancement of active mechanotransduction
NASA Astrophysics Data System (ADS)
Milewski, Andrew; O'Maoiléidigh, Dáibhid; Hudspeth, A. J.
2018-05-01
Our sense of hearing boasts exquisite sensitivity to periodic signals. Experiments and modeling imply, however, that the auditory system achieves this performance for only a narrow range of parameter values. As a result, small changes in these values could compromise the ability of the mechanosensory hair cells to detect stimuli. We propose that, rather than exerting tight control over parameters, the auditory system employs a homeostatic mechanism that ensures the robustness of its operation to variation in parameter values. Through analytical techniques and computer simulations we investigate whether a homeostatic mechanism renders the hair bundle's signal-detection ability more robust to alterations in experimentally accessible parameters. When homeostasis is enforced, the range of values for which the bundle's sensitivity exceeds a threshold can increase by more than an order of magnitude. The robustness of cochlear function based on somatic motility or hair bundle motility may be achieved by employing the approach we describe here.
Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel
2010-02-01
To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), on the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both ANN (0.982, 95% CI: 0.966-0.999) and SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p ≤ 0.038). The performance of ANNs and SVMs trained on minimum thickness values and the 10th and 90th percentiles were at least as good as ANNs and SVMs with input based on the conventional RNFLT parameters. No differences between ANN and SVM were observed in this study. Both MLCs performed very well, with similar diagnostic performance. Input parameters have a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL processed by machine classifiers can improve OCT-based glaucoma diagnosis.
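The AROC values quoted above are equivalent to the normalized Mann-Whitney U statistic; a minimal computation (with invented classifier scores, not the study's data) is:

```python
def auroc(pos, neg):
    # Probability that a randomly chosen positive case scores above a randomly
    # chosen control; ties count one half (Mann-Whitney U divided by m*n).
    wins = sum(1.0 if sp > sn else 0.5 if sp == sn else 0.0
               for sp in pos for sn in neg)
    return wins / (len(pos) * len(neg))

# Invented classifier outputs for patients (pos) and healthy controls (neg).
auc = auroc([0.9, 0.8, 0.75, 0.6], [0.7, 0.4, 0.3, 0.2])   # 15/16 = 0.9375
```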
NASA Astrophysics Data System (ADS)
Lim, Kyoung Jae; Park, Youn Shik; Kim, Jonggun; Shin, Yong-Chul; Kim, Nam Won; Kim, Seong Joon; Jeon, Ji-Hong; Engel, Bernard A.
2010-07-01
Many hydrologic and water quality computer models have been developed and applied to assess hydrologic and water quality impacts of land use changes. These models are typically calibrated and validated prior to their application. The Long-Term Hydrologic Impact Assessment (L-THIA) model was applied to the Little Eagle Creek (LEC) watershed and compared with the filtered direct runoff using BFLOW and the Eckhardt digital filter (with a default BFImax value of 0.80 and filter parameter value of 0.98), both available in the Web GIS-based Hydrograph Analysis Tool, called WHAT. The R2 value and the Nash-Sutcliffe coefficient values were 0.68 and 0.64 with BFLOW, and 0.66 and 0.63 with the Eckhardt digital filter. Although these results indicate that the L-THIA model estimates direct runoff reasonably well, the filtered direct runoff values using BFLOW and the Eckhardt digital filter with the default BFImax and filter parameter values do not reflect hydrological and hydrogeological situations in the LEC watershed. Thus, a BFImax GA-Analyzer module (BFImax Genetic Algorithm-Analyzer module) was developed and integrated into the WHAT system for determination of the optimum BFImax parameter and filter parameter of the Eckhardt digital filter. With the automated recession curve analysis method and BFImax GA-Analyzer module of the WHAT system, the optimum BFImax value of 0.491 and filter parameter value of 0.987 were determined for the LEC watershed. The comparison of L-THIA estimates with filtered direct runoff using an optimized BFImax and filter parameter resulted in an R2 value of 0.66 and the Nash-Sutcliffe coefficient value of 0.63. However, L-THIA estimates calibrated with the optimized BFImax and filter parameter increased by 33% and estimated NPS pollutant loadings increased by more than 20%.
This indicates L-THIA model direct runoff estimates can be incorrect by 33% and NPS pollutant loading estimation by more than 20%, if the accuracy of the baseflow separation method is not validated for the study watershed prior to model comparison. This study shows the importance of baseflow separation in hydrologic and water quality modeling using the L-THIA model.
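The Eckhardt filter used in the WHAT system is a closed-form recursion, so the roles of the BFImax and filter parameters are easy to see in code. The sketch below uses the optimum LEC values from the abstract (BFImax = 0.491, filter parameter a = 0.987); the streamflow series and the initialisation choice are invented for illustration.

```python
def eckhardt(Q, bfi_max=0.491, a=0.987):
    # Eckhardt recursive digital filter for baseflow separation:
    # b[t] = ((1-BFImax)*a*b[t-1] + (1-a)*BFImax*Q[t]) / (1 - a*BFImax),
    # with baseflow constrained never to exceed total streamflow.
    b = [Q[0] * bfi_max]                  # one simple initialisation choice
    for q in Q[1:]:
        bt = ((1 - bfi_max) * a * b[-1] + (1 - a) * bfi_max * q) / (1 - a * bfi_max)
        b.append(min(bt, q))              # enforce b[t] <= Q[t]
    return b

# Invented daily streamflow: a slow recession interrupted by one storm peak.
Q = [10, 9.5, 9, 40, 30, 20, 14, 11, 10, 9.6, 9.3, 9.1]
base = eckhardt(Q)
direct = [q - b for q, b in zip(Q, base)]  # direct runoff = total - baseflow
```

BFImax caps the long-run baseflow index while a controls how slowly the baseflow responds, which is why the direct-runoff series above spikes at the storm day while the baseflow barely moves.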
Multirate sampled-data yaw-damper and modal suppression system design
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1990-01-01
A multirate control law synthesis algorithm based on an infinite-time quadratic cost function was developed, along with a method for analyzing the robustness of multirate systems. A generalized multirate sampled-data control law structure (GMCLS) was introduced. A new infinite-time-based parameter optimization multirate sampled-data control law synthesis method and solution algorithm were developed. A singular-value-based method for determining gain and phase margins for multirate systems was also developed. The finite-time-based parameter optimization multirate sampled-data control law synthesis algorithm originally intended to be applied to the aircraft problem was instead demonstrated by application to a simpler problem involving the control of the tip position of a two-link robot arm. The GMCLS, the infinite-time-based parameter optimization multirate control law synthesis method and solution algorithm, and the singular-value-based method for determining gain and phase margins were all demonstrated by application to the aircraft control problem originally proposed for this project.
NASA Astrophysics Data System (ADS)
Vugmeyster, Liliya; Ostrovsky, Dmitry; Fu, Riqiang
2015-10-01
In this work, we assess the usefulness of static 15N NMR techniques for the determination of the 15N chemical shift anisotropy (CSA) tensor parameters and 15N-1H dipolar splittings in powder protein samples. By using five singly labeled samples of the villin headpiece subdomain protein in a hydrated lyophilized powder state, we determine the backbone 15N CSA tensors at two temperatures, 22 and -35 °C, in order to get a snapshot of the variability across the residues and as a function of temperature. All sites probed belonged to the hydrophobic core and most of them were part of α-helical regions. The values of the anisotropy (which include the effect of the dynamics) varied between 130 and 156 ppm at 22 °C, while the values of the asymmetry were in the 0.082-0.32 range. The Leu-75 and Leu-61 backbone sites exhibited high mobility based on the values of their temperature-dependent anisotropy parameters. Under the assumption that most differences stem from dynamics, we obtained the values of the motional order parameters for the 15N backbone sites. While a simple one-dimensional line shape experiment was used for the determination of the 15N CSA parameters, a more advanced approach based on the "magic sandwich" SAMMY pulse sequence (Nevzorov and Opella, 2003) was employed for the determination of the 15N-1H dipolar patterns, which yielded estimates of the dipolar couplings. Accordingly, the motional order parameters for the dipolar interaction were obtained. It was found that the order parameters from the CSA and dipolar measurements are highly correlated, validating that the variability between the residues is governed by the differences in dynamics. The values of the parameters obtained in this work can serve as reference values for developing more advanced magic-angle spinning recoupling techniques for multiply labeled samples.
DD3MAT - a code for yield criteria anisotropy parameters identification.
NASA Astrophysics Data System (ADS)
Barros, P. D.; Carvalho, P. D.; Alves, J. L.; Oliveira, M. C.; Menezes, L. F.
2016-08-01
This work presents the main strategies and algorithms adopted in the DD3MAT in-house code, specifically developed for identifying anisotropy parameters. The algorithm adopted is based on the minimization of an error function, using a downhill simplex method. The set of experimental values can include yield stresses and r-values obtained from in-plane tension at different angles to the rolling direction (RD), the yield stress and r-value obtained for the biaxial stress state, and yield stresses from shear tests, also performed at different angles to the RD. All these values can be defined for a specific value of plastic work. Moreover, the set can also include yield stresses obtained from in-plane compression tests. The anisotropy parameters are identified for an AA2090-T3 aluminium alloy, highlighting the importance of user intervention in improving the numerical fit.
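The identification strategy described, minimizing an error function over experimental yield stresses with a downhill simplex, can be sketched as follows. This is not the DD3MAT code itself: the two-parameter in-plane model and the data are hypothetical, and SciPy's Nelder-Mead routine stands in for the in-house simplex implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical "experimental" yield stresses (MPa) from tensile tests
# at several angles to the rolling direction.
angles = np.deg2rad([0, 15, 30, 45, 60, 75, 90])
sigma_exp = np.array([280.0, 276.0, 270.0, 266.0, 268.0, 273.0, 279.0])

def sigma_model(params, theta):
    # Toy two-parameter in-plane anisotropy model (an illustrative
    # stand-in, not the actual yield criterion used in DD3MAT).
    a, b = params
    return a + b * np.cos(4.0 * theta)

def error(params):
    # Sum of squared relative residuals, the usual cost-function form
    # in anisotropy parameter identification.
    r = (sigma_model(params, angles) - sigma_exp) / sigma_exp
    return float(np.sum(r ** 2))

# Downhill simplex (Nelder-Mead) minimization of the error function.
res = minimize(error, x0=[270.0, 5.0], method="Nelder-Mead")
a_opt, b_opt = res.x
```

In the real code, `sigma_model` would be replaced by the yield criterion evaluated for each test configuration, and r-values and compression data would enter the same residual vector.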
F-8C adaptive control law refinement and software development
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.
1981-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
Validation and upgrading of physically based mathematical models
NASA Technical Reports Server (NTRS)
Duval, Ronald
1992-01-01
The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.
2007-03-01
Column experiments were used to obtain model parameters. Cost data used in the model were based on conventional GAC installations, as modified to...
Empirical flow parameters - a tool for hydraulic model validity assessment : [summary].
DOT National Transportation Integrated Search
2013-10-01
Hydraulic modeling assembles models based on generalizations of parameter values from textbooks, professional literature, computer program documentation, and engineering experience. Actual measurements adjacent to the model location are seldom available...
Models based on value and probability in health improve shared decision making.
Ortendahl, Monica
2008-10-01
Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on values and probability provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. Both value and probability are usually estimated values in clinical decision making. Therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, which usually pertain in clinical work, gives the model labelled subjective expected utility. Estimated values and probabilities are involved sequentially for every step in the decision-making process. Introducing decision-analytic modelling gives a more complete picture of variables that influence the decisions carried out by the doctor and the patient. A model revised for perceived values and probabilities by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.
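The subjective expected utility model mentioned combines elicited values and probabilities multiplicatively over outcomes. A minimal sketch, with hypothetical clinical numbers for illustration:

```python
def subjective_expected_utility(outcomes):
    """outcomes: list of (probability, value) pairs elicited from the
    doctor and/or patient.  Returns the subjective expected utility
    of one decision option."""
    total_p = sum(p for p, _ in outcomes)
    if abs(total_p - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    return sum(p * v for p, v in outcomes)

# Hypothetical choice: treat vs. watchful waiting, each with two
# outcomes (probability, utility on a 0-1 scale).
treat = subjective_expected_utility([(0.7, 0.9), (0.3, 0.4)])
wait = subjective_expected_utility([(0.5, 1.0), (0.5, 0.3)])
best = "treat" if treat > wait else "wait"
```

Revising either the probabilities or the values, as the doctor and patient exchange estimates, changes the ranking of the options, which is the mechanism for shared decision making described above.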
NASA Astrophysics Data System (ADS)
Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin
2015-10-01
The performance of kernel-based techniques depends on the selection of kernel parameters, so suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters of the kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of the KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
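A differential-evolution search over kernel parameters, as described, can be sketched as follows. The discrimination objective below is a simplified stand-in for the KFKT-based criterion, and the two-class data are synthetic:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, (40, 2))   # background class (synthetic)
X1 = rng.normal(2.5, 1.0, (40, 2))   # target class (synthetic)

def rbf(X, Y, gamma):
    # Pairwise RBF kernel matrix between two sample sets.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def objective(params):
    # Negative discrimination score: mean within-class kernel
    # similarity minus mean between-class similarity.  This is an
    # illustrative stand-in for the KFKT discrimination measure.
    gamma = params[0]
    within = 0.5 * (rbf(X0, X0, gamma).mean() + rbf(X1, X1, gamma).mean())
    between = rbf(X0, X1, gamma).mean()
    return -(within - between)

result = differential_evolution(objective, bounds=[(1e-3, 10.0)], seed=1)
gamma_opt = result.x[0]
```

Because the objective is evaluated on the whole training set rather than over repeated cross-validation folds, one full DEA run replaces the grid-plus-CV loop whose cost the abstract criticizes.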
View Estimation Based on Value System
NASA Astrophysics Data System (ADS)
Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru
Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention of the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: while imitating behavior observed from the caregiver, the child updates a model of his/her own estimated view by minimizing the estimation error of the reward during the behavior. From this view, this paper presents a method for acquiring such a capability based on a value system whose values are obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process, parallel to young children's estimation of their own view during imitation of the observed behavior of the caregiver, is discussed.
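The TD error at the heart of the method is the standard temporal-difference learning signal. A minimal sketch on a toy two-state chain; the learning rate and discount factor are illustrative assumptions:

```python
def td0_value_update(V, s, r, s_next, alpha=0.1, gamma=0.95):
    """One TD(0) update of the state-value table V.  The TD error
    (r + gamma*V[s'] - V[s]) is the same signal the paper reuses to
    update the view-estimation parameters."""
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return td_error

# Toy two-state chain: s0 -> s1 -> terminal, reward 1 on the last step.
V = {0: 0.0, 1: 0.0, "terminal": 0.0}
for _ in range(300):
    td0_value_update(V, 0, 0.0, 1)
    td0_value_update(V, 1, 1.0, "terminal")
```

After repeated updates the values converge to V[1] = 1 and V[0] = gamma * V[1] = 0.95; in the paper, the same error signal simultaneously drives the view-estimation parameters.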
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
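The random walk Metropolis algorithm used for parameter estimation can be sketched as follows; the one-parameter "thermal time to heading" posterior below is a hypothetical stand-in for the phenology-model posteriors of the study:

```python
import numpy as np

def metropolis(log_post, x0, n_iter=20000, step=20.0, seed=0):
    """Minimal random-walk Metropolis sampler with a Gaussian
    proposal, the algorithm class named in the abstract."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    lp = log_post(x)
    samples = np.empty(n_iter)
    for i in range(n_iter):
        x_prop = x + step * rng.standard_normal()
        lp_prop = log_post(x_prop)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        samples[i] = x
    return samples

# Hypothetical target: posterior of a thermal-time parameter,
# assumed N(mu=900, sd=30) degree-days (illustrative numbers only).
log_post = lambda x: -0.5 * ((x - 900.0) / 30.0) ** 2
draws = metropolis(log_post, x0=800.0)
posterior = draws[5000:]   # discard burn-in
```

Keeping the full chain of draws, rather than only a point estimate, is exactly the "full posterior distribution" attention the abstract argues for: credible intervals and parameter correlations come directly from `posterior`.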
TU-F-12A-05: Sensitivity of Textural Features to 3D Vs. 4D FDG-PET/CT Imaging in NSCLC Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Nyflot, M; Bowen, S
2014-06-15
Purpose: Neighborhood gray-level difference matrix (NGLDM) based texture parameters extracted from conventional (3D) 18F-FDG PET scans in patients with NSCLC have previously been shown to associate with response to chemoradiation and poorer patient outcome. However, the change in these parameters when utilizing respiratory-correlated (4D) FDG-PET scans has not yet been characterized for NSCLC. The objective of this study was to assess the extent to which NGLDM-based texture parameters on 4D PET images vary with reference to values derived from 3D scans in NSCLC. Methods: Eight patients with newly diagnosed NSCLC treated with concomitant chemoradiotherapy were included in this study. 4D PET scans were reconstructed with OSEM-IR in 5 respiratory phase-binned images, and the corresponding CT data of each phase were employed for attenuation correction. NGLDM-based texture features, consisting of coarseness, contrast, busyness, complexity and strength, were evaluated for gross tumor volumes defined on 3D/4D PET scans by radiation oncologists. Variation of the obtained texture parameters over the respiratory cycle was examined with respect to values extracted from 3D scans. Results: Differences between texture parameters derived from 4D scans at different respiratory phases and those extracted from 3D scans ranged from −30% to 13% for coarseness, −12% to 40% for contrast, −5% to 50% for busyness, −7% to 38% for complexity, and −43% to 20% for strength. Furthermore, no evident correlations were observed between respiratory phase and 4D scan texture parameters. Conclusion: Results of the current study showed that NGLDM-based texture parameters varied considerably based on the choice of 3D PET or 4D PET reconstruction of NSCLC patient images, indicating that standardized image acquisition and analysis protocols need to be established for clinical studies, especially multicenter clinical trials, intending to validate prognostic values of texture features for NSCLC.
NASA Astrophysics Data System (ADS)
Han, Xiao; Gao, Xiguang; Song, Yingdong
2017-10-01
An approach to identify the parameters of an interface friction model for ceramic matrix composites based on the stress-strain response was developed. The stress distribution of fibers in the interface slip region and the intact region of the damaged composite was determined by adopting the interface friction model. The relations between maximum strain and interface shear stress, and between the secant moduli of the hysteresis loop and the interface de-bonding stress, were established with the method of symbolic-graphic combination. By comparing the experimental strain and secant moduli of the hysteresis loop with the computed values, the interface shear stress and interface de-bonding stress corresponding to the first cycle were identified. Substituting the identified parameters into the interface friction model, the stress-strain curves were predicted, and the predicted results agree well with experiments. In addition, the influence of the number of data points on the identified interface parameter values was discussed, and the approach was compared with the method based on the area of the hysteresis loop.
NASA Astrophysics Data System (ADS)
Yu, Zhang; Xiaohui, Song; Jianfang, Li; Fei, Gao
2017-05-01
Cable overheating will reduce the cable insulation level, accelerate cable insulation aging, and can even cause short-circuit faults. Cable overheating risk identification and warning are necessary for distribution network operators. A cable overheating risk warning method based on impedance parameter estimation is proposed in this paper to improve the safety and reliability of distribution network operation. Firstly, a cable impedance estimation model is established using the least squares method, based on data from the distribution SCADA system, to improve the impedance parameter estimation accuracy. Secondly, the threshold value of cable impedance is calculated from historical data, and the forecast value of cable impedance is calculated from future forecasting data from the distribution SCADA system. Thirdly, a library of cable overheating risk warning rules is established; the cable impedance forecast value is calculated, the rate of change of impedance is analyzed, and the overheating risk of the cable line is then flagged based on the warning rules library, according to the relationship between impedance variation and line temperature rise. The overheating risk warning method is simulated in the paper. The simulation results show that the method can accurately identify the impedance and forecast the temperature rise of cable lines in the distribution network. The overheating risk warning results can provide a decision basis for operation, maintenance, and repair.
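The first two steps, least-squares impedance estimation from SCADA samples followed by a threshold comparison, can be sketched as follows. The cable data and the 10% warning threshold are assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical SCADA snapshots: current magnitude (A) and measured
# voltage drop (V) along a cable section, with measurement noise.
true_R = 0.12                                    # ohms (to be estimated)
current = rng.uniform(50.0, 400.0, 96)           # e.g. 96 daily samples
v_drop = true_R * current + rng.normal(0.0, 0.5, 96)

# Least-squares estimate of R: solve min ||I*R - dV||^2.
A = current[:, None]
sol, *_ = np.linalg.lstsq(A, v_drop, rcond=None)
R_est = float(sol[0])

# Simple rule from the warning library: flag overheating risk if the
# estimated impedance exceeds the historical value by more than an
# assumed 10% margin (resistance rises with conductor temperature).
historical_R = 0.12
overheating_risk = R_est > 1.10 * historical_R
```

In the full method the same regression would be run on forecast load data to obtain the forecast impedance, and the rate of change of `R_est` over time would feed the rules library.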
Support vector machines-based modelling of seismic liquefaction potential
NASA Astrophysics Data System (ADS)
Pal, Mahesh
2006-08-01
This paper investigates the potential of a support vector machine (SVM)-based classification approach to assess liquefaction potential from actual standard penetration test (SPT) and cone penetration test (CPT) field data. SVMs are based on statistical learning theory and have been found to work well in comparison to neural networks in several other applications. Both CPT and SPT field data sets are used with SVMs to predict the occurrence and non-occurrence of liquefaction based on different input parameter combinations. With the SPT and CPT test data sets, highest accuracies of 96% and 97%, respectively, were achieved with SVMs. This suggests that SVMs can effectively be used to model the complex relationship between different soil parameters and liquefaction potential. Several other combinations of input variables were used to assess the influence of different input parameters on liquefaction potential. The proposed approach suggests that neither the normalized cone resistance value with CPT data nor the calculation of the standardized SPT value is required. Further, SVMs require few user-defined parameters and provide better performance in comparison to the neural network approach.
Optimization of intra-voxel incoherent motion imaging at 3.0 Tesla for fast liver examination.
Leporq, Benjamin; Saint-Jalmes, Hervé; Rabrait, Cecile; Pilleul, Frank; Guillaud, Olivier; Dumortier, Jérôme; Scoazec, Jean-Yves; Beuf, Olivier
2015-05-01
Optimization of a multi-b-value MR protocol for fast intra-voxel incoherent motion imaging of the liver at 3.0 Tesla. A comparison of four different acquisition protocols was carried out based on estimated IVIM (DSlow, DFast, and f) and ADC parameters in 25 healthy volunteers. The effects of respiratory gating compared with free-breathing acquisition, the diffusion gradient scheme (simultaneous or sequential), and the use of weighted averaging for different b-values were assessed. An optimization study based on Cramer-Rao lower bound theory was then performed to minimize the number of b-values required for suitable quantification. The duration-optimized protocol was evaluated on 12 patients with chronic liver diseases. No significant differences in IVIM parameters were observed between the assessed protocols. Only four b-values (0, 12, 82, and 1310 s/mm²) were found necessary to perform a suitable quantification of IVIM parameters. DSlow and DFast significantly decreased between nonadvanced and advanced fibrosis (P < 0.05 and P < 0.01) whereas perfusion fraction and ADC variations were not found to be significant. Results showed that IVIM could be performed in free breathing, with a weighted-averaging procedure, a simultaneous diffusion gradient scheme, and only four optimized b-values (0, 10, 80, and 800), reducing scan duration by a factor of nine compared with a nonoptimized protocol. Preliminary results have shown that parameters such as DSlow and DFast based on the optimized IVIM protocol can be relevant biomarkers to distinguish between nonadvanced and advanced fibrosis. © 2014 Wiley Periodicals, Inc.
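IVIM quantification rests on fitting a bi-exponential signal model at the selected b-values. A sketch using the four optimized b-values reported above; the tissue parameter values are hypothetical liver-like numbers, not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, f, d_fast, d_slow):
    """Bi-exponential IVIM signal model: perfusion fraction f decays
    with pseudo-diffusion d_fast, the rest with tissue diffusion d_slow."""
    return s0 * (f * np.exp(-b * d_fast) + (1.0 - f) * np.exp(-b * d_slow))

# The four optimized b-values from the abstract (s/mm^2).
b = np.array([0.0, 12.0, 82.0, 1310.0])

# Synthetic noiseless signal with hypothetical parameter values.
signal = ivim(b, 1.0, 0.25, 0.05, 0.0011)

popt, _ = curve_fit(ivim, b, signal,
                    p0=[1.0, 0.2, 0.03, 0.001],
                    bounds=([0.5, 0.0, 0.005, 1e-4],
                            [1.5, 0.6, 0.5, 0.01]))
s0_est, f_est, d_fast_est, d_slow_est = popt
residual = np.max(np.abs(ivim(b, *popt) - signal))
```

With four unknowns and four b-values the fit is exactly determined; the Cramer-Rao analysis in the abstract is what justifies choosing these particular b-values so that the fit stays stable once noise is present.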
NASA Astrophysics Data System (ADS)
Norton, P. A., II; Haj, A. E., Jr.
2014-12-01
The United States Geological Survey is currently developing a National Hydrologic Model (NHM) to support and facilitate coordinated and consistent hydrologic modeling efforts at the scale of the continental United States. As part of this effort, the Geospatial Fabric (GF) for the NHM was created. The GF is a database that contains parameters derived from datasets that characterize the physical features of watersheds. The GF was used to aggregate catchments and flowlines defined in the National Hydrography Dataset Plus dataset for more than 100,000 hydrologic response units (HRUs), and to establish initial parameter values for input to the Precipitation-Runoff Modeling System (PRMS). Many parameter values are adjusted in PRMS using an automated calibration process. Using these adjusted parameter values, the PRMS model estimated variables such as evapotranspiration (ET), potential evapotranspiration (PET), snow-covered area (SCA), and snow water equivalent (SWE). In order to evaluate the effectiveness of parameter calibration, and model performance in general, several satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) gridded datasets including ET, PET, SCA, and SWE were compared to PRMS-simulated values. The MODIS and SNODAS data were spatially averaged for each HRU, and compared to PRMS-simulated ET, PET, SCA, and SWE values for each HRU in the Upper Missouri River watershed. Default initial GF parameter values and PRMS calibration ranges were evaluated. Evaluation results, and the use of MODIS and SNODAS datasets to update GF parameter values and PRMS calibration ranges, are presented and discussed.
Value as a parameter to consider in operational strategies for CSP plants
NASA Astrophysics Data System (ADS)
de Meyer, Oelof; Dinter, Frank; Govender, Saneshan
2017-06-01
This paper introduced a value parameter to consider when analyzing operational strategies for CSP plants. The electric system in South Africa, used as a case study, is severely constrained, with an influx of renewables in the early phase of deployment. The energy demand curve for the system is analyzed, showing the total wind and solar photovoltaic contributions for winter and summer. Due to the intermittent nature and meteorological operating conditions of wind and solar photovoltaic plants, the value of CSP plants within the electric system is introduced. Analyzing CSP plants based on the value parameter alone remains only a philosophical view: currently there is no quantifiable measure to translate this subjective value, and it remains solely the position of the stakeholder. By introducing three other parameters, Cost, Plant, and System, into a holistic representation of the operating strategies of generation plants, the Value parameter can be translated into a quantifiable measure. Using the country's current procurement program as a case study, CSP plants operating under the various PPAs within the Bid Windows are analyzed. The Value Cost Plant System (VCPS) diagram developed is used to quantify the value parameter. This paper concluded that no value is obtained from CSP plants operating under the Bid Window 1 & 2 Power Purchase Agreements. However, by recognizing the dispatchability potential of CSP plants in Bid Windows 3 & 3.5, the value of CSP in the electric system can be quantified using the Value Added Relationship VCPS-diagram. Similarly, ancillary services to the system were analyzed. One of the relationships that has not yet been explored within the industry is an interdependent relationship. It was emphasized that the cost and value structure is shared between the plant and the system. Although this relationship is functional when the plant and system belong to the same entity, additional value is achieved by marginalizing the cost structure.
A tradeoff between the plant performance indicators and system operations is achieved. CSP plants have demonstrated their capabilities by adapting to various operating strategies. With adequate storage capabilities and appropriate system boundary conditions in place, CSP plants offer solutions as base-load generation plants, peaking plants, intermittent generation, and ancillary services to the system. Depending on the electric system structure, the value obtained from CSP plants is quantifiable under the right boundary conditions. An interdependent relationship between the plant and the system attains the most value in operating strategies for CSP.
NASA Astrophysics Data System (ADS)
Kvinnsland, Yngve; Muren, Ludvig Paul; Dahl, Olav
2004-08-01
Calculations of normal tissue complication probability (NTCP) values for the rectum are difficult because it is a hollow, non-rigid organ. Finding the true cumulative dose distribution for a number of treatment fractions requires a CT scan before each treatment fraction. This is labour intensive, and several surrogate distributions have therefore been suggested, such as dose wall histograms, dose surface histograms and histograms for the solid rectum, with and without margins. In this study, a Monte Carlo method is used to investigate the relationships, in terms of equivalent uniform dose, between the cumulative dose distributions based on all treatment fractions and the above-mentioned histograms that are based on one CT scan only. Furthermore, the effect of a specific choice of histogram on estimates of the volume parameter of the probit NTCP model was investigated. It was found that the solid rectum and the rectum wall histograms (without margins) gave equivalent uniform doses with an expected value close to the values calculated from the cumulative dose distributions in the rectum wall. With the number of patients available in this study, the standard deviations of the estimates of the volume parameter were large, and it was not possible to decide which volume gave the best estimates of the volume parameter, although there were distinct differences in the mean values obtained.
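The equivalent-uniform-dose comparison and the probit NTCP model can be sketched as follows. The DVH and the parameter values (a, D50, m) are illustrative assumptions, not values from the study:

```python
import math

def gEUD(dose_bins, volume_fractions, a):
    """Generalized equivalent uniform dose from a differential DVH.
    `a` is the volume-effect parameter (large `a` emphasizes hot spots,
    as appropriate for a serial-like organ such as the rectum)."""
    assert abs(sum(volume_fractions) - 1.0) < 1e-9
    return sum(v * d ** a
               for d, v in zip(dose_bins, volume_fractions)) ** (1.0 / a)

def ntcp_probit(eud, d50, m):
    """Probit (Lyman-type) NTCP model: standard normal CDF of
    (EUD - D50) / (m * D50)."""
    t = (eud - d50) / (m * d50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical rectal-wall DVH: 40% of the wall at 30 Gy, 40% at
# 55 Gy, 20% at 70 Gy; all parameter values are illustrative.
eud = gEUD([30.0, 55.0, 70.0], [0.4, 0.4, 0.2], a=8.0)
p = ntcp_probit(eud, d50=80.0, m=0.15)
```

Comparing surrogate histograms then amounts to computing `gEUD` for each (solid rectum, wall, surface, with/without margins) and checking how far each falls from the value based on the cumulative multi-fraction distribution.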
A Gaussian Approximation Approach for Value of Information Analysis.
Jalal, Hawre; Alarid-Escudero, Fernando
2018-02-01
Most decisions are associated with uncertainty. Value of information (VOI) analysis quantifies the opportunity loss associated with choosing a suboptimal intervention based on current imperfect information. VOI can inform the value of collecting additional information, resource allocation, research prioritization, and future research designs. However, in practice, VOI remains underused due to many conceptual and computational challenges associated with its application. Expected value of sample information (EVSI) is rooted in Bayesian statistical decision theory and measures the value of information from a finite sample. The past few years have witnessed a dramatic growth in computationally efficient methods to calculate EVSI, including metamodeling. However, little research has been done to simplify the experimental data collection step inherent to all EVSI computations, especially for correlated model parameters. This article proposes a general Gaussian approximation (GA) of the traditional Bayesian updating approach based on the original work by Raiffa and Schlaifer to compute EVSI. The proposed approach uses a single probabilistic sensitivity analysis (PSA) data set and involves 2 steps: 1) a linear metamodel step to compute the EVSI on the preposterior distributions and 2) a GA step to compute the preposterior distribution of the parameters of interest. The proposed approach is efficient and can be applied for a wide range of data collection designs involving multiple non-Gaussian parameters and unbalanced study designs. Our approach is particularly useful when the parameters of an economic evaluation are correlated or interact.
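The simplest VOI quantity computable from a single PSA data set is the expected value of perfect information (EVPI); a sketch with a hypothetical two-strategy net-benefit sample (the GA and metamodel steps needed for EVSI are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical PSA samples of net monetary benefit for two strategies;
# strategy B is correlated with A, as model parameters often are.
n = 10000
nb_a = rng.normal(1000.0, 300.0, n)
nb_b = nb_a + rng.normal(50.0, 400.0, n)
nb = np.column_stack([nb_a, nb_b])

# EVPI = E[max over strategies] - max over strategies of E[...]:
# the gap between choosing per-draw (perfect information) and
# choosing once on the current evidence.
ev_current = nb.mean(axis=0).max()
ev_perfect = nb.max(axis=1).mean()
evpi = ev_perfect - ev_current
```

EVPI is an upper bound on any EVSI, so it is often computed first: if EVPI is already below the cost of the proposed study, the metamodel and Gaussian-approximation machinery for EVSI is unnecessary.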
Distributed activation energy model parameters of some Turkish coals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunes, M.; Gunes, S.K.
2008-07-01
A multi-reaction model based on distributed activation energy has been applied to some Turkish coals. The kinetic parameters of the distributed activation energy model were calculated via a computer program developed for this purpose. It was observed that the mean of the activation energy distribution varies between 218 and 248 kJ/mol, and the standard deviation of the activation energy distribution varies between 32 and 70 kJ/mol. The correlations between the kinetic parameters of the distributed activation energy model and certain properties of the coals have been investigated.
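The distributed activation energy model evaluates the unreacted volatile fraction as a Gaussian-weighted integral of first-order conversion terms. A numerical sketch follows; the mean and standard deviation are chosen within the ranges reported above, while the frequency factor k0 and the heating rate are assumptions for illustration:

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ/(mol*K)

def daem_unreacted_fraction(T, heating_rate, e_mean=230.0, e_std=50.0,
                            k0=1.0e13):
    """Unreacted volatile fraction at temperature T (K) under a constant
    heating rate (K/s), for a Gaussian distribution of activation
    energies (kJ/mol)."""
    E = np.linspace(e_mean - 4.0 * e_std, e_mean + 4.0 * e_std, 400)
    dE = E[1] - E[0]
    f = np.exp(-0.5 * ((E - e_mean) / e_std) ** 2) / (e_std * np.sqrt(2.0 * np.pi))
    # Inner temperature integral of the Arrhenius rate for each E.
    Tg = np.linspace(300.0, T, 500)
    dT = Tg[1] - Tg[0]
    inner = np.exp(-E[:, None] / (R * Tg[None, :])).sum(axis=1) * dT
    psi = np.exp(-(k0 / heating_rate) * inner)  # survival at each E
    return float((f * psi).sum() * dE)

beta = 10.0 / 60.0  # assumed 10 K/min heating rate, in K/s
x700 = daem_unreacted_fraction(700.0, beta)
x1100 = daem_unreacted_fraction(1100.0, beta)
```

Fitting the model means adjusting `e_mean`, `e_std`, and `k0` until the predicted conversion curve matches thermogravimetric data, which is what the computer program mentioned in the abstract does.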
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
NASA Astrophysics Data System (ADS)
Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang
2015-03-01
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, the effects of which on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ±0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (<2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer by 4 cm than the true value, dose errors of up to 160% were found. The results answer the important question of the level of accuracy to which each input parameter must be determined in order to obtain accurate organ dose results.
Density functional calculations of the Mössbauer parameters in hexagonal ferrite SrFe12O19
NASA Astrophysics Data System (ADS)
Ikeno, Hidekazu
2018-03-01
Mössbauer parameters in a magnetoplumbite-type hexagonal ferrite, SrFe12O19, are computed using the all-electron band structure calculation based on the density functional theory. The theoretical isomer shift and quadrupole splitting are consistent with experimentally obtained values. The absolute values of hyperfine splitting parameters are found to be underestimated, but the relative scale can be reproduced. The present results validate the site-dependence of Mössbauer parameters obtained by analyzing experimental spectra of hexagonal ferrites. The results also show the usefulness of theoretical calculations for increasing the reliability of interpretation of the Mössbauer spectra.
The Easy Way of Finding Parameters in IBM (EWofFP-IBM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turkan, Nureddin
E2/M1 multipole mixing ratios of even-even nuclei in the transitional region can be calculated, along with B(E2) and B(M1) values, by using the PHINT and/or NP-BOS codes. Correct energy calculations must be obtained to produce such results, and correct parameter values are needed to calculate the energies. The logic of the codes is based on the mathematical and physical statements describing the interacting boson model (IBM), one of the models of nuclear structure physics. Here, the big problem is to find the best-fitted parameter values of the model. By using the Easy Way of Finding Parameters in IBM (EWofFP-IBM), the best parameter values of the IBM Hamiltonian for {sup 102-110}Pd and {sup 102-110}Ru isotopes were first obtained and then the energies were calculated. It was seen that the calculated results are in good agreement with the experimental ones. In addition, the energy values obtained by using EWofFP-IBM are clearly better than the previous theoretical data.
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; ...
2016-06-01
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosking, Jonathan R. M.; Natarajan, Ramesh
The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value. The computer determines that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values, determines one or more utility parameter values that correspond to that range of weather parameter values, and creates a model which correlates the received and the determined utility parameter values with the corresponding weather parameter values.
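A sketch of the gap-filling step described above: bin the weather variable (say, temperature), flag bins with too few matching utility observations, and impute a demand value for those bins from the nearest well-populated bins. The bin width, minimum count, and averaging rule are assumptions for illustration, not the patent's method.

```python
from collections import defaultdict

def fill_sparse_bins(pairs, bin_width=5.0, min_count=3):
    """pairs: (weather_value, utility_value) observations.
    Returns {bin_start: mean utility value}, imputing under-sampled
    bins from the two nearest adequately sampled neighbours."""
    bins = defaultdict(list)
    for w, u in pairs:
        bins[bin_width * (w // bin_width)].append(u)
    # bins with enough observations keep their own mean
    good = {k: sum(v) / len(v) for k, v in bins.items() if len(v) >= min_count}
    means = {}
    for k in sorted(bins):
        if k in good:
            means[k] = good[k]
        else:
            # determine a value for the sparse range from nearby bins
            neighbours = sorted(good, key=lambda g: abs(g - k))[:2]
            means[k] = sum(good[g] for g in neighbours) / len(neighbours)
    return means
```

For example, a temperature bin holding a single demand reading gets the average of the two closest well-sampled bins.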
Karabagias, Ioannis K; Louppis, Artemis P; Karabournioti, Sofia; Kontakos, Stavros; Papastephanou, Chara; Kontominas, Michael G
2017-02-15
The objective of the present study was: i) to characterize Mediterranean citrus honeys based on conventional physicochemical parameter values, volatile compounds, and mineral content, and ii) to investigate the potential of the above parameters to differentiate citrus honeys according to geographical origin using chemometrics. Thus, 37 citrus honey samples were collected during the 2013 and 2014 harvesting periods from Greece, Egypt, Morocco, and Spain. Conventional physicochemical and CIELAB colour parameters were determined using official methods of analysis and the Commission Internationale de l'Eclairage recommendations, respectively. Minerals were determined using ICP-OES and volatiles using SPME-GC/MS. Results showed that the honey samples analyzed met the standard quality criteria set by the EU and were successfully classified according to geographical origin. Correct classification rates were 97.3% using 8 physicochemical parameter values, 86.5% using 15 volatile compound data, and 83.8% using 13 minerals. Copyright © 2016 Elsevier Ltd. All rights reserved.
An adaptive embedded mesh procedure for leading-edge vortex flows
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.
1989-01-01
A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
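The threshold-selection rule described above, basing the cutoff on the curvature of the curve of flagged-cell count versus threshold, can be sketched as follows. The per-cell refinement parameter values and candidate thresholds are hypothetical; curvature is approximated by the discrete second difference.

```python
def cells_flagged(params, threshold):
    # Cells whose refinement parameter exceeds the threshold get refined
    return sum(1 for p in params if p > threshold)

def knee_threshold(params, candidates):
    """Pick the candidate threshold where the curve of flagged-cell
    count versus threshold bends most sharply (largest curvature,
    approximated by the second difference)."""
    counts = [cells_flagged(params, t) for t in candidates]
    best_i, best_curv = 1, float("-inf")
    for i in range(1, len(candidates) - 1):
        curv = counts[i - 1] - 2 * counts[i] + counts[i + 1]
        if curv > best_curv:
            best_i, best_curv = i, curv
    return candidates[best_i]
```

The knee separates the many cells with near-zero mesh-convergence error from the few, near the vortex and shocks, that genuinely need refinement.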
NASA Astrophysics Data System (ADS)
Kisi, Ozgur; Ay, Murat
2014-05-01
Low, medium and high values of a parameter are very important in climatological, meteorological and hydrological events. Moreover, these values are used to set various design parameters based on scientific considerations and real applications everywhere in the world. With this in mind, a new trend method recently proposed by Şen was applied to the water quality parameters pH, T, EC, Na+, K+, CO3^2-, HCO3^-, Cl^-, SO4^2- and B^3+, and to discharge (Q), recorded at five stations (station numbers and locations: 1535-Sogutluhan (Sivas), 1501-Yamula (Kayseri), 1546-Tuzkoy (Kayseri), 1503-Yahsihan (Kirsehir), and 1533-Inozu (Samsun)) on the Kizilirmak River in Turkey. Low, medium and high values of the parameters were graphically evaluated with this method. For comparison purposes, the Mann-Kendall trend test was also applied to the same data, and the differences between the two trend tests were emphasised. The Şen trend test was found to have several advantages over the Mann-Kendall (MK) test. The results also revealed that the Şen trend test can be used successfully for trend analysis of water quality parameters, especially for evaluating the low, medium and high values of the data.
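A minimal sketch of the two tests being compared, assuming Şen's innovative trend method of splitting the record into two sorted halves and pairing them against the 1:1 line (pairs above the line indicate increase at that magnitude range); the Mann-Kendall part computes only the S statistic, not its significance.

```python
def sen_trend_points(series):
    """Şen's innovative trend analysis: sort the first and second
    halves of the record and pair them; a pair (x, y) with y > x
    indicates an increasing trend at that magnitude range."""
    half = len(series) // 2
    first = sorted(series[:half])
    second = sorted(series[half:2 * half])
    return list(zip(first, second))

def mann_kendall_s(series):
    # Mann-Kendall S statistic: signed count over all ordered pairs
    s, n = 0, len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (series[j] > series[i]) - (series[j] < series[i])
    return s
```

Because Şen's method keeps the paired values themselves, the low, medium and high parts of the scatter can be inspected separately, which is the advantage highlighted in the abstract.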
NASA Astrophysics Data System (ADS)
Voitsekhovskii, A. V.; Nesmelov, S. N.; Dzyadukh, S. M.; Varavin, V. S.; Vasil'ev, V. V.; Dvoretskii, S. A.; Mikhailov, N. N.; Yakushev, M. V.; Sidorov, G. Yu.
2017-06-01
In a temperature range of 9-200 K, the temperature dependences of the differential resistance of the space-charge region in the strong-inversion mode are experimentally studied for MIS structures based on CdxHg1-xTe (x = 0.22-0.40) grown by molecular-beam epitaxy. The effect of various structure parameters (the working-layer composition, the type of substrate, the type of insulator coating, and the presence of a near-surface graded-gap layer) on the value of the product of the differential resistance and the area (R_SCR·A) is studied. It is shown that the values of the product R_SCR·A for MIS structures based on n-CdHgTe grown on a Si(013) substrate are smaller than those for structures based on material grown on a GaAs(013) substrate. The values of the product R_SCR·A for MIS structures based on p-CdHgTe grown on a Si(013) substrate are comparable with those for MIS structures based on p-CdHgTe grown on a GaAs(013) substrate.
Critical levels as applied to ozone for North American forests
Robert C. Musselman
2006-01-01
The United States and Canada have used concentration-based parameters for air quality standards for ozone effects on forests in North America. The European critical levels method for air quality standards uses an exposure-based parameter, a cumulative ozone concentration index with a threshold cutoff value. The critical levels method has not been used in North America...
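The exposure-based European index alluded to above is commonly AOT40: the accumulated hourly ozone exposure over a 40 ppb cutoff, where only the portion of each concentration above the threshold contributes. A one-function sketch under that assumption (growing-season and daylight-hour filtering of the input series is left to the caller):

```python
def aot40(hourly_ppb, threshold=40.0):
    """Cumulative threshold-based exposure index (ppb·h): sum of
    hourly exceedances above the cutoff; hours at or below the
    threshold add nothing."""
    return sum(c - threshold for c in hourly_ppb if c > threshold)
```

This contrasts with the concentration-based North American standards, which look at statistics of the concentrations themselves rather than accumulated exceedance.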
NASA Astrophysics Data System (ADS)
Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan
2016-07-01
This paper proposes an epilepsy detection and closed-loop control strategy based on the Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress epileptic spikes in neural mass models, where the epileptiform spikes are recognized as biomarkers of transitions from normal (interictal) activity to seizure (ictal) activity. The PSO algorithm accurately estimates the time evolution of key model parameters and reliably detects all the epileptic spikes; its estimates of unmeasurable parameters are significantly better than those of the unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by a proportional-integral controller. Finally, numerical simulations illustrate the effectiveness of the proposed method as well as its potential value for model-based early seizure detection and closed-loop treatment design.
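A generic PSO minimizer of the kind used for such parameter estimation can be sketched as follows. The inertia and acceleration coefficients, swarm size, and the bounds handling are common textbook choices, not the authors' settings.

```python
import random

def pso_minimize(objective, bounds, n_particles=20, n_iter=60, seed=3):
    """Minimize objective(p) over box-bounded parameters with a
    basic global-best particle swarm."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5      # inertia and acceleration coefficients
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp each coordinate back into its bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, the objective would measure the mismatch between the neural mass model output and the recorded signal, and one of the recovered parameters would be the excitatory-inhibitory ratio that is thresholded for seizure detection.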
Lopes, Antonio Augusto; dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria
2009-01-01
BACKGROUND: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. OBJECTIVE: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. MATERIALS AND METHODS: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. RESULTS: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P <.001) and between-method (P <.001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. CONCLUSION: The organized matrix allows for rapid generation of replicate parameter estimates, without the errors that can accompany exhaustive manual calculations. PMID:19641642
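The underlying computation is the indirect Fick principle, flow = VO2 / (arterial − venous O2 content difference); feeding several predicted VO2 values through it gives the likely range described above. The oxygen content values and units in the example are illustrative, and the prediction models themselves are not reproduced here.

```python
def fick_flow(vo2_ml_min, art_o2_content, ven_o2_content):
    """Indirect Fick principle: blood flow (L/min) =
    VO2 (mL/min) / arteriovenous O2 content difference (mL O2 / L blood)."""
    return vo2_ml_min / (art_o2_content - ven_o2_content)

def flow_range(vo2_predictions, art_o2_content, ven_o2_content):
    """Replicate estimates: one flow per predicted VO2 value,
    reported as a (lower, upper) likely range."""
    flows = [fick_flow(v, art_o2_content, ven_o2_content)
             for v in vo2_predictions]
    return min(flows), max(flows)
```

With an arteriovenous difference of 40 mL/L, predicted VO2 values of 120-200 mL/min translate into a flow range of 3-5 L/min, which is the kind of interval the spreadsheet matrix reports per parameter.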
NASA Astrophysics Data System (ADS)
Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa
2017-05-01
This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear in parameters (NLP) chaotic system with parametric uncertainties. Firstly, the mathematical model of the robot is deduced by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step length gains and initial conditions. The estimated parameters are updated adaptively according to the error between estimated and true state values. Since the errors of the estimated states and parameters as well as the convergence rates depend significantly on the value of step length gain, this gain should be chosen optimally. Hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is highly encouraging for identification of this NLP chaotic system even if the initial conditions change and the uncertainties increase; therefore, it is reliable to be implemented on a real robot.
Sadeghi-Naini, Ali; Vorauer, Eric; Chin, Lee; Falou, Omar; Tran, William T; Wright, Frances C; Gandhi, Sonal; Yaffe, Martin J; Czarnota, Gregory J
2015-11-01
Changes in textural characteristics of diffuse optical spectroscopic (DOS) functional images, accompanied by alterations in their mean values, are demonstrated here for the first time as early surrogates of ultimate treatment response in locally advanced breast cancer (LABC) patients receiving neoadjuvant chemotherapy (NAC). NAC, as a standard component of treatment for LABC patients, induces measurable heterogeneous changes in tumor metabolism, which were evaluated using DOS-based metabolic maps. This study characterizes the inhomogeneous nature of response development by determining alterations in textural properties of DOS images apparent at early stages of therapy, followed later by gross changes in the mean values of these functional metabolic maps. Twelve LABC patients undergoing NAC were scanned before and at four times after treatment initiation, and tomographic DOS images were reconstructed at each time. Ultimate responses of patients were determined clinically and pathologically, based on a reduction in tumor size and assessment of residual tumor cellularity. The mean-value parameters and textural features were extracted from volumetric DOS images for several functional and metabolic parameters prior to treatment initiation. Changes in these DOS-based biomarkers were also monitored over the course of treatment. The measured biomarkers were applied to differentiate patient responses noninvasively and compared to clinical and pathologic responses. Responding and nonresponding patients demonstrated different changes in DOS-based textural and mean-value parameters during chemotherapy. Whereas none of the biomarkers measured prior to the start of therapy demonstrated a significant difference between the two patient populations, statistically significant differences were observed at week one after treatment initiation using the relative change in contrast/homogeneity of seven functional maps (0.001
Flight data processing with the F-8 adaptive algorithm
NASA Technical Reports Server (NTRS)
Hartmann, G.; Stein, G.; Petersen, K.
1977-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer and surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.
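The parallel-channel idea can be caricatured without a full Kalman filter: score a bank of models, each pinned at a fixed point in parameter space, against the telemetered measurements, and keep the one with the highest Gaussian log-likelihood. The model bank and noise level below are hypothetical stand-ins for the aircraft dynamics.

```python
def best_hypothesis(measurements, models, sigma=1.0):
    """Pick the fixed parameter point whose model best explains the
    data, scored by Gaussian log-likelihood of the residuals (a
    simplified stand-in for a parallel Kalman-filter bank)."""
    def loglik(model):
        return sum(-0.5 * ((z - model(t)) / sigma) ** 2
                   for t, z in enumerate(measurements))
    return max(models, key=lambda name: loglik(models[name]))
```

Selecting the maximum-likelihood channel replaces the iterative parameter search, which is the point of running the channels in parallel at fixed locations.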
Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R
2012-09-10
A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. 
For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
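A minimal sketch of the two ingredients described above, Latin hypercube sampling and likelihood-based reweighting. The Gaussian weighting kernel is an assumption standing in for the paper's prevalence-matching weight, and the model is an arbitrary callable in place of the stochastic herd simulation.

```python
import math
import random

def latin_hypercube(n_samples, bounds, seed=7):
    """One stratified draw per interval in each dimension, with the
    strata independently permuted between dimensions."""
    rng = random.Random(seed)
    dims = []
    for lo, hi in bounds:
        cells = list(range(n_samples))
        rng.shuffle(cells)
        width = (hi - lo) / n_samples
        dims.append([lo + (c + rng.random()) * width for c in cells])
    return list(zip(*dims))

def reweight(samples, model, observed, sigma=1.0):
    """Weight each parameter combination by how well the model run
    reproduces the observed prevalence (Gaussian kernel), then
    normalise the weights to sum to one."""
    raw = [math.exp(-0.5 * ((model(s) - observed) / sigma) ** 2)
           for s in samples]
    total = sum(raw)
    return [w / total for w in raw]
```

Once the weights exist, any model output (for example, the effect of a test-and-cull strategy) can be summarised as a weighted average over the sampled parameter combinations, which is how uncertainty and variability propagate into the predictions.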
A Comparison Between Two OLS-Based Approaches to Estimating Urban Multifractal Parameters
NASA Astrophysics Data System (ADS)
Huang, Lin-Shan; Chen, Yan-Guang
Multifractal theory provides a new spatial analytical tool for urban studies, but many basic problems remain to be solved. Among various pending issues, the most significant one is how to obtain proper multifractal dimension spectra. If an algorithm is improperly used, the parameter spectra will be abnormal. This paper is devoted to investigating two ordinary least squares (OLS)-based approaches for estimating urban multifractal parameters. Using empirical study and comparative analysis, we demonstrate how to utilize the adequate linear regression to calculate multifractal parameters. OLS regression analysis admits two different approaches: in one the intercept is fixed to zero, and in the other the intercept is not constrained. The results of the comparative study show that the zero-intercept regression yields proper multifractal parameter spectra within a certain scale range of moment order, while the common regression method often leads to abnormal multifractal parameter values. We conclude that fixing the intercept to zero is the more advisable regression method for multifractal parameter estimation, and that the shapes of spectral curves and value ranges of fractal parameters can be employed to diagnose urban problems. This research helps scientists understand multifractal models and apply a more reasonable technique to multifractal parameter calculations.
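The two regression variants compared in the paper differ only in whether the intercept is estimated; a minimal sketch of both, on arbitrary illustrative data rather than log-log multifractal scaling points:

```python
def ols(x, y, zero_intercept=False):
    """Least-squares fit of y = a*x (+ b).
    With zero_intercept=True the line is forced through the origin."""
    n = len(x)
    if zero_intercept:
        slope = (sum(xi * yi for xi, yi in zip(x, y))
                 / sum(xi * xi for xi in x))
        return slope, 0.0
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx
```

When the data carry a nonzero intercept the two variants return different slopes, and since the multifractal exponents are read off the slopes, the choice between them directly shapes the estimated parameter spectrum.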
Metamodel-based inverse method for parameter identification: elastic-plastic damage model
NASA Astrophysics Data System (ADS)
Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb
2017-04-01
This article proposes a metamodel-based inverse method for material parameter identification and applies it to elastic-plastic damage model parameter identification. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost of conventional inverse methods. In this method, a Kriging metamodel is constructed from an experimental design in order to model the relationship between the material parameters and the objective function values of the inverse problem, and the optimization procedure is then executed on the metamodel. The application of the presented material model and the proposed identification method to the standard A 2017-T4 tensile test shows that the elastic-plastic damage model adequately describes the material's mechanical behaviour, and that the metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
NASA Astrophysics Data System (ADS)
Chakravarty, S. C.; Nagaraja, Kamsali; Jakowski, N.
2017-03-01
The annual variations of ionospheric Total Electron Content (TEC), F-region peak ionisation (NmF2) and the ionospheric slab thickness (τ) over the Indian region during the low solar activity period of May 2007-April 2008 have been studied. For this purpose, ground-based TEC data obtained from GAGAN measurements and space-based data from the GPS radio occultation technique using CHAMP have been utilised. The results of these independent measurements are combined to derive additional parameters such as the equivalent slab thicknesses of the total and bottom-side ionospheric regions (τT and τB). The one-year hourly average values of all these parameters over the ionospheric anomaly latitude region (10-26°N) are presented here along with statistical error estimates. These results are well suited for use as baseline values during geomagnetically quiet and undisturbed solar conditions.
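The equivalent slab thickness combines the two independent measurements as τ = TEC / NmF2; a one-line sketch with conventional units (the example values in the test are illustrative, not the paper's data):

```python
def slab_thickness_km(tec_tecu, nmf2_per_m3):
    """Equivalent slab thickness τ = TEC / NmF2.
    TEC in TECU (1 TECU = 1e16 electrons/m^2), NmF2 in electrons/m^3;
    the result is in km."""
    return tec_tecu * 1e16 / nmf2_per_m3 / 1000.0
```

This is why combining GAGAN TEC with CHAMP-derived NmF2 profiles yields τ as a derived parameter rather than a direct measurement.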
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
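One of the four SA approaches, standardized regression coefficients, can be sketched as follows. For independent (e.g. Latin hypercube) inputs the coefficient for each parameter reduces approximately to its Pearson correlation with the output, which is what this simplified version computes; the parameter names and data in the test are illustrative.

```python
import math

def src(x_col, y):
    """Approximate standardised regression coefficient for one input:
    regression slope scaled by the std-dev ratio, i.e. the Pearson
    correlation of the input column with the output."""
    n = len(y)
    mx, my = sum(x_col) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x_col, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x_col) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

def rank_parameters(columns, y):
    # Rank named parameter columns by |SRC|, most influential first
    scores = {name: abs(src(col, y)) for name, col in columns.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Ranking parameters by |SRC| gives the importance ordering that the four SA approaches are compared on; the full multivariate SRC additionally regresses on all inputs jointly.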
Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry
2008-05-01
AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment.
Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry
2008-01-01
Background and Aims AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. Methods The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Key Results Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. Conclusions The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment. PMID:17766310
Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.
Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang
2016-01-01
Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.
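The probability-weighted average used to estimate the initial parameter value can be sketched in one function; the sample resistances and weights in the test are illustrative, and the paper's stability measure (extrema versus end points of the fitted curves) is not reproduced here.

```python
def probability_weighted_average(values, weights):
    """Probability-weighted average of repeated measurements of an
    initial parameter (e.g. contact resistance of a relay).
    Weights need not be normalised."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total
```

Weighting by probability rather than averaging uniformly lets measurements taken under more representative conditions dominate the estimated initial value.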
Fafin-Lefevre, Mélanie; Morlais, Fabrice; Guittet, Lydia; Clin, Bénédicte; Launoy, Guy; Galateau-Sallé, Françoise; Plancoulaine, Benoît; Herlin, Paulette; Letourneux, Marc
2011-08-01
To identify which morphologic or densitometric parameters are modified in cell nuclei from bronchopulmonary cancer, based on 18 parameters involving shape, intensity, chromatin, texture, and DNA content, and to develop a bronchopulmonary cancer screening method relying on analysis of sputum sample cell nuclei. A total of 25 sputum samples from controls and 22 bronchial aspiration samples from occupationally exposed patients presenting with bronchopulmonary cancer were used. After Feulgen staining, 18 morphologic and DNA content parameters were measured on cell nuclei via image cytometry. A method was developed for analyzing distribution quantiles, compared with simply interpreting mean values, to characterize morphologic modifications in cell nuclei. Distribution analysis of the parameters enabled us to distinguish 13 of 18 parameters that demonstrated significant differences between controls and cancer cases. These parameters, used alone, enabled us to distinguish the two population types, with both sensitivity and specificity > 70%. Three parameters offered 100% sensitivity and specificity. When mean values offered high sensitivity and specificity, comparable or higher sensitivity and specificity values were observed for at least one of the corresponding quantiles. Analysis of modification in morphologic parameters via distribution analysis proved promising for screening bronchopulmonary cancer from sputum.
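The quantile-versus-mean comparison can be sketched as follows: compute a distribution quantile of a nuclear parameter for each group, then evaluate sensitivity and specificity at a candidate cutoff. The rule of flagging values above the cutoff as malignant is an assumption for illustration, as are the data.

```python
def quantile(values, q):
    """Linear-interpolation quantile, 0 <= q <= 1."""
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

def sens_spec(cases, controls, cutoff):
    """Sensitivity/specificity of flagging a nuclear parameter
    value above the cutoff as malignant."""
    sens = sum(1 for v in cases if v > cutoff) / len(cases)
    spec = sum(1 for v in controls if v <= cutoff) / len(controls)
    return sens, spec
```

Scanning cutoffs over several quantiles of the control distribution, instead of only the mean, is what allows a parameter whose upper tail shifts early to be detected even when its mean barely moves.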
Analytic proof of the existence of the Lorenz attractor in the extended Lorenz model
NASA Astrophysics Data System (ADS)
Ovsyannikov, I. I.; Turaev, D. V.
2017-01-01
We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof for the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of 3D Henon-like diffeomorphisms.
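For orientation, the flow underlying a Lorenz attractor can be explored numerically. The sketch below integrates the classical Lorenz system at the textbook parameter values (sigma = 10, rho = 28, beta = 8/3); it only illustrates the bounded chaotic flow and is in no way the rigorous, computer-free construction the paper provides.

```python
# One RK4 step of the classical Lorenz system (textbook parameters).
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    k1 = f(state)
    k2 = f(tuple(s + dt / 2 * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + dt / 2 * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
for _ in range(5000):          # 25 time units; orbit settles on the attractor
    state = lorenz_step(state, 0.005)
```
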
System and method for regulating resonant inverters
Stevanovic, Ljubisa Dragoljub [Clifton Park, NY; Zane, Regan Andrew [Superior, CO
2007-08-28
A technique is provided for direct digital phase control of resonant inverters based on sensing of one or more parameters of the resonant inverter. The resonant inverter control system includes a switching circuit for applying power signals to the resonant inverter and a sensor for sensing one or more parameters of the resonant inverter. The one or more parameters are representative of a phase angle. The resonant inverter control system also includes a comparator for comparing the one or more parameters to a reference value and a digital controller for determining timing of the one or more parameters and for regulating operation of the switching circuit based upon the timing of the one or more parameters.
Nonlinear Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Everhart, Joel L.; Badavi, Forooz F.
1989-01-01
Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, interactive curve-fitting routine based on a quadratic expansion of the chi-squared statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of parameters of fitting function for which chi-squared is minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
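NLINEAR itself is interactive FORTRAN 77 code; as a rough modern analogue, a Gauss-Newton iteration minimizing the weighted chi-squared for an assumed model y = a·exp(b·x) might look like this. The data, model, and starting values are all synthetic.

```python
# Gauss-Newton fit of y = a*exp(b*x) by weighted least squares (sketch).
import numpy as np

def gauss_newton(x, y, w, a, b, iters=30):
    for _ in range(iters):
        r = y - a * np.exp(b * x)                     # residuals
        J = np.column_stack([np.exp(b * x),           # d(model)/da
                             a * x * np.exp(b * x)])  # d(model)/db
        W = np.diag(w)
        delta = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        a, b = a + delta[0], b + delta[1]
    return a, b

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)            # noise-free synthetic data
w = np.ones_like(x)                  # statistical weights
a_fit, b_fit = gauss_newton(x, y, w, a=2.0, b=1.2)
```
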
Shah, A A; Xing, W W; Triantafyllidis, V
2017-04-01
In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.
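The POD basis construction mentioned above reduces, in its simplest form, to an SVD of a snapshot matrix. The snapshots below are synthetic sine modes standing in for the paper's PDE solutions.

```python
# POD basis via SVD of a snapshot matrix (illustrative sketch).
import numpy as np

def pod_basis(snapshots, r):
    """Columns of `snapshots` are solution snapshots; keep r POD modes."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

x = np.linspace(0.0, np.pi, 50)
snaps = np.column_stack([np.sin(k * x) for k in (1, 2, 3)])
basis, sv = pod_basis(snaps, r=2)
```
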
Modification of surface morphology of Ti6Al4V alloy manufactured by Laser Sintering
NASA Astrophysics Data System (ADS)
Draganovská, Dagmar; Ižariková, Gabriela; Guzanová, Anna; Brezinová, Janette; Koncz, Juraj
2016-06-01
The paper deals with the evaluation of the relation between roughness parameters of Ti6Al4V alloy produced by DMLS and modified by abrasive blasting. Two types of blasting abrasive were used, white corundum and Zirblast, at three levels of air pressure. The effect of pressure on the values of the individual roughness parameters, and the influence of the blasting medium on those parameters for samples blasted with white corundum and Zirblast, were evaluated by ANOVA. Based on the measured values, the correlation matrix was computed, and the statistical significance of the correlations between the monitored parameters was determined from it, along with the correlation coefficients.
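A one-way ANOVA of the kind used above can be written out directly from the sums of squares. The Ra values at three assumed pressure levels are invented for illustration.

```python
# One-way ANOVA F statistic for a roughness parameter across three
# pressure levels (all numbers invented).
import numpy as np

def anova_f(*groups):
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

ra_low = [2.1, 2.3, 2.2, 2.4]    # assumed Ra at low pressure (um)
ra_mid = [3.0, 3.2, 3.1, 3.3]    # assumed Ra at medium pressure
ra_high = [3.9, 4.1, 4.0, 4.2]   # assumed Ra at high pressure
F = anova_f(ra_low, ra_mid, ra_high)
```

A large F relative to the F(2, 9) critical value would indicate a significant pressure effect.
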
An expert system for prediction of aquatic toxicity of contaminants
Hickey, James P.; Aldridge, Andrew J.; Passino, Dora R. May; Frank, Anthony M.; Hushon, Judith M.
1990-01-01
The National Fisheries Research Center-Great Lakes has developed an interactive computer program in muLISP that runs on an IBM-compatible microcomputer and uses a linear solvation energy relationship (LSER) to predict acute toxicity to four representative aquatic species from the detailed structure of an organic molecule. Using the SMILES formalism for a chemical structure, the expert system identifies all structural components and uses a knowledge base of rules based on an LSER to generate four structure-related parameter values. A separate module then relates these values to toxicity. The system is designed for rapid screening of potential chemical hazards before laboratory or field investigations are conducted and can be operated by users with little toxicological background. This is the first expert system based on LSER, relying on the first comprehensive compilation of rules and values for the estimation of LSER parameters.
Robustness analysis of bogie suspension components Pareto optimised values
NASA Astrophysics Data System (ADS)
Mousavi Bideleh, Seyed Milad
2017-08-01
The bogie suspension system of high-speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of a bogie dynamics response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamics response of the vehicle with wear/comfort Pareto optimised values of bogie suspension is robust against uncertainties in the design parameters and that the probability of failure is small for parameter uncertainties with COV up to 0.1.
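Sampling a design parameter lognormally around its nominal value with a prescribed COV, as described above, can be sketched as follows. The nominal stiffness value is an assumption; only the COV-to-lognormal conversion is standard.

```python
# Lognormal samples around a nominal value with a given coefficient of
# variation (COV); the mean of the samples equals the nominal value.
import math, random

def lognormal_around(nominal, cov, n, seed=0):
    rng = random.Random(seed)
    sigma2 = math.log(1.0 + cov ** 2)        # lognormal shape from COV
    mu = math.log(nominal) - sigma2 / 2.0    # keeps the mean at `nominal`
    return [rng.lognormvariate(mu, math.sqrt(sigma2)) for _ in range(n)]

samples = lognormal_around(nominal=1.0e6, cov=0.1, n=20000)  # N/m, assumed
mean = sum(samples) / len(samples)
```
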
Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model
NASA Astrophysics Data System (ADS)
Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.
2013-12-01
We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceed to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, Nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM) contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. 
The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
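The Bayesian calibration step described above can be illustrated with a bare-bones Metropolis sampler. This toy has a single parameter, a Gaussian likelihood with known observation error, and a uniform prior; the 18-parameter ecosystem model is not reproduced.

```python
# Minimal Metropolis sampler: uniform prior on [lo, hi], Gaussian
# likelihood around a synthetic observation (all values invented).
import math, random

def metropolis(loglike, lo, hi, steps=5000, width=0.1, seed=1):
    rng = random.Random(seed)
    theta = 0.5 * (lo + hi)
    chain = []
    for _ in range(steps):
        prop = theta + rng.gauss(0.0, width)
        if lo <= prop <= hi:
            dl = loglike(prop) - loglike(theta)
            if rng.random() < math.exp(min(0.0, dl)):
                theta = prop
        chain.append(theta)
    return chain

obs, sigma = 2.0, 0.2                                  # synthetic data
loglike = lambda t: -0.5 * ((t - obs) / sigma) ** 2
chain = metropolis(loglike, lo=0.0, hi=5.0)
post_mean = sum(chain[1000:]) / len(chain[1000:])      # drop burn-in
```
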
NASA Astrophysics Data System (ADS)
Xiao, Shou-Ne; Wang, Ming-Meng; Hu, Guang-Zhong; Yang, Guang-Wu
2017-09-01
It is difficult to accurately grasp the influence range and transmission paths from the top-level vehicle design requirements to the underlying design parameters. Applying a directed-weighted complex network to the product parameter model is an important method that can clarify the relationships between product parameters and establish the top-down design of a product. The relationships of the product parameters of each node are calculated via a simple path-searching algorithm, and the main design parameters are extracted by analysis and comparison. A uniform definition of the index formula for out-in degree can be provided based on the analysis of out-in-degree width and depth and the control strength of train carriage body parameters. Vehicle gauge, axle load, crosswind and other parameters with higher values of the out-degree index are the most important boundary conditions; the most significant performance indices are the parameters with higher values of the out-in-degree index, including torsional stiffness, maximum testing speed, and service life of the vehicle; the main design parameters comprise train carriage body weight, train weight per extended metre, train height and other parameters with higher values of the in-degree index. The network not only provides theoretical guidance for exploring the relationships of design parameters, but also further enriches the application of the forward design method to high-speed trains.
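The weighted out-degree and in-degree bookkeeping underlying such a network can be sketched directly. The edges below are invented stand-ins for real carbody design-parameter dependencies, with made-up influence weights.

```python
# Weighted out-/in-degree of a directed parameter network (sketch).
def degrees(edges):
    out_deg, in_deg = {}, {}
    for src, dst, w in edges:
        out_deg[src] = out_deg.get(src, 0.0) + w
        in_deg[dst] = in_deg.get(dst, 0.0) + w
    return out_deg, in_deg

edges = [  # (source parameter, dependent parameter, influence weight)
    ("vehicle_gauge", "carbody_width", 0.9),
    ("vehicle_gauge", "carbody_height", 0.7),
    ("axle_load", "carbody_weight", 0.8),
    ("torsional_stiffness", "carbody_weight", 0.5),
]
out_deg, in_deg = degrees(edges)
```

A boundary-condition parameter such as the (hypothetical) "vehicle_gauge" node shows up with high out-degree and no in-degree, matching the ranking logic in the abstract.
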
Determination of Stark parameters by cross-calibration in a multi-element laser-induced plasma
NASA Astrophysics Data System (ADS)
Liu, Hao; Truscott, Benjamin S.; Ashfold, Michael N. R.
2016-05-01
We illustrate a Stark broadening analysis of the electron density Ne and temperature Te in a laser-induced plasma (LIP), using a model free of assumptions regarding local thermodynamic equilibrium (LTE). The method relies on Stark parameters determined also without assuming LTE, which are often unknown and unavailable in the literature. Here, we demonstrate that the necessary values can be obtained in situ by cross-calibration between the spectral lines of different charge states, and even different elements, given determinations of Ne and Te based on appropriate parameters for at least one observed transition. This approach enables essentially free choice between species on which to base the analysis, extending the range over which these properties can be measured and giving improved access to low-density plasmas out of LTE. Because of the availability of suitable tabulated values for several charge states of both Si and C, the example of a SiC LIP is taken to illustrate the consistency and accuracy of the procedure. The cross-calibrated Stark parameters are at least as reliable as values obtained by other means, offering a straightforward route to extending the literature in this area.
Bending of an Infinite beam on a base with two parameters in the absence of a part of the base
NASA Astrophysics Data System (ADS)
Aleksandrovskiy, Maxim; Zaharova, Lidiya
2018-03-01
Currently, in connection with the rapid development of high-rise construction and the improvement of models of the joint operation of high-rise structures and their bases, questions connected with the use of various calculation methods have become topical. The rigour of analytical methods allows a more detailed and accurate characterization of structural behaviour, which affects the reliability of structures and can reduce their cost. In this article, a model with two parameters is used as the computational model of the base; it can effectively take into account the distributive properties of the base by varying the coefficient reflecting the shear parameter. The paper constructs an effective analytical solution of the problem of a beam of infinite length interacting with a two-parameter base containing a void. Using Fourier integral transforms, the original differential equation is reduced to a Fredholm integral equation of the second kind with a degenerate kernel, and all the integrals are solved analytically and explicitly, which increases the accuracy of the computations in comparison with approximate methods. The paper considers the solution of the problem of a beam loaded with a concentrated force applied at the origin, with a fixed value of the length of the dip section. The paper gives an analysis of the resulting values for various values of the coefficient taking into account the cohesion of the ground.
Lung function parameters of healthy Sri Lankan Tamil young adults.
Balasubramaniam, M; Sivapalan, K; Thuvarathipan, R
2014-06-01
To establish reference norms of lung function parameters for healthy Sri Lankan Tamil young adults. Cross-sectional study of Tamil students at the Faculty of Medicine, Jaffna. Healthy non-smoking students of the Sri Lankan Tamil ethnic group were enrolled. Age, height, weight, BMI and spirometric measurements (Micro Quark) were recorded in 267 participants (137 females and 130 males). Height was significantly correlated (p<0.05) with all the lung function parameters except FEV1%, PEFR and MEF75 in males. Prediction equations were derived by regression analysis based on height as an independent variable. Predicted lung function values for a particular age and height were lower than values predicted for Pakistanis, Kelantanese Malaysians and eastern Indians. The values were comparable to south Indians in Madras. Our FVC values of males and VC of females were closer to Sri Lankan Sinhalese. FEV1 and FEF25-75 in males were slightly higher, and FVC, FEV1 and FEF25-75 in females were slightly lower, in Tamils. When mean values were compared, these parameters were significantly higher in Tamil males (p<0.001) and significantly lower in Tamil females (p<0.001). These values will be useful in interpreting lung function parameters of the particular age group, as there are no published norms for Sri Lankan Tamils. However, our study sample was confined to medical students of 20-28 years, which may explain the differences with Sinhalese.
Waveform inversion for orthorhombic anisotropy with P waves: feasibility and resolution
NASA Astrophysics Data System (ADS)
Kazei, Vladimir; Alkhalifah, Tariq
2018-05-01
Various parametrizations have been suggested to simplify inversions of first arrivals, or P waves, in orthorhombic anisotropic media, but the number and type of retrievable parameters have not been decisively determined. We show that only six parameters can be retrieved from the dynamic linearized inversion of P waves. These parameters are different from the six parameters needed to describe the kinematics of P waves. Reflection-based radiation patterns from the P-P scattered waves are remapped into the spectral domain to allow for our resolution analysis based on the effective angle of illumination concept. Singular value decomposition of the spectral sensitivities from various azimuths, offset coverage scenarios and data bandwidths allows us to quantify the resolution of different parametrizations, taking into account the signal-to-noise ratio in a given experiment. According to our singular value analysis, when the primary goal of inversion is determining the velocity of the P waves, gradually adding anisotropy of lower orders (isotropic, vertically transversally isotropic and orthorhombic) in hierarchical parametrization is the best choice. Hierarchical parametrization reduces the trade-off between the parameters and makes gradual introduction of lower anisotropy orders straightforward. When all the anisotropic parameters affecting P-wave propagation need to be retrieved simultaneously, the classic parametrization of orthorhombic medium with elastic stiffness matrix coefficients and density is a better choice for inversion. We provide estimates of the number and set of parameters that can be retrieved from surface seismic data in different acquisition scenarios. To set up an inversion process, the singular values determine the number of parameters that can be inverted and the resolution matrices from the parametrizations can be used to ascertain the set of parameters that can be resolved.
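The singular-value criterion for counting resolvable parameters can be sketched with a synthetic sensitivity matrix. Here a random rank-3 matrix stands in for the spectral sensitivities; the cutoff rule (keep singular values above a noise-derived fraction of the largest) is a common convention, not the paper's exact procedure.

```python
# Count resolvable parameters from the singular values of a
# sensitivity matrix (illustrative sketch).
import numpy as np

def n_resolvable(G, rel_cutoff):
    s = np.linalg.svd(G, compute_uv=False)
    return int(np.sum(s > rel_cutoff * s[0]))

rng = np.random.default_rng(0)
# 40 observations sensitive to only 3 independent parameter directions
B = rng.standard_normal((40, 3))
G = B @ rng.standard_normal((3, 9))   # rank-3 sensitivity over 9 parameters
k = n_resolvable(G, rel_cutoff=1e-8)
```
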
NASA Technical Reports Server (NTRS)
Barrett, Charles A.
1992-01-01
A large body of high temperature cyclic oxidation data generated from tests at NASA Lewis Research Center involving gravimetric/time values for 36 Ni- and Co-base superalloys was reduced to a single attack parameter, Ka, for each run. This Ka value was used to rank the cyclic oxidation resistance of each alloy at 1000, 1100, and 1150 C. These Ka values were also used to derive an estimating equation using multiple linear regression involving log10(Ka) as a function of alloy chemistry and test temperature. This estimating equation has a high degree of fit and could be used to predict cyclic oxidation behavior for similar alloys and to design an optimum high strength Ni-base superalloy with maximum high temperature cyclic oxidation resistance. The critical alloy elements found to be beneficial were Al, Cr, and Ta.
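An estimating equation of the kind described, log10(Ka) regressed on alloy chemistry, can be sketched with ordinary least squares. The compositions and coefficients below are synthetic; only the Al/Cr/Ta element choice follows the abstract.

```python
# Multiple linear regression of log10(Ka) on alloy composition
# (synthetic data; negative coefficients mimic "beneficial" elements).
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 10.0, size=(30, 3))        # wt% Al, Cr, Ta (assumed)
coef_true = np.array([-0.20, -0.10, -0.05])     # beneficial -> lower Ka
log_ka = 1.5 + X @ coef_true                    # noise-free responses

A = np.column_stack([np.ones(len(X)), X])       # intercept + composition
coef, *_ = np.linalg.lstsq(A, log_ka, rcond=None)
```
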
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomlinson, E.T.; deSaussure, G.; Weisbin, C.R.
1977-03-01
The main purpose of the study is the determination of the sensitivity of TRX-2 thermal lattice performance parameters to nuclear cross section data, particularly the epithermal resonance capture cross section of 238U. An energy-dependent sensitivity profile was generated for each of the performance parameters, to the most important cross sections of the various isotopes in the lattice. Uncertainties in the calculated values of the performance parameters due to estimated uncertainties in the basic nuclear data, deduced in this study, were shown to be small compared to the uncertainties in the measured values of the performance parameters and compared to differences among calculations based upon the same data but with different methodologies.
Yakimov, Eugene B
2016-06-01
An approach for predicting the output parameters of a 63Ni-based betavoltaic battery is described. It consists of multilayer Monte Carlo simulation to obtain the depth dependence of the excess carrier generation rate inside the semiconductor converter, a determination of the collection probability based on electron-beam-induced current measurements in an SEM, a calculation of the current induced in the semiconductor converter by beta radiation, and an evaluation of the output parameters using the calculated induced current value. This approach makes it possible to predict the betavoltaic battery parameters and optimize the converter design for any real semiconductor structure and any thickness and specific activity of the beta radiation source. Copyright © 2016 Elsevier Ltd. All rights reserved.
2016-02-01
In addition, the parser updates some parameters based on uncertainties. For example, Analytica was very slow to update Pk values based on... moderate range. The additional security environments helped to fill gaps in lower severity. Weapons effectiveness Pk values were modified to account for two... project is to help improve the value and character of defense resource planning in an era of growing uncertainty and complex strategic challenges.
Microscopic study of spin cut-off factors of nuclear level densities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gholami, M.; Kildir, M.; Behkami, A. N.
Level densities and spin cut-off factors have been investigated within the microscopic approach based on the BCS Hamiltonian. In particular, the spin cut-off parameters have been calculated at neutron binding energies over a large range of nuclear mass using the BCS theory. The spin cut-off parameters σ²(E) have also been obtained from the Gilbert and Cameron expression and from rigid-body calculations. The results were compared with their corresponding macroscopic values. It was found that the values of σ²(E) did not increase smoothly with A as expected based on macroscopic theory. Instead, the values of σ²(E) show structure reflecting the angular momentum of the shell-model orbitals near the Fermi energy.
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. 
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Hematological and serum chemistry norms for sandhill and whooping cranes
Olsen, Glenn H.; Hendricks, M.M.; Dressler, L.E.
2001-01-01
The normal values used as a diagnostic tool and for comparison of cranes were established in the early 1970's. In that early study, no effort was made to look at factors such as age, sex, or subspecies. In addition, during the early study disease problems (primarily disseminated visceral coccidiosis) and nutritional problems were undiagnosed and uncontrolled. For 2 years during the annual health examinations of cranes at the USGS Patuxent Wildlife Research Center (Patuxent), we collected blood from healthy cranes for analysis. We found significant differences between the values reported from the 1970's and the values seen in this study for 8 blood parameters for Florida sandhill cranes (Grus canadensis pratensis), 6 blood parameters for greater sandhill cranes (G. c. tabida), and 6 blood parameters for whooping cranes (Grus americana). In addition, there were significant differences for some hematology and serum chemistry values based on the age of the cranes.
MC3: Multi-core Markov-chain Monte Carlo code
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan
2016-10-01
MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
NASA Astrophysics Data System (ADS)
Chen, Zhangwei; Wang, Xin; Giuliani, Finn; Atkinson, Alan
2015-01-01
Mechanical properties of porous SOFC electrodes are largely determined by their microstructures. Measurements of the elastic properties and microstructural parameters can be achieved by modelling digitally reconstructed 3D volumes based on the real electrode microstructures. However, the reliability of such measurements depends greatly on the processing of the raw images acquired for reconstruction. In this work, the actual microstructures of La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes sintered at an elevated temperature were reconstructed based on dual-beam FIB/SEM tomography. Key microstructural and elastic parameters were estimated and correlated, and their sensitivity to the grayscale threshold value applied in the image segmentation was analysed. The important microstructural parameters included porosity, tortuosity, specific surface area, particle and pore size distributions, and inter-particle neck size distribution, which may affect to varying extents the elastic properties simulated from the microstructures using FEM. Results showed that different threshold ranges resulted in different degrees of sensitivity for a given parameter. The estimated porosity and tortuosity were more sensitive than the surface area to volume ratio. Pore and neck sizes were found to be less sensitive than particle size. Results also showed that the modulus was essentially sensitive to the porosity, which was largely controlled by the threshold value.
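The threshold sensitivity of porosity can be demonstrated on a synthetic grayscale volume: voxels below the threshold are counted as pore, so shifting the threshold shifts the estimated porosity directly. The random volume below stands in for the FIB/SEM reconstruction.

```python
# Porosity vs. segmentation threshold on a synthetic grayscale volume.
import numpy as np

def porosity(volume, threshold):
    """Fraction of voxels classified as pore (below the threshold)."""
    return float(np.mean(volume < threshold))

rng = np.random.default_rng(3)
volume = rng.integers(0, 256, size=(40, 40, 40))   # fake 8-bit volume
p_lo = porosity(volume, 100)
p_hi = porosity(volume, 140)
```
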
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadeghi-Naini, Ali; Falou, Omar; Czarnota, Gregory J., E-mail: Gregory.Czarnota@sunnybrook.ca
2015-11-15
Purpose: Changes in textural characteristics of diffuse optical spectroscopic (DOS) functional images, accompanied by alterations in their mean values, are demonstrated here for the first time as early surrogates of ultimate treatment response in locally advanced breast cancer (LABC) patients receiving neoadjuvant chemotherapy (NAC). NAC, as a standard component of treatment for LABC patients, induces measurable heterogeneous changes in tumor metabolism which were evaluated using DOS-based metabolic maps. This study characterizes the inhomogeneous nature of response development by determining alterations in textural properties of DOS images apparent at early stages of therapy, followed later by gross changes in mean values of these functional metabolic maps. Methods: Twelve LABC patients undergoing NAC were scanned before and at four times after treatment initiation, and tomographic DOS images were reconstructed at each time. Ultimate responses of patients were determined clinically and pathologically, based on a reduction in tumor size and assessment of residual tumor cellularity. The mean-value parameters and textural features were extracted from volumetric DOS images for several functional and metabolic parameters prior to treatment initiation. Changes in these DOS-based biomarkers were also monitored over the course of treatment. The measured biomarkers were applied to differentiate patient responses noninvasively and compared to clinical and pathologic responses. Results: Responding and nonresponding patients demonstrated different changes in DOS-based textural and mean-value parameters during chemotherapy.
Whereas none of the biomarkers measured prior to the start of therapy demonstrated a significant difference between the two patient populations, statistically significant differences were observed at week one after treatment initiation using the relative change in contrast/homogeneity of seven functional maps (0.001 < p < 0.049) and the mean value of water content in tissue (p = 0.010). The cross-validated sensitivity and specificity of these parameters at week one of therapy ranged between 80%–100% and 67%–100%, respectively. Higher levels of statistical significance were exhibited at week four after the start of treatment, with cross-validated sensitivities and specificities ranging between 80% and 100% for three textural and three mean-value parameters. The combination of the textural and mean-value parameters in a "hybrid" profile could better separate the two patient populations early during the course of treatment, with cross-validated sensitivities and specificities of up to 100% (p = 0.001). Conclusions: The results of this study suggest that alterations in textural characteristics of DOS images, in conjunction with changes in their mean values, can noninvasively classify the ultimate clinical and pathologic response of LABC patients to chemotherapy as early as one week after the start of treatment. This provides a basis for using DOS imaging as a tool for therapy personalization.
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize the particle swarm optimizer (PSO) and the opposition-based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, random values are normally used for the acceleration coefficients, and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate the proposed method, both face and iris recognition based on AAPSO with SVM (AAPSO-SVM) are employed. In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBIRIS dataset. In this method, feature extraction is performed first, followed by recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between the proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
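A bare-bones standard PSO loop of the kind being improved upon can be sketched as follows. The objective is a toy stand-in for SVM cross-validation error over (C, gamma), and the adaptive-acceleration variant itself is not reproduced; the inertia and acceleration constants are conventional textbook values.

```python
# Minimal standard PSO minimizing a toy objective (sketch).
import random

def pso(objective, bounds, n_particles=20, iters=60, seed=4):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(*bounds[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest

# Toy "CV error" with minimum at C=1.0, gamma=0.5 (assumed values).
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.5) ** 2
best = pso(err, bounds=[(0.0, 10.0), (0.0, 1.0)])
```
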
Method for the reduction of image content redundancy in large image databases
Tobin, Kenneth William; Karnowski, Thomas P.
2010-03-02
A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between feature vectors of an incoming image being considered for entry into the database and feature vectors associated with the most similar of the stored images. Based on the visual similarity parameter value, it is determined whether to store, or how long to store, the feature vectors associated with the incoming image in the database.
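A minimal sketch of the admission gate described above, assuming cosine similarity as the visual similarity parameter and a fixed threshold `tau` — both assumptions, since the patent leaves the metric and the retention policy open:

```python
import numpy as np

def visual_similarity(incoming, database):
    """Cosine similarity between an incoming image's feature vector and the
    most similar stored vector (a hypothetical stand-in for the patent's metric)."""
    db = np.asarray(database, dtype=float)
    q = np.asarray(incoming, dtype=float)
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
    return float(sims.max())

def admit(incoming, database, tau=0.95):
    """Store the incoming feature vector only if it is not redundant
    with what the database already contains."""
    return visual_similarity(incoming, database) < tau
```

An incoming vector nearly parallel to a stored one is rejected as redundant; a sufficiently novel vector is admitted, which is how the scheme limits image-content redundancy.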
Hierarchical mark-recapture models: a framework for inference about demographic processes
Link, W.A.; Barker, R.J.
2004-01-01
The development of sophisticated mark-recapture models over the last four decades has provided fundamental tools for the study of wildlife populations, allowing reliable inference about population sizes and demographic rates based on clearly formulated models for the sampling processes. Mark-recapture models are now routinely described by large numbers of parameters. These large models provide the next challenge to wildlife modelers: the extraction of signal from noise in large collections of parameters. Pattern among parameters can be described by strong, deterministic relations (as in ultrastructural models) but is more flexibly and credibly modeled using weaker, stochastic relations. Trend in survival rates is not likely to be manifest by a sequence of values falling precisely on a given parametric curve; rather, if we could somehow know the true values, we might anticipate a regression relation between parameters and explanatory variables, in which true value equals signal plus noise. Hierarchical models provide a useful framework for inference about collections of related parameters. Instead of regarding parameters as fixed but unknown quantities, we regard them as realizations of stochastic processes governed by hyperparameters. Inference about demographic processes is based on investigation of these hyperparameters. We advocate the Bayesian paradigm as a natural, mathematically and scientifically sound basis for inference about hierarchical models. We describe analysis of capture-recapture data from an open population based on hierarchical extensions of the Cormack-Jolly-Seber model. In addition to recaptures of marked animals, we model first captures of animals and losses on capture, and are thus able to estimate survival probabilities w (i.e., the complement of death or permanent emigration) and per capita growth rates f (i.e., the sum of recruitment and immigration rates). 
Covariation in these rates, a feature of demographic interest, is explicitly described in the model.
Value-Based Caching in Information-Centric Wireless Body Area Networks
Al-Turjman, Fadi M.; Imran, Muhammad; Vasilakos, Athanasios V.
2017-01-01
We propose a resilient cache replacement approach based on a Value of sensed Information (VoI) policy. To resolve and fetch content when the origin is not available due to isolated in-network nodes (fragmentation) and harsh operational conditions, we exploit a content caching approach. Our approach depends on four functional parameters in sensory Wireless Body Area Networks (WBANs). These four parameters are: age of data based on periodic request, popularity of on-demand requests, communication interference cost, and the duration for which the sensor node is required to operate in active mode to capture the sensed readings. These parameters are considered together to assign a value to the cached data to retain the most valuable information in the cache for prolonged time periods. The higher the value, the longer the duration for which the data will be retained in the cache. This caching strategy provides significant availability for most valuable and difficult to retrieve data in the WBANs. Extensive simulations are performed to compare the proposed scheme against other significant caching schemes in the literature while varying critical aspects in WBANs (e.g., data popularity, cache size, publisher load, connectivity-degree, and severe probabilities of node failures). These simulation results indicate that the proposed VoI-based approach is a valid tool for the retrieval of cached content in disruptive and challenging scenarios, such as the one experienced in WBANs, since it allows the retrieval of content for a long period even while experiencing severe in-network node failures. PMID:28106817
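The scoring-and-eviction mechanism can be sketched directly from the four functional parameters named above. The weighting and the exact form of the value function are assumptions for illustration; the paper's VoI policy is not specified at this level of detail.

```python
def voi(age, popularity, interference_cost, active_time,
        w=(1.0, 1.0, 0.5, 0.5)):
    """Illustrative Value-of-Information score (weights are assumptions)."""
    freshness = 1.0 / (1.0 + age)          # periodically requested data age out
    return (w[0] * freshness
            + w[1] * popularity            # on-demand request popularity
            + w[2] * interference_cost     # costly-to-refetch data are worth keeping
            + w[3] * active_time)          # ditto for data needing long active sensing

class VoICache:
    """Cache that retains the most valuable entries and evicts the least valuable."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}                    # name -> (value, payload)

    def put(self, name, payload, value):
        self.items[name] = (value, payload)
        if len(self.items) > self.capacity:
            victim = min(self.items, key=lambda k: self.items[k][0])
            del self.items[victim]         # evict the lowest-value entry
```

Higher-value content thus survives longer in the cache, which is what lets fragmented WBAN nodes keep serving hard-to-retrieve readings.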
Statistical Bayesian method for reliability evaluation based on ADT data
NASA Astrophysics Data System (ADS)
Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong
2018-05-01
Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are utilized to analyze degradation data, and the latter is the more popular. However, limitations remain, such as an imprecise solution process and inaccurate estimation of the degradation rate, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the commonly adopted solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions at each stress level are calculated, with updating and iteration of the estimated values; third, the lifetime and reliability values are estimated on the basis of the estimated parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
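The backbone of the first two steps — a Wiener degradation process whose drift follows an acceleration model across stress levels — can be sketched numerically. For brevity this uses plain maximum likelihood and least squares rather than the paper's Bayesian prior/posterior updating; the stress levels, threshold D = 10, use-stress s₀ = 0.5 and drift model μ(s) = exp(a + b·s) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true, sigma = -2.0, 1.5, 0.05
stresses = np.array([1.0, 1.5, 2.0])       # accelerated stress levels
dt, n_steps, n_units = 0.1, 200, 5

drift_hat = []
for s in stresses:
    mu = np.exp(a_true + b_true * s)       # acceleration model for the drift
    # Wiener-process degradation increments for several units at this stress
    inc = rng.normal(mu * dt, sigma * np.sqrt(dt), (n_units, n_steps))
    drift_hat.append(inc.mean() / dt)      # MLE of the drift from the increments

# log-linear fit of the acceleration model, then extrapolation to use stress
b_hat, a_hat = np.polyfit(stresses, np.log(drift_hat), 1)
mu_use = np.exp(a_hat + b_hat * 0.5)       # drift at normal stress s0 = 0.5
mean_life = 10.0 / mu_use                  # mean time to reach threshold D = 10
```

Replacing the per-level MLE with conjugate updating of the drift, as the paper proposes, retains the per-level information that a pooled fit would discard.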
NASA Astrophysics Data System (ADS)
Kavimani, V.; Prakash, K. Soorya
2017-11-01
This paper deals with the fabrication of reduced graphene oxide (r-GO) reinforced magnesium metal matrix composite (MMC) through a novel solvent-based powder metallurgy route. Investigations of the basic and functional properties of the developed MMC reveal that the addition of r-GO improves the microhardness up to 64 HV, although a decrease in specific wear rate is also noted. SEM images of the worn-out surfaces clearly show the occurrence of plastic deformation and the presence of wear debris caused by ploughing action. A Taguchi-coupled artificial neural network (ANN) technique is adopted to arrive at optimal values of the input parameters, namely load, reinforcement weight percentage, sliding distance and sliding velocity, and thereby minimize the target output, the specific wear rate. ANOVA of the input parameters reveals that the load acting on the pin has the major influence on specific wear rate (38.85%), followed by r-GO weight percentage (25.82%). The ANN model developed to predict the specific wear rate from the input parameters offers better predictability, with an R-value of 98.4%, than the regression model.
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions, the prior and the posterior; the posterior distribution is influenced by the selection of the prior. Jeffreys' prior is a kind of non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior is combined with the sample information, resulting in the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of the multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the parameter estimates of β and Σ are obtained as the expected values of the marginal posterior distributions, which are multivariate normal for β and inverse Wishart for Σ. However, calculating these expected values involves integrals that are difficult to evaluate. Therefore, an approach is needed that generates random samples according to the posterior distribution of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
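The sampler described above can be sketched directly: under Jeffreys' prior, the conditional of β given Σ is matrix normal around the least-squares estimate with row covariance (XᵀX)⁻¹ and column covariance Σ, and the conditional of Σ given β is inverse Wishart with the residual sum-of-products matrix as scale. The data dimensions and true coefficients below are synthetic.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
n, p, m = 200, 2, 2                       # samples, predictors, responses
X = rng.normal(size=(n, p))
B_true = np.array([[1.0, -1.0], [0.5, 2.0]])
Y = X @ B_true + rng.normal(scale=0.3, size=(n, m))

XtX_inv = np.linalg.inv(X.T @ X)
B_ols = XtX_inv @ X.T @ Y                 # center of the conditional for B
L = np.linalg.cholesky(XtX_inv)

Sigma = np.eye(m)
draws = []
for it in range(400):
    # B | Sigma, Y  ~  matrix normal MN(B_ols, (X'X)^-1, Sigma)
    Z = rng.normal(size=(p, m))
    B = B_ols + L @ Z @ np.linalg.cholesky(Sigma).T
    # Sigma | B, Y  ~  inverse Wishart with the residual SSP matrix as scale
    R = Y - X @ B
    Sigma = invwishart(df=n, scale=R.T @ R).rvs(random_state=rng)
    if it >= 100:                         # discard burn-in draws
        draws.append(B)
B_post = np.mean(draws, axis=0)           # posterior-mean estimate of B
```

Averaging the retained draws approximates the expected value of the marginal posterior, which is exactly the quantity the integral-based route could not evaluate in closed form.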
Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment
NASA Astrophysics Data System (ADS)
Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin
2017-10-01
Mobile payment is becoming more and more popular; however, traditional public-key encryption algorithms place high demands on hardware, making them unsuitable for mobile terminals with limited computing resources, and they are not resistant to quantum computing. This paper studies the quantum-resistant public-key algorithm NTRU by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve this probability: first, increase the value of the parameter q; second, add an authentication condition during the signature phase that enforces the reasonable-signature requirements. Experimental results show that the proposed signature scheme achieves zero leakage of private-key information from the signature value and increases the probability of generating a reasonable signature value. It also improves the signing rate and avoids the propagation of invalid signatures in the network, although the scheme places certain restrictions on parameter selection.
Predicting distant failure in early stage NSCLC treated with SBRT using clinical parameters.
Zhou, Zhiguo; Folkert, Michael; Cannon, Nathan; Iyengar, Puneeth; Westover, Kenneth; Zhang, Yuanyuan; Choy, Hak; Timmerman, Robert; Yan, Jingsheng; Xie, Xian-J; Jiang, Steve; Wang, Jing
2016-06-01
The aim of this study is to predict early distant failure in early stage non-small cell lung cancer (NSCLC) treated with stereotactic body radiation therapy (SBRT) using clinical parameters and machine learning algorithms. The dataset used in this work includes 81 early stage NSCLC patients with at least 6 months of follow-up who underwent SBRT between 2006 and 2012 at a single institution. The clinical parameters (n=18) for each patient include demographic parameters, tumor characteristics, treatment fraction schemes, and pretreatment medications. Three predictive models were constructed based on different machine learning algorithms: (1) artificial neural network (ANN), (2) logistic regression (LR) and (3) support vector machine (SVM). Furthermore, to select an optimal clinical parameter set for model construction, three strategies were adopted: (1) a clonal selection algorithm (CSA) based selection strategy; (2) the sequential forward selection (SFS) method; and (3) a statistical analysis (SA) based strategy. Five-fold cross-validation was used to validate the performance of each predictive model. Accuracy was assessed by the area under the receiver operating characteristic (ROC) curve (AUC); the sensitivity and specificity of the system were also evaluated. The AUCs for ANN, LR and SVM were 0.75, 0.73, and 0.80, respectively. The sensitivity values for ANN, LR and SVM were 71.2%, 72.9% and 83.1%, while the specificity values were 59.1%, 63.6% and 63.6%, respectively. Meanwhile, the CSA based strategy outperformed SFS and SA in terms of AUC, sensitivity and specificity. Based on clinical parameters, the SVM with the CSA optimal parameter set selection strategy achieves better performance than other strategies for predicting distant failure in lung SBRT patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
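The evaluation pipeline (k-fold cross-validation scored by AUC) can be sketched with synthetic data. A Fisher linear discriminant stands in for the SVM, and the feature distribution is an assumption; the rank-sum formula for the AUC is standard.

```python
import numpy as np

def auc(scores, y):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1, n0 = y.sum(), (1 - y).sum()
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

rng = np.random.default_rng(0)
n = 80                                            # cohort size of the same order as the study's
y = rng.integers(0, 2, n)                         # synthetic failure labels
X = rng.normal(size=(n, 4)) + 1.2 * y[:, None]    # synthetic "clinical parameters"

aucs = []
for test_idx in np.array_split(rng.permutation(n), 5):   # 5-fold cross-validation
    train = np.setdiff1d(np.arange(n), test_idx)
    Xt, yt = X[train], y[train]
    # Fisher linear discriminant as a simple stand-in classifier
    mu1, mu0 = Xt[yt == 1].mean(0), Xt[yt == 0].mean(0)
    w = np.linalg.solve(np.cov(Xt.T), mu1 - mu0)
    aucs.append(auc(X[test_idx] @ w, y[test_idx]))
mean_auc = float(np.mean(aucs))
```

Sensitivity and specificity at a chosen operating point, as reported in the abstract, would be read off the same held-out scores by thresholding.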
Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M
2014-02-01
Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model which characterizes the effective impulse response function (IRF) is introduced, which allows the determination of CBF, the arterial transit time (ATT) and T(1,eff). The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models that contain two, four or five unknown parameters. For the two-parameter model, T(1,eff) values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T(1,eff) values; (ii) the proposed three-parameter model achieves the optimal balance between the goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T(1,eff) and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged from different brain regions are close to the literature reports; the estimated T(1,eff) values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment. 
Copyright © 2013 John Wiley & Sons, Ltd.
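The three-parameter fit can be sketched with a simplified Buxton-style kinetic curve in which the amplitude scales with CBF, the signal arrives at ATT and relaxes with T(1,eff). This is not the paper's exact impulse-response parameterization, and the post-labeling delays, blood T1 and true values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def kinetic(t, cbf, att, t1eff, t1b=1.65):
    """Schematic three-parameter ASL kinetic curve: zero before arrival at ATT,
    then a T1eff-governed rise with CBF-proportional amplitude, decaying with
    the blood T1 (a simplified stand-in for the paper's effective IRF model)."""
    t = np.asarray(t, dtype=float)
    rise = 1.0 - np.exp(-np.maximum(t - att, 0.0) / t1eff)
    return cbf * np.exp(-t / t1b) * rise

pld = np.linspace(0.3, 4.0, 30)          # post-labeling delays (s)
truth = (50.0, 1.2, 1.6)                 # CBF-like amplitude, ATT (s), T1eff (s)
signal = kinetic(pld, *truth)            # noiseless synthetic kinetic curve

popt, _ = curve_fit(kinetic, pld, signal, p0=(30.0, 1.0, 1.3),
                    bounds=([0.0, 0.2, 0.5], [200.0, 3.0, 4.0]))
```

With all three parameters free, the fit recovers CBF, ATT and T(1,eff) jointly, which is the balance of flexibility and parsimony the abstract argues for against the two-, four- and five-parameter alternatives.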
A trade-off solution between model resolution and covariance in surface-wave inversion
Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.
2010-01-01
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
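The selection rule can be sketched as: form the singular-value plot of the linearized sensitivity matrix, take the first singular value that approaches zero as the damping parameter λ, and build the model resolution matrix from the resulting filter factors. The numerical threshold for "approaches zero" is an assumption.

```python
import numpy as np

def tradeoff_regularization(G, drop=1e-3):
    """Pick the regularization parameter as the first singular value that
    approaches zero (here: falls below `drop` times the largest singular
    value, an assumed threshold), then form the damped least-squares
    model resolution matrix."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)   # s sorted large -> small
    small = np.flatnonzero(s < drop * s[0])
    lam = s[small[0]] if small.size else s[-1]
    f = s**2 / (s**2 + lam**2)        # damped least-squares filter factors
    R = (Vt.T * f) @ Vt               # model resolution matrix
    return lam, R
```

Well-constrained directions get filter factors near one while the near-null direction is damped to one half, which is the resolution/covariance trade-off the abstract describes.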
Yan, Xu; Zhou, Minxiong; Ying, Lingfang; Yin, Dazhi; Fan, Mingxia; Yang, Guang; Zhou, Yongdi; Song, Fan; Xu, Dongrong
2013-01-01
Diffusion kurtosis imaging (DKI) is a new method of magnetic resonance imaging (MRI) that provides non-Gaussian information that is not available in conventional diffusion tensor imaging (DTI). DKI requires data acquisition at multiple b-values for parameter estimation; this process is usually time-consuming. Therefore, fewer b-values are preferable to expedite acquisition. In this study, we carefully evaluated various acquisition schemas using different numbers and combinations of b-values. Acquisition schemas that sampled b-values that were distributed to two ends were optimized. Compared to conventional schemas using equally spaced b-values (ESB), optimized schemas require fewer b-values to minimize fitting errors in parameter estimation and may thus significantly reduce scanning time. Following a ranked list of optimized schemas resulted from the evaluation, we recommend the 3b schema based on its estimation accuracy and time efficiency, which needs data from only 3 b-values at 0, around 800 and around 2600 s/mm2, respectively. Analyses using voxel-based analysis (VBA) and region-of-interest (ROI) analysis with human DKI datasets support the use of the optimized 3b (0, 1000, 2500 s/mm2) DKI schema in practical clinical applications. PMID:23735303
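The reason three b-values suffice: in the standard DKI signal model, ln S is quadratic in b, so three samples determine S0, D and K exactly. A sketch with typical brain-tissue values (the values themselves are illustrative assumptions):

```python
import numpy as np

# DKI signal model: ln S(b) = ln S0 - b*D + (b^2 * D^2 * K) / 6
b = np.array([0.0, 1000.0, 2500.0])          # the recommended 3b schema (s/mm^2)
S0, D, K = 1.0, 1.1e-3, 0.9                  # illustrative brain-tissue values
S = S0 * np.exp(-b * D + (b**2 * D**2 * K) / 6.0)

# ln S is quadratic in b, so a degree-2 fit through the three points is exact
c2, c1, c0 = np.polyfit(b, np.log(S), 2)
D_hat = -c1                                  # diffusivity from the linear term
K_hat = 6.0 * c2 / D_hat**2                  # kurtosis from the quadratic term
```

Placing the nonzero b-values toward the two ends of the range keeps the quadratic well conditioned, which is the intuition behind the optimized end-weighted schemas.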
Reichardt, J; Hess, M; Macke, A
2000-04-20
Multiple-scattering correction factors for cirrus particle extinction coefficients measured with Raman and high spectral resolution lidars are calculated with a radiative-transfer model. Cirrus particle-ensemble phase functions are computed from single-crystal phase functions derived in a geometrical-optics approximation. Seven crystal types are considered. In cirrus clouds with height-independent particle extinction coefficients the general pattern of the multiple-scattering parameters has a steep onset at cloud base with values of 0.5-0.7 followed by a gradual and monotonic decrease to 0.1-0.2 at cloud top. The larger the scattering particles are, the more gradual is the rate of decrease. Multiple-scattering parameters of complex crystals and of imperfect hexagonal columns and plates can be well approximated by those of projected-area equivalent ice spheres, whereas perfect hexagonal crystals show values as much as 70% higher than those of spheres. The dependencies of the multiple-scattering parameters on cirrus particle spectrum, base height, and geometric depth and on the lidar parameters laser wavelength and receiver field of view, are discussed, and a set of multiple-scattering parameter profiles for the correction of extinction measurements in homogeneous cirrus is provided.
An extended harmonic balance method based on incremental nonlinear control parameters
NASA Astrophysics Data System (ADS)
Khodaparast, Hamed Haddad; Madinei, Hadi; Friswell, Michael I.; Adhikari, Sondipon; Coggon, Simon; Cooper, Jonathan E.
2017-02-01
A new formulation for calculating the steady-state responses of multiple-degree-of-freedom (MDOF) non-linear dynamic systems due to harmonic excitation is developed. This is aimed at solving multi-dimensional nonlinear systems using linear equations. Nonlinearity is parameterised by a set of 'non-linear control parameters' such that the dynamic system is effectively linear for zero values of these parameters and nonlinearity increases with increasing values of these parameters. Two sets of linear equations which are formed from a first-order truncated Taylor series expansion are developed. The first set of linear equations provides the summation of sensitivities of linear system responses with respect to non-linear control parameters and the second set are recursive equations that use the previous responses to update the sensitivities. The obtained sensitivities of steady-state responses are then used to calculate the steady state responses of non-linear dynamic systems in an iterative process. The application and verification of the method are illustrated using a non-linear Micro-Electro-Mechanical System (MEMS) subject to a base harmonic excitation. The non-linear control parameters in these examples are the DC voltages that are applied to the electrodes of the MEMS devices.
NASA Astrophysics Data System (ADS)
Thampi, S. V.; Ravindran, S.; Pant, T. K.; Devasia, C. V.; Sridharan, R.
2008-06-01
In an earlier study, Thampi et al. (2006) showed that the strength and asymmetry of the Equatorial Ionization Anomaly (EIA), obtained well ahead of the onset time of Equatorial Spread F (ESF), have a definite role in the subsequent ESF activity, and a new "forecast parameter" was identified for the prediction of ESF. This paper presents observations of EIA strength and asymmetry from the Indian longitudes during the period from August 2005 to March 2007. These observations are made using the line-of-sight Total Electron Content (TEC) measured by a ground-based beacon receiver located at Trivandrum (8.5° N, 77° E, 0.5° N dip lat) in India. It is seen that the seasonal variability of EIA strength and asymmetry is manifested in the latitudinal gradients obtained using the relative TEC measurements. As a consequence, the "forecast parameter" also displays a definite seasonal pattern. The seasonal variability of the EIA strength and asymmetry and of the "forecast parameter" is discussed in the present paper, and a critical value has been identified for each month/season. The likely "skill factor" of the new parameter is assessed using data for a total of 122 days; when the estimated value of the "forecast parameter" exceeds the critical value, ESF is seen to occur in more than 95% of cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salajegheh, Nima; Abedrabbo, Nader; Pourboghrat, Farhang
An efficient integration algorithm for continuum damage based elastoplastic constitutive equations is implemented in LS-DYNA. The isotropic damage parameter is defined as the ratio of the damaged surface area over the total cross section area of the representative volume element. This parameter is incorporated into the integration algorithm as an internal variable. The developed damage model is then implemented in the FEM code LS-DYNA as a user material subroutine (UMAT). Pure stretch experiments with a hemispherical punch are carried out for copper sheets and the results are compared against the predictions of the implemented damage model. Evaluation of damage parameters is carried out and the optimized values that correctly predicted the failure in the sheet are reported. Prediction of failure in the numerical analysis is performed through element deletion using the critical damage value. The set of failure parameters which accurately predicts the failure behavior in copper sheets compared to experimental data is reported as well.
Investigation on Effect of Material Hardness in High Speed CNC End Milling Process
Dhandapani, N. V.; Thangarasu, V. S.; Sureshkannan, G.
2015-01-01
This research paper analyzes the effects of material properties on surface roughness, material removal rate, and tool wear in high speed CNC end milling of various ferrous and nonferrous materials. It addresses the challenge of making material-specific decisions on the process parameters of spindle speed, feed rate, depth of cut, coolant flow rate, cutting tool material, and type of coating for the cutting tool, for the required quality and quantity of production. Generally, the decision made by the operator on the shop floor is based on values suggested by the tool manufacturer or on trial and error. This paper describes the effect of various parameters on the surface roughness characteristics of the precision machined part. The suggested prediction method is based on experimental analysis of the parameters in different combinations of input conditions, which would benefit industry in standardizing high speed CNC end milling processes. The results provide a basis for selecting parameters to obtain better surface roughness values, as predicted by the case study results. PMID:26881267
Method for Calculating the Optical Diffuse Reflection Coefficient for the Ocular Fundus
NASA Astrophysics Data System (ADS)
Lisenko, S. A.; Kugeiko, M. M.
2016-07-01
We have developed a method for calculating the optical diffuse reflection coefficient for the ocular fundus, taking into account multiple scattering of light in its layers (retina, epithelium, choroid) and multiple reflection of light between layers. The method is based on the formulas for optical "combination" of the layers of the medium, in which the optical parameters of the layers (absorption and scattering coefficients) are replaced by some effective values, different for cases of directional and diffuse illumination of the layer. Coefficients relating the effective optical parameters of the layers and the actual values were established based on the results of a Monte Carlo numerical simulation of radiation transport in the medium. We estimate the uncertainties in retrieval of the structural and morphological parameters for the fundus from its diffuse reflectance spectrum using our method. We show that the simulated spectra correspond to the experimental data and that the estimates of the fundus parameters obtained as a result of solving the inverse problem are reasonable.
Impacts of different types of measurements on estimating unsaturated flow parameters
NASA Astrophysics Data System (ADS)
Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru
2015-05-01
This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
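A minimal EnKF parameter-estimation sketch in one dimension: a single exponential decay stands in for the unsaturated-flow simulator, and the prior spread, observation error and true rate are assumptions. Each observation updates the parameter ensemble through the cross-covariance between the parameter and the predicted observation, which is the mechanism by which the different data types above inform the hydraulic parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
k_true, obs_err = 0.8, 0.02
times = np.linspace(0.2, 3.0, 15)
# noisy observations of a stand-in "flow model" y(t) = exp(-k t)
obs = np.exp(-k_true * times) + rng.normal(0, obs_err, times.size)

ens = rng.normal(0.4, 0.3, 200)               # prior parameter ensemble
for t, d in zip(times, obs):
    h = np.exp(-ens * t)                      # predicted observation per member
    cov_kh = np.cov(ens, h)[0, 1]             # parameter-observation covariance
    gain = cov_kh / (np.var(h, ddof=1) + obs_err**2)   # Kalman gain
    # perturbed-observation update of every ensemble member
    ens = ens + gain * (d + rng.normal(0, obs_err, ens.size) - h)
k_hat = float(ens.mean())
```

Smaller `obs_err` tightens the gain toward the data and speeds convergence to the true value, mirroring the paper's finding that smaller measurement errors converge faster.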
Evaluation of weather-based rice yield models in India.
Sudharsan, D; Adinarayana, J; Reddy, D Raji; Sreenivas, G; Ninomiya, S; Hirafuji, M; Kiura, T; Tanaka, K; Desai, U B; Merchant, S N
2013-01-01
The objective of this study was to compare two different rice simulation models, one standalone (the Decision Support System for Agrotechnology Transfer [DSSAT]) and one web based (the SImulation Model for RIce-Weather relations [SIMRIW]), using agrometeorological data and agronomic parameters for estimation of rice crop production in the southern semi-arid tropics of India. Studies were carried out on the BPT5204 rice variety to evaluate the two crop simulation models. Long-term experiments were conducted on a research farm of Acharya N G Ranga Agricultural University (ANGRAU), Hyderabad, India. Initially, 4 years (1994-1997) of data with weather parameters from a local weather station were used to compare DSSAT-simulated results with observed values. Linear regression models used for this purpose showed a close relationship between DSSAT and observed yield. Subsequently, yield comparisons were also carried out between SIMRIW and DSSAT and validated against actual observed values. As the correlation coefficients of the SIMRIW simulations were within acceptable limits, further rice experiments in the monsoon (Kharif) and post-monsoon (Rabi) agricultural seasons (2009, 2010 and 2011) were carried out with a location-specific distributed sensor network system. These proximal systems help to simulate dry weight, leaf area index and potential yield with the Java based SIMRIW on a daily/weekly/monthly/seasonal basis. These dynamic parameters are useful to the farming community for necessary decision making in a ubiquitous manner. However, SIMRIW requires fine tuning for better results/decision making.
A New Goodness-of-Fit Test for the Weibull Distribution Based on Spacings
1993-03-01
[Abstract unavailable; the extracted text is table-of-contents residue listing tables of Z* test-statistic values and power of the test for various sample sizes N, Weibull shape parameters K (0.5, 1.0, 1.5), and significance levels α from 0.20 through 0.01.]
A simple strategy for varying the restart parameter in GMRES(m)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, A H; Jessup, E R; Kolev, T V
2007-10-02
When solving a system of linear equations with the restarted GMRES method, a fixed restart parameter is typically chosen. We present numerical experiments that demonstrate the beneficial effects of changing the value of the restart parameter in each restart cycle on the total time to solution. We propose a simple strategy for varying the restart parameter and provide some heuristic explanations for its effectiveness based on analysis of the symmetric case.
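The idea of varying the restart length per cycle can be sketched in plain NumPy (a generic textbook GMRES(m), not the authors' code, with a simple alternating schedule standing in for their strategy):

```python
import numpy as np

def gmres_cycle(A, b, x0, m):
    """One GMRES(m) cycle: Arnoldi process plus a small least-squares solve."""
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta == 0.0:
        return x0
    Q = np.zeros((b.size, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                 # happy breakdown
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    # Minimize ||beta*e1 - H y|| over the Krylov subspace
    y = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)[0]
    return x0 + Q[:, :m] @ y

def gmres_varying_restart(A, b, restarts, tol=1e-10):
    """Restarted GMRES where the restart length m may change every cycle."""
    x = np.zeros_like(b)
    for m in restarts:
        x = gmres_cycle(A, b, x, m)
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            break
    return x
```

Passing e.g. `restarts=[4, 8] * 20` alternates the restart parameter between cycles, the kind of schedule whose effect on time-to-solution the paper studies.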
Two statistics for evaluating parameter identifiability and error reduction
Doherty, John; Hunt, Randall J.
2009-01-01
Two statistics are presented that can be used to rank the input parameters of a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations, which in turn allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. It varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in the estimation of a parameter from its pre-calibration level, where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (achievable only if there is no measurement noise). Conceptually it can fall to zero, and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, demonstrates the utility of the statistics. © 2009 Elsevier B.V.
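The identifiability statistic follows directly from the SVD of the weighted sensitivity matrix. The sketch below implements the definition in the abstract (direction cosine between a parameter's unit vector and its projection onto the calibration solution space); the matrix and truncation level in the test are hypothetical inputs, not values from the paper.

```python
import numpy as np

def identifiability(J, n_sv):
    """Parameter identifiability from a weighted sensitivity matrix J
    (rows = observations, columns = parameters).

    The calibration solution space is spanned by the first n_sv right
    singular vectors of J; the identifiability of parameter i is the
    length of the projection of unit vector e_i onto that space, i.e.
    the direction cosine between e_i and its projection.
    """
    _, _, Vt = np.linalg.svd(J, full_matrices=False)
    V = Vt.T                      # columns = right singular vectors
    # ||projection of e_i onto span(V[:, :n_sv])|| = ||V[i, :n_sv]||
    return np.sqrt((V[:, :n_sv] ** 2).sum(axis=1))
```

A parameter to which no observation is sensitive (a zero column of J) scores 0; a parameter fully contained in the solution space scores 1.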
NASA Astrophysics Data System (ADS)
Dushkin, A. V.; Kasatkina, T. I.; Novoseltsev, V. I.; Ivanov, S. V.
2018-03-01
The article proposes a forecasting method that, given values of entropy and of the type I and type II error levels, determines the allowable horizon for forecasting the development of the characteristic parameters of a complex information system. The main feature of the method is that changes in the characteristic parameters of the system's development are expressed as increments in the ratios of its entropy. When a predetermined value of the prediction error ratio, that is, of the system entropy, is reached, the characteristic parameters of the system and the prediction depth in time are estimated. The resulting values of the characteristics are optimal in the sense that, at that moment, the system has the best entropy ratio as a measure of the degree of organization and orderliness of its structure. To construct a method for estimating the prediction depth, it is expedient to use the principle of maximum entropy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miltiadis Alamaniotis; Vivek Agarwal
This paper places itself in the realm of anticipatory systems and envisions monitoring and control methods capable of making predictions over system-critical parameters. Anticipatory systems allow intelligent control of complex systems by predicting their future state. In the current work, an intelligent model aimed at implementing anticipatory monitoring and control in the energy industry is presented and tested. More particularly, a set of support vector regressors (SVRs) is trained using both historical and observed data. The trained SVRs are used to predict the future value of the system based on current operational system parameters. The predicted values are then input to a fuzzy-logic-based module where they are fused to obtain a single value, i.e., the final system output prediction. The methodology is tested on real turbine degradation datasets. The outcome of the approach presented in this paper highlights its superiority over single support vector regressors. In addition, it is shown that appropriate selection of fuzzy sets and fuzzy rules plays an important role in improving system performance.
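The fusion step can be illustrated with a toy stand-in: weight each regressor's prediction by a fuzzy membership value and combine into one output. The triangular membership function and every number below are illustrative assumptions, not the paper's actual fuzzy sets or rules.

```python
import numpy as np

def fuse_predictions(preds, centers, width):
    """Fuse several regressors' predictions of a system parameter into one
    value, weighting each prediction with a triangular fuzzy membership
    centered on that regressor's trusted operating point. A toy stand-in
    for the paper's fuzzy-logic fusion module.
    """
    preds = np.asarray(preds, dtype=float)
    centers = np.asarray(centers, dtype=float)
    # Triangular membership: 1 at the center, falling to 0 at +/- width
    w = np.clip(1.0 - np.abs(preds - centers) / width, 0.0, None)
    if w.sum() == 0.0:            # no membership support: fall back to mean
        return float(preds.mean())
    return float((w * preds).sum() / w.sum())

# Three hypothetical SVR outputs; the third is an outlier and gets zero weight.
fused = fuse_predictions([1.0, 1.1, 3.0], centers=[1.0, 1.0, 1.0], width=1.0)
```

The outlier-suppressing behavior is one reason a fused output can beat any single regressor, as the paper reports.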
Eikendal, Anouk L M; Bots, Michiel L; Haaring, Cees; Saam, Tobias; van der Geest, Rob J; Westenberg, Jos J M; den Ruijter, Hester M; Hoefer, Imo E; Leiner, Tim
2016-01-01
Reference values for morphological and functional parameters of the cardiovascular system in early life are relevant since they may help to identify young adults who fall outside the physiological range of arterial and cardiac ageing. This study provides age- and sex-specific reference values for aortic wall characteristics, cardiac function parameters and aortic pulse wave velocity (PWV) in a population-based sample of healthy, young adults using magnetic resonance (MR) imaging. In 131 randomly selected healthy, young adults aged between 25 and 35 years (mean age 31.8 years, 63 men) of the general-population-based Atherosclerosis-Monitoring-and-Biomarker-measurements-In-The-YOuNg (AMBITYON) study, descending thoracic aortic dimensions and wall thickness, thoracic aortic PWV and cardiac function parameters were measured using a 3.0T MR system. Age- and sex-specific reference values were generated using dedicated software. Differences in reference values between two age groups (25-30 and 30-35 years) and both sexes were tested. Aortic diameters and areas were higher in the older age group (all p<0.007). Moreover, aortic dimensions, left ventricular mass, left and right ventricular volumes and cardiac output were lower in women than in men (all p<0.001). For mean and maximum aortic wall thickness, left and right ejection fraction and aortic PWV we did not observe a significant age or sex effect. This study provides age- and sex-specific reference values for cardiovascular MR parameters in healthy, young Caucasian adults. These may aid in MR-guided pre-clinical identification of young adults who fall outside the physiological range of arterial and cardiac ageing.
Corridor of existence of thermodynamically consistent solution of the Ornstein-Zernike equation.
Vorob'ev, V S; Martynov, G A
2007-07-14
We obtain the exact equation for a correction to the Ornstein-Zernike (OZ) equation based on the assumption of the uniqueness of thermodynamic functions. We show that this equation reduces to a differential equation with one arbitrary parameter for the hard sphere model. The compressibility factor within narrow limits of this parameter's variation can either coincide with one of the formulas obtained on the basis of analytical solutions of the OZ equation or assume all intermediate values lying in a corridor between these solutions. In particular, we find the value of this parameter for which the thermodynamically consistent compressibility factor corresponds to the Carnahan-Starling formula.
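For reference, the Carnahan-Starling compressibility factor mentioned above is a simple closed form in the hard-sphere packing fraction η; a standard sanity check is agreement with the virial series Z ≈ 1 + 4η + 10η² at low density.

```python
def z_carnahan_starling(eta):
    """Hard-sphere compressibility factor Z = pV/(NkT) as a function of the
    packing fraction eta (Carnahan-Starling equation of state)."""
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta)**3
```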
Measurements of Cuspal Slope Inclination Angles in Palaeoanthropological Applications
NASA Astrophysics Data System (ADS)
Gaboutchian, A. V.; Knyaz, V. A.; Leybova, N. A.
2017-05-01
Tooth crown morphological features, studied in palaeoanthropology, provide valuable information about human evolution and the development of civilization. Tooth crown morphology represents biological and historical data of high taxonomic value, as it characterizes genetically conditioned tooth relief features that resist substantial change under environmental factors during lifetime. Palaeoanthropological studies are still based mainly on descriptive techniques and manual measurements of a limited number of morphological parameters, and feature evaluation and measurement analysis are expert-based. The development of new methods and techniques in 3D imaging creates a basis for more valuable palaeoanthropological data processing, analysis and distribution. The goals of the presented research are to propose new features for automated odontometry and to explore their applicability to palaeoanthropological studies. A technique for automated measurement of given morphological tooth parameters needed for anthropological study is developed. It is based on using an original photogrammetric system as a 3D tooth model acquisition device and on a set of algorithms for estimating the given tooth parameters.
Using HEC-HMS: Application to Karkheh river basin
USDA-ARS?s Scientific Manuscript database
This paper aims to facilitate the use of the HEC-HMS model using a systematic event-based technique for manual calibration of soil moisture accounting and snowmelt degree-day parameters. Manual calibration, which helps ensure the HEC-HMS parameter values are physically relevant, is often a time-consuming…
Information fusion methods based on physical laws.
Rao, Nageswara S V; Reister, David B; Barhen, Jacob
2005-01-01
We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
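A minimal sketch of the least-violation idea for the special case of a linear law A x = 0: correct the raw measurements by the smallest adjustment that makes the law hold exactly. The mass-balance example is hypothetical, and this closed-form projection captures none of the paper's generality (nonsmooth laws, fuser classes, finite-sample bounds).

```python
import numpy as np

def fuse_with_linear_law(measurements, A):
    """Fuse measured parameter values by projecting them onto the set of
    values exactly satisfying the linear physical law A @ x = 0, i.e. the
    minimum-norm correction: argmin ||x - m|| subject to A x = 0.
    """
    m = np.asarray(measurements, dtype=float)
    # Lagrange-multiplier solution of the constrained least-squares problem
    correction = A.T @ np.linalg.solve(A @ A.T, A @ m)
    return m - correction

# Hypothetical mass balance x1 + x2 = x3, with slightly inconsistent sensors:
A = np.array([[1.0, 1.0, -1.0]])
fused = fuse_with_linear_law([1.1, 2.05, 3.0], A)
```

The fused values satisfy the law exactly while staying as close as possible to the raw measurements.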
Husakova, T; Pavlata, L; Pechova, A; Hauptmanova, K; Pitropovska, E; Tichy, L
2014-09-01
The aim of this study was to establish reference intervals for biochemical parameters in the blood of alpacas on the basis of a large population of clinically healthy animals, and to determine the influence of sex, age and season on nitrogen and lipid metabolites, enzymes, electrolytes, vitamins and minerals in the blood of alpacas. Blood samples were collected from 311 alpacas: 61 males and 201 females >6 months of age, and 49 crias (21 males and 28 females) ⩽6 months of age. The selected farms were located in Central Europe (Czech Republic and Germany). We determined 24 biochemical parameters from blood serum. Results were compared by sex and, for the older group, also by season, that is, by feeding period. We found no highly significant differences (P<0.01) between males and females, with the exception of γ-glutamyl transferase (GGT), alkaline phosphatase (ALP) and cholesterol. We found 15 significantly different parameters between the group of crias ⩽6 months of age and the older alpacas. Based on our findings, we suggest using different reference intervals for most parameters (especially ALP, cholesterol, total protein, globulin, non-esterified fatty acids (NEFA), GGT and phosphorus) for the two above-mentioned age groups. Another important finding is the difference in some parameters between the summer and winter feeding periods in the older group of alpacas. Animals in the summer feeding period have higher values of parameters related to fat mobilization (β-hydroxybutyrate, NEFA) and liver metabolism (bilirubin, alanine aminotransferase). The winter period, with increased feeding of supplements containing higher amounts of fat, vitamins and minerals, is characterized by increased values of cholesterol, triglycerides, vitamins A and E, and some minerals (K, Ca, Mg and Cl) in blood serum.
Clinical laboratory diagnosis of metabolic disturbances may be improved with use of age-based reference values and with consideration of seasonal differences.
Tixier, Florent; Hatt, Mathieu; Le Rest, Catherine Cheze; Le Pogam, Adrien; Corcos, Laurent; Visvikis, Dimitris
2012-01-01
18F-FDG PET measurement of standardized uptake values (SUVs) is increasingly used for monitoring therapy response or predicting outcome. Alternative parameters computed through textural analysis were recently proposed to quantify tumor tracer uptake heterogeneity as significant predictors of response. The primary objective of this study was to evaluate the reproducibility of these heterogeneity measurements. Methods: Double-baseline 18F-FDG PET scans of 16 patients, acquired within a period of 4 days prior to any treatment, were considered. A Bland-Altman analysis was carried out on six parameters based on histogram measurements and 17 heterogeneity parameters based on textural features obtained after discretization with values between 8 and 128. Results: SUVmax and SUVmean reproducibility were similar to previously reported studies, with mean percentage differences of 4.7±19.5% and 5.5±21.2%, respectively. By comparison, better reproducibility was measured for some of the textural features describing local tumor tracer heterogeneity, such as entropy and homogeneity, with mean percentage differences of −2±5.4% and 1.8±11.5%, respectively. Several of the regional tumor heterogeneity parameters, such as the variability in the intensity and size of homogeneous tumor activity distribution regions, had reproducibility similar to the SUV measurements, with 95% confidence intervals of −22.5% to 3.1% and −1.1% to 23.5%, respectively. These parameters were largely insensitive to the discretization range. Conclusion: Several of the parameters derived from textural analysis describing tumor tracer heterogeneity at local and regional scales had reproducibility similar to or better than simple SUV measurements. These results suggest that these FDG PET image-derived parameters, which have already been shown to have predictive and prognostic value in certain cancer models, may be used within the context of therapy response monitoring or predicting patient outcome.
PMID:22454484
Automatic temperature adjustment apparatus
Chaplin, James E.
1985-01-01
An apparatus for increasing the efficiency of a conventional central space heating system is disclosed. The temperature of a fluid heating medium is adjusted based on a measurement of the external temperature and a system parameter. The system parameter is periodically modified based on a closed-loop process that monitors the operation of the heating system. This closed-loop process provides a heating medium temperature value that is very near the optimum for energy efficiency.
Cosmological parameters from a re-analysis of the WMAP 7 year low-resolution maps
NASA Astrophysics Data System (ADS)
Finelli, F.; De Rosa, A.; Gruppuso, A.; Paoletti, D.
2013-06-01
Cosmological parameters from Wilkinson Microwave Anisotropy Probe (WMAP) 7 year data are re-analysed by substituting a pixel-based likelihood estimator for the one delivered publicly by the WMAP team. Our pixel-based estimator handles intensity and polarization exactly and jointly, allowing us to use low-resolution maps and noise covariance matrices in T, Q, U at the same resolution, which in this work is 3.6°. We describe the features and the performance of the code implementing our pixel-based likelihood estimator. We perform a battery of tests on the application of our pixel-based likelihood routine to WMAP publicly available low-resolution foreground-cleaned products, in combination with the WMAP high-ℓ likelihood, reporting the differences in cosmological parameters relative to evaluation by the full WMAP likelihood public package. The differences are due not only to the treatment of polarization, but also to the marginalization over monopole and dipole uncertainties present in the WMAP pixel likelihood code for temperature. The credible central values for the cosmological parameters change below the 1σ level with respect to the evaluation by the full WMAP 7 year likelihood code, with the largest difference being a shift to smaller values of the scalar spectral index nS.
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. Uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for 10% probability of exceedance in 50 years. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is represented by a truncated normal distribution, specified by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of a logic tree, is used to capture the uncertainty in seismic hazard calculations. To generate both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in two kinds of maps: overall uncertainty maps provide a confidence interval for the PGA values, and parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of each logic-tree branch. The branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying it while fixing the others.
However, in this study we do not investigate the sensitivity of the mean hazard results to the choice of GMPE. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
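Sampling a logic-tree branch from a truncated normal distribution can be sketched with SciPy. The slip-rate numbers and the ±3σ truncation below are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_fault_parameter(mean, std, n, clip=3.0, seed=None):
    """Monte Carlo samples of one fault parameter from a normal distribution
    truncated at +/- clip standard deviations about the mean. Note that
    truncnorm takes its bounds in standard-deviation units."""
    return truncnorm.rvs(-clip, clip, loc=mean, scale=std, size=n,
                         random_state=seed)

# Hypothetical slip-rate branch: mean 1.0 mm/yr, sigma 0.3 mm/yr, 200 draws
# (matching the 200 simulations per parameter mentioned above)
slip_rates = sample_fault_parameter(1.0, 0.3, 200, seed=42)
```

Each hazard simulation would then draw one such value per branch, either varying all branches at once (overall uncertainty) or one at a time (sensitivity).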
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-06-01
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
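The conditioning operation at the heart of this workflow is, per mixture component, standard Gaussian conditioning. The sketch below shows that per-component step in plain NumPy; it is not the XDGMM API itself, which additionally reweights the mixture components.

```python
import numpy as np

def condition_gaussian(mu, cov, known_idx, known_vals):
    """Condition a multivariate Gaussian on known values of a subset of its
    dimensions, returning the mean and covariance of the remaining
    ("unknown") dimensions: the per-component operation behind
    conditioning a Gaussian mixture model.
    """
    idx = np.asarray(known_idx)
    unk = np.setdiff1d(np.arange(len(mu)), idx)
    S_uu = cov[np.ix_(unk, unk)]
    S_uk = cov[np.ix_(unk, idx)]
    S_kk = cov[np.ix_(idx, idx)]
    gain = S_uk @ np.linalg.inv(S_kk)
    mu_cond = mu[unk] + gain @ (known_vals - mu[idx])
    cov_cond = S_uu - gain @ S_uk.T
    return mu_cond, cov_cond
```

Fixing the "known" dimensions to observed host-galaxy properties and sampling the conditioned distribution is how unknown supernova parameters would be drawn.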
NASA Astrophysics Data System (ADS)
Zhao, Xiuliang; Cheng, Yong; Wang, Limei; Ji, Shaobo
2017-03-01
Accurate combustion parameters are the foundation of effective closed-loop control of the engine combustion process. Some combustion parameters, including the start of combustion, the location of peak pressure, and the maximum pressure rise rate and its location, can be identified from engine block vibration signals. These signals often include non-combustion-related contributions, which hinder prompt computational acquisition of the combustion parameters. The main such contribution is considered to be the reciprocating inertia force excitation (RIFE) of the engine crank train. A mathematical model is established to describe the response to the RIFE. The parameters of the model are recognized with a pattern recognition algorithm, the response to the RIFE is predicted, and the related contributions are then removed from the measured vibration velocity signals. The combustion parameters are extracted from feature points of the renovated vibration velocity signals. There are angle deviations between the feature points in the vibration velocity signals and those in the cylinder pressure signals. For the start of combustion, a systematic bias is adopted to correct the deviation, and the error bound of the predicted parameter is within 1.1°. To predict the locations of the maximum pressure rise rate and of the peak pressure, algorithms based on the proportion of high-frequency components in the vibration velocity signals are introduced. Test results show that the two parameters can be predicted within 0.7° and 0.8° error bounds, respectively. The increase from the knee point preceding the peak to the peak value in the vibration velocity signals is used to predict the value of the maximum pressure rise rate. Finally, a monitoring framework is proposed to realize combustion parameter prediction.
Satisfactory prediction of combustion parameters in successive cycles is achieved, which validates the proposed methods.
Characteristics of middle and upper tropospheric clouds as deduced from rawinsonde data
NASA Technical Reports Server (NTRS)
Starr, D. D. O.; Cox, S. K.
1982-01-01
The static environment of middle and upper tropospheric clouds is characterized. Computed relative humidity with respect to ice is used to diagnose the presence of a cloud layer. The deduced seasonal mean cloud cover estimates based on this technique are shown to be reasonable. The cases are stratified by season and pressure thickness, and the dry static stability, vertical wind speed shear, and Richardson number are computed for three layers for each case. Mean values for each parameter are presented for each stratification and layer. The relative frequency of occurrence of various structures is presented for each stratification. The observed values and structure of each parameter are quite variable; structures corresponding to any of a number of different conceptual models may be found. Moist adiabatic conditions are not commonly observed, and the stratification based on thickness yields substantially different results for each group.
Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G
2016-05-01
With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally use fixed parameters, such as the process and observation noise variances, whose values have a large influence on overall performance. It has been demonstrated that the optimal values of these parameters differ considerably for different motion intensities. Therefore, in this work we show that by applying frequency analysis to determine motion intensity, and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, providing physicians with reliable objective data they can use in their daily practice. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Themeßl, N.; Hekker, S.; Southworth, J.; Beck, P. G.; Pavlovski, K.; Tkachenko, A.; Angelou, G. C.; Ball, W. H.; Barban, C.; Corsaro, E.; Elsworth, Y.; Handberg, R.; Kallinger, T.
2018-05-01
The internal structures and properties of oscillating red-giant stars can be accurately inferred through their global oscillation modes (asteroseismology). Based on 1460 days of Kepler observations, we perform a thorough asteroseismic study to probe the stellar parameters and evolutionary stages of three red giants in eclipsing binary systems. We present the first detailed analysis of individual oscillation modes of the red-giant components of KIC 8410637, KIC 5640750 and KIC 9540226. We obtain estimates of their asteroseismic masses, radii, mean densities and logarithmic surface gravities by using the asteroseismic scaling relations as well as grid-based modelling. As these red giants are in double-lined eclipsing binaries, it is possible to derive their independent dynamical masses and radii from the orbital solution and compare them with the seismically inferred values. For KIC 5640750 we compute the first spectroscopic orbit based on both components of this system. We use high-resolution spectroscopic data and light curves of the three systems to determine up-to-date values of the dynamical stellar parameters. With our comprehensive set of stellar parameters we explore consistencies between binary analysis and asteroseismic methods, and test the reliability of the well-known scaling relations. For the three red giants under study, we find agreement between dynamical and asteroseismic stellar parameters in cases where the asteroseismic methods account for metallicity, temperature and mass dependence as well as surface effects. We are able to attain agreement from the scaling laws in all three systems if we use Δν_ref,emp = 130.8 ± 0.9 μHz instead of the usual solar reference value.
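The scaling relations referred to above have the standard form M/M⊙ = (νmax/νmax,⊙)³ (Δν/Δν_ref)⁻⁴ (Teff/Teff,⊙)^(3/2) and R/R⊙ = (νmax/νmax,⊙) (Δν/Δν_ref)⁻² (Teff/Teff,⊙)^(1/2). A sketch follows; the solar reference values are conventional choices, and substituting the study's empirical Δν_ref = 130.8 μHz for the solar value shifts the inferred masses and radii.

```python
def scaling_mass_radius(nu_max, delta_nu, teff,
                        nu_max_sun=3090.0, delta_nu_ref=135.1,
                        teff_sun=5777.0):
    """Stellar mass and radius in solar units from the asteroseismic
    scaling relations (nu_max and delta_nu in microhertz, teff in kelvin).
    delta_nu_ref defaults to a conventional solar value; the study above
    obtains agreement with dynamical values using 130.8 muHz instead."""
    mass = (nu_max / nu_max_sun)**3 * (delta_nu / delta_nu_ref)**-4 \
           * (teff / teff_sun)**1.5
    radius = (nu_max / nu_max_sun) * (delta_nu / delta_nu_ref)**-2 \
             * (teff / teff_sun)**0.5
    return mass, radius
```

By construction, plugging in the solar reference values returns one solar mass and one solar radius.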
NASA Astrophysics Data System (ADS)
Doury, Maxime; Dizeux, Alexandre; de Cesare, Alain; Lucidarme, Olivier; Pellot-Barakat, Claire; Bridal, S. Lori; Frouin, Frédérique
2017-02-01
Dynamic contrast-enhanced ultrasound has been proposed for monitoring tumor therapy, as a complement to volume measurements. To assess the variability of perfusion parameters under ideal conditions, four consecutive test-retest studies were acquired in a mouse tumor model, using controlled injections. The impact of mathematical modeling on parameter variability was then investigated. Coefficients of variation (CV) of parameters based on tissue blood volume (BV) and tissue blood flow (BF) were estimated inside 32 sub-regions of the tumors, comparing the log-normal (LN) model with a one-compartment model fed by an arterial input function (AIF) and improved by the introduction of a time delay parameter. Relative perfusion parameters were also estimated by normalization of the LN parameters and normalization of the one-compartment parameters estimated with the AIF, using a reference tissue (RT) region. A direct estimation (rRTd) of relative parameters, based on the one-compartment model without using the AIF, was also obtained by using the kinetics inside the RT region. Results of the test-retest studies show that absolute regional parameters have high CV, whatever the approach, with median values of about 30% for BV and 40% for BF. The positive impact of normalization was established, showing coherent estimation of relative parameters with reduced CV (about 20% for BV and 30% for BF using the rRTd approach). These values were significantly lower (p < 0.05) than the CV of the absolute parameters. The rRTd approach provided the smallest CV and should be preferred for estimating relative perfusion parameters.
Quantifying uncertainty and sensitivity in sea ice models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark
The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
A dual theory of price and value in a meso-scale economic model with stochastic profit rate
NASA Astrophysics Data System (ADS)
Greenblatt, R. E.
2014-12-01
The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are based typically on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represent physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.
Comparison and optimization of radar-based hail detection algorithms in Slovenia
NASA Astrophysics Data System (ADS)
Stržinar, Gregor; Skok, Gregor
2018-05-01
Four commonly used radar-based hail detection algorithms are evaluated and optimized in Slovenia. The algorithms are verified against ground observations of hail at manned stations in the period between May and August, from 2002 to 2010. The algorithms are optimized by determining the optimal values of all possible algorithm parameters. A number of different contingency-table-based scores are evaluated with a combination of Critical Success Index and frequency bias proving to be the best choice for optimization. The best performance indexes are given by Waldvogel and the severe hail index, followed by vertically integrated liquid and maximum radar reflectivity. Using the optimal parameter values, a hail frequency climatology map for the whole of Slovenia is produced. The analysis shows that there is a considerable variability of hail occurrence within the Republic of Slovenia. The hail frequency ranges from almost 0 to 1.7 hail days per year with an average value of about 0.7 hail days per year.
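Both optimization scores come from the standard 2x2 contingency table of forecasts versus observations; a minimal sketch with hypothetical hit, false-alarm and miss counts:

```python
# Contingency-table verification scores used to optimize the hail
# algorithms: hits = forecast & observed, false_alarms = forecast &
# not observed, misses = observed & not forecast.
def critical_success_index(hits, false_alarms, misses):
    return hits / (hits + false_alarms + misses)

def frequency_bias(hits, false_alarms, misses):
    # forecast frequency / observed frequency; 1.0 means unbiased
    return (hits + false_alarms) / (hits + misses)

# hypothetical verification counts for one algorithm parameter setting
csi = critical_success_index(42, 18, 11)
bias = frequency_bias(42, 18, 11)
```

Optimizing CSI alone tends to reward over-forecasting, which is why the study pairs it with frequency bias.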
NASA Astrophysics Data System (ADS)
Lin, Hong; Wang, Xinming; Liang, Kun
2010-10-01
For real-time monitoring and forecasting of ocean red tides, a marine environment monitoring technology based on a double-wavelength airborne lidar system is proposed. An airborne lidar is far more efficient than traditional boat-based measurement. At the same time, this technology can detect multiple parameters of an ocean red tide by using the double-wavelength lidar. It can use the infrared laser to detect the scattering signal under the water and obtain information about the red tide's density and particle size, and it can also use the blue-green laser to detect the Brillouin scattering signal and deduce the temperature and salinity of the seawater. A red tide density detection model is first established by introducing the red tide scattering coefficient based on Mie scattering theory. From Brillouin scattering theory, the relationship between the blue-green laser's Brillouin scattering frequency shift and power values and the seawater temperature and salinity is found; the detection model for seawater temperature and salinity can then be established. The red tide infrared scattering signal is evaluated by simulation, from which the density of the red tide particles can be determined. At the same time, the blue-green laser's Brillouin frequency shift and power values are evaluated by simulation, and the temperature and salinity of the seawater can be deduced. Based on these multiple parameters, the growth of an ocean red tide can be monitored and forecasted.
Parameter estimation for lithium ion batteries
NASA Astrophysics Data System (ADS)
Santhanagopalan, Shriram
With an increase in the demand for lithium-based batteries at a rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models, ranging from simple empirical models to complicated physics-based models, to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics-based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time.
For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of road conditions is important. An algorithm that can predict the SOC in time intervals as small as 5 ms is in critical demand. In such cases, the conventional non-linear estimation procedure is not time-effective. Methodologies exist in the literature, such as those based on fuzzy logic; however, these techniques require a large amount of computational storage space. Consequently, it is not possible to implement such techniques on a micro-chip for integration as part of a real-time device. The Extended Kalman Filter (EKF) based approach presented in this work is a first step towards developing an efficient method to predict, online, the State of Charge of a lithium ion cell based on an electrochemical model. The final part of the dissertation focuses on incorporating uncertainty in parameter values into electrochemical models using polynomial chaos theory (PCT).
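A minimal scalar EKF for SOC can illustrate the idea, assuming a hypothetical linear open-circuit-voltage curve and coulomb-counting dynamics rather than the dissertation's electrochemical model:

```python
import random

random.seed(1)

# Sketch only: hypothetical OCV(soc) = 3.0 + 1.2*soc and internal
# resistance R_INT; the real filter runs on an electrochemical model.
Q_AS = 2.0 * 3600.0   # cell capacity in ampere-seconds (2 Ah)
R_INT = 0.05          # internal resistance, ohm
DT = 0.005            # 5 ms update interval, as in the HEV use case

def ocv(soc):
    return 3.0 + 1.2 * soc

def ekf_step(soc_est, p_est, current, v_meas, q_proc=1e-10, r_meas=1e-4):
    # predict: coulomb counting (discharge current positive)
    soc_pred = soc_est - current * DT / Q_AS
    p_pred = p_est + q_proc
    # update: correct with the measured terminal voltage
    h = 1.2                                   # d(OCV)/d(soc), the Jacobian
    v_pred = ocv(soc_pred) - R_INT * current
    k_gain = p_pred * h / (h * h * p_pred + r_meas)
    soc_new = soc_pred + k_gain * (v_meas - v_pred)
    p_new = (1.0 - k_gain * h) * p_pred
    return soc_new, p_new

# simulate a constant 1 A discharge from a true SOC of 0.8,
# starting the filter from a deliberately wrong guess of 0.5
true_soc, soc_est, p_est = 0.8, 0.5, 1.0
for _ in range(2000):
    current = 1.0
    true_soc -= current * DT / Q_AS
    v_meas = ocv(true_soc) - R_INT * current + random.gauss(0.0, 0.01)
    soc_est, p_est = ekf_step(soc_est, p_est, current, v_meas)
```

The voltage measurement pulls the estimate onto the true SOC within a few updates, which is the property that makes the EKF attractive at millisecond rates.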
NASA Astrophysics Data System (ADS)
Bayrak, Erdem; Yılmaz, Şeyda; Bayrak, Yusuf
2017-05-01
The temporal and spatial variations of the Gutenberg-Richter parameter (b-value) and fractal dimension (DC) during the period 1900-2010 in Western Anatolia were investigated. The study area was divided into 15 source zones based on their tectonic and seismotectonic regimes. We calculated the temporal variation of b and DC values in each region using ZMAP. The temporal variation of these parameters was calculated for the prediction of major earthquakes, and their spatial distribution is related to the stress levels on the faults. We observed that b and DC values change before the major earthquakes in the 15 seismic regions. To evaluate the spatial distribution of b and DC values, a 0.50° × 0.50° grid interval was used. The b-values smaller than 0.70 are related to the Aegean Arc and the Eskisehir Fault; the highest values are related to the Sultandağı and Sandıklı Faults. The fractal correlation dimension varies from 1.65 to 2.60, indicating relatively high DC values across the study area. The lowest DC values are related to the junction area between the Aegean and Cyprus arcs and the Burdur-Fethiye fault zone. Some have concluded that b-values drop immediately before large shocks; others have suggested that temporally stable low-b-value zones identify future large earthquake locations. The results reveal that large earthquakes occur when b decreases and DC increases, suggesting that variations of b and DC can be used as an earthquake precursor. Mapping of b and DC values provides information about the state of stress in the region, i.e. lower b and higher DC values are associated with the epicentral areas of large earthquakes.
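A standard way to estimate the b-value (not necessarily the exact routine used in the ZMAP runs here) is Aki's maximum-likelihood formula, b = log10(e) / (mean(M) - Mc); a sketch with a hypothetical catalog:

```python
import math

# Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
# b-value from magnitudes at or above the completeness magnitude Mc.
def b_value(magnitudes, mc):
    m = [x for x in magnitudes if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - mc)

# hypothetical catalog; mean magnitude here is 3.5, so with Mc = 3.0
# the estimator gives b = log10(e) / 0.5, roughly 0.87
catalog = [3.1, 3.3, 3.6, 4.0, 3.2, 3.4, 3.8, 3.5, 3.9, 3.2]
b = b_value(catalog, mc=3.0)
```

In mapping applications the same estimator is applied to the events falling in each grid cell.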
NASA Astrophysics Data System (ADS)
Zhang, Kun; Ma, Jinzhu; Zhu, Gaofeng; Ma, Ting; Han, Tuo; Feng, Li Li
2017-01-01
Global and regional estimates of daily evapotranspiration are essential to our understanding of the hydrologic cycle and climate change. In this study, we selected the radiation-based Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model and assessed it at a daily time scale using 44 flux towers. These towers are distributed across a wide range of ecosystems: croplands, deciduous broadleaf forest, evergreen broadleaf forest, evergreen needleleaf forest, grasslands, mixed forests, savannas, and shrublands. A regional land surface evapotranspiration model with a relatively simple structure, the PT-JPL model largely uses ecophysiologically based formulations and parameters to relate potential evapotranspiration to actual evapotranspiration. The results using the original model indicate that the model consistently overestimates evapotranspiration in arid regions, likely because water limitation and energy partitioning are misrepresented in the model. By analyzing the physiological processes and determining the sensitive parameters, we identified a series of parameter sets that increase model performance. The model with optimized parameters showed better performance (R2 = 0.2-0.87; Nash-Sutcliffe efficiency (NSE) = 0.1-0.87) at each site than the original model (R2 = 0.19-0.87; NSE = -12.14-0.85). The results of the optimization indicated that the parameter β (water control of soil evaporation) was much lower in arid regions than in relatively humid regions. Furthermore, the optimized value of the parameter m1 (plant control of canopy transpiration) was mostly between 1 and 1.3, slightly lower than the original value. The optimized parameter Topt also correlated well with the actual environmental temperature at each site. We suggest that using optimized parameters with the PT-JPL model could provide an efficient way to improve model performance.
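The NSE score used to rank the parameter sets is straightforward to compute; a sketch with hypothetical daily ET values (NSE = 1 is a perfect fit, NSE < 0 is worse than predicting the observed mean):

```python
# Nash-Sutcliffe efficiency of simulated versus observed values,
# here imagined as daily evapotranspiration at one flux tower.
def nse(obs, sim):
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# hypothetical observed and simulated daily ET (mm/day)
obs = [1.2, 2.5, 3.1, 2.0, 1.8, 2.9]
sim = [1.0, 2.7, 2.8, 2.2, 1.9, 3.0]
score = nse(obs, sim)
```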
Post-Newtonian parameter γ in generalized non-local gravity
NASA Astrophysics Data System (ADS)
Zhang, Xue; Wu, YaBo; Yang, WeiQiang; Zhang, ChengYuan; Chen, BoHai; Zhang, Nan
2017-10-01
We investigate the post-Newtonian parameter γ and derive its formalism in generalized non-local (GNL) gravity, which is the modified theory of general relativity (GR) obtained by adding a term m^(2n-2) R □^(-n) R to the Einstein-Hilbert action. Concretely, based on parametrizing the generalized non-local action in which gravity is described by a series of dynamical scalar fields ϕ_i in addition to the metric tensor g_μν, the post-Newtonian limit is computed, and the effective gravitational constant as well as the post-Newtonian parameters are directly obtained from the generalized non-local gravity. Moreover, by discussing the values of the parametrized post-Newtonian parameter γ, we can compare our expressions and results with those in Hohmann and Järv et al. (2016), as well as with current observational constraints on the value of γ in Will (2006). Hence, we draw restrictions on the non-minimal coupling terms F̄ around their background values.
Load controller and method to enhance effective capacity of a photovoltaic power supply
Perez, Richard
2000-01-01
A load controller and method are provided for maximizing the effective capacity of a non-controllable, renewable power supply coupled to a variable electrical load that is also coupled to a conventional power grid. Effective capacity is enhanced by monitoring the power output of the renewable supply and the loading, and comparing the loading against the power output and a load adjustment threshold determined from an expected peak loading. A value for a load adjustment parameter is calculated by subtracting the renewable supply output and the load adjustment threshold from the current load. This value is then employed to control the variable load in an amount proportional to the value of the load adjustment parameter when the parameter is within a predefined range. By so controlling the load, the effective capacity of the non-controllable, renewable power supply is increased without any attempt at operational feedback control of the renewable supply. The renewable supply may comprise, for example, a photovoltaic power supply or a wind-based power supply.
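A sketch of the claimed control rule, with hypothetical kW figures and a proportional gain and cap that are illustrative rather than from the patent:

```python
# Illustrative load-adjustment rule: shed load in proportion to how
# far the current load exceeds renewable output plus the threshold.
def load_shed_command(current_load, pv_output, threshold,
                      gain=1.0, max_shed=5.0):
    excess = current_load - pv_output - threshold
    if excess <= 0:
        return 0.0                       # inside the dead band: no action
    return min(gain * excess, max_shed)  # proportional, capped

# hypothetical kW figures: 18 kW load, 6 kW PV output, 9 kW threshold
cmd = load_shed_command(current_load=18.0, pv_output=6.0, threshold=9.0)
```

Note the controller never touches the PV supply itself; only the variable load is adjusted, which is the point of the patent's no-feedback design.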
Al-Amri, Mohammad; Al Balushi, Hilal; Mashabi, Abdulrhman
2017-12-01
Self-paced treadmill walking is becoming increasingly popular for gait assessment and re-education in both research and clinical settings, but its day-to-day repeatability is yet to be established. This study scrutinised the test-retest repeatability of key gait parameters obtained from the Gait Real-time Analysis Interactive Lab (GRAIL) system. Twenty-three able-bodied male adults (age: 34.56 ± 5.12 years) completed two separate gait assessments on the GRAIL system, separated by 5 ± 3 days. Key gait kinematic, kinetic, and spatial-temporal parameters were analysed. Intraclass Correlation Coefficients (ICC), the Standard Error of Measurement (SEM), the Minimum Detectable Change (MDC), and the 95% limits of agreement were calculated to evaluate the repeatability of these gait parameters. Day-to-day agreement was excellent (ICCs > 0.87) for spatial-temporal parameters, with low MDC and SEM values (<0.153 and <0.055, respectively). Repeatability was higher for joint kinetic than kinematic parameters, as reflected in the small values of SEM (<0.13 Nm/kg and <3.4°) and MDC (<0.335 Nm/kg and <9.44°). The obtained values of all parameters fell within the 95% limits of agreement. Our findings demonstrate the repeatability of the GRAIL system available in our laboratory. The SEM and MDC values can assist researchers and clinicians in distinguishing 'real' changes in gait performance over time.
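The SEM and MDC values reported here are related by standard formulas (SEM = SD * sqrt(1 - ICC), MDC95 = 1.96 * sqrt(2) * SEM); a sketch with hypothetical walking-speed figures:

```python
import math

# Standard error of measurement and minimal detectable change from a
# between-subject SD and a test-retest ICC (hypothetical values for a
# walking-speed parameter, m/s; not the study's data).
def sem(sd, icc):
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    # 95% minimal detectable change across two measurement sessions
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

speed_sem = sem(sd=0.12, icc=0.90)
speed_mdc = mdc95(sd=0.12, icc=0.90)
```

A change smaller than the MDC cannot be distinguished from day-to-day measurement noise, which is how clinicians would use these tables.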
Gu, Junfei; Yin, Xinyou; Zhang, Chengwei; Wang, Huaqi; Struik, Paul C.
2014-01-01
Background and Aims: Genetic markers can be used in combination with ecophysiological crop models to predict the performance of genotypes. Crop models can estimate the contribution of individual markers to crop performance in given environments. The objectives of this study were to explore the use of crop models to design markers and virtual ideotypes for improving yields of rice (Oryza sativa) under drought stress. Methods: Using the model GECROS, crop yield was dissected into seven easily measured parameters. Loci for these parameters were identified for a rice population of 94 introgression lines (ILs) derived from two parents differing in drought tolerance. Marker-based values of ILs for each of these parameters were estimated from additive allele effects of the loci, and were fed to the model in order to simulate yields of the ILs grown under well-watered and drought conditions and in order to design virtual ideotypes for those conditions. Key Results: To account for genotypic yield differences, it was necessary to parameterize the model for differences in an additional trait 'total crop nitrogen uptake' (Nmax) among the ILs. Genetic variation in Nmax had the most significant effect on yield; five other parameters also significantly influenced yield, but seed weight and leaf photosynthesis did not. Using the marker-based parameter values, GECROS also simulated yield variation among 251 recombinant inbred lines of the same parents. The model-based dissection approach detected more markers than the analysis using only yield per se. Model-based sensitivity analysis ranked all markers for their importance in determining yield differences among the ILs. Virtual ideotypes based on markers identified by modelling had 10–36 % more yield than those based on markers for yield per se. Conclusions: This study outlines a genotype-to-phenotype approach that exploits the potential value of marker-based crop modelling in developing new plant types with high yields.
The approach can provide more markers for selection programmes for specific environments whilst also allowing for prioritization. Crop modelling is thus a powerful tool for marker design for improved rice yields and for ideotyping under contrasting conditions. PMID:24984712
Le Huec, Jean Charles; Hasegawa, Kazuhiro
2016-11-01
Sagittal balance analysis has gained importance, and the measurement of radiographic spinopelvic parameters is now a routine part of many spine surgery interventions. Indeed, surgical correction of lumbar lordosis must be proportional to the pelvic incidence (PI). The compensatory mechanisms [pelvic retroversion with increased pelvic tilt (PT) and decreased thoracic kyphosis] spontaneously reverse after successful surgery. This study is the first to provide 3D standing spinopelvic reference values from a large database of Caucasian (n = 137) and Japanese (n = 131) asymptomatic subjects. The key spinopelvic parameters [e.g., PI, PT, sacral slope (SS)] were comparable in the Japanese and Caucasian populations. Three equations, namely lumbar lordosis based on PI, PT based on PI and SS based on PI, were calculated after linear regression modeling and were comparable in both populations: lumbar lordosis (L1-S1) = 0.54*PI + 27.6, PT = 0.44*PI - 11.4 and SS = 0.54*PI + 11.90. We showed that the key spinopelvic parameters obtained from a large database of healthy subjects were comparable for Caucasian and Japanese populations. The normative values provided in this study and the equations obtained after linear regression modeling could help to estimate pre-operatively the lumbar lordosis restoration and could also be used as guidelines for spinopelvic sagittal balance.
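The three reported regression equations can be evaluated directly; the pelvic incidence below is a hypothetical patient value, and PT + SS should roughly recover PI as a sanity check (by the geometric identity PI = PT + SS):

```python
# Regression equations from the study (all angles in degrees).
def lumbar_lordosis(pi):
    return 0.54 * pi + 27.6

def pelvic_tilt(pi):
    return 0.44 * pi - 11.4

def sacral_slope(pi):
    return 0.54 * pi + 11.9

pi_deg = 52.0   # hypothetical pelvic incidence
ll = lumbar_lordosis(pi_deg)
pt = pelvic_tilt(pi_deg)
ss = sacral_slope(pi_deg)
```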
Determination of the Landau Lifshitz damping parameter of composite magnetic fluids
NASA Astrophysics Data System (ADS)
Fannin, P. C.; Malaescu, I.; Marin, C. N.
2007-01-01
Measurements of the frequency-dependent, complex magnetic susceptibility, χ(ω) = χ′(ω) − iχ″(ω), in the GHz range, are used to investigate the effect which the mixing of two different magnetic fluids has on the value of the damping parameter, α, of the Landau-Lifshitz equation. The magnetic fluid samples investigated in this study were three kerosene-based magnetic fluids, stabilised with oleic acid, denoted as MF1, MF2 and MF3. Sample MF1 was a magnetic fluid with Mn0.6Fe0.4Fe2O4 particles, sample MF2 was a magnetic fluid with Ni0.4Zn0.6Fe2O4 particles and sample MF3 was a composite magnetic fluid obtained by mixing a part of sample MF1 with a part of sample MF2, in a proportion of 1:1. The experimental results revealed that the value of the damping parameter of the composite sample (sample MF3) is between the α values obtained for its constituents (samples MF1 and MF2). Based on the superposition principle, which states that the susceptibility of a magnetic fluid sample is a superposition of individual contributions of the magnetic particles, a theoretical model is proposed. The experimental results are shown to be in close agreement with the theoretical results. This result is potentially useful in the design of microwave-operating materials, in that it enables one to determine a particular value of damping parameter.
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
Value of eddy-covariance data for individual-based, forest gap models
NASA Astrophysics Data System (ADS)
Roedig, Edna; Cuntz, Matthias; Huth, Andreas
2014-05-01
Individual-based forest gap models simulate tree growth and carbon fluxes on large time scales and are a well-established tool for predicting forest dynamics and succession. However, the effect of climatic variables on the processes of such individual-based models is uncertain (e.g. the effect of temperature or soil moisture on gross primary production (GPP)). Commonly, the functional relationships and parameter values that describe the effect of climate variables on model processes are gathered from various vegetation models of different spatial scales, yet their accuracy and parameter values have not been validated at the specific scales of individual-based forest gap models. In this study, we address this uncertainty by linking eddy-covariance (EC) data and a forest gap model. The forest gap model FORMIND is applied to the Norway spruce monoculture forest at Wetzstein in Thuringia, Germany, for the years 2003-2008. The original parameterizations of the climatic functions are adapted according to the EC data, and the time step of the model is reduced to one day in order to match the high-resolution EC data. The FORMIND model uses functional relationships at the individual level, whereas the EC method measures eco-physiological responses at the ecosystem level; however, we assume that in homogeneous stands such as ours, the functional relationships of the two methods are comparable. The model is then validated at the Waldstein spruce forest, Germany. Results show that the functional relationships used in the model are similar to those observed with the EC method. The temperature reduction curve is well reflected in the EC data, though parameter values differ from the originally expected values. For example, at the freezing point the observed GPP is 30% higher than predicted by the forest gap model. The response of observed GPP to soil moisture shows that the permanent wilting point is 7 vol-% lower than the value derived from the literature.
The light response curve, integrated over the canopy and the forest stand, is underestimated compared to the measured data. The EC method measures a yearly carbon balance of 13 mol(CO2) m^-2 for the Wetzstein site. The model with the original parameterization overestimates the yearly carbon balance by nearly 5 mol(CO2) m^-2, while the model with an EC-based parameterization fits the measured data very well. The parameter values derived from EC data are applied to the Waldstein spruce forest and clearly improve estimates of the carbon balance.
Anomaly Monitoring Method for Key Components of Satellite
Fan, Linjun; Xiao, Weidong; Tang, Jun
2014-01-01
This paper presents a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of a failure analysis of lithium-ion batteries (LIBs), we divided LIB failures into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (R_e) and the charge transfer resistance (R_ct) as the key parameters for state estimation. Then, from the actual in-orbit telemetry data of the key parameters of the LIBs, we obtained the actual residual value (R_X) and healthy residual value (R_L) of the LIBs via MSET state estimation, and from these residual values we detected anomaly states via SPRT anomaly detection. Lastly, we conducted an example of AMM for LIBs and, according to the results, validated the feasibility and effectiveness of AMM by comparing it with the threshold detection method (TDM). PMID:24587703
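The SPRT stage can be sketched for Gaussian residuals with a known healthy mean of zero and a hypothesized fault mean; all numbers below (fault mean, sigma, error rates, residual sequences) are illustrative, not the satellite telemetry:

```python
import math

# Sequential Probability Ratio Test on estimation residuals: decide
# between H0 (mean 0, healthy) and H1 (mean m1, anomalous), with
# known residual sigma and target error rates alpha (false alarm)
# and beta (missed alarm).
def sprt(residuals, m1=0.5, sigma=0.2, alpha=0.01, beta=0.01):
    upper = math.log((1.0 - beta) / alpha)   # cross -> accept H1 (anomaly)
    lower = math.log(beta / (1.0 - alpha))   # cross -> accept H0 (healthy)
    llr = 0.0
    for k, r in enumerate(residuals):
        # log-likelihood ratio increment for one Gaussian sample
        llr += (m1 / sigma ** 2) * (r - m1 / 2.0)
        if llr >= upper:
            return "anomaly", k
        if llr <= lower:
            return "healthy", k
    return "undecided", len(residuals) - 1

# hypothetical residual sequences from the MSET stage
healthy = sprt([0.02, -0.05, 0.01, 0.03, -0.02, 0.00, 0.04, -0.01])
faulty = sprt([0.48, 0.55, 0.51, 0.46, 0.52, 0.49, 0.53, 0.50])
```

Unlike a fixed threshold, the SPRT accumulates evidence, so it reaches a decision with guaranteed error rates using as few samples as the data allow.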
Wang, Hai-yi; Su, Zi-hua; Xu, Xiao; Sun, Zhi-peng; Duan, Fei-xue; Song, Yuan-yuan; Li, Lu; Wang, Ying-wei; Ma, Xin; Guo, Ai-tao; Ma, Lin; Ye, Hui-yi
2016-01-01
Pharmacokinetic parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) have been increasingly used to evaluate the permeability of tumor vessels. Histogram metrics are a promising method of quantitative MR imaging that has recently been introduced into the analysis of DCE-MRI pharmacokinetic parameters in oncology because of tumor heterogeneity. In this study, 21 patients with renal cell carcinoma (RCC) underwent paired DCE-MRI studies on a 3.0 T MR system. The extended Tofts model and a population-based arterial input function were used to calculate the kinetic parameters of RCC tumors. The mean value and histogram metrics (Mode, Skewness and Kurtosis) of each pharmacokinetic parameter were generated automatically using ImageJ software. Intra- and inter-observer reproducibility and scan-rescan reproducibility were evaluated using intra-class correlation coefficients (ICCs) and coefficients of variation (CoV). Our results demonstrated that the histogram method (Mode, Skewness and Kurtosis) was not superior to the conventional mean value method for evaluating the reproducibility of DCE-MRI pharmacokinetic parameters (Ktrans & Ve) in renal cell carcinoma; Skewness and Kurtosis in particular showed lower intra-observer, inter-observer and scan-rescan reproducibility than the mean value. Our findings suggest that additional studies are necessary before wide incorporation of histogram metrics into quantitative analysis of DCE-MRI pharmacokinetic parameters. PMID:27380733
Online selective kernel-based temporal difference learning.
Chen, Xingguo; Gao, Yang; Wang, Ruili
2013-12-01
In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large-scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (a kernel-distance-based online sparsification method) is proposed based on selective ensemble learning; it is computationally less complex than other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking whether a sample needs to be added to the dictionary. In addition, based on local validity, a selective kernel-based value function is proposed that selects the best samples from the sample dictionary for the value function approximator. The parameters of the selective kernel-based value function are iteratively updated using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in OSKTD is O(n). Two typical experiments (Maze and Mountain Car) are used to compare OSKTD with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using a kernel-based value function), and the results demonstrate the effectiveness of the proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both the traditional and the up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time than other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD reaches a competitive ultimate optimum compared with the up-to-date algorithms.
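Stripped of the kernel and sparsification machinery, the underlying TD(0) value update can be sketched on a toy deterministic chain (the MDP, step size and discount below are illustrative, not from the paper's experiments):

```python
# Tabular TD(0) update at the heart of TD-style methods:
# V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
# Toy chain MDP: states 0..4, the policy always steps right,
# reward 1 on reaching the terminal state.
N_STATES, ALPHA, GAMMA = 5, 0.1, 0.9
V = [0.0] * (N_STATES + 1)          # V[N_STATES] is the terminal state

for _ in range(2000):               # episodes
    s = 0
    while s < N_STATES:
        s_next = s + 1
        r = 1.0 if s_next == N_STATES else 0.0
        td_error = r + GAMMA * V[s_next] - V[s]
        V[s] += ALPHA * td_error
        s = s_next
```

OSKTD replaces the table V with a kernel expansion over a sparsified sample dictionary, but the TD-error-driven update has the same shape.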
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of glucose production levels as the λ and n values change. In conclusion, the constructed Weibull-statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. The advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
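Assuming the common two-parameter Weibull form for the saccharification curve, conversion(t) = y_max * (1 - exp(-(t/λ)^n)), the role of λ is easy to see: at t = λ the conversion reaches (1 - 1/e), about 63.2% of its maximum. The parameter values below are hypothetical:

```python
import math

# Weibull-type saccharification curve; lam is the characteristic time
# and n the shape parameter (values here are illustrative only).
def conversion(t, y_max=0.9, lam=24.0, n=0.8):
    return y_max * (1.0 - math.exp(-((t / lam) ** n)))

# fraction of the maximum conversion reached at t = lam: always 1 - 1/e,
# independent of y_max and n, which is why lam summarizes overall speed
frac_at_lam = conversion(24.0) / 0.9
```

A smaller fitted λ therefore means the system reaches most of its final conversion sooner, i.e. better overall saccharification performance.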
Tan, Xia; Ji, Zhong; Zhang, Yadan
2018-04-25
Non-invasive continuous blood pressure monitoring can provide an important reference and guidance for doctors wishing to analyze the physiological and pathological status of patients and to prevent and diagnose cardiovascular diseases in the clinical setting. Therefore, it is very important to explore a more accurate method of non-invasive continuous blood pressure measurement. To address the shortcomings of existing blood pressure measurement models based on pulse wave transit time or pulse wave parameters, a new method of non-invasive continuous blood pressure measurement - the GA-MIV-BP neural network model - is presented. The mean impact value (MIV) method is used to select the factors that greatly influence blood pressure from the extracted pulse wave transit time and pulse wave parameters. These factors are used as inputs, and the actual blood pressure values as outputs, to train the BP neural network model. The individual parameters are then optimized using a genetic algorithm (GA) to establish the GA-MIV-BP neural network model. Bland-Altman consistency analysis indicated that the measured and predicted blood pressure values were consistent and interchangeable. Therefore, this algorithm is of great significance to promote the clinical application of a non-invasive continuous blood pressure monitoring method.
The Value of Information in Decision-Analytic Modeling for Malaria Vector Control in East Africa.
Kim, Dohyeong; Brown, Zachary; Anderson, Richard; Mutero, Clifford; Miranda, Marie Lynn; Wiener, Jonathan; Kramer, Randall
2017-02-01
Decision analysis tools and mathematical modeling are increasingly emphasized in malaria control programs worldwide to improve resource allocation and address ongoing challenges with sustainability. However, such tools require substantial scientific evidence, which is costly to acquire. The value of information (VOI) has been proposed as a metric for gauging the value of reduced model uncertainty. We apply this concept to an evidence-based Malaria Decision Analysis Support Tool (MDAST) designed for application in East Africa. In developing MDAST, substantial gaps in the scientific evidence base were identified regarding insecticide resistance in malaria vector control and the effectiveness of alternative mosquito control approaches, including larviciding. We identify four entomological parameters in the model (two for insecticide resistance and two for larviciding) that involve high levels of uncertainty and to which outputs in MDAST are sensitive. We estimate and compare the VOI for combinations of these parameters in evaluating three policy alternatives relative to a status quo policy. We find that having perfect information on the uncertain parameters could improve program net benefits by 5-21%, with the highest VOI associated with jointly eliminating uncertainty about the reproductive speed of malaria-transmitting mosquitoes and the initial efficacy of larviciding at reducing the emergence of new adult mosquitoes. Future research on parameter uncertainty in decision analysis of malaria control policy should investigate the VOI with respect to other aspects of malaria transmission (such as antimalarial resistance), the costs of reducing uncertainty in these parameters, and the extent to which imperfect information about these parameters can improve payoffs. © 2016 Society for Risk Analysis.
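The perfect-information VOI calculation reduces to comparing "choose after learning the truth" with "choose before"; a sketch on a small decision table whose probabilities and net benefits are invented, not MDAST outputs:

```python
# Expected value of perfect information (EVPI): rows are uncertainty
# scenarios with probabilities, columns are policy alternatives,
# entries are net benefits.
def evpi(probs, payoffs):
    n_actions = len(payoffs[0])
    # best expected payoff when committing before learning the truth
    expected = [sum(p * row[a] for p, row in zip(probs, payoffs))
                for a in range(n_actions)]
    best_without = max(expected)
    # expected payoff if the scenario were revealed before choosing
    best_with = sum(p * max(row) for p, row in zip(probs, payoffs))
    return best_with - best_without

probs = [0.5, 0.3, 0.2]          # e.g. levels of insecticide resistance
payoffs = [[10.0, 8.0, 6.0],     # net benefit of 3 policies per scenario
           [4.0, 9.0, 7.0],
           [2.0, 5.0, 12.0]]
voi = evpi(probs, payoffs)
```

EVPI is always non-negative and bounds what it is worth paying for any research that reduces the uncertainty.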
Gamma dosimetric parameters in some skeletal muscle relaxants
NASA Astrophysics Data System (ADS)
Manjunatha, H. C.
2017-09-01
We have studied the attenuation of gamma radiation of energies ranging from 84 keV to 1330 keV (^{170}Tm, ^{22}Na, ^{137}Cs, and ^{60}Co) in some commonly used skeletal muscle relaxants such as tubocurarine chloride, gallamine triethiodide, pancuronium bromide, suxamethonium bromide and mephenesin. The mass attenuation coefficient is measured from the attenuation experiment. In the present work, we have also proposed a direct relation between the mass attenuation coefficient (μ/ρ) and the mass energy absorption coefficient (μ_{en}/ρ) based on a nonlinear fitting procedure. The gamma dosimetric parameters such as the mass energy absorption coefficient (μ_{en}/ρ), effective atomic number (Z_{eff}), effective electron density (N_{el}), specific γ-ray constant, air kerma strength and dose rate are evaluated from the measured mass attenuation coefficient. These measured gamma dosimetric parameters agree with the theoretical values. The studied gamma dosimetric values for the relaxants are useful in medical physics and radiation medicine.
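The transmission measurement behind such experiments follows the narrow-beam Beer-Lambert law, I = I0·exp(−(μ/ρ)·ρ·x). A minimal sketch with hypothetical detector counts and sample properties (not the paper's data):

```python
import math

def mass_attenuation(I0, I, thickness_cm, density_g_cm3):
    """Mass attenuation coefficient (cm^2/g) from a narrow-beam
    transmission measurement, via the Beer-Lambert law:
        I = I0 * exp(-(mu/rho) * rho * x)."""
    linear_mu = math.log(I0 / I) / thickness_cm   # linear coefficient, cm^-1
    return linear_mu / density_g_cm3              # mass coefficient, cm^2/g

# Hypothetical counts for a 0.5 cm thick sample of density 1.2 g/cm^3
print(round(mass_attenuation(10000, 7500, 0.5, 1.2), 4))   # → 0.4795
```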
Pitch features of environmental sounds
NASA Astrophysics Data System (ADS)
Yang, Ming; Kang, Jian
2016-07-01
A number of soundscape studies have suggested the need for suitable parameters for soundscape measurement, in addition to the conventional acoustic parameters. This paper explores the applicability to environmental sounds of pitch features, and their algorithms, that are often used in music analysis. Starting from the existing alternative pitch algorithms that simulate the perception of the auditory system, and from the simplified algorithms used in practical music and speech applications, the applicable algorithms have been determined, considering common types of sound in everyday soundscapes. Using a number of pitch parameters, including pitch value, pitch strength, and percentage of audible pitches over time, the different pitch characteristics of various environmental sounds have been shown. Among the four sound categories, i.e. water, wind, birdsongs, and urban sounds, generally speaking, both water and wind sounds have low pitch values and pitch strengths; birdsongs have high pitch values and pitch strengths; and urban sounds have low pitch values and a relatively wide range of pitch strengths.
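One of the simplest building blocks behind such pitch algorithms is an autocorrelation estimator; the sketch below is a deliberate simplification (not one of the paper's algorithms) that recovers the pitch of a synthetic 200 Hz tone:

```python
import math

def autocorr_pitch(signal, sr, min_lag=20):
    """Pitch estimate (Hz) from the autocorrelation peak -- a simplified
    stand-in for the perceptual pitch algorithms discussed in the text."""
    n = len(signal)
    best_lag, best_val = min_lag, float("-inf")
    for lag in range(min_lag, n // 2):
        c = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if c > best_val:
            best_val, best_lag = c, lag
    return sr / best_lag

sr = 8000                                   # sampling rate, Hz
tone = [math.sin(2 * math.pi * 200 * i / sr) for i in range(800)]
print(round(autocorr_pitch(tone, sr)))      # recovers the 200 Hz pitch
```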
Ring rolling process simulation for geometry optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Ring rolling is a complex hot forming process in which different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS Isight in order to find the combination of process parameters which minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms has been applied to minimize the error between each obtained dimension and its nominal value. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
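The RSM step — fitting a quadratic response surface to the outputs at design points and then searching it for the input combination minimizing the error — can be sketched as follows, with synthetic data standing in for the FEM simulation results:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of a full quadratic RSM model:
    y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, x1, x2):
    return (coef[0] + coef[1] * x1 + coef[2] * x2
            + coef[3] * x1**2 + coef[4] * x2**2 + coef[5] * x1 * x2)

# Synthetic "FEM" responses: dimensional error is smallest at
# feed rate = 2 and angular speed = 3 (arbitrary units)
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 5.0, size=(40, 2))
y = (X[:, 0] - 2.0) ** 2 + (X[:, 1] - 3.0) ** 2

coef = fit_quadratic_surface(X, y)
g = np.linspace(0.0, 5.0, 101)
G1, G2 = np.meshgrid(g, g)
Z = predict(coef, G1, G2)
i, j = np.unravel_index(np.argmin(Z), Z.shape)
print(G1[i, j], G2[i, j])   # grid point nearest the optimum (2, 3)
```

In the paper the surface search is done by a genetic algorithm rather than the exhaustive grid used here; the grid keeps the sketch short and deterministic.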
Progress in multirate digital control system design
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1991-01-01
A new methodology for multirate sampled-data control design based on a new generalized control law structure, two new parameter-optimization-based control law synthesis methods, and a new singular-value-based robustness analysis method are described. The control law structure can represent multirate sampled-data control laws of arbitrary structure and dynamic order, with arbitrarily prescribed sampling rates for all sensors and update rates for all processor states and actuators. The two control law synthesis methods employ numerical optimization to determine values for the control law parameters. The robustness analysis method is based on the multivariable Nyquist criterion applied to the loop transfer function for the sampling period equal to the period of repetition of the system's complete sampling/update schedule. The complete methodology is demonstrated by application to the design of a combination yaw damper and modal suppression system for a commercial aircraft.
Equal Area Logistic Estimation for Item Response Theory
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li
2009-08-01
Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for the logistic function parameters that best fit an empirical data set. However, success in obtaining such a set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the possibility of estimating parameters. In this study, an equal-area-based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting the data with the logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
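The two-parameter logistic IRF underlying the model has the standard form P(θ) = 1 / (1 + e^(−a(θ−b))); a minimal sketch with illustrative parameter values (this is the model itself, not the paper's equal-area estimation algorithm):

```python
import math

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: discrimination a = 1.5, difficulty b = 0.0
print(round(irf_2pl(0.0, 1.5, 0.0), 2))   # at theta = b, P = 0.5
print(round(irf_2pl(1.0, 1.5, 0.0), 2))   # above the difficulty, P > 0.5
```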
Permutation on hybrid natural inflation
NASA Astrophysics Data System (ADS)
Carone, Christopher D.; Erlich, Joshua; Ramos, Raymundo; Sher, Marc
2014-09-01
We analyze a model of hybrid natural inflation based on the smallest non-Abelian discrete group S3. Leading invariant terms in the scalar potential have an accidental global symmetry that is spontaneously broken, providing a pseudo-Goldstone boson that is identified as the inflaton. The S3 symmetry restricts both the form of the inflaton potential and the couplings of the inflaton field to the waterfall fields responsible for the end of inflation. We identify viable points in the model parameter space. Although the power in tensor modes is small in most of the parameter space of the model, we identify parameter choices that yield potentially observable values of r without super-Planckian initial values of the inflaton field.
Space-dependent perfusion coefficient estimation in a 2D bioheat transfer problem
NASA Astrophysics Data System (ADS)
Bazán, Fermín S. V.; Bedin, Luciano; Borges, Leonardo S.
2017-05-01
In this work, a method for estimating the space-dependent perfusion coefficient parameter in a 2D bioheat transfer model is presented. In the method, the bioheat transfer model is transformed into a time-dependent semidiscrete system of ordinary differential equations involving the perfusion coefficient values as parameters, and the estimation problem is solved through a nonlinear least squares technique. In particular, the bioheat problem is solved by the method of lines based on a highly accurate pseudospectral approach, and the perfusion coefficient values are estimated by the regularized Gauss-Newton method coupled with a proper regularization parameter. The performance of the method on several test problems is illustrated numerically.
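The regularized Gauss-Newton iteration used for the estimation step solves (JᵀJ + λI)Δp = −Jᵀr at each step. A generic sketch on a toy one-parameter exponential fit — a scalar stand-in for the full space-dependent bioheat problem:

```python
import numpy as np

def gauss_newton(residual, jac, p0, lam=1e-6, tol=1e-10, max_iter=50):
    """Regularized Gauss-Newton for nonlinear least squares:
    at each step solve (J^T J + lam*I) dp = -J^T r."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = residual(p)
        J = jac(p)
        dp = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -(J.T @ r))
        p = p + dp
        if np.linalg.norm(dp) < tol:
            break
    return p

# Toy problem: recover the decay rate w in y = exp(-w * t), a scalar
# stand-in for the space-dependent perfusion coefficient
t = np.linspace(0.0, 2.0, 20)
y = np.exp(-1.3 * t)
residual = lambda p: np.exp(-p[0] * t) - y
jac = lambda p: (-t * np.exp(-p[0] * t)).reshape(-1, 1)
print(round(float(gauss_newton(residual, jac, [0.5])[0]), 3))   # → 1.3
```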
Control of complex dynamics and chaos in distributed parameter systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakravarti, S.; Marek, M.; Ray, W.H.
This paper discusses a methodology for controlling complex dynamics and chaos in distributed parameter systems. The reaction-diffusion system with Brusselator kinetics, where the torus-doubling or quasi-periodic (two characteristic incommensurate frequencies) route to chaos exists in a defined range of parameter values, is used as an example. Poincaré maps are used for characterization of the quasi-periodic and chaotic attractors. The dominant modes or topos, which are inherent properties of the system, are identified by means of the Singular Value Decomposition. Tests of modal feedback control schemes based on the identified dominant spatial modes confirm the possibility of stabilizing simple quasi-periodic trajectories in the complex quasi-periodic or chaotic spatiotemporal patterns.
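Identifying dominant modes via the Singular Value Decomposition can be sketched directly: the left singular vectors of a snapshot matrix are the spatial modes ("topos"), and the squared singular values give each mode's share of the total energy. The two-mode synthetic field below is a stand-in, not the Brusselator system:

```python
import numpy as np

# Snapshot matrix: each column is the spatial field at one time instant.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
t = np.linspace(0.0, 1.0, 40)
field = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
         + 0.3 * np.outer(np.sin(2 * np.pi * x), np.sin(4 * np.pi * t))
         + 0.01 * rng.standard_normal((50, 40)))   # two modes plus noise

U, s, Vt = np.linalg.svd(field, full_matrices=False)
energy = s**2 / np.sum(s**2)           # energy fraction per mode
# Fraction of total energy captured by the two leading spatial modes
print(round(float(np.cumsum(energy)[1]), 3))
```

Because the synthetic field is built from two spatial structures, the two leading modes capture essentially all of the energy, which is the property the control schemes exploit.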
Bearing damage assessment using Jensen-Rényi Divergence based on EEMD
NASA Astrophysics Data System (ADS)
Singh, Jaskaran; Darpe, A. K.; Singh, S. P.
2017-03-01
An Ensemble Empirical Mode Decomposition (EEMD) and Jensen-Rényi divergence (JRD) based methodology is proposed for the degradation assessment of rolling element bearings using vibration data. The EEMD decomposes vibration signals into a set of intrinsic mode functions (IMFs). A systematic methodology to select IMFs that are sensitive and closely related to the fault is proposed in the paper. The change in the probability distribution of the energies of the sensitive IMFs is measured through the JRD, which acts as a damage identification parameter. Evaluation of the JRD with sensitive IMFs makes it largely unaffected by changes/fluctuations in operating conditions. Further, an algorithm based on Chebyshev's inequality is applied to the JRD to identify exact points of change in bearing health and remove outliers. The identified change points are investigated for fault classification as possible locations where specific defect initiation could have taken place. For fault classification, two new parameters are proposed, the 'α value' and the Probable Fault Index, which together classify the fault. To standardize the degradation process, a Confidence Value parameter is proposed to quantify the bearing degradation value in a range of zero to unity. A simulation study is first carried out to demonstrate the robustness of the proposed JRD parameter under variable operating conditions of load and speed. The proposed methodology is then validated on experimental data (seeded defect data and accelerated bearing life test data). The first validation, on two different vibration datasets (inner/outer) obtained from seeded defect experiments, demonstrates the effectiveness of the JRD parameter in detecting a change in health state as the severity of the fault changes. The second validation is on two accelerated life tests. The results demonstrate the proposed approach as a potential tool for bearing performance degradation assessment.
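The JRD is the Rényi-entropy analogue of the Jensen-Shannon divergence: the Rényi entropy of a weighted mixture minus the weighted mean of the component entropies. A minimal sketch with hypothetical normalized IMF energy distributions (α = 0.5 is chosen for illustration; α ∈ (0, 1) keeps the Rényi entropy concave and the divergence non-negative — the paper's α is not stated in the abstract):

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy H_a(p) = log(sum p_i^a) / (1 - a), for a != 1."""
    return math.log(sum(pi ** alpha for pi in p if pi > 0)) / (1.0 - alpha)

def jensen_renyi(dists, weights, alpha=0.5):
    """JRD = H_a of the weighted mixture minus the weighted mean of the
    component entropies; zero when the distributions coincide."""
    mix = [sum(w * d[i] for w, d in zip(weights, dists))
           for i in range(len(dists[0]))]
    return renyi_entropy(mix, alpha) - sum(
        w * renyi_entropy(d, alpha) for w, d in zip(weights, dists))

# Hypothetical normalized IMF energy distributions for a healthy and a
# degraded bearing state
healthy = [0.4, 0.3, 0.2, 0.1]
degraded = [0.1, 0.1, 0.2, 0.6]
same = jensen_renyi([healthy, healthy], [0.5, 0.5])
diff = jensen_renyi([healthy, degraded], [0.5, 0.5])
print(round(same, 6), round(diff, 3))   # 0 for identical states, > 0 otherwise
```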
NASA Astrophysics Data System (ADS)
Dasgupta, Arunima; Sastry, K. L. N.; Dhinwa, P. S.; Rathore, V. S.; Nathawat, M. S.
2013-08-01
Desertification risk assessment is important in order to take proper measures for its prevention. The present research intends to identify the areas under risk of desertification, along with their severity in terms of degradation in natural parameters. An integrated model with fuzzy membership analysis, a fuzzy rule-based inference system and geospatial techniques was adopted, including five specific natural parameters, namely slope, soil pH, soil depth, soil texture and NDVI. Individual parameters were classified according to their deviation from the mean. The membership of each individual value in a certain class was derived using the normal probability density function of that class. Thus, if a single class of a single parameter has mean μ and standard deviation σ, the values falling beyond μ + 2σ and μ - 2σ do not represent that class but a transitional zone between two subsequent classes. These are the most important areas in terms of degradation, as they have the lowest probability of being in a certain class, hence the highest probability of being extended into the next class or narrowed down into the previous one. These are also the values which can be most easily altered under exogenic influences, and hence are identified as risk areas. The overall desertification risk is derived by incorporating the different risk severities of each parameter using the fuzzy rule-based inference system in a GIS environment. Multicriteria-based geostatistics are applied to locate the areas under different severities of desertification risk. The study revealed that in Kota, various anthropogenic pressures are accelerating land deterioration, coupled with natural erosive forces. The four major sources of desertification in Kota are gully and ravine erosion, inappropriate mining practices, growing urbanization and random deforestation.
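The membership and risk-zone logic described above can be sketched directly: class membership follows the normal probability density of the class, and values beyond μ ± 2σ fall into the transitional (risk) zone. The NDVI class parameters below are hypothetical:

```python
import math

def normal_pdf(x, mu, sigma):
    """Normal probability density used as the fuzzy membership function."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def in_risk_zone(x, mu, sigma):
    """Values beyond mu +/- 2*sigma belong to the transitional zone
    between classes, i.e. the candidate risk areas."""
    return abs(x - mu) > 2.0 * sigma

# Hypothetical NDVI class with mean 0.5 and standard deviation 0.1
print(in_risk_zone(0.75, 0.5, 0.1), in_risk_zone(0.55, 0.5, 0.1))
print(round(normal_pdf(0.5, 0.5, 0.1), 2))   # membership peaks at the mean
```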
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
Chew, Sook Chin; Tan, Chin Ping; Nyam, Kar Lin
2017-07-01
Kenaf seed oil has been suggested for use as a nutritious edible oil due to its unique fatty acid composition and nutritional value. The objective of this study was to optimize the bleaching parameters of the chemical refining process for kenaf seed oil, namely the concentration of bleaching earth (0.5 to 2.5% w/w), temperature (30 to 110 °C) and time (5 to 65 min), based on the responses of total oxidation value (TOTOX) and color reduction, using response surface methodology. The results indicated that the corresponding response surface models were highly statistically significant (P < 0.0001) and sufficient to describe and predict TOTOX value and color reduction, with R² values of 0.9713 and 0.9388, respectively. The optimal parameters in the bleaching stage of kenaf seed oil were: 1.5% w/w concentration of bleaching earth, temperature of 70 °C, and time of 40 min. These optimum parameters produced bleached kenaf seed oil with a TOTOX value of 8.09 and a color reduction of 32.95%. There were no significant differences (P > 0.05) between experimental and predicted values, indicating the adequacy of the fitted models. © 2017 Institute of Food Technologists®.
NASA Astrophysics Data System (ADS)
Stepanova, L. V.
2017-12-01
Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading are performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code. The inter-atomic potential used in this investigation is the Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack are subjected to mixed-mode loadings. The simulation cell contains 400,000 atoms. The crack propagation direction angles are obtained and analyzed for mixity parameter values ranging from pure tensile loading to pure shear loading, and for a wide range of temperatures (from 0.1 K to 800 K). It is shown that the crack propagation direction angles obtained by molecular dynamics coincide with the crack propagation direction angles given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields. The multi-parameter fracture criteria are based on the multi-parameter stress field description taking into account the higher order terms of the Williams series expansion of the crack tip fields.
Zonta, Zivko J; Flotats, Xavier; Magrí, Albert
2014-08-01
The procedure commonly used for the assessment of the parameters included in activated sludge models (ASMs) relies on the estimation of their optimal values within a confidence region (i.e. frequentist inference). Once optimal values are estimated, parameter uncertainty is computed through the covariance matrix. However, alternative approaches based on the consideration of the model parameters as probability distributions (i.e. Bayesian inference) may be of interest. The aim of this work is to apply (and compare) both Bayesian and frequentist inference methods when assessing uncertainty for an ASM-type model which considers intracellular storage and biomass growth simultaneously. Practical identifiability was addressed exclusively considering respirometric profiles based on the oxygen uptake rate, and with the aid of probabilistic global sensitivity analysis. Parameter uncertainty was thus estimated according to both the Bayesian and frequentist inferential procedures. Results were compared in order to highlight the strengths and weaknesses of both approaches. Since it was demonstrated that Bayesian inference can be reduced to a frequentist approach under particular hypotheses, the former can be considered the more general methodology. Hence, the use of Bayesian inference is encouraged for tackling inferential issues in ASM environments.
Li, Zhongwei; Xin, Yuezhen; Wang, Xun; Sun, Beibei; Xia, Shengyu; Li, Hui
2016-01-01
Phellinus is a genus of fungi known as an elemental component of anticancer drugs. With the purpose of finding optimized culture conditions for Phellinus production in the laboratory, numerous single-factor experiments were conducted, generating a large volume of experimental data. In this work, we use the data collected from these experiments for regression analysis to obtain a mathematical model for predicting Phellinus production. Subsequently, a gene-set based genetic algorithm is developed to optimize the values of the parameters involved in the culture conditions, including inoculum size, pH value, initial liquid volume, temperature, seed age, fermentation time, and rotation speed. The optimized parameter values are in accordance with biological experimental results, indicating that our method has good predictive power for culture condition optimization. PMID:27610365
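A minimal real-coded genetic algorithm of the general kind described (the paper's gene-set encoding is not reproduced here) can be sketched as follows, with a hypothetical two-parameter "yield" function standing in for the fitted regression model:

```python
import random

def genetic_optimize(fitness, bounds, pop_size=40, gens=60, mut=0.1, seed=7):
    """Minimal real-coded genetic algorithm: elitism, truncation
    selection, uniform crossover and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                          # keep the two elites
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(scored[:10], 2)   # select among the top 10
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [min(hi, max(lo, g + rng.gauss(0, mut * (hi - lo))))
                     for g, (lo, hi) in zip(child, bounds)]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Hypothetical "yield" model peaking at pH 6.0 and temperature 28 C
fit = lambda p: -((p[0] - 6.0) ** 2 + 0.01 * (p[1] - 28.0) ** 2)
best = genetic_optimize(fit, [(4.0, 8.0), (20.0, 35.0)])
print(round(best[0], 1), round(best[1]))
```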
The brilliant blue FCF ion-molecular forms in solutions according to the spectrophotometry data
NASA Astrophysics Data System (ADS)
Chebotarev, A. N.; Bevziuk, K. V.; Snigur, D. V.; Bazel, Ya. R.
2017-10-01
The acid-base properties of brilliant blue FCF in aqueous solutions have been studied and its ionization constants have been determined by tristimulus colorimetry and spectrophotometry methods. A scheme of the acid-base equilibria of the dye has been proposed and a diagram of the distribution of its ionic-molecular forms has been built. It has been established that the dominant form of the dye is the electroneutral form, whose molar absorptivity (ε_625 = 0.97 × 10^5) increases with the dielectric permittivity of the solvent. It has been shown that the replacement of polar solvents by less polar ones causes a bathochromic shift of the maximum absorption band of the dye, the value of which correlates with the value of the Hansen parameter. Tautomerization constants have been determined in a number of solvents and associated with the value of the Dimroth-Reichardt parameter.
Evaluation of weather-based rice yield models in India
NASA Astrophysics Data System (ADS)
Sudharsan, D.; Adinarayana, J.; Reddy, D. Raji; Sreenivas, G.; Ninomiya, S.; Hirafuji, M.; Kiura, T.; Tanaka, K.; Desai, U. B.; Merchant, S. N.
2013-01-01
The objective of this study was to compare two different rice simulation models—standalone (Decision Support System for Agrotechnology Transfer [DSSAT]) and web based (SImulation Model for RIce-Weather relations [SIMRIW])—with agrometeorological data and agronomic parameters for estimation of rice crop production in the southern semi-arid tropics of India. Studies were carried out on the BPT5204 rice variety to evaluate the two crop simulation models. Long-term experiments were conducted at a research farm of Acharya N G Ranga Agricultural University (ANGRAU), Hyderabad, India. Initially, the results were obtained using 4 years (1994-1997) of data with weather parameters from a local weather station to evaluate DSSAT simulated results against observed values. Linear regression models used for the purpose showed a close relationship between DSSAT and observed yield. Subsequently, yield comparisons were also carried out with SIMRIW and DSSAT, and validated with actual observed values. Since the correlation coefficients of the SIMRIW simulations were within acceptable limits, further rice experiments in the monsoon (Kharif) and post-monsoon (Rabi) agricultural seasons (2009, 2010 and 2011) were carried out with a location-specific distributed sensor network system. These proximal systems help to simulate dry weight, leaf area index and potential yield with the Java-based SIMRIW on a daily/weekly/monthly/seasonal basis. These dynamic parameters are useful to the farming community for necessary decision making in a ubiquitous manner. However, SIMRIW requires fine tuning for better results/decision making.
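The linear-regression comparison of simulated against observed yield can be sketched with ordinary least squares and the Pearson correlation coefficient; the yield values below are hypothetical, not the ANGRAU data:

```python
def linear_fit(x, y):
    """Ordinary least squares fit y = a + b*x, plus the Pearson r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    b = sxy / sxx
    return my - b * mx, b, sxy / (sxx * syy) ** 0.5

# Hypothetical simulated vs. observed rice yields (t/ha)
sim = [4.2, 4.8, 5.1, 4.5, 5.6]
obs = [4.0, 4.9, 5.3, 4.4, 5.5]
a, b, r = linear_fit(sim, obs)
print(round(r, 2))   # a close relationship gives r near 1
```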
NASA Astrophysics Data System (ADS)
Emaminejad, Nastaran; Wahi-Anwar, Muhammad; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael
2018-02-01
Translation of radiomics into clinical practice requires confidence in its interpretations. This may be obtained via understanding and overcoming the limitations in current radiomic approaches. Currently there is a lack of standardization in radiomic feature extraction. In this study we examined a few factors that are potential sources of inconsistency in characterizing lung nodules: (1) different choices of parameters and algorithms in feature calculation, (2) two CT image dose levels, and (3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of variation of these factors on the entropy textural features of lung nodules. CT images of 19 lung nodules from our lung cancer screening program were identified by a CAD tool, and contours were provided. The radiomics features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features, in addition to 2 intensity-based features. A robustness index was calculated across different image acquisition parameters to illustrate the reproducibility of features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features at WFBP. Iterative reconstruction improved robustness in fewer cases and caused more variation in entropy feature values and their robustness. Across different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. Results indicate the need for harmonization of feature calculations and identification of optimum parameters and algorithms in a radiomics study.
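The abstract does not specify how the robustness index is defined; a simple stand-in — the feature's range across acquisition settings normalized by its mean — illustrates the idea with hypothetical entropy values:

```python
def robustness_index(values):
    """Simple robustness index: feature range across acquisition settings
    normalized by the mean value (lower = more reproducible). This is an
    illustrative stand-in, not the paper's definition."""
    mean = sum(values) / len(values)
    return (max(values) - min(values)) / abs(mean)

# Hypothetical entropy feature for one nodule at two dose levels and
# three reconstruction algorithms (six acquisition settings in total)
entropy_values = [4.10, 4.05, 4.22, 4.18, 3.95, 4.07]
print(round(robustness_index(entropy_values), 3))   # → 0.066
```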
Measurement of the geometric parameters of power contact wire based on binocular stereovision
NASA Astrophysics Data System (ADS)
Pan, Xue-Tao; Zhang, Ya-feng; Meng, Fei
2010-10-01
In the electrified railway power supply system, the electric locomotive obtains power from the catenary wire through the pantograph. Under the action of the pantograph, combined with various factors such as vibration, contact current, relative sliding speed and load, the contact wire suffers mechanical wear and electrical wear. Thus, in electrified railway construction and daily operations, geometric parameters such as line height, pull value and the width of the wear surface must be measured in real time and without contact. On the one hand, the safe operation of electric railways will be guaranteed; on the other hand, the wire endurance will be extended and operating costs reduced. Based on the characteristics of the worn wires' image signal, binocular stereo vision technology was applied to the measurement of the contact wire geometric parameters: a mathematical model of the measurement of the geometric parameters was derived, and the boundaries of the worn wire abrasion points were extracted by means of a sub-pixel edge detection method based on the LOG operator with least-squares fitting, thus realizing the measurement of the wire geometry parameters. The principles were demonstrated through simulation experiments, and the experimental results show that the detection method presented in this paper is close or superior to traditional measurements in accuracy, efficiency and convenience, laying a good foundation for the development of a binocular-vision-based measurement system for the geometric parameters of the contact wire.
NASA Astrophysics Data System (ADS)
Poorvasha, S.; Lakshmi, B.
2018-05-01
In this paper, the RF performance of InAs-based double gate (DG) tunnel field effect transistors (TFETs) is investigated in both qualitative and quantitative fashion. This investigation is carried out by varying the geometrical and doping parameters of the TFETs to extract various RF parameters: unity gain cut-off frequency (f_t), maximum oscillation frequency (f_max), intrinsic gain and admittance (Y) parameters. An asymmetric gate oxide is introduced in the gate-drain overlap and compared with that of DG TFETs. A higher ON-current (I_ON) of about 0.2 mA and a lower leakage current (I_OFF) of 29 fA are achieved for the DG TFET with gate-drain overlap. Due to the increase in transconductance (g_m), higher f_t and intrinsic gain are attained for the DG TFET with gate-drain overlap. A higher f_max of 985 GHz is obtained for a drain doping of 5 × 10^17 cm^-3 because of the reduced gate-drain capacitance (C_gd) in the DG TFET with gate-drain overlap. In terms of Y-parameters, gate oxide thickness variation offers better performance due to the reduced values of C_gd. A second-order numerical polynomial model is generated for all the RF responses as a function of the geometrical and doping parameters. The simulation results are compared with this numerical model, and the predicted values match the simulated values. Project supported by the Department of Science and Technology, Government of India under SERB Scheme (No. SERB/F/2660).
Radhapriya, P; NavaneethaGopalakrishnan, A; Malini, P; Ramachandran, A
2012-05-01
Being the second largest manufacturing industry in India, the cement industry is one of the major contributors of suspended particulate matter (SPM). Since plants are sensitive to air pollution, the objective of the present study was to introduce suitable plant species as part of the greenbelt around the cement industry. Suitable plant species were selected based on the Air Pollution Tolerance Index (APTI), calculated by analyzing the ascorbic acid (AA), pH, relative water content (RWC) and total chlorophyll (TChl) of the plants occurring in the locality. Plants were selected within a 6 km radius of the industry and were graded as per their tolerance levels by analyzing the biochemical parameters. From the statistical analysis at the 0.05 level of significance, a difference in the APTI values among the 27 plant species was observed, but they showed homogeneous results when analysed zone-wise using one-way analysis of variance. Analyses of the individual parameters showed variation in the different zones surrounding the cement industry, whereas the APTI value (which is a combination of the parameters viz. AA, RWC, TChl, pH) showed more or less the same gradation. Significant variation in the individual parameters and APTI was seen within the species. All the plants surrounding the cement industry are indicative of high pollution exposure compared to the results obtained for control plants. Based on the APTI values, it was observed that about 37% of the plant species were tolerant. Among them Mangifera indica, Bougainvillea species and Psidium guajava showed high APTI values. 33% of the species were highly susceptible to the adverse effects of SPM, among which Thevetia neriifolia, Saraca indica, Phyllanthus emblica and Cercocarpus ledifolius showed low APTI values. 15% each of the species were at the intermediary and moderate tolerance levels.
Derivation of the spin-glass order parameter from stochastic thermodynamics
NASA Astrophysics Data System (ADS)
Crisanti, A.; Picco, M.; Ritort, F.
2018-05-01
A fluctuation relation is derived to extract the order parameter function q(x) in weakly ergodic systems. The relation is based on measuring and classifying entropy production fluctuations according to the value of the overlap q between configurations. For a fixed value of q, entropy production fluctuations are Gaussian distributed, allowing us to derive the quasi-FDT so characteristic of aging systems. The theory is validated by extracting q(x) in various types of glassy models. It might be generally applicable to other nonequilibrium systems and experimental small systems.
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.
2011-01-01
A stochastic design optimization (SDO) methodology has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, such as a failure mode, thereby becomes a function of reliability. The primitive variables, like thermomechanical loads, material properties, and failure theories, as well as variables like the depth of a beam or the thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.
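The reliability calculation implied by treating loads and strengths as distributions can be sketched as a Monte Carlo estimate of failure probability; the normal distributions and their parameters below are hypothetical illustrations, not the SDO method itself:

```python
import random

def failure_probability(n=100_000, seed=3):
    """Monte Carlo estimate of P(load > strength) when both the
    thermomechanical load and the component strength are treated as
    normally distributed random variables (hypothetical means/sds)."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n)
                   if rng.gauss(300.0, 30.0) > rng.gauss(450.0, 40.0))
    return failures / n

print(failure_probability())   # small failure probability, on the order of 1e-3
```

With these parameters the load-minus-strength margin is itself normal with mean -150 and standard deviation 50, so the analytic failure probability is about Φ(-3) ≈ 0.00135, which the simulation approximates.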
Determining the pH of Mars from the Viking labelled release reabsorption effect
NASA Technical Reports Server (NTRS)
Plumb, Robert C.
1992-01-01
The acid-base properties and redox potentials of solids are two of the more fundamental chemical parameters characterizing a material. Knowledge of these parameters for martian regolith fines would be of considerable value in determining what specific compounds are present and making judgements on what reactions are possible.
Two-dimensional advective transport in ground-water flow parameter estimation
Anderman, E.R.; Hill, M.C.; Poeter, E.P.
1996-01-01
Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and the expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions.
In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
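The first step of the analysis procedure above (parameter sensitivities and correlations computed at initial parameter values) can be sketched for a hypothetical two-parameter problem. The exponential observation model below is purely illustrative; it stands in for simulated heads from a flow model and is not MODFLOWP:

```python
import math

def model(params, xs):
    # Toy observation model h(x) = a * exp(-b * x); a hypothetical
    # stand-in for simulated heads, for illustration only.
    a, b = params
    return [a * math.exp(-b * x) for x in xs]

def jacobian(params, xs, eps=1e-7):
    # Finite-difference sensitivities dh_i/dp_j of observations to parameters.
    base = model(params, xs)
    J = [[0.0, 0.0] for _ in xs]
    for j in range(2):
        p = list(params)
        p[j] += eps
        pert = model(p, xs)
        for i in range(len(xs)):
            J[i][j] = (pert[i] - base[i]) / eps
    return J

def param_correlation(J):
    # Parameter correlation from cov = (J^T J)^{-1}; values near +/-1
    # mean the observations cannot resolve the parameters separately.
    jtj = [[sum(row[a] * row[b] for row in J) for b in range(2)] for a in range(2)]
    det = jtj[0][0] * jtj[1][1] - jtj[0][1] ** 2
    cov = [[jtj[1][1] / det, -jtj[0][1] / det],
           [-jtj[1][0] / det, jtj[0][0] / det]]
    return cov[0][1] / math.sqrt(cov[0][0] * cov[1][1])

rho = param_correlation(jacobian((2.0, 0.3), [0.5, 1.0, 2.0, 4.0]))
```

Adding an observation type that responds differently to the two parameters (as advective-transport observations do relative to heads) lowers |rho| and makes the regression better posed.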
Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo
2015-11-20
This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short circuit fault. Previous works in this area have suffered from uncertainties in the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. The proposed method also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase and the values of G and Lq. For this purpose, two open-loop observers and a particle-swarm-based optimization method are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq while exhibiting robustness against parameter uncertainties.
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
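The steady-state estimation error variances that the tuner selection minimizes come from the Kalman filter's Riccati recursion. A minimal scalar illustration, with hypothetical numbers rather than any engine model, iterates the recursion to its fixed point:

```python
# Steady-state estimation-error variance of a scalar Kalman filter,
# found by iterating the discrete Riccati recursion to its fixed point.
# All values below are illustrative, not from the paper.
a, c, q, r = 0.95, 1.0, 0.01, 0.1   # plant, output map, process/measurement noise
p = 1.0                              # initial error covariance
for _ in range(500):
    p_pred = a * a * p + q                    # time update
    k = p_pred * c / (c * c * p_pred + r)     # Kalman gain
    p = (1.0 - k * c) * p_pred                # measurement update
# p now sits at the steady-state (algebraic Riccati) solution.
```

In the paper's setting this scalar p becomes a covariance matrix whose relevant diagonal entries depend on the chosen tuning parameter vector; the search routine picks the tuners that minimize them.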
Acoustic energy relations in Mudejar-Gothic churches.
Zamarreño, Teófilo; Girón, Sara; Galindo, Miguel
2007-01-01
Extensive objective energy-based parameters have been measured in 12 Mudejar-Gothic churches in the south of Spain. Measurements took place in unoccupied churches according to the ISO 3382 standard. Monaural objective measures were obtained in the 125-4000 Hz frequency range, together with their spatial distributions. The acoustic parameters clarity C80, definition D50, sound strength G, and center time Ts were deduced from impulse-response analysis using a maximum-length-sequence measurement system in each church. These parameters, spectrally averaged according to the criteria most widely used for auditoria to assess acoustic quality, were studied as a function of source-receiver distance. The experimental results were compared with predictions given by classical and other existing theoretical models proposed for concert halls and churches. An analytical semi-empirical model based on the measured values of the C80 parameter is proposed in this work for these spaces. The good agreement between predicted values and experimental data for definition, sound strength, and center time in the churches analyzed shows that the model can be used for design predictions and other purposes with reasonable accuracy.
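The energy-based parameters named above follow directly from the measured impulse response. A minimal sketch of C80 and D50, using a synthetic exponential decay in place of a measured church response:

```python
import math

def clarity_c80(ir, fs):
    # C80 (dB): energy in the first 80 ms over the remaining energy,
    # following the ISO 3382 definition.
    n = int(0.080 * fs)
    early = sum(s * s for s in ir[:n])
    late = sum(s * s for s in ir[n:])
    return 10.0 * math.log10(early / late)

def definition_d50(ir, fs):
    # D50: fraction of total energy arriving within the first 50 ms.
    n = int(0.050 * fs)
    return sum(s * s for s in ir[:n]) / sum(s * s for s in ir)

# Synthetic exponentially decaying impulse response (1 s at 8 kHz);
# the decay rate is arbitrary, chosen only for illustration.
fs = 8000
ir = [math.exp(-3.0 * k / fs) for k in range(fs)]
c80 = clarity_c80(ir, fs)
d50 = definition_d50(ir, fs)
```

For a real measurement the impulse response would be band-filtered per octave before these sums, and the parameters averaged spectrally as in the study.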
NASA Astrophysics Data System (ADS)
Asfahani, J.; Tlas, M.
2015-10-01
An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models such as cylinders and spheres has been proposed in this paper. This proposed method is based on both the deconvolution technique and the simplex algorithm for linear optimization to most effectively estimate the model parameters, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different white Gaussian random noise levels to demonstrate the capability and reliability of the method. The results acquired show that the estimated parameter values derived by this proposed method are close to the assumed true parameter values. The validity of this method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden. Comparable and acceptable agreement is shown between the results derived by this method and those derived from real field data.
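The forward model for the buried sphere can be sketched together with a brute-force parameter search. A coarse grid search stands in for the paper's deconvolution and simplex steps, and all values are synthetic:

```python
def sphere_anomaly(x, amp, depth):
    # Residual gravity of a buried sphere along a profile:
    # g(x) = A * z / (x^2 + z^2)^(3/2), with amplitude coefficient A
    # and depth z from the surface to the center of the sphere.
    return amp * depth / (x * x + depth * depth) ** 1.5

# Synthetic noise-free profile with "true" A = 50, z = 4 (arbitrary units).
xs = [i - 10.0 for i in range(21)]
data = [sphere_anomaly(x, 50.0, 4.0) for x in xs]

# Coarse grid search over (A, z); a simplex optimizer would refine this.
best = (float("inf"), None, None)
for amp in range(40, 61):
    for tenths in range(30, 51):
        depth = tenths / 10.0
        err = sum((sphere_anomaly(x, amp, depth) - d) ** 2
                  for x, d in zip(xs, data))
        if err < best[0]:
            best = (err, amp, depth)
```

With noise added to `data`, the recovered parameters drift from the true values, which is exactly what the paper's noise-corruption tests quantify.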
NASA Astrophysics Data System (ADS)
Oshmarin, D.; Sevodina, N.; Iurlov, M.; Iurlova, N.
2017-06-01
In this paper, with the aim of providing passive control of structural vibrations, a new approach is proposed for selecting optimal parameters of external electric shunt circuits connected to piezoelectric elements located on the surface of the structure. The approach is based on the mathematical formulation of the natural vibration problem. The results of the solution of this problem are the complex eigenfrequencies, whose real part represents the vibration frequency and whose imaginary part corresponds to the damping ratio, characterizing the rate of damping. A criterion for the search for optimal parameters of the external passive shunt circuits, which can provide the system with the desired dissipative properties, has been derived based on the analysis of the responses of the real and imaginary parts of different complex eigenfrequencies to changes in the values of the parameters of the electric circuit. The efficiency of this approach has been verified in the context of the natural vibration problem of a rigidly clamped plate and a semi-cylindrical shell, which is solved for series-connected and parallel-connected external resonant R-L circuits (consisting of resistive and inductive elements). It has been shown that at lower (more energy-intensive) frequencies, a series-connected external circuit has the advantage of providing lower values of the circuit parameters, which renders it more attractive in terms of practical applications.
Braun, Alexandra C; Ilko, David; Merget, Benjamin; Gieseler, Henning; Germershaus, Oliver; Holzgrabe, Ulrike; Meinel, Lorenz
2015-08-01
This manuscript addresses the capability of compendial methods to control polysorbate 80 (PS80) functionality. Based on the analysis of sixteen batches, functionality-related characteristics (FRC) including critical micelle concentration (CMC), cloud point, hydrophilic-lipophilic balance (HLB) value and micelle molecular weight were correlated with chemical composition, including fatty acids before and after hydrolysis, content of non-esterified polyethylene glycols and sorbitan polyethoxylates, sorbitan- and isosorbide-polyethoxylate fatty acid mono- and diesters, polyoxyethylene diesters, and peroxide values. Batches from some suppliers had a high variability in FRC, questioning the ability of the current monograph to control these. Interestingly, the combined use of the input parameters oleic acid content and peroxide value, both of which are monographed methods, resulted in a model adequately predicting CMC. Confining the batches to those complying with specifications for peroxide value proved oleic acid content alone to be predictive for CMC. Similarly, a four-parameter model based on chemical analyses alone was instrumental in predicting the molecular weight of PS80 micelles. Improved models based on the analytical outcome of fingerprint analyses are also presented. A road map for controlling PS80 batches with respect to FRC, based on chemical analyses alone, is provided for the formulator.
Reliability and performance evaluation of systems containing embedded rule-based expert systems
NASA Technical Reports Server (NTRS)
Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.
1989-01-01
A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters of other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of a fault detection and isolation (FDI) algorithm associated with an aircraft longitudinal flight-control system.
THEORETICAL RESEARCH OF THE OPTICAL SPECTRA AND EPR PARAMETERS FOR Cs2NaYCl6:Dy3+ CRYSTAL
NASA Astrophysics Data System (ADS)
Dong, Hui-Ning; Dong, Meng-Ran; Li, Jin-Jin; Li, Deng-Feng; Zhang, Yi
2013-09-01
The important material Cs2NaYCl6 doped with rare-earth ions has received much attention because of its excellent optical and magnetic properties. Based on the superposition model, in this paper the crystal-field energy levels, the electron paramagnetic resonance (EPR) g factors of Dy3+, and the hyperfine structure constants of the 161Dy3+ and 163Dy3+ isotopes in Cs2NaYCl6 crystal are studied by diagonalizing the 42 × 42 energy matrix. In the calculations, the contributions of various admixtures and interactions, such as the J-mixing, the mixtures among states with the same J value, and the covalence, are all considered. The calculated EPR parameters are in reasonable agreement with the observed values. The results are discussed.
Intelligent person identification system using stereo camera-based height and stride estimation
NASA Astrophysics Data System (ADS)
Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo
2005-05-01
In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by using a threshold value in the YCbCr color model. By carrying out correlation between this segmented face area and the right input image, the location coordinates of the target face are acquired; these values are then used to control the pan/tilt system through a modified PID-based recursive controller. Also, by using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through triangulation. Using this calculated vertical distance and the pan and tilt angles, the target's real position in world space can be acquired, and from it the height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person can be identified with these extracted height and stride parameters.
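The triangulation and height-recovery steps can be sketched as follows, assuming a rectified stereo pair; the baseline, focal length, disparity, camera height and tilt angle below are hypothetical, not the paper's calibration values:

```python
import math

def target_distance(baseline_m, focal_px, disparity_px):
    # Depth from a rectified stereo pair: Z = f * B / d.
    return focal_px * baseline_m / disparity_px

def target_height(camera_height_m, distance_m, tilt_up_rad):
    # Height of the tracked face above the ground, from the camera mounting
    # height, the horizontal distance, and the upward tilt of the pan/tilt head.
    return camera_height_m + distance_m * math.tan(tilt_up_rad)

z = target_distance(0.12, 800.0, 32.0)       # 12 cm baseline, 800 px focal length
h = target_height(1.5, z, math.radians(5.0)) # camera at 1.5 m, tilted up 5 degrees
```

Stride would follow analogously, from the world-space positions of the feet at successive gait phases rather than from a single angle.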
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen
2016-01-01
Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated on benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
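The list-based cooling schedule can be sketched as follows. This is a minimal reading of the scheme (initial temperatures seeded from random 2-opt move costs, Metropolis acceptance against the list maximum, and replacement of that maximum when an uphill move is accepted), not the authors' reference implementation:

```python
import math, random

def tour_length(tour, pts):
    # Total length of a closed tour over points pts.
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, i, j):
    # Reverse the segment tour[i..j] (a classic 2-opt move).
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def lbsa(pts, list_len=30, iters=2000, seed=1):
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    cur = tour_length(tour, pts)
    best_tour, best_len = tour, cur
    # Seed the temperature list with absolute costs of random 2-opt moves.
    temps = []
    while len(temps) < list_len:
        i, j = sorted(rng.sample(range(n), 2))
        d = abs(tour_length(two_opt(tour, i, j), pts) - cur)
        if d > 0:
            temps.append(d)
    for _ in range(iters):
        t_max = max(temps)   # Metropolis uses the current list maximum
        i, j = sorted(rng.sample(range(n), 2))
        cand = two_opt(tour, i, j)
        new = tour_length(cand, pts)
        if new <= cur:
            tour, cur = cand, new
        else:
            r = max(rng.random(), 1e-12)
            if r < math.exp(-(new - cur) / t_max):
                # Accepted uphill move: adapt the list by replacing the
                # maximum with the temperature that made this move acceptable.
                temps.remove(t_max)
                temps.append(-(new - cur) / math.log(r))
                tour, cur = cand, new
        if cur < best_len:
            best_tour, best_len = list(tour), cur
    return best_tour, best_len
```

Because accepted uphill moves tend to shrink the replacement temperatures over time, the list maximum decreases adaptively, which is what removes the need for a hand-tuned cooling rate.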
NASA Astrophysics Data System (ADS)
Tonbul, H.; Kavzoglu, T.
2016-12-01
In recent years, object-based image analysis (OBIA) has spread and become a widely accepted technique for the analysis of remotely sensed data. OBIA groups pixels into homogeneous objects based on spectral, spatial and textural features of contiguous pixels in an image. The first stage of OBIA, image segmentation, is the most prominent part of object recognition. In this study, multiresolution segmentation, which is a region-based approach, was employed to construct image objects. In the application of multiresolution segmentation, three parameters, namely shape, compactness and scale, must be set by the analyst. Segmentation quality remarkably influences the fidelity of the thematic maps and accordingly the classification accuracy. Therefore, it is of great importance to search for and set optimal values of the segmentation parameters. In the literature, the main focus has been on the definition of the scale parameter, assuming that the effect of the shape and compactness parameters on the achieved classification accuracy is limited. The aim of this study is to analyze in depth the influence of the shape/compactness parameters by varying their values while using the optimal scale parameter determined by the Estimation of Scale Parameter (ESP-2) approach. A pansharpened QuickBird-2 image covering Trabzon, Turkey, was employed to investigate the objectives of the study. For this purpose, six different shape/compactness combinations were utilized to draw conclusions on the behavior of the shape and compactness parameters and on the optimal setting of all parameters as a whole. Objects were assigned to classes using the nearest neighbor classifier for all segmentations, and equal numbers of pixels were randomly selected to calculate accuracy metrics. The highest overall accuracy (92.3%) was achieved by setting the shape/compactness criteria to 0.3/0.3.
The results of this study indicate that the shape/compactness parameters can have a significant effect on classification accuracy, with a 4% change in overall accuracy. The statistical significance of the differences in accuracy was tested using McNemar's test; the difference between the poor and optimal settings of the shape/compactness parameters was found to be statistically significant, suggesting a search for optimal parameterization instead of default settings.
Failure modes in electroactive polymer thin films with elastic electrodes
NASA Astrophysics Data System (ADS)
De Tommasi, D.; Puglisi, G.; Zurlo, G.
2014-02-01
Based on an energy minimization approach, we analyse the elastic deformations of a thin electroactive polymer (EAP) film sandwiched by two elastic electrodes with non-negligible stiffness. We analytically show the existence of a critical value of the electrode voltage for which non-homogeneous solutions bifurcate from the homogeneous equilibrium state, leading to the pull-in phenomenon. This threshold strongly decreases the limit value proposed in the literature considering only homogeneous deformations. We explicitly discuss the influence of geometric and material parameters together with boundary conditions in the attainment of the different failure modes observed in EAP devices. In particular, we obtain the optimum values of these parameters leading to the maximum activation performances of the device.
Petersen, Nick; Perrin, David; Newhauser, Wayne; Zhang, Rui
2017-01-01
The purpose of this study was to evaluate the impact of selected configuration parameters that govern multileaf collimator (MLC) transmission and rounded leaf offset in a commercial treatment planning system (TPS) (Pinnacle3, Philips Medical Systems, Andover, MA, USA) on the accuracy of intensity-modulated radiation therapy (IMRT) dose calculation. The MLC leaf transmission factor was modified based on measurements made with ionization chambers. The table of parameters containing rounded-leaf-end offset values was modified by measuring the radiation field edge as a function of leaf bank position with an ionization chamber in a scanning water-tank dosimetry system and comparing the locations to those predicted by the TPS. The modified parameter values were validated by performing IMRT quality assurance (QA) measurements on 19 gantry-static IMRT plans. Planar dose measurements were performed with radiographic film and a diode array (MapCHECK2) and compared to TPS-calculated dose distributions using default and modified configuration parameters. Based on measurements, the leaf transmission factor was changed from a default value of 0.001 to 0.005. Surprisingly, this modification resulted in a small but statistically significant worsening of the IMRT QA gamma-index passing rate, which revealed that the overall dosimetric accuracy of the TPS depends on multiple configuration parameters in a manner that is coupled and not intuitive because of the commissioning protocol used in our clinic. The rounded-leaf-offset table had little room for improvement, with the average difference between the default and modified offset values being -0.2 ± 0.7 mm. While our results depend on the current clinical protocols, treatment unit and TPS used, the methodology used in this study is generally applicable. Different clinics could potentially obtain different results and improve their dosimetric accuracy using our approach.
Adaptive control of servo system based on LuGre model
NASA Astrophysics Data System (ADS)
Jin, Wang; Niancong, Liu; Jianlong, Chen; Weitao, Geng
2018-03-01
This paper establishes a mechanical model of a feed system based on the LuGre friction model. In order to counteract the influence of nonlinear factors on the running stability of the system, a nonlinear single-state observer is designed to estimate the internal state z of the LuGre model, and an adaptive friction compensation controller is designed. Simulink simulation results show that the control method can effectively suppress the adverse effects of friction and external disturbances. The simulations show that the adaptive parameter kz lies between 0.11 and 0.13, and the value of gamma1 between 1.9 and 2.1. The position tracking error reaches the 10^-3 level and is stabilized near zero within 0.3 seconds; the compensation method has better tracking accuracy and robustness.
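The LuGre model that the observer estimates can be simulated directly. A minimal sketch with forward-Euler integration of the bristle state; the parameter values are illustrative, not those of the paper's feed system:

```python
import math

# Illustrative LuGre parameters (not taken from the paper).
sigma0, sigma1, sigma2 = 1.0e5, 300.0, 0.4   # bristle stiffness, bristle damping, viscous
Fc, Fs, vs = 1.0, 1.5, 0.01                  # Coulomb level, static level, Stribeck velocity

def g(v):
    # Stribeck curve between the Coulomb and static friction levels.
    return Fc + (Fs - Fc) * math.exp(-(v / vs) ** 2)

def lugre_step(v, z, dt):
    # One Euler step of the internal bristle state z; returns (force, z).
    zdot = v - sigma0 * abs(v) * z / g(v)
    z = z + zdot * dt
    force = sigma0 * z + sigma1 * zdot + sigma2 * v
    return force, z

# Constant sliding at 0.05 m/s: the force settles to g(v) + sigma2 * v.
z, dt = 0.0, 1.0e-5
for _ in range(20000):
    force, z = lugre_step(0.05, z, dt)
```

Since z is not measurable, a controller has to reconstruct it with an observer, which is the role of the nonlinear observer in the paper.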
Neutrino parameters from reactor and accelerator neutrino experiments
NASA Astrophysics Data System (ADS)
Lindner, Manfred; Rodejohann, Werner; Xu, Xun-Jie
2018-04-01
We revisit correlations of neutrino oscillation parameters in reactor and long-baseline neutrino oscillation experiments. A framework based on an effective value of θ13 is presented, which can be used to analytically study the correlations and explain some questions, including why and when δCP has the best-fit value of -π/2, why current and future long-baseline experiments will have less precision on δCP around ±π/2 than around zero, etc. Recent hints on the CP phase are then considered from the point of view that different reactor and long-baseline neutrino experiments currently provide different best-fit values of θ13 and θ23. We point out that the significance of the hints changes for the different available best-fit values.
van Leeuwen, C M; Oei, A L; Crezee, J; Bel, A; Franken, N A P; Stalpers, L J A; Kok, H P
2018-05-16
Prediction of radiobiological response is a major challenge in radiotherapy. Of several radiobiological models, the linear-quadratic (LQ) model has been best validated by experimental and clinical data. Clinically, the LQ model is mainly used to estimate equivalent radiotherapy schedules (e.g. to calculate the equivalent dose in 2 Gy fractions, EQD2), but increasingly also to predict tumour control probability (TCP) and normal tissue complication probability (NTCP) using logistic models. The selection of accurate LQ parameters α, β and α/β is pivotal for a reliable estimate of radiation response. The aim of this review is to provide an overview of published values for the LQ parameters of human tumours as a guideline for radiation oncologists and radiation researchers to select appropriate radiobiological parameter values for LQ modelling in clinical radiotherapy. We performed a systematic literature search and found sixty-four clinical studies reporting α, β and α/β for tumours. Tumour site, histology, stage, number of patients, type of LQ model, radiation type, TCP model, clinical endpoint and radiobiological parameter estimates were extracted. Next, we stratified by tumour site and by tumour histology. Study heterogeneity was expressed by the I² statistic, i.e. the percentage of variance in reported values not explained by chance. A large heterogeneity in LQ parameters was found within and between studies (I² > 75%). For the same tumour site, differences in histology partially explain differences in the LQ parameters: epithelial tumours have higher α/β values than adenocarcinomas. For tumour sites with different histologies, such as oesophageal cancer, the α/β estimates correlate well with histology. However, many other factors contribute to the study heterogeneity of LQ parameters, e.g. tumour stage, type of LQ model, TCP model and clinical endpoint (i.e. survival, tumour control or biochemical control).
The values of the LQ parameters for tumours as published in clinical radiotherapy studies depend on many clinical and methodological factors. Therefore, for clinical use of the LQ model, LQ parameters for tumours should be selected carefully, based on tumour site, histology and the applied LQ model. To account for uncertainties in LQ parameter estimates, exploring a range of values is recommended.
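The EQD2 conversion mentioned above is a one-line application of the LQ model. In the sketch below, the α/β of 10 Gy is an assumption of exactly the kind this review helps to choose, not a recommendation:

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    # LQ-based equivalent dose in 2 Gy fractions:
    # EQD2 = D * (d + alpha/beta) / (2 Gy + alpha/beta)
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Hypofractionated course of 20 x 2.75 Gy, assuming alpha/beta = 10 Gy
# for the tumour (an assumed value; published estimates vary widely).
equivalent = eqd2(20 * 2.75, 2.75, 10.0)
```

Exploring a range of α/β values, as the review recommends, amounts to re-running this conversion over the plausible parameter interval and reporting the resulting EQD2 spread.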
SU-F-R-51: Radiomics in CT Perfusion Maps of Head and Neck Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nesteruk, M; Riesterer, O; Veit-Haibach, P
2016-06-15
Purpose: The aim of this study was to test the predictive value of radiomics features of CT perfusion (CTP) for tumor control, based on a preselection of radiomics features in a robustness study. Methods: 11 patients with head and neck cancer (HNC) and 11 patients with lung cancer were included in the robustness study to preselect stable radiomics parameters. Data from 36 HNC patients treated with definitive radiochemotherapy (median follow-up 30 months) were used to build a predictive model based on these parameters. All patients underwent pre-treatment CTP. 315 texture parameters were computed for three perfusion maps: blood volume, blood flow and mean transit time. The variability of the texture parameters was tested with respect to non-standardizable perfusion computation factors (noise level and artery contouring) using intraclass correlation coefficients (ICC). The parameter with the highest ICC in each correlated group of parameters (inter-parameter Spearman correlations) was tested for its predictive value. The final model to predict tumor control was built using multivariate Cox regression analysis with backward selection of the variables. For comparison, a predictive model based on tumor volume was created. Results: Ten parameters were found to be stable in both HNC and lung cancer regarding potentially non-standardizable factors after the correction for inter-parameter correlations. In the multivariate backward selection of the variables, blood flow entropy showed a highly significant impact on tumor control (p=0.03) with a concordance index (CI) of 0.76. Blood flow entropy was significantly lower in the patient group with controlled tumors at 18 months (p<0.1). The new model showed a higher concordance index than the tumor volume model (CI=0.68). Conclusion: The preselection of variables in the robustness study allowed building a predictive radiomics-based model of tumor control in HNC despite a small patient cohort.
This model was found to be superior to the volume-based model. The project was supported by the KFSP Tumor Oxygenation of the University of Zurich, by a grant of the Center for Clinical Research, University and University Hospital Zurich and by a research grant from Merck (Schweiz) AG.
Pernik, Meribeth
1987-01-01
The sensitivity of a multilayer finite-difference regional flow model was tested by changing the calibrated values of five parameters in the steady-state model and one in the transient-state model. The parameters changed under the steady-state condition were those that had been routinely adjusted during the calibration process as part of the effort to match pre-development potentiometric surfaces and elements of the water budget. The tested steady-state parameters include recharge, riverbed conductance, transmissivity, confining-unit leakance, and boundary location. In the transient-state model, the storage coefficient was adjusted. The sensitivity of the model to changes in the calibrated values of these parameters was evaluated with respect to the simulated response of net base flow to the rivers and the mean value of the absolute head residual. To provide a standard measurement of sensitivity from one parameter to another, the standard deviation of the absolute head residual was calculated. The steady-state model was shown to be most sensitive to changes in rates of recharge. When the recharge rate was held constant, the model was more sensitive to variations in transmissivity. Near the rivers, riverbed conductance becomes the dominant parameter in controlling the heads. Changes in confining-unit leakance had little effect on simulated base flow but greatly affected head residuals. The model was relatively insensitive to changes in the location of no-flow boundaries and to moderate changes in the altitude of constant-head boundaries. The storage coefficient was adjusted under transient conditions to illustrate the model's sensitivity to changes in storativity. The model is less sensitive to an increase in the storage coefficient than to a decrease. As the storage coefficient decreased, aquifer drawdown increased and base flow decreased; the opposite response occurred when the storage coefficient was increased.
(Author's abstract)
3-D Quantitative Dynamic Contrast Ultrasound for Prostate Cancer Localization.
Schalk, Stefan G; Huang, Jing; Li, Jia; Demi, Libertario; Wijkstra, Hessel; Huang, Pintong; Mischi, Massimo
2018-04-01
To investigate quantitative 3-D dynamic contrast-enhanced ultrasound (DCE-US), and in particular 3-D contrast-ultrasound dispersion imaging (CUDI), for prostate cancer detection and localization, 43 patients referred for 10-12-core systematic biopsy underwent 3-D DCE-US. For each 3-D DCE-US recording, parametric maps of CUDI-based and perfusion-based parameters were computed. The parametric maps were divided into regions, each corresponding to a biopsy core. The obtained parameters were validated per biopsy location and after combining two or more adjacent regions. For CUDI by correlation (r) and for the wash-in time (WIT), a significant difference in parameter values between benign and malignant biopsy cores was found (p < 0.001). In a per-prostate analysis, sensitivity and specificity were 94% and 50% for r, and 53% and 81% for WIT. Based on these results, it can be concluded that quantitative 3-D DCE-US could aid in localizing prostate cancer. Therefore, we recommend follow-up studies to investigate its value for targeting biopsies.
Optical phantoms with adjustable subdiffusive scattering parameters
NASA Astrophysics Data System (ADS)
Krauter, Philipp; Nothelfer, Steffen; Bodenschatz, Nico; Simon, Emanuel; Stocker, Sabrina; Foschum, Florian; Kienle, Alwin
2015-10-01
A new epoxy-resin-based optical phantom system with adjustable subdiffusive scattering parameters is presented along with measurements of the intrinsic absorption, scattering, fluorescence, and refractive index of the matrix material. Both an aluminium oxide powder and a titanium dioxide dispersion were used as scattering agents and we present measurements of their scattering and reduced scattering coefficients. A method is theoretically described for a mixture of both scattering agents to obtain continuously adjustable anisotropy values g between 0.65 and 0.9 and values of the phase function parameter γ in the range of 1.4 to 2.2. Furthermore, we show absorption spectra for a set of pigments that can be added to achieve particular absorption characteristics. By additional analysis of the aging, a fully characterized phantom system is obtained with the novelty of g and γ parameter adjustment.
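How the two scattering agents might combine to set g and γ can be sketched under the usual independent-scattering assumption, where phase-function Legendre moments mix weighted by each agent's scattering coefficient. The moment values below are hypothetical, not the measured ones from the paper:

```python
def mix_scattering(mus_a, g1_a, g2_a, mus_b, g1_b, g2_b):
    # First and second Legendre moments (g1, g2) of a two-agent mixture,
    # weighted by the agents' scattering coefficients. Assumes independent
    # scatterers with no inter-particle effects (an idealization).
    mus = mus_a + mus_b
    g1 = (mus_a * g1_a + mus_b * g1_b) / mus
    g2 = (mus_a * g2_a + mus_b * g2_b) / mus
    gamma = (1.0 - g2) / (1.0 - g1)   # subdiffusive phase-function parameter
    return g1, gamma

# Hypothetical moments: a low-anisotropy powder and a high-anisotropy
# dispersion, mixed in equal scattering strength.
g1, gamma = mix_scattering(1.0, 0.65, 0.30, 1.0, 0.90, 0.70)
```

Sweeping the mixing ratio `mus_a : mus_b` then traces out a continuous range of (g, γ) pairs, which is the idea behind the adjustable phantom recipe.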
General Analytical Procedure for Determination of Acidity Parameters of Weak Acids and Bases
Pilarski, Bogusław; Kaliszan, Roman; Wyrzykowski, Dariusz; Młodzianowski, Janusz; Balińska, Agata
2015-01-01
The paper presents a new convenient, inexpensive, and reagent-saving general methodology for the determination of pKa values for the components of mixtures of weak organic acids and bases of diverse chemical classes in aqueous solution, without the need to separate individual analytes. The data obtained from simple pH-metric microtitrations are numerically processed into reliable pKa values for each component of the mixture. Excellent agreement has been obtained between the determined pKa values and the reference literature data for the compounds studied.
Development of a Three Dimensional Perfectly Matched Layer for Transient Elasto-Dynamic Analyses
2006-12-01
MacLean [Ref. 47] introduced a small tracked vehicle with dual inertial mass shakers mounted on top as a mobile source. It excited Rayleigh waves, but... routine initializes and sets default values for the application parameters, the material database parameters, and the entries to appear on the... Underground seismic array experiments. National Institute of Nuclear Physics, 2005. [47] D. J. MacLean. Mobile source development for seismic-sonar based
Ring rolling process simulation for microstructure optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructural properties influencing the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model of HRR has been developed in SFTC DEFORM V11, taking into account the microstructural development of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed to find the combination of process parameters that minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied. Finally, the minimum value of the average grain size with respect to the input parameters has been found.
Hierarchical optimization for neutron scattering problems
Bao, Feng; Archibald, Rick; Bansal, Dipanshu; ...
2016-03-14
In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.
Monitoring and analysis of data in cyberspace
NASA Technical Reports Server (NTRS)
Schwuttke, Ursula M. (Inventor); Angelino, Robert (Inventor)
2001-01-01
Information from monitored systems is displayed in three dimensional cyberspace representations defining a virtual universe having three dimensions. Fixed and dynamic data parameter outputs from the monitored systems are visually represented as graphic objects that are positioned in the virtual universe based on relationships to the system and to the data parameter categories. Attributes and values of the data parameters are indicated by manipulating properties of the graphic object such as position, color, shape, and motion.
NASA Astrophysics Data System (ADS)
Zhang, Chunxi; Zhang, Zuchen; Song, Jingming; Wu, Chunxiao; Song, Ningfang
2015-03-01
A splicing parameter optimization method to increase the tensile strength of splicing joint between photonic crystal fiber (PCF) and conventional fiber is demonstrated. Based on the splicing recipes provided by splicer or fiber manufacturers, the optimal values of some major splicing parameters are obtained in sequence, and a conspicuous improvement in the mechanical strength of splicing joints between PCFs and conventional fibers is validated through experiments.
Appropriate use of the increment entropy for electrophysiological time series.
Liu, Xiaofeng; Wang, Xue; Zhou, Xu; Jiang, Aimin
2018-04-01
The increment entropy (IncrEn) is a new measure for quantifying the complexity of a time series. There are three critical parameters in the IncrEn calculation: N (length of the time series), m (dimensionality), and q (quantifying precision). However, the question of how to choose the most appropriate combination of IncrEn parameters for short datasets has not been extensively explored. The purpose of this research was to provide guidance on choosing suitable IncrEn parameters for short datasets by exploring the effects of varying the parameter values. We used simulated data, epileptic EEG data and cardiac interbeat (RR) data to investigate the effects of the parameters on the calculated IncrEn values. The results reveal that IncrEn is sensitive to changes in m, q and N for short datasets (N≤500). However, IncrEn reaches stability at a data length of N=1000 with m=2 and q=2, and for short datasets (N=100), it shows better relative consistency with 2≤m≤6 and 2≤q≤8. We suggest that the value of N should be no less than 100. To enable a clear distinction between different classes based on IncrEn, we recommend that m and q should take values between 2 and 4. With appropriate parameters, IncrEn enables the effective detection of complexity variations in physiological time series, suggesting that IncrEn should be useful for the analysis of physiological time series in clinical applications.
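A rough sketch of the increment-entropy idea helps make the roles of m and q concrete: each increment is encoded by its sign and a magnitude quantized to q levels, overlapping words of length m are formed, and the Shannon entropy of the word distribution is taken. The exact quantization rule of the published IncrEn may differ; this follows only the general recipe.

```python
import math
from collections import Counter

def increment_entropy(x, m=2, q=2):
    """Sketch of increment entropy: sign + q-level magnitude encoding of
    increments, words of length m, Shannon entropy of word frequencies."""
    # Increments of the series
    v = [x[i + 1] - x[i] for i in range(len(x) - 1)]
    mean = sum(v) / len(v)
    sd = (sum((vi - mean) ** 2 for vi in v) / len(v)) ** 0.5 or 1.0
    # Encode each increment as (sign, quantized magnitude)
    enc = [(int(math.copysign(1, vi)) if vi else 0,
            min(q, int(abs(vi) * q / sd))) for vi in v]
    # Overlapping words of length m and their Shannon entropy
    words = [tuple(enc[i:i + m]) for i in range(len(enc) - m + 1)]
    counts = Counter(words)
    n = len(words)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(increment_entropy(list(range(20))))  # 0.0 (a ramp has identical increments)
```

Larger m and q enlarge the word alphabet, which is why short records (small N) cannot support large values of either parameter.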
Off-line tracking of series parameters in distribution systems using AMI data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Tess L.; Sun, Yannan; Schneider, Kevin
2016-05-01
Electric distribution systems have historically lacked measurement points, and equipment is often operated to its failure point, resulting in customer outages. The widespread deployment of sensors at the distribution level is enabling observability. This paper presents an off-line parameter value tracking procedure that takes advantage of the increasing number of measurement devices being deployed at the distribution level to estimate changes in series impedance parameter values over time. The tracking of parameter values enables non-diurnal and non-seasonal change to be flagged for investigation. The presented method uses an unbalanced Distribution System State Estimation (DSSE) and a measurement residual-based parameter estimation procedure. Measurement residuals from multiple measurement snapshots are combined in order to increase the effective local redundancy and improve the robustness of the calculations in the presence of measurement noise. Data from devices on the primary distribution system and from customer meters, via an AMI system, form the input data set. Results of simulations on the IEEE 13-Node Test Feeder are presented to illustrate the proposed approach applied to changes in series impedance parameters. A 5% change in series resistance elements can be detected in the presence of 2% measurement error when combining less than 1 day of measurement snapshots into a single estimate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
Salari, Marjan; Salami Shahid, Esmaeel; Afzali, Seied Hosein; Ehteshami, Majid; Conti, Gea Oliveri; Derakhshan, Zahra; Sheibani, Solmaz Nikbakht
2018-04-22
Today, due to population growth, industrial expansion and the variety of chemical compounds, the quality of drinking water has decreased. Five important river water quality properties, dissolved oxygen (DO), total dissolved solids (TDS), total hardness (TH), alkalinity (ALK) and turbidity (TU), were estimated from parameters that can be measured easily and at almost no cost: electric conductivity (EC), temperature (T) and pH. The water quality parameters were simulated with two modeling methods: mathematical models based on polynomial fitting with the least-squares method, and Artificial Neural Networks (ANN) using feed-forward algorithms. All conditions covered by neural network modeling were tested for all parameters in this study, except for alkalinity. All optimum ANN models developed to simulate water quality parameters achieved R-values close to 0.99; the ANN model developed to simulate alkalinity had an R-value of 0.82. Moreover, surface-fitting techniques were used to refine the data sets. The presented models and equations are reliable, usable tools for studying water quality parameters at similar rivers, and a proper replacement for traditional water quality measuring equipment.
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
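The bias-variance trade-off described above can be reproduced on a scalar toy problem: repeated "reconstructions" with a shrinkage (ridge-like) estimator show bias squared dominating the MSE at strong regularization and variance dominating at weak regularization. This is a toy model under assumed parameters, not the NIR reconstruction itself.

```python
import random

def mse_components(theta=2.0, sigma=0.5, lam=1.0, trials=10000, seed=1):
    """Monte Carlo estimate of the decomposition MSE = bias^2 + variance for
    the shrinkage estimator theta_hat = y / (1 + lam), y = theta + noise,
    mirroring the paper's repeated-reconstruction procedure."""
    rng = random.Random(seed)
    est = [(theta + rng.gauss(0, sigma)) / (1 + lam) for _ in range(trials)]
    mean = sum(est) / trials
    bias2 = (mean - theta) ** 2
    var = sum((e - mean) ** 2 for e in est) / trials
    return bias2, var

b_hi, v_hi = mse_components(lam=5.0)   # strong regularization: bias dominates
b_lo, v_lo = mse_components(lam=0.05)  # weak regularization: variance dominates
print(b_hi > v_hi, v_lo > b_lo)  # True True
```

Sweeping lam and minimizing bias2 + var locates the MSE-optimal regularization, which (as the abstract notes) need not coincide with the minimum projection error.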
Compression for an effective management of telemetry data
NASA Technical Reports Server (NTRS)
Arcangeli, J.-P.; Crochemore, M.; Hourcastagnou, J.-N.; Pin, J.-E.
1993-01-01
A Technological DataBase (T.D.B.) records all the values taken by the physical on-board parameters of a satellite since launch time. The amount of temporal data is very large (about 15 Gbytes for the satellite TDF1) and an efficient system must allow users to have a fast access to any value. This paper presents a new solution for T.D.B. management. The main feature of our new approach is the use of lossless data compression methods. Several parametrizable data compression algorithms based on substitution, relative difference and run-length encoding are available. Each of them is dedicated to a specific type of variation of the parameters' values. For each parameter, an analysis of stability is performed at decommutation time, and then the best method is chosen and run. A prototype intended to process different sorts of satellites has been developed. Its performances are well beyond the requirements and prove that data compression is both time and space efficient. For instance, the amount of data for TDF1 has been reduced to 1.05 Gbytes (compression ratio is 1/13) and access time for a typical query has been reduced from 975 seconds to 14 seconds.
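Of the compression methods listed, run-length encoding is the easiest to sketch: a stable telemetry parameter collapses to a handful of (value, count) runs. This is an illustrative sketch, not the actual T.D.B. implementation.

```python
def rle_encode(values):
    """Run-length encode a parameter's sampled values: stable telemetry
    channels collapse to a few (value, count) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Lossless inverse: expand each run back to its samples."""
    return [v for v, n in runs for _ in range(n)]

# A mostly-stable housekeeping parameter compresses well:
samples = [28] * 500 + [29] * 3 + [28] * 497
runs = rle_encode(samples)
assert rle_decode(runs) == samples
print(len(samples), "->", len(runs))  # 1000 -> 3
```

Choosing per-parameter between such encodings, after the stability analysis mentioned above, is what makes the overall scheme lossless yet highly compressive.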
MR fingerprinting using fast imaging with steady state precession (FISP) with spiral readout.
Jiang, Yun; Ma, Dan; Seiberlich, Nicole; Gulani, Vikas; Griswold, Mark A
2015-12-01
This study explores the possibility of using gradient echo-based sequences other than balanced steady-state free precession (bSSFP) in the magnetic resonance fingerprinting (MRF) framework to quantify the relaxation parameters. An MRF method based on a fast imaging with steady-state precession (FISP) sequence structure is presented. A dictionary containing possible signal evolutions within the physiological range of T1 and T2 was created using the extended phase graph formalism according to the acquisition parameters. The proposed method was evaluated in a phantom and a human brain. T1, T2, and proton density were quantified directly from the undersampled data by the pattern recognition algorithm. T1 and T2 values from the phantom demonstrate that the results of MRF-FISP are in good agreement with the traditional gold-standard methods. T1 and T2 values in brain are within the range of previously reported values. MRF-FISP enables fast and accurate quantification of the relaxation parameters. It is immune to the banding artifact of bSSFP due to B0 inhomogeneities, which could improve the ability to use MRF for applications beyond brain imaging.
A Novel Scale Up Model for Prediction of Pharmaceutical Film Coating Process Parameters.
Suzuki, Yasuhiro; Suzuki, Tatsuya; Minami, Hidemi; Terada, Katsuhide
2016-01-01
In the pharmaceutical tablet film coating process, we clarified that a difference in exhaust air relative humidity can be used to detect differences in process parameter values; that the relative humidity of the exhaust air differed under different atmospheric humidity conditions even though all set values of the manufacturing process parameters were the same; and that the water content of the tablets was correlated with the exhaust air relative humidity. Based on these experimental data, the exhaust air relative humidity index (EHI) was developed: an empirical equation whose functional parameters include the pan coater type, heated air flow rate, spray rate of the coating suspension, saturated water vapor pressure at the heated air temperature, and partial water vapor pressure at atmospheric pressure. The values of exhaust relative humidity predicted using EHI correlated well with the experimental data (correlation coefficient of 0.966) across all datasets. EHI was verified using data from seven different drug products at different manufacturing scales. The EHI model will support formulation researchers by enabling them to set film coating process parameters when the batch size or pan coater type changes, without the time and expense of further extensive testing.
Optimization of Protein Backbone Dihedral Angles by Means of Hamiltonian Reweighting
2016-01-01
Molecular dynamics simulations depend critically on the accuracy of the underlying force fields in properly representing biomolecules. Hence, it is crucial to validate the force-field parameter sets in this respect. In the context of the GROMOS force field, this is usually achieved by comparing simulation data to experimental observables for small molecules. In this study, we develop new amino acid backbone dihedral angle potential energy parameters based on the widely used 54A7 parameter set by matching experimental J values and secondary structure propensity scales. In order to find the most appropriate backbone parameters, close to 100 000 different combinations of parameters have been screened. However, since the sheer number of combinations considered prohibits actual molecular dynamics simulations for each of them, we instead predicted the values for every combination using Hamiltonian reweighting. While the original 54A7 parameter set fails to reproduce the experimental data, we are able to provide parameters that match significantly better. However, to ensure applicability in the context of larger peptides and full proteins, further studies have to be undertaken.
Protein dielectric constants determined from NMR chemical shift perturbations.
Kukic, Predrag; Farrell, Damien; McIntosh, Lawrence P; García-Moreno E, Bertrand; Jensen, Kristine Steen; Toleikis, Zigmantas; Teilum, Kaare; Nielsen, Jens Erik
2013-11-13
Understanding the connection between protein structure and function requires a quantitative understanding of electrostatic effects. Structure-based electrostatic calculations are essential for this purpose, but their use has been limited by a long-standing discussion on which value to use for the dielectric constants (ε_eff and ε_p) required in Coulombic and Poisson-Boltzmann models. The currently used values for ε_eff and ε_p are essentially empirical parameters calibrated against thermodynamic properties that are indirect measurements of protein electric fields. We determine optimal values for ε_eff and ε_p by measuring protein electric fields in solution using direct detection of NMR chemical shift perturbations (CSPs). We measured CSPs in 14 proteins to get a broad and general characterization of electric fields. Coulomb's law reproduces the measured CSPs optimally with a protein dielectric constant (ε_eff) from 3 to 13, with an optimal value across all proteins of 6.5. However, when the water-protein interface is treated with finite difference Poisson-Boltzmann calculations, the optimal protein dielectric constant (ε_p) ranged from 2 to 5 with an optimum of 3. It is striking how similar this value is to the dielectric constant of 2-4 measured for protein powders and how different it is from the ε_p of 6-20 used in models based on the Poisson-Boltzmann equation when calculating thermodynamic parameters. Because the value of ε_p = 3 is obtained by analysis of NMR chemical shift perturbations instead of thermodynamic parameters such as pKa values, it is likely to describe only the electric field and thus represent a more general, intrinsic, and transferable ε_p common to most folded proteins.
[From evidence-based medicine to value-based medicine].
Zhang, Shao-dan; Liang, Yuan-bo; Li, Si-zhen
2006-11-01
Evidence-based medicine (EBM) is grounded in objective evidence, providing the best available knowledge for physicians to make medical and therapeutic decisions scientifically for the care of individual patients, in order to improve the effectiveness of treatment and prolong patients' lives. EBM has made significant progress in clinical practice. But medical therapies cannot always bring a better quality of life, and clinically, patients' preferences should always be taken into account. Value-based medicine (VBM) is the practice of medicine that emphasizes the value received from an intervention. It takes evidence-based data to a higher level by combining the parameters of patient-perceived value with the cost of an intervention. The fundamental instrument of VBM is cost-utility analysis. VBM will provide a better practice model for evaluating the therapeutic package and cost-effectiveness for individual and general health care.
Development of uncertainty-based work injury model using Bayesian structural equation modelling.
Chatterjee, Snehamoy
2014-01-01
This paper proposed a Bayesian structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were taken as fixed distribution functions with specific parameter values, whereas in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied to sample from the posterior distributions. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant under the experts' opinion-based priors, whereas two coefficients are not statistically significant when the fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit of work injury, with a high coefficient of determination (0.91) and lower mean squared error compared to traditional SEM.
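The posterior-sampling step can be illustrated on a minimal conjugate model. The actual SEM samples factor loadings and structural coefficients, so this is only a sketch of the Gibbs mechanics under assumed priors (normal prior on the mean, gamma prior on the precision).

```python
import random

def gibbs_normal(data, mu0=0.0, tau0=1.0, a0=2.0, b0=2.0, iters=5000, seed=7):
    """Minimal Gibbs sampler for a normal model with unknown mean and
    precision, priors mu ~ N(mu0, tau0^2) and prec ~ Gamma(a0, rate=b0).
    Returns the posterior mean of mu from the second half of the chain."""
    rng = random.Random(seed)
    n = len(data)
    mu, prec = sum(data) / n, 1.0
    mus = []
    for _ in range(iters):
        # mu | prec, data ~ Normal (conjugate update)
        post_prec = 1 / tau0 ** 2 + n * prec
        post_mean = (mu0 / tau0 ** 2 + prec * sum(data)) / post_prec
        mu = rng.gauss(post_mean, post_prec ** -0.5)
        # prec | mu, data ~ Gamma (conjugate update)
        a = a0 + n / 2
        b = b0 + 0.5 * sum((x - mu) ** 2 for x in data)
        prec = rng.gammavariate(a, 1 / b)
        mus.append(mu)
    return sum(mus[iters // 2:]) / (iters - iters // 2)

random.seed(0)
data = [random.gauss(3.0, 1.0) for _ in range(200)]
print(gibbs_normal(data))  # posterior mean close to the true mean of 3.0
```

The expert-opinion priors in the paper play the role of mu0/tau0 here: informative priors pull the posterior toward the elicited values, fixed vague priors leave it data-dominated.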
A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.
Faya, Paul; Stamey, James D; Seaman, John W
2017-01-01
For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known D_T, z, and F_0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion.
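The D, z, and F_0 quantities above follow textbook relationships that can be sketched directly: F_0 accumulates exposure time weighted by 10**((T - 121.1)/z), and dividing F_0 by the organism's D-value at 121.1 °C gives the delivered log reductions. The D-value used below (1.5 min) is an illustrative assumption, not a value from the paper.

```python
def f0_value(temps_c, dt_min=1.0, t_ref=121.1, z=10.0):
    """Equivalent sterilization time at 121.1 C (F0) from a temperature
    profile sampled every dt_min minutes: F0 = sum dt * 10**((T - Tref)/z)."""
    return sum(dt_min * 10 ** ((t - t_ref) / z) for t in temps_c)

def log_reduction(f0, d121=1.5):
    """Spore log reductions delivered by an exposure of F0 minutes, given
    the organism's D-value (minutes per log reduction) at 121.1 C."""
    return f0 / d121

# A 15-minute plateau at 121.1 C contributes 15 equivalent minutes:
f0 = f0_value([121.1] * 15)
print(round(f0, 2), round(log_reduction(f0), 1))  # 15.0 10.0
```

The Bayesian proposal in the abstract replaces the point estimates of D and z in these formulas with posterior distributions, so the computed log reduction becomes a distribution from which sterility-assurance probabilities can be read off.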
Estimating recharge rates with analytic element models and parameter estimation
Dripps, W.R.; Hunt, R.J.; Anderson, M.P.
2006-01-01
Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation.
NASA Astrophysics Data System (ADS)
Chrobak, Ł.; Maliński, M.
2018-03-01
This paper presents results of investigations of the possibility of determination of thermal parameters (thermal conductivity, thermal diffusivity) of silicon and silicon germanium crystals from the frequency characteristics of the Photo Thermal Radiometry (PTR) signal. The theoretical analysis of the influence of the mentioned parameters on the PTR signal has been presented and discussed. The values of the thermal and recombination parameters have been extracted from the fittings of the theoretical to experimental data. The presented approach uses the reference Si sample whose thermal and recombination parameters are known.
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analyses of hydraulic drive units are insufficiently accurate and of limited reference value, because their mathematical models are relatively simple, changes in the load and in the initial piston displacement are ignored, and experimental verification is often not conducted. To address these deficiencies, a nonlinear mathematical model is established in this paper that includes the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram and the state equations are built for the closed-loop position control of the hydraulic drive unit. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expression of the sensitivity equations based on the nonlinear mathematical model is obtained. Using the structural parameters of the hydraulic drive unit, the working parameters, the fluid transmission characteristics and measured friction-velocity curves, a simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the nonlinear mathematical model is adequate, as shown by comparing the experimental and simulated step-response curves under different constant loads. The sensitivity function time-history curves of seventeen parameters are then obtained from the state-vector time histories of the step response. The maximum displacement-variation percentage and the sum of the absolute displacement variations over the sampling time are both taken as sensitivity indexes. These index values are calculated and visualized in histograms under different working conditions, and their trends are analyzed.
The sensitivity index values of four measurable parameters (supply pressure, proportional gain, initial position of the servo cylinder piston and load force) are then verified experimentally on a hydraulic drive unit test platform; the experiments show that the sensitivity results obtained through simulation are close to the test results. This research reveals the sensitivity characteristics of each parameter of the hydraulic drive unit and identifies the main and secondary performance-affecting parameters under different working conditions, providing a theoretical foundation for control compensation and structural optimization of the hydraulic drive unit.
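The two sensitivity indexes described above, the maximum displacement-variation percentage and the sum of absolute displacement variations over the sampling window, can be sketched as follows. The step responses and parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

def sensitivity_indexes(x_nominal, x_perturbed):
    """Sensitivity indexes for a perturbed vs. nominal step response
    (hypothetical helper): max displacement-variation percentage and the
    sum of absolute displacement variations over the sampling time."""
    dx = x_perturbed - x_nominal
    # percentage variation relative to the nominal step amplitude
    pct = 100.0 * np.abs(dx) / np.max(np.abs(x_nominal))
    return np.max(pct), np.sum(np.abs(dx))

# toy step responses: nominal vs. a response with one parameter perturbed
t = np.linspace(0.0, 1.0, 200)
x_nom = 10.0 * (1.0 - np.exp(-8.0 * t))   # 10 mm step, nominal dynamics
x_per = 10.0 * (1.0 - np.exp(-7.0 * t))   # slower response after perturbation
max_pct, abs_sum = sensitivity_indexes(x_nom, x_per)
```

Ranking parameters by these two indexes under each working condition reproduces the histogram comparison described in the abstract.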
Identification of atypical flight patterns
NASA Technical Reports Server (NTRS)
Statler, Irving C. (Inventor); Ferryman, Thomas A. (Inventor); Amidan, Brett G. (Inventor); Whitney, Paul D. (Inventor); White, Amanda M. (Inventor); Willse, Alan R. (Inventor); Cooley, Scott K. (Inventor); Jay, Joseph Griffith (Inventor); Lawrence, Robert E. (Inventor); Mosbrucker, Chris (Inventor)
2005-01-01
Method and system for analyzing aircraft data, including multiple selected flight parameters for a selected phase of a selected flight, and for determining when the selected phase of the selected flight is atypical, when compared with corresponding data for the same phase for other similar flights. A flight signature is computed using continuous-valued and discrete-valued flight parameters for the selected flight parameters and is optionally compared with a statistical distribution of other observed flight signatures, yielding atypicality scores for the same phase for other similar flights. A cluster analysis is optionally applied to the flight signatures to define an optimal collection of clusters. A level of atypicality for a selected flight is estimated, based upon an index associated with the cluster analysis.
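A minimal sketch of signature-based atypicality scoring, assuming a simple standardized-distance score against the fleet of comparable flights rather than the patent's exact statistics:

```python
import numpy as np

def atypicality_scores(signatures):
    """Score each flight signature by its distance from the fleet mean in
    standardized units (diagonal covariance for simplicity); a hedged
    sketch, not the patented procedure itself."""
    mu = signatures.mean(axis=0)
    sigma = signatures.std(axis=0, ddof=1)
    z = (signatures - mu) / sigma
    return np.sqrt((z ** 2).sum(axis=1))

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(50, 4))   # 50 typical flights, 4 parameters
odd = np.array([[6.0, -6.0, 6.0, -6.0]])      # one clearly atypical flight
scores = atypicality_scores(np.vstack([normal, odd]))
```

The atypical flight receives by far the largest score; in the patented system, cluster membership and distances to cluster centers refine this simple global comparison.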
Shayegh, Farzaneh; Sadri, Saeed; Amirfattahi, Rassoul; Ansari-Asl, Karim; Bellanger, Jean-Jacques; Senhadji, Lotfi
2014-01-01
In this paper, a model-based approach is presented to quantify the effective synchrony between hippocampal areas from depth-EEG signals. The approach is based on the parameter identification procedure of a realistic Multi-Source/Multi-Channel (MSMC) hippocampal model that simulates the function of different areas of the hippocampus. In the model it is supposed that the observed signals, recorded using intracranial electrodes, are generated by hidden neuronal sources according to a set of parameters. An algorithm is proposed to extract the intrinsic (relating solely to one hippocampal area) and extrinsic (coupling coefficients between two areas) model parameters simultaneously by a Maximum Likelihood (ML) method. The coupling coefficients are considered as the measure of effective synchronization. This work can be considered an application of Dynamic Causal Modeling (DCM) that enables us to understand effective synchronization changes during the transition from the inter-ictal to the pre-ictal state. The algorithm is first validated on synthetic datasets. Then, extracting the coupling coefficients of real depth-EEG signals with the proposed approach, it is observed that the coupling values show no significant difference between ictal, pre-ictal and inter-ictal states, i.e., both increases and decreases of the coupling coefficients are observed in all states. However, taking the values of the intrinsic parameters into account, the pre-seizure state can be distinguished from the inter-ictal state. It is claimed that seizures start to appear when seizure-related physiological parameters arise on the onset channel and its coupling coefficients toward other channels increase simultaneously. By considering both intrinsic and extrinsic parameters as the feature vector, inter-ictal, pre-ictal and ictal activities are discriminated from each other with an accuracy of 91.33%. PMID:25061815
Brink, Carsten; Lorenzen, Ebbe L; Krogh, Simon Long; Westberg, Jonas; Berg, Martin; Jensen, Ingelise; Thomsen, Mette Skovhus; Yates, Esben Svitzer; Offersen, Birgitte Vrou
2018-01-01
The current study evaluates the data quality achievable using a national data bank for reporting radiotherapy parameters relative to the classical manual reporting method of selected parameters. The data comparison is based on 1522 Danish patients of the DBCG hypo trial with data stored in the Danish national radiotherapy data bank. In line with standard DBCG trial practice, selected parameters were also reported manually to the DBCG database. Categorical variables are compared using contingency tables, and the comparison of continuous parameters is presented in scatter plots. For categorical variables, 25 differences between the data bank and the manual values were located. Of these, 23 were mistakes in the manually reported value, whilst the remaining two were wrong classifications in the data bank. The wrong classifications in the data bank were related to missing dose information: the two patients had been treated with an electron boost based on a manual calculation, so the data were not exported to the data bank, and this was not detected prior to comparison with the manual data. For a few database fields in the manual data, an ambiguity in the parameter definition of the specific field is seen in the data. This was not the case for the data bank, which extracts all data consistently. In terms of data quality, the data bank is superior to manually reported values. However, resources must be allocated to checking the validity of the available data and to ensuring that all relevant data are present. The data bank contains more detailed information, and thus facilitates research related to the actual dose distribution in the patients.
Morphology parameters for intracranial aneurysm rupture risk assessment.
Dhar, Sujan; Tremmel, Markus; Mocco, J; Kim, Minsuok; Yamamoto, Junichi; Siddiqui, Adnan H; Hopkins, L Nelson; Meng, Hui
2008-08-01
The aim of this study is to identify image-based morphological parameters that correlate with human intracranial aneurysm (IA) rupture. For 45 patients with terminal or sidewall saccular IAs (25 unruptured, 20 ruptured), three-dimensional geometries were evaluated for a range of morphological parameters. In addition to five previously studied parameters (aspect ratio, aneurysm size, ellipticity index, nonsphericity index, and undulation index), we defined three novel parameters incorporating the parent vessel geometry (vessel angle, aneurysm [inclination] angle, and [aneurysm-to-vessel] size ratio) and explored their correlation with aneurysm rupture. Parameters were analyzed with a two-tailed independent Student's t test for significance; significant parameters (P < 0.05) were further examined by multivariate logistic regression analysis. Additionally, receiver operating characteristic analyses were performed on each parameter. Statistically significant differences were found between mean values in ruptured and unruptured groups for size ratio, undulation index, nonsphericity index, ellipticity index, aneurysm angle, and aspect ratio. Logistic regression analysis further revealed that size ratio (odds ratio, 1.41; 95% confidence interval, 1.03-1.92) and undulation index (odds ratio, 1.51; 95% confidence interval, 1.08-2.11) had the strongest independent correlation with ruptured IA. From the receiver operating characteristic analysis, size ratio and aneurysm angle had the highest area under the curve values of 0.83 and 0.85, respectively. Size ratio and aneurysm angle are promising new morphological metrics for IA rupture risk assessment. Because these parameters account for vessel geometry, they may bridge the gap between morphological studies and more qualitative location-based studies.
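The receiver operating characteristic analysis used above can be illustrated with the rank-sum (Mann-Whitney) form of the AUC; the size-ratio values below are hypothetical, not the study's data.

```python
import numpy as np

def auc(values, labels):
    """Area under the ROC curve via the rank-sum identity
    (assumes no tied values, as in this toy example)."""
    order = np.argsort(values)
    ranks = np.empty(len(values), dtype=float)
    ranks[order] = np.arange(1, len(values) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# hypothetical data: size ratio = aneurysm size / parent-vessel diameter
size_ratio = np.array([1.1, 1.3, 1.2, 1.5, 2.4, 2.9, 2.2, 3.1])
ruptured = np.array([0, 0, 0, 0, 1, 1, 1, 1])
roc_auc = auc(size_ratio, ruptured)
```

Here the two groups separate perfectly, so the AUC is 1.0; the study's reported AUCs of 0.83 and 0.85 reflect realistic overlap between ruptured and unruptured groups.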
An Examination of Two Procedures for Identifying Consequential Item Parameter Drift
ERIC Educational Resources Information Center
Wells, Craig S.; Hambleton, Ronald K.; Kirkpatrick, Robert; Meng, Yu
2014-01-01
The purpose of the present study was to develop and evaluate two procedures for flagging consequential item parameter drift (IPD) in an operational testing program. The first procedure was based on flagging items that exhibit a meaningful magnitude of IPD, using a critical value defined to represent barely tolerable IPD. The second procedure…
NASA Astrophysics Data System (ADS)
Widesott, L.; Strigari, L.; Pressello, M. C.; Benassi, M.; Landoni, V.
2008-03-01
We investigated the role and weight of the parameters involved in intensity modulated radiation therapy (IMRT) optimization based on the generalized equivalent uniform dose (gEUD) method, for prostate and head-and-neck plans. We systematically varied the parameters (gEUDmax and weight) involved in the gEUD-based optimization of the rectal wall and parotid glands. We found that a proper value of the weight factor, while still guaranteeing planning target volume coverage, produced similar organ-at-risk (OAR) dose-volume (DV) histograms for different gEUDmax with fixed a = 1. Most notably, we formulated a simple relation linking the reference gEUDmax and the associated weight factor. As a secondary objective, we compared plans obtained with gEUD-based optimization against plans based on DV criteria, using normal tissue complication probability (NTCP) models. gEUD criteria appeared to improve sparing of the rectum and parotid glands relative to DV-based optimization: the mean dose, V40 and V50 values to the rectal wall decreased by about 10%, and the mean dose to the parotids decreased by about 20-30%. Beyond OAR sparing, we underline the halving of the OAR optimization time with the implementation of the gEUD-based cost function. Using NTCP models, we found differences between the two optimization criteria for the parotid glands, but not for the rectal wall.
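The gEUD cost underlying this optimization is gEUD = (Σ_i v_i d_i^a)^(1/a), with v_i the fractional volume receiving dose d_i. A small sketch with made-up dose-volume data shows that a = 1, the case used above, reduces it to the mean dose:

```python
import numpy as np

def gEUD(doses, volumes, a):
    """Generalized equivalent uniform dose, gEUD = (sum_i v_i * d_i**a)**(1/a).
    For a = 1 this is the mean dose; large a approaches the maximum dose."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                      # normalize fractional volumes
    return float((v * np.asarray(doses, dtype=float) ** a).sum() ** (1.0 / a))

# made-up differential DVH: 20% of the organ at 10 Gy, 50% at 30 Gy, 30% at 50 Gy
doses = [10.0, 30.0, 50.0]
vols = [0.2, 0.5, 0.3]
mean_dose = gEUD(doses, vols, 1)     # 32.0 Gy
near_max = gEUD(doses, vols, 40)     # approaches the 50 Gy hot spot
```

The exponent a therefore controls whether the cost function penalizes the mean dose (parallel organs) or the hot spots (serial organs).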
NASA Astrophysics Data System (ADS)
Dethlefsen, Frank; Tilmann Pfeiffer, Wolf; Schäfer, Dirk
2016-04-01
Numerical simulations of hydraulic, thermal, geomechanical, or geochemical (THMC) processes in the subsurface have been conducted for decades. Often, such simulations are commenced by applying a parameter set that is as realistic as possible. A base scenario is then calibrated against field observations, and finally scenario simulations can be performed, for instance to forecast the system behavior after varying the input data. In the context of subsurface energy and mass storage, however, such model calibrations based on field data are often not available, because these storage operations have not yet been carried out. Consequently, the numerical models rely solely on the initially selected parameter set, and uncertainties arising from a lack of parameter values or of process understanding may not be perceivable, let alone quantifiable. Conducting THMC simulations in the context of energy and mass storage therefore deserves a particular review of the model parameterization and its input data, and such a review hardly exists so far to the required extent. Variability, or aleatory uncertainty, exists for geoscientific parameter values in general, and parameters for which numerous data points are available, such as aquifer permeabilities, may be described statistically and thereby exhibit statistical uncertainty. In this case, sensitivity analyses can be conducted to quantify the uncertainty in the simulation resulting from varying the parameter. For other parameters, the lack of data quantity and quality means that varying the parameter value in numerical scenario simulations fundamentally changes the ongoing processes. As an example of such scenario uncertainty, varying the capillary entry pressure, one of the multiphase flow parameters, can either allow or completely inhibit the penetration of an aquitard by gas.
As a last example, the uncertainty of cap-rock fault permeabilities, and consequently of the potential leakage rates of stored gases into shallow compartments, is regarded by the authors of this study as recognized ignorance, since no realistic approach exists to determine this parameter and values are best guesses only. In addition to these aleatory uncertainties, an equivalent classification is possible for rating epistemic uncertainties, which describe the degree of understanding of processes such as the geochemical and hydraulic effects following potential gas intrusions from deeper reservoirs into shallow aquifers. As an outcome of this grouping of uncertainties, prediction errors of scenario simulations can be calculated by sensitivity analyses if the uncertainties are identified as statistical. However, if scenario uncertainties exist, or if recognized ignorance must be attested to a parameter or process in question, the outcomes of the simulations depend mainly on the modeler's decisions in choosing parameter values or interpreting the occurrence of processes. In that case, the informative value of the numerical simulations is limited by ambiguous results, which cannot be refined without improving the geoscientific database through longer-term laboratory or field studies, so that the effects of subsurface use may be predicted realistically. This discussion, amended by a compilation of available geoscientific data for parameterizing such simulations, is presented in this study.
Rao, Harsha L; Addepalli, Uday K; Yadav, Ravi K; Senthil, Sirisha; Choudhari, Nikhil S; Garudadri, Chandra S
2014-03-01
To evaluate the effect of scan quality on the diagnostic accuracies of optic nerve head (ONH), retinal nerve fiber layer (RNFL), and ganglion cell complex (GCC) parameters of spectral-domain optical coherence tomography (SD OCT) in glaucoma. Cross-sectional study. Two hundred fifty-two eyes of 183 control subjects (mean deviation [MD]: -1.84 dB) and 207 eyes of 159 glaucoma patients (MD: -7.31 dB) underwent ONH, RNFL, and GCC scanning with SD OCT. Scan quality of SD OCT images was based on signal strength index (SSI) values. Influence of SSI on diagnostic accuracy of SD OCT was evaluated by receiver operating characteristic (ROC) regression. Diagnostic accuracies of all SD OCT parameters were better when the SSI values were higher. This effect was statistically significant (P < .05) for ONH and RNFL but not for GCC parameters. In mild glaucoma (MD of -5 dB), area under ROC curve (AUC) for rim area, average RNFL thickness, and average GCC thickness parameters improved from 0.651, 0.678, and 0.726, respectively, at an SSI value of 30 to 0.873, 0.962, and 0.886, respectively, at an SSI of 70. AUCs of the same parameters in advanced glaucoma (MD of -15 dB) improved from 0.747, 0.890, and 0.873, respectively, at an SSI value of 30 to 0.922, 0.994, and 0.959, respectively, at an SSI of 70. Diagnostic accuracies of SD OCT parameters in glaucoma were significantly influenced by the scan quality even when the SSI values were within the manufacturer-recommended limits. These results should be considered while interpreting the SD OCT scans for glaucoma. Copyright © 2014 Elsevier Inc. All rights reserved.
Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny
2015-01-01
Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
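The regression idea can be sketched on a toy problem: regress the sampled net benefits on a summary statistic of the proposed data, then compare the expectation of the best option per data set with the best option on average. The linear net-benefit model and the polynomial fit below are assumptions standing in for the paper's health economic model and nonparametric regression.

```python
import numpy as np

rng = np.random.default_rng(1)

# PSA sample: parameter theta and the incremental net benefit (INB) of
# option B vs. option A under a toy linear model (assumed, not the paper's)
theta = rng.normal(0.2, 0.5, size=5000)
inb = 1000.0 * theta

# proposed study: n observations per patient, summarized by the sample mean
n = 20
xbar = theta + rng.normal(0.0, 1.0 / np.sqrt(n), size=theta.size)

# regression step: fit INB on the data summary (a cubic polynomial stands in
# for the authors' nonparametric regression)
coef = np.polyfit(xbar, inb, deg=3)
fitted = np.polyval(coef, xbar)

# EVSI = E[max(0, E[INB | data])] - max(0, E[INB])
evsi = np.mean(np.maximum(fitted, 0.0)) - max(np.mean(inb), 0.0)
```

No resampling from posterior distributions and no model reruns are needed: everything is computed from the single PSA sample, which is the efficiency gain the abstract describes.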
Chaos based encryption system for encrypting electroencephalogram signals.
Lin, Chin-Feng; Shih, Shun-Han; Zhu, Jin-De
2014-05-01
In this paper, we use the Microsoft Visual Studio development kit and the C# programming language to implement a chaos-based electroencephalogram (EEG) encryption system involving three encryption levels. A chaotic logistic map, an initial value, and a bifurcation parameter for the map were used to generate the Level I chaos-based EEG encryption bit streams. Two encryption-level parameters were added to these elements to generate the Level II bit streams. An additional chaotic map and a chaotic address index assignment process were used to implement the Level III system. Eight 16-channel EEG Vue signals were tested using the encryption system. Encryption was the most rapid and robust in the Level III system. The tests yielded superior encryption results, and when the correct deciphering parameters were applied, the EEG signals were completely recovered. However, an input parameter error (e.g., a 0.00001% error in the initial point) produces different chaotic encryption bit streams, preventing recovery of the 16-channel EEG Vue signals.
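A Level-I-style key stream can be sketched with a logistic map; the thresholding at 0.5 and the XOR cipher are our assumptions, not the paper's exact scheme, but they illustrate the sensitivity to the initial value.

```python
import numpy as np

def chaos_bits(x0, r, n):
    """Bit stream from iterating the logistic map x -> r*x*(1-x) and
    thresholding at 0.5 (an assumed quantization rule)."""
    x, bits = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        bits[i] = 1 if x >= 0.5 else 0
    return bits

signal = np.array([12, 200, 45, 3, 99], dtype=np.uint8)   # toy "EEG" bytes
key = np.packbits(chaos_bits(0.31, 3.99, 8 * signal.size))
cipher = signal ^ key
recovered = cipher ^ key          # correct parameters: exact recovery
# a tiny initial-value error makes the stream diverge after a few iterations,
# so decryption with it typically fails
wrong_key = np.packbits(chaos_bits(0.31000001, 3.99, 8 * signal.size))
garbled = cipher ^ wrong_key
```

The exponential divergence of nearby logistic-map trajectories is what makes the 0.00001% initial-point error in the abstract fatal to decryption.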
A generic hydrological model for a green roof drainage layer.
Vesuviano, Gianni; Stovin, Virginia
2013-01-01
A rainfall simulator of length 5 m and width 1 m was used to supply constant intensity and largely spatially uniform water inflow events to 100 different configurations of commercially available green roof drainage layer and protection mat. The runoff from each inflow event was collected and sampled at one-second intervals. Time-series runoff responses were subsequently produced for each of the tested configurations, using the average response of three repeat tests. Runoff models, based on storage routing (dS/dt = I - Q) and a power-law relationship between storage and runoff (Q = kS^n), and incorporating a delay parameter, were created. The parameters k, n and delay were optimized to best fit each of the runoff responses individually. The range and pattern of optimized parameter values was analysed with respect to roof and event configuration. An analysis was performed to determine the sensitivity of the shape of the runoff profile to changes in parameter values. There appears to be potential to consolidate values of n by roof slope and drainage component material.
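The model structure (dS/dt = I - Q, Q = kS^n, plus a pure delay) can be sketched with forward Euler integration; the parameter values below are illustrative, not the optimized values from the tests.

```python
import numpy as np

def route(inflow, dt, k, n, delay_steps):
    """Storage-routing runoff model dS/dt = I - Q with Q = k*S**n and a
    pure time delay; a sketch of the model structure, not the fitted model."""
    S, Q = 0.0, np.zeros(inflow.size)
    for i, I in enumerate(inflow):
        Q[i] = k * S ** n
        S = max(S + dt * (I - Q[i]), 0.0)   # forward Euler storage update
    # shift the whole hydrograph by the delay parameter
    return np.concatenate([np.zeros(delay_steps), Q])[: inflow.size]

dt = 1.0                                              # 1 s sampling, as tested
inflow = np.r_[np.full(600, 0.02), np.zeros(600)]     # constant-intensity event
runoff = route(inflow, dt, k=0.05, n=1.5, delay_steps=30)
```

During the constant-intensity phase the routed runoff rises monotonically toward the inflow rate, reproducing the characteristic shape that k, n and the delay are optimized to match.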
Quasiparticle interference in multiband superconductors with strong coupling
NASA Astrophysics Data System (ADS)
Dutt, A.; Golubov, A. A.; Dolgov, O. V.; Efremov, D. V.
2017-08-01
We develop a theory of the quasiparticle interference (QPI) in multiband superconductors based on the strong-coupling Eliashberg approach within the Born approximation. In the framework of this theory, we study dependencies of the QPI response function in the multiband superconductors with the nodeless s -wave superconductive order parameter. We pay special attention to the difference in the quasiparticle scattering between the bands having the same and opposite signs of the order parameter. We show that at the momentum values close to the momentum transfer between two bands, the energy dependence of the quasiparticle interference response function has three singularities. Two of these correspond to the values of the gap functions and the third one depends on both the gaps and the transfer momentum. We argue that only the singularity near the smallest band gap may be used as a universal tool to distinguish between the s++ and s± order parameters. The robustness of the sign of the response function peak near the smaller gap value, irrespective of the change in parameters, in both the symmetry cases is a promising feature that can be harnessed experimentally.
NASA Astrophysics Data System (ADS)
Algradee, M. A.; Sultan, M.; Samir, O. M.; Alwany, A. Elwhab B.
2017-08-01
Nd3+-doped lithium-zinc-phosphate glasses were prepared by the conventional melt-quenching method. X-ray diffraction confirmed the glassy nature of the studied samples. Physical parameters such as the density, molar volume, ion concentration, polaron radius, inter-ionic distance, field strength and oxygen packing density were calculated using standard formulae. The transmittance and reflectance spectra of the glasses were recorded in the wavelength range 190-1200 nm. The values of the optical band gap and Urbach energy were determined based on the Mott-Davis model. The refractive indices of the studied glasses were evaluated from the optical band gap values using different methods. The average electronic polarizability of the oxide ions, the optical basicity and an interaction parameter were derived from the calculated refractive index and optical band gap values. The variations of the different physical and optical properties with Nd2O3 content are discussed in terms of parameters such as non-bridging oxygens and the different concentrations of the Nd cation in the glass system.
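Three of the physical parameters listed above follow directly from the composition; a sketch using the commonly quoted formulae (the composition, density and molar mass below are illustrative, not the paper's measurements):

```python
import numpy as np

AVOGADRO = 6.022e23

def ion_parameters(mol_percent, density, molar_mass):
    """Commonly used formulae for doped glasses: dopant ion concentration N,
    inter-ionic distance r_i = N**(-1/3), and polaron radius
    r_p = (1/2) * (pi / (6*N))**(1/3). All values illustrative."""
    N = mol_percent / 100.0 * density * AVOGADRO / molar_mass   # ions / cm^3
    r_i = N ** (-1.0 / 3.0)                                     # cm
    r_p = 0.5 * (np.pi / (6.0 * N)) ** (1.0 / 3.0)              # cm
    return N, r_i, r_p

# hypothetical glass: 1 mol% Nd2O3, density 2.8 g/cm^3, mean molar mass 120 g/mol
N, r_i, r_p = ion_parameters(1.0, 2.8, 120.0)
```

As the Nd2O3 content rises, N increases while both r_i and r_p shrink, which is the trend such studies track against the optical properties.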
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, X; Liu, S; Kalet, A
Purpose: The purpose of this work was to investigate the ability of a machine-learning based probabilistic approach to detect radiotherapy treatment plan anomalies given initial disease class information. Methods: In total we obtained 1112 unique treatment plans with five plan parameters and disease information from a Mosaiq treatment management system database for use in the study. The plan parameters include prescription dose, fractions, fields, modality and technique. The disease information includes disease site and T, M and N disease stages. A Bayesian network method was employed to model the probabilistic relationships between tumor disease information, plan parameters and an anomaly flag. A Bayesian learning method with a Dirichlet prior was used to learn the joint probabilities between dependent variables in error-free plan data and in data with artificially induced anomalies. In the study, we randomly sampled data with anomalies in a specified anomaly space. We tested the approach with three groups of plan anomalies: improper concurrence of the values of all five plan parameters, improper concurrence of the values of any two of the five parameters, and all single-parameter value anomalies. In total, 16 types of plan anomalies were covered by the study. For each type, we trained an individual Bayesian network. Results: We found that the true positive rate (recall) and positive predictive value (precision) for detecting concurrence anomalies of the five plan parameters in new patient cases were 94.45±0.26% and 93.76±0.39%, respectively. For the other 15 types of plan anomalies, the average recall and precision were 93.61±2.57% and 93.78±3.54%, respectively. The computation time to detect a plan anomaly of each type in a new plan is ∼0.08 seconds. Conclusion: The proposed method for treatment plan anomaly detection was found effective in the initial tests.
The results suggest that this type of model could be applied to develop plan anomaly detection tools to assist manual and automated plan checks. The senior author received research grants from ViewRay Inc. and Varian Medical Systems.
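The core idea, scoring a plan by how probable its parameters are given the disease information, can be sketched with a counting model. The plain conditional-probability table with additive smoothing below is a naive stand-in for the Dirichlet-prior Bayesian network described above, and the plan data are invented.

```python
from collections import Counter

def train(plans):
    """Count joint occurrences of (disease site, plan parameters)."""
    joint, site = Counter(), Counter()
    for p in plans:
        joint[(p["site"], p["dose"], p["fractions"])] += 1
        site[p["site"]] += 1
    return joint, site

def plan_probability(plan, joint, site, alpha=1.0, k=10):
    """Smoothed P(parameters | site); low values flag potential anomalies.
    alpha/k play the role of the Dirichlet pseudo-counts."""
    num = joint[(plan["site"], plan["dose"], plan["fractions"])] + alpha
    return num / (site[plan["site"]] + alpha * k)

# invented historical plans: two common breast fractionation schemes
plans = [{"site": "breast", "dose": 50, "fractions": 25}] * 40 + \
        [{"site": "breast", "dose": 42.5, "fractions": 16}] * 20
joint, site = train(plans)
typical = plan_probability({"site": "breast", "dose": 50, "fractions": 25}, joint, site)
odd = plan_probability({"site": "breast", "dose": 50, "fractions": 5}, joint, site)
```

A never-seen dose/fractionation combination scores far lower than a routine one; thresholding such scores yields the anomaly flag, while the full Bayesian network additionally exploits the T, M and N stage variables.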
Finding Top-kappa Unexplained Activities in Video
2012-03-09
parameters that define a UAP instance affect the running time by varying the values of each parameter while keeping the others fixed to a default value. Runtime of Top-k TUA. Table 1 reports the values we considered for each parameter along with the corresponding default value.

Parameter   Values                   Default value
k           1, 2, 5, All             All
τ           0.4, 0.6, 0.8            0.6
L           160, 200, 240, 280       200
# worlds    7E+04, 4E+05, 2E+07      2E+07

TABLE 1: Parameter values used in
Determining fundamental properties of matter created in ultrarelativistic heavy-ion collisions
NASA Astrophysics Data System (ADS)
Novak, J.; Novak, K.; Pratt, S.; Vredevoogd, J.; Coleman-Smith, C. E.; Wolpert, R. L.
2014-03-01
Posterior distributions for physical parameters describing relativistic heavy-ion collisions, such as the viscosity of the quark-gluon plasma, are extracted through a comparison of hydrodynamic-based transport models to experimental results from 100A GeV + 100A GeV Au+Au collisions at the Relativistic Heavy Ion Collider. By simultaneously varying six parameters and by evaluating several classes of observables, we are able to explore the complex intertwined dependencies of observables on model parameters. The methods provide a full multidimensional posterior distribution for the model output, including a range of acceptable values for each parameter, and reveal correlations between them. The breadth of observables and the number of parameters considered here go beyond previous studies in this field. The statistical tools, which are based upon Gaussian process emulators, are tested in detail and should be extendable to larger data sets and a higher number of parameters.
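The Gaussian process emulator at the heart of such analyses replaces expensive model runs with a fast statistical surrogate. A minimal posterior-mean sketch with an RBF kernel (this is the generic GP machinery, not the authors' calibration code, and the toy "observable" is invented):

```python
import numpy as np

def gp_emulate(X, y, Xstar, length=0.2, noise=1e-6):
    """Posterior mean of a zero-mean Gaussian process with an RBF kernel,
    trained on (X, y) and evaluated at Xstar; a minimal emulator sketch."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # jitter for numerical stability
    alpha = np.linalg.solve(K, y)
    return k(Xstar, X) @ alpha

# emulate a toy "observable vs. model parameter" curve from 12 model runs
X = np.linspace(0.0, 1.0, 12)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
pred = gp_emulate(X, y, np.array([[0.25]]))
```

Once trained on a modest design of full transport-model runs, the emulator can be evaluated millions of times inside a Markov chain Monte Carlo sampler to map out the six-dimensional posterior.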
The review of dynamic monitoring technology for crop growth
NASA Astrophysics Data System (ADS)
Zhang, Hong-wei; Chen, Huai-liang; Zou, Chun-hui; Yu, Wei-dong
2010-10-01
In this paper, crop growth monitoring methods are reviewed in detail. Crop growth models, including the Wageningen (Netherlands) model system, the United States GOSSYM and CERES models, the Australian APSIM model and the CCSODS model system in China, are introduced with emphasis on their mechanistic theories and applications. The methods and applications of remote sensing monitoring based on leaf area index (LAI) and biomass, as proposed by different scholars in China and abroad, are highlighted. Monitoring methods that couple remote sensing with crop growth models are discussed at length, including the "forcing" method, in which state parameters retrieved by remote sensing are used as inputs to the crop growth model to enhance the accuracy of its dynamic simulation, and the "assimilation" method, in which the gap between the remotely sensed retrievals and the simulated values of the crop growth model is reduced in order to estimate initial values or parameter values and thereby increase simulation accuracy. Finally, development trends for monitoring methods are proposed based on the advantages and shortcomings of previous studies; the combination of remote sensing (with moderate-resolution data from FY-3A, MODIS, etc.), crop growth models, "3S" systems and in situ observation will be the main approach for refining dynamic monitoring and quantitative assessment techniques for crop growth in the future.
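The assimilation idea can be sketched with a toy LAI model: run the growth model forward and, whenever a retrieval arrives, nudge the simulated state toward the remotely sensed value. The logistic growth model, gain factor and retrieval schedule below are all illustrative assumptions.

```python
def grow_lai(lai, rate=0.08, lai_max=6.0):
    """One daily step of a toy logistic LAI growth model (illustrative only)."""
    return lai + rate * lai * (1.0 - lai / lai_max)

def assimilate(lai_model, lai_retrieved, gain=0.5):
    """Assimilation-style update: nudge the simulated state toward the
    remote-sensing retrieval to shrink the model-observation gap."""
    return lai_model + gain * (lai_retrieved - lai_model)

lai = 0.5
for day in range(60):
    lai = grow_lai(lai)
    if day % 10 == 9:   # pretend a retrieval (e.g., from MODIS) every 10 days
        lai = assimilate(lai, lai_retrieved=min(lai * 1.2, 6.0))
```

The "forcing" method, by contrast, would simply overwrite the model state with the retrieved LAI at each observation; full assimilation schemes estimate initial values or parameters by minimizing the model-retrieval misfit over the whole season.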
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
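The q-method referenced above maximizes Wahba's gain function by taking the dominant eigenvector of Davenport's K matrix. A minimal numpy sketch with two noise-free vector pairs follows; it covers only the attitude step, not the paper's iterative estimation of the additional parameters.

```python
import numpy as np

def q_method(b, r, w):
    """Davenport's q-method for Wahba's problem: b[i] are unit vectors in the
    body frame, r[i] the same directions in the reference frame, w[i] positive
    weights. Returns the optimal attitude quaternion [qx, qy, qz, qs]."""
    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, b, r))
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = B + B.T - sigma * np.eye(3)
    K[:3, 3] = K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, -1]            # eigenvector of the largest eigenvalue

def apply_attitude(q, v):
    """Rotate a reference vector into the body frame with the quaternion above
    (vector part first, scalar last)."""
    qv, qs = q[:3], q[3]
    return v + 2.0 * np.cross(qv, np.cross(qv, v) - qs * v)

# two noise-free reference/body vector pairs related by a 90-degree z rotation
r = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
b = [np.array([0.0, -1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
q = q_method(b, r, [1.0, 1.0])
```

Because the eigendecomposition yields the globally optimal quaternion directly, no a priori attitude estimate is needed, which is the property the generalized algorithm exploits when iterating only over the non-attitude parameters.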
Small field models with gravitational wave signature supported by CMB data
Brustein, Ramy
2018-01-01
We study scale dependence of the cosmic microwave background (CMB) power spectrum in a class of small, single-field models of inflation which lead to a high value of the tensor to scalar ratio. The inflaton potentials that we consider are degree 5 polynomials, for which we precisely calculate the power spectrum, and extract the cosmological parameters: the scalar index ns, the running of the scalar index nrun and the tensor to scalar ratio r. We find that for non-vanishing nrun and for r as small as r = 0.001, the precisely calculated values of ns and nrun deviate significantly from what the standard analytic treatment predicts. We study in detail, and discuss the probable reasons for such deviations. As such, all previously considered models (of this kind) are based upon inaccurate assumptions. We scan the possible values of potential parameters for which the cosmological parameters are within the allowed range by observations. The 5 parameter class is able to reproduce all of the allowed values of ns and nrun for values of r that are as high as 0.001. Subsequently this study at once refutes previous such models built using the analytical Stewart-Lyth term, and revives the small field brand, by building models that do yield an appreciable r while conforming to known CMB observables. PMID:29795608
NASA Astrophysics Data System (ADS)
Bedane, T.; Di Maio, L.; Scarfato, P.; Incarnato, L.; Marra, F.
2015-12-01
The barrier performance of multilayer polymeric films for food applications has been significantly improved by incorporating oxygen scavenging materials. The scavenging activity depends on parameters such as the diffusion coefficient, solubility, concentration of scavenger loaded and the number of available reactive sites. These parameters influence the barrier performance of the film in different ways. Virtualization of the process is useful for characterizing, designing and optimizing the barrier performance based on the physical configuration of the films. Also, knowledge of the parameter values is important for predicting performance. Inverse modeling and sensitivity analysis are the only way to find reasonable values of poorly defined, unmeasured parameters and to identify the most influential parameters. Thus, the objective of this work was to develop a model to predict the barrier properties of a multilayer film incorporating reactive layers and to analyze and characterize its performance. A polymeric film based on three layers of poly(ethylene terephthalate) (PET), with a core reactive layer, at different thickness configurations was considered in the model. A one-dimensional diffusion equation with reaction was solved numerically to predict the concentration of oxygen diffusing into the polymer, taking into account the reactive ability of the core layer. The model was solved using commercial software for different film layer configurations, and a sensitivity analysis based on inverse modeling was carried out to understand the effect of the physical parameters. The results show that sensitivity analysis can provide physical understanding of the parameters that most strongly affect gas permeation into the film. Solubility and the number of available reactive sites were the factors mainly influencing the barrier performance of the three-layered polymeric film.
Multilayer films slightly modified the steady transport properties in comparison to net PET, giving a small reduction in the permeability and oxygen transfer rate values. Scavenging capacity of the multilayer film increased linearly with the increase of the reactive layer thickness and the oxygen absorption reaction at short times decreased proportionally with the thickness of the external PET layer.
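The governing equation, one-dimensional diffusion with a reaction sink confined to the core layer, can be sketched with an explicit finite-difference scheme. All material values below (diffusivity, scavenging rate, layer thicknesses) are assumptions for illustration, not the paper's film parameters.

```python
import numpy as np

# 1-D oxygen diffusion with a first-order scavenging reaction in the core layer
L, nx = 300e-6, 151                 # film thickness 300 um, grid points
dx = L / (nx - 1)
D = 5e-13                           # O2 diffusivity in PET, m^2/s (assumed)
k_react = 1e-3                      # scavenging rate in the core, 1/s (assumed)
x = np.arange(nx) * dx
core = (x > 100e-6) & (x < 200e-6)  # reactive core between two PET skins

c = np.zeros(nx)
c[0] = c[-1] = 1.0                  # normalized ambient O2 at both faces
dt = 0.4 * dx * dx / D              # below the explicit stability limit 0.5
for _ in range(20000):
    lap = np.zeros(nx)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / (dx * dx)
    c[1:-1] += dt * (D * lap[1:-1] - k_react * core[1:-1] * c[1:-1])
    c[0] = c[-1] = 1.0
```

With a fast enough reaction, the oxygen concentration collapses inside the core, which is the mechanism behind the reported dependence of the barrier performance on the reactive-layer thickness and the number of reactive sites.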
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bedane, T.; Di Maio, L.; Scarfato, P.
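The diffusion-reaction model described above can be illustrated with a minimal explicit finite-difference sketch: 1D oxygen diffusion through a three-layer film whose middle (core) layer consumes oxygen by a first-order scavenging reaction. All numbers below (D, k, thickness, boundary conditions) are illustrative assumptions, not values from the study.

```python
# Sketch of a 1D diffusion-reaction solve for a three-layer film:
# PET / reactive core / PET. Illustrative parameter values only.

def simulate_oxygen(nx=30, nt=4000, dt=0.01):
    L = 90e-6                           # total film thickness [m] (assumed)
    dx = L / nx
    D = 1e-12                           # O2 diffusion coefficient [m^2/s] (assumed)
    k = 5.0                             # first-order scavenging rate [1/s] (assumed)
    core = range(nx // 3, 2 * nx // 3)  # middle third = reactive layer
    c = [0.0] * (nx + 1)
    c[0] = 1.0                          # normalized O2 concentration, feed side
    for _ in range(nt):                 # explicit Euler; stable since D*dt/dx^2 << 0.5
        new = c[:]
        for i in range(1, nx):
            diff = D * (c[i - 1] - 2.0 * c[i] + c[i + 1]) / dx ** 2
            react = -k * c[i] if i in core else 0.0
            new[i] = c[i] + dt * (diff + react)
        c = new                         # c[0] = 1 (feed) and c[nx] = 0 (sink) stay fixed
    return c

profile = simulate_oxygen()             # concentration decays across the film
```

At steady state the flux out of such a profile, relative to a purely passive film, gives the permeability reduction the abstract reports; the scavenging term lowers the profile inside the core, which is the mechanism behind the improved barrier.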
Estimation of intra-operator variability in perfusion parameter measurements using DCE-US
Gauthier, Marianne; Leguerney, Ingrid; Thalmensi, Jessie; Chebil, Mohamed; Parisot, Sarah; Peronneau, Pierre; Roche, Alain; Lassau, Nathalie
2011-01-01
AIM: To investigate intra-operator variability of semi-quantitative perfusion parameters using dynamic contrast-enhanced ultrasonography (DCE-US), following bolus injections of SonoVue®. METHODS: The in vitro experiments were conducted using three in-house set-ups based on pumping a fluid through a phantom placed in a water tank. In the in vivo experiments, B16F10 melanoma cells were xenografted into five nude mice. Both in vitro and in vivo, images were acquired following bolus injections of the ultrasound contrast agent SonoVue® (Bracco, Milan, Italy) using a Toshiba Aplio® ultrasound scanner connected to a 2.9-5.8 MHz linear transducer (PZT, PLT 604AT probe) (Toshiba, Japan) allowing harmonic imaging (“Vascular Recognition Imaging”) involving linear raw data. A mathematical model based on the dye-dilution theory was developed by the Gustave Roussy Institute, Villejuif, France, and used to evaluate seven perfusion parameters from time-intensity curves. Intra-operator variability analyses were based on determining perfusion parameter coefficients of variation (CV). RESULTS: In vitro, different volumes of SonoVue® were tested with the three phantoms: intra-operator variability was found to range from 2.33% to 23.72%. In vivo, experiments were performed on tumor tissues and perfusion parameters exhibited values ranging from 1.48% to 29.97%. In addition, the area under the curve (AUC) and the area under the wash-out (AUWO) were two parameters of particular interest since, throughout the in vitro and in vivo experiments, their variability was lower than 15.79%. CONCLUSION: AUC and AUWO appear to be the most reliable parameters for assessing tumor perfusion using DCE-US as they exhibited the lowest CV values. PMID:21512654
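The variability metric used throughout the study is the coefficient of variation of repeated measurements. A minimal sketch, with made-up repeat measurements of a perfusion parameter:

```python
# Coefficient of variation (CV) for repeated perfusion-parameter measurements.
# The numbers below are illustrative, not data from the study.

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation divided by the mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return 100.0 * var ** 0.5 / mean

# e.g. three repeated AUC measurements by the same operator (made-up)
auc_repeats = [102.0, 98.0, 100.0]
cv = coefficient_of_variation(auc_repeats)  # -> 2.0 (percent)
```

A low CV across repeats by the same operator is what qualifies AUC and AUWO as "reliable" in the conclusion above.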
Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method
NASA Astrophysics Data System (ADS)
Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi
In this study, an expert knowledge-based automatic sleep stage determination system based on a multi-valued decision-making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database, which consists of probability density functions of characteristic parameters for the various sleep stages. Sleep stages are then determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with the visual inspection for the stages of awake, REM (rapid eye movement), light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of the characteristic parameters and can be adapted to the variable sleep data encountered in hospitals. The developed automatic determination technique, based on expert knowledge from visual inspection, can serve as an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
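The decision step can be sketched as follows: given per-stage probability density functions of each characteristic parameter (the "expert knowledge database"), pick the stage with the highest conditional probability for the observed parameter values. Gaussian densities, the two parameters, and all numbers here are illustrative assumptions, not the paper's actual database.

```python
import math

def gauss_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# per-stage (mean, sd) for two hypothetical parameters,
# e.g. delta-wave ratio and EMG level (made-up values)
KNOWLEDGE_DB = {
    "awake":       [(0.05, 0.03), (0.8, 0.1)],
    "REM":         [(0.10, 0.05), (0.2, 0.1)],
    "light sleep": [(0.30, 0.10), (0.4, 0.1)],
    "deep sleep":  [(0.70, 0.15), (0.3, 0.1)],
}

def classify(params):
    """Return the stage maximizing the joint density (parameters assumed independent)."""
    def score(stage):
        return math.prod(gauss_pdf(x, m, s)
                         for x, (m, s) in zip(params, KNOWLEDGE_DB[stage]))
    return max(KNOWLEDGE_DB, key=score)
```

Because the database is just a set of per-stage densities, it can be re-estimated from local hospital data, which is the adaptivity the abstract points to.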
A statistical survey of heat input parameters into the cusp thermosphere
NASA Astrophysics Data System (ADS)
Moen, J. I.; Skjaeveland, A.; Carlson, H. C.
2017-12-01
Based on three winters of observational data, we present the ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements, up to doublings. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wiper scans of 1000 x 500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to the input data driving thermosphere models, enabling removal of the previous twofold drag errors.
Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic
NASA Astrophysics Data System (ADS)
Haag, T.; Herrmann, J.; Hanss, M.
2010-10-01
For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out due to the fact that it only uses feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output. An inversion of the system equations is not necessary. The advancement of the method presented in this paper consists of the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based on the output data of the transformation method only. For a frequency response function of a mechanical system, this criterion allows a restriction of the identification process to some special range of frequency where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system by the corresponding uncertain FE model in a conservative way.
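The feedforward building block of such a procedure can be sketched in its simplest reduced form: a triangular fuzzy parameter is discretized into alpha-cut intervals and pushed through the model, so that each alpha level yields an output interval. For a monotonic model the output interval is spanned by the endpoint evaluations; the full transformation method generalizes this to multiple parameters and non-monotonic models. The model function and fuzzy number below are illustrative.

```python
# Alpha-cut propagation of one triangular fuzzy parameter through a model
# (reduced, single-parameter sketch of the feedforward step; illustrative only).

def alpha_cuts(peak, spread, levels=5):
    """Interval [lo, hi] of a symmetric triangular fuzzy number at each alpha level."""
    cuts = []
    for i in range(levels):
        a = i / (levels - 1)
        cuts.append((a, (peak - (1 - a) * spread, peak + (1 - a) * spread)))
    return cuts

def propagate(model, cuts):
    """Evaluate the model at each cut's endpoints; for a monotonic model
    the output interval is spanned by these evaluations."""
    return [(a, (min(model(lo), model(hi)), max(model(lo), model(hi))))
            for a, (lo, hi) in cuts]

out = propagate(lambda k: 2.0 * k + 1.0, alpha_cuts(3.0, 1.0))
```

An inverse identification in this spirit adjusts `peak` and `spread` until the propagated output intervals cover a reference output, which matches the "feedforward simulations only" property the abstract emphasizes.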
Wada, Yumiko; Furuse, Tamio; Yamada, Ikuko; Masuya, Hiroshi; Kushida, Tomoko; Shibukawa, Yoko; Nakai, Yuji; Kobayashi, Kimio; Kaneda, Hideki; Gondo, Yoichi; Noda, Tetsuo; Shiroishi, Toshihiko; Wakana, Shigeharu
2010-01-01
To establish the cutoff values for screening ENU-induced behavioral mutations, normal variations in mouse behavioral data were examined in home-cage activity (HA), open-field (OF), and passive-avoidance (PA) tests. We defined the normal range as one that included more than 95% of the normal control values. The cutoffs were defined to identify outliers yielding values that deviated from the normal by less than 5% for C57BL/6J, DBA/2J, DBF(1), and N(2) (DXDB) progenies. Cutoff values for G1-phenodeviant (DBF(1)) identification were defined based on values over +/- 3.0 SD from the mean of DBF(1) for all parameters assessed in the HA and OF tests. For the PA test, the cutoff values were defined based on whether the mice met the learning criterion during the 2nd (at a shock intensity of 0.3 mA) or the 3rd (at a shock intensity of 0.15 mA) retention test. For several parameters, the lower outliers were undetectable as the calculated cutoffs were negative values. Based on the cutoff criteria, we identified 275 behavioral phenodeviants among 2,646 G1 progeny. Of these, 64 were crossed with wild-type DBA/2J individuals, and the phenotype transmission was examined in the G2 progeny using the cutoffs defined for N(2) mice. In the G2 mice, we identified 15 novel dominant mutants exhibiting behavioral abnormalities, including hyperactivity in the HA or OF tests, hypoactivity in the OF test, and PA deficits. Genetic and detailed behavioral analysis of these ENU-induced mutants will provide novel insights into the molecular mechanisms underlying behavior.
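The cutoff rule for the continuous parameters above can be sketched directly: outliers are values beyond mean ± 3.0 SD of the control distribution, and a negative lower cutoff means the lower outlier is undetectable for that parameter. The control values below are made up for illustration.

```python
# Mean +/- k*SD cutoffs for phenodeviant screening (illustrative data).

def cutoffs(values, k=3.0):
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return mean - k * sd, mean + k * sd

# e.g. hypothetical open-field activity counts for control animals
lo, hi = cutoffs([50, 60, 55, 45, 65, 40, 70])

def is_phenodeviant(x):
    return x < lo or x > hi
```

With strongly right-skewed parameters the lower bound `mean - 3*sd` can fall below zero, reproducing the "undetectable lower outliers" situation the abstract describes.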
A program and data base for evaluating SMMR algorithms
NASA Technical Reports Server (NTRS)
1979-01-01
A program (PARAM) is described which enables a user to compare the values of meteorological parameters derived from data obtained by the scanning multichannel microwave radiometer (SMMR) instrument on NIMBUS 7 with surface observations made over the ocean. The input to this program is a data base, also described, which contains the surface observations and coincident SMMR data. The evaluation of meteorological parameters using SMMR data is done by a user-supplied subroutine. Instructions are given for executing the program and writing the subroutine.
Earthquake hazard analysis for the different regions in and around Aǧrı
NASA Astrophysics Data System (ADS)
Bayrak, Erdem; Yilmaz, Şeyda; Bayrak, Yusuf
2016-04-01
We investigated earthquake hazard parameters for the Eastern part of Turkey by determining the a and b parameters of the Gutenberg-Richter magnitude-frequency relationship. For this purpose, the study area was divided into seven source zones based on their tectonic and seismotectonic regimes. The database used in this work was compiled from different sources and catalogues, such as TURKNET, the International Seismological Centre (ISC), the Incorporated Research Institutions for Seismology (IRIS) and The Scientific and Technological Research Council of Turkey (TUBITAK), for the instrumental period. We calculated the a value and the b value, the slope of the Gutenberg-Richter frequency-magnitude relationship, using the maximum likelihood (ML) method. We also estimated the mean return periods, the most probable maximum magnitude in a time period of t years, and the probability of occurrence of an earthquake with magnitude ≥ M during a time span of t years. We used the Zmap software to calculate these parameters. The lowest b value was calculated in Region 1, which covers the Cobandede Fault Zone. We obtained the highest a value in Region 2, which covers the Kagizman Fault Zone. This conclusion is strongly supported by the probability value, which is largest (87%) for an earthquake with magnitude greater than or equal to 6.0; the mean return period for such a magnitude is the lowest in this region (49 years). The most probable magnitude in the next 100 years was calculated, and the highest value was determined around the Cobandede Fault Zone. According to these parameters, Region 1, covering the Cobandede Fault Zone, is the most hazardous area in the Eastern part of Turkey.
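The quantities in this abstract follow from standard Gutenberg-Richter relations, which can be sketched compactly. The b value uses the Aki/Utsu maximum-likelihood estimator; the catalogue, completeness magnitude and counts below are illustrative, not the study's data.

```python
import math

# Gutenberg-Richter a/b estimation and derived recurrence quantities
# (illustrative inputs only).

def b_value_ml(mags, mc, dm=0.1):
    """Aki/Utsu ML estimator: b = log10(e) / (mean(M) - (Mc - dm/2)),
    where dm corrects for magnitude binning."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

def a_value(n_events, years, b, mc):
    """Annual a value from the total count of events with M >= Mc."""
    return math.log10(n_events / years) + b * mc

def return_period(m, a, b):
    """Mean return period (years) of events with magnitude >= m."""
    return 1.0 / 10 ** (a - b * m)

def prob_occurrence(m, a, b, t):
    """Poisson probability of at least one M >= m event in t years."""
    return 1.0 - math.exp(-t / return_period(m, a, b))

b = b_value_ml([3.2, 3.5, 4.1, 4.8, 3.9, 4.5], mc=3.0)  # made-up catalogue
a = a_value(105, 10, 1.0, 3.0)                           # made-up counts, b = 1 assumed
```

A region with a low b value (relatively more large events) and a short return period for M ≥ 6.0 is exactly what singles out Region 1 as most hazardous above.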
NASA Astrophysics Data System (ADS)
Vasić, M.; Radojević, Z.
2017-08-01
One of the main disadvantages of the recently reported method for setting up the drying regime based on the theory of moisture migration during drying lies in the fact that it requires a large number of isothermal experiments, each of which uses a different set of drying air parameters. The main goal of this paper was to find a way to reduce the number of isothermal experiments without affecting the quality of the previously proposed calculation method. The first task was to define the lower and upper input levels as well as the output of the “black box” used in the Box-Wilkinson orthogonal multi-factorial experimental design. Three inputs (drying air temperature, humidity and velocity) were used within the experimental design. The output parameter of the model is the time interval between any two chosen characteristic points on the Deff-t curve. The second task was to calculate the output parameter for each planned experiment. The final output of the model is an equation which predicts the time interval between any two chosen characteristic points as a function of the drying air parameters. This equation is valid for any values of the drying air parameters that lie within the design area delimited by the lower and upper limiting values.
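The final regression step can be sketched as an ordinary least-squares fit of the measured time interval against the three drying-air factors. A first-order model and made-up design data are assumed here; the paper's actual model may include interaction or quadratic terms.

```python
# OLS fit of time interval vs. drying-air parameters via the normal equations
# (illustrative sketch; design points and times are made up).

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_linear(rows, y):
    """Least squares for t = c0 + c1*T + c2*H + c3*V."""
    X = [[1.0] + list(r) for r in rows]
    m = len(X[0])
    XtX = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(m)]
           for p in range(m)]
    Xty = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(m)]
    return solve(XtX, Xty)

# hypothetical design points: (temperature degC, humidity %, velocity m/s)
points = [(40, 20, 1.0), (60, 20, 1.0), (40, 40, 1.0),
          (40, 20, 2.0), (60, 40, 2.0), (50, 30, 1.5)]
times = [21.0, 31.0, 19.0, 22.0, 30.0, 25.5]  # made-up intervals [min]
coefs = fit_linear(points, times)             # [c0, c_T, c_H, c_V]
```

Once fitted, the equation predicts the interval for any air-parameter combination inside the design limits, which is the stated validity region.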
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development, it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs was investigated via simulation/estimation by computing the bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the designs based on the FO approximation and the block-diagonal FIM had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten
2011-05-01
To investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA, Oncentra MasterPlan Collapsed Cone and Pencil Beam, Pinnacle Collapsed Cone, and XiO Multigrid Superposition and Fast Fourier Transform Convolution. Twenty NSCLC patients treated in the period 2001-2006 on the same accelerator were included, and this accelerator was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for the heart and lungs were calculated using the relative seriality model and the LKB model, respectively. Furthermore, NTCP for the lungs was calculated from two different model parameter sets. Calculations and evaluations were performed both including and excluding density corrections. Statistically significant differences were found between the calculated doses to the heart, lungs, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to a change between the investigated dose calculation algorithms. However, the dose levels for the PTV averaged over the patient population vary by up to 11%. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24 or between 0.35 and 0.48 across the investigated dose algorithms, depending on the chosen model parameter set. The influence of the use of density correction in the dose calculation on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set. For fixed values of these, the changes in NTCP can be up to 45%.
Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20 which are also commonly used for plan evaluation. The NTCP values for heart complication are, in this study, not very sensitive to the choice of algorithm. Dose calculations based on density corrections result in quite different NTCP values than calculations without density corrections. It is therefore important when working with NTCP planning to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.
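The lung model named above, the Lyman-Kutcher-Burman (LKB) model, can be sketched compactly: a generalized equivalent uniform dose (gEUD) is computed from the dose-volume histogram and mapped through a normal CDF. Published parameter sets (n, m, TD50) differ, which is exactly why the abstract reports two different NTCP ranges; the values and DVH below are illustrative only.

```python
import math

# LKB NTCP sketch (illustrative parameters and DVH, not the study's data).

def geud(dvh, n):
    """Generalized EUD from a differential DVH given as (fractional volume, dose Gy)."""
    return sum(v * d ** (1.0 / n) for v, d in dvh) ** n

def lkb_ntcp(dvh, n, m, td50):
    t = (geud(dvh, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # standard normal CDF

# toy lung DVH: 30% of volume at 20 Gy, 70% at 5 Gy
dvh = [(0.3, 20.0), (0.7, 5.0)]
risk = lkb_ntcp(dvh, n=1.0, m=0.4, td50=30.0)  # n = 1 reduces gEUD to mean dose
```

Because NTCP is a steep function of gEUD near TD50, modest dose differences between algorithms can translate into the large relative NTCP differences reported above, while summary metrics like mean lung dose and V20 barely move.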
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. 
A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref. 1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
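The general idea of pinning a four-parameter beta distribution to minimum / most-likely / maximum estimates can be sketched with one common closure assumption: fixing the sum of the shape parameters (the PERT convention uses a_ + b_ = 6), which determines both shapes from the mode. This stands in for the paper's own closure assumption, whose exact form is not given here; all numbers are illustrative.

```python
# Beta distribution from (min, mode, max) under the assumed closure a_ + b_ = 6
# (PERT-style convention; a stand-in for the in-house method's assumption).

def beta_from_three_points(lo, mode, hi, shape_sum=6.0):
    m = (mode - lo) / (hi - lo)             # mode mapped to [0, 1]
    a_ = 1.0 + (shape_sum - 2.0) * m        # from mode = (a-1)/(a+b-2)
    b_ = shape_sum - a_
    mean = lo + (hi - lo) * a_ / shape_sum
    var = (hi - lo) ** 2 * a_ * b_ / (shape_sum ** 2 * (shape_sum + 1.0))
    return a_, b_, mean, var ** 0.5

# e.g. a design load estimated as min 10, most likely 20, max 40 (made-up units)
a_, b_, mu, sigma = beta_from_three_points(10.0, 20.0, 40.0)
```

With this closure the mean reduces to the familiar (min + 4·mode + max)/6, and the standard deviation follows algebraically rather than being fixed as a fraction of the range, mirroring the direct-solution property claimed for the in-house method.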
Nilsson, Ingemar; Polla, Magnus O
2012-10-01
Drug design is a multi-parameter task present in the analysis of experimental data for synthesized compounds and in the prediction of new compounds with desired properties. This article describes the implementation of a binned scoring and composite ranking scheme for 11 experimental parameters that were identified as key drivers in the MC4R project. The composite ranking scheme was implemented in an AstraZeneca tool for analysis of project data, thereby providing an immediate re-ranking as new experimental data was added. The automated ranking also highlighted compounds overlooked by the project team. The successful implementation of a composite ranking on experimental data led to the development of an equivalent virtual score, which was based on Free-Wilson models of the parameters from the experimental ranking. The individual Free-Wilson models showed good to high predictive power with a correlation coefficient between 0.45 and 0.97 based on the external test set. The virtual ranking adds value to the selection of compounds for synthesis but error propagation must be controlled. The experimental ranking approach adds significant value, is parameter independent and can be tuned and applied to any drug discovery project.
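A binned scoring and composite ranking scheme of the kind described can be sketched in a few lines: each parameter value is mapped to a bin score against project thresholds, and compounds are ranked by the summed score. The parameter names, thresholds, and compounds below are invented for illustration; the MC4R project used 11 experimental parameters with its own criteria.

```python
# Binned scoring + composite ranking sketch (hypothetical criteria and data).

def bin_score(value, edges, higher_is_better=True):
    """Score 0..len(edges): how many thresholds the value clears."""
    hits = sum(value >= e for e in edges)
    return hits if higher_is_better else len(edges) - hits

# (bin edges, higher-is-better) per parameter -- hypothetical project criteria
CRITERIA = {
    "potency_pIC50": ([6.0, 7.0, 8.0], True),
    "solubility_uM": ([10.0, 100.0], True),
    "clearance":     ([10.0, 30.0], False),  # lower clearance is better
}

def composite_rank(compounds):
    """compounds: {name: {parameter: value}} -> names sorted best-first."""
    def total(name):
        return sum(bin_score(compounds[name][p], edges, up)
                   for p, (edges, up) in CRITERIA.items())
    return sorted(compounds, key=total, reverse=True)

compounds = {
    "CMP-1": {"potency_pIC50": 8.2, "solubility_uM": 150.0, "clearance": 5.0},
    "CMP-2": {"potency_pIC50": 6.5, "solubility_uM": 50.0, "clearance": 40.0},
}
ranking = composite_rank(compounds)
```

Because the ranking recomputes from whatever data is present, adding a new assay result immediately re-orders the list, which is the "immediate re-ranking" behavior described above.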
Fan, Longling; Yao, Jing; Yang, Chun; Xu, Di; Tang, Dalin
2018-01-01
Modeling ventricle active contraction based on in vivo data is extremely challenging because of complex ventricle geometry, dynamic heart motion and active contraction where the reference geometry (zero-stress geometry) changes constantly. A new modeling approach using different diastole and systole zero-load geometries was introduced to handle the changing zero-load geometries for more accurate stress/strain calculations. Echo image data were acquired from 5 patients with infarction (Infarct Group) and 10 without (Non-Infarcted Group). Echo-based computational two-layer left ventricle models using one zero-load geometry (1G) and two zero-load geometries (2G) were constructed. Material parameter values in Mooney-Rivlin models were adjusted to match echo volume data. Effective Young’s moduli (YM) were calculated for easy comparison. For diastole phase, begin-filling (BF) mean YM value in the fiber direction (YMf) was 738% higher than its end-diastole (ED) value (645.39 kPa vs. 76.97 kPa, p=3.38E-06). For systole phase, end-systole (ES) YMf was 903% higher than its begin-ejection (BE) value (1025.10 kPa vs. 102.11 kPa, p=6.10E-05). Comparing systolic and diastolic material properties, ES YMf was 59% higher than its BF value (1025.10 kPa vs. 645.39 kPa. p=0.0002). BE mean stress value was 514% higher than its ED value (299.69 kPa vs. 48.81 kPa, p=3.39E-06), while BE mean strain value was 31.5% higher than its ED value (0.9417 vs. 0.7162, p=0.004). Similarly, ES mean stress value was 562% higher than its BF value (19.74 kPa vs. 2.98 kPa, p=6.22E-05), and ES mean strain value was 264% higher than its BF value (0.1985 vs. 0.0546, p=3.42E-06). 2G models improved over 1G model limitations and may provide better material parameter estimation and stress/strain calculations. PMID:29399004
Comparison of Ionospheric Parameters during Similar Geomagnetic Storms
NASA Astrophysics Data System (ADS)
Blagoveshchensky, D. V.
2018-03-01
The degree of closeness between ionospheric parameters during one magnetic storm and the same parameters during another, similar storm is estimated. Overall, four storms—two pairs of storms close in structure and appearance according to recordings of the magnetic field X-component—were analyzed. The examination was based on data from the Sodankyla observatory (Finland). The f-plots of ionospheric vertical sounding, magnetometer data, and riometer absorption data were used. The main results are as follows. The values of the critical frequencies foF2, foF1, and foE for different but similar magnetic storms differ insignificantly. In the daytime, the difference is on average 6% (from 0 to 11.1%) for all ionospheric layers. In nighttime conditions, the difference for foF2 is 4%. The nighttime values of foEs differ on average by 20%. These estimates potentially make it possible to forecast ionospheric parameters for a particular storm.
Analysis of Mathematical Modelling on Potentiometric Biosensors
Mehala, N.; Rajendran, L.
2014-01-01
A mathematical model of potentiometric enzyme electrodes for a nonsteady condition has been developed. The model is based on the system of two coupled nonlinear time-dependent reaction diffusion equations for Michaelis-Menten formalism that describes the concentrations of substrate and product within the enzymatic layer. Analytical expressions for the concentration of substrate and product and the corresponding flux response have been derived for all values of parameters using the new homotopy perturbation method. Furthermore, the complex inversion formula is employed in this work to solve the boundary value problem. The analytical solutions obtained allow a full description of the response curves for only two kinetic parameters (unsaturation/saturation parameter and reaction/diffusion parameter). Theoretical descriptions are given for the two limiting cases (zero and first order kinetics) and relatively simple approaches for general cases are presented. All the analytical results are compared with simulation results using Scilab/Matlab program. The numerical results agree with the appropriate theories. PMID:25969765
Gu, Junfei; Yin, Xinyou; Zhang, Chengwei; Wang, Huaqi; Struik, Paul C
2014-09-01
Genetic markers can be used in combination with ecophysiological crop models to predict the performance of genotypes. Crop models can estimate the contribution of individual markers to crop performance in given environments. The objectives of this study were to explore the use of crop models to design markers and virtual ideotypes for improving yields of rice (Oryza sativa) under drought stress. Using the model GECROS, crop yield was dissected into seven easily measured parameters. Loci for these parameters were identified for a rice population of 94 introgression lines (ILs) derived from two parents differing in drought tolerance. Marker-based values of ILs for each of these parameters were estimated from additive allele effects of the loci, and were fed to the model in order to simulate yields of the ILs grown under well-watered and drought conditions and in order to design virtual ideotypes for those conditions. To account for genotypic yield differences, it was necessary to parameterize the model for differences in an additional trait 'total crop nitrogen uptake' (Nmax) among the ILs. Genetic variation in Nmax had the most significant effect on yield; five other parameters also significantly influenced yield, but seed weight and leaf photosynthesis did not. Using the marker-based parameter values, GECROS also simulated yield variation among 251 recombinant inbred lines of the same parents. The model-based dissection approach detected more markers than the analysis using only yield per se. Model-based sensitivity analysis ranked all markers for their importance in determining yield differences among the ILs. Virtual ideotypes based on markers identified by modelling had 10-36 % more yield than those based on markers for yield per se. This study outlines a genotype-to-phenotype approach that exploits the potential value of marker-based crop modelling in developing new plant types with high yields. 
The approach can provide more markers for selection programmes for specific environments whilst also allowing for prioritization. Crop modelling is thus a powerful tool for marker design for improved rice yields and for ideotyping under contrasting conditions.
NASA Astrophysics Data System (ADS)
Stergiopoulos, Ch.; Stavrakas, I.; Triantis, D.; Vallianatos, F.; Stonham, J.
2015-02-01
Weak electric signals termed 'Pressure Stimulated Currents' (PSC) are generated and detected while cement-based materials are under mechanical load; they are related to the creation of cracks and the consequent evolution of the crack network in the bulk of the specimen. During the experiment, a set of cement mortar beams of rectangular cross-section was subjected to Three-Point Bending (3PB). For each specimen, an abrupt mechanical load step was applied, increasing from a low load level (Lo) to a high final value (Lh), where Lh was different for each specimen and was maintained constant for a long time. The temporal behavior of the recorded PSC shows that a spike-like PSC emission accompanies the load increase, followed by a relaxation of the PSC after it reaches its final value. The relaxation process of the PSC was studied using non-extensive statistical physics (NESP) based on the Tsallis entropy. The behavior of the Tsallis q-parameter was studied in relaxation PSCs in order to investigate its potential use as an index for monitoring the crack evolution process, with a potential application in non-destructive laboratory testing of cement-based specimens of unknown internal damage level. The dependence of the q-parameter on Lh (when Lh < 0.8Lf, where Lf represents the 3PB strength of the specimen) shows an increase in the q value when the specimens are subjected to gradually higher bending loads, reaching a maximum value close to 1.4 when the applied Lh exceeds 0.8Lf. When the applied Lh exceeds 0.9Lf, the value of the q-parameter gradually decreases. This analysis of the experimental data shows that the entropic index q decreases characteristically as the specimen approaches its ultimate strength, and thus could be used as a forerunner of the expected failure.
Gao, Jie; Zhu, Peiyong; Alsaedi, Ahmed; Alsaadi, Fuad E; Hayat, Tasawar
2017-02-01
In this paper, finite-time synchronization (FTS) of memristor-based recurrent neural networks (MNNs) with time-varying delays is investigated by designing a new switching controller. First, by using differential inclusion theory and set-valued maps, sufficient conditions to ensure FTS of MNNs are obtained for the two cases 0<α<1 and α=0, and it is shown that α=0 is the critical value of 0<α<1. Next, the relation between the parameter α and the synchronization time is discussed in depth. Then, a new controller with a switching parameter α is designed that can shorten the synchronization time. Finally, some numerical simulation examples are provided to illustrate the effectiveness of the proposed results.
Agreement in cardiovascular risk rating based on anthropometric parameters
Dantas, Endilly Maria da Silva; Pinto, Cristiane Jordânia; Freitas, Rodrigo Pegado de Abreu; de Medeiros, Anna Cecília Queiroz
2015-01-01
Objective To investigate the agreement in evaluation of risk of developing cardiovascular diseases based on anthropometric parameters in young adults. Methods The study included 406 students, whose weight, height, and waist and neck circumferences were measured; the waist-to-height ratio and the conicity index were calculated. The kappa coefficient was used to assess agreement in risk classification for cardiovascular diseases. The positive and negative specific agreement values were calculated as well. The Pearson chi-square (χ2) test was used to assess associations between categorical variables (p<0.05). Results The majority of the parameters assessed (44%) showed slight (k=0.21 to 0.40) and/or poor agreement (k<0.20), with low values of negative specific agreement. The best agreement was observed between waist circumference and waist-to-height ratio both for the general population (k=0.88) and between sexes (k=0.86 to 0.93). There was a significant association (p<0.001) between the risk of cardiovascular diseases and females when using waist circumference and conicity index, and with males when using neck circumference. This resulted in a wide variation in the prevalence of cardiovascular disease risk (5.5%-36.5%), depending on the parameter and the sex that was assessed. Conclusion The results indicate variability in agreement in assessing risk for cardiovascular diseases, based on anthropometric parameters, and which also seems to be influenced by sex. Further studies in the Brazilian population are required to better understand this issue. PMID:26466060
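The kappa coefficient used above is straightforward to compute from two risk classifications. A minimal sketch (the risk flags below are made-up values, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two classifications."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Expected agreement under independence of the two raters.
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical at-risk flags (1 = at risk) from two anthropometric cutoffs.
wc_risk   = [1, 1, 0, 0, 1, 0]   # e.g. waist circumference
whtr_risk = [1, 0, 0, 0, 1, 0]   # e.g. waist-to-height ratio
k = cohens_kappa(wc_risk, whtr_risk)   # ≈ 0.67, "substantial" agreement
```

Values near 1 indicate the two parameters classify the same people as at risk; values near 0 indicate chance-level agreement, matching the k thresholds quoted in the abstract.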
Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich
2016-07-01
A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for that is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method to determine estimation models for predicting hemodynamic parameters (pump chamber filling and afterload) from both left and right cardiovascular circulations. The estimation models are based on linear regression models that correlate filling and afterload values with pump-intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) present an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable to estimate hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and are an essential step for the development of a physiological control algorithm for a fully implantable TAH.
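The estimation-model idea (linear regression from pump-intrinsic signals to a hemodynamic target) can be sketched with ordinary least squares. The feature names and coefficients below are illustrative stand-ins, not the ReinHeart variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pump-intrinsic features derived from motor current and
# piston position (names are illustrative only).
n = 200
current_amp = rng.uniform(0.5, 2.0, n)     # motor current amplitude
piston_speed = rng.uniform(5.0, 15.0, n)   # piston speed metric
# Synthetic "measured" filling with an assumed linear dependence plus noise.
filling = 20.0 + 4.0*current_amp - 0.8*piston_speed + rng.normal(0, 0.3, n)

# Fit the linear estimation model by ordinary least squares.
X = np.column_stack([np.ones(n), current_amp, piston_speed])
coef, *_ = np.linalg.lstsq(X, filling, rcond=None)
pred = X @ coef
mean_pct_err = np.mean(np.abs(pred - filling) / np.abs(filling)) * 100
```

On this synthetic data the mean percentage error is a few percent, in the same spirit as the within-5% filling predictions reported above.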
NASA Astrophysics Data System (ADS)
Praseptiangga, D.; Utami, R.; Khasanah, L. U.; Evirananda, I. P.; Kawiji
2017-02-01
Edible films and coatings have emerged as an alternative packaging in food applications and have received much attention due to their advantages. The incorporation of essential oils into film matrices to impart antimicrobial properties has been investigated recently and could serve as a promising preservation technology. In this study, a cassava starch-based edible coating incorporating lemongrass essential oil (1%) was applied by spraying and dipping methods to preserve papaya MJ9 during storage at room temperature. The quality of papaya MJ9 was analyzed based on its physicochemical and microbiological properties. The addition of lemongrass essential oil (1%) significantly inhibited microbial growth on papaya MJ9 by reducing the total yeast and mold count compared to the control. This study also showed that, for the parameters of weight loss, total soluble solids, vitamin C, and total titratable acid, papaya MJ9 with the cassava starch-based edible coating incorporating lemongrass essential oil (1%) had lower values than the control; however, it had a higher value than the control for the firmness parameter. These results indicate that a cassava starch-based edible coating incorporating lemongrass essential oil (1%) can be used as an alternative preservation method for papaya MJ9.
A Comparative Evaluation of Anomaly Detection Algorithms for Maritime Video Surveillance
2011-01-01
of k-means clustering and the k-NN Localized p-value Estimator (KNN-LPE). K-means is a popular distance-based clustering algorithm while KNN-LPE...implemented the sparse cluster identification rule we described in Section 3.1. 2. k-NN Localized p-value Estimator (KNN-LPE): We implemented this using...Average Density (KNN-NAD): This was implemented as described in Section 3.4. Algorithm Parameter Settings The global and local density-based anomaly
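The k-NN estimators named in this excerpt score anomalies by local density. A minimal distance-based sketch of that idea (my own simplification, not the report's exact KNN-LPE/KNN-NAD formulation; `k` and the data are illustrative):

```python
import numpy as np

def knn_anomaly_scores(points, k=3):
    """Score each point by its mean distance to its k nearest neighbours;
    large scores indicate low local density, i.e. potential anomalies."""
    pts = np.asarray(points, dtype=float)
    # Pairwise Euclidean distances (fine for small demo-sized data).
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)[:, 1:k+1]   # drop the zero self-distance
    return d_sorted.mean(axis=1)

# A tight cluster near the origin plus one far-away "vessel track" point.
tracks = [[0, 0], [0, 0.1], [0.1, 0], [0.1, 0.1], [5, 5]]
scores = knn_anomaly_scores(tracks, k=2)
```

The outlier at (5, 5) receives by far the highest score, which is the ranking behaviour these density-based detectors exploit.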
Comparison of the color of natural teeth measured by a colorimeter and Shade Vision System.
Cho, Byeong-Hoon; Lim, Yong-Kyu; Lee, Yong-Keun
2007-10-01
The objectives were to measure the difference in the color and color parameters of natural teeth measured by a tristimulus colorimeter (CM, used as a reference) and the Shade Vision System (SV), and to determine the influence of color parameters on the color difference between the values measured by the two instruments. The color of 12 maxillary and mandibular anterior teeth was measured by CM and SV for 47 volunteers (number of teeth=564). Color parameters such as the CIE L*, a* and b* values, chroma and hue angle measured by the two instruments were compared. Chroma was calculated as C*ab = (a*^2 + b*^2)^(1/2), and hue angle was calculated as h° = arctan(b*/a*). The influence of the color parameters measured by CM on the color difference (ΔE*ab) between the values measured by the two instruments was analyzed with multiple regression analysis (α=0.01). The mean ΔE*ab value between the values measured by the two instruments was 21.7 (±3.7), and the mean difference in lightness (CIE L*) and chroma was 16.2 (±3.9) and 13.2 (±3.0), respectively. The difference in hue angle was as high as 132.7 (±53.3) degrees. Except for the hue angle, all the color parameters showed significant correlations, and the coefficient of determination (r²) was in the range of 0.089-0.478. Based on multiple regression analysis, the standardized partial correlation coefficient (β) of the included predictors for the color difference was -0.710 for CIE L* and -0.300 for C*ab (p<0.01). All the color parameters showed significant but weak correlations except for hue angle. When the lightness and chroma of teeth were high, the color difference between the values measured by the two instruments was small. The clinical accuracy of the two instruments should be investigated further.
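The color parameters above follow directly from the CIELAB coordinates. A small sketch of the standard formulas (the two Lab readings are hypothetical, not measurements from the study):

```python
import math

def chroma(a, b):
    """CIELAB chroma: C*ab = sqrt(a*^2 + b*^2)."""
    return math.hypot(a, b)

def hue_angle_deg(a, b):
    """CIELAB hue angle h° = arctan(b*/a*), mapped to [0, 360)."""
    return math.degrees(math.atan2(b, a)) % 360

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference: Euclidean distance in (L*, a*, b*)."""
    return math.dist(lab1, lab2)

lab_cm = (70.0, 2.0, 18.0)   # hypothetical colorimeter reading
lab_sv = (62.0, 1.0, 15.0)   # hypothetical Shade Vision reading
de = delta_e_ab(lab_cm, lab_sv)
```

Using atan2 (rather than a bare arctan of b*/a*) keeps the hue angle in the correct quadrant, which matters for the large hue-angle discrepancies reported above.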
Baseline hematology and serum biochemistry results for Indian leopards (Panthera pardus fusca)
Shanmugam, Arun Attur; Muliya, Sanath Krishna; Deshmukh, Ajay; Suresh, Sujay; Nath, Anukul; Kalaignan, Pa; Venkataravanappa, Manjunath; Jose, Lyju
2017-01-01
Aim: The aim of the study was to establish baseline hematology and serum biochemistry values for Indian leopards (Panthera pardus fusca), and to assess possible variations in these parameters based on age and gender. Materials and Methods: Hemato-biochemical test reports from a total of 83 healthy leopards, carried out as part of routine health evaluation in Bannerghatta Biological Park and Manikdoh Leopard Rescue Center, were used to establish baseline hematology and serum biochemistry parameters for the subspecies. The hematological parameters considered for the analysis included hemoglobin (Hb), packed cell volume, total erythrocyte count (TEC), total leukocyte count (TLC), mean corpuscular volume (MCV), mean corpuscular Hb (MCH), and MCH concentration. The serum biochemistry parameters considered included total protein (TP), albumin, globulin, aspartate aminotransferase, alanine aminotransferase (ALT), blood urea nitrogen, creatinine, triglycerides, calcium, and phosphorus. Results: Even though a few differences were observed in hematologic and biochemistry values between male and female Indian leopards, the differences were not statistically significant. Effects of age, however, were evident in relation to many hematologic and biochemical parameters. Sub-adults had significantly greater values for Hb, TEC, and TLC compared to the adult and geriatric groups, whereas they had significantly lower MCV and MCH compared to the adult and geriatric groups. Among the serum biochemistry parameters, the sub-adult age group was observed to have significantly lower values for TP and ALT than adult and geriatric leopards. Conclusion: The study provides a comprehensive analysis of hematologic and biochemical parameters for Indian leopards. 
Baselines established here will permit better captive management of the subspecies, serve as a guide to assess the health and physiological status of free-ranging leopards, and may contribute valuable information for making effective management decisions during the translocation or rehabilitation process. PMID:28831229
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. 
All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) use of CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) the identifiability statistic requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
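The CSS/PCC statistics discussed above can be computed directly from a model Jacobian. Below is a simplified sketch (unit observation weights assumed; the synthetic Jacobian with two nearly collinear columns is mine, chosen to reproduce the |PCC| ≈ 1 behaviour the abstract describes):

```python
import numpy as np

def css_pcc(J, params):
    """Composite scaled sensitivities (CSS) and parameter correlation
    coefficients (PCC) from a Jacobian J (n_obs x n_par), assuming unit
    observation weights (a simplification of the full statistics)."""
    S = J * np.asarray(params)[None, :]   # sensitivities scaled by value
    css = np.sqrt((S**2).mean(axis=0))
    cov = np.linalg.inv(J.T @ J)          # parameter covariance (up to sigma^2)
    d = np.sqrt(np.diag(cov))
    pcc = cov / np.outer(d, d)
    return css, pcc

rng = np.random.default_rng(3)
c0 = rng.normal(size=30)
# Columns 0 and 1 are nearly collinear; column 2 is independent.
J = np.column_stack([c0, c0 + 0.01*rng.normal(size=30), rng.normal(size=30)])
css, pcc = css_pcc(J, [1.0, 1.0, 1.0])
```

For the two nearly collinear parameters, |pcc[0, 1]| is close to 1, mirroring the WFC1/BD1 correlation (-0.94) reported above.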
Modelling of intermittent microwave convective drying: parameter sensitivity
NASA Astrophysics Data System (ADS)
Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei
2017-06-01
The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food, and is simulated with COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis with respect to the microwave power level shows that the ambient temperature, effective gas diffusivity, and evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, relative and intrinsic permeability of the gas, and capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal sensitivity to a ±20% change in value, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
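The ±20% perturbation procedure is easy to illustrate on a toy model. The exponential thin-layer drying curve below is an illustrative stand-in for the full multiphase COMSOL model, and the parameter names are hypothetical:

```python
import math

def moisture(t, k=0.1, m_eq=0.05):
    """Toy thin-layer drying curve: M(t) = m_eq + (1 - m_eq) * exp(-k t)."""
    return m_eq + (1 - m_eq) * math.exp(-k * t)

def sensitivity(param, base, t=10.0, frac=0.20):
    """Relative output change when one parameter is varied by +/- frac."""
    lo = moisture(t, **{**base, param: base[param] * (1 - frac)})
    hi = moisture(t, **{**base, param: base[param] * (1 + frac)})
    return (hi - lo) / moisture(t, **base)

base = {"k": 0.1, "m_eq": 0.05}
sens = {p: sensitivity(p, base) for p in base}
```

Here the drying-rate constant `k` dominates (a ±20% change moves the predicted moisture by tens of percent, with a negative sign since faster drying lowers moisture), while the equilibrium moisture `m_eq` has a much smaller effect; ranking parameters this way is the essence of the analysis described above.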
Roberts, Cynthia J; Mahmoud, Ashraf M; Bons, Jeffrey P; Hossain, Arif; Elsheikh, Ahmed; Vinciguerra, Riccardo; Vinciguerra, Paolo; Ambrósio, Renato
2017-04-01
To investigate two new stiffness parameters and their relationships with the dynamic corneal response (DCR) parameters and compare normal and keratoconic eyes. Stiffness parameters are defined as Resultant Pressure at inward applanation (A1) divided by corneal displacement. Stiffness parameter A1 uses displacement between the undeformed cornea and A1 and stiffness parameter highest concavity (HC) uses displacement from A1 to maximum deflection during HC. The spatial and temporal profiles of the Corvis ST (Oculus Optikgeräte, Wetzlar, Germany) air puff were characterized using hot wire anemometry. An adjusted air pressure impinging on the cornea at A1 (adjAP1) and an algorithm to biomechanically correct intraocular pressure based on finite element modelling (bIOP) were used for Resultant Pressure calculation (adjAP1 - bIOP). Linear regression analyses between DCR parameters and stiffness parameters were performed on a retrospective dataset of 180 keratoconic eyes and 482 normal eyes. DCR parameters from a subset of 158 eyes of 158 patients in each group were matched for bIOP and compared using t tests. A P value of less than .05 was considered statistically significant. All DCR parameters evaluated showed significant differences between normal and keratoconic eyes, except peak distance. Keratoconic eyes had lower stiffness parameter values, thinner pachymetry, shorter applanation lengths, greater absolute values of applanation velocities, earlier A1 times and later second applanation times, greater HC deformation amplitudes and HC deflection amplitudes, and lower HC radius of concave curvature (greater concave curvature). Most DCR parameters showed a significant relationship with both stiffness parameters in both groups. Keratoconic eyes demonstrated less resistance to deformation than normal eyes with similar IOP. The stiffness parameters may be useful in future biomechanical studies as potential biomarkers. [J Refract Surg. 2017;33(4):266-273.]. 
Spatial parameters of walking gait and footedness.
Zverev, Y P
2006-01-01
The present study was undertaken to assess whether footedness has effects on selected spatial and angular parameters of able-bodied gait by evaluating footprints of young adults. A total of 112 males and 93 females were selected from among students and staff members of the University of Malawi using a simple random sampling method. Footedness of subjects was assessed by the Waterloo Footedness Questionnaire Revised. Gait at natural speed was recorded using the footprint method. The following spatial parameters of gait were derived from the inked footprint sequences of subjects: step and stride lengths, gait angle and base of gait. The anthropometric measurements taken were weight, height, leg and foot length, foot breadth, shoulder width, and hip and waist circumferences. The prevalence of right-, left- and mixed-footedness in the whole sample of young Malawian adults was 81%, 8.3% and 10.7%, respectively. One-way analysis of variance did not reveal a statistically significant difference between footedness categories in the mean values of anthropometric measurements (p > 0.05 for all variables). Gender differences in step and stride length values were not statistically significant. Correction of these variables for stature did not change the trend. Males had significantly broader steps than females. Normalized values of base of gait showed a similar gender difference. The group means of step length and normalized step length of the right and left feet were similar for both males and females. There was a significant side difference in the gait angle in both gender groups of volunteers, with higher mean values on the left side compared to the right (t = 2.64, p < 0.05 for males, and t = 2.78, p < 0.05 for females). 
One-way analysis of variance did not demonstrate a significant difference between footedness categories in the mean values of step length, gait angle, bilateral differences in step length and gait angle, stride length, gait base and normalized gait variables of male and female volunteers (p > 0.05 for all variables). The present study demonstrated that footedness does not affect the spatial and angular parameters of walking gait.
A RSSI-based parameter tracking strategy for constrained position localization
NASA Astrophysics Data System (ADS)
Du, Jinze; Diouris, Jean-François; Wang, Yide
2017-12-01
In this paper, a received signal strength indicator (RSSI)-based parameter tracking strategy for constrained position localization is proposed. To estimate the channel model parameters, the least mean squares (LMS) method is combined with the trilateration method. In the context of applications where the positions are constrained on a grid, a novel tracking strategy is proposed to determine the real position and obtain the actual parameters in the monitored region. Based on practical data acquired from a real localization system, an experimental channel model is constructed to provide RSSI values and verify the proposed tracking strategy. Quantitative criteria are given to guarantee the efficiency of the proposed tracking strategy by providing a trade-off between the grid resolution and parameter variation. The simulation results show a good behavior of the proposed tracking strategy in the presence of space-time variation of the propagation channel. Compared with the existing RSSI-based algorithms, the proposed tracking strategy exhibits better localization accuracy but consumes more calculation time.
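The LMS channel-parameter estimation step can be sketched on the standard log-distance path-loss model (my own toy setup; the true parameter values, noise level, and step size are illustrative, not the paper's experimental channel):

```python
import numpy as np

def lms_path_loss(distances, rssi, mu=0.005, epochs=200):
    """LMS estimation of log-distance path-loss parameters A (RSSI at 1 m)
    and n (path-loss exponent), for rssi = A - 10*n*log10(d) + noise."""
    w = np.zeros(2)                          # [A, n]
    for _ in range(epochs):
        for d, r in zip(distances, rssi):
            x = np.array([1.0, -10.0 * np.log10(d)])
            err = r - w @ x                  # instantaneous prediction error
            w += mu * err * x                # stochastic gradient update
    return w

rng = np.random.default_rng(1)
d = rng.uniform(1.0, 20.0, 100)
rssi = -40.0 - 10.0 * 2.0 * np.log10(d) + rng.normal(0, 0.5, 100)
A_hat, n_hat = lms_path_loss(d, rssi)
```

Once A and n are estimated, each RSSI reading can be inverted to a distance estimate and fed to trilateration, which is the combination the abstract describes.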
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klymenko, M. V.; Remacle, F., E-mail: fremacle@ulg.ac.be
2014-10-28
A methodology is proposed for designing a low-energy consuming ternary-valued full adder based on a quantum dot (QD) electrostatically coupled with a single electron transistor operating as a charge sensor. The methodology is based on design optimization: the values of the physical parameters of the system required for implementing the logic operations are optimized using a multiobjective genetic algorithm. The search space is determined by elements of the capacitance matrix describing the electrostatic couplings in the entire device. The objective functions are defined as the maximal absolute error over actual device logic outputs relative to the ideal truth tables for the sum and the carry-out in base 3. The logic units are implemented on the same device: a single dual-gate quantum dot and a charge sensor. Their physical parameters are optimized to compute either the sum or the carry-out outputs and are compatible with current experimental capabilities. The outputs are encoded in the value of the electric current passing through the charge sensor, while the logic inputs are supplied by the voltage levels on the two gate electrodes attached to the QD. The complex ternary logic operations are directly implemented on an extremely simple device, characterized by small sizes and low-energy consumption compared to devices based on switching single-electron transistors.
Tixier, Florent; Hatt, Mathieu; Le Rest, Catherine Cheze; Le Pogam, Adrien; Corcos, Laurent; Visvikis, Dimitris
2012-05-01
(18)F-FDG PET measurement of standardized uptake value (SUV) is increasingly used for monitoring therapy response and predicting outcome. Alternative parameters computed through textural analysis were recently proposed to quantify the heterogeneity of tracer uptake by tumors as a significant predictor of response. The primary objective of this study was to evaluate the reproducibility of these heterogeneity measurements. Double baseline (18)F-FDG PET scans were acquired within 4 d of each other for 16 patients before any treatment was considered. A Bland-Altman analysis was performed on 8 parameters based on histogram measurements and 17 parameters based on textural heterogeneity features after discretization with values between 8 and 128. The reproducibility of maximum and mean SUV was similar to that in previously reported studies, with a mean percentage difference of 4.7% ± 19.5% and 5.5% ± 21.2%, respectively. By comparison, better reproducibility was measured for some textural features describing local heterogeneity of tracer uptake, such as entropy and homogeneity, with a mean percentage difference of -2% ± 5.4% and 1.8% ± 11.5%, respectively. Several regional heterogeneity parameters such as variability in the intensity and size of regions of homogeneous activity distribution had reproducibility similar to that of SUV measurements, with 95% confidence intervals of -22.5% to 3.1% and -1.1% to 23.5%, respectively. These parameters were largely insensitive to the discretization range. Several parameters derived from textural analysis describing heterogeneity of tracer uptake by tumors on local and regional scales had reproducibility similar to or better than that of simple SUV measurements. These reproducibility results suggest that these (18)F-FDG PET-derived parameters, which have already been shown to have predictive and prognostic value in certain cancer models, may be used to monitor therapy response and predict patient outcome.
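Entropy and homogeneity, the local textural features whose reproducibility is highlighted above, are conventionally computed from a grey-level co-occurrence matrix (GLCM). A minimal sketch on pre-quantized uptake values (horizontal neighbour pairs only; real analyses use more offsets and the discretization ranges mentioned in the abstract):

```python
import numpy as np

def glcm_features(q, levels):
    """Entropy and homogeneity of the normalized grey-level co-occurrence
    matrix built from horizontal neighbour pairs (made symmetric).
    q: 2-D array of integer grey levels in [0, levels)."""
    glcm = np.zeros((levels, levels))
    for row in np.asarray(q):
        for a, b in zip(row[:-1], row[1:]):
            glcm[a, b] += 1
            glcm[b, a] += 1
    p = glcm / glcm.sum()
    nz = p[p > 0]
    entropy = float(-(nz * np.log2(nz)).sum())
    i, j = np.indices(p.shape)
    homogeneity = float((p / (1 + np.abs(i - j))).sum())
    return entropy, homogeneity

e_uniform, h_uniform = glcm_features([[1, 1, 1], [1, 1, 1]], levels=4)
e_mixed, h_mixed = glcm_features([[0, 3, 0, 3]], levels=4)
```

A perfectly homogeneous region gives entropy 0 and homogeneity 1, while alternating grey levels raise entropy and lower homogeneity, which is the heterogeneity signal these PET features quantify.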
NASA Astrophysics Data System (ADS)
Montzka, S. A.; Butler, J. H.; Dutton, G.; Thompson, T. M.; Hall, B.; Mondeel, D. J.; Elkins, J. W.
2005-05-01
El Niño/Southern Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and the assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward-propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed.
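The EKF state-and-parameter estimation technique can be illustrated on a scalar toy system by augmenting the state with the unknown parameter. This is a generic sketch of the method, not the coupled ENSO model (the dynamics, noise levels, and initial guesses are all made up):

```python
import numpy as np

def ekf_step(z, x, a, P, q=1e-4, r=0.01):
    """One EKF step on the augmented state [x, a] for the toy model
    x_{k+1} = a*x_k + u (known forcing u = 1), observed as z_k = x_k + noise."""
    # Predict: the parameter a follows a random walk (constant + process noise).
    x_pred = a * x + 1.0
    F = np.array([[a, x], [0.0, 1.0]])   # Jacobian w.r.t. [x, a]
    P = F @ P @ F.T + q * np.eye(2)
    # Update with observation matrix H = [1, 0].
    H = np.array([1.0, 0.0])
    S = H @ P @ H + r
    K = P @ H / S
    x_new, a_new = np.array([x_pred, a]) + K * (z - x_pred)
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x_new, a_new, P

rng = np.random.default_rng(2)
true_a = 0.95
x_true, x_est, a_est, P = 0.0, 0.0, 0.8, np.diag([1.0, 0.1])
for _ in range(300):
    x_true = true_a * x_true + 1.0
    z = x_true + rng.normal(0.0, 0.1)
    x_est, a_est, P = ekf_step(z, x_est, a_est, P)
```

Starting from a wrong parameter guess (0.8), the filter pulls the estimate toward the true value (0.95) as innovations accumulate, the same mechanism by which the abstract recovers μ and δs from SST data.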
NASA Astrophysics Data System (ADS)
Utama, D. N.; Ani, N.; Iqbal, M. M.
2018-03-01
Optimization is a process for finding the parameter or parameters that deliver an optimal value of an objective function. Seeking an optimal generic model for optimization is a computer science problem that has been pursued by numerous researchers; a generic model is one that can be applied to any variety of optimization problem. Using an object-oriented method, the generic model for optimization was constructed. Two optimization methods, simulated annealing and hill climbing, were implemented within the model and compared to determine which performs better. The results show that both methods produced the same objective-function value, and that the hill-climbing-based model had the shortest running time.
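A minimal sketch of the two methods being compared, not the authors' object-oriented generic model: both maximize the same illustrative one-dimensional objective, hill climbing by accepting only improvements and simulated annealing by occasionally accepting worse moves under a geometric cooling schedule.

```python
import math
import random

def objective(x):
    return -(x - 3.0) ** 2           # maximum value 0, attained at x = 3

def hill_climb(x=0.0, step=0.1, iters=2000, rng=random.Random(1)):
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        if objective(cand) > objective(x):   # accept improvements only
            x = cand
    return x

def simulated_annealing(x=0.0, step=0.5, t0=1.0, cool=0.995,
                        iters=2000, rng=random.Random(1)):
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        delta = objective(cand) - objective(x)
        # accept improvements always, worse moves with Boltzmann probability
        if delta > 0 or rng.random() < math.exp(delta / t):
            x = cand
        t *= cool                    # geometric cooling
    return x

x_hc = hill_climb()
x_sa = simulated_annealing()
```

On a smooth unimodal objective like this one, both converge to the same optimum; the comparison in the abstract concerns running time and robustness rather than the final value.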
Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O
2013-03-01
The Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor-liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij, partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than those of the constant-kij model in most cases; for 10 systems, the predictions were considerably improved by the new correlation.
Energy spectra and E2 transition rates of 124-130Ba
NASA Astrophysics Data System (ADS)
Sabri, H.; Seidi, M.
2016-10-01
In this paper, we have studied the energy spectra and B(E2) values of the 124-130Ba isotopes in the shape phase transition region between spherical and gamma-unstable deformed shapes. We have used a transitional interacting boson model (IBM) Hamiltonian based on the affine SU(1,1) Lie algebra, in both the IBM-1 and IBM-2 versions, together with catastrophe theory in combination with a coherent-state formalism to generate energy surfaces and determine the exact values of the control parameters. Our results for the control parameters suggest a combination of the U(5) and SO(6) dynamical symmetries in this isotopic chain. The theoretical predictions also reproduce their experimental counterparts rather well as the control parameter approaches the SO(6) limit.
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction involves reducing the error between measured and modelled data. The ABC algorithm optimises the parameter values based on the intelligent foraging behaviour of honey bee swarms; some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method inspired by bird flocking. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for MOSFET parameter extraction, although the implementation of the ABC algorithm is simpler than that of the PSO algorithm.
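A hedged sketch of the extraction loop: a generic PSO (not the study's exact variant, and a simplified square-law drain-current model rather than the surface-potential model) fits two illustrative parameters (k, vt) by minimising the squared error between "measured" and modelled data.

```python
import numpy as np

def model(v, k, vt):
    # simplified saturation-region square law; purely illustrative
    return k * np.clip(v - vt, 0.0, None) ** 2

def error(params, v, i_meas):
    k, vt = params
    return np.sum((model(v, k, vt) - i_meas) ** 2)

def pso(v, i_meas, n=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform([0.0, 0.0], [2.0, 2.0], size=(n, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pcost = np.array([error(p, v, i_meas) for p in pos])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 1))
        # inertia + cognitive + social terms with standard coefficients
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([error(p, v, i_meas) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest

v = np.linspace(0.0, 2.0, 50)
i_meas = model(v, 0.8, 0.6)          # synthetic "measured" characteristic
k_fit, vt_fit = pso(v, i_meas)
```

With noiseless synthetic data the swarm recovers the generating parameters closely; on real measurements the same loop is run against the full device model.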
The relative pose estimation of aircraft based on contour model
NASA Astrophysics Data System (ADS)
Fu, Tai; Sun, Xiangyi
2017-02-01
This paper proposes a relative pose estimation approach based on an object contour model. The first step is to obtain two-dimensional (2D) projections of the three-dimensional (3D) model-based target, which are divided into 40 forms by clustering and LDA analysis. The target contour is then extracted from each image and its Pseudo-Zernike Moments (PZM) are computed, so that a model library is constructed offline. Next, the projection contour that most resembles the target silhouette in the current image is selected from the model library by reference to the PZM; similarity transformation parameters are then generated by applying the shape context to match the silhouette sampling locations, from which the identification parameters of the target can be derived. The identification parameters are converted to relative pose parameters, which serve as the initial values for an iterative refinement algorithm, since they lie in the neighborhood of the actual ones. Finally, Distance Image Iterative Least Squares (DI-ILS) is employed to acquire the ultimate relative pose parameters.
Numerical optimization methods for controlled systems with parameters
NASA Astrophysics Data System (ADS)
Tyatyushkin, A. I.
2017-10-01
First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In problems with unconstrained parameters, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method, based on a second-order functional increment formula. Next, a general optimal control problem is considered, with state constraints and with parameters appearing on the right-hand sides of the controlled system and in the initial conditions. This complicated problem is reduced to a mathematical programming one, followed by a search for optimal parameter values and control functions using a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
Inverse gas chromatographic determination of solubility parameters of excipients.
Adamska, Katarzyna; Voelkel, Adam
2005-11-04
The principal aim of this work was the application of inverse gas chromatography (IGC) to the estimation of solubility parameters for pharmaceutical excipients. The retention data of a number of test solutes were used to calculate the Flory-Huggins interaction parameter (chi1,2infinity) and then the solubility parameter (delta2), the corrected solubility parameter (deltaT) and its components (deltad, deltap, deltah) using different procedures. The influence of different values of the test solutes' solubility parameter (delta1) on the calculated values was estimated. For all excipients, the solubility parameter values obtained from the slope by the procedure of Guillet and co-workers are higher than those obtained from the components according to the Voelkel and Janas procedure. It was found that the solubility parameter values of the test solutes influence the calculated solubility parameters of the excipients, but not significantly.
Design of a Sixteen Bit Pipelined Adder Using CMOS Bulk P-Well Technology.
1984-12-01
node's current value. These rules are based on the assumption that the event that was last calculated reflects the latest configuration of the network. ... Lines beginning with ";" are treated as a comment. The parameter names and their default values are: ;configuration file for 'standard' MPC ...
Approximate, computationally efficient online learning in Bayesian spiking neurons.
Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André
2014-03-01
Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.
Modulation indices for volumetric modulated arc therapy.
Park, Jong Min; Park, So-Yeon; Kim, Hyoungnyoun; Kim, Jin Ho; Carlson, Joel; Ye, Sung-Joon
2014-12-07
The aim of this study is to present a modulation index (MI) for volumetric modulated arc therapy (VMAT) based on a comprehensive speed and acceleration analysis of modulating parameters such as multi-leaf collimator (MLC) movements, gantry rotation and dose rate. The performance of the presented MI (MIt) was evaluated through correlation analyses against pre-treatment quality assurance (QA) results, differences in modulating parameters between VMAT plans and dynamic log files, and differences in dose-volumetric parameters between VMAT plans and plans reconstructed from dynamic log files. For comparison, the same correlation analyses were performed for the previously suggested modulation complexity score (MCSv), leaf travel modulation complexity score (LTMCS) and the MI of Li and Xing (MILi&Xing). Two-tailed p values were obtained for each correlation. The Spearman's rho (rs) values of MIt, MCSv, LTMCS and MILi&Xing against the local gamma passing rate with the 2%/2 mm criterion were -0.658 (p < 0.001), 0.186 (p = 0.251), 0.312 (p = 0.05) and -0.455 (p = 0.003), respectively. The values of rs against the differences in modulating parameters (MLC positions) were 0.917, -0.635, -0.857 and 0.795, respectively (p < 0.001). For dose-volumetric parameters, MIt showed higher statistically significant correlations than the conventional MIs. MIt thus showed good performance for evaluating the modulation degree of VMAT plans.
NASA Astrophysics Data System (ADS)
Arnaud, Patrick; Cantet, Philippe; Odry, Jean
2017-11-01
Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist, ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case for the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model, and for which we attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator's uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow series increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence between the uncertainties of the rainfall model and of the hydrological calibration.
Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with the use of a statistical law with two parameters (here generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
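The bootstrap step used for the single hydrological parameter (whose theoretical distribution is unknown) can be sketched generically: recalibrate a one-parameter model on resampled data and read a confidence interval off the percentiles of the recalibrated values. The linear "rainfall-runoff" relation below is purely illustrative, not the SHYREG model.

```python
import numpy as np

def calibrate(x, y):
    # least-squares estimate of theta in the toy model y = theta * x
    return np.sum(x * y) / np.sum(x * x)

def bootstrap_ci(x, y, n_boot=2000, alpha=0.1, seed=0):
    """Percentile bootstrap CI for the calibrated parameter."""
    rng = np.random.default_rng(seed)
    n = len(x)
    thetas = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)      # resample pairs with replacement
        thetas[b] = calibrate(x[idx], y[idx])
    return np.percentile(thetas, [100 * alpha / 2, 100 * (1 - alpha / 2)])

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 10.0, 200)          # e.g. rainfall forcing
y = 0.7 * x + 0.2 * rng.standard_normal(200)   # e.g. observed flows
lo, hi = bootstrap_ci(x, y)
```

The spread of the bootstrap distribution stands in for the unknown theoretical distribution of the calibrated parameter, which is exactly why the bootstrap is invoked in the abstract.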
Mobasheri, Nasrin; Karimi, Mehrdad; Hamedi, Javad
2018-06-05
New methods to determine antimicrobial susceptibility of bacterial pathogens especially the minimum inhibitory concentration (MIC) of antibiotics have great importance in pharmaceutical industry and treatment procedures. In the present study, the MIC of several antibiotics was determined against some pathogenic bacteria using macrodilution test. In order to accelerate and increase the efficiency of culture-based method to determine antimicrobial susceptibility, the possible relationship between the changes in some physico-chemical parameters including conductivity, electrical potential difference (EPD), pH and total number of test strains was investigated during the logarithmic phase of bacterial growth in presence of antibiotics. The correlation between changes in these physico-chemical parameters and growth of bacteria was statistically evaluated using linear and non-linear regression models. Finally, the calculated MIC values in new proposed method were compared with the MIC derived from macrodilution test. The results represent significant association between the changes in EPD and pH values and growth of the tested bacteria during the exponential phase of bacterial growth. It has been assumed that the proliferation of bacteria can cause the significant changes in EPD values. The MIC values in both conventional and new method were consistent to each other. In conclusion, cost and time effective antimicrobial susceptibility test can be developed based on monitoring the changes in EPD values. The new proposed strategy also can be used in high throughput screening of biocompounds for their antimicrobial activity in a relatively shorter time (6-8 h) in comparison with the conventional methods.
NASA Astrophysics Data System (ADS)
Peng, Juan; Zhang, Li; Zhang, Kecheng; Ma, Junxian
2018-07-01
Based on the Rytov approximation theory, the transmission model of an orbital angular momentum (OAM)-carrying partially coherent Bessel-Gaussian (BG) beams propagating in weak anisotropic turbulence is established. The corresponding analytical expression of channel capacity is presented. Influences of anisotropic turbulence parameters and beam parameters on channel capacity of OAM-based free-space optical (FSO) communication systems are discussed in detail. The results indicate channel capacity increases with increasing of almost all of the parameters except for transmission distance. Raising the values of some parameters such as wavelength, propagation altitude and non-Kolmogorov power spectrum index, would markedly improve the channel capacity. In addition, we evaluate the channel capacity of Laguerre-Gaussian (LG) beams and partially coherent BG beams in anisotropic turbulence. It indicates that partially coherent BG beams are better light sources candidates for mitigating the influences of anisotropic turbulence on channel capacity of OAM-based FSO communication systems.
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun
2016-01-01
Simulated annealing (SA) is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
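A compact sketch of the list-based cooling idea on a small random TSP: the Metropolis criterion uses the maximum temperature in the list, and the list is adapted from the accepted uphill moves. This is a simplification of the LBSA description above (the exact list-update rule may differ from the paper's); the neighbourhood is a 2-opt segment reversal.

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i - 1]], pts[tour[i]])
               for i in range(len(tour)))

def lbsa_tsp(pts, list_len=50, outer=300, inner=100, seed=3):
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)
    cost = tour_length(tour, pts)
    best, best_cost = tour[:], cost
    temps = [1.0] * list_len                 # initial temperature list
    for _ in range(outer):
        t_max = max(temps)                   # Metropolis uses the list max
        uphill_temps = []
        for _ in range(inner):
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt
            delta = tour_length(cand, pts) - cost
            if delta <= 0:
                tour, cost = cand, cost + delta
            else:
                r = rng.random()
                if r < math.exp(-delta / t_max):
                    tour, cost = cand, cost + delta
                    # temperature that would have just accepted this move
                    uphill_temps.append(-delta / math.log(r))
            if cost < best_cost:
                best, best_cost = tour[:], cost
        if uphill_temps:                     # adapt the list: replace the max
            temps.remove(t_max)
            temps.append(sum(uphill_temps) / len(uphill_temps))
    return best, best_cost

rng = random.Random(7)
pts = [(rng.random(), rng.random()) for _ in range(20)]
tour, cost = lbsa_tsp(pts)
```

Because the replacement temperatures are derived from the accepted uphill moves themselves, the schedule self-tunes to the problem's cost landscape, which is the source of the robustness claimed in the abstract.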
Environment Modeling Using Runtime Values for JPF-Android
NASA Technical Reports Server (NTRS)
van der Merwe, Heila; Tkachuk, Oksana; Nel, Seal; van der Merwe, Brink; Visser, Willem
2015-01-01
Software applications are developed to be executed in a specific environment. This environment includes external native libraries that add functionality to the application and drivers that fire the application's execution. For testing and verification, the environment of an application is abstracted using simplified models or stubs. Empty stubs, returning default values, are simple to generate automatically, but they do not perform well when the application expects specific return values. Symbolic execution is used to find input parameters for drivers and return values for library stubs, but it struggles to detect the values of complex objects. In this work-in-progress paper, we explore an approach to generating drivers and stubs based on values collected during runtime instead of default values. Entry points and methods that need to be modeled are instrumented to log their parameters and return values. The instrumented applications are then executed using a driver and instrumented libraries. The values collected during runtime are used to generate driver and stub values on-the-fly that improve coverage during verification by enabling the execution of code that previously crashed or was missed. We are implementing this approach to improve the environment model of JPF-Android, our model checking and analysis tool for Android applications.
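The collect-then-replay idea can be illustrated in miniature: instrument a method so each call logs its arguments and return value, then build a stub that replays a logged value instead of a default. (JPF-Android instruments Java bytecode; this Python decorator only illustrates the concept, and all names here are invented for the example.)

```python
import functools

call_log = []   # (name, args, kwargs, result) tuples collected at runtime

def record(fn):
    """Instrument fn to log its parameters and return value on each call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        call_log.append((fn.__name__, args, kwargs, result))
        return result
    return wrapper

def make_stub(name, default=None):
    """Build a stub that replays the last logged return value for 'name',
    falling back to a default when nothing was observed."""
    values = [r for (n, _, _, r) in call_log if n == name]
    def stub(*args, **kwargs):
        return values[-1] if values else default
    return stub

@record
def get_device_id():
    return "emulator-5554"   # stands in for a call into a native library

get_device_id()              # the "recording" run
stub = make_stub("get_device_id", default="")
```

During verification, `stub` would replace the real library call, returning a realistic recorded value rather than an empty default, which is what lets previously crashing or missed code execute.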
Sampling ARG of multiple populations under complex configurations of subdivision and admixture.
Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi
2016-04-01
Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions in population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs that improve dramatically on the time and space requirements of the classical single-population algorithm. Using the underlying random-graphs model, we also derive closed forms of the expected values of the ARG characteristics, i.e., height of the graph, number of recombinations, number of mutations and population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for complex scenario simulations, not through trial and error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge this is the first time closed-form expressions have been computed for the ARG properties. We show through simulations that the expected values closely match the empirical values. Finally, we demonstrate through extensive experiments that SimRA produces the ARG in compact form without compromising accuracy. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available at: https://github.com/ComputationalGenomics/SimRA Contact: parida@us.ibm.com Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun
2018-03-01
Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle, which are affected by a high degree of uncertainty, especially at a regional scale, resulting in unacceptable uncertainties in Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations, which are integrated into a single parameter. This parameter links the landslide probability to the uncertainties of the soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfalls on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides in 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. Such testing results indicate that the new model can be operated in a highly efficient way and produces more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can potentially be packaged into a forecasting system for shallow landslides, providing technological support for the mitigation of these disasters at regional scale.
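The Monte Carlo step described above can be sketched for one "pixel": draw cohesion and friction angle from defined intervals, evaluate a safety factor for each draw, and report the fraction of draws with Fs < 1 as the landslide probability. The infinite-slope Fs formula and all parameter intervals below are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def landslide_probability(slope_deg, n=100_000, seed=0,
                          gamma=18.0,    # soil unit weight, kN/m^3 (assumed)
                          depth=2.0):    # failure-plane depth, m (assumed)
    """P(Fs < 1) for one pixel via Monte Carlo over uncertain soil params."""
    rng = np.random.default_rng(seed)
    c = rng.uniform(2.0, 10.0, n)                  # cohesion, kPa
    phi = np.radians(rng.uniform(20.0, 35.0, n))   # internal friction angle
    beta = np.radians(slope_deg)
    # dry infinite-slope stability: Fs = c/(gamma*z*sin*cos) + tan(phi)/tan(beta)
    fs = (c / (gamma * depth * np.sin(beta) * np.cos(beta))
          + np.tan(phi) / np.tan(beta))
    return np.mean(fs < 1.0)

p_gentle = landslide_probability(20.0)   # gentle slope: always stable here
p_steep = landslide_probability(45.0)    # steep slope: unstable for many draws
```

Repeating this per pixel under a rainfall scenario yields the probabilistic hazard map, replacing the single deterministic Fs value with a probability that reflects the soil-parameter uncertainty.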
Regression without truth with Markov chain Monte-Carlo
NASA Astrophysics Data System (ADS)
Madan, Hennadii; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga
2017-03-01
Regression without truth (RWT) is a statistical technique for estimating the error-model parameters of each method in a group of methods used to measure a certain quantity. A very attractive aspect of RWT is that it does not rely on a reference method or "gold standard" data, which are otherwise difficult to obtain. RWT was used for a reference-free performance comparison of several methods for measuring left ventricular ejection fraction (EF), i.e. the percentage of blood leaving the ventricle each time the heart contracts, and has since been applied to various other quantitative imaging biomarkers (QIBs). Herein, we show how Markov chain Monte-Carlo (MCMC), a computational technique for drawing samples from a statistical distribution whose probability density function is known only up to a normalizing coefficient, can be used to augment RWT to gain a number of important benefits compared to the original approach based on iterative optimization. For instance, the proposed MCMC-based RWT enables the estimation of the joint posterior distribution of the error-model parameters, straightforward quantification of the uncertainty of the estimates, and estimation of the true value of the measurand with corresponding credible intervals (CIs); it does not require a finite support for the prior distribution of the measurand, and it generally has much improved robustness against convergence to non-global maxima. The proposed approach is validated using synthetic data that emulate EF data for 45 patients measured with 8 different methods. The obtained results show that the 90% CIs of the corresponding parameter estimates contain the true values of all error-model parameters and the measurand. A potential real-world application is to take measurements of a certain QIB with several different methods and then use the proposed framework to compute estimates of the true values and their uncertainty, vital information for diagnosis based on QIBs.
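The MCMC machinery invoked above can be illustrated with a minimal Metropolis sampler: the target density is needed only up to a normalising constant, and credible intervals fall out of the sample percentiles. The Gaussian target below is illustrative, not the RWT error model.

```python
import numpy as np

def log_target(theta):
    # un-normalised log density, here proportional to N(2.0, 0.5^2)
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def metropolis(n=50_000, step=1.0, seed=0):
    """Random-walk Metropolis sampling of the target distribution."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    samples = np.empty(n)
    for i in range(n):
        prop = theta + step * rng.standard_normal()
        # accept with probability min(1, target(prop)/target(theta))
        if np.log(rng.random()) < log_target(prop) - log_target(theta):
            theta = prop
        samples[i] = theta           # on rejection, keep the current value
    return samples[n // 10:]         # drop the burn-in portion

samples = metropolis()
ci = np.percentile(samples, [5, 95])   # 90% credible interval
```

In the RWT setting the scalar theta is replaced by the full vector of error-model parameters and the measurand, and the same percentile construction yields the joint-posterior credible intervals reported in the abstract.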
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence the metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution of this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in [0, 1] interval and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. Potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
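The transformation can be sketched directly: map each parameter through its cumulative distribution (here the plain empirical CDF rather than the paper's kernel estimate), so every transformed parameter lives on [0, 1] and a plain Euclidean distance becomes meaningful across dimensions with incompatible units.

```python
import numpy as np

def ecdf_transform(values):
    """Equivalent-dimension transform: ECDF evaluated at each value."""
    ranks = np.argsort(np.argsort(values))     # 0-based ranks
    return (ranks + 1) / len(values)           # values in (0, 1]

rng = np.random.default_rng(0)
# two parameters with wildly different scales and units:
magnitudes = rng.exponential(1.0, 1000) + 2.0  # magnitude-like
times = rng.uniform(0.0, 3.0e7, 1000)          # occurrence time, seconds

# each event becomes a point in the 2-D equivalent-dimension space
ed = np.column_stack([ecdf_transform(magnitudes), ecdf_transform(times)])
# the distance between events i and j is now simply Euclidean:
d01 = np.linalg.norm(ed[0] - ed[1])
```

Note the probabilistic equivalence property: an interval of transformed values [a, b] always contains a fraction b - a of the events, regardless of which parameter it came from, which is exactly what makes the dimensions comparable.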
Physically-based modelling of high magnitude torrent events with uncertainty quantification
NASA Astrophysics Data System (ADS)
Wing-Yuen Chow, Candace; Ramirez, Jorge; Zimmermann, Markus; Keiler, Margreth
2017-04-01
High magnitude torrent events are associated with the rapid propagation of vast quantities of water and available sediment downslope, where human settlements may be established. Assessing the vulnerability of built structures to these events is a part of consequence analysis, where hazard intensity is related to the degree of loss sustained. The specific contribution of the presented work is a procedure to simulate these damaging events by applying physically-based modelling and to include uncertainty information about the simulated results. This is a first step in the development of vulnerability curves based on several intensity parameters (i.e. maximum velocity, sediment deposition depth and impact pressure). The investigation process begins with the collection, organization and interpretation of detailed post-event documentation and photograph-based observation data of affected structures in three sites that exemplify the impact of highly destructive mudflows and flood occurrences on settlements in Switzerland. Hazard intensity proxies are then simulated with the physically-based FLO-2D model (O'Brien et al., 1993). Prior to modelling, global sensitivity analysis is conducted to support a better understanding of model behaviour, parameterization and the quantification of uncertainties (Song et al., 2015). The inclusion of information describing the degree of confidence in the simulated results supports the credibility of vulnerability curves developed with the modelled data. First, key parameters are identified and selected based on a literature review. Truncated a priori ranges of parameter values are then defined by expert solicitation. Local sensitivity analysis is performed based on manual calibration to provide an understanding of the parameters relevant to the case studies of interest. 
Finally, automated parameter estimation is performed to comprehensively search for optimal parameter combinations and associated values, which are evaluated using the observed data collected in the first stage of the investigation. O'Brien, J.S., Julien, P.Y., Fullerton, W. T., 1993. Two-dimensional water flood and mudflow simulation. Journal of Hydraulic Engineering 119(2): 244-261. Song, X., Zhang, J., Zhan, C., Xuan, Y., Ye, M., Xu C., 2015. Global sensitivity analysis in hydrological modeling: Review of concepts, methods, theoretical frameworks, Journal of Hydrology 523: 739-757.
NASA Astrophysics Data System (ADS)
Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest
2017-12-01
The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. The inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be accessed. This is essential for understanding to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which show bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed, and as the acquisition ranges decrease, the correlations increase and become non-linear. It is further investigated how waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the value of the time constant, τ, must be in the acquisition range for the parameters to be well resolved, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not have an influence on the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties for poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from time-domain field measurements.
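For reference, the Cole-Cole model behind this inversion can be sketched as a forward function: the complex resistivity spectrum in terms of the DC resistivity rho0, chargeability m, time constant tau and frequency exponent c (the parameter whose small values degrade resolution, per the abstract). The frequency-domain Pelton form is used here for brevity; TDIP works with its time-domain response.

```python
import numpy as np

def cole_cole(freq, rho0=100.0, m=0.1, tau=0.01, c=0.5):
    """Complex resistivity (Pelton Cole-Cole form) at frequency freq [Hz]."""
    iwt = (2j * np.pi * freq * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

freqs = np.logspace(-2, 4, 7)
rho = cole_cole(freqs)
# limits: rho -> rho0 at low frequency, rho -> rho0 * (1 - m) at high frequency
```

An MCMC inversion samples (rho0, m, tau, c) against measured decay data through this forward model; the near-flat spectra produced by small c illustrate why that parameter limits how well the others can be resolved.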
Vandenhove, H; Gil-García, C; Rigol, A; Vidal, M
2009-09-01
Predicting the transfer of radionuclides in the environment for normal release, accidental, disposal or remediation scenarios in order to assess exposure requires the availability of a large number of generic parameter values. One of the key parameters in environmental assessment is the solid-liquid distribution coefficient, K(d), which is used to predict radionuclide-soil interaction and subsequent radionuclide transport in the soil column. This article presents a review of K(d) values for uranium, radium, lead, polonium and thorium based on an extensive literature survey, including recent publications. The K(d) estimates are presented per soil group defined by texture and organic matter content (Sand, Loam, Clay and Organic), although the texture class seemed not to significantly affect K(d). Where relevant, other K(d) classification systems are proposed and correlations with soil parameters are highlighted. The K(d) values obtained in this compilation are compared with earlier review data.
Nakajima, Kenichi; Matsumoto, Naoya; Kasai, Tokuo; Matsuo, Shinro; Kiso, Keisuke; Okuda, Koichi
2016-04-01
As a 2-year project of the Japanese Society of Nuclear Medicine working group activity, normal myocardial imaging databases were accumulated and summarized. Stress-rest gated and non-gated image sets were accumulated for myocardial perfusion imaging and could be used for perfusion defect scoring and normal left ventricular (LV) function analysis. For single-photon emission computed tomography (SPECT) with a multi-focal collimator design, databases of supine and prone positions and computed tomography (CT)-based attenuation correction were created. The CT-based correction provided similar perfusion patterns between genders. In phase analysis of gated myocardial perfusion SPECT, a new approach for analyzing dyssynchrony, normal ranges of parameters for phase bandwidth, standard deviation and entropy were determined in four software programs. Although the results were not interchangeable, dependence on gender, ejection fraction and volumes was a common characteristic of these parameters. Standardization of (123)I-MIBG sympathetic imaging was performed for the heart-to-mediastinum ratio (HMR) using a calibration phantom method. The HMR from any collimator type could be converted to the value for medium-energy comparable collimators. Appropriate quantification based on common normal databases and standard technology could play a pivotal role in clinical practice and research.
2016-01-27
The bias of the estimator U, bias(U), is the difference between the estimator's expected value and the true value of the parameter being estimated, i.e. bias(U) = E(U − θ) = E(U) − θ (9). Based on the above definition, an unbiased estimator is one whose expected value is equal to the true value being estimated. [...] equal to 0.94 (p-value < 0.05) if we consider the pure ER network model as our baseline, and 0.31 (p-value < 0.05) if we control for the home [...]
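The bias definition in Eq. (9) can be illustrated numerically. The sketch below compares the classic biased (divide by n) and unbiased (divide by n−1) variance estimators; the population values are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 4.0                         # true population variance (sigma = 2)
n = 5
samples = rng.normal(0.0, 2.0, size=(200000, n))  # many samples of size 5

# Two estimators of the variance: ddof=0 (biased) and ddof=1 (unbiased)
U_biased = samples.var(axis=1, ddof=0)
U_unbiased = samples.var(axis=1, ddof=1)

# bias(U) = E(U) - theta, estimated by averaging over the replications
print(U_biased.mean() - theta)      # close to -theta/n = -0.8
print(U_unbiased.mean() - theta)    # close to 0
```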
Gerdes, Lars; Iwobi, Azuka; Busch, Ulrich; Pecoraro, Sven
2016-01-01
Digital PCR in droplets (ddPCR) is an emerging method for more and more applications in DNA (and RNA) analysis. Special requirements when establishing ddPCR for analysis of genetically modified organisms (GMO) in a laboratory include the choice between validated official qPCR methods and the optimization of these assays for a ddPCR format. Differentiation between droplets with positive reaction and negative droplets, that is setting of an appropriate threshold, can be crucial for a correct measurement. This holds true in particular when independent transgene and plant-specific reference gene copy numbers have to be combined to determine the content of GM material in a sample. Droplets which show fluorescent units ranging between those of explicit positive and negative droplets are called ‘rain’. Signals of such droplets can hinder analysis and the correct setting of a threshold. In this manuscript, a computer-based algorithm has been carefully designed to evaluate assay performance and facilitate objective criteria for assay optimization. Optimized assays in return minimize the impact of rain on ddPCR analysis. We developed an Excel based ‘experience matrix’ that reflects the assay parameters of GMO ddPCR tests performed in our laboratory. Parameters considered include singleplex/duplex ddPCR, assay volume, thermal cycler, probe manufacturer, oligonucleotide concentration, annealing/elongation temperature, and a droplet separation evaluation. We additionally propose an objective droplet separation value which is based on both absolute fluorescence signal distance of positive and negative droplet populations and the variation within these droplet populations. The proposed performance classification in the experience matrix can be used for a rating of different assays for the same GMO target, thus enabling employment of the best suited assay parameters. Main optimization parameters include annealing/extension temperature and oligonucleotide concentrations. 
The droplet separation value allows for easy and reproducible assay performance evaluation. The combination of separation value with the experience matrix simplifies the choice of adequate assay parameters for a given GMO event. PMID:27077048
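As a rough illustration of the separation idea described above, the sketch below simulates droplet amplitudes and scores the split between negative and positive populations. The specific formula, population parameters and threshold are hypothetical; the abstract does not give the exact definition used in the experience matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated droplet fluorescence amplitudes: a negative and a positive
# population plus a little 'rain' in between (all values hypothetical).
neg = rng.normal(1000.0, 80.0, 8000)
pos = rng.normal(6000.0, 200.0, 2000)
rain = rng.uniform(1500.0, 5500.0, 50)
amplitudes = np.concatenate([neg, pos, rain])

def separation_value(neg_pop, pos_pop):
    """Distance between population means scaled by their spreads.

    Illustrative form only: the published separation value likewise
    combines the absolute signal distance of the two populations with
    their internal variation, but its exact definition is not given
    in the abstract."""
    return (pos_pop.mean() - neg_pop.mean()) / (pos_pop.std() + neg_pop.std())

# Classify droplets with a simple midpoint threshold, then score how
# cleanly the two populations separate (rain degrades the score).
threshold = 3500.0
classified_neg = amplitudes[amplitudes < threshold]
classified_pos = amplitudes[amplitudes >= threshold]
sv = separation_value(classified_neg, classified_pos)
print(sv)
```

A higher score for one assay variant than another would, in this sketch, correspond to a better-performing row of the experience matrix.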
Zhou, Shiqi
2006-06-01
A second-order direct correlation function (DCF) obtained by solving the polymer-RISM integral equation is scaled up or down using an equation of state for the bulk polymer; the resulting scaled second-order DCF is in better agreement with corresponding simulation results than the unscaled one. When the scaled second-order DCF is imported into a recently proposed LTDFA-based polymer DFT approach, an originally adjustable but mathematically meaningless parameter becomes mathematically meaningful, i.e., its numerical value now lies between 0 and 1. When the adjustable-parameter-free version of the LTDFA is used instead, i.e., the adjustable parameter is fixed at 0.5, the resulting parameter-free version of the scaled LTDFA-based polymer DFT is also in good agreement with the corresponding simulation data for density profiles. This parameter-free version is employed to investigate the density profiles of a freely jointed tangent hard-sphere chain near a variable-sized central hard sphere; again the predictions accurately reproduce the simulation results. The importance of the present parameter-free version lies in its combination with a recently proposed universal theoretical approach; in the resulting formalism, the contact theorem is still satisfied by the adjustable parameter associated with that approach.
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
2007-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
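The core of the technique above, compressing many health parameters into a low-dimensional tuning vector via the singular value decomposition, can be sketched as follows. The sensitivity matrix, its dimensions and the rank k are invented for illustration; only the use of a truncated SVD as a least-squares-optimal reduction follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sensitivity matrix G: how 10 unmeasurable health
# parameters affect 5 outputs of interest (e.g. sensed outputs plus
# thrust).  The numbers are invented for illustration.
G = rng.normal(size=(5, 10))

# Truncated SVD: keep the k strongest directions, so a reduced tuning
# vector q (k values) approximates the effect of all 10 health
# parameters h in a least-squares sense: G h ~ G_k (V_k h).
U, s, Vt = np.linalg.svd(G, full_matrices=False)
k = 3
V_k = Vt[:k]                 # maps health parameters to tuning vector
G_k = U[:, :k] * s[:k]       # effect of the tuning vector on outputs

h = rng.normal(size=10)      # some degradation state
q = V_k @ h                  # low-dimensional tuning vector
approx = G_k @ q
exact = G @ h
print(np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```

Because the discarded directions carry the smallest singular values, no other rank-k reduction reproduces the output effect of the full health-parameter set with smaller least-squares error.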
Analysis of glottal source parameters in Parkinsonian speech.
Hanratty, Jane; Deegan, Catherine; Walsh, Mary; Kirkpatrick, Barry
2016-08-01
Diagnosis and monitoring of Parkinson's disease pose a number of challenges, as there is no definitive biomarker despite the broad range of symptoms. Research is ongoing to produce objective measures that can either diagnose Parkinson's or act as an objective decision support tool. Recent research on speech-based measures has demonstrated promising results. This study aims to investigate the characteristics of the glottal source signal in Parkinsonian speech. An experiment is conducted in which a selection of glottal parameters are tested for their ability to discriminate between healthy and Parkinsonian speech. Results for each glottal parameter are presented for a database of 50 healthy speakers and a database of 16 speakers with Parkinsonian speech symptoms. Receiver operating characteristic (ROC) curves were employed to analyse the results, and the area under the ROC curve (AUC) values were used to quantify the performance of each glottal parameter. The results indicate that glottal parameters can be used to discriminate between healthy and Parkinsonian speech, although results varied for each parameter tested. For the task of separating healthy and Parkinsonian speech, 2 out of the 7 glottal parameters tested produced AUC values of over 0.9.
Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback
NASA Astrophysics Data System (ADS)
Bruni, Renato; Celani, Fabio
2016-10-01
The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law with four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem for several reasons: 1) the convergence time cannot be expressed in analytical form as a function of the parameters and initial conditions; 2) the design parameters may range over very wide intervals; 3) the convergence time also depends on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. Such algorithms do not require an analytical expression of the objective function; they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize the convergence time under the worst initial conditions. Results are very promising.
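The min-max formulation above can be mimicked with a toy problem. In the sketch below the convergence time is an invented analytic stand-in (the real objective is only available by simulating the spacecraft), and plain random search stands in for the derivative-free optimizer.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented stand-in objective: the "convergence time" of a damped
# system as a function of two design parameters p and an initial
# condition x0.  (In the paper this quantity is only available by
# simulation, not in analytic form.)
def conv_time(p, x0):
    damping = p[0] / (1.0 + p[1] ** 2)
    return abs(x0) / max(damping, 1e-6)

initial_conditions = [-2.0, 0.5, 1.5]     # worst case taken over these

def worst_case(p):
    return max(conv_time(p, x0) for x0 in initial_conditions)

# Derivative-free random search over wide parameter ranges: only
# function evaluations are used, no gradients.
best_p, best_val = None, np.inf
for _ in range(2000):
    p = rng.uniform([0.01, 0.0], [10.0, 10.0])
    v = worst_case(p)
    if v < best_val:
        best_p, best_val = p, v
print(best_p, best_val)
```

Minimizing `worst_case` rather than `conv_time` at a single initial condition is exactly the min-max robustness idea: the returned parameters are good even for the least favorable initial condition considered.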
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular for dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We refer to models subject to this uncertainty as sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and enable evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing data mechanism model assumption by comparing datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
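A bare-bones version of the simulation-based comparison above, using K=1 nearest-neighbour distances: datasets are simulated under candidate missing-not-at-random (MNAR) mechanisms and compared with the observed data. The logistic missingness model and all parameter values are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a dataset under a logistic MNAR mechanism: the probability
# that a value is missing depends on the value itself through the
# sensitivity parameter delta (all settings here are illustrative).
def simulate(delta, n=2000):
    y = rng.normal(0.0, 1.0, n)
    p_miss = 1.0 / (1.0 + np.exp(-delta * y))
    return y[rng.uniform(size=n) > p_miss]    # observed part only

delta_true = 1.5
observed = simulate(delta_true)

# Symmetric mean nearest-neighbour distance between two 1-D samples
# (the paper uses K-nearest-neighbour distances; K=1 here).
def nn_discrepancy(a, b):
    d_ab = np.mean([np.min(np.abs(b - x)) for x in a])
    d_ba = np.mean([np.min(np.abs(a - x)) for x in b])
    return d_ab + d_ba

# Score each candidate sensitivity-parameter value by how closely
# data simulated under it matches the observed data.
candidates = [0.0, 0.5, 1.0, 1.5, 2.0]
scores = {d: np.mean([nn_discrepancy(observed, simulate(d))
                      for _ in range(10)]) for d in candidates}
print(min(scores, key=scores.get))   # plausible deltas score lowest
```

Candidates with large discrepancy scores play the role of "unlikely values" to be rejected; the remaining candidates form the plausible set carried into the sensitivity analysis.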
Validation of DYSTOOL for unsteady aerodynamic modeling of 2D airfoils
NASA Astrophysics Data System (ADS)
González, A.; Gomez-Iradi, S.; Munduate, X.
2014-06-01
From the point of view of wind turbine modeling, an important group of tools is based on blade element momentum (BEM) theory using 2D aerodynamic calculations on the blade elements. Due to the importance of this sectional computation of the blades, the National Renewable Wind Energy Center of Spain (CENER) developed DYSTOOL, an aerodynamic code for 2D airfoil modeling based on the Beddoes-Leishman model. The main focus here is on the model parameters, whose values depend on the airfoil or the operating conditions. In this work, the values of the parameters are adjusted using available experimental or CFD data. The present document mainly concerns the validation of DYSTOOL results for 2D airfoils. The results of the computations have been compared with unsteady experimental data of the S809 and NACA0015 profiles. Some of the cases have also been modeled using the CFD code WMB (Wind Multi Block), within the framework of a collaboration with ACCIONA Windpower. The validation has been performed using pitch oscillations with different reduced frequencies, Reynolds numbers, amplitudes and mean angles of attack. The results show good agreement when the parameter values are adjusted with this methodology. DYSTOOL has proven to be a promising tool for 2D airfoil unsteady aerodynamic modeling.
A neural network based methodology to predict site-specific spectral acceleration values
NASA Astrophysics Data System (ADS)
Kamatchi, P.; Rajasankar, J.; Ramana, G. V.; Nagpal, A. K.
2010-12-01
A general neural network based methodology that has the potential to replace the computationally-intensive site-specific seismic analysis of structures is proposed in this paper. The basic framework of the methodology consists of a feed forward back propagation neural network algorithm with one hidden layer to represent the seismic potential of a region and soil amplification effects. The methodology is implemented and verified with parameters corresponding to Delhi city in India. For this purpose, strong ground motions are generated at bedrock level for a chosen site in Delhi due to earthquakes considered to originate from the central seismic gap of the Himalayan belt using necessary geological as well as geotechnical data. Surface level ground motions and corresponding site-specific response spectra are obtained by using a one-dimensional equivalent linear wave propagation model. Spectral acceleration values are considered as a target parameter to verify the performance of the methodology. Numerical studies carried out to validate the proposed methodology show that the errors in predicted spectral acceleration values are within acceptable limits for design purposes. The methodology is general in the sense that it can be applied to other seismically vulnerable regions and also can be updated by including more parameters depending on the state-of-the-art in the subject.
Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R.
2015-01-01
Motivation: Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge or new experimental data, contradictory with a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. This search is usually not limited to finding a single regulation link, but rather a combination of links subject to great uncertainty or no information about the kinetic parameters. Results: In this work, we combine a logic-based formalism, to describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative to this would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method presented here is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: julio@iim.csic.es or saezrodriguez@ebi.ac.uk PMID:26002881
Pooseh, Shakoor; Bernhardt, Nadine; Guevara, Alvaro; Huys, Quentin J M; Smolka, Michael N
2018-02-01
Using simple mathematical models of choice behavior, we present a Bayesian adaptive algorithm to assess measures of impulsive and risky decision making. Practically, these measures are characterized by discounting rates and are used to classify individuals or population groups, to distinguish unhealthy behavior, and to predict developmental courses. However, a constant demand for improved tools to assess these constructs remains unanswered. The algorithm is based on trial-by-trial observations. At each step, a choice is made between immediate (certain) and delayed (risky) options. Then the current parameter estimates are updated by the likelihood of observing the choice, and the next offers are provided from the indifference point, so that they will acquire the most informative data based on the current parameter estimates. The procedure continues for a certain number of trials in order to reach a stable estimation. The algorithm is discussed in detail for the delay discounting case, and results from decision making under risk for gains, losses, and mixed prospects are also provided. Simulated experiments using prescribed parameter values were performed to justify the algorithm in terms of the reproducibility of its parameters for individual assessments, and to test the reliability of the estimation procedure in a group-level analysis. The algorithm was implemented as an experimental battery to measure temporal and probability discounting rates together with loss aversion, and was tested on a healthy participant sample.
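The trial-by-trial loop described above can be sketched for the delay-discounting case: update a posterior over the discount rate after each choice, then place the next immediate offer at the current indifference point. The hyperbolic value function, softmax choice rule, grid posterior and all numeric values are plausible assumptions rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hyperbolic delay discounting: a delayed amount A at delay D is worth
# A / (1 + k*D).  Simulated participant (values are illustrative):
k_true, beta = 0.05, 5.0            # discount rate, choice sensitivity

def p_choose_delayed(k, imm, delayed, delay):
    dv = delayed / (1.0 + k * delay) - imm    # value difference
    return 1.0 / (1.0 + np.exp(-beta * dv))   # logistic choice rule

# Grid posterior over the discount rate, updated trial by trial
grid = np.logspace(-3, 0, 200)
post = np.ones_like(grid) / grid.size

delayed, delay = 10.0, 30.0
imm = 5.0                            # first immediate offer
for trial in range(60):
    # Simulated participant makes a choice
    choice = rng.uniform() < p_choose_delayed(k_true, imm, delayed, delay)
    # Bayesian update with the likelihood of the observed choice
    like = p_choose_delayed(grid, imm, delayed, delay)
    post *= like if choice else (1.0 - like)
    post /= post.sum()
    # Place the next immediate offer at the current indifference point,
    # where the choice is expected to be most informative
    k_hat = grid[np.argmax(post)]
    imm = delayed / (1.0 + k_hat * delay)

print(k_hat)                         # estimate of the discount rate
```

Offering amounts at the running indifference point is what makes the procedure adaptive: each trial splits the remaining posterior mass roughly in half, so the estimate stabilizes in relatively few trials.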
Uncertainty Quantification of Equilibrium Climate Sensitivity in CCSM4
NASA Astrophysics Data System (ADS)
Covey, C. C.; Lucas, D. D.; Tannahill, J.; Klein, R.
2013-12-01
Uncertainty in the global mean equilibrium surface warming due to doubled atmospheric CO2, as computed by a "slab ocean" configuration of the Community Climate System Model version 4 (CCSM4), is quantified using 1,039 perturbed-input-parameter simulations. The slab ocean configuration reduces the model's e-folding time when approaching an equilibrium state to ~5 years. This time is much less than for the full ocean configuration, consistent with the shallow depth of the upper well-mixed layer of the ocean represented by the "slab." Adoption of the slab ocean configuration requires the assumption of preset values for the convergence of ocean heat transport beneath the upper well-mixed layer. A standard procedure for choosing these values maximizes agreement with the full ocean version's simulation of the present-day climate when input parameters assume their default values. For each new set of input parameter values, we computed the change in ocean heat transport implied by a "Phase 1" model run in which sea surface temperatures and sea ice concentrations were set equal to present-day values. The resulting total ocean heat transport (= standard value + change implied by Phase 1 run) was then input into "Phase 2" slab ocean runs with varying values of atmospheric CO2. Our uncertainty estimate is based on Latin Hypercube sampling over expert-provided uncertainty ranges of N = 36 adjustable parameters in the atmosphere (CAM4) and sea ice (CICE4) components of CCSM4. Two-dimensional projections of our sampling distribution for the N(N-1)/2 possible pairs of input parameters indicate full coverage of the N-dimensional parameter space, including edges. We used a machine learning-based support vector regression (SVR) statistical model to estimate the probability density function (PDF) of equilibrium warming. This fitting procedure produces a PDF that is qualitatively consistent with the raw histogram of our CCSM4 results. 
Most of the values from the SVR statistical model are within ~0.1 K of the raw results, well below the inter-decile range inferred below. Independent validation of the fit indicates residual errors that are distributed about zero with a standard deviation of 0.17 K. Analysis of variance shows that the equilibrium warming in CCSM4 is mainly linear in parameter changes. Thus, in accord with the Central Limit Theorem of statistics, the PDF of the warming is approximately Gaussian, i.e. symmetric about its mean value (3.0 K). Since SVR allows for highly nonlinear fits, the symmetry is not an artifact of the fitting procedure. The 10-90 percentile range of the PDF is 2.6-3.4 K, consistent with earlier estimates from CCSM4 but narrower than estimates from other models, which sometimes produce a high-temperature asymmetric tail in the PDF. This work was performed under auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and was funded by LLNL's Uncertainty Quantification Strategic Initiative (Laboratory Directed Research and Development Project 10-SI-013).
A Stokes drift approximation based on the Phillips spectrum
NASA Astrophysics Data System (ADS)
Breivik, Øyvind; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.
2016-04-01
A new approximation to the Stokes drift velocity profile based on the exact solution for the Phillips spectrum is explored. The profile is compared with the monochromatic profile and the recently proposed exponential integral profile. ERA-Interim spectra and spectra from a wave buoy in the central North Sea are used to investigate the behavior of the profile. It is found that the new profile has a much stronger gradient near the surface and lower normalized deviation from the profile computed from the spectra. Based on estimates from two open-ocean locations, an average value has been estimated for a key parameter of the profile. Given this parameter, the profile can be computed from the same two parameters as the monochromatic profile, namely the transport and the surface Stokes drift velocity.
NASA Astrophysics Data System (ADS)
Saputra, Eka; Tjahjaningsih, Wahju; Patmawati
2017-02-01
The shelf life of fresh fish can be extended by adding antibacterial compounds, either synthetic chemicals or natural materials. One natural ingredient that is safe to use for prolonging fish freshness is chitosan, which inhibits quality deterioration in tilapia fillets. The organoleptic value of tilapia fillets treated with a chitosan solution declined more slowly than that of untreated fillets. In organoleptic tests up to 18 hours of storage, a 2% chitosan solution maintained the highest organoleptic values for the appearance, texture and smell of the fillet meat. Overall, the 2% chitosan solution gave the best results based on the appearance, texture, smell, pH value and TVB value of the fillets.
NASA Astrophysics Data System (ADS)
Whittaker, Ian C.; Sembay, Steve
2016-07-01
Solar wind charge exchange occurs at Earth between the neutral planetary exosphere and highly charged ions of the solar wind. The main challenge in predicting the resultant photon flux in the X-ray energy bands is the interaction efficiency, known as the α value. This study produces experimental α values at the Earth for oxygen emission in the range of 0.5-0.7 keV. Thirteen years of data from the Advanced Composition Explorer are examined, comparing O7+ and O8+ abundances, as well as O/H, to other solar wind parameters, allowing all parameters in the αO7,8+ calculation to be estimated from the solar wind velocity. Finally, a table is produced for a range of solar wind speeds giving average O7+ and O8+ abundances, O/H, and αO7,8+ values.
Method for computing self-consistent solution in a gun code
Nelson, Eric M
2014-09-23
Complex gun code computations can be made to converge more quickly based on a selection of one or more relaxation parameters. An eigenvalue analysis is applied to error residuals to identify two error eigenvalues that are associated with respective error residuals. Relaxation values can be selected based on these eigenvalues so that error residuals associated with each can be alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.
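The idea of alternating relaxation values matched to two error eigenvalues can be demonstrated on a toy linear system via Richardson iteration (a simplified stand-in for the gun-code computation; the operator and its spectrum are invented):

```python
import numpy as np

# Richardson iteration x <- x + w*(b - A x); the error evolves as
# e <- (I - w*A) e.  If two eigenvalues of A dominate the error
# residuals, alternating w = 1/lam1, 1/lam2 removes each error
# component in turn.
A = np.diag([4.0, 1.0, 0.5])
b = np.array([1.0, 2.0, 3.0])
x_exact = np.linalg.solve(A, b)

lam1, lam2 = 4.0, 1.0               # the two dominant error eigenvalues

def solve(relaxations, iters=30):
    x = np.zeros(3)
    for i in range(iters):
        w = relaxations[i % len(relaxations)]
        x = x + w * (b - A @ x)
    return np.linalg.norm(x - x_exact)        # remaining error

single = solve([2.0 / (lam1 + lam2)])         # one fixed relaxation value
alternating = solve([1.0 / lam1, 1.0 / lam2]) # alternate between modes
print(single, alternating)
```

Note also the stability point from the abstract: w = 1/lam2 = 1.0 alone would amplify the lam1 = 4 mode (factor 1 − 4 = −3), yet it is safe in alternation because the partner step has already annihilated that mode.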
NASA Astrophysics Data System (ADS)
Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.
2012-04-01
We present a novel method for the parameter oriented analysis of mutual correlation between independent time series or between equivalent structures such as ordered data sets. The proposed method is based on the sliding window technique, defines a new type of correlation measure and can be applied to time series from all domains of science and technology, experimental or simulated. A specific parameter that can characterize the time series is computed for each window and a cross correlation analysis is carried out on the set of values obtained for the time series under investigation. We apply this method to the study of some currency daily exchange rates from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed and a tentative crisis prediction is presented.
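A minimal sketch of the sliding-window parameter correlation: compute a parameter per window for two series, then correlate the resulting parameter sequences. The window-wise standard deviation stands in for the Hurst exponent or intermittency parameter, and the synthetic series are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two synthetic series sharing a common volatility regime, so a
# window-wise parameter should be mutually correlated (all settings
# are illustrative; the paper uses daily exchange rates).
n, win, step = 3000, 250, 50
vol = 1.0 + 0.5 * np.sin(np.linspace(0.0, 6.0 * np.pi, n))
a = np.cumsum(vol * rng.normal(size=n))
b = np.cumsum(vol * rng.normal(size=n))

def window_parameter(x, win, step):
    # One parameter value per sliding window; the standard deviation
    # of the increments stands in for e.g. a Hurst exponent or an
    # intermittency parameter.
    inc = np.diff(x)
    return np.array([inc[i:i + win].std()
                     for i in range(0, inc.size - win + 1, step)])

pa = window_parameter(a, win, step)
pb = window_parameter(b, win, step)
r = np.corrcoef(pa, pb)[0, 1]   # cross-correlation of parameter series
print(r)
```

The raw series here are nearly uncorrelated random walks; the correlation appears only at the level of the window-wise parameter, which is precisely the kind of mutual relationship the method is designed to expose.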
NASA Astrophysics Data System (ADS)
Puangjaktha, Prayot; Pailoplee, Santi
2018-01-01
To identify prospective areas of upcoming strong-to-major earthquakes, i.e., Mw ≥ 6.0, a catalog of seismicity in the vicinity of the Thailand-Laos-Myanmar border region was generated and then investigated statistically. Following the successful investigations of previous works, the seismicity rate change (Z value) technique was applied in this study. Based on the complete earthquake dataset, eight available case studies of strong-to-major earthquakes were investigated retrospectively. After iterative tests of the characteristic parameters, the number of earthquakes (N) and the time window (Tw), the values N = 50 events and Tw = 1.2 years were found to reveal an anomalously high Z-value peak (seismic quiescence) prior to the occurrence of six of the eight major earthquake events studied. In addition, the locations of the Z-value anomalies conformed fairly well to the epicenters of those earthquakes. Based on the investigation of the correlation coefficient and a stochastic test of the Z values, the parameters used here (N = 50 events and Tw = 1.2 years) were suitable to determine the precursory Z value and not random phenomena. The Z values of this study and the frequency-magnitude distribution b values of a previous work both highlighted the same prospective areas that might generate an upcoming major earthquake: (i) some areas in the northern part of Laos and (ii) the eastern part of Myanmar.
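The seismicity-rate-change statistic referred to above is usually computed in the standard Habermann/Wiemer form; a hedged sketch with invented event-rate bins:

```python
# Hedged sketch of the standard Z statistic for seismicity rate change:
# Z = (R1 - R2) / sqrt(S1/n1 + S2/n2), comparing mean rate and variance in
# a candidate time window against the background catalog.
from statistics import mean, pvariance

def z_value(rates_background, rates_window):
    """Z > 0 when the window rate drops below background (quiescence)."""
    r1, r2 = mean(rates_background), mean(rates_window)
    s1, s2 = pvariance(rates_background), pvariance(rates_window)
    n1, n2 = len(rates_background), len(rates_window)
    return (r1 - r2) / ((s1 / n1 + s2 / n2) ** 0.5)

# invented toy data: rate drops from ~10 events/bin to ~4 events/bin
background = [10, 11, 9, 10, 12, 10, 9, 11]
window = [4, 5, 3, 4]
z = z_value(background, window)   # clearly positive: a quiescence signal
```

In a study like the one above, this statistic would be evaluated on a spatial grid and scanned over candidate windows of length Tw.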
NASA Astrophysics Data System (ADS)
Hassani, B.; Atkinson, G. M.
2015-12-01
One of the most important issues in developing accurate ground-motion prediction equations (GMPEs) is the effective use of limited regional site information in developing a site-effects model. In modern empirical GMPE models, site effects are usually characterized by simplified parameters that describe the overall near-surface effects on input ground-motion shaking. The most common site-effects parameter is the time-averaged shear-wave velocity in the upper 30 m (VS30), which has been used in the Next Generation Attenuation-West (NGA-West) and NGA-East GMPEs and is widely used in building-code applications. For the NGA-East GMPE database, only 6% of the stations have measured VS30 values, while the rest have proxy-based VS30 values, derived from a weighted average of different proxies' estimates such as topographic slope and surface geology. For the proxy-based approaches, the uncertainty in the estimation of VS30 is significantly higher (~0.25 log10 units) than that for stations with measured VS30 (0.04 log10 units); this translates into error in site amplification and hence increased ground-motion variability. We introduce a new VS30 proxy as a function of the site fundamental frequency (fpeak) using the NGA-East database, and show that fpeak is a particularly effective proxy for sites in central and eastern North America. We first use horizontal-to-vertical spectral ratios (H/V) of 5%-damped pseudo-spectral acceleration (PSA) to find the fpeak values for the recording stations. We then develop the fpeak-based VS30 proxy by correlating the measured VS30 values with the corresponding fpeak values. The uncertainty of the VS30 estimate using the fpeak-based model is much lower (0.14 log10 units) than that for the proxy-based methods used in the NGA-East database (0.25 log10 units).
The results of this study can be used to recalculate the VS30 values more accurately for stations with known fpeak values (23% of the stations), and potentially to reduce the overall variability of the developed NGA-East GMPE models.
Earthquake hazard analysis for the different regions in and around Ağrı
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr
We investigated earthquake hazard parameters for the eastern part of Turkey by determining the a and b parameters of the Gutenberg–Richter magnitude–frequency relationship. For this purpose, the study area was divided into seven source zones based on their tectonic and seismotectonic regimes. The database used in this work was compiled for the instrumental period from sources and catalogues including TURKNET, the International Seismological Centre (ISC), the Incorporated Research Institutions for Seismology (IRIS), and The Scientific and Technological Research Council of Turkey (TUBITAK). We calculated the a value and the b value, which is the slope of the Gutenberg–Richter frequency–magnitude relationship, by the maximum likelihood method (ML). We also estimated the mean return periods, the most probable maximum magnitude in a time period of t years, and the probability of occurrence of an earthquake of magnitude ≥ M during a time span of t years, using the Zmap software. The lowest b value was calculated in Region 1, which covers the Cobandede Fault Zone. We obtained the highest a value in Region 2, which covers the Kagizman Fault Zone. This conclusion is strongly supported by the probability value, which is largest (87%) for an earthquake with magnitude greater than or equal to 6.0; the mean return period for such a magnitude is the lowest in this region (49 years). The most probable magnitude in the next 100 years was calculated, and the highest value was found around the Cobandede Fault Zone. According to these parameters, Region 1, covering the Cobandede Fault Zone, is the most dangerous area in the eastern part of Turkey.
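The maximum-likelihood b-value estimate mentioned above is classically Aki's (1965) formula, b = log10(e) / (mean(M) − Mc). A hedged sketch with a synthetic catalog (the magnitudes and completeness magnitude Mc below are invented, not this study's data):

```python
# Aki's maximum-likelihood estimator of the Gutenberg-Richter b value,
# b = log10(e) / (mean(M) - Mc), demonstrated on synthetic magnitudes drawn
# from an exponential distribution with true b = 1.0 above Mc = 3.0.
import math
import random

def b_value_ml(magnitudes, mc):
    above = [m for m in magnitudes if m >= mc]
    return math.log10(math.e) / (sum(above) / len(above) - mc)

random.seed(0)
beta = 1.0 * math.log(10)                       # beta = b * ln(10)
mags = [3.0 + random.expovariate(beta) for _ in range(20000)]
b = b_value_ml(mags, 3.0)                       # recovers b close to 1.0
```

The a value then follows from the total event count once b and Mc are fixed, via log10 N(M ≥ Mc) = a − b·Mc.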
Suspension parameter estimation in the frequency domain using a matrix inversion approach
NASA Astrophysics Data System (ADS)
Thite, A. N.; Banvidi, S.; Ibicek, T.; Bennett, L.
2011-12-01
The dynamic lumped-parameter models used to optimise the ride and handling of a vehicle require base values of the suspension parameters. These parameters are generally identified experimentally. The accuracy of the identified parameters can depend on the measurement noise and on the validity of the model used. Existing publications on suspension parameter identification are generally based on the time domain and use a limited number of degrees of freedom. Further, the data used are either from a simulated 'experiment' or from a laboratory test on an idealised quarter- or half-car model. In this paper, a method is developed in the frequency domain which effectively accounts for the measurement noise. Additional dynamic constraining equations are incorporated, and the proposed formulation results in a matrix inversion approach. The nonlinearities in damping are, however, estimated using a time-domain approach. Full-scale four-post rig test data of a vehicle are used. The variations in the results are discussed using the modal resonant behaviour. Further, a method is implemented to show how the results can be improved when the inverted matrix is ill-conditioned. The case study shows good agreement between the estimates based on the proposed frequency-domain approach and measurable physical parameters.
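One standard remedy for the ill-conditioned inversion mentioned above is Tikhonov regularization; whether this is the paper's exact technique is not stated, so the following is a hedged sketch on an invented, nearly singular 2x2 system:

```python
# Hedged sketch of Tikhonov-regularized least squares,
# x = (A^T A + lam*I)^(-1) A^T b, as one common fix when the matrix to be
# inverted is ill-conditioned.  System and lam are invented toy values.

def solve2(M, v):
    """Solve a 2x2 system M x = v by the explicit inverse."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [( M[1][1] * v[0] - M[0][1] * v[1]) / det,
            (-M[1][0] * v[0] + M[0][0] * v[1]) / det]

def tikhonov(A, b, lam):
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    M = [[AtA[0][0] + lam, AtA[0][1]],
         [AtA[1][0], AtA[1][1] + lam]]
    return solve2(M, Atb)

A = [[1.0, 1.0], [1.0, 1.0001]]   # nearly singular design matrix
b = [2.0, 2.0001]                 # consistent with exact solution [1, 1]
x = tikhonov(A, b, 1e-6)          # small lam stabilizes the inversion
```

The regularization parameter trades a small bias for a large reduction in noise amplification along the near-null direction.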
Hwang, Jusun; Gottdenker, Nicole; Min, Mi-Sook; Lee, Hang; Chun, Myung-Sun
2016-06-01
In this study, we evaluated the potential association between the habitat types of feral cats and the prevalence of selected infectious pathogens and health status based on a set of blood parameters. We live-trapped 72 feral cats from two different habitat types: an urban area (n = 48) and a rural agricultural area (n = 24). We compared blood values and the prevalence of feline immunodeficiency virus (FIV), feline leukaemia virus (FeLV) and haemotropic Mycoplasma infection in feral cats from the two contrasting habitats. Significant differences were observed in several blood values (haematocrit, red blood cells, blood urea nitrogen, creatinine) depending on the habitat type and/or sex of the cat. Two individuals from the urban area were seropositive for FIV (3.0%), and eight (12.1%) were positive for FeLV infection (five from an urban habitat and three from a rural habitat). Haemoplasma infection was more common. Based on molecular analysis, 38 cats (54.3%) were positive for haemoplasma, with a significantly higher infection rate in cats from rural habitats (70.8%) compared with urban cats (47.8%). Our study recorded haematological and serum biochemical values, and prevalence of selected pathogens in feral cat populations from two different habitat types. A subset of important laboratory parameters from rural cats showed values under or above the corresponding reference intervals for healthy domestic cats, suggesting potential differences in the health status of feral cats depending on the habitat type. Our findings provide information about the association between 1) blood values (haematological and serum biochemistry parameters) and 2) prevalence of selected pathogen infections and different habitat types; this may be important for veterinarians who work with feral and/or stray cats and for overall cat welfare management. © ISFM and AAFP 2015.
Goldberg, Tony L; Gillespie, Thomas R; Singer, Randall S
2006-09-01
Repetitive-element PCR (rep-PCR) is a method for genotyping bacteria based on the selective amplification of repetitive genetic elements dispersed throughout bacterial chromosomes. The method has great potential for large-scale epidemiological studies because of its speed and simplicity; however, objective guidelines for inferring relationships among bacterial isolates from rep-PCR data are lacking. We used multilocus sequence typing (MLST) as a "gold standard" to optimize the analytical parameters for inferring relationships among Escherichia coli isolates from rep-PCR data. We chose 12 isolates from a large database to represent a wide range of pairwise genetic distances, based on the initial evaluation of their rep-PCR fingerprints. We conducted MLST with these same isolates and systematically varied the analytical parameters to maximize the correspondence between the relationships inferred from rep-PCR and those inferred from MLST. Methods that compared the shapes of densitometric profiles ("curve-based" methods) yielded consistently higher correspondence values between data types than did methods that calculated indices of similarity based on shared and different bands (maximum correspondences of 84.5% and 80.3%, respectively). Curve-based methods were also markedly more robust in accommodating variations in user-specified analytical parameter values than were "band-sharing coefficient" methods, and they enhanced the reproducibility of rep-PCR. Phylogenetic analyses of rep-PCR data yielded trees with high topological correspondence to trees based on MLST and high statistical support for major clades. These results indicate that rep-PCR yields accurate information for inferring relationships among E. coli isolates and that accuracy can be enhanced with the use of analytical methods that consider the shapes of densitometric profiles.
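The two families of similarity measures contrasted above can be illustrated side by side. This is a hedged toy comparison, not the study's actual software pipeline: a curve-based measure (Pearson correlation of whole densitometric profiles) versus a band-sharing measure (Dice coefficient on called band positions); the profiles and band sets are invented.

```python
# Curve-based vs band-sharing similarity for rep-PCR fingerprints (toy data).
from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def dice(bands_a, bands_b):
    shared = len(bands_a & bands_b)
    return 2 * shared / (len(bands_a) + len(bands_b))

profile_a = [0, 1, 5, 1, 0, 0, 4, 1, 0, 2, 6, 2]   # densitometric trace
profile_b = [0, 2, 6, 1, 0, 0, 3, 1, 0, 1, 5, 1]   # similar overall shape
curve_sim = pearson(profile_a, profile_b)          # close to 1
band_sim = dice({2, 6, 10}, {2, 6, 10, 8})         # bands called at peaks
```

The curve-based measure uses all of the profile information, while band calling discards intensity and is sensitive to threshold choices, which is consistent with the robustness advantage reported above.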
Rackes, A; Ben-David, T; Waring, M S
2018-07-01
This article presents an outcome-based ventilation (OBV) framework, which combines competing ventilation impacts into a monetized loss function ($/occ/h) used to inform ventilation rate decisions. The OBV framework, developed for U.S. offices, considers six outcomes of increasing ventilation: profitable outcomes realized from improvements in occupant work performance and sick leave absenteeism; health outcomes from occupant exposure to outdoor fine particles and ozone; and energy outcomes from electricity and natural gas usage. We used the literature to set low, medium, and high reference values for OBV loss function parameters, and evaluated the framework and outcome-based ventilation rates using a simulated U.S. office stock dataset and a case study in New York City. With parameters for all outcomes set at medium values derived from literature-based central estimates, higher ventilation rates' profitable benefits dominated negative health and energy impacts, and the OBV framework suggested ventilation should be ≥45 L/s/occ, much higher than the baseline ~8.5 L/s/occ rate prescribed by ASHRAE 62.1. Only when combining very low parameter estimates for profitable impacts with very high ones for health and energy impacts were all outcomes on the same order. Even then, however, outcome-based ventilation rates were often twice the baseline rate or more. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A testable model of earthquake probability based on changes in mean event size
NASA Astrophysics Data System (ADS)
Imoto, Masajiro
2003-02-01
We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from the viewpoint that the mean event size tends to increase as the critical point is approached. A parameter describing these changes was defined using a simple weighted-average procedure. To obtain the background distribution of the parameter, we surveyed its values from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each taken immediately prior to one of the target events. The background distribution is symmetric, with its center corresponding to no change in b value. In contrast, the conditional distribution exhibits an asymmetric feature that tends toward a decreased b value. The difference between the two distributions was significant and provided us with a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study, in the range of 2-4. A successful example of the proposed model can be seen in the earthquake of 3 June 2000, the only event during the period of prospective testing.
Na, Min Kyun; Won, Yu Deok; Kim, Choong Hyun; Kim, Jae Min; Cheong, Jin Hwan; Ryu, Je Il; Han, Myung-Hoon
2017-01-01
Hydrocephalus is a frequent complication of subarachnoid hemorrhage, yet few studies have investigated the association between laboratory parameters and shunt-dependent hydrocephalus. This study aimed to investigate the variation of laboratory parameters after subarachnoid hemorrhage and to identify laboratory parameters predictive of shunt-dependent hydrocephalus. Multiple imputation was performed to fill in missing laboratory data using Bayesian methods in SPSS. We used univariate and multivariate Cox regression analyses to calculate hazard ratios for shunt-dependent hydrocephalus based on clinical and laboratory factors. The area under the receiver operating characteristic (ROC) curve was used to determine the laboratory values predictive of shunt-dependent hydrocephalus. We included 181 participants with a mean age of 54.4 years. Higher sodium (hazard ratio, 1.53; 95% confidence interval, 1.13-2.07; p = 0.005), lower potassium, and higher glucose levels were associated with a higher risk of shunt-dependent hydrocephalus. The ROC curve analysis showed that the areas under the curve for sodium, potassium, and glucose were 0.649 (cutoff value, 142.75 mEq/L), 0.609 (cutoff value, 3.04 mmol/L), and 0.664 (cutoff value, 140.51 mg/dL), respectively. Despite the exploratory nature of this study, we found that higher sodium, lower potassium, and higher glucose levels were predictive of shunt-dependent hydrocephalus from postoperative day (POD) 1 to POD 12-16 after subarachnoid hemorrhage. Strict correction of electrolyte imbalance seems necessary to reduce shunt-dependent hydrocephalus. Further large studies are warranted to confirm our findings.
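A cutoff like the sodium value of 142.75 mEq/L reported above is typically derived from the ROC curve of a single laboratory parameter. A hedged sketch (the laboratory values below are invented, and the Youden criterion is one common, not the only, way to pick the cutoff):

```python
# Rank-based AUC plus a Youden-optimal threshold sweep for one lab parameter.

def auc(pos, neg):
    """Mann-Whitney AUC: P(random positive > random negative); ties count 0.5."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def youden_cutoff(pos, neg):
    best = None
    for t in sorted(set(pos) | set(neg)):
        sens = sum(p >= t for p in pos) / len(pos)
        spec = sum(n < t for n in neg) / len(neg)
        j = sens + spec - 1               # Youden's J statistic
        if best is None or j > best[0]:
            best = (j, t)
    return best[1]

shunt = [144, 146, 143, 145, 147]      # invented sodium values, shunt cases
no_shunt = [138, 140, 139, 141, 142]   # invented sodium values, controls
a = auc(shunt, no_shunt)               # 1.0 on this perfectly separated toy
cut = youden_cutoff(shunt, no_shunt)   # 143, the lowest positive value
```

On real, overlapping data the AUC would be well below 1, as in the 0.6-0.7 range reported above.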
Landslide susceptibility estimations in the Gerecse hills (Hungary).
NASA Astrophysics Data System (ADS)
Gerzsenyi, Dávid; Gáspár, Albert
2017-04-01
Surface movement processes constantly pose a threat to property in populated and agricultural areas of the Gerecse hills (Hungary). The affected geological formations are mainly unconsolidated sediments: Pleistocene loess and alluvial terrace sediments are overwhelmingly present, but fluvio-lacustrine sediments of the latest Miocene, as well as consolidated Eocene and Mesozoic limestones and marls, can also be found in the area. Landslides and other surface movement processes have long been studied in the area, but a comprehensive GIS-based geostatistical analysis had not yet been made for the whole area, which was the reason for choosing the Gerecse as the focus of this study. Moreover, the base data of our study are freely accessible from online servers, so the method used can be applied to other regions in Hungary. Qualitative data were acquired from the landslide-inventory map of the Hungarian Surface Movement Survey and from the Geological Map of Hungary (1 : 100 000). Morphometric parameters derived from the SRTM-1 DEM were used as quantitative variables. Using these parameters, the distributions of elevation, slope gradient, aspect, and categorized geological features were computed, both for areas affected and not affected by slope movements. Likelihood values were then computed for each parameter by comparing its distribution in the two areas. By combining the likelihood values of the four parameters, a relative hazard value was computed for each cell. This method is known as "empirical probability estimation", originally published by Chung (2005). The map created this way shows each cell's place in the ranking of relative hazard values, expressed as a percentage over the whole study area (787 km2). These values indicate how similar a certain area is to the areas already affected by landslides, based on the four predictor variables.
This map can also serve as a base for more complex landslide vulnerability studies involving economic factors. The landslide-inventory database used in the research provides information on the state of activity of past surface movements; however, the activity of many sites is recorded as unknown. A complementary field survey has been carried out to categorize these areas - near the villages of Dunaszentmiklós and Neszmély - in one of the most landslide-affected parts of the Gerecse. Reference: Chung, C. (2005). Using likelihood ratio functions for modeling the conditional probability of occurrence of future landslides for risk assessment. Computers & Geosciences, 32, pp. 1052-1068.
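The likelihood-ratio combination described above can be sketched in a few lines. This is a hedged reading of Chung's (2005) approach, with invented cell counts: each predictor class gets the ratio of its frequency inside landslide-affected cells to its frequency in unaffected cells, and a cell's relative hazard is the product of the ratios of its classes.

```python
# Hedged sketch of likelihood-ratio landslide susceptibility (invented counts).

def likelihood_ratios(class_counts_slide, class_counts_stable):
    n_slide = sum(class_counts_slide.values())
    n_stable = sum(class_counts_stable.values())
    return {c: (class_counts_slide[c] / n_slide)
               / (class_counts_stable[c] / n_stable)
            for c in class_counts_slide}

# one predictor: slope-gradient classes (invented cell counts)
lr_slope = likelihood_ratios({"gentle": 10, "steep": 40},
                             {"gentle": 80, "steep": 20})

def cell_hazard(cell_classes, ratios_by_predictor):
    """Combine the predictors multiplicatively for one raster cell."""
    h = 1.0
    for predictor, cls in cell_classes.items():
        h *= ratios_by_predictor[predictor][cls]
    return h

h_steep = cell_hazard({"slope": "steep"}, {"slope": lr_slope})    # 4.0
h_gentle = cell_hazard({"slope": "gentle"}, {"slope": lr_slope})  # 0.25
```

In the full method this product is evaluated over four predictors (elevation, slope, aspect, geology) and the cells are then ranked into percentiles, as in the map described above.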
Oghli, Mostafa Ghelich; Dehlaghi, Vahab; Zadeh, Ali Mohammad; Fallahi, Alireza; Pooyan, Mohammad
2014-07-01
Assessment of cardiac right-ventricular function plays an essential role in the diagnosis of arrhythmogenic right ventricular dysplasia (ARVD). Among clinical tests, cardiac magnetic resonance imaging (MRI) is now becoming the most valid imaging technique for diagnosing ARVD; fatty infiltration of the right ventricular free wall can be visible on cardiac MRI. Deriving right-ventricular functional parameters from cardiac MRI involves segmenting the right ventricle in each slice of the end-diastole and end-systole phases of the cardiac cycle, calculating the end-diastolic and end-systolic volumes, and from these the other functional parameters. The main challenge in this task is the segmentation step. We used a robust method, based on a deformable model that uses shape information, for segmentation of the right ventricle in short-axis MRI images. After segmenting the right ventricle from base to apex in the end-diastole and end-systole phases, the right-ventricular volumes in these phases were calculated, and from them the ejection fraction. We performed a quantitative evaluation of the clinical cardiac parameters derived from the automatic segmentation by comparison against a manual delineation of the ventricles. The manually and automatically determined quantitative clinical parameters were statistically compared by means of linear regression, which fits a line to the data such that the root-mean-square error (RMSE) of the residuals is minimized. The results show a low RMSE for right-ventricular ejection fraction and volume (≤ 0.06 for RV EF, and ≤ 10 mL for RV volume). The segmentation results were also evaluated by means of four statistical measures: sensitivity, specificity, similarity index, and Jaccard index. The average similarity index is 86.87%, and the mean Jaccard index is 83.85%, which indicates good segmentation accuracy. The average sensitivity is 93.9%, and the mean specificity is 89.45%.
These results show the reliability of the proposed method in cases where manual segmentation is impractical. The large shape variability of the right ventricle led us to use a shape-prior-based method; this work can be developed further with four-dimensional processing for determining the first ventricular slices.
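The four evaluation measures quoted above have standard definitions on binary masks; a sketch with invented toy masks (flat 0/1 lists standing in for image slices):

```python
# Sensitivity, specificity, similarity (Dice) index and Jaccard index for a
# predicted binary segmentation mask against a ground-truth mask.

def seg_metrics(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(not p and t for p, t in zip(pred, truth))
    tn = sum(not p and not t for p, t in zip(pred, truth))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),   # the "similarity index"
        "jaccard": tp / (tp + fp + fn),
    }

truth = [0, 1, 1, 1, 0, 0, 1, 0]   # invented ground-truth mask
pred  = [0, 1, 1, 0, 0, 1, 1, 0]   # invented automatic segmentation
m = seg_metrics(pred, truth)
# sensitivity 0.75, specificity 0.75, dice 0.75, jaccard 0.6
```

Note that Dice is always at least as large as Jaccard (Dice = 2J/(1+J)), which matches the 86.87% vs 83.85% ordering reported above.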
NASA Astrophysics Data System (ADS)
Warchoł, Piotr
2018-06-01
The public transportation system of Cuernavaca, Mexico, exhibits random matrix theory statistics. In particular, the fluctuation of times between the arrivals of buses at a given bus stop follows the Wigner surmise for the Gaussian unitary ensemble. To model this, we propose an agent-based approach in which each bus driver tries to optimize his arrival time at the next stop with respect to the estimated arrival time of his predecessor. We choose a particular form of the associated utility function and, in numerical experiments, recover the appropriate distribution for a certain value of the model's only parameter. We then investigate whether this value of the parameter is otherwise distinguished within an information-theoretic approach and give numerical evidence that it is indeed associated with a minimum of the averaged pairwise mutual information.
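The GUE Wigner surmise referred to above has the closed form p(s) = (32/π²) s² exp(−4s²/π) for the normalized spacing s; a quick numerical check (grid and integrator choices are ours) confirms it is normalized with unit mean spacing:

```python
# The Wigner surmise for the Gaussian unitary ensemble, with a trapezoidal
# check that the density integrates to 1 and has mean spacing 1.
import math

def wigner_gue(s):
    return (32 / math.pi ** 2) * s ** 2 * math.exp(-4 * s ** 2 / math.pi)

def trapz(f, a, b, n=20000):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

norm = trapz(wigner_gue, 0.0, 8.0)                    # ~1.0
mean_s = trapz(lambda s: s * wigner_gue(s), 0.0, 8.0) # ~1.0
```

The s² prefactor encodes the quadratic level repulsion (β = 2) that distinguishes the unitary ensemble from the linear repulsion of the orthogonal one.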
NASA Astrophysics Data System (ADS)
Sayyed, M. I.; Lakshminarayana, G.; Kityk, I. V.; Mahdi, M. A.
2017-10-01
In this work, we have evaluated γ-ray shielding parameters such as the mass attenuation coefficient (μ/ρ), effective atomic number (Zeff), half-value layer (HVL), mean free path (MFP), and exposure buildup factor (EBF) for heavy-metal-fluoride (PbF2) based tellurite-rich glasses. In addition, the total macroscopic neutron cross sections (∑R) for these glasses were calculated. The maximum values of μ/ρ, Zeff, and ∑R were found for the glass containing the heavy metal oxide Bi2O3. The selected glasses were compared, in terms of MFP, with different glass systems. The shielding effectiveness of the selected glasses was found to be comparable to or better than that of common shielding materials, which indicates that these glasses, with suitable oxides, could be developed for gamma-ray shielding applications.
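Two of the quoted parameters follow directly from the attenuation coefficient: HVL = ln(2)/μ and MFP = 1/μ, with μ = (μ/ρ)·ρ. A hedged sketch (the mass attenuation coefficient and density below are invented placeholders, not values for the glasses studied):

```python
# Photon-shielding lengths from the linear attenuation coefficient.
import math

def shielding_lengths(mu_over_rho_cm2_g, density_g_cm3):
    mu = mu_over_rho_cm2_g * density_g_cm3   # linear attenuation, 1/cm
    return {"HVL_cm": math.log(2) / mu,      # thickness halving the intensity
            "MFP_cm": 1.0 / mu}              # mean distance between interactions

vals = shielding_lengths(0.1, 5.0)   # invented values -> mu = 0.5/cm
# HVL = ln(2)/0.5 ~ 1.386 cm; MFP = 2.0 cm
```

Lower HVL and MFP at a given photon energy mean a more effective shield, which is why the comparison above is made in terms of MFP.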
NASA Astrophysics Data System (ADS)
Li, Wei-Yi; Zhang, Qi-Chang; Wang, Wei
2010-06-01
Based on the Silnikov criterion, this paper studies a chaotic system of cubic polynomial ordinary differential equations in three dimensions. Using the Cardano formula, it obtains the exact range of the parameter value corresponding to chaos by means of centre manifold theory and the method of multiple scales combined with Floquet theory. By calculating the manifold near the equilibrium point, a series expression for the homoclinic orbit is also obtained. The space trajectory and the Lyapunov exponent are investigated via numerical simulation, which shows that there is a route to chaos through period-doubling bifurcation and that chaotic attractors exist in the system. These results mean that chaos occurs over the exact parameter range given in this paper. Numerical simulations also verify the analytical results.
An Empirical Calibration of the Mixing-Length Parameter α
NASA Astrophysics Data System (ADS)
Ferraro, Francesco R.; Valenti, Elena; Straniero, Oscar; Origlia, Livia
2006-05-01
We present an empirical calibration of the mixing-length free parameter α based on a homogeneous infrared database of 28 Galactic globular clusters spanning a wide metallicity range (-2.15<[Fe/H]<-0.2). Empirical estimates of the red giant effective temperatures have been obtained from infrared colors. Suitable relations linking these temperatures to the cluster metallicity have been obtained and compared to theoretical predictions. An appropriate set of models for the Sun and Population II giants has been computed by using both the standard solar metallicity (Z/X)solar=0.0275 and the most recently proposed value (Z/X)solar=0.0177. We find that when the standard solar metallicity is adopted, a unique value of α=2.17 can be used to reproduce both the solar radius and the Population II red giant temperatures. Conversely, when the new solar metallicity is adopted, two different values of α are required: α=1.86 to fit the solar radius and α~2.0 to fit the red giant temperatures. However, it must be noted that, regardless of the adopted solar reference, the α parameter does not show any significant dependence on metallicity. Based on observations collected at the European Southern Observatory (ESO), La Silla, Chile, and on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundacion Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.
MAFsnp: A Multi-Sample Accurate and Flexible SNP Caller Using Next-Generation Sequencing Data
Hu, Jiyuan; Li, Tengfei; Xiu, Zidi; Zhang, Hong
2015-01-01
Most existing statistical methods for calling single nucleotide polymorphisms (SNPs) from next-generation sequencing (NGS) data are based on Bayesian frameworks, and no existing SNP caller produces p-values for calling SNPs in a frequentist framework. To fill this gap, we develop a new method, MAFsnp, a Multiple-sample based Accurate and Flexible algorithm for calling SNPs with NGS data. MAFsnp is based on an estimated likelihood ratio test (eLRT) statistic. In practical situations, the involved parameter is very close to the boundary of the parameter space, so standard large-sample theory is not suitable for evaluating the finite-sample distribution of the eLRT statistic. Observing that the distribution of the test statistic is a mixture of zero and a continuous part, we propose to model the test statistic with a novel two-parameter mixture distribution. Once the parameters of the mixture distribution are estimated, p-values can easily be calculated for detecting SNPs, and the multiple-testing-corrected p-values can be used to control the false discovery rate (FDR) at any pre-specified level. With simulated data, MAFsnp is shown to have much better control of the FDR than existing SNP callers. Through application to two real datasets, MAFsnp is also shown to outperform existing SNP callers in terms of calling accuracy. An R package “MAFsnp” implementing the new SNP caller is freely available at http://homepage.fudan.edu.cn/zhangh/softwares/. PMID:26309201
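The zero-plus-continuous mixture null described above can be illustrated with the textbook boundary case. MAFsnp estimates its own two-parameter mixture; what follows is only the classical 50:50 chi-bar-square analog (point mass at zero mixed with a chi-square on 1 df), for which the tail probability has a closed form:

```python
# P-value under the 50:50 mixture of a point mass at 0 and chi-square(1):
# P(T >= t) = 0.5 * P(chi2_1 >= t) = 0.5 * erfc(sqrt(t/2)) for t > 0.
import math

def pvalue_chibar(t):
    if t <= 0:
        return 1.0            # the point mass at zero absorbs t = 0
    return 0.5 * math.erfc(math.sqrt(t / 2))

p = pvalue_chibar(2.71)       # ~0.05, the classic one-sided boundary threshold
```

MAFsnp's refinement is to estimate both the mixing weight and the shape of the continuous part from the data rather than fixing them at these asymptotic values.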
Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features
Zhu, Ningning; Jia, Yonghong; Ji, Shunping
2018-01-01
We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize the original registration parameters and avoids the manual interventions of control-point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from the panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute-force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control-point registration method were used to compare the accuracy of our method on a sequence of panoramic/fish-eye images. The results showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431
Schumacher, Carsten; Eismann, Hendrik; Sieg, Lion; Friedrich, Lars; Scheinichen, Dirk; Vondran, Florian W R; Johanning, Kai
2018-01-01
Liver transplantation is a complex intervention, and early anticipation of personnel and logistic requirements is of great importance; early identification of high-risk patients could prove useful. We therefore evaluated the prognostic value of recipient parameters commonly available in the early preoperative stage with regard to postoperative 30- and 90-day outcomes and intraoperative transfusion requirements in liver transplantation. All adult patients undergoing first liver transplantation at Hannover Medical School between January 2005 and December 2010 were included in this retrospective study. Demographic, clinical, and laboratory data as well as clinical courses were recorded. Prognostic values regarding 30- and 90-day outcomes were evaluated by univariate and multivariate statistical tests, and the identified risk parameters were used to calculate risk scores. There were 426 patients (40.4% female) included, with a mean age of 48.6 (11.9) years. The absolute 30-day mortality rate was 9.9%, and the absolute 90-day mortality rate was 13.4%. Preoperative leukocyte count >5200/μL, platelet count <91 000/μL, and creatinine values ≥77 μmol/L were relevant risk factors for both observation periods (P < .05, respectively). A score based on these factors significantly differentiated between groups with varying postoperative outcomes and intraoperative transfusion requirements (P < .05, respectively). A score based on preoperative creatinine, leukocyte, and platelet values thus allowed early estimation of postoperative 30- and 90-day outcomes and intraoperative transfusion requirements in liver transplantation. These results might help to improve timely logistic and personnel strategies.
Gaia FGK benchmark stars: Metallicity
NASA Astrophysics Data System (ADS)
Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.
2014-04-01
Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to serve as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars selected to be the pillars for the calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently of the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars, together with detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group, and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133
Serum bilirubin: a simple routine surrogate marker of the progression of chronic kidney disease.
Moolchandani, K; Priyadarssini, M; Rajappa, M; Parameswaran, S; Revathy, G
2016-10-01
Studies suggest that Chronic Kidney Disease (CKD) is a global health burden associated with significant comorbid conditions. A few biochemical parameters have gained significance in predicting the disease progression. The present work aimed to study the association of the simple biochemical parameter of serum bilirubin level with the estimated glomerular filtration rate (eGFR), and to assess their association with the comorbid conditions in CKD. We recruited 188 patients with CKD who attended a Nephrology out-patient department. eGFR values were calculated based on the serum creatinine levels using the CKD-EPI formula. Various biochemical parameters including glucose, creatinine, uric acid, and total and direct bilirubin were assayed in all study subjects. Study subjects were categorized into subgroups based on their eGFR values and their diabetic status, and the parameters were compared among the different subgroups. We observed significantly decreased serum bilirubin levels (p < 0.001) in patients with lower eGFR values, compared to those with higher eGFR levels. There was a significant positive correlation between the eGFR levels and the total bilirubin levels (r = 0.92). We also observed a significant positive correlation between the eGFR levels and the direct bilirubin levels (r = 0.76). On multivariate linear regression analysis, we found that total and direct bilirubin independently predict eGFR, after adjusting for potential confounders (p < 0.001). Our results suggest that there is significant hypobilirubinemia in CKD, especially with increasing severity and co-existing diabetes mellitus. This finding has importance in the clinical setting, as assay of simple routine biochemical parameters such as serum bilirubin may help in predicting the early progression of CKD, and more so in diabetic CKD.
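For context, eGFR from serum creatinine can be computed with the 2009 CKD-EPI creatinine equation. The sketch below uses the commonly published coefficients (without the race term) and is an illustration; the abstract does not specify which CKD-EPI variant was used:

```python
def egfr_ckd_epi(scr_mg_dl, age, female):
    """2009 CKD-EPI creatinine equation (race term omitted).

    eGFR = 141 * min(Scr/k, 1)**a * max(Scr/k, 1)**-1.209 * 0.993**Age
           * 1.018 (if female), in mL/min/1.73 m^2.
    k = 0.7 (female) or 0.9 (male); a = -0.329 (female) or -0.411 (male).
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141 * min(ratio, 1) ** alpha * max(ratio, 1) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    return egfr
```

For example, a 50-year-old male with serum creatinine 0.9 mg/dL lands near the healthy reference range, while an elevated creatinine lowers the estimate sharply.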
NASA Astrophysics Data System (ADS)
Banerjee, Paromita; Soni, Jalpa; Purwar, Harsh; Ghosh, Nirmalya; Sengupta, Tapas K.
2013-03-01
Development of methods for quantification of cellular association and patterns in growing bacterial colonies is of considerable current interest, not only to help understand the multicellular behavior of a bacterial species but also to facilitate detection and identification of a bacterial species in a given space and under a given set of condition(s). We have explored quantitative spectral light scattering polarimetry for probing the morphological and structural changes taking place during colony formation of growing Bacillus thuringiensis bacteria under different conditions (in normal nutrient agar representing a favorable growth environment, in the presence of 1% glucose as an additional nutrient, and in the presence of 3 mM sodium arsenate as a toxic material). The method is based on the measurement of spectral 3×3 Mueller matrices (which involves linear polarization measurements alone) and their subsequent analysis via polar decomposition to extract the intrinsic polarization parameters. Moreover, the fractal micro-optical parameter, namely, the Hurst exponent H, is determined via fractal-Born approximation-based inverse analysis of the polarization-preserving component of the light scattering spectra. Interesting differences are noted in the derived values for the H parameter and the intrinsic polarization parameters (linear diattenuation d, linear retardance δ, and linear depolarization Δ coefficients) of the growing bacterial colonies under different conditions. The bacterial colony growing in the presence of 1% glucose exhibits the strongest fractality (lowest value of H), whereas that growing in the presence of 3 mM sodium arsenate showed the weakest fractality. Moreover, the values for the δ and d parameters are found to be considerably higher for the colony growing in the presence of glucose, indicating a more structured growth pattern. These findings are corroborated further with optical microscopic studies conducted on the same samples.
NASA Astrophysics Data System (ADS)
Raimondi, L.; Azetsu-Scott, K.; Wallace, D.
2016-02-01
This work assesses the internal consistency of ocean carbon dioxide measurements through the comparison of discrete measurements and calculated values of four analytical parameters of the inorganic carbon system: Total Alkalinity (TA), Dissolved Inorganic Carbon (DIC), pH and Partial Pressure of CO2 (pCO2). The study is based on 486 seawater samples analyzed for TA, DIC and pH and 86 samples for pCO2 collected during the 2014 cruise along the AR7W line in the Labrador Sea. The internal consistency was assessed using all combinations of input parameters and eight sets of thermodynamic constants (K1, K2) in calculating each parameter through the CO2SYS software. Residuals of each parameter were calculated as the differences between measured and calculated values (reported as ΔTA, ΔDIC, ΔpH and ΔpCO2). Although differences between the selected sets of constants were observed, the largest were obtained using different pairs of input parameters. As expected, the pH-pCO2 pair produced the poorest results, suggesting that measurements of either TA or DIC are needed to define the carbonate system accurately and precisely. To identify a signature of organic alkalinity, we isolated the residuals in the bloom area; accordingly, only ΔTA values from surface waters (0-30 m) along the Greenland side of the basin were selected. The residuals showed that no measured value was higher than the calculated one, and therefore we could not observe the presence of organic bases in the shallower water column. The internal consistency in characteristic water masses of the Labrador Sea (Denmark Strait Overflow Water, North East Atlantic Deep Water, Newly-ventilated Labrador Sea Water, Greenland and Labrador Shelf waters) will also be discussed.
Reduction of Decision-Making Time in the Air Defense Management
2013-06-01
Cohen, Freeman, & Thompson, 1997), "Threat Evaluation and Weapon Allocation" (Turan, 2012) and Evaluating the Performance of TEWA Systems (Fredrik...uses these threat values to propose weapon allocation (Turan, 2012). Turan studied only static-based weapon-target allocation. She evaluates and...Turan: - Proximity parameters (CPA, Time to CPA, CPA in units of time, time before hit, distance), - Capability parameters (target type, weapon
Alwan, Faris M; Baharum, Adam; Hassan, Geehan S
2013-01-01
The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and in diverse industries. However, only a few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kV high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter [Formula: see text] and shape parameters [Formula: see text] and [Formula: see text]. Our analysis reveals that the reliability value decreased by 38.2% every 30 days. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems, for both power system maintenance models and preventive maintenance models.
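The reported reliability analysis can be sketched directly from the Dagum CDF. A minimal illustration, assuming the standard three-parameter form F(x) = (1 + (x/b)^(-a))^(-p) with shape parameters a, p and scale parameter b; the paper's fitted values are elided above, so the numbers below are placeholders:

```python
def dagum_cdf(x, a, p, b):
    """CDF of the three-parameter Dagum distribution:
    F(x) = (1 + (x/b)**(-a))**(-p), x > 0,
    with shape parameters a, p and scale parameter b."""
    return (1.0 + (x / b) ** (-a)) ** (-p)

def reliability(t, a, p, b):
    """Survival/reliability function R(t) = 1 - F(t): the probability
    that the time between failures exceeds t."""
    return 1.0 - dagum_cdf(t, a, p, b)
```

With any valid parameter set, R(t) starts near 1 for small t and decreases monotonically, which is how the 30-day reliability decline in the abstract would be read off the fitted curve.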
NASA Astrophysics Data System (ADS)
Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J. E.; Su, Zhangjie; Dehghani, Hamid
2015-07-01
The subject of superficial contamination and signal origins remains a widely debated topic in the field of Near Infrared Spectroscopy (NIRS), yet the concept of using the technology to monitor an injured brain, in a clinical setting, poses additional challenges concerning the quantitative accuracy of recovered parameters. Using high-density diffuse optical tomography probes, quantitatively accurate parameters from different layers (skin, bone and brain) can be recovered from subject-specific reconstruction models. This study assesses the use of registered atlas models for situations where subject-specific models are not available. Data simulated from subject-specific models were reconstructed using the eight registered atlas models, implementing a regional (layered) parameter recovery in NIRFAST. A 3-region recovery based on the atlas model yielded recovered brain saturation values that were accurate to within 4.6% (percentage error) of the simulated values, validating the technique. The recovered saturations in the superficial regions were not quantitatively accurate. These findings highlight differences in superficial (skin and bone) layer thickness between the subject and atlas models. This layer-thickness mismatch was propagated through the reconstruction process, decreasing the parameter accuracy.
A Method for Medical Diagnosis Based on Optical Fluence Rate Distribution at Tissue Surface.
Hamdy, Omnia; El-Azab, Jala; Al-Saeed, Tarek A; Hassan, Mahmoud F; Solouma, Nahed H
2017-09-20
Optical differentiation is a promising tool in biomedical diagnosis, mainly because of its safety. The values of the optical parameters of biological tissues differ according to tissue histopathology and hence can be used for differentiation. The optical fluence rate distribution on tissue boundaries depends on the optical parameters, so providing image displays of such distributions can offer a visual means of biomedical diagnosis. In this work, an experimental setup was implemented to measure the spatially resolved steady-state diffuse reflectance and transmittance of native and coagulated chicken liver, and of native and boiled chicken breast skin, under laser irradiation at 635 and 808 nm. With the measured values, the optical parameters of the samples were calculated in vitro using a combination of the modified Kubelka-Munk model and the Bouguer-Beer-Lambert law. The estimated optical parameter values were substituted into the diffusion equation to simulate the fluence rate at the tissue surface using the finite element method. Results were verified with Monte Carlo simulation. The results obtained showed that the diffuse reflectance curves and fluence rate distribution images can provide tools for discriminating between different tissue types and hence can be used for biomedical diagnosis.
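The Bouguer-Beer-Lambert step mentioned above amounts to simple exponential attenuation by the total interaction coefficient. A hedged sketch (the coefficient values below are placeholders, not the paper's fitted tissue parameters):

```python
import math

def transmitted_intensity(i0, mu_a, mu_s, thickness_cm):
    """Bouguer-Beer-Lambert attenuation: collimated transmittance through a
    slab with absorption coefficient mu_a and scattering coefficient mu_s
    (both in 1/cm). I = I0 * exp(-(mu_a + mu_s) * d)."""
    mu_t = mu_a + mu_s
    return i0 * math.exp(-mu_t * thickness_cm)
```

In practice the inverse problem is solved: measured transmittance constrains mu_t, and the Kubelka-Munk relations separate absorption from scattering.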
TWT transmitter fault prediction based on ANFIS
NASA Astrophysics Data System (ADS)
Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen
2017-11-01
Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of TWT cathode performance is a common transmitter fault. In this work, a model based on a set of key parameters of the TWT is proposed. By choosing proper parameters and applying an adaptive neural network training model, this method, combined with the analytic hierarchy process (AHP), provides a useful reference for the overall health assessment of TWT transmitters.
Organogel formation rationalized by Hansen solubility parameters: influence of gelator structure.
Bonnet, Julien; Suissa, Gad; Raynal, Matthieu; Bouteiller, Laurent
2015-03-21
Some organic compounds form gels in liquids by assembling into a network of anisotropic fibres. Based on extensive solubility tests of four gelators of similar structures, and on the Hansen solubility parameter formalism, we have probed the quantitative effect of a structural variation of the gelator on its gel formation ability. Increasing the length of an alkyl group of the gelator obviously reduces its polarity, which leads to a gradual shift of its solubility sphere towards lower δp and δh values. At the same time, its gelation sphere is shifted - to a much stronger extent - towards larger δp and δh values.
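The solubility and gelation spheres discussed above are defined by distances in Hansen space. A minimal sketch of the standard distance Ra and the relative-energy-difference (RED) test; the sample sphere center and radius are invented for illustration:

```python
import math

def hansen_distance(s1, s2):
    """Distance Ra between two (delta_d, delta_p, delta_h) triples,
    with the conventional factor of 4 on the dispersion term:
    Ra^2 = 4*(dd1-dd2)^2 + (dp1-dp2)^2 + (dh1-dh2)^2."""
    dd = s1[0] - s2[0]
    dp = s1[1] - s2[1]
    dh = s1[2] - s2[2]
    return math.sqrt(4.0 * dd ** 2 + dp ** 2 + dh ** 2)

def inside_sphere(solvent, center, radius):
    """A solvent lies inside a solubility (or gelation) sphere when
    RED = Ra / R0 < 1."""
    return hansen_distance(solvent, center) < radius
```

Shifting a sphere center toward lower δp and δh, as the abstract describes for longer alkyl chains, changes which solvents satisfy the RED < 1 test.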
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arapov, Yu. G.; Gudina, S. V.; Klepikova, A. S., E-mail: klepikova@imp.uran.ru
2017-02-15
The dependences of the longitudinal and Hall resistances on a magnetic field in n-InGaAs/GaAs heterostructures with single and double quantum wells after infrared illumination were measured in the range of magnetic fields B = 0–16 T and temperatures T = 0.05–4.2 K. Analysis of the experimental results was carried out on the basis of the two-parameter scaling hypothesis for the integer quantum Hall effect. The value of the second (irrelevant) critical exponent of the two-parameter scaling theory was estimated.
[Analysis of Camellia rosthorniana populations fecundity].
Cao, Guoxing; Zhong, Zhangcheng; Xie, Deti; Liu, Yun
2004-03-01
With the method of space-for-time substitution, the structure of Camellia rosthorniana populations in three forest communities, i.e., giant bamboo forest, coniferous and broad-leaved mixed forest, and evergreen broad-leaved forest in Mt. Jinyun, was investigated, and based on static life tables, the fecundity tables and reproductive value tables of C. rosthorniana populations were constructed. Each reproductive parameter and its relation to the bionomic strategies of C. rosthorniana populations were also analyzed. The results indicated that in evergreen broad-leaved forest, the C. rosthorniana population had the longest life span and the greatest fitness. The stage of maximum reproductive value increased with increasing stability of the community. The sum of each population's reproductive value, residual reproductive value, and total reproductive value for the whole life history of C. rosthorniana also increased with increasing maturity of the community, showing their inherent relationships with reproductive fitness. As regards bionomic strategy, C. rosthorniana mainly showed the characteristics of a K-strategist, but in the less stable community the reproductive parameters changed greatly, showing some characteristics of an r-strategist.
Yazdani, Shahin; Akbarian, Shadi; Pakravan, Mohammad; Doozandeh, Azadeh; Afrouzifar, Mohsen
2015-03-01
To compare ocular biometric parameters using low-coherence interferometry among siblings affected with different degrees of primary angle closure (PAC). In this cross-sectional comparative study, a total of 170 eyes of 86 siblings from 47 families underwent low-coherence interferometry (LenStar 900; Haag-Streit, Koeniz, Switzerland) to determine central corneal thickness, anterior chamber depth (ACD), aqueous depth (AD), lens thickness (LT), vitreous depth, and axial length (AL). Regression coefficients were applied to show the trend of the measured variables in different stages of angle closure. To evaluate the discriminative power of the parameters, receiver operating characteristic curves were used. Best cutoff points were selected based on the Youden index. Sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, and diagnostic accuracy were determined for each variable. All biometric parameters changed significantly from normal eyes to PAC suspects, PAC, and PAC glaucoma; there was a significant stepwise decrease in central corneal thickness, ACD, AD, vitreous depth, and AL, and an increase in LT and LT/AL. Anterior chamber depth and AD had the best diagnostic power for detecting angle closure; the best levels of sensitivity and specificity were obtained with cutoff values of 3.11 mm for ACD and 2.57 mm for AD. Biometric parameters measured by low-coherence interferometry demonstrated a significant and stepwise change among eyes affected with various degrees of angle closure. Although the current classification scheme for angle closure is based on anatomical features, it has excellent correlation with biometric parameters.
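Cutoff selection via the Youden index can be sketched as follows. The data below are synthetic, and the "lower value = positive" convention is an assumption matching the shallower anterior chambers seen in angle closure:

```python
def best_cutoff_youden(values, labels, cutoffs):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1.

    A measurement BELOW the cutoff is called positive (disease = label 1),
    as for anterior chamber depth in angle closure."""
    best_j, best_c = -1.0, None
    for c in cutoffs:
        tp = sum(1 for v, y in zip(values, labels) if v < c and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v >= c and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v >= c and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v < c and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j
```

In a real ROC analysis each observed value is a candidate cutoff; here a small grid suffices to show the mechanics.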
NASA Technical Reports Server (NTRS)
Subramanyam, Guru; VanKeuls, Fred W.; Miranda, Felix A.; Canedy, Chadwick L.; Aggarwal, Sanjeev; Venkatesan, Thirumalai; Ramesh, Ramamoorthy
2000-01-01
The correlation of electric field with critical design parameters such as the insertion loss, frequency tunability, return loss, and bandwidth of conductor/ferroelectric/dielectric microstrip tunable K-band microwave filters is discussed in this work. This work is based primarily on barium strontium titanate (BSTO) ferroelectric thin-film-based tunable microstrip filters for room-temperature applications. Two new parameters, which we believe will simplify the evaluation of ferroelectric thin films for tunable microwave filters, are defined. The first of these, called the sensitivity parameter, is defined as the incremental change in center frequency with incremental change in maximum applied electric field (EPEAK) in the filter. The other, the loss parameter, is defined as the incremental or decremental change in insertion loss of the filter with incremental change in maximum applied electric field. At room temperature, the Au/BSTO/LAO microstrip filters exhibited a sensitivity parameter value between 5 and 15 MHz/cm/kV. The loss parameter varied for the different bias configurations used for electrically tuning the filter, ranging from 0.05 to 0.01 dB/cm/kV at room temperature.
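The two defined parameters are finite-difference slopes with respect to the peak applied field. A minimal sketch under that reading (the sample numbers are invented for illustration; units follow the abstract, MHz and dB per kV/cm):

```python
def sensitivity_parameter(f_mhz_at_zero, f_mhz_at_e, e_peak_kv_per_cm):
    """Incremental change in center frequency per unit of peak applied
    field, in MHz/(kV/cm)."""
    return (f_mhz_at_e - f_mhz_at_zero) / e_peak_kv_per_cm

def loss_parameter(il_db_at_zero, il_db_at_e, e_peak_kv_per_cm):
    """Incremental (or decremental) change in insertion loss per unit of
    peak applied field, in dB/(kV/cm)."""
    return (il_db_at_e - il_db_at_zero) / e_peak_kv_per_cm
```

For example, a filter whose center frequency shifts by 600 MHz under a 60 kV/cm peak field has a sensitivity parameter of 10 MHz/(kV/cm), within the reported 5-15 range.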
3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities
NASA Astrophysics Data System (ADS)
Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir
2016-03-01
Lung boundary image segmentation is important for many tasks, including, for example, the development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date no systematic studies have been performed regarding the range of parameters that give accurate results. The energy function in the graph-cuts algorithm requires 3 suitable parameter settings: K, a large constant for assigning seed points; c, the similarity coefficient for n-links; and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and a value of c much larger than λ upset the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used simple 6-neighborhood systems for n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 ~ 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter and, furthermore, that amongst the range of parameters tested, K=5 and λ=0.5 yielded good results.
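The parameters c and λ enter the graph-cuts energy through the edge weights. A hedged sketch of plausible n-link and t-link weight functions; the Gaussian similarity form and the sigma value are assumptions for illustration, not the paper's exact definitions:

```python
import math

def n_link_weight(intensity_p, intensity_q, c, sigma=30.0):
    """Boundary-term weight between neighboring voxels p and q: large for
    similar intensities (discouraging a cut there), small across strong
    intensity edges. Scaled by the similarity coefficient c."""
    return c * math.exp(-((intensity_p - intensity_q) ** 2) / (2.0 * sigma ** 2))

def t_link_weight(data_cost, lam):
    """Region (data) term weight to a terminal, scaled by the terminal
    coefficient lambda."""
    return lam * data_cost
```

This makes the reported failure mode concrete: if c is much larger than λ, the n-link weights dominate the t-links and the cut ignores the data term.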
NASA Astrophysics Data System (ADS)
Zhou, H.; Liu, W.; Ning, T.
2017-12-01
Land surface actual evapotranspiration plays a key role in the global water and energy cycles. Accurate estimation of evapotranspiration is crucial for understanding the interactions between the land surface and the atmosphere, as well as for managing water resources. The nonlinear advection-aridity approach was formulated by Brutsaert in 2015 to estimate actual evapotranspiration. Subsequently, this approach has been verified, applied and developed by many scholars. The estimation, impact factors, and correlation analysis of the parameter alpha (αe) of this approach have become important aspects of the research. According to the principle of this approach, the potential evapotranspiration (ETpo) (taking αe as 1) and the apparent potential evapotranspiration (ETpm) were calculated using the meteorological data of 123 sites of the Loess Plateau and its surrounding areas. Then the mean spatial values of precipitation (P), ETpm and ETpo for 13 catchments were obtained by a CoKriging interpolation algorithm. Based on the runoff data of the 13 catchments, actual evapotranspiration was calculated using the catchment water balance equation at the hydrological year scale (May to April of the following year), neglecting changes in catchment water storage. Thus, the parameter was estimated, and its relationships with P, ETpm and the aridity index (ETpm/P) were further analyzed. The results showed that the annual parameter values generally ranged from 0.385 to 1.085, with an average value of 0.751 and a standard deviation of 0.113. The mean annual parameter αe value showed distinct spatial characteristics, with lower values in the north and higher values in the south. The annual-scale parameter was linearly related to annual P (R2=0.89) and ETpm (R2=0.49), while it exhibited a power-function relationship with the aridity index (R2=0.83). Since ETpm is itself a variable in the nonlinear advection-aridity approach, and its effect is thus already incorporated, a relationship between precipitation and the parameter (αe=1.0×10-3*P+0.301) was developed. The value of αe in this study is lower than those in the published literature. The reason is unclear at this point and needs further investigation. The preliminary application of the nonlinear advection-aridity approach in the Loess Plateau has shown promising results.
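The reported regression can be evaluated directly. A one-line sketch, assuming P is annual precipitation in mm (consistent with the reported mean αe of 0.751 falling at roughly 450 mm):

```python
def alpha_e_from_precip(p_mm):
    """Empirical linear relation reported for the Loess Plateau catchments:
    alpha_e = 1.0e-3 * P + 0.301, with P the annual precipitation in mm."""
    return 1.0e-3 * p_mm + 0.301
```

Plugging the fitted αe back into Brutsaert's nonlinear advection-aridity formula would then yield the actual-evapotranspiration estimate; that second step is omitted here.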
NASA Astrophysics Data System (ADS)
Sadler, Laurel
2017-05-01
In today's battlefield environments, analysts are inundated with real-time data received from the tactical edge that must be evaluated and used for managing and modifying current missions as well as planning for future missions. This paper describes a framework that facilitates a Value of Information (VoI)-based data analytics tool for information object (IO) analysis in a tactical and command and control (C2) environment, which reduces analyst workload by providing automated or analyst-assisted applications. It allows the analyst to adjust parameters for data matching of the IOs that will be received and provides agents for further filtering or fusing of the incoming data. It allows analyst enhancements, markup, and/or comments to be attached to the incoming IOs, which can then be re-disseminated utilizing the VoI-based dissemination service. The analyst may also adjust the underlying parameters before re-dissemination of an IO, which will subsequently adjust the value of the IO based on this new/additional information, possibly increasing the value from the original. The framework is flexible and extendable, providing an easy-to-use, dynamically changing command and control decision aid that focuses and enhances the analyst workflow.
A novel chaos-based image encryption algorithm using DNA sequence operations
NASA Astrophysics Data System (ADS)
Chai, Xiuli; Chen, Yiran; Broyde, Lucie
2017-01-01
An image encryption algorithm based on a chaotic system and deoxyribonucleic acid (DNA) sequence operations is proposed in this paper. First, the plain image is encoded into a DNA matrix, and then a new wave-based permutation scheme is performed on it. The chaotic sequences produced by a 2D Logistic chaotic map are employed for row circular permutation (RCP) and column circular permutation (CCP). Initial values and parameters of the chaotic system are calculated from the SHA-256 hash of the plain image and the given values. Then, a row-by-row image diffusion method at the DNA level is applied. A key matrix generated from the chaotic map is used to fuse the confused DNA matrix; the initial values and system parameters of the chaotic system are also renewed using the Hamming distance of the plain image. Finally, after decoding the diffused DNA matrix, we obtain the cipher image. The DNA encoding/decoding rules of the plain image and the key matrix are determined by the plain image. Experimental results and security analyses both confirm that the proposed algorithm not only achieves an excellent encryption result but also resists various typical attacks.
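Two building blocks of such schemes can be sketched simply. The snippet below uses a 1D logistic map as a stand-in for the paper's 2D variant, an assumed rule for turning the SHA-256 hash into a seed (the paper's exact combination rule is not given), and DNA encoding rule 1 (A=00, C=01, G=10, T=11):

```python
import hashlib

def logistic_sequence(x0, mu, n, discard=100):
    """1D logistic map x_{k+1} = mu * x_k * (1 - x_k), 0 < x0 < 1, mu <= 4.
    A transient is discarded before collecting n chaotic values."""
    x = x0
    for _ in range(discard):
        x = mu * x * (1.0 - x)
    seq = []
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def seed_from_image(image_bytes, given_value=0.123456):
    """Derive an initial value in (0, 1) from the SHA-256 hash of the plain
    image combined with a user-supplied value (combination rule assumed)."""
    h = hashlib.sha256(image_bytes).digest()
    x0 = (sum(h) / (255.0 * len(h)) + given_value) % 1.0
    return x0 if x0 > 0 else given_value

DNA_RULE_1 = {"00": "A", "01": "C", "10": "G", "11": "T"}

def encode_byte_dna(byte):
    """Encode one 8-bit pixel as four DNA bases under encoding rule 1."""
    bits = format(byte, "08b")
    return "".join(DNA_RULE_1[bits[i:i + 2]] for i in range(0, 8, 2))
```

Because the seed depends on the plain image's hash, a one-bit change in the image yields a completely different chaotic sequence, which is the source of the scheme's plaintext sensitivity.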
Color distribution of a shade guide in the value, chroma, and hue scale.
Ahn, Jin-Soo; Lee, Yong-Keun
2008-07-01
Shade tabs in a shade guide are matched to teeth in the order of value, hue, and chroma; therefore, information on the distribution of shade tabs is essential for clinical application of a shade guide. However, there is limited information on the color distribution as sorted by these 3 parameters of a recently introduced shade guide. The purposes of this study were to determine the color distributions of tabs from a shade guide in the value (CIE L*), chroma (C*ab), and hue scale, and to determine the distribution of step intervals between adjacent tabs by value and chroma. The color of shade tabs (n=29) from a shade guide (Vitapan 3D-Master) was measured to determine the distribution of shade tabs by the value, chroma, hue angle, and CIE a* and b* values. The distribution of the ratios of the value and the chroma of each tab, when compared with the lowest value tab or the lowest chroma tab, was also determined. The data for each color parameter were analyzed by a 3-way ANOVA with the factors of value, chroma, and hue designations of the tabs (alpha=.05). The value, chroma, hue angle, and CIE a* and b* values were influenced by the value, chroma, and hue designations of shade tabs (P<.001). The distributions of the chroma of the tabs within the same value group were relatively ordered, but the values of different value groups overlapped in several instances. Distributions for the CIE a* and b* values reflected the chroma designations in each value group. In the same value group, L, M, and R hue designations corresponded with the manufacturer's stated hue, such as a yellow hue for the L designation and a red hue for the R designation. The distance in the value and chroma scales between adjacent tabs was not uniform. The color distribution of the Vitapan 3D-Master shade guide was more ordered than previously reported color distributions of other, traditional shade guides.
However, the interval in the color parameters between adjacent tabs was not uniform; therefore, shade tabs spaced equally, according to the color parameters, should be studied based on the observer's response data.
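The chroma and hue coordinates used throughout are derived from CIE a* and b*. A minimal sketch of the standard conversion:

```python
import math

def chroma_hue(a_star, b_star):
    """CIELAB chroma C*ab = sqrt(a*^2 + b*^2) and hue angle h_ab
    (degrees, 0-360) from the CIE a* and b* coordinates."""
    chroma = math.hypot(a_star, b_star)
    hue = math.degrees(math.atan2(b_star, a_star)) % 360.0
    return chroma, hue
```

A yellowish tab (large b*, small a*) lands near h_ab = 90 degrees, while a reddish tab (larger a*) has a smaller hue angle, matching the L/R hue designations discussed above.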
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who introduced the Gröbner basis. In this method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since the equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
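Differential elimination can be illustrated on a hypothetical two-step linear cascade x' = -k1*x, y' = k1*x - k2*y (a stand-in for the paper's simple cascade model, whose exact equations are not given here). Eliminating the unobserved x yields the single ODE y'' + (k1 + k2)*y' + k1*k2*y = 0 in the observable y alone, which can be checked numerically:

```python
import math

def cascade_y(t, k1, k2, x0=1.0):
    """Closed-form y(t) for x' = -k1*x, y' = k1*x - k2*y,
    with x(0) = x0, y(0) = 0 and k1 != k2."""
    return x0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))

def eliminated_residual(t, k1, k2, h=1e-4):
    """Residual of the eliminated ODE y'' + (k1 + k2)*y' + k1*k2*y = 0,
    evaluated with central finite differences; should be ~0 up to
    discretization error."""
    y0 = cascade_y(t - h, k1, k2)
    y1 = cascade_y(t, k1, k2)
    y2 = cascade_y(t + h, k1, k2)
    ypp = (y2 - 2.0 * y1 + y0) / h ** 2
    yp = (y2 - y0) / (2.0 * h)
    return ypp + (k1 + k2) * yp + k1 * k2 * y1
```

The point of the elimination is exactly this: the reduced equation involves only the measured variable, so it can serve as an objective function for parameter estimation without simulating the hidden state.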
NASA Astrophysics Data System (ADS)
Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.
2009-08-01
Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.
TU-FG-201-09: Predicting Accelerator Dysfunction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Able, C; Nguyen, C; Baydush, A
Purpose: To develop an integrated statistical process control (SPC) framework using digital performance and component data accumulated within the accelerator system that can detect dysfunction prior to unscheduled downtime. Methods: Seven digital accelerators were monitored for 12 to 18 months. The accelerators were operated in a 'run to failure' mode, with the individual institutions determining when service would be initiated. Institutions were required to submit detailed service reports. Trajectory and text log files resulting from a robust daily VMAT QA delivery were decoded and evaluated using Individual and Moving Range (I/MR) control charts. The SPC evaluation was presented in a customized dashboard interface that allows the user to review 525 monitored parameters (480 MLC parameters). Chart limits were calculated using a hybrid technique that includes the standard SPC 3σ limits and an empirical factor based on the parameter/system specification. The individual (I) grand mean values and control limit ranges of the I/MR charts of all accelerators were compared using statistical (ranked analysis of variance (ANOVA)) and graphical analyses to determine the consistency of operating parameters. Results: When an alarm or warning was directly connected to field service, process control charts predicted dysfunction consistently on beam generation related parameters (BGP): RF driver voltage, gun grid voltage, and forward power (W); beam uniformity parameters: angle and position steering coil currents; and the gantry position accuracy parameter: cross-correlation max-value. Control charts for individual MLC cross-correlation max-value/position detected 50% to 60% of MLCs serviced prior to dysfunction or failure. In general, non-random changes were detected 5 to 80 days prior to a service intervention. The ANOVA comparison of BGP determined that each accelerator parameter operated at a distinct value. Conclusion: The SPC framework shows promise.
Long-term monitoring coordinated with service will be required to definitively determine the effectiveness of the model. Varian Medical Systems, Inc. provided funding in support of the research presented.
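The I/MR chart construction used above can be sketched as follows. This is a minimal illustration of the standard 3σ portion of the limits only; the empirical specification-based factor from the hybrid technique is omitted, and the function name is hypothetical:

```python
def imr_limits(samples):
    """Individuals / Moving-Range (I/MR) control-chart limits.

    Implements only the standard 3-sigma SPC portion of the hybrid
    limits described above; the empirical spec-based factor is omitted."""
    n = len(samples)
    mean = sum(samples) / n
    # Moving ranges between consecutive observations
    mrs = [abs(samples[i] - samples[i - 1]) for i in range(1, n)]
    mr_bar = sum(mrs) / len(mrs)
    sigma_hat = mr_bar / 1.128          # d2 constant for span-2 moving ranges
    i_chart = (mean - 3 * sigma_hat, mean + 3 * sigma_hat)
    mr_chart = (0.0, 3.267 * mr_bar)    # D4 constant; MR-chart LCL is 0
    return i_chart, mr_chart
```

A monitored parameter would raise a warning when a daily value falls outside the I-chart limits or a day-to-day change exceeds the MR-chart upper limit.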
NASA Astrophysics Data System (ADS)
Chen, Z.; Chen, J.; Zhang, S.; Zheng, X.; Shangguan, W.
2016-12-01
A global carbon assimilation system (GCAS) that assimilates ground-based atmospheric CO2 data is used to estimate several key parameters in a terrestrial ecosystem model for the purpose of improving carbon cycle simulation. The optimized parameters are the leaf maximum carboxylation rate at 25 °C (Vmax25), the temperature sensitivity of ecosystem respiration (Q10), and the soil carbon pool size. The optimization is performed at the global scale at 1° resolution for the period from 2002 to 2008. Optimized multi-year average Vmax25 values range from 49 to 51 μmol m-2 s-1 over most regions of the world. Vegetation from tropical zones has relatively lower values than vegetation in temperate regions. Optimized multi-year average Q10 values varied from 1.95 to 2.05 over most regions of the world. Relatively high values of Q10 are derived over high- and mid-latitude regions. Both Vmax25 and Q10 exhibit pronounced seasonal variations at mid-high latitudes. The maximum in Vmax25 occurs during the growing season, while the minimum appears during non-growing seasons. Q10 values decrease with increasing temperature. The seasonal variabilities of Vmax25 and Q10 are larger at higher latitudes, with tropical and low-latitude regions showing little seasonal variability.
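The Q10 temperature sensitivity estimated above enters ecosystem models through a standard exponential relation, R = R_ref * Q10^((T - T_ref)/10). A minimal sketch of that relation (the function name and the 25 °C reference temperature are assumptions for illustration, chosen to match Vmax25's reference):

```python
def ecosystem_respiration(r_ref, q10, temp_c, t_ref=25.0):
    """Standard Q10 relation: respiration scales by a factor of q10
    for every 10 degC of warming relative to the reference temperature."""
    return r_ref * q10 ** ((temp_c - t_ref) / 10.0)
```

With q10 = 2, a 10 °C rise above the reference temperature doubles the respiration rate.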
Calibration of sea ice dynamic parameters in an ocean-sea ice model using an ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Massonnet, F.; Goosse, H.; Fichefet, T.; Counillon, F.
2014-07-01
The choice of parameter values is crucial in the course of sea ice model development, since parameters largely affect the modeled mean sea ice state. Manual tuning of parameters will soon become impractical, as sea ice models will likely include more parameters to calibrate, leading to an exponential increase in the number of possible combinations to test. Objective and automatic methods for parameter calibration are thus progressively called on to replace the traditional heuristic, "trial-and-error" recipes. Here a method for calibration of parameters based on the ensemble Kalman filter is implemented, tested and validated in the ocean-sea ice model NEMO-LIM3. Three dynamic parameters are calibrated: the ice strength parameter P*, the ocean-sea ice drag parameter Cw, and the atmosphere-sea ice drag parameter Ca. In twin, perfect-model experiments, the default parameter values are retrieved within 1 year of simulation. Using 2007-2012 real sea ice drift data, the calibration of the ice strength parameter P* and the oceanic drag parameter Cw clearly improves the Arctic sea ice drift properties. It is found that the estimation of the atmospheric drag Ca is not necessary if P* and Cw are already estimated. The large reduction in the sea ice speed bias with calibrated parameters comes with a slight overestimation of the winter sea ice areal export through Fram Strait and a slight improvement in the sea ice thickness distribution. Overall, the estimation of parameters with the ensemble Kalman filter represents an encouraging alternative to manual tuning for ocean-sea ice models.
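The ensemble Kalman filter parameter update can be illustrated with a toy scalar analogue. This is a sketch of the perturbed-observation EnKF analysis step for a single parameter and a single observation, with hypothetical names; it is not NEMO-LIM3 code:

```python
import numpy as np

def enkf_parameter_update(params, predicted_obs, obs, obs_var, rng):
    """One EnKF analysis step for a scalar parameter / scalar observation:
    shift each ensemble member by the Kalman gain times its innovation."""
    p_mean = params.mean()
    h_mean = predicted_obs.mean()
    cov_ph = ((params - p_mean) * (predicted_obs - h_mean)).mean()
    gain = cov_ph / (predicted_obs.var() + obs_var)
    # Perturbed observations keep the analysis ensemble spread consistent
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=params.shape)
    return params + gain * (perturbed - predicted_obs)
```

In a twin experiment with a linear observation operator h(p) = 2p and true parameter p = 3, a few analysis cycles pull the ensemble mean to the true value, mirroring how the default parameter values were retrieved in the perfect-model experiments above.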
Applicability of the Effective-Medium Approximation to Heterogeneous Aerosol Particles.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Liu, Li
2016-01-01
The effective-medium approximation (EMA) is based on the assumption that a heterogeneous particle can have a homogeneous counterpart possessing similar scattering and absorption properties. We analyze the numerical accuracy of the EMA by comparing superposition T-matrix computations for spherical aerosol particles filled with numerous randomly distributed small inclusions and Lorenz-Mie computations based on the Maxwell-Garnett mixing rule. We verify numerically that the EMA can indeed be realized for inclusion size parameters smaller than a threshold value. The threshold size parameter depends on the refractive-index contrast between the host and inclusion materials and quite often does not exceed several tenths, especially in calculations of the scattering matrix and the absorption cross section. As the inclusion size parameter approaches the threshold value, the scattering-matrix errors of the EMA start to grow with increasing host size parameter and/or number of inclusions. We confirm, in particular, the existence of the effective-medium regime in the important case of dust aerosols with hematite or air-bubble inclusions, although the large refractive-index contrast then necessitates inclusion size parameters of the order of a few tenths. Irrespective of the highly restricted conditions of applicability of the EMA, our results provide further evidence that the effective-medium regime must be a direct corollary of the macroscopic Maxwell equations under specific assumptions.
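The Maxwell-Garnett mixing rule used for the Lorenz-Mie comparison has a simple closed form in terms of the host and inclusion permittivities and the inclusion volume fraction. A minimal sketch (function names are illustrative):

```python
def maxwell_garnett(eps_host, eps_incl, f):
    """Maxwell-Garnett effective permittivity for a host medium
    containing a volume fraction f of small spherical inclusions."""
    num = eps_incl + 2.0 * eps_host + 2.0 * f * (eps_incl - eps_host)
    den = eps_incl + 2.0 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

def effective_index(m_host, m_incl, f):
    """Effective refractive index, using eps = m**2 (complex m allowed)."""
    return complex(maxwell_garnett(m_host ** 2, m_incl ** 2, f)) ** 0.5
```

The rule interpolates correctly between the pure-host (f = 0) and pure-inclusion (f = 1) limits, and accepts complex refractive indices for absorbing materials such as hematite.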
What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.
2012-12-01
A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.
Developing a probability-based model of aquifer vulnerability in an agricultural region
NASA Astrophysics Data System (ADS)
Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei
2013-04-01
Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging to determine various risk categories of contamination potential based on estimated vulnerability indexes. Categories and ratings of six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting a maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment provided insight into the propagation of parameter uncertainty arising from limited observation data. To examine the pollutant prediction capacity of the developed probability-based DRASTIC model, the medium, high, and very high risk categories of contamination potential were compared with observed nitrate-N exceeding 0.5 mg/L, which indicates anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and of characterizing parameter uncertainty via the probability estimation processes.
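The DRASTIC vulnerability index underlying the model above is a weighted sum of parameter ratings. A sketch using the classical seven-parameter literature weights (the study's probability-based variant uses six parameters and assigns ratings probabilistically; the weights and names here are the standard textbook values, shown only for illustration):

```python
# Classical DRASTIC weights (standard literature values, not
# necessarily those used in the probability-based model above)
DRASTIC_WEIGHTS = {
    "depth_to_water": 5,
    "net_recharge": 4,
    "aquifer_media": 3,
    "soil_media": 2,
    "topography": 1,
    "impact_vadose_zone": 5,
    "hydraulic_conductivity": 3,
}

def drastic_index(ratings):
    """Vulnerability index = sum of weight * rating (ratings typically 1-10);
    higher values indicate greater aquifer vulnerability."""
    return sum(DRASTIC_WEIGHTS[name] * r for name, r in ratings.items())
```

The probability-based variant replaces each fixed rating with a rating derived from indicator kriging, either the most probable category or an expected value.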
NASA Astrophysics Data System (ADS)
Kumar, Shashi; Khati, Unmesh G.; Chandola, Shreya; Agrawal, Shefali; Kushwaha, Satya P. S.
2017-08-01
The regulation of the carbon cycle is a critical ecosystem service provided by forests globally. It is, therefore, necessary to have robust techniques for speedy assessment of forest biophysical parameters at the landscape level. It is arduous and time-consuming to monitor the status of vast forest landscapes using traditional field methods. Remote sensing and GIS techniques are efficient tools that can monitor the health of forests regularly. Biomass estimation is a key parameter in the assessment of forest health. Polarimetric SAR (PolSAR) remote sensing has already shown its potential for forest biophysical parameter retrieval. The current research work focuses on the retrieval of forest biophysical parameters of tropical deciduous forest, using fully polarimetric spaceborne C-band data with Polarimetric SAR Interferometry (PolInSAR) techniques. The PolSAR based Interferometric Water Cloud Model (IWCM) has been used to estimate aboveground biomass (AGB). Input parameters to the IWCM have been extracted from decomposition modeling of the SAR data as well as PolInSAR coherence estimation. The forest tree height retrieval utilized a PolInSAR coherence based modeling approach. Two techniques for forest height estimation, Coherence Amplitude Inversion (CAI) and Three Stage Inversion (TSI), are discussed, compared and validated. These techniques allow estimation of forest stand height and true ground topography. The accuracy of the estimated forest height is assessed using ground-based measurements. The PolInSAR based forest height models showed weaknesses in discriminating forest vegetation, and as a result height values were also obtained in river channels and plain areas. Overestimation of forest height was also noticed in several patches of the forest. To overcome this problem, a coherence- and backscatter-based threshold technique is introduced for forest area identification, suppressing spurious height estimates in non-forested regions.
IWCM based modeling for forest AGB retrieval showed an R2 value of 0.5, an RMSE of 62.73 t ha-1, and a percent accuracy of 51%. TSI based PolInSAR inversion modeling showed the most accurate result for forest height estimation. The correlation between the field-measured forest height and the tree height estimated using the TSI technique is 62%, with an average accuracy of 91.56% and an RMSE of 2.28 m. The study suggests that the PolInSAR coherence based modeling approach has significant potential for retrieval of forest biophysical parameters.
NASA Astrophysics Data System (ADS)
Li, Chao-Ying; Liu, Shi-Fei; Fu, Jin-Xian
2015-11-01
High-order perturbation formulas for a 3d9 ion in rhombically elongated octahedra were applied to calculate the electron paramagnetic resonance (EPR) parameters (the g factors, gi, and the hyperfine structure constants Ai, i = x, y, z) of the rhombic Cu2+ center in CoNH4PO4·6H2O. In the calculations, the required crystal-field parameters are estimated from the superposition model, which enables correlation of the crystal-field parameters, and hence the EPR parameters, with the local structure of the rhombic Cu2+ center. Based on the calculations, the ligand octahedron (i.e., the [Cu(H2O)6]2+ cluster) is found to experience local bond-length variations ΔZ (≈0.213 Å) and δr (≈0.132 Å) along the axial and perpendicular directions due to the Jahn-Teller effect. Theoretical EPR parameters based on the above local structure are in good agreement with the observed values; the results are discussed.
Full-envelope aerodynamic modeling of the Harrier aircraft
NASA Technical Reports Server (NTRS)
Mcnally, B. David
1986-01-01
A project to identify a full-envelope model of the YAV-8B Harrier using flight-test and parameter identification techniques is described. As part of the research in advanced control and display concepts for V/STOL aircraft, a full-envelope aerodynamic model of the Harrier is identified, using mathematical model structures and parameter identification methods. A global-polynomial model structure is also used as a basis for the identification of the YAV-8B aerodynamic model. State estimation methods are used to ensure flight data consistency prior to parameter identification. Equation-error methods are used to identify model parameters. A fixed-base simulator is used extensively to develop flight test procedures and to validate parameter identification software. Using simple flight maneuvers, a simulated data set was created covering the YAV-8B flight envelope from about 0.3 to 0.7 Mach and about -5 to 15 deg angle of attack. A singular value decomposition implementation of the equation-error approach produced good parameter estimates based on this simulated data set.
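The singular value decomposition implementation of the equation-error approach amounts to solving a linear least-squares problem through a truncated pseudo-inverse, which is numerically robust when the regressor matrix is ill-conditioned. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def svd_least_squares(X, y, rtol=1e-10):
    """Equation-error parameter estimate via the SVD pseudo-inverse.
    Singular values below rtol * (largest singular value) are truncated
    to guard against ill-conditioned regressor matrices."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_inv = np.where(s > rtol * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ y))
```

Each row of X holds the model regressors evaluated at one flight-data sample, y holds the corresponding measured accelerations or moments, and the returned vector holds the aerodynamic parameter estimates.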
Upper limb load as a function of repetitive task parameters: part 1--a model of upper limb load.
Roman-Liu, Danuta
2005-01-01
The aim of the study was to develop a theoretical indicator of upper limb musculoskeletal load based on repetitive task parameters. For this purpose, the dimensionless parameter Integrated Cycle Load (ICL) was adopted; it expresses the upper limb load which occurs during one cycle. The indicator is based on a model of a repetitive task, which consists of a model of the upper limb, a model of basic types of upper limb forces, and a model of repetitive task parameters such as the length of the cycle, the lengths of the periods of the cycle, and the external force exerted during each of the periods of the cycle. Calculations of the ICL parameter were performed for 12 different variants of external load characterised by different values of repetitive task parameters. A comparison of ICL, which expresses external load, with a physiological indicator of upper limb load is presented in Part 2 of the paper.
Estimated value of insurance premium due to Citarum River flood by using Bayesian method
NASA Astrophysics Data System (ADS)
Sukono; Aisah, I.; Tampubolon, Y. R. H.; Napitupulu, H.; Supian, S.; Subiyanto; Sidi, P.
2018-03-01
Citarum river flood in South Bandung, West Java, Indonesia, happens almost every year. It causes property damage, producing economic loss. The risk of loss can be mitigated by joining a flood insurance program. In this paper, we discuss the estimated value of insurance premiums due to Citarum river flood using the Bayesian method. It is assumed that the flood loss risk data follow a Pareto distribution with a heavy right tail. The estimation of the distribution model parameters is done using the Bayesian method. First, parameter estimation is done under the assumption that the prior comes from the Gamma distribution family, while the observation data follow the Pareto distribution. Second, flood loss data are simulated based on the probability of damage in each flood-affected area. The analysis shows that the estimated premium values based on the pure premium principle are as follows: for a loss value of IDR 629.65 million, a premium of IDR 338.63 million; for a loss of IDR 584.30 million, a premium of IDR 314.24 million; and for a loss value of IDR 574.53 million, a premium of IDR 308.95 million. The premium estimator can be used as a reference for setting a reasonable premium, so as neither to burden the insured nor to cause loss to the insurer.
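Under the pure premium principle, the premium equals the expected loss. A Monte Carlo sketch for Pareto-distributed losses via inverse-transform sampling (the Bayesian posterior estimation of the shape parameter is not reproduced here; alpha is taken as given, and all names are illustrative):

```python
import random

def pure_premium_pareto(alpha, x_m, n=100_000, seed=1):
    """Pure-premium estimate E[loss] for Pareto(alpha, x_m) losses.
    Uses the inverse CDF x = x_m * (1 - u)**(-1/alpha); requires
    alpha > 1 for the mean to exist."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        total += x_m * (1.0 - u) ** (-1.0 / alpha)
    return total / n
```

For alpha > 1 the analytic pure premium is alpha * x_m / (alpha - 1), which the simulation approximates; the heavy right tail means the estimate converges slowly as alpha approaches 1.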
Wei, Zi-min; Wang, Xing-lei; Pan, Hong-wei; Zhao, Yue; Xie, Xin-yu; Zhao, Yi; Zhang, Lin-xue; Zhao, Tao-zhi
2015-10-01
The characteristics of the fluorescence spectra of dissolved organic matter (DOM) derived from composting are one of the key ways to assess compost maturity. However, existing methods mainly focus on a qualitative description of the humification degree of compost. In this paper, projection pursuit classification (PPC) was conducted to quantitatively assess the grades of compost maturity based on the characteristics of the fluorescence spectra of DOM. Composting of eight organic wastes (chicken manure, swine manure, kitchen waste, lawn waste, fruit and vegetable waste, straw, green waste, and municipal solid waste) was conducted, and the germination percentage (GI) and fluorescence spectra of DOM were measured during composting. Statistical analysis of all fluorescence parameters of DOM indicated that I436/I383 (a ratio between the fluorescence intensities at 436 and 383 nm in excitation spectra), FLR (an area ratio between the fulvic-like region from 308 to 363 nm and the total region in emission spectra), P(HA/Pro) (a regional integration ratio between the humic acid-like region and the protein-like region in excitation emission matrix (EEM) spectra), A4/A1 (an area ratio of the last quarter to the first quarter in emission spectra), and r(A,C) (a ratio between the fluorescence intensities of peak A and peak C in EEM spectra) were correlated with each other (p < 0.01), suggesting that these fluorescence parameters could serve as a comprehensive evaluation index system for PPC. Subsequently, four grades of compost maturity, comprising the best degree of maturity (I, GI > 80%), a better degree of maturity (II, 60% < GI < 80%), maturity (III, 50% < GI < 60%), and immaturity (IV, GI < 50%), were defined according to the GI value during composting. The corresponding fluorescence parameter values were calculated at each grade of compost maturity. Then the projection values were calculated based on PPC considering the above fluorescence parameter values.
The projection value was 2.01 - 2.22 for grade I, 1.21 - 2.0 for grade II, 0.57 - 1.2 for grade III, and 0.10 - 0.56 for grade IV. Model validation was then carried out with compost samples; the results indicated that the simulated values agreed with the observed values, and the accuracy of PPC was 75% for the four grades of maturity and 100% for distinguishing maturity from immaturity, suggesting that PPC could meet the needs of compost maturity assessment.
NASA Astrophysics Data System (ADS)
da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho
2018-04-01
A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying the most influential parameters facilitates establishing the best values for model parameters, with useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records and 17 fitting parameters, including growth and stress parameters, comparisons were made of model performance by altering one parameter value at a time relative to the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index changes markedly under upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters showed the greatest sensitivity, depending on the suitability categories of each species in the study. The results enhance user understanding of which climatic factors had a greater impact on both species distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed higher or lower relative to the best-fit values. Thus, sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables to which the model is most sensitive.
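The one-parameter-at-a-time perturbation scheme described above can be sketched generically. The model and parameter names below are placeholders, not CLIMEX itself; the perturbation is a fractional change around the best-fit value:

```python
def oat_sensitivity(model, best_fit, delta=0.1):
    """One-at-a-time sensitivity screening: perturb each parameter up
    and down by a fraction `delta` of its best-fit value, keep all
    others fixed, and record the largest change in model output."""
    base = model(best_fit)
    effects = {}
    for name, value in best_fit.items():
        hi = dict(best_fit, **{name: value * (1.0 + delta)})
        lo = dict(best_fit, **{name: value * (1.0 - delta)})
        effects[name] = max(abs(model(hi) - base), abs(model(lo) - base))
    return effects
```

Ranking the returned effects identifies the "sensitive" parameters; in the study these effects were further interpreted per suitability category and per region.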
Optimization of the blade trailing edge geometric parameters for a small scale ORC turbine
NASA Astrophysics Data System (ADS)
Zhang, L.; Zhuge, W. L.; Peng, J.; Liu, S. J.; Zhang, Y. J.
2013-12-01
In general, the method proposed by Whitfield and Baines is adopted for turbine preliminary design. In this design procedure for the turbine blade trailing edge geometry, two assumptions (ideal gas and zero discharge swirl) and two empirical values (WR and γ) are used to obtain the three blade trailing edge geometric parameters: the relative exit flow angle β6, the exit tip radius R6t, and the hub radius R6h, for the purpose of maximizing the rotor total-to-static isentropic efficiency. The method above is based on experience and test results using air as the working fluid, so it neither provides a mathematically optimal solution to guide the optimization of the geometric parameters nor considers the real-gas effects of the organic working fluid, which must be taken into account in the ORC turbine design procedure. In this paper, a new preliminary design and optimization method is established for the purpose of reducing the exit kinetic energy loss to improve the turbine efficiency ηts, and the blade trailing edge geometric parameters of a small scale ORC turbine with working fluid R123 are optimized based on this method. The mathematically optimal solution minimizing the exit kinetic energy is deduced, which can be used to design and optimize the exit shroud/hub radius and exit blade angle. Then, the influence of the blade trailing edge geometric parameters on the turbine efficiency ηts is analysed, and the optimal working ranges of these parameters are recommended for working fluid R123. This method is used to reduce the exit kinetic energy loss of an existing ORC turbine from 11.7% to 7%, which indicates the effectiveness of the method.
However, the internal passage loss increases from 7.9% to 9.4%, so the influence of the geometric parameters on the internal passage loss can only be accounted for by giving empirical ranges for these parameters, such as the recommended ranges of 0.3 to 0.4 for γ and 0.5 to 0.6 for τ.
Wang, Chunhao; Yin, Fang-Fang; Kirkpatrick, John P; Chang, Zheng
2017-08-01
To investigate the feasibility of using undersampled k-space data and an iterative image reconstruction method with a total generalized variation penalty in the quantitative pharmacokinetic analysis of clinical brain dynamic contrast-enhanced magnetic resonance imaging, eight brain dynamic contrast-enhanced magnetic resonance imaging scans were retrospectively studied. Two k-space sparse sampling strategies were designed to achieve a simulated image acquisition acceleration factor of 4: (1) a golden ratio-optimized 32-ray radial sampling profile and (2) a Cartesian-based random sampling profile with spatiotemporal-regularized sampling density constraints. The undersampled data were reconstructed to yield images using the investigated reconstruction technique. In quantitative pharmacokinetic analysis on a voxel-by-voxel basis, the rate constant Ktrans in the extended Tofts model and the blood flow FB and blood volume VB from the 2-compartment exchange model were analyzed. Finally, the quantitative pharmacokinetic parameters calculated from the undersampled data were compared with the corresponding values calculated from the fully sampled data. To quantify each parameter's accuracy when calculated using the undersampled data, the error in volume mean, the total relative error, and the cross-correlation were calculated. The pharmacokinetic parameter maps generated from the undersampled data appeared comparable to those generated from the original full sampling data. Most derived error in volume mean values in the region of interest were about 5% or lower, and the average error in volume mean of all parameter maps generated through either sampling strategy was about 3.54%. The average total relative error of all parameter maps in the region of interest was about 0.115, and the average cross-correlation of all parameter maps in the region of interest was about 0.962.
All investigated pharmacokinetic parameters showed no significant differences between the results from the original data and the reduced sampling data. With sparsely sampled k-space data simulating a fourfold-accelerated acquisition, the investigated dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic parameters can be accurately estimated using the total generalized variation-based iterative image reconstruction method, supporting reliable clinical application.
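The extended Tofts model named above has a standard convolution form, Ct(t) = vp·Cp(t) + Ktrans · ∫ Cp(u)·exp(-kep(t-u)) du with kep = Ktrans/ve. A discretised sketch using trapezoidal integration (names and the discretisation are illustrative, not the study's implementation):

```python
import math

def extended_tofts(t, cp, ktrans, ve, vp):
    """Tissue concentration under the extended Tofts model:
    Ct(t) = vp*Cp(t) + Ktrans * int_0^t Cp(u) exp(-kep*(t-u)) du,
    kep = Ktrans / ve, evaluated with trapezoidal integration."""
    kep = ktrans / ve
    ct = []
    for i, ti in enumerate(t):
        integral = 0.0
        for j in range(1, i + 1):
            dt = t[j] - t[j - 1]
            f0 = cp[j - 1] * math.exp(-kep * (ti - t[j - 1]))
            f1 = cp[j] * math.exp(-kep * (ti - t[j]))
            integral += 0.5 * (f0 + f1) * dt
        ct.append(vp * cp[i] + ktrans * integral)
    return ct
```

Fitting Ktrans, ve, and vp voxel-by-voxel against the measured concentration curve yields the parameter maps compared in the study.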
Quantitative evaluation of the lumbosacral sagittal alignment in degenerative lumbar spinal stenosis
Makirov, Serik K.; Jahaf, Mohammed T.; Nikulina, Anastasia A.
2015-01-01
Goal of the study: This study intends to develop a method of quantitative sagittal balance parameter assessment based on a geometrical model of the lumbar spine and sacrum. Methods: One hundred eight patients were divided into 2 groups. The experimental group included 59 patients with lumbar spinal stenosis at the L1-5 level. Forty-nine healthy volunteers without a history of any lumbar spine pathology were included in the control group. All patients were examined with supine MRI. The lumbar lordosis was modeled as a circular arc and described by either anatomical (lumbar lordosis angle) or geometrical (chord length, circle segment height, central angle, circle radius) parameters. Moreover, 2 sacral parameters were assessed for all patients: the sacral slope and the sacral deviation angle, which characterize the disposition of the sacrum along the horizontal and vertical axes, respectively. Results: Significant correlation was observed between the anatomical and geometrical lumbosacral parameters. Significant differences between the stenosis group and the control group were observed in the values of the central angle and sacral deviation parameters. We propose additional parameters: a lumbar coefficient Kl, as the ratio of the lordosis angle to the segmental angle; a sacral coefficient Ks, as the ratio of the sacral tilt (ST) to the sacral deviation (SD) angle; and the modulus of the mathematical difference between the sacral and lumbar coefficients, used to determine the lumbosacral balance (LSB). Statistically significant differences between the main and control groups were obtained for all described coefficients (p = 0.006, p = 0.0001, p = 0.0001, respectively). The median LSB value was 0.18 and 0.34 for the stenosis and control groups, respectively. Conclusion: Based on these results we believe that spinal stenosis is associated with an acquired deformity that is measurable by the described parameters.
It is possible that spinal stenosis occurs in patients with an LSB of 0.2 or less, so this value may be predictive of its development. This may suggest that spinal stenosis is more likely to occur in patients with spinal curvature of this type because of an abnormal distribution of spinal loads. This fact may have prognostic significance for the development of vertebral column disease and for the evaluation of treatment results. PMID:26767160
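The coefficients defined above reduce to simple ratios. A sketch (the argument order and the requirement of consistent angle units are assumptions based on the abstract):

```python
def lumbosacral_balance(lordosis_angle, segmental_angle,
                        sacral_tilt, sacral_deviation):
    """LSB = |Ks - Kl|, where Kl = lordosis angle / segmental angle
    and Ks = sacral tilt / sacral deviation angle. All angles must be
    in the same units (e.g. degrees)."""
    kl = lordosis_angle / segmental_angle
    ks = sacral_tilt / sacral_deviation
    return abs(ks - kl)
```

Per the conclusion above, an LSB of about 0.2 or less would flag a curvature pattern possibly associated with stenosis development.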
Safety assessment of a shallow foundation using the random finite element method
NASA Astrophysics Data System (ADS)
Zaskórski, Łukasz; Puła, Wojciech
2015-04-01
The complex structure of soil and its random character make soil modeling a cumbersome task. The heterogeneity of soil has to be considered even within a homogeneous soil layer; therefore, estimating the shear strength parameters of soil for the purposes of a geotechnical analysis causes many problems. The applicable standards (Eurocode 7) present no explicit method for evaluating characteristic values of soil parameters, only general guidelines on how these values should be estimated. Hence, many approaches to assessing characteristic values of soil parameters are presented in the literature and can be applied in practice. In this paper, the reliability assessment of a shallow strip footing was conducted using a reliability index β. Several approaches to estimating characteristic values of soil properties were compared by evaluating the values of the reliability index β achievable by applying each of them. The method of Orr and Breysse, Duncan's method, Schneider's method, Schneider's method accounting for the influence of fluctuation scales, and the method included in Eurocode 7 were examined. Design values of the bearing capacity based on these approaches were referred to the stochastic bearing capacity estimated by the random finite element method (RFEM). Design values of the bearing capacity were computed for various widths and depths of the foundation in conjunction with the design approaches (DA) defined in Eurocode. RFEM was introduced by Griffiths and Fenton (1993); it combines the deterministic finite element method, random field theory, and Monte Carlo simulations. Random field theory makes it possible to account for the random character of soil parameters within a homogeneous soil layer: a soil property is treated as a separate random variable in every element of the finite element mesh, with a proper correlation structure between points of the given area.
RFEM was applied to determine which theoretical probability distribution best fits the empirical probability distribution of the bearing capacity, based on 3000 realizations. The assessed probability distribution was applied to compute design values of the bearing capacity and the related reliability indices β. The analyses were carried out for a cohesive soil; hence, the friction angle and the cohesion were defined as random parameters and characterized by two-dimensional random fields. The friction angle was described by a bounded distribution, as it varies within a limited range, while a lognormal distribution was applied for the cohesion. The other properties (Young's modulus, Poisson's ratio, and unit weight) were assumed to be deterministic, because they have negligible influence on the stochastic bearing capacity. Griffiths D. V., & Fenton G. A. (1993). Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6), 577-587.
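Given Monte Carlo bearing-capacity realizations like those above, a reliability index β can be obtained from a fitted distribution. A sketch assuming the capacity is lognormally distributed (one plausible candidate fit; the function name and workflow are illustrative, not the paper's exact procedure):

```python
import math
from statistics import NormalDist

def reliability_index(capacity_samples, design_load):
    """Reliability index beta from Monte Carlo bearing-capacity
    realizations, assuming a lognormal capacity distribution:
    fit the log-capacities, compute p_f = P(capacity < design_load),
    and return beta = -Phi^{-1}(p_f)."""
    logs = [math.log(q) for q in capacity_samples]
    n = len(logs)
    mu = sum(logs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in logs) / (n - 1))
    p_f = NormalDist(mu, sd).cdf(math.log(design_load))
    return -NormalDist().inv_cdf(p_f)
```

A larger β corresponds to a smaller failure probability; design approaches yielding higher β for the same footing geometry are, in this sense, more conservative.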
A refined 'standard' thermal model for asteroids based on observations of 1 Ceres and 2 Pallas
NASA Technical Reports Server (NTRS)
Lebofsky, Larry A.; Sykes, Mark V.; Tedesco, Edward F.; Veeder, Glenn J.; Matson, Dennis L.
1986-01-01
An analysis of ground-based thermal IR observations of 1 Ceres and 2 Pallas in light of their recently determined occultation diameters and small amplitude light curves has yielded a new value for the IR beaming parameter employed in the standard asteroid thermal emission model which is significantly lower than the previous one. When applied to the reduction of thermal IR observations of other asteroids, this new value is expected to yield model diameters closer to actual values. The present formulation incorporates the IAU magnitude convention for asteroids that employs zero-phase magnitudes, including the opposition effect.
Bertleff, Marco; Domsch, Sebastian; Weingärtner, Sebastian; Zapp, Jascha; O'Brien, Kieran; Barth, Markus; Schad, Lothar R
2017-12-01
Artificial neural networks (ANNs) were used for voxel-wise parameter estimation with the combined intravoxel incoherent motion (IVIM) and kurtosis model facilitating robust diffusion parameter mapping in the human brain. The proposed ANN approach was compared with conventional least-squares regression (LSR) and state-of-the-art multi-step fitting (LSR-MS) in Monte-Carlo simulations and in vivo in terms of estimation accuracy and precision, number of outliers and sensitivity in the distinction between grey (GM) and white (WM) matter. Both the proposed ANN approach and LSR-MS yielded visually increased parameter map quality. Estimations of all parameters (perfusion fraction f, diffusion coefficient D, pseudo-diffusion coefficient D*, kurtosis K) were in good agreement with the literature using ANN, whereas LSR-MS resulted in D* overestimation and LSR yielded increased values for f and D*, as well as decreased values for K. Using ANN, outliers were reduced for the parameters f (ANN, 1%; LSR-MS, 19%; LSR, 8%), D* (ANN, 21%; LSR-MS, 25%; LSR, 23%) and K (ANN, 0%; LSR-MS, 0%; LSR, 15%). Moreover, ANN enabled significant distinction between GM and WM based on all parameters, whereas LSR facilitated this distinction only based on D and LSR-MS on f, D and K. Overall, the proposed ANN approach was found to be superior to conventional LSR, posing a powerful alternative to the state-of-the-art method LSR-MS with several advantages in the estimation of IVIM-kurtosis parameters, which might facilitate increased applicability of enhanced diffusion models at clinical scan times. Copyright © 2017 John Wiley & Sons, Ltd.
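The combined IVIM-kurtosis signal model underlying all three fitting approaches is S(b)/S0 = f·exp(−b·D*) + (1−f)·exp(−b·D + (b·D)²·K/6). A minimal least-squares fit of this model (the conventional LSR baseline; the ANN and multi-step LSR-MS pipelines are beyond a sketch) might look as follows, with synthetic noiseless data and plausible, illustrative brain parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_kurtosis(b, f, D, Dstar, K):
    # Combined IVIM-kurtosis signal model, normalized so that S0 = 1
    return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D + (b * D) ** 2 * K / 6)

# Illustrative b-value scheme (s/mm^2) and ground-truth parameters
b = np.array([0, 10, 20, 40, 80, 150, 300, 500, 800, 1200, 1600, 2000.0])
true = dict(f=0.10, D=0.8e-3, Dstar=10e-3, K=1.0)
signal = ivim_kurtosis(b, **true)               # noiseless synthetic signal

# Conventional least-squares regression with physically motivated bounds
p0 = [0.05, 1e-3, 5e-3, 0.5]
bounds = ([0, 1e-4, 1e-3, 0], [0.5, 3e-3, 5e-2, 3])
popt, _ = curve_fit(ivim_kurtosis, b, signal, p0=p0, bounds=bounds)
f_fit, D_fit, Dstar_fit, K_fit = popt
```

With noise added, this direct fit is exactly where the outliers and D* overestimation reported in the abstract arise, which is what motivates the ANN alternative.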
New Kohn-Sham density functional based on microscopic nuclear and neutron matter equations of state
NASA Astrophysics Data System (ADS)
Baldo, M.; Robledo, L. M.; Schuck, P.; Viñas, X.
2013-06-01
A new version of the Barcelona-Catania-Paris energy functional is applied to a study of nuclear masses and other properties. The functional is largely based on calculated ab initio nuclear and neutron matter equations of state. Compared to typical Skyrme functionals having 10-12 parameters apart from spin-orbit and pairing terms, the new functional has only 2 or 3 adjusted parameters, fine tuning the nuclear matter binding energy and fixing the surface energy of finite nuclei. An energy rms value of 1.58 MeV is obtained from a fit of these three parameters to the 579 measured masses reported in the Audi and Wapstra [Nucl. Phys. A 729, 337 (2003)] compilation. This rms value compares favorably with the ones obtained using other successful mean field theories, which range from 1.5 to 3.0 MeV for optimized Skyrme functionals and from 0.7 to 3.0 MeV for the Gogny functionals. The other properties that have been calculated and compared to experiment are nuclear radii, the giant monopole resonance, and spontaneous fission lifetimes.
NASA Technical Reports Server (NTRS)
Keyes, David E.; Smooke, Mitchell D.
1987-01-01
A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.
Optimization of IBF parameters based on adaptive tool-path algorithm
NASA Astrophysics Data System (ADS)
Deng, Wen Hui; Chen, Xian Hua; Jin, Hui Liang; Zhong, Bo; Hou, Jin; Li, An Qi
2018-03-01
As a kind of Computer Controlled Optical Surfacing (CCOS) technology, Ion Beam Figuring (IBF) has obvious advantages in the control of surface accuracy, surface roughness and subsurface damage. The superiority and characteristics of IBF in optical component processing are analyzed from the point of view of the removal mechanism. To obtain a more effective and automatic tool path carrying dwell-time information, a novel algorithm is proposed in this paper. Based on the removal functions measured on our IBF equipment and the adaptive tool path, optimized parameters are obtained through analysis of the residual error that would be created in the polishing process. A Φ600 mm plane reflector element was used as a simulation instance. The simulation result shows that after four combinations of processing, the surface accuracy in terms of the PV (peak-valley) value and the RMS (root mean square) value was reduced to 4.81 nm and 0.495 nm from 110.22 nm and 13.998 nm, respectively, within the 98% aperture. The result shows that the algorithm and the optimized parameters provide a good theoretical basis for high-precision processing by IBF.
Supersensitive ancilla-based adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Larson, Walker; Saleh, Bahaa E. A.
2017-10-01
The supersensitivity attained in quantum phase estimation is known to be compromised in the presence of decoherence. This is particularly patent at blind spots—phase values at which sensitivity is totally lost. One remedy is to use a precisely known reference phase to shift the operation point to a less vulnerable phase value. Since this is not always feasible, we present here an alternative approach based on combining the probe with an ancillary degree of freedom containing adjustable parameters to create an entangled quantum state of higher dimension. We validate this concept by simulating a configuration of a Mach-Zehnder interferometer with a two-photon probe and a polarization ancilla of adjustable parameters, entangled at a polarizing beam splitter. At the interferometer output, the photons are measured after an adjustable unitary transformation in the polarization subspace. Through calculation of the Fisher information and simulation of an estimation procedure, we show that optimizing the adjustable polarization parameters using an adaptive measurement process provides globally supersensitive unbiased phase estimates for a range of decoherence levels, without prior information or a reference phase.
Measurement of the Acoustic Nonlinearity Parameter for Biological Media.
NASA Astrophysics Data System (ADS)
Cobb, Wesley Nelson
In vitro measurements of the acoustic nonlinearity parameter are presented for several biological media. With these measurements it is possible to predict the distortion of a finite amplitude wave in biological tissues of current diagnostic and research interest. The measurement method is based on the finite amplitude distortion of a sine wave emitted by a piston source. The growth of the second harmonic component of this wave is measured by a piston receiver which is coaxial with, and has the same size as, the source. The experimental measurements and theory are compared in order to determine the nonlinearity parameter. The density, sound speed, and attenuation of the medium are determined in order to make this comparison. The theory developed for this study accounts for the influence of both diffraction and attenuation on the experimental measurements. The effects of dispersion, tissue inhomogeneity and gas bubbles within the excised tissues are studied. To test the measurement method, experimental results are compared with established values for the nonlinearity parameter of distilled water, ethylene glycol and glycerol. The agreement between these values suggests that the measurement uncertainty is ±5% for liquids and ±10% for solid tissues. Measurements are presented for dog blood and bovine serum albumin as a function of concentration. The nonlinearity parameters for liver, kidney and spleen are reported for both human and canine tissues. The values for the fresh tissues displayed little variation (6.8 to 7.8). Measurements for fixed, normal and cirrhotic tissues indicated that the nonlinearity parameter does not depend strongly on pathology. However, the values for fixed tissues were somewhat higher than those of the fresh tissues.
Statistical evaluation of stability data: criteria for change-over-time and data variability.
Bar, Raphael
2003-01-01
In the recently issued ICH Q1E guidance on evaluation of stability data of drug substances and products, the need to perform a statistical extrapolation of a shelf-life of a drug product or a retest period for a drug substance is based heavily on whether the data exhibit change-over-time and/or variability. However, the document suggests neither measures nor acceptance criteria for these two parameters. This paper demonstrates a useful application of simple statistical parameters for determining whether sets of stability data from either accelerated or long-term storage programs exhibit change-over-time and/or variability. These parameters are all derived from a simple linear regression analysis first performed on the stability data. The p-value of the slope of the regression line is taken as the measure of change-over-time, with a value of 0.25 suggested as the limit for insignificant change of the monitored quantitative stability attributes. The minimal process capability index, Cpk, calculated from the standard deviation about the regression line, is suggested as the measure of variability, with a value of 2.5 as the limit for insignificant variability. The usefulness of these two parameters, the p-value and Cpk, was demonstrated on stability data of a refrigerated drug product and on pooled data of three batches of a drug substance. In both cases, the determined parameters allowed characterization of the data in terms of change-over-time and variability. Consequently, complete evaluation of the stability data could be pursued according to the ICH guidance. It is believed that applying these two parameters with their acceptance criteria will allow a more unified evaluation of stability data.
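Both screening parameters fall out of a single linear regression. A minimal sketch follows; the assay values, time points and specification limits are hypothetical, and Cpk is computed here in the usual process-capability form from the regression residual standard deviation:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical assay values (% of label claim) at stability time points (months)
months = np.array([0, 3, 6, 9, 12, 18, 24.0])
assay  = np.array([100.1, 99.8, 100.0, 99.7, 99.9, 99.6, 99.8])

reg = linregress(months, assay)
resid = assay - (reg.intercept + reg.slope * months)
s = np.std(resid, ddof=2)            # standard deviation about the regression line

# Change-over-time: slope p-value above 0.25 -> change treated as insignificant
change_insignificant = reg.pvalue > 0.25

# Variability: Cpk against hypothetical specification limits of 95-105%
lsl, usl = 95.0, 105.0
mean = assay.mean()
cpk = min(usl - mean, mean - lsl) / (3 * s)   # above 2.5 -> variability insignificant
```

Pooling the batches and extrapolating the shelf-life would then proceed per ICH Q1E only once both criteria are characterized this way.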
Paeng, Jin Chul; Keam, Bhumsuk; Kim, Tae Min; Kim, Dong-Wan; Heo, Dae Seog
2018-01-01
Intratumoral heterogeneity has been suggested to be an important resistance mechanism leading to treatment failure. We hypothesized that radiologic images could be an alternative method for identification of tumor heterogeneity. We tested heterogeneity textural parameters on pretreatment FDG-PET/CT in order to assess their predictive value for targeted therapy. Recurrent or metastatic non-small cell lung cancer (NSCLC) subjects with an activating EGFR mutation treated with either gefitinib or erlotinib were reviewed. An exploratory data set (n = 161) and a validation data set (n = 21) were evaluated, and eight parameters were selected for survival analysis. The optimal cutoff value was determined by the recursive partitioning method, and the predictive value was calculated using Harrell's C-index. Univariate analysis revealed that all eight parameters showed an increased hazard ratio (HR) for progression-free survival (PFS). The highest HR was 6.41 (P<0.01) with co-occurrence (Co) entropy. Increased risk remained present after adjusting for initial stage, performance status (PS), and metabolic volume (MV) (aHR: 4.86, P<0.01). Textural parameters were found to have an incremental predictive value for early EGFR tyrosine kinase inhibitor (TKI) failure compared to that of the base model of stage and PS (C-index 0.596 vs. 0.662, P = 0.02, by Co entropy). Heterogeneity textural parameters acquired from pretreatment FDG-PET/CT are highly predictive factors for PFS of EGFR TKI in EGFR-mutated NSCLC patients. These parameters are easily applicable to the identification of a subpopulation at increased risk of early EGFR TKI failure. Correlation to genomic alteration should be determined in future studies. PMID:29385152
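Co-occurrence (Co) entropy, the strongest parameter above, is the entropy of a gray-level co-occurrence matrix (GLCM). A minimal illustration on synthetic uptake grids follows; the quantization depth, neighbor offset and grids are all assumptions, not the study's processing pipeline:

```python
import numpy as np

def co_occurrence_entropy(img, levels=4):
    # Quantize the image into gray levels and build a GLCM for horizontal
    # neighbors; the entropy of the normalized GLCM is the texture parameter.
    edges = np.linspace(img.min(), img.max(), levels + 1)[1:-1]
    q = np.digitize(img, edges)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical SUV grids: near-homogeneous vs heterogeneous uptake
rng = np.random.default_rng(1)
homog = np.full((16, 16), 5.0) + rng.normal(0, 0.1, (16, 16))
heterog = rng.uniform(1.0, 10.0, (16, 16))
```

Heterogeneous uptake spreads the GLCM mass over more cells and thus yields a higher entropy, which is the intuition behind its use as a heterogeneity marker.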
Parsons, Nola J; Schaefer, Adam M; van der Spuy, Stephen D; Gous, Tertius A
2015-03-25
There are few publications on the clinical haematology and biochemistry of African penguins (Spheniscus demersus) and these are based on captive populations. Baseline haematology and serum biochemistry parameters were analysed from 108 blood samples from wild, adult African penguins. Samples were collected from the breeding range of the African penguin in South Africa and the results were compared between breeding region and sex. The haematological parameters that were measured were: haematocrit, haemoglobin, red cell count and white cell count. The biochemical parameters that were measured were: sodium, potassium, chloride, calcium, inorganic phosphate, creatinine, cholesterol, serum glucose, uric acid, bile acid, total serum protein, albumin, aspartate transaminase and creatine kinase. All samples were serologically negative for selected avian diseases and no blood parasites were detected. No haemolysis was present in any of the analysed samples. Male African penguins were larger and heavier than females, with higher haematocrit, haemoglobin and red cell count values, but lower calcium and phosphate values. African penguins in the Eastern Cape were heavier than those in the Western Cape, with lower white cell count and globulin values and a higher albumin/globulin ratio, possibly indicating that birds are in a poorer condition in the Western Cape. Results were also compared between multiple penguin species and with African penguins in captivity. These values for healthy, wild, adult penguins can be used for future health and disease assessments.
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into it as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
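A toy version of such a parameter-discovery loop can be sketched with a one-parameter stochastic model and a plain Monte Carlo estimate standing in for statistical model checking; the sequential hypothesis testing and the CUDA parallelization are omitted, and the model, target and annealing schedule are all illustrative:

```python
import math
import random

random.seed(42)

def model_satisfies(rate, trials=400):
    # Toy stochastic model: fraction of sampled trajectories whose event time
    # falls below a threshold; stands in for a model-checking estimate.
    hits = sum(1 for _ in range(trials) if random.expovariate(rate) < 1.0)
    return hits / trials

target = 0.63   # observed fraction of trajectories satisfying the property

def anneal(steps=300, T0=0.5):
    # Simulated annealing over the unknown rate parameter
    cur = 0.1
    cur_err = abs(model_satisfies(cur) - target)
    best, best_err = cur, cur_err
    for k in range(steps):
        T = T0 * (1 - k / steps) + 1e-4
        cand = max(1e-3, cur + random.gauss(0, 0.2))
        err = abs(model_satisfies(cand) - target)
        # Accept improvements always, worse moves with Boltzmann probability
        if err < cur_err or random.random() < math.exp(-(err - cur_err) / T):
            cur, cur_err = cand, err
            if err < best_err:
                best, best_err = cand, err
    return best, best_err

rate_hat, err_hat = anneal()
```

Because each objective evaluation is itself a noisy simulation, the real algorithm bounds that noise with sequential hypothesis testing rather than a fixed trial count.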
NASA Technical Reports Server (NTRS)
Palmer, Michael T.; Abbott, Kathy H.
1994-01-01
This study identifies improved methods to present system parameter information for detecting abnormal conditions and to identify system status. Two workstation experiments were conducted. The first experiment determined if including expected-value-range information in traditional parameter display formats affected subject performance. The second experiment determined if using a nontraditional parameter display format, which presented relative deviation from expected value, was better than traditional formats with expected-value ranges included. The inclusion of expected-value-range information onto traditional parameter formats was found to have essentially no effect. However, subjective results indicated support for including this information. The nontraditional column deviation parameter display format resulted in significantly fewer errors compared with traditional formats with expected-value-ranges included. In addition, error rates for the column deviation parameter display format remained stable as the scenario complexity increased, whereas error rates for the traditional parameter display formats with expected-value ranges increased. Subjective results also indicated that the subjects preferred this new format and thought that their performance was better with it. The column deviation parameter display format is recommended for display applications that require rapid recognition of out-of-tolerance conditions, especially for a large number of parameters.
Musings on cosmological relaxation and the hierarchy problem
NASA Astrophysics Data System (ADS)
Jaeckel, Joerg; Mehta, Viraf M.; Witkowski, Lukas T.
2016-03-01
Recently Graham, Kaplan and Rajendran proposed cosmological relaxation as a mechanism for generating a hierarchically small Higgs vacuum expectation value. Inspired by this we collect some thoughts on steps towards a solution to the electroweak hierarchy problem and apply them to the original model of cosmological relaxation [Phys. Rev. Lett. 115, 221801 (2015)]. To do so, we study the dynamics of the model and determine the relation between the fundamental input parameters and the electroweak vacuum expectation value. Depending on the input parameters the model exhibits three qualitatively different regimes, two of which allow for hierarchically small Higgs vacuum expectation values. One leads to standard electroweak symmetry breaking whereas in the other regime electroweak symmetry is mainly broken by a Higgs source term. While the latter is not acceptable in a model based on the QCD axion, in non-QCD models this may lead to new and interesting signatures in Higgs observables. Overall, we confirm that cosmological relaxation can successfully give rise to a hierarchically small Higgs vacuum expectation value if (at least) one model parameter is chosen sufficiently small. However, we find that the required level of tuning for achieving this hierarchy in relaxation models can be much more severe than in the Standard Model.
Computing Information Value from RDF Graph Properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Heileman, Gregory
2010-11-08
Information value has been implicitly utilized and mostly non-subjectively computed in information retrieval (IR) systems. We explicitly define and compute the value of an information piece as a function of two parameters: the first is the potential semantic impact the target information can subjectively have on its recipient's world-knowledge, and the second is trust in the information source. We model these two parameters as properties of RDF graphs. Two graphs are constructed, a target graph representing the semantics of the target body of information and a context graph representing the context of the consumer of that information. We compute information value subjectively as a function of both the potential change to the context graph (impact) and the overlap between the two graphs (trust). Graph change is computed as a graph edit distance measuring the dissimilarity between the context graph before and after the learning of the target graph. A particular application of this subjective information valuation is in the construction of a personalized ranking component in Web search engines. Based on our method, we construct a Web re-ranking system that personalizes the information experience for the information-consumer.
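Treating both RDF graphs as plain sets of triples, the two parameters can be sketched as follows; the triples, the Jaccard overlap for trust and the product combination rule are illustrative assumptions, not the paper's exact formulas:

```python
# Hypothetical (subject, predicate, object) triples standing in for RDF graphs
context = {
    ("alice", "knows", "bob"),
    ("bob", "worksAt", "acme"),
    ("acme", "locatedIn", "berlin"),
}
target = {
    ("bob", "worksAt", "acme"),      # already known -> contributes to trust
    ("acme", "locatedIn", "berlin"),
    ("acme", "founded", "1999"),     # new to the consumer -> contributes to impact
}

# Trust: overlap between the target and context graphs (Jaccard similarity)
trust = len(target & context) / len(target | context)

# Impact: edit distance between the context graph before and after learning
# the target; for triple sets this is simply the number of added triples.
impact = len((context | target) - context)

value = trust * impact   # one simple way to combine the two parameters
```

A re-ranker would compute this value per result against the consumer's context graph and sort accordingly.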
Provably secure identity-based identification and signature schemes from code assumptions
Zhao, Yiming
2017-01-01
Code-based cryptography is one of the few alternatives supposed to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly profound research on coding theory, the security reduction and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security. PMID:28809940
Provably secure identity-based identification and signature schemes from code assumptions.
Song, Bo; Zhao, Yiming
2017-01-01
Code-based cryptography is one of the few alternatives supposed to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly profound research on coding theory, the security reduction and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security.
Arisan, Volkan; Karabuda, Zihni Cüneyt; Avsever, Hakan; Özdemir, Tayfun
2013-12-01
The relationship of conventional multi-slice computed tomography (CT)- and cone beam CT (CBCT)-based gray density values and the primary stability parameters of implants that were placed by stereolithographic surgical guides were analyzed in this study. Eighteen edentulous jaws were randomly scanned by a CT (CT group) or a CBCT scanner (CBCT group) and radiographic gray density was measured from the planned implants. A total of 108 implants were placed, and primary stability parameters were measured by insertion torque value (ITV) and resonance frequency analysis (RFA). Radiographic and subjective bone quality classification (BQC) was also classified. Results were analyzed by correlation tests and multiple regressions (p < .05). CBCT-based gray density values (765 ± 97.32 voxel value) outside the implants were significantly higher than those of CT-based values (668.4 ± 110 Hounsfield unit, p < .001). Significant relations were found among the gray density values outside the implants, ITV (adjusted r² = 0.6142, p = .001 and adjusted r² = 0.5166, p = .0021), and RFA (adjusted r² = 0.5642, p = .0017 and adjusted r² = 0.5423, p = .0031 for CT and CBCT groups, respectively). Data from radiographic and subjective BQC were also in agreement. Similar to the gray density values of CT, that of CBCT could also be predictive for the subjective BQC and primary implant stability. Results should be confirmed on different CBCT scanners. © 2012 Wiley Periodicals, Inc.
Relative effects of survival and reproduction on the population dynamics of emperor geese
Schmutz, Joel A.; Rockwell, Robert F.; Petersen, Margaret R.
1997-01-01
Populations of emperor geese (Chen canagica) in Alaska declined sometime between the mid-1960s and the mid-1980s and have increased little since. To promote recovery of this species to former levels, managers need to know how much their perturbations of survival and/or reproduction would affect population growth rate (λ). We constructed an individual-based population model to evaluate the relative effect of altering mean values of various survival and reproductive parameters on λ and fall age structure (AS, defined as the proportion of juveniles), assuming additive rather than compensatory relations among parameters. Altering survival of adults had markedly greater relative effects on λ than did equally proportionate changes in either juvenile survival or reproductive parameters. We found the opposite pattern for relative effects on AS. Because of concerns about bias in the initial parameter estimates used in our model, we used 5 additional sets of parameter estimates with this model structure. We found that estimates of survival based on aerial survey data gathered each fall resulted in models that corresponded more closely to independent estimates of λ than did models that used mark-recapture estimates of survival. This disparity suggests that mark-recapture estimates of survival are biased low. To further explore how parameter estimates affected estimates of λ, we used values of survival and reproduction found in other goose species, and we examined the effect of a hypothesized correlation between an individual's clutch size and the subsequent survival of her young. The rank order of parameters in their relative effects on λ was consistent for all 6 parameter sets we examined. The observed variation in relative effects on λ among the 6 parameter sets is indicative of how relative effects on λ may vary among goose populations.
With this knowledge of the relative effects of survival and reproductive parameters on λ, managers can make more informed decisions about which parameters to influence through management or to target for future study.
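The headline result above, that a proportional change in adult survival moves λ more than the same proportional change in reproduction, can be reproduced with a minimal stage-structured projection matrix rather than the paper's individual-based model; the vital rates below are illustrative goose-like values, not the study's estimates:

```python
import numpy as np

# Hypothetical vital rates: fecundity f (female young per adult per year),
# first-year survival s_j, adult survival s_a
f, s_j, s_a = 0.9, 0.5, 0.85
A = np.array([[0.0, f],
              [s_j, s_a]])          # two-stage projection matrix

def lam(M):
    # Asymptotic growth rate: dominant eigenvalue of the projection matrix
    return max(abs(np.linalg.eigvals(M)))

base = lam(A)

A_sa = A.copy(); A_sa[1, 1] *= 1.10   # +10% adult survival
A_f  = A.copy(); A_f[0, 1] *= 1.10    # +10% fecundity

d_sa = lam(A_sa) - base               # effect of perturbing adult survival
d_f  = lam(A_f) - base                # effect of perturbing reproduction
```

For long-lived birds the elasticity of λ to adult survival typically dominates, which is exactly the management-relevant pattern the abstract reports.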
An economic evaluation of maxillary implant overdentures based on six vs. four implants.
Listl, Stefan; Fischer, Leonhard; Giannakopoulos, Nikolaos Nikitas
2014-08-18
The purpose of the present study was to assess the value for money achieved by bar-retained implant overdentures based on six implants compared with four implants as treatment alternatives for the edentulous maxilla. A Markov decision tree model was constructed and populated with parameter estimates for implant and denture failure as well as patient-centred health outcomes as available from recent literature. The decision scenario was modelled within a ten year time horizon and relied on cost reimbursement regulations of the German health care system. The cost-effectiveness threshold was identified above which the six-implant solution is preferable over the four-implant solution. Uncertainties regarding input parameters were incorporated via one-way and probabilistic sensitivity analysis based on Monte-Carlo simulation. Within a base case scenario of average treatment complexity, the cost-effectiveness threshold was identified to be 17,564 € per year of denture satisfaction gained above of which the alternative with six implants is preferable over treatment including four implants. Sensitivity analysis yielded that, depending on the specification of model input parameters such as patients' denture satisfaction, the respective cost-effectiveness threshold varies substantially. The results of the present study suggest that bar-retained maxillary overdentures based on six implants provide better patient satisfaction than bar-retained overdentures based on four implants but are considerably more expensive. Final judgements about value for money require more comprehensive clinical evidence including patient-centred health outcomes.
An economic evaluation of maxillary implant overdentures based on six vs. four implants
2014-01-01
Background The purpose of the present study was to assess the value for money achieved by bar-retained implant overdentures based on six implants compared with four implants as treatment alternatives for the edentulous maxilla. Methods A Markov decision tree model was constructed and populated with parameter estimates for implant and denture failure as well as patient-centred health outcomes as available from recent literature. The decision scenario was modelled within a ten year time horizon and relied on cost reimbursement regulations of the German health care system. The cost-effectiveness threshold was identified above which the six-implant solution is preferable over the four-implant solution. Uncertainties regarding input parameters were incorporated via one-way and probabilistic sensitivity analysis based on Monte-Carlo simulation. Results Within a base case scenario of average treatment complexity, the cost-effectiveness threshold was identified to be 17,564 € per year of denture satisfaction gained above of which the alternative with six implants is preferable over treatment including four implants. Sensitivity analysis yielded that, depending on the specification of model input parameters such as patients’ denture satisfaction, the respective cost-effectiveness threshold varies substantially. Conclusions The results of the present study suggest that bar-retained maxillary overdentures based on six implants provide better patient satisfaction than bar-retained overdentures based on four implants but are considerably more expensive. Final judgements about value for money require more comprehensive clinical evidence including patient-centred health outcomes. PMID:25135370
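The threshold logic in both versions of this abstract reduces to an incremental cost-effectiveness ratio (ICER). In the sketch below, the costs and satisfaction outcomes are assumed round numbers chosen only to land near the reported 17,564 € figure, not values taken from the study:

```python
# Hypothetical 10-year costs (EUR) and outcomes for the two overdenture options
cost_four, cost_six = 12000.0, 19000.0        # assumed treatment costs
sat_four, sat_six = 7.2, 7.6                  # years of denture satisfaction

# Incremental cost per extra year of denture satisfaction
icer = (cost_six - cost_four) / (sat_six - sat_four)

# The six-implant option is preferable only if willingness to pay per year of
# satisfaction gained exceeds this threshold (assumed value below).
willingness_to_pay = 20000.0
prefer_six = willingness_to_pay > icer
```

The probabilistic sensitivity analysis in the study effectively recomputes this threshold over Monte Carlo draws of the Markov model's input parameters.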
GNSS VTEC calibration using satellite altimetry and LEO data
NASA Astrophysics Data System (ADS)
Alizadeh, M. Mahdi; Schuh, Harald
2015-04-01
Among the different systems for remote sensing of the ionosphere, space geodetic techniques have turned into a promising tool for monitoring and modeling ionospheric parameters. Because the ionosphere is a dispersive medium, signals travelling through it provide information about its parameters in terms of Total Electron Content (TEC) or electron density along the ray path. The classical input data for the development of Global Ionosphere Maps (GIM) of the Vertical Total Electron Content (VTEC) are dual-frequency ground-based observations of the Global Navigation Satellite Systems (GNSS). Nevertheless, because GNSS ground stations are inhomogeneously distributed, with poor coverage over the oceans (namely the southern Pacific and southern Atlantic) and parts of Africa, the precision of VTEC maps is rather low in these areas. From long-term analyses it is believed that the International GNSS Service (IGS) VTEC maps have an accuracy of 1-2 TECU in areas well covered with GNSS receivers; conversely, in areas with poor coverage the accuracy can be degraded by a factor of up to five. On the other hand, dual-frequency satellite altimetry missions (such as Jason-1&2) provide direct VTEC values exactly over the oceans, and Low Earth Orbiting (LEO) satellites such as Formosat-3/COSMIC (F/C) provide a great number of globally distributed occultation measurements per day, which can be used to obtain VTEC values. Combining these data with the ground-based data improves the accuracy and reliability of the VTEC maps by closing the observation gaps that arise when using ground-based data only. In this approach an essential step is the evaluation and calibration of the different data sources used in the combination procedure.
This study investigates the compatibility of calibrated TEC observables derived from GNSS dual-frequency data, recorded at global ground-based station networks, with space-based TEC values from satellite altimetry and F/C observations. In the current procedure the ground-based GNSS observations have been used to develop a GNSS-only GIM, using the parameter estimation technique. The VTEC values extracted from these models have been quantified and calibrated with the raw altimetry and LEO measurements. The calibrated values have been consequently used for developing the combined GIMs of the VTEC.
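Comparing slant GNSS observables with vertical TEC from altimetry relies on an obliquity (mapping) function. Below is a minimal thin-shell sketch, assuming the common single-layer model with an Earth radius of 6371 km and a shell height of 450 km; actual analysis-centre values vary.

```python
import math

# Single-layer ionosphere mapping: converts slant TEC along a GNSS ray
# to vertical TEC at the ionospheric pierce point. The Earth radius and
# thin-shell height are conventional assumptions, not values from the study.

R_EARTH_KM = 6371.0
SHELL_H_KM = 450.0

def mapping_function(zenith_deg):
    """Obliquity factor M(z): STEC = M(z) * VTEC for a thin-shell model."""
    z = math.radians(zenith_deg)
    sin_zp = R_EARTH_KM / (R_EARTH_KM + SHELL_H_KM) * math.sin(z)
    return 1.0 / math.sqrt(1.0 - sin_zp ** 2)

def vtec_from_stec(stec_tecu, zenith_deg):
    """Vertical TEC (TECU) from a slant TEC observation."""
    return stec_tecu / mapping_function(zenith_deg)

# At zenith the mapping factor is 1; it grows toward the horizon.
print(round(mapping_function(0.0), 3))    # 1.0
print(round(vtec_from_stec(60.0, 70.0), 1))
```

Altimetry VTEC is already vertical, so calibrating GNSS-derived VTEC against it amounts to comparing values like `vtec_from_stec(...)` at co-located points.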
40 CFR 270.24 - Specific part B information requirements for process vents.
Code of Federal Regulations, 2011 CFR
2011-07-01
... emission reductions must be made using operating parameter values (e.g., temperatures, flow rates, or..., schematics, and piping and instrumentation diagrams based on the appropriate sections of “APTI Course 415...
40 CFR 270.24 - Specific part B information requirements for process vents.
Code of Federal Regulations, 2013 CFR
2013-07-01
... emission reductions must be made using operating parameter values (e.g., temperatures, flow rates, or..., schematics, and piping and instrumentation diagrams based on the appropriate sections of “APTI Course 415...
40 CFR 270.24 - Specific part B information requirements for process vents.
Code of Federal Regulations, 2012 CFR
2012-07-01
... emission reductions must be made using operating parameter values (e.g., temperatures, flow rates, or..., schematics, and piping and instrumentation diagrams based on the appropriate sections of “APTI Course 415...
40 CFR 270.24 - Specific part B information requirements for process vents.
Code of Federal Regulations, 2014 CFR
2014-07-01
... emission reductions must be made using operating parameter values (e.g., temperatures, flow rates, or..., schematics, and piping and instrumentation diagrams based on the appropriate sections of “APTI Course 415...
Assessment of source-based nitrogen removal alternatives in leather tanning industry wastewater.
Zengin, G; Olmez, T; Doğruel, S; Kabdaşli, I; Tünay, O
2002-01-01
Nitrogen is an important parameter of leather tanning wastewaters. Magnesium ammonium phosphate (MAP) precipitation is a chemical treatment alternative for ammonia removal. In this study, a detailed source-based wastewater characterisation of a bovine leather tannery was made, and nitrogen speciation as well as other basic pollutant parameter values were evaluated. This evaluation led to the definition of alternatives for source-based MAP treatment. MAP precipitation experiments conducted on these alternatives yielded over 90% ammonia removal at pH 9.5 using stoichiometric doses. Among the alternatives tested, liming-deliming and bating-washing was found to be the most advantageous, providing 71% ammonia removal.
NASA Astrophysics Data System (ADS)
Ripamonti, Francesco; Resta, Ferruccio; Borroni, Massimo; Cazzulani, Gabriele
2014-04-01
A new method for the real-time identification of mechanical system modal parameters is used to design different adaptive control logics aimed at reducing vibrations in a carbon fiber plate smart structure. The structure is instrumented with three piezoelectric actuators, three accelerometers and three strain gauges. The real-time identification is based on a recursive subspace tracking algorithm whose outputs are elaborated by an ARMA model. A statistical approach is finally applied to choose the correct modal parameter values. These are given as input to model-based control logics such as gain scheduling and adaptive LQR control.
NASA Astrophysics Data System (ADS)
Soulis, K. X.; Valiantzas, J. D.
2012-03-01
The Soil Conservation Service Curve Number (SCS-CN) approach is widely used as a simple method for predicting direct runoff volume for a given rainfall event. The CN parameter values corresponding to various soil, land cover, and land management conditions can be selected from tables, but it is preferable to estimate the CN value from measured rainfall-runoff data if available. However, previous researchers indicated that the CN values calculated from measured rainfall-runoff data vary systematically with the rainfall depth. Hence, they suggested determining a single asymptotic CN value observed for very high rainfall depths to characterize the watershed's runoff response. In this paper, the hypothesis that the observed correlation between the calculated CN value and the rainfall depth in a watershed reflects the effect of soil and land cover spatial variability on its hydrologic response is tested. Based on this hypothesis, the simplified concept of a two-CN heterogeneous system is introduced to model the observed CN-rainfall variation by reducing the CN spatial variability into two classes. The behaviour of the CN-rainfall function produced by the simplified two-CN system is approached theoretically and analysed systematically, and it is found to be similar to the variation observed in natural watersheds. Synthetic data tests, natural watershed examples, and a detailed study of two natural experimental watersheds with known spatial heterogeneity characteristics were used to evaluate the method. The results indicate that the determination of CN values from rainfall-runoff data using the proposed two-CN system approach provides reasonable accuracy and outperforms the previous methods based on the determination of a single asymptotic CN value. Although the suggested method increases the number of unknown parameters to three (instead of one), a clear physical reasoning for them is presented.
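The two-CN concept can be illustrated directly with the standard SCS-CN equations (S = 25400/CN - 254 in mm, Q = (P - Ia)^2/(P - Ia + S) with Ia = 0.2S). The CN values and area fraction below are illustrative, not taken from the paper.

```python
def potential_retention_mm(cn):
    """S (mm) from the curve number: SI form of S = 1000/CN - 10 (inches)."""
    return 25400.0 / cn - 254.0

def scs_runoff_mm(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth from the SCS-CN equation with Ia = ia_ratio * S."""
    s = potential_retention_mm(cn)
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def two_cn_runoff_mm(p_mm, cn_low, cn_high, area_fraction_high):
    """Runoff of a two-CN heterogeneous watershed: area-weighted sum of
    the responses of the two homogeneous sub-areas."""
    return (area_fraction_high * scs_runoff_mm(p_mm, cn_high)
            + (1.0 - area_fraction_high) * scs_runoff_mm(p_mm, cn_low))

# For small storms only the high-CN fraction produces runoff, so the
# composite CN back-calculated from (P, Q) varies with rainfall depth,
# which is the behaviour the paper sets out to explain.
for p in (20.0, 60.0, 120.0):
    print(round(two_cn_runoff_mm(p, cn_low=55.0, cn_high=90.0,
                                 area_fraction_high=0.3), 1))
```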
Model-Based IN SITU Parameter Estimation of Ultrasonic Guided Waves in AN Isotropic Plate
NASA Astrophysics Data System (ADS)
Hall, James S.; Michaels, Jennifer E.
2010-02-01
Most ultrasonic systems employing guided waves for flaw detection require information such as dispersion curves, transducer locations, and expected propagation loss. Degraded system performance may result if assumed parameter values do not accurately reflect the actual environment. By characterizing the propagation environment in situ at the time of test, potentially erroneous a priori estimates are avoided and the performance of ultrasonic guided wave systems can be improved. A four-part model-based algorithm that estimates model parameters, in which an assumed propagation model is used to describe the received signals, is described in the context of previous work. This approach builds upon that work by demonstrating the ability to estimate parameters for the case of single-mode propagation. Performance is demonstrated on signals obtained from theoretical dispersion curves, finite element modeling, and experimental data.
A generic biokinetic model for noble gases with application to radon.
Leggett, Rich; Marsh, James; Gregoratto, Demetrio; Blanchardon, Eric
2013-06-01
To facilitate the estimation of radiation doses from intake of radionuclides, the International Commission on Radiological Protection (ICRP) publishes dose coefficients (dose per unit intake) based on reference biokinetic and dosimetric models. The ICRP generally has not provided biokinetic models or dose coefficients for intake of noble gases, but plans to provide such information for (222)Rn and other important radioisotopes of noble gases in a forthcoming series of reports on occupational intake of radionuclides (OIR). This paper proposes a generic biokinetic model framework for noble gases and develops parameter values for radon. The framework is tailored to applications in radiation protection and is consistent with a physiologically based biokinetic modelling scheme adopted for the OIR series. Parameter values for a noble gas are based largely on a blood flow model and physical laws governing transfer of a non-reactive and soluble gas between materials. Model predictions for radon are shown to be consistent with results of controlled studies of its biokinetics in human subjects.
Dominance-based ranking functions for interval-valued intuitionistic fuzzy sets.
Chen, Liang-Hsuan; Tu, Chien-Cheng
2014-08-01
The ranking of interval-valued intuitionistic fuzzy sets (IvIFSs) is difficult since they include the interval values of membership and nonmembership. This paper proposes ranking functions for IvIFSs based on the dominance concept. The proposed ranking functions consider the degree to which an IvIFS dominates and is not dominated by other IvIFSs. Based on the bivariate framework and the dominance concept, the functions incorporate not only the boundary values of membership and nonmembership, but also the relative relations among IvIFSs in comparisons. The dominance-based ranking functions include bipolar evaluations with a parameter that allows the decision-maker to reflect his actual attitude in allocating the various kinds of dominance. The relationship for two IvIFSs that satisfy the dual couple is defined based on four proposed ranking functions. Importantly, the proposed ranking functions can achieve a full ranking for all IvIFSs. Two examples are used to demonstrate the applicability and distinctiveness of the proposed ranking functions.
Batstone, D J; Torrijos, M; Ruiz, C; Schmidt, J E
2004-01-01
The model structure in anaerobic digestion has been clarified following publication of the IWA Anaerobic Digestion Model No. 1 (ADM1). However, parameter values are not well known, and the uncertainty and variability in the given parameter values are largely unknown. Additionally, platforms for identification of parameters, namely continuous-flow laboratory digesters and batch tests, suffer from disadvantages such as long run times and difficulty in defining initial conditions, respectively. Anaerobic sequencing batch reactors (ASBRs) are sequenced into fill-react-settle-decant phases and offer promising possibilities for parameter estimation, as they are dynamic in behaviour by nature, and their repeatable cycles allow initial conditions to be established and parameters to be evaluated. In this study, we estimated parameters describing winery wastewater (most COD as ethanol) degradation using data from sequencing operation, and validated these parameters using unsequenced pulses of ethanol and acetate. The model used was the ADM1, with an extension for ethanol degradation. Parameter confidence spaces were found by non-linear, correlated analysis of the two main Monod parameters: maximum uptake rate (k(m)) and half saturation concentration (K(S)). These parameters could be estimated together using only the measured acetate concentration (20 points per cycle). By extrapolating the single-cycle acetate data to multiple cycles, we estimate that a practical "optimal" identifiability could be achieved after two cycles for the acetate parameters, and three cycles for the ethanol parameters. The parameters found performed well in the short term, and represented the pulses of acetate and ethanol (within 4 days of the winery-fed cycles) very well. The main discrepancy was poor prediction of pH dynamics, which could be due to an unidentified buffer with an overall influence the same as a weak base (possibly CaCO3).
Based on this work, ASBR systems are effective for parameter estimation, especially for comparative wastewater characterisation. The main disadvantages are heavy computational requirements for multiple cycles, and difficulty in establishing the correct biomass concentration in the reactor, though the last is also a disadvantage for continuous fixed film reactors, and especially, batch tests.
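A minimal sketch of the Monod parameter estimation idea, using synthetic "observed" substrate data and a coarse grid search rather than the full ADM1 and a proper optimiser; all numbers are illustrative.

```python
# Estimate the two Monod parameters (maximum uptake rate km and
# half-saturation Ks) from a measured substrate time series. The
# "observed" acetate data here are synthetic, generated from known
# parameters, so the grid search should recover them exactly.

def simulate(km, ks, s0=2.0, x=1.0, dt=0.01, steps=400):
    """Euler integration of dS/dt = -km * X * S / (Ks + S); returns S(t)."""
    s, out = s0, []
    for _ in range(steps):
        s = max(0.0, s - dt * km * x * s / (ks + s))
        out.append(s)
    return out

true_km, true_ks = 1.5, 0.5
observed = simulate(true_km, true_ks)[::20]   # 20 sampled points per "cycle"

def sse(km, ks):
    sim = simulate(km, ks)[::20]
    return sum((a - b) ** 2 for a, b in zip(sim, observed))

best = min(((sse(km / 10, ks / 10), km / 10, ks / 10)
            for km in range(5, 31) for ks in range(1, 21)),
           key=lambda t: t[0])
print(best[1], best[2])   # recovers km = 1.5, Ks = 0.5 on this grid
```

With noisy real data, km and Ks are strongly correlated, which is why the study reports joint (correlated) confidence spaces rather than independent intervals.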
NASA Astrophysics Data System (ADS)
Janardhanan, S.; Datta, B.
2011-12-01
Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. 
Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of saltwater intrusion, are considered. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. The reliability concept is incorporated as the percentage of the total number of surrogate models which satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and as the reliability level decreases, constraint violation increases. Thus ensemble surrogate model based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
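The reliability concept can be sketched with a toy ensemble: each bootstrap surrogate gives its own salinity prediction for a candidate pumping rate, and reliability is the fraction of surrogates whose prediction satisfies the constraint. The linear "surrogates" below stand in for the genetic-programming models of the study, and all coefficients are invented.

```python
import random

# Toy ensemble-surrogate reliability: 50 "surrogates", each a linear
# salinity model salinity = slope * pumping + intercept, with randomly
# perturbed coefficients mimicking bootstrap-resampled training sets.

random.seed(1)
N_SURROGATES = 50
ensemble = [(0.8 + random.gauss(0, 0.05), 2.0 + random.gauss(0, 0.2))
            for _ in range(N_SURROGATES)]

def reliability(pumping, salinity_limit):
    """Fraction of the ensemble predicting the constraint is satisfied."""
    ok = sum(1 for slope, icpt in ensemble
             if slope * pumping + icpt <= salinity_limit)
    return ok / len(ensemble)

# Higher pumping pushes more surrogate predictions past the salinity
# limit, so the reliability of the candidate solution can only drop.
print(reliability(5.0, 10.0) >= reliability(9.0, 10.0))   # True
```

In the study this fraction is compared against a target level (e.g. 0.99) when screening candidate pumping strategies in the genetic algorithm.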
Plasma Charge Current for Controlling and Monitoring Electron Beam Welding with Beam Oscillation
Trushnikov, Dmitriy; Belenkiy, Vladimir; Shchavlev, Valeriy; Piskunov, Anatoliy; Abdullin, Aleksandr; Mladenov, Georgy
2012-01-01
Electron beam welding (EBW) shows certain problems with the control of the focus regime. The electron beam focus can be controlled in electron-beam welding based on the parameters of a secondary signal. In this case, parameters like secondary emissions and focus coil current have extreme relationships: there are two values of focus coil current which provide equal values of the signal parameters. Therefore, adaptive systems of electron beam focus control use low-frequency scanning of the focus, which substantially limits the operation speed of these systems and has a negative effect on weld joint quality. The purpose of this study is to develop a method for operational control of the electron beam focus during welding in the deep penetration mode. The method uses the plasma charge current signal as an additional informational parameter, which allows identification of the electron beam focus regime in electron-beam welding without additional low-frequency scanning of the focus. It can be used to develop methods for operational control of electron beam focusing during welding. In addition, this parameter allows one to observe the shape of the keyhole during the welding process. PMID:23242276
Theoretical prediction of the Grüneisen parameter for SiO2·TiO2 bulk metallic glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Chandra K.; Pandey, Brijesh K., E-mail: bkpmmmec11@gmail.com; Pandey, Anjani K.
2016-05-23
The Grüneisen parameter (γ) is very important for setting the limits of prediction of the thermoelastic properties of bulk metallic glasses. It can be defined in terms of microscopic or macroscopic parameters of the material; the former is based on the vibrational frequencies of atoms in the material, while the latter is closely related to its thermodynamic properties. Different formulations and equations of state have been used by researchers in this field to predict the Grüneisen parameter for BMGs, but for SiO2·TiO2 very little information has been available until now. In the present work we have tested the validity of two different isothermal EOSs, viz. the Poirier-Tarantola EOS and the usual Tait EOS, to predict the value of the Grüneisen parameter for SiO2·TiO2 as a function of compression. Applying different thermodynamic limitations related to the material constraints and analyzing the obtained results, it is concluded that the Poirier-Tarantola EOS gives better numerical values of the Grüneisen parameter (γ) for SiO2·TiO2 BMG.
On the behavior of certain ink aging curves.
Cantú, Antonio A
2017-09-01
This work treats writing inks, particularly ballpoint pen inks. It reviews those ink aging methods that are based on the analysis (measurement) of ink solvents (e.g., 2-phenoxyethanol, which is the most common among ballpoint pen inks). Each method involves measurements that are components of an ink aging parameter associated with the method. Only mass-independent parameters are considered. An ink solvent from an ink that is on an air-exposed substrate will evaporate at a decreasing rate that is never constant as the ink ages. An ink aging parameter should reflect this behavior; that is, the graph of a parameter's experimentally determined values plotted against ink age (which yields the ink aging curve) should show this behavior. However, some experimentally determined aging curves contain outlying points that are below or above where they should be, or points corresponding to different ages that have the same ordinate (parameter value). Such curves are unfortunately useless, since they imply that an ink can appear older or younger than it should at one or more points, or have the same apparent age at two or more points. This work explains that one cause of this unexpected behavior is that the parameter values were improperly determined, such as when a measurement is made of an ink solvent that is not completely extracted (removed) from an ink sample with a chosen extractor such as dry heat or a solvent. Copyright © 2017 Elsevier B.V. All rights reserved.
Van Geel, Paul J; Murray, Kathleen E
2015-12-01
Twelve instrument bundles were placed within two waste profiles as waste was placed in an operating landfill in Ste. Sophie, Quebec, Canada. The settlement data were simulated using a three-component model to account for primary or instantaneous compression, secondary compression or mechanical creep, and biodegradation-induced settlement. The regressed model parameters from the first waste layer were able to predict the settlement of the remaining four waste layers with good agreement. The model parameters were compared to values published in the literature. An MSW landfill scenario referenced in the literature was used to illustrate how the parameter values from the different studies predicted settlement. The parameters determined in this study and other studies with total waste heights between 15 and 60 m provided similar estimates of total settlement in the long term, while the settlement rates and relative magnitudes of the three components varied. The parameters determined based on studies with total waste heights less than 15 m resulted in larger secondary compression indices and lower biodegradation-induced settlements. When these were applied to an MSW landfill scenario with a total waste height of 30 m, the settlement was overestimated and provided unrealistic values. This study concludes that more field studies are needed to measure waste settlement during the filling stage of landfill operations and more field data are needed to assess different settlement models and their respective parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sherwood, Carly A; Eastham, Ashley; Lee, Lik Wee; Risler, Jenni; Mirzaei, Hamid; Falkner, Jayson A; Martin, Daniel B
2009-07-01
Multiple reaction monitoring (MRM) is a highly sensitive method of targeted mass spectrometry (MS) that can be used to selectively detect and quantify peptides based on the screening of specified precursor peptide-to-fragment ion transitions. MRM-MS sensitivity depends critically on the tuning of instrument parameters, such as collision energy and cone voltage, for the generation of maximal product ion signal. Although generalized equations and values exist for such instrument parameters, there is no clear indication that optimal signal can be reliably produced for all types of MRM transitions using such an algorithmic approach. To address this issue, we have devised a workflow functional on both Waters Quattro Premier and ABI 4000 QTRAP triple quadrupole instruments that allows rapid determination of the optimal value of any programmable instrument parameter for each MRM transition. Here, we demonstrate the strategy for the optimizations of collision energy and cone voltage, but the method could be applied to other instrument parameters, such as declustering potential, as well. The workflow makes use of the incremental adjustment of the precursor and product m/z values at the hundredth decimal place to create a series of MRM targets at different collision energies that can be cycled through in rapid succession within a single run, avoiding any run-to-run variability in execution or comparison. Results are easily visualized and quantified using the MRM software package Mr. M to determine the optimal instrument parameters for each transition.
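The hundredth-decimal trick can be sketched as follows: each copy of a transition has its precursor and product m/z nudged by 0.01 so the instrument treats the copies as distinct MRM targets, and each copy is assigned a different collision energy, letting the whole ramp be screened within a single run. The m/z and collision-energy values below are illustrative, not tuned settings.

```python
# Build a collision-energy optimization target list for one MRM
# transition by incrementing the m/z values at the hundredth decimal
# place, as described in the workflow. Values are illustrative only.

def optimization_targets(precursor_mz, product_mz, collision_energies):
    targets = []
    for i, ce in enumerate(collision_energies):
        targets.append({
            "precursor_mz": round(precursor_mz + 0.01 * i, 2),
            "product_mz": round(product_mz + 0.01 * i, 2),
            "collision_energy": ce,
        })
    return targets

targets = optimization_targets(523.77, 745.40,
                               collision_energies=range(15, 40, 5))
for t in targets:
    print(f"{t['precursor_mz']:.2f} -> {t['product_mz']:.2f}"
          f" @ CE {t['collision_energy']}")
```

The 0.01 Th offsets are far below the unit-resolution window of a triple quadrupole, so every copy still monitors the same physical transition; the instrument simply cycles through them in rapid succession, avoiding run-to-run variability.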
NASA Astrophysics Data System (ADS)
Zhuang, Chao; Zhou, Zhifang; Illman, Walter A.; Guo, Qiaona; Wang, Jinguo
2017-09-01
The classical aquitard-drainage model COMPAC has been modified to simulate the compaction process of a heterogeneous aquitard consisting of multiple sub-units (Multi-COMPAC). By coupling Multi-COMPAC with the parameter estimation code PEST++, the vertical hydraulic conductivity (Kv) and elastic (Sske) and inelastic (Sskp) skeletal specific-storage values of each sub-unit can be estimated using observed long-term multi-extensometer and groundwater level data. The approach was first tested through a synthetic case with known parameters. Results of the synthetic case revealed that it was possible to accurately estimate the three parameters for each sub-unit. Next, the methodology was applied to a field site located in Changzhou city, China. Based on the detailed stratigraphic information and extensometer data, the aquitard of interest was subdivided into three sub-units. Parameters Kv, Sske and Sskp of each sub-unit were estimated simultaneously and then were compared with laboratory results and with bulk values and geologic data from previous studies, demonstrating the reliability of parameter estimates. Estimated Sskp values ranged within the magnitude of 10^-4 m^-1, while Kv ranged over 10^-10 to 10^-8 m/s, suggesting moderately high heterogeneity of the aquitard. However, the elastic deformation of the third sub-unit, consisting of soft plastic silty clay, is masked by delayed drainage, and the inverse procedure leads to large uncertainty in the Sske estimate for this sub-unit.
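A minimal sketch of the elastic/inelastic distinction in the aquitard-drainage (COMPAC-style) conceptualisation: head decline above the preconsolidation head compresses the unit elastically (recoverable, governed by Sske), while decline below it compacts the unit inelastically (permanent, governed by Sskp) and lowers the preconsolidation head. All parameter values below are illustrative, not estimates from the study.

```python
# One-unit, quasi-static sketch of elastic vs inelastic aquitard
# compaction. Specific-storage values and heads are invented; the real
# Multi-COMPAC model also resolves delayed drainage within each sub-unit.

def compaction_step(head, state, sske=1e-5, sskp=1e-4, thickness=20.0):
    """Incremental compaction (m) for a new head (m). state holds the
    previous head and the preconsolidation (lowest historical) head."""
    h_prev = state["head"]
    if head >= state["h_precon"]:
        db = sske * thickness * (h_prev - head)           # elastic range
    else:
        # elastic down to the preconsolidation head, inelastic below it
        db = (sske * thickness * (h_prev - state["h_precon"])
              + sskp * thickness * (state["h_precon"] - head))
        state["h_precon"] = head                          # new lowest head
    state["head"] = head
    return db

state = {"head": 10.0, "h_precon": 8.0}
total = sum(compaction_step(h, state) for h in (9.0, 7.0, 9.0, 6.0))
print(round(total * 1000, 2), "mm of cumulative compaction")
```

Because Sskp is an order of magnitude larger than Sske here (as at many field sites), the head excursions below the preconsolidation head dominate the cumulative compaction, and the recovery step (head rising back to 9 m) claws back only a small elastic rebound.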
Cluster kinetics model for mixtures of glassformers
NASA Astrophysics Data System (ADS)
Brenskelle, Lisa A.; McCoy, Benjamin J.
2007-10-01
For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.
A physiologically based toxicokinetic model for methylmercury in female American kestrels
Nichols, J.W.; Bennett, R.S.; Rossmann, R.; French, J.B.; Sappington, K.G.
2010-01-01
A physiologically based toxicokinetic (PBTK) model was developed to describe the uptake, distribution, and elimination of methylmercury (CH3Hg) in female American kestrels. The model consists of six tissue compartments corresponding to the brain, liver, kidney, gut, red blood cells, and remaining carcass. Additional compartments describe the elimination of CH3Hg to eggs and growing feathers. Dietary uptake of CH3Hg was modeled as a diffusion-limited process, and the distribution of CH3Hg among compartments was assumed to be mediated by the flow of blood plasma. To the extent possible, model parameters were developed using information from American kestrels. Additional parameters were based on measured values for closely related species and allometric relationships for birds. The model was calibrated using data from dietary dosing studies with American kestrels. Good agreement between model simulations and measured CH3Hg concentrations in blood and tissues during the loading phase of these studies was obtained by fitting model parameters that control dietary uptake of CH3Hg and possible hepatic demethylation. Modeled results tended to underestimate the observed effect of egg production on circulating levels of CH3Hg. In general, however, simulations were consistent with observed patterns of CH3Hg uptake and elimination in birds, including the dominant role of feather molt. This model could be used to extrapolate CH3Hg kinetics from American kestrels to other bird species by appropriate reassignment of parameter values. Alternatively, when combined with a bioenergetics-based description, the model could be used to simulate CH3Hg kinetics in a long-term environmental exposure. © 2010 SETAC.
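A deliberately oversimplified sketch of the toxicokinetic idea: dietary CH3Hg enters a gut pool, is absorbed at a rate limited by a transfer coefficient, and the body burden is depleted by first-order loss (standing in here for feather and egg elimination). The published model has six tissue compartments plus egg and feather transfer; the rate constants below are invented for illustration only.

```python
# Two-pool caricature of the kestrel PBTK model: gut pool with
# rate-limited absorption, lumped body pool with first-order loss.
# All rate constants are hypothetical placeholders.

def simulate(days, dose_per_day, k_abs=0.5, k_loss=0.05, dt=0.01):
    """Euler integration; returns the body burden after `days` days."""
    gut, body = 0.0, 0.0
    for _ in range(int(days / dt)):
        gut += dose_per_day * dt            # dietary intake
        uptake = k_abs * gut * dt           # rate-limited absorption
        gut -= uptake
        body += uptake - k_loss * body * dt # loss (feathers, eggs)
    return body

short = simulate(30, dose_per_day=1.0)
longer = simulate(60, dose_per_day=1.0)
print(0.0 < short < longer)   # True: burden rises during the loading phase
```

Even this caricature reproduces the qualitative loading-phase behaviour the study calibrated against: the body burden rises toward a steady state set by the balance of uptake and elimination.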
Lin, H-Y; Hwang-Gu, S-L; Gau, S S-F
2015-07-01
Intra-individual variability in reaction time (IIV-RT), defined by the standard deviation of RT (RTSD), is considered an endophenotype for attention-deficit/hyperactivity disorder (ADHD). Ex-Gaussian distributions of RT, rather than RTSD, could better characterize moment-to-moment fluctuations in neuropsychological performance. However, data on response variability based on ex-Gaussian parameters as an endophenotypic candidate for ADHD are lacking. We assessed 411 adolescents with clinically diagnosed ADHD based on the DSM-IV-TR criteria as probands, 138 unaffected siblings, and 138 healthy controls. The output parameters, mu, sigma, and tau, of an ex-Gaussian RT distribution were derived from the Conners' continuous performance test. Multi-level models controlling for sex, age, comorbidity, and use of methylphenidate were applied. Compared with unaffected siblings and controls, ADHD probands had elevated sigma values, omissions, commissions, and mean RT. Unaffected siblings formed an intermediate group between probands and controls in terms of tau value and RTSD. There was no between-group difference in mu value. Consistent with a context-dependent nature, unaffected siblings still had an intermediate tau value between probands and controls across different interstimulus intervals. Our findings suggest IIV-RT represented by tau may be a potential endophenotype for inquiry into the genetic underpinnings of ADHD in the context of heterogeneity. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
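The ex-Gaussian is the sum of a Gaussian (mu, sigma) and an exponential (mean tau) component, so its mean is mu + tau and its variance is sigma^2 + tau^2; tau captures the long-RT tail linked to attentional lapses. A quick Monte-Carlo check, with illustrative parameter values:

```python
import random

# Sample an ex-Gaussian RT distribution and verify its first two
# moments: mean = mu + tau, variance = sigma^2 + tau^2. The parameter
# values are illustrative, not fitted values from the study.

def ex_gaussian_sample(mu, sigma, tau, rng):
    return rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau)

rng = random.Random(42)
mu, sigma, tau = 400.0, 40.0, 120.0          # milliseconds
xs = [ex_gaussian_sample(mu, sigma, tau, rng) for _ in range(200_000)]

mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(abs(mean - (mu + tau)) < 3.0)                  # True: mean near 520 ms
print(abs(var - (sigma ** 2 + tau ** 2)) < 800.0)    # True: var near 16000
```

This decomposition is why tau is a sharper index of lapse-driven variability than RTSD: RTSD mixes sigma and tau into a single number, whereas tau isolates the exponential tail.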
NASA Astrophysics Data System (ADS)
Moritzer, E.; Leister, C.
2014-05-01
The industrial use of atmospheric pressure plasmas in the plastics processing industry has increased significantly in recent years. Users of this treatment process can influence the target values (e.g. bond strength or surface energy) with the help of kinematic and electrical parameters. Until now, the parameters could be adapted to process or product requirements only by systematic but very time-consuming procedures. For this reason, the relationship between influencing values and target values is examined based on the example of a pretreatment in the bonding process with the help of statistical experimental design. Because of the large number of parameters involved, the analysis is restricted to the kinematic and electrical parameters. In the experimental tests, the following factors are taken as parameters: gap between nozzle and substrate, treatment velocity (kinematic data), voltage and duty cycle (electrical data). The statistical evaluation shows significant relationships between the parameters and surface energy in the case of polypropylene. An increase in the voltage and duty cycle increases the polar proportion of the surface energy, while a larger gap and higher velocity lead to lower energy levels. The bond strength of the overlapping bond is also significantly influenced by the voltage, velocity and gap. The direction of their effects is identical with those of the surface energy. In addition to the kinematic influences of the motion of an atmospheric pressure plasma jet, it is therefore especially important that the parameters for plasma production are taken into account when designing pretreatment processes.
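The main effects reported from such a two-level factorial design are simply mean-response contrasts between each factor's high and low levels. The coded runs and responses below are invented for illustration, not measured plasma-treatment data.

```python
# Main-effect estimation for a two-level (coded -1/+1) factorial design,
# as used in statistical experimental design. Runs and responses are
# invented: (voltage, velocity, polar surface energy in mN/m).

runs = [(-1, -1, 30.0), (-1, +1, 26.0), (+1, -1, 40.0), (+1, +1, 34.0)]

def main_effect(runs, factor_index):
    """Mean response at the factor's high level minus at its low level."""
    hi = [r[-1] for r in runs if r[factor_index] == +1]
    lo = [r[-1] for r in runs if r[factor_index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(main_effect(runs, 0))   # 9.0  -> higher voltage raises surface energy
print(main_effect(runs, 1))   # -5.0 -> higher velocity lowers it
```

The signs of these toy effects mirror the qualitative findings of the study: voltage (and duty cycle) push the polar surface energy up, while gap and velocity pull it down.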
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mestrovic, Ante; Clark, Brenda G.; Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia
2005-11-01
Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.
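As a minimal sketch of the multiple-regression step, the following fits a linear model predicting a dose metric from two of the geometric predictors named in the abstract (target volume and mean target-to-structure distance). All numeric values here are synthetic assumptions, not patient data, and the coefficients are illustrative only:

```python
import numpy as np

# Hypothetical geometric predictors for n plans: target volume (cc) and
# mean target-to-critical-structure distance (mm); the response is a
# synthetic normal-tissue dose metric.
rng = np.random.default_rng(2)
n = 18                                  # matches the 18-patient cohort size
volume = rng.uniform(1.0, 20.0, n)
distance = rng.uniform(2.0, 30.0, n)

# Synthetic linear relationship plus noise (coefficients are assumptions):
# more volume raises the dose metric, more distance lowers it.
y = 5.0 + 0.4 * volume - 0.1 * distance + rng.normal(0, 0.3, n)

# Multiple regression: dose metric ~ intercept + volume + distance.
A = np.column_stack([np.ones(n), volume, distance])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination for the fitted model.
resid = y - A @ beta
r2 = 1.0 - resid.var() / y.var()
print("beta:", np.round(beta, 2), " R^2:", round(r2, 3))
```

Fitting one such model per technique, as the abstract describes, lets the predicted dose parameters be compared across techniques before any plan is computed.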