Sample records for identified parameter values

  1. Understanding identifiability as a crucial step in uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.

    2016-12-01

    The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.

  2. Effects of expected-value information and display format on recognition of aircraft subsystem abnormalities

    NASA Technical Reports Server (NTRS)

    Palmer, Michael T.; Abbott, Kathy H.

    1994-01-01

    This study identifies improved methods to present system parameter information for detecting abnormal conditions and to identify system status. Two workstation experiments were conducted. The first experiment determined if including expected-value-range information in traditional parameter display formats affected subject performance. The second experiment determined if using a nontraditional parameter display format, which presented relative deviation from expected value, was better than traditional formats with expected-value ranges included. The inclusion of expected-value-range information onto traditional parameter formats was found to have essentially no effect. However, subjective results indicated support for including this information. The nontraditional column deviation parameter display format resulted in significantly fewer errors compared with traditional formats with expected-value-ranges included. In addition, error rates for the column deviation parameter display format remained stable as the scenario complexity increased, whereas error rates for the traditional parameter display formats with expected-value ranges increased. Subjective results also indicated that the subjects preferred this new format and thought that their performance was better with it. The column deviation parameter display format is recommended for display applications that require rapid recognition of out-of-tolerance conditions, especially for a large number of parameters.

  3. An Extreme-Value Approach to Anomaly Vulnerability Identification

    NASA Technical Reports Server (NTRS)

    Everett, Chris; Maggio, Gaspare; Groen, Frank

    2010-01-01

    The objective of this paper is to present a method for importance analysis in parametric probabilistic modeling where the result of interest is the identification of potential engineering vulnerabilities associated with postulated anomalies in system behavior. In the context of Accident Precursor Analysis (APA), under which this method has been developed, these vulnerabilities, designated as anomaly vulnerabilities, are conditions that produce high risk in the presence of anomalous system behavior. The method defines a parameter-specific Parameter Vulnerability Importance measure (PVI), which identifies anomaly risk-model parameter values that indicate the potential presence of anomaly vulnerabilities, and allows them to be prioritized for further investigation. This entails analyzing each uncertain risk-model parameter over its credible range of values to determine where it produces the maximum risk. A parameter that produces high system risk for a particular range of values suggests that the system is vulnerable to the modeled anomalous conditions, if indeed the true parameter value lies in that range. Thus, PVI analysis provides a means of identifying and prioritizing anomaly-related engineering issues that at the very least warrant improved understanding to reduce uncertainty, such that true vulnerabilities may be identified and proper corrective actions taken.
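
    A minimal sketch of a parameter-vulnerability scan in this spirit: each uncertain parameter is swept over its credible range while the others are held at nominal values, and the maximum risk produced is recorded as a PVI-like score. The toy risk model, parameter names, and ranges below are illustrative assumptions, not the paper's APA model.

    ```python
    # Hedged sketch of a parameter-vulnerability scan: sweep each uncertain
    # parameter over its credible range, record the maximum risk it can produce.
    import numpy as np

    def risk_model(p_leak, p_detect_fail, p_ignite):
        # Toy anomaly risk model: probability a leak goes undetected and ignites.
        return p_leak * p_detect_fail * p_ignite

    nominal = {"p_leak": 1e-3, "p_detect_fail": 1e-2, "p_ignite": 0.1}
    credible_range = {"p_leak": (1e-4, 1e-2),
                      "p_detect_fail": (1e-3, 0.3),
                      "p_ignite": (0.01, 0.5)}

    pvi = {}
    for name, (lo, hi) in credible_range.items():
        risks = []
        for value in np.geomspace(lo, hi, 50):
            args = dict(nominal, **{name: value})
            risks.append(risk_model(**args))
        pvi[name] = max(risks)  # a high value flags a potential anomaly vulnerability

    for name, value in sorted(pvi.items(), key=lambda kv: -kv[1]):
        print(f"{name}: max risk {value:.2e}")
    ```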

  4. Basic research on design analysis methods for rotorcraft vibrations

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1991-01-01

    The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.

  5. Practical identifiability analysis of a minimal cardiovascular system model.

    PubMed

    Pironet, Antoine; Docherty, Paul D; Dauby, Pierre C; Chase, J Geoffrey; Desaive, Thomas

    2017-01-17

    Parameters of mathematical models of the cardiovascular system can be used to monitor cardiovascular state, such as total stressed blood volume status, vessel elastance and resistance. To do so, the model parameters have to be estimated from data collected at the patient's bedside. This work considers a seven-parameter model of the cardiovascular system and investigates whether these parameters can be uniquely determined using indices derived from measurements of arterial and venous pressures, and stroke volume. An error vector defined the residuals between the simulated and reference values of the seven clinically available haemodynamic indices. The sensitivity of this error vector to each model parameter was analysed, as well as the collinearity between parameters. To assess practical identifiability of the model parameters, profile-likelihood curves were constructed for each parameter. Four of the seven model parameters were found to be practically identifiable from the selected data. The remaining three parameters were practically non-identifiable. Among these non-identifiable parameters, one could be decreased as much as possible. The other two non-identifiable parameters were inversely correlated, which prevented their precise estimation. This work presented the practical identifiability analysis of a seven-parameter cardiovascular system model, from limited clinical data. The analysis showed that three of the seven parameters were practically non-identifiable, thus limiting the use of the model as a monitoring tool. Slight changes in the time-varying function modeling cardiac contraction and use of larger values for the reference range of venous pressure made the model fully practically identifiable. Copyright © 2017. Published by Elsevier B.V.
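
    As a rough illustration of practical identifiability via profile likelihood (not the authors' seven-parameter cardiovascular model), the sketch below profiles one parameter of a hypothetical two-parameter exponential model: the parameter of interest is fixed on a grid while the remaining parameter is re-optimised, and the flatness of the resulting curve indicates (non-)identifiability.

    ```python
    # Minimal profile-likelihood sketch, assuming a hypothetical two-parameter model.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    t = np.linspace(0, 5, 40)
    true = dict(a=2.0, k=0.8)
    y_obs = true["a"] * np.exp(-true["k"] * t) + rng.normal(0, 0.05, t.size)

    def sse(params):
        a, k = params
        return np.sum((a * np.exp(-k * t) - y_obs) ** 2)

    def profile_k(k_grid):
        """Re-optimise the nuisance parameter 'a' for each fixed value of k."""
        return [minimize(lambda a: sse([a[0], k]), x0=[1.0]).fun for k in k_grid]

    k_grid = np.linspace(0.4, 1.2, 41)
    profile = profile_k(k_grid)
    # A sharply curved profile around its minimum indicates k is practically
    # identifiable; a flat profile indicates practical non-identifiability.
    print("profile minimum at k =", k_grid[int(np.argmin(profile))])
    ```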

  6. An analysis of sensitivity of CLIMEX parameters in mapping species potential distribution and the broad-scale changes observed with minor variations in parameters values: an investigation using open-field Solanum lycopersicum and Neoleucinodes elegantalis as an example

    NASA Astrophysics Data System (ADS)

    da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho

    2018-04-01

    A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying parameters having the most influence facilitates establishing the best values for parameters of models, providing useful implications in species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, comparisons were made in model performance by altering one parameter value at a time, in comparison to the best-fit parameter values. Parameters that were found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index has a major change through upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters were shown to have the greatest sensitivity, dependent on the suitability categories of each species in the study. Results enhance user understanding of which climatic factors had a greater impact on both species distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed by higher or lower values, compared to the best-fit parameter values. Thus, the sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables that are most sensitive.
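
    A compact one-at-a-time perturbation sketch in the same spirit (the parameter names and the suitability function below are illustrative stand-ins, not the actual CLIMEX model): each parameter is perturbed around its best-fit value and the resulting change in a scalar output is used to rank sensitivity.

    ```python
    # One-at-a-time sensitivity sketch: perturb each parameter, others at best fit.
    import numpy as np

    best_fit = {"DV0": 10.0, "DV1": 24.0, "SM0": 0.25, "SM1": 0.8}

    def model_output(params):
        # Stand-in for an Ecoclimatic-Index-style aggregate suitability score.
        return params["DV1"] - params["DV0"] + 10 * (params["SM1"] - params["SM0"])

    baseline = model_output(best_fit)
    sensitivity = {}
    for name, value in best_fit.items():
        changes = []
        for factor in (0.9, 1.1):  # +/-10 % perturbation
            perturbed = dict(best_fit, **{name: value * factor})
            changes.append(abs(model_output(perturbed) - baseline))
        sensitivity[name] = max(changes)

    # Parameters producing the largest output change are the "sensitive" ones.
    print(sorted(sensitivity.items(), key=lambda kv: -kv[1]))
    ```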

  7. Two statistics for evaluating parameter identifiability and error reduction

    USGS Publications Warehouse

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. ?? 2009 Elsevier B.V.
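
    A minimal sketch of the identifiability statistic under simple assumptions: J stands for the weighted sensitivity (Jacobian) matrix, the calibration solution space is spanned by the first k right singular vectors, and each parameter's identifiability is the direction cosine between its axis and its projection onto that space. The Jacobian here is random and purely illustrative.

    ```python
    # Sketch of SVD-based parameter identifiability from a weighted Jacobian.
    import numpy as np

    rng = np.random.default_rng(1)
    J = rng.normal(size=(30, 5))          # weighted sensitivities (observations x parameters)
    U, s, Vt = np.linalg.svd(J, full_matrices=False)

    k = 3                                  # number of solution-space dimensions retained
    V_sol = Vt[:k, :].T                    # columns span the calibration solution space

    # Direction cosine between each parameter axis and its projection onto the
    # solution space: 0 = completely non-identifiable, 1 = completely identifiable.
    identifiability = np.linalg.norm(V_sol, axis=1)
    print(identifiability)
    ```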

  8. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) analysis with CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
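
    A hedged sketch of CSS and PCC computed from a weighted Jacobian (random and purely illustrative here): CSS is a root-mean-square scaled sensitivity per parameter, and PCC follows from the parameter variance-covariance matrix (J^T J)^-1.

    ```python
    # Composite scaled sensitivities and parameter correlation coefficients.
    import numpy as np

    rng = np.random.default_rng(2)
    J = rng.normal(size=(50, 4))                   # weighted, scaled sensitivities

    css = np.sqrt(np.mean(J ** 2, axis=0))          # composite scaled sensitivity per parameter

    cov = np.linalg.inv(J.T @ J)                    # parameter variance-covariance matrix
    d = np.sqrt(np.diag(cov))
    pcc = cov / np.outer(d, d)                      # parameter correlation coefficients

    print("CSS:", css)
    print("max |PCC| off-diagonal:", np.max(np.abs(pcc - np.eye(pcc.shape[0]))))
    ```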

  9. DD3MAT - a code for yield criteria anisotropy parameters identification.

    NASA Astrophysics Data System (ADS)

    Barros, P. D.; Carvalho, P. D.; Alves, J. L.; Oliveira, M. C.; Menezes, L. F.

    2016-08-01

    This work presents the main strategies and algorithms adopted in the DD3MAT in-house code, specifically developed for identifying the anisotropy parameters. The algorithm adopted is based on the minimization of an error function, using a downhill simplex method. The set of experimental values can consider yield stresses and r-values obtained from in-plane tension, for different angles with the rolling direction (RD), the yield stress and r-value obtained for a biaxial stress state, and yield stresses from shear tests also performed at different angles to the RD. All these values can be defined for a specific value of plastic work. Moreover, it can also include the yield stresses obtained from in-plane compression tests. The anisotropy parameters are identified for an AA2090-T3 aluminium alloy, highlighting the importance of user intervention to improve the numerical fit.
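
    A minimal downhill-simplex identification sketch, assuming a toy directional yield-stress expression and synthetic "experimental" data (neither is DD3MAT's actual model): the error function is the sum of squared relative differences, minimised with Nelder-Mead.

    ```python
    # Downhill-simplex (Nelder-Mead) fit of illustrative anisotropy parameters.
    import numpy as np
    from scipy.optimize import minimize

    angles = np.deg2rad([0, 15, 30, 45, 60, 75, 90])
    sigma_exp = np.array([280., 272., 265., 260., 263., 270., 278.])  # MPa, synthetic data

    def sigma_model(theta, params):
        # Toy directional yield stress: a mean value plus two Fourier terms.
        s0, a2, a4 = params
        return s0 * (1 + a2 * np.cos(2 * theta) + a4 * np.cos(4 * theta))

    def error(params):
        return np.sum(((sigma_model(angles, params) - sigma_exp) / sigma_exp) ** 2)

    result = minimize(error, x0=[270.0, 0.0, 0.0], method="Nelder-Mead")
    print("identified parameters:", result.x, "error:", result.fun)
    ```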

  10. Methods for using groundwater model predictions to guide hydrogeologic data collection, with application to the Death Valley regional groundwater flow system

    USGS Publications Warehouse

    Tiedeman, C.R.; Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.

    2003-01-01

    Calibrated models of groundwater systems can provide substantial information for guiding data collection. This work considers using such models to guide hydrogeologic data collection for improving model predictions by identifying model parameters that are most important to the predictions. Identification of these important parameters can help guide collection of field data about parameter values and associated flow system features and can lead to improved predictions. Methods for identifying parameters important to predictions include prediction scaled sensitivities (PSS), which account for uncertainty on individual parameters as well as prediction sensitivity to parameters, and a new "value of improved information" (VOII) method presented here, which includes the effects of parameter correlation in addition to individual parameter uncertainty and prediction sensitivity. In this work, the PSS and VOII methods are demonstrated and evaluated using a model of the Death Valley regional groundwater flow system. The predictions of interest are advective transport paths originating at sites of past underground nuclear testing. Results show that for two paths evaluated the most important parameters include a subset of five or six of the 23 defined model parameters. Some of the parameters identified as most important are associated with flow system attributes that do not lie in the immediate vicinity of the paths. Results also indicate that the PSS and VOII methods can identify different important parameters. Because the methods emphasize somewhat different criteria for parameter importance, it is suggested that parameters identified by both methods be carefully considered in subsequent data collection efforts aimed at improving model predictions.

  11. Permutation on hybrid natural inflation

    NASA Astrophysics Data System (ADS)

    Carone, Christopher D.; Erlich, Joshua; Ramos, Raymundo; Sher, Marc

    2014-09-01

    We analyze a model of hybrid natural inflation based on the smallest non-Abelian discrete group S3. Leading invariant terms in the scalar potential have an accidental global symmetry that is spontaneously broken, providing a pseudo-Goldstone boson that is identified as the inflaton. The S3 symmetry restricts both the form of the inflaton potential and the couplings of the inflaton field to the waterfall fields responsible for the end of inflation. We identify viable points in the model parameter space. Although the power in tensor modes is small in most of the parameter space of the model, we identify parameter choices that yield potentially observable values of r without super-Planckian initial values of the inflaton field.

  12. Detecting influential observations in nonlinear regression modeling of groundwater flow

    USGS Publications Warehouse

    Yager, Richard M.

    1998-01-01

    Nonlinear regression is used to estimate optimal parameter values in models of groundwater flow to ensure that differences between predicted and observed heads and flows do not result from nonoptimal parameter values. Parameter estimates can be affected, however, by observations that disproportionately influence the regression, such as outliers that exert undue leverage on the objective function. Certain statistics developed for linear regression can be used to detect influential observations in nonlinear regression if the models are approximately linear. This paper discusses the application of Cook's D, which measures the effect of omitting a single observation on a set of estimated parameter values, and the statistical parameter DFBETAS, which quantifies the influence of an observation on each parameter. The influence statistics were used to (1) identify the influential observations in the calibration of a three-dimensional, groundwater flow model of a fractured-rock aquifer through nonlinear regression, and (2) quantify the effect of omitting influential observations on the set of estimated parameter values. Comparison of the spatial distribution of Cook's D with plots of model sensitivity shows that influential observations correspond to areas where the model heads are most sensitive to certain parameters, and where predicted groundwater flow rates are largest. Five of the six discharge observations were identified as influential, indicating that reliable measurements of groundwater flow rates are valuable data in model calibration. DFBETAS were computed and examined for an alternative model of the aquifer system to identify a parameterization error in the model design that resulted in overestimation of the effect of anisotropy on horizontal hydraulic conductivity.
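
    A brute-force sketch of Cook's D and DFBETAS for a linear(ised) regression, computed by leave-one-out refitting on synthetic data with one injected outlier; as a simplification, the full-sample residual variance is used in the DFBETAS denominator.

    ```python
    # Leave-one-out influence statistics (Cook's D and simplified DFBETAS).
    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 30, 2
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, n)
    y[5] += 4.0                                        # inject one influential outlier

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - p)
    XtX = X.T @ X
    diag_inv = np.diag(np.linalg.inv(XtX))

    cooks_d = np.empty(n)
    dfbetas = np.empty((n, p))
    for i in range(n):
        keep = np.arange(n) != i
        beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        diff = beta_hat - beta_i
        cooks_d[i] = diff @ XtX @ diff / (p * s2)      # Cook's distance
        dfbetas[i] = diff / np.sqrt(s2 * diag_inv)     # simplified DFBETAS

    print("most influential observation:", int(np.argmax(cooks_d)))
    ```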

  13. Automatic Cloud Classification from Multi-Spectral Satellite Data Over Oceanic Regions

    DTIC Science & Technology

    1992-01-14

    For two-class parameters, the first two colors used are blue for low values and dark green for high parameter values. If a third class is identified, intermediate values are shown in yellow between the low blue and high dark green classes. The color sequence blue-yellow-light green-dark green then characterizes low to high parameter values; for example, superpixels colored from blue through light green to dark green correspond to increasing (low to high) variability in their altitude (see Table V.3).

  14. Robust design of configurations and parameters of adaptable products

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua

    2014-03-01

    An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing the environmental impact through replacement of multiple different products with single adaptable ones. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and parameter values of the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.

  15. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

    When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
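
    A toy sympy illustration of the idea (not the paper's algorithm): the observable input-output coefficients are assumed to be c1 = p1*p3 and c2 = p2*p3, so the individual parameters are unidentifiable while the combinations p1*p3 and p2*p3 are identifiable; a lexicographic Groebner basis and a symbolic solve make this explicit.

    ```python
    # Groebner-basis view of identifiable parameter combinations (toy example).
    import sympy as sp

    p1, p2, p3, c1, c2 = sp.symbols("p1 p2 p3 c1 c2", positive=True)
    equations = [p1 * p3 - c1, p2 * p3 - c2]   # assumed input-output coefficient relations

    # Lexicographic Groebner basis with the parameters ordered first: the surviving
    # relations show how the parameters are tied to the observable coefficients.
    G = sp.groebner(equations, p1, p2, p3, order="lex")
    print(G)

    # Solving confirms p3 is free, i.e. a one-parameter family of indistinguishable models.
    print(sp.solve(equations, [p1, p2, p3], dict=True))
    ```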

  16. Histogram analysis parameters identify multiple associations between DWI and DCE MRI in head and neck squamous cell carcinoma.

    PubMed

    Meyer, Hans Jonas; Leifels, Leonard; Schob, Stefan; Garnov, Nikita; Surov, Alexey

    2018-01-01

    Nowadays, multiparametric investigations of head and neck squamous cell carcinoma (HNSCC) are established. These approaches can better characterize tumor biology and behavior. Diffusion weighted imaging (DWI) can, by means of the apparent diffusion coefficient (ADC), quantitatively characterize different tissue compartments. Dynamic contrast-enhanced magnetic resonance imaging (DCE MRI) reflects perfusion and vascularization of tissues. Recently, histogram analysis of images has emerged as a diagnostic approach to data acquisition that can provide more information about tissue heterogeneity. The purpose of this study was to analyze possible associations between DWI and DCE parameters derived from histogram analysis in patients with HNSCC. Overall, 34 patients, 9 women and 25 men, mean age 56.7±10.2 years, with different HNSCC were involved in the study. DWI was obtained using an axial echo planar imaging sequence with b-values of 0 and 800 s/mm². A dynamic T1w DCE sequence after intravenous application of contrast medium was performed for estimation of the following perfusion parameters: volume transfer constant (Ktrans), volume of the extravascular extracellular leakage space (Ve), and diffusion of contrast medium from the extravascular extracellular leakage space back to the plasma (Kep). Both ADC and perfusion parameter maps were processed offline in DICOM format with a custom-made Matlab-based application. Thereafter, polygonal ROIs were manually drawn on the transferred maps on each slice. For every parameter, the mean, maximal, minimal, and median values, as well as the 10th, 25th, 75th, and 90th percentiles, kurtosis, skewness, and entropy were estimated. Correlation analysis identified multiple statistically significant correlations between the investigated parameters. Ve-related parameters correlated well with different ADC values. In particular, the 10th and 75th percentiles, mode, and median values showed stronger correlations than the other parameters; the calculated correlation coefficients ranged from 0.62 to 0.69. Furthermore, Ktrans-related parameters showed multiple slight to moderate significant correlations with different ADC values. The strongest correlations were identified between ADC P75 and Ktrans min (p=0.58, P=0.0007), and between ADC P75 and Ktrans P10 (p=0.56, P=0.001). Only four Kep-related parameters correlated statistically significantly with ADC fractions. The strongest correlation was found between Kep max and ADC mode (p=-0.47, P=0.008). Multiple statistically significant correlations between DWI and DCE MRI parameters derived from histogram analysis were identified in HNSCC. Copyright © 2017 Elsevier Inc. All rights reserved.
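
    A small sketch of the histogram-feature extraction step, using synthetic "ADC" and "Ktrans" samples in place of real ROI voxel values; the feature list mirrors the ones named above (percentiles, skewness, kurtosis, entropy), and a Spearman correlation is shown purely for illustration.

    ```python
    # ROI histogram features and a Spearman correlation on synthetic parameter values.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    adc = rng.normal(1.1, 0.2, 500)                    # illustrative ADC values in an ROI
    ktrans = 0.4 * adc + rng.normal(0, 0.1, 500)       # illustrative Ktrans values

    def histogram_features(values):
        counts, _ = np.histogram(values, bins=32)
        return {
            "mean": np.mean(values),
            "median": np.median(values),
            "p10": np.percentile(values, 10),
            "p75": np.percentile(values, 75),
            "skewness": stats.skew(values),
            "kurtosis": stats.kurtosis(values),
            "entropy": stats.entropy(counts + 1e-12),
        }

    print(histogram_features(adc))
    rho, p_value = stats.spearmanr(adc, ktrans)        # correlation between the two samples
    print(f"Spearman rho = {rho:.2f}, P = {p_value:.3g}")
    ```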

  17. Sphericity determination using resonant ultrasound spectroscopy

    DOEpatents

    Dixon, Raymond D.; Migliori, Albert; Visscher, William M.

    1994-01-01

    A method is provided for grading production quantities of spherical objects, such as roller balls for bearings. A resonant ultrasound spectrum (RUS) is generated for each spherical object and a set of degenerate sphere-resonance frequencies is identified. From the degenerate sphere-resonance frequencies and known relationships between degenerate sphere-resonance frequencies and Poisson's ratio, a Poisson's ratio can be determined, along with a "best" spherical diameter, to form spherical parameters for the sphere. From the RUS, fine-structure resonant frequency spectra are identified for each degenerate sphere-resonance frequency previously selected. From each fine-structure spectrum and associated sphere parameter values an asphericity value is determined. The asphericity value can then be compared with predetermined values to provide a measure for accepting or rejecting the sphere.

  18. Sphericity determination using resonant ultrasound spectroscopy

    DOEpatents

    Dixon, R.D.; Migliori, A.; Visscher, W.M.

    1994-10-18

    A method is provided for grading production quantities of spherical objects, such as roller balls for bearings. A resonant ultrasound spectrum (RUS) is generated for each spherical object and a set of degenerate sphere-resonance frequencies is identified. From the degenerate sphere-resonance frequencies and known relationships between degenerate sphere-resonance frequencies and Poisson's ratio, a Poisson's ratio can be determined, along with a 'best' spherical diameter, to form spherical parameters for the sphere. From the RUS, fine-structure resonant frequency spectra are identified for each degenerate sphere-resonance frequency previously selected. From each fine-structure spectrum and associated sphere parameter values an asphericity value is determined. The asphericity value can then be compared with predetermined values to provide a measure for accepting or rejecting the sphere. 14 figs.

  19. Mechanistic analysis of multi-omics datasets to generate kinetic parameters for constraint-based metabolic models.

    PubMed

    Cotten, Cameron; Reed, Jennifer L

    2013-01-30

    Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.
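
    A hedged sketch of turning a kinetic rate law into flux bounds for a constraint-based model: a Michaelis-Menten rate is evaluated over assumed measured ranges of enzyme and substrate concentration, and the minimum and maximum become lower and upper bounds on that reaction's flux. All numbers are illustrative, not values from the study.

    ```python
    # Kinetically derived flux bounds from a Michaelis-Menten rate law (illustrative).
    import numpy as np

    def michaelis_menten(kcat, enzyme, substrate, km):
        return kcat * enzyme * substrate / (km + substrate)

    kcat, km = 100.0, 0.5                              # 1/s, mM (assumed kinetic parameters)
    enzyme_range = np.linspace(0.8e-3, 1.2e-3, 20)     # mM, assumed measured range
    substrate_range = np.linspace(0.2, 1.0, 20)        # mM, assumed measured range

    E, S = np.meshgrid(enzyme_range, substrate_range)
    v = michaelis_menten(kcat, E, S, km)

    lower_bound, upper_bound = v.min(), v.max()
    print(f"kinetically derived flux bounds: [{lower_bound:.3f}, {upper_bound:.3f}] mM/s")
    # These bounds would then replace the default bounds of the corresponding
    # reaction in the constraint-based (FBA) model.
    ```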

  20. Mechanistic analysis of multi-omics datasets to generate kinetic parameters for constraint-based metabolic models

    PubMed Central

    2013-01-01

    Background Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. Results In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. Conclusions This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets. PMID:23360254

  1. EPR and optical absorption studies of paramagnetic molecular ion (VO2+) in Lithium Sodium Acid Phthalate single crystal

    NASA Astrophysics Data System (ADS)

    Subbulakshmi, N.; Kumar, M. Saravana; Sheela, K. Juliet; Krishnan, S. Radha; Shanmugam, V. M.; Subramanian, P.

    2017-12-01

    Electron Paramagnetic Resonance (EPR) spectroscopic studies of VO2+ ions as a paramagnetic impurity in Lithium Sodium Acid Phthalate (LiNaP) single crystals were carried out at room temperature at X-band microwave frequency. The lattice parameter values for the chosen system were obtained from a single-crystal X-ray diffraction study. Of the many hyperfine lines in the EPR spectra, only two sets are reported from the EPR data. The principal values of the g and A tensors are evaluated for the two different VO2+ sites, I and II, and indicate an orthorhombic crystalline field around the VO2+ ion. The site II VO2+ ion is identified as substitutional at the Na1 location, while site I is identified as an interstitial location. For both sites in LiNaP, VO2+ is found in octahedral coordination with tetragonal distortion, as seen from the spin Hamiltonian parameter values. The ground state of the vanadyl ion in the LiNaP single crystal is dxy. Using optical absorption data, the octahedral and tetragonal parameters are calculated. By correlating the EPR and optical data, the molecular orbital bonding parameters are discussed for both sites.

  2. Control of complex dynamics and chaos in distributed parameter systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakravarti, S.; Marek, M.; Ray, W.H.

    This paper discusses a methodology for controlling complex dynamics and chaos in distributed parameter systems. The reaction-diffusion system with Brusselator kinetics, where the torus-doubling or quasi-periodic (two characteristic incommensurate frequencies) route to chaos exists in a defined range of parameter values, is used as an example. Poincare maps are used for characterization of quasi-periodic and chaotic attractors. The dominant modes or topos, which are inherent properties of the system, are identified by means of the Singular Value Decomposition. Tested modal feedback control schemas based on identified dominant spatial modes confirm the possibility of stabilization of simple quasi-periodic trajectories in the complex quasi-periodic or chaotic spatiotemporal patterns.

  3. A modified Leslie-Gower predator-prey interaction model and parameter identifiability

    NASA Astrophysics Data System (ADS)

    Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed

    2018-01-01

    In this work, bifurcation and a systematic approach for estimating the identifiable parameters of a modified Leslie-Gower predator-prey system with a Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is established by applying the fluctuation lemma. The system undergoes Hopf bifurcation with respect to the intrinsic growth rate of predators (s) and the prey reserve (m). The stability of the Hopf bifurcation is also examined by calculating the Lyapunov number. A sensitivity analysis of the model system with respect to all variables is performed, which also supports the theoretical study. To estimate the unknown parameters from data, an optimization procedure (a pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are also compared with true noise-free data. The system dynamics with the true set of parameter values are found to be similar to those with the estimated values. Numerical simulations are presented to substantiate the analytical findings.
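
    A hedged sketch of the parameter-recovery idea: a modified Leslie-Gower model with a Crowley-Martin functional response and prey refuge is simulated, synthetic noisy data are generated, and one parameter (the predator growth rate s) is recovered by a simple random search. The equations and numbers are illustrative assumptions, not the paper's exact system or algorithm.

    ```python
    # Random-search recovery of one parameter of an illustrative predator-prey model.
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, z, r, s, a, b, c, m):
        x, y = z
        refuge = (1 - m) * x                                    # prey available to predators
        cm = a * refuge * y / ((1 + b * refuge) * (1 + c * y))  # Crowley-Martin response
        dx = r * x * (1 - x) - cm
        dy = s * y * (1 - y / (refuge + 0.1))                   # modified Leslie-Gower growth
        return [dx, dy]

    true = dict(r=1.0, s=0.4, a=1.5, b=0.5, c=0.3, m=0.2)
    t_eval = np.linspace(0, 30, 60)
    sol = solve_ivp(rhs, (0, 30), [0.5, 0.3], t_eval=t_eval, args=tuple(true.values()))
    data = sol.y + np.random.default_rng(5).normal(0, 0.01, sol.y.shape)

    def misfit(s_guess):
        p = dict(true, s=s_guess)
        sim = solve_ivp(rhs, (0, 30), [0.5, 0.3], t_eval=t_eval, args=tuple(p.values()))
        return np.sum((sim.y - data) ** 2)

    candidates = np.random.default_rng(6).uniform(0.1, 1.0, 200)   # pseudo-random search
    print("estimated s:", min(candidates, key=misfit))
    ```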

  4. Bayesian inference to identify parameters in viscoelasticity

    NASA Astrophysics Data System (ADS)

    Rappel, Hussein; Beex, Lars A. A.; Bordas, Stéphane P. A.

    2017-08-01

    This contribution discusses Bayesian inference (BI) as an approach to identify parameters in viscoelasticity. The aims are: (i) to show that the prior has a substantial influence for viscoelasticity, (ii) to show that this influence decreases for an increasing number of measurements and (iii) to show how different types of experiments influence the identified parameters and their uncertainties. The standard linear solid model is the material description of interest and a relaxation test, a constant strain-rate test and a creep test are the tensile experiments focused on. The experimental data are artificially created, allowing us to make a one-to-one comparison between the input parameters and the identified parameter values. Besides dealing with the aforementioned issues, we believe that this contribution forms a comprehensible start for those interested in applying BI in viscoelasticity.
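
    A grid-based Bayesian sketch for a single parameter of the standard linear solid model (the relaxation time), assuming a synthetic relaxation test and a deliberately biased Gaussian prior; it illustrates the prior's influence shrinking as measurements are added, not the authors' full BI procedure.

    ```python
    # Grid posterior for the relaxation time of a standard linear solid (illustrative).
    import numpy as np

    rng = np.random.default_rng(7)
    E_inf, E_1, tau_true, sigma = 1.0, 2.0, 5.0, 0.05
    t = np.linspace(0, 20, 15)                           # relaxation-test times
    data = E_inf + E_1 * np.exp(-t / tau_true) + rng.normal(0, sigma, t.size)

    tau_grid = np.linspace(1.0, 10.0, 400)
    prior = np.exp(-0.5 * ((tau_grid - 3.0) / 1.0) ** 2)  # deliberately biased prior

    def log_likelihood(tau):
        model = E_inf + E_1 * np.exp(-t / tau)
        return -0.5 * np.sum((data - model) ** 2) / sigma ** 2

    ll = np.array([log_likelihood(tau) for tau in tau_grid])
    post = prior * np.exp(ll - ll.max())                 # subtract max for numerical stability
    post /= np.trapz(post, tau_grid)
    print("posterior mean of tau:", np.trapz(tau_grid * post, tau_grid))
    # With few time points the biased prior pulls the estimate towards 3.0;
    # adding more measurements moves the posterior towards the true value of 5.0.
    ```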

  5. Parameters Identification of Interface Friction Model for Ceramic Matrix Composites Based on Stress-Strain Response

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Gao, Xiguang; Song, Yingdong

    2017-10-01

    An approach for identifying the parameters of an interface friction model for ceramic matrix composites from the stress-strain response was developed. The stress distribution of fibers in the interface slip region and the intact region of the damaged composite was determined by adopting the interface friction model. Relations between the maximum strain and secant moduli of the hysteresis loop and, respectively, the interface shear stress and interface de-bonding stress were established with a combined symbolic-graphic method. By comparing the experimental strain and secant moduli of the hysteresis loop with the computed values, the interface shear stress and interface de-bonding stress corresponding to the first cycle were identified. Substituting the identified parameters into the interface friction model, the stress-strain curves were predicted, and the predictions fit the experiments well. In addition, the influence of the number of data points on the identified values of the interface parameters was discussed, and the approach was compared with the method based on the area of the hysteresis loop.

  6. Impacts of different types of measurements on estimating unsaturated flow parameters

    NASA Astrophysics Data System (ADS)

    Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru

    2015-05-01

    This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
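
    A minimal sketch of one stochastic EnKF analysis step for parameter estimation, using a linear stand-in forward model rather than an unsaturated-flow simulator; the parameter ensemble is updated through the parameter-observation cross-covariance.

    ```python
    # One stochastic EnKF analysis step for parameter estimation (illustrative).
    import numpy as np

    rng = np.random.default_rng(8)
    n_ens, n_par, n_obs = 100, 3, 2

    H = np.array([[1.0, 0.5, 0.0],
                  [0.0, 1.0, 2.0]])                    # stand-in forward model (parameters -> observations)
    theta_true = np.array([1.0, -0.5, 0.3])
    obs_err = 0.05
    y_obs = H @ theta_true + rng.normal(0, obs_err, n_obs)

    theta = rng.normal(0, 1, (n_ens, n_par))           # prior parameter ensemble
    y_pred = theta @ H.T                               # predicted observations per member

    C_ty = np.cov(theta.T, y_pred.T)[:n_par, n_par:]   # parameter-observation cross-covariance
    C_yy = np.cov(y_pred.T) + obs_err**2 * np.eye(n_obs)
    K = C_ty @ np.linalg.inv(C_yy)                     # Kalman gain

    perturbed_obs = y_obs + rng.normal(0, obs_err, (n_ens, n_obs))
    theta_post = theta + (perturbed_obs - y_pred) @ K.T

    print("posterior mean:", theta_post.mean(axis=0), "truth:", theta_true)
    ```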

  7. Impacts of Different Types of Measurements on Estimating Unsaturatedflow Parameters

    NASA Astrophysics Data System (ADS)

    Shi, L.

    2015-12-01

    This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.

  8. Structural and Practical Identifiability Issues of Immuno-Epidemiological Vector-Host Models with Application to Rift Valley Fever.

    PubMed

    Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia

    2016-09-01

    In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in developing multi-scale models which explain multi-scale data.

  9. Investigating the relationship between a soils classification and the spatial parameters of a conceptual catchment-scale hydrological model

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.; Lilly, A.

    2001-10-01

    There are now many examples of hydrological models that utilise the capabilities of Geographic Information Systems to generate spatially distributed predictions of behaviour. However, the spatial variability of hydrological parameters relating to distributions of soils and vegetation can be hard to establish. In this paper, the relationship between a soil hydrological classification Hydrology of Soil Types (HOST) and the spatial parameters of a conceptual catchment-scale model is investigated. A procedure involving inverse modelling using Monte-Carlo simulations on two catchments is developed to identify relative values for soil related parameters of the DIY model. The relative values determine the internal variability of hydrological processes as a function of the soil type. For three out of the four soil parameters studied, the variability between HOST classes was found to be consistent across two catchments when tested independently. Problems in identifying values for the fourth 'fast response distance' parameter have highlighted a potential limitation with the present structure of the model. The present assumption that this parameter can be related simply to soil type rather than topography appears to be inadequate. With the exclusion of this parameter, calibrated parameter sets from one catchment can be converted into equivalent parameter sets for the alternate catchment on the basis of their HOST distributions, to give a reasonable simulation of flow. Following further testing on different catchments, and modifications to the definition of the fast response distance parameter, the technique provides a methodology whereby it is possible to directly derive spatial soil parameters for new catchments.

  10. Agreement Between Institutional Measurements and Treatment Planning System Calculations for Basic Dosimetric Parameters as Measured by the Imaging and Radiation Oncology Core-Houston

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, James R.; Followill, David S.; Imaging and Radiation Oncology Core-Houston, The University of Texas Health Science Center-Houston, Houston, Texas

    Purpose: To compare radiation machine measurement data collected by the Imaging and Radiation Oncology Core at Houston (IROC-H) with institutional treatment planning system (TPS) values, to identify parameters with large differences in agreement; the findings will help institutions focus their efforts to improve the accuracy of their TPS models. Methods and Materials: Between 2000 and 2014, IROC-H visited more than 250 institutions and conducted independent measurements of machine dosimetric data points, including percentage depth dose, output factors, off-axis factors, multileaf collimator small fields, and wedge data. We compared these data with the institutional TPS values for the same points by energy, class, and parameter to identify differences and similarities using criteria involving both the medians and standard deviations for Varian linear accelerators. Distributions of differences between machine measurements and institutional TPS values were generated for basic dosimetric parameters. Results: On average, intensity modulated radiation therapy–style and stereotactic body radiation therapy–style output factors and upper physical wedge output factors were the most problematic. Percentage depth dose, jaw output factors, and enhanced dynamic wedge output factors agreed best between the IROC-H measurements and the TPS values. Although small differences were shown between 2 common TPS systems, neither was superior to the other. Parameter agreement was constant over time from 2000 to 2014. Conclusions: Differences in basic dosimetric parameters between machine measurements and TPS values vary widely depending on the parameter, although agreement does not seem to vary by TPS and has not changed over time. Intensity modulated radiation therapy–style output factors, stereotactic body radiation therapy–style output factors, and upper physical wedge output factors had the largest disagreement and should be carefully modeled to ensure accuracy.

  11. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Although the Taguchi method substantially reduces the defects, a genetic algorithm is then applied to the Taguchi-optimized parameters in order to approach zero defects during the processes.

  12. 10 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...

  13. 10 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...

  14. 10 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...

  15. 10 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...

  16. 10 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...

  17. Cutoffs and cardiovascular risk factors associated with neck circumference among community-dwelling elderly adults: a cross-sectional study.

    PubMed

    Coelho, Hélio José; Sampaio, Ricardo Aurélio Carvalho; Gonçalvez, Ivan de Oliveira; Aguiar, Samuel da Silva; Palmeira, Rafael; Oliveira, José Fernando de; Asano, Ricardo Yukio; Sampaio, Priscila Yukari Sewo; Uchida, Marco Carlos

    2016-01-01

    In elderly people, measurement of several anthropometric parameters may present complications. Although neck circumference measurements seem to avoid these issues, the cutoffs and cardiovascular risk factors associated with this parameter among elderly people remain unknown. This study was developed to identify the cutoff values and cardiovascular risk factors associated with neck circumference measurements among elderly people. Cross-sectional study conducted in two community centers for elderly people. 435 elderly adults (371 women and 64 men) were recruited. These volunteers underwent morphological evaluations (body mass index and waist, hip, and neck circumferences) and hemodynamic evaluations (blood pressure values and heart rate). Receiver operating characteristic curve analyses were used to determine the predictive validity of cutoff values for neck circumference, for identifying overweight/obesity. Multivariate analysis was used to identify cardiovascular risk factors associated with large neck circumference. Cutoff values for neck circumference (men = 40.5 cm and women = 35.7 cm), for detection of obese older adults according to body mass index, were identified. After a second analysis, large neck circumference was shown to be associated with elevated body mass index in men; and elevated body mass index, blood pressure values, prevalence of type 2 diabetes and hypertension in women. The data indicate that neck circumference can be used as a screening tool to identify overweight/obesity in older people. Moreover, large neck circumference values may be associated with cardiovascular risk factors.

  18. Effect of the initial configuration for user-object reputation systems

    NASA Astrophysics Data System (ADS)

    Wu, Ying-Ying; Guo, Qiang; Liu, Jian-Guo; Zhang, Yi-Cheng

    2018-07-01

    Accurately identifying user reputation is important for online social systems. For different values of the fair rating parameter q, we change the parameter values α and β of the beta probability distribution (RBPD) method for ranking online user reputation and investigate how the initial configuration of the RBPD method affects online user ranking performance. Experimental results for the Netflix and MovieLens data sets show that when the parameter q equals 0.8 or 0.9, the AUC accuracy value increases by about 4.5% and 3.5%, respectively, for the Netflix data set, while the AUC value increases by about 1.5% for the MovieLens data set when q is 0.9. Furthermore, we investigate how the AUC value evolves for different α and β, and find that as the number of rating records increases, the AUC value increases by about 0.2 and 0.16 for the Netflix and MovieLens data sets, respectively, indicating that online users' reputations increase as they rate more and more objects.
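
    As an illustration of the general idea only (the fairness rule and update below are assumptions, not the paper's exact RBPD formulation), a user's reputation can be scored as the mean of a beta posterior whose counts are the user's fair and unfair ratings:

```python
import numpy as np

def user_reputation(ratings, consensus, alpha=1.0, beta=1.0, q=0.8):
    """ratings/consensus: a user's ratings and the objects' consensus scores;
    a rating counts as 'fair' if it lies within (1 - q) of the consensus."""
    fair = np.abs(ratings - consensus) <= (1.0 - q)
    n_fair, n_unfair = fair.sum(), (~fair).sum()
    # mean of Beta(alpha + n_fair, beta + n_unfair)
    return (alpha + n_fair) / (alpha + beta + n_fair + n_unfair)

print(user_reputation(np.array([4.0, 5.0, 2.0]), np.array([4.2, 4.8, 4.0])))
```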

  19. Comparison between manual scaling and Autoscala automatic scaling applied to Sodankylä Geophysical Observatory ionograms

    NASA Astrophysics Data System (ADS)

    Enell, Carl-Fredrik; Kozlovsky, Alexander; Turunen, Tauno; Ulich, Thomas; Välitalo, Sirkku; Scotto, Carlo; Pezzopane, Michael

    2016-03-01

    This paper presents a comparison between standard ionospheric parameters manually and automatically scaled from ionograms recorded at the high-latitude Sodankylä Geophysical Observatory (SGO, ionosonde SO166, 64.1° geomagnetic latitude), located in the vicinity of the auroral oval. The study is based on 2610 ionograms recorded during the period June-December 2013. The automatic scaling was performed by means of the Autoscala software. A few typical examples are shown to outline the method, and statistics are presented regarding the differences between manually and automatically scaled values of F2, F1, E and sporadic E (Es) layer parameters. We draw the following conclusions: (1) The F2 parameters scaled by Autoscala, foF2 and M(3000)F2, are reliable. (2) F1 is identified by Autoscala in significantly fewer cases (about 50%) than in the manual routine, but when identified, the values of foF1 are reliable. (3) Autoscala frequently (in 30% of cases) detects an E layer when the manual scaling process does not; when identified by both methods, the Autoscala E-layer parameters are close to those manually scaled, with foE agreeing to within 0.4 MHz. (4) Es and its parameters identified by Autoscala differ in many cases from those of the manual scaling; scaling of Es at auroral latitudes is often a difficult task.

  20. Micromechanical Modeling of Storage Particles in Lithium Ion Batteries

    NASA Astrophysics Data System (ADS)

    Purkayastha, Rajlakshmi Tarun

    The effect of stress on storage particles within a lithium ion battery, while acknowledged, is not well understood. In this work three non-dimensional parameters were identified which govern the stress response within a spherical storage particle. These parameters are developed using material properties such as the diffusion coefficient, particle radius, partial molar volume and Young's modulus. Stress maps are then generated for various values of these parameters at fixed rates of insertion, applying boundary conditions similar to those found in a battery. Stress and concentration profiles for various values of these parameters show that the coupling between stress and concentration is magnified depending on the values of the parameters. These maps can be used for different materials, depending on the value of the dimensionless parameters. The value of maximum stress generated is calculated for extraction as well as insertion of lithium into the particle. The model was then used to study ellipsoidal particles in order to ascertain the effect of geometry on the maximum stress within the particle. By performing a parameter study, we can identify those materials for which particular aspect ratios of ellipsoids are more beneficial in terms of reducing stress. We find that the stress peaks at certain aspect ratios, mostly at 2 and 1/2. A parameter study was also performed on cubic particles. The values of maximum stresses for both insertion and extraction of lithium were plotted as contour plots. It was seen that the material parameters influenced the location of the maximum stress, with the maximum stress occurring either at the center of the edge between two faces or at the center of a face. Newer materials such as silicon are being considered as lithium storage materials for batteries due to their higher capacity. Their tendency to rapidly lose capacity in a short period of time has led to a variety of designs, such as the use of carbon nanotubes or coatings, intended to mitigate the large expansion and stresses that lead to spalling of the material. We therefore extended the results for spherical storage particles to include the presence of an additional layer of material surrounding the storage particle. We performed a parameter study to see which material properties are most beneficial in reducing stresses within the particle, and the results were tabulated. It was seen that thicker layers can mitigate the value of the maximum stresses. A simple fracture analysis was carried out and the material parameters most likely to cause crack growth were identified. Finally, an integrated 2-D model of a lithium ion battery was developed to study the mechanical stress in storage particles as a function of material properties. The effect of morphology on the stress and lithium concentration is studied for the case of extraction of lithium in terms of the previously developed non-dimensional parameters. Particles were studied both functioning in isolation and in closely packed systems. The results show that the particle distance from the separator, in combination with the material properties of the particle, is critical in predicting the stress generated within the particle.

  1. Quantitative Analysis of Swallowing Function Between Dysphagia Patients and Healthy Subjects Using High-Resolution Manometry

    PubMed Central

    2017-01-01

    Objective To compare swallowing function between healthy subjects and patients with pharyngeal dysphagia using high resolution manometry (HRM) and to evaluate the usefulness of HRM for detecting pharyngeal dysphagia. Methods Seventy-five patients with dysphagia and 28 healthy subjects were included in this study. Diagnosis of dysphagia was confirmed by videofluoroscopy. HRM was performed to measure pressure and timing information at the velopharynx (VP), tongue base (TB), and upper esophageal sphincter (UES). HRM parameters were compared between the dysphagia and healthy groups. Optimal threshold values of significant HRM parameters for dysphagia were determined. Results VP maximal pressure, TB maximal pressure, UES relaxation duration, and UES resting pressure were lower in the dysphagia group than in the healthy group. UES minimal pressure was higher in the dysphagia group than in the healthy group. Receiver operating characteristic (ROC) analyses were conducted to validate optimal threshold values for significant HRM parameters to identify patients with pharyngeal dysphagia. With maximal VP pressure at a threshold value of 144.0 mmHg, dysphagia was identified with 96.4% sensitivity and 74.7% specificity. With maximal TB pressure at a threshold value of 158.0 mmHg, dysphagia was identified with 96.4% sensitivity and 77.3% specificity. At a threshold value of 2.0 mmHg for UES minimal pressure, dysphagia was diagnosed at 74.7% sensitivity and 60.7% specificity. Lastly, UES relaxation duration of <0.58 seconds had 85.7% sensitivity and 65.3% specificity, and UES resting pressure of <75.0 mmHg had 89.3% sensitivity and 90.7% specificity for identifying dysphagia. Conclusion We present evidence that HRM could be a useful evaluation tool for detecting pharyngeal dysphagia. PMID:29201816

  2. Quantitative Analysis of Swallowing Function Between Dysphagia Patients and Healthy Subjects Using High-Resolution Manometry.

    PubMed

    Park, Chul-Hyun; Kim, Don-Kyu; Lee, Yong-Taek; Yi, Youbin; Lee, Jung-Sang; Kim, Kunwoo; Park, Jung Ho; Yoon, Kyung Jae

    2017-10-01

    To compare swallowing function between healthy subjects and patients with pharyngeal dysphagia using high resolution manometry (HRM) and to evaluate the usefulness of HRM for detecting pharyngeal dysphagia. Seventy-five patients with dysphagia and 28 healthy subjects were included in this study. Diagnosis of dysphagia was confirmed by videofluoroscopy. HRM was performed to measure pressure and timing information at the velopharynx (VP), tongue base (TB), and upper esophageal sphincter (UES). HRM parameters were compared between the dysphagia and healthy groups. Optimal threshold values of significant HRM parameters for dysphagia were determined. VP maximal pressure, TB maximal pressure, UES relaxation duration, and UES resting pressure were lower in the dysphagia group than in the healthy group. UES minimal pressure was higher in the dysphagia group than in the healthy group. Receiver operating characteristic (ROC) analyses were conducted to validate optimal threshold values for significant HRM parameters to identify patients with pharyngeal dysphagia. With maximal VP pressure at a threshold value of 144.0 mmHg, dysphagia was identified with 96.4% sensitivity and 74.7% specificity. With maximal TB pressure at a threshold value of 158.0 mmHg, dysphagia was identified with 96.4% sensitivity and 77.3% specificity. At a threshold value of 2.0 mmHg for UES minimal pressure, dysphagia was diagnosed at 74.7% sensitivity and 60.7% specificity. Lastly, UES relaxation duration of <0.58 seconds had 85.7% sensitivity and 65.3% specificity, and UES resting pressure of <75.0 mmHg had 89.3% sensitivity and 90.7% specificity for identifying dysphagia. We present evidence that HRM could be a useful evaluation tool for detecting pharyngeal dysphagia.

  3. Efficient Data Generation and Publication as a Test Tool

    NASA Technical Reports Server (NTRS)

    Einstein, Craig Jakob

    2017-01-01

    A tool to facilitate the generation and publication of test data was created to test the individual components of a command and control system designed to launch spacecraft. Specifically, this tool was built to ensure messages are properly passed between system components. The tool can also be used to test whether the appropriate groups have access (read/write privileges) to the correct messages. The messages passed between system components take the form of unique identifiers with associated values. These identifiers are alphanumeric strings that identify the type of message and the additional parameters that are contained within the message. The values that are passed with the message depend on the identifier. The data generation tool allows for the efficient creation and publication of these messages. A configuration file can be used to set the parameters of the tool and also specify which messages to pass.
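
    A hedged sketch of such a config-driven test-data generator; the identifiers, configuration keys, and the publish stand-in below are hypothetical, since the tool's actual message format and publication mechanism are not given here:

```python
import json
import random

# hypothetical configuration: identifier -> value specification
CONFIG = {"GSE.TANK1.PRESSURE": {"type": "float", "low": 0.0, "high": 150.0},
          "GSE.VALVE3.STATE":   {"type": "enum", "values": ["OPEN", "CLOSED"]}}

def generate(identifier, spec):
    """Create one identifier/value test message according to its specification."""
    value = (random.uniform(spec["low"], spec["high"]) if spec["type"] == "float"
             else random.choice(spec["values"]))
    return {"id": identifier, "value": value}

def publish(message):
    """Stand-in for the real publication call between system components."""
    print(json.dumps(message))

for ident, spec in CONFIG.items():
    publish(generate(ident, spec))
```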

  4. Classification of materials using nuclear magnetic resonance dispersion and/or x-ray absorption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Espy, Michelle A.; Matlashov, Andrei N.; Schultz, Larry J.

    Methods for determining the identity of a substance are provided. A classification parameter set is defined to allow identification of substances that previously could not be identified or to allow identification of substances with a higher degree of confidence. The classification parameter set may include at least one of relative nuclear susceptibility (RNS) or an x-ray linear attenuation coefficient (LAC). RNS represents the density of hydrogen nuclei present in a substance relative to the density of hydrogen nuclei present in water. The extended classification parameter set may include T1, T2, and/or T1ρ as well as at least one additional classification parameter comprising one of RNS or LAC. Values obtained for additional classification parameters as well as values obtained for T1, T2, and T1ρ can be compared to known classification parameter values to determine whether a particular substance is a known material.
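
    For illustration only (the reference table values below are invented, not measured data), comparison against known classification parameter values can be sketched as a nearest-neighbour rule over the parameter set (T1, T2, RNS, LAC):

```python
import numpy as np

# columns: T1 (ms), T2 (ms), relative nuclear susceptibility, x-ray LAC (1/cm)
# hypothetical reference entries for known materials
KNOWN = {"material A": np.array([3000.0, 2000.0, 1.00, 0.20]),
         "material B": np.array([2500.0, 1500.0, 0.90, 0.25]),
         "material C": np.array([1500.0,  900.0, 0.85, 0.18])}

def classify(measured, scale=np.array([3000.0, 2000.0, 1.0, 0.3])):
    """Return the known material whose scaled parameter vector is closest."""
    dists = {name: np.linalg.norm((measured - ref) / scale)
             for name, ref in KNOWN.items()}
    return min(dists, key=dists.get)

print(classify(np.array([2900.0, 1900.0, 0.98, 0.21])))
```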

  5. Hematologic and serum chemistry reference intervals for free-ranging lions (Panthera leo).

    PubMed

    Maas, Miriam; Keet, Dewald F; Nielen, Mirjam

    2013-08-01

    Hematologic and serum chemistry values are used by veterinarians and wildlife researchers to assess health status and to identify abnormally high or low levels of a particular blood parameter in a target species. For free-ranging lions (Panthera leo) information about these values is scarce. In this study 7 hematologic and 11 serum biochemistry values were evaluated from 485 lions from the Kruger National Park, South Africa. Significant differences between sexes and sub-adult (≤ 36 months) and adult (>36 months) lions were found for most of the blood parameters and separate reference intervals were made for those values. The obtained reference intervals include the means of the various blood parameter values measured in captive lions, except for alkaline phosphatase in the subadult group. These reference intervals can be utilized for free-ranging lions, and may likely also be used as reference intervals for captive lions. Copyright © 2013 Elsevier Ltd. All rights reserved.
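
    A minimal sketch of computing group-specific reference intervals, assuming synthetic data and the common nonparametric 2.5th-97.5th percentile convention (the study's exact method may differ):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
df = pd.DataFrame({"sex": rng.choice(["M", "F"], 485),
                   "age_class": rng.choice(["subadult", "adult"], 485),
                   "hematocrit": rng.normal(35, 5, 485)})   # one blood parameter

# nonparametric reference interval: central 95% of values per sex and age class
intervals = (df.groupby(["sex", "age_class"])["hematocrit"]
               .quantile([0.025, 0.975])
               .unstack())
print(intervals)
```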

  6. Watershed-based Morphometric Analysis: A Review

    NASA Astrophysics Data System (ADS)

    Sukristiyanti, S.; Maria, R.; Lestiana, H.

    2018-02-01

    Drainage basin/watershed analysis based on morphometric parameters is very important for watershed planning. Morphometric analysis of a watershed is the best method to identify the relationships between various aspects of the area. Although many technical papers have dealt with this area of study, there is no standard classification or established implication for each parameter, which makes it difficult to evaluate the value of any given morphometric parameter. This paper deals with the meaning of the values of the various morphometric parameters, with adequate contextual information. A critical review is presented of each classification, the range of values, and their implications. Besides classification and its impact, the authors are also concerned with the quality of the input data, both in data preparation and in the scale or level of detail of the mapping. This review hopefully provides a comprehensive explanation to assist future research dealing with morphometric analysis.

  7. A methodology for global-sensitivity analysis of time-dependent outputs in systems biology modelling.

    PubMed

    Sumner, T; Shephard, E; Bogle, I D L

    2012-09-07

    One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes, and a number of key features of the system are identified.
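
    A toy sketch of the underlying idea, combining functional PCA (via an SVD of the output curves) with a crude first-order sensitivity index; the model, sampling scheme, and index estimator below are assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 2000, 50
params = rng.uniform(0.5, 2.0, size=(n, 3))              # sampled parameter sets
time = np.linspace(0, 10, t)
# toy time-dependent model output: one curve per parameter set
Y = params[:, [0]] * np.exp(-params[:, [1]] * time) + 0.1 * params[:, [2]]

Yc = Y - Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Yc, full_matrices=False)         # functional PCA via SVD
scores = Yc @ Vt[0]                                       # scores on the first PC

def first_order_index(x, y, bins=20):
    """Crude S_i: variance of the bin-conditional mean over total variance."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    return means.var() / y.var()

print([round(first_order_index(params[:, i], scores), 2) for i in range(3)])
```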

  8. Parametric response mapping cut-off values that predict survival of hepatocellular carcinoma patients after TACE.

    PubMed

    Nörthen, Aventinus; Asendorf, Thomas; Shin, Hoen-Oh; Hinrichs, Jan B; Werncke, Thomas; Vogel, Arndt; Kirstein, Martha M; Wacker, Frank K; Rodt, Thomas

    2018-04-21

    Parametric response mapping (PRM) is a novel image-analysis technique applicable to assess tumor viability and predict intrahepatic recurrence in hepatocellular carcinoma (HCC) patients treated with transarterial chemoembolization (TACE). However, to date, the prognostic value of PRM for prediction of overall survival in HCC patients undergoing TACE is unclear. The objective of this explorative, single-center study was to identify cut-off values for voxel-specific PRM parameters that predict the post-TACE overall survival in HCC patients. PRM was applied to biphasic CT data obtained at baseline and following 3 TACE treatments of 20 patients with HCC tumors ≥ 2 cm. The individual portal venous phases were registered to the arterial phases followed by segmentation of the largest lesion, i.e., the region of interest (ROI). Segmented voxels with their respective arterial and portal venous phase density values were displayed as a scatter plot. Voxel-specific PRM parameters were calculated and compared to patients' survival at 1, 2, and 3 years post-treatment to identify the maximal predictive parameters. The hypervascularized tissue portion of the ROI was found to represent an independent predictor of the post-TACE overall survival. For this parameter, cut-off values of 3650, 2057, and 2057 voxels, respectively, were determined to be optimal to predict overall survival at 1, 2, and 3 years after TACE. Using these cut points, patients were correctly classified as having died with a sensitivity of 80, 92, and 86% and as still being alive with a specificity of 60, 75, and 83%, respectively. The prognostic accuracy measured by area under the curve (AUC) values ranged from 0.73 to 0.87. PRM may have prognostic value to predict post-TACE overall survival in HCC patients.

  9. Dynamical compensation and structural identifiability of biological models: Analysis, implications, and reconciliation.

    PubMed

    Villaverde, Alejandro F; Banga, Julio R

    2017-11-01

    The concept of dynamical compensation has been recently introduced to describe the ability of a biological system to keep its output dynamics unchanged in the face of varying parameters. However, the original definition of dynamical compensation amounts to lack of structural identifiability. This is relevant if model parameters need to be estimated, as is often the case in biological modelling. Care should be taken when using an unidentifiable model to extract biological insight: the estimated values of structurally unidentifiable parameters are meaningless, and model predictions about unmeasured state variables can be wrong. Taking this into account, we explore alternative definitions of dynamical compensation that do not necessarily imply structural unidentifiability. Accordingly, we show different ways in which a model can be made identifiable while exhibiting dynamical compensation. Our analyses enable the use of the new concept of dynamical compensation in the context of parameter identification, and reconcile it with the desirable property of structural identifiability.

  10. Stochastic mechanical model of vocal folds for producing jitter and for identifying pathologies through real voices.

    PubMed

    Cataldo, E; Soize, C

    2018-06-06

    Jitter, in voice production applications, is a random phenomenon characterized by the deviation of the glottal cycle length with respect to a mean value. Its study can help in identifying pathologies related to the vocal folds according to the values obtained through the different ways to measure it. This paper aims to propose a stochastic model, considering three control parameters, to generate jitter based on a deterministic one-mass model for the dynamics of the vocal folds, and to identify parameters of the stochastic model taking into account real voice signals obtained experimentally. To solve the corresponding stochastic inverse problem, the cost function used is based on the distance between the probability density functions of the random variables associated with the fundamental frequencies obtained from the experimental and simulated voices, and also on the distance between features extracted from the simulated and experimental voice signals to calculate jitter. The results show that the proposed model is valid, and voice samples are synthesized using the identified parameters for normal and pathological cases. The strategy adopted is itself a novelty, chiefly because a solution to the stochastic inverse problem was obtained. In addition to the use of three control parameters to construct the jitter model, the paper discusses a parameter related to the bandwidth of the power spectral density function of the stochastic process, used to measure the quality of the generated signal, and studies the influence of all the main parameters. The identification of the model parameters for pathological cases is perhaps the most interesting of the novelties introduced by the paper. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
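
    A rough sketch of the progressive-sampling idea (not the authors' implementation, and without the Bayesian-optimization component): candidate configurations are screened on increasingly large samples, and only the better half survives each stage, so full-data evaluation is reserved for promising configurations:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
configs = [{"n_estimators": n, "max_depth": d}
           for n in (50, 100, 200) for d in (3, 6, None)]   # candidate settings

rng = np.random.default_rng(0)
for frac in (0.1, 0.3, 1.0):                     # progressively larger samples
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    scores = [cross_val_score(RandomForestClassifier(random_state=0, **c),
                              X[idx], y[idx], cv=3).mean() for c in configs]
    order = np.argsort(scores)[::-1]             # keep the better half
    configs = [configs[i] for i in order[:max(1, len(configs) // 2)]]
print("selected configuration:", configs[0])
```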

  12. Seasonal dependence of the "forecast parameter" based on the EIA characteristics for the prediction of Equatorial Spread F (ESF)

    NASA Astrophysics Data System (ADS)

    Thampi, S. V.; Ravindran, S.; Pant, T. K.; Devasia, C. V.; Sridharan, R.

    2008-06-01

    In an earlier study, Thampi et al. (2006) showed that the strength and asymmetry of the Equatorial Ionization Anomaly (EIA), obtained well ahead of the onset time of Equatorial Spread F (ESF), have a definite role in the subsequent ESF activity, and a new "forecast parameter" was identified for the prediction of ESF. This paper presents observations of EIA strength and asymmetry from the Indian longitudes during the period from August 2005 to March 2007. These observations are made using the line-of-sight Total Electron Content (TEC) measured by a ground-based beacon receiver located at Trivandrum (8.5° N, 77° E, 0.5° N dip lat) in India. It is seen that the seasonal variability of EIA strength and asymmetry is manifested in the latitudinal gradients obtained using the relative TEC measurements. As a consequence, the "forecast parameter" also displays a definite seasonal pattern. The seasonal variability of the EIA strength and asymmetry and of the "forecast parameter" is discussed in the present paper, and a critical value has been identified for each month/season. The likely "skill factor" of the new parameter is assessed using data for a total of 122 days, and it is seen that when the estimated value of the "forecast parameter" exceeds the critical value, ESF occurs in more than 95% of cases.

  13. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
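
    A conceptual sketch only; the table layout, sensitivity threshold, and update rule below are assumptions rather than the patented method:

```python
import numpy as np

# look-up table of an engine control parameter over two operating conditions
engine_map = np.full((5, 5), 10.0)
SENS_THRESHOLD = 0.5
GAIN = 0.1

def update_cell(i, j, sensitivity, performance, target):
    """Nudge the table entry toward the target only when the performance
    variable is sufficiently sensitive to this control parameter."""
    if abs(sensitivity) > SENS_THRESHOLD:
        engine_map[i, j] += GAIN * sensitivity * (target - performance)

update_cell(2, 3, sensitivity=0.8, performance=0.92, target=1.00)
print(engine_map[2, 3])
```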

  14. Dynamic Contrast-Enhanced MRI of Cervical Cancers: Temporal Percentile Screening of Contrast Enhancement Identifies Parameters for Prediction of Chemoradioresistance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, Erlend K.F.; Hole, Knut Hakon; Lund, Kjersti V.

    Purpose: To systematically screen the tumor contrast enhancement of locally advanced cervical cancers to assess the prognostic value of two descriptive parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Methods and Materials: This study included a prospectively collected cohort of 81 patients who underwent DCE-MRI with gadopentetate dimeglumine before chemoradiotherapy. The following descriptive DCE-MRI parameters were extracted voxel by voxel and presented as histograms for each time point in the dynamic series: normalized relative signal increase (nRSI) and normalized area under the curve (nAUC). The first to 100th percentiles of the histograms were included in a log-rank survival test, resulting in p value and relative risk maps of all percentile-time intervals for each DCE-MRI parameter. The maps were used to evaluate the robustness of the individual percentile-time pairs and to construct prognostic parameters. Clinical endpoints were locoregional control and progression-free survival. The study was approved by the institutional ethics committee. Results: The p value maps of nRSI and nAUC showed a large continuous region of percentile-time pairs that were significantly associated with locoregional control (p < 0.05). These parameters had prognostic impact independent of tumor stage, volume, and lymph node status on multivariate analysis. Only a small percentile-time interval of nRSI was associated with progression-free survival. Conclusions: The percentile-time screening identified DCE-MRI parameters that predict long-term locoregional control after chemoradiotherapy of cervical cancer.
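
    A hedged sketch of the comparison repeated for each percentile-time pair, assuming synthetic data and the lifelines package; patients are split at the median value of one DCE-MRI histogram percentile and compared with a log-rank test:

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(8)
nrsi_p50 = rng.normal(1.0, 0.2, 81)         # 50th-percentile nRSI at one time point
time_to_event = rng.exponential(40, 81)     # months to locoregional failure (toy)
observed = rng.random(81) < 0.6             # event indicator

high = nrsi_p50 >= np.median(nrsi_p50)      # split cohort at the median value
result = logrank_test(time_to_event[high], time_to_event[~high],
                      event_observed_A=observed[high],
                      event_observed_B=observed[~high])
print(f"log-rank p = {result.p_value:.3f}")
```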

  15. Neural correlates of value, risk, and risk aversion contributing to decision making under risk.

    PubMed

    Christopoulos, George I; Tobler, Philippe N; Bossaerts, Peter; Dolan, Raymond J; Schultz, Wolfram

    2009-10-07

    Decision making under risk is central to human behavior. Economic decision theory suggests that value, risk, and risk aversion influence choice behavior. Although previous studies identified neural correlates of decision parameters, the contribution of these correlates to actual choices is unknown. In two different experiments, participants chose between risky and safe options. We identified discrete blood oxygen level-dependent (BOLD) correlates of value and risk in the ventral striatum and anterior cingulate, respectively. Notably, increasing inferior frontal gyrus activity to low risk and safe options correlated with higher risk aversion. Importantly, the combination of these BOLD responses effectively decoded the behavioral choice. Striatal value and cingulate risk responses increased the probability of a risky choice, whereas inferior frontal gyrus responses showed the inverse relationship. These findings suggest that the BOLD correlates of decision factors are appropriate for an ideal observer to detect behavioral choices. More generally, these biological data contribute to the validity of the theoretical decision parameters for actual decisions under risk.

  16. Full-envelope aerodynamic modeling of the Harrier aircraft

    NASA Technical Reports Server (NTRS)

    Mcnally, B. David

    1986-01-01

    A project to identify a full-envelope model of the YAV-8B Harrier using flight-test and parameter identification techniques is described. As part of the research in advanced control and display concepts for V/STOL aircraft, a full-envelope aerodynamic model of the Harrier is identified, using mathematical model structures and parameter identification methods. A global-polynomial model structure is also used as a basis for the identification of the YAV-8B aerodynamic model. State estimation methods are used to ensure flight data consistency prior to parameter identification. Equation-error methods are used to identify model parameters. A fixed-base simulator is used extensively to develop flight test procedures and to validate parameter identification software. Using simple flight maneuvers, a simulated data set was created covering the YAV-8B flight envelope from about 0.3 to 0.7 Mach and about -5 to 15 deg angle of attack. A singular value decomposition implementation of the equation-error approach produced good parameter estimates based on this simulated data set.
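
    A minimal sketch of an SVD-based equation-error estimate, assuming a generic linear regression form (regressor matrix X, measured response z) rather than the actual YAV-8B model structure:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 6))                     # regressors built from flight data
theta_true = np.array([0.3, -1.2, 0.05, 0.0, 0.8, -0.4])
z = X @ theta_true + 0.01 * rng.normal(size=500)  # noisy measured response

U, s, Vt = np.linalg.svd(X, full_matrices=False)
keep = s > 1e-8 * s[0]                            # drop ill-conditioned directions
theta_hat = Vt.T[:, keep] @ ((U[:, keep].T @ z) / s[keep])
print(np.round(theta_hat, 3))
```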

  17. Nuclear morphology for the detection of alterations in bronchial cells from lung cancer: an attempt to improve sensitivity and specificity.

    PubMed

    Fafin-Lefevre, Mélanie; Morlais, Fabrice; Guittet, Lydia; Clin, Bénédicte; Launoy, Guy; Galateau-Sallé, Françoise; Plancoulaine, Benoît; Herlin, Paulette; Letourneux, Marc

    2011-08-01

    To identify which morphologic or densitometric parameters are modified in cell nuclei from bronchopulmonary cancer based on 18 parameters involving shape, intensity, chromatin, texture, and DNA content and develop a bronchopulmonary cancer screening method relying on analysis of sputum sample cell nuclei. A total of 25 sputum samples from controls and 22 bronchial aspiration samples from patients presenting with bronchopulmonary cancer who were professionally exposed to cancer were used. After Feulgen staining, 18 morphologic and DNA content parameters were measured on cell nuclei, via image cytometry. A method was developed for analyzing distribution quantiles, compared with simply interpreting mean values, to characterize morphologic modifications in cell nuclei. Distribution analysis of parameters enabled us to distinguish 13 of 18 parameters that demonstrated significant differences between controls and cancer cases. These parameters, used alone, enabled us to distinguish two population types, with both sensitivity and specificity > 70%. Three parameters offered 100% sensitivity and specificity. When mean values offered high sensitivity and specificity, comparable or higher sensitivity and specificity values were observed for at least one of the corresponding quantiles. Analysis of modification in morphologic parameters via distribution analysis proved promising for screening bronchopulmonary cancer from sputum.
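
    An illustrative sketch of why a distribution quantile can outperform the mean as a per-sample summary; the feature distributions and effect size below are invented:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

def sample_summary(shift, q=0.9, n_nuclei=300):
    """Summarize one sample's nuclear parameter by its mean and a high quantile."""
    values = rng.gamma(shape=2.0, scale=1.0 + shift, size=n_nuclei)
    return values.mean(), np.quantile(values, q)

controls = np.array([sample_summary(0.0) for _ in range(25)])
cancers  = np.array([sample_summary(0.5) for _ in range(22)])
labels = np.r_[np.zeros(25), np.ones(22)]

for name, col in (("mean", 0), ("90th percentile", 1)):
    auc = roc_auc_score(labels, np.r_[controls[:, col], cancers[:, col]])
    print(f"{name}: AUC = {auc:.2f}")
```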

  18. Designing occupancy studies when false-positive detections occur

    USGS Publications Warehouse

    Clement, Matthew

    2016-01-01

    1. Recently, estimators have been developed to estimate occupancy probabilities when false-positive detections occur during presence-absence surveys. Some of these estimators combine different types of survey data to improve estimates of occupancy. With these estimators, there is a tradeoff between the number of sample units surveyed, and the number and type of surveys at each sample unit. Guidance on efficient design of studies when false positives occur is unavailable. 2. For a range of scenarios, I identified survey designs that minimized the mean square error of the estimate of occupancy. I considered an approach that uses one survey method and two observation states and an approach that uses two survey methods. For each approach, I used numerical methods to identify optimal survey designs when model assumptions were met and parameter values were correctly anticipated, when parameter values were not correctly anticipated, and when the assumption of no unmodelled detection heterogeneity was violated. 3. Under the approach with two observation states, false positive detections increased the number of recommended surveys, relative to standard occupancy models. If parameter values could not be anticipated, pessimism about detection probabilities avoided poor designs. Detection heterogeneity could require more or fewer repeat surveys, depending on parameter values. If model assumptions were met, the approach with two survey methods was inefficient. However, with poor anticipation of parameter values, with detection heterogeneity, or with removal sampling schemes, combining two survey methods could improve estimates of occupancy. 4. Ignoring false positives can yield biased parameter estimates, yet false positives greatly complicate the design of occupancy studies. Specific guidance for major types of false-positive occupancy models, and for two assumption violations common in field data, can conserve survey resources. This guidance can be used to design efficient monitoring programs and studies of species occurrence, species distribution, or habitat selection, when false positives occur during surveys.

  19. Assessing the quality of life history information in publicly available databases.

    PubMed

    Thorson, James T; Cope, Jason M; Patrick, Wesley S

    2014-01-01

    Single-species life history parameters are central to ecological research and management, including the fields of macro-ecology, fisheries science, and ecosystem modeling. However, there has been little independent evaluation of the precision and accuracy of the life history values in global and publicly available databases. We therefore develop a novel method based on a Bayesian errors-in-variables model that compares database entries with estimates from local experts, and we illustrate this process by assessing the accuracy and precision of entries in FishBase, one of the largest and oldest life history databases. This model distinguishes biases among seven life history parameters, two types of information available in FishBase (i.e., published values and those estimated from other parameters), and two taxa (i.e., bony and cartilaginous fishes) relative to values from regional experts in the United States, while accounting for additional variance caused by sex- and region-specific life history traits. For published values in FishBase, the model identifies a small positive bias in natural mortality and negative bias in maximum age, perhaps caused by unacknowledged mortality caused by fishing. For life history values calculated by FishBase, the model identified large and inconsistent biases. The model also demonstrates greatest precision for body size parameters, decreased precision for values derived from geographically distant populations, and greatest between-sex differences in age at maturity. We recommend that our bias and precision estimates be used in future errors-in-variables models as a prior on measurement errors. This approach is broadly applicable to global databases of life history traits and, if used, will encourage further development and improvements in these databases.

  20. Sampling of Stochastic Input Parameters for Rockfall Calculations and for Structural Response Calculations Under Vibratory Ground Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. Gross

    2004-09-01

    The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) Sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) Sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) Sampled values of ground motion time history and metal to metal and metal to rock friction coefficient for analysis of waste package and drip shield damage to vibratory motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the Total System Performance Assessment for the License Application (TSPA-LA). The results from this scientific analysis also address project requirements related to parameter uncertainty, as specified in the acceptance criteria in ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274]). This document was prepared under the direction of ''Technical Work Plan for: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 170528]) which directed the work identified in work package ARTM05. This document was prepared under procedure AP-SIII.9Q, ''Scientific Analyses''. There are no specific known limitations to this analysis.

  1. Identifying desertification risk areas using fuzzy membership and geospatial technique - A case study, Kota District, Rajasthan

    NASA Astrophysics Data System (ADS)

    Dasgupta, Arunima; Sastry, K. L. N.; Dhinwa, P. S.; Rathore, V. S.; Nathawat, M. S.

    2013-08-01

    Desertification risk assessment is important in order to take proper measures for its prevention. The present research intends to identify areas at risk of desertification, along with their severity in terms of degradation of natural parameters. An integrated model combining fuzzy membership analysis, a fuzzy rule-based inference system and geospatial techniques was adopted, including five specific natural parameters, namely slope, soil pH, soil depth, soil texture and NDVI. Individual parameters were classified according to their deviation from the mean. The membership of each individual value in a certain class was derived using the normal probability density function of that class. Thus, if a single class of a single parameter has mean μ and standard deviation σ, the values falling beyond μ + 2σ and μ - 2σ do not represent that class but rather a transitional zone between two subsequent classes. These are the most important areas in terms of degradation, as they have the lowest probability of belonging to a certain class and hence the highest probability of being extended into the next class or narrowed into the previous one. These are the values that can be most easily altered under exogenic influences, and hence they are identified as risk areas. The overall desertification risk is derived by incorporating the different risk severities of each parameter using the fuzzy rule-based inference system in a GIS environment. Multicriteria-based geostatistics are applied to locate areas under different severities of desertification risk. The study revealed that in Kota various anthropogenic pressures, coupled with natural erosive forces, are accelerating land deterioration. The four major sources of desertification in Kota are gully and ravine erosion, inappropriate mining practices, growing urbanization and random deforestation.
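
    A minimal sketch of the membership rule for a single parameter, assuming Gaussian class membership; values beyond μ ± 2σ receive low membership and are flagged as transition-zone (risk) pixels:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
ndvi = rng.normal(0.45, 0.1, size=10_000)       # hypothetical parameter raster
mu, sigma = ndvi.mean(), ndvi.std()

# membership of each value in its class, scaled so the class mean has membership 1
membership = norm.pdf(ndvi, mu, sigma) / norm.pdf(mu, mu, sigma)
risk_zone = np.abs(ndvi - mu) > 2 * sigma       # transition-zone (risk) pixels
print(f"membership range: {membership.min():.3f}-{membership.max():.3f}, "
      f"risk pixels: {risk_zone.mean():.1%}")
```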

  2. Effect of sexual steroids on boar kinematic sperm subpopulations.

    PubMed

    Ayala, E M E; Aragón, M A

    2017-11-01

    Here, we show the effects of the sexual steroids progesterone, testosterone, and estradiol on motility parameters of boar sperm. Sixteen commercial seminal doses, four from each of four adult boars, were analyzed using computer-assisted sperm analysis (CASA). Mean values of motility parameters were analyzed by bivariate and multivariate statistics. Principal component analysis (PCA), followed by hierarchical clustering, was applied to the motility parameter data, provided automatically as intervals by the CASA system. Effects of sexual steroids were described in the kinematic subpopulations identified from the multivariate statistics. Mean values of motility parameters were not significantly changed after addition of sexual steroids. Multivariate graphics showed that sperm subpopulations were not sensitive to the addition of either testosterone or estradiol, but sperm subpopulations responsive to progesterone were found. Distributions of motility parameters were wide in controls but sharpened at distinct concentrations of progesterone. We conclude that kinematic sperm subpopulations responsive to progesterone are present in boar semen, and that these subpopulations are masked in evaluations of mean values of motility parameters. © 2017 International Society for Advancement of Cytometry.
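
    A sketch of the PCA-plus-hierarchical-clustering step on synthetic CASA-like variables (the variables, cluster count, and linkage choice are assumptions, not the study's settings):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(5)
# columns: VCL, VSL, ALH, BCF (synthetic kinematic measurements per spermatozoon)
kinematics = np.vstack([rng.normal([120, 60, 3, 10], [15, 10, 0.5, 2], (300, 4)),
                        rng.normal([60, 40, 2, 8],   [10,  8, 0.4, 2], (300, 4))])

scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(kinematics))           # reduce to two PCs
labels = fcluster(linkage(scores, method="ward"),
                  t=3, criterion="maxclust")               # kinematic subpopulations
print("subpopulation sizes:", np.bincount(labels)[1:])
```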

  3. A Computational Framework for Identifiability and Ill-Conditioning Analysis of Lithium-Ion Battery Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    López C, Diana C.; Wozny, Günter; Flores-Tlacuahuac, Antonio

    2016-03-23

    The lack of informative experimental data and the complexity of first-principles battery models make the recovery of kinetic, transport, and thermodynamic parameters complicated. We present a computational framework that combines sensitivity, singular value, and Monte Carlo analysis to explore how different sources of experimental data affect parameter structural ill conditioning and identifiability. Our study is conducted on a modified version of the Doyle-Fuller-Newman model. We demonstrate that the use of voltage discharge curves only enables the identification of a small parameter subset, regardless of the number of experiments considered. Furthermore, we show that the inclusion of a single electrolyte concentration measurement significantly aids identifiability and mitigates ill-conditioning.

  4. Comment on “Two statistics for evaluating parameter identifiability and error reduction” by John Doherty and Randall J. Hunt

    USGS Publications Warehouse

    Hill, Mary C.

    2010-01-01

    Doherty and Hunt (2009) present important ideas for first-order-second moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite-scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic actually is likely to do a poor job of parameter identifiability in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters. Its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated variance statistics. Finally, the perturbed calculated variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.

  5. Identifying parameter regions for multistationarity

    PubMed Central

    Conradi, Carsten; Mincheva, Maya; Wiuf, Carsten

    2017-01-01

    Mathematical modelling has become an established tool for studying the dynamics of biological systems. Current applications range from building models that reproduce quantitative data to identifying systems with predefined qualitative features, such as switching behaviour, bistability or oscillations. Mathematically, the latter question amounts to identifying parameter values associated with a given qualitative feature. We introduce a procedure to partition the parameter space of a parameterized system of ordinary differential equations into regions for which the system has a unique equilibrium or multiple equilibria. The procedure is based on the computation of the Brouwer degree, and it creates a multivariate polynomial with parameter-dependent coefficients. The signs of the coefficients determine parameter regions with and without multistationarity. A particular strength of the procedure is the avoidance of numerical analysis and parameter sampling. The procedure consists of a number of steps. Each of these steps might be addressed algorithmically using various computer programs and available software, or manually. We demonstrate our procedure on several models of gene transcription and cell signalling, and show that in many cases we obtain a complete partitioning of the parameter space with respect to multistationarity. PMID:28972969

  6. Local Variability of Parameters for Characterization of the Corneal Subbasal Nerve Plexus.

    PubMed

    Winter, Karsten; Scheibe, Patrick; Köhler, Bernd; Allgeier, Stephan; Guthoff, Rudolf F; Stachs, Oliver

    2016-01-01

    The corneal subbasal nerve plexus (SNP) offers high potential for early diagnosis of diabetic peripheral neuropathy. Changes in subbasal nerve fibers can be assessed in vivo by confocal laser scanning microscopy (CLSM) and quantified using specific parameters. While current study results agree regarding parameter tendency, there are considerable differences in terms of absolute values. The present study set out to identify factors that might account for this high parameter variability. In three healthy subjects, we used a novel method of software-based large-scale reconstruction that provided SNP images of the central cornea, decomposed the image areas into all possible image sections corresponding to the size of a single conventional CLSM image (0.16 mm2), and calculated a set of parameters for each image section. In order to carry out a large number of virtual examinations within the reconstructed image areas, an extensive simulation procedure (10,000 runs per image) was implemented. The three analyzed images ranged in size from 3.75 mm2 to 4.27 mm2. The spatial configuration of the subbasal nerve fiber networks varied greatly across the cornea and thus caused heavily location-dependent results as well as wide value ranges for the parameters assessed. Distributions of SNP parameter values varied greatly between the three images and showed significant differences between all images for every parameter calculated (p < 0.001 in each case). The relatively small size of the conventionally evaluated SNP area is a contributory factor in high SNP parameter variability. Averaging of parameter values based on multiple CLSM frames does not necessarily result in good approximations of the respective reference values of the whole image area. This illustrates the potential for examiner bias when selecting SNP images in the central corneal area.

  7. Optimizing the bio-optical algorithm for estimating chlorophyll-a and phycocyanin concentrations in inland waters in Korea

    USDA-ARS?s Scientific Manuscript database

    Several bio-optical algorithms were developed to estimate the chlorophyll-a (Chl-a) and phycocyanin (PC) concentrations in inland waters. This study aimed at identifying the influence of the algorithm parameters and wavelength bands on output variables and searching optimal parameter values. The opt...

  8. Exploration of DGVM Parameter Solution Space Using Simulated Annealing: Implications for Forecast Uncertainties

    NASA Astrophysics Data System (ADS)

    Wells, J. R.; Kim, J. B.

    2011-12-01

    Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from the published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and the system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, by using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for a spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts of NEP, C storage, ET and runoff, and thereby identify a highly important source of DGVM uncertainty.
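
    A toy sketch of simulated annealing over a bounded parameter space; the objective function and parameter bounds below are placeholders rather than BIOMAP's accuracy score and published ranges:

```python
import numpy as np

rng = np.random.default_rng(6)
bounds = np.array([[0.8, 1.2], [0.1, 0.9], [5.0, 40.0]])   # per-parameter ranges
widths = np.ptp(bounds, axis=1)

def score(p):
    """Placeholder goodness-of-fit score (higher is better)."""
    return -np.sum(((p - bounds.mean(axis=1)) / widths) ** 2)

current = bounds[:, 0] + rng.random(3) * widths
best, best_score = current.copy(), score(current)
for T in np.geomspace(1.0, 1e-3, 2000):                    # cooling schedule
    cand = np.clip(current + rng.normal(scale=0.05 * widths),
                   bounds[:, 0], bounds[:, 1])
    delta = score(cand) - score(current)
    if delta > 0 or rng.random() < np.exp(delta / T):      # Metropolis acceptance
        current = cand
        if score(current) > best_score:
            best, best_score = current.copy(), score(current)
print(np.round(best, 3), round(best_score, 4))
```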

  9. Influence of parameter values on the oscillation sensitivities of two p53-Mdm2 models.

    PubMed

    Cuba, Christian E; Valle, Alexander R; Ayala-Charca, Giancarlo; Villota, Elizabeth R; Coronado, Alberto M

    2015-09-01

    Biomolecular networks that present oscillatory behavior are ubiquitous in nature. While some design principles for robust oscillations have been identified, it is not well understood how these oscillations are affected when the kinetic parameters are constantly changing or are not precisely known, as often occurs in cellular environments. Many models of diverse complexity level, for systems such as circadian rhythms, cell cycle or the p53 network, have been proposed. Here we assess the influence of hundreds of different parameter sets on the sensitivities of two configurations of a well-known oscillatory system, the p53 core network. We show that, for both models and all parameter sets, the parameter related to the p53 positive feedback, i.e. self-promotion, is the only one that presents sizeable sensitivities on extrema, periods and delay. Moreover, varying the parameter set values to change the dynamical characteristics of the response is more restricted in the simple model, whereas the complex model shows greater tunability. These results highlight the importance of the presence of specific network patterns, in addition to the role of parameter values, when we want to characterize oscillatory biochemical systems.

  10. Dynamical compensation and structural identifiability of biological models: Analysis, implications, and reconciliation

    PubMed Central

    2017-01-01

    The concept of dynamical compensation has been recently introduced to describe the ability of a biological system to keep its output dynamics unchanged in the face of varying parameters. However, the original definition of dynamical compensation amounts to lack of structural identifiability. This is relevant if model parameters need to be estimated, as is often the case in biological modelling. Care should be taken when using an unidentifiable model to extract biological insight: the estimated values of structurally unidentifiable parameters are meaningless, and model predictions about unmeasured state variables can be wrong. Taking this into account, we explore alternative definitions of dynamical compensation that do not necessarily imply structural unidentifiability. Accordingly, we show different ways in which a model can be made identifiable while exhibiting dynamical compensation. Our analyses enable the use of the new concept of dynamical compensation in the context of parameter identification, and reconcile it with the desirable property of structural identifiability. PMID:29186132

  11. Design Space Toolbox V2: Automated Software Enabling a Novel Phenotype-Centric Modeling Strategy for Natural and Synthetic Biological Systems

    PubMed Central

    Lomnitz, Jason G.; Savageau, Michael A.

    2016-01-01

    Mathematical models of biochemical systems provide a means to elucidate the link between the genotype, environment, and phenotype. A subclass of mathematical models, known as mechanistic models, quantitatively describe the complex non-linear mechanisms that capture the intricate interactions between biochemical components. However, the study of mechanistic models is challenging because most are analytically intractable and involve large numbers of system parameters. Conventional methods to analyze them rely on local analyses about a nominal parameter set and they do not reveal the vast majority of potential phenotypes possible for a given system design. We have recently developed a new modeling approach that does not require estimated values for the parameters initially and inverts the typical steps of the conventional modeling strategy. Instead, this approach relies on architectural features of the model to identify the phenotypic repertoire and then predict values for the parameters that yield specific instances of the system that realize desired phenotypic characteristics. Here, we present a collection of software tools, the Design Space Toolbox V2 based on the System Design Space method, that automates (1) enumeration of the repertoire of model phenotypes, (2) prediction of values for the parameters for any model phenotype, and (3) analysis of model phenotypes through analytical and numerical methods. The result is an enabling technology that facilitates this radically new, phenotype-centric, modeling approach. We illustrate the power of these new tools by applying them to a synthetic gene circuit that can exhibit multi-stability. We then predict values for the system parameters such that the design exhibits 2, 3, and 4 stable steady states. In one example, inspection of the basins of attraction reveals that the circuit can count between three stable states by transient stimulation through one of two input channels: a positive channel that increases the count, and a negative channel that decreases the count. This example shows the power of these new automated methods to rapidly identify behaviors of interest and efficiently predict parameter values for their realization. These tools may be applied to understand complex natural circuitry and to aid in the rational design of synthetic circuits. PMID:27462346

  12. The Value of Information in Decision-Analytic Modeling for Malaria Vector Control in East Africa.

    PubMed

    Kim, Dohyeong; Brown, Zachary; Anderson, Richard; Mutero, Clifford; Miranda, Marie Lynn; Wiener, Jonathan; Kramer, Randall

    2017-02-01

    Decision analysis tools and mathematical modeling are increasingly emphasized in malaria control programs worldwide to improve resource allocation and address ongoing challenges with sustainability. However, such tools require substantial scientific evidence, which is costly to acquire. The value of information (VOI) has been proposed as a metric for gauging the value of reduced model uncertainty. We apply this concept to an evidence-based Malaria Decision Analysis Support Tool (MDAST) designed for application in East Africa. In developing MDAST, substantial gaps in the scientific evidence base were identified regarding insecticide resistance in malaria vector control and the effectiveness of alternative mosquito control approaches, including larviciding. We identify four entomological parameters in the model (two for insecticide resistance and two for larviciding) that involve high levels of uncertainty and to which outputs in MDAST are sensitive. We estimate and compare a VOI for combinations of these parameters in evaluating three policy alternatives relative to a status quo policy. We find that having perfect information on the uncertain parameters could improve program net benefits by up to 5-21%, with the highest VOI associated with jointly eliminating uncertainty about reproductive speed of malaria-transmitting mosquitoes and initial efficacy of larviciding at reducing the emergence of new adult mosquitoes. Future research on parameter uncertainty in decision analysis of malaria control policy should investigate the VOI with respect to other aspects of malaria transmission (such as antimalarial resistance), the costs of reducing uncertainty in these parameters, and the extent to which imperfect information about these parameters can improve payoffs. © 2016 Society for Risk Analysis.

  13. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information]

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
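
    A minimal sketch of the Morris one-at-a-time (elementary effects) screening idea described above, assuming a cheap stand-in function in place of CAM and using only positive steps of size delta on a unit-hypercube grid; the toy model and parameter count are hypothetical.

```python
import numpy as np

def morris_trajectory(k, levels=4, rng=None):
    """Build one Morris trajectory: a random base point plus k one-at-a-time
    steps of size delta on a regular grid in the unit hypercube."""
    rng = np.random.default_rng() if rng is None else rng
    delta = levels / (2.0 * (levels - 1))
    grid = np.arange(0, 1, 1.0 / (levels - 1))[: levels // 2]  # starts that keep x + delta <= 1
    x = rng.choice(grid, size=k)
    order = rng.permutation(k)
    traj = [x.copy()]
    for j in order:
        x = x.copy()
        x[j] += delta            # perturb the j-th parameter only
        traj.append(x)
    return np.array(traj), order, delta

def moat_screen(model, k, n_traj=20, seed=0):
    """Estimate mu* (mean absolute elementary effect) and sigma for each parameter."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_traj, k))
    for t in range(n_traj):
        traj, order, delta = morris_trajectory(k, rng=rng)
        y = np.array([model(p) for p in traj])
        for step, j in enumerate(order):
            effects[t, j] = (y[step + 1] - y[step]) / delta
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# Toy stand-in for an expensive climate model: two parameters interact nonlinearly.
def toy_model(p):
    return 2.0 * p[0] + p[1] ** 2 + 5.0 * p[2] * p[3] + 0.1 * p[4]

mu_star, sigma = moat_screen(toy_model, k=5)
print("mu* :", np.round(mu_star, 3))
print("sigma:", np.round(sigma, 3))
```

    Parameters with large mu* have a strong overall effect, while a sigma that is large relative to mu* flags nonlinear or interacting behaviour, which is the kind of signal the abstract reports for the two convection parameters.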

  14. Optimizing the Determination of Roughness Parameters for Model Urban Canopies

    NASA Astrophysics Data System (ADS)

    Huq, Pablo; Rahman, Auvi

    2018-05-01

    We present an objective optimization procedure to determine the roughness parameters for very rough boundary-layer flow over model urban canopies. For neutral stratification the mean velocity profile above a model urban canopy is described by the logarithmic law together with the set of roughness parameters of displacement height d, roughness length z_0, and friction velocity u_*. Traditionally, values of these roughness parameters are obtained by fitting the logarithmic law through (all) the data points comprising the velocity profile. The new procedure generates unique velocity profiles from subsets or combinations of the data points of the original velocity profile, after which all possible profiles are examined. Each of the generated profiles is fitted to the logarithmic law for a sequence of values of d, with the representative value of d obtained from the minima of the summed least-squares errors for all the generated profiles. The representative values of z_0 and u_* are identified by the peak in the bivariate histogram of z_0 and u_*. The methodology has been verified against laboratory datasets of flow above model urban canopies.
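
    The core fitting step can be sketched as follows: for a trial displacement height d, the logarithmic law u(z) = (u_*/kappa) ln((z - d)/z_0) is a straight line in ln(z - d), so z_0 and u_* follow from a linear least-squares fit, and d is chosen to minimise the summed squared error. This is a simplified, single-profile sketch (the paper's procedure repeats the fit over many generated sub-profiles and reads z_0 and u_* from a bivariate histogram); the velocity profile below is hypothetical.

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def fit_log_law(z, u, d):
    """Least-squares fit of u = (u*/kappa) * ln((z - d)/z0) for a trial d.
    Returns (u_star, z0, sse)."""
    x = np.log(z - d)
    A = np.vstack([x, np.ones_like(x)]).T
    (slope, intercept), res, *_ = np.linalg.lstsq(A, u, rcond=None)
    u_star = KAPPA * slope
    z0 = np.exp(-intercept / slope)
    sse = float(res[0]) if res.size else float(np.sum((A @ [slope, intercept] - u) ** 2))
    return u_star, z0, sse

# Hypothetical mean-velocity profile above a model urban canopy (heights in m, speeds in m/s).
z = np.array([0.15, 0.20, 0.25, 0.30, 0.40, 0.50])
u = np.array([2.1, 2.5, 2.8, 3.0, 3.4, 3.7])

# Scan candidate displacement heights below the lowest measurement point and
# keep the value that minimises the summed squared error.
d_grid = np.linspace(0.0, 0.12, 61)
fits = [(d, *fit_log_law(z, u, d)) for d in d_grid]
d_best, u_star, z0, _ = min(fits, key=lambda f: f[3])
print(f"d = {d_best:.3f} m, z0 = {z0:.4f} m, u* = {u_star:.3f} m/s")
```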

  15. Adaptive identifier for uncertain complex nonlinear systems based on continuous neural networks.

    PubMed

    Alfaro-Ponce, Mariel; Cruz, Amadeo Argüelles; Chairez, Isaac

    2014-03-01

    This paper presents the design of a complex-valued differential neural network identifier for uncertain nonlinear systems defined in the complex domain. This design includes the construction of an adaptive algorithm to adjust the parameters included in the identifier. The algorithm is obtained based on a special class of controlled Lyapunov functions. The quality of the identification process is characterized using the practical stability framework. Indeed, the region where the identification error converges is derived by the same Lyapunov method. This zone is defined by the power of uncertainties and perturbations affecting the complex-valued uncertain dynamics. Moreover, this convergence zone is reduced to its lowest possible value using ideas related to the so-called ellipsoid methodology. Two simple but informative numerical examples are developed to show how the identifier proposed in this paper can be used to approximate uncertain nonlinear systems valued in the complex domain.

  16. Tribological Properties of PVD Ti/C-N Nanocoatings

    NASA Astrophysics Data System (ADS)

    Leitans, A.; Lungevics, J.; Rudzitis, J.; Filipovs, A.

    2017-04-01

    The present paper discusses and analyses tribological properties of various coatings that increase surface wear resistance. Four Ti/C-N nanocoatings with different coating deposition settings are analysed. Tribological and metrological tests on the samples are performed: 2D and 3D parameters of the surface roughness are measured with a modern profilometer, and the friction coefficient is measured with CSM Instruments equipment. Roughness parameters Ra, Sa, Sz, Str, Sds, Vmp, Vmc and the friction coefficient at 6 N load are determined during the experiment. The examined samples have many pores, which is the main reason for the relatively large roughness parameter values. Slight wear is identified in all four samples as well; the friction coefficient values range from 0.21 to 0.29. Wear rate values are not calculated for the investigated coatings, as no pronounced tribotracks are detected on the coating surface.

  17. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    PubMed Central

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
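
    The estimation workflow described above (a nonlinear least-squares fit to initialise, followed by Markov-chain Monte-Carlo sampling of the posterior) can be sketched with a deliberately simplified stand-in model; the biexponential decay curve, noise level, flat priors, and sampling dates below are illustrative assumptions, not the paper's ODE model of drug efficacy and reservoir dynamics.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simple stand-in for a viral-load model: biexponential decay on the log10 scale.
def log10_viral_load(t, logV0, f, d1, d2):
    V = 10.0 ** logV0 * (f * np.exp(-d1 * t) + (1.0 - f) * np.exp(-d2 * t))
    return np.log10(V)

rng = np.random.default_rng(1)
t = np.linspace(0, 28, 30)                       # days after therapy start (hypothetical)
y = log10_viral_load(t, 5.0, 0.97, 0.5, 0.05) + rng.normal(0, 0.15, t.size)

# Step 1: nonlinear least squares supplies the initial estimate.
p_init, _ = curve_fit(log10_viral_load, t, y, p0=[4.5, 0.9, 0.3, 0.02],
                      bounds=([2, 0.5, 0.01, 0.001], [7, 0.999, 2.0, 0.5]))

# Step 2: random-walk Metropolis sampling of the posterior (flat priors, known noise sigma).
def log_like(p):
    logV0, f, d1, d2 = p
    V = 10.0 ** logV0 * (f * np.exp(-d1 * t) + (1.0 - f) * np.exp(-d2 * t))
    if np.any(V <= 0):
        return -np.inf
    resid = y - np.log10(V)
    return -0.5 * np.sum((resid / 0.15) ** 2)

chain, cur, ll_cur = [], np.array(p_init), log_like(p_init)
step = np.abs(p_init) * 0.02
for _ in range(5000):
    prop = cur + rng.normal(0, step)
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll_cur:
        cur, ll_cur = prop, ll_prop
    chain.append(cur.copy())

post = np.array(chain[1000:])                    # discard burn-in
print("posterior means:", np.round(post.mean(axis=0), 3))
print("95% intervals:", np.round(np.percentile(post, [2.5, 97.5], axis=0), 3))
```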

  18. Parameter identification of thermophilic anaerobic degradation of valerate.

    PubMed

    Flotats, Xavier; Ahring, Birgitte K; Angelidaki, Irini

    2003-01-01

    The considered mathematical model of the decomposition of valerate presents three unknown kinetic parameters, two unknown stoichiometric coefficients, and three unknown initial concentrations for biomass. Applying a structural identifiability study, we concluded that it is necessary to perform simultaneous batch experiments with different initial conditions for estimating these parameters. Four simultaneous batch experiments were conducted at 55 degrees C, characterized by four different initial acetate concentrations. Product inhibition of valerate degradation by acetate was considered. Practical identification was done by optimizing the sum of the multiple determination coefficients for all measured state variables and for all experiments simultaneously. The estimated values of kinetic parameters and stoichiometric coefficients were characterized by the parameter correlation matrix, the confidence interval, and the Student's t-test at the 5% significance level, with positive results except for the saturation constant, for which more experiments are needed to improve its identifiability. In this article, we discuss kinetic parameter estimation methods.

  19. Critical mass of public goods and its coevolution with cooperation

    NASA Astrophysics Data System (ADS)

    Shi, Dong-Mei; Wang, Bing-Hong

    2017-07-01

    In this study, the enhancing parameter represented the value of the public goods to the public in the public goods game, and was rescaled to a Fermi-Dirac distribution function of the critical mass. Public goods were divided into two categories, consumable and reusable public goods, and their coevolution with cooperative behavior was studied. We observed that for both types of public goods, cooperation was promoted as the enhancing parameter increased when the value of the critical mass was not very large. An optimal value of the critical mass which led to the best cooperation was identified. We also found that cooperation emerged earlier for reusable public goods, and defection became extinct earlier for consumable public goods. Moreover, we observed that a moderate depreciation rate for public goods resulted in optimal cooperation, and this range became wider as the enhancing parameter increased. The influence of noise on cooperation was also studied: the cooperation density varied non-monotonically as the noise amplitude increased for reusable public goods, whereas it decreased monotonically for consumable public goods. Furthermore, the existence of an optimal critical mass was also identified in three other regular networks. Finally, the simulation results were used to analyze the provision of public goods in detail.

  20. Towards simplification of hydrologic modeling: Identification of dominant processes

    USGS Publications Warehouse

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.

  1. An Examination of Two Procedures for Identifying Consequential Item Parameter Drift

    ERIC Educational Resources Information Center

    Wells, Craig S.; Hambleton, Ronald K.; Kirkpatrick, Robert; Meng, Yu

    2014-01-01

    The purpose of the present study was to develop and evaluate two procedures flagging consequential item parameter drift (IPD) in an operational testing program. The first procedure was based on flagging items that exhibit a meaningful magnitude of IPD using a critical value that was defined to represent barely tolerable IPD. The second procedure…

  2. What Parameters Do Students Value in Business School Rankings?

    ERIC Educational Resources Information Center

    Mårtensson, Pär; Richtnér, Anders

    2015-01-01

    The starting point of this paper is the question: Which issues do students think are important when choosing a higher education institution, and how are they related to the factors taken into consideration in ranking institutions? The aim is to identify and rank the parameters students perceive as important when choosing their place of education.…

  3. Quantitative Microbial Risk Assessment Tutorial - SDMProjectBuilder: Import Local Data Files to Identify and Modify Contamination Sources and Input ParametersUpdated 2017

    EPA Science Inventory

    Twelve example local data support files are automatically downloaded when the SDMProjectBuilder is installed on a computer. They allow the user to modify values to parameters that impact the release, migration, fate, and transport of microbes within a watershed, and control delin...

  4. Quantitative Microbial Risk Assessment Tutorial – SDMProjectBuilder: Import Local Data Files to Identify and Modify Contamination Sources and Input Parameters

    EPA Science Inventory

    Twelve example local data support files are automatically downloaded when the SDMProjectBuilder is installed on a computer. They allow the user to modify values to parameters that impact the release, migration, fate, and transport of microbes within a watershed, and control delin...

  5. The reliable solution and computation time of variable parameters logistic model

    NASA Astrophysics Data System (ADS)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed T_c) of a double-precision computation of a variable-parameters logistic map (VPLM). Firstly, by using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a VPLM with time-dependent, non-stationary parameters and then calculate the mean T_c. The results indicate that, for each different initial value, the T_c values of the VPLM are generally different. However, the mean T_c tends to a constant value when the sample number is large enough. The maximum, minimum, and probability distribution functions of T_c are also obtained, which can help us to assess the robustness of applying nonlinear time series theory to forecasting using the VPLM output. In addition, the T_c of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this T_c matches the value predicted by the theoretical formula.
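
    The reliable computation time can be illustrated for the fixed-parameter case mentioned at the end of the abstract: iterate the logistic map in double precision and in a high-precision reference arithmetic, and record the first step at which the two trajectories disagree beyond a tolerance. The parameter value, tolerance, and initial-value distribution below are illustrative; the paper's VPLM uses time-dependent parameters.

```python
import random
from decimal import Decimal, getcontext

def reliable_computation_time(r, x0, tol=1e-8, n_max=200, ref_digits=60):
    """Iterate the logistic map x <- r*x*(1-x) in double precision and in
    high-precision Decimal arithmetic; return the first step at which the
    two solutions differ by more than tol."""
    getcontext().prec = ref_digits
    x_d = float(x0)
    x_r = Decimal(str(x0))
    r_d, r_r = float(r), Decimal(str(r))
    for n in range(1, n_max + 1):
        x_d = r_d * x_d * (1.0 - x_d)
        x_r = r_r * x_r * (Decimal(1) - x_r)
        if abs(x_d - float(x_r)) > tol:
            return n
    return n_max

# The reliable time depends on the initial value; average over many initial values.
random.seed(0)
samples = [reliable_computation_time(3.99, random.uniform(0.1, 0.9)) for _ in range(1000)]
print("mean T_c:", sum(samples) / len(samples))
```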

  6. Changes in liver stiffness measurement using acoustic radiation force impulse elastography after antiviral therapy in patients with chronic hepatitis C.

    PubMed

    Chen, Sheng-Hung; Lai, Hsueh-Chou; Chiang, I-Ping; Su, Wen-Pang; Lin, Chia-Hsin; Kao, Jung-Ta; Chuang, Po-Heng; Hsu, Wei-Fan; Wang, Hung-Wei; Chen, Hung-Yao; Huang, Guan-Tarn; Peng, Cheng-Yuan

    2018-01-01

    The aim was to compare on-treatment and off-treatment parameters acquired using acoustic radiation force impulse elastography, the Fibrosis-4 (FIB-4) index, and the aspartate aminotransferase-to-platelet ratio index (APRI) in patients with chronic hepatitis C (CHC). Patients received therapies based on pegylated interferon or direct-acting antiviral agents. The changes in paired patient parameters, including liver stiffness (LS) values, the FIB-4 index, and APRI, from baseline to sustained virologic response (SVR) visit (24 weeks after the end of treatment) were compared. Multiple regression models were used to identify significant factors that explained the correlations with LS, FIB-4, and APRI values and SVR. A total of 256 patients were included, of which 219 (85.5%) achieved SVR. The paired LS values declined significantly from baseline to SVR visit in all groups and subgroups except the nonresponder subgroup (n = 10). Body mass index (P = 0.0062) and baseline LS (P < 0.0001) were identified as independent factors that explained the LS declines. Likewise, the baseline FIB-4 (P < 0.0001) and APRI (P < 0.0001) values independently explained the declines in the FIB-4 index and APRI, respectively. Moreover, interleukin-28B polymorphisms, baseline LS, and rapid virologic response were identified as independent correlates with SVR. Paired LS measurements in patients treated for CHC exhibited significant declines comparable to those in FIB-4 and APRI values. These declines may have correlated with the resolution of necroinflammation. Baseline LS values predicted SVR.

  7. Assessing the sensitivity of a land-surface scheme to the parameter values using a single column model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitman, A.J.

    The sensitivity of a land-surface scheme (the Biosphere Atmosphere Transfer Scheme, BATS) to its parameter values was investigated using a single column model. Identifying which parameters were important in controlling the turbulent energy fluxes, temperature, soil moisture, and runoff was dependent upon many factors. In the simulation of a nonmoisture-stressed tropical forest, results were dependent on a combination of reservoir terms (soil depth, root distribution), flux efficiency terms (roughness length, stomatal resistance), and available energy (albedo). If moisture became limited, the reservoir terms increased in importance because the total fluxes predicted depended on moisture availability and not on the rate of transfer between the surface and the atmosphere. The sensitivity shown by BATS depended on which vegetation type was being simulated, which variable was used to determine sensitivity, the magnitude and sign of the parameter change, the climate regime (precipitation amount and frequency), and soil moisture levels and proximity to wilting. The interactions between these factors made it difficult to identify the most important parameters in BATS. Therefore, this paper does not argue that a particular set of parameters is important in BATS, rather it shows that no general ranking of parameters is possible. It is also emphasized that using 'stand-alone' forcing to examine the sensitivity of a land-surface scheme to perturbations, in either parameters or the atmosphere, is unreliable due to the lack of surface-atmospheric feedbacks.

  8. Structure of the Large Magellanic Cloud from near infrared magnitudes of red clump stars

    NASA Astrophysics Data System (ADS)

    Subramanian, S.; Subramaniam, A.

    2013-04-01

    Context. The structural parameters of the disk of the Large Magellanic Cloud (LMC) are estimated. Aims: We used the JH photometric data of red clump (RC) stars from the Magellanic Cloud Point Source Catalog (MCPSC) obtained from the InfraRed Survey Facility (IRSF) to estimate the structural parameters of the LMC disk, such as the inclination, i, and the position angle of the line of nodes (PAlon), φ. Methods: The observed LMC region is divided into several sub-regions, and stars in each region are cross-identified with the optically identified RC stars to obtain the near infrared magnitudes. The peak values of H magnitude and (J - H) colour of the observed RC distribution are obtained by fitting a profile to the distributions and by taking the average value of magnitude and colour of the RC stars in the bin with the largest number. Then the dereddened peak H0 magnitude of the RC stars in each sub-region is obtained from the peak values of H magnitude and (J - H) colour of the observed RC distribution. The right ascension (RA), declination (Dec), and relative distance from the centre of each sub-region are converted into x, y, and z Cartesian coordinates. A weighted least-squares plane-fitting method is applied to these x, y, z data to estimate the structural parameters of the LMC disk. Results: An intrinsic (J - H)0 colour of 0.40 ± 0.03 mag in the Simultaneous three-colour InfraRed Imager for Unbiased Survey (SIRIUS) IRSF filter system is estimated for the RC stars in the LMC and a reddening map based on (J - H) colour of the RC stars is presented. When the peaks of the RC distribution were identified by averaging, an inclination of 25°.7 ± 1°.6 and a PAlon = 141°.5 ± 4°.5 were obtained. We estimate a distance modulus, μ = 18.47 ± 0.1 mag, to the LMC. Extra-planar features, both in front of and behind the fitted plane, are identified. They match with the optically identified extra-planar features. The bar of the LMC is found to be part of the disk within 500 pc. Conclusions: The estimates of the structural parameters are found to be independent of the photometric bands used for the analysis. The radial variation of the structural parameters is also studied. We find that the inner disk, within ~3°.0, is less inclined and has a larger value of PAlon when compared to the outer disk. Our estimates are compared with the literature values, and the possible reasons for the small discrepancies found are discussed.
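
    A hedged sketch of the plane-fitting step: with each sub-region reduced to Cartesian coordinates (x, y) on the sky and a relative line-of-sight distance z, a weighted least-squares fit of z = ax + by + c yields the disk orientation. The inclination follows from the angle between the plane normal and the line of sight; converting the line-of-nodes direction into the astronomical position-angle convention depends on how the x and y axes are oriented, so the value printed here is only a direction in the adopted frame. The coordinates below are synthetic.

```python
import numpy as np

def fit_disk_plane(x, y, z, w):
    """Weighted least-squares fit of the plane z = a*x + b*y + c and conversion of
    (a, b) to an inclination and a line-of-nodes direction in the x-y plane."""
    A = np.column_stack([x, y, np.ones_like(x)])
    W = np.sqrt(w)
    coeff, *_ = np.linalg.lstsq(A * W[:, None], z * W, rcond=None)
    a, b, c = coeff
    inclination = np.degrees(np.arccos(1.0 / np.sqrt(1.0 + a * a + b * b)))
    # Line of nodes: intersection of the fitted plane with a plane of constant z,
    # i.e. the direction (b, -a) in the x-y plane.
    nodes_angle = np.degrees(np.arctan2(-a, b)) % 180.0
    return a, b, c, inclination, nodes_angle

# Synthetic sub-region coordinates (kpc) with equal weights.
rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, 400)
y = rng.uniform(-3, 3, 400)
z = 0.35 * x - 0.20 * y + 0.1 + rng.normal(0, 0.05, 400)
a, b, c, inc, pa = fit_disk_plane(x, y, z, np.ones_like(x))
print(f"inclination = {inc:.1f} deg, line-of-nodes direction = {pa:.1f} deg")
```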

  9. Relationship between genetic parameters in maize (Zea mays) with seedling growth parameters under 40-100% soil moisture conditions.

    PubMed

    Muhammad, R W; Qayyum, A

    2013-10-18

    We estimated the association of genetic parameters with production characters in 64 maize (Zea mays) genotypes in a greenhouse in soil with 40-100% moisture levels (percent of soil moisture capacity). To identify the major parameters that account for variation among the genotypes, we used single linkage cluster analysis and principal component analysis. Ten plant characters were measured. The first two, four, three, and again three components, with eigenvalues > 1, contributed 75.05, 80.11, 68.67, and 75.87% of the variability among the genotypes under the different moisture levels, i.e., 40, 60, 80, and 100%, respectively. Other principal components (3-10, 5-10, and 4-10) had eigenvalues less than 1. The highest estimates of heritability were found for root fresh weight, root volume (0.99), and shoot fresh weight (0.995) in 40% soil moisture. Values of genetic advance ranged from 23.4024 for SR at 40% soil moisture to 0.2538 for shoot dry weight in 60% soil moisture. The high magnitude of broad-sense heritability provides evidence that these plant characters are under the control of additive genetic effects. This indicates that selection should lead to fast genetic improvement of the material. The superior agronomic types that we identified may be exploited for genetic potential to improve yield potential of the maize crop.

  10. Parameter estimation in Probabilistic Seismic Hazard Analysis: current problems and some solutions

    NASA Astrophysics Data System (ADS)

    Vermeulen, Petrus

    2017-04-01

    A typical Probabilistic Seismic Hazard Analysis (PSHA) comprises identification of seismic source zones, determination of hazard parameters for these zones, selection of an appropriate ground motion prediction equation (GMPE), and integration over probabilities according the Cornell-McGuire procedure. Determination of hazard parameters often does not receive the attention it deserves, and, therefore, problems therein are often overlooked. Here, many of these problems are identified, and some of them addressed. The parameters that need to be identified are those associated with the frequency-magnitude law, those associated with earthquake recurrence law in time, and the parameters controlling the GMPE. This study is concerned with the frequency-magnitude law and temporal distribution of earthquakes, and not with GMPEs. The Gutenberg-Richter frequency-magnitude law is usually adopted for the frequency-magnitude law, and a Poisson process for earthquake recurrence in time. Accordingly, the parameters that need to be determined are the slope parameter of the Gutenberg-Richter frequency-magnitude law, i.e. the b-value, the maximum magnitude mmax up to which the Gutenberg-Richter law applies, and the mean recurrence frequency, λ, of earthquakes. If, instead of the Cornell-McGuire, the "Parametric-Historic procedure" is used, these parameters do not have to be known before the PSHA computations, they are estimated directly during the PSHA computation. The resulting relation for the frequency of ground motion vibration parameters has an analogous functional form to the frequency-magnitude law, which is described by parameters γ (analogous to the b-value of the Gutenberg-Richter law) and the maximum possible ground motion amax (analogous to mmax). Originally, the approach could be applied only to simple GMPEs; recently, however, the method was extended to incorporate more complex forms of GMPEs. With regards to the parameter mmax, there are numerous methods of estimation, none of which is accepted as the standard one. There is also much controversy surrounding this parameter. In practice, when estimating the above-mentioned parameters from a seismic catalogue, the magnitude, mmin, above which the catalogue is complete becomes important. Thus, mmin is also considered a parameter to be estimated in practice. Several methods are discussed in the literature, and no specific method is preferred. Methods usually aim at identifying the point where a frequency-magnitude plot starts to deviate from linearity due to data loss. Parameter estimation is clearly a rich field which deserves much attention and, possibly, standardization of methods. These methods should be sound and efficient, and a query into which methods are to be used - and for that matter which ones are not to be used - is in order.
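
    For the b-value specifically, the standard maximum-likelihood estimator (Aki 1965, with Utsu's correction for binned magnitudes) is simple enough to sketch; the catalogue below is synthetic and the completeness magnitude is assumed known.

```python
import numpy as np

def aki_b_value(magnitudes, m_min, dm=0.1):
    """Maximum-likelihood b-value with the binning correction:
    b = log10(e) / (mean(M) - (m_min - dm/2)), using only events with M >= m_min."""
    m = np.asarray(magnitudes)
    m = m[m >= m_min]
    b = np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))
    se = b / np.sqrt(m.size)          # first-order standard error
    return b, se, m.size

# Synthetic catalogue: magnitudes drawn from a Gutenberg-Richter law with b = 1.0,
# complete above m_min = 2.5 and binned to 0.1 magnitude units.
rng = np.random.default_rng(42)
mags = np.round(2.5 + rng.exponential(scale=1.0 / (1.0 * np.log(10)), size=5000), 1)
b, se, n = aki_b_value(mags, m_min=2.5)
print(f"b = {b:.2f} +/- {se:.2f} from {n} events")
```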

  11. Degradation of edible oil during food processing by ultrasound: electron paramagnetic resonance, physicochemical, and sensory appreciation.

    PubMed

    Pingret, Daniella; Durand, Grégory; Fabiano-Tixier, Anne-Sylvie; Rockenbauer, Antal; Ginies, Christian; Chemat, Farid

    2012-08-08

    During ultrasound processing of lipid-containing food, some off-flavors can be detected, which can incite depreciation by consumers. The impacts of ultrasound treatment on sunflower oil using two different ultrasound horns (titanium and pyrex) were evaluated. An electron paramagnetic resonance study was performed to identify and quantify the formed radicals, along with the assessment of classical physicochemical parameters such as peroxide value, acid value, anisidine value, conjugated dienes, polar compounds, water content, polymer quantification, fatty acid composition, and volatiles profile. The study shows an increase of formed radicals in sonicated oils, as well as the modification of physicochemical parameters evidencing an oxidation of treated oils.

  12. Whole lesion histogram analysis of meningiomas derived from ADC values. Correlation with several cellularity parameters, proliferation index KI 67, nucleic content, and membrane permeability.

    PubMed

    Surov, Alexey; Hamerla, Gordian; Meyer, Hans Jonas; Winter, Karsten; Schob, Stefan; Fiedler, Eckhard

    2018-09-01

    The aim was to analyze several histopathological features and their possible correlations with whole-lesion histogram analysis derived from ADC maps in meningioma. The retrospective study involved 36 patients with primary meningiomas. For every tumor, the following histogram analysis parameters of the apparent diffusion coefficient (ADC) were calculated: ADC_mean, ADC_max, ADC_min, ADC_median, ADC_mode, the ADC percentiles P10, P25, P75, and P90, as well as kurtosis, skewness, and entropy. All measures were performed by two radiologists. Proliferation index KI 67, minimal, maximal and mean cell count, total nucleic area, and expression of water channel aquaporin 4 (AQP4) were estimated. Spearman's correlation coefficient was used to analyze associations between investigated parameters. A perfect interobserver agreement for all ADC values (0.84-0.97) was identified. All ADC values correlated inversely with tumor cellularity, with the strongest correlation between P10, P25 and mean cell count (-0.558). KI 67 correlated inversely with all ADC values except ADC_min. ADC parameters did not correlate with total nucleic area. All ADC values showed statistically significant correlations with the expression of AQP4. ADC histogram analysis is a valid method with an excellent interobserver agreement. Cellularity parameters and proliferation potential are associated with different ADC values. Membrane permeability may play a greater role for water diffusion than cell count and proliferation activity. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    USGS Publications Warehouse

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
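
    A DELSA-style calculation can be sketched as follows: at many points sampled across the parameter space, compute finite-difference local derivatives of the model output and express each parameter's first-order contribution as a fraction of the local total variance. The toy response function, parameter ranges, and uniform priors below are assumptions for illustration, not the hydrologic models of the study.

```python
import numpy as np

def delsa_like_sensitivity(model, samples, prior_var, eps=1e-4):
    """For each sample point, compute finite-difference local derivatives and the
    first-order variance fraction attributed to each parameter (a DELSA-style
    local sensitivity index); returns an (n_samples, n_params) array."""
    samples = np.asarray(samples, dtype=float)
    n, k = samples.shape
    indices = np.zeros((n, k))
    for i, theta in enumerate(samples):
        grad = np.zeros(k)
        y0 = model(theta)
        for j in range(k):
            pert = theta.copy()
            pert[j] += eps
            grad[j] = (model(pert) - y0) / eps
        contrib = grad ** 2 * np.asarray(prior_var)
        indices[i] = contrib / contrib.sum()
    return indices

# Toy "bucket-style" response: a runoff-like output of two storage and one delay parameter.
def toy_response(theta):
    smax, kq, delay = theta
    return 10.0 / (1.0 + smax) + 5.0 * kq + 0.5 * np.exp(-delay)

rng = np.random.default_rng(7)
samples = rng.uniform([1.0, 0.1, 0.5], [10.0, 0.9, 5.0], size=(200, 3))
prior_var = ((np.array([10.0, 0.9, 5.0]) - np.array([1.0, 0.1, 0.5])) ** 2) / 12.0
S = delsa_like_sensitivity(toy_response, samples, prior_var)
print("median local index per parameter:", np.round(np.median(S, axis=0), 3))
```

    Looking at the full distribution of the local indices across the samples, rather than only their median, is what lets this kind of analysis show that a parameter can dominate in one region of parameter space and be unimportant in another, as the abstract describes for the runoff time delay.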

  14. Investigation of shift in decay hazard (Scheffer) index values over the period 1969-2008 in the conterminous United States

    Treesearch

    Patricia K. Lebow; Charles G. Carll

    2010-01-01

    A statistical analysis was performed that identified time trends in the Scheffer Index value for 167 locations in the conterminous United States over the period 1969-2008. Year-to-year variation in Index values was found to be larger than year-to-year variation in most other weather parameters. Despite the substantial yearly variation, regression equations, with time (...

  15. Line Mixing in Water Vapor and Methane

    NASA Technical Reports Server (NTRS)

    Smith, M. A. H.; Brown, L. R.; Toth, R. A.; Devi, V. Malathy; Benner, Chris

    2006-01-01

    A multispectrum fitting algorithm has been used to identify line mixing and determine mixing parameters for infrared transitions of H2O and CH4 in the 5-9 micrometer region. Line mixing parameters at room temperature were determined for two pairs of transitions in the ν2 fundamental band of H2O-16, for self-broadening and for broadening by H2, He, CO2, N2, O2 and air. Line mixing parameters have been determined from air-broadened CH4 spectra, recorded at temperatures between 210 K and 314 K, in selected R-branch manifolds of the ν4 band. For both H2O and CH4, the inclusion of line mixing was seen to have a greater effect on the retrieved values of the line shifts than on the retrieved values of other parameters.

  16. Temperature, stress, and corrosive sensing apparatus utilizing harmonic response of magnetically soft sensor element (s)

    NASA Technical Reports Server (NTRS)

    Grimes, Craig A. (Inventor); Ong, Keat Ghee (Inventor)

    2003-01-01

    A temperature sensing apparatus including a sensor element made of a magnetically soft material operatively arranged within a first and second time-varying interrogation magnetic field, the first time-varying magnetic field being generated at a frequency higher than that for the second magnetic field. A receiver, remote from the sensor element, is engaged to measure intensity of electromagnetic emissions from the sensor element to identify a relative maximum amplitude value for each of a plurality of higher-order harmonic frequency amplitudes so measured. A unit then determines a value for temperature (or other parameter of interest) using the relative maximum harmonic amplitude values identified. In other aspects of the invention, the focus is on an apparatus and technique for determining a value for the stress condition of a solid analyte and for determining a value for corrosion, using the relative maximum harmonic amplitude values identified. A magnetically hard element supporting a biasing field adjacent the magnetically soft sensor element can be included.

  17. Analysis of sensitivity of simulated recharge to selected parameters for seven watersheds modeled using the precipitation-runoff modeling system

    USGS Publications Warehouse

    Ely, D. Matthew

    2006-01-01

    Recharge is a vital component of the ground-water budget and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One approach to estimating ground-water recharge uses process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls in determining ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify the model parameters that have the greatest effect on simulated ground-water recharge and allow the hydrologic system responses to those parameters to be compared and contrasted. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of recharge to any parameters. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.

  18. Estimation of dynamic rotor loads for the rotor systems research aircraft: Methodology development and validation

    NASA Technical Reports Server (NTRS)

    Duval, R. W.; Bahrami, M.

    1985-01-01

    The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition, it is desirable to identify the parameters directly from the flight data. A Maximum Likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. The rapid convergence facilitates tracking of time-varying parameters of the load cell model in flight.

  19. Bouc-Wen hysteresis model identification using Modified Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-12-01

    The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with the performance of the conventional Firefly Algorithm, Genetic Algorithm, and Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with measured data.
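
    Identification of this kind reduces to simulating the Bouc-Wen element for candidate parameters and minimising the misfit to measured force data. The sketch below uses the standard Bouc-Wen formulation with forward-Euler integration and, in place of the Modified Firefly Algorithm, an off-the-shelf global optimiser (scipy's differential evolution) to play the same role; the displacement history, noise level, and parameter bounds are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

def bouc_wen_force(params, x, dt):
    """Restoring force of a Bouc-Wen element for a displacement history x sampled at dt,
    integrating the hysteretic variable z with a simple forward-Euler scheme."""
    alpha, k, A, beta, gamma, n = params
    z = 0.0
    xdot = np.gradient(x, dt)
    force = np.zeros_like(x)
    for i, xd in enumerate(xdot):
        force[i] = alpha * k * x[i] + (1.0 - alpha) * k * z
        zdot = A * xd - beta * abs(xd) * abs(z) ** (n - 1) * z - gamma * xd * abs(z) ** n
        z += zdot * dt
    return force

# Hypothetical experiment: sinusoidal displacement and "measured" force from known parameters.
dt = 0.01
t = np.arange(0, 5, dt)
x = 0.05 * np.sin(2 * np.pi * 0.5 * t)
true_params = (0.3, 200.0, 1.0, 60.0, 30.0, 1.5)
measured = bouc_wen_force(true_params, x, dt) + np.random.default_rng(0).normal(0, 0.05, t.size)

def misfit(p):
    """Mean squared error between simulated and measured restoring force."""
    return np.mean((bouc_wen_force(p, x, dt) - measured) ** 2)

bounds = [(0.0, 1.0), (50.0, 500.0), (0.5, 2.0), (1.0, 200.0), (1.0, 200.0), (1.0, 3.0)]
result = differential_evolution(misfit, bounds, maxiter=30, seed=1)
print("identified parameters:", np.round(result.x, 2))
```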

  20. Synchronization transition of a coupled system composed of neurons with coexisting behaviors near a Hopf bifurcation

    NASA Astrophysics Data System (ADS)

    Jia, Bing

    2014-05-01

    The coexistence of a resting condition and period-1 firing near a subcritical Hopf bifurcation point, lying between the monostable resting condition and period-1 firing, is often observed in neurons of the central nervous system. Near such a bifurcation point in the Morris-Lecar (ML) model, the attraction domain of the resting condition decreases while that of the coexisting period-1 firing increases as the bifurcation parameter value increases. As the coupling strength increases, parameter- and initial-value-dependent synchronization transition processes from non-synchronization to complete synchronization are simulated in two coupled ML neurons with coexisting behaviors: one neuron chosen as the resting condition and the other the coexisting period-1 firing. The complete synchronization is either a resting condition or period-1 firing, depending on the initial values of the period-1 firing neuron, when the bifurcation parameter value is small or intermediate, and is period-1 firing when the parameter value is large. As the bifurcation parameter value increases, the probability of the initial values of a period-1 firing neuron that lead to complete synchronization of period-1 firing increases, while that leading to complete synchronization of the resting condition decreases. This shows that the larger the attraction domain of a coexisting behavior, the higher the probability of initial values leading to complete synchronization of that behavior. The bifurcations of the coupled system are investigated and discussed. The results reveal the complex dynamics of synchronization behaviors of the coupled system composed of neurons with the coexisting resting condition and period-1 firing, and are helpful to further identify the dynamics of the spatiotemporal behaviors of the central nervous system.

  1. Reliability of diabetic patients' gait parameters in a challenging environment.

    PubMed

    Allet, L; Armand, S; de Bie, R A; Golay, A; Monnin, D; Aminian, K; de Bruin, E D

    2008-11-01

    Activities of daily life require us to move about in challenging environments and to walk on varied surfaces. Irregular terrain has been shown to influence gait parameters, especially in a population at risk for falling. A precise portable measurement system would permit objective gait analysis under such conditions. The aims of this study are to (a) investigate the reliability of gait parameters measured with the Physilog in diabetic patients walking on different surfaces (tar, grass, and stones); (b) identify the measurement error (precision); (c) identify the minimal clinical detectable change. Sixteen patients with Type 2 diabetes were measured twice within 8 days. After clinical examination, patients walked, equipped with a Physilog, on the three aforementioned surfaces. The ICC for each surface was excellent for within-visit analyses (>0.938). Inter-visit ICCs (0.753) were excellent except for the knee range parameter (>0.503). The coefficient of variation (CV) was lower than 5% for most of the parameters. Bland-Altman plots, SEM, and SDC showed precise values, distributed around zero for all surfaces. Good reliability of Physilog measurements on different surfaces suggests that Physilog could facilitate the study of diabetic patients' gait in conditions close to real-life situations. Gait parameters during complex locomotor activities (e.g. stair-climbing, curbs, slopes) have not yet been extensively investigated. Good reliability, small measurement error and values of minimal clinical detectable change recommend the utilization of Physilog for the evaluation of gait parameters in diabetic patients.
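
    The measurement error and minimal detectable change reported above follow from standard formulas once an ICC is available: SEM = SD x sqrt(1 - ICC) and SDC95 = 1.96 x sqrt(2) x SEM. A small sketch, with hypothetical walking-speed values and an assumed ICC:

```python
import numpy as np

def sem_and_sdc(scores_visit1, scores_visit2, icc):
    """Standard error of measurement and 95% smallest detectable change for a gait
    parameter measured on two visits, given a previously computed ICC."""
    pooled_sd = np.std(np.concatenate([scores_visit1, scores_visit2]), ddof=1)
    sem = pooled_sd * np.sqrt(1.0 - icc)
    sdc = 1.96 * np.sqrt(2.0) * sem
    return sem, sdc

# Hypothetical walking-speed values (m/s) on tar for two visits, with an assumed ICC = 0.90.
v1 = np.array([1.02, 0.95, 1.10, 0.88, 1.05, 0.99, 0.93, 1.12])
v2 = np.array([1.00, 0.97, 1.08, 0.90, 1.07, 0.96, 0.95, 1.10])
sem, sdc = sem_and_sdc(v1, v2, icc=0.90)
print(f"SEM = {sem:.3f} m/s, SDC = {sdc:.3f} m/s")
```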

  2. Temporal and spatial variations of Gutenberg-Richter parameter and fractal dimension in Western Anatolia, Turkey

    NASA Astrophysics Data System (ADS)

    Bayrak, Erdem; Yılmaz, Şeyda; Bayrak, Yusuf

    2017-05-01

    The temporal and spatial variations of the Gutenberg-Richter parameter (b-value) and fractal dimension (DC) during the period 1900-2010 in Western Anatolia were investigated. The study area is divided into 15 different source zones based on their tectonic and seismotectonic regimes. We calculated the temporal variation of b and DC values in each region using Zmap. The temporal variation of these parameters for the prediction of major earthquakes was calculated. The spatial distribution of these parameters is related to the stress levels of the faults. We observed that b and DC values change before the major earthquakes in the 15 seismic regions. To evaluate the spatial distribution of b and DC values, a 0.50° × 0.50° grid interval was used. b-values smaller than 0.70 are related to the Aegean Arc and Eskisehir Fault. The highest values are related to the Sultandağı and Sandıklı Faults. The fractal correlation dimension varies from 1.65 to 2.60, which shows that the study area has a higher DC value. The lowest DC values are related to the joining area between the Aegean and Cyprus arcs and the Burdur-Fethiye fault zone. Some have concluded that b-values drop instantly before large shocks. Others suggested that temporally stable low b value zones identify future large earthquake locations. The results reveal that large earthquakes occur when b decreases and DC increases, suggesting that variation of b and DC can be used as an earthquake precursor. Mapping of b and DC values provides information about the state of stress in the region, i.e. lower b and higher DC values are associated with epicentral areas of large earthquakes.

  3. Vehicle response-based track geometry assessment using multi-body simulation

    NASA Astrophysics Data System (ADS)

    Kraft, Sönke; Causse, Julien; Coudert, Frédéric

    2018-02-01

    The assessment of the geometry of railway tracks is an indispensable requirement for safe rail traffic. Defects which represent a risk for the safety of the train have to be identified and the necessary measures taken. According to current standards, amplitude thresholds are applied to the track geometry parameters measured by recording cars. This geometry-based assessment has proved its value but suffers from the low correlation between the geometry parameters and the vehicle reactions. Experience shows that some defects leading to critical vehicle reactions are underestimated by this approach. The use of vehicle responses in the track geometry assessment process allows identifying critical defects and improving the maintenance operations. This work presents a vehicle response-based assessment method using multi-body simulation. The choice of the relevant operation conditions and the estimation of the simulation uncertainty are outlined. The defects are identified from exceedances of track geometry and vehicle response parameters. They are then classified using clustering methods and the correlation with vehicle response is analysed. The use of vehicle responses allows the detection of critical defects which are not identified from geometry parameters.

  4. Geotechnical Parameters of Alluvial Soils from in-situ Tests

    NASA Astrophysics Data System (ADS)

    Młynarek, Zbigniew; Stefaniak, Katarzyna; Wierzbicki, Jędrzej

    2012-10-01

    The article concentrates on the identification of geotechnical parameters of alluvial soil represented by silts found near Poznan and Elblag. Strength and deformation parameters of the subsoil tested were identified by the CPTU (static penetration) and SDMT (dilatometric) methods, as well as by the vane test (VT). Geotechnical parameters of the subsoil were analysed with a view to using the soil as an earth construction material and as a foundation for buildings constructed on the grounds tested. The article includes an analysis of the overconsolidation process of the soil tested and a formula for the identification of the overconsolidation ratio OCR. Equation 9 reflects the relation between the undrained shear strength and plasticity of the silts analyzed and the OCR value. The analysis resulted in the determination of the Nkt coefficient, which might be used to identify the undrained shear strength of both sediments tested. On the basis of a detailed analysis of changes in terms of the constrained oedometric modulus M0, the relations between the said modulus, the liquidity index and the OCR value were identified. Mayne's formula (1995) was used to determine the M0 modulus from the CPTU test. The usefulness of the sediments found near Poznan as an earth construction material was analysed after their structure had been destroyed and compacted with a Proctor apparatus. In cases of samples characterised by different water content and soil particle density, the analysis of changes in terms of cohesion and the internal friction angle proved that these parameters are influenced by the soil phase composition (Fig. 18 and 19). On the basis of the tests, it was concluded that the most desirable shear strength parameters are achieved when the silt is compacted below the optimum water content.

  6. Hyperbolic Discounting: Value and Time Processes of Substance Abusers and Non-Clinical Individuals in Intertemporal Choice

    PubMed Central

    2014-01-01

    The single parameter hyperbolic model has been frequently used to describe value discounting as a function of time and to differentiate substance abusers and non-clinical participants with the model's parameter k. However, k says little about the mechanisms underlying the observed differences. The present study evaluates several alternative models with the purpose of identifying whether group differences stem from differences in subjective valuation, and/or time perceptions. Using three two-parameter models, plus secondary data analyses of 14 studies with 471 indifference point curves, results demonstrated that adding a valuation, or a time perception function led to better model fits. However, the gain in fit due to the flexibility granted by a second parameter did not always lead to a better understanding of the data patterns and corresponding psychological processes. The k parameter consistently indexed group and context (magnitude) differences; it is thus a mixed measure of person and task level effects. This was similar for a parameter meant to index payoff devaluation. A time perception parameter, on the other hand, fluctuated with contexts in a non-predicted fashion and the interpretation of its values was inconsistent with prior findings that supported enlarged perceived delays for substance abusers compared to controls. Overall, the results provide mixed support for hyperbolic models of intertemporal choice in terms of the psychological meaning afforded by their parameters. PMID:25390941
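
    The single-parameter hyperbolic model referred to above is V = A / (1 + kD), where V is the subjective value of a reward of amount A delayed by D, and k is estimated by fitting this curve to indifference points. A minimal sketch with hypothetical indifference points (delayed amount normalised to 1):

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(delay, k):
    """Single-parameter hyperbolic discounting: subjective value of a delayed
    reward of amount 1 is V = 1 / (1 + k * delay)."""
    return 1.0 / (1.0 + k * delay)

# Hypothetical indifference points at several delays (days).
delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)
indifference = np.array([0.95, 0.83, 0.60, 0.38, 0.26, 0.17])

k_hat, cov = curve_fit(hyperbolic, delays, indifference, p0=[0.01])
print(f"k = {k_hat[0]:.4f} (larger k = steeper discounting)")
```

    The two-parameter alternatives discussed in the abstract extend this form with an additional exponent on the value or the delay term; the fitting step is the same, only the model function changes.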

  7. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    PubMed

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
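
    The FIM-based selection idea can be sketched as follows: build the output Jacobian with respect to the parameters, form FIM = J^T J / sigma^2, and rank parameters by how much information the measured output carries about them. The tracking model, parameter values, and the use of the FIM diagonal as the ranking criterion below are illustrative simplifications of the paper's procedure, not its actual biomechanical model.

```python
import numpy as np

def fisher_information(model, theta, t, sigma=1.0, eps=1e-6):
    """Finite-difference output Jacobian and Fisher information matrix
    FIM = J^T J / sigma^2 for a model y(t; theta) with Gaussian measurement noise."""
    theta = np.asarray(theta, dtype=float)
    y0 = model(t, theta)
    J = np.zeros((y0.size, theta.size))
    for j in range(theta.size):
        pert = theta.copy()
        pert[j] += eps * max(1.0, abs(theta[j]))
        J[:, j] = (model(t, pert) - y0) / (pert[j] - theta[j])
    return J.T @ J / sigma ** 2, J

def rank_parameters(fim):
    """Rank parameters by the information the output carries about each of them
    (diagonal of the FIM); a simple proxy for the selection step."""
    return np.argsort(np.diag(fim))[::-1]

# Hypothetical second-order tracking response: y(t) = K * (1 - exp(-z*w*t) * cos(w*t)) + bias.
def tracking_model(t, theta):
    K, w, z, bias = theta
    return K * (1.0 - np.exp(-z * w * t) * np.cos(w * t)) + bias

t = np.linspace(0, 5, 200)
theta0 = np.array([1.0, 4.0, 0.3, 0.05])
fim, J = fisher_information(tracking_model, theta0, t, sigma=0.1)
print("parameters ranked by sensitivity (index):", rank_parameters(fim))
print("FIM condition number:", f"{np.linalg.cond(fim):.2e}")
```

    A large FIM condition number signals that some parameter directions are poorly informed by the data, which is exactly the situation in which fixing the insensitive parameters and estimating only the sensitive subset tightens the confidence intervals.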

  8. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
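
    Of the four approaches listed, the standardized regression coefficient (SRC) measure is the simplest to sketch: standardise inputs and output, fit a linear regression, and read parameter influence from the absolute coefficients. The sampled parameters and the response function below are hypothetical stand-ins for the Community Land Model runs.

```python
import numpy as np

def standardized_regression_coefficients(X, y):
    """Fit a linear regression on standardised inputs and output; the absolute
    coefficients rank parameter influence (the SRC sensitivity measure)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    coef, *_ = np.linalg.lstsq(np.column_stack([Xs, np.ones(len(ys))]), ys, rcond=None)
    return coef[:-1]

# Hypothetical samples of three land-model parameters and a latent-heat-flux-like response.
rng = np.random.default_rng(11)
X = rng.uniform(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.2, 500)
src = standardized_regression_coefficients(X, y)
print("SRCs:", np.round(src, 3))   # the first parameter should dominate
```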

  9. Application of the precipitation-runoff model in the Warrior coal field, Alabama

    USGS Publications Warehouse

    Kidd, Robert E.; Bossong, C.R.

    1987-01-01

    A deterministic precipitation-runoff model, the Precipitation-Runoff Modeling System, was applied in two small basins located in the Warrior coal field, Alabama. Each basin has distinct geologic, hydrologic, and land-use characteristics. Bear Creek basin (15.03 square miles) is undisturbed, is underlain almost entirely by consolidated coal-bearing rocks of Pennsylvanian age (Pottsville Formation), and is drained by an intermittent stream. Turkey Creek basin (6.08 square miles) contains a surface coal mine and is underlain by both the Pottsville Formation and unconsolidated clay, sand, and gravel deposits of Cretaceous age (Coker Formation). Aquifers in the Coker Formation sustain flow through extended rainless periods. Preliminary daily and storm calibrations were developed for each basin. Initial parameter and variable values were determined according to techniques recommended in the user's manual for the modeling system and through field reconnaissance. Parameters with meaningful sensitivity were identified and adjusted to match hydrograph shapes and to compute realistic water year budgets. When the developed calibrations were applied to data exclusive of the calibration period as a verification exercise, results were comparable to those for the calibration period. The model calibrations included preliminary parameter values for the various categories of geology and land use in each basin. The parameter values for areas underlain by the Pottsville Formation in the Bear Creek basin were transferred directly to similar areas in the Turkey Creek basin, and these parameter values were held constant throughout the model calibration. Parameter values for all geologic and land-use categories addressed in the two calibrations can probably be used in ungaged basins where similar conditions exist. The parameter transfer worked well, as a good calibration was obtained for Turkey Creek basin.

  10. Identifying mechanical property parameters of planetary soil using in-situ data obtained from exploration rovers

    NASA Astrophysics Data System (ADS)

    Ding, Liang; Gao, Haibo; Liu, Zhen; Deng, Zongquan; Liu, Guangjun

    2015-12-01

    Identifying the mechanical property parameters of planetary soil based on terramechanics models using in-situ data obtained from autonomous planetary exploration rovers is both an important scientific goal and essential for control strategy optimization and high-fidelity simulations of rovers. However, identifying all the terrain parameters is a challenging task because of the nonlinear and coupling nature of the involved functions. Three parameter identification methods are presented in this paper to serve different purposes based on an improved terramechanics model that takes into account the effects of slip, wheel lugs, etc. Parameter sensitivity and coupling of the equations are analyzed, and the parameters are grouped according to their sensitivity to the normal force, resistance moment and drawbar pull. An iterative identification method using the original integral model is developed first. In order to realize real-time identification, the model is then simplified by linearizing the normal and shearing stresses to derive decoupled closed-form analytical equations. Each equation contains one or two groups of soil parameters, making step-by-step identification of all the unknowns feasible. Experiments were performed using six different types of single wheels as well as a four-wheeled rover moving on a planetary soil simulant. All the unknown model parameters were identified using the measured data and compared with the values obtained by conventional experiments. It is verified that the proposed iterative identification method provides improved accuracy, making it suitable for scientific studies of soil properties, whereas the step-by-step identification methods based on simplified models require less calculation time, making them more suitable for real-time applications. The models show less than a 10% margin of error compared with the measured results when predicting the interaction forces and moments using the corresponding identified parameters.

  11. Structural Identifiability of Dynamic Systems Biology Models

    PubMed Central

    Villaverde, Alejandro F.

    2016-01-01

    A powerful way of gaining insight into biological systems is by creating a nonlinear differential equation model, which usually contains many unknown parameters. Such a model is called structurally identifiable if it is possible to determine the values of its parameters from measurements of the model outputs. Structural identifiability is a prerequisite for parameter estimation, and should be assessed before exploiting a model. However, this analysis is seldom performed due to the high computational cost involved in the necessary symbolic calculations, which quickly becomes prohibitive as the problem size increases. In this paper we show how to analyse the structural identifiability of a very general class of nonlinear models by extending methods originally developed for studying observability. We present results about models whose identifiability had not been previously determined, report unidentifiabilities that had not been found before, and show how to modify those unidentifiable models to make them identifiable. This method helps prevent problems caused by lack of identifiability analysis, which can compromise the success of tasks such as experiment design, parameter estimation, and model-based optimization. The procedure is called STRIKE-GOLDD (STRuctural Identifiability taKen as Extended-Generalized Observability with Lie Derivatives and Decomposition), and it is implemented in a MATLAB toolbox which is available as open source software. The broad applicability of this approach facilitates the analysis of the increasingly complex models used in systems biology and other areas. PMID:27792726
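
    The observability-based idea can be sketched symbolically in Python (the actual STRIKE-GOLDD toolbox is MATLAB; the toy model and symbols below are assumptions). A parameter is appended as a constant state, Lie derivatives of the output are stacked, and full rank of their Jacobian indicates local structural identifiability.

    ```python
    # Minimal symbolic sketch of observability-based structural identifiability.
    import sympy as sp

    x1, x2, p = sp.symbols("x1 x2 p")
    states = sp.Matrix([x1, x2, p])            # parameter p treated as a state
    f = sp.Matrix([-p * x1, p * x1 - x2, 0])   # dynamics, with dp/dt = 0
    h = sp.Matrix([x2])                        # measured output

    # Lie derivatives L_f^k h and the observability-identifiability matrix.
    lies = [h]
    for _ in range(len(states) - 1):
        lies.append(lies[-1].jacobian(states) * f)
    O = sp.Matrix.vstack(*[L.jacobian(states) for L in lies])

    # Full (generic) rank => states and the parameter p are locally identifiable.
    print(O.rank(), "of", len(states))
    ```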

  12. Image parameters for maturity determination of a composted material containing sewage sludge

    NASA Astrophysics Data System (ADS)

    Kujawa, S.; Nowakowski, K.; Tomczak, R. J.; Boniecki, P.; Dach, J.

    2013-07-01

    Composting is one of the best methods for management of sewage sludge. In a reasonably conducted composting process it is important to early identify the moment in which a material reaches the young compost stage. The objective of this study was to determine parameters contained in images of composted material's samples that can be used for evaluation of the degree of compost maturity. The study focused on two types of compost: containing sewage sludge with corn straw and sewage sludge with rapeseed straw. The photographing of the samples was carried out on a prepared stand for the image acquisition using VIS, UV-A and mixed (VIS + UV-A) light. In the case of UV-A light, three values of the exposure time were assumed. The values of 46 parameters were estimated for each of the images extracted from the photographs of the composted material's samples. Exemplary averaged values of selected parameters obtained from the images of the composted material in the following sampling days were presented. All of the parameters obtained from the composted material's images are the basis for preparation of training, validation and test data sets necessary in development of neural models for classification of the young compost stage.

  13. ENU mutagenesis screening for dominant behavioral mutations based on normal control data obtained in home-cage activity, open-field, and passive avoidance tests.

    PubMed

    Wada, Yumiko; Furuse, Tamio; Yamada, Ikuko; Masuya, Hiroshi; Kushida, Tomoko; Shibukawa, Yoko; Nakai, Yuji; Kobayashi, Kimio; Kaneda, Hideki; Gondo, Yoichi; Noda, Tetsuo; Shiroishi, Toshihiko; Wakana, Shigeharu

    2010-01-01

    To establish the cutoff values for screening ENU-induced behavioral mutations, normal variations in mouse behavioral data were examined in home-cage activity (HA), open-field (OF), and passive-avoidance (PA) tests. We defined the normal range as one that included more than 95% of the normal control values. The cutoffs were defined to identify outliers yielding values that deviated from the normal by less than 5% for C57BL/6J, DBA/2J, DBF(1), and N(2) (DXDB) progenies. Cutoff values for G1-phenodeviant (DBF(1)) identification were defined based on values over +/- 3.0 SD from the mean of DBF(1) for all parameters assessed in the HA and OF tests. For the PA test, the cutoff values were defined based on whether the mice met the learning criterion during the 2nd (at a shock intensity of 0.3 mA) or the 3rd (at a shock intensity of 0.15 mA) retention test. For several parameters, the lower outliers were undetectable as the calculated cutoffs were negative values. Based on the cutoff criteria, we identified 275 behavioral phenodeviants among 2,646 G1 progeny. Of these, 64 were crossed with wild-type DBA/2J individuals, and the phenotype transmission was examined in the G2 progeny using the cutoffs defined for N(2) mice. In the G2 mice, we identified 15 novel dominant mutants exhibiting behavioral abnormalities, including hyperactivity in the HA or OF tests, hypoactivity in the OF test, and PA deficits. Genetic and detailed behavioral analysis of these ENU-induced mutants will provide novel insights into the molecular mechanisms underlying behavior.
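
    The ±3 SD cutoff rule described above is simple enough to show as a short sketch; the numbers are invented placeholders, not the screen's data.

    ```python
    # Toy illustration of the mean +/- 3 SD cutoff rule for phenodeviants.
    import numpy as np

    rng = np.random.default_rng(6)
    control = rng.normal(100.0, 15.0, 300)     # e.g. open-field activity counts
    lo = control.mean() - 3 * control.std()
    hi = control.mean() + 3 * control.std()

    g1_scores = np.array([92.0, 160.0, 45.0, 101.0])
    deviants = g1_scores[(g1_scores < lo) | (g1_scores > hi)]
    print("cutoffs:", round(lo, 1), round(hi, 1), "phenodeviants:", deviants)
    ```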

  14. Temperature analysis with voltage-current time differential operation of electrochemical sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay

    A method for temperature analysis of a gas stream. The method includes identifying a temperature parameter of an affected waveform signal. The method also includes calculating a change in the temperature parameter by comparing the affected waveform signal with an original waveform signal. The method also includes generating a value from the calculated change which corresponds to the temperature of the gas stream.

  15. Analysis and design of a genetic circuit for dynamic metabolic engineering.

    PubMed

    Anesiadis, Nikolaos; Kobayashi, Hideki; Cluett, William R; Mahadevan, Radhakrishnan

    2013-08-16

    Recent advances in synthetic biology have equipped us with new tools for bioprocess optimization at the genetic level. Previously, we have presented an integrated in silico design for the dynamic control of gene expression based on a density-sensing unit and a genetic toggle switch. In the present paper, analysis of a serine-producing Escherichia coli mutant shows that an instantaneous ON-OFF switch leads to a maximum theoretical productivity improvement of 29.6% compared to the mutant. To further the design, global sensitivity analysis is applied here to a mathematical model of serine production in E. coli coupled with a genetic circuit. The model of the quorum sensing and the toggle switch involves 13 parameters of which 3 are identified as having a significant effect on serine concentration. Simulations conducted in this reduced parameter space further identified the optimal ranges for these 3 key parameters to achieve productivity values close to the maximum theoretical values. This analysis can now be used to guide the experimental implementation of a dynamic metabolic engineering strategy and reduce the time required to design the genetic circuit components.

  16. Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.

    PubMed

    Shireman, Emilie; Steinley, Douglas; Brusco, Michael J

    2017-02-01

    Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
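
    The initialization issue can be demonstrated directly with scikit-learn; the data and settings below are assumptions for illustration, not the study's simulation design.

    ```python
    # Fit the same Gaussian mixture with two common starting strategies and
    # several random restarts, keeping the best (highest-likelihood) solution.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2.0, 1.0, (300, 1)),
                           rng.normal(3.0, 0.5, (300, 1))])

    for init in ("kmeans", "random"):
        gm = GaussianMixture(n_components=2, init_params=init,
                             n_init=10, random_state=0).fit(data)
        print(init, "best per-sample log-likelihood bound:",
              round(gm.lower_bound_, 4))
    # Differences across initializations or restarts signal locally optimal
    # EM solutions of the kind discussed above.
    ```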

  17. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Günther, Michael; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we extended an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
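
    A minimal numerical version of the sensitivity computation is sketched below for a Zajac-style linear activation equation da/dt = (u - a)/tau; the time constant and inputs are assumed values, and the paper's second-order and global analyses are not reproduced.

    ```python
    # First-order sensitivity of an activation ODE solution to its time constant,
    # approximated by central finite differences.
    import numpy as np
    from scipy.integrate import solve_ivp

    def activation(tau, u=1.0, t_end=0.5):
        sol = solve_ivp(lambda t, a: (u - a) / tau, (0.0, t_end), [0.0],
                        t_eval=np.linspace(0.0, t_end, 50))
        return sol.y[0]

    tau0, d = 0.04, 1e-4
    sens = (activation(tau0 + d) - activation(tau0 - d)) / (2 * d)
    print("peak |da/dtau|:", np.abs(sens).max())
    ```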

  18. The Impact of Variability of Selected Geological and Mining Parameters on the Value and Risks of Projects in the Hard Coal Mining Industry

    NASA Astrophysics Data System (ADS)

    Kopacz, Michał

    2017-09-01

    The paper attempts to assess the impact of variability of selected geological (deposit) parameters on the value and risks of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, while the results were verified for three existing bituminous coal seams. The Monte Carlo simulation was based on the nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with the use of an empirical copula. The calculations take into account the uncertainty in the parameters of the empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of volatility and correlation of deposit parameters was analyzed in two respects, by identifying the overall effect of the correlated variability of the parameters and the individual impact of the correlation on the NPV and IRR. For this purpose, a differential approach, which quantifies the possible errors in the calculation of these measures, was used. Based on the study it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV and the IRR by up to 4.4% and 0.4 percentage points, respectively. It should be noted, however, that the differences in NPV and IRR values can vary significantly, while their interpretation depends on the likelihood of implementation. Generalizing the obtained results based on the average values, the maximum risk premium under the given calculation conditions of the "X" deposit, and for correspondingly large datasets (greater than 2500), should not exceed 2.4 percentage points. The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be measured by the strength of correlation. In the analyzed case, the correlations result in limiting the range of variation of the geological parameters and economic results (the empirical copula reduces the NPV and IRR in the probabilistic approach). However, this is due to the adjustment of the calculation under conditions similar to those prevailing in the deposit.
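
    The simulation idea can be sketched as follows; the deposit parameters, cash-flow model and discount rate are invented placeholders, and the copula step is omitted.

    ```python
    # Nonparametric bootstrap of deposit parameters feeding a toy NPV model.
    import numpy as np

    rng = np.random.default_rng(7)
    thickness = rng.normal(2.0, 0.3, 1000)     # seam thickness samples (m), assumed
    calorific = rng.normal(24.0, 1.5, 1000)    # calorific value samples (MJ/kg), assumed

    def npv(cash_flows, rate=0.08):
        t = np.arange(1, len(cash_flows) + 1)
        return np.sum(cash_flows / (1.0 + rate) ** t)

    npvs = []
    for _ in range(5000):
        idx = rng.integers(0, 1000, 1000)          # bootstrap resample
        th, cv = thickness[idx].mean(), calorific[idx].mean()
        yearly = 10.0 * th * cv - 350.0            # toy yearly margin, 10-year mine life
        npvs.append(npv(np.full(10, yearly)))
    print("mean NPV:", round(np.mean(npvs), 1),
          "5-95% range:", np.percentile(npvs, [5, 95]).round(1))
    ```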

  19. Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology

    PubMed Central

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework known as approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that population annealing can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of representative parameter values, we proposed to run the simulations with the parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm to generate the posterior parameter ensemble. We also showed that the simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and to conduct model selection based on the Bayes factor. PMID:25089832
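
    A stripped-down Python sketch of the posterior-parameter-ensemble idea is given below, using plain ABC rejection rather than population annealing; the model, prior and tolerance are illustrative assumptions.

    ```python
    # Build a posterior parameter ensemble by ABC rejection and rerun the model
    # with the whole ensemble instead of a single representative value.
    import numpy as np

    rng = np.random.default_rng(3)
    def model(k, t):                  # toy exponential-decay "pathway"
        return np.exp(-k * t)

    t = np.linspace(0.0, 5.0, 20)
    data = model(0.7, t) + rng.normal(0, 0.02, t.size)

    prior = rng.uniform(0.1, 2.0, 20000)                       # prior samples of k
    dist = np.array([np.linalg.norm(model(k, t) - data) for k in prior])
    ensemble = prior[dist < np.quantile(dist, 0.01)]           # accepted ensemble

    pred = np.array([model(k, t) for k in ensemble])           # ensemble predictions
    print("ensemble size:", ensemble.size,
          "k mean +/- sd:", ensemble.mean().round(3), ensemble.std().round(3))
    ```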

  20. Bearing damage assessment using Jensen-Rényi Divergence based on EEMD

    NASA Astrophysics Data System (ADS)

    Singh, Jaskaran; Darpe, A. K.; Singh, S. P.

    2017-03-01

    An Ensemble Empirical Mode Decomposition (EEMD) and Jensen Rényi divergence (JRD) based methodology is proposed for the degradation assessment of rolling element bearings using vibration data. The EEMD decomposes vibration signals into a set of intrinsic mode functions (IMFs). A systematic methodology to select IMFs that are sensitive and closely related to the fault is proposed in the paper. The change in probability distribution of the energies of the sensitive IMFs is measured through JRD which acts as a damage identification parameter. Evaluation of JRD with sensitive IMFs makes it largely unaffected by change/fluctuations in operating conditions. Further, an algorithm based on Chebyshev's inequality is applied to JRD to identify exact points of change in bearing health and remove outliers. The identified change points are investigated for fault classification as possible locations where specific defect initiation could have taken place. For fault classification, two new parameters are proposed: 'α value' and Probable Fault Index, which together classify the fault. To standardize the degradation process, a Confidence Value parameter is proposed to quantify the bearing degradation value in a range of zero to unity. A simulation study is first carried out to demonstrate the robustness of the proposed JRD parameter under variable operating conditions of load and speed. The proposed methodology is then validated on experimental data (seeded defect data and accelerated bearing life test data). The first validation on two different vibration datasets (inner/outer) obtained from seeded defect experiments demonstrate the effectiveness of JRD parameter in detecting a change in health state as the severity of fault changes. The second validation is on two accelerated life tests. The results demonstrate the proposed approach as a potential tool for bearing performance degradation assessment.
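
    The divergence itself is easy to illustrate; the sketch below computes a Jensen-Rényi divergence between two normalized IMF-energy distributions, with the EEMD step and the energy values assumed rather than taken from the paper.

    ```python
    # Jensen-Renyi divergence between two IMF-energy distributions (alpha in (0, 1)).
    import numpy as np

    def renyi_entropy(p, alpha=0.5):
        return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

    def jrd(p, q, alpha=0.5, w=(0.5, 0.5)):
        m = w[0] * p + w[1] * q
        return renyi_entropy(m, alpha) - (w[0] * renyi_entropy(p, alpha)
                                          + w[1] * renyi_entropy(q, alpha))

    healthy = np.array([0.50, 0.30, 0.15, 0.05])   # assumed sensitive-IMF energy shares
    faulty = np.array([0.20, 0.20, 0.25, 0.35])
    print("JRD healthy vs faulty:", round(jrd(healthy, faulty), 4))
    ```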

  1. Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.

    PubMed

    Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash

    2014-03-01

    One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of its many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. Copyright © 2013 Elsevier Ltd. All rights reserved.
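
    For orientation only, a single-parameter Metropolis sampler is sketched below on a toy first-order process; it is not the paper's hierarchical framework or ASM1, and the prior, noise and "data" are assumptions.

    ```python
    # Metropolis sampling of one kinetic parameter with a literature-style prior.
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.linspace(0.0, 10.0, 30)
    def model(mu_max):                 # toy substrate decay
        return 100.0 * np.exp(-mu_max * t)
    data = model(0.35) + rng.normal(0, 2.0, t.size)

    def log_post(mu_max, sigma=2.0):
        if mu_max <= 0:
            return -np.inf
        log_lik = -0.5 * np.sum((data - model(mu_max)) ** 2) / sigma**2
        log_prior = -0.5 * (mu_max - 0.4) ** 2 / 0.1**2     # prior ~ N(0.4, 0.1)
        return log_lik + log_prior

    chain, cur = [], 0.4
    for _ in range(5000):
        prop = cur + rng.normal(0, 0.02)
        if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
            cur = prop
        chain.append(cur)
    print("posterior mean and 95% interval:", round(np.mean(chain[1000:]), 3),
          np.percentile(chain[1000:], [2.5, 97.5]).round(3))
    ```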

  2. Neuromorphic learning of continuous-valued mappings in the presence of noise: Application to real-time adaptive control

    NASA Technical Reports Server (NTRS)

    Troudet, Terry; Merrill, Walter C.

    1989-01-01

    The ability of feed-forward neural net architectures to learn continuous-valued mappings in the presence of noise is demonstrated in relation to parameter identification and real-time adaptive control applications. Factors and parameters influencing the learning performance of such nets in the presence of noise are identified. Their effects are discussed through a computer simulation of the Back-Error-Propagation algorithm by taking the example of the cart-pole system controlled by a nonlinear control law. Adequate sampling of the state space is found to be essential for canceling the effect of the statistical fluctuations and allowing learning to take place.

  3. Utility usage forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hosking, Jonathan R. M.; Natarajan, Ramesh

    The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value. Determining that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values. Determining one or more utility parameter values that corresponds to the range of weather parameter values. Creating a model which correlates the received and the determined utility parameter values with the corresponding weather parameters values.

  4. Effect of cinnamon on glucose control and lipid parameters.

    PubMed

    Baker, William L; Gutierrez-Williams, Gabriela; White, C Michael; Kluger, Jeffrey; Coleman, Craig I

    2008-01-01

    To perform a meta-analysis of randomized controlled trials of cinnamon to better characterize its impact on glucose and plasma lipids. A systematic literature search through July 2007 was conducted to identify randomized placebo-controlled trials of cinnamon that reported data on A1C, fasting blood glucose (FBG), or lipid parameters. The mean change in each study end point from baseline was treated as a continuous variable, and the weighted mean difference was calculated as the difference between the mean value in the treatment and control groups. A random-effects model was used. Five prospective randomized controlled trials (n = 282) were identified. Upon meta-analysis, the use of cinnamon did not significantly alter A1C, FBG, or lipid parameters. Subgroup and sensitivity analyses did not significantly change the results. Cinnamon does not appear to improve A1C, FBG, or lipid parameters in patients with type 1 or type 2 diabetes.

  5. Whole-Lesion Histogram Analysis of Apparent Diffusion Coefficient for the Assessment of Cervical Cancer.

    PubMed

    Guan, Yue; Shi, Hua; Chen, Ying; Liu, Song; Li, Weifeng; Jiang, Zhuoran; Wang, Huanhuan; He, Jian; Zhou, Zhengyang; Ge, Yun

    2016-01-01

    The aim of this study was to explore the application of whole-lesion histogram analysis of apparent diffusion coefficient (ADC) values of cervical cancer. A total of 54 women (mean age, 53 years) with cervical cancers underwent 3-T diffusion-weighted imaging with b values of 0 and 800 s/mm² prospectively. Whole-lesion histogram analysis of ADC values was performed. Paired sample t test was used to compare differences in ADC histogram parameters between cervical cancers and normal cervical tissues. Receiver operating characteristic curves were constructed to identify the optimal threshold of each parameter. All histogram parameters in this study including ADCmean, ADCmin, ADC10%-ADC90%, mode, skewness, and kurtosis of cervical cancers were significantly lower than those of normal cervical tissues (all P < 0.0001). ADC90% had the largest area under receiver operating characteristic curve of 0.996. Whole-lesion histogram analysis of ADC maps is useful in the assessment of cervical cancer.

  6. WE-FG-202-09: Voxel-Level Analysis of Adverse Treatment Response in Pediatric Patients Treated for Ependymoma with Passive Scattering Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, C; The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, TX; Mirkovic, D

    2016-06-15

    Purpose: We identified patients treated for ependymoma with passive scattering proton therapy who subsequently developed treatment-related imaging changes on MRI. We sought to determine if there is any spatial correlation between imaged response, dose, and LET. Methods: A group of 14 patients treated for ependymoma were identified as having post-treatment MR imaging changes observable as T2-FLAIR hyperintensity with or without enhancement on T1 post-contrast sequences. MR images were registered with treatment planning CT images and regions of treatment-related change contoured by a practicing radiation oncologist. The contoured regions were identified as response with voxels represented as 1 while voxels within the brain outside of the response region were represented as 0. An in-house Monte Carlo system was used to recalculate treatment plans to obtain dose and LET information. Voxels were binned according to LET values in 0.3 keV/µm bins. Dose and corresponding response value (0 or 1) for each voxel for a given LET bin were then plotted and fit with the Lyman-Kutcher-Burman dose response model to determine TD50 and m parameters for each LET value. Response parameters from all patients were then collated, and linear fits of the data were performed. Results: The response parameters TD50 and m both show trends with LET. Outliers were observed due to low numbers of response voxels in some cases. TD50 values decreased with LET while m increased with LET. The former result would indicate that for higher LET values, the dose is more effective, which is consistent with relative biological effectiveness (RBE) models for proton therapy. Conclusion: A novel method of voxel-level analysis of image biomarker-based adverse patient treatment response in proton therapy according to dose and LET has been presented. Fitted TD50 values show a decreasing trend with LET supporting the typical models of proton RBE. Funding provided by NIH Program Project Grant 2U19CA021239-35.
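
    The per-bin fitting step can be approximated with a short maximum-likelihood sketch; the voxel doses, responses and starting values below are synthetic assumptions, not the study's data.

    ```python
    # Fit TD50 and m of a Lyman-Kutcher-Burman style probit response,
    # NTCP = Phi((D - TD50) / (m * TD50)), to binary voxel responses.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    dose = rng.uniform(20.0, 70.0, 500)                      # Gy, toy voxels
    p_true = norm.cdf((dose - 50.0) / (0.2 * 50.0))
    resp = rng.uniform(size=dose.size) < p_true              # 1 = imaging change

    def neg_log_lik(params):
        td50, m = params
        if td50 <= 0 or m <= 0:
            return np.inf
        p = np.clip(norm.cdf((dose - td50) / (m * td50)), 1e-9, 1 - 1e-9)
        return -np.sum(np.where(resp, np.log(p), np.log(1 - p)))

    fit = minimize(neg_log_lik, x0=[45.0, 0.3], method="Nelder-Mead")
    print("fitted TD50, m:", fit.x.round(3))
    ```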

  7. Rate-equation modelling and ensemble approach to extraction of parameters for viral infection-induced cell apoptosis and necrosis

    NASA Astrophysics Data System (ADS)

    Domanskyi, Sergii; Schilling, Joshua E.; Gorshkov, Vyacheslav; Libert, Sergiy; Privman, Vladimir

    2016-09-01

    We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of "stiff" equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.
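
    The stiffness issue mentioned above can be seen with a toy rate-equation system; the rates and states below are invented, and an off-the-shelf implicit solver is used instead of the authors' stochastic-ensemble variant.

    ```python
    # A toy healthy/apoptotic/necrotic rate-equation system with widely separated
    # rates, integrated with an implicit (BDF) solver suited to stiff problems.
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, k_fast=500.0, k_slow=0.5):
        healthy, apoptotic, necrotic = y
        return [-(k_fast + k_slow) * healthy,
                k_fast * healthy - k_slow * apoptotic,
                k_slow * (healthy + apoptotic)]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], method="BDF",
                    t_eval=np.linspace(0.0, 10.0, 11))
    print(sol.y[:, -1])   # final fractions in each state (total is conserved)
    ```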

  8. Rate-equation modelling and ensemble approach to extraction of parameters for viral infection-induced cell apoptosis and necrosis

    NASA Astrophysics Data System (ADS)

    Domanskyi, Sergii; Schilling, Joshua; Gorshkov, Vyacheslav; Libert, Sergiy; Privman, Vladimir

    We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of "stiff" equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.

  9. Feasibility of Intravoxel Incoherent Motion for Differentiating Benign and Malignant Thyroid Nodules.

    PubMed

    Tan, Hui; Chen, Jun; Zhao, Yi Ling; Liu, Jin Huan; Zhang, Liang; Liu, Chang Sheng; Huang, Dongjie

    2018-06-13

    This study aimed to preliminarily investigate the feasibility of intravoxel incoherent motion (IVIM) theory in the differential diagnosis of benign and malignant thyroid nodules. Forty-five patients with 56 confirmed thyroid nodules underwent preoperative routine magnetic resonance imaging and IVIM diffusion-weighted imaging. The histopathologic diagnosis was confirmed by surgery. Apparent diffusion coefficient (ADC), perfusion fraction f, diffusivity D, and pseudo-diffusivity D* were quantified. Independent-samples t tests of IVIM-derived metrics were conducted between benign and malignant nodules. Receiver-operating characteristic analyses were performed to determine the optimal thresholds as well as the sensitivity and specificity for differentiation. Significant intergroup differences were observed in ADC, D, D*, and f (p < 0.001). Malignant tumors featured significantly lower ADC, D and D* values and a higher f value than benign nodules. ADC, D, and D* could distinguish benign from malignant thyroid nodules, and parameter f differentiated malignant tumors from benign nodules. The areas under the curve for ADC, D, and D* were 0.784 (p = 0.001), 0.795 (p = 0.001), and 0.850 (p < 0.001), respectively; the area under the curve for the f value, reported as the maximum for identifying malignant from benign nodules, was 0.841 (p < 0.001). This study suggested that ADC and IVIM-derived metrics, including D, D*, and f, could potentially serve as noninvasive predictors for the preoperative differentiation of thyroid nodules, and the f value performed best in identifying malignant from benign nodules among these parameters. Copyright © 2018 Academic Radiology. Published by Elsevier Inc. All rights reserved.

  10. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites: SURROGATE-BASED MCMC FOR CLM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    2016-07-04

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically-average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  11. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE PAGES

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; ...

    2016-06-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  12. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  13. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites

    NASA Astrophysics Data System (ADS)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura

    2016-07-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  14. Measurement and modelling of the y-direction apparent mass of sitting human body-cushioned seat system

    NASA Astrophysics Data System (ADS)

    Stein, George Juraj; Múčka, Peter; Hinz, Barbara; Blüthner, Ralph

    2009-04-01

    Laboratory tests were conducted using 13 male subjects seated on a cushioned commercial vehicle driver's seat. The hands gripped a mock-up steering wheel and the subjects were in contact with the lumbar region of the backrest. The accelerations and forces in the y-direction were measured during random lateral whole-body vibration with a frequency range between 0.25 and 30 Hz, vibration magnitudes 0.30, 0.98, and 1.92 m s -2 (unweighted root mean square (rms)). Based on these laboratory measurements, a linear multi-degree-of-freedom (mdof) model of the seated human body and cushioned seat in the lateral direction ( y-axis) was developed. Model parameters were identified from averaged measured apparent mass values (modulus and phase) for the three excitation magnitudes mentioned. A preferred model structure was selected from four 3-dof models analysed. The mean subject parameters were identified. In addition, identification of each subject's apparent mass model parameters was performed. The results are compared with previous studies. The developed model structure and the identified parameters can be used for further biodynamical research in seating dynamics.

  15. Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback

    NASA Astrophysics Data System (ADS)

    Bruni, Renato; Celani, Fabio

    2016-10-01

    The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law which has four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose here an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem, for several reasons: 1) such time cannot be expressed in analytical form as a function of parameters and initial conditions; 2) design parameters may range over very wide intervals; 3) convergence time depends also on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. These algorithms do not require an analytical expression for the objective function: they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize convergence time under the worst initial conditions. Results are very promising.
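
    The min-max idea can be prototyped with an off-the-shelf derivative-free optimizer; the double-integrator dynamics, gains and settling criterion below are stand-ins, not the spacecraft model or the paper's algorithm.

    ```python
    # Choose gains minimizing the worst-case settling time over several
    # initial conditions, using differential evolution (derivative-free).
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import differential_evolution

    def settle_time(gains, x0, tol=0.05, t_end=60.0):
        kp, kd = gains
        sol = solve_ivp(lambda t, x: [x[1], -kp * x[0] - kd * x[1]],
                        (0.0, t_end), x0, t_eval=np.linspace(0.0, t_end, 600))
        above = np.where(np.abs(sol.y[0]) >= tol)[0]
        return sol.t[above[-1]] if above.size else 0.0   # last time outside the band

    initial_conditions = [[1.0, 0.0], [-0.5, 0.3], [0.8, -0.4]]
    worst_case = lambda g: max(settle_time(g, x0) for x0 in initial_conditions)
    res = differential_evolution(worst_case, bounds=[(0.1, 5.0), (0.1, 5.0)],
                                 maxiter=20, seed=0)
    print("robust gains:", res.x.round(2), "worst settling time:", round(res.fun, 1))
    ```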

  16. [Predictive value of postural and dynamic walking parameters after high-volume lumbar puncture in normal pressure hydrocephalus].

    PubMed

    Mary, P; Gallisa, J-M; Laroque, S; Bedou, G; Maillard, A; Bousquet, C; Negre, C; Gaillard, N; Dutray, A; Fadat, B; Jurici, S; Olivier, N; Cisse, B; Sablot, D

    2013-04-01

    Normal pressure hydrocephalus (NPH) was described by Adams et al. (1965). The common clinical presentation is the triad: gait disturbance, cognitive decline and urinary incontinence. Although these symptoms are suggestive, they are not specific to diagnosis. The improvement of symptoms after high-volume lumbar puncture (hVLP) could be a strong criterion for diagnosis. We tried to determine a specific pattern of dynamic walking and posture parameters in NPH. Additionally, we tried to specify the evolution of these criteria after hVLP and to determine predictive values of ventriculoperitoneal shunting (VPS) efficiency. Sixty-four patients were followed over seven years from January 2002 to June 2009. We identified three periods: before (S1), after hVLP (S2) and after VPS (S3). The following criteria concerned walking and posture parameters: walking parameters were speed, step length and step rhythm; posture parameters were statokinesigram total length and surface, length according to the surface (LFS), average value of equilibration for lateral movements (Xmoyen), anteroposterior movements (Ymoyen), total movement length in lateral axis (longX) and anteroposterior axis (longY). Among the 64 patients included, 22 had VPS and 16 were investigated in S3. All kinematic criteria are decreased in S1 compared with normal values. hVLP improved these criteria significantly (S2). Among posture parameters, only total length and surface of statokinesigram showed improvement in S1, but no improvement in S2. A gain in speed greater than or equal to 0.15 m/s between S1 and S2 predicted the efficacy of VPS with a positive predictive value (PPV) of 87.1% and a negative predictive value (NPV) of 69.7% (area under the ROC curve [AUC]: 0.86). Kinematic walking parameters are the most disrupted and are partially improved after hVLP. These parameters could be an interesting test for selecting candidates for VPS. These data have to be confirmed in a larger cohort. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  17. T2 values of articular cartilage in clinically relevant subregions of the asymptomatic knee.

    PubMed

    Surowiec, Rachel K; Lucas, Erin P; Fitzcharles, Eric K; Petre, Benjamin M; Dornan, Grant J; Giphart, J Erik; LaPrade, Robert F; Ho, Charles P

    2014-06-01

    In order for T2 mapping to become more clinically applicable, reproducible subregions and standardized T2 parameters must be defined. This study sought to: (1) define clinically relevant subregions of knee cartilage using bone landmarks identifiable on both MR images and during arthroscopy and (2) determine healthy T2 values and T2 texture parameters within these subregions. Twenty-five asymptomatic volunteers (age 18-35) were evaluated with a sagittal T2 mapping sequence. Manual segmentation was performed by three raters, and cartilage was divided into twenty-one subregions modified from the International Cartilage Repair Society Articular Cartilage Mapping System. Mean T2 values and texture parameters (entropy, variance, contrast, homogeneity) were recorded for each subregion, and inter-rater and intra-rater reliability was assessed. The central regions of the condyles had significantly higher T2 values than the posterior regions (P < 0.05) and higher variance than the posterior region on the medial side (P < 0.001). The central trochlea had significantly greater T2 values than the anterior and posterior condyles. The central lateral plateau had lower T2 values, lower variance, higher homogeneity, and lower contrast than nearly all subregions in the tibia. The central patellar regions had higher entropy than the superior and inferior regions (each P ≤ 0.001). Repeatability was good to excellent for all subregions. Significant differences in mean T2 values and texture parameters were found between subregions in this carefully selected asymptomatic population, which suggest that there is normal variation of T2 values within the knee joint. The clinically relevant subregions were found to be robust as demonstrated by the overall high repeatability.

  18. Manufacturing Enhancement through Reduction of Cycle Time using Different Lean Techniques

    NASA Astrophysics Data System (ADS)

    Suganthini Rekha, R.; Periyasamy, P.; Nallusamy, S.

    2017-08-01

    In recent manufacturing systems the most important parameters in a production line are work in process, takt time and line balancing. In this article lean tools and techniques were implemented to reduce the cycle time. The aim is to enhance the productivity of the water pump pipe line by identifying the bottleneck stations and non-value-added activities. From the initial time study the bottleneck processes were identified, and the necessary capacity-expanding measures for the bottleneck process were also identified. Subsequently the improvement actions were established and implemented using different lean tools such as value stream mapping, 5S and line balancing. The current-state value stream mapping was developed to describe the existing status and to identify various problem areas. 5S was used to implement the steps that reduce the process cycle time and unnecessary movements of man and material. The improvement activities were implemented as suggested, and the future-state value stream mapping was developed. From the results it was concluded that the total cycle time was reduced by about 290.41 seconds and that the fulfilled customer demand increased by about 760 units.

  19. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.

  20. Simultaneous Estimation of Microphysical Parameters and Atmospheric State Variables With Radar Data and Ensemble Square-root Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tong, M.; Xue, M.

    2006-12-01

    An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of the drop size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation (OSS) experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
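
    A stripped-down ensemble update illustrates the augmented state-parameter idea; the sketch uses a perturbed-observation EnKF rather than the square-root filter of the study, and the observation operator and values are assumptions.

    ```python
    # One ensemble Kalman analysis step updating a state variable and one
    # microphysical parameter from a single synthetic observation.
    import numpy as np

    rng = np.random.default_rng(4)
    N = 50
    state = rng.normal(5.0, 1.0, N)        # e.g. a rainwater-related state proxy
    param = rng.normal(8.0, 1.5, N)        # e.g. log10 of the rain intercept parameter
    ens = np.vstack([state, param])        # augmented ensemble, shape (2, N)

    H = lambda e: 0.8 * e[0] + 0.3 * e[1]  # toy observation operator
    y_obs, r = 9.5, 0.5 ** 2               # observation and its error variance

    hx = H(ens)
    A = ens - ens.mean(axis=1, keepdims=True)
    Hx = hx - hx.mean()
    gain = (A @ Hx / (N - 1)) / (Hx @ Hx / (N - 1) + r)     # Kalman gain, shape (2,)
    ens_a = ens + np.outer(gain, y_obs + rng.normal(0, np.sqrt(r), N) - hx)
    print("prior parameter mean:", round(param.mean(), 2),
          "posterior:", round(ens_a[1].mean(), 2))
    ```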

  1. Laser confocal microscope for analysis of 3013 inner container closure weld region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Rodriguez, M. J.

    As part of the protocol to investigate the corrosion in the inner container closure weld region (ICCWR) a laser confocal microscope (LCM) was used to perform close visual examination of the surface and measurements of corrosion features on the surface. However, initial analysis of selected destructively evaluated (DE) containers using the LCM revealed several challenges for acquiring, processing and interpreting the data. These challenges include topography of the ICCWR sample, surface features, and the amount of surface area for collecting data at high magnification conditions. In FY17, the LCM parameters were investigated to identify the appropriate parameter values for data acquisition and identification of regions of interest. Using these parameter values, selected DE containers were analyzed to determine the extent of the ICCWR to be examined.

  2. Estimating procedure times for surgeries by determining location parameters for the lognormal model.

    PubMed

    Spangler, William E; Strum, David P; Vargas, Luis G; May, Jerrold H

    2004-05-01

    We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
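
    The mechanism can be sketched in a few lines; the sketch uses the sample minimum as the order statistic and synthetic durations, whereas identifying the best order statistic is precisely what the paper's models do and is not reproduced here.

    ```python
    # Estimate the shift (location) of a three-parameter lognormal from a low
    # order statistic, then fit shape and scale to the shifted data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    durations = 15.0 + rng.lognormal(mean=4.0, sigma=0.5, size=200)   # minutes

    loc_hat = durations.min() - 0.01 * (durations.max() - durations.min())
    shape, _, scale = stats.lognorm.fit(durations - loc_hat, floc=0)
    print("location estimate:", round(loc_hat, 1),
          "implied median duration:", round(loc_hat + scale, 1))
    ```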

  3. Development of a predictive model for lead, cadmium and fluorine soil-water partition coefficients using sparse multiple linear regression analysis.

    PubMed

    Nakamura, Kengo; Yasutaka, Tetsuo; Kuwatani, Tatsu; Komai, Takeshi

    2017-11-01

    In this study, we applied sparse multiple linear regression (SMLR) analysis to clarify the relationships between soil properties and adsorption characteristics for a range of soils across Japan and to identify easily obtained physical and chemical soil properties that could be used to predict the K and n values of cadmium, lead and fluorine. A model was first constructed that can easily predict the K and n values from nine soil parameters (pH, cation exchange capacity, specific surface area, total carbon, soil organic matter from loss on ignition, water holding capacity, and the ratios of sand, silt and clay). The K and n values of cadmium, lead and fluorine of 17 soil samples were used to verify the SMLR models by the root mean square error values obtained from 512 combinations of soil parameters. The SMLR analysis indicated that fluorine adsorption to soil may be associated with organic matter, whereas cadmium or lead adsorption to soil is more likely to be influenced by soil pH and loss on ignition (IL). We found that an accurate K value can be predicted from more than three soil parameters for most soils. Approximately 65% of the predicted values were between 33 and 300% of their measured values for the K value; 76% of the predicted values were within ±30% of their measured values for the n value. Our findings suggest that the adsorption properties of lead, cadmium and fluorine to soil can be predicted from soil physical and chemical properties using the presented models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A Simulation Model for Studying Effects of Pollution and Freshwater Inflow on Secondary Productivity in an Ecosystem. Ph.D. Thesis - North Carolina State Univ.

    NASA Technical Reports Server (NTRS)

    Johnson, R. W.

    1974-01-01

    A mathematical model of an ecosystem is developed. Secondary productivity is evaluated in terms of man related and controllable factors. Information from an existing physical parameters model is used as well as pertinent biological measurements. Predictive information of value to estuarine management is presented. Biological, chemical, and physical parameters measured in order to develop models of ecosystems are identified.

  5. Modeling the bidirectional reflectance distribution function of mixed finite plant canopies and soil

    NASA Technical Reports Server (NTRS)

    Schluessel, G.; Dickinson, R. E.; Privette, J. L.; Emery, W. J.; Kokaly, R.

    1994-01-01

    An analytical model of the bidirectional reflectance for optically semi-infinite plant canopies has been extended to describe the reflectance of finite-depth canopies with contributions from the underlying soil. The model depends on 10 independent parameters describing vegetation and soil optical and structural properties. The model is inverted with a nonlinear minimization routine using directional reflectance data for lawn (leaf area index (LAI) is equal to 9.9), soybeans (LAI, 2.9) and simulated reflectance data (LAI, 1.0) from a numerical bidirectional reflectance distribution function (BRDF) model (Myneni et al., 1988). While the ten-parameter model results in relatively low rms differences for the BRDF, most of the retrieved parameters exhibit poor stability. The most stable parameter was the single-scattering albedo of the vegetation. Canopy albedo could be derived with an accuracy of less than 5% relative error in the visible and less than 1% in the near-infrared. Sensitivity analyses were performed to determine which of the 10 parameters were most important and to assess the effects of Gaussian noise on the parameter retrievals. Out of the 10 parameters, three were identified which described most of the BRDF variability. At low LAI values the most influential parameters were the single-scattering albedos (both soil and vegetation) and LAI, while at higher LAI values (greater than 2.5) these shifted to the two scattering phase function parameters for vegetation and the single-scattering albedo of the vegetation. The three-parameter model, formed by fixing the seven least significant parameters, gave higher rms values but was less sensitive to noise in the BRDF than the full ten-parameter model. A full hemispherical reflectance data set for lawn was then interpolated to yield BRDF values corresponding to advanced very high resolution radiometer (AVHRR) scan geometries collected over a period of nine days. The resulting parameters and BRDFs are similar to those for the full sampling geometry, suggesting that the limited geometry of AVHRR measurements might be used to reliably retrieve BRDF and canopy albedo with this model.

  6. Advanced Electrocardiography Can Identify Occult Cardiomyopathy in Doberman Pinschers

    NASA Technical Reports Server (NTRS)

    Spiljak, M.; Petric, A. Domanjko; Wilberg, M.; Olsen, L. H.; Stepancic, A.; Schlegel, T. T.; Starc, V.

    2011-01-01

    Recently, multiple advanced resting electrocardiographic (A-ECG) techniques have improved the diagnostic value of short-duration ECG in the detection of dilated cardiomyopathy (DCM) in humans. This study investigated whether 12-lead A-ECG recordings could accurately identify the occult phase of DCM in dogs. Short-duration (3-5 min) high-fidelity 12-lead ECG recordings were obtained from 31 privately-owned, clinically healthy Doberman Pinschers (5.4 +/- 1.7 years, 11/20 males/females). Dogs were divided into 2 groups: 1) 19 healthy dogs with normal echocardiographic M-mode measurements: left ventricular internal diameter in diastole (LVIDd ≤ 47 mm) and in systole (LVIDs ≤ 38 mm) and normal 24-hour ECG recordings (<50 ventricular premature complexes, VPCs); and 2) 12 dogs with occult DCM: 11/12 dogs had increased M-mode measurements (LVIDd ≥ 49 mm and/or LVIDs ≥ 40 mm) and 5/11 dogs also had >100 VPCs/24h; 1/12 dogs had only abnormal 24-hour ECG recordings (>100 VPCs/24h). ECG recordings were evaluated via custom software programs to calculate multiple parameters of high-frequency (HF) QRS ECG, heart rate variability, QT variability, waveform complexity and 3-D ECG. Student's t-tests determined 19 ECG parameters that were significantly different (P < 0.05) between groups. Principal component factor analysis identified a 5-factor model with 81.4% explained variance. QRS dipolar and non-dipolar voltages, Cornell voltage criteria and QRS waveform residuum were increased significantly (P < 0.05), whereas mean HF QRS amplitude was decreased significantly (P < 0.05) in dogs with occult DCM. For the 5 selected parameters the prediction of occult DCM was performed using a binary logistic regression model with Chi-square tested significance (P < 0.01). ROC analyses showed that the five selected ECG parameters could identify occult DCM with a sensitivity of 89% and a specificity of 83%. Results suggest that 12-lead A-ECG might improve the diagnostic value of short-duration ECG in earlier detection of canine DCM, as the five selected ECG parameters can identify occult DCM in Doberman Pinschers with reasonable accuracy. Future extensive clinical studies are needed to clarify whether 12-lead A-ECG could be useful as an additional screening test for canine DCM.

  7. UCODE, a computer code for universal inverse modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1999-01-01

    This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text-only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated from values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters can also be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model-simulated values. UCODE is intended for use on any computer operating system: it consists of algorithms programmed in Perl, a freeware language designed for text manipulation, and Fortran90, which efficiently performs numerical calculations.
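
    The core of the estimation loop described above (a weighted least-squares objective, a modified Gauss-Newton step, and finite-difference sensitivities) can be sketched compactly. The toy "application model", data and weights below are illustrative assumptions; UCODE itself drives arbitrary external models through their text input and output files.

      import numpy as np

      def gauss_newton(model, p0, obs, weights, tol=1e-8, max_iter=50, fd_step=1e-6):
          """Minimise sum(w * (obs - model(p))**2) using forward-difference sensitivities."""
          p = np.asarray(p0, dtype=float)
          W = np.diag(weights)
          for _ in range(max_iter):
              f0 = model(p)
              r = obs - f0                                # residual vector
              # Forward-difference sensitivity (Jacobian) matrix, one column per parameter
              J = np.empty((r.size, p.size))
              for j in range(p.size):
                  dp = np.zeros_like(p)
                  dp[j] = fd_step * max(1.0, abs(p[j]))
                  J[:, j] = (model(p + dp) - f0) / dp[j]
              # Normal equations of the weighted problem: (J^T W J) step = J^T W r
              step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
              # Simple damping ("modified" Gauss-Newton): halve the step until the fit improves
              sse, lam = r @ W @ r, 1.0
              while lam > 1e-6:
                  p_new = p + lam * step
                  r_new = obs - model(p_new)
                  if r_new @ W @ r_new <= sse:
                      break
                  lam *= 0.5
              p = p_new
              if np.max(np.abs(lam * step)) < tol:
                  break
          return p

      # Illustrative 'application model': y = a * (1 - exp(-b * t)), observed with noise
      t = np.linspace(0.5, 10.0, 12)
      true_p = np.array([3.0, 0.4])
      rng = np.random.default_rng(2)
      obs = true_p[0] * (1.0 - np.exp(-true_p[1] * t)) + rng.normal(0.0, 0.02, t.size)

      fitted = gauss_newton(lambda p: p[0] * (1.0 - np.exp(-p[1] * t)),
                            p0=[1.0, 1.0], obs=obs, weights=np.full(t.size, 1.0))
      print("estimated parameters:", fitted)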

  8. Imposing constraints on parameter values of a conceptual hydrological model using baseflow response

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.

    Calibration of conceptual hydrological models is frequently limited by a lack of data about the area being studied. The result is that a broad range of parameter values can be identified that will give an equally good calibration to the available observations, usually of stream flow. The use of total stream flow can bias analyses towards interpretation of rapid runoff, whereas water quality issues are more frequently associated with low flow conditions. This paper demonstrates how model distinctions between surface and sub-surface runoff can be used to define a likelihood measure based on the sub-surface (or baseflow) response. This helps to provide more information about the model behaviour, constrain the acceptable parameter sets and reduce uncertainty in streamflow prediction. A conceptual model, DIY, is applied to two contrasting catchments in Scotland, the Ythan and the Carron Valley. Parameter ranges and envelopes of prediction are identified using criteria based on total flow efficiency, baseflow efficiency and combined efficiencies. The individual parameter ranges derived using the combined efficiency measures still cover relatively wide bands, but are better constrained for the Carron than the Ythan. This reflects the fact that hydrological behaviour in the Carron is dominated by a much flashier surface response than in the Ythan. Hence, the total flow efficiency is more strongly controlled by surface runoff in the Carron and there is a greater contrast with the baseflow efficiency. Comparisons of the predictions using different efficiency measures for the Ythan also suggest that there is a danger of confusing parameter uncertainties with data and model error if inadequate likelihood measures are defined.

  9. Parameter screening: the use of a dummy parameter to identify non-influential parameters in a global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2017-04-01

    Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practice. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero indices may be obtained for non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. It can therefore be concluded that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users to define a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
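
    A minimal sketch of the dummy-parameter idea, assuming the Ishigami test function in place of the SWAT model and the Jansen estimator for total-order Sobol' indices: an extra input column is sampled like a real parameter but never used by the model, so its estimated index measures the numerical noise floor that serves as the screening threshold.

      import numpy as np

      def ishigami(x):
          # Test function; depends only on the first three inputs
          return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
                  + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

      rng = np.random.default_rng(3)
      n_base, n_dims = 4096, 4                    # three real parameters plus one dummy
      lo, hi = -np.pi, np.pi

      # Two independent base samples over all inputs, dummy column included
      A = rng.uniform(lo, hi, (n_base, n_dims))
      B = rng.uniform(lo, hi, (n_base, n_dims))
      fA = ishigami(A)
      var = np.var(np.concatenate([fA, ishigami(B)]), ddof=1)

      total_order = []
      for i in range(n_dims):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                     # resample only input i
          fABi = ishigami(ABi)
          # Jansen estimator of the total-order sensitivity index
          total_order.append(np.mean((fA - fABi) ** 2) / (2.0 * var))

      threshold = total_order[-1]                 # dummy index = numerical noise floor
      for name, st in zip(["x1", "x2", "x3", "dummy"], total_order):
          flag = "influential" if st > threshold else "non-influential"
          print(f"{name}: ST = {st:.3f} ({flag})")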

  10. Application of Differential Evolutionary Optimization Methodology for Parameter Structure Identification in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Chiu, Y.; Nishikawa, T.

    2013-12-01

    With the increasing complexity of parameter-structure identification (PSI) in groundwater modeling, there is a need for robust, fast, and accurate optimizers in the groundwater-hydrology field. For this work, PSI is defined as identifying parameter dimension, structure, and value. In this study, Voronoi tessellation and differential evolution (DE) are used to solve the optimal PSI problem. Voronoi tessellation is used for automatic parameterization, whereby stepwise regression and the error covariance matrix are used to determine the optimal parameter dimension. DE is a novel global optimizer that can be used to solve nonlinear, nondifferentiable, and multimodal optimization problems. It can be viewed as an improved version of genetic algorithms and employs a simple cycle of mutation, crossover, and selection operations. DE is used to estimate the optimal parameter structure and its associated values. A synthetic numerical experiment of continuous hydraulic conductivity distribution was conducted to demonstrate the proposed methodology. The results indicate that DE can identify the global optimum effectively and efficiently. A sensitivity analysis of the control parameters (i.e., the population size, mutation scaling factor, crossover rate, and mutation schemes) was performed to examine their influence on the objective function. The proposed DE was then applied to solve a complex parameter-estimation problem for a small desert groundwater basin in Southern California. Hydraulic conductivity, specific yield, specific storage, fault conductance, and recharge components were estimated simultaneously. Comparison of DE and a traditional gradient-based approach (PEST) shows DE to be more robust and efficient. The results of this work not only provide an alternative for PSI in groundwater models, but also extend DE applications towards solving complex, regional-scale water management optimization problems.
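
    A minimal sketch of the DE cycle of mutation, crossover and selection described above (the classic DE/rand/1/bin scheme), applied to a made-up two-parameter misfit function; in the study the decision variables encoded the Voronoi-based parameter structure and the associated hydraulic parameter values.

      import numpy as np

      def differential_evolution(objective, bounds, pop_size=30, F=0.8, CR=0.9,
                                 generations=200, seed=0):
          """Classic DE/rand/1/bin minimiser over box bounds (list of (lo, hi) pairs)."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          dim = lo.size
          pop = rng.uniform(lo, hi, (pop_size, dim))
          cost = np.array([objective(p) for p in pop])
          for _ in range(generations):
              for i in range(pop_size):
                  # Mutation: combine three distinct vectors other than the target
                  r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                          size=3, replace=False)
                  mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
                  # Binomial crossover with at least one component taken from the mutant
                  cross = rng.random(dim) < CR
                  cross[rng.integers(dim)] = True
                  trial = np.where(cross, mutant, pop[i])
                  # Greedy selection
                  trial_cost = objective(trial)
                  if trial_cost <= cost[i]:
                      pop[i], cost[i] = trial, trial_cost
          best = np.argmin(cost)
          return pop[best], cost[best]

      # Illustrative misfit: recover two 'hydraulic conductivity' values
      true_k = np.array([2.0, 0.5])
      def misfit(k):
          return float(np.sum((np.log(k) - np.log(true_k)) ** 2))

      best, best_cost = differential_evolution(misfit, bounds=[(0.01, 10.0)] * 2)
      print("best parameters:", best, "misfit:", best_cost)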

  11. Complex mode indication function and its applications to spatial domain parameter estimation

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The concept of CMIF is developed by performing singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues, which are the squares of the singular values, solved from the normal matrix formed from the FRF matrix, [H(jω)]^H [H(jω)], at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of a complex system. The CMIF identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple-reference data are applied in CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since CMIF works in the spatial domain, unevenly spaced frequency data, such as data from spatial sine testing, can be used. A second-stage procedure for accurate damped natural frequency and damping estimation, as well as mode shape scaling, is also discussed in this paper.
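
    A short numerical sketch of forming the CMIF, assuming a synthetic FRF matrix built from a two-mode modal model: at each spectral line the singular values of the FRF matrix are computed (their squares are the eigenvalues of [H(jω)]^H [H(jω)]), and peaks of the leading CMIF curve indicate the damped natural frequencies. The modal parameters used to synthesise the FRFs are illustrative assumptions.

      import numpy as np

      # Synthetic 2-by-2 FRF matrix from two modes:
      # H(jw) = sum_r phi_r phi_r^T / (wr^2 - w^2 + 2j*zeta_r*wr*w)
      freqs = np.linspace(1.0, 40.0, 800)                    # Hz
      modes = [
          dict(wn=2.0 * np.pi * 10.0, zeta=0.02, phi=np.array([1.0, 0.8])),
          dict(wn=2.0 * np.pi * 25.0, zeta=0.03, phi=np.array([1.0, -0.6])),
      ]

      cmif = np.zeros((len(freqs), 2))
      for k, w in enumerate(2.0 * np.pi * freqs):
          H = np.zeros((2, 2), dtype=complex)
          for m in modes:
              H += np.outer(m["phi"], m["phi"]) / (m["wn"] ** 2 - w ** 2
                                                   + 2j * m["zeta"] * m["wn"] * w)
          # CMIF: squared singular values of the FRF matrix at this spectral line
          cmif[k] = np.linalg.svd(H, compute_uv=False) ** 2

      # Local maxima of the first CMIF curve approximate the damped natural frequencies
      peak_idx = [k for k in range(1, len(freqs) - 1)
                  if cmif[k, 0] > cmif[k - 1, 0] and cmif[k, 0] > cmif[k + 1, 0]]
      print("CMIF peaks near (Hz):", freqs[peak_idx])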

  12. Preliminary Investigation of Ice Shape Sensitivity to Parameter Variations

    NASA Technical Reports Server (NTRS)

    Miller, Dean R.; Potapczuk, Mark G.; Langhals, Tammy J.

    2005-01-01

    A parameter sensitivity study was conducted at the NASA Glenn Research Center's Icing Research Tunnel (IRT) using a 36 in. chord (0.91 m) NACA-0012 airfoil. The objective of this preliminary work was to investigate the feasibility of using ice shape feature changes to define requirements for the simulation and measurement of SLD icing conditions. It was desired to identify the minimum change (threshold) in a parameter value, which yielded an observable change in the ice shape. Liquid Water Content (LWC), drop size distribution (MVD), and tunnel static temperature were varied about a nominal value, and the effects of these parameter changes on the resulting ice shapes were documented. The resulting differences in ice shapes were compared on the basis of qualitative and quantitative criteria (e.g., mass, ice horn thickness, ice horn angle, icing limits, and iced area). This paper will provide a description of the experimental method, present selected experimental results, and conclude with an evaluation of these results, followed by a discussion of recommendations for future research.

  13. Histogram analysis parameters of apparent diffusion coefficient reflect tumor cellularity and proliferation activity in head and neck squamous cell carcinoma

    PubMed Central

    Winter, Karsten; Richter, Cindy; Hoehn, Anna-Kathrin

    2018-01-01

    Our purpose was to analyze associations between apparent diffusion coefficient (ADC) histogram analysis parameters and histopathological features in head and neck squamous cell carcinoma (HNSCC). The study involved 32 patients with primary HNSCC. For every tumor, the following histogram analysis parameters were calculated: ADCmean, ADCmax, ADCmin, ADCmedian, ADCmode, P10, P25, P75, P90, kurtosis, skewness, and entropy. Furthermore, the proliferation index Ki-67, cell count, and total and average nucleic areas were estimated. Spearman's correlation coefficient (ρ) was used to analyze associations between the investigated parameters. In the overall sample, all ADC values showed moderate inverse correlations with Ki-67. All ADC values except ADCmax correlated inversely with tumor cellularity. Slight correlations were identified between total/average nucleic area and ADCmean, ADCmin, ADCmedian, and P25. In G1/2 tumors, only ADCmode correlated well with Ki-67. No statistically significant correlations between ADC parameters and cellularity were found. In G3 tumors, Ki-67 correlated with all ADC parameters except ADCmode. Cell count correlated well with all ADC parameters except ADCmax. Total nucleic area correlated inversely with ADCmean, ADCmin, ADCmedian, P25, and P90. ADC histogram parameters reflect proliferation potential and cellularity in HNSCC. The associations between histopathology and imaging depend on tumor grading. PMID:29805759

  14. Early variations of laboratory parameters predicting shunt-dependent hydrocephalus after subarachnoid hemorrhage.

    PubMed

    Na, Min Kyun; Won, Yu Deok; Kim, Choong Hyun; Kim, Jae Min; Cheong, Jin Hwan; Ryu, Je Il; Han, Myung-Hoon

    2017-01-01

    Hydrocephalus is a frequent complication following subarachnoid hemorrhage. Few studies have investigated the association between laboratory parameters and shunt-dependent hydrocephalus. This study aimed to investigate the variations of laboratory parameters after subarachnoid hemorrhage. We also attempted to identify laboratory parameters predictive of shunt-dependent hydrocephalus. Multiple imputation was performed to fill the missing laboratory data using Bayesian methods in SPSS. We used univariate and multivariate Cox regression analyses to calculate hazard ratios for shunt-dependent hydrocephalus based on clinical and laboratory factors. The area under the receiver operating characteristic curve was used to determine the laboratory risk values predicting shunt-dependent hydrocephalus. We included 181 participants with a mean age of 54.4 years. Higher sodium (hazard ratio, 1.53; 95% confidence interval, 1.13-2.07; p = 0.005), lower potassium, and higher glucose levels were associated with a higher rate of shunt-dependent hydrocephalus. The receiver operating characteristic curve analysis showed that the areas under the curve for sodium, potassium, and glucose were 0.649 (cutoff value, 142.75 mEq/L), 0.609 (cutoff value, 3.04 mmol/L), and 0.664 (cutoff value, 140.51 mg/dL), respectively. Despite the exploratory nature of this study, we found that higher sodium, lower potassium, and higher glucose levels were predictive of shunt-dependent hydrocephalus from postoperative day (POD) 1 to POD 12-16 after subarachnoid hemorrhage. Strict correction of electrolyte imbalance seems necessary to reduce shunt-dependent hydrocephalus. Further large studies are warranted to confirm our findings.

  15. Validation of systems biology derived molecular markers of renal donor organ status associated with long term allograft function.

    PubMed

    Perco, Paul; Heinzel, Andreas; Leierer, Johannes; Schneeberger, Stefan; Bösmüller, Claudia; Oberhuber, Rupert; Wagner, Silvia; Engler, Franziska; Mayer, Gert

    2018-05-03

    Donor organ quality affects long term outcome after renal transplantation. A variety of prognostic molecular markers is available, yet their validity often remains undetermined. A network-based molecular model reflecting donor kidney status based on transcriptomics data and molecular features reported in scientific literature to be associated with chronic allograft nephropathy was created. Significantly enriched biological processes were identified and representative markers were selected. An independent kidney pre-implantation transcriptomics dataset of 76 organs was used to predict estimated glomerular filtration rate (eGFR) values twelve months after transplantation using available clinical data and marker expression values. The best-performing regression model solely based on the clinical parameters donor age, donor gender, and recipient gender explained 17% of variance in post-transplant eGFR values. The five molecular markers EGF, CD2BP2, RALBP1, SF3B1, and DDX19B representing key molecular processes of the constructed renal donor organ status molecular model in addition to the clinical parameters significantly improved model performance (p-value = 0.0007) explaining around 33% of the variability of eGFR values twelve months after transplantation. Collectively, molecular markers reflecting donor organ status significantly add to prediction of post-transplant renal function when added to the clinical parameters donor age and gender.

  16. Suspension parameter estimation in the frequency domain using a matrix inversion approach

    NASA Astrophysics Data System (ADS)

    Thite, A. N.; Banvidi, S.; Ibicek, T.; Bennett, L.

    2011-12-01

    The dynamic lumped parameter models used to optimise the ride and handling of a vehicle require base values of the suspension parameters. These parameters are generally identified experimentally. The accuracy of the identified parameters can depend on the measurement noise and the validity of the model used. The existing publications on suspension parameter identification are generally based on the time domain and use a limited number of degrees of freedom. Further, the data used are either from a simulated 'experiment' or from a laboratory test on an idealised quarter- or half-car model. In this paper, a method is developed in the frequency domain which effectively accounts for the measurement noise. Additional dynamic constraining equations are incorporated and the proposed formulation results in a matrix inversion approach. The nonlinearities in damping are, however, estimated using a time-domain approach. Full-scale 4-post rig test data of a vehicle are used. The variations in the results are discussed using the modal resonant behaviour. Further, a method is implemented to show how the results can be improved when the inverted matrix is ill-conditioned. The case study shows good agreement between the estimates based on the proposed frequency-domain approach and measurable physical parameters.
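
    The matrix inversion idea can be sketched for the simplest possible case, a single-degree-of-freedom suspension corner: the frequency-domain equation of motion (-ω²m + jωc + k)X(ω) = F(ω) is linear in m, c and k once X(ω) and F(ω) are measured, so stacking many spectral lines gives an overdetermined system solved by (pseudo)inversion. The parameter values, noise level and unit-force excitation below are illustrative assumptions; the paper's formulation covers the full vehicle with additional constraining equations.

      import numpy as np

      rng = np.random.default_rng(4)

      # 'True' single-DOF suspension corner parameters (illustrative)
      m_true, c_true, k_true = 300.0, 1500.0, 20000.0

      # Simulated frequency-domain test data: X(w) response to a unit force F(w) = 1
      omega = 2.0 * np.pi * np.linspace(0.5, 25.0, 120)
      X = 1.0 / (-omega ** 2 * m_true + 1j * omega * c_true + k_true)
      X *= 1.0 + 0.02 * (rng.standard_normal(X.size) + 1j * rng.standard_normal(X.size))

      # (-w^2 m + j w c + k) X(w) = F(w): linear in [m, c, k] once X(w) is measured
      A = np.column_stack([-omega ** 2 * X, 1j * omega * X, X])
      b = np.ones_like(X)

      # Stack real and imaginary parts and solve the overdetermined system
      A_ri = np.vstack([A.real, A.imag])
      b_ri = np.concatenate([b.real, b.imag])
      m_est, c_est, k_est = np.linalg.lstsq(A_ri, b_ri, rcond=None)[0]
      print(f"m = {m_est:.1f} kg, c = {c_est:.1f} N s/m, k = {k_est:.1f} N/m")

    With real test data the same least-squares inversion applies, and regularisation or weighting of the spectral lines is the usual remedy when the stacked matrix becomes ill-conditioned, as the paper discusses.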

  17. Identifiability of sorption parameters in stirred flow-through reactor experiments and their identification with a Bayesian approach.

    PubMed

    Nicoulaud-Gouin, V; Garcia-Sanchez, L; Giacalone, M; Attard, J C; Martin-Garin, A; Bois, F Y

    2016-10-01

    This paper addresses the methodological conditions - particularly experimental design and statistical inference - ensuring the identifiability of sorption parameters from breakthrough curves measured during stirred flow-through reactor experiments, also known as continuous flow stirred-tank reactor (CSTR) experiments. The equilibrium-kinetic (EK) sorption model was selected as the nonequilibrium parameterization embedding the Kd approach. Parameter identifiability was studied formally on the equations governing outlet concentrations. It was also studied numerically on 6 simulated CSTR experiments on a soil with known equilibrium-kinetic sorption parameters. EK sorption parameters cannot be identified from a single breakthrough curve of a CSTR experiment, because Kd,1 and k- were diagnosed as collinear. For pairs of CSTR experiments, Bayesian inference allowed selection of the correct models of sorption and error among the sorption alternatives. Bayesian inference was conducted with the SAMCAT software (Sensitivity Analysis and Markov Chain simulations Applied to Transfer models), which launched the simulations through the embedded simulation engine GNU-MCSim and automated their configuration and post-processing. Experimental designs consisting of varying flow rates between experiments reaching equilibrium at the contamination stage were found optimal, because they simultaneously gave accurate sorption parameters and predictions. Bayesian results were comparable to the maximum likelihood method, but they avoided convergence problems, the marginal likelihood allowed comparison of all models, and credible intervals directly gave the uncertainty of the sorption parameters θ. Although these findings are limited to the specific conditions studied here, in particular the considered sorption model, the chosen parameter values and error structure, they help in the conception and analysis of future CSTR experiments with radionuclides whose kinetic behaviour is suspected. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values.

    PubMed

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-01-30

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits the utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
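
    A minimal sketch of the transformation search described above: sweep the Box-Cox parameter λ over a grid, transform the SUV sample at each λ, and keep the λ that maximises the Shapiro-Wilk P-value. The synthetic SUV sample is an illustrative assumption; SciPy's boxcox and shapiro routines are used for the transformation and the normality test.

      import numpy as np
      from scipy import stats

      def optimal_boxcox_lambda(values, lambdas=np.linspace(-2.0, 2.0, 81)):
          """Return the Box-Cox lambda that maximises the Shapiro-Wilk P-value."""
          best_lam, best_p = None, -1.0
          for lam in lambdas:
              transformed = stats.boxcox(values, lmbda=lam)   # lmbda=0 is the log transform
              p = stats.shapiro(transformed).pvalue
              if p > best_p:
                  best_lam, best_p = lam, p
          return best_lam, best_p

      # Illustrative skewed 'SUVmax' sample (lognormal-like, strictly positive)
      rng = np.random.default_rng(5)
      suv_max = rng.lognormal(mean=1.2, sigma=0.6, size=57)

      lam, p = optimal_boxcox_lambda(suv_max)
      log_p = stats.shapiro(np.log(suv_max)).pvalue
      print(f"optimal lambda = {lam:.2f} (Shapiro-Wilk P = {p:.3f})")
      print(f"plain log transform Shapiro-Wilk P = {log_p:.3f}")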

  19. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values

    NASA Astrophysics Data System (ADS)

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-02-01

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits the utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.

  20. Second Harmonic Generation Reveals Subtle Fibrosis Differences in Adult and Pediatric Nonalcoholic Fatty Liver Disease.

    PubMed

    Liu, Feng; Zhao, Jing-Min; Rao, Hui-Ying; Yu, Wei-Miao; Zhang, Wei; Theise, Neil D; Wee, Aileen; Wei, Lai

    2017-11-20

    To investigate subtle fibrosis similarities and differences in adult and pediatric nonalcoholic fatty liver disease (NAFLD) using second harmonic generation (SHG). SHG/two-photon excitation fluorescence imaging quantified 100 collagen parameters and determined qFibrosis values by using the nonalcoholic steatohepatitis (NASH) Clinical Research Network (CRN) scoring system in 62 adult and 36 pediatric NAFLD liver specimens. Six distinct parameters identified differences among the NASH CRN stages with high accuracy (area under the curve, 0.835-0.982 vs 0.885-0.981, adult and pediatric). All portal region parameters showed similar changes across early stages 0, 1C, and 2 in both groups. Parameter values decreased in adults with progression from stage 1A/B to 2 in the central vein region. In children, aggregated collagen parameters decreased, but nearly all distributed collagen parameters increased from stage 1A/B to 2. SHG analysis accurately reproduces NASH CRN staging in NAFLD, as well as reveals differences and similarities between adult and pediatric collagen deposition not captured by currently available quantitative methods. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  1. Influence of temperature variations on the entropy and correlation of the Grey-Level Co-occurrence Matrix from B-Mode images.

    PubMed

    Alvarenga, André V; Teixeira, César A; Ruano, Maria Graça; Pereira, Wagner C A

    2010-02-01

    In this work, the feasibility of using texture parameters extracted from B-Mode images to quantify medium temperature variation was explored. The goal is to understand how parameters obtained from the gray-level content can be used to improve the current state-of-the-art methods for non-invasive temperature estimation (NITE). B-Mode images were collected from a tissue-mimicking phantom heated in a water bath. The phantom is a mixture of water, glycerin, agar-agar and graphite powder. This mixture aims to have acoustical properties similar to in vivo muscle. Images of the phantom were collected using an ultrasound system with a mechanical sector transducer working at 3.5 MHz. Three temperature curves were collected, with variations between 27 and 44 degrees C allowed during 60 min. Two parameters (correlation and entropy) were determined from the Grey-Level Co-occurrence Matrix (GLCM) extracted from each image and then assessed for non-invasive temperature estimation. Entropy values were capable of identifying variations of 2.0 degrees C. Moreover, it was possible to quantify variations from normal human body temperature (37 degrees C) to critical values such as 41 degrees C. In contrast, although the correlation parameter (obtained from the GLCM) showed a correlation coefficient of 0.84 with temperature variation, the high dispersion of its values limited its use for temperature assessment.
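
    A small sketch of computing the two GLCM features used in the study from an image patch, assuming 8-bit gray levels and scikit-image's graycomatrix/graycoprops helpers (spelled greycomatrix/greycoprops in older releases); the synthetic speckle-like patch stands in for a B-mode region of interest. Entropy is computed directly from the normalised co-occurrence probabilities.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_entropy_correlation(image_8bit, distance=1, angle=0.0):
          """Entropy and correlation of the GLCM of an 8-bit (0-255) image patch."""
          glcm = graycomatrix(image_8bit, distances=[distance], angles=[angle],
                              levels=256, symmetric=True, normed=True)
          p = glcm[:, :, 0, 0]
          entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
          correlation = graycoprops(glcm, "correlation")[0, 0]
          return entropy, correlation

      # Illustrative use on a synthetic speckle-like patch (stands in for a B-mode ROI)
      rng = np.random.default_rng(6)
      patch = rng.gamma(shape=2.0, scale=40.0, size=(64, 64)).clip(0, 255).astype(np.uint8)
      ent, corr = glcm_entropy_correlation(patch)
      print(f"GLCM entropy = {ent:.2f}, GLCM correlation = {corr:.3f}")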

  2. Laser magnetic resonance in supersonic plasmas - The rotational spectrum of SH(+)

    NASA Technical Reports Server (NTRS)

    Hovde, David C.; Saykally, Richard J.

    1987-01-01

    The rotational spectrum of SH(+) in the v = 0 and v = 1 levels of the X 3Sigma(-) state was measured by laser magnetic resonance. Rotationally cold (Tr = 30 K), vibrationally excited (Tv = 3000 K) ions were generated in a corona-excited supersonic expansion. The use of this source to identify ion signals is described. Improved molecular parameters were obtained; term values are presented from which astrophysically important transitions may be calculated. Accurate hyperfine parameters for both vibrational levels were determined, and the vibrational dependence of the Fermi contact interaction was resolved. The hyperfine parameters agree well with recent many-body perturbation theory calculations.

  3. Estimation of Inertial Parameters of Rigid Body Links of Manipulators.

    DTIC Science & Technology

    1986-02-01

    (The scanned report text is heavily garbled by OCR. Recoverable fragments indicate that a good match was obtained between the joint torques predicted from the estimated inertial parameters and the measured joint torques, and that a non-zero singular value indicates that the corresponding linear combination of parameters is identifiable and depends only on the geometry of the manipulator.)

  4. Models of Pilot Behavior and Their Use to Evaluate the State of Pilot Training

    NASA Astrophysics Data System (ADS)

    Jirgl, Miroslav; Jalovecky, Rudolf; Bradac, Zdenek

    2016-07-01

    This article discusses the possibilities of obtaining new information related to human behavior, namely the changes or progressive development of pilots' abilities during training. The main assumption is that a pilot's ability can be evaluated based on a corresponding behavioral model whose parameters are estimated using mathematical identification procedures. The mean values of the identified parameters are obtained via statistical methods. These parameters are then monitored and their changes evaluated. In this context, the paper introduces and examines relevant mathematical models of human (pilot) behavior, the pilot-aircraft interaction, and an example of the mathematical analysis.

  5. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is, however, little guidance available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing levels of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.

  6. Mathematical Modelling of the Infusion Test

    NASA Astrophysics Data System (ADS)

    Cieslicki, Krzysztof

    2007-01-01

    The objective of this paper was to improve the Marmarou model for intracranial volume-pressure compensation, which is well established in clinical practice, by adding pulsatile components. It was demonstrated that the complicated pulsation and growth in intracranial pressure during an infusion test could be successfully modeled by the relatively simple analytical expression derived in this paper. The CSF dynamics were tested in 25 patients with clinical symptoms of hydrocephalus. Based on the frequency spectrum of the patient's baseline pressure and the identified parameters of CSF dynamics, an "ideal" infusion test curve free from artefacts and slow waves was simulated for each patient. The degree of correlation between the simulated and real curves obtained from clinical observations gave insight into the adequacy of the assumptions of the Marmarou model. The proposed method of infusion test analysis determines more exactly the value of the reference pressure, which is usually treated as secondary and of uncertain significance. The properly identified value of the reference pressure determines the degree of pulsation amplitude growth during the infusion test, as well as the value of the elastance coefficient. Artificially generated tests with various pulsation components were also used to examine the correctness of the algorithm used to identify the original Marmarou model parameters.

  7. Optimisation of warpage on thin shell plastic part using response surface methodology (RSM) and glowworm swarm optimisation (GSO)

    NASA Astrophysics Data System (ADS)

    Asyirah, B. N.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    Plastic injection moulding is widely used in manufacturing a variety of parts. The injection moulding process parameters play an important role in the product's quality and productivity. Many approaches to minimising warpage and shrinkage have been addressed, such as artificial neural networks, genetic algorithms, glowworm swarm optimisation and hybrid approaches. In this paper, a systematic methodology for determining warpage and shrinkage in the injection moulding process, especially for thin shell plastic parts, is presented. To identify the effects of the machining parameters on the warpage and shrinkage values, response surface methodology is applied. In this study, a part of an electronic night lamp is chosen as the model. Firstly, experimental designs were used to determine the effect of the injection parameters on warpage for different thickness values. The software used to analyse the warpage is Autodesk Moldflow Insight (AMI) 2012.

  8. Evaluation of exposure parameters in plain radiography: a comparative study with European guidelines.

    PubMed

    Lança, L; Silva, A; Alves, E; Serranheira, F; Correia, M

    2008-01-01

    Typical distribution of exposure parameters in plain radiography is unknown in Portugal. This study aims to identify exposure parameters that are being used in plain radiography in the Lisbon area and to compare the collected data with European references [Commission of European Communities (CEC) guidelines]. The results show that in four examinations (skull, chest, lumbar spine and pelvis), there is a strong tendency of using exposure times above the European recommendation. The X-ray tube potential values (in kV) are below the recommended values from CEC guidelines. This study shows that at a local level (Lisbon region), radiographic practice does not comply with CEC guidelines concerning exposure techniques. Further national/local studies are recommended with the objective to improve exposure optimisation and technical procedures in plain radiography. This study also suggests the need to establish national/local diagnostic reference levels and to proceed to effective measurements for exposure optimisation.

  9. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. The authors considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed them to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.

  10. Method for Constructing Composite Response Surfaces by Combining Neural Networks with Polynomial Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2007-01-01

    A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials; and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model.

  11. How robust are the natural history parameters used in chlamydia transmission dynamic models? A systematic review.

    PubMed

    Davies, Bethan; Anderson, Sarah-Jane; Turner, Katy M E; Ward, Helen

    2014-01-30

    Transmission dynamic models linked to economic analyses often form part of the decision making process when introducing new chlamydia screening interventions. Outputs from these transmission dynamic models can vary depending on the values of the parameters used to describe the infection. Therefore these values can have an important influence on policy and resource allocation. The risk of progression from infection to pelvic inflammatory disease has been extensively studied but the parameters which govern the transmission dynamics are frequently neglected. We conducted a systematic review of transmission dynamic models linked to economic analyses of chlamydia screening interventions to critically assess the source and variability of the proportion of infections that are asymptomatic, the duration of infection and the transmission probability. We identified nine relevant studies in Pubmed, Embase and the Cochrane database. We found that there is a wide variation in their natural history parameters, including an absolute difference in the proportion of asymptomatic infections of 25% in women and 75% in men, a six-fold difference in the duration of asymptomatic infection and a four-fold difference in the per act transmission probability. We consider that much of this variation can be explained by a lack of consensus in the literature. We found that a significant proportion of parameter values were referenced back to the early chlamydia literature, before the introduction of nucleic acid modes of diagnosis and the widespread testing of asymptomatic individuals. In conclusion, authors should use high quality contemporary evidence to inform their parameter values, clearly document their assumptions and make appropriate use of sensitivity analysis. This will help to make models more transparent and increase their utility to policy makers.

  12. Structural identifiability analysis of a cardiovascular system model.

    PubMed

    Pironet, Antoine; Dauby, Pierre C; Chase, J Geoffrey; Docherty, Paul D; Revie, James A; Desaive, Thomas

    2016-05-01

    The six-chamber cardiovascular system model of Burkhoff and Tyberg has been used in several theoretical and experimental studies. However, this cardiovascular system model (and others derived from it) is not identifiable from every output set. In this work, two such cases of structural non-identifiability are first presented. These cases occur when the model output set only contains a single type of information (pressure or volume). A specific output set is thus chosen, mixing pressure and volume information and containing only a limited number of clinically available measurements. Then, by manipulating the model equations involving these outputs, it is demonstrated that the six-chamber cardiovascular system model is structurally globally identifiable. A further simplification is made, assuming known cardiac valve resistances. Because of the poor practical identifiability of these four parameters, this assumption is commonly made. Under this hypothesis, the six-chamber cardiovascular system model is structurally identifiable from an even smaller dataset. As a consequence, parameter values computed from limited but well-chosen datasets are theoretically unique. This means that the parameter identification procedure can safely be performed on the model from such a well-chosen dataset. Thus, the model may be considered suitable for use in diagnosis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  13. Examining the Feasibility and Utility of Estimating Partial Expected Value of Perfect Information (via a Nonparametric Approach) as Part of the Reimbursement Decision-Making Process in Ireland: Application to Drugs for Cancer.

    PubMed

    McCullagh, Laura; Schmitz, Susanne; Barry, Michael; Walsh, Cathal

    2017-11-01

    In Ireland, all new drugs for which reimbursement by the healthcare payer is sought undergo a health technology assessment by the National Centre for Pharmacoeconomics. The National Centre for Pharmacoeconomics estimate expected value of perfect information but not partial expected value of perfect information (owing to computational expense associated with typical methodologies). The objective of this study was to examine the feasibility and utility of estimating partial expected value of perfect information via a computationally efficient, non-parametric regression approach. This was a retrospective analysis of evaluations on drugs for cancer that had been submitted to the National Centre for Pharmacoeconomics (January 2010 to December 2014 inclusive). Drugs were excluded if cost effective at the submitted price. Drugs were excluded if concerns existed regarding the validity of the applicants' submission or if cost-effectiveness model functionality did not allow required modifications to be made. For each included drug (n = 14), value of information was estimated at the final reimbursement price, at a threshold equivalent to the incremental cost-effectiveness ratio at that price. The expected value of perfect information was estimated from probabilistic analysis. Partial expected value of perfect information was estimated via a non-parametric approach. Input parameters with a population value at least €1 million were identified as potential targets for research. All partial estimates were determined within minutes. Thirty parameters (across nine models) each had a value of at least €1 million. These were categorised. Collectively, survival analysis parameters were valued at €19.32 million, health state utility parameters at €15.81 million and parameters associated with the cost of treating adverse effects at €6.64 million. Those associated with drug acquisition costs and with the cost of care were valued at €6.51 million and €5.71 million, respectively. This research demonstrates that the estimation of partial expected value of perfect information via this computationally inexpensive approach could be considered feasible as part of the health technology assessment process for reimbursement purposes within the Irish healthcare system. It might be a useful tool in prioritising future research to decrease decision uncertainty.
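
    A minimal sketch of the regression-based (nonparametric) partial EVPI estimator for a two-option decision, assuming probabilistic-sensitivity-analysis samples of incremental net benefit and a single parameter of interest; a cubic polynomial stands in for the flexible smoother (e.g. a GAM) normally used, and all monetary values are made up.

      import numpy as np

      rng = np.random.default_rng(7)
      n_psa = 10_000

      # Illustrative PSA samples: incremental net benefit (new drug vs comparator)
      # depends on an uncertain effectiveness parameter theta plus other noise
      theta = rng.normal(0.0, 1.0, n_psa)
      inb = 5_000.0 * theta - 2_000.0 + rng.normal(0.0, 20_000.0, n_psa)

      # Overall EVPI for the two-option decision (reference value)
      evpi = np.mean(np.maximum(inb, 0.0)) - max(np.mean(inb), 0.0)

      # Regression-based partial EVPI for theta: regress INB on theta with a
      # flexible smoother (a cubic polynomial here), then evaluate the expected
      # gain from learning theta before making the decision
      fitted = np.polyval(np.polyfit(theta, inb, deg=3), theta)
      evppi_theta = np.mean(np.maximum(fitted, 0.0)) - max(np.mean(inb), 0.0)

      print(f"EVPI  per patient: {evpi:,.0f}")
      print(f"EVPPI per patient (theta): {evppi_theta:,.0f}")

    The partial value is bounded above by the overall EVPI, and comparing the two indicates how much of the decision uncertainty is attributable to the single parameter, which is what supports the research-prioritisation use described above.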

  14. Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan

    2016-04-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values in restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating a parameter sub-set of only soil parameters, for example, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.

  15. The water retention curve and relative permeability for gas production from hydrate-bearing sediments: pore-network model simulation

    NASA Astrophysics Data System (ADS)

    Mahabadi, Nariman; Dai, Sheng; Seol, Yongkoo; Sup Yun, Tae; Jang, Jaewon

    2016-08-01

    The water retention curve and relative permeability are critical to predict gas and water production from hydrate-bearing sediments. However, values for key parameters that characterize gas and water flows during hydrate dissociation have not been identified due to experimental challenges. This study utilizes the combined techniques of micro-focus X-ray computed tomography (CT) and pore-network model simulation to identify proper values for those key parameters, such as gas entry pressure, residual water saturation, and curve fitting values. Hydrates with various saturation and morphology are realized in the pore-network that was extracted from micron-resolution CT images of sediments recovered from the hydrate deposit at the Mallik site, and then the processes of gas invasion, hydrate dissociation, gas expansion, and gas and water permeability are simulated. Results show that greater hydrate saturation in sediments leads to higher gas entry pressure, higher residual water saturation, and a steeper water retention curve. An increase in hydrate saturation decreases gas permeability but has marginal effects on water permeability in sediments with uniformly distributed hydrate. Hydrate morphology has more significant impacts than hydrate saturation on relative permeability. Sediments with heterogeneously distributed hydrate tend to result in lower residual water saturation and higher gas and water permeability. In this sense, the Brooks-Corey model, which uses two fitting parameters individually for gas and water permeability, properly captures the effect of hydrate saturation and morphology on gas and water flows in hydrate-bearing sediments.
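
    For orientation, the sketch below evaluates a Brooks-Corey water retention curve and relative permeabilities with separate fitting exponents for water and gas, which is one common way to write the two-exponent form mentioned above; the parameter values are placeholders, not those identified from the pore-network simulations.

      # Minimal Brooks-Corey sketch with assumed parameter values.
      import numpy as np

      def brooks_corey(sw, swr=0.15, pe=5.0, lam=2.0, nw=4.0, ng=2.0):
          """Return capillary pressure (kPa) and water/gas relative permeabilities."""
          se = np.clip((sw - swr) / (1.0 - swr), 1e-6, 1.0)   # effective saturation
          pc = pe * se ** (-1.0 / lam)                        # water retention curve
          krw = se ** nw                                      # water relative permeability
          krg = (1.0 - se) ** 2 * (1.0 - se ** ng)            # gas relative permeability
          return pc, krw, krg

      for sw in np.linspace(0.2, 1.0, 9):
          pc, krw, krg = brooks_corey(sw)
          print(f"Sw={sw:.2f}  Pc={pc:6.2f}  krw={krw:.3f}  krg={krg:.3f}")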

  16. Rate-equation modelling and ensemble approach to extraction of parameters for viral infection-induced cell apoptosis and necrosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domanskyi, Sergii; Schilling, Joshua E.; Privman, Vladimir, E-mail: privman@clarkson.edu

    We develop a theoretical approach that uses physicochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of “stiff” equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.
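
    As a toy illustration of the rate-equation approach, the sketch below integrates a small healthy/infected/apoptotic/necrotic population model with a stiff ODE solver; the equations and rate constants are invented stand-ins for the model in the record, not the authors' ensemble method.

      # Minimal stiff rate-equation sketch with assumed kinetics.
      import numpy as np
      from scipy.integrate import solve_ivp

      k_inf, k_apo, k_nec, k_lysis = 0.5, 0.2, 0.05, 5.0      # 1/h, assumed rate constants

      def rhs(t, y):
          healthy, infected, apoptotic, necrotic = y
          return [
              -k_inf * healthy,
              k_inf * healthy - (k_apo + k_nec) * infected,
              k_apo * infected - k_lysis * apoptotic,          # fast lysis term makes the system stiff
              k_nec * infected + k_lysis * apoptotic,
          ]

      sol = solve_ivp(rhs, (0.0, 48.0), [1.0, 0.0, 0.0, 0.0], method="BDF",
                      t_eval=np.linspace(0.0, 48.0, 7))
      for t, y in zip(sol.t, sol.y.T):
          print(f"t={t:5.1f} h  healthy={y[0]:.3f}  apoptotic={y[2]:.3f}  necrotic={y[3]:.3f}")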

  17. Dimethyl sulfoxide could be a useful probe to evaluate unusual skin angioneurotic reaction and epidermal permeability.

    PubMed

    Chen, Shuang Y; Wang, Xue M; Liu, Yan Q; Gao, Yan R; Liu, Xiao P; Li, Shu Y; Dong, Ya Q

    2014-03-01

    Dimethyl sulfoxide (DMSO) has been suggested as a traditional chemical probe for assessing skin susceptibility and barrier function. The purpose of this study was to determine the role of the DMSO test for the evaluation of unusual skin angioneurotic reaction and epidermal permeability. Thirty healthy volunteers were exposed to 98% DMSO on the flexor forearm skin for three exposure durations (5 min, 10 min and 15 min). Clinical visual score and biological physical parameters were obtained. The volunteers were divided into two groups according to the clinical visual scoring. The skin parameters were subsequently analyzed. There was a significant correlation between clinical visual score and biological physical parameters. The skin color parameters (a*, oxyhemoglobin, erythema and melanin index) and blood flow values differed significantly between the two groups regardless of the duration of DMSO exposure, and a significant difference between density values could also be detected if we regrouped the volunteers according to the sting-producing score. Our results also suggested there was no correlation between questionnaire score and clinical visual score or other parameters. Application of 98% DMSO for 10 min combined with a* (at 30 min) and blood flow (at 10 min) values could help us to identify persons with a hyper-angioneurotic reaction to chemical stimulus. The penetrative activity of DMSO correlated with the thickness of the individual's skin.

  18. Statistical Signal Process in R Language in the Pharmacovigilance Programme of India.

    PubMed

    Kumar, Aman; Ahuja, Jitin; Shrivastava, Tarani Prakash; Kumar, Vipin; Kalaiselvan, Vivekanandan

    2018-05-01

    The Ministry of Health & Family Welfare, Government of India, initiated the Pharmacovigilance Programme of India (PvPI) in July 2010. The purpose of the PvPI is to collect data on adverse reactions due to medications, analyze it, and use the reference to recommend informed regulatory intervention, besides communicating the risk to health care professionals and the public. The goal of the present study was to apply statistical tools to find the relationship between drugs and ADRs for signal detection by R programming. Four statistical parameters were proposed for quantitative signal detection. These 4 parameters are IC025, PRR and PRRlb, chi-square, and N11; we calculated these 4 values using R programming. We analyzed 78,983 drug-ADR combinations, and the total count of drug-ADR combinations was 420,060. During the calculation of the statistical parameters, we used 3 variables: (1) N11 (number of counts), (2) N1. (drug margin), and (3) N.1 (ADR margin). The structure and calculation of these 4 statistical parameters in R language are easily understandable. On the basis of the IC value (IC value > 0), out of the 78,983 drug-ADR combinations, we found 8,667 combinations to be significantly associated. The calculation of statistical parameters in R language is time saving and makes it easy to identify new signals in the Indian ICSR (Individual Case Safety Reports) database.
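
    The sketch below shows, for a single hypothetical drug-ADR pair, how the quantities named above (N11, the margins, PRR, chi-square and a simple information component) can be computed from a 2x2 contingency table. The counts are invented, the chi-square form is the simplest one, and the shrinkage behind IC025 is omitted; it is written in Python here for consistency with the other sketches, whereas the programme itself works in R.

      # Minimal disproportionality sketch with invented counts.
      import numpy as np

      n11 = 25          # reports containing both the drug and the ADR
      n1_ = 1_200       # all reports containing the drug (drug margin)
      n_1 = 900         # all reports containing the ADR (ADR margin)
      n__ = 420_060     # total drug-ADR combination count in the database

      prr = (n11 / n1_) / ((n_1 - n11) / (n__ - n1_))          # proportional reporting ratio
      expected = n1_ * n_1 / n__                               # expected count under independence
      chi_sq = (n11 - expected) ** 2 / expected                # simplest chi-square form
      ic = np.log2(n11 * n__ / (n1_ * n_1))                    # information component, no shrinkage

      print(f"PRR = {prr:.2f}, chi-square = {chi_sq:.1f}, IC = {ic:.2f}")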

  19. The physiology of spacecraft and space suit atmosphere selection

    NASA Astrophysics Data System (ADS)

    Waligora, J. M.; Horrigan, D. J.; Nicogossian, A.

    The majority of the environmental factors which comprise the spacecraft and space suit environments can be controlled at "Earth normal" values, at optimum values, or at other values decided upon by spacecraft designers. Factors which are considered in arriving at control values and control ranges of these parameters include physiological, engineering, operational cost, and safety considerations. Several of the physiologic considerations, including hypoxia and hyperoxia, hypercapnia, temperature regulation, and decompression sickness, are identified and their impact on spacecraft and space suit atmosphere selection is considered. The past experience in controlling these parameters in U.S. and Soviet spacecraft and space suits and the associated physiological responses are reviewed. Current areas of physiological investigation relating to environmental factors in spacecraft are discussed, particularly decompression sickness, which can occur as a result of a change in pressure from Earth to spacecraft or spacecraft to space suit. Physiological considerations for long-term lunar or Martian missions will have different impacts on atmosphere selection and may result in the selection of atmospheres different from those currently in use.

  20. Geometric parameter analysis to predetermine optimal radiosurgery technique for the treatment of arteriovenous malformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mestrovic, Ante; Clark, Brenda G.; Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia

    2005-11-01

    Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.

  1. A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.

    PubMed

    Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff

    2014-01-01

    Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unknown even if a detailed hydrogeological description is available. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is applied to create approximate spatial zonations of T, Sske, and Sskv. UCODE_2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automation algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution. © 2013, National Ground Water Association.

  2. Maximum likelihood identification and optimal input design for identifying aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Stepner, D. E.; Mehra, R. K.

    1973-01-01

    A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. The method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.

  3. Data Point Averaging for Computational Fluid Dynamics Data

    NASA Technical Reports Server (NTRS)

    Norman, Jr., David (Inventor)

    2016-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
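
    A minimal sketch of the averaging step described above is given below: point values from the computational fluid dynamics data are grouped by the sub-area each point belongs to and one average per sub-area is reported. The point values and sub-area assignments are invented for illustration.

      # Minimal sub-area averaging sketch with invented data.
      import numpy as np

      point_values = np.array([410.0, 432.0, 421.0, 398.0, 405.0, 450.0])   # e.g. heating parameter at surface points
      sub_area_of_point = np.array([0, 0, 1, 1, 1, 2])                      # sub-area index for each point

      for area in np.unique(sub_area_of_point):
          mask = sub_area_of_point == area
          print(f"sub-area {area}: mean value = {point_values[mask].mean():.1f} from {mask.sum()} points")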

  4. Data Point Averaging for Computational Fluid Dynamics Data

    NASA Technical Reports Server (NTRS)

    Norman, David, Jr. (Inventor)

    2014-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.

  5. An expert system for prediction of aquatic toxicity of contaminants

    USGS Publications Warehouse

    Hickey, James P.; Aldridge, Andrew J.; Passino, Dora R. May; Frank, Anthony M.; Hushon, Judith M.

    1990-01-01

    The National Fisheries Research Center-Great Lakes has developed an interactive computer program in muLISP that runs on an IBM-compatible microcomputer and uses a linear solvation energy relationship (LSER) to predict acute toxicity to four representative aquatic species from the detailed structure of an organic molecule. Using the SMILES formalism for a chemical structure, the expert system identifies all structural components and uses a knowledge base of rules based on an LSER to generate four structure-related parameter values. A separate module then relates these values to toxicity. The system is designed for rapid screening of potential chemical hazards before laboratory or field investigations are conducted and can be operated by users with little toxicological background. This is the first expert system based on LSER, relying on the first comprehensive compilation of rules and values for the estimation of LSER parameters.

  6. Local identifiability and sensitivity analysis of neuromuscular blockade and depth of hypnosis models.

    PubMed

    Silva, M M; Lemos, J M; Coito, A; Costa, B A; Wigren, T; Mendonça, T

    2014-01-01

    This paper addresses the local identifiability and sensitivity properties of two classes of Wiener models for the neuromuscular blockade and depth of hypnosis, when drug dose profiles like the ones commonly administered in clinical practice are used as model inputs. The local parameter identifiability was assessed based on the singular value decomposition of the normalized sensitivity matrix. For the given input signal excitation, the results show an over-parameterization of the standard pharmacokinetic/pharmacodynamic models. The same identifiability assessment was performed on recently proposed minimally parameterized parsimonious models for both the neuromuscular blockade and the depth of hypnosis. The results show that the majority of the model parameters are identifiable from the available input-output data. This indicates that any identification strategy based on the minimally parameterized parsimonious Wiener models for the neuromuscular blockade and for the depth of hypnosis is likely to be more successful than if standard models are used. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
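
    As a generic illustration of the assessment described above, the sketch below builds a finite-difference sensitivity matrix for a small model, scales it by the nominal parameter values, and inspects its singular values; a near-zero singular value flags a non-identifiable parameter combination. The model and threshold are illustrative assumptions, not the Wiener models of the study.

      # Minimal local-identifiability sketch via SVD of a scaled sensitivity matrix.
      import numpy as np

      def model(t, p):
          k1, k2, k3 = p
          return k1 * k3 * np.exp(-k2 * t)        # k1 and k3 enter only as a product

      t = np.linspace(0.0, 10.0, 50)
      p0 = np.array([2.0, 0.5, 0.1])
      eps = 1e-6

      # finite-difference sensitivities, scaled by nominal parameter values
      S = np.column_stack([
          (model(t, p0 + eps * np.eye(3)[i]) - model(t, p0)) / eps * p0[i] for i in range(3)
      ])
      sigma = np.linalg.svd(S, compute_uv=False)
      print("singular values:", np.round(sigma, 6))
      print("condition number:", sigma[0] / sigma[-1])   # very large -> over-parameterization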

  7. Factors which modulate the rates of skeletal muscle mass loss in non-small cell lung cancer patients: a pilot study.

    PubMed

    Atlan, Philippe; Bayar, Mohamed Amine; Lanoy, Emilie; Besse, Benjamin; Planchard, David; Ramon, Jordy; Raynard, Bruno; Antoun, Sami

    2017-11-01

    Advanced non-small cell lung cancer (NSCLC) is associated with weight loss which may reflect skeletal muscle mass (SMM) and/or total adipose tissue (TAT) depletion. This study aimed to describe changes in body composition (BC) parameters and to identify the factors unrelated to the tumor which modulate them. SMM, TAT, and the proportion of SMM to SMM + TAT were assessed with computed tomography. Estimates of each BC parameter at follow-up initiation and across time were derived from a mixed linear model of repeated measurements with a random intercept and a random slope. The same models were used to assess the independent effect of gender, age, body mass index (BMI), and initial values on changes in each BC parameter. Sixty-four patients with stage III or IV NSCLC were reviewed. The mean ± SD decreases in body weight and SMM were respectively 59 ± 3 g/week (P < 0.03) and 7 mm²/m²/week (P = 0.0003). During follow-up, no changes were identified in TAT, muscle density, or the proportion of SMM to SMM + TAT, estimated at 37 ± 2% at baseline. SMM loss was influenced by initial BMI (P < 0.0001) and SMM values (P = 0.0002): the higher the initial BMI or SMM values, the greater the loss observed. Weight loss was greater when the initial weight was heavier (P < 0.0001). Our results demonstrate that SMM wasting in NSCLC is lower when initial SMM and BMI values are low. These exploratory findings from our attempt to better understand the intrinsic factors associated with muscle mass depletion need to be confirmed in larger studies.

  8. Operational modal analysis using SVD of power spectral density transmissibility matrices

    NASA Astrophysics Data System (ADS)

    Araújo, Iván Gómez; Laier, Jose Elias

    2014-05-01

    This paper proposes the singular value decomposition of power spectrum density transmissibility matrices with different references (PSDTM-SVD) as an identification method of natural frequencies and mode shapes of a dynamic system subjected to excitations under operational conditions. At the system poles, the rows of the proposed transmissibility matrix converge to the same ratio of amplitudes of vibration modes. As a result, the matrices are linearly dependent on the columns, and their singular values converge to zero. Singular values are used to determine the natural frequencies, and the first left singular vectors are used to estimate mode shapes. A numerical example of the finite element model of a beam subjected to colored noise excitation is analyzed to illustrate the accuracy of the proposed method. Results of the PSDTM-SVD method in the numerical example are compared with those obtained using frequency domain decomposition (FDD) and power spectrum density transmissibility (PSDT). It is demonstrated that the proposed method does not depend on the excitation characteristics, in contrast to the FDD method, which assumes white noise excitation, and further reduces the risk of identifying extra non-physical poles in comparison to the PSDT method. Furthermore, a case study is performed using data from an operational vibration test of a bridge with a simply supported beam system. The real application to a full-sized bridge has shown that the proposed PSDTM-SVD method is able to identify the operational modal parameters. Operational modal parameters identified by the PSDTM-SVD in the real application agree well with those identified by the FDD and PSDT methods.

  9. The use of multiobjective calibration and regional sensitivity analysis in simulating hyporheic exchange

    USGS Publications Warehouse

    Naranjo, Ramon C.; Niswonger, Richard G.; Stone, Mark; Davis, Clinton; McKay, Alan

    2012-01-01

    We describe an approach for calibrating a two-dimensional (2-D) flow model of hyporheic exchange using observations of temperature and pressure to estimate hydraulic and thermal properties. A longitudinal 2-D heat and flow model was constructed for a riffle-pool sequence to simulate flow paths and flux rates for variable discharge conditions. A uniform random sampling approach was used to examine the solution space and identify optimal values at local and regional scales. We used a regional sensitivity analysis to examine the effects of parameter correlation and nonuniqueness commonly encountered in multidimensional modeling. The results from this study demonstrate the ability to estimate hydraulic and thermal parameters using measurements of temperature and pressure to simulate exchange and flow paths. Examination of the local parameter space provides the potential for refinement of zones that are used to represent sediment heterogeneity within the model. The results indicate vertical hydraulic conductivity was not identifiable solely using pressure observations; however, a distinct minimum was identified using temperature observations. The measured temperature and pressure and estimated vertical hydraulic conductivity values indicate the presence of a discontinuous low-permeability deposit that limits the vertical penetration of seepage beneath the riffle, whereas there is a much greater exchange where the low-permeability deposit is absent. Using both temperature and pressure to constrain the parameter estimation process provides the lowest overall root-mean-square error as compared to using solely temperature or pressure observations. This study demonstrates the benefits of combining continuous temperature and pressure for simulating hyporheic exchange and flow in a riffle-pool sequence. Copyright 2012 by the American Geophysical Union.

  10. A comparative study of charge transfer inefficiency value and trap parameter determination techniques making use of an irradiated ESA-Euclid prototype CCD

    NASA Astrophysics Data System (ADS)

    Prod'homme, Thibaut; Verhoeve, P.; Kohley, R.; Short, A.; Boudin, N.

    2014-07-01

    The science objectives of space missions using CCDs to carry out accurate astronomical measurements are put at risk by the radiation-induced increase in charge transfer inefficiency (CTI) that results from trapping sites in the CCD silicon lattice. A variety of techniques are used to obtain CTI values and derive trap parameters, however they often differ in results. To identify and understand these differences, we take advantage of an on-going comprehensive characterisation of an irradiated Euclid prototype CCD including the following techniques: X-ray, trap pumping, flat field extended pixel edge response and first pixel response. We proceed to a comparative analysis of the obtained results.

  11. Lipophilicity of some guaianolides isolated from two endemic subspecies of Amphoricarpos neumayeri (Asteraceae) from Montenegro.

    PubMed

    Atrrog, Abubaker A B; Natić, Maja; Tosti, Tomislav; Milojković-Opsenica, Dusanka; Dordević, Iris; Tesević, Vele; Jadranin, Milka; Milosavljević, Slobodan; Lazić, Milan; Radulović, Sinisa; Tesić, Zivoslav

    2009-03-01

    In this study 10 guaianolide-type sesquiterpene gamma-lactones named amphoricarpolides, isolated from the aerial parts of two endemic subspecies of Amphoricarpos neumayeri (ssp. neumayeri and ssp. murbeckii Bosnjak), were investigated by means of reversed-phase thin-layer chromatography. Methanol-water and tetrahydrofuran-water binary mixtures were used as mobile phase in order to determine the lipophilicity parameters RM0 and C0. Some of the investigated compounds were screened for their cytotoxic activity against HeLa and B16 cells. Chromatographically obtained lipophilicity parameters were correlated with calculated logP values and IC50 values. Principal component analysis identified the dominant pattern in the chromatographically obtained data. Copyright © 2008 John Wiley & Sons, Ltd.

  12. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    NASA Technical Reports Server (NTRS)

    Parke, F. I.

    1981-01-01

    Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of parameters of the fluid flow (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in the value of these parameters is important to comprehend and check the data generated, to identify the regions of interest in the flow, and for effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. Use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied and the results are presented.

  13. Reference Values for Cardiac and Aortic Magnetic Resonance Imaging in Healthy, Young Caucasian Adults.

    PubMed

    Eikendal, Anouk L M; Bots, Michiel L; Haaring, Cees; Saam, Tobias; van der Geest, Rob J; Westenberg, Jos J M; den Ruijter, Hester M; Hoefer, Imo E; Leiner, Tim

    2016-01-01

    Reference values for morphological and functional parameters of the cardiovascular system in early life are relevant since they may help to identify young adults who fall outside the physiological range of arterial and cardiac ageing. This study provides age and sex specific reference values for aortic wall characteristics, cardiac function parameters and aortic pulse wave velocity (PWV) in a population-based sample of healthy, young adults using magnetic resonance (MR) imaging. In 131 randomly selected healthy, young adults aged between 25 and 35 years (mean age 31.8 years, 63 men) of the general-population based Atherosclerosis-Monitoring-and-Biomarker-measurements-In-The-YOuNg (AMBITYON) study, descending thoracic aortic dimensions and wall thickness, thoracic aortic PWV and cardiac function parameters were measured using a 3.0T MR-system. Age and sex specific reference values were generated using dedicated software. Differences in reference values between two age groups (25-30 and 30-35 years) and both sexes were tested. Aortic diameters and areas were higher in the older age group (all p<0.007). Moreover, aortic dimensions, left ventricular mass, left and right ventricular volumes and cardiac output were lower in women than in men (all p<0.001). For mean and maximum aortic wall thickness, left and right ejection fraction and aortic PWV we did not observe a significant age or sex effect. This study provides age and sex specific reference values for cardiovascular MR parameters in healthy, young Caucasian adults. These may aid in MR guided pre-clinical identification of young adults who fall outside the physiological range of arterial and cardiac ageing.

  14. The effects of variations in parameters and algorithm choices on calculated radiomics feature values: initial investigations and comparisons to feature variability across CT image acquisition conditions

    NASA Astrophysics Data System (ADS)

    Emaminejad, Nastaran; Wahi-Anwar, Muhammad; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael

    2018-02-01

    Translation of radiomics into clinical practice requires confidence in its interpretations. This may be obtained via understanding and overcoming the limitations in current radiomic approaches. Currently there is a lack of standardization in radiomic feature extraction. In this study we examined a few factors that are potential sources of inconsistency in characterizing lung nodules, such as (1) different choices of parameters and algorithms in feature calculation, (2) two CT image dose levels, and (3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of variation of these factors on the entropy textural features of lung nodules. CT images of 19 lung nodules identified in our lung cancer screening program were processed by a CAD tool, which provided contours. The radiomics features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features in addition to 2 intensity-based features. A robustness index was calculated across different image acquisition parameters to illustrate the reproducibility of features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features at WFBP. Iterative reconstruction resulted in improvement of robustness in fewer cases and caused more variation in entropy feature values and their robustness. Across different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. Results indicate the need for harmonization of feature calculations and identification of optimum parameters and algorithms in a radiomics study.
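
    For reference, the sketch below computes the two entropy flavours discussed above (a histogram entropy and a GLCM-based entropy) for a small synthetic patch. scikit-image is assumed for the co-occurrence matrix (the function is named graycomatrix in recent releases, greycomatrix in older ones); the patch, quantisation and offsets are arbitrary choices, which is exactly the kind of parameter variation the study examines.

      # Minimal entropy-feature sketch on a synthetic 8x8 patch.
      import numpy as np
      from skimage.feature import graycomatrix

      rng = np.random.default_rng(0)
      patch = rng.integers(0, 64, size=(8, 8), dtype=np.uint8)      # quantised to 64 grey levels

      # histogram-based entropy
      hist = np.bincount(patch.ravel(), minlength=64) / patch.size
      h_entropy = -np.sum(hist[hist > 0] * np.log2(hist[hist > 0]))

      # GLCM-based entropy (distance 1, angle 0)
      glcm = graycomatrix(patch, distances=[1], angles=[0], levels=64,
                          symmetric=True, normed=True)[:, :, 0, 0]
      g_entropy = -np.sum(glcm[glcm > 0] * np.log2(glcm[glcm > 0]))

      print(f"histogram entropy = {h_entropy:.3f} bits, GLCM entropy = {g_entropy:.3f} bits")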

  15. Modified method for estimating petroleum source-rock potential using wireline logs, with application to the Kingak Shale, Alaska North Slope

    USGS Publications Warehouse

    Rouse, William A.; Houseknecht, David W.

    2016-02-11

    In 2012, the U.S. Geological Survey completed an assessment of undiscovered, technically recoverable oil and gas resources in three source rocks of the Alaska North Slope, including the lower part of the Jurassic to Lower Cretaceous Kingak Shale. In order to identify organic shale potential in the absence of a robust geochemical dataset from the lower Kingak Shale, we introduce two quantitative parameters, $\Delta DT_{\bar{x}}$ and $\Delta DT_z$, estimated from wireline logs from exploration wells and based in part on the commonly used delta-log resistivity ($\Delta \log R$) technique. Calculation of $\Delta DT_{\bar{x}}$ and $\Delta DT_z$ is intended to produce objective parameters that may be proportional to the quality and volume, respectively, of potential source rocks penetrated by a well and can be used as mapping parameters to convey the spatial distribution of source-rock potential. Both the $\Delta DT_{\bar{x}}$ and $\Delta DT_z$ mapping parameters show increased source-rock potential from north to south across the North Slope, with the largest values at the toe of clinoforms in the lower Kingak Shale. Because thermal maturity is not considered in the calculation of $\Delta DT_{\bar{x}}$ or $\Delta DT_z$, total organic carbon values for individual wells cannot be calculated on the basis of $\Delta DT_{\bar{x}}$ or $\Delta DT_z$ alone. Therefore, the $\Delta DT_{\bar{x}}$ and $\Delta DT_z$ mapping parameters should be viewed as first-step reconnaissance tools for identifying source-rock potential.

  16. A theoretical investigation of chirp insonification of ultrasound contrast agents.

    PubMed

    Barlow, Euan; Mulholland, Anthony J; Gachagan, Anthony; Nordon, Alison

    2011-08-01

    A theoretical investigation of second harmonic imaging of an Ultrasound Contrast Agent (UCA) under chirp insonification is considered. By solving the UCA's dynamical equation analytically, the effect that the chirp signal parameters and the UCA shell parameters have on the amplitude of the second harmonic frequency is examined. This allows optimal parameter values to be identified which maximise the UCA's second harmonic response. A relationship is found for the chirp parameters which ensures that a signal can be designed to resonate a UCA for a given set of shell parameters. It is also shown that the shell thickness, shell viscosity and shell elasticity parameter should be as small as realistically possible in order to maximise the second harmonic amplitude. Keywords: Keller-Herring, second harmonic, chirp, ultrasound contrast agent. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Identification of Synchronous Machine Stability Parameters: An On-Line Time-Domain Approach.

    NASA Astrophysics Data System (ADS)

    Le, Loc Xuan

    1987-09-01

    A time-domain modeling approach is described which enables the stability-study parameters of the synchronous machine to be determined directly from input-output data measured at the terminals of the machine operating under normal conditions. The transient responses due to system perturbations are used to identify the parameters of the equivalent circuit models. The described models are verified by comparing their responses with the machine responses generated from the transient stability models of a small three-generator multi-bus power system and of a single -machine infinite-bus power network. The least-squares method is used for the solution of the model parameters. As a precaution against ill-conditioned problems, the singular value decomposition (SVD) is employed for its inherent numerical stability. In order to identify the equivalent-circuit parameters uniquely, the solution of a linear optimization problem with non-linear constraints is required. Here, the SVD appears to offer a simple solution to this otherwise difficult problem. Furthermore, the SVD yields solutions with small bias and, therefore, physically meaningful parameters even in the presence of noise in the data. The question concerning the need for a more advanced model of the synchronous machine which describes subtransient and even sub-subtransient behavior is dealt with sensibly by the concept of condition number. The concept provides a quantitative measure for determining whether such an advanced model is indeed necessary. Finally, the recursive SVD algorithm is described for real-time parameter identification and tracking of slowly time-variant parameters. The algorithm is applied to identify the dynamic equivalent power system model.
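
    As a small illustration of why the SVD is attractive for this kind of least-squares identification, the sketch below solves an ill-conditioned linear regression by truncating small singular values; the regressor matrix is synthetic, not a synchronous-machine model, and the truncation threshold is an arbitrary choice.

      # Minimal truncated-SVD least-squares sketch with synthetic data.
      import numpy as np

      rng = np.random.default_rng(3)
      A = rng.normal(size=(200, 4))
      A[:, 3] = A[:, 2] + 1e-6 * rng.normal(size=200)        # nearly dependent columns -> ill-conditioning
      x_true = np.array([1.0, -2.0, 0.5, 0.5])
      b = A @ x_true + 0.01 * rng.normal(size=200)

      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      print("condition number:", s[0] / s[-1])

      keep = s > 1e-3 * s[0]                                 # drop directions with tiny singular values
      x_hat = Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])    # minimum-norm solution in the kept subspace
      print("estimated parameters:", np.round(x_hat, 3))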

  18. Mass hierarchy and energy scaling of the Tsallis-Pareto parameters in hadron productions at RHIC and LHC energies

    NASA Astrophysics Data System (ADS)

    Bíró, Gábor; Barnaföldi, Gergely Gábor; Biró, Tamás Sándor; Shen, Keming

    2018-02-01

    The latest, high-accuracy identified hadron spectra measurements in high-energy nuclear collisions led us to investigate strongly interacting particles and collective effects in small systems. Since microscopical processes result in a statistical Tsallis-Pareto distribution, the fit parameters q and T are well suited for identifying system size scalings and initial conditions. Moreover, the parameter values provide information on the deviation from the extensive, Boltzmann-Gibbs statistics in finite volumes. We apply here the fit procedure developed in our earlier study for proton-proton collisions [1, 2]. The observed mass and center-of-mass energy trends in the hadron production are compared to RHIC dAu and LHC pPb data in different centrality/multiplicity classes. Here we present new results on the mass hierarchy in pp and pA from light to heavy hadrons.

  19. Early variations of laboratory parameters predicting shunt-dependent hydrocephalus after subarachnoid hemorrhage

    PubMed Central

    Kim, Choong Hyun; Kim, Jae Min; Cheong, Jin Hwan; Ryu, Je il

    2017-01-01

    Background and purpose Hydrocephalus is a frequent complication following subarachnoid hemorrhage. Few studies have investigated the association between laboratory parameters and shunt-dependent hydrocephalus. This study aimed to investigate the variations of laboratory parameters after subarachnoid hemorrhage. We also attempted to identify predictive laboratory parameters for shunt-dependent hydrocephalus. Methods Multiple imputation was performed to fill the missing laboratory data using Bayesian methods in SPSS. We used univariate and multivariate Cox regression analyses to calculate hazard ratios for shunt-dependent hydrocephalus based on clinical and laboratory factors. The area under the receiver operating characteristic curve was used to determine the laboratory risk values predicting shunt-dependent hydrocephalus. Results We included 181 participants with a mean age of 54.4 years. Higher sodium (hazard ratio, 1.53; 95% confidence interval, 1.13–2.07; p = 0.005), lower potassium, and higher glucose levels were associated with a higher risk of shunt-dependent hydrocephalus. The receiver operating characteristic curve analysis showed that the areas under the curve of sodium, potassium, and glucose were 0.649 (cutoff value, 142.75 mEq/L), 0.609 (cutoff value, 3.04 mmol/L), and 0.664 (cutoff value, 140.51 mg/dL), respectively. Conclusions Despite the exploratory nature of this study, we found that higher sodium, lower potassium, and higher glucose levels were predictive of shunt-dependent hydrocephalus from postoperative day (POD) 1 to POD 12–16 after subarachnoid hemorrhage. Strict correction of electrolyte imbalance seems necessary to reduce shunt-dependent hydrocephalus. Further large studies are warranted to confirm our findings. PMID:29232410

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noel, Camille E.; Gutti, VeeraRajesh; Bosch, Walter

    Purpose: To quantify the potential impact of the Integrating the Healthcare Enterprise–Radiation Oncology Quality Assurance with Plan Veto (QAPV) on patient safety of external beam radiation therapy (RT) operations. Methods and Materials: An institutional database of events (errors and near-misses) was used to evaluate the ability of QAPV to prevent clinically observed events. We analyzed reported events that were related to Digital Imaging and Communications in Medicine RT plan parameter inconsistencies between the intended treatment (on the treatment planning system) and the delivered treatment (on the treatment machine). Critical Digital Imaging and Communications in Medicine RT plan parameters were identified. Each event was scored for importance using the Failure Mode and Effects Analysis methodology. Potential error occurrence (frequency) was derived according to the collected event data, along with the potential event severity, and the probability of detection with and without the theoretical implementation of the QAPV plan comparison check. Failure Mode and Effects Analysis Risk Priority Numbers (RPNs) with and without QAPV were compared to quantify the potential benefit of clinical implementation of QAPV. Results: The implementation of QAPV could reduce the RPN values for 15 of 22 (71%) of evaluated parameters, with an overall average reduction in RPN of 68 (range, 0-216). For the 6 high-risk parameters (RPN > 200), the average reduction in RPN value was 163 (range, 108-216). The RPN value reduction for the intermediate-risk (200 > RPN > 100) parameters ranged from 0 to 140. With QAPV, the largest RPN value, for “Beam Meterset”, was reduced from 324 to 108. The maximum reduction in RPN value was for Beam Meterset (216, 66.7%), whereas the maximum percentage reduction was for Cumulative Meterset Weight (80, 88.9%). Conclusion: This analysis quantifies the value of the Integrating the Healthcare Enterprise–Radiation Oncology QAPV implementation in clinical workflow. We demonstrate that although QAPV does not provide a comprehensive solution for error prevention in RT, it can have a significant impact on a subset of the most severe clinically observed events.
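
    For orientation, the sketch below reproduces the Risk Priority Number arithmetic used in the record (RPN = severity x occurrence x detectability) for one failure mode, with and without an improved detection step; the individual severity, occurrence and detectability scores are assumptions chosen only so that the product matches the Beam Meterset figures quoted above (324 to 108), not scores taken from the study.

      # Minimal FMEA RPN sketch with assumed component scores.
      severity, occurrence = 9, 6
      detect_without_qapv, detect_with_qapv = 6, 2      # lower score = easier to detect

      rpn_without = severity * occurrence * detect_without_qapv
      rpn_with = severity * occurrence * detect_with_qapv
      print(f"RPN without QAPV: {rpn_without}")
      print(f"RPN with QAPV:    {rpn_with}  (reduction {rpn_without - rpn_with})")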

  1. Apparent diffusion coefficient (ADC) does not correlate with different serological parameters in myositis and myopathy.

    PubMed

    Meyer, Hans-Jonas; Ziemann, Oliver; Kornhuber, Malte; Emmer, Alexander; Quäschling, Ulf; Schob, Stefan; Surov, Alexey

    2018-06-01

    Background Magnetic resonance imaging (MRI) is widely used in several muscle disorders. Diffusion-weighted imaging (DWI) is an imaging modality which can reflect microstructural tissue composition. The apparent diffusion coefficient (ADC) is used to quantify the random motion of water molecules in tissue. Purpose To investigate ADC values in patients with myositis and non-inflammatory myopathy and to analyze possible associations between ADC and laboratory parameters in these patients. Material and Methods Overall, 17 patients with several myositis entities, eight patients with non-inflammatory myopathies, and nine patients without muscle disorder as a control group were included in the study (mean age = 55.3 ± 14.3 years). The diagnosis was confirmed by histopathology in every case. DWI was obtained in a 1.5-T scanner using two b-values: 0 and 1000 s/mm². In all patients, the blood sample was acquired within three days of the MRI. The following serological parameters were estimated: C-reactive protein, lactate dehydrogenase, alanine aminotransferase, aspartate aminotransferase, creatine kinase, and myoglobine. Results The estimated mean ADC value for the myositis group was 1.89 ± 0.37 × 10⁻³ mm²/s and for the non-inflammatory myopathy group 1.79 ± 0.33 × 10⁻³ mm²/s, respectively. The mean ADC value of the unaffected muscles (1.15 ± 0.37 × 10⁻³ mm²/s) was significantly lower (vs. myositis P = 0.0002 and vs. myopathy P = 0.0021). There were no significant correlations between serological parameters and ADC values. Conclusion Affected muscles showed statistically significantly higher ADC values than normal muscles. No linear correlations between ADC and serological parameters were identified.
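
    The ADC itself follows from a mono-exponential fit over the two b-values mentioned above; the sketch below shows that computation for one voxel, with invented signal intensities.

      # Minimal two-point ADC sketch with invented signal values.
      import numpy as np

      b0, b1 = 0.0, 1000.0            # b-values in s/mm^2
      s0, s1 = 980.0, 150.0           # signal at b=0 and b=1000 (arbitrary units)

      adc = np.log(s0 / s1) / (b1 - b0)        # mm^2/s
      print(f"ADC = {adc * 1e3:.2f} x 10^-3 mm^2/s")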

  2. On the precise determination of the Tsallis parameters in proton–proton collisions at LHC energies

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, T.; Cleymans, J.; Marques, L.; Mogliacci, S.; Paradza, M. W.

    2018-05-01

    A detailed analysis is presented of the precise values of the Tsallis parameters obtained in p–p collisions for identified particles, pions, kaons and protons at the LHC at three beam energies $\sqrt{s} = 0.9, 2.76$ and 7 TeV. Interpolated data at $\sqrt{s} = 5.02$ TeV have also been included. It is shown that the Tsallis formula provides reasonably good fits to the pT distributions in p–p collisions at the LHC using three parameters dN/dy, T and q. However, the parameters T and q depend on the particle species and are different for pions, kaons and protons. As a consequence there is no mT scaling and also no universality of the parameters for different particle species.
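
    As a schematic of the fitting step, the sketch below fits one common Tsallis-Pareto form with the three parameters named above (dN/dy, T, q) to a synthetic pion spectrum using non-linear least squares; the functional form is one convention among several and the "data" are generated, not LHC measurements.

      # Minimal Tsallis-Pareto fit sketch on synthetic data.
      import numpy as np
      from scipy.optimize import curve_fit

      m_pi = 0.140  # GeV, pion mass

      def tsallis(pt, dndy, T, q):
          mt = np.sqrt(pt ** 2 + m_pi ** 2)
          return dndy * (1.0 + (q - 1.0) * (mt - m_pi) / T) ** (-1.0 / (q - 1.0))

      pt = np.linspace(0.2, 5.0, 40)
      rng = np.random.default_rng(5)
      y = tsallis(pt, 2.0, 0.12, 1.10) * (1.0 + 0.03 * rng.normal(size=pt.size))

      popt, _ = curve_fit(tsallis, pt, y, p0=[1.0, 0.1, 1.05])
      print("fitted dN/dy, T [GeV], q:", np.round(popt, 4))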

  3. Histogram analysis parameters of apparent diffusion coefficient reflect tumor cellularity and proliferation activity in head and neck squamous cell carcinoma.

    PubMed

    Surov, Alexey; Meyer, Hans Jonas; Winter, Karsten; Richter, Cindy; Hoehn, Anna-Kathrin

    2018-05-04

    Our purpose was to analyze associations between apparent diffusion coefficient (ADC) histogram analysis parameters and histopathological features in head and neck squamous cell carcinoma (HNSCC). The study involved 32 patients with primary HNSCC. For every tumor, the following histogram analysis parameters were calculated: ADCmean, ADCmax, ADCmin, ADCmedian, ADCmode, P10, P25, P75, P90, kurtosis, skewness, and entropy. Furthermore, the proliferation index Ki-67, cell count, and total and average nucleic areas were estimated. Spearman's correlation coefficient (ρ) was used to analyze associations between the investigated parameters. In the overall sample, all ADC values showed moderate inverse correlations with Ki-67. All ADC values except ADCmax correlated inversely with tumor cellularity. Weak correlations were identified between total/average nucleic area and ADCmean, ADCmin, ADCmedian, and P25. In G1/2 tumors, only ADCmode correlated well with Ki-67. No statistically significant correlations between ADC parameters and cellularity were found. In G3 tumors, Ki-67 correlated with all ADC parameters except ADCmode. Cell count correlated well with all ADC parameters except ADCmax. Total nucleic area correlated inversely with ADCmean, ADCmin, ADCmedian, P25, and P90. ADC histogram parameters reflect proliferation potential and cellularity in HNSCC. The associations between histopathology and imaging depend on tumor grading.
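
    For reference, the sketch below computes the histogram-analysis parameters listed above (mean, min, max, median, mode, percentiles, skewness, kurtosis, entropy) for a synthetic set of voxel-wise ADC values; the distribution is invented and not patient data.

      # Minimal ADC histogram-parameter sketch on synthetic voxel values.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      adc = rng.gamma(shape=9.0, scale=0.13, size=5000)      # synthetic ADC values (x10^-3 mm^2/s)

      counts, edges = np.histogram(adc, bins=64)
      p = counts[counts > 0] / counts.sum()
      summary = {
          "mean": adc.mean(), "min": adc.min(), "max": adc.max(),
          "median": np.median(adc),
          "mode": 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1]),
          "P10": np.percentile(adc, 10), "P25": np.percentile(adc, 25),
          "P75": np.percentile(adc, 75), "P90": np.percentile(adc, 90),
          "skewness": stats.skew(adc), "kurtosis": stats.kurtosis(adc),
          "entropy": float(-(p * np.log2(p)).sum()),
      }
      for name, value in summary.items():
          print(f"{name:8s} {value:7.3f}")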

  4. Model of succession in degraded areas based on carabid beetles (Coleoptera, Carabidae).

    PubMed

    Schwerk, Axel; Szyszko, Jan

    2011-01-01

    Degraded areas constitute challenging tasks with respect to sustainable management of natural resources. Maintaining or even establishing certain successional stages seems to be particularly important. This paper presents a model of the succession in five different types of degraded areas in Poland based on changes in the carabid fauna. Mean Individual Biomass of Carabidae (MIB) was used as a numerical measure for the stage of succession. The run of succession differed clearly among the different types of degraded areas. Initial conditions (origin of soil and origin of vegetation) and landscape related aspects seem to be important with respect to these differences. As characteristic phases, a 'delay phase', an 'increase phase' and a 'stagnation phase' were identified. In general, the runs of succession could be described by four different parameters: (1) 'Initial degradation level', (2) 'delay', (3) 'increase rate' and (4) 'recovery level'. Applying the analytic solution of the logistic equation, characteristic values for the parameters were identified for each of the five area types. The model is of practical use, because it provides a possibility to compare the values of the parameters elaborated in different areas, to give hints for intervention and to provide prognoses about future succession in the areas. Furthermore, it is possible to transfer the model to other indicators of succession.

  5. Using Multistate Reweighting to Rapidly and Efficiently Explore Molecular Simulation Parameters Space for Nonbonded Interactions.

    PubMed

    Paliwal, Himanshu; Shirts, Michael R

    2013-11-12

    Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
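
    To show the reweighting idea in its barest form, the sketch below solves the MBAR self-consistent equations directly with NumPy for two harmonic-oscillator states and checks the result against the analytical free energy difference; this is a teaching-sized illustration only, and production work would use a dedicated package such as pymbar.

      # Minimal MBAR sketch for two harmonic oscillators (beta = 1).
      import numpy as np
      from scipy.special import logsumexp

      rng = np.random.default_rng(11)
      k_springs = np.array([1.0, 4.0])                   # state-defining spring constants
      N_k = np.array([2000, 2000])                       # samples drawn from each state

      x = np.concatenate([rng.normal(0.0, 1.0 / np.sqrt(k), n) for k, n in zip(k_springs, N_k)])
      u_kn = 0.5 * k_springs[:, None] * x[None, :] ** 2  # reduced energies of all samples at all states

      f = np.zeros(2)
      for _ in range(500):                               # self-consistent iteration of the MBAR equations
          log_denom = logsumexp(np.log(N_k)[:, None] + f[:, None] - u_kn, axis=0)
          f_new = -logsumexp(-u_kn - log_denom[None, :], axis=1)
          f_new -= f_new[0]
          if np.max(np.abs(f_new - f)) < 1e-10:
              f = f_new
              break
          f = f_new

      print("MBAR  Delta f =", round(f[1], 4))
      print("exact Delta f =", round(0.5 * np.log(k_springs[1] / k_springs[0]), 4))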

  6. Physical characteristics and resistance parameters of typical urban cyclists.

    PubMed

    Tengattini, Simone; Bigazzi, Alexander York

    2018-03-30

    This study investigates the rolling and drag resistance parameters and bicycle and cargo masses of typical urban cyclists. These factors are important for modelling of cyclist speed, power and energy expenditure, with applications including exercise performance, health and safety assessments and transportation network analysis. However, representative values for diverse urban travellers have not been established. Resistance parameters were measured utilizing a field coast-down test for 557 intercepted cyclists in Vancouver, Canada. Masses were also measured, along with other bicycle attributes such as tire pressure and size. The average (standard deviation) of the coefficient of rolling resistance, effective frontal area, bicycle plus cargo mass, and bicycle-only mass were 0.0077 (0.0036), 0.559 (0.170) m², 18.3 (4.1) kg, and 13.7 (3.3) kg, respectively. The range of measured values is wider and higher than suggested in the existing literature, which focusses on sport cyclists. Significant correlations are identified between resistance parameters and rider and bicycle attributes, indicating higher resistance parameters for less sport-oriented cyclists. The findings of this study are important for appropriately characterising the full range of urban cyclists, including commuters and casual riders.
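
    The coast-down idea can be sketched as follows: deceleration is regressed on speed squared, the intercept gives the rolling-resistance term and the slope the aerodynamic term. The trace below is simulated from assumed values roughly matching the reported averages, not field data from the study.

      # Minimal coast-down parameter-recovery sketch on simulated data.
      import numpy as np

      g, rho, m = 9.81, 1.20, 90.0                  # gravity, air density, rider + bicycle mass (kg)
      crr_true, cda_true = 0.0077, 0.56             # assumed "true" values for the simulation

      dt = 0.5
      t = np.arange(0.0, 30.0, dt)
      v = np.empty_like(t)
      v[0] = 8.0                                    # initial speed (m/s)
      for i in range(1, t.size):                    # simulate the coast-down
          a = -(g * crr_true + 0.5 * rho * cda_true * v[i - 1] ** 2 / m)
          v[i] = v[i - 1] + a * dt

      decel = -np.gradient(v, dt)
      slope, intercept = np.polyfit(v ** 2, decel, 1)
      print(f"recovered Crr ~ {intercept / g:.4f}, CdA ~ {2.0 * m * slope / rho:.3f} m^2")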

  7. Estimation of parameter uncertainty for an activated sludge model using Bayesian inference: a comparison with the frequentist method.

    PubMed

    Zonta, Zivko J; Flotats, Xavier; Magrí, Albert

    2014-08-01

    The procedure commonly used for the assessment of the parameters included in activated sludge models (ASMs) relies on the estimation of their optimal value within a confidence region (i.e. frequentist inference). Once optimal values are estimated, parameter uncertainty is computed through the covariance matrix. However, alternative approaches based on the consideration of the model parameters as probability distributions (i.e. Bayesian inference), may be of interest. The aim of this work is to apply (and compare) both Bayesian and frequentist inference methods when assessing uncertainty for an ASM-type model, which considers intracellular storage and biomass growth, simultaneously. Practical identifiability was addressed exclusively considering respirometric profiles based on the oxygen uptake rate and with the aid of probabilistic global sensitivity analysis. Parameter uncertainty was thus estimated according to both the Bayesian and frequentist inferential procedures. Results were compared in order to evidence the strengths and weaknesses of both approaches. Since it was demonstrated that Bayesian inference could be reduced to a frequentist approach under particular hypotheses, the former can be considered as a more generalist methodology. Hence, the use of Bayesian inference is encouraged for tackling inferential issues in ASM environments.

  8. Physico-chemical characterisation of material fractions in household waste: Overview of data in literature.

    PubMed

    Götze, Ramona; Boldrin, Alessio; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2016-03-01

    State-of-the-art environmental assessment of waste management systems relies on data for the physico-chemical composition of individual material fractions comprising the waste in question. To derive the necessary inventory data for different scopes and systems, literature data from different sources and backgrounds are consulted and combined. This study provides an overview of physico-chemical waste characterisation data for individual waste material fractions available in the literature and thereby aims to support the selection of data fitting a specific scope and the selection of uncertainty ranges related to the data selection from literature. Overall, 97 publications were reviewed with respect to employed characterisation method, regional origin of the waste, number of investigated parameters and material fractions, and other qualitative aspects. Descriptive statistical analysis of the reported physico-chemical waste composition data was performed to derive value ranges and data distributions for element concentrations (e.g. Cd content) and physical parameters (e.g. heating value). Based on 11,886 individual data entries, median values and percentiles for 47 parameters in 11 individual waste fractions are presented. Exceptional values and publications are identified and discussed. Detailed datasets are attached to this study, allowing further analysis and new applications of the data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. An Eigensystem Realization Algorithm (ERA) for modal parameter identification and model reduction

    NASA Technical Reports Server (NTRS)

    Juang, J. N.; Pappa, R. S.

    1985-01-01

    A method, called the Eigensystem Realization Algorithm (ERA), is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular value decomposition technique to derive the basic formulation of minimum order realization which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system modes and noise modes. For illustration of the algorithm, examples are shown using simulation data and experimental data for a rectangular grid structure.
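
    A compact sketch of the core ERA steps is given below: impulse-response (Markov-parameter) samples are stacked into Hankel matrices, an SVD yields a minimum-order realization, and the eigenvalues of the identified state matrix give frequency and damping. The single damped mode used as "data" is simulated, and the block sizes are arbitrary choices.

      # Minimal single-output ERA sketch on a simulated impulse response.
      import numpy as np

      dt, f_true, zeta_true = 0.01, 5.0, 0.02
      wn = 2 * np.pi * f_true
      wd = wn * np.sqrt(1 - zeta_true ** 2)
      k = np.arange(200)
      y = np.exp(-zeta_true * wn * k * dt) * np.sin(wd * k * dt)   # impulse-response samples

      r = 40                                                       # Hankel block size
      H0 = np.array([y[i + 1:i + 1 + r] for i in range(r)])        # Hankel matrix of y(1), y(2), ...
      H1 = np.array([y[i + 2:i + 2 + r] for i in range(r)])        # time-shifted Hankel matrix

      U, s, Vt = np.linalg.svd(H0)
      n = 2                                                        # retained model order (one mode)
      S_half_inv = np.diag(1.0 / np.sqrt(s[:n]))
      A = S_half_inv @ U[:, :n].T @ H1 @ Vt[:n].T @ S_half_inv     # identified discrete state matrix

      poles = np.log(np.linalg.eigvals(A)) / dt                    # continuous-time poles
      freq = np.abs(poles[0].imag) / (2 * np.pi)
      damping = -poles[0].real / np.abs(poles[0])
      print(f"identified frequency ~ {freq:.2f} Hz, damping ratio ~ {damping:.3f}")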

  10. Hypothesis-driven classification of materials using nuclear magnetic resonance relaxometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Espy, Michelle A.; Matlashov, Andrei N.; Schultz, Larry J.

    Technologies related to identification of a substance in an optimized manner are provided. A reference group of known materials is identified. Each known material has known values for several classification parameters. The classification parameters comprise at least one of T1, T2, T1ρ, a relative nuclear susceptibility (RNS) of the substance, and an X-ray linear attenuation coefficient (LAC) of the substance. A measurement sequence is optimized based on at least one of a measurement cost of each of the classification parameters and an initial probability of each of the known materials in the reference group.

  11. System and method for motor parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  12. VizieR Online Data Catalog: Effects of preionization in radiative shocks (Sutherland+, 2017)

    NASA Astrophysics Data System (ADS)

    Sutherland, R. S.; Dopita, M. A.

    2017-06-01

    In this paper we treat the preionization problem in shocks over the velocity range 10

  13. Issues in the inverse modeling of a soil infiltration process

    NASA Astrophysics Data System (ADS)

    Kuraz, Michal; Jacka, Lukas; Leps, Matej

    2017-04-01

    This contribution addresses issues in evaluating soil hydraulic parameters (SHPs) with an inverse model based on the Richards equation. The inverse model represented a single-ring infiltration experiment on a mountainous podzolic soil profile and searched for the SHPs of the top soil layer. Since the thickness of the top soil layer is often much smaller than the depth required to embed a single-ring or Guelph permeameter device, the SHPs of the top soil layer are very difficult to measure directly. They were therefore identified here by inverse modeling of the single-ring infiltration process, where the initial unsteady part of the experiment in particular is expected to provide very useful data for evaluating the retention curve parameters (excluding the residual water content) and the saturated hydraulic conductivity. The main issue addressed in this contribution is the uniqueness of the Richards equation inverse model. We tried to answer whether it is possible to characterize the unsteady infiltration experiment with a unique set of SHP values, and whether all SHPs are affected by non-uniqueness. This is an important issue, since it determines whether the popular gradient methods are appropriate here. The issues of assigning the initial and boundary condition setup, the influence of spatial and temporal discretization on the values of the identified SHPs, and convergence issues with the Richards equation nonlinear operator during the automatic calibration procedure are also covered.

  14. Scale and geometry effects on heat-recirculating combustors

    NASA Astrophysics Data System (ADS)

    Chen, Chien-Hua; Ronney, Paul D.

    2013-10-01

    A simple analysis of linear and spiral counterflow heat-recirculating combustors was conducted to identify the dimensionless parameters expected to quantify the performance of such devices. A three-dimensional (3D) numerical model of spiral counterflow 'Swiss roll' combustors was then used to confirm and extend the applicability of the identified parameters. It was found that without property adjustment to maintain constant values of these parameters, at low Reynolds number (Re) smaller-scale combustors actually showed better performance (in terms of having lower lean extinction limits at the same Re) due to lower heat loss and internal wall-to-wall radiation effects, whereas at high Re, larger-scale combustors showed better performance due to longer residence time relative to chemical reaction time. By adjustment of property values, it was confirmed that four dimensionless parameters were sufficient to characterise combustor performance at all scales: Re, a heat loss coefficient (α), a Damköhler number (Da) and a radiative transfer number (R). The effect of diffusive transport (i.e. Lewis number) was found to be significant only at low Re. Substantial differences were found between the performance of linear and spiral combustors; these were explained in terms of the effects of the area exposed to heat loss to ambient and the sometimes detrimental effect of increasing heat transfer to adjacent outlet turns of the spiral exchanger. These results provide insight into the optimal design of small-scale combustors and choice of operation conditions.

  15. Assessing the Value of Biosimilars: A Review of the Role of Budget Impact Analysis.

    PubMed

    Simoens, Steven; Jacobs, Ira; Popovian, Robert; Isakov, Leah; Shane, Lesley G

    2017-10-01

    Biosimilar drugs are highly similar to an originator (reference) biologic, with no clinically meaningful differences in terms of safety or efficacy. As biosimilars offer the potential for lower acquisition costs versus the originator biologic, evaluating the economic implications of the introduction of biosimilars is of interest. Budget impact analysis (BIA) is a commonly used methodology. This review of published BIAs of biosimilar fusion proteins and/or monoclonal antibodies identified 12 unique publications (three full papers and nine congress posters). When evaluated alongside professional guidance on conducting BIA, the majority of BIAs identified were generally in line with international recommendations. However, a lack of peer-reviewed journal articles and considerable shortcomings in the publications were identified. Deficiencies included a limited range of cost parameters, a reliance on assumptions for parameters such as uptake and drug pricing, a lack of expert validation, and a limited range of sensitivity analyses that were based on arbitrary ranges. The rationale for the methods employed, limitations of the BIA approach, and instructions for local adaptation often were inadequately discussed. To understand fully the potential economic impact and value of biosimilars, the impact of biosimilar supply, manufacturer-provided supporting services, and price competition should be included in BIAs. Alternative approaches, such as cost minimization, which requires evidence demonstrating similarity to the originator biologic, and those that integrate a range of economic assessment methods, are needed to assess the value of biosimilars.

  16. Real time identification of the internal combustion engine combustion parameters based on the vibration velocity signal

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuliang; Cheng, Yong; Wang, Limei; Ji, Shaobo

    2017-03-01

    Accurate combustion parameters are the foundations of effective closed-loop control of the engine combustion process. Some combustion parameters, including the start of combustion, the location of peak pressure, the maximum pressure rise rate and its location, can be identified from the engine block vibration signals. These signals often include non-combustion related contributions, which limit the prompt acquisition of the combustion parameters computationally. The main component in these non-combustion related contributions is considered to be caused by the reciprocating inertia force excitation (RIFE) of the engine crank train. A mathematical model is established to describe the response of the RIFE. The parameters of the model are recognized with a pattern recognition algorithm, and the response of the RIFE is predicted and then the related contributions are removed from the measured vibration velocity signals. The combustion parameters are extracted from the feature points of the renovated vibration velocity signals. There are angle deviations between the feature points in the vibration velocity signals and those in the cylinder pressure signals. For the start of combustion, a system bias is adopted to correct the deviation and the error bound of the predicted parameters is within 1.1°. To predict the location of the maximum pressure rise rate and the location of the peak pressure, algorithms based on the proportion of high frequency components in the vibration velocity signals are introduced. Test results show that the two parameters can be predicted within 0.7° and 0.8° error bounds, respectively. The increase from the knee point preceding the peak value point to the peak value in the vibration velocity signals is used to predict the value of the maximum pressure rise rate. Finally, a monitoring framework is proposed to realize combustion parameter prediction. Satisfactory prediction of combustion parameters in successive cycles is achieved, which validates the proposed methods.

  17. Characterization of human passive muscles for impact loads using genetic algorithm and inverse finite element methods.

    PubMed

    Chawla, A; Mukherjee, S; Karthikeyan, B

    2009-02-01

    The objective of this study is to identify the dynamic material properties of human passive muscle tissues for the strain rates relevant to automobile crashes. A novel methodology involving a genetic algorithm (GA) and the finite element method is implemented to estimate the material parameters by inverse mapping the impact test data. Isolated unconfined impact tests for average strain rates ranging from 136 s⁻¹ to 262 s⁻¹ are performed on muscle tissues. Passive muscle tissues are modelled as isotropic, linear and viscoelastic material using the three-element Zener model available in PAMCRASH(TM) explicit finite element software. In the GA based identification process, fitness values are calculated by comparing the estimated finite element forces with the measured experimental forces. Linear viscoelastic material parameters (bulk modulus, short term shear modulus and long term shear modulus) are thus identified at strain rates 136 s⁻¹, 183 s⁻¹ and 262 s⁻¹ for modelling muscles. The optimal parameters extracted from this study are comparable with parameters reported in the literature. Bulk modulus and short term shear modulus are found to be more influential in predicting the stress-strain response than long term shear modulus for the considered strain rates. Variations within the set of parameters identified at different strain rates indicate the need for a new or improved material model capable of capturing the strain rate dependency of passive muscle response with a single set of material parameters over a wide range of strain rates.
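
    A minimal sketch of the kind of GA-based inverse identification described here, with a closed-form standard-linear-solid (Zener-type) force response standing in for the PAMCRASH finite element model. The "measured" data, parameter bounds, scaling and GA settings below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "measured" force history from a standard-linear-solid (Zener-type) relaxation
# model; the true parameters, loading window and geometry scale are stand-ins for
# the finite element simulation used in the paper.
t = np.linspace(0.0, 0.02, 200)                      # 20 ms impact window [s]
true = np.array([1.0e6, 2.0e5, 4.0e-3])              # G0 [Pa], Ginf [Pa], tau [s]

def model_force(p):
    g0, ginf, tau = p
    return (ginf + (g0 - ginf) * np.exp(-t / tau)) * 1e-4   # arbitrary geometry scale

measured = model_force(true) + rng.normal(0.0, 2.0, t.size)  # add sensor noise

def fitness(p):
    # Fitness = negative sum-squared error between model and "experimental" forces.
    return -np.sum((model_force(p) - measured) ** 2)

# Simple real-coded GA: tournament selection, arithmetic crossover, Gaussian mutation.
lo = np.array([1e5, 1e4, 1e-3])
hi = np.array([5e6, 1e6, 1e-2])
pop = rng.uniform(lo, hi, size=(60, 3))
for generation in range(150):
    fit = np.array([fitness(p) for p in pop])
    children = [pop[np.argmax(fit)].copy()]          # elitism: keep the best individual
    while len(children) < len(pop):
        i, j = rng.choice(len(pop), 2, replace=False)
        pa = pop[i] if fit[i] > fit[j] else pop[j]   # tournament parent 1
        i, j = rng.choice(len(pop), 2, replace=False)
        pb = pop[i] if fit[i] > fit[j] else pop[j]   # tournament parent 2
        w = rng.random()
        child = w * pa + (1.0 - w) * pb              # arithmetic crossover
        child += rng.normal(0.0, 0.02, 3) * (hi - lo)  # Gaussian mutation
        children.append(np.clip(child, lo, hi))
    pop = np.array(children)

best = pop[np.argmax([fitness(p) for p in pop])]
print("identified [G0, Ginf, tau]:", best)
```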

  18. Method for computing self-consistent solution in a gun code

    DOEpatents

    Nelson, Eric M

    2014-09-23

    Complex gun code computations can be made to converge more quickly based on a selection of one or more relaxation parameters. An eigenvalue analysis is applied to error residuals to identify two error eigenvalues that are associated with respective error residuals. Relaxation values can be selected based on these eigenvalues so that error residuals associated with each can be alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.

  19. Quantifying Parameter Sensitivity, Interaction and Transferability in Hydrologically Enhanced Versions of Noah-LSM over Transition Zones

    NASA Technical Reports Server (NTRS)

    Rosero, Enrique; Yang, Zong-Liang; Wagener, Thorsten; Gulden, Lindsey E.; Yatheendradas, Soni; Niu, Guo-Yue

    2009-01-01

    We use sensitivity analysis to identify the parameters that are most responsible for shaping land surface model (LSM) simulations and to understand the complex interactions in three versions of the Noah LSM: the standard version (STD), a version enhanced with a simple groundwater module (GW), and a version augmented by a dynamic phenology module (DV). We use warm season, high-frequency, near-surface states and turbulent fluxes collected over nine sites in the US Southern Great Plains. We quantify changes in the pattern of sensitive parameters, the amount and nature of the interaction between parameters, and the covariance structure of the distribution of behavioral parameter sets. Using Sobol's total and first-order sensitivity indexes, we show that very few parameters directly control the variance of the model output. Significant parameter interaction occurs, so that not only do the optimal parameter values differ between models, but the relationships between parameters also change. GW decreases parameter interaction and appears to improve model realism, especially at wetter sites. DV increases parameter interaction and decreases identifiability, implying it is overparameterized and/or underconstrained. A case study at a wet site shows GW has two functional modes: one that mimics STD and a second in which GW improves model function by decoupling direct evaporation and baseflow. Unsupervised classification of the posterior distributions of behavioral parameter sets cannot group similar sites based solely on soil or vegetation type, helping to explain why transferability between sites and models is not straightforward. This evidence suggests a priori assignment of parameters should also consider climatic differences.
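
    A minimal sketch of Sobol' first-order and total sensitivity indices computed with Saltelli-type sampling and Jansen estimators, applied to a hypothetical three-parameter stand-in for an LSM output. The test function, parameter ranges and sample sizes are illustrative assumptions, not the Noah-LSM setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical stand-in for a model output: nonlinear in x0 and x1,
    # with an x0-x2 interaction term that inflates total indices.
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + 2.0 * x[:, 0] * x[:, 2]

# Saltelli-style sampling: two independent matrices A and B, plus hybrids AB_i.
n, k = 20000, 3
A = rng.uniform(-np.pi, np.pi, (n, k))
B = rng.uniform(-np.pi, np.pi, (n, k))
yA, yB = model(A), model(B)
var = np.var(np.concatenate([yA, yB]))

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # column i taken from B, the rest from A
    yABi = model(ABi)
    # Jansen estimators for first-order (Si) and total (STi) indices.
    Si = (var - 0.5 * np.mean((yB - yABi) ** 2)) / var
    STi = 0.5 * np.mean((yA - yABi) ** 2) / var
    print(f"parameter {i}: S{i} = {Si:.2f}, ST{i} = {STi:.2f}")
```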

  20. Identification Code of Interstellar Cloud within IRAF

    NASA Astrophysics Data System (ADS)

    Lee, Youngung; Jung, Jae Hoon; Kim, Hyun-Goo

    1997-12-01

    We present a code which identifies individual clouds in crowded regions using the IMFORT interface within the Image Reduction and Analysis Facility (IRAF). We define a cloud as an object composed of all pixels in longitude, latitude, and velocity that are simply connected and that lie above some threshold temperature. The code searches all pixels of the data cube in an efficient way to isolate individual clouds. Along with identifying clouds, it is designed to estimate their mean longitudes, latitudes, and velocities. In addition, a function for generating individual images (or cube data) of the identified clouds is included. We also present individual clouds identified in a 12CO survey data cube of the Galactic Anticenter Region (Lee et al. 1997) as a test example. We used a threshold temperature of 5 times the rms noise level of the data. With a higher threshold temperature, we isolated subclouds of a huge cloud identified originally. As the most important parameter for identifying clouds is the threshold value, its effect on the size and velocity dispersion is discussed rigorously.
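
    A minimal sketch of the cloud-identification idea (simply connected pixels above a threshold in a longitude-latitude-velocity cube, followed by intensity-weighted mean positions), using scipy's connected-component labelling on a synthetic cube. The injected clouds, cube size and noise level are hypothetical, and the original code runs under IMFORT/IRAF rather than Python.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Hypothetical position-position-velocity cube T(l, b, v): Gaussian noise with
# two synthetic "clouds" injected (stand-in for the 12CO survey data).
cube = rng.normal(0.0, 0.1, (60, 60, 80))             # rms noise ~0.1 K
l, b, v = np.indices(cube.shape)
cube += 2.0 * np.exp(-(((l - 15) ** 2 + (b - 20) ** 2) / 20 + (v - 30) ** 2 / 40))
cube += 1.5 * np.exp(-(((l - 45) ** 2 + (b - 40) ** 2) / 15 + (v - 55) ** 2 / 30))

# A cloud = set of simply connected pixels above a threshold (here 5 x rms).
threshold = 5 * 0.1
labels, n_clouds = ndimage.label(cube > threshold)     # 3-D connected components

print(f"{n_clouds} clouds identified")
for cid in range(1, n_clouds + 1):
    sel = labels == cid
    w = cube[sel]                                      # temperature weights
    print(f"cloud {cid}: npix={sel.sum()}, "
          f"<l>={np.average(l[sel], weights=w):.1f}, "
          f"<b>={np.average(b[sel], weights=w):.1f}, "
          f"<v>={np.average(v[sel], weights=w):.1f}")
```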

  1. An adaptive control scheme for a flexible manipulator

    NASA Technical Reports Server (NTRS)

    Yang, T. C.; Yang, J. C. S.; Kudva, P.

    1987-01-01

    The problem of controlling a single link flexible manipulator is considered. A self-tuning adaptive control scheme is proposed which consists of a least squares on-line parameter identification of an equivalent linear model followed by a tuning of the gains of a pole placement controller using the parameter estimates. Since the initial parameter values for this model are assumed unknown, the use of arbitrarily chosen initial parameter estimates in the adaptive controller would result in undesirable transient effects. Hence, the initial stage control is carried out with a PID controller. Once the identified parameters have converged, control is transferred to the adaptive controller. Naturally, the relevant issues in this scheme are tests for parameter convergence and minimization of overshoots during control switch-over. To demonstrate the effectiveness of the proposed scheme, simulation results are presented with an analytical nonlinear dynamic model of a single link flexible manipulator.
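
    A minimal sketch of on-line least squares parameter identification of an equivalent linear (ARX) model, the kind of estimator that would feed the pole-placement gain tuning described above. The second-order toy plant, noise level and forgetting factor are assumptions, not the manipulator model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Recursive least squares (RLS) identification of a second-order ARX model
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + noise,
# a hypothetical "equivalent linear model" of the flexible-link dynamics.
true_theta = np.array([1.5, -0.7, 0.10, 0.05])         # a1, a2, b1, b2

theta = np.zeros(4)                                    # parameter estimates
P = np.eye(4) * 1000.0                                 # covariance (large = vague prior)
lam = 0.99                                             # forgetting factor

y = np.zeros(3)
u_hist = np.zeros(3)
for k in range(500):
    u = rng.normal()                                   # persistently exciting input
    phi = np.array([y[-1], y[-2], u_hist[-1], u_hist[-2]])
    y_new = true_theta @ phi + 0.01 * rng.normal()     # "measured" plant output

    # Standard RLS update with exponential forgetting.
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y_new - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam

    y = np.append(y, y_new)[-3:]
    u_hist = np.append(u_hist, u)[-3:]

print("estimated parameters:", np.round(theta, 3))
print("true parameters:     ", true_theta)
```

    Once estimates such as these have converged, the identified a and b coefficients would be used to place the closed-loop poles, which is the switch-over point discussed in the abstract.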

  2. Strategies for Efficient Computation of the Expected Value of Partial Perfect Information

    PubMed Central

    Madan, Jason; Ades, Anthony E.; Price, Malcolm; Maitland, Kathryn; Jemutai, Julie; Revill, Paul; Welton, Nicky J.

    2014-01-01

    Expected value of information methods evaluate the potential health benefits that can be obtained from conducting new research to reduce uncertainty in the parameters of a cost-effectiveness analysis model, hence reducing decision uncertainty. Expected value of partial perfect information (EVPPI) provides an upper limit to the health gains that can be obtained from conducting a new study on a subset of parameters in the cost-effectiveness analysis and can therefore be used as a sensitivity analysis to identify parameters that most contribute to decision uncertainty and to help guide decisions around which types of study are of most value to prioritize for funding. A common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples are obtained, and incorrect results can be obtained if correlations between parameters are not dealt with appropriately. In this article, we set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterization of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations. For each method, we set out the generalized functional form that net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model in which approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria. PMID:24449434
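
    A minimal sketch of the nested Monte Carlo EVPPI estimator that the article describes as the common general approach (and seeks to avoid), applied to a toy two-treatment net-benefit model with independent parameters. The distributions, willingness-to-pay threshold and sample sizes are hypothetical, not the malaria example from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

wtp = 20000.0                                          # willingness to pay per QALY (assumed)

def net_benefit(eff_diff, cost_diff):
    # Incremental net benefit of treatment B versus treatment A.
    return wtp * eff_diff - cost_diff

def sample_phi(n):                                     # parameter of interest
    return rng.normal(0.05, 0.02, n)                   # incremental effectiveness

def sample_psi(n):                                     # remaining parameters
    return rng.normal(800.0, 300.0, n)                 # incremental cost

# Expected value of the baseline decision under current uncertainty
# (net benefit of A is 0, so the decision is max over {0, E[INB]}).
phi, psi = sample_phi(100000), sample_psi(100000)
ev_current = max(0.0, np.mean(net_benefit(phi, psi)))

# Nested Monte Carlo EVPPI for phi: outer loop over phi, inner loop over psi.
outer, inner = 2000, 2000
ev_partial = 0.0
for ph in sample_phi(outer):
    inb = net_benefit(ph, sample_psi(inner)).mean()    # E[INB | phi]
    ev_partial += max(0.0, inb)                        # best decision given phi
ev_partial /= outer

print("EVPPI for incremental effectiveness:", ev_partial - ev_current)
```

    The inner-loop size is exactly the source of bias and computational cost that the reparameterization, Taylor series and spline methods in the article are designed to avoid.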

  3. Can Fetuin-A Be a Marker for Insulin Resistance and Poor Glycemic Control in Children with Type 1 Diabetes Mellitus?

    PubMed

    Şiraz, Ülkü Gül; Doğan, Murat; Hatipoğlu, Nihal; Muhtaroğlu, Sabahattin; Kurtoğlu, Selim

    2017-12-15

    Metabolic impairment in type 1 diabetes mellitus (T1DM) with poor glycemic control causes insulin resistance, non-alcoholic fatty liver disease (NAFLD), atherosclerosis, and increased carotid intima-media thickness (CIMT). Fetuin-A has a protective effect in cardiovascular disorders and is increased in hepatosteatosis. We aimed to investigate the reliability of fetuin-A levels in early detection of diabetic complications in children with T1DM and to identify a cut-off value that may indicate poor metabolic control. The study included 80 patients who had T1DM for at least 5 years and who had no chronic complications or an auto-immune disorder. Blood samples were drawn to measure hemoglobin A1c (HbA1c), biochemical parameters, and fetuin-A levels. Anthropometric parameters were also measured. Percent body fat was calculated. Hepatosteatosis and CIMT were assessed by sonography. Mean age of the patients was 13.5 years. Grade 1 hepatosteatosis was detected in 10%. Patients were stratified into 2 groups based on the presence of NAFLD. Fetuin-A level was increased in patients with NAFLD. We identified a fetuin-A cut-off value (514.28 ng/mL; sensitivity: 47.34%; specificity: 96.72%) that may predict NAFLD. HbA1c and total cholesterol levels were found to be higher in patients with fetuin-A levels above the cut-off value. Fetuin-A is a reliable parameter in the prediction of complications and poor glycemic control in patients with T1DM.

  4. Quality Assurance with Plan Veto: reincarnation of a record and verify system and its potential value.

    PubMed

    Noel, Camille E; Gutti, Veerarajesh; Bosch, Walter; Mutic, Sasa; Ford, Eric; Terezakis, Stephanie; Santanam, Lakshmi

    2014-04-01

    To quantify the potential impact of the Integrating the Healthcare Enterprise-Radiation Oncology Quality Assurance with Plan Veto (QAPV) on patient safety of external beam radiation therapy (RT) operations. An institutional database of events (errors and near-misses) was used to evaluate the ability of QAPV to prevent clinically observed events. We analyzed reported events that were related to Digital Imaging and Communications in Medicine RT plan parameter inconsistencies between the intended treatment (on the treatment planning system) and the delivered treatment (on the treatment machine). Critical Digital Imaging and Communications in Medicine RT plan parameters were identified. Each event was scored for importance using the Failure Mode and Effects Analysis methodology. Potential error occurrence (frequency) was derived according to the collected event data, along with the potential event severity, and the probability of detection with and without the theoretical implementation of the QAPV plan comparison check. Failure Mode and Effects Analysis Risk Priority Numbers (RPNs) with and without QAPV were compared to quantify the potential benefit of clinical implementation of QAPV. The implementation of QAPV could reduce the RPN values for 15 of 22 (71%) evaluated parameters, with an overall average reduction in RPN of 68 (range, 0-216). For the 6 high-risk parameters (>200), the average reduction in RPN value was 163 (range, 108-216). The RPN value reduction for the intermediate-risk (200 > RPN > 100) parameters ranged from 0 to 140. With QAPV, the largest RPN value for "Beam Meterset" was reduced from 324 to 108. The maximum reduction in RPN value was for Beam Meterset (216, 66.7%), whereas the maximum percentage reduction was for Cumulative Meterset Weight (80, 88.9%). This analysis quantifies the value of the Integrating the Healthcare Enterprise-Radiation Oncology QAPV implementation in clinical workflow. We demonstrate that although QAPV does not provide a comprehensive solution for error prevention in RT, it can have a significant impact on a subset of the most severe clinically observed events. Copyright © 2014 Elsevier Inc. All rights reserved.
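
    A minimal illustration of the FMEA scoring used in the study: the Risk Priority Number is the product of severity, occurrence and detectability scores, and a control such as QAPV lowers the detectability score. The scores below are hypothetical, not taken from the institutional event data.

```python
# RPN = severity x occurrence x detectability, re-scored after a control
# (here an automated plan-veto check) improves detectability.
def rpn(severity, occurrence, detectability):
    return severity * occurrence * detectability

# Hypothetical failure mode: a plan/machine mismatch in a monitor-unit setting.
before = rpn(severity=9, occurrence=6, detectability=6)  # no automated check
after = rpn(severity=9, occurrence=6, detectability=2)   # mismatch vetoed before treatment
print(f"RPN without plan veto: {before}, with plan veto: {after}, "
      f"reduction: {before - after}")
```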

  5. Bayesian Statistical Inference in Ion-Channel Models with Exact Missed Event Correction.

    PubMed

    Epstein, Michael; Calderhead, Ben; Girolami, Mark A; Sivilotti, Lucia G

    2016-07-26

    The stochastic behavior of single ion channels is most often described as an aggregated continuous-time Markov process with discrete states. For ligand-gated channels each state can represent a different conformation of the channel protein or a different number of bound ligands. Single-channel recordings show only whether the channel is open or shut: states of equal conductance are aggregated, so transitions between them have to be inferred indirectly. The requirement to filter noise from the raw signal further complicates the modeling process, as it limits the time resolution of the data. The consequence of the reduced bandwidth is that openings or shuttings that are shorter than the resolution cannot be observed; these are known as missed events. Postulated models fitted using filtered data must therefore explicitly account for missed events to avoid bias in the estimation of rate parameters and therefore assess parameter identifiability accurately. In this article, we present the first, to our knowledge, Bayesian modeling of ion-channels with exact missed events correction. Bayesian analysis represents uncertain knowledge of the true value of model parameters by considering these parameters as random variables. This allows us to gain a full appreciation of parameter identifiability and uncertainty when estimating values for model parameters. However, Bayesian inference is particularly challenging in this context as the correction for missed events increases the computational complexity of the model likelihood. Nonetheless, we successfully implemented a two-step Markov chain Monte Carlo method that we called "BICME", which performs Bayesian inference in models of realistic complexity. The method is demonstrated on synthetic and real single-channel data from muscle nicotinic acetylcholine channels. We show that parameter uncertainty can be characterized more accurately than with maximum-likelihood methods. Our code for performing inference in these ion channel models is publicly available. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  6. Volume effects of late term normal tissue toxicity in prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Bonta, Dacian Viorel

    Modeling of volume effects for treatment toxicity is paramount for optimization of radiation therapy. This thesis proposes a new model for calculating volume effects in gastro-intestinal and genito-urinary normal tissue complication probability (NTCP) following radiation therapy for prostate carcinoma. The radiobiological and the pathological basis for this model and its relationship to other models are detailed. A review of the radiobiological experiments and published clinical data identified salient features and specific properties a biologically adequate model has to conform to. The new model was fit to a set of actual clinical data. In order to verify the goodness of fit, two established NTCP models and a non-NTCP measure for complication risk were fitted to the same clinical data. The method of fit for the model parameters was maximum likelihood estimation. Within the framework of the maximum likelihood approach I estimated the parameter uncertainties for each complication prediction model. The quality-of-fit was determined using the Aikaike Information Criterion. Based on the model that provided the best fit, I identified the volume effects for both types of toxicities. Computer-based bootstrap resampling of the original dataset was used to estimate the bias and variance for the fitted parameter values. Computer simulation was also used to estimate the population size that generates a specific uncertainty level (3%) in the value of predicted complication probability. The same method was used to estimate the size of the patient population needed for accurate choice of the model underlying the NTCP. The results indicate that, depending on the number of parameters of a specific NTCP model, 100 (for two parameter models) and 500 patients (for three parameter models) are needed for accurate parameter fit. Correlation of complication occurrence in patients was also investigated. The results suggest that complication outcomes are correlated in a patient, although the correlation coefficient is rather small.

  7. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1991-01-01

    A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.

  8. Effect of the grain protein content locus Gpc-B1 on bread and pasta quality

    USDA-ARS?s Scientific Manuscript database

    Grain protein concentration (GPC) affects wheat nutritional value and several critical parameters for bread and pasta quality. A gene designated Gpc-B1, which is not functional in common and durum wheat cultivars, was recently identified in Triticum turgidum ssp. dicoccoides. The functional allele o...

  9. Actions, Objectives & Concerns. Human Parameters for Architectural Design.

    ERIC Educational Resources Information Center

    Lasswell, Thomas E.; And Others

    An experiment conducted at California State College, Los Angeles, to test the value of social-psychological research in defining building needs is described. The problems of how to identify and synthesize the disparate objectives, concerns and actions of the groups who use or otherwise have an interest in large and complex buildings is discussed.…

  10. Bayesian Multi-Trait Analysis Reveals a Useful Tool to Increase Oil Concentration and to Decrease Toxicity in Jatropha curcas L.

    PubMed Central

    Silva Junqueira, Vinícius; de Azevedo Peixoto, Leonardo; Galvêas Laviola, Bruno; Lopes Bhering, Leonardo; Mendonça, Simone; Agostini Costa, Tania da Silveira; Antoniassi, Rosemar

    2016-01-01

    The biggest challenge for jatropha breeding is to identify superior genotypes that present high seed yield and seed oil content with reduced toxicity levels. Therefore, the objective of this study was to estimate genetic parameters for three important traits (weight of 100 seeds, seed oil content, and phorbol ester concentration), and to select superior genotypes to be used as progenitors in jatropha breeding. Additionally, the genotypic values and the genetic parameters estimated under the Bayesian multi-trait approach were used to evaluate different selection index scenarios for 179 half-sib families. Three different scenarios and economic weights were considered. It was possible to simultaneously reduce toxicity and increase seed oil content and weight of 100 seeds by using index selection based on genotypic values estimated by the Bayesian multi-trait approach. Indeed, we identified two families that present these characteristics by evaluating genetic diversity using the Ward clustering method, which suggested nine homogenous clusters. Future research should integrate Bayesian multi-trait methods with the realized relationship matrix, aiming to build accurate selection index models. PMID:27281340

  11. About influence of input rate random part of nonstationary queue system on statistical estimates of its macroscopic indicators

    NASA Astrophysics Data System (ADS)

    Korelin, Ivan A.; Porshnev, Sergey V.

    2018-05-01

    A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ = λdet(t) + λrnd(t), where λdet(t) is a deterministic function of time and λrnd(t) is a random function. The parameters of λdet(t) and λrnd(t) were identified from statistical information on visitor flows collected at various Russian football stadiums. Statistical modeling of the NQS is carried out and average dependences on time are obtained for the length of the queue of requests waiting for service, the average waiting time for service, and the number of visitors who have entered the stadium. It is shown that these dependences can be characterized by the following parameters: the number of visitors who have entered by the start of the match; the time required to serve all incoming visitors; the maximum value; and the argument at which the studied dependence reaches its maximum. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
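
    A minimal sketch of simulating a non-stationary queue whose arrival rate is a deterministic time profile plus a random component, reporting throughput and peak queue length. The rate shape, number of gates and service time are hypothetical, not the stadium parameters identified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Time-stepped simulation of a queue with input rate lambda(t) = lambda_det(t) + lambda_rnd(t).
dt = 1.0                                               # time step [s]
T = 2 * 3600                                           # 2 hours before kick-off
t = np.arange(0, T, dt)
lam_det = 8.0 * np.exp(-0.5 * ((t - 5400.0) / 1200.0) ** 2)   # deterministic profile [1/s]
lam_rnd = 0.2 * rng.standard_normal(t.size)                   # random component [1/s]
lam = np.clip(lam_det + lam_rnd, 0.0, None)

servers, service_time = 20, 4.0                        # entrance gates, seconds per visitor
queue, served, q_hist = 0.0, 0.0, []
for rate in lam:
    queue += rng.poisson(rate * dt)                    # arrivals during this step
    capacity = servers * dt / service_time             # maximum services per step
    done = min(queue, capacity)
    queue -= done
    served += done
    q_hist.append(queue)

print(f"visitors served: {served:.0f}, peak queue length: {max(q_hist):.0f}")
```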

  12. Structural Analysis of Cubane-Type Iron Clusters

    PubMed Central

    Tan, Lay Ling; Holm, R. H.; Lee, Sonny C.

    2013-01-01

    The generalized cluster type [M4(μ3-Q)4Ln]x contains the cubane-type [M4Q4]z core unit that can approach, but typically deviates from, perfect Td symmetry. The geometric properties of this structure have been analyzed with reference to Td symmetry by a new protocol. Using coordinates of M and Q atoms, expressions have been derived for interatomic separations, bond angles, and volumes of tetrahedral core units (M4, Q4) and the total [M4Q4] core (as a tetracapped M4 tetrahedron). Values for structural parameters have been calculated from observed average values for a given cluster type. Comparison of calculated and observed values measures the extent of deviation of a given parameter from that required in an exact tetrahedral structure. The procedure has been applied to the structures of over 130 clusters containing [Fe4Q4] (Q = S2−, Se2−, Te2−, [NPR3]−, [NR]2−) units, of which synthetic and biological sulfide-bridged clusters constitute the largest subset. General structural features and trends in structural parameters are identified and summarized. An extensive database of structural properties (distances, angles, volumes) has been compiled in Supporting Information. PMID:24072952

  13. Structural Analysis of Cubane-Type Iron Clusters.

    PubMed

    Tan, Lay Ling; Holm, R H; Lee, Sonny C

    2013-07-13

    The generalized cluster type [M4(μ3-Q)4Ln]x contains the cubane-type [M4Q4]z core unit that can approach, but typically deviates from, perfect Td symmetry. The geometric properties of this structure have been analyzed with reference to Td symmetry by a new protocol. Using coordinates of M and Q atoms, expressions have been derived for interatomic separations, bond angles, and volumes of tetrahedral core units (M4, Q4) and the total [M4Q4] core (as a tetracapped M4 tetrahedron). Values for structural parameters have been calculated from observed average values for a given cluster type. Comparison of calculated and observed values measures the extent of deviation of a given parameter from that required in an exact tetrahedral structure. The procedure has been applied to the structures of over 130 clusters containing [Fe4Q4] (Q = S2−, Se2−, Te2−, [NPR3]−, [NR]2−) units, of which synthetic and biological sulfide-bridged clusters constitute the largest subset. General structural features and trends in structural parameters are identified and summarized. An extensive database of structural properties (distances, angles, volumes) has been compiled in Supporting Information.

  14. On the use of mathematical models to build the design space for the primary drying phase of a pharmaceutical lyophilization process.

    PubMed

    Giordano, Anna; Barresi, Antonello A; Fissore, Davide

    2011-01-01

    The aim of this article is to show a procedure to build the design space for the primary drying of a pharmaceutical lyophilization process. Mathematical simulation of the process is used to identify the operating conditions that allow preserving product quality and meeting operating constraints posed by the equipment. In fact, product temperature has to be maintained below a limit value throughout the operation, and the sublimation flux has to be lower than the maximum value allowed by the capacity of the condenser, besides avoiding choking flow in the duct connecting the drying chamber to the condenser. Only a few experimental runs are required to get the values of the parameters of the model: the dynamic parameters estimation algorithm, an advanced tool based on the pressure rise test, is used to this purpose. A simple procedure is proposed to take into account parameter uncertainty and, thus, it is possible to find the recipes that allow fulfilling the process constraints within the required uncertainty range. The same approach can be effective to take into account the heterogeneity of the batch when designing the freeze-drying recipe. Copyright © 2010 Wiley-Liss, Inc. and the American Pharmacists Association

  15. A parsimonious characterization of change in global age-specific and total fertility rates

    PubMed Central

    2018-01-01

    This study aims to understand trends in global fertility from 1950 to 2010 through the analysis of age-specific fertility rates. This approach incorporates both the overall level, as when the total fertility rate is modeled, and different patterns of age-specific fertility to examine the relationship between changes in age-specific fertility and fertility decline. Singular value decomposition is used to capture the variation in age-specific fertility curves while reducing the number of dimensions, allowing curves to be described nearly fully with three parameters. Regional patterns and trends over time are evident in parameter values, suggesting this method provides a useful tool for considering fertility decline globally. The second and third parameters were analyzed using model-based clustering to examine patterns of age-specific fertility over time and place; four clusters were obtained. A country’s demographic transition can be traced through time by membership in the different clusters, and regional patterns in the trajectories through time and with fertility decline are identified. PMID:29377899
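
    A minimal sketch of describing age-specific fertility curves with a few SVD components, in the spirit of the three-parameter characterisation described above. The synthetic ASFR curves and age groups are hypothetical stand-ins for the 1950-2010 country data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: age-specific fertility rates (ASFRs) for many
# country-periods over 7 five-year age groups (15-19 ... 45-49).
ages = np.arange(17.5, 50, 5)
n_curves = 300
peak = rng.uniform(22, 32, n_curves)                   # peak age of childbearing
level = rng.uniform(0.05, 0.30, n_curves)              # overall level
asfr = level[:, None] * np.exp(-0.5 * ((ages - peak[:, None]) / 5.0) ** 2)

# SVD of the (curves x ages) matrix: the leading right singular vectors are shared
# "shape" components and each curve is summarised by a few left-side coefficients,
# analogous to the paper's three-parameter description.
U, s, Vt = np.linalg.svd(asfr, full_matrices=False)
coeffs = U[:, :3] * s[:3]                              # 3 parameters per curve
approx = coeffs @ Vt[:3]

explained = (s[:3] ** 2).sum() / (s ** 2).sum()
err = np.abs(approx - asfr).max()
print(f"variance captured by 3 components: {explained:.4f}, max abs error: {err:.4f}")
```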

  16. Experimental verification of internal parameter in magnetically coupled boost used as PV optimizer in parallel association

    NASA Astrophysics Data System (ADS)

    Sawicki, Jean-Paul; Saint-Eve, Frédéric; Petit, Pierre; Aillerie, Michel

    2017-02-01

    This paper presents results of experiments aimed at verifying a formula for computing the duty cycle under pulse width modulation control of a DC-DC converter designed and built in the laboratory. This converter, called the Magnetically Coupled Boost (MCB), is sized to step up the voltage of a single photovoltaic module to supply grid inverters directly. The duty cycle formula is checked first by identifying an internal parameter, the auto-transformer ratio, and then by checking the stability of the operating point on the photovoltaic module side. Considering the nature of the generator source and of the load connected to the converter leads to additional experiments to decide whether the auto-transformer ratio parameter can be used with a fixed value or, on the contrary, with an adaptive value. The effects of load variations on converter behavior and the impact of possible shading on the photovoltaic module are also discussed, with the aim of designing robust control laws for the parallel association that compensate for unwanted effects due to output voltage coupling.

  17. A Novel Degradation Identification Method for Wind Turbine Pitch System

    NASA Astrophysics Data System (ADS)

    Guo, Hui-Dong

    2018-04-01

    It is difficult for traditional threshold-value methods to identify degradation of operating equipment accurately. A novel degradation evaluation method suitable for implementing a wind turbine condition-based maintenance strategy is proposed in this paper. Based on an analysis of the typical variable-speed pitch-to-feather control principle and the monitoring parameters of the pitch system, a multi-input multi-output (MIMO) regression model was applied to the pitch system, with wind speed and generated power as input parameters and rotor speed, pitch angle and motor driving current for the three blades as output parameters. The difference between the on-line measurement and the value calculated from the MIMO regression model, built with the least squares support vector machines (LSSVM) method, was defined as the Observed Vector of the system. A Gaussian mixture model (GMM) was applied to fit the distribution of the multi-dimensional Observed Vectors. Applying the established model, the Degradation Index was calculated using SCADA data from a wind turbine that had damaged its pitch bearing retainer and rolling body, which illustrates the feasibility of the proposed method.
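
    A minimal sketch of the degradation-index idea: fit a Gaussian mixture model to residual "Observed Vectors" from a healthy period and score new residuals by their negative log-likelihood. The synthetic residuals, the assumed five-dimensional residual vector and the three-component GMM are illustrative assumptions, and scikit-learn's GaussianMixture stands in for the paper's GMM fit; the LSSVM regression step is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical residuals ("Observed Vectors"): measured pitch-system outputs minus
# the regression-model prediction; a 5-dimensional vector is assumed for illustration.
healthy = rng.normal(0.0, 1.0, (2000, 5))              # residuals from a healthy period
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(healthy)

def degradation_index(residuals):
    # Lower log-likelihood under the healthy-state GMM -> larger degradation index.
    return -gmm.score_samples(residuals).mean()

normal_ops = rng.normal(0.0, 1.0, (500, 5))
degraded = rng.normal(0.5, 1.6, (500, 5))              # drifted, noisier residuals
print("index (normal):  ", round(degradation_index(normal_ops), 2))
print("index (degraded):", round(degradation_index(degraded), 2))
```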

  18. Effect of Age on Tooth Shade, Skin Color and Skin-Tooth Color Interrelationship in Saudi Arabian Subpopulation.

    PubMed

    Haralur, Satheesh B

    2015-08-01

    A dental restoration or prosthesis in harmony with the color of the adjacent natural teeth is an indispensable part of a successful esthetic outcome. Studies indicate the existence of a correlation between tooth and skin color, and both change over the aging process. The aim of the study was to explore the role of age in tooth and skin color parameters and to investigate the effect of ageing on the tooth-skin color correlation. A total of 225 Saudi Arabian ethnic subjects were divided into three groups of 75 each according to participant age: 18-29 years for Group I, 30-50 years for Group II, and above 50 years for Group III. Tooth color was identified by spectrophotometer in CIE Lab parameters. Skin color was registered with skin surface photography. The data were statistically analyzed with one-way ANOVA and correlation tests in SPSS 18 software. Group I had the highest 'L' value of 80.26; Group III recorded the lowest value of 76.66. Group III had the highest yellow value 'b' at 22.72, while Group I had 19.19. The skin 'L' value was highest in the young population, and the older population had a higher red value 'a' in comparison with younger subjects. The 'L' tooth color parameter had a strong positive linear correlation with skin color in young and adult subjects, while Group III teeth showed a strong positive correlation with the 'b' parameter at the malar region. The older subjects had darker and yellower teeth than younger subjects, and reddening of the skin was observed as an age-related skin color change. Age had a strong influence on the tooth-skin color correlation.

  19. Usefulness of Mitral Valve Prosthetic or Bioprosthetic Time Velocity Index Ratio to Detect Prosthetic or Bioprosthetic Mitral Valve Dysfunction.

    PubMed

    Luis, Sushil Allen; Blauwet, Lori A; Samardhi, Himabindu; West, Cathy; Mehta, Ramila A; Luis, Chris R; Scalia, Gregory M; Miller, Fletcher A; Burstow, Darryl J

    2017-10-15

    This study aimed to investigate the utility of transthoracic echocardiographic (TTE) Doppler-derived parameters in detection of mitral prosthetic dysfunction and to define optimal cut-off values for identification of such dysfunction by valve type. In total, 971 TTE studies (647 mechanical prostheses; 324 bioprostheses) were compared with transesophageal echocardiography for evaluation of mitral prosthesis function. Among all prostheses, mitral valve prosthesis (MVP) ratio (ratio of time velocity integral of MVP to that of left ventricular outflow tract; odds ratio [OR] 10.34, 95% confidence interval [95% CI] 6.43 to 16.61, p<0.001), E velocity (OR 3.23, 95% CI 1.61 to 6.47, p<0.001), and mean gradient (OR 1.13, 95% CI 1.02 to 1.25, p=0.02) provided good discrimination of clinically normal and clinically abnormal prostheses. Optimal cut-off values by receiver operating characteristic analysis for differentiating clinically normal and abnormal prostheses varied by prosthesis type. Combining MVP ratio and E velocity improved specificity (92%) and positive predictive value (65%) compared with either parameter alone, with minimal decline in negative predictive value (92%). Pressure halftime (OR 0.99, 95% CI 0.98 to 1.00, p=0.04) did not differentiate between clinically normal and clinically abnormal prostheses but was useful in discriminating obstructed from normal and regurgitant prostheses. In conclusion, cut-off values for TTE-derived Doppler parameters of MVP function were specific to prosthesis type and carried high sensitivity and specificity for identifying prosthetic valve dysfunction. MVP ratio was the best predictor of prosthetic dysfunction and, combined with E velocity, provided a useful parameter for determining likelihood of dysfunction and need for further assessment. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.

  20. A method and instruments to identify the torque, the power and the efficiency of an internal combustion engine of a wheeled vehicle

    NASA Astrophysics Data System (ADS)

    Egorov, A. V.; Kozlov, K. E.; Belogusev, V. N.

    2018-01-01

    In this paper, we propose a new method and instruments to identify the torque, power, and efficiency of internal combustion engines in transient conditions. In contrast to the commonly used non-demounting methods based on inertia and strain gauge dynamometers, this method allows the main performance parameters of internal combustion engines to be monitored in transient conditions without the inaccuracy associated with torque loss in the transfer to the driving wheels, where the torque is measured by existing methods. In addition, the proposed method is easy to implement and does not use strain measurement instruments, which cannot capture variable values of the measured parameters at a high measurement rate and therefore prevent the actual parameters from being taken into account when engineering wheeled vehicles. Thus, the use of this method can greatly improve measurement accuracy and reduce the cost and labor of testing internal combustion engines. Experimental results showed the applicability of the proposed method for identifying the performance parameters of internal combustion engines. The most suitable transmission ratio for use with the proposed method was also determined.

  1. Heat budget observations for the FIRE/SRB Wisconsin experiment region from October 9 through November 2, 1986

    NASA Technical Reports Server (NTRS)

    Whitlock, Charles H.; Lecroy, Stuart R.

    1987-01-01

    A map and concise tables are presented which show locations, pixel size, and heat budget products from the NOAA-9 satellite for the FIRE/SRB Wisconsin experiment region during the period 9 October through 2 November 1986. In addition to the operational standard products, a narrowband albedo parameter is calculated and presented based on values from AVHRR band 1. This parameter is useful in identifying and/or quantifying clouds on a global basis using a polar-stereographic grid system.

  2. Hidden symmetries of the extended Kitaev-Heisenberg model: Implications for the honeycomb-lattice iridates A2IrO3

    NASA Astrophysics Data System (ADS)

    Chaloupka, Jiří; Khaliullin, Giniyat

    2015-07-01

    We have explored the hidden symmetries of a generic four-parameter nearest-neighbor spin model, allowed in honeycomb-lattice compounds under trigonal compression. Our method utilizes a systematic algorithm to identify all dual transformations of the model that map the Hamiltonian on itself, changing the parameters and providing exact links between different points in its parameter space. We have found the complete set of points of hidden SU(2) symmetry at which a seemingly highly anisotropic model can be mapped back on the Heisenberg model and inherits therefore its properties such as the presence of gapless Goldstone modes. The procedure used to search for the hidden symmetries is quite general and may be extended to other bond-anisotropic spin models and other lattices, such as the triangular, kagome, hyperhoneycomb, or harmonic-honeycomb lattices. We apply our findings to the honeycomb-lattice iridates Na2IrO3 and Li2IrO3 , and illustrate how they help to identify plausible values of the model parameters that are compatible with the available experimental data.

  3. LETTER TO THE EDITOR: On the relations between the zero-field splitting parameters in the extended Stevens operator notation and the conventional ones used in EMR for orthorhombic and lower symmetry

    NASA Astrophysics Data System (ADS)

    Rudowicz, C.

    2000-06-01

    Electron magnetic resonance (EMR) studies of paramagnetic species with the spin S ≥ 1 at orthorhombic symmetry sites require an axial zero-field splitting (ZFS) parameter and a rhombic one of the second order (k = 2), whereas at triclinic sites all five ZFS (k = 2) parameters are expressed in the crystallographic axis system. For spin S ≥ 2, the higher-order ZFS terms must also be considered. In the principal axis system, instead of the five ZFS (k = 2) parameters, the two principal ZFS values can be used, as for orthorhombic symmetry; however, then the orientation of the principal axes with respect to the crystallographic axis system must be provided. Recently three serious cases of incorrect relations between the extended Stevens ZFS parameters and the conventional ones have been identified in the literature. The first case involves a controversy concerning the second-order rhombic ZFS parameters and was found to have led to misinterpretation, in a review article, of several values of either E or b22 published earlier. The second case concerns the set of five relations between the extended Stevens ZFS parameters bkq and the conventional ones Dij for triclinic symmetry, four of which turn out to be incorrect. The third case concerns the omission of the scaling factors fk for the extended Stevens ZFS parameters bkq. In all cases the incorrect relations in question have been published in spite of the earlier existence of the correct relations in the literature. The incorrect relations are likely to lead to further misinterpretation of the published values of the ZFS parameters for orthorhombic and lower symmetry. The purpose of this paper is to make the spectroscopists working in the area of EMR (including EPR and ESR) and related spectroscopies aware of the problem and to reduce proliferation of the incorrect relations.

  4. Experimental parameter identification of a multi-scale musculoskeletal model controlled by electrical stimulation: application to patients with spinal cord injury.

    PubMed

    Benoussaad, Mourad; Poignet, Philippe; Hayashibe, Mitsuhiro; Azevedo-Coste, Christine; Fattal, Charles; Guiraud, David

    2013-06-01

    We investigated the parameter identification of a multi-scale physiological model of skeletal muscle, based on Huxley's formulation. We focused particularly on the knee joint controlled by quadriceps muscles under electrical stimulation (ES) in subjects with a complete spinal cord injury. A noninvasive and in vivo identification protocol was thus applied through surface stimulation in nine subjects and through neural stimulation in one ES-implanted subject. The identification protocol included initial identification steps, which are adaptations of existing identification techniques to estimate most of the parameters of our model. Then we applied an original and safer identification protocol in dynamic conditions, which required resolution of a nonlinear programming (NLP) problem to identify the serial element stiffness of quadriceps. Each identification step and cross validation of the estimated model in dynamic condition were evaluated through a quadratic error criterion. The results highlighted good accuracy, the efficiency of the identification protocol and the ability of the estimated model to predict the subject-specific behavior of the musculoskeletal system. From the comparison of parameter values between subjects, we discussed and explored the inter-subject variability of parameters in order to select parameters that have to be identified in each patient.

  5. Modelling duodenum radiotherapy toxicity using cohort dose-volume-histogram data.

    PubMed

    Holyoake, Daniel L P; Aznar, Marianne; Mukherjee, Somnath; Partridge, Mike; Hawkins, Maria A

    2017-06-01

    Gastro-intestinal toxicity is dose-limiting in abdominal radiotherapy and correlated with duodenum dose-volume parameters. We aimed to derive updated NTCP model parameters using published data and prospective radiotherapy quality-assured cohort data. A systematic search identified publications providing duodenum dose-volume histogram (DVH) statistics for clinical studies of conventionally-fractionated radiotherapy. Values for the Lyman-Kutcher-Burman (LKB) NTCP model were derived through sum-squared-error minimisation and using leave-one-out cross-validation. Data were corrected for fraction size and weighted according to patient numbers, and the model refined using individual patient DVH data for two further cohorts from prospective clinical trials. Six studies with published DVH data were utilised and, with the individual patient data included, comprised outcomes for 531 patients in total (median follow-up 16 months). Observed gastro-intestinal toxicity rates ranged from 0% to 14% (median 8%). LKB parameter values for the unconstrained fit to published data were: n=0.070, m=0.46, TD50(1) [Gy]=183.8, while the values for the model incorporating the individual patient data were n=0.193, m=0.51, TD50(1) [Gy]=299.1. LKB parameters derived using published data are shown to be consistent with those previously obtained using individual patient data, supporting a small volume-effect and dependence on exposure to high threshold dose. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
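
    For reference, the LKB model quoted above combines a generalized equivalent uniform dose (volume-effect parameter n) with a probit dose response (slope m, midpoint TD50). A minimal sketch follows, using the individual-patient-data fit reported above and a hypothetical duodenum DVH; the DVH bins are illustrative only.

```python
import numpy as np
from math import erf, sqrt

def lkb_ntcp(doses, volumes, n, m, td50):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.

    doses   : bin doses [Gy]
    volumes : fractional organ volume per bin (sums to 1)
    """
    # Generalized equivalent uniform dose for volume-effect parameter n.
    geud = np.sum(volumes * doses ** (1.0 / n)) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))            # probit (normal CDF)

# Hypothetical duodenum DVH: 30% of the organ near 45 Gy, the rest at lower dose.
doses = np.array([5.0, 20.0, 35.0, 45.0])
volumes = np.array([0.40, 0.20, 0.10, 0.30])

# Parameter set from the pooled fit incorporating individual patient data (quoted above).
print("NTCP:", round(lkb_ntcp(doses, volumes, n=0.193, m=0.51, td50=299.1), 4))
```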

  6. MODFLOW-2000, the U.S. Geological Survey modular ground-water model; user guide to the observation, sensitivity, and parameter-estimation processes and three post-processing programs

    USGS Publications Warehouse

    Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.

    2000-01-01

    This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.

  7. Characterization of testicular germ cell tumors: Whole-lesion histogram analysis of the apparent diffusion coefficient at 3T.

    PubMed

    Min, Xiangde; Feng, Zhaoyan; Wang, Liang; Cai, Jie; Yan, Xu; Li, Basen; Ke, Zan; Zhang, Peipei; You, Huijuan

    2018-01-01

    To assess the values of parameters derived from whole-lesion histograms of the apparent diffusion coefficient (ADC) at 3T for the characterization of testicular germ cell tumors (TGCTs). A total of 24 men with TGCTs underwent 3T diffusion-weighted imaging. Fourteen tumors were pathologically confirmed as seminomas, and ten tumors were pathologically confirmed as nonseminomas. Whole-lesion histogram analysis of the ADC values was performed. A Mann-Whitney U test was employed to compare the differences in ADC histogram parameters between seminomas and nonseminomas. Receiver operating characteristic analysis was used to identify the cutoff values for each parameter for differentiating seminomas from nonseminomas; furthermore, the area under the curve (AUC) was calculated to evaluate the diagnostic accuracy. The median of 10th, 25th, 50th, 75th, and 90th percentiles and mean, minimum and maximum ADC values were all significantly reduced for seminomas compared with nonseminomas (p<0.05 for all). In contrast, the median of kurtosis and skewness of ADC values of seminomas were both significantly increased compared with those of nonseminomas (p=0.003 and 0.001, respectively). For differentiating nonseminomas from seminomas, the 10th percentile ADC yielded the highest AUC with a sensitivity and specificity of 100% and 92.86%, respectively. Whole-lesion histogram analysis of ADCs might be used for preoperative characterization of TGCTs. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Patient Protection and Affordable Care Act; HHS notice of benefit and payment parameters for 2015. Final rule.

    PubMed

    2014-03-11

    This final rule sets forth payment parameters and oversight provisions related to the risk adjustment, reinsurance, and risk corridors programs; cost sharing parameters and cost-sharing reductions; and user fees for Federally-facilitated Exchanges. It also provides additional standards with respect to composite premiums, privacy and security of personally identifiable information, the annual open enrollment period for 2015, the actuarial value calculator, the annual limitation in cost sharing for stand-alone dental plans, the meaningful difference standard for qualified health plans offered through a Federally-facilitated Exchange, patient safety standards for issuers of qualified health plans, and the Small Business Health Options Program.

  9. Finding Top-k Unexplained Activities in Video

    DTIC Science & Technology

    2012-03-09

    The parameters that define a UAP instance affect the running time; each parameter was varied while the others were kept fixed at a default value. Table 1 reports the values considered for each parameter along with the corresponding default value.

        TABLE 1: Parameter values used in ...
        Parameter   Values                   Default value
        k           1, 2, 5, All             All
        τ           0.4, 0.6, 0.8            0.6
        L           160, 200, 240, 280       200
        # worlds    7E+04, 4E+05, 2E+07      2E+07

  10. Sensitivity of ecological soil-screening levels for metals to exposure model parameterization and toxicity reference values.

    PubMed

    Sample, Bradley E; Fairbrother, Anne; Kaiser, Ashley; Law, Sheryl; Adams, Bill

    2014-10-01

    Ecological soil-screening levels (Eco-SSLs) were developed by the United States Environmental Protection Agency (USEPA) for the purposes of setting conservative soil screening values that can be used to eliminate the need for further ecological assessment for specific analytes at a given site. Ecological soil-screening levels for wildlife represent a simplified dietary exposure model solved in terms of soil concentrations to produce exposure equal to a no-observed-adverse-effect toxicity reference value (TRV). Sensitivity analyses were performed for 6 avian and mammalian model species, and 16 metals/metalloids for which Eco-SSLs have been developed. The relative influence of model parameters was expressed as the absolute value of the range of variation observed in the resulting soil concentration when exposure is equal to the TRV. Rank analysis of variance was used to identify parameters with greatest influence on model output. For both birds and mammals, soil ingestion displayed the broadest overall range (variability), although TRVs consistently had the greatest influence on calculated soil concentrations; bioavailability in food was consistently the least influential parameter, although an important site-specific variable. Relative importance of parameters differed by trophic group. Soil ingestion ranked 2nd for carnivores and herbivores, but was 4th for invertivores. Different patterns were exhibited, depending on which parameter, trophic group, and analyte combination was considered. The approach for TRV selection was also examined in detail, with Cu as the representative analyte. The underlying assumption that generic body-weight-normalized TRVs can be used to derive protective levels for any species is not supported by the data. Whereas the use of site-, species-, and analyte-specific exposure parameters is recommended to reduce variation in exposure estimates (soil protection level), improvement of TRVs is more problematic. © 2014 The Authors. Environmental Toxicology and Chemistry Published by Wiley Periodicals, Inc.
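
    A sketch of the kind of one-at-a-time sensitivity screen described above, applied to a simplified dietary exposure model solved for the soil concentration at which exposure equals the TRV. The model form, parameter names and ranges are assumptions for illustration, not the USEPA Eco-SSL equations or the values used in the study.

```python
import numpy as np

# Simplified wildlife dietary-exposure model (a sketch, not the official
# USEPA Eco-SSL equations): dose comes from food plus incidental soil
# ingestion, with the food concentration tied to soil by an uptake factor.
DEFAULTS = dict(
    bw=0.02,        # body weight, kg
    fir=0.005,      # food ingestion rate, kg dry weight/day
    soil_frac=0.02, # soil ingestion as a fraction of the diet
    baf=0.6,        # soil-to-food bioaccumulation factor
    bioavail=1.0,   # relative bioavailability in food
    trv=5.0,        # no-observed-adverse-effect TRV, mg/kg body weight/day
)

def screening_soil_conc(p):
    """Soil concentration (mg/kg) at which modelled exposure equals the TRV."""
    food_term = p["fir"] * p["baf"] * p["bioavail"]
    soil_term = p["fir"] * p["soil_frac"]
    return p["trv"] * p["bw"] / (food_term + soil_term)

def one_at_a_time(ranges, n=11):
    """Vary each parameter over its range with the others held at defaults;
    report the span of the resulting screening concentration, largest first."""
    spans = {}
    for name, (lo, hi) in ranges.items():
        vals = [screening_soil_conc(dict(DEFAULTS, **{name: x}))
                for x in np.linspace(lo, hi, n)]
        spans[name] = max(vals) - min(vals)
    return dict(sorted(spans.items(), key=lambda kv: kv[1], reverse=True))

if __name__ == "__main__":
    ranges = {"trv": (1.0, 20.0), "soil_frac": (0.005, 0.1),
              "baf": (0.2, 2.0), "bioavail": (0.3, 1.0)}
    for name, span in one_at_a_time(ranges).items():
        print(f"{name:10s} output range {span:8.1f} mg/kg soil")
```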

  11. Sensitivity of ecological soil-screening levels for metals to exposure model parameterization and toxicity reference values

    PubMed Central

    Sample, Bradley E; Fairbrother, Anne; Kaiser, Ashley; Law, Sheryl; Adams, Bill

    2014-01-01

    Ecological soil-screening levels (Eco-SSLs) were developed by the United States Environmental Protection Agency (USEPA) for the purposes of setting conservative soil screening values that can be used to eliminate the need for further ecological assessment for specific analytes at a given site. Ecological soil-screening levels for wildlife represent a simplified dietary exposure model solved in terms of soil concentrations to produce exposure equal to a no-observed-adverse-effect toxicity reference value (TRV). Sensitivity analyses were performed for 6 avian and mammalian model species, and 16 metals/metalloids for which Eco-SSLs have been developed. The relative influence of model parameters was expressed as the absolute value of the range of variation observed in the resulting soil concentration when exposure is equal to the TRV. Rank analysis of variance was used to identify parameters with greatest influence on model output. For both birds and mammals, soil ingestion displayed the broadest overall range (variability), although TRVs consistently had the greatest influence on calculated soil concentrations; bioavailability in food was consistently the least influential parameter, although an important site-specific variable. Relative importance of parameters differed by trophic group. Soil ingestion ranked 2nd for carnivores and herbivores, but was 4th for invertivores. Different patterns were exhibited, depending on which parameter, trophic group, and analyte combination was considered. The approach for TRV selection was also examined in detail, with Cu as the representative analyte. The underlying assumption that generic body-weight–normalized TRVs can be used to derive protective levels for any species is not supported by the data. Whereas the use of site-, species-, and analyte-specific exposure parameters is recommended to reduce variation in exposure estimates (soil protection level), improvement of TRVs is more problematic. Environ Toxicol Chem 2014;33:2386–2398. PMID:24944000

  12. Renormalization group approach to symmetry protected topological phases

    NASA Astrophysics Data System (ADS)

    van Nieuwenburg, Evert P. L.; Schnyder, Andreas P.; Chen, Wei

    2018-04-01

    A defining feature of a symmetry protected topological phase (SPT) in one dimension is the degeneracy of the Schmidt values for any given bipartition. For the system to go through a topological phase transition separating two SPTs, the Schmidt values must either split or cross at the critical point in order to change their degeneracies. A renormalization group (RG) approach based on this splitting or crossing is proposed, through which we obtain an RG flow that identifies the topological phase transitions in the parameter space. Our approach can be implemented numerically in an efficient manner, for example, using the matrix product state formalism, since only the first few largest Schmidt values need to be calculated with sufficient accuracy. Using several concrete models, we demonstrate that the critical points and fixed points of the RG flow coincide with the maxima and minima of the entanglement entropy, respectively, and the method can serve as a numerically efficient tool to analyze interacting SPTs in the parameter space.
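
    The quantity the RG scheme tracks, the Schmidt values of a bipartition, can be computed directly as the singular values of the reshaped state vector. The toy parameterized state below is an assumption chosen only to show a degeneracy appearing as a parameter is swept; it is not one of the models analysed in the paper.

```python
import numpy as np

def schmidt_values(state, dim_left):
    """Schmidt values for a bipartition of a pure state: the singular values
    of the state vector reshaped into a (dim_left x dim_right) matrix."""
    psi = np.asarray(state, dtype=complex)
    return np.linalg.svd(psi.reshape(dim_left, -1), compute_uv=False)

def ghz_like_state(theta, n_sites=4):
    """Toy parameterized state cos(theta)|00..0> + sin(theta)|11..1>."""
    psi = np.zeros(2 ** n_sites)
    psi[0], psi[-1] = np.cos(theta), np.sin(theta)
    return psi

if __name__ == "__main__":
    # Track the two largest Schmidt values across the half-chain cut while a
    # parameter is swept; their degeneracy (crossing) at theta = pi/4 is the
    # kind of signature the RG scheme described above is built on.
    for theta in np.linspace(0.1, np.pi / 2 - 0.1, 7):
        sv = schmidt_values(ghz_like_state(theta), dim_left=4)
        print(f"theta = {theta:5.2f}  leading Schmidt values: "
              f"{sv[0]:.3f}, {sv[1]:.3f}  gap = {sv[0] - sv[1]:.3f}")
```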

  13. Determination of representative dimension parameter values of Korean knee joints for knee joint implant design.

    PubMed

    Kwak, Dai Soon; Tao, Quang Bang; Todo, Mitsugu; Jeon, Insu

    2012-05-01

    Knee joint implants developed by western companies have been imported to Korea and used for Korean patients. However, many clinical problems occur in the knee joints of Korean patients after total knee joint replacement owing to the geometric mismatch between the western implants and Korean knee joint structures. To solve these problems, a method to determine the representative dimension parameter values of Korean knee joints is introduced to aid in the design of knee joint implants appropriate for Korean patients. Measurements of the dimension parameters of 88 male Korean knee joint subjects were carried out. The distribution of the subjects versus each measured parameter value was investigated. The measured values of each dimension parameter were grouped into suitable intervals called "size groups," and the average value of each size group was calculated. The subjects were then grouped into "patient groups" based on the "size group numbers" of each parameter. Iterative calculations were performed to decrease the errors between the average dimension parameter values of each "patient group" and the dimension parameter values of the individual subjects; the average values whose errors fell below the error criterion were taken as the representative dimension parameter values for designing knee joint implants for Korean patients.
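
    A minimal sketch of the "size group" idea described above: measurements of one dimension parameter are binned into intervals, each group's average serves as a candidate representative value, and the interval is tightened until every subject's error falls below a criterion. The synthetic measurements, starting interval and error criterion are assumptions, not the study's data or thresholds.

```python
import numpy as np

def size_group_representatives(values, interval, error_criterion):
    """Group measured dimension values into 'size groups' of a given interval
    width, use each group's mean as its representative value, and tighten the
    interval until every subject's error is below the criterion."""
    values = np.asarray(values, dtype=float)
    while True:
        edges = np.arange(values.min(), values.max() + interval, interval)
        groups = np.digitize(values, edges)
        reps = {g: values[groups == g].mean() for g in np.unique(groups)}
        errors = np.array([abs(v - reps[g]) for v, g in zip(values, groups)])
        if errors.max() <= error_criterion:
            return sorted(reps.values()), interval
        interval *= 0.8   # tighten the size groups and try again

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Synthetic stand-in for one measured dimension (e.g. a femoral width, mm).
    widths = rng.normal(72.0, 4.0, 88)
    reps, final_interval = size_group_representatives(widths, interval=6.0,
                                                      error_criterion=2.0)
    print(f"{len(reps)} representative sizes (interval {final_interval:.2f} mm):")
    print(", ".join(f"{r:.1f}" for r in reps))
```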

  14. Assessment of expected breeding values for fertility traits of Murrah buffaloes under subtropical climate.

    PubMed

    Dash, Soumya; Chakravarty, A K; Singh, Avtar; Shivahre, Pushp Raj; Upadhyay, Arpan; Sah, Vaishali; Singh, K Mahesh

    2015-03-01

    The aim of the present study was to assess the influence of temperature and humidity prevalent under a subtropical climate on the breeding values for fertility traits, viz. service period (SP), pregnancy rate (PR) and conception rate (CR), of Murrah buffaloes in the National Dairy Research Institute (NDRI) herd. Fertility data on 1379 records of 581 Murrah buffaloes spread over four lactations, and climatic parameters, viz. dry bulb temperature and relative humidity (RH), spanning 20 years (1993-2012), were collected from NDRI and the Central Soil and Salinity Research Institute, Karnal, India. Monthly average temperature humidity index (THI) values were estimated. The threshold THI value affecting fertility traits was identified by fixed least-squares model analysis. Three zones were defined within the year: a non-heat-stress zone, a heat-stress zone (HSZ) and a critical heat-stress zone. The genetic parameters heritability (h(2)) and repeatability (r) of each fertility trait were estimated. Genetic evaluation of Murrah buffaloes was performed in each zone with respect to their expected breeding values (EBV) for fertility traits. The effect of THI was found to be significant (p<0.001) for all fertility traits, with the threshold THI value identified as 75. Based on THI values, the year was classified into three zones: the non-heat-stress zone (THI 56.71-73.21), the HSZ (THI 75.39-81.60) and the critical HSZ (THI 80.27-81.60). The EBV for SP, PR and CR were estimated as 138.57 days, 0.362 and 69.02% in the non-HSZ, while in the HSZ the EBV were 139.62 days, 0.358 and 68.81%, respectively. In the critical HSZ, the EBV for SP increased to 140.92 days, and those for PR and CR declined to 0.357 and 68.71%. A negative effect of THI on the EBV of fertility traits was thus observed between the non-HSZ and the critical HSZ. Thus, the influence of THI should be adjusted for before estimating the breeding values for fertility traits in Murrah buffaloes.
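
    The THI-based zoning can be sketched as below. The THI formula shown is one commonly used dairy formulation (several variants exist in the literature), and the example temperatures and humidities are invented; only the threshold of 75 and the critical zone starting near THI 80 come from the abstract.

```python
def thi(temp_c, rel_humidity_pct):
    """Temperature-humidity index, one commonly used dairy formulation
    (several variants exist in the literature)."""
    rh = rel_humidity_pct / 100.0
    return 0.8 * temp_c + rh * (temp_c - 14.4) + 46.4

def stress_zone(thi_value, threshold=75.0, critical=80.0):
    """Classify a monthly THI value against the threshold of 75 reported
    above, with the critical zone taken to start near THI 80."""
    if thi_value < threshold:
        return "non-heat-stress zone"
    return "critical heat-stress zone" if thi_value >= critical else "heat-stress zone"

if __name__ == "__main__":
    for t, rh in [(22.0, 60.0), (31.0, 50.0), (38.0, 65.0)]:
        v = thi(t, rh)
        print(f"T = {t:4.1f} C, RH = {rh:4.1f}%  ->  THI = {v:5.1f}  ({stress_zone(v)})")
```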

  15. Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.

    PubMed

    Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K

    2011-01-01

    We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Uncertainty estimation of diffusion parameters from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residual rescaling and cannot be utilized directly for the uncertainty estimation of body diffusion parameters due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the Unscented transform to compute the residual rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the uncertainty of the body diffusion parameters. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.

  16. Volume and mass distribution in selected families of asteroids

    NASA Astrophysics Data System (ADS)

    Wlodarczyk, I.; Leliwa-Kopystynski, J.

    2014-07-01

    Members of five asteroid families (Vesta, Eos, Eunomia, Koronis, and Themis) were identified using the Hierarchical Clustering Method (HCM) for a data set containing 292,003 numbered asteroids. The influence of the choice of the best value of the parameter v_{cut} that controls the distances of asteroids in the proper elements space a, e, i was investigated with a step as small as 1 m/s. Results are given in a set of figures showing the families on the planes (a, e), (a, i), (e, i). Another form for the presentation of results is related to the secular resonances in the asteroids' motion with the giant planets, mostly with Saturn. Relations among asteroid radius, albedo, and absolute magnitude allow us to calculate the volumes of individual members of an asteroid family. After summation, the volumes of the parent bodies of the families were found. This paper presents the possibility and the first results of using a combined method for asteroid family identifications based on the following items: (i) Parameter v_{cut} is established with precision as high as 1 m/s; (ii) the albedo (if available) of the potential members is considered for approving or rejecting the family membership; (iii) a color classification is used for the same purpose as well. Searching for the most reliable parameter values for the family populations was performed by means of a consecutive application of the HCM with increasing parameter v_{cut}. The results are illustrated in the figure. Increasing v_{cut} in steps as small as 1 m/s allowed to observe the computational strength of the HCM: the critical value of the parameter v_{cut} (see the breaking-points of the plots in the figure) separates the assemblage of potential family members from 'an ocean' of background asteroids that are not related to the family. The critical values of v_{cut} vary from 57 m/s for the Vesta family to 92 m/s for the Eos family. If the parameter v_{cut} surpasses its critical value, the number of HCM-discovered family members increases enormously and without any physical reason.

  17. Model of succession in degraded areas based on carabid beetles (Coleoptera, Carabidae)

    PubMed Central

    Schwerk, Axel; Szyszko, Jan

    2011-01-01

    Degraded areas pose challenging tasks for the sustainable management of natural resources. Maintaining or even establishing certain successional stages seems to be particularly important. This paper presents a model of the succession in five different types of degraded areas in Poland based on changes in the carabid fauna. Mean Individual Biomass of Carabidae (MIB) was used as a numerical measure for the stage of succession. The run of succession differed clearly among the different types of degraded areas. Initial conditions (origin of soil and origin of vegetation) and landscape-related aspects seem to be important with respect to these differences. As characteristic phases, a ‘delay phase’, an ‘increase phase’ and a ‘stagnation phase’ were identified. In general, the runs of succession could be described by four different parameters: (1) ‘initial degradation level’, (2) ‘delay’, (3) ‘increase rate’ and (4) ‘recovery level’. Applying the analytic solution of the logistic equation, characteristic values for the parameters were identified for each of the five area types. The model is of practical use because it makes it possible to compare the parameter values obtained in different areas, to give hints for intervention and to provide prognoses about future succession in the areas. Furthermore, it is possible to transfer the model to other indicators of succession. PMID:21738419
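
    A sketch of the four-parameter description of a succession run (initial degradation level, delay, increase rate, recovery level) using the analytic solution of the logistic equation, with the parameter values identified by least-squares fitting. The way the delay is encoded and all numbers are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

def succession_curve(t, initial, delay, rate, recovery):
    """Mean Individual Biomass over time: flat at the initial degradation
    level during the delay, then the analytic logistic solution rising
    towards the recovery level (one way to encode the four parameters)."""
    t = np.asarray(t, dtype=float)
    tt = np.clip(t - delay, 0.0, None)
    e = np.exp(np.clip(rate * tt, 0.0, 50.0))   # clip to avoid overflow
    return recovery * initial * e / (recovery + initial * (e - 1.0))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    years = np.arange(0, 40, dtype=float)
    true = dict(initial=30.0, delay=8.0, rate=0.35, recovery=400.0)
    mib = succession_curve(years, **true) + rng.normal(0.0, 10.0, years.size)
    # Identify the four characteristic parameter values for this synthetic area.
    popt, _ = curve_fit(succession_curve, years, mib,
                        p0=[50.0, 5.0, 0.2, 300.0], maxfev=10000)
    for name, est in zip(("initial", "delay", "rate", "recovery"), popt):
        print(f"{name:9s} true = {true[name]:7.2f}   estimated = {est:7.2f}")
```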

  18. The use and misuse of V(c,max) in Earth System Models.

    PubMed

    Rogers, Alistair

    2014-02-01

    Earth System Models (ESMs) aim to project global change. Central to this aim is the need to accurately model global carbon fluxes. Photosynthetic carbon dioxide assimilation by the terrestrial biosphere is the largest of these fluxes, and in many ESMs is represented by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis. The maximum rate of carboxylation by the enzyme Rubisco, commonly termed Vc,max, is a key parameter in the FvCB model. This study investigated the derivation of the values of Vc,max used to represent different plant functional types (PFTs) in ESMs. Four methods for estimating Vc,max were identified: (1) an empirical or (2) a mechanistic relationship was used to relate Vc,max to leaf N content; (3) Vc,max was estimated using an approach based on the optimization of photosynthesis and respiration; or (4) a user-defined Vc,max was calibrated to obtain a target model output. Despite representing the same PFTs, the land model components of ESMs were parameterized with a wide range of values for Vc,max (-46 to +77% of the PFT mean). In many cases, parameterization was based on limited data sets and poorly defined coefficients that were used to adjust model parameters and set PFT-specific values for Vc,max. Examination of the models that linked leaf N mechanistically to Vc,max identified potential changes to fixed parameters that collectively would decrease Vc,max by 31% in C3 plants and 11% in C4 plants. Plant trait databases are now available that offer an excellent opportunity for models to update PFT-specific parameters used to estimate Vc,max. However, data for parameterizing some PFTs, particularly those in the Tropics and the Arctic, are either highly variable or largely absent.

  19. Geospatial Water Quality Analysis of Dilla Town, Gadeo Zone, Ethiopia - A Case Study

    NASA Astrophysics Data System (ADS)

    Pakhale, G. K.; Wakeyo, T. B.

    2015-12-01

    Dilla is a socio-economically important town in Ethiopia, established on the international highway joining the capital cities of Ethiopia and Kenya. It serves as the administrative center of the Gedeo Zone in the SNNPR region of Ethiopia, accommodating around 65,000 inhabitants, and is also an important trade centre for coffee. Due to recent development and urbanization in the town and the surrounding area, waste and sewage discharge into the water resources has risen significantly. Frequent rainfall in the region also worsens the water quality problem. In this view, the present study aims to analyze the water quality profile of Dilla town using 12 physico-chemical parameters. Fifteen sampling stations were identified among the open wells, bore wells and surface water sources that are extensively used for drinking and other domestic purposes. A spectrophotometer was used to analyze the samples, and Gaussian process regression was used to interpolate the results in a GIS environment to represent the spatial distribution of the parameters. Based on the observed and desirable values of the parameters, a water quality index (WQI), a weighted estimate of the various parameters ranging from 1 to 100, was developed in GIS. A higher WQI value indicates better water quality, while a low value indicates poor water quality. This geospatial analysis was carried out before and after rainfall to understand temporal variation with reference to rainfall, which facilitates identifying the potential zones of drinking water. The WQI indicated that 8 out of 15 locations fall into the acceptable category, indicating the suitability of the water for human use, whereas the remaining locations are unfit. For example, the water sample at the main_campus_ustream_1 site has a very low WQI after rainfall, making it unfit for human usage. This suggests that certain measures should be undertaken in the town to enhance the water quality. These results are useful for town authorities to take corrective measures and ameliorate the water quality for human use.
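
    A simplified water quality index of the weighted kind described above (sub-indices scored against desirable limits and combined with weights). The parameter list, limits, weighting scheme and the acceptability cut-off are assumptions for illustration, since the abstract does not specify the exact formulation, and the station names below are hypothetical.

```python
# Simplified, illustrative water-quality index: each parameter is scored
# against a desirable limit and the scores are combined with weights
# inversely proportional to those limits.  Parameters, limits, weighting
# and the acceptability cut-off are assumptions.
DESIRABLE = {"pH_deviation": 1.5, "turbidity_NTU": 5.0, "nitrate_mgL": 45.0,
             "fluoride_mgL": 1.0, "total_hardness_mgL": 300.0}

def sub_index(observed, limit):
    """100 when the observation is within the desirable limit, falling
    towards 0 as the observation exceeds the limit."""
    return 100.0 if observed <= limit else max(0.0, 100.0 * limit / observed)

def wqi(sample):
    weights = {k: 1.0 / v for k, v in DESIRABLE.items()}
    total_w = sum(weights.values())
    return sum(w * sub_index(sample[k], DESIRABLE[k])
               for k, w in weights.items()) / total_w

if __name__ == "__main__":
    stations = {   # hypothetical sampling stations, a before/after-rainfall pair
        "station_A_pre_rain":  {"pH_deviation": 0.4, "turbidity_NTU": 2.0,
                                "nitrate_mgL": 20.0, "fluoride_mgL": 0.6,
                                "total_hardness_mgL": 180.0},
        "station_A_post_rain": {"pH_deviation": 1.9, "turbidity_NTU": 14.0,
                                "nitrate_mgL": 80.0, "fluoride_mgL": 1.8,
                                "total_hardness_mgL": 520.0},
    }
    for name, sample in stations.items():
        score = wqi(sample)
        verdict = "acceptable" if score >= 70.0 else "unfit"
        print(f"{name:22s} WQI = {score:5.1f}  ({verdict})")
```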

  20. Weak value amplification considered harmful

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-03-01

    We show, using statistically rigorous arguments, that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show that these conclusions do not change in the presence of technical noise.

  1. Uncertainty quantification and propagation in dynamic models using ambient vibration measurements, application to a 10-story building

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas

    2018-07-01

    This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of error functions represent the uncertainty of the calibrated model in predicting building's response (modal parameters here). The focus of this paper is to answer whether the quantified model uncertainties using dynamic measurement at building's reference/calibration state can be used to improve the model prediction accuracies at a different structural state, e.g., damaged structure. Also, the effects of prediction error bias on the uncertainty of the predicted values is studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. Moreover, it is shown that including prediction error bias in the updating process instead of commonly-used zero-mean error function can significantly reduce the prediction uncertainties.

  2. Comparison between different sets of suspension parameters and introduction of new modified skyhook control strategy incorporating varying road condition

    NASA Astrophysics Data System (ADS)

    Abul Kashem, Saad Bin; Ektesabi, Mehran; Nagarajah, Romesh

    2012-07-01

    This study examines the uncertainties in modelling a quarter-car suspension system that arise from using different sets of suspension parameters in the corresponding mathematical model. To overcome this problem, 11 sets of identified parameters of a suspension system, taken from the most recent published work, have been compared. From this investigation, a set of parameters was chosen that showed better performance than the others in terms of peak amplitude and settling time. These chosen parameters were then used to investigate the performance of a new modified continuous skyhook control strategy with adaptive gain that dictates the vehicle's semi-active suspension system. The proposed system first captures the road profile input over a certain period. Then it calculates the best possible value of the skyhook gain (SG) for the subsequent process. Meanwhile, the system is controlled according to the new modified skyhook control law using an initial or previous value of the SG. In this study, the proposed suspension system is compared with passive and other recently reported skyhook-controlled semi-active suspension systems. Its performance has been evaluated in terms of ride comfort and road handling. The model has been validated in accordance with the international standard ISO 2631 for admissible acceleration levels and human vibration perception.
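
    The continuous skyhook idea, and an adaptive gain chosen from a captured road-profile window, can be sketched as follows. The control law shown is one common textbook form of continuous skyhook control, and the gain-adaptation rule, damper limits and numbers are assumptions standing in for the paper's modified strategy.

```python
import numpy as np

def skyhook_force(v_body, v_rel, gain, c_min=200.0, c_max=3000.0):
    """One common continuous skyhook law: when the body velocity and the
    suspension (relative) velocity have the same sign, demand a damping
    force proportional to the body velocity, clipped to the damper's range;
    otherwise fall back to the soft setting."""
    if v_body * v_rel > 0.0 and abs(v_rel) > 1e-6:
        c_demand = float(np.clip(gain * abs(v_body / v_rel), c_min, c_max))
    else:
        c_demand = c_min
    return -c_demand * v_rel

def adaptive_gain(road_window, g_low=1500.0, g_high=3500.0, rough_rms=0.02):
    """Stand-in for the paper's adaptation step (an assumption): choose the
    skyhook gain from the RMS roughness of the captured road-profile window."""
    rms = float(np.sqrt(np.mean(np.square(road_window))))
    return g_high if rms > rough_rms else g_low

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    road = 0.03 * rng.standard_normal(200)    # captured road-profile window, m
    gain = adaptive_gain(road)
    # One control evaluation with example body and relative (damper) velocities.
    f = skyhook_force(v_body=0.25, v_rel=0.10, gain=gain)
    print(f"adapted skyhook gain = {gain:.0f} N s/m, demanded force = {f:.1f} N")
```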

  3. Composite multi-parameter ranking of real and virtual compounds for design of MC4R agonists: renaissance of the Free-Wilson methodology.

    PubMed

    Nilsson, Ingemar; Polla, Magnus O

    2012-10-01

    Drug design is a multi-parameter task present in the analysis of experimental data for synthesized compounds and in the prediction of new compounds with desired properties. This article describes the implementation of a binned scoring and composite ranking scheme for 11 experimental parameters that were identified as key drivers in the MC4R project. The composite ranking scheme was implemented in an AstraZeneca tool for analysis of project data, thereby providing an immediate re-ranking as new experimental data was added. The automated ranking also highlighted compounds overlooked by the project team. The successful implementation of a composite ranking on experimental data led to the development of an equivalent virtual score, which was based on Free-Wilson models of the parameters from the experimental ranking. The individual Free-Wilson models showed good to high predictive power with a correlation coefficient between 0.45 and 0.97 based on the external test set. The virtual ranking adds value to the selection of compounds for synthesis but error propagation must be controlled. The experimental ranking approach adds significant value, is parameter independent and can be tuned and applied to any drug discovery project.
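
    A minimal sketch of binned scoring with a composite ranking: each experimental parameter is scored against project thresholds and the scores are summed per compound, so the ranking updates as soon as new data are entered. The parameters, thresholds and compounds below are hypothetical; the 11 MC4R project parameters are not listed in the abstract.

```python
# Minimal sketch of binned scoring with a composite ranking.  The parameters,
# bin thresholds and compounds are hypothetical.
BINS = {
    # parameter: (good threshold, acceptable threshold, higher_is_better)
    "potency_pEC50":   (8.0, 7.0, True),
    "solubility_uM":   (100.0, 10.0, True),
    "herg_IC50_uM":    (30.0, 10.0, True),
    "clint_uL_min_mg": (10.0, 30.0, False),
}

def bin_score(value, good, ok, higher_is_better):
    """2 points for a 'good' value, 1 for an 'acceptable' value, 0 otherwise."""
    if higher_is_better:
        return 2 if value >= good else 1 if value >= ok else 0
    return 2 if value <= good else 1 if value <= ok else 0

def composite_rank(compounds):
    """Sum the binned scores over all parameters and rank the compounds;
    re-running this as new assay data arrive re-ranks the set immediately."""
    scored = {name: sum(bin_score(vals[p], *BINS[p]) for p in BINS)
              for name, vals in compounds.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    compounds = {
        "CMPD-001": {"potency_pEC50": 8.3, "solubility_uM": 250.0,
                     "herg_IC50_uM": 12.0, "clint_uL_min_mg": 8.0},
        "CMPD-002": {"potency_pEC50": 7.2, "solubility_uM": 5.0,
                     "herg_IC50_uM": 45.0, "clint_uL_min_mg": 40.0},
    }
    for name, score in composite_rank(compounds):
        print(f"{name}: composite score {score}")
```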

  4. Role of Nuclear Morphometry in Breast Cancer and its Correlation with Cytomorphological Grading of Breast Cancer: A Study of 64 Cases.

    PubMed

    Kashyap, Anamika; Jain, Manjula; Shukla, Shailaja; Andley, Manoj

    2018-01-01

    Fine needle aspiration cytology (FNAC) is a simple, rapid, inexpensive, and reliable method of diagnosis of a breast mass. Cytoprognostic grading in breast cancers is important to identify high-grade tumors. Computer-assisted image morphometric analysis has been developed to quantitate as well as standardize various grading systems. The aim was to apply nuclear morphometry to cytological aspirates of breast cancer, evaluate its correlation with cytomorphological grading, and derive suitable cutoff values between the grades. This was a descriptive, cross-sectional, hospital-based study. It included 64 breast cancer cases (29 of grade 1, 22 of grade 2, and 13 of grade 3). Image analysis was performed on Papanicolaou-stained FNAC slides using NIS-Elements Advanced Research software (Ver 4.00). Nuclear morphometric parameters analyzed included 5 nuclear size, 2 shape, 4 texture, and 2 density parameters. Nuclear size parameters showed an increase in values with increasing cytological grades of carcinoma. Nuclear shape parameters were not found to be significantly different between the three grades. Among nuclear texture parameters, sum intensity and sum brightness were found to differ between the three grades. Nuclear morphometry can be applied to augment the cytology grading of breast cancer and thus help in classifying patients into low- and high-risk groups.

  5. A systematic review of utility values for chemotherapy-related adverse events.

    PubMed

    Shabaruddin, Fatiha H; Chen, Li-Chia; Elliott, Rachel A; Payne, Katherine

    2013-04-01

    Chemotherapy offers cancer patients the potential benefits of improved mortality and morbidity but may cause detrimental outcomes due to adverse drug events (ADEs), some of which requiring time-consuming, resource-intensive and costly clinical management. To appropriately assess chemotherapy agents in an economic evaluation, ADE-related parameters such as the incidence, (dis)utility and cost of ADEs should be reflected within the model parameters. To date, there has been no systematic summary of the existing literature that quantifies the utilities of ADEs due to healthcare interventions in general and chemotherapy treatments in particular. This review aimed to summarize the current evidence base of reported utility values for chemotherapy-related ADEs. A structured electronic search combining terms for utility, utility valuation methods and generic terms for cancer treatment was conducted in MEDLINE and EMBASE in June 2011. Inclusion criteria were: (1) elicitation of utility values for chemotherapy-related ADEs and (2) primary data. Two reviewers identified studies and extracted data independently. Any disagreements were resolved by a third reviewer. Eighteen studies met the inclusion criteria from the 853 abstracts initially identified, collectively reporting 218 utility values for chemotherapy-related ADEs. All 18 studies used short descriptions (vignettes) to obtain the utility values, with nine studies presenting the vignettes used in the valuation exercises. Of the 218 utility values, 178 were elicited using standard gamble (SG) or time trade-off (TTO) approaches, while 40 were elicited using visual analogue scales (VAS). There were 169 utility values of specific chemotherapy-related ADEs (with the top ten being anaemia [34 values], nausea and/or vomiting [32 values], neuropathy [21 values], neutropenia [12 values], diarrhoea [12 values], stomatitis [10 values], fatigue [8 values], alopecia [7 values], hand-foot syndrome [5 values] and skin reaction [5 values]) and 49 of non-specific chemotherapy-related adverse events. In most cases, it was difficult to directly compare the utility values as various definitions and study-specific vignettes were used for the ADEs of interest. This review was designed to provide an overall description of existing literature reporting utility values for chemotherapy-related ADEs. The findings were not exhaustive and were limited to publications that could be identified using the search strategy employed and those reported in the English language. This review identified wide ranges in the utility values reported for broad categories of specific chemotherapy-related ADEs. There were difficulties in comparing the values directly as various study-specific definitions were used for these ADEs and most studies did not make the vignettes used in the valuation exercises available. It is recommended that a basic minimum requirement be developed for the transparent reporting of study designs eliciting utility values, incorporating key criteria such as reporting how the vignettes were developed and presenting the vignettes used in the valuation tasks as well as valuing and reporting the utility values of the ADE-free base states. 
It is also recommended, in the future, for studies valuing the utilities of chemotherapy-related ADEs to define the ADEs according to the National Cancer Institute (NCI) definitions for chemotherapy-related ADEs as the use of the same definition across studies would ease the comparison and selection of utility values and make the overall inclusion of adverse events within economic models of chemotherapy agents much more straightforward.

  6. Morphology parameters for intracranial aneurysm rupture risk assessment.

    PubMed

    Dhar, Sujan; Tremmel, Markus; Mocco, J; Kim, Minsuok; Yamamoto, Junichi; Siddiqui, Adnan H; Hopkins, L Nelson; Meng, Hui

    2008-08-01

    The aim of this study is to identify image-based morphological parameters that correlate with human intracranial aneurysm (IA) rupture. For 45 patients with terminal or sidewall saccular IAs (25 unruptured, 20 ruptured), three-dimensional geometries were evaluated for a range of morphological parameters. In addition to five previously studied parameters (aspect ratio, aneurysm size, ellipticity index, nonsphericity index, and undulation index), we defined three novel parameters incorporating the parent vessel geometry (vessel angle, aneurysm [inclination] angle, and [aneurysm-to-vessel] size ratio) and explored their correlation with aneurysm rupture. Parameters were analyzed with a two-tailed independent Student's t test for significance; significant parameters (P < 0.05) were further examined by multivariate logistic regression analysis. Additionally, receiver operating characteristic analyses were performed on each parameter. Statistically significant differences were found between mean values in ruptured and unruptured groups for size ratio, undulation index, nonsphericity index, ellipticity index, aneurysm angle, and aspect ratio. Logistic regression analysis further revealed that size ratio (odds ratio, 1.41; 95% confidence interval, 1.03-1.92) and undulation index (odds ratio, 1.51; 95% confidence interval, 1.08-2.11) had the strongest independent correlation with ruptured IA. From the receiver operating characteristic analysis, size ratio and aneurysm angle had the highest area under the curve values of 0.83 and 0.85, respectively. Size ratio and aneurysm angle are promising new morphological metrics for IA rupture risk assessment. Because these parameters account for vessel geometry, they may bridge the gap between morphological studies and more qualitative location-based studies.

  7. NLC Luminosity as a Function of Beam Parameters

    NASA Astrophysics Data System (ADS)

    Nosochkov, Y.

    2002-06-01

    Realistic calculation of NLC luminosity has been performed using particle tracking in DIMAD and beam-beam simulations in GUINEA-PIG code for various values of beam emittance, energy and beta functions at the Interaction Point (IP). Results of the simulations are compared with analytic luminosity calculations. The optimum range of IP beta functions for high luminosity was identified.

  8. Analysis of Family Structures Reveals Robustness or Sensitivity of Bursting Activity to Parameter Variations in a Half-Center Oscillator (HCO) Model.

    PubMed

    Doloc-Mihu, Anca; Calabrese, Ronald L

    2016-01-01

    The underlying mechanisms that support robustness in neuronal networks are as yet unknown. However, recent studies provide evidence that neuronal networks are robust to natural variations, modulation, and environmental perturbations of parameters, such as maximal conductances of intrinsic membrane and synaptic currents. Here we sought a method for assessing robustness, which might easily be applied to large brute-force databases of model instances. Starting with groups of instances with appropriate activity (e.g., tonic spiking), our method classifies instances into much smaller subgroups, called families, in which all members vary only by the one parameter that defines the family. By analyzing the structures of families, we developed measures of robustness for activity type. Then, we applied these measures to our previously developed model database, HCO-db, of a two-neuron half-center oscillator (HCO), a neuronal microcircuit from the leech heartbeat central pattern generator where the appropriate activity type is alternating bursting. In HCO-db, the maximal conductances of five intrinsic and two synaptic currents were varied over eight values (leak reversal potential also varied, five values). We focused on how variations of particular conductance parameters maintain normal alternating bursting activity while still allowing for functional modulation of period and spike frequency. We explored the trade-off between robustness of activity type and desirable change in activity characteristics when intrinsic conductances are altered and identified the hyperpolarization-activated (h) current as an ideal target for modulation. We also identified ensembles of model instances that closely approximate physiological activity and can be used in future modeling studies.

  9. Quantitative analysis of ground penetrating radar data in the Mu Us Sandland

    NASA Astrophysics Data System (ADS)

    Fu, Tianyang; Tan, Lihua; Wu, Yongqiu; Wen, Yanglei; Li, Dawei; Duan, Jinlong

    2018-06-01

    Ground penetrating radar (GPR), which can reveal the sedimentary structure and development process of dunes, is widely used to evaluate aeolian landforms. The interpretations for GPR profiles are mostly based on qualitative descriptions of geometric features of the radar reflections. This research quantitatively analyzed the waveform parameter characteristics of different radar units by extracting the amplitude and time interval parameters of GPR data in the Mu Us Sandland in China, and then identified and interpreted different sedimentary structures. The results showed that different types of radar units had specific waveform parameter characteristics. The main waveform parameter characteristics of sand dune radar facies and sandstone radar facies included low amplitudes and wide ranges of time intervals, ranging from 0 to 0.25 and 4 to 33 ns respectively, and the mean amplitudes changed gradually with time intervals. The amplitude distribution curves of various sand dune radar facies were similar as unimodal distributions. The radar surfaces showed high amplitudes with time intervals concentrated in high-value areas, ranging from 0.08 to 0.61 and 9 to 34 ns respectively, and the mean amplitudes changed drastically with time intervals. The amplitude and time interval values of lacustrine radar facies were between that of sand dune radar facies and radar surfaces, ranging from 0.08 to 0.29 and 11 to 30 ns respectively, and the mean amplitude and time interval curve was approximately trapezoidal. The quantitative extraction and analysis of GPR reflections could help distinguish various radar units and provide evidence for identifying sedimentary structure in aeolian landforms.

  10. The prognostic and predictive value of vascular response parameters measured by dynamic contrast-enhanced-CT, -MRI and -US in patients with metastatic renal cell carcinoma receiving sunitinib.

    PubMed

    Hudson, John M; Bailey, Colleen; Atri, Mostafa; Stanisz, Greg; Milot, Laurent; Williams, Ross; Kiss, Alex; Burns, Peter N; Bjarnason, Georg A

    2018-06-01

    To identify dynamic contrast-enhanced (DCE) imaging parameters from MRI, CT and US that are prognostic and predictive in patients with metastatic renal cell cancer (mRCC) receiving sunitinib. Thirty-four patients were monitored by DCE imaging on day 0 and 14 of the first course of sunitinib treatment. Additional scans were performed with DCE-US only (day 7 or 28 and 2 weeks after the treatment break). Perfusion parameters that demonstrated a significant correlation (Spearman p < 0.05) with progression-free survival (PFS) and overall survival (OS) were investigated using Cox proportional hazard models/ratios (HR) and Kaplan-Meier survival analysis. A higher baseline and day 14 value for Ktrans (DCE-MRI) and a lower pre-treatment vascular heterogeneity (DCE-US) were significantly associated with a longer PFS (HR, 0.62, 0.37 and 5.5, respectively). A larger per cent decrease in blood volume on day 14 (DCE-US) predicted a longer OS (HR, 1.45). We did not find significant correlations between any of the DCE-CT parameters and PFS/OS, unless a cut-off analysis was used. DCE-MRI, -CT and ultrasound produce complementary parameters that reflect the prognosis of patients receiving sunitinib for mRCC. Blood volume measured by DCE-US was the only parameter whose change during early anti-angiogenic therapy predicted for OS and PFS. • DCE-CT, -MRI and ultrasound are complementary modalities for monitoring anti-angiogenic therapy. • The change in blood volume measured by DCE-US was predictive of OS/PFS. • Baseline vascular heterogeneity by DCE-US has the strongest prognostic value for PFS.

  11. Uncertainty analyses of the calibrated parameter values of a water quality model

    NASA Astrophysics Data System (ADS)

    Rode, M.; Suhr, U.; Lindenschmidt, K.-E.

    2003-04-01

    For river basin management water quality models are increasingly used for the analysis and evaluation of different management measures. However substantial uncertainties exist in parameter values depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented, which considers the impact of available model calibration data and the variance of input variables. The investigation was conducted based on four extensive flowtime related longitudinal surveys in the River Elbe in the years 1996 to 1999 with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one dimensional water quality model and uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis Menten/Monod kinetics, which are used in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow time related measuring surveys very good agreements between model calculation and measured values can be obtained. If these parameters are applied to deviating boundary conditions, substantial errors in model calculation can occur. These uncertainties can be decreased with an increased calibration database. More reliable model parameters can be identified, which supply reasonable results for broader boundary conditions. The extension of the application of the parameter set on a wider range of water quality conditions leads to a slight reduction of the model precision for the specific water quality situation. Moreover the investigations show that highly variable water quality variables like the algal biomass always allow a smaller forecast accuracy than variables with lower coefficients of variation like e.g. nitrate.

  12. Application of nonlinear least-squares regression to ground-water flow modeling, west-central Florida

    USGS Publications Warehouse

    Yobbi, D.K.

    2000-01-01

    A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values are about 140 times greater than and about 0.01 times less than reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters, and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.

  13. Validation of a mathematical model of the bovine estrous cycle for cows with different estrous cycle characteristics.

    PubMed

    Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H

    2017-11-01

    A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective to further validate the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.

  14. Linking ecophysiological modelling with quantitative genetics to support marker-assisted crop design for improved yields of rice (Oryza sativa) under drought stress

    PubMed Central

    Gu, Junfei; Yin, Xinyou; Zhang, Chengwei; Wang, Huaqi; Struik, Paul C.

    2014-01-01

    Background and Aims Genetic markers can be used in combination with ecophysiological crop models to predict the performance of genotypes. Crop models can estimate the contribution of individual markers to crop performance in given environments. The objectives of this study were to explore the use of crop models to design markers and virtual ideotypes for improving yields of rice (Oryza sativa) under drought stress. Methods Using the model GECROS, crop yield was dissected into seven easily measured parameters. Loci for these parameters were identified for a rice population of 94 introgression lines (ILs) derived from two parents differing in drought tolerance. Marker-based values of ILs for each of these parameters were estimated from additive allele effects of the loci, and were fed to the model in order to simulate yields of the ILs grown under well-watered and drought conditions and in order to design virtual ideotypes for those conditions. Key Results To account for genotypic yield differences, it was necessary to parameterize the model for differences in an additional trait ‘total crop nitrogen uptake’ (Nmax) among the ILs. Genetic variation in Nmax had the most significant effect on yield; five other parameters also significantly influenced yield, but seed weight and leaf photosynthesis did not. Using the marker-based parameter values, GECROS also simulated yield variation among 251 recombinant inbred lines of the same parents. The model-based dissection approach detected more markers than the analysis using only yield per se. Model-based sensitivity analysis ranked all markers for their importance in determining yield differences among the ILs. Virtual ideotypes based on markers identified by modelling had 10–36 % more yield than those based on markers for yield per se. Conclusions This study outlines a genotype-to-phenotype approach that exploits the potential value of marker-based crop modelling in developing new plant types with high yields. The approach can provide more markers for selection programmes for specific environments whilst also allowing for prioritization. Crop modelling is thus a powerful tool for marker design for improved rice yields and for ideotyping under contrasting conditions. PMID:24984712

  15. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  16. Multiple concurrent recursive least squares identification with application to on-line spacecraft mass-property identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2006-01-01

    The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications, one for each said group, are run, treating the other unknown parameters appearing in each group's regression equation as if they were known perfectly, with those values provided by the recursive least squares estimates from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
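
    A sketch of the segmentation idea in the claim above: the model is not linear in all unknowns at once, but each group of unknowns can be isolated linearly, so one recursive least squares estimator runs per group while treating the other group's current estimate as known. The toy model, forgetting factor and numbers are assumptions, not the spacecraft mass-property equations.

```python
import numpy as np

class RLS:
    """Standard recursive least squares for a linear regression y = phi . theta,
    updated one measurement at a time, with a forgetting factor."""
    def __init__(self, n, lam=0.99, p0=1e3):
        self.theta = np.zeros(n)
        self.P = np.eye(n) * p0
        self.lam = lam
    def update(self, phi, y):
        phi = np.atleast_1d(np.asarray(phi, dtype=float))
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta

if __name__ == "__main__":
    # Toy model y = a*x1 + a*b*x2 + b*x3: not linear in (a, b) jointly because
    # of the a*b product, but linear in a for fixed b and linear in b for
    # fixed a -- the kind of segmentation the claim above exploits.
    rng = np.random.default_rng(5)
    a_true, b_true = 2.0, 0.5
    est_a, est_b = RLS(1), RLS(1)
    a_hat, b_hat = 1.0, 1.0        # current estimates exchanged between the two
    for _ in range(500):
        x1, x2, x3 = rng.normal(size=3)
        y = a_true * x1 + a_true * b_true * x2 + b_true * x3 + rng.normal(0.0, 0.01)
        # Estimator for a treats the current b estimate as known, and vice versa.
        a_hat = est_a.update(x1 + b_hat * x2, y - b_hat * x3)[0]
        b_hat = est_b.update(a_hat * x2 + x3, y - a_hat * x1)[0]
    print(f"a = {a_hat:.3f} (true {a_true}),  b = {b_hat:.3f} (true {b_true})")
```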

  17. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1993-01-01

    A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.

  18. Damage Progression in Buckle-Resistant Notched Composite Plates Loaded in Uniaxial Compression

    NASA Technical Reports Server (NTRS)

    McGowan, David M.; Davila, Carlos G.; Ambur, Damodar R.

    2001-01-01

    Results of an experimental and analytical evaluation of damage progression in three stitched composite plates containing an angled central notch and subjected to compression loading are presented. Parametric studies were conducted systematically to identify the relative effects of the material strength parameters on damage initiation and growth. Comparisons with experiments were conducted to determine the appropriate in situ values of strengths for progressive failure analysis. These parametric studies indicated that the in situ value of the fiber buckling strength is the most important parameter in the prediction of damage initiation and growth in these notched composite plates. Analyses of the damage progression in the notched, compression-loaded plates were conducted using in situ material strengths. Comparisons of results obtained from these analyses with experimental results for displacements and axial strains show good agreement.

  19. Analysis of the methods for assessing socio-economic development level of urban areas

    NASA Astrophysics Data System (ADS)

    Popova, Olga; Bogacheva, Elena

    2017-01-01

    The present paper provides a targeted analysis of current approaches (ratings) to assessing the socio-economic development of urban areas. The survey focuses on identifying standardized methodologies for forming area-assessment techniques that can support a system of intelligent monitoring, dispatching, building management, scheduling and effective management of an administrative-territorial unit. Such a system is characterized by a complex hierarchical structure, including tangible and intangible properties (parameters, attributes). Investigating the abovementioned methods should increase the administrative-territorial unit's attractiveness for investors and residents. The research aims at studying methods for evaluating the socio-economic development level of territories of the Russian Federation. Experimental and theoretical methods for estimating territories were reviewed. A complex analysis of the characteristics of the areas was carried out and evaluation parameters were determined. Integral indicators (the resulting rating criteria values) as well as the overall rankings (parameters, characteristics) were analyzed. An inventory of the most widely used partial indicators (parameters, characteristics) of urban areas was compiled. The homogeneity of the resulting rating criteria values was verified and confirmed by determining the root mean square deviation, i.e. the divergence of indices. The principal shortcomings of the assessment methodologies were identified, and assessment methods with enhanced effectiveness and homogeneity were proposed.

  20. An NTCP Analysis of Urethral Complications from Low Doserate Mono- and Bi-Radionuclide Brachytherapy.

    PubMed

    Nuttens, V E; Nahum, A E; Lucas, S

    2011-01-01

    Urethral NTCP has been determined for three prostates implanted with seeds based on (125)I (145 Gy), (103)Pd (125 Gy), (131)Cs (115 Gy), (103)Pd-(125)I (145 Gy), or (103)Pd-(131)Cs (115 Gy or 130 Gy). First, DU(20), meaning that 20% of the urethral volume receives a dose of at least DU(20), was converted into an I-125 LDR equivalent DU(20) in order to use the urethral NTCP model. Second, the propagation of uncertainties through the steps in the NTCP calculation was assessed in order to identify the parameters responsible for large data uncertainties. Two sets of radiobiological parameters were studied. The NTCP results all fall in the 19%-23% range and are associated with large uncertainties, making the comparison difficult. Depending on the dataset chosen, the ranking of NTCP values among the six seed implants studied changes. Moreover, the large uncertainties in the fitting parameters of the urethral NTCP model result in large uncertainty in the NTCP value. In conclusion, the use of an NTCP model for permanent brachytherapy is feasible, but it is essential that the uncertainties in the parameters of the model be reduced.

  1. The Early Eocene equable climate problem: can perturbations of climate model parameters identify possible solutions?

    PubMed

    Sagoo, Navjit; Valdes, Paul; Flecker, Rachel; Gregoire, Lauren J

    2013-10-28

    Geological data for the Early Eocene (56-47.8 Ma) indicate extensive global warming, with very warm temperatures at both poles. However, despite numerous attempts to simulate this warmth, there are remarkable data-model differences in the prediction of these polar surface temperatures, resulting in the so-called 'equable climate problem'. In this paper, for the first time a perturbed-parameter ensemble approach, varying climate-sensitive model parameters, has been applied to modelling the Early Eocene climate. We performed more than 100 simulations with perturbed physics parameters, and identified two simulations that have an optimal fit with the proxy data. We have simulated the warmth of the Early Eocene at 560 ppmv CO2, a much lower CO2 level than used by many other models. We investigate the changes in atmospheric circulation, cloud properties and ocean circulation that are common to these simulations and how they differ from the remaining simulations in order to understand what mechanisms contribute to the polar warming. The parameter set from one of the optimal Early Eocene simulations also produces a favourable fit for the last glacial maximum boundary climate and outperforms the control parameter set for the present day. Although this does not 'prove' that this model is correct, it is very encouraging that there is a parameter set that creates a climate model able to simulate well very different palaeoclimates and the present-day climate. Interestingly, to achieve the great warmth of the Early Eocene this version of the model does not have a strong future climate change Charney climate sensitivity. It produces a Charney climate sensitivity of 2.7°C, whereas the mean value of the 18 models in the IPCC Fourth Assessment Report (AR4) is 3.26°C ± 0.69°C. Thus, this value is within the range and below the mean of the models included in the AR4.

  2. Cardiac risk index as a simple geometric indicator to select patients for the heart-sparing radiotherapy of left-sided breast cancer.

    PubMed

    Sung, KiHoon; Choi, Young Eun; Lee, Kyu Chan

    2017-06-01

    This is a dosimetric study to identify a simple geometric indicator to discriminate patients who meet the selection criterion for heart-sparing radiotherapy (RT). The authors proposed a cardiac risk index (CRI), directly measurable from the CT images at the time of scanning. Treatment plans were regenerated using the CT data of 312 consecutive patients with left-sided breast cancer. Dosimetric analysis was performed to estimate the risk of cardiac mortality using cardiac dosimetric parameters, such as the relative heart volumes receiving ≥25 Gy (heart V25). For each CT data set, in-field heart depth (HD) and in-field heart width (HW) were measured to generate the geometric parameters, including maximum HW (HWmax) and maximum HD (HDmax). Seven geometric parameters were evaluated as candidates for the CRI. Receiver operating characteristic (ROC) curve analyses were used to examine the overall discriminatory power of the geometric parameters to select high-risk patients (heart V25 ≥ 10%). Seventy-one high-risk (22.8%) and 241 low-risk patients (77.2%) were identified by dosimetric analysis. The geometric and dosimetric parameters were significantly higher in the high-risk group. Heart V25 showed strong positive correlations with all geometric parameters examined (r > 0.8, p < 0.001). The product of HDmax and HWmax (the CRI) revealed the largest area under the curve (AUC) value (0.969) and maintained 100% sensitivity and 88% specificity at the optimal cut-off value of 14.58 cm². The cardiac risk index, proposed as a simple geometric indicator to select high-risk patients, provides useful guidance for clinicians considering optimal implementation of heart-sparing RT. © 2016 The Royal Australian and New Zealand College of Radiologists.
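
    The sketch below illustrates the kind of ROC analysis described above: computing the AUC of a candidate geometric indicator (the product of HDmax and HWmax) against a binary high-risk label defined by heart V25 ≥ 10%, and picking a cut-off by the Youden index. The synthetic measurements and the scikit-learn workflow are assumptions for illustration, not the study's data or code.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical geometric measurements (cm) and heart V25 values (%) for
# illustration only; the study used 312 planned CT data sets.
hd_max = rng.uniform(0.5, 3.5, 300)          # maximum in-field heart depth
hw_max = rng.uniform(2.0, 9.0, 300)          # maximum in-field heart width
heart_v25 = 1.5 * hd_max * hw_max + rng.normal(0, 2, 300)

cri = hd_max * hw_max                         # cardiac risk index (CRI)
high_risk = (heart_v25 >= 10).astype(int)     # selection criterion: V25 >= 10%

auc = roc_auc_score(high_risk, cri)
fpr, tpr, thresholds = roc_curve(high_risk, cri)

# Optimal cut-off by the Youden index (sensitivity + specificity - 1).
youden = tpr - fpr
cutoff = thresholds[np.argmax(youden)]
print(f"AUC = {auc:.3f}, optimal CRI cut-off = {cutoff:.2f} cm^2")
```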

  3. Relationship of periodontal clinical parameters with bacterial composition in human dental plaque.

    PubMed

    Fujinaka, Hidetake; Takeshita, Toru; Sato, Hirayuki; Yamamoto, Tetsuji; Nakamura, Junji; Hase, Tadashi; Yamashita, Yoshihisa

    2013-06-01

    More than 600 bacterial species have been identified in the oral cavity, but only a limited number of species show a strong association with periodontitis. The purpose of the present study was to provide a comprehensive outline of the microbiota in dental plaque related to periodontal status. Dental plaque from 90 subjects was sampled, and the subjects were clustered based on bacterial composition using terminal restriction fragment length polymorphism of 16S rRNA genes. Here, we evaluated (1) periodontal clinical parameters between clusters; (2) the correlation of subgingival bacterial composition with supragingival bacterial composition; and (3) the interspecies associations of bacteria in dental plaque using a graphical Gaussian model. Cluster 1 (C1), which had a high prevalence of pathogenic bacteria in subgingival plaque, showed increased values of the clinical parameters. The values of the parameters in Cluster 2a (C2a), which had a high prevalence of non-pathogenic bacteria, were markedly lower than those in C1. A cluster with a low prevalence of non-pathogenic bacteria in supragingival plaque also showed increased values of the parameters. The bacterial patterns of subgingival plaque and supragingival plaque were significantly correlated. Chief pathogens, such as Porphyromonas gingivalis, formed a network with other pathogenic species in C1, whereas a network of non-pathogenic species, such as Rothia sp. and Lautropia sp., tended to compete with a network of pathogenic species in C2a. Periodontal status relates to non-pathogenic species as well as to pathogenic species, suggesting that interspecies connections among bacteria affect dental plaque virulence.

  4. Weak Value Amplification is Suboptimal for Estimation and Detection

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-01-01

    We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.

  5. Business model design for a wearable biofeedback system.

    PubMed

    Hidefjäll, Patrik; Titkova, Dina

    2015-01-01

    Wearable sensor technologies used to track daily activities have become successful in the consumer market. In order for wearable sensor technology to offer added value in the more challenging areas of stress-rehab care and occupational health, stress-related biofeedback parameters need to be monitored and more elaborate business models are needed. To identify probable success factors for a wearable biofeedback system (Affective Health) in the two mentioned market segments in a Swedish setting, we conducted literature studies and interviews with relevant representatives. Data were collected and used first to describe the two market segments and then to define likely feasible business model designs, according to the Business Model Canvas framework. Needs of stakeholders were identified as inputs to business model design. Value propositions, a key building block of a business model, were defined for each segment. The value proposition for occupational health was defined as "A tool that can both identify employees at risk of stress-related disorders and reinforce healthy sustainable behavior" and for healthcare as: "Providing therapists with objective data about the patient's emotional state and motivating patients to better engage in the treatment process".

  6. Deriving and Constraining 3D CME Kinematic Parameters from Multi-Viewpoint Coronagraph Images

    NASA Astrophysics Data System (ADS)

    Thompson, B. J.; Mei, H. F.; Barnes, D.; Colaninno, R. C.; Kwon, R.; Mays, M. L.; Mierla, M.; Moestl, C.; Richardson, I. G.; Verbeke, C.

    2017-12-01

    Determining the 3D properties of a coronal mass ejection using multi-viewpoint coronagraph observations can be a tremendously complicated process. There are many factors that inhibit the ability to unambiguously identify the speed, direction and shape of a CME. These factors include the need to separate the "true" CME mass from shock-associated brightenings, distinguish between non-radial or deflected trajectories, and identify asymmetric CME structures. Additionally, different measurement methods can produce different results, sometimes with great variations. Part of the reason for the wide range of values that can be reported for a single CME is due to the difficulty in determining the CME's longitude since uncertainty in the angle of the CME relative to the observing image planes results in errors in the speed and topology of the CME. Often the errors quoted in an individual study are remarkably small when compared to the range of values that are reported by different authors for the same CME. For example, two authors may report speeds of 700 ± 50 km/sec and 500 ± 50 km/sec for the same CME. Clearly a better understanding of the accuracy of CME measurements, and an improved assessment of the limitations of the different methods, would be of benefit. We report on a survey of CME measurements, wherein we compare the values reported by different authors and catalogs. The survey will allow us to establish typical errors for the parameters that are commonly used as inputs for CME propagation models such as ENLIL and EUHFORIA. One way modelers handle inaccuracies in CME parameters is to use an ensemble of CMEs, sampled across ranges of latitude, longitude, speed and width. The CMEs are simulated in order to determine the probability of a "direct hit" and, for the cases with a "hit," to derive a range of possible arrival times. Our study will provide improved guidelines for generating CME ensembles that more accurately sample across the range of plausible values.
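
    As a rough sketch of the ensemble idea described above, the snippet below draws an ensemble of CME input parameters (latitude, longitude, speed, half-width) from assumed Gaussian uncertainty ranges; the central values and spreads are placeholders, not results of the survey, and real ensembles for ENLIL or EUHFORIA would use the error estimates the survey aims to provide.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed central values and 1-sigma uncertainties for one CME; the numbers
# are illustrative only.
params = {
    "latitude_deg":   (5.0, 4.0),
    "longitude_deg":  (-20.0, 10.0),
    "speed_km_s":     (700.0, 100.0),
    "half_width_deg": (35.0, 8.0),
}

n_members = 50
ensemble = {
    name: rng.normal(mean, sigma, n_members)
    for name, (mean, sigma) in params.items()
}

# Each ensemble member is one set of CME cone-model inputs that could be
# passed to a heliospheric propagation model.
for i in range(3):
    member = {name: round(values[i], 1) for name, values in ensemble.items()}
    print(member)
```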

  7. Reservoir Identification: Parameter Characterization or Feature Classification

    NASA Astrophysics Data System (ADS)

    Cao, J.

    2017-12-01

    The ultimate goal of oil and gas exploration is to find the oil or gas reservoirs with industrial mining value. Therefore, the core task of modern oil and gas exploration is to identify oil or gas reservoirs on the seismic profiles. Traditionally, the reservoir is identified by seismic inversion of a series of physical parameters such as porosity, saturation, permeability, formation pressure, and so on. Due to the heterogeneity of the geological medium, the approximation of the inversion model and the incompleteness and noisiness of the data, the inversion results are highly uncertain and must be calibrated or corrected with well data. In areas where there are few wells or no well, reservoir identification based on seismic inversion is high-risk. Reservoir identification is essentially a classification issue. In the identification process, the underground rocks are divided into reservoirs with industrial mining value and host rocks with non-industrial mining value. In addition to the traditional physical parameters classification, the classification may be achieved using one or a few comprehensive features. By introducing the concept of seismic-print, we have developed a new reservoir identification method based on seismic-print analysis. Furthermore, we explore the possibility to use deep learning to discover the seismic-print characteristics of oil and gas reservoirs. Preliminary experiments have shown that the deep learning of seismic data could distinguish gas reservoirs from host rocks. The combination of both seismic-print analysis and seismic deep learning is expected to be a more robust reservoir identification method. The work was supported by NSFC under grant No. 41430323 and No. U1562219, and the National Key Research and Development Program under Grant No. 2016YFC0601

  8. Estimation of Handling Qualities Parameters of the Tu-144 Supersonic Transport Aircraft from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Curry, Timothy J.; Batterson, James G. (Technical Monitor)

    2000-01-01

    Low order equivalent system (LOES) models for the Tu-144 supersonic transport aircraft were identified from flight test data. The mathematical models were given in terms of transfer functions with a time delay by the military standard MIL-STD-1797A, "Flying Qualities of Piloted Aircraft," and the handling qualities were predicted from the estimated transfer function coefficients. The coefficients and the time delay in the transfer functions were estimated using a nonlinear equation error formulation in the frequency domain. Flight test data from pitch, roll, and yaw frequency sweeps at various flight conditions were used for parameter estimation. Flight test results are presented in terms of the estimated parameter values, their standard errors, and output fits in the time domain. Data from doublet maneuvers at the same flight conditions were used to assess the predictive capabilities of the identified models. The identified transfer function models fit the measured data well and demonstrated good prediction capabilities. The Tu-144 was predicted to be between levels 2 and 3 for all longitudinal maneuvers and level 1 for all lateral maneuvers. High estimates of the equivalent time delay in the transfer function model caused the poor longitudinal rating.
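
    A minimal sketch of fitting a low-order transfer function with an equivalent time delay to frequency-response data by nonlinear least squares, in the spirit of the frequency-domain formulation mentioned above. The first-order-plus-delay form and the synthetic "measured" response are assumptions; the actual LOES forms are those prescribed by MIL-STD-1797A.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic frequency-response data for a first-order system with delay,
# standing in for flight-test frequency-sweep data (illustrative only).
w = np.logspace(-1, 1, 60)                    # rad/s
K_true, tau_true, delay_true = 2.0, 0.8, 0.15
H_meas = K_true / (1j * w * tau_true + 1) * np.exp(-1j * w * delay_true)
H_meas += 0.02 * (rng.standard_normal(w.size) + 1j * rng.standard_normal(w.size))

def residuals(p):
    K, tau, delay = p
    H_model = K / (1j * w * tau + 1) * np.exp(-1j * w * delay)
    err = H_meas - H_model
    return np.concatenate([err.real, err.imag])   # real residual vector

fit = least_squares(residuals, x0=[1.0, 1.0, 0.0])
print("estimated K, tau, delay:", fit.x)
```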

  9. Knowledge, transparency, and refutability in groundwater models, an example from the Death Valley regional groundwater flow system

    USGS Publications Warehouse

    Hill, Mary C.; Faunt, Claudia C.; Belcher, Wayne; Sweetkind, Donald; Tiedeman, Claire; Kavetski, Dmitri

    2013-01-01

    This work demonstrates how available knowledge can be used to build more transparent and refutable computer models of groundwater systems. The Death Valley regional groundwater flow system, which surrounds a proposed site for a high level nuclear waste repository of the United States of America, and the Nevada National Security Site (NNSS), where nuclear weapons were tested, is used to explore model adequacy, identify parameters important to (and informed by) observations, and identify existing old and potential new observations important to predictions. Model development is pursued using a set of fundamental questions addressed with carefully designed metrics. Critical methods include using a hydrogeologic model, managing model nonlinearity by designing models that are robust while maintaining realism, using error-based weighting to combine disparate types of data, and identifying important and unimportant parameters and observations and optimizing parameter values with computationally frugal schemes. The frugal schemes employed in this study require relatively few (10s–1000s of) parallelizable model runs. This is beneficial because models able to approximate the complex site geology defensibly tend to have high computational cost. The issue of model defensibility is particularly important given the contentious political issues involved.

  10. Concomitant semi-quantitative and visual analysis improves the predictive value on treatment outcome of interim 18F-fluorodeoxyglucose / Positron Emission Tomography in advanced Hodgkin lymphoma.

    PubMed

    Biggi, Alberto; Bergesio, Fabrizio; Chauvie, Stephane; Bianchi, Andrea; Menga, Massimo; Fallanca, Federico; Hutchings, Martin; Gregianin, Michele; Meignan, Michel; Gallamini, Andrea

    2017-07-27

    Qualitative assessment using the Deauville five-point scale (DS) is the gold standard for interim and end-of-treatment PET interpretation in lymphoma. In the present study we assessed the reliability and the prognostic value of different semi-quantitative (SQ) parameters in comparison with DS for interim PET (iPET) interpretation in Hodgkin lymphoma (HL). A cohort of 82 out of 260 patients with advanced-stage HL enrolled in the International Validation Study (IVS), scored as 3 to 5 by the expert panel, was included in the present report. Two nuclear medicine physicians blinded to patient history, clinical data and treatment outcome independently reviewed the iPET using the following parameters: DS, SUVMax, SUVPeak of the most active lesion, QMax (ratio of SUVMax of the lesion to liver SUVMax) and QRes (ratio of SUVPeak of the lesion to liver SUVMean). The optimal sensitivity, specificity, positive and negative predictive value to predict treatment outcome were calculated for all the above parameters with receiver operating characteristic (ROC) analysis. The prognostic value of all parameters was similar, the best cut-off value being 4 for DS (Area Under the Curve, AUC, 0.81 CI95%: 0.72-0.90), 3.81 for SUVMax (AUC 0.82 CI95%: 0.73-0.91), 3.20 for SUVPeak (AUC 0.86 CI95%: 0.77-0.94), 1.07 for QMax (AUC 0.84 CI95%: 0.75-0.93) and 1.38 for QRes (AUC 0.84 CI95%: 0.75-0.93). The reproducibility of the different parameters was also similar, as the inter-observer variability measured with Cohen's kappa was 0.93 (95% CI 0.84-1.01) for the DS, 0.88 (0.77-0.98) for SUVMax, 0.82 (0.70-0.95) for SUVPeak, 0.85 (0.74-0.97) for QRes and 0.78 (0.65-0.92) for QMax. Due to the high specificity of SUVPeak (0.87) and to the good sensitivity of DS (0.86), when both parameters were used the positive predictive value increased from 0.65 for DS alone to 0.79. When both parameters were positive in iPET, 3-year failure-free survival (FFS) was significantly lower compared to patients whose iPET was interpreted with qualitative parameters only (DS 4 or 5): 21% vs 35%. On the other hand, the FFS of patients with negative results was not significantly different (88% vs 86%). In this study we demonstrated that, by combining the semi-quantitative parameter SUVPeak with a purely qualitative interpretation based on DS, it is possible to increase the positive predictive value of iPET and to identify with higher precision the patient subset with a very dismal prognosis. However, these retrospective findings should be confirmed prospectively in a larger patient cohort.

  11. Differential FDG-PET Uptake Patterns in Uninfected and Infected Central Prosthetic Vascular Grafts.

    PubMed

    Berger, P; Vaartjes, I; Scholtens, A; Moll, F L; De Borst, G J; De Keizer, B; Bots, M L; Blankensteijn, J D

    2015-09-01

    (18)F-fluorodeoxyglucose (FDG) positron emission tomography (PET) scanning has been suggested as a means to detect vascular graft infections. However, little is known about the typical FDG uptake patterns associated with synthetic vascular graft implantation. The aim of the present study was to compare uninfected and infected central vascular grafts in terms of various parameters used to interpret PET images. From 2007 through 2013, patients in whom a FDG-PET scan was performed for any indication after open or endovascular central arterial prosthetic reconstruction were identified. Graft infection was defined as the presence of clinical or biochemical signs of graft infection with positive cultures or based on a combination of clinical, biochemical, and imaging parameters (other than PET scan data). All other grafts were deemed uninfected. PET images were analyzed using maximum standardized uptake value (SUVmax), tissue to background ratio (TBR), visual grading scale (VGS), and focality of FDG uptake (focal or homogenous). Twenty-seven uninfected and 32 infected grafts were identified. Median SUVmax was 3.3 (interquartile range [IQR] 2.0-4.2) for the uninfected grafts and 5.7 for the infected grafts (IQR 2.2-7.8). Mean TBR was 2.0 (IQR 1.4-2.5) and 3.2 (IQR 1.5-3.5), respectively. On VGS, 44% of the uninfected and 72% of the infected grafts were judged as a high probability for infection. Homogenous FDG uptake was noted in 74% of the uninfected and 31% of the infected grafts. Uptake patterns of uninfected and infected grafts showed a large overlap for all parameters. The patterns of FDG uptake for uninfected vascular grafts largely overlap with those of infected vascular grafts. This questions the value of these individual FDG-PET-CT parameters in identifying infected grafts. Copyright © 2015 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  12. The human brain representation of odor identification.

    PubMed

    Kjelvik, Grete; Evensmoen, Hallvard R; Brezova, Veronika; Håberg, Asta K

    2012-07-01

    Odor identification (OI) tests are increasingly used clinically as biomarkers for Alzheimer's disease and schizophrenia. The aim of this study was to directly compare the neuronal correlates to identified odors vs. nonidentified odors. Seventeen females with normal olfactory function underwent a functional magnetic resonance imaging (fMRI) experiment with postscanning assessment of spontaneous uncued OI. An event-related analysis was performed to compare within-subject activity to spontaneously identified vs. nonidentified odors at the whole brain level, and in anatomic and functional regions of interest (ROIs) in the medial temporal lobe (MTL). Parameter estimate values and blood oxygen level-dependent (BOLD) signal curves for correctly identified and nonidentified odors were derived from functional ROIs in hippocampus, entorhinal, piriform, and orbitofrontal cortices. Number of activated voxels and max parameter estimate values were obtained from anatomic ROIs in the hippocampus and the entorhinal cortex. At the whole brain level the correct OI gave rise to increased activity in the left entorhinal cortex and secondary olfactory structures, including the orbitofrontal cortex. Increased activation was also observed in fusiform, primary visual, and auditory cortices, inferior frontal plus inferior temporal gyri. The anatomic MTL ROI analysis showed increased activation in the left entorhinal cortex, right hippocampus, and posterior parahippocampal gyri in correct OI. In the entorhinal cortex and hippocampus the BOLD signal increased specifically in response to identified odors and decreased for nonidentified odors. In orbitofrontal and piriform cortices both identified and nonidentified odors gave rise to an increased BOLD signal, but the response to identified odors was significantly greater than that for nonidentified odors. These results support a specific role for entorhinal cortex and hippocampus in OI, whereas piriform and orbitofrontal cortices are active in both smelling and OI. Moreover, episodic as well as semantic memory systems appeared to support OI.

  13. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. 
The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
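
    The snippet below sketches how Fisher information can quantify parameter identifiability for a candidate current input, using finite-difference output sensitivities of a simple first-order RC equivalent-circuit voltage model. The model, parameter values and noise level are placeholders, not the dissertation's models or optimized inputs.

```python
import numpy as np

dt, T = 1.0, 600
t = np.arange(0, T, dt)
current = np.sin(2 * np.pi * t / 120.0)       # candidate input trajectory (A)

def simulate_voltage(theta, i_in):
    """First-order RC equivalent-circuit voltage response (illustrative model)."""
    r0, r1, c1 = theta
    v_rc = 0.0
    out = np.empty_like(i_in)
    for k, i_k in enumerate(i_in):
        v_rc += dt * (-v_rc / (r1 * c1) + i_k / c1)   # RC branch state update
        out[k] = 3.7 - r0 * i_k - v_rc                # OCV assumed constant
    return out

def fisher_information(theta, i_in, sigma=0.01, eps=1e-4):
    """FIM from finite-difference sensitivities, assuming Gaussian sensor noise."""
    base = simulate_voltage(theta, i_in)
    S = np.empty((base.size, len(theta)))
    for j in range(len(theta)):
        pert = np.array(theta, dtype=float)
        pert[j] += eps * theta[j]
        S[:, j] = (simulate_voltage(pert, i_in) - base) / (eps * theta[j])
    return S.T @ S / sigma**2

theta_nominal = [0.05, 0.03, 2000.0]          # R0 (ohm), R1 (ohm), C1 (F)
F = fisher_information(theta_nominal, current)
# A larger determinant (D-optimality) indicates better joint identifiability.
print("log det FIM:", np.linalg.slogdet(F)[1])
```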

  14. Identifiability of PBPK Models with Applications to ...

    EPA Pesticide Factsheets

    Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss different types of identifiability that occur in PBPK models and give reasons why they occur. We particularly focus on how the mathematical structure of a PBPK model and lack of appropriate data can lead to statistical models in which it is impossible to estimate at least some parameters precisely. Methods are reviewed which can determine whether a purely linear PBPK model is globally identifiable. We propose a theorem which determines when identifiability at a set of finite and specific values of the mathematical PBPK model (global discrete identifiability) implies identifiability of the statistical model. However, we are unable to establish conditions that imply global discrete identifiability, and conclude that the only safe approach to analysis of PBPK models involves Bayesian analysis with truncated priors. Finally, computational issues regarding posterior simulations of PBPK models are discussed. The methodology is very general and can be applied to numerous PBPK models which can be expressed as linear time-invariant systems. A real data set of a PBPK model for exposure to dimethyl arsinic acid (DMA(V)) is presented to illustrate the proposed methodology.

  15. Inverse gas chromatographic determination of solubility parameters of excipients.

    PubMed

    Adamska, Katarzyna; Voelkel, Adam

    2005-11-04

    The principal aim of this work was the application of inverse gas chromatography (IGC) for the estimation of the solubility parameter of pharmaceutical excipients. The retention data of a number of test solutes were used to calculate the Flory-Huggins interaction parameter (χ1,2∞) and then the solubility parameter (δ2), the corrected solubility parameter (δT) and its components (δd, δp, δh) by using different procedures. The influence of different values of the test solutes' solubility parameter (δ1) on the calculated values was estimated. The solubility parameter values obtained for all excipients from the slope, following the procedure of Guillet and co-workers, are higher than those obtained from the components according to the Voelkel and Janas procedure. It was found that the solubility parameter value of the test solutes influences, but not significantly, the values of the solubility parameter of the excipients.

  16. Artificial neural networks as alternative tool for minimizing error predictions in manufacturing ultradeformable nanoliposome formulations.

    PubMed

    León Blanco, José M; González-R, Pedro L; Arroyo García, Carmen Martina; Cózar-Bernal, María José; Calle Suárez, Marcos; Canca Ortiz, David; Rabasco Álvarez, Antonio María; González Rodríguez, María Luisa

    2018-01-01

    This work was aimed at determining the feasibility of artificial neural networks (ANN) by implementing backpropagation algorithms with default settings to generate better predictive models than multiple linear regression (MLR) analysis. The study hypothesis was tested on timolol-loaded liposomes. As training data for the ANN, the causal factors were used and fed into the computer program. The number of training cycles was identified in order to optimize the performance of the ANN. The optimization was performed by minimizing the error between the predicted and real response values in the training step. The results showed that training was stopped at 10 000 training cycles with 80% of the pattern values, because at this point the ANN generalizes better. Minimum validation error was achieved at 12 hidden neurons in a single layer. MLR has great prediction ability, with errors between predicted and real values lower than 1% in some of the parameters evaluated. Thus, the performance of this model was compared to that of the MLR using a factorial design. Optimal formulations were identified by minimizing the distance between measured and theoretical parameters and by estimating the prediction errors. Results indicate that the ANN shows much better predictive ability than the MLR model. These findings demonstrate the increased efficiency of the combination of ANN and design of experiments, compared to the conventional MLR modeling techniques.

  17. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
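
    A toy sketch of the active-learning idea described above: a small classifier is trained on the parameter combinations already simulated and is used to rank the remaining pool, so that only the most promising combinations are passed to the (expensive) simulator. The stand-in "simulator", acceptance rule and batch sizes are assumptions, not the UWBCS calibration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def simulate(params):
    """Toy stand-in for a costly simulation: 'accept' if outputs match targets."""
    return float(np.sum((params - 0.5) ** 2) < 0.05)

pool = rng.uniform(0, 1, size=(5000, 3))       # candidate parameter combinations
labeled_idx = list(rng.choice(len(pool), 200, replace=False))
labels = {i: simulate(pool[i]) for i in labeled_idx}

for round_ in range(5):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(pool[labeled_idx], [labels[i] for i in labeled_idx])
    remaining = [i for i in range(len(pool)) if i not in labels]
    scores = clf.predict_proba(pool[remaining])[:, -1]
    # Simulate only the combinations the learner ranks as most promising.
    for i in np.array(remaining)[np.argsort(scores)[-100:]]:
        labels[int(i)] = simulate(pool[int(i)])
        labeled_idx.append(int(i))

accepted = [i for i, y in labels.items() if y == 1.0]
print(f"simulated {len(labels)} of {len(pool)} combinations, found {len(accepted)} matches")
```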

  18. Definition of a simple statistical parameter for the quantification of orientation in two dimensions: application to cells on grooves of nanometric depths.

    PubMed

    Davidson, P; Bigerelle, M; Bounichane, B; Giazzon, M; Anselme, K

    2010-07-01

    Contact guidance is generally evaluated by measuring the orientation angle of cells. However, statistical analyses are rarely performed on these parameters. Here we propose a statistical analysis based on a new parameter σ, the orientation parameter, defined as the dispersion of the distribution of orientation angles. This parameter can be used to obtain a truncated Gaussian distribution that models the distribution of the data between −90° and +90°. We established a threshold value of the orientation parameter below which the data can be considered to be aligned within a 95% confidence interval. Applying our orientation parameter to cells on grooves and using a modelling approach, we established the relationship σ = α_meas + (52° − α_meas)/(1 + C_GDE·R), where the parameter C_GDE represents the sensitivity of cells to groove depth, and R the groove depth. The values of C_GDE obtained allowed us to compare the contact guidance of human osteoprogenitor (HOP) cells across experiments involving different groove depths, times in culture and inoculation densities. We demonstrate that HOP cells are able to identify and respond to the presence of grooves 30, 100, 200 and 500 nm deep and that the deeper the grooves, the higher the cell orientation. The evolution of the sensitivity (C_GDE) with culture time is roughly sigmoidal with an asymptote, which is a function of inoculation density. The σ parameter defined here is a universal parameter that can be applied to all orientation measurements and does not require a mathematical background or knowledge of directional statistics. Copyright 2010 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
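
    The sketch below illustrates the two steps implied above: estimating the orientation parameter σ as the dispersion of measured orientation angles and then fitting the groove-depth sensitivity C_GDE from σ = α_meas + (52° − α_meas)/(1 + C_GDE·R). The synthetic angle distributions, the use of a plain standard deviation instead of a truncated-Gaussian fit, and the parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Groove depths R (nm) and the dispersion of a perfectly aligned control.
depths = np.array([30.0, 100.0, 200.0, 500.0])
alpha_meas = 5.0

def orientation_parameter(angles):
    """Dispersion (standard deviation) of the orientation-angle distribution."""
    return angles.std(ddof=1)

# Synthetic orientation angles in [-90, 90] degrees: narrower distributions on
# deeper grooves (illustrative only).
sigma_obs = np.array([
    orientation_parameter(np.clip(rng.normal(0, 45 - 0.07 * r, 200), -90, 90))
    for r in depths
])

def model(r, c_gde):
    # sigma = alpha_meas + (52 deg - alpha_meas) / (1 + C_GDE * R)
    return alpha_meas + (52.0 - alpha_meas) / (1.0 + c_gde * r)

(c_gde_fit,), _ = curve_fit(model, depths, sigma_obs, p0=[0.01])
print(f"estimated groove-depth sensitivity C_GDE = {c_gde_fit:.4f} per nm")
```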

  19. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background Most cancer simulation models include unobservable parameters that determine the disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  20. Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model

    NASA Astrophysics Data System (ADS)

    Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr

    2017-10-01

    Advanced numerical analysis of concrete building structures in sophisticated computing systems currently requires the involvement of nonlinear mechanics tools. Efforts to design safer, more durable and, above all, more economically efficient concrete structures are supported by the use of advanced nonlinear concrete material models and the geometrically nonlinear approach. The application of nonlinear mechanics tools undoubtedly presents another step towards approximating the real behaviour of concrete building structures within computer numerical simulations. However, the success of this application depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their parameters. The effective application of nonlinear concrete material models within computer simulations often becomes very problematic because these material models frequently contain parameters (material constants) whose values are difficult to obtain, yet obtaining correct parameter values is essential to ensure the proper functioning of the material model. One possibility that permits a successful solution of this problem is the use of optimization algorithms for optimization-based inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it seeks parameter values of the material model such that the data obtained from the computer simulation best approximate the experimental data. This paper focuses on the optimization-based inverse identification of the parameters of a concrete cap material model known as the Continuous Surface Cap Model. Within this paper, material parameters of the model are identified on the basis of the interaction between nonlinear computer simulations, gradient-based and nature-inspired optimization algorithms, and experimental data, the latter taking the form of a load-extension curve obtained from the evaluation of uniaxial tensile test results. The aim of this research was to obtain material model parameters corresponding to quasi-static tensile loading, which may be further used in research involving dynamic and high-speed tensile loading. Based on the obtained results it can be concluded that this goal has been reached.
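
    A minimal sketch of the optimization-based inverse identification loop described above: material parameters are adjusted so that a simulated load-extension curve best matches an experimental one. A cheap analytic function stands in for the nonlinear finite-element simulation with the Continuous Surface Cap Model, and the data and parameterization are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

# "Experimental" load-extension curve (illustrative stand-in for the uniaxial
# tensile test data used in the study); units are arbitrary.
extension = np.linspace(0.0, 0.3, 40)
load_exp = 12.0 * extension * np.exp(-extension / 0.08)

def simulate_load(params, ext):
    """Placeholder for the nonlinear FE simulation with the cap material model."""
    stiffness, softening = params
    return stiffness * ext * np.exp(-ext / softening)

def objective(params):
    # Sum of squared differences between simulated and experimental curves.
    return np.sum((simulate_load(params, extension) - load_exp) ** 2)

result = minimize(objective, x0=[5.0, 0.2], method="Nelder-Mead")
print("identified parameters (stiffness, softening):", result.x)
```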

  1. Extended Kalman Filter framework for forecasting shoreline evolution

    USGS Publications Warehouse

    Long, Joseph; Plant, Nathaniel G.

    2012-01-01

    A shoreline change model incorporating both long- and short-term evolution is integrated into a data assimilation framework that uses sparse observations to generate an updated forecast of shoreline position and to estimate unobserved geophysical variables and model parameters. Application of the assimilation algorithm provides quantitative statistical estimates of combined model-data forecast uncertainty which is crucial for developing hazard vulnerability assessments, evaluation of prediction skill, and identifying future data collection needs. Significant attention is given to the estimation of four non-observable parameter values and separating two scales of shoreline evolution using only one observable morphological quantity (i.e. shoreline position).
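
    The snippet below sketches a Kalman-filter update of an augmented state (shoreline position plus an uncertain change-rate parameter) from sparse position observations, in the spirit of the framework described above; the trivial linear shoreline model, noise levels and survey frequency are assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(7)

dt = 1.0                                        # time step (e.g. months)
n_steps = 120
true_rate = -0.4                                # true long-term change rate (m/step)

# Augmented state: [shoreline position, change-rate parameter].
x = np.array([0.0, -0.1])                       # initial guess
P = np.diag([4.0, 0.25])                        # initial covariance
Q = np.diag([0.5, 1e-4])                        # process noise
R = np.array([[4.0]])                           # observation noise variance
H = np.array([[1.0, 0.0]])                      # only position is observed

truth = 0.0
for k in range(n_steps):
    truth += true_rate * dt + rng.normal(0, 0.5)

    # Forecast step: position advances with the (estimated) rate parameter.
    F = np.array([[1.0, dt], [0.0, 1.0]])       # Jacobian of the model
    x = F @ x
    P = F @ P @ F.T + Q

    # Sparse observations: a survey is available only every 12 steps.
    if k % 12 == 0:
        z = np.array([truth + rng.normal(0, 2.0)])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

print(f"estimated rate {x[1]:.3f} m/step (true {true_rate})")
```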

  2. Holographic Lifshitz superconductors: Analytic solution

    NASA Astrophysics Data System (ADS)

    Natsuume, Makoto; Okamura, Takashi

    2018-03-01

    We construct an analytic solution for a one-parameter family of holographic superconductors in asymptotically Lifshitz spacetimes. We utilize this solution to explore various properties of the systems such as (1) the superfluid phase background and the grand canonical potential, (2) the order parameter response function or the susceptibility, (3) the London equation, and (4) the background with a superfluid flow or a magnetic field. From these results, we identify the dual Ginzburg-Landau theory including numerical coefficients. Also, the dynamic critical exponent zD associated with the critical point is given by zD=2 irrespective of the value of the Lifshitz exponent z .

  3. An RBF-PSO based approach for modeling prostate cancer

    NASA Astrophysics Data System (ADS)

    Perracchione, Emma; Stura, Ilaria

    2016-06-01

    Prostate cancer is one of the most common cancers in men; it grows slowly and can be diagnosed at an early stage by dosing the Prostate Specific Antigen (PSA). However, a relapse after the primary therapy could arise in 25-30% of cases, and different growth characteristics of the new tumor are observed. In order to get a better understanding of the phenomenon, a two-parameter growth model is considered. To estimate the parameter values identifying the disease risk level, a novel approach based on combining Particle Swarm Optimization (PSO) with meshfree interpolation methods is proposed.
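
    A rough sketch of the parameter-estimation idea: a minimal particle swarm optimizer fits two growth parameters to synthetic PSA-like measurements. The Gompertz-type growth law, the data and the PSO settings are assumptions for illustration; the paper's approach additionally combines PSO with meshfree (RBF) interpolation, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic PSA-like measurements following a two-parameter growth law
# y(t) = y0 * exp((a/b) * (1 - exp(-b*t))) with y0 fixed (illustrative only).
t = np.linspace(0, 24, 13)                     # months
a_true, b_true, y0 = 0.30, 0.08, 1.0
y_obs = y0 * np.exp((a_true / b_true) * (1 - np.exp(-b_true * t)))
y_obs += rng.normal(0, 0.05, t.size)

def sse(params):
    a, b = params
    y = y0 * np.exp((a / b) * (1 - np.exp(-b * t)))
    return np.sum((y - y_obs) ** 2)

# Minimal particle swarm optimization over the two growth parameters.
low, high = np.array([0.01, 0.02]), np.array([1.0, 0.5])
n_particles, n_iter = 30, 200
pos = rng.uniform(low, high, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([sse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([sse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("estimated (a, b):", gbest, " true:", (a_true, b_true))
```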

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sai, P.M.S.; Ahmed, J.; Krishnaiah, K.

    Activated carbon is produced from coconut shell char using steam or carbon dioxide as the reacting gas in a 100 mm diameter fluidized bed reactor. The effect of process parameters such as reaction time, fluidizing velocity, particle size, static bed height, temperature of activation, fluidizing medium, and solid raw material on activation is studied. The product is characterized by determination of iodine number and BET surface area. The product obtained in the fluidized bed reactor is much superior in quality to the activated carbons produced by conventional processes. Based on the experimental observations, the optimum values of process parameters are identified.

  5. The power and robustness of maximum LOD score statistics.

    PubMed

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.

  6. Minimization of Defective Products in The Department of Press Bridge & Rib Through Six Sigma DMAIC Phases

    NASA Astrophysics Data System (ADS)

    Rochman, YA; Agustin, A.

    2017-06-01

    This study applies the Six Sigma approach of Define, Measure, Analyze, Improve/Implement and Control (DMAIC) to minimize the number of defective products in the bridge & rib department. Five types of defects were the most dominant: broken rib, broken sound board, strained rib, rib sliding and sound board minori. The objective is to improve quality through the DMAIC phases. In the define phase, the critical-to-quality (CTQ) parameters for minimizing product defects were identified through a Pareto chart and FMEA, and waste was identified based on the current value stream mapping. In the measure phase, control limits were specified to monitor product variation, and the DPMO (defects per million opportunities) and the sigma level were calculated. In the analyze phase, the most dominant defect types were determined and the causes of defective products were identified. In the improve phase, the existing design was modified through various alternative solutions developed in brainstorming sessions; the solutions were selected based on the results of the FMEA, and improvements were made to the seven priority causes of defects with the highest RPN values. The control phase focuses on sustaining the improvements made. Proposed improvements include creating and defining standard operating procedures, improving quality and eliminating waste from defective products.
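
    As a small illustration of the measure-phase quantities mentioned above, the sketch below computes DPMO and the corresponding sigma level (with the conventional 1.5-sigma shift) from hypothetical inspection counts; the numbers are not from the study.

```python
from scipy.stats import norm

# Hypothetical inspection results for the bridge & rib department.
units_inspected = 4200
opportunities_per_unit = 5        # e.g. the five dominant defect types
defects_found = 380

dpmo = defects_found / (units_inspected * opportunities_per_unit) * 1_000_000
yield_fraction = 1 - defects_found / (units_inspected * opportunities_per_unit)

# Conventional short-term sigma level includes the 1.5-sigma shift.
sigma_level = norm.ppf(yield_fraction) + 1.5

print(f"DPMO = {dpmo:.0f}, sigma level = {sigma_level:.2f}")
```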

  7. Assessment of the Effects of Entrainment and Wind Shear on Nuclear Cloud Rise Modeling

    NASA Astrophysics Data System (ADS)

    Zalewski, Daniel; Jodoin, Vincent

    2001-04-01

    Accurate modeling of nuclear cloud rise is critical in hazard prediction following a nuclear detonation. This thesis recommends improvements to the model currently used by DOD. It considers a single-term versus a three-term entrainment equation, the value of the entrainment and eddy viscous drag parameters, as well as the effect of wind shear on the cloud rise following a nuclear detonation. It examines departures from the 1979 version of the Department of Defense Land Fallout Interpretive Code (DELFIC) in the current code used in the Hazard Prediction and Assessment Capability (HPAC) code version 3.2. The recommendation for a single-term entrainment equation, with constant value parameters, without wind shear corrections, and without cloud oscillations is based on both a statistical analysis using 67 U.S. nuclear atmospheric test shots and the physical representation of the modeling. The statistical analysis optimized the parameter values of interest for four cases: the three-term entrainment equation with and without wind shear, as well as the single-term entrainment equation with and without wind shear. The thesis then examines the effect of cloud oscillations as a significant departure in the code. Modifications to user input atmospheric tables are identified as a potential problem in the calculation of stabilized cloud dimensions in HPAC.

  8. Cable Overheating Risk Warning Method Based on Impedance Parameter Estimation in Distribution Network

    NASA Astrophysics Data System (ADS)

    Yu, Zhang; Xiaohui, Song; Jianfang, Li; Fei, Gao

    2017-05-01

    Cable overheating reduces the cable insulation level, speeds up insulation aging, and can even cause short-circuit faults. Cable overheating risk identification and warning are therefore necessary for distribution network operators. A cable overheating risk warning method based on impedance parameter estimation is proposed in this paper to improve the safety and reliability of distribution network operation. Firstly, a cable impedance estimation model is established using the least squares method on data from the distribution SCADA system, in order to improve the impedance parameter estimation accuracy. Secondly, the threshold value of the cable impedance is calculated from historical data, and the forecast value of the cable impedance is calculated from future forecast data from the distribution SCADA system. Thirdly, a library of cable overheating risk warning rules is established; the cable impedance forecast value is calculated and its rate of change analysed, and the overheating risk of the cable line is then flagged using the warning rules library, according to the relationship between impedance variation and line temperature rise. The overheating risk warning method is simulated in the paper. The simulation results show that the method can accurately identify the impedance and forecast the temperature rise of cable lines in the distribution network. The overheating risk warning results can provide a decision basis for operation, maintenance and repair.
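
    The sketch below illustrates the first step described above: estimating a cable's series impedance (R and X) by least squares from SCADA-style measurements, here idealized as complex currents and voltage drops for a simple series-impedance line model. The synthetic data and the model simplifications are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic SCADA snapshots: complex current through the cable and the complex
# voltage drop across it (sending-end minus receiving-end voltage), per sample.
n = 96                                          # e.g. one day of 15-min samples
r_true, x_true = 0.18, 0.12                     # ohm (illustrative)
current = rng.uniform(50, 200, n) * np.exp(1j * rng.uniform(-0.4, 0.1, n))
v_drop = current * (r_true + 1j * x_true)
v_drop += rng.normal(0, 0.5, n) + 1j * rng.normal(0, 0.5, n)

# Least squares estimate of Z = R + jX from v_drop = Z * current, using the
# real formulation  [Re(I) -Im(I); Im(I) Re(I)] [R; X] = [Re(dV); Im(dV)].
A = np.vstack([
    np.column_stack([current.real, -current.imag]),
    np.column_stack([current.imag,  current.real]),
])
b = np.concatenate([v_drop.real, v_drop.imag])
(r_est, x_est), *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"estimated R = {r_est:.3f} ohm, X = {x_est:.3f} ohm")
# The impedance trend over time could then be compared against a threshold
# derived from historical data to flag overheating risk.
```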

  9. Computer-aided interpretation approach for optical tomographic images

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe J.; Scheel, Alexander K.; Beuthan, Jürgen; Hielscher, Andreas H.

    2010-11-01

    A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm that makes use of a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike in previous studies, this allows for combining multiple image features, such as minimum and maximum values of the absorption coefficient, for identifying affected and unaffected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground truth benchmarks to determine the performance of image interpretations. Using data from 100 finger joints, findings suggest that some parameter combinations lead to higher sensitivities, while others to higher specificities when compared to single parameter classifications employed in previous studies. Maximum performances are reached when combining the minimum/maximum ratio of the absorption coefficient and image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than values obtained when only single parameter classifications were used, where sensitivities and specificities remained well below 0.8.

  10. Laser cutting metallic plates using a 2kW direct diode laser source

    NASA Astrophysics Data System (ADS)

    Fallahi Sichani, E.; Hauschild, D.; Meinschien, J.; Powell, J.; Assunção, E. G.; Blackburn, J.; Khan, A. H.; Kong, C. Y.

    2015-07-01

    This paper investigates the feasibility of using a 2kW direct diode laser source for producing high-quality cuts in a variety of materials. Cutting trials were performed in a two-stage experimental procedure. The first phase of trials was based on a one-factor-at-a-time change of process parameters aimed at exploring the process window and finding a semi-optimum set of parameters for each material/thickness combination. In the second phase, a full factorial experimental matrix was performed for each material and thickness, as a result of which, the optimum cutting parameters were identified. Characteristic values of the optimum cuts were then measured as per BS EN ISO 9013:2002.

  11. Towards a consensus-based biokinetic model for green microalgae - The ASM-A.

    PubMed

    Wágner, Dorottya S; Valverde-Pérez, Borja; Sæbø, Mariann; Bregua de la Sotilla, Marta; Van Wagenen, Jonathan; Smets, Barth F; Plósz, Benedek Gy

    2016-10-15

    Cultivation of microalgae in open ponds and closed photobioreactors (PBRs) using wastewater resources offers an opportunity for biochemical nutrient recovery. Effective reactor system design and process control of PBRs requires process models. Several models with different complexities have been developed to predict microalgal growth. However, none of these models can effectively describe all the relevant processes when microalgal growth is coupled with nutrient removal and recovery from wastewaters. Here, we present a mathematical model developed to simulate green microalgal growth (ASM-A) using the systematic approach of the activated sludge modelling (ASM) framework. The process model - identified based on a literature review and using new experimental data - accounts for factors influencing photoautotrophic and heterotrophic microalgal growth, nutrient uptake and storage (i.e. Droop model) and decay of microalgae. Model parameters were estimated using laboratory-scale batch and sequenced batch experiments using the novel Latin Hypercube Sampling based Simplex (LHSS) method. The model was evaluated using independent data obtained in a 24-L PBR operated in sequenced batch mode. Identifiability of the model was assessed. The model can effectively describe microalgal biomass growth, ammonia and phosphate concentrations as well as the phosphorus storage using a set of average parameter values estimated with the experimental data. A statistical analysis of simulation and measured data suggests that culture history and substrate availability can introduce significant variability on parameter values for predicting the reaction rates for bulk nitrate and the intracellularly stored nitrogen state-variables, thereby requiring scenario specific model calibration. ASM-A was identified using standard cultivation medium and it can provide a platform for extensions accounting for factors influencing algal growth and nutrient storage using wastewater resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
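
    The snippet below sketches Droop-type cell-quota kinetics of the kind that underlie ASM-A's nutrient storage description: growth depends on an internal nitrogen quota that is filled by Monod-type uptake from the bulk. The two-state simplification and all parameter values are assumptions, not the calibrated ASM-A parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simplified Droop-type model: biomass X (g/m3), internal N quota q (gN/gX),
# bulk ammonium S (gN/m3). Parameter values are illustrative only.
mu_max, q_min = 1.6, 0.04        # 1/d, gN/gX
v_max, K_s = 0.12, 0.1           # gN/(gX*d), gN/m3

def droop(t, y):
    X, q, S = y
    uptake = v_max * S / (K_s + S)               # Monod uptake into the quota
    mu = mu_max * max(0.0, 1.0 - q_min / q)      # Droop growth on stored N
    return [mu * X,                              # dX/dt
            uptake - mu * q,                     # dq/dt (dilution by growth)
            -uptake * X]                         # dS/dt

sol = solve_ivp(droop, (0, 10), [20.0, 0.08, 15.0], max_step=0.05)
print("final biomass, quota, bulk N:", sol.y[:, -1])
```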

  12. Exemplifying the Effects of Parameterization Shortcomings in the Numerical Simulation of Geological Energy and Mass Storage

    NASA Astrophysics Data System (ADS)

    Dethlefsen, Frank; Tilmann Pfeiffer, Wolf; Schäfer, Dirk

    2016-04-01

    Numerical simulations of hydraulic, thermal, geomechanical, or geochemical (THMC-) processes in the subsurface have been conducted for decades. Often, such simulations are commenced by applying a parameter set that is as realistic as possible. Then, a base scenario is calibrated on field observations. Finally, scenario simulations can be performed, for instance to forecast the system behavior after varying input data. In the context of subsurface energy and mass storage, however, these model calibrations based on field data are often not available, as these storage actions have not been carried out so far. Consequently, the numerical models merely rely on the parameter set initially selected, and uncertainties as a consequence of a lack of parameter values or process understanding may not be perceivable, let alone quantifiable. Therefore, conducting THMC simulations in the context of energy and mass storage deserves a particular review of the model parameterization with its input data, and such a review hardly exists so far to the required extent. Variability or aleatory uncertainty exists for geoscientific parameter values in general, and parameters for which numerous data points are available, such as aquifer permeabilities, may be described statistically, thereby exhibiting statistical uncertainty. In this case, sensitivity analyses for quantifying the uncertainty in the simulation resulting from varying this parameter can be conducted. There are other parameters for which the lack of data quantity and quality implies a fundamental change in the ongoing processes when such a parameter value is varied in numerical scenario simulations. As an example of such a scenario uncertainty, varying the capillary entry pressure as one of the multiphase flow parameters can either allow or completely inhibit the penetration of an aquitard by gas. As a last example, the uncertainty of cap-rock fault permeabilities and consequently potential leakage rates of stored gases into shallow compartments are regarded as recognized ignorance by the authors of this study, as no realistic approach exists to determine this parameter and values are best guesses only. In addition to these aleatory uncertainties, an equivalent classification is possible for rating epistemic uncertainties describing the degree of understanding of processes such as the geochemical and hydraulic effects following potential gas intrusions from deeper reservoirs into shallow aquifers. As an outcome of this grouping of uncertainties, prediction errors of scenario simulations can be calculated by sensitivity analyses, if the uncertainties are identified as statistical. However, if scenario uncertainties exist or even recognized ignorance has to be attested to a parameter or a process in question, the outcomes of simulations mainly depend on the decision of the modeler in choosing parameter values or in interpreting the occurrence of processes. In that case, the informative value of numerical simulations is limited by ambiguous simulation results, which cannot be refined without improving the geoscientific database through laboratory or field studies on a longer term basis, so that the effects of subsurface use may be predicted realistically. This discussion, amended by a compilation of available geoscientific data to parameterize such simulations, will be presented in this study.

  13. ADC histogram analysis of muscle lymphoma - Correlation with histopathology in a rare entity.

    PubMed

    Meyer, Hans-Jonas; Pazaitis, Nikolaos; Surov, Alexey

    2018-06-21

    Diffusion-weighted imaging (DWI) is able to reflect histopathologic architecture. A novel imaging approach, histogram analysis, is used to further characterize lesions on MRI. The purpose of this study was to correlate histogram parameters derived from apparent diffusion coefficient (ADC) maps with histopathology parameters in muscle lymphoma. Eight patients (mean age 64.8 years, range 45-72 years) with histopathologically confirmed muscle lymphoma were retrospectively identified. Cell count, total nucleic area, and average nucleic area were estimated using ImageJ; additionally, the Ki-67 index was calculated. DWI was obtained on a 1.5 T scanner using b values of 0 and 1000 s/mm2. Histogram analysis was performed as a whole-lesion measurement using a custom-made Matlab-based application. The correlation analysis revealed statistically significant correlations of cell count with ADCmean (ρ = -0.76, P = 0.03) and with ADCp75 (ρ = -0.79, P = 0.02). Kurtosis and entropy correlated with average nucleic area (ρ = -0.81, P = 0.02 and ρ = 0.88, P = 0.007, respectively). None of the analyzed ADC parameters correlated with total nucleic area or with the Ki-67 index. This study identified significant correlations between cellularity and histogram parameters derived from ADC maps in muscle lymphoma; thus, histogram analysis parameters reflect histopathology in muscle tumors. Advances in knowledge: Whole-lesion ADC histogram analysis is able to reflect histopathology parameters in muscle lymphomas.
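
    The histogram descriptors named in this record (mean, 75th percentile, kurtosis, entropy) and their Spearman correlation with a histopathology measure can be sketched in a few lines. The following Python snippet uses synthetic arrays purely for illustration; variable names and data are assumptions, not study data.

      # Minimal sketch: whole-lesion ADC histogram parameters and their Spearman
      # correlation with a histopathology measure (illustrative data only).
      import numpy as np
      from scipy import stats

      adc_lesion = np.random.default_rng(0).normal(1.1e-3, 2e-4, 500)   # hypothetical ADC values (mm^2/s)
      cell_counts = np.random.default_rng(1).integers(50, 200, 8)       # hypothetical per-patient cell counts
      adc_means = np.random.default_rng(2).normal(1.0e-3, 1e-4, 8)      # hypothetical per-patient ADCmean

      def histogram_parameters(adc):
          """Common first-order histogram descriptors of a whole-lesion ADC map."""
          hist, _ = np.histogram(adc, bins=64, density=True)
          p = hist[hist > 0] / hist[hist > 0].sum()
          return {
              "mean": float(np.mean(adc)),
              "p75": float(np.percentile(adc, 75)),
              "kurtosis": float(stats.kurtosis(adc)),
              "entropy": float(-(p * np.log2(p)).sum()),
          }

      print(histogram_parameters(adc_lesion))
      rho, pval = stats.spearmanr(adc_means, cell_counts)   # e.g. rho ~ -0.76 reported in the study
      print(f"Spearman rho = {rho:.2f}, P = {pval:.3f}")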

  14. Quality index of radiological devices: results of one year of use.

    PubMed

    Tofani, Alessandro; Imbordino, Patrizia; Lecci, Antonio; Bonannini, Claudia; Del Corona, Alberto; Pizzi, Stefano

    2003-01-01

    The physical quality index (QI) of radiological devices summarises in a single numerical value between 0 and 1 the results of constancy tests. The aim of this paper is to illustrate the results of using such an index on all public radiological devices in the Livorno province over one year. The quality index was calculated for 82 radiological devices of a wide range of types by implementing its algorithm in spreadsheet-based software for the automatic handling of quality control data. The distribution of quality index values was computed together with the associated statistical quantities. This distribution is strongly asymmetrical, with a sharp peak near the highest QI values. The mean quality index values for the different types of device show some inhomogeneity: in particular, mammography and panoramic dental radiography devices show far lower quality than other devices. In addition, our analysis identified the parameters that most frequently fail the quality tests for each type of device. Finally, we looked for a correlation between the quality and the age of a device, but it was only weakly significant. The quality index proved to be a useful tool providing an overview of the physical condition of radiological devices. By selecting adequate QI threshold values, it also helps to decide whether a given device should be upgraded or replaced. The identification of critical parameters for each type of device may be used to improve the definition of the QI by attributing greater weights to critical parameters, so as to better address the maintenance of radiological devices.
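
    As a rough illustration of the idea of collapsing constancy-test results into a single 0-1 index, the sketch below aggregates weighted pass/fail outcomes. The weighting scheme is an assumption for illustration only; the published QI algorithm may differ.

      # Illustrative sketch of a 0-1 quality index aggregated from constancy tests.
      # The weighting scheme is an assumption; the published QI algorithm may differ.
      def quality_index(test_results, weights=None):
          """test_results: dict mapping parameter name -> True (passed) / False (failed)."""
          if weights is None:
              weights = {name: 1.0 for name in test_results}    # equal weights by default
          total = sum(weights[name] for name in test_results)
          passed = sum(weights[name] for name, ok in test_results.items() if ok)
          return passed / total                                 # 1.0 = all tests passed

      qi = quality_index({"kVp accuracy": True, "AEC reproducibility": True, "HVL": False})
      print(f"QI = {qi:.2f}")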

  15. Description of the National Hydrologic Model for use with the Precipitation-Runoff Modeling System (PRMS)

    USGS Publications Warehouse

    Regan, R. Steven; Markstrom, Steven L.; Hay, Lauren E.; Viger, Roland J.; Norton, Parker A.; Driscoll, Jessica M.; LaFontaine, Jacob H.

    2018-01-08

    This report documents several components of the U.S. Geological Survey National Hydrologic Model of the conterminous United States for use with the Precipitation-Runoff Modeling System (PRMS). It provides descriptions of the (1) National Hydrologic Model, (2) Geospatial Fabric for National Hydrologic Modeling, (3) PRMS hydrologic simulation code, (4) parameters and estimation methods used to compute spatially and temporally distributed default values as required by PRMS, (5) National Hydrologic Model Parameter Database, and (6) model extraction tool named Bandit. The National Hydrologic Model Parameter Database contains values for all PRMS parameters used in the National Hydrologic Model. The methods and national datasets used to estimate all the PRMS parameters are described. Some parameter values are derived from characteristics of topography, land cover, soils, geology, and hydrography using traditional Geographic Information System methods. Other parameters are set to long-established default values or to computed initial values. Additionally, methods (statistical, sensitivity, calibration, and algebraic) were developed to compute parameter values on the basis of a variety of nationally consistent datasets. Values in the National Hydrologic Model Parameter Database can be updated periodically on the basis of new parameter estimation methods and as additional national datasets become available. A companion ScienceBase resource provides a set of static parameter values as well as images of spatially distributed parameters associated with PRMS states and fluxes for each Hydrologic Response Unit across the conterminous United States.

  16. Characterization of difference of Gaussian filters in the detection of mammographic regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catarious, David M. Jr.; Baydush, Alan H.; Floyd, Carey E. Jr.

    2006-11-15

    In this article, we present a characterization of the effect of difference of Gaussians (DoG) filters in the detection of mammographic regions. DoG filters have been used previously in mammographic mass computer-aided detection (CAD) systems. As DoG filters are constructed from the subtraction of two bivariate Gaussian distributions, they require the specification of three parameters: the size of the filter template and the standard deviations of the constituent Gaussians. The influence of these three parameters in the detection of mammographic masses has not been characterized. In this work, we aim to determine how the parameters affect (1) the physical descriptors of the detected regions, (2) the true and false positive rates, and (3) the classification performance of the individual descriptors. To this end, 30 DoG filters are created from the combination of three template sizes and four values for each of the Gaussians' standard deviations. The filters are used to detect regions in a study database of 181 craniocaudal-view mammograms extracted from the Digital Database for Screening Mammography. To describe the physical characteristics of the identified regions, morphological and textural features are extracted from each of the detected regions. Differences in the mean values of the features caused by altering the DoG parameters are examined through statistical and empirical comparisons. The parameters' effects on the true and false positive rate are determined by examining the mean malignant sensitivities and false positives per image (FPpI). Finally, the effect on the classification performance is described by examining the variation in FPpI at the point where 81% of the malignant masses in the study database are detected. Overall, the findings of the study indicate that increasing the standard deviations of the Gaussians used to construct a DoG filter results in a dramatic decrease in the number of regions identified at the expense of missing a small number of malignancies. The sharp reduction in the number of identified regions allowed the identification of textural differences between large and small mammographic regions. We find that the classification performances of the features that achieve the lowest average FPpI are influenced by all three of the parameters.
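
    A DoG template is fully determined by the three parameters named above. The following minimal sketch (assuming symmetric, unit-sum bivariate Gaussians) shows how such a template could be constructed; it is not the authors' CAD pipeline.

      # Minimal sketch: a difference-of-Gaussians (DoG) template defined by its
      # size and the standard deviations of the two constituent Gaussians.
      import numpy as np

      def dog_filter(size, sigma_narrow, sigma_wide):
          """Return a size x size DoG template (narrow Gaussian minus wide Gaussian)."""
          ax = np.arange(size) - (size - 1) / 2.0
          xx, yy = np.meshgrid(ax, ax)
          r2 = xx**2 + yy**2
          def gauss(sigma):
              g = np.exp(-r2 / (2.0 * sigma**2))
              return g / g.sum()                     # normalise each Gaussian to unit sum
          return gauss(sigma_narrow) - gauss(sigma_wide)

      template = dog_filter(size=31, sigma_narrow=3.0, sigma_wide=9.0)
      # Regions would then be detected by convolving the mammogram with `template`
      # and thresholding the response (e.g. with scipy.ndimage.convolve).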

  17. Use of SkinFibrometer® to measure skin elasticity and its correlation with Cutometer® and DUB® Skinscanner.

    PubMed

    Kim, M A; Kim, E J; Lee, H K

    2018-02-06

    Skin elasticity is an important indicator of skin aging. The aim of this study was to demonstrate that the SkinFibrometer® is appropriate for measuring skin biomechanical properties, and to correlate it with elasticity parameters measured using the Cutometer® and with dermis structural properties measured using the DUB® Skinscanner. Twenty-one individuals participated in this study. The skin of the cheek, the area around the eye, and the volar forearm was evaluated. To analyze correlations of elasticity parameters, the induration value against the indenter pressure of the SkinFibrometer® and the R and Q parameters of the Cutometer® were compared. Dermal echogenicity measured using the DUB® Skinscanner was compared with the induration value of the SkinFibrometer®. The younger age group showed firmer and more elastic skin properties compared with the older age group, and the elasticity values of the volar forearm were significantly higher than those of the cheek and the region around the eye. Even though the measuring principles differ, both the SkinFibrometer® and the Cutometer® demonstrated the same trends of skin elasticity differences according to age and anatomical region. There were significant correlations between the induration value of the SkinFibrometer®, representing skin firmness, and R0, Q0 and R2, R5, R7, Q1, Q2 of the Cutometer®, which represent skin firmness and resilience, respectively (P < .01). In addition, dermal echogenicity positively correlated with skin firmness determined by the SkinFibrometer® (P < .01). We identified correlations between skin elasticity parameters evaluated by two different methods, suction and indentation, and demonstrated that the SkinFibrometer® is an objective, non-invasive evaluation tool for skin stiffness and elasticity. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. An evaluation of intraoperative and postoperative outcomes of torsional mode versus longitudinal ultrasound mode phacoemulsification: a Meta-analysis.

    PubMed

    Leon, Pia; Umari, Ingrid; Mangogna, Alessandro; Zanei, Andrea; Tognetto, Daniele

    2016-01-01

    To evaluate and compare the intraoperative parameters and postoperative outcomes of the torsional and longitudinal modes of phacoemulsification. Pertinent studies were identified by a computerized MEDLINE search from January 2002 to September 2013. The Meta-analysis is composed of two parts. In the first part the intraoperative parameters were considered: ultrasound time (UST) and cumulative dissipated energy (CDE). The intraoperative values were also considered separately for two categories (moderate and hard cataract groups) depending on the nuclear opacity grade. In the second part of the study the postoperative outcomes, namely the best corrected visual acuity (BCVA) and the endothelial cell loss (ECL), were taken into consideration. The differences in UST and CDE values were statistically significant in favor of the torsional mode for both the moderate and hard cataract groups. The analysis of BCVA did not show a statistically significant difference between the two surgical modalities. The difference in ECL count was statistically significant in favor of the torsional mode (P<0.001). The Meta-analysis shows the superiority of the torsional mode for the intraoperative parameters (UST, CDE) and the postoperative ECL outcome.

  19. An evaluation of intraoperative and postoperative outcomes of torsional mode versus longitudinal ultrasound mode phacoemulsification: a Meta-analysis

    PubMed Central

    Leon, Pia; Umari, Ingrid; Mangogna, Alessandro; Zanei, Andrea; Tognetto, Daniele

    2016-01-01

    AIM To evaluate and compare the intraoperative parameters and postoperative outcomes of the torsional and longitudinal modes of phacoemulsification. METHODS Pertinent studies were identified by a computerized MEDLINE search from January 2002 to September 2013. The Meta-analysis is composed of two parts. In the first part the intraoperative parameters were considered: ultrasound time (UST) and cumulative dissipated energy (CDE). The intraoperative values were also considered separately for two categories (moderate and hard cataract groups) depending on the nuclear opacity grade. In the second part of the study the postoperative outcomes, namely the best corrected visual acuity (BCVA) and the endothelial cell loss (ECL), were taken into consideration. RESULTS The differences in UST and CDE values were statistically significant in favor of the torsional mode for both the moderate and hard cataract groups. The analysis of BCVA did not show a statistically significant difference between the two surgical modalities. The difference in ECL count was statistically significant in favor of the torsional mode (P<0.001). CONCLUSION The Meta-analysis shows the superiority of the torsional mode for the intraoperative parameters (UST, CDE) and the postoperative ECL outcome. PMID:27366694

  20. Systematic parameter study of hadron spectra and elliptic flow from viscous hydrodynamic simulations of Au+Au collisions at sNN=200 GeV

    NASA Astrophysics Data System (ADS)

    Shen, Chun; Heinz, Ulrich; Huovinen, Pasi; Song, Huichao

    2010-11-01

    Using the (2+1)-dimensional viscous hydrodynamic code vish2+1 [H. Song and U. Heinz, Phys. Lett. B 658, 279 (2008); H. Song and U. Heinz, Phys. Rev. C 77, 064901 (2008); H. Song, Ph.D. thesis, The Ohio State University, 2009], we present systematic studies of the dependence of pion and proton transverse-momentum spectra and their elliptic flow in 200A GeV Au+Au collisions on the parameters of the hydrodynamic model (thermalization time, initial entropy density distribution, decoupling temperature, equation of state, and specific shear viscosity η/s). We identify a tension between the slope of the proton spectra, which (within hydrodynamic simulations that assume a constant shear viscosity to entropy density ratio) prefer larger η/s values, and the slope of the pT dependence of charged hadron elliptic flow, which prefers smaller values of η/s. Changing other model parameters does not appear to permit dissolution of this tension.

  1. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    PubMed

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors in MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated, and using these local statistics the tumor objects were identified among the different objects. In level set methods, the calculation of the parameters is a challenging task; in the proposed method, the different parameters are calculated automatically for different types of images. The basic thresholding value is updated and adjusted automatically for different MR images, and this thresholding value is used to calculate the other parameters of the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  2. General framework for comparative quantitative studies on transmission of tick-borne diseases using Lyme borreliosis in Europe as an example.

    PubMed

    Randolph, S E; Craine, N G

    1995-11-01

    Models of tick-borne diseases must take account of the particular biological features of ticks that contrast with those of insect vectors. A general framework is proposed that identifies the parameters of the transmission dynamics of tick-borne diseases to allow a quantitative assessment of the relative contributions of different host species and alternative transmission routes to the basic reproductive number, Ro, of such diseases. Taking the particular case of the transmission of the Lyme borreliosis spirochaete, Borrelia burgdorferi, by Ixodes ticks in Europe, and using the best, albeit still inadequate, estimates of the parameter values and a set of empirical data from Thetford Forest, England, we show that squirrels and the transovarial transmission route make quantitatively very significant contributions to Ro. This approach highlights the urgent need for more robust estimates of certain crucial parameter values, particularly the coefficients of transmission between ticks and vertebrates, before we can progress to full models that incorporate seasonality and heterogeneity among host populations for the natural dynamics of transmission of borreliosis and other tick-borne diseases.

  3. Identifying Bearing Rotodynamic Coefficients Using an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Miller, Brad A.; Howard, Samuel A.

    2008-01-01

    An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
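
    A generic sketch of the filter structure described here is given below: the unknown coefficients are appended to the state vector and propagated as near-constant random variables, with Q and R playing the role of the two user-tuned covariances mentioned in the record. The dynamics f(), measurement h() and their Jacobians are placeholders, not the authors' rotor-bearing equations.

      # Generic extended Kalman filter step for joint state/parameter estimation:
      # unknown coefficients are carried in the state and modelled as near-constant
      # (Gauss-Markov) random variables. f, h and the Jacobians are placeholders.
      import numpy as np

      def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
          """One predict/update cycle of an extended Kalman filter."""
          # Predict
          x_pred = f(x)
          F = F_jac(x)
          P_pred = F @ P @ F.T + Q
          # Update with measurement z (noisy shaft displacements)
          H = H_jac(x_pred)
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - h(x_pred))
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Q and R are the user-tuned covariances: a larger Q on the parameter states lets
      # the coefficient estimates move faster, while a larger R trusts measurements less.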

  4. Systematic parameter study of hadron spectra and elliptic flow from viscous hydrodynamic simulations of Au+Au collisions at √sNN = 200 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen Chun; Heinz, Ulrich; Huovinen, Pasi

    2010-11-15

    Using the (2+1)-dimensional viscous hydrodynamic code vish2+1 [H. Song and U. Heinz, Phys. Lett. B 658, 279 (2008); H. Song and U. Heinz, Phys. Rev. C 77, 064901 (2008); H. Song, Ph.D. thesis, The Ohio State University, 2009], we present systematic studies of the dependence of pion and proton transverse-momentum spectra and their elliptic flow in 200A GeV Au+Au collisions on the parameters of the hydrodynamic model (thermalization time, initial entropy density distribution, decoupling temperature, equation of state, and specific shear viscosity η/s). We identify a tension between the slope of the proton spectra, which (within hydrodynamic simulations that assume a constant shear viscosity to entropy density ratio) prefer larger η/s values, and the slope of the pT dependence of charged hadron elliptic flow, which prefers smaller values of η/s. Changing other model parameters does not appear to permit dissolution of this tension.

  5. Detection of Operator Performance Breakdown as an Automation Triggering Mechanism

    NASA Technical Reports Server (NTRS)

    Yoo, Hyo-Sang; Lee, Paul U.; Landry, Steven J.

    2015-01-01

    Performance breakdown (PB) has been anecdotally described as a state where the human operator "loses control of context" and "cannot maintain required task performance." Preventing such a decline in performance is critical to assure the safety and reliability of human-integrated systems, and therefore PB could be useful as a point at which automation can be applied to support human performance. However, PB has never been scientifically defined or empirically demonstrated, and there is no validated, objective way of detecting such a state or the transition to it. The purpose of this work is: 1) to empirically demonstrate a PB state, and 2) to develop an objective way of detecting such a state. This paper defines PB and proposes an objective method for its detection. A human-in-the-loop study was conducted: 1) to demonstrate PB by increasing workload until the subject reported being in a state of PB, 2) to identify possible parameters of a detection method for objectively identifying the subjectively reported PB point, and 3) to determine whether the parameters are idiosyncratic to an individual/context or are more generally applicable. In the experiment, fifteen participants were asked to manage three concurrent tasks (one primary and two secondary) for 18 minutes. The difficulty of the primary task was manipulated over time to induce PB while the difficulty of the secondary tasks remained static. The participants' task performance data were collected. Three hypotheses were constructed: 1) increasing workload will induce subjectively identified PB, 2) there exist criteria that identify the threshold parameters that best match the subjectively identified PB point, and 3) the criteria for choosing the threshold parameters are consistent across individuals. The results show that increasing workload can induce subjectively identified PB, although this might not be generalizable: only 12 out of 15 participants declared PB. A PB detection method based on signal detection analysis was applied to the performance data, and the results showed that PB can be identified using the method, particularly when the values of the parameters of the detection method were calibrated individually.
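
    The abstract does not spell out the detection rule, so the sketch below only illustrates the general idea of a calibrated threshold detector over a task-performance time series; the threshold, minimum duration, and data are assumptions.

      # Hedged sketch: flag "performance breakdown" when task performance stays below an
      # individually calibrated threshold for a minimum duration. The concrete rule used
      # in the study is not given in the abstract; this is only illustrative.
      import numpy as np

      def detect_breakdown(performance, threshold, min_samples):
          """Return indices where performance has stayed below `threshold` for `min_samples` steps."""
          below = performance < threshold
          run = 0
          onsets = []
          for i, b in enumerate(below):
              run = run + 1 if b else 0
              if run == min_samples:
                  onsets.append(i - min_samples + 1)
          return onsets

      # Synthetic performance trace that degrades as workload increases.
      perf = np.clip(1.0 - np.linspace(0, 1.2, 180)
                     + 0.1 * np.random.default_rng(0).standard_normal(180), 0, 1)
      print(detect_breakdown(perf, threshold=0.3, min_samples=10))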

  6. Atherosclerotic plaque delamination: Experiments and 2D finite element model to simulate plaque peeling in two strains of transgenic mice.

    PubMed

    Merei, Bilal; Badel, Pierre; Davis, Lindsey; Sutton, Michael A; Avril, Stéphane; Lessner, Susan M

    2017-03-01

    Finite element analyses using cohesive zone models (CZM) can be used to predict the fracture of atherosclerotic plaques, but this requires setting appropriate values of the model parameters. In this study, material parameters of a CZM were identified for the first time for two groups of mice (ApoE-/- and ApoE-/-Col8-/-) using the force-displacement curves measured during delamination tests. To this end, a 2D finite-element model of each plaque was solved using an explicit integration scheme. Each constituent of the plaque was modeled with a neo-Hookean strain energy density function, and a CZM was used for the interface. The model parameters were calibrated by minimizing the quadratic deviation between the experimental force-displacement curves and the model predictions. The elastic parameter of the plaque and the CZM interfacial parameter were successfully identified for a cohort of 11 mice. The results revealed that only the elastic parameter was significantly different between the two groups, ApoE-/-Col8-/- plaques being less stiff than ApoE-/- plaques. Finally, this study demonstrated that a simple 2D finite element model with cohesive elements can reproduce the global plaque peeling response fairly well. Future work will focus on understanding the main biological determinants of regional and inter-individual variations of the material parameters used in the model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is proposed. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by using a threshold on the YCbCr color model, and by carrying out a correlation between this segmented face area and the right input image, the location coordinates of the target face are acquired; these values are then used to control the pan/tilt system through a modified PID-based recursive controller. Also, by using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through a triangulation method. Using this calculated vertical distance and the pan and tilt angles, the target's real position in the world space can be acquired, and from it the target's height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person could be identified with these extracted height and stride parameters.
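
    The distance and height recovery described here follows standard stereo and pan/tilt geometry. The sketch below uses the textbook relations (depth = focal length x baseline / disparity; height from range and tilt angle) as assumed stand-ins for the authors' exact formulation.

      # Sketch of standard stereo/pan-tilt geometry for recovering target distance and
      # height. These are textbook approximations, not necessarily the authors' formulas.
      import math

      def depth_from_disparity(focal_px, baseline_m, disparity_px):
          """Distance to the target along the optical axis from stereo disparity."""
          return focal_px * baseline_m / disparity_px

      def target_height(camera_height_m, distance_m, tilt_deg):
          """Approximate height of the tracked point given camera height, range and tilt angle."""
          return camera_height_m + distance_m * math.tan(math.radians(tilt_deg))

      z = depth_from_disparity(focal_px=800.0, baseline_m=0.12, disparity_px=24.0)  # -> 4.0 m
      print(z, target_height(camera_height_m=1.5, distance_m=z, tilt_deg=5.0))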

  8. Measurement of redox potential in nanoecotoxicological investigations.

    PubMed

    Tantra, Ratna; Cackett, Alex; Peck, Roger; Gohil, Dipak; Snowden, Jacqueline

    2012-01-01

    Redox potential has been identified by the Organisation for Economic Co-operation and Development (OECD) as one of the parameters that should be investigated for the testing of manufactured nanomaterials. There is still some ambiguity concerning this parameter, i.e., as to what and how to measure, particularly in a nanoecotoxicological context. In this study the redox potentials of dispersions of six nanomaterials (either zinc oxide (ZnO) or cerium oxide (CeO2)) were measured using an oxidation-reduction potential (ORP) electrode probe. The particles under testing differed in terms of their particle size and dispersion stability in deionised water and in various ecotox media. The ORP values of the various dispersions and how they fluctuate relative to each other are discussed. Results show that the ORP values are mainly governed by the type of liquid media employed, with little contribution from the nanoparticles. Seawater was shown to have reduced the ORP value, which was attributed to an increase in the concentration of reducing agents such as sulphites or to a reduction of the dissolved oxygen concentration. The lack of a redox potential contribution from the particles themselves is thought to be due to insufficient interaction of the particles at the Pt electrode of the ORP probe.

  9. Measurement of Redox Potential in Nanoecotoxicological Investigations

    PubMed Central

    Tantra, Ratna; Cackett, Alex; Peck, Roger; Gohil, Dipak; Snowden, Jacqueline

    2012-01-01

    Redox potential has been identified by the Organisation for Economic Co-operation and Development (OECD) as one of the parameters that should be investigated for the testing of manufactured nanomaterials. There is still some ambiguity concerning this parameter, i.e., as to what and how to measure, particularly in a nanoecotoxicological context. In this study the redox potentials of dispersions of six nanomaterials (either zinc oxide (ZnO) or cerium oxide (CeO2)) were measured using an oxidation-reduction potential (ORP) electrode probe. The particles under testing differed in terms of their particle size and dispersion stability in deionised water and in various ecotox media. The ORP values of the various dispersions and how they fluctuate relative to each other are discussed. Results show that the ORP values are mainly governed by the type of liquid media employed, with little contribution from the nanoparticles. Seawater was shown to have reduced the ORP value, which was attributed to an increase in the concentration of reducing agents such as sulphites or to a reduction of the dissolved oxygen concentration. The lack of a redox potential contribution from the particles themselves is thought to be due to insufficient interaction of the particles at the Pt electrode of the ORP probe. PMID:22131988

  10. Parameter sensitivity analysis and optimization for a satellite-based evapotranspiration model across multiple sites using Moderate Resolution Imaging Spectroradiometer and flux data

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Ma, Jinzhu; Zhu, Gaofeng; Ma, Ting; Han, Tuo; Feng, Li Li

    2017-01-01

    Global and regional estimates of daily evapotranspiration are essential to our understanding of the hydrologic cycle and climate change. In this study, we selected the radiation-based Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model and assessed it at a daily time scale using 44 flux towers. These towers are distributed across a wide range of ecosystems: croplands, deciduous broadleaf forest, evergreen broadleaf forest, evergreen needleleaf forest, grasslands, mixed forests, savannas, and shrublands. A regional land surface evapotranspiration model with a relatively simple structure, the PT-JPL model largely uses ecophysiologically based formulations and parameters to relate potential evapotranspiration to actual evapotranspiration. The results using the original model indicate that the model consistently overestimates evapotranspiration in arid regions. This likely results from the misrepresentation of water limitation and energy partitioning in the model. By analyzing the physiological processes and determining the sensitive parameters, we identified a series of parameter sets that increase model performance. The model with optimized parameters showed better performance (R2 = 0.2-0.87; Nash-Sutcliffe efficiency (NSE) = 0.1-0.87) at each site than the original model (R2 = 0.19-0.87; NSE = -12.14-0.85). The results of the optimization indicated that the parameter β (water control of soil evaporation) was much lower in arid regions than in relatively humid regions. Furthermore, the optimized value of parameter m1 (plant control of canopy transpiration) was mostly between 1 and 1.3, slightly lower than the original value. Also, the optimized parameter Topt correlated well with the actual environmental temperature at each site. We suggest that using optimized parameters with the PT-JPL model provides an efficient way to improve model performance.
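
    For reference, the two skill scores quoted above (R2 and the Nash-Sutcliffe efficiency) can be computed as follows; the observed and simulated arrays are illustrative placeholders, not flux-tower data.

      # The two skill scores quoted in the abstract, computed for illustrative arrays of
      # observed and simulated daily evapotranspiration.
      import numpy as np

      def nse(obs, sim):
          """Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than the mean of obs."""
          obs, sim = np.asarray(obs), np.asarray(sim)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def r_squared(obs, sim):
          """Squared Pearson correlation between observations and simulations."""
          return np.corrcoef(obs, sim)[0, 1] ** 2

      obs = np.array([1.2, 2.5, 3.1, 0.8, 1.9, 2.2])   # mm/day, hypothetical flux-tower values
      sim = np.array([1.0, 2.7, 2.9, 1.1, 1.7, 2.5])   # mm/day, hypothetical PT-JPL output
      print(f"NSE = {nse(obs, sim):.2f}, R2 = {r_squared(obs, sim):.2f}")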

  11. The value of multi ultra high-b-value DWI in grading cerebral astrocytomas and its association with aquaporin-4.

    PubMed

    Tan, Yan; Zhang, Hui; Wang, Xiao-Chun; Qin, Jiang-Bo; Wang, Le

    2018-06-01

    To investigate the value of multi-ultrahigh-b-value diffusion-weighted imaging (UHBV-DWI) in differentiating high-grade astrocytomas (HGAs) from low-grade astrocytomas (LGAs), and to analyze its association with aquaporin (AQP) expression. Forty astrocytomas, divided into LGAs (N = 15) and HGAs (N = 25), were studied. Apparent diffusion coefficient (ADC) and UHBV-ADC values in the solid parts and peritumoral edema were compared between the LGA and HGA groups with the t-test. Receiver operating characteristic curves were used to identify the better parameter, real-time polymerase chain reaction was used to assess AQP messenger ribonucleic acid (mRNA), and Spearman correlation analysis was used to assess the correlation of AQP mRNA with each parameter. ADC values in the solid parts of HGAs were significantly lower than in LGAs (p = 0.02), while UHBV-ADC values of HGAs were significantly higher than those of LGAs (p < 0.01). The area under the curve (AUC) of UHBV-ADC (0.810) was significantly larger than that of ADC (0.713; p = 0.041). AQP4 mRNA was significantly higher in HGAs than in LGAs (p < 0.01); there was less AQP9 mRNA and no AQP1 mRNA in the LGA and HGA groups (p > 0.05). The ADC value showed a negative correlation with AQP4 mRNA (r = -0.357; p = 0.024), whereas the UHBV-ADC value positively correlated with AQP4 mRNA (r = 0.646; p < 0.01). UHBV-DWI allowed for more accurate grading of cerebral astrocytoma than DWI, and the UHBV-ADC value may be related to AQP4 mRNA levels. UHBV-DWI could therefore be of value in the assessment of astrocytoma. Advances in knowledge: DWI generated by multiple ultrahigh b values could have particular value for astrocytoma grading, and the level of AQP4 mRNA might be potentially linked to the change in the UHBV-DWI parameter, which may explain the difference in UHBV-ADC between LGAs and HGAs.
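
    The AUC comparison reported here can be reproduced in outline with scikit-learn; the values below are invented for illustration. Note that ADC is lower in HGAs, so its sign is flipped before scoring.

      # Minimal sketch: compare how well two diffusion parameters separate high-grade from
      # low-grade astrocytomas using ROC AUC. Values are illustrative, not study data.
      import numpy as np
      from sklearn.metrics import roc_auc_score

      is_high_grade = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
      adc      = np.array([1.45, 1.38, 1.10, 1.25, 1.05, 1.30, 0.95, 1.00, 1.15, 0.90])  # lower in HGA
      uhbv_adc = np.array([0.30, 0.28, 0.41, 0.31, 0.45, 0.42, 0.50, 0.47, 0.40, 0.52])  # higher in HGA

      auc_adc = roc_auc_score(is_high_grade, -adc)        # negate: smaller ADC indicates HGA
      auc_uhbv = roc_auc_score(is_high_grade, uhbv_adc)
      print(f"AUC(ADC) = {auc_adc:.3f}, AUC(UHBV-ADC) = {auc_uhbv:.3f}")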

  12. Acceptable Tolerances for Matching Icing Similarity Parameters in Scaling Applications

    NASA Technical Reports Server (NTRS)

    Anderson, David N.

    2003-01-01

    This paper reviews past work and presents new data to evaluate how changes in similarity parameters affect ice shapes and how closely scale values of the parameters should match reference values. Experimental ice shapes presented are from tests by various researchers in the NASA Glenn Icing Research Tunnel. The parameters reviewed are the modified inertia parameter (which determines the stagnation collection efficiency), accumulation parameter, freezing fraction, Reynolds number, and Weber number. It was demonstrated that a good match of scale and reference ice shapes could sometimes be achieved even when values of the modified inertia parameter did not match precisely. Consequently, there can be some flexibility in setting scale droplet size, which is the test condition determined from the modified inertia parameter. A recommended guideline is that the modified inertia parameter be chosen so that the scale stagnation collection efficiency is within 10 percent of the reference value. The scale accumulation parameter and freezing fraction should also be within 10 percent of their reference values. The Weber number based on droplet size and water properties appears to be a more important scaling parameter than one based on model size and air properties. Scale values of both the Reynolds and Weber numbers need to be in the range of 60 to 160 percent of the corresponding reference values. The effects of variations in other similarity parameters have yet to be established.
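
    The tolerance guidelines summarised in this record lend themselves to a simple screening check of a candidate scale condition against its reference; the parameter values below are placeholders.

      # Sketch: screen a candidate scale-test condition against the tolerance guidelines
      # summarised in the abstract. Parameter values are placeholders.
      def within(scale, reference, low, high):
          return low <= scale / reference <= high

      def check_scaling(scale, reference):
          """scale/reference: dicts with keys beta0, Ac, n, Re, We."""
          return {
              "stagnation collection efficiency": within(scale["beta0"], reference["beta0"], 0.9, 1.1),
              "accumulation parameter":           within(scale["Ac"],    reference["Ac"],    0.9, 1.1),
              "freezing fraction":                within(scale["n"],     reference["n"],     0.9, 1.1),
              "Reynolds number":                  within(scale["Re"],    reference["Re"],    0.6, 1.6),
              "Weber number":                     within(scale["We"],    reference["We"],    0.6, 1.6),
          }

      ref   = {"beta0": 0.80, "Ac": 1.5, "n": 0.30, "Re": 2.0e5, "We": 600.0}
      scale = {"beta0": 0.77, "Ac": 1.6, "n": 0.31, "Re": 1.4e5, "We": 750.0}
      print(check_scaling(scale, ref))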

  13. Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation

    NASA Astrophysics Data System (ADS)

    Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei

    2018-04-01

    Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.

  14. Guidelines for Assessment of Gait and Reference Values for Spatiotemporal Gait Parameters in Older Adults: The Biomathics and Canadian Gait Consortiums Initiative

    PubMed Central

    Beauchet, Olivier; Allali, Gilles; Sekhon, Harmehr; Verghese, Joe; Guilain, Sylvie; Steinmetz, Jean-Paul; Kressig, Reto W.; Barden, John M.; Szturm, Tony; Launay, Cyrille P.; Grenier, Sébastien; Bherer, Louis; Liu-Ambrose, Teresa; Chester, Vicky L.; Callisaya, Michele L.; Srikanth, Velandai; Léonard, Guillaume; De Cock, Anne-Marie; Sawa, Ryuichi; Duque, Gustavo; Camicioli, Richard; Helbostad, Jorunn L.

    2017-01-01

    Background: Gait disorders, a highly prevalent condition in older adults, are associated with several adverse health consequences. Gait analysis allows qualitative and quantitative assessments of gait that improves the understanding of mechanisms of gait disorders and the choice of interventions. This manuscript aims (1) to give consensus guidance for clinical and spatiotemporal gait analysis based on the recorded footfalls in older adults aged 65 years and over, and (2) to provide reference values for spatiotemporal gait parameters based on the recorded footfalls in healthy older adults free of cognitive impairment and multi-morbidities. Methods: International experts working in a network of two different consortiums (i.e., Biomathics and Canadian Gait Consortium) participated in this initiative. First, they identified items of standardized information following the usual procedure of formulation of consensus findings. Second, they merged databases including spatiotemporal gait assessments with GAITRite® system and clinical information from the “Gait, cOgnitiOn & Decline” (GOOD) initiative and the Generation 100 (Gen 100) study. Only healthy—free of cognitive impairment and multi-morbidities (i.e., ≤ 3 therapeutics taken daily)—participants aged 65 and older were selected. Age, sex, body mass index, mean values, and coefficients of variation (CoV) of gait parameters were used for the analyses. Results: Standardized systematic assessment of three categories of items, which were demographics and clinical information, and gait characteristics (clinical and spatiotemporal gait analysis based on the recorded footfalls), were selected for the proposed guidelines. Two complementary sets of items were distinguished: a minimal data set and a full data set. In addition, a total of 954 participants (mean age 72.8 ± 4.8 years, 45.8% women) were recruited to establish the reference values. Performance of spatiotemporal gait parameters based on the recorded footfalls declined with increasing age (mean values and CoV) and demonstrated sex differences (mean values). Conclusions: Based on an international multicenter collaboration, we propose consensus guidelines for gait assessment and spatiotemporal gait analysis based on the recorded footfalls, and reference values for healthy older adults. PMID:28824393

  15. Application of Different Statistical Techniques in Integrated Logistics Support of the International Space Station Alpha

    NASA Technical Reports Server (NTRS)

    Sepehry-Fard, F.; Coulthard, Maurice H.

    1995-01-01

    The process used to predict the values of maintenance time dependent variable parameters, such as the mean time between failures (MTBF), over time must be one that will not in turn introduce uncontrolled deviation into the results of the ILS analysis, such as the life cycle cost spares calculation. A minor deviation in the values of maintenance time dependent variable parameters such as MTBF over time will have a significant impact on the logistics resource demands, International Space Station availability, and maintenance support costs. It is the objective of this report to identify the magnitude of the expected enhancement in the accuracy of the results for the International Space Station reliability and maintainability data packages by providing examples. These examples partially portray the necessary information by evaluating the impact of the said enhancements on the life cycle cost and the availability of the International Space Station.

  16. Mesoscale Polymer Dissolution Probed by Raman Spectroscopy and Molecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Tsun-Mei; Xantheas, Sotiris S.; Vasdekis, Andreas E.

    2016-10-13

    The diffusion of various solvents into a polystyrene (PS) matrix was probed experimentally by monitoring the temporal profiles of the Raman spectra and theoretically from molecular dynamics (MD) simulations of the binary system. The simulation results assist in providing a fundamental, molecular-level connection between the mixing/dissolution processes and the difference Δδ = δ(solvent) - δ(PS) in the values of the Hildebrand parameter (δ) between the two components of the binary systems: solvents having values of δ similar to that of PS (small Δδ) exhibit fast diffusion into the polymer matrix, whereas the diffusion slows down considerably when the δ values differ (large Δδ). To this end, the Hildebrand parameter was identified as a useful descriptor that governs the process of mixing in polymer-solvent binary systems. The experiments also provide insight into further refinements of the models specific to non-Fickian diffusion phenomena that need to be used in the simulations.
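
    A minimal sketch of using the Hildebrand mismatch as a descriptor is given below; the δ values are approximate literature figures included only for illustration.

      # Sketch: rank candidate solvents by the Hildebrand mismatch |delta_solvent - delta_PS|.
      # The delta values below are approximate literature figures (MPa^0.5), for illustration only.
      DELTA_PS = 18.5
      solvents = {"toluene": 18.2, "acetone": 19.9, "ethanol": 26.5, "n-hexane": 14.9}

      mismatch = {name: abs(d - DELTA_PS) for name, d in solvents.items()}
      for name, dd in sorted(mismatch.items(), key=lambda kv: kv[1]):
          print(f"{name:10s}  |delta - delta_PS| = {dd:4.1f}")   # small mismatch -> faster diffusion into PS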

  17. Effect of power and type of substrate on calcium-phosphate coating morphology and microhardness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulyashova, Ksenia, E-mail: kseniya@ispms.tsc.ru; Glushko, Yurii, E-mail: glushko@ispms.tsc.ru; Sharkeev, Yurii, E-mail: sharkeev@ispms.tsc.ru

    2015-10-27

    As is known, it is important to identify the influence of the different sputtering process parameters and of the type of substrate on the structure of the deposited coating, because these parameters significantly affect the coating structure. Studies of the morphology and microhardness of calcium-phosphate (CaP) coatings formed on the surfaces of titanium, zirconium, and titanium-niobium alloy at different values of the radio frequency (rf) discharge power are presented. An increase in the rf magnetron discharge power leads to the formation of a coarser grain structure in the coating. The critical indentation depths for the coatings, which determine the value of their microhardness, have been estimated. The mechanical properties of the composite material based on the bioinert metal substrate and the CaP coating are superior to the properties of the separate components that make up this composite material.

  18. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore mostly choose soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters and therefore model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for the different model output variables. The number of parameters is reduced substantially for all three model outputs, to approximately 25. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
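
    The Elementary Effects measure underlying the screening step can be illustrated compactly: perturb one parameter at a time along random trajectories and rank parameters by the mean absolute effect. The sketch below is a generic Morris-style version, not the authors' sequential screening method.

      # Generic Morris-style Elementary Effects screening (one-at-a-time perturbations along
      # random trajectories). The authors' sequential screening refines this idea; this is
      # only a compact illustration of the underlying elementary-effect measure.
      import numpy as np

      def elementary_effects(model, n_params, n_traj=20, delta=0.1, seed=0):
          rng = np.random.default_rng(seed)
          effects = np.zeros((n_traj, n_params))
          for t in range(n_traj):
              x = rng.uniform(0.0, 1.0 - delta, n_params)     # base point in the unit hypercube
              y0 = model(x)
              for j in rng.permutation(n_params):             # perturb one parameter at a time
                  x_new = x.copy()
                  x_new[j] += delta
                  y1 = model(x_new)
                  effects[t, j] = (y1 - y0) / delta
                  x, y0 = x_new, y1
          return np.abs(effects).mean(axis=0)                 # mu*: screening measure per parameter

      # Toy model: only the first three of ten parameters matter.
      mu_star = elementary_effects(lambda p: 3*p[0] + 2*p[1]**2 + p[2] + 0.01*p[3:].sum(), n_params=10)
      print(np.argsort(mu_star)[::-1][:3])                    # indices of the most informative parameters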

  19. Determination of malignancy and characterization of hepatic tumor type with diffusion-weighted magnetic resonance imaging: comparison of apparent diffusion coefficient and intravoxel incoherent motion-derived measurements.

    PubMed

    Doblas, Sabrina; Wagner, Mathilde; Leitao, Helena S; Daire, Jean-Luc; Sinkus, Ralph; Vilgrain, Valérie; Van Beers, Bernard E

    2013-10-01

    The objective of this study was to compare the value of the apparent diffusion coefficient (ADC) determined with 3 b values and the intravoxel incoherent motion (IVIM)-derived parameters in the determination of malignancy and characterization of hepatic tumor type. Seventy-six patients with 86 solid hepatic lesions, including 8 hemangiomas, 20 lesions of focal nodular hyperplasia, 9 adenomas, 30 hepatocellular carcinomas, 13 metastases, and 6 cholangiocarcinomas, were assessed in this prospective study. Diffusion-weighted images were acquired with 11 b values to measure the ADCs (with b = 0, 150, and 500 s/mm^2) and the IVIM-derived parameters, namely, the pure diffusion coefficient and the perfusion-related diffusion fraction and coefficient. The diffusion parameters were compared between benign and malignant tumors and between tumor types, and their diagnostic value in identifying tumor malignancy was assessed. The apparent and pure diffusion coefficients were significantly higher in benign than in malignant tumors (benign: 2.32 [0.87] × 10^-3 mm^2/s and 1.42 [0.37] × 10^-3 mm^2/s vs malignant: 1.64 [0.51] × 10^-3 mm^2/s and 1.14 [0.28] × 10^-3 mm^2/s, respectively; P < 0.0001 and P = 0.0005), whereas the perfusion-related diffusion parameters did not differ significantly between the 2 groups. The apparent and pure diffusion coefficients provided similar accuracy in assessing tumor malignancy (areas under the receiver operating characteristic curve of 0.770 and 0.723, respectively). In the multigroup analysis, the ADC was found to be significantly higher in hemangiomas than in hepatocellular carcinomas, metastases, and cholangiocarcinomas. In the same manner, it was higher in lesions of focal nodular hyperplasia than in metastases and cholangiocarcinomas. However, the pure diffusion coefficient was significantly higher only in hemangiomas versus hepatocellular and cholangiocellular carcinomas. Compared with the ADC, the diffusion parameters derived from the IVIM model did not improve the determination of malignancy and characterization of hepatic tumor type.
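
    The two signal models being compared (a monoexponential ADC fit over three b values and the IVIM biexponential fit) can be sketched as follows with synthetic data; parameter values and bounds are illustrative assumptions.

      # Sketch of the two signal models compared in the study: a monoexponential ADC fit
      # over three b values, and the IVIM biexponential fit (perfusion fraction f,
      # pseudo-diffusion D*, pure diffusion D). Signal values are synthetic.
      import numpy as np
      from scipy.optimize import curve_fit

      def ivim(b, f, d_star, d):
          return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

      b_all = np.array([0, 10, 20, 40, 80, 150, 200, 300, 400, 500, 800], dtype=float)
      true = (0.15, 30e-3, 1.1e-3)                               # f, D*, D (mm^2/s)
      signal = ivim(b_all, *true) * (1 + 0.01 * np.random.default_rng(0).standard_normal(b_all.size))

      # Monoexponential ADC from b = 0, 150, 500 s/mm^2 (log-linear least squares).
      b3 = np.array([0.0, 150.0, 500.0])
      s3 = np.interp(b3, b_all, signal)
      adc = -np.polyfit(b3, np.log(s3), 1)[0]

      # IVIM fit over all b values.
      popt, _ = curve_fit(ivim, b_all, signal, p0=(0.1, 10e-3, 1.0e-3),
                          bounds=([0, 1e-3, 1e-4], [0.5, 1.0, 3e-3]))
      print(f"ADC = {adc:.2e} mm^2/s; IVIM f = {popt[0]:.2f}, D* = {popt[1]:.2e}, D = {popt[2]:.2e}")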

  20. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on average 15% of the mean values over the succeeding parameter sets. Conclusions Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identification of redundant model components of large biophysical models and to increase their predictive capacity. PMID:24886522

  1. Preoperative Recipient Parameters Allow Early Estimation of Postoperative Outcome and Intraoperative Transfusion Requirements in Liver Transplantation.

    PubMed

    Schumacher, Carsten; Eismann, Hendrik; Sieg, Lion; Friedrich, Lars; Scheinichen, Dirk; Vondran, Florian W R; Johanning, Kai

    2018-01-01

    Liver transplantation is a complex intervention, and early anticipation of personnel and logistic requirements is of great importance. Early identification of high-risk patients could prove useful. We therefore evaluated prognostic values of recipient parameters commonly available in the early preoperative stage regarding postoperative 30- and 90-day outcomes and intraoperative transfusion requirements in liver transplantation. All adult patients undergoing first liver transplantation at Hannover Medical School between January 2005 and December 2010 were included in this retrospective study. Demographic, clinical, and laboratory data as well as clinical courses were recorded. Prognostic values regarding 30- and 90-day outcomes were evaluated by uni- and multivariate statistical tests. Identified risk parameters were used to calculate risk scores. There were 426 patients (40.4% female) included with a mean age of 48.6 (11.9) years. Absolute 30-day mortality rate was 9.9%, and absolute 90-day mortality rate was 13.4%. Preoperative leukocyte count >5200/μL, platelet count <91 000/μL, and creatinine values ≥77 μmol/L were relevant risk factors for both observation periods (P < .05, respectively). A score based on these factors significantly differentiated between groups of varying postoperative outcomes and intraoperative transfusion requirements (P < .05, respectively). A score based on preoperative creatinine, leukocyte, and platelet values allowed early estimation of postoperative 30- and 90-day outcomes and intraoperative transfusion requirements in liver transplantation. Results might help to improve timely logistic and personnel strategies.

  2. Tei index correlates with tissue Doppler parameters and reflects neurohormonal activation in patients with an abnormal transmitral flow pattern.

    PubMed

    Greco, Stefania; Troisi, Federica; Brunetti, Natale Daniele; Di Biase, Matteo

    2009-10-01

    Tei index (TI) is a Doppler parameter which reflects combined systolic and diastolic function. We aimed to study the relationship between TI, both traditional and tissue Doppler imaging (TDI) echocardiographic parameters, and the neurohormonal profile in outpatients with diastolic dysfunction expressed by an abnormal transmitral flow pattern. A total of 67 consecutive outpatients with diastolic dysfunction (abnormal transmitral flow pattern) were studied; all patients underwent clinical evaluation, blood sampling for B-type natriuretic peptide (BNP) plasma assaying, echocardiography for the determination of left ventricular ejection fraction (LVEF), dP/dt, left atrium (LA) dimensions, longitudinal systolic (S) and diastolic wall velocities (E' and A'), TI measured with Doppler echocardiography, and mitral regurgitation (MR) quantified on a semicontinuous scale. TI values were significantly correlated with BNP levels (r = 0.33; P < 0.01), LVEF (r = -0.56; P < 0.001), dP/dt (r = -0.52; P < 0.01), S (r = -0.45; P < 0.001), E' (r = -0.36; P < 0.01), A' (r = -0.27; P < 0.05), LA volume (r = 0.35; P < 0.01), and MR (P for trend < 0.05). In a multivariate regression analysis, TI was an independent predictor of increased BNP levels (beta = 0.32; P < 0.05), even after correction for potential confounders. ROC analysis showed that values of TI >0.59 identified subjects with combined systolic and diastolic dysfunction with a sensitivity of 73.8% and a specificity of 71.4%. In outpatients with diastolic dysfunction, TI, an easy-to-perform parameter for global ventricular performance assessment, might be useful in identifying subjects with concomitant systolic impairment and neurohormonal activation.

  3. Pinpointing wastewater and process parameters controlling the AOB to NOB activity ratio in sewage treatment plants.

    PubMed

    Seuntjens, Dries; Han, Mofei; Kerckhof, Frederiek-Maarten; Boon, Nico; Al-Omari, Ahmed; Takacs, Imre; Meerburg, Francis; De Mulder, Chaïm; Wett, Bernhard; Bott, Charles; Murthy, Sudhir; Carvajal Arroyo, Jose Maria; De Clippeleir, Haydée; Vlaeminck, Siegfried E

    2018-07-01

    Even though nitrification/denitrification is a robust technology to remove nitrogen from sewage, economic incentives drive its future replacement by shortcut nitrogen removal processes. The latter necessitates high potential activity ratios of ammonia oxidizing to nitrite oxidizing bacteria (rAOB/rNOB). The goal of this study was to identify which wastewater and process parameters can govern this in reality. Two sewage treatment plants (STP) were chosen based on their inverse rAOB/rNOB values (at 20 °C): 0.6 for Blue Plains (BP, Washington DC, US) and 1.6 for Nieuwveer (NV, Breda, NL). Disproportional and dissimilar relationships between AOB or NOB relative abundances and respective activities pointed towards differences in community and growth/activity limiting parameters. The AOB communities showed to be particularly different. Temperature had no discriminatory effect on the nitrifiers' activities, with similar Arrhenius temperature dependences (Θ AOB  = 1.10, Θ NOB  = 1.06-1.07). To uncouple the temperature effect from potential limitations like inorganic carbon, phosphorus and nitrogen, an add-on mechanistic methodology based on kinetic modelling was developed. Results suggest that BP's AOB activity was limited by the concentration of inorganic carbon (not by residual N and P), while NOB experienced less limitation from this. For NV, the sludge-specific nitrogen loading rate seemed to be the most prevalent factor limiting AOB and NOB activities. Altogether, this study shows that bottom-up mechanistic modelling can identify parameters that influence the nitrification performance. Increasing inorganic carbon in BP could invert its rAOB/rNOB value, facilitating its transition to shortcut nitrogen removal. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Sensitivity Analysis of Genetic Algorithm Parameters for Optimal Groundwater Monitoring Network Design

    NASA Astrophysics Data System (ADS)

    Abdeh-Kolahchi, A.; Satish, M.; Datta, B.

    2004-05-01

    A state-of-the-art groundwater monitoring network design is introduced. The method combines groundwater flow and transport results with a Genetic Algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested optimal groundwater monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop a three-dimensional groundwater flow and contaminant transport simulation. The groundwater flow and contamination simulation results are introduced as input to the optimization model, which uses a Genetic Algorithm (GA) to identify the optimal groundwater monitoring network design from several candidate monitoring locations. The monitoring network design model uses a Genetic Algorithm with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the non-linearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, the GA approach, capable of finding the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions, will be discussed. However, to ensure the efficiency of the solution process and global optimality of the solution obtained using GA, it is necessary that appropriate GA parameter values be specified. The sensitivity analysis of genetic algorithm parameters such as random number seed, crossover probability, mutation probability, and elitism is discussed for the solution of the monitoring network design problem.
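
    The binary-chromosome GA formulation described above lends itself to a compact illustration. The sketch below is not the authors' implementation: the candidate-well count, the synthetic detection table standing in for MODFLOW/MT3DMS output, and the GA settings (population size, crossover and mutation probabilities, elitism) are all invented for demonstration.

```python
import random

random.seed(0)

N_WELLS = 30          # candidate monitoring locations (hypothetical)
N_SNAPSHOTS = 50      # simulated plume snapshots (stand-in for transport-model output)
MAX_WELLS = 6         # budget: wells that may actually be installed

# Synthetic "detection table": detect[s][w] = True if well w would see snapshot s.
detect = [[random.random() < 0.15 for _ in range(N_WELLS)] for _ in range(N_SNAPSHOTS)]

def fitness(chrom):
    """Fraction of snapshots detected by at least one selected well,
    penalised when the well budget is exceeded."""
    hits = sum(any(c and d for c, d in zip(chrom, row)) for row in detect)
    penalty = max(0, sum(chrom) - MAX_WELLS)
    return hits / N_SNAPSHOTS - 0.1 * penalty

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b, p_cross=0.8):
    if random.random() > p_cross:
        return a[:], b[:]
    cut = random.randrange(1, N_WELLS)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(chrom, p_mut=0.02):
    return [g ^ (random.random() < p_mut) for g in chrom]

pop = [[random.random() < 0.2 for _ in range(N_WELLS)] for _ in range(40)]
for gen in range(100):
    elite = max(pop, key=fitness)            # elitism: carry the best design forward
    children = [elite]
    while len(children) < len(pop):
        c1, c2 = crossover(tournament(pop), tournament(pop))
        children += [mutate(c1), mutate(c2)]
    pop = children[:len(pop)]

best = max(pop, key=fitness)
print("selected wells:", [i for i, g in enumerate(best) if g], "score:", round(fitness(best), 3))
```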

  5. Ventricular Cycle Length Characteristics Estimative of Prolonged RR Interval during Atrial Fibrillation

    PubMed Central

    CIACCIO, EDWARD J.; BIVIANO, ANGELO B.; GAMBHIR, ALOK; EINSTEIN, ANDREW J.; GARAN, HASAN

    2014-01-01

    Background When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. Methods From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13–T2, and (2) Tm–T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. Results The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. Conclusions A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. PMID:23998759
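
    A minimal sketch of the two-parameter scheme as described in the abstract follows; the interval history, the use of the magnitude of the second differences, the helper names and the threshold ranges are assumptions made only for illustration.

```python
from statistics import mean

def features(rr):
    """rr: list of the 13 most recent RR intervals [T13, ..., T2, T1] in seconds.
    Returns (RRv, Tm - T1), the two scatterplot parameters described above."""
    t1 = rr[-1]
    prior = rr[:-1]                                              # T13 ... T2
    tm = mean(prior)
    first_diff = [b - a for a, b in zip(prior, prior[1:])]       # 11 RR differences
    second_diff = [b - a for a, b in zip(first_diff, first_diff[1:])]  # 10 second differences
    rrv = mean(abs(d) for d in second_diff)                      # average second derivative (magnitude, an assumption)
    return rrv, tm - t1

def predict_prolonged(rr, rrv_range, dtm_range):
    """Flag the upcoming interval T0 as likely prolonged when both parameters
    fall inside ranges tuned on the first hour of data for this patient."""
    rrv, dtm = features(rr)
    return rrv_range[0] <= rrv <= rrv_range[1] and dtm_range[0] <= dtm <= dtm_range[1]

# Toy usage with invented intervals (s) and invented threshold ranges:
history = [0.62, 0.65, 0.60, 0.66, 0.58, 0.64, 0.61, 0.63, 0.59, 0.66, 0.62, 0.60, 0.55]
print(predict_prolonged(history, rrv_range=(0.0, 0.05), dtm_range=(0.03, 0.40)))
```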

  6. Trade-off between disease resistance and crop yield: a landscape-scale mathematical modelling perspective.

    PubMed

    Vyska, Martin; Cunniffe, Nik; Gilligan, Christopher

    2016-10-01

    The deployment of crop varieties that are partially resistant to plant pathogens is an important method of disease control. However, a trade-off may occur between the benefits of planting the resistant variety and a yield penalty, whereby the standard susceptible variety outyields the resistant one in the absence of disease. This presents a dilemma: deploying the resistant variety is advisable only if disease occurs and is severe enough for the resistant variety to outyield the infected standard variety. Additionally, planting the resistant variety carries with it a further advantage in that the resistant variety reduces the probability of disease invading. Therefore, viewed from the perspective of a grower community, there is likely to be an optimal trade-off and thus an optimal cropping density for the resistant variety. We introduce a simple stochastic, epidemiological model to investigate the trade-off and the consequences for crop yield. Focusing on susceptible-infected-removed epidemic dynamics, we use the final size equation to calculate the surviving host population in order to analyse the yield, an approach suitable for rapid epidemics in agricultural crops. We identify a single compound parameter, which we call the efficacy of resistance and which incorporates the changes in susceptibility, infectivity and durability of the resistant variety. We use the compound parameter to inform policy plots that can be used to identify the optimal strategy for given parameter values when an outbreak is certain. When the outbreak is uncertain, we show that for some parameter values planting the resistant variety is optimal even when it would not be optimal during an outbreak. This is because the resistant variety reduces the probability of an outbreak occurring. © 2016 The Author(s).
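
    For readers unfamiliar with the final size equation mentioned above, the following hedged sketch solves the standard SIR final-size relation by fixed-point iteration and compares surviving fractions and relative yields for a purely susceptible versus a purely resistant planting. The R0 values, the efficacy, the yield penalty and the simplifying assumption that infected plants are total losses are all illustrative and not taken from the study.

```python
import math

def final_size(r0, tol=1e-12):
    """Attack rate z solving z = 1 - exp(-R0*z), the classic SIR final-size
    relation for a large, initially fully susceptible population."""
    z = 0.5 if r0 > 1 else 0.0
    for _ in range(1000):
        z_next = 1.0 - math.exp(-r0 * z)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

R0_STANDARD = 2.0      # illustrative basic reproduction number, standard variety
EFFICACY = 0.6         # illustrative "efficacy of resistance" (reduces transmission)
YIELD_PENALTY = 0.10   # resistant variety yields 10% less per healthy plant (assumed)

for label, r0, per_plant in [("standard ", R0_STANDARD, 1.0),
                             ("resistant", R0_STANDARD * (1 - EFFICACY), 1 - YIELD_PENALTY)]:
    surviving = 1.0 - final_size(r0)      # uninfected fraction at the end of the epidemic
    print(label, "surviving fraction:", round(surviving, 3),
          " relative yield:", round(surviving * per_plant, 3))
```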

  7. Quantifying Effects of Pharmacological Blockers of Cardiac Autonomous Control Using Variability Parameters.

    PubMed

    Miyabara, Renata; Berg, Karsten; Kraemer, Jan F; Baltatu, Ovidiu C; Wessel, Niels; Campos, Luciana A

    2017-01-01

    Objective: The aim of this study was to identify the most sensitive heart rate and blood pressure variability (HRV and BPV) parameters from a given set of well-known methods for the quantification of cardiovascular autonomic function after several autonomic blockades. Methods: Cardiovascular sympathetic and parasympathetic functions were studied in freely moving rats following peripheral muscarinic (methylatropine), β1-adrenergic (metoprolol), muscarinic + β1-adrenergic, α1-adrenergic (prazosin), and ganglionic (hexamethonium) blockades. Time domain, frequency domain and symbolic dynamics measures for each of HRV and BPV were classified through paired Wilcoxon tests for all autonomic drugs separately. In order to select those variables that have a high relevance to, and stable influence on, our target measurements (HRV, BPV), we used Fisher's Method to combine the p-values of multiple tests. Results: This analysis led to the following best set of cardiovascular variability parameters: the mean normal beat-to-beat interval/value (HRV/BPV: meanNN), the coefficient of variation (cvNN = standard deviation over meanNN) and the root mean square of successive differences (RMSSD) from the time domain analysis. In the frequency domain analysis the very-low-frequency (VLF) component was selected. From symbolic dynamics, Shannon entropy of the word distribution (FWSHANNON) as well as POLVAR3, the non-linear parameter to detect intermittently decreased variability, showed the best ability to discriminate between the different autonomic blockades. Conclusion: Through a complex comparative analysis of HRV and BPV measures altered by a set of autonomic drugs, we identified the most sensitive set of informative cardiovascular variability indexes able to pick up the modifications imposed by the autonomic challenges. These indexes may help to increase our understanding of cardiovascular sympathetic and parasympathetic functions in translational studies of experimental diseases.
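
    Assuming a plain list of beat-to-beat (NN) intervals, the three time-domain measures singled out above (meanNN, cvNN and RMSSD) can be computed as in the following sketch; the example series is invented.

```python
from math import sqrt
from statistics import mean, pstdev

def time_domain_hrv(nn_ms):
    """Compute meanNN, cvNN and RMSSD from a list of NN intervals in milliseconds."""
    mean_nn = mean(nn_ms)
    cv_nn = pstdev(nn_ms) / mean_nn                      # coefficient of variation
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    rmssd = sqrt(mean(d * d for d in diffs))             # root mean square of successive differences
    return mean_nn, cv_nn, rmssd

# Invented example series of RR intervals (ms):
nn = [152, 149, 155, 151, 148, 153, 150, 154, 149, 152]
print(tuple(round(v, 3) for v in time_domain_hrv(nn)))
```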

  8. Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.

    2012-12-01

    Simulations using IPCC-class climate models are prone to failure or crashes for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
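
    As a hedged sketch of the classification step (not the authors' code), an RBF-kernel support vector machine can be trained on parameter vectors labelled by whether the corresponding run crashed, and its predicted probabilities then map failure risk over the 18-dimensional parameter space. The data below are random stand-ins with a synthetic failure rule chosen only so the example runs end-to-end.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in ensemble: 500 runs, 18 ocean-model parameters scaled to [0, 1].
X = rng.uniform(size=(500, 18))
# Synthetic failure rule giving a small crash rate; real labels would record
# which simulations actually failed.
y = ((X[:, 0] + X[:, 3] > 1.75) | (X[:, 7] < 0.03)).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, probability=True))
clf.fit(X[:400], y[:400])                       # train on part of the ensemble

p_fail = clf.predict_proba(X[400:])[:, 1]       # probability of simulation failure
print("observed failure rate:", y.mean().round(3))
print("mean predicted failure probability (validation):", p_fail.mean().round(3))
```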

  9. Computing the structural influence matrix for biological systems.

    PubMed

    Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco

    2016-06-01

    We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.

  10. Hyperpolarized Xenon-129 Gas-Exchange Imaging of Lung Microstructure: First Case Studies in Subjects with Obstructive Lung Disease

    PubMed Central

    Dregely, Isabel; Mugler, John P.; Ruset, Iulian C.; Altes, Talissa A.; Mata, Jaime F.; Miller, G. Wilson; Ketel, Jeffrey; Ketel, Steve; Distelbrink, Jan; Hersman, F.W.; Ruppert, Kai

    2011-01-01

    Purpose To develop and test a method to non-invasively assess the functional lung microstructure. Materials and Methods The Multiple exchange time Xenon polarization Transfer Contrast technique (MXTC) encodes xenon gas-exchange contrast at multiple delay times permitting two lung-function parameters to be derived: 1) MXTC-F, the long exchange-time depolarization value, which is proportional to the tissue to alveolar-volume ratio and 2) MXTC-S, the square root of the xenon exchange-time constant, which characterizes thickness and composition of alveolar septa. Three healthy volunteers, one asthmatic and two COPD (GOLD stage I and II) subjects were imaged with MXTC MRI. In a subset of subjects, hyperpolarized xenon-129 ADC MRI and CT imaging were also performed. Results The MXTC-S parameter was found to be elevated in subjects with lung disease (p-value = 0.018). In the MXTC-F parameter map it was feasible to identify regional loss of functional tissue in a COPD patient. Further, the MXTC-F map showed excellent regional correlation with CT and ADC (ρ ≥ 0.90) in one COPD subject. Conclusion The functional tissue-density parameter MXTC-F showed regional agreement with other imaging techniques. The newly developed parameter MXTC-S, which characterizes the functional thickness of alveolar septa, has potential as a novel biomarker for regional parenchymal inflammation or thickening. PMID:21509861

  11. On the treatment of evapotranspiration, soil moisture accounting, and aquifer recharge in monthly water balance models

    USGS Publications Warehouse

    Alley, William M.

    1984-01-01

    Several two- to six-parameter regional water balance models are examined by using 50-year records of monthly streamflow at 10 sites in New Jersey. These models include variants of the Thornthwaite-Mather model, the Palmer model, and the more recent Thomas abcd model. Prediction errors are relatively similar among the models. However, simulated values of state variables such as soil moisture storage differ substantially among the models, and fitted parameter values for different models sometimes indicated an entirely different type of basin response to precipitation. Some problems in parameter identification are noted, including difficulties in identifying an appropriate time lag factor for the Thornthwaite-Mather-type model for basins with little groundwater storage, very high correlations between upper and lower storages in the Palmer-type model, and large sensitivity of parameter a of the abcd model to bias in estimates of precipitation and potential evapotranspiration. Modifications to the threshold concept of the Thornthwaite-Mather model were statistically valid for the six stations in northern New Jersey. The abcd model resulted in a simulated seasonal cycle of groundwater levels similar to fluctuations observed in nearby wells but with greater persistence. These results suggest that extreme caution should be used in attaching physical significance to model parameters and in using the state variables of the models in indices of drought and basin productivity.
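
    For readers unfamiliar with the Thomas abcd model mentioned above, the sketch below gives one commonly cited statement of its monthly water-balance step; the parameter values and forcing are invented, and published variants differ in details such as snow handling and routing.

```python
import math

def abcd_step(P, PE, S_prev, G_prev, a, b, c, d):
    """One month of the Thomas abcd water-balance model (one common formulation):
    P precipitation, PE potential evapotranspiration,
    S soil-moisture storage, G groundwater storage (all in mm)."""
    W = P + S_prev                                     # available water
    term = (W + b) / (2.0 * a)
    Y = term - math.sqrt(term * term - W * b / a)      # evapotranspiration opportunity
    S = Y * math.exp(-PE / b)                          # end-of-month soil moisture
    ET = Y - S                                         # actual evapotranspiration
    recharge = c * (W - Y)                             # water surplus routed to groundwater
    direct = (1.0 - c) * (W - Y)                       # direct runoff
    G = (G_prev + recharge) / (1.0 + d)                # groundwater storage
    baseflow = d * G
    return S, G, direct + baseflow, ET

# Toy run with invented parameters and monthly forcing (mm):
a, b, c, d = 0.98, 250.0, 0.35, 0.15
S, G = 100.0, 50.0
for month, (P, PE) in enumerate([(90, 30), (70, 60), (40, 110), (120, 40)] * 6):
    S, G, Q, ET = abcd_step(P, PE, S, G, a, b, c, d)
    if month < 4:
        print(f"month {month}: Q = {Q:.1f} mm, S = {S:.1f} mm, G = {G:.1f} mm")
```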

  12. Role of Nuclear Morphometry in Breast Cancer and its Correlation with Cytomorphological Grading of Breast Cancer: A Study of 64 Cases

    PubMed Central

    Kashyap, Anamika; Jain, Manjula; Shukla, Shailaja; Andley, Manoj

    2018-01-01

    Background: Fine needle aspiration cytology (FNAC) is a simple, rapid, inexpensive, and reliable method for the diagnosis of breast masses. Cytoprognostic grading in breast cancers is important to identify high-grade tumors. Computer-assisted image morphometric analysis has been developed to quantitate as well as standardize various grading systems. Aims: To apply nuclear morphometry on cytological aspirates of breast cancer and evaluate its correlation with cytomorphological grading, with derivation of suitable cutoff values between various grades. Settings and Designs: Descriptive cross-sectional hospital-based study. Materials and Methods: This study included 64 breast cancer cases (29 of grade 1, 22 of grade 2, and 13 of grade 3). Image analysis was performed on Papanicolaou-stained FNAC slides using NIS-Elements Advanced Research software (ver. 4.00). Nuclear morphometric parameters analyzed included 5 nuclear size, 2 shape, 4 texture, and 2 density parameters. Results: Nuclear size parameters showed an increase in values with increasing cytological grades of carcinoma. Nuclear shape parameters were not found to be significantly different between the three grades. Among nuclear texture parameters, sum intensity and sum brightness were found to be different between the three grades. Conclusion: Nuclear morphometry can be applied to augment the cytology grading of breast cancer and thus help in classifying patients into low- and high-risk groups. PMID:29403169

  13. An economic evaluation of maxillary implant overdentures based on six vs. four implants.

    PubMed

    Listl, Stefan; Fischer, Leonhard; Giannakopoulos, Nikolaos Nikitas

    2014-08-18

    The purpose of the present study was to assess the value for money achieved by bar-retained implant overdentures based on six implants compared with four implants as treatment alternatives for the edentulous maxilla. A Markov decision tree model was constructed and populated with parameter estimates for implant and denture failure as well as patient-centred health outcomes as available from recent literature. The decision scenario was modelled within a ten-year time horizon and relied on cost reimbursement regulations of the German health care system. The cost-effectiveness threshold was identified above which the six-implant solution is preferable over the four-implant solution. Uncertainties regarding input parameters were incorporated via one-way and probabilistic sensitivity analysis based on Monte-Carlo simulation. Within a base case scenario of average treatment complexity, the cost-effectiveness threshold was identified to be 17,564 € per year of denture satisfaction gained, above which the alternative with six implants is preferable over treatment including four implants. Sensitivity analysis showed that, depending on the specification of model input parameters such as patients' denture satisfaction, the respective cost-effectiveness threshold varies substantially. The results of the present study suggest that bar-retained maxillary overdentures based on six implants provide better patient satisfaction than bar-retained overdentures based on four implants but are considerably more expensive. Final judgements about value for money require more comprehensive clinical evidence including patient-centred health outcomes.

  14. Brain invasion assessability in meningiomas is related to meningioma size and grade, and can be improved by extensive sampling of the surgically removed meningioma specimen.

    PubMed

    Pizem, Joze; Velnar, Tomaz; Prestor, Borut; Mlakar, Jernej; Popovic, Mara

    2014-01-01

    Despite the important prognostic value of brain invasion in meningiomas, little attention has been paid to its assessment, and the parameters associated with brain invasion assessability (identification of brain tissue in the surgical specimen) are not well characterized. The aim of our study was to determine the parameters that are associated with brain invasion assessability and brain invasion in meningiomas. By binary logistic regression analysis, we studied the association of various clinical and pathologic parameters with brain invasion assessability and brain invasion in 294 meningiomas: 149 unselected consecutive meningiomas with extensive sampling, diagnosed in 2009 and 2010 and collected prospectively, and 145 meningiomas diagnosed in 1999 and 2000, when little attention was paid to brain invasion assessment. Meningioma grade, size and number of tissue blocks were independent predictors of brain invasion assessability. Brain tissue was identified in 78 of 233 (33%) benign, 33 of 51 (65%) atypical, and 10 of 10 (100%) malignant meningiomas. In univariate analysis, group (prospective vs. retrospective), type (recurrent vs. primary), cleavability, meningioma grade and mitotic count were predictors of brain invasion, while only meningioma grade and group retained predictive value in multivariate analysis. Brain invasion, when assessable, was identified in 22 of 78 (28%) benign, 21 of 33 (64%) atypical, and 10 of 10 (100%) malignant meningiomas. Brain invasion assessability is related to meningioma grade and size and can be improved by extensive sampling of the surgically removed meningioma specimen.

  15. An economic evaluation of maxillary implant overdentures based on six vs. four implants

    PubMed Central

    2014-01-01

    Background The purpose of the present study was to assess the value for money achieved by bar-retained implant overdentures based on six implants compared with four implants as treatment alternatives for the edentulous maxilla. Methods A Markov decision tree model was constructed and populated with parameter estimates for implant and denture failure as well as patient-centred health outcomes as available from recent literature. The decision scenario was modelled within a ten-year time horizon and relied on cost reimbursement regulations of the German health care system. The cost-effectiveness threshold was identified above which the six-implant solution is preferable over the four-implant solution. Uncertainties regarding input parameters were incorporated via one-way and probabilistic sensitivity analysis based on Monte-Carlo simulation. Results Within a base case scenario of average treatment complexity, the cost-effectiveness threshold was identified to be 17,564 € per year of denture satisfaction gained, above which the alternative with six implants is preferable over treatment including four implants. Sensitivity analysis showed that, depending on the specification of model input parameters such as patients’ denture satisfaction, the respective cost-effectiveness threshold varies substantially. Conclusions The results of the present study suggest that bar-retained maxillary overdentures based on six implants provide better patient satisfaction than bar-retained overdentures based on four implants but are considerably more expensive. Final judgements about value for money require more comprehensive clinical evidence including patient-centred health outcomes. PMID:25135370

  16. The effect of signal acquisition and processing choices on ApEn values: towards a "gold standard" for distinguishing effort levels from isometric force records.

    PubMed

    Forrest, Sarah M; Challis, John H; Winter, Samantha L

    2014-06-01

    Approximate entropy (ApEn) is frequently used to identify changes in the complexity of isometric force records with ageing and disease. Different signal acquisition and processing parameters have been used, making comparison or confirmation of results difficult. This study determined the effect of sampling and parameter choices by examining changes in ApEn values across a range of submaximal isometric contractions of the first dorsal interosseus. Reducing the sample rate by decimation changed both the value and pattern of ApEn values dramatically. The pattern of ApEn values across the range of effort levels was not sensitive to the filter cut-off frequency or the criterion used to extract the section of data for analysis. The complexity increased with increasing effort levels using a fixed 'r' value (which accounts for measurement noise) but decreased with increasing effort level when 'r' was set to 0.1 of the standard deviation of force. It is recommended that isometric force records be sampled at frequencies >200 Hz, that the template length ('m') be set to 2, and that 'r' be set to the measurement system noise or 0.1 SD, depending on the physiological process to be distinguished. It is demonstrated that changes in ApEn across effort levels are related to changes in force gradation strategy. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
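
    A compact reference implementation of ApEn in the formulation usually attributed to Pincus may help readers reproduce the sensitivity to sampling and to the choices of 'm' and 'r'; the synthetic force record and the sampling choices below are illustrative only.

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D signal x (Pincus' formulation).
    r defaults to 0.1 * SD of the signal, one of the two conventions discussed above."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.1 * x.std()

    def phi(m):
        # All overlapping templates of length m
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)        # self-matches included, as in the original definition
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# Synthetic isometric force record: constant target plus noise, 250 Hz for 4 s
rng = np.random.default_rng(1)
force = 20.0 + 0.5 * rng.standard_normal(1000)
print(round(apen(force, m=2), 3))
```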

  17. Predicting CYP2C19 Catalytic Parameters for Enantioselective Oxidations Using Artificial Neural Networks and a Chirality Code

    PubMed Central

    Hartman, Jessica H.; Cothren, Steven D.; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A.; Miller, Grover P.

    2013-01-01

    Cytochromes P450 (CYP for isoforms) play a central role in biological processes, especially the metabolism of chiral molecules; thus, development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified the most optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing the distribution of the corresponding catalytic parameters (kcat, Km, and kcat/Km), and determining the best topology for networks to make predictions. Among different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and inclusion of hydrogens yielded the most optimal artificial neural networks. Their training also required resolution of poorly distributed output catalytic parameters using a Box-Cox transformation. End point leave-one-out cross correlations of the best neural networks revealed that predictions for individual catalytic parameters (kcat and Km) were more consistent with experimental values than those for catalytic efficiency (kcat/Km). Lastly, the neural networks correctly predicted the enantioselectivity, and catalytic parameters comparable to those measured in this study, for the previously uncharacterized CYP2C19 substrates R- and S-propranolol. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds. PMID:23673224

  18. Relations between Municipal Water Use and Selected Meteorological Parameters and Drought Indices, East-Central and Northeast Florida

    USGS Publications Warehouse

    Murray, Louis C.

    2009-01-01

    Water-use data collected between 1992 and 2006 at eight municipal water-supply utilities in east-central and northeast Florida were analyzed to identify seasonal trends in use and to quantify monthly variations. Regression analyses were applied to identify significant correlations between water use and selected meteorological parameters and drought indices. Selected parameters and indices include precipitation (P), air temperature (T), potential evapotranspiration (PET), available water (P-PET), monthly changes in these parameters (Delta P, Delta T, Delta PET, Delta(P-PET)), the Palmer Drought Severity Index (PDSI), and the Standardized Precipitation Index (SPI). Selected utilities include the City of Daytona Beach (Daytona), the City of Eustis (Eustis), Gainesville Regional Utilities (GRU), Jacksonville Electric Authority (JEA), Orange County Utilities (OCU), Orlando Utilities Commission (OUC), Seminole County Utilities (SCU), and the City of St. Augustine (St. Augustine). Water-use rates at these utilities in 2006 ranged from about 3.2 million gallons per day at Eustis to about 131 million gallons per day at JEA. Total water-use rates increased at all utilities throughout the 15-year period of record, ranging from about 4 percent at Daytona to greater than 200 percent at OCU and SCU. Metered rates, however, decreased at six of the eight utilities, ranging from about 2 percent at OCU and OUC to about 17 percent at Eustis. Decreases in metered rates occurred because the number of metered connections increased at a greater rate than did total water use, suggesting that factors other than just population growth may play important roles in water-use dynamics. Given the absence of a concurrent trend in precipitation, these decreases can likely be attributed to changes in non-climatic factors such as water-use type, usage of reclaimed water, water-use restrictions, demographics, and so forth. When averaged for the eight utilities, metered water-use rates depict a clear seasonal pattern in which rates were lowest in the winter and greatest in the late spring. Averaged water-use rates ranged from about 9 percent below the 15-year daily mean in January to about 11 percent above the daily mean in May. Water-use rates were found to be statistically correlated to meteorological parameters and drought indices, and to be influenced by system memory. Metered rates (in gallons per day per active metered connection) were consistently found to be influenced by P, T, PET, and P-PET and changes in these parameters that occurred in prior months. In the single-variant analyses, best correlations were obtained by fitting polynomial functions to plots of metered rates versus moving-averaged values of selected parameters (R2 values greater than 0.50 at three of eight sites). Overall, metered water-use rates were best correlated with the 3- to 4-month moving average of Delta T or Delta PET (R2 values up to 0.66), whereas the full suite of meteorological parameters was best correlated with metered rates at Daytona and least correlated with rates at St. Augustine. Similarly, metered rates were substantially better correlated with moving-averaged values of precipitation (significant at all eight sites) than with single (current) monthly values (significant at only three sites). Total and metered water-use rates were positively correlated with T, PET, Delta P, Delta T, and Delta PET, and negatively correlated with P, P-PET, Delta(P-PET), PDSI, and SPI. The drought indices were better correlated with total water-use rates than with metered rates, whereas metered rates were better correlated with meteorological parameters. Multivariant analyses produced fits of the data that explained a greater degree of the variance in metered rates than did the single-variant analyses. Adjusted R2 values for the 'best' models ranged from 0.79 at JEA to 0.29 at St. Augustine and exceeded 0.60 at five of eight sites. The amount of available water (P-PET) was the si

  19. Assessing the Internal Consistency of the Marine Carbon Dioxide System at High Latitudes: The Labrador Sea AR7W Line Study Case

    NASA Astrophysics Data System (ADS)

    Raimondi, L.; Azetsu-Scott, K.; Wallace, D.

    2016-02-01

    This work assesses the internal consistency of the ocean carbon dioxide system through the comparison of discrete measurements and calculated values of four analytical parameters of the inorganic carbon system: Total Alkalinity (TA), Dissolved Inorganic Carbon (DIC), pH and Partial Pressure of CO2 (pCO2). The study is based on 486 seawater samples analyzed for TA, DIC and pH and 86 samples for pCO2 collected during the 2014 cruise along the AR7W line in the Labrador Sea. The internal consistency has been assessed using all combinations of input parameters and eight sets of thermodynamic constants (K1, K2) in calculating each parameter through the CO2SYS software. Residuals of each parameter have been calculated as the differences between measured and calculated values (reported as ΔTA, ΔDIC, ΔpH and ΔpCO2). Although differences between the selected sets of constants were observed, the largest were obtained using different pairs of input parameters. As expected, the pH-pCO2 pair produced the poorest results, suggesting that measurements of either TA or DIC are needed to define the carbonate system accurately and precisely. To identify a signature of organic alkalinity, we isolated the residuals in the bloom area; therefore, only ΔTA values from surface waters (0-30 m) along the Greenland side of the basin were selected. The residuals showed that no measured value was higher than the calculated one, and therefore we could not detect the presence of organic bases in the shallow water column. The internal consistency in characteristic water masses of the Labrador Sea (Denmark Strait Overflow Water, North East Atlantic Deep Water, Newly-ventilated Labrador Sea Water, Greenland and Labrador Shelf waters) will also be discussed.

  20. The Quantum Approximation Optimization Algorithm for MaxCut: A Fermionic View

    NASA Technical Reports Server (NTRS)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2017-01-01

    Farhi et al. recently proposed a class of quantum algorithms, the Quantum Approximate Optimization Algorithm (QAOA), for approximately solving combinatorial optimization problems. A level-p QAOA circuit consists of steps in which a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm. As p increases, however, the parameter search space grows quickly. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here, we analytically and numerically study parameter setting for QAOA applied to MAXCUT. For level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MAXCUT, the Ring of Disagrees, or the 1D antiferromagnetic ring, we provide an analysis for arbitrarily high level. Using a Fermionic representation, the evolution of the system under QAOA translates into quantum optimal control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of QAOA for any p. It also greatly simplifies numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional sub-manifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.

  1. The use of infrared thermal imaging as a non-destructive screening tool for identifying drought-tolerant lentil genotypes.

    PubMed

    Biju, Sajitha; Fuentes, Sigfredo; Gupta, Dorin

    2018-06-01

    Lentil (Lens culinaris, Medik.) is an important legume crop, which often experiences drought stress, especially at the flowering and grain-filling phenological stages. The availability of efficient and robust screening tools based on relevant non-destructive quantifiable traits would facilitate research on crop improvement for drought tolerance. The objective of this study was to evaluate the drought tolerance of 37 lentil genotypes using infrared thermal imaging (IRTI), drought tolerance parameters and multivariate data analysis. Potted plants were kept in a completely randomized design in a growth chamber with five replicates. Plants were subjected to three different drought treatments: 100, 50 and 20% of field capacity at the onset of the reproductive period. The relative drought stress tolerance was determined based on a set of morpho-physiological parameters including non-destructive measures based on IRTI, such as canopy temperature (Tc), canopy temperature depression (CTD) and crop water stress index (CWSI) during the growing period, and destructive measures at harvest, such as dry root-shoot ratio (RS ratio), relative water content (RWC) and harvest index (HI). The drought tolerance indices used were the drought susceptibility index (DSI) and drought tolerance efficiency (DTE). Results showed that drought stress treatments significantly reduced the RWC, HI, CTD and DSI, whereas the values of Tc, CWSI, RS ratio and DTE significantly increased for all the genotypes. The cluster analysis from morpho-physiological parameters clustered the genotypes into three distinct groups according to their level of drought stress tolerance. The genotypes with higher values of RS ratio, RWC, HI, DTE and CTD and lower values of DSI, Tc and CWSI were identified as drought-tolerant genotypes. Based on this preliminary screening, the genotypes Digger, Cumra, Indianhead, ILL 5588, ILL 6002 and ILL 5582 were identified as promising drought-tolerant genotypes. It can be concluded that IRTI analysis, along with the RS ratio, RWC, HI and other drought tolerance indices, is a high-throughput, non-destructive screening tool for defining drought stress tolerance variability within lentil plants. These results provide a foundation for future research directed at identifying powerful drought assessment traits using rapid and non-destructive techniques, such as IRTI along with the yield traits, and understanding the biochemical and molecular mechanisms underlying lentil tolerance to drought stress. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
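
    The crop water stress index used above is commonly computed from canopy temperature relative to fully transpiring ('wet') and non-transpiring ('dry') reference temperatures; a minimal sketch under that assumption follows, with all temperatures invented rather than taken from the study.

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop water stress index: 0 = well watered, 1 = fully stressed
    (empirical formulation based on wet and dry reference temperatures)."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

def ctd(t_canopy, t_air):
    """Canopy temperature depression: air temperature minus canopy temperature."""
    return t_air - t_canopy

# Invented readings for a well-watered and a droughted pot (deg C):
t_air, t_wet, t_dry = 28.0, 24.0, 33.0
for label, t_c in [("100% field capacity", 25.1), ("20% field capacity", 30.4)]:
    print(label, "CWSI =", round(cwsi(t_c, t_wet, t_dry), 2), " CTD =", round(ctd(t_c, t_air), 1))
```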

  2. Monitoring physical and chemical parameters of Delaware Bay waters with an ERTS-1 data collection platform

    NASA Technical Reports Server (NTRS)

    Klemas, V. (Principal Investigator); Wethe, C.

    1975-01-01

    The author has identified the following significant results. Results of the analysis of data collected during the summer of 1974 demonstrate that the ERTS Data Collection Platform (DCP) is quite responsive to changing water parameters and that this information can be successfully transmitted under all weather conditions. The monitoring of on-site probe outputs reveals a rapid response to changing water temperature, salinity, and turbidity conditions on incoming tides as the tidal salt wedge passes the probe location. The changes in water properties were corroborated by simultaneously sampling the water for subsequent laboratory analysis. Fluctuations observed in the values of salinity, conductivity, temperature and water depth over short time intervals were extremely small. Due to the nature of the probe, 10% to 20% fluctuations were observed in the turbidity values. The use of the average of the values observed during an overpass provided acceptable results. Good quality data was obtained from the satellite on each overpass regardless of weather conditions. Continued use of the DCP will help provide an indication of the accuracy of the probes and transmission system during long term use.

  3. Verification of folk medicinal potentiality for some common plants in Jordan.

    PubMed

    Al-Qura'n, S

    2006-10-01

    A total of 87 species belonging to 59 genera and 33 plant families were identified and presented in the study area. The 3 largest families are Lamiaceae (9 aquatic species), Asteraceae (7 species), and Salicaceae (7 species). The largest genera are Mentha (6 species), Polygonum (5 species), and Salix (5 species). 63 folk medicinal aquatic species (73.3%) have therapeutic similarities with neighbouring countries, while the remaining 24 species (26.7%) have no such therapeutic similarity. Emergent species (living in close contact with the water body) were the most frequently recorded, while amphibious, submerged or floating species were the least. The folk medicinal importance value of the aquatic species recorded was determined according to Friedman parameters. 21 species (24%) have ROP values higher than 50 and therefore have the highest popularity in folk medicinal potentiality. 26 species (29.9%) have therapeutic effects reported by fewer than three informants and were therefore excluded from further consideration. 40 species (46.1%) have ROP values less than 50 and are therefore considered non-popular medicinal plants.

  4. Simple Model for Identifying Critical Regions in Atrial Fibrillation

    NASA Astrophysics Data System (ADS)

    Christensen, Kim; Manani, Kishan A.; Peters, Nicholas S.

    2015-01-01

    Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age, beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment.

  5. A National Trial on Differences in Cerebral Perfusion Pressure Values by Measurement Location.

    PubMed

    McNett, Molly M; Bader, Mary Kay; Livesay, Sarah; Yeager, Susan; Moran, Cristina; Barnes, Arianna; Harrison, Kimberly R; Olson, DaiWai M

    2018-04-01

    Cerebral perfusion pressure (CPP) is a key parameter in the management of brain injury with suspected impaired cerebral autoregulation. CPP is calculated by subtracting intracranial pressure (ICP) from mean arterial pressure (MAP). Despite consensus on the importance of CPP monitoring, substantial variations exist in the anatomical reference points used to measure arterial MAP when calculating CPP. This study aimed to identify differences in CPP values based on measurement location when using the phlebostatic axis (PA) or tragus (Tg) as anatomical reference points. The secondary study aim was to determine the impact of these differences on patient outcomes at discharge. This was a prospective, repeated-measures, multi-site national trial. Adult ICU patients with neurological injury necessitating ICP and CPP monitoring were consecutively enrolled from seven sites. Daily MAP/ICP/CPP values were gathered with the arterial transducer at the PA, followed by the Tg, as anatomical reference points. A total of 136 subjects were enrolled, resulting in 324 paired observations. There were significant differences for CPP when comparing values obtained at the PA and Tg reference points (p < 0.001). Differences remained significant in a repeated-measures model when controlling for clinical factors (mean CPP-PA = 80.77, mean CPP-Tg = 70.61, p < 0.001). When categorizing CPP as a binary endpoint, 18.8% of values were identified as adequate when measured at the PA, yet inadequate when measured at the Tg. Findings identify numerical differences for CPP based on anatomical reference location and highlight the importance of a standard reference point for both clinical practice and future trials to limit practice variations and heterogeneity of findings.
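
    As an illustration of why the transducer reference point matters, CPP = MAP - ICP can be recomputed after applying a hydrostatic correction for the vertical offset between the phlebostatic axis and the tragus (roughly 0.74 mmHg per cm of height difference, using the usual cmH2O-to-mmHg conversion); the offset and pressures in the sketch are invented.

```python
MMHG_PER_CM = 0.74   # approximate hydrostatic pressure per cm of vertical height (assumption)

def cpp(map_mmhg, icp_mmhg):
    """Cerebral perfusion pressure."""
    return map_mmhg - icp_mmhg

def map_at_tragus(map_at_pa, height_cm):
    """Re-reference MAP from the phlebostatic axis to the tragus,
    assuming the tragus sits height_cm above the phlebostatic axis (head of bed elevated)."""
    return map_at_pa - MMHG_PER_CM * height_cm

# Invented example: MAP 90 mmHg zeroed at the PA, ICP 12 mmHg, tragus 14 cm above the PA.
map_pa, icp, dh = 90.0, 12.0, 14.0
print("CPP referenced to PA:    ", cpp(map_pa, icp))
print("CPP referenced to tragus:", round(cpp(map_at_tragus(map_pa, dh), icp), 1))
```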

  6. Analysis and Sizing for Transient Thermal Heating of Insulated Aerospace Vehicle Structures

    NASA Technical Reports Server (NTRS)

    Blosser, Max L.

    2012-01-01

    An analytical solution was derived for the transient response of an insulated structure subjected to a simplified heat pulse. The solution is solely a function of two nondimensional parameters. Simpler functions of these two parameters were developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective thermal properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Equations were also developed for the minimum mass required to maintain the inner, unheated surface below a specified temperature. In the course of the derivation, two figures of merit were identified. Required insulation masses calculated using the approximate equation were shown to typically agree with finite element results within 10%-20% over the relevant range of parameters studied.

  7. Aspects of metallic low-temperature transport in Mott-insulator/band-insulator superlattices: Optical conductivity and thermoelectricity

    NASA Astrophysics Data System (ADS)

    Rüegg, Andreas; Pilgram, Sebastian; Sigrist, Manfred

    2008-06-01

    We investigate the low-temperature electrical and thermal transport properties in atomically precise metallic heterostructures involving strongly correlated electron systems. The model of the Mott-insulator/band-insulator superlattice was discussed in the framework of the slave-boson mean-field approximation and transport quantities were derived by use of the Boltzmann transport equation in the relaxation-time approximation. The results for the optical conductivity are in good agreement with recently published experimental data on (LaTiO3)N/(SrTiO3)M superlattices and allow us to estimate the values of key parameters of the model. Furthermore, predictions for the thermoelectric response were made and the dependence of the Seebeck coefficient on model parameters was studied in detail. The width of the Mott-insulating material was identified as the most relevant parameter, in particular, this parameter provides a way to optimize the thermoelectric power factor at low temperatures.

  8. Consistency of QSAR models: Correct split of training and test sets, ranking of models and performance parameters.

    PubMed

    Rácz, A; Bajusz, D; Héberger, K

    2015-01-01

    Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
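
    Sum of ranking differences compares each column's ranking of the objects (e.g., molecules or performance parameters) with a reference ranking, often taken as the row-wise average; the following sketch uses that common convention with an invented performance table, and is not the authors' implementation.

```python
import numpy as np
from scipy.stats import rankdata

def srd(scores, reference=None):
    """Sum of ranking differences for each column of `scores`
    (rows = objects, columns = models or merits). The reference ranking defaults
    to the row-wise average, a common convention in the SRD literature."""
    scores = np.asarray(scores, dtype=float)
    if reference is None:
        reference = scores.mean(axis=1)
    ref_rank = rankdata(reference)
    return {j: float(np.abs(rankdata(scores[:, j]) - ref_rank).sum())
            for j in range(scores.shape[1])}

# Invented table: 6 molecules scored by 3 performance parameters
table = [[0.91, 0.88, 0.84],
         [0.75, 0.79, 0.70],
         [0.83, 0.85, 0.95],
         [0.62, 0.60, 0.66],
         [0.88, 0.90, 0.79],
         [0.70, 0.68, 0.71]]
print(srd(table))   # smaller SRD = closer to the consensus ranking
```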

  9. Sensitivity Analysis of Kinetic Rate-Law Parameters Used to Simulate Long-Term Weathering of ILAW Glass. Erratum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Gary L.

    2016-09-06

    This report refers to or contains Kg values for glasses LAWA44, LAWB45 and LAWC22 affected by calculation errors identified by Papathanassiu et al. (2011). The corrected Kg values are reported in an erratum included in the revised version of the original report. The revised report can be referenced as follows: Pierce E. M. et al. (2004) Waste Form Release Data Package for the 2005 Integrated Disposal Facility Performance Assessment. PNNL-14805 Rev. 0 Erratum. Pacific Northwest National Laboratory, Richland, WA, USA.

  10. Functional Linear Model with Zero-value Coefficient Function at Sub-regions.

    PubMed

    Zhou, Jianhui; Wang, Nae-Yuh; Wang, Naisyin

    2013-01-01

    We propose a shrinkage method to estimate the coefficient function in a functional linear regression model when the value of the coefficient function is zero within certain sub-regions. Besides identifying the null region in which the coefficient function is zero, we also aim to perform estimation and inferences for the nonparametrically estimated coefficient function without over-shrinking the values. Our proposal consists of two stages. In stage one, the Dantzig selector is employed to provide initial location of the null region. In stage two, we propose a group SCAD approach to refine the estimated location of the null region and to provide the estimation and inference procedures for the coefficient function. Our considerations have certain advantages in this functional setup. One goal is to reduce the number of parameters employed in the model. With a one-stage procedure, it is needed to use a large number of knots in order to precisely identify the zero-coefficient region; however, the variation and estimation difficulties increase with the number of parameters. Owing to the additional refinement stage, we avoid this necessity and our estimator achieves superior numerical performance in practice. We show that our estimator enjoys the Oracle property; it identifies the null region with probability tending to 1, and it achieves the same asymptotic normality for the estimated coefficient function on the non-null region as the functional linear model estimator when the non-null region is known. Numerically, our refined estimator overcomes the shortcomings of the initial Dantzig estimator which tends to under-estimate the absolute scale of non-zero coefficients. The performance of the proposed method is illustrated in simulation studies. We apply the method in an analysis of data collected by the Johns Hopkins Precursors Study, where the primary interests are in estimating the strength of association between body mass index in midlife and the quality of life in physical functioning at old age, and in identifying the effective age ranges where such associations exist.

  11. Identifying and assessing critical uncertainty thresholds in a forest pest risk model

    Treesearch

    Frank H. Koch; Denys Yemshanov

    2015-01-01

    Pest risk maps can provide helpful decision support for invasive alien species management, but often fail to adequately address the uncertainty associated with their predicted risk values. This chapter explores how increased uncertainty in a risk model's numeric assumptions (i.e., its principal parameters) might affect the resulting risk map. We used a spatial...

  12. Versatile Analysis of Single-Molecule Tracking Data by Comprehensive Testing against Monte Carlo Simulations

    PubMed Central

    Wieser, Stefan; Axmann, Markus; Schütz, Gerhard J.

    2008-01-01

    We propose here an approach for the analysis of single-molecule trajectories which is based on a comprehensive comparison of an experimental data set with multiple Monte Carlo simulations of the diffusion process. It allows quantitative data analysis, particularly whenever analytical treatment of a model is infeasible. Simulations are performed on a discrete parameter space and compared with the experimental results by a nonparametric statistical test. The method provides a matrix of p-values that assess the probability for having observed the experimental data at each setting of the model parameters. We show the testing approach for three typical situations observed in the cellular plasma membrane: i), free Brownian motion of the tracer, ii), hop diffusion of the tracer in a periodic meshwork of squares, and iii), transient binding of the tracer to slowly diffusing structures. By plotting the p-value as a function of the model parameters, one can easily identify the most consistent parameter settings but also recover mutual dependencies and ambiguities which are difficult to determine by standard fitting routines. Finally, we used the test to reanalyze previous data obtained on the diffusion of the glycosylphosphatidylinositol-protein CD59 in the plasma membrane of the human T24 cell line. PMID:18805933
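
    A hedged sketch of the general idea (not the authors' code): for each diffusion coefficient D on a discrete grid, simulate a large Monte Carlo reference set of Brownian steps and compare the observed squared displacements against it with a nonparametric Kolmogorov-Smirnov test, yielding a p-value per parameter setting. The frame time, coefficients and trajectory below are invented.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
DT = 0.01          # frame time in seconds (assumption)

def brownian_steps(D, n_steps):
    """Squared 2-D step displacements of a free Brownian walker with diffusion coefficient D."""
    steps = rng.normal(scale=np.sqrt(2 * D * DT), size=(n_steps, 2))
    return (steps ** 2).sum(axis=1)

# "Experimental" trajectory: generated here with D = 0.5 um^2/s so the example is self-contained.
observed = brownian_steps(0.5, 400)

# Discrete parameter grid and pooled Monte Carlo reference sets
for D in [0.1, 0.25, 0.5, 1.0, 2.0]:
    simulated = brownian_steps(D, 20000)
    p = ks_2samp(observed, simulated).pvalue      # nonparametric comparison
    print(f"D = {D:4.2f} um^2/s   p = {p:.3f}")
```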

  13. Incorporating rainfall uncertainty in a SWAT model: the river Zenne basin (Belgium) case study

    NASA Astrophysics Data System (ADS)

    Tolessa Leta, Olkeba; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2013-04-01

    The European Union Water Framework Directive (EU-WFD) called on its member countries to achieve a good ecological status for all inland and coastal water bodies by 2015. According to recent studies, the river Zenne (Belgium) is far from this objective. Therefore, an interuniversity and multidisciplinary project "Towards a Good Ecological Status in the river Zenne (GESZ)" was launched to evaluate the effects of wastewater management plans on the river. In this project, different models have been developed and integrated using the Open Modelling Interface (OpenMI). The hydrologic, semi-distributed Soil and Water Assessment Tool (SWAT) is used as one of the model components in the integrated modelling chain in order to model the upland catchment processes. The assessment of the uncertainty of SWAT is an essential aspect of the decision-making process, in order to design robust management strategies that take the predicted uncertainties into account. Model uncertainty stems from the uncertainties in the model parameters, the input data (e.g., rainfall), the calibration data (e.g., stream flows) and the model structure itself. The objective of this paper is to assess the first three sources of uncertainty in a SWAT model of the river Zenne basin. For the assessment of rainfall measurement uncertainty, we first identified independent rainfall periods, based on the daily precipitation and stream flow observations and using the Water Engineering Time Series PROcessing tool (WETSPRO). Secondly, we assigned a rainfall multiplier parameter to each of the independent rainfall periods, which serves as a multiplicative input error corruption. Finally, we treated these multipliers as latent parameters in the model optimization and uncertainty analysis (UA). For parameter uncertainty assessment, due to the high number of parameters of the SWAT model, we first screened out its most sensitive parameters using the Latin Hypercube One-factor-At-a-Time (LH-OAT) technique. Subsequently, we considered only the most sensitive parameters for parameter optimization and UA. To explicitly account for the stream flow uncertainty, we assumed that the stream flow measurement error increases linearly with the stream flow value. To assess the uncertainty and infer posterior distributions of the parameters, we used a Markov Chain Monte Carlo (MCMC) sampler, DiffeRential Evolution Adaptive Metropolis (DREAM), which samples from an archive of past states to generate candidate points in each individual chain. It is shown that the marginal posterior distributions of the rainfall multipliers vary widely between individual events, as a consequence of rainfall measurement errors and the spatial variability of the rain. Only a few of the rainfall events are well defined. The marginal posterior distributions of the SWAT model parameter values are well defined and identified by DREAM, within their prior ranges. The posterior distributions of the output uncertainty parameter values also show that the stream flow data are highly uncertain. The approach of using rainfall multipliers to treat rainfall uncertainty for a complex model has an impact on the model parameter marginal posterior distributions and on the model results.

  14. Report of the Nuclear Propulsion Mission Analysis, Figures of Merit Subpanel: Quantifiable figures of merit for nuclear thermal propulsion

    NASA Technical Reports Server (NTRS)

    Haynes, Davy A.

    1991-01-01

    The results of an inquiry by the Nuclear Propulsion Mission Analysis, Figures of Merit subpanel are given. The subpanel was tasked to consider which appropriate and quantifiable parameters should be used in the definition of an overall figure of merit (FoM) for Mars transportation system (MTS) nuclear thermal rocket (NTR) engines. Such a characterization is needed to resolve the NTR engine design trades by a logical and orderly means, and to provide a meaningful method for comparison of the various NTR engine concepts. The subpanel was specifically tasked to identify the quantifiable engine parameters which would be the most significant engine factors affecting an overall FoM for an MTS, and was not tasked with determining 'acceptable' or 'recommended' values for the identified parameters. In addition, the subpanel was asked not to define an overall FoM for an MTS. Thus, the selection of a specific approach, applicable weighting factors, and any interrelationships for establishing an overall numerical FoM was considered beyond the scope of the subpanel inquiry.

  15. Taming parallel I/O complexity with auto-tuning

    DOE PAGES

    Behzad, Babak; Luu, Huong Vu Thanh; Huchette, Joseph; ...

    2013-11-17

    We present an auto-tuning system for optimizing I/O performance of HDF5 applications and demonstrate its value across platforms, applications, and at scale. The system uses a genetic algorithm to search a large space of tunable parameters and to identify effective settings at all layers of the parallel I/O stack. The parameter settings are applied transparently by the auto-tuning system via dynamically intercepted HDF5 calls. To validate our auto-tuning system, we applied it to three I/O benchmarks (VPIC, VORPAL, and GCRM) that replicate the I/O activity of their respective applications. We tested the system with different weak-scaling configurations (128, 2048, and 4096 CPU cores) that generate 30 GB to 1 TB of data, and executed these configurations on diverse HPC platforms (Cray XE6, IBM BG/P, and Dell Cluster). In all cases, the auto-tuning framework identified tunable parameters that substantially improved write performance over default system settings. In conclusion, we consistently demonstrate I/O write speedups between 2x and 100x for test configurations.
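
    As a rough illustration of how a genetic algorithm can search a space of I/O tuning parameters, the sketch below evolves candidate settings against a user-supplied benchmark function. The parameter names, value ranges and GA settings are assumptions chosen for illustration; they are not the authors' tuning space or implementation.

        import random

        # Hypothetical tunable parameters at different layers of the parallel I/O stack
        SEARCH_SPACE = {
            "lustre_stripe_count": [4, 8, 16, 32, 64],
            "lustre_stripe_size_mb": [1, 4, 16, 64],
            "mpiio_cb_nodes": [1, 2, 4, 8],
            "hdf5_alignment_kb": [64, 256, 1024],
        }

        def random_setting():
            return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

        def crossover(a, b):
            return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

        def mutate(s, rate=0.2):
            return {k: (random.choice(v) if random.random() < rate else s[k])
                    for k, v in SEARCH_SPACE.items()}

        def tune(measure_bandwidth, generations=10, pop_size=8, elite=2):
            """measure_bandwidth(setting) -> MB/s from a benchmark run (assumed callable)."""
            pop = [random_setting() for _ in range(pop_size)]
            for _ in range(generations):
                ranked = sorted(pop, key=measure_bandwidth, reverse=True)
                parents = ranked[:elite]
                children = [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(pop_size - elite)]
                pop = parents + children
            return max(pop, key=measure_bandwidth)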

  16. Identification and evaluation of air-pollution-tolerant plants around lignite-based thermal power station for greenbelt development.

    PubMed

    Govindaraju, M; Ganeshkumar, R S; Muthukumaran, V R; Visvanathan, P

    2012-05-01

    Thermal power plants emit various gaseous and particulate pollutants into the atmosphere. It is well known that trees help to reduce air pollution. Development of a greenbelt with suitable plant species around the source of emission will mitigate the air pollution. Selection of suitable plant species for a greenbelt is very important. The present study evaluates different plant species around the Neyveli thermal power plant by calculating the Air Pollution Tolerance Index (APTI), which is based on their significant biochemical parameters. Also, the Anticipated Performance Index (API) was calculated for these plant species by combining APTI values with other socio-economic and biological parameters. Based on these indices, the most appropriate plant species were identified for the development of a greenbelt around the thermal power plant to mitigate air pollution. Among the 30 different plant species evaluated, Mangifera indica L. was identified as the keystone species, falling under the excellent category. Ambient air quality parameters were correlated with the biochemical characteristics of plant leaves, and significant changes were observed in the plants' biochemical characteristics due to the air pollution stress.
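
    For reference, the APTI is commonly computed from four leaf biochemical parameters using the formulation widespread in the greenbelt literature; the small sketch below assumes that standard formula (it is quoted from the general literature, not from this abstract), and the sample numbers are made up.

        def air_pollution_tolerance_index(ascorbic_acid, total_chlorophyll, leaf_ph, rwc):
            """APTI as commonly defined in the greenbelt literature (assumed formulation).

            ascorbic_acid     : mg per g fresh weight
            total_chlorophyll : mg per g fresh weight
            leaf_ph           : pH of the leaf extract
            rwc               : relative water content, percent
            """
            return (ascorbic_acid * (total_chlorophyll + leaf_ph) + rwc) / 10.0

        # Example with a hypothetical leaf sample
        print(air_pollution_tolerance_index(2.1, 6.5, 6.8, 82.0))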

  17. Bflinks: Reliable Bugfix Links via Bidirectional References and Tuned Heuristics

    PubMed Central

    2014-01-01

    Background. Data from software version archives and defect databases can be used for defect insertion circumstance analysis and defect prediction. The first step in such analyses is identifying defect-correcting changes in the version archive (bugfix commits) and enriching them with additional metadata by establishing bugfix links to corresponding entries in the defect database. Candidate bugfix commits are typically identified via heuristic string matching on the commit message. Research Questions. Which filters could be used to obtain a set of bugfix links? How to tune their parameters? What accuracy is achieved? Method. We analyze a modular set of seven independent filters, including new ones that make use of reverse links, and evaluate visual heuristics for setting cutoff parameters. For a commercial repository, a product expert manually verifies over 2500 links to validate the results with unprecedented accuracy. Results. The heuristics pick a very good parameter value for five filters and a reasonably good one for the sixth. The combined filtering, called bflinks, provides 93% precision and only 7% results loss. Conclusion. Bflinks can provide high-quality results and adapts to repositories with different properties. PMID:27433506

  18. Histogram analysis derived from apparent diffusion coefficient (ADC) is more sensitive to reflect serological parameters in myositis than conventional ADC analysis.

    PubMed

    Meyer, Hans Jonas; Emmer, Alexander; Kornhuber, Malte; Surov, Alexey

    2018-05-01

    Diffusion-weighted imaging (DWI) has the potential to reflect histopathological architecture. A novel imaging approach, namely histogram analysis, is used to further characterize tissues on MRI. The aim of this study was to correlate histogram parameters derived from apparent diffusion coefficient (ADC) maps with serological parameters in myositis. 16 patients with autoimmune myositis were included in this retrospective study. DWI was obtained on a 1.5 T scanner by using b-values of 0 and 1000 s/mm². Histogram analysis was performed as a whole-muscle measurement by using a custom-made Matlab-based application. The following ADC histogram parameters were estimated: ADCmean, ADCmax, ADCmin, ADCmedian, ADCmode, the percentiles ADCp10, ADCp25, ADCp75 and ADCp90, as well as the histogram parameters kurtosis, skewness, and entropy. In all patients, the blood sample was acquired within 3 days of the MRI. The following serological parameters were estimated: alanine aminotransferase, aspartate aminotransferase, creatine kinase, lactate dehydrogenase, C-reactive protein (CRP) and myoglobin. All patients were screened for Jo1-autoantibodies. Kurtosis correlated inversely with CRP (correlation coefficient = -0.55, p = 0.03). Furthermore, ADCp10 and ADCp90 values tended to correlate with creatine kinase (correlation coefficients = -0.43 and -0.42, p = 0.11 and 0.12, respectively). In addition, ADCmean, p10, p25, median, mode, and entropy were different between Jo1-positive and Jo1-negative patients. ADC histogram parameters are sensitive for the detection of muscle alterations in myositis patients. Advances in knowledge: This study identified that kurtosis derived from ADC maps is associated with CRP in myositis patients. Furthermore, several ADC histogram parameters are statistically different between Jo1-positive and Jo1-negative patients.
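
    The listed histogram descriptors are straightforward to compute once the ADC values of the muscle ROI have been extracted. The short numpy/scipy sketch below shows one way to obtain them; the binning choice and ROI handling are assumptions, and this is not the study's Matlab tool.

        import numpy as np
        from scipy.stats import kurtosis, skew, entropy

        def adc_histogram_parameters(adc_values, n_bins=64):
            """Histogram descriptors of ADC values inside a whole-muscle ROI.

            adc_values : 1-D array of ADC values from the ROI (units as exported by the scanner)
            """
            params = {
                "ADCmean": np.mean(adc_values),
                "ADCmin": np.min(adc_values),
                "ADCmax": np.max(adc_values),
                "ADCmedian": np.median(adc_values),
                "kurtosis": kurtosis(adc_values),
                "skewness": skew(adc_values),
            }
            for q in (10, 25, 75, 90):
                params[f"ADCp{q}"] = np.percentile(adc_values, q)
            counts, edges = np.histogram(adc_values, bins=n_bins)
            params["entropy"] = entropy(counts / counts.sum())  # Shannon entropy of the binned values
            # Mode approximated by the centre of the fullest histogram bin
            params["ADCmode"] = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
            return params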

  19. Correlation of chlorophyll, suspended matter, and related parameters of waters in the lower Chesapeake Bay area to LANDSAT-1 imagery

    NASA Technical Reports Server (NTRS)

    Fleischer, P. (Principal Investigator); Bowker, D. E.; Witte, W. G.; Gosink, T. A.; Hanna, W. J.; Ludwick, J. C.

    1976-01-01

    The author has identified the following significant results. An effort to relate water parameters of the lower Chesapeake Bay area to multispectral scanner images of LANDSAT 1 has shown that some spectral bands can be correlated to water parameters, and has demonstrated the feasibility of synoptic mapping of estuaries by satellite. Bands 5 and 6 were shown to be useful for monitoring total particles. Band 5 showed high correlation with suspended sediment concentration. Attenuation coefficients monitored continuously by ship along three baselines were cross correlated with radiance values on three days. Improved correlations resulted when tidal conditions were taken into consideration. A contouring program was developed to display sediment variation in the lower Chesapeake Bay from the MSS bands.

  20. Reliability and performance evaluation of systems containing embedded rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.

    1989-01-01

    A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.

  1. Classification of hepatocellular carcinoma stages from free-text clinical and radiology reports

    PubMed Central

    Yim, Wen-wai; Kwan, Sharon W; Johnson, Guy; Yetisgen, Meliha

    2017-01-01

    Cancer stage information is important for clinical research. However, it is not always explicitly noted in electronic medical records. In this paper, we present our work on automatic classification of hepatocellular carcinoma (HCC) stages from free-text clinical and radiology notes. To accomplish this, we defined 11 stage parameters used in the three HCC staging systems, American Joint Committee on Cancer (AJCC), Barcelona Clinic Liver Cancer (BCLC), and Cancer of the Liver Italian Program (CLIP). After aggregating stage parameters to the patient level, the final stage classifications were achieved using an expert-created decision logic. Each stage parameter relevant for staging was extracted using several classification methods, e.g. sentence classification and automatic information structuring, to identify and normalize text as cancer stage parameter values. Stage parameter extraction for the test set performed at 0.81 F1. Cancer stage prediction for the AJCC, BCLC, and CLIP classifications achieved 0.55, 0.50, and 0.43 F1, respectively.

  2. Adaptive Local Realignment of Protein Sequences.

    PubMed

    DeBlasio, Dan; Kececioglu, John

    2018-06-11

    While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully-chosen collection of parameter settings, and replaces the region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.

  3. Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics

    NASA Astrophysics Data System (ADS)

    Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu

    2016-01-01

    An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.

  4. Quantitative trait loci and candidate genes associated with starch pasting viscosity characteristics in cassava (Manihot esculenta Crantz).

    PubMed

    Thanyasiriwat, T; Sraphet, S; Whankaew, S; Boonseng, O; Bao, J; Lightfoot, D A; Tangphatsornruang, S; Triwitayakorn, K

    2014-01-01

    Starch pasting viscosity is an important quality trait in cassava (Manihot esculenta Crantz) cultivars. The aim here was to identify loci and candidate genes associated with the starch pasting viscosity. Quantitative trait loci (QTL) mapping for seven pasting viscosity parameters was carried out using 100 lines of an F1 mapping population from a cross between two cassava cultivars, Huay Bong 60 and Hanatee. Starch samples were obtained from roots of cassava grown in 2008 and 2009 at Rayong, and in 2009 at Lop Buri province, Thailand. The traits showed continuous distribution among the F1 progeny with transgressive variation. Fifteen QTL were identified from mean trait data, with Logarithm of Odds (LOD) values from 2.77 to 13.01 and phenotype variation explained (PVE) from 10.0 to 48.4%. In addition, 48 QTL were identified in separate environments. The LOD values ranged from 2.55 to 8.68 and explained 6.6-43.7% of phenotype variation. The loci were located on 19 linkage groups. The most important QTL for pasting temperature (PT) (qPT.1LG1) from mean trait values showed the largest effect, with the highest LOD value (13.01) and PVE (48.4%). The QTL co-localised with PT and pasting time (PTi) loci that were identified in separate environments. Candidate genes were identified within the QTL peak regions. However, the major genes of interest, encoding the family of glycosyl or glucosyl transferases and hydrolases, were located at the periphery of QTL peaks. The loci identified could be effectively applied in breeding programmes to improve cassava starch quality. Alleles of candidate genes should be further studied in order to better understand their effects on starch quality traits. © 2013 German Botanical Society and The Royal Botanical Society of the Netherlands.

  5. Hero's journey in bifurcation diagram

    NASA Astrophysics Data System (ADS)

    Monteiro, L. H. A.; Mustaro, P. N.

    2012-06-01

    The hero's journey is a narrative structure identified by several authors in comparative studies on folklore and mythology. This storytelling template presents the stages of inner metamorphosis undergone by the protagonist after being called to an adventure. In a simplified version, this journey is divided into three acts separated by two crucial moments. Here we propose a discrete-time dynamical system for representing the protagonist's evolution. The suffering along the journey is taken as the control parameter of this system. The bifurcation diagram exhibits stationary, periodic and chaotic behaviors. In this diagram, there are transitions from fixed point to chaos and from limit cycle to fixed point. We found that the values of the control parameter corresponding to these two transitions are in quantitative agreement with the two critical moments of the three-act hero's journey identified in 10 movies appearing in the list of the 200 worldwide highest-grossing films.
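
    The abstract does not give the map itself, so the sketch below uses the logistic map purely as a stand-in to show how a bifurcation diagram with stationary, periodic and chaotic regimes emerges when a control parameter is swept; it is not the authors' model of the hero's journey.

        import numpy as np
        import matplotlib.pyplot as plt

        r_values = np.linspace(2.5, 4.0, 800)     # control parameter sweep
        x = 0.5 * np.ones_like(r_values)

        plt.figure(figsize=(7, 4))
        for _ in range(200):                      # discard transients
            x = r_values * x * (1 - x)
        for _ in range(100):                      # plot the attractor
            x = r_values * x * (1 - x)
            plt.plot(r_values, x, ",k", alpha=0.3)
        plt.xlabel("control parameter r")
        plt.ylabel("attractor of x")
        plt.title("Bifurcation diagram of the logistic map (illustrative stand-in)")
        plt.show()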

  6. Identification of atmospheric boundary layer thickness using doppler radar datas and WRF - ARW model in Merauke

    NASA Astrophysics Data System (ADS)

    Putri, R. J. A.; Setyawan, T.

    2017-01-01

    On the synoptic scale, one of the important meteorological parameters is the atmospheric boundary layer. Aside from serving as a supporting parameter in weather and climate models, knowing the thickness of this layer of the atmosphere can help identify aerosols and the strength of the vertical mixing of pollutants within it. The vertical wind profile data from the C-band Doppler radar at Mopah-Merauke, which is operated by BMKG through the Mopah-Merauke Meteorological Station, can be used to identify the top of the Atmospheric Boundary Layer (ABL). The ABL top is marked by increasing wind shear above the mixing layer. Samples from January 2015, representing the wet season, and from July 2015, representing a dry month, show that ABL heights from the WRF model are higher in July (sunny weather) than in January (cloudy).

  7. Natural parameter values for generalized gene adjacency.

    PubMed

    Yang, Zhenyu; Sankoff, David

    2010-09-01

    Given the gene orders in two modern genomes, it may be difficult to decide if some genes are close enough in both genomes to infer some ancestral proximity or some functional relationship. Current methods all depend on arbitrary parameters. We explore a class of gene proximity criteria and find two kinds of natural values for their parameters. One kind has to do with the parameter value where the expected information contained in two genomes about each other is maximized. The other kind of natural value has to do with parameter values beyond which all genes are clustered. We analyze these using combinatorial and probabilistic arguments as well as simulations.

  8. Rotor design for maneuver performance

    NASA Technical Reports Server (NTRS)

    Berry, John D.; Schrage, Daniel

    1986-01-01

    A method of determining the sensitivity of helicopter maneuver performance to changes in basic rotor design parameters is developed. Maneuver performance is measured by the time required, based on a simplified rotor/helicopter performance model, to perform a series of specified maneuvers. Because of the inherent simplicity of the rotor performance model used, this method quickly identifies parameter values which result in minimum time. For the specific case studied, this method predicts that the minimum time required is obtained with a low disk loading and a relatively high rotor solidity. The method was developed as part of the winning design effort for the American Helicopter Society student design competition for 1984/1985.

  9. Exploring the hyperchargeless Higgs triplet model up to the Planck scale

    NASA Astrophysics Data System (ADS)

    Khan, Najimuddin

    2018-04-01

    We examine an extension of the SM Higgs sector by a Higgs triplet taking into consideration the discovery of a Higgs-like particle at the LHC with mass around 125 GeV. We evaluate the bounds on the scalar potential through the unitarity of the scattering matrix. Considering the cases with and without Z_2-symmetry of the extra triplet, we derive constraints on the parameter space. We identify the region of the parameter space that corresponds to the stability and metastability of the electroweak vacuum. We also show that at large field values the scalar potential of this model is suitable to explain inflation.

  10. Identifiability of altimetry-based rating curve parameters in function of river morphological parameters

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; André Garambois, Pierre; Calmant, Stéphane; Paiva, Rodrigo; Walter, Collischonn; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Bonnet, Marie-Paule; Seyler, Frédérique; Monnier, Jérôme

    2016-04-01

    Estimating river discharge for ungauged river reaches from satellite measurements is not straightforward, given the nonlinearity of flow behavior with respect to measurable and non-measurable hydraulic parameters. As a matter of fact, current satellite datasets do not give access to key parameters such as river bed topography and roughness. A unique set of almost one thousand altimetry-based rating curves was built by fitting ENVISAT and Jason-2 water stages to discharges obtained from the MGB-IPH rainfall-runoff model in the Amazon basin. These rated discharges were successfully validated against simulated discharges (Ens = 0.70) and in-situ discharges (Ens = 0.71), and are not mission-dependent. The rating curve is written Q = a(Z-Z0)^b * sqrt(S), with Z the water surface elevation and S its slope gained from satellite altimetry, a and b the power-law coefficient and exponent, and Z0 the river bed elevation such that Q(Z0) = 0. For several river reaches in the Amazon basin where ADCP measurements are available, the Z0 values are fairly well validated, with a relative error lower than 10%. The present contribution aims at relating the identifiability and the physical meaning of a, b and Z0 to various hydraulic and geomorphologic conditions. Synthetic river bathymetries sampling a wide range of rivers and inflow discharges are used to perform twin experiments. A shallow water model is run to generate synthetic satellite observations, and then rating curve parameters are determined for each river section thanks to an MCMC algorithm. Thanks to the twin experiments, it is shown that a rating curve formulation with water surface slope, i.e. closer to the Manning equation form, improves parameter identifiability. The compensation between parameters is limited, especially for reaches with little water surface variability. Rating curve parameters are analyzed for riffles and pools, for small to large rivers, and for different river slopes and cross section shapes. It is shown that the river bed elevation Z0 is systematically well identified, with relative errors on the order of a few percent. Eventually, these altimetry-based rating curves provide morphological parameters of river reaches that can be used as inputs to hydraulic models, and a priori information that could be useful for SWOT inversion algorithms.
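
    The rating-curve form quoted in the abstract is easy to evaluate once its parameters are known. The sketch below implements Q = a(Z - Z0)^b * sqrt(S); the numerical values are placeholders for illustration, not results of the study.

        import numpy as np

        def rated_discharge(z, slope, a, b, z0):
            """Altimetry-based rating curve Q = a (Z - Z0)^b * sqrt(S).

            z     : water surface elevation from altimetry (m)
            slope : water surface slope from altimetry (m/m)
            a, b  : power-law coefficient and exponent
            z0    : effective river bed elevation, defined by Q(z0) = 0
            """
            depth = np.maximum(z - z0, 0.0)          # no flow below the bed elevation
            return a * depth**b * np.sqrt(slope)

        # Hypothetical reach with placeholder parameters (not values from the paper)
        q = rated_discharge(z=12.4, slope=4e-5, a=2000.0, b=1.6, z0=8.1)
        print(f"rated discharge ~ {q:.1f} m^3/s")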

  11. T wave amplitude in lead aVR as a novel diagnostic marker for cardiac sarcoidosis.

    PubMed

    Tanaka, Yoshihiro; Konno, Tetsuo; Yoshida, Shohei; Tsuda, Toyonobu; Sakata, Kenji; Furusho, Hiroshi; Takamura, Masayuki; Yoshimura, Kenichi; Yamagishi, Masakazu; Hayashi, Kenshi

    2017-03-01

    It is vital to identify cardiac involvement (CI) in patients with sarcoidosis as the condition could initially lead to sudden cardiac death. Although the T wave amplitude in lead aVR (TWAaVR) is reportedly associated with adverse cardiac events in various cardiovascular diseases, only scarce data are available concerning the utility of lead aVR in identifying CI in patients with sarcoidosis. We retrospectively investigated the diagnostic values of TWAaVR in patients with sarcoidosis in comparison with conventional electrocardiography parameters such as bundle branch block (BBB). From January 2006 to December 2014, 93 consecutive patients with sarcoidosis were enrolled (mean age, 55.7 ± 15.7 years; male, 31 %; cardiac involvement, n = 26). TWAaVR showed the greatest sensitivity (39 %) and specificity (92 %) in distinguishing between sarcoidosis patients with and without CI, at a cutoff value of -0.08 mV. The diagnostic value of BBB for cardiac involvement was significantly improved when combined with TWAaVR (sensitivity: 61-94 %, specificity: 97-89 %, area under the curve: 0.79-0.92, p = 0.018). Multivariate logistic regression analysis indicated that TWAaVR and BBB were independent electrocardiography parameters associated with CI. In summary, we observed that sarcoidosis patients exhibiting a high TWAaVR were likely to have CI. Thus, the application of a combination of BBB with TWAaVR may be useful when screening for CI in sarcoidosis patients.

  12. Non-adaptive and adaptive hybrid approaches for enhancing water quality management

    NASA Astrophysics Data System (ADS)

    Kalwij, Ineke M.; Peralta, Richard C.

    2008-09-01

    Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, which combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal for optimization commencement can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters during run-time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates. The parameters are useful starting values because using trial-and-error to identify an ideal combination of control parameter values for a new optimization problem can be time consuming. For comparison, AGA, AGCT, and GC are applied to optimize pumping rates for assumed well locations of a complex large-scale contaminant transport and remediation optimization problem at Blaine Naval Ammunition Depot (NAD). Both hybrid approaches converged more closely to the optimal solution than the non-hybrid AGA. GC averaged 18.79% better convergence than AGCT, and 31.9% than AGA, within the same computation time (12.5 days). AGCT averaged 13.1% better convergence than AGA. The GC can significantly reduce the burden of employing computationally intensive hydrologic simulation models within a limited time period and for real-world optimization problems. Although demonstrated for a groundwater quality problem, it is also applicable to other arenas, such as managing salt water intrusion and surface water contaminant loading.

  13. Chaos control of Hastings-Powell model by combining chaotic motions.

    PubMed

    Danca, Marius-F; Chattopadhyay, Joydev

    2016-04-01

    In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings-Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average value of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite to "chaos"), then by switching the parameter value in the HP system within two values, which generate chaotic motions, the PS algorithm can approximate a stable cycle so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
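
    The mechanics of the Parameter Switching idea can be sketched independently of the HP model: integrate the system numerically while the control parameter cycles (or is drawn randomly) over a set of values, then compare the resulting attractor with the one obtained using the average parameter value. In the sketch below the toy dynamics, step size and switching period are placeholders, not the authors' system or settings.

        import numpy as np

        def integrate_with_switching(f, x0, p_values, dt=1e-3, steps=200000, switch_every=50):
            """Forward-Euler integration while the control parameter cycles through p_values."""
            x = np.array(x0, dtype=float)
            traj = np.empty((steps, x.size))
            for n in range(steps):
                p = p_values[(n // switch_every) % len(p_values)]
                x = x + dt * f(x, p)
                traj[n] = x
            return traj

        # Placeholder dynamics (not the Hastings-Powell food chain): a damped, driven system
        def f(x, p):
            return np.array([x[1], -p * x[0] - 0.1 * x[1] + np.sin(x[0])])

        p_set = [1.2, 1.8]                                    # two switched parameter values
        traj_switched = integrate_with_switching(f, [0.1, 0.0], p_set)
        traj_average = integrate_with_switching(f, [0.1, 0.0], [np.mean(p_set)])
        # The PS claim: the attractor of traj_switched approximates that of traj_average.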

  14. Chaos control of Hastings-Powell model by combining chaotic motions

    NASA Astrophysics Data System (ADS)

    Danca, Marius-F.; Chattopadhyay, Joydev

    2016-04-01

    In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings-Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average value of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite to "chaos"), then by switching the parameter value in the HP system within two values, which generate chaotic motions, the PS algorithm can approximate a stable cycle so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.

  15. The error and bias of supplementing a short, arid climate, rainfall record with regional vs. global frequency analysis

    NASA Astrophysics Data System (ADS)

    Endreny, Theodore A.; Pashiardis, Stelios

    2007-02-01

    Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; however, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. Analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to sea and elevation. Regional statistical algorithms found that the sites passed discordancy tests of coefficient of variation, skewness and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and then goodness of fit tests identified the best candidate distribution as the generalized extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate location, shape, and scale parameters. In the global based analysis, the distribution was a priori prescribed as GEV Type II, the shape parameter was a priori set to 0.15, and a time interval term was constructed to use one set of parameters for all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared the global method RMSE had a parabolic-shaped time interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
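
    Once a GEV distribution has been fitted to a station's annual maxima for a given time interval, return-period depths follow directly from its quantile function. The sketch below illustrates this step with scipy; note that scipy fits by maximum likelihood, whereas the study used regional L-moments, which are more robust for short records, and the sample data are hypothetical.

        import numpy as np
        from scipy.stats import genextreme

        # Annual maximum rainfall depths for one station/time interval (hypothetical values, mm)
        annual_maxima = np.array([18.3, 22.1, 15.7, 30.4, 25.2, 19.9, 27.8, 16.4, 21.0, 24.6])

        # Maximum-likelihood fit of the GEV shape, location and scale parameters
        shape, loc, scale = genextreme.fit(annual_maxima)

        # Rainfall depth with a 50-year return period (non-exceedance probability 0.98)
        q50 = genextreme.ppf(0.98, shape, loc=loc, scale=scale)
        print(f"50-year depth ~ {q50:.1f} mm")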

  16. Linking ecophysiological modelling with quantitative genetics to support marker-assisted crop design for improved yields of rice (Oryza sativa) under drought stress.

    PubMed

    Gu, Junfei; Yin, Xinyou; Zhang, Chengwei; Wang, Huaqi; Struik, Paul C

    2014-09-01

    Genetic markers can be used in combination with ecophysiological crop models to predict the performance of genotypes. Crop models can estimate the contribution of individual markers to crop performance in given environments. The objectives of this study were to explore the use of crop models to design markers and virtual ideotypes for improving yields of rice (Oryza sativa) under drought stress. Using the model GECROS, crop yield was dissected into seven easily measured parameters. Loci for these parameters were identified for a rice population of 94 introgression lines (ILs) derived from two parents differing in drought tolerance. Marker-based values of ILs for each of these parameters were estimated from additive allele effects of the loci, and were fed to the model in order to simulate yields of the ILs grown under well-watered and drought conditions and in order to design virtual ideotypes for those conditions. To account for genotypic yield differences, it was necessary to parameterize the model for differences in an additional trait 'total crop nitrogen uptake' (Nmax) among the ILs. Genetic variation in Nmax had the most significant effect on yield; five other parameters also significantly influenced yield, but seed weight and leaf photosynthesis did not. Using the marker-based parameter values, GECROS also simulated yield variation among 251 recombinant inbred lines of the same parents. The model-based dissection approach detected more markers than the analysis using only yield per se. Model-based sensitivity analysis ranked all markers for their importance in determining yield differences among the ILs. Virtual ideotypes based on markers identified by modelling had 10-36 % more yield than those based on markers for yield per se. This study outlines a genotype-to-phenotype approach that exploits the potential value of marker-based crop modelling in developing new plant types with high yields. The approach can provide more markers for selection programmes for specific environments whilst also allowing for prioritization. Crop modelling is thus a powerful tool for marker design for improved rice yields and for ideotyping under contrasting conditions. © The Author 2014. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. [Temporal and spatial heterogeneity analysis of optimal value of sensitive parameters in ecological process model: The BIOME-BGC model as an example.

    PubMed

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying the terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values of these parameters are taken has an important impact on the simulation results. In the past, the sensitivity and the optimization of model parameters were analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameters has received less attention. In this paper, the BIOME-BGC model was used as an example. In evergreen broad-leaved forest, deciduous broad-leaved forest and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. The objective function was constructed by using the simulated annealing algorithm combined with the flux data to obtain the monthly optimal values of the sensitive parameters at each site. Then we constructed the temporal heterogeneity judgment index, the spatial heterogeneity judgment index and the temporal and spatial heterogeneity judgment index to quantitatively analyze the temporal and spatial heterogeneity of the optimal values of the model sensitive parameters. The results showed that the sensitivity of the BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters of the BIOME-BGC model mostly presented temporal and spatial heterogeneity to different degrees, which varied with vegetation types. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity. In addition, the temporal heterogeneity of the optimal values of the model sensitive parameters showed a significant linear correlation with the spatial heterogeneity under the three vegetation types. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model could be classified in order to adopt different parameter strategies in practical application. These conclusions could help in deeply understanding the parameters and the optimal values of ecological process models, and provide a way or reference for obtaining reasonable parameter values in model applications.

  18. Gait analysis in children with haemophilia: first Italian experience at the Turin Haemophilia Centre.

    PubMed

    Forneris, E; Andreacchio, A; Pollio, B; Mannucci, C; Franchini, M; Mengoli, C; Pagliarino, M; Messina, M

    2016-05-01

    To investigate the functional status of haemophilia patients referred to an Italian paediatric haemophilia centre using gait analysis, verifying any differences between mild, moderate or severe haemophilia at a functional level. Forty-two patients (age 4-18) presenting to the Turin Paediatric Haemophilia Centre who could walk independently were included. Therapy included prophylaxis (n = 21), on-demand treatment (n = 17) or immune tolerance induction + inhibitor (n = 4). Patients performed a gait analysis test. Temporal, spatial and kinematic parameters were calculated for patient subgroups by disease severity and background treatment, and compared with normal values. Moderate (35.7%) or severe (64.3%) haemophilia patients showed obvious variations from normal across a variety of temporal and spatial gait analysis parameters, including step speed and length, double support, swing phase, load asymmetry and stance phase. Kinematic parameters were characterized by frequent foot external rotation with deficient plantar flexion during the stance phase, retropelvic tilt, impaired power generation distally and reduced ground reaction forces. Both Gait Deviation Index and Gait Profile Score values for severe haemophilia patients indicated abnormal gait parameters, which were worst in patients with a history of past or current use of inhibitors and those receiving on-demand therapy. Functional evaluation identified changes in gait pattern in patients with severe and moderate haemophilia, compared with normal values. Gait analysis may be a useful tool to facilitate early diagnosis of joint damage, prevent haemophilic arthropathy, design a personalized rehabilitative treatment and monitor functional status over time. © 2016 John Wiley & Sons Ltd.

  19. Option B+ for the prevention of mother-to-child transmission of HIV infection in developing countries: a review of published cost-effectiveness analyses.

    PubMed

    Karnon, Jonathan; Orji, Nneka

    2016-10-01

    To review the published literature on the cost effectiveness of Option B+ (lifelong antiretroviral therapy) for preventing mother-to-child transmission (PMTCT) of HIV during pregnancy and breastfeeding, to inform decision making in low- and middle-income countries. PubMed, Scopus, Google Scholar and Medline were searched to identify studies of the cost effectiveness of the World Health Organization (WHO) treatment guidelines for PMTCT. Study quality was appraised using the consolidated health economic evaluation reporting standards checklist. Eligible studies were reviewed in detail to assess the relevance and impact of alternative evaluation frameworks, assumptions and input parameter values. Five published cost-effectiveness analyses of Option B+ for the PMTCT of HIV were identified. The reported cost-effectiveness of Option B+ varies substantially, with the results of different studies implying that Option B+ is dominant (lower costs, greater benefits), cost-effective (additional benefits at acceptable additional costs) or not cost-effective (additional benefits at unacceptable additional costs). This variation is due to significant differences in model structures and input parameter values. Structural differences were observed around the estimation of programme effects on infants, HIV-infected mothers and their HIV-negative partners, over multiple pregnancies, as well as assumptions regarding routine access to antiretroviral therapies. Significant differences in key input parameters were observed in transmission rates, intervention costs and effects, and downstream cost savings. Across five model-based cost-effectiveness analyses of strategies for the PMTCT of HIV, the most comprehensive analysis reported that Option B+ is highly likely to be cost-effective. This evaluation may have been overly favourable towards Option B+ with respect to some input parameter values, but potentially important additional benefits were omitted. Decision makers might be best advised to review this analysis, with a view to requesting additional analyses of the model to inform local funding decisions around alternative strategies for the PMTCT of HIV. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. Development and validation of a habitat suitability model for ...

    EPA Pesticide Factsheets

    We developed a spatially-explicit, flexible 3-parameter habitat suitability model that can be used to identify and predict areas at higher risk for non-native dwarf eelgrass (Zostera japonica) invasion. The model uses simple environmental parameters (depth, nearshore slope, and salinity) to quantitatively describe habitat suitable for Z. japonica invasion based on ecology and physiology from the primary literature. Habitat suitability is defined with values ranging from zero to one, where one denotes areas most conducive to Z. japonica and zero denotes areas not likely to support Z. japonica growth. The model was applied to Yaquina Bay, Oregon, USA, an area that has well documented Z. japonica expansion over the last two decades. The highest suitability values for Z. japonica occurred in the mid to upper portions of the intertidal zone, with larger expanses occurring in the lower estuary. While the upper estuary did contain suitable habitat, most areas were not as large as in the lower estuary, due to inappropriate depth, a steeply sloping intertidal zone, and lower salinity. The lowest suitability values occurred below the lower intertidal zone, within the Yaquina River channel. The model was validated by comparison to a multi-year time series of Z. japonica maps, revealing a strong predictive capacity. Sensitivity analysis performed to evaluate the contribution of each parameter to the model prediction revealed that depth was the most important factor.
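
    A three-parameter suitability index of this kind can be sketched as a product of per-parameter suitability curves scaled between zero and one. In the fragment below the functional forms and thresholds are placeholders chosen only for illustration; the published model derives its curves from Z. japonica ecology and physiology.

        import numpy as np

        def ramp(x, lo, hi):
            """Piecewise-linear suitability: 0 at or below lo, 1 at or above hi."""
            return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

        def habitat_suitability(elevation_m, slope, salinity_psu):
            """Composite 0-1 suitability from three environmental parameters (placeholder curves)."""
            s_depth = ramp(elevation_m, -0.5, 0.0) * (1.0 - ramp(elevation_m, 1.5, 2.5))  # intertidal band
            s_slope = 1.0 - ramp(slope, 0.02, 0.10)          # steep nearshore slopes score low
            s_salinity = ramp(salinity_psu, 10.0, 20.0)      # low salinity limits suitability
            return s_depth * s_slope * s_salinity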

  1. Optimization of Empirical Force Fields by Parameter Space Mapping: A Single-Step Perturbation Approach.

    PubMed

    Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E

    2017-12-12

    A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter space mapping). The consideration of surfaces in parameter space as opposed to local values or gradients leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables for a range of target data from multiple molecules to be combined in a robust manner and for the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane as well as the density and heat of vaporization of the liquid at atmospheric pressure for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferrable force fields is discussed.
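
    The idea of mapping a difference surface over alternative parameter combinations can be sketched generically. In the fragment below the grid is over a hypothetical pair of Lennard-Jones parameters, and the simulate callable stands in for the free-energy and liquid-property calculations; both are assumptions for illustration, not the published workflow.

        import numpy as np

        def parameter_space_map(targets, simulate, eps_grid, sigma_grid):
            """Difference surface between simulated and target observables over a grid
            of Lennard-Jones (epsilon, sigma) combinations.

            targets  : dict of nonzero target values, e.g. {"dG_hyd_water": ..., "density": ...}
            simulate : callable(eps, sigma) -> dict of the same observables (stand-in, assumed)
            """
            surface = np.zeros((len(eps_grid), len(sigma_grid)))
            for i, eps in enumerate(eps_grid):
                for j, sig in enumerate(sigma_grid):
                    sim = simulate(eps, sig)
                    # Root-mean-square relative deviation combining all target data
                    surface[i, j] = np.sqrt(np.mean(
                        [((sim[k] - v) / v) ** 2 for k, v in targets.items()]))
            return surface   # the optimal region is where the surface is lowest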

  2. Advanced quantitative methods in correlating sarcopenic muscle degeneration with lower extremity function biometrics and comorbidities

    PubMed Central

    Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo

    2018-01-01

    Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66–96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges’ Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99 (P<0.005). These results highlight the specificities of each muscle quality metric to LEF biometrics, SCHOL, and BMI, and particularly highlight the value of the connective tissue regime in this regard. PMID:29513690

  3. Advanced quantitative methods in correlating sarcopenic muscle degeneration with lower extremity function biometrics and comorbidities.

    PubMed

    Edmunds, Kyle; Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo

    2018-01-01

    Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66-96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges' Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99 (P<0.005). These results highlight the specificities of each muscle quality metric to LEF biometrics, SCHOL, and BMI, and particularly highlight the value of the connective tissue regime in this regard.

  4. Kinetics of motility-induced phase separation and swim pressure

    NASA Astrophysics Data System (ADS)

    Patch, Adam; Yllanes, David; Marchetti, M. Cristina

    Active Brownian particles (ABPs) represent a minimal model of active matter consisting of self-propelled spheres with purely repulsive interactions and rotational noise. We correlate the time evolution of the mean pressure towards its steady state value with the kinetics of motility-induced phase separation. For parameter values corresponding to phase separated steady states, we identify two dynamical regimes. The pressure grows monotonically in time during the initial regime of rapid cluster formation, overshooting its steady state value and then quickly relaxing to it, and remains constant during the subsequent slower period of cluster coalescence and coarsening. The overshoot is a distinctive feature of active systems. NSF-DMR-1305184, NSF-DGE-1068780, ACI-1341006, FIS2015-65078-C02, BIFI-ZCAM.

  5. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.

  6. Application of an automatic approach to calibrate the NEMURO nutrient-phytoplankton-zooplankton food web model in the Oyashio region

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi

    2010-10-01

    The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrients, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables. PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the usage of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.

  7. Metabolic Tumor Volume and Total Lesion Glycolysis in Oropharyngeal Cancer Treated With Definitive Radiotherapy: Which Threshold Is the Best Predictor of Local Control?

    PubMed

    Castelli, Joël; Depeursinge, Adrien; de Bari, Berardino; Devillers, Anne; de Crevoisier, Renaud; Bourhis, Jean; Prior, John O

    2017-06-01

    In the context of oropharyngeal cancer treated with definitive radiotherapy, the aim of this retrospective study was to identify the best threshold value to compute metabolic tumor volume (MTV) and/or total lesion glycolysis to predict local-regional control (LRC) and disease-free survival. One hundred twenty patients with a locally advanced oropharyngeal cancer from 2 different institutions treated with definitive radiotherapy underwent FDG PET/CT before treatment. Various MTVs and total lesion glycolysis were defined based on 2 segmentation methods: (i) an absolute threshold of SUV (0-20 g/mL) or (ii) a relative threshold for SUVmax (0%-100%). The parameters' predictive capabilities for disease-free survival and LRC were assessed using the Harrell C-index and Cox regression model. Relative thresholds between 40% and 68% and absolute threshold between 5.5 and 7 had a similar predictive value for LRC (C-index = 0.65 and 0.64, respectively). Metabolic tumor volume had a higher predictive value than gross tumor volume (C-index = 0.61) and SUVmax (C-index = 0.54). Metabolic tumor volume computed with a relative threshold of 51% of SUVmax was the best predictor of disease-free survival (hazard ratio, 1.23 [per 10 mL], P = 0.009) and LRC (hazard ratio: 1.22 [per 10 mL], P = 0.02). The use of different thresholds within a reasonable range (between 5.5 and 7 for an absolute threshold and between 40% and 68% for a relative threshold) seems to have no major impact on the predictive value of MTV. This parameter may be used to identify patient with a high risk of recurrence and who may benefit from treatment intensification.
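
    Once a segmentation threshold is fixed, MTV and TLG follow directly from the SUV volume. The sketch below shows both thresholding variants discussed in the abstract; the array layout and voxel size are assumptions, and the TLG = SUVmean x MTV convention is the standard definition rather than a quotation from this study.

        import numpy as np

        def mtv_tlg(suv, voxel_volume_ml, absolute=None, relative=None):
            """Metabolic tumor volume (mL) and total lesion glycolysis from an SUV array.

            suv             : 3-D array of SUV values covering the lesion
            voxel_volume_ml : volume of one voxel in mL
            absolute        : absolute SUV threshold (g/mL), e.g. 5.5-7 in the study
            relative        : fraction of SUVmax, e.g. 0.40-0.68 in the study
            """
            if absolute is not None:
                mask = suv >= absolute
            elif relative is not None:
                mask = suv >= relative * suv.max()
            else:
                raise ValueError("provide an absolute or a relative threshold")
            mtv = mask.sum() * voxel_volume_ml
            tlg = suv[mask].mean() * mtv if mask.any() else 0.0   # TLG = SUVmean x MTV
            return mtv, tlg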

  8. Bifurcation and Stability Analysis of the Equilibrium States in Thermodynamic Systems in a Small Vicinity of the Equilibrium Values of Parameters

    NASA Astrophysics Data System (ADS)

    Barsuk, Alexandr A.; Paladi, Florentin

    2018-04-01

    The dynamic behavior of a thermodynamic system, described by one order parameter and one control parameter, in a small neighborhood of ordinary and bifurcation equilibrium values of the system parameters is studied. Using the general methods of investigating the branching (bifurcations) of solutions for nonlinear equations, we performed an exhaustive analysis of the order parameter dependences on the control parameter in a small vicinity of the equilibrium values of parameters, including the stability analysis of the equilibrium states and the asymptotic behavior of the order parameter dependences on the control parameter (bifurcation diagrams). The peculiarities of the transition to an unstable state of the system are discussed, and estimates of the transition time to the unstable state in the neighborhood of ordinary and bifurcation equilibrium values of parameters are given. The influence of an external field on the dynamic behavior of the thermodynamic system is analyzed, and the peculiarities of the system's dynamic behavior near the ordinary and bifurcation equilibrium values of parameters in the presence of an external field are discussed. The dynamic process of magnetization of a ferromagnet is discussed using the general methods of bifurcation and stability analysis presented in the paper.

  9. Magnetic Resonance Imaging More Accurately Classifies Steatosis and Fibrosis in Patients With Nonalcoholic Fatty Liver Disease Than Transient Elastography.

    PubMed

    Imajo, Kento; Kessoku, Takaomi; Honda, Yasushi; Tomeno, Wataru; Ogawa, Yuji; Mawatari, Hironori; Fujita, Koji; Yoneda, Masato; Taguri, Masataka; Hyogo, Hideyuki; Sumida, Yoshio; Ono, Masafumi; Eguchi, Yuichiro; Inoue, Tomio; Yamanaka, Takeharu; Wada, Koichiro; Saito, Satoru; Nakajima, Atsushi

    2016-03-01

    Noninvasive methods have been evaluated for the assessment of liver fibrosis and steatosis in patients with nonalcoholic fatty liver disease (NAFLD). We compared the ability of transient elastography (TE) with the M-probe, and magnetic resonance elastography (MRE), to assess liver fibrosis. Findings from magnetic resonance imaging (MRI)-based proton density fat fraction (PDFF) measurements were compared with those from TE-based controlled attenuation parameter (CAP) measurements to assess steatosis. We performed a cross-sectional study of 142 patients with NAFLD (identified by liver biopsy; mean body mass index, 28.1 kg/m²) in Japan from July 2013 through April 2015. Our study also included 10 comparable subjects without NAFLD (controls). All study subjects were evaluated by TE (including CAP measurements) and by MRI using the MRE and PDFF techniques. TE identified patients with fibrosis stage ≥2 with an area under the receiver operating characteristic (AUROC) curve value of 0.82 (95% confidence interval [CI]: 0.74-0.89), whereas MRE identified these patients with an AUROC curve value of 0.91 (95% CI: 0.86-0.96; P = .001). TE-based CAP measurements identified patients with hepatic steatosis grade ≥2 with an AUROC curve value of 0.73 (95% CI: 0.64-0.81) and PDFF methods identified them with an AUROC curve value of 0.90 (95% CI: 0.82-0.97; P < .001). Measurement of serum keratin 18 fragments or alanine aminotransferase did not add value to TE or MRI for identifying nonalcoholic steatohepatitis. MRE and PDFF methods have higher diagnostic performance in noninvasive detection of liver fibrosis and steatosis in patients with NAFLD than TE and CAP methods. MRI-based noninvasive assessment of liver fibrosis and steatosis is a potential alternative to liver biopsy in clinical practice. UMIN Clinical Trials Registry No. UMIN000012757. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.

  10. Improving the quantity, quality and transparency of data used to derive radionuclide transfer parameters for animal products. 2. Cow milk.

    PubMed

    Howard, B J; Wells, C; Barnett, C L; Howard, D C

    2017-02-01

    Under the International Atomic Energy Agency (IAEA) MODARIA (Modelling and Data for Radiological Impact Assessments) Programme, there has been an initiative to improve the derivation, provenance and transparency of transfer parameter values for radionuclides from feed to animal products that are for human consumption. The revised MODARIA 2016 cow milk dataset is described in this paper. As previously reported for the MODARIA goat milk dataset, quality control has led to the discounting of some references used in IAEA's Technical Report Series (TRS) report 472 (IAEA, 2010). The number of Concentration Ratio (CR) values has been considerably increased by (i) the inclusion of more literature from agricultural studies, which particularly enhanced the stable isotope data for both CR and Fm, and (ii) by estimating dry matter intake from assumed liveweight. In TRS 472, the data for cow milk were 714 transfer coefficient (Fm) values and 254 CR values describing 31 elements and 26 elements respectively. In the MODARIA 2016 cow milk dataset, Fm and CR values are now reported for 43 elements based upon 825 data values for Fm and 824 for CR. The MODARIA 2016 cow milk dataset Fm values are within an order of magnitude of those reported in TRS 472. Slightly bigger changes are seen in the CR values, but the increase in size of the dataset creates greater confidence in them. Data gaps that still remain are identified for elements with isotopes relevant to radiation protection. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Use of an anaerobic sequencing batch reactor for parameter estimation in modelling of anaerobic digestion.

    PubMed

    Batstone, D J; Torrijos, M; Ruiz, C; Schmidt, J E

    2004-01-01

    The model structure in anaerobic digestion has been clarified following publication of the IWA Anaerobic Digestion Model No. 1 (ADM1). However, parameter values are not well known, and the uncertainty and variability in the parameter values given are almost unknown. Additionally, platforms for identification of parameters, namely continuous-flow laboratory digesters and batch tests, suffer from disadvantages such as long run times and difficulty in defining initial conditions, respectively. Anaerobic sequencing batch reactors (ASBRs) are sequenced into fill-react-settle-decant phases, and offer promising possibilities for estimation of parameters, as they are, by nature, dynamic in behaviour, and allow repeatable behaviour to establish initial conditions and evaluate parameters. In this study, we estimated parameters describing winery wastewater (most COD as ethanol) degradation using data from sequencing operation, and validated these parameters using unsequenced pulses of ethanol and acetate. The model used was the ADM1, with an extension for ethanol degradation. Parameter confidence spaces were found by non-linear, correlated analysis of the two main Monod parameters: maximum uptake rate (k_m) and half saturation concentration (K_S). These parameters could be estimated together using only the measured acetate concentration (20 points per cycle). From interpolating the single cycle acetate data to multiple cycles, we estimate that a practical "optimal" identifiability could be achieved after two cycles for the acetate parameters, and three cycles for the ethanol parameters. The parameters found performed well in the short term, and represented the pulses of acetate and ethanol (within 4 days of the winery-fed cycles) very well. The main discrepancy was poor prediction of pH dynamics, which could be due to an unidentified buffer with an overall influence the same as a weak base (possibly CaCO3). Based on this work, ASBR systems are effective for parameter estimation, especially for comparative wastewater characterisation. The main disadvantages are heavy computational requirements for multiple cycles, and difficulty in establishing the correct biomass concentration in the reactor, though the latter is also a disadvantage for continuous fixed film reactors and, especially, batch tests.
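
    The sketch below illustrates the core estimation step in a much-reduced form: fitting the two Monod parameters (k_m, K_S) to roughly 20 simulated acetate measurements per cycle with a least-squares optimizer, using a single-substrate uptake model rather than the full ADM1; all names and values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def acetate_uptake(t, y, km, ks, biomass=1.0):
    """Simplified Monod uptake of acetate S by a fixed biomass (not full ADM1)."""
    S = y[0]
    return [-km * biomass * S / (ks + S)]

def predict(params, t_obs, s0=2.0):
    sol = solve_ivp(acetate_uptake, (t_obs[0], t_obs[-1]), [s0],
                    t_eval=t_obs, args=tuple(params))
    return sol.y[0]

# Roughly 20 measured acetate points per cycle, as in the study design
t_obs = np.linspace(0.0, 1.0, 20)
observed = predict([8.0, 0.15], t_obs) + np.random.normal(0.0, 0.02, t_obs.size)

fit = least_squares(lambda p: predict(p, t_obs) - observed,
                    x0=[4.0, 0.5], bounds=([0.1, 0.01], [30.0, 5.0]))
print("k_m, K_S estimates:", fit.x)
```

    Repeating the fit on bootstrapped or perturbed data is one simple way to explore the correlated k_m/K_S confidence region the abstract refers to.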

  12. Bayesian calibration of the Community Land Model using surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
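
    A toy version of the surrogate-based Bayesian calibration workflow, assuming a one-parameter stand-in for CLM, a cubic-polynomial surrogate, and a random-walk Metropolis sampler in place of the samplers used in the study; everything here is illustrative.

```python
import numpy as np

# Pretend "expensive model" (stand-in for CLM): latent-heat-like output vs. one parameter
def expensive_model(theta):
    return 120.0 * np.tanh(theta) + 15.0 * theta ** 2

# 1. Fit a cheap polynomial surrogate from a handful of model runs
train_x = np.linspace(0.0, 2.0, 8)
train_y = np.array([expensive_model(x) for x in train_x])
surrogate = np.poly1d(np.polyfit(train_x, train_y, deg=3))

# 2. Synthetic "observation" with noise
theta_true, sigma = 1.3, 5.0
obs = expensive_model(theta_true) + np.random.normal(0.0, sigma)

def log_post(theta):
    if not 0.0 <= theta <= 2.0:                 # uniform prior bounds
        return -np.inf
    return -0.5 * ((obs - surrogate(theta)) / sigma) ** 2

# 3. Random-walk Metropolis on the surrogate (cheap to evaluate)
samples, theta = [], 0.5
for _ in range(20000):
    prop = theta + np.random.normal(0.0, 0.1)
    if np.log(np.random.rand()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
print("posterior mean:", np.mean(samples[5000:]), "true:", theta_true)
```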

  13. Multiple-Shrinkage Multinomial Probit Models with Applications to Simulating Geographies in Public Use Data.

    PubMed

    Burgette, Lane F; Reiter, Jerome P

    2013-06-01

    Multinomial outcomes with many levels can be challenging to model. Information typically accrues slowly with increasing sample size, yet the parameter space expands rapidly with additional covariates. Shrinking all regression parameters towards zero, as often done in models of continuous or binary response variables, is unsatisfactory, since setting parameters equal to zero in multinomial models does not necessarily imply "no effect." We propose an approach to modeling multinomial outcomes with many levels based on a Bayesian multinomial probit (MNP) model and a multiple shrinkage prior distribution for the regression parameters. The prior distribution encourages the MNP regression parameters to shrink toward a number of learned locations, thereby substantially reducing the dimension of the parameter space. Using simulated data, we compare the predictive performance of this model against two other recently-proposed methods for big multinomial models. The results suggest that the fully Bayesian, multiple shrinkage approach can outperform these other methods. We apply the multiple shrinkage MNP to simulating replacement values for areal identifiers, e.g., census tract indicators, in order to protect data confidentiality in public use datasets.

  14. Automated evaluation of liver fibrosis in thioacetamide, carbon tetrachloride, and bile duct ligation rodent models using second-harmonic generation/two-photon excited fluorescence microscopy.

    PubMed

    Liu, Feng; Chen, Long; Rao, Hui-Ying; Teng, Xiao; Ren, Ya-Yun; Lu, Yan-Qiang; Zhang, Wei; Wu, Nan; Liu, Fang-Fang; Wei, Lai

    2017-01-01

    Animal models provide a useful platform for developing and testing new drugs to treat liver fibrosis. Accordingly, we developed a novel automated system to evaluate liver fibrosis in rodent models. This system uses second-harmonic generation (SHG)/two-photon excited fluorescence (TPEF) microscopy to assess a total of four mouse and rat models, using chemical treatment with either thioacetamide (TAA) or carbon tetrachloride (CCl4), and a surgical method, bile duct ligation (BDL). The results obtained by the new technique were compared with those obtained using Ishak fibrosis scores and two currently used quantitative methods for determining liver fibrosis: the collagen proportionate area (CPA) and measurement of hydroxyproline (HYP) content. We show that 11 shared morphological parameters faithfully recapitulate Ishak fibrosis scores in the models, with high area under the receiver operating characteristic (ROC) curve (AUC) performance. The AUC values of the 11 shared parameters were greater than that of the CPA (TAA: 0.758-0.922 vs 0.752-0.908; BDL: 0.874-0.989 vs 0.678-0.966) in the TAA mouse and BDL rat models and similar to that of the CPA in the TAA rat and CCl4 mouse models. Similarly, based on the trends in these parameters at different time points, 9, 10, 7, and 2 model-specific parameters were selected for the TAA rats, TAA mice, CCl4 mice, and BDL rats, respectively. These parameters identified differences among the time points in the four models, with high AUC accuracy, and the corresponding AUC values of these parameters were greater compared with those of the CPA in the TAA rat and mouse models (rats: 0.769-0.894 vs 0.64-0.799; mice: 0.87-0.93 vs 0.739-0.836) and similar to those of the CPA in the CCl4 mouse and BDL rat models. Similarly, the AUC values of the 11 shared parameters and model-specific parameters were greater than those of HYP in the TAA rat, TAA mouse, and CCl4 mouse models and were similar to those of HYP in the BDL rat models. The automated evaluation system, combined with 11 shared parameters and model-specific parameters, could specifically, accurately, and quantitatively stage liver fibrosis in animal models.

  15. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  16. Estimation of transversely isotropic material properties from magnetic resonance elastography using the optimised virtual fields method.

    PubMed

    Miller, Renee; Kolipaka, Arunark; Nash, Martyn P; Young, Alistair A

    2018-03-12

    Magnetic resonance elastography (MRE) has been used to estimate isotropic myocardial stiffness. However, anisotropic stiffness estimates may give insight into structural changes that occur in the myocardium as a result of pathologies such as diastolic heart failure. The virtual fields method (VFM) has been proposed for estimating material stiffness from image data. This study applied the optimised VFM to identify transversely isotropic material properties both from simulated harmonic displacements in a left ventricular (LV) model with a fibre field measured from histology and from isotropic phantom MRE data. Two material model formulations were implemented, estimating either 3 or 5 material properties. The 3-parameter formulation writes the transversely isotropic constitutive relation in a way that dissociates the bulk modulus from the other parameters. Accurate identification of transversely isotropic material properties in the LV model was shown to be dependent on the loading condition applied, the amount of Gaussian noise in the signal, and the frequency of excitation. Parameter sensitivity values showed that the shear moduli are less sensitive to noise than the other parameters. This preliminary investigation showed the feasibility and limitations of using the VFM to identify transversely isotropic material properties from MRE images of a phantom as well as simulated harmonic displacements in an LV geometry. Copyright © 2018 John Wiley & Sons, Ltd.

  17. Measuring the benefits of using market based approaches to provide water and sanitation in humanitarian contexts.

    PubMed

    Martin-Simpson, S; Parkinson, J; Katsou, E

    2018-06-15

    The use of cash transfers and market based programming (CT/MBP) to increase the efficiency and effectiveness of emergency responses is gaining prominence in the humanitarian sector. However, there is a lack of existing indicators and methodologies to monitor activities designed to strengthen water and sanitation (WaSH) markets. Gender and vulnerability markers to measure the impact of such activities on different stakeholders are also missing. This study identifies parameters to monitor, evaluate and determine the added value of utilising CT/MBP to achieve WaSH objectives in humanitarian response. The results of the work revealed that CT/MBP can be used to support household, community and market level interventions to effectively reduce transmission of faeco-oral diseases. Efficiency, effectiveness, sustainability, appropriateness and equity were identified as useful parameters which correlated to widely accepted frameworks against which to evaluate humanitarian action. The parameters were found to be directly applicable to the case of increasing demand and supply of point-of-use water treatment technology for a) disaster resilience activities, and b) post-crisis response. The need for peer review of the parameters and indicators and pilot measurement in humanitarian contexts was recognised. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Application of a single-objective, hybrid genetic algorithm approach to pharmacokinetic model building.

    PubMed

    Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R

    2012-08-01

    A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three compounds. The root mean squared error and absolute mean prediction error of the best single-objective hybrid genetic algorithm candidates were a median of 0.2 points higher (range of 38.9 point decrease to 27.3 point increase) and 0.02 points lower (range of 0.98 point decrease to 0.74 point increase), respectively, than that of the final stepwise models. In addition, the best single-objective, hybrid genetic algorithm candidate models had successful convergence and covariance steps for each compound, used the same compartment structure as the manual stepwise approach for 6 of 7 (86 %) compounds, and identified 54 % (7 of 13) of covariates included by the manual stepwise approach and 16 covariate relationships not included by manual stepwise models. The model parameter values between the final manual stepwise and best single-objective, hybrid genetic algorithm models differed by a median of 26.7 % (q₁ = 4.9 % and q₃ = 57.1 %). Finally, the single-objective, hybrid genetic algorithm approach was able to identify models capable of estimating absorption rate parameters for four compounds that the manual stepwise approach did not identify. The single-objective, hybrid genetic algorithm represents a general pharmacokinetic model building methodology whose ability to rapidly search the feasible solution space leads to nearly equivalent or superior model fits to pharmacokinetic data.

  19. A Priori Method of Using Photon Activation Analysis to Determine Unknown Trace Element Concentrations in NIST Standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, Jaromy; Sun Zaijing; Wells, Doug

    2009-03-10

    Photon activation analysis detected elements in two NIST standards that did not have reported concentration values. A method is currently being developed to infer these concentrations by using scaling parameters and the appropriate known quantities within the NIST standard itself. Scaling parameters include: threshold, peak and endpoint energies; photo-nuclear cross sections for specific isotopes; the Bremsstrahlung spectrum; target thickness; and photon flux. Photo-nuclear cross sections and energies for the unknown elements must also be known. With these quantities, the same integral was performed for both the known and unknown elements, resulting in an inference of the concentration of the unreported element based on the reported value. Since Rb and Mn were elements that were reported in the standards, and because they had well-identified peaks, they were used as the standards of inference to determine concentrations of the unreported elements As, I, Nb, Y, and Zr. This method was tested by choosing other known elements within the standards and inferring a value based on the stated procedure. The reported value of Mn in the first NIST standard was 403±15 ppm and the reported value of Ca in the second NIST standard was 87000 ppm (no reported uncertainty). The inferred concentrations were 370±23 ppm and 80200±8700 ppm, respectively.
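
    A hedged sketch of the scaling idea: the unreported concentration is inferred by scaling a reported one with the ratio of measured peak areas and the ratio of flux-weighted cross-section integrals. Real analyses also need decay, detector-efficiency and abundance corrections, which are omitted here; all numbers and spectral shapes below are invented for illustration and are not the NIST data.

```python
import numpy as np

def activation_integral(energies, cross_section, photon_flux):
    """Integral of cross-section x bremsstrahlung flux from threshold to endpoint."""
    return np.trapz(cross_section * photon_flux, energies)

def infer_concentration(conc_ref, peak_area_ref, peak_area_unknown,
                        integral_ref, integral_unknown):
    """Scale a reported concentration (e.g. Mn) to an unreported element (e.g. As).

    Assumes identical irradiation and counting geometry, so the per-atom
    activation rates differ only through the activation integrals.
    """
    return conc_ref * (peak_area_unknown / peak_area_ref) * (integral_ref / integral_unknown)

# Illustrative numbers and shapes only
energies = np.linspace(10.0, 30.0, 200)                      # MeV
flux = np.exp(-energies / 8.0)                               # toy bremsstrahlung shape
sigma_ref = 0.05 * np.exp(-((energies - 17.0) / 3.0) ** 2)   # giant-dipole-like peak
sigma_unk = 0.08 * np.exp(-((energies - 16.0) / 3.5) ** 2)

I_ref = activation_integral(energies, sigma_ref, flux)
I_unk = activation_integral(energies, sigma_unk, flux)
print(infer_concentration(403.0, 1.0e5, 2.4e4, I_ref, I_unk), "ppm (illustrative)")
```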

  20. Air plasma spray processing and electrochemical characterization of SOFC composite cathodes

    NASA Astrophysics Data System (ADS)

    White, B. D.; Kesler, O.; Rose, Lars

    Air plasma spraying has been used to produce porous composite cathodes containing (La0.8Sr0.2)0.98MnO3-y (LSM) and yttria-stabilized zirconia (YSZ) for use in solid oxide fuel cells (SOFCs). Preliminary investigations focused on determining the range of plasma conditions under which each of the individual materials could be successfully deposited. A range of conditions was thereby determined that was suitable for the deposition of a composite cathode from pre-mixed LSM and YSZ powders. A number of composite cathodes were produced using different combinations of parameter values within the identified range according to a Uniform Design experimental grid. Coatings were then characterized for composition and microstructure using EDX and SEM. As a result of these tests, combinations of input parameter values were identified that are best suited to the production of coatings with microstructures appropriate for use in SOFC composite cathodes. A selection of coatings representative of the types of observed microstructures was then subjected to electrochemical testing to evaluate the performance of these cathodes. From these tests, it was found that, in general, the coatings that appeared to have the most suitable microstructures also had the highest electrochemical performances, provided that the deposition efficiency of both phases was sufficiently high.

  1. Noise and Dynamical Pattern Selection in Solidification

    NASA Technical Reports Server (NTRS)

    Kurtze, Douglas A.

    1997-01-01

    The overall goal of this project was to understand in more detail how a pattern-forming system can adjust its spacing. "Pattern-forming systems," in this context, are nonequilibrium continua whose state is determined by an experimentally adjustable control parameter. Above some critical value of the control parameter, the system has available to it a range of linearly stable, spatially periodic steady states, each characterized by a spacing which can lie anywhere within some band of values. These systems include directional solidification, where the solidification front is planar when the ratio of growth velocity to thermal gradient is below its critical value, but takes on a cellular shape above critical. They also include systems without interfaces, such as Benard convection, where it is the fluid velocity field which changes from zero to something spatially periodic as the control parameter is increased through its critical value. The basic question to be addressed was that of how the system chooses one of its myriad possible spacings when the control parameter is above critical, and in particular the role of noise in the selection process. Previous work on explosive crystallization had suggested that one spacing in the range should be preferred, in the sense that weak noise should eventually drive the system to that spacing. That work had also suggested a heuristic argument for identifying the preferred spacing. The project had three main objectives: to understand in more detail how a pattern-forming system can adjust its spacing; to investigate how noise drives a system to its preferred spacing; and to extend the heuristic argument for a preferred spacing in explosive crystallization to other pattern-forming systems.

  2. Contrast-enhanced 3T MR Perfusion of Musculoskeletal Tumours: T1 Value Heterogeneity Assessment and Evaluation of the Influence of T1 Estimation Methods on Quantitative Parameters.

    PubMed

    Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Chen, Bailiang; De Verbizier, Jacques; Beaumont, Marine; Badr, Sammy; Cotten, Anne; Blum, Alain

    2017-12-01

    To evaluate intra-tumour and striated muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal based and gadolinium concentration based on fixed and variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1% respectively). Area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different and more heterogeneous than normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. • T1 value variation in musculoskeletal tumours is considerable. • T1 values in muscle and tumours are significantly different. • Patient-specific T1 estimation is needed for comparison of inter-patient perfusion parameters. • Technical variation is higher in permeability than semiquantitative perfusion parameters.

  3. Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, T.; Herrmann, J.; Hanss, M.

    2010-10-01

    For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out because it uses only feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, together with the reference output. An inversion of the system equations is not necessary. The advance presented in this paper is the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based only on the output data of the transformation method. For a frequency response function of a mechanical system, this criterion allows a restriction of the identification process to some special range of frequency where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system by the corresponding uncertain FE model in a conservative way.

  4. Chaos control of Hastings–Powell model by combining chaotic motions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danca, Marius-F., E-mail: danca@rist.ro; Chattopadhyay, Joydev, E-mail: joydev@isical.ac.in

    2016-04-15

    In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings–Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average value of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: “losing + losing = winning.” If “losing” is replaced with “chaos” and “winning” with “order” (as the opposite of “chaos”), then by switching the parameter value in the HP system within two values, which generate chaotic motions, the PS algorithm can approximate a stable cycle so that symbolically one can write “chaos + chaos = regular.” Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
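
    A minimal sketch of the Parameter Switching idea for the Hastings–Powell system: b1 is switched along a periodic schedule while the trajectory is integrated, and the result is compared with the trajectory obtained using the average b1. The parameter values are typical HP values from the literature and are used here for illustration only.

```python
import numpy as np

# Hastings-Powell three-species food chain; b1 is the switched control parameter.
def hp(u, b1, a1=5.0, a2=0.1, b2=2.0, d1=0.4, d2=0.01):
    x, y, z = u
    f1 = a1 * x / (1.0 + b1 * x)
    f2 = a2 * y / (1.0 + b2 * y)
    return np.array([x * (1.0 - x) - f1 * y,
                     f1 * y - f2 * z - d1 * y,
                     f2 * z - d2 * z])

def rk4_step(u, dt, b1):
    k1 = hp(u, b1)
    k2 = hp(u + 0.5 * dt * k1, b1)
    k3 = hp(u + 0.5 * dt * k2, b1)
    k4 = hp(u + dt * k3, b1)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def parameter_switching(b1_values, weights, u0, dt=0.05, n_steps=100000):
    """PS algorithm: switch b1 along a periodic schedule, one step at a time."""
    schedule = np.repeat(b1_values, weights)      # e.g. [p1, p1, p2] for weights (2, 1)
    u, traj = np.array(u0, float), []
    for i in range(n_steps):
        u = rk4_step(u, dt, schedule[i % len(schedule)])
        traj.append(u.copy())
    return np.array(traj)

u0 = [0.76, 0.16, 9.9]
values, weights = np.array([2.2, 3.2]), np.array([1, 1])
switched = parameter_switching(values, weights, u0)
averaged = parameter_switching(np.array([np.average(values, weights=weights)]),
                               np.array([1]), u0)
# After transients, the switched trajectory should settle onto an attractor that
# closely matches the one obtained with the averaged parameter value.
```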

  5. Immunohistological features related to functional impairment in lymphangioleiomyomatosis.

    PubMed

    Nascimento, Ellen Caroline Toledo do; Baldi, Bruno Guedes; Mariani, Alessandro Wasum; Annoni, Raquel; Kairalla, Ronaldo Adib; Pimenta, Suzana Pinheiro; da Silva, Luiz Fernando Ferraz; Carvalho, Carlos Roberto Ribeiro; Dolhnikoff, Marisa

    2018-05-08

    Lymphangioleiomyomatosis (LAM) is a low-grade neoplasm characterized by the pulmonary infiltration of smooth muscle-like cells (LAM cells) and cystic destruction. Patients usually present with airway obstruction in pulmonary function tests (PFTs). Previous studies have shown correlations among histological parameters, lung function abnormalities and prognosis in LAM. We investigated the lung tissue expression of proteins related to the mTOR pathway, angiogenesis and enzymatic activity and its correlation with functional parameters in LAM patients. We analyzed morphological and functional parameters of thirty-three patients. Two groups of disease severity were identified according to FEV1 values. Lung tissue from open biopsies or lung transplants was immunostained for SMA, HMB-45, mTOR, VEGF-D, MMP-9 and D2-40. Density of cysts, density of nodules and protein expression were measured by image analysis and correlated with PFT parameters. There was no difference in the expression of D2-40 between the more severe and the less severe groups. All other immunohistological parameters showed significantly higher values in the more severe group (p ≤ 0.002). The expression of VEGF-D, MMP-9 and mTOR in LAM cells was associated with the density of both cysts and nodules. The density of cysts and nodules as well as the expression of MMP-9 and VEGF-D were associated with the impairment of PFT parameters. Severe LAM represents an active phase of the disease with high expression of VEGF-D, mTOR, and MMP-9, as well as LAM cell infiltration. Our findings suggest that the tissue expression levels of VEGF-D and MMP-9 are important parameters associated with the loss of pulmonary function and could be considered as potential severity markers in open lung biopsies of LAM patients.

  6. Identifying Bearing Rotordynamic Coefficients using an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Miller, Brad A.; Howard, Samuel A.

    2008-01-01

    An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor-dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor-bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
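
    The sketch below shows the augmented-state idea behind this kind of estimator on a much simpler problem: an extended Kalman filter estimating the stiffness of a single-degree-of-freedom oscillator from noisy displacement measurements under imbalance-like forcing. The noise covariances and all other values are illustrative tuning choices, not those of the paper.

```python
import numpy as np

# Augmented-state EKF sketch: estimate stiffness k of a 1-DOF oscillator from
# noisy displacement data (a toy stand-in for the 8-coefficient bearing problem).
m, c, dt, k_true = 1.0, 0.4, 0.001, 50.0

def f(s, force):
    """Euler propagation of the augmented state s = [x, v, k]; k is (nearly) constant."""
    x, v, k = s
    a = (force - c * v - k * x) / m
    return np.array([x + dt * v, v + dt * a, k])

def F_jac(s):
    x, v, k = s
    return np.array([[1.0, dt, 0.0],
                     [-dt * k / m, 1.0 - dt * c / m, -dt * x / m],
                     [0.0, 0.0, 1.0]])

H = np.array([[1.0, 0.0, 0.0]])              # only displacement is measured
Q = np.diag([1e-10, 1e-10, 1e-6])            # process noise (filter tuning choice)
R = np.array([[1e-6]])                       # measurement noise variance (sigma = 1e-3)

rng = np.random.default_rng(0)
s_true = np.array([0.0, 0.0, k_true])
s_est = np.array([0.0, 0.0, 20.0])           # deliberately wrong initial stiffness guess
P = np.diag([1e-4, 1e-4, 400.0])

for i in range(30000):
    force = 0.5 * np.sin(2.0 * np.pi * 1.2 * i * dt)   # imbalance-like excitation
    s_true = f(s_true, force)
    z = s_true[0] + rng.normal(0.0, 1e-3)
    # Predict
    F = F_jac(s_est)
    s_est = f(s_est, force)
    P = F @ P @ F.T + Q
    # Update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    s_est = s_est + (K @ (z - H @ s_est))
    P = (np.eye(3) - K @ H) @ P

print("estimated stiffness:", round(s_est[2], 2), "true:", k_true)
```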

  7. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    NASA Astrophysics Data System (ADS)

    Mat Jan, Nur Amalina; Shabri, Ani

    2017-01-01

    TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. The TL-moments with different trimming values are used to estimate the parameter of the selected distributions namely: Three-parameter lognormal (LN3) and Pearson Type III (P3) distribution. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4 methods for LN3 and P3 distributions. The performance of TL-moments (t1, 0), t1 = 1, 2, 3, 4 was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of TL-moments methods on estimated probability distribution functions. From the cases in this study, the results show that TL-moments with four trimmed smallest values from the conceptual sample (TL-moments [4, 0]) of LN3 distribution was the most appropriate in most of the stations of the annual maximum streamflow series in Johor, Malaysia.
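
    A small sketch of the sample TL-moment estimator with trimming (t1, 0), following the usual order-statistic weighting (the Elamir and Seheult form); the streamflow-like numbers are invented for illustration and are not the Johor data.

```python
import numpy as np
from math import comb

def tl_moment(x, r, t1=0, t2=0):
    """Sample TL-moment of order r with trimming (t1, t2)."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    total = 0.0
    for i in range(1, n + 1):
        w = sum((-1) ** k * comb(r - 1, k) * comb(i - 1, r + t1 - 1 - k)
                * comb(n - i, t2 + k) for k in range(r))
        total += w * x[i - 1]
    return total / (r * comb(n, r + t1 + t2))

# Annual-maximum-like sample (illustrative values only)
amax = np.array([210., 340., 185., 420., 295., 260., 510., 330., 275., 390.,
                 240., 310., 450., 200., 365.])
for t in (0, 1, 4):          # (t, 0) trimming as in the study: trim t smallest values
    l1 = tl_moment(amax, 1, t1=t, t2=0)
    l2 = tl_moment(amax, 2, t1=t, t2=0)
    print(f"trim ({t},0): TL-mean = {l1:.1f}, TL-scale = {l2:.1f}")
```

    With t1 = t2 = 0 the same function reproduces the ordinary sample L-moments, which is a convenient sanity check.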

  8. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.

  9. Echocardiographic predictors of atrial fibrillation recurrence after catheter ablation: A literature review.

    PubMed

    Liżewska-Springer, Aleksandra; Dąbrowska-Kugacka, Alicja; Lewicka, Ewa; Drelich, Łukasz; Królak, Tomasz; Raczak, Grzegorz

    2018-06-20

    Catheter ablation (CA) is a well-known treatment option for patients with symptomatic drug-resistant atrial fibrillation (AF). Multiple factors have been identified to determine AF recurrence after CA; however, their predictive value is rather small. Identification of novel predictors of CA outcome is therefore of primary importance to reduce health costs and improve long-term results of this intervention. The recurrence of AF following CA is related to the severity of left ventricular (LV) dysfunction, the extent of atrial dilatation and fibrosis. The aim of this paper was to present and discuss the latest studies on the utility of echocardiographic parameters in terms of CA effectiveness in patients with paroxysmal and persistent AF. The PubMed, Google Scholar and EBSCO databases were searched for studies reporting echocardiographic preprocedural predictors of AF recurrence after CA. LV systolic and diastolic function, as well as atrial size, strain and dyssynchrony, were taken into consideration. Twenty-one full-text articles were analyzed, including three meta-analyses. Several echocardiographic parameters have been reported to determine a risk of AF recurrence after CA. There are conventional methods that measure left atrial (LA) size and volume, LV ejection fraction, and parameters assessing LV diastolic dysfunction, and methods using more innovative technologies based on speckle tracking echocardiography (STE) to determine LA synchrony and strain. Each of these parameters has its own predictive value. Regarding CA effectiveness, every patient has to be evaluated individually to estimate the risk of AF recurrence, optimally using a combination of several echocardiographic parameters.

  10. Impact of initial surface parameters on the final quality of laser micro-polished surfaces

    NASA Astrophysics Data System (ADS)

    Chow, Michael; Bordatchev, Evgueni V.; Knopf, George K.

    2012-03-01

    Laser micro-polishing (LμP) is a new laser-based microfabrication technology for improving surface quality during a finishing operation and for producing parts and surfaces with near-optical surface quality. The LμP process uses low power laser energy to melt a thin layer of material on the previously machined surface. The polishing effect is achieved as the molten material in the laser-material interaction zone flows from the elevated regions to the local minimum due to surface tension. This flow of molten material then forms a thin ultra-smooth layer on the top surface. LμP is a complex thermodynamic process in which the melting, flow and redistribution of molten material are significantly influenced by a variety of process parameters related to the laser, the travel motions and the material. The goal of this study is to analyze the impact of initial surface parameters on the final surface quality. Ball-end micromilling was used to prepare the initial surface of samples from H13 tool steel that were polished using a Q-switched Nd:YAG laser. The height and width of micromilled scallops (waviness) were identified as the dominant parameters affecting the quality of the LμPed surface. By adjusting process parameters, the Ra value of a surface, having a waviness period of 33 μm and a peak-to-valley value of 5.9 μm, was reduced from 499 nm to 301 nm, improving the final surface quality by 39.7%.

  11. Chemical and toxicological evaluation of underground coal gasification (UCG) effluents. The coal rank effect.

    PubMed

    Kapusta, Krzysztof; Stańczyk, Krzysztof

    2015-02-01

    The effect of coal rank on the composition and toxicity of water effluents resulting from two underground coal gasification experiments with distinct coal samples (lignite and hard coal) was investigated. A broad range of organic and inorganic parameters was determined in the sampled condensates. The physicochemical tests were supplemented by toxicity bioassays based on the luminescent bacteria Vibrio fischeri as the test organism. The principal component analysis and Pearson correlation analysis were adopted to assist in the interpretation of the raw experimental data, and the multiple regression statistical method was subsequently employed to enable predictions of the toxicity based on the values of the selected parameters. Significant differences in the qualitative and quantitative description of the contamination profiles were identified for both types of coal under study. Independent of the coal rank, the most characteristic organic components of the studied condensates were phenols, naphthalene and benzene. In the inorganic array, ammonia, sulphates and selected heavy metals and metalloids were identified as the dominant constituents. Except for benzene with its alkyl homologues (BTEX), selected polycyclic aromatic hydrocarbons (PAHs), zinc and selenium, the values of the remaining parameters were considerably greater for the hard coal condensates. The studies revealed that all of the tested UCG condensates were extremely toxic to V. fischeri; however, the average toxicity level for the hard coal condensates was approximately 56% higher than that obtained for the lignite. The statistical analysis provided results supporting that the toxicity of the condensates was most positively correlated with the concentrations of free ammonia, phenols and certain heavy metals. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU

    NASA Astrophysics Data System (ADS)

    Lara, A.; Niembro, T.

    2017-12-01

    We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar parameters: velocity, density, magnetic field and temperature, at any heliospheric distance. The fluctuations compare the standard deviation of a three-hour moving average against the running average of the parameter over a month (considered the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need of an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary in terms of the solar cycle. We compared the seesaw norms of each of the basic parameters with the arrivals of coronal mass ejections, co-rotating interaction regions and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We also present general comparisons of the norms during the two maxima and the minimum solar cycle periods, and the differences of the norms due to large-scale structures depending on each period.
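
    A hedged reading of the construction described above, assuming pandas time-based rolling windows: a 3-hour-scale variability is compared against monthly and yearly running means to form a local/global pair, and the norm of that vector is the seesaw norm. The exact definition in the paper may differ; all data below are synthetic.

```python
import numpy as np
import pandas as pd

def seesaw_components(series):
    """One hedged reading of the local/global fluctuation pair for a solar-wind
    parameter: short-term variability relative to the monthly (local) and
    yearly (global) running averages of the same series."""
    short_term = series.rolling("3H").mean()
    variability = short_term.rolling("3H").std()
    local = variability / series.rolling("30D").mean()
    global_ = variability / series.rolling("365D").mean()
    return local, global_

# Synthetic 1-minute cadence series standing in for a WIND parameter (e.g. speed)
idx = pd.date_range("2010-01-01", periods=90 * 24 * 60, freq="min")
speed = pd.Series(420.0 + np.cumsum(np.random.normal(0.0, 0.05, len(idx))), index=idx)

local, global_ = seesaw_components(speed)
seesaw_norm = np.sqrt(local ** 2 + global_ ** 2)   # large norms flag large-scale structures
print(seesaw_norm.dropna().describe())
```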

  13. Adaptive Scaling of Cluster Boundaries for Large-Scale Social Media Data Clustering.

    PubMed

    Meng, Lei; Tan, Ah-Hwee; Wunsch, Donald C

    2016-12-01

    The large scale and complex nature of social media data raises the need to scale clustering techniques to big data and make them capable of automatically identifying data clusters with few empirical settings. In this paper, we present our investigation and three algorithms based on the fuzzy adaptive resonance theory (Fuzzy ART) that have linear computational complexity, use a single parameter, i.e., the vigilance parameter, to identify data clusters, and are robust to modest parameter settings. The contribution of this paper lies in two aspects. First, we theoretically demonstrate how complement coding, commonly known as a normalization method, changes the clustering mechanism of Fuzzy ART, and discover the vigilance region (VR) that essentially determines how a cluster in the Fuzzy ART system recognizes similar patterns in the feature space. The VR gives an intrinsic interpretation of the clustering mechanism and limitations of Fuzzy ART. Second, we introduce the idea of allowing different clusters in the Fuzzy ART system to have different vigilance levels in order to meet the diverse nature of the pattern distribution of social media data. To this end, we propose three vigilance adaptation methods, namely, the activation maximization (AM) rule, the confliction minimization (CM) rule, and the hybrid integration (HI) rule. With an initial vigilance value, the resulting clustering algorithms, namely, the AM-ART, CM-ART, and HI-ART, can automatically adapt the vigilance values of all clusters during the learning epochs in order to produce better cluster boundaries. Experiments on four social media data sets show that AM-ART, CM-ART, and HI-ART are more robust than Fuzzy ART to the initial vigilance value, and they usually achieve better or comparable performance and much faster speed than state-of-the-art clustering algorithms that also do not require a predefined number of clusters.
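
    A minimal Fuzzy ART sketch showing complement coding, the choice function, and the vigilance (match) test with a per-cluster vigilance value; the AM/CM/HI adaptation rules of the paper are not implemented here, and the class name and parameter values are illustrative.

```python
import numpy as np

def complement_code(x):
    """Fuzzy ART complement coding: [x, 1-x] keeps the L1 norm constant."""
    x = np.asarray(x, float)
    return np.concatenate([x, 1.0 - x])

class FuzzyART:
    """Minimal Fuzzy ART with a per-cluster vigilance value (fixed here; the
    AM/CM/HI rules of the paper would adapt self.rho[j] during learning)."""
    def __init__(self, alpha=0.001, beta=1.0, rho=0.6):
        self.alpha, self.beta, self.rho0 = alpha, beta, rho
        self.W, self.rho = [], []

    def learn(self, x):
        I = complement_code(x)
        # Order categories by the choice function T_j = |I ^ w_j| / (alpha + |w_j|)
        order = sorted(range(len(self.W)), key=lambda j: -(
            np.minimum(I, self.W[j]).sum() / (self.alpha + self.W[j].sum())))
        for j in order:
            match = np.minimum(I, self.W[j]).sum() / I.sum()   # vigilance test
            if match >= self.rho[j]:
                self.W[j] = self.beta * np.minimum(I, self.W[j]) + (1 - self.beta) * self.W[j]
                return j
        self.W.append(I.copy()); self.rho.append(self.rho0)    # create a new cluster
        return len(self.W) - 1

art = FuzzyART(rho=0.75)
data = np.random.rand(200, 2)
labels = [art.learn(x) for x in data]
print("clusters found:", len(art.W))
```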

  14. Potential application of item-response theory to interpretation of medical codes in electronic patient records

    PubMed Central

    2011-01-01

    Background Electronic patient records are generally coded using extensive sets of codes but the significance of the utilisation of individual codes may be unclear. Item response theory (IRT) models are used to characterise the psychometric properties of items included in tests and questionnaires. This study asked whether the properties of medical codes in electronic patient records may be characterised through the application of item response theory models. Methods Data were provided by a cohort of 47,845 participants from 414 family practices in the UK General Practice Research Database (GPRD) with a first stroke between 1997 and 2006. Each eligible stroke code, out of a set of 202 OXMIS and Read codes, was coded as either recorded or not recorded for each participant. A two-parameter IRT model was fitted using marginal maximum likelihood estimation. Estimated parameters from the model were considered to characterise each code with respect to the latent trait of stroke diagnosis. The location parameter is referred to as a calibration parameter, while the slope parameter is referred to as a discrimination parameter. Results There were 79,874 stroke code occurrences available for analysis. Utilisation of codes varied between family practices with intraclass correlation coefficients of up to 0.25 for the most frequently used codes. IRT analyses were restricted to 110 Read codes. Calibration and discrimination parameters were estimated for 77 (70%) codes that were endorsed for 1,942 stroke patients. Parameters were not estimated for the remaining more frequently used codes. Discrimination parameter values ranged from 0.67 to 2.78, while calibration parameter values ranged from 4.47 to 11.58. The two-parameter model gave a better fit to the data than either the one- or three-parameter models. However, high chi-square values for about a fifth of the stroke codes were suggestive of poor item fit. Conclusion The application of item response theory models to coded electronic patient records might potentially contribute to identifying medical codes that offer poor discrimination or low calibration. This might indicate the need for improved coding sets or a requirement for improved clinical coding practice. However, in this study estimates were only obtained for a small proportion of participants and there was some evidence of poor model fit. There was also evidence of variation in the utilisation of codes between family practices, raising the possibility that, in practice, properties of codes may vary for different coders. PMID:22176509
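
    A compact sketch of a two-parameter (2PL) IRT fit by marginal maximum likelihood, integrating the latent trait over Gauss-Hermite quadrature nodes; the simulated "patients x codes" matrix and the optimizer choice are illustrative, not the GPRD analysis.

```python
import numpy as np
from scipy.optimize import minimize
from numpy.polynomial.hermite_e import hermegauss

def irf(theta, a, b):
    """2PL item response function: P(code recorded | latent trait theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_marginal_loglik(params, Y, nodes, weights):
    n_items = Y.shape[1]
    a = np.exp(params[:n_items])            # keep discriminations positive
    b = params[n_items:]
    P = irf(nodes[:, None], a, b)           # (n_quad, n_items)
    # Likelihood of each response pattern at each quadrature node
    like = np.prod(np.where(Y[:, None, :] == 1, P, 1.0 - P), axis=2)
    return -np.sum(np.log(like @ weights))

# Simulate "patients x stroke codes" responses from known item parameters
rng = np.random.default_rng(1)
n_persons, a_true, b_true = 2000, np.array([0.8, 1.5, 2.5]), np.array([-1.0, 0.0, 1.5])
theta = rng.standard_normal(n_persons)
Y = (rng.random((n_persons, 3)) < irf(theta[:, None], a_true, b_true)).astype(int)

nodes, weights = hermegauss(31)             # probabilists' Gauss-Hermite quadrature
weights = weights / weights.sum()           # normalised weights for a N(0,1) trait
x0 = np.zeros(6)
fit = minimize(neg_marginal_loglik, x0, args=(Y, nodes, weights),
               method="L-BFGS-B", bounds=[(-3.0, 3.0)] * 6)
print("discrimination:", np.exp(fit.x[:3]), "calibration:", fit.x[3:])
```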

  15. Pulse Doppler ultrasound as a tool for the diagnosis of chronic testicular dysfunction in stallions

    PubMed Central

    Ortiz-Rodriguez, Jose M.; Anel-Lopez, Luis; Martín-Muñoz, Patricia; Álvarez, Mercedes; Gaitskell-Phillips, Gemma; Anel, Luis; Rodríguez-Medina, Pedro; Peña, Fernando J.

    2017-01-01

    Testicular function is particularly susceptible to vascular insult, resulting in a negative impact on sperm production and quality of the ejaculate. A prompt diagnosis of testicular dysfunction enables implementation of appropriate treatment, hence improving fertility forecasts for stallions. The present research aims to: (1) assess whether Doppler ultrasonography is a good tool to diagnose stallions with testicular dysfunction; (2) study the relationship between Doppler parameters of the testicular artery and those of sperm quality assessed by flow cytometry; and (3) establish cut-off values to differentiate fertile stallions from those with pathologies causing testicular dysfunction. A total of 10 stallions (n = 7 healthy stallions and n = 3 sub-fertile stallions) were used in this study. Two ejaculates per stallion were collected and preserved at 5°C in a commercial extender. The semen was evaluated at T0, T24 and T48h by flow cytometry. Integrity and viability of sperm (YoPro®-1/EthD-1), mitochondrial activity (MitoTracker® Deep Red FM) and the DNA fragmentation index (Sperm Chromatin Structure Assay) were assessed. Doppler parameters were measured at three different locations on the testicular artery (supratesticular artery (SA), capsular artery (CA) and intratesticular artery (IA)). The Doppler parameters calculated were: Resistive Index (RI), Pulsatility Index (PI), Peak Systolic Velocity (PSV), End Diastolic Velocity (EDV), Time Average Maximum Velocity (TAMV), Total Arterial Blood Flow (TABF) and TABF rate. The capsular artery was the most reliable location to carry out spectral Doppler assessment, since blood flow parameters of this artery were most closely correlated with sperm quality parameters. Significant differences in all the Doppler parameters studied were observed between fertile and subfertile stallions (p ≤ 0.05). Principal component analysis determined that fertile stallions are characterized by high EDV, TAMV, TABF and TABF rate values (high vascular perfusion). In contrast, subfertile stallions tend to present high values of PI and RI (high vascular resistance). The ROC curves revealed that the best Doppler parameters to predict sperm quality in stallions were the Doppler velocities (PSV, EDV and TAMV), the diameter of the capsular artery and the TABF parameters (tissue perfusion parameters). Cut-off values were established using Youden's index to distinguish fertile stallions from stallions with testicular dysfunction. Spectral Doppler ultrasound is a good predictive tool for sperm quality since correlations were determined among Doppler parameters and markers of sperm quality. Doppler ultrasonography could be a valuable diagnostic tool for use by clinical practitioners for the diagnosis of stallions with testicular dysfunction and could be a viable alternative to invasive procedures traditionally used for diagnosis of sub-fertility disorders. PMID:28558006

  16. HRV Analysis to Identify Stages of Home-based Telerehabilitation Exercise.

    PubMed

    Jeong, In Cheol; Finkelstein, Joseph

    2014-01-01

    Spectral analysis of heart rate variability (HRV) has been widely used to investigate the activity of the autonomic nervous system. Previous studies demonstrated the potential of analysing short-term sequences of heart rate data in the time domain for continuous monitoring of levels of physiological stress; however, the value of HRV parameters in the frequency domain for monitoring cycling exercise has not been established. The goal of this study was to assess whether HRV parameters in the frequency domain differ depending on the stage of cycling exercise. We compared major HRV parameters in high, low and very low frequency ranges during rest, the height of exercise, and recovery during cycling exercise. Our results indicated responsiveness of frequency-domain indices to different phases of a cycling exercise program and their potential in monitoring autonomic balance and stress levels as a part of a tailored home-based telerehabilitation program.

  17. NLSCIDNT user's guide: maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.

  18. AAA gunner model based on observer theory. [predicting a gunner's tracking response]

    NASA Technical Reports Server (NTRS)

    Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.

    1978-01-01

    The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller and a remnant element. An important feature of the model is that the structure is simple, hence a computer simulation requires only a short execution time. A parameter identification program based on the least squares curve fitting method and the Gauss Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
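
    A minimal discrete-time Luenberger observer sketch is given below to illustrate the predict-and-correct structure referred to above; the system matrices and observer gain are assumed placeholders, not the gunner-model values.

      # Sketch of a discrete-time Luenberger observer for a generic linear system
      # x[k+1] = A x[k] + B u[k], y[k] = C x[k]; matrices and gain L are illustrative.
      import numpy as np

      A = np.array([[1.0, 0.1], [0.0, 1.0]])       # simple double-integrator-like plant
      B = np.array([[0.0], [0.1]])
      C = np.array([[1.0, 0.0]])
      L = np.array([[0.5], [1.0]])                 # observer gain (assumed, not designed here)

      def observer_step(x_hat, u, y):
          """One observer update: predict with the model, correct with the output error."""
          y_hat = C @ x_hat
          return A @ x_hat + B @ u + L @ (y - y_hat)

      x, x_hat = np.array([[1.0], [0.0]]), np.zeros((2, 1))
      for k in range(50):
          u = np.array([[np.sin(0.1 * k)]])
          y = C @ x
          x_hat = observer_step(x_hat, u, y)
          x = A @ x + B @ u
      print(np.hstack([x, x_hat]))                 # estimated state has converged to the true state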

  19. Characterization of classical static noise via qubit as probe

    NASA Astrophysics Data System (ADS)

    Javed, Muhammad; Khan, Salman; Ullah, Sayed Arif

    2018-03-01

    The dynamics of quantum Fisher information (QFI) of a single qubit coupled to classical static noise is investigated. The analytical relation for QFI fixes the optimal initial state of the qubit that maximizes it. An approximate limit for the time of coupling that leads to physically useful results is identified. Moreover, using the approach of quantum estimation theory and the analytical relation for QFI, the qubit is used as a probe to precisely estimate the disordered parameter of the environment. Relation for optimal interaction time with the environment is obtained, and condition for the optimal measurement of the noise parameter of the environment is given. It is shown that all values, in the mentioned range, of the noise parameter are estimable with equal precision. A comparison of our results with the previous studies in different classical environments is made.
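
    For context, the standard textbook relations behind this kind of analysis are the quantum Fisher information and the quantum Cramér-Rao bound; the paper's specific expressions for the static-noise channel are not reproduced here.

      % Standard definitions (textbook relations, not the paper's specific results):
      % the quantum Fisher information and the quantum Cramer-Rao bound it enters.
      \[
        F_Q(\theta) \;=\; \operatorname{Tr}\!\left[\rho_\theta L_\theta^{2}\right],
        \qquad
        \partial_\theta \rho_\theta \;=\; \tfrac{1}{2}\left(L_\theta \rho_\theta + \rho_\theta L_\theta\right),
      \]
      \[
        \operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{M\,F_Q(\theta)},
      \]
      % where L_theta is the symmetric logarithmic derivative and M is the number of
      % independent repetitions of the measurement.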

  20. An inverse problem for a mathematical model of aquaponic agriculture

    NASA Astrophysics Data System (ADS)

    Bobak, Carly; Kunze, Herb

    2017-01-01

    Aquaponic agriculture is a sustainable ecosystem that relies on a symbiotic relationship between fish and macrophytes. While the practice has been growing in popularity, relatively few mathematical models exist that aim to study the system processes. In this paper, we present a system of ODEs which aims to mathematically model the population and concentration dynamics present in an aquaponic environment. Values of the parameters in the system are estimated from the literature so that simulated results can be presented to illustrate the nature of the solutions to the system. In addition, a brief sensitivity analysis is performed in order to identify redundant parameters and highlight those which may need more reliable estimates. Specifically, an inverse problem with manufactured data for fish and plants is presented to demonstrate the ability of the collage theorem to recover parameter estimates.
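
    A generic parameter-recovery sketch in the spirit of this abstract is shown below; it fits ODE parameters to manufactured data by ordinary least squares rather than the collage-theorem approach used in the paper, and the two-equation surrogate model is an assumption for illustration only.

      # Generic parameter recovery for an ODE system from manufactured data
      # (ordinary least squares, not the collage-theorem approach of the paper).
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      def rhs(t, y, r, k):
          """Toy fish/plant surrogate: logistic fish growth feeding plant uptake."""
          fish, plant = y
          return [r * fish * (1 - fish / k), 0.3 * fish - 0.1 * plant]

      t_obs = np.linspace(0, 20, 40)
      true = (0.4, 50.0)
      y_obs = solve_ivp(rhs, (0, 20), [5.0, 1.0], t_eval=t_obs, args=true).y
      y_obs = y_obs + 0.5 * np.random.default_rng(2).standard_normal(y_obs.shape)  # manufactured data

      def residuals(p):
          sim = solve_ivp(rhs, (0, 20), [5.0, 1.0], t_eval=t_obs, args=tuple(p)).y
          return (sim - y_obs).ravel()

      fit = least_squares(residuals, x0=[0.2, 30.0], bounds=([0.01, 1.0], [2.0, 200.0]))
      print(fit.x)  # should land near (0.4, 50.0)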

  1. Inverse modeling with RZWQM2 to predict water quality

    USGS Publications Warehouse

    Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.

    2011-01-01

    This chapter presents guidelines for autocalibration of the Root Zone Water Quality Model (RZWQM2) by inverse modeling using PEST parameter estimation software (Doherty, 2010). Two sites with diverse climate and management were considered for simulation of N losses by leaching and in drain flow: an almond [Prunus dulcis (Mill.) D.A. Webb] orchard in the San Joaquin Valley, California and the Walnut Creek watershed in central Iowa, which is predominantly in corn (Zea mays L.)–soybean [Glycine max (L.) Merr.] rotation. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals and sensitivities. We describe operation of PEST in both parameter estimation and predictive analysis modes. The goal of parameter estimation is to identify a unique set of parameters that minimize a weighted least squares objective function, and the goal of predictive analysis is to construct a nonlinear confidence interval for a prediction of interest by finding a set of parameters that maximizes or minimizes the prediction while maintaining the model in a calibrated state. We also describe PEST utilities (PAR2PAR, TSPROC) for maintaining ordered relations among model parameters (e.g., soil root growth factor) and for post-processing of RZWQM2 outputs representing different cropping practices at the Iowa site. Inverse modeling provided reasonable fits to observed water and N fluxes and directly benefitted the modeling through: (i) simultaneous adjustment of multiple parameters versus one-at-a-time adjustment in manual approaches; (ii) clear indication by convergence criteria of when calibration is complete; (iii) straightforward detection of nonunique and insensitive parameters, which can affect the stability of PEST and RZWQM2; and (iv) generation of confidence intervals for uncertainty analysis of parameters and model predictions. Composite scaled sensitivities, which reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation is less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03–0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and median nitrate concentration in a collocated monitoring well as part of model evaluation. 
Both the observed recharge (42.3 cm yr−1) and nitrate concentration (24.3 mg L−1) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.
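
    The sketch below is not PEST; it only illustrates, on an assumed Jacobian and weight vector, two ingredients described above: composite scaled sensitivities as a per-parameter information measure, and a truncated-SVD solution of the linearized weighted least-squares step.

      # Sketch (not PEST): composite scaled sensitivities plus one linearized,
      # weighted least-squares update solved with truncated SVD. The Jacobian J,
      # weights w, parameter values b and residuals r are assumed for illustration.
      import numpy as np

      rng = np.random.default_rng(3)
      J = rng.standard_normal((30, 5))          # d(simulated obs)/d(parameter)
      J[:, 4] = J[:, 3] * 0.999                 # nearly redundant parameter
      w = np.full(30, 1.0)                      # observation weights (1/sigma^2)
      b = np.array([1.0, 0.5, 2.0, 0.1, 0.1])   # current parameter values
      r = rng.standard_normal(30) * 0.1         # weighted residuals (obs - sim)

      # Composite scaled sensitivities (Hill & Tiedeman style): information per parameter.
      css = np.sqrt(np.mean((J * b[None, :] * np.sqrt(w)[:, None]) ** 2, axis=0))
      print("CSS:", css)

      # Truncated-SVD solution of sqrt(W) J db ~= sqrt(W) r, keeping k singular values.
      A = np.sqrt(w)[:, None] * J
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 4                                      # truncation level (drops the weakest direction)
      db = Vt[:k].T @ ((U[:, :k].T @ (np.sqrt(w) * r)) / s[:k])
      print("parameter update:", db)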

  2. Sensitivity Analysis of the Bone Fracture Risk Model

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane

    2017-01-01

    Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach where distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example, geometry and volumetric distributions of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying the parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model need enhancement to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published by Nelson et al., is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations including environmental factors, factors associated with the fall event, mass and anthropometric values of the astronaut, BMD characteristics, characteristics of the relationship between BMD and bone strength, and bone fracture characteristics. The uncertainty in these factors is captured through the use of parameter distributions, and the fracture predictions are probability distributions with a mean value and an associated uncertainty. To determine parameter sensitivity, a correlation coefficient is found between the sample set of each model parameter and the calculated fracture probabilities. Each parameter's contribution to the variance is found by squaring the correlation coefficients, dividing by the sum of the squared correlation coefficients, and multiplying by 100. Results: Sensitivity analyses of BFxRM simulations of preflight, 0 days post-flight and 365 days post-flight falls onto the hip revealed a subset of the twelve factors within the model which cause the most variation in the fracture predictions. These factors include the spring constant used in the hip biomechanical model, the midpoint FRI parameter within the equation used to convert FRI to fracture probability, and preflight BMD values. Future work: Plans are underway to update the BFxRM by incorporating bone strength information from finite element models (FEM) into the bone strength portion of the BFxRM. Also, FEM bone strength information along with fracture outcome data will be incorporated into the FRI-to-fracture-probability relationship.
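
    The variance-contribution calculation described above can be written directly; in the sketch below, synthetic Monte Carlo samples and a toy output relation stand in for the BFxRM inputs and fracture probabilities.

      # Variance contributions as described in the abstract: square each parameter's
      # correlation with the fracture-probability output, normalize, and express as %.
      # Synthetic Monte Carlo samples stand in for BFxRM inputs/outputs.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 5000
      params = {
          "spring_constant": rng.normal(1.0, 0.2, n),
          "midpoint_FRI":    rng.normal(1.5, 0.3, n),
          "preflight_BMD":   rng.normal(0.95, 0.1, n),
      }
      # Toy output model: fracture probability driven mostly by the first two inputs.
      p_fracture = (0.6 * params["spring_constant"] + 0.8 * params["midpoint_FRI"]
                    - 0.3 * params["preflight_BMD"] + rng.normal(0, 0.1, n))

      corr2 = {k: np.corrcoef(v, p_fracture)[0, 1] ** 2 for k, v in params.items()}
      total = sum(corr2.values())
      for k, c2 in corr2.items():
          print(f"{k}: {100 * c2 / total:.1f}% of explained variance")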

  3. Optimisation of process parameters on thin shell part using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.

    2017-09-01

    This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the inputs, and the warpage value is the output analysed in this study. The significant parameters used are melt temperature, mould temperature, packing pressure, and cooling time. A plastic part made of polypropylene (PP) was selected as the study part. Optimisation of the process parameters was performed in Design Expert software with the aim of minimising the warpage value. Response Surface Methodology (RSM) was applied together with Analysis of Variance (ANOVA) in order to investigate the interactions between the parameters that significantly affect the warpage value. The optimised warpage value was obtained from the RSM model, which had the minimum error value, and the study shows that the warpage value is improved by using RSM.

  4. The Value of Certainty (Invited)

    NASA Astrophysics Data System (ADS)

    Barkstrom, B. R.

    2009-12-01

    It is clear that Earth science data are valued, in part, for their ability to provide some certainty about the past state of the Earth and about its probable future states. We can sharpen this notion by using seven categories of value ● Warning Service, requiring latency of three hours or less, as well as uninterrupted service ● Information Service, requiring latency less than about two weeks, as well as uninterrupted service ● Process Information, requiring ability to distinguish between alternative processes ● Short-term Statistics, requiring ability to construct a reliable record of the statistics of a parameter for an interval of five years or less, e.g. crop insurance ● Mid-term Statistics, requiring ability to construct a reliable record of the statistics of a parameter for an interval of twenty-five years or less, e.g. power plant siting ● Long-term Statistics, requiring ability to construct a reliable record of the statistics of a parameter for an interval of a century or less, e.g. one hundred year flood planning ● Doomsday Statistics, requiring ability to construct a reliable statistical record that is useful for reducing the impact of `doomsday' scenarios While the first two of these categories place high value on having an uninterrupted flow of information, and the third places value on contributing to our understanding of physical processes, it is notable that the last four may be placed on a common footing by considering the ability of observations to reduce uncertainty. Quantitatively, we can often identify metrics for parameters of interest that are fairly simple. For example, ● Detection of change in the average value of a single parameter, such as global temperature ● Detection of a trend, whether linear or nonlinear, such as the trend in cloud forcing known as cloud feedback ● Detection of a change in extreme value statistics, such as flood frequency or drought severity For such quantities, we can quantify uncertainty in terms of the entropy, which is calculated by creating a set of discrete bins for the value and then using error estimates to assign probabilities, p_i, to each bin. The entropy, H, is simply H = Σ_i p_i log2(1/p_i). The value of a new set of observations is the information gain, I, which is I = H_prior − H_posterior. The probability distributions that appear in this calculation depend on rigorous evaluation of errors in the observations. While direct estimates of the monetary value of data that could be used in budget prioritizations may not capture the value of data to the scientific community, it appears that the information gain may be a useful start in providing a `common currency' for evaluating projects that serve very different communities. In addition, from the standpoint of governmental accounting, it appears reasonable to assume that much of the expense for scientific data becomes sunk costs shortly after operations begin and that the real, long-term value is created by the effort scientists expend in creating the software that interprets the data and in the effort expended in calibration and validation. These efforts are the ones that directly contribute to the information gain that provides the value of these data.
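
    The binned entropy and information gain defined above are straightforward to compute; the sketch below uses illustrative bin edges and Gaussian prior/posterior error distributions, which are assumptions rather than values from the abstract.

      # Binned entropy H = sum_i p_i * log2(1/p_i) and information gain
      # I = H_prior - H_posterior, as sketched in the abstract. Bins and error
      # distributions are illustrative only.
      import numpy as np

      def entropy_bits(samples, bins):
          counts, _ = np.histogram(samples, bins=bins)
          p = counts / counts.sum()
          p = p[p > 0]
          return float(np.sum(p * np.log2(1.0 / p)))

      rng = np.random.default_rng(5)
      bins = np.linspace(-3, 3, 31)                 # discrete bins for the parameter value
      prior = rng.normal(0.0, 1.0, 100_000)         # uncertainty before new observations
      posterior = rng.normal(0.2, 0.4, 100_000)     # tighter uncertainty afterwards

      H_prior, H_post = entropy_bits(prior, bins), entropy_bits(posterior, bins)
      print(f"information gain I = {H_prior - H_post:.2f} bits")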

  5. Computational Investigations in Rectangular Convergent and Divergent Ribbed Channels

    NASA Astrophysics Data System (ADS)

    Sivakumar, Karthikeyan; Kulasekharan, N.; Natarajan, E.

    2018-05-01

    Computational investigations were performed on rib-turbulated flow inside convergent and divergent rectangular channels with square ribs of different heights and at different Reynolds numbers (Re = 20,000, 40,000 and 60,000). The ribs were arranged in a staggered fashion between the upper and lower surfaces of the test section. The computations were carried out using the computational fluid dynamics software ANSYS Fluent 14.0. Suitable solver settings, such as the turbulence model and the boundary conditions, were identified from the literature, and the simulations were performed on a grid-independent solution. Computations were carried out for both convergent and divergent channels with 0 (smooth duct), 1.5, 3, 6, 9 and 12 mm rib heights, to identify the ribbed channel with optimal performance, assessed using a thermo-hydraulic performance parameter. The convergent and divergent rectangular channels show higher Nu values than the standard correlation values.

  6. Development of uncertainty-based work injury model using Bayesian structural equation modelling.

    PubMed

    Chatterjee, Snehamoy

    2014-01-01

    This paper proposed a Bayesian method-based structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were considered as fixed distribution functions with specific parameter values, whereas, in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied to sample from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant with the experts' opinion-based priors, whereas two coefficients are not statistically significant when the fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit for work injury, with a high coefficient of determination (0.91) and a lower mean squared error compared with traditional SEM.

  7. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
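
    As a generic illustration of discrepancy-principle selection (not the paper's damage-detection code), the sketch below sweeps the l1 regularization parameter, tracks the residual and solution norms, and flags the first value at which the residual variance reaches an assumed noise variance.

      # Generic illustration: sweep the l1 regularization parameter, record
      # residual/solution norms, and pick the value where the residual variance
      # first matches an assumed noise variance (discrepancy principle).
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(6)
      A = rng.standard_normal((100, 40))
      x_true = np.zeros(40); x_true[[3, 17]] = [1.0, -0.7]      # sparse "damage" vector
      sigma = 0.05
      y = A @ x_true + sigma * rng.standard_normal(100)

      chosen = None
      for lam in np.logspace(-4, 0, 40):
          x = Lasso(alpha=lam, max_iter=50_000).fit(A, y).coef_
          res_var = np.mean((y - A @ x) ** 2)
          if chosen is None and res_var >= sigma ** 2:           # discrepancy reached
              chosen = (lam, np.linalg.norm(x, 1), res_var)
      print("selected alpha, ||x||_1, residual variance:", chosen)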

  8. Body Mass Index Is Better than Other Anthropometric Indices for Identifying Dyslipidemia in Chinese Children with Obesity

    PubMed Central

    Jing, Jin; Ma, Jun; Chen, Yajun; Li, Xiuhong; Yang, Wenhan; Guo, Li; Jin, Yu

    2016-01-01

    Background Body mass index (BMI), waist circumference (WC), and waist-to-hip ratio (WHR) are used in screening and predicting obesity in adults. However, the best identifier of metabolic complications in children with obesity remains unclear. This study evaluated lipid profile distribution and investigated the best anthropometric parameter in association with lipid disorders in children with obesity. Methods A total of 2243 school children aged 7–17 years were enrolled in Guangzhou, China, in 2014. The anthropometric indices and lipid profiles were measured. Dyslipidemia was defined according to the US Guidelines for Cardiovascular Health and Risk Reduction in Children and Adolescents. The association between anthropometry (BMI, WC, and WHR) and lipid profile values was examined using chi-square analysis and discriminant function analysis. Information about demography, physical activity, and dietary intake was provided by the participant children and their parents. Results Children aged 10–14 and 15–17 years old generally had higher triglyceride values but lower median concentration of total cholesterol, high-density lipoprotein cholesterol, and low-density lipoprotein cholesterol compared with children aged 7–9 years old (all P < 0.001). These lipid parameters fluctuated in children aged 10–14 years old. The combination of age groups, BMI, WC and WHR achieved 65.1% accuracy in determining dyslipidemic disorders. BMI correctly identified 77% of the total dyslipidemic disorders in obese children, which was higher than that by WHR (70.8%) (P< 0.05). Conclusion The distribution of lipid profiles in Chinese children differed between younger and older age groups, and the tendency of these lipid levels remarkably fluctuated during 10 to 14 years old. BMI had better practical utility in identifying dyslipidemia among school-aged children with obesity compared with other anthropometric measures. PMID:26963377

  9. Substituting values for censored data from Texas, USA, reservoirs inflated and obscured trends in analyses commonly used for water quality target development.

    PubMed

    Grantz, Erin; Haggard, Brian; Scott, J Thad

    2018-06-12

    We calculated four median datasets (chlorophyll a, Chl a; total phosphorus, TP; and transparency) using multiple approaches to handling censored observations, including substituting fractions of the quantification limit (QL; dataset 1 = 1QL, dataset 2 = 0.5QL) and statistical methods for censored datasets (datasets 3-4), for approximately 100 Texas, USA reservoirs. Trend analyses of differences between dataset 1 and 3 medians indicated that the percent difference increased linearly above thresholds in percent censored data (%Cen). This relationship was extrapolated to estimate medians for site-parameter combinations with %Cen > 80%, which were combined with dataset 3 as dataset 4. Changepoint analysis of Chl a- and transparency-TP relationships indicated threshold differences up to 50% between datasets. Recursive analysis identified secondary thresholds in dataset 4. Threshold differences show that information introduced via substitution or missing due to limitations of statistical methods biased values, underestimated error, and inflated the strength of TP thresholds identified in datasets 1-3. Analysis of covariance identified differences in linear regression models relating transparency-TP between datasets 1, 2, and the more statistically robust datasets 3-4. Study findings identify high-risk scenarios for biased analytical outcomes when using substitution. These include high probability of median overestimation when %Cen > 50-60% for a single QL, or when %Cen is as low as 16% for multiple QLs. Changepoint analysis was uniquely vulnerable to substitution effects when using medians from sites with %Cen > 50%. Linear regression analysis was less sensitive to substitution and missing data effects, but differences in model parameters for transparency cannot be discounted and could be magnified by log-transformation of the variables.

  10. Body Mass Index Is Better than Other Anthropometric Indices for Identifying Dyslipidemia in Chinese Children with Obesity.

    PubMed

    Zhu, Yanna; Shao, Zixian; Jing, Jin; Ma, Jun; Chen, Yajun; Li, Xiuhong; Yang, Wenhan; Guo, Li; Jin, Yu

    2016-01-01

    Body mass index (BMI), waist circumference (WC), and waist-to-hip ratio (WHR) are used in screening and predicting obesity in adults. However, the best identifier of metabolic complications in children with obesity remains unclear. This study evaluated lipid profile distribution and investigated the best anthropometric parameter in association with lipid disorders in children with obesity. A total of 2243 school children aged 7-17 years were enrolled in Guangzhou, China, in 2014. The anthropometric indices and lipid profiles were measured. Dyslipidemia was defined according to the US Guidelines for Cardiovascular Health and Risk Reduction in Children and Adolescents. The association between anthropometry (BMI, WC, and WHR) and lipid profile values was examined using chi-square analysis and discriminant function analysis. Information about demography, physical activity, and dietary intake was provided by the participant children and their parents. Children aged 10-14 and 15-17 years old generally had higher triglyceride values but lower median concentration of total cholesterol, high-density lipoprotein cholesterol, and low-density lipoprotein cholesterol compared with children aged 7-9 years old (all P < 0.001). These lipid parameters fluctuated in children aged 10-14 years old. The combination of age groups, BMI, WC and WHR achieved 65.1% accuracy in determining dyslipidemic disorders. BMI correctly identified 77% of the total dyslipidemic disorders in obese children, which was higher than that by WHR (70.8%) (P< 0.05). The distribution of lipid profiles in Chinese children differed between younger and older age groups, and the tendency of these lipid levels remarkably fluctuated during 10 to 14 years old. BMI had better practical utility in identifying dyslipidemia among school-aged children with obesity compared with other anthropometric measures.

  11. Factors associated with fecal incontinence in women with lower urinary tract symptoms.

    PubMed

    Chang, Ting-Chen; Chang, Shiow-Ru; Hsiao, Sheng-Mou; Hsiao, Chin-Fen; Chen, Chi-Hau; Lin, Ho-Hsiung

    2013-01-01

    The aim of this study was to identify the factors associated with fecal incontinence in female patients with lower urinary tract symptoms. Data regarding clinical and urodynamic parameters and history of fecal incontinence of 1334 women with lower urinary tract symptoms who had previously undergone urodynamic evaluation were collected and subjected to univariate, multivariate, and receiver operating characteristic (ROC) curve analysis to identify significant associations between these parameters and fecal incontinence. Multivariate analysis identified age (odds ratio [OR]=1.03, 95% confidence interval [CI]=1.01-1.05, P=0.005), presence of diabetes (OR=2.10, 95%CI=1.22-3.61, P=0.007), presence of urodynamic stress incontinence (OR=1.90, 95%CI=1.24-2.91, P=0.003), pad weight (OR=1.01, 95%CI=1.00-1.01, P=0.04), and detrusor pressure at maximum flow (OR=1.02, 95%CI=1.01-1.03, P=0.003) as independent risk factors for fecal incontinence. ROC curve analysis identified age ≥ 55 years, detrusor pressure at maximum flow ≥ 35 cmH2O, and pad weight ≥ 15 g as having positive predictive values of 11.4%, 11.5%, and 12.4%, respectively, thus indicating that they are the most predictive values for concomitant fecal incontinence. Detrusor pressure at maximum flow and pad weight may be associated with fecal incontinence in female patients with lower urinary tract symptoms, but require confirmation as indicators by further study before their use as screening tools. © 2012 The Authors. Journal of Obstetrics and Gynaecology Research © 2012 Japan Society of Obstetrics and Gynecology.

  12. Brownian motion model with stochastic parameters for asset prices

    NASA Astrophysics Data System (ADS)

    Ching, Soo Huei; Hin, Pooi Ah

    2013-09-01

    The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Presently we consider a model in which the parameter x = (μ,σ) is such that its value x (t + Δt) at a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt as well as the present parameter value x(t) and m-1 other parameter values before time t via a conditional distribution. The Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameter with that of the model with stochastic parameter.
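
    A toy simulation in the spirit of this abstract is sketched below: the drift and volatility are redrawn at each step conditional on their previous values before the price is advanced. The AR(1)-style parameter updates are an illustrative assumption, not the conditional distribution estimated in the paper.

      # Toy simulation: asset-price Brownian motion in which (mu, sigma) are redrawn
      # each step conditional on their previous values. The AR(1)-style updates are
      # an illustrative assumption, not the paper's conditional distribution.
      import numpy as np

      rng = np.random.default_rng(7)
      dt, n = 1 / 252, 252
      s = np.empty(n + 1); s[0] = 100.0
      mu, sigma = 0.05, 0.2

      for k in range(n):
          # parameters evolve stochastically around assumed long-run values
          mu = 0.9 * mu + 0.1 * 0.05 + 0.01 * rng.standard_normal()
          sigma = abs(0.95 * sigma + 0.05 * 0.2 + 0.01 * rng.standard_normal())
          s[k + 1] = s[k] * np.exp((mu - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * rng.standard_normal())

      print(s[-1])   # one simulated year of daily prices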

  13. Food Waste Composting Study from Makanan Ringan Mas

    NASA Astrophysics Data System (ADS)

    Kadir, A. A.; Ismail, S. N. M.; Jamaludin, S. N.

    2016-07-01

    The poor management of municipal solid waste in Malaysia has worsened over the years, especially for food waste. Food waste represents almost 60% of the total municipal solid waste disposed of in landfills. Composting is a low-cost alternative method for disposing of food waste. This study was conducted to compost the food waste generated at Makanan Ringan Mas, a medium-scale industry in Parit Kuari Darat, because of the lack of knowledge of, and exposure to, food waste recycling practice. The aim of this study is to identify the physical and chemical parameters of composting food waste from Makanan Ringan Mas. The physical parameters tested were temperature and pH value, and the chemical parameters were nitrogen, phosphorus and potassium. Backyard composting was conducted with six reactors. Tapioca peel was used as the fermentation liquid, and soil and grated coconut were used as the fermentation bed. The overall results showed that the temperature of the reactors was within the range of 30° to 50°C. All the reactors containing processed food waste tended to produce pH values within the range of 5 to 6, which can be categorized as slightly acidic, whereas the reactors containing raw food waste tended to produce pH values within the range of 7 to 8, which can be categorized as neutral. The highest NPK was obtained from Reactor B, which processed only raw food waste: the average value of nitrogen is 48540 mg/L, phosphorus 410 mg/L and potassium 1550 mg/L. Comparison with a common chemical fertilizer shows that the NPK values from the compost are much lower, whereas comparison with an organic fertilizer shows only a slight difference in NPK.

  14. Shapes, rotation, and pole solutions of the selected Hilda and Trojan asteroids

    NASA Astrophysics Data System (ADS)

    Gritsevich, Maria; Sonnett, Sarah; Torppa, Johanna; Mainzer, Amy; Muinonen, Karri; Penttilä, Antti; Grav, Thomas; Masiero, Joseph; Bauer, James; Kramer, Emily

    2017-04-01

    Binary asteroid systems contain key information about the dynamical and chemical environments in which they formed. For example, determining the formation environments of Trojan and Hilda asteroids (in 1:1 and 3:2 mean-motion resonance with Jupiter, respectively) will provide critical constraints on how small bodies and the planets that drive their migration must have moved throughout Solar System history, see e.g. [1-3]. Therefore, identifying and characterizing binary asteroids within the Trojan and Hilda populations could offer a powerful means of discerning between Solar System evolution models. Dozens of possibly close or contact binary Trojans and Hildas were identified within the data obtained by NEOWISE [4]. Densely sampled light curves of these candidate binaries have been obtained in order to resolve rotational light curve features that are indicative of binarity (e.g., [5-7]). We present analysis of the shapes, rotation, and pole solutions of some of the follow-up targets observed with optical ground-based telescopes. For modelling the asteroid photometric properties, we use parameters describing the shape, surface light scattering properties and spin state of the asteroid. Scattering properties of the asteroid surface are modeled using a two parameter H-G12 magnitude system. Determination of the initial best-fit parameters is carried out by first using a triaxial ellipsoid shape model, and scanning over the period values and spin axis orientations, while fitting the other parameters, after which all parameters were fitted, taking the initial values for spin properties from the spin scanning. In addition to the best-fit parameters, we also provide the distribution of the possible solution, which should cover the inaccuracies of the solution, caused by the observing errors and model. The distribution of solutions is generated by Markov-Chain Monte Carlo sampling the spin and shape model parameters, using both an ellipsoid shape model and a convex model, Gaussian curvature of which is defined as a spherical harmonics series [8]. References: [1] Marzari F. and Scholl H. (1998), A&A, 339, 278. [2] Morbidelli A. et al. (2005), Nature, 435, 462. [3] Nesvorny D. et al. (2013), ApJ, 768, 45. [4] Sonnett S. et al. (2015), ApJ, 799, 191. [5] Behrend R. et al. (2006), A&A, 446, 1177. [6] Lacerda P. and Jewitt D. C. (2007), AJ, 133, 1393. [7] Oey J. (2016), MPB, 43, 45. [8] Muinonen et al., ACM 2017.

  15. Multiverse understanding of cosmological coincidences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bousso, Raphael; Hall, Lawrence J.; Nomura, Yasunori

    2009-09-15

    There is a deep cosmological mystery: although dependent on very different underlying physics, the time scales of structure formation, of galaxy cooling (both radiatively and against the CMB), and of vacuum domination do not differ by many orders of magnitude, but are all comparable to the present age of the universe. By scanning four landscape parameters simultaneously, we show that this quadruple coincidence is resolved. We assume only that the statistical distribution of parameter values in the multiverse grows towards certain catastrophic boundaries we identify, across which there are drastic regime changes. We find order-of-magnitude predictions for the cosmological constant, the primordial density contrast, the temperature at matter-radiation equality, the typical galaxy mass, and the age of the universe, in terms of the fine structure constant and the electron, proton and Planck masses. Our approach permits a systematic evaluation of measure proposals; with the causal patch measure, we find no runaway of the primordial density contrast and the cosmological constant to large values.

  16. Time-dependent fermentation control strategies for enhancing synthesis of marine bacteriocin 1701 using artificial neural network and genetic algorithm.

    PubMed

    Peng, Jiansheng; Meng, Fanmei; Ai, Yuncan

    2013-06-01

    The artificial neural network (ANN) and genetic algorithm (GA) were combined to optimize the fermentation process for enhancing production of marine bacteriocin 1701 in a 5-L stirred tank. Fermentation time, pH value, dissolved oxygen level, temperature and turbidity were used to construct a "5-10-1" ANN topology to identify the nonlinear relationship between fermentation parameters and the antibiotic effects (shown as inhibition diameters) of bacteriocin 1701. The values predicted by the trained ANN model coincided with the observed ones (the coefficient of determination R(2) was greater than 0.95). As the fermentation time was brought in as one of the ANN input nodes, fermentation parameters could be optimized by stages through GA, and an optimal fermentation process control trajectory was created. The production of marine bacteriocin 1701 was significantly improved by 26% under the guidance of the fermentation control trajectory optimized using the combined ANN-GA method. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. A modified PATH algorithm rapidly generates transition states comparable to those found by other well established algorithms

    PubMed Central

    Chandrasekaran, Srinivas Niranj; Das, Jhuma; Dokholyan, Nikolay V.; Carter, Charles W.

    2016-01-01

    PATH rapidly computes a path and a transition state between crystal structures by minimizing the Onsager-Machlup action. It requires input parameters whose range of values can generate different transition-state structures that cannot be uniquely compared with those generated by other methods. We outline modifications to estimate these input parameters to circumvent these difficulties and validate the PATH transition states by showing consistency between transition-states derived by different algorithms for unrelated protein systems. Although functional protein conformational change trajectories are to a degree stochastic, they nonetheless pass through a well-defined transition state whose detailed structural properties can rapidly be identified using PATH. PMID:26958584

  18. Minimization of Ohmic Losses for Domain Wall Motion in a Ferromagnetic Nanowire

    NASA Astrophysics Data System (ADS)

    Tretiakov, O. A.; Liu, Y.; Abanov, Ar.

    2010-11-01

    We study current-induced domain-wall motion in a narrow ferromagnetic wire. We propose a way to move domain walls with a resonant time-dependent current which dramatically decreases the Ohmic losses in the wire and allows driving of the domain wall with higher speed without burning the wire. For any domain-wall velocity we find the time dependence of the current needed to minimize the Ohmic losses. Below a critical domain-wall velocity specified by the parameters of the wire the minimal Ohmic losses are achieved by dc current. Furthermore, we identify the wire parameters for which the losses reduction from its dc value is the most dramatic.

  19. Reverse engineering of logic-based differential equation models using a mixed-integer dynamic optimization approach

    PubMed Central

    Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R.

    2015-01-01

    Motivation: Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge or new experimental data, contradictory with a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. This search is usually not limited to finding a single regulation link, but rather a combination of links subject to great uncertainty or no information about the kinetic parameters. Results: In this work, we combine a logic-based formalism, to describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative to this would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method presented here is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: julio@iim.csic.es or saezrodriguez@ebi.ac.uk PMID:26002881

  20. Reverse engineering of logic-based differential equation models using a mixed-integer dynamic optimization approach.

    PubMed

    Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R

    2015-09-15

    Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge or new experimental data, contradictory with a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. This search is usually not limited to finding a single regulation link, but rather a combination of links subject to great uncertainty or no information about the kinetic parameters. In this work, we combine a logic-based formalism, to describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative to this would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method presented here is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary data are available at Bioinformatics online. julio@iim.csic.es or saezrodriguez@ebi.ac.uk. © The Author 2015. Published by Oxford University Press.

  1. Evaluation of spectroscopic databases through radiative transfer simulations compared to observations. Application to the validation of GEISA 2015 with IASI and TCCON

    NASA Astrophysics Data System (ADS)

    Armante, Raymond; Scott, Noelle; Crevoisier, Cyril; Capelle, Virginie; Crepeau, Laurent; Jacquinet, Nicole; Chédin, Alain

    2016-09-01

    The quality of the spectroscopic parameters that serve as input to forward radiative transfer models is essential to fully exploit remote sensing of the Earth's atmosphere. However, the process of updating spectroscopic databases in order to provide users with a database that ensures an optimal characterization of the spectral properties of molecular absorption for radiative transfer modeling is challenging. The evaluation of the database content and the underlying choices made by the managing team is thus a crucial step. Here, we introduce an original and powerful approach for evaluating spectroscopic parameters: the Spectroscopic Parameters And Radiative Transfer Evaluation (SPARTE) chain. The SPARTE chain relies on the comparison between forward radiative transfer simulations made by the 4A radiative transfer model and observed spectra, collocated over several thousand well-characterized atmospheric situations. Averaging the resulting 'calculated-observed spectral' residuals minimizes the random errors coming from both the radiometric noise of the instruments and the imperfect description of the atmospheric state. The SPARTE chain can be used to evaluate any spectroscopic database, from the visible to the microwave, using any type of remote sensing observations (ground-based, airborne or space-borne). We show that the comparison of the shape of the residuals enables: (i) identifying incorrect line parameters (line position, intensity, width, pressure shift, etc.), even for molecules for which interferences between the lines have to be taken into account; (ii) proposing revised values, in cooperation with contributing teams; and (iii) validating the final updated parameters. In particular, we show that the simultaneous availability of two databases such as GEISA and HITRAN helps identify remaining issues in each database. The SPARTE chain has been applied here to the validation of the GEISA-2015 update in two spectral regions of particular interest for several currently exploited or planned Earth space missions: the thermal infrared domain and the short-wave infrared domain, for which observations from the space-borne IASI instrument and from the ground-based FTS instruments at the Park Falls TCCON site are used, respectively. Main results include: (i) the validation of the positions and intensities of line parameters, with overall significantly lower residuals for GEISA-2015 than for GEISA-2011; and (ii) the validation of the choices made for parameters (such as pressure shift and air-broadened width) that were not given by the provider but were completed by ourselves. For example, comparisons between residuals obtained with GEISA-2015 and HITRAN-2012 have highlighted a specific issue with some HWHM values in the latter that can be clearly identified on the 'calculated-observed' residuals.

  2. Targeted Proteomics-Driven Computational Modeling of Macrophage S1P Chemosensing*

    PubMed Central

    Manes, Nathan P.; Angermann, Bastian R.; Koppenol-Raab, Marijke; An, Eunkyung; Sjoelund, Virginie H.; Sun, Jing; Ishii, Masaru; Germain, Ronald N.; Meier-Schellersheim, Martin; Nita-Lazar, Aleksandra

    2015-01-01

    Osteoclasts are monocyte-derived multinuclear cells that directly attach to and resorb bone. Sphingosine-1-phosphate (S1P)1 regulates bone resorption by functioning as both a chemoattractant and chemorepellent of osteoclast precursors through two G-protein coupled receptors that antagonize each other in an S1P-concentration-dependent manner. To quantitatively explore the behavior of this chemosensing pathway, we applied targeted proteomics, transcriptomics, and rule-based pathway modeling using the Simmune toolset. RAW264.7 cells (a mouse monocyte/macrophage cell line) were used as model osteoclast precursors, RNA-seq was used to identify expressed target proteins, and selected reaction monitoring (SRM) mass spectrometry using internal peptide standards was used to perform absolute abundance measurements of pathway proteins. The resulting transcript and protein abundance values were strongly correlated. Measured protein abundance values, used as simulation input parameters, led to in silico pathway behavior matching in vitro measurements. Moreover, once model parameters were established, even simulated responses toward stimuli that were not used for parameterization were consistent with experimental findings. These findings demonstrate the feasibility and value of combining targeted mass spectrometry with pathway modeling for advancing biological insight. PMID:26199343

  3. Carboxyhaemoglobin and pulmonary epithelial permeability in man.

    PubMed Central

    Jones, J G; Minty, B D; Royston, D; Royston, J P

    1983-01-01

    The effect of cigarette smoke exposure on pulmonary epithelial permeability was studied in 45 smokers and 22 non-smokers. An index of cigarette smoke exposure was obtained from the carboxyhaemoglobin concentration (HbCO%). Pulmonary epithelial permeability was proportional to the half-time clearance rate of technetium-99m-labelled diethylene triamine pentacetate (99mTc DTPA) from lung to blood (T1/2LB). The relationship between T1/2LB and HbCO% was hyperbolic in form and the data could be fitted to the quadratic formula (formula; see text), where the parameters a0, a1, and a2 represent respectively the asymptotic T1/2LB value at large carboxyhaemoglobin values and the slope and shape of the curve. The values of these parameters were a0 = 4.4 (2.6), a1 = 77.8 (15.5), and a2 = -25.5 (9.7) (SE). This is the first demonstration of a dose-response relationship between carboxyhaemoglobin and an increased permeability of the lungs in man and provides a technique for identifying the roles of carbon monoxide and other cigarette smoke constituents in causing increased pulmonary epithelial permeability. PMID:6344310

  4. Could CT screening for lung cancer ever be cost effective in the United Kingdom?

    PubMed Central

    Whynes, David K

    2008-01-01

    Background The absence of trial evidence makes it impossible to determine whether or not mass screening for lung cancer would be cost effective and, indeed, whether a clinical trial to investigate the problem would be justified. Attempts have been made to resolve this issue by modelling, although the complex models developed to date have required more real-world data than are currently available. Being founded on unsubstantiated assumptions, they have produced estimates with wide confidence intervals and of uncertain relevance to the United Kingdom. Method I develop a simple, deterministic, model of a screening regimen potentially applicable to the UK. The model includes only a limited number of parameters, for the majority of which, values have already been established in non-trial settings. The component costs of screening are derived from government guidance and from published audits, whilst the values for test parameters are derived from clinical studies. The expected health gains as a result of screening are calculated by combining published survival data for screened and unscreened cohorts with data from Life Tables. When a degree of uncertainty over a parameter value exists, I use a conservative estimate, i.e. one likely to make screening appear less, rather than more, cost effective. Results The incremental cost effectiveness ratio of a single screen amongst a high-risk male population is calculated to be around £14,000 per quality-adjusted life year gained. The average cost of this screening regimen per person screened is around £200. It is possible that, when obtained experimentally in any future trial, parameter values will be found to differ from those previously obtained in non-trial settings. On the basis both of differing assumptions about evaluation conventions and of reasoned speculations as to how test parameters and costs might behave under screening, the model generates cost effectiveness ratios as high as around £20,000 and as low as around £7,000. Conclusion It is evident that eventually being able to identify a cost effective regimen of CT screening for lung cancer in the UK is by no means an unreasonable expectation. PMID:18302756

  5. Utility of a routine urinalysis in children who require clean intermittent catheterization.

    PubMed

    Forster, C S; Haslam, D B; Jackson, E; Goldstein, S L

    2017-10-01

    Children who require clean intermittent catheterization (CIC) frequently have positive urine cultures. However, diagnosing a urinary tract infection (UTI) can be difficult, as there are no standardized criteria. Routine urinalysis (UA) has good predictive accuracy for UTI in the general pediatric population, but data are limited on the utility of routine UA in the population of children who require CIC. To determine the utility of UA parameters (e.g. leukocyte esterase, nitrites, and pyuria) to predict UTI in children who require CIC, and identify a composite UA that has maximal predictive accuracy for UTI. A cross-sectional study of 133 children who required CIC, and had a UA and urine culture sent as part of standard of care. Patients in the no-UTI group all had UA and urine cultures sent as part of routine urodynamics, and were asymptomatic. Patients included in the UTI group had growth of ≥50,000 colony-forming units/ml of a known uropathogen on urine culture, in addition to two or more of the following symptoms: fever, abdominal pain, back pain, foul-smelling urine, new or worse incontinence, and pain with catheterization. Categorical data were compared using Chi-squared test, and continuous data were compared with Student's t-test. Sensitivity, specificity, and positive and negative predictive values were calculated for individual UA parameters, as well as the composite UA. Logistic regression was performed on potential composite UA models to identify the model that best fit the data. There was a higher proportion of patients in the no-UTI group with negative leukocyte esterase compared with the UTI group. There was a higher proportion of patients with UTI who had large leukocyte esterase and positive nitrites compared with the no-UTI group (Summary Figure). There was no between-group difference in urinary white blood cells. Positive nitrites were the most specific (84.4%) for UTI. None of the parameters had a high positive predictive value, while all had high negative predictive values. The composite model with the best Akaike information criterion was >10 urinary white blood cells and either moderate or large leukocyte esterase, which had a positive predictive value of 33.3 and a negative predictive value of 90.4. Routine UA had limited sensitivity, but moderate specificity, in predicting UTI in children who required CIC. The composite UA and moderate or large leukocyte esterase both had good negative predictive values for the outcome of UTI. Copyright © 2017 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.

  6. A hadronic origin for ultra-high-frequency-peaked BL Lac objects

    NASA Astrophysics Data System (ADS)

    Cerruti, M.; Zech, A.; Boisson, C.; Inoue, S.

    2015-03-01

    Current Cherenkov telescopes have identified a population of ultra-high-frequency-peaked BL Lac objects (UHBLs), also known as extreme blazars, that exhibit exceptionally hard TeV spectra, including 1ES 0229+200, 1ES 0347-121, RGB J0710+591, 1ES 1101-232, and 1ES 1218+304. Although one-zone synchrotron-self-Compton (SSC) models have been generally successful in interpreting the high-energy emission observed in other BL Lac objects, they are problematic for UHBLs, necessitating very large Doppler factors and/or extremely high minimum Lorentz factors of the emitting leptonic population. In this context, we have investigated alternative scenarios where hadronic emission processes are important, using a newly developed (lepto-)hadronic numerical code to systematically explore the physical parameters of the emission region that reproduces the observed spectra while avoiding the extreme values encountered in pure SSC models. Assuming a fixed Doppler factor δ = 30, two principal parameter regimes are identified, where the high-energy emission is due to: (1) proton-synchrotron radiation, with magnetic fields B ˜ 1-100 G and maximum proton energies Ep,max ≲ 10^19 eV; and (2) synchrotron emission from p-γ-induced cascades as well as SSC emission from primary leptons, with B ˜ 0.1-1 G and Ep,max ≲ 10^17 eV. This can be realized with plausible, sub-Eddington values for the total (kinetic plus magnetic) power of the emitting plasma, in contrast to hadronic interpretations for other blazar classes that often warrant highly super-Eddington values.

  7. The value of iodide as a parameter in the chemical characterisation of groundwaters

    NASA Astrophysics Data System (ADS)

    Lloyd, J. W.; Howard, K. W. F.; Pacey, N. R.; Tellam, J. H.

    1982-06-01

    Brackish and saline groundwaters can severely constrain the use of fresh groundwaters. Their chemical characterisation is important in understanding the hydraulic conditions controlling their presence in an aquifer. Major ions are frequently of limited value, but minor ions can be used. Iodide in groundwater is particularly significant in many environments due to the presence of soluble iodine in aquifer matrix materials. Iodide is found in groundwaters in parts of the English Chalk aquifer in concentrations higher than are present in modern seawater. Its presence is considered an indication of groundwater residence and is of use in the characterisation of fresh as well as saline waters. Under certain circumstances modern seawater intrusion into aquifers along English estuaries produces groundwaters which are easily identified due to iodide enrichment from estuarine muds. In other environments iodide concentrations are of value in distinguishing between groundwaters in limestones and shaly gypsiferous rocks, as shown by a study in Qatar, while in an alluvial aquifer study in Peru iodide has been used to identify groundwaters entering the aquifer from adjacent granodiorites.

  8. Application of a quality by design approach to the cell culture process of monoclonal antibody production, resulting in the establishment of a design space.

    PubMed

    Nagashima, Hiroaki; Watari, Akiko; Shinoda, Yasuharu; Okamoto, Hiroshi; Takuma, Shinya

    2013-12-01

    This case study describes the application of Quality by Design elements to the process of culturing Chinese hamster ovary cells in the production of a monoclonal antibody. All steps in the cell culture process and all process parameters in each step were identified by using a cause-and-effect diagram. Prospective risk assessment using failure mode and effects analysis identified the following four potential critical process parameters in the production culture step: initial viable cell density, culture duration, pH, and temperature. These parameters and lot-to-lot variability in raw material were then evaluated by process characterization utilizing a design of experiments approach consisting of a face-centered central composite design integrated with a full factorial design. Process characterization was conducted using a scaled down model that had been qualified by comparison with large-scale production data. Multivariate regression analysis was used to establish statistical prediction models for performance indicators and quality attributes; with these, we constructed contour plots and conducted Monte Carlo simulation to clarify the design space. The statistical analyses, especially for raw materials, identified set point values, which were most robust with respect to the lot-to-lot variability of raw materials while keeping the product quality within the acceptance criteria. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.

  9. Focused ultrasound-mediated noninvasive blood-brain barrier modulation: preclinical examination of efficacy and safety in various sonication parameters.

    PubMed

    Shin, Jaewoo; Kong, Chanho; Cho, Jae Sung; Lee, Jihyeon; Koh, Chin Su; Yoon, Min-Sik; Na, Young Cheol; Chang, Won Seok; Chang, Jin Woo

    2018-02-01

    OBJECTIVE The application of pharmacological therapeutics in neurological disorders is limited by the ability of these agents to penetrate the blood-brain barrier (BBB). Focused ultrasound (FUS) has recently gained attention for its potential application as a method for locally opening the BBB and thereby facilitating drug delivery into the brain parenchyma. However, this method still requires optimization to maximize its safety and efficacy for clinical use. In the present study, the authors examined several sonication parameters of FUS influencing BBB opening in small animals. METHODS Changes in BBB permeability were observed during transcranial sonication using low-intensity FUS in 20 adult male Sprague-Dawley rats. The authors examined the effects of FUS sonication with different sonication parameters, varying acoustic pressure, center frequency, burst duration, microbubble (MB) type, MB dose, pulse repetition frequency (PRF), and total exposure time. The focal region of BBB opening was identified by Evans blue dye. Additionally, H & E staining was used to identify blood vessel damage. RESULTS Acoustic pressure amplitude and burst duration were closely associated with enhancement of BBB opening efficiency, but these parameters were also highly correlated with tissue damage in the sonicated region. In contrast, MB types, MB dose, total exposure time, and PRF had an influence on BBB opening without conspicuous tissue damage after FUS sonication. CONCLUSIONS The study aimed to identify these influential conditions and provide safety and efficacy values for further studies. Future work based on the current results is anticipated to facilitate the implementation of FUS sonication for drug delivery in various CNS disease states in the near future.

  10. Effects of climatological parameters in modeling and forecasting seasonal influenza transmission in Abidjan, Cote d'Ivoire.

    PubMed

    N'gattia, A K; Coulibaly, D; Nzussouo, N Talla; Kadjo, H A; Chérif, D; Traoré, Y; Kouakou, B K; Kouassi, P D; Ekra, K D; Dagnan, N S; Williams, T; Tiembré, I

    2016-09-13

    In temperate regions, influenza epidemics occur in the winter and correlate with certain climatological parameters. In African tropical regions, the effects of climatological parameters on influenza epidemics are not well defined. This study aims to identify and model the effects of climatological parameters on seasonal influenza activity in Abidjan, Cote d'Ivoire. We studied the effects of weekly rainfall, humidity, and temperature on laboratory-confirmed influenza cases in Abidjan from 2007 to 2010. We used the Box-Jenkins method with the autoregressive integrated moving average (ARIMA) process to create models using data from 2007-2010 and to assess the predictive value of the best model on data from 2011 to 2012. The weekly number of influenza cases showed significant cross-correlation with certain prior weeks for both rainfall and relative humidity. The best-fitting multivariate model (ARIMAX (2,0,0) _RF) included the number of influenza cases during 1-week and 2-weeks prior, and the rainfall during the current week and 5-weeks prior. The performance of this model showed an increase of >3 % in the Akaike Information Criterion (AIC) and 2.5 % in the Bayesian Information Criterion (BIC) compared to the reference univariate ARIMA (2,0,0). The prediction of the weekly number of influenza cases during 2011-2012 with the best-fitting multivariate model (ARIMAX (2,0,0) _RF) showed that the observed values were within the 95 % confidence interval of the predicted values during 97 of 104 weeks. Including rainfall improves the performance of the fitted and predictive models. The timing of influenza in Abidjan can be partially explained by rainfall influence, in a setting with little change in temperature throughout the year. These findings can help clinicians to anticipate influenza cases during the rainy season by implementing preventive measures.
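
    The sketch below illustrates the general ARIMAX idea described above (it is not the authors' code): an ARIMAX(2,0,0) model with current and lagged rainfall as exogenous regressors is fitted to synthetic weekly data with statsmodels, compared with a univariate ARIMA(2,0,0) via AIC/BIC, and used for out-of-sample prediction with 95 % intervals. The data, lag choice, and train/test split are illustrative only.

```python
# Hedged sketch: ARIMAX(2,0,0) with rainfall regressors on synthetic weekly data.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n_weeks = 208
rain = rng.gamma(shape=2.0, scale=10.0, size=n_weeks)            # synthetic weekly rainfall (mm)
cases = 5 + 0.05 * rain + 0.03 * np.roll(rain, 5) + rng.normal(0, 1, n_weeks)

df = pd.DataFrame({"cases": cases, "rain": rain})
df["rain_lag5"] = df["rain"].shift(5)                            # rainfall 5 weeks prior
df = df.dropna()
train, test = df.iloc[:156], df.iloc[156:]                       # fit period vs validation period

# Univariate reference ARIMA(2,0,0) and the rainfall-augmented ARIMAX(2,0,0)
ref = SARIMAX(train["cases"], order=(2, 0, 0)).fit(disp=False)
arx = SARIMAX(train["cases"], exog=train[["rain", "rain_lag5"]], order=(2, 0, 0)).fit(disp=False)
print("AIC univariate vs ARIMAX:", ref.aic, arx.aic)
print("BIC univariate vs ARIMAX:", ref.bic, arx.bic)

# Out-of-sample prediction with 95% intervals, mirroring the validation step
fc = arx.get_forecast(steps=len(test), exog=test[["rain", "rain_lag5"]])
ci = fc.conf_int(alpha=0.05)
covered = ((test["cases"].values >= ci.iloc[:, 0].values) &
           (test["cases"].values <= ci.iloc[:, 1].values)).mean()
print("fraction of weeks inside the 95% interval:", covered)
```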

  11. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    NASA Astrophysics Data System (ADS)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that precipitation intensities during the investigated landslide-triggering rainfall events were already close to or above the soil's infiltration capacity.
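
    A local one-at-a-time sensitivity analysis of the kind mentioned above can be sketched as follows; the toy factor-of-safety function and the parameter names are placeholders, not the coupled hydrological-geomechanical model used in the study.

```python
# Hedged sketch: local one-at-a-time (OAT) sensitivity analysis around a reference parameter set.
import numpy as np

def factor_of_safety(params):
    """Toy stand-in response; a real application would call the slope stability model here."""
    c, phi, k_s, s_stor = params["cohesion"], params["friction"], params["conductivity"], params["storage"]
    return (c + 10.0 * np.tan(np.radians(phi))) / (8.0 + 0.5 * k_s + 2.0 * s_stor)

reference = {"cohesion": 5.0, "friction": 30.0, "conductivity": 1.0, "storage": 0.01}
base = factor_of_safety(reference)

# Perturb each parameter by +/-10% while holding the others fixed and record the
# relative change of the output (a simple local sensitivity measure).
for name in reference:
    effects = []
    for rel in (-0.10, +0.10):
        p = dict(reference)
        p[name] = reference[name] * (1.0 + rel)
        effects.append((factor_of_safety(p) - base) / base)
    print(f"{name:>12}: {min(effects):+.3f} .. {max(effects):+.3f}")
```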

  12. Experimental analysis of thread movement in bolted connections due to vibrations

    NASA Technical Reports Server (NTRS)

    Ramey, G. ED; Jenkins, Robert C.

    1994-01-01

    The objective of this study was to identify the main design parameters contributing to loosening of bolts due to vibration and to identify their relative importance and degree of contribution to bolt loosening. Vibration testing was conducted on a shaketable with a controlled-random input in the dynamic testing laboratory of the Structural Test Division of MSFC. Test specimens which contained one test bolt were vibrated for a fixed amount of time and percentage of pre-load loss was measured. Each specimen tested implemented some combination of eleven design parameters as dictated by the design of experiment methodology employed. The eleven design parameters were: bolt size (diameter), lubrication on bolt, hole tolerance, initial pre-load, nut locking device, grip length, thread pitch, lubrication between mating materials, class of fit, joint configuration and mass of configuration. These parameters were chosen for this experiment because they are believed to be the design parameters having the greatest impact on bolt loosening. Two values of each design parameter were used and each combination of parameters tested was subjected to two different directions of vibration and two different g-levels of vibration. One replication was made for each test to gain some indication of experimental error and repeatability and to give some degree of statistical credibility to the data, resulting in a total of 96 tests being performed. The results of the investigation indicated that nut locking devices, joint configuration, fastener size, and mass of configuration were significant in bolt loosening due to vibration. The results of this test can be utilized to further research the complex problem of bolt loosening due to vibration.

  13. A method to investigate the diffusion properties of nuclear calcium.

    PubMed

    Queisser, Gillian; Wittum, Gabriel

    2011-10-01

    Modeling biophysical processes in general requires knowledge about underlying biological parameters. The quality of simulation results is strongly influenced by the accuracy of these parameters, hence identifying the values of the parameters included in the model is a major part of simulating biophysical processes. In many cases, secondary data can be gathered by experimental setups, which are exploitable by mathematical inverse modeling techniques. Here we describe a method for identifying the diffusion properties of calcium in the nuclei of rat hippocampal neurons. The method is based on a Gauss-Newton scheme for solving a least-squares minimization problem and was formulated in such a way that it is ideally implementable in the simulation platform uG. Making use of independently published space- and time-dependent calcium imaging data, generated from laser-assisted calcium uncaging experiments, we were able to identify the diffusion properties of nuclear calcium and to validate a previously published model that describes nuclear calcium dynamics as a diffusion process.
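
    A minimal sketch of a Gauss-Newton least-squares identification of a diffusion coefficient, in the spirit of the method described above but not the uG implementation: a 1D point-source diffusion solution is fitted to synthetic space- and time-dependent "imaging" data using finite-difference sensitivities and simple step halving.

```python
# Hedged sketch: identify a diffusion coefficient D by Gauss-Newton least squares.
import numpy as np

def model(D, x, t):
    # Analytical 1D concentration for an instantaneous point source of unit mass
    return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

rng = np.random.default_rng(1)
x = np.linspace(-5.0, 5.0, 21)
t = np.array([0.5, 1.0, 2.0])
X, T = np.meshgrid(x, t)
D_true = 0.8
data = model(D_true, X, T) + rng.normal(0.0, 0.002, X.shape)     # synthetic "imaging" data

D = 0.2                                                          # initial guess
for it in range(50):
    r = (data - model(D, X, T)).ravel()                          # residuals
    h = 1e-6 * max(abs(D), 1.0)
    J = ((model(D + h, X, T) - model(D, X, T)) / h).ravel()      # finite-difference sensitivity
    step = (J @ r) / (J @ J)                                     # Gauss-Newton step (scalar parameter)
    # simple backtracking: halve the step while the misfit would increase
    while abs(step) > 1e-12 and np.sum((data - model(max(D + step, 1e-4), X, T))**2) > np.sum(r**2):
        step *= 0.5
    D = max(D + step, 1e-4)
    if abs(step) < 1e-10:
        break
print("identified D:", round(D, 4), " true D:", D_true)
```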

  14. Complex bifurcation patterns in a discrete predator-prey model with periodic environmental modulation

    NASA Astrophysics Data System (ADS)

    Harikrishnan, K. P.

    2018-02-01

    We consider the simplest model in the family of discrete predator-prey systems and introduce for the first time an environmental factor in the evolution of the system by periodically modulating the natural death rate of the predator. We show that with the introduction of environmental modulation, the bifurcation structure becomes much more complex, with bubble structures and inverse period-doubling bifurcations. The model also displays the peculiar phenomenon of multiple limit cycles coexisting in the domain of attraction for a given parameter value, which merge and are finally transformed into a single strange attractor as the control parameter is increased. To identify the chaotic regime in the parameter plane of the model, we apply a recently proposed scheme based on correlation dimension analysis. We show that the environmental modulation is more favourable for the stable coexistence of the predator and the prey, as the regions of fixed point and limit cycle in the parameter plane increase at the expense of the chaotic domain.

  15. Aerodynamic Parameter Estimation for the X-43A (Hyper-X) from Flight Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Derry, Stephen D.; Smith, Mark S.

    2005-01-01

    Aerodynamic parameters were estimated based on flight data from the third flight of the X-43A hypersonic research vehicle, also called Hyper-X. Maneuvers were flown using multiple orthogonal phase-optimized sweep inputs applied as simultaneous control surface perturbations at Mach 8, 7, 6, 5, 4, and 3 during the vehicle descent. Aerodynamic parameters, consisting of non-dimensional longitudinal and lateral stability and control derivatives, were estimated from flight data at each Mach number. Multi-step inputs at nearly the same flight conditions were also flown to assess the prediction capability of the identified models. Prediction errors were found to be comparable in magnitude to the modeling errors, which indicates accurate modeling. Aerodynamic parameter estimates were plotted as a function of Mach number, and compared with estimates from the pre-flight aerodynamic database, which was based on wind-tunnel tests and computational fluid dynamics. Agreement between flight estimates and values computed from the aerodynamic database was excellent overall.

  16. Analysis of hepatitis C viral dynamics using Latin hypercube sampling

    NASA Astrophysics Data System (ADS)

    Pachpute, Gaurav; Chakrabarty, Siddhartha P.

    2012-12-01

    We consider a mathematical model comprising four coupled ordinary differential equations (ODEs) to study hepatitis C viral dynamics. The model includes the efficacies of a combination therapy of interferon and ribavirin. There are two main objectives of this paper. The first one is to approximate the percentage of cases in which there is a viral clearance in the absence of treatment, as well as the percentage of response to treatment for various efficacy levels. The other is to better understand and identify the parameters that play a key role in the decline of viral load and can be estimated in a clinical setting. A condition for the stability of the uninfected and the infected steady states is presented. A large number of sample points for the model parameters (which are physiologically feasible) are generated using Latin hypercube sampling. An analysis of the simulated values identifies that approximately 29.85% of cases result in clearance of the virus during the early phase of the infection. Results from the χ2 and Spearman's tests on the samples indicate a distinctly different distribution for certain parameters for the cases exhibiting viral clearance under the combination therapy.
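
    A minimal Latin hypercube sampling sketch for generating parameter samples of the kind used above; the parameter names and ranges below are illustrative placeholders, not the values from the paper.

```python
# Hedged sketch: Latin hypercube sampling of a physiologically bounded parameter space.
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """bounds: list of (low, high) per parameter; returns an (n_samples, n_params) array."""
    n_params = len(bounds)
    # one point per stratum and per dimension ...
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):                       # ... then an independent shuffle per column
        u[:, j] = rng.permutation(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(42)
bounds = [(1e-3, 1e-1),   # e.g. infected-cell death rate (illustrative)
          (1e-1, 1e1),    # e.g. viral production rate (illustrative)
          (0.0, 1.0),     # interferon efficacy
          (0.0, 1.0)]     # ribavirin efficacy
samples = latin_hypercube(10000, bounds, rng)
print(samples.shape, samples.min(axis=0), samples.max(axis=0))
```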

  17. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II

    2015-12-01

    The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values are commonly adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e. MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.

  18. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and prediction intervals, which quantify the uncertainty of model simulated values when the model is not linear. CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
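
    The sketch below illustrates, with a stand-in process model rather than UCODE_2005 or MODFLOW, two ingredients described in the report: the weighted least-squares objective comparing observations with simulated equivalents, and forward- versus central-difference perturbation sensitivities.

```python
# Hedged sketch: weighted least-squares objective and perturbation sensitivities.
import numpy as np

def process_model(params, x):
    """Toy process model producing simulated equivalents at observation points x."""
    a, b = params
    return a * np.exp(-b * x)

x_obs = np.linspace(0.0, 4.0, 9)
obs = process_model([2.0, 0.7], x_obs) + np.random.default_rng(3).normal(0, 0.02, x_obs.size)
weights = np.full(x_obs.size, 1.0 / 0.02**2)        # weights ~ 1 / observation-error variance

def objective(params):
    r = obs - process_model(params, x_obs)
    return np.sum(weights * r**2)                    # weighted least-squares objective

def sensitivities(params, scheme="central", frac=1e-4):
    """Perturbation sensitivities d(simulated)/d(parameter), forward or central differences."""
    params = np.asarray(params, dtype=float)
    base = process_model(params, x_obs)
    J = np.zeros((x_obs.size, params.size))
    for j in range(params.size):
        h = frac * max(abs(params[j]), 1.0)
        p_plus = params.copy()
        p_plus[j] += h
        if scheme == "forward":
            J[:, j] = (process_model(p_plus, x_obs) - base) / h
        else:
            p_minus = params.copy()
            p_minus[j] -= h
            J[:, j] = (process_model(p_plus, x_obs) - process_model(p_minus, x_obs)) / (2 * h)
    return J

p = np.array([1.0, 1.0])
print("objective at starting values:", objective(p))
print("central-difference sensitivities:\n", sensitivities(p))
```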

  19. Measurements of Deposition, Lung Surface Area and Lung Fluid for Simulation of Inhaled Compounds.

    PubMed

    Fröhlich, Eleonore; Mercuri, Annalisa; Wu, Shengqian; Salar-Behzadi, Sharareh

    2016-01-01

    Modern strategies in drug development employ in silico techniques in the design of compounds as well as estimations of pharmacokinetics, pharmacodynamics and toxicity parameters. The quality of the results depends on the software algorithm, the data library and the input data. Compared to simulations of absorption, distribution, metabolism, excretion, and toxicity of oral drug compounds, relatively few studies report predictions of pharmacokinetics and pharmacodynamics of inhaled substances. For calculation of the drug concentration at the absorption site, the pulmonary epithelium, physiological parameters such as lung surface area and distribution volume (lung lining fluid) have to be known. These parameters can only be determined by invasive techniques and by postmortem studies. Very different values have been reported in the literature. This review addresses the state of software programs for simulation of orally inhaled substances and focuses on problems in the determination of particle deposition, lung surface area and lung lining fluid. The different surface areas for deposition and for drug absorption are difficult to include directly into the simulations. As drug levels are influenced by multiple parameters, the role of individual parameters in the simulations cannot be identified easily.

  20. Rheological constraints on ridge formation on Icy Satellites

    NASA Astrophysics Data System (ADS)

    Rudolph, M. L.; Manga, M.

    2010-12-01

    The processes responsible for forming ridges on Europa remain poorly understood. We use a continuum damage mechanics approach to model ridge formation. The main objectives of this contribution are to constrain (1) choice of rheological parameters and (2) maximum ridge size and rate of formation. The key rheological parameters to constrain appear in the evolution equation for a damage variable D, $\dot{D} = B\,\langle\sigma\rangle^{r}(1-D)^{-k} - \alpha D\,p/\mu$, and in the equation relating damage accumulation to volumetric changes, $J\rho_0 = \delta(1-D)$. Similar damage evolution laws have been applied to terrestrial glaciers and to the analysis of rock mechanics experiments. However, it is reasonable to expect that, like viscosity, the rheological constants B, α, and δ depend strongly on temperature, composition, and ice grain size. In order to determine whether the damage model is appropriate for Europa’s ridges, we must find values of the unknown damage parameters that reproduce ridge topography. We perform a suite of numerical experiments to identify the region of parameter space conducive to ridge production and show the sensitivity to changes in each unknown parameter.

  1. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining an ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) to estimate simultaneously the model states and parameters through concatenating unknown parameters and state variables into a joint state vector; (2) to mitigate dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and to control the narrowing of parameter variance (which results in filter divergence) by adjusting the smoothing factor in the kernel smoothing algorithm; (3) to assimilate data recursively into the model and thus detect possible time variation of parameters; and (4) to address properly various sources of uncertainties stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
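
    A minimal sketch of the joint state-parameter idea, assuming a Liu-West style kernel smoothing step (shrink the parameter ensemble toward its mean and add matching jitter) inside a perturbed-observation ensemble Kalman filter on an augmented [state, parameter] vector; the scalar toy model and all numbers are illustrative, not the partition eddy flux model of the paper.

```python
# Hedged sketch: ensemble Kalman filter with kernel-smoothed parameter evolution.
import numpy as np

rng = np.random.default_rng(7)
n_ens, n_steps = 200, 100
true_k = 0.3                                          # "true" respiration-like coefficient
x_true, obs = 10.0, []
for _ in range(n_steps):                              # generate synthetic observations
    x_true = x_true - true_k * x_true * 0.1 + 1.0 * 0.1
    obs.append(x_true + rng.normal(0, 0.2))

# Augmented ensemble: column 0 = state, column 1 = parameter k
ens = np.column_stack([rng.normal(10.0, 2.0, n_ens), rng.uniform(0.05, 1.0, n_ens)])
a, obs_var = 0.98, 0.2**2                             # smoothing factor and observation variance

for y in obs:
    # Kernel smoothing of the parameter column: shrink toward the mean, add jitter
    k = ens[:, 1]
    k_s = a * k + (1 - a) * k.mean() + rng.normal(0.0, np.sqrt((1 - a**2) * k.var() + 1e-12), n_ens)
    ens[:, 1] = np.clip(k_s, 1e-3, None)
    # Forecast step with each member's own (uncertain) parameter
    ens[:, 0] = ens[:, 0] - ens[:, 1] * ens[:, 0] * 0.1 + 1.0 * 0.1 + rng.normal(0, 0.05, n_ens)
    # EnKF analysis step on the augmented vector (only the state is observed)
    H = np.array([[1.0, 0.0]])
    P = np.cov(ens.T)
    K = P @ H.T / (H @ P @ H.T + obs_var)             # Kalman gain, shape (2, 1)
    innov = y + rng.normal(0, 0.2, n_ens) - ens[:, 0] # perturbed observations
    ens = ens + (K @ innov[None, :]).T

print("posterior parameter mean:", round(ens[:, 1].mean(), 3), " true value:", true_k)
```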

  2. Empirical Bayes estimation of proportions with application to cowbird parasitism rates

    USGS Publications Warehouse

    Link, W.A.; Hahn, D.C.

    1996-01-01

    Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
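
    A minimal empirical Bayes sketch for proportions, assuming a Beta prior whose hyperparameters are estimated from the observed proportions by the method of moments; the counts are made up for illustration, and the moment fit is a simpler stand-in for the estimation approach used in the paper.

```python
# Hedged sketch: empirical Bayes shrinkage of observed proportions (beta-binomial).
import numpy as np

# (parasitized nests, total nests) per species -- hypothetical illustrative data
k = np.array([1, 9, 3, 14, 0, 6])
n = np.array([20, 12, 25, 18, 10, 30])
p_hat = k / n

# Method-of-moments fit of the Beta(alpha, beta) prior from the observed proportions
m, v = p_hat.mean(), p_hat.var(ddof=1)
common = m * (1 - m) / v - 1.0
alpha, beta = m * common, (1 - m) * common

# Posterior mean per species: shrinkage toward the prior mean, with the amount of
# shrinkage controlled by the sample size (i.e., by the estimate's precision)
eb_estimate = (k + alpha) / (n + alpha + beta)
for ph, eb, ni in zip(p_hat, eb_estimate, n):
    print(f"observed {ph:.2f} (n={ni:2d})  ->  empirical Bayes {eb:.2f}")
```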

  3. Mechanics of the taper integrated screwed-in (TIS) abutments used in dental implants.

    PubMed

    Bozkaya, Dinçer; Müftü, Sinan

    2005-01-01

    The tapered implant-abutment interface is becoming more popular due to the mechanical reliability of retention it provides. Consequently, understanding the mechanical properties of the tapered interface, with or without a screw at the bottom, has been the subject of a considerable number of studies involving experiments and finite element (FE) analysis. This paper focuses on the tapered implant-abutment interface with a screw integrated at the bottom of the abutment. The tightening and loosening torques are the main factors in determining the reliability and the stability of the attachment. Analytical formulas are developed to predict tightening and loosening torque values by combining the equations related to the tapered interface with screw mechanics equations. This enables the identification of the effects of parameters such as friction, geometric properties of the screw, the taper angle, and the elastic properties of the materials on the mechanics of the system. In particular, a relation between the tightening torque and the screw pretension is identified. It was shown that the loosening torque is smaller than the tightening torque for typical values of the parameters. Most of the tightening load is carried by the tapered section of the abutment, and in certain combinations of the parameters the pretension in the screw may become zero. The calculations performed to determine the loosening torque as a percentage of the tightening torque resulted in values in the range 85-137%, depending on the taper angle and the friction coefficient.

  4. Effect of solute interactions in columbium /Nb/ on creep strength

    NASA Technical Reports Server (NTRS)

    Klein, M. J.; Metcalfe, A. G.

    1973-01-01

    The creep strength of 17 ternary columbium (Nb)-base alloys was determined using an abbreviated measuring technique, and the results were analyzed to identify the contributions of solute interactions to creep strength. Isostrength creep diagrams and an interaction strengthening parameter, ST, were used to present and analyze data. It was shown that the isostrength creep diagram can be used to estimate the creep strength of untested alloys and to identify compositions with the most economical use of alloy elements. Positive values of ST were found for most alloys, showing that interaction strengthening makes an important contribution to the creep strength of these ternary alloys.

  5. Optimisation of flame parameters for simultaneous multi-element atomic absorption spectrometric determination of trace elements in rocks

    USGS Publications Warehouse

    Kane, J.S.

    1988-01-01

    A study is described that identifies the optimum operating conditions for the accurate determination of Co, Cu, Mn, Ni, Pb, Zn, Ag, Bi and Cd using simultaneous multi-element atomic absorption spectrometry. Accuracy was measured in terms of the percentage recoveries of the analytes based on certified values in nine standard reference materials. In addition to identifying optimum operating conditions for accurate analysis, conditions resulting in serious matrix interferences and the magnitude of the interferences were determined. The listed elements can be measured with acceptable accuracy in a lean to stoicheiometric flame at measurement heights ∼5-10 mm above the burner.

  6. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    The forecasting skill of complex weather and climate models has been improved by tuning the sensitive parameters that exert the greatest impact on simulated results, using more effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific question deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three sets of boundary data and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how to improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area, because the optimal parameters are not constrained by specific precipitation events, boundary conditions, and spatial resolutions. The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.
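
    A minimal sketch of the adaptive surrogate idea (not the ASMO code): fit a cheap surrogate to a small set of expensive model runs, propose the candidate that minimizes the surrogate, evaluate the expensive model there, and refit. A quadratic least-squares surrogate and a two-parameter stand-in objective are used here purely for illustration.

```python
# Hedged sketch: adaptive surrogate model-based optimization loop with a quadratic surrogate.
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(p):
    """Stand-in for an expensive skill score to be minimized (lower = better)."""
    x, y = p
    return (x - 0.3)**2 + 2.0 * (y - 0.7)**2 + 0.05 * np.sin(8 * x) * np.cos(5 * y)

def quad_features(P):
    x, y = P[:, 0], P[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

# Initial design: a handful of random points in the (normalized) parameter square
X = rng.random((12, 2))
f = np.array([expensive_model(p) for p in X])

for it in range(20):
    coef, *_ = np.linalg.lstsq(quad_features(X), f, rcond=None)   # fit the surrogate
    cand = rng.random((5000, 2))                                  # cheap candidate pool
    pred = quad_features(cand) @ coef
    x_new = cand[np.argmin(pred)]                                 # adaptive sampling point
    X = np.vstack([X, x_new])
    f = np.append(f, expensive_model(x_new))                      # one new expensive run per cycle

best = X[np.argmin(f)]
print("best parameters found:", best, " objective:", f.min())
```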

  7. Automatic detection of malaria parasite in blood images using two parameters.

    PubMed

    Kim, Jong-Dae; Nam, Kyeong-Min; Park, Chan-Young; Kim, Yu-Seop; Song, Hye-Jeong

    2015-01-01

    Malaria must be diagnosed quickly and accurately at the initial infection stage and treated early to cure it properly. Microscope-based malaria diagnosis requires much labor and time from a skilled expert, and the diagnosis results vary greatly between individual diagnosticians. Therefore, to be able to measure malaria parasite infection quickly and accurately, studies have been conducted on automated classification techniques using various parameters. In this study, by measuring classification performance as two parameters were varied, the parameter values that best distinguish normal from plasmodium-infected red blood cells were determined. To reduce the stain deviation of the acquired images, a principal component analysis (PCA) grayscale conversion method was used, and as parameters we used the malaria-infected area and a threshold value used in binarization. The parameter values with the best classification performance were determined by selecting the malaria threshold value (72) that gave the lowest error rate, on the basis of a cell threshold value of 128, for detecting plasmodium-infected red blood cells.
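
    One plausible reading of the two-parameter scheme is sketched below (PCA grayscale conversion, binarization with a pixel threshold, and an infected-area criterion); the interpretation of the reported values 128 and 72, the synthetic image, and the decision rule are illustrative assumptions rather than the authors' exact procedure.

```python
# Hedged sketch: PCA grayscale conversion followed by threshold-and-area classification.
import numpy as np

rng = np.random.default_rng(5)
img = rng.normal(180, 10, (64, 64, 3))                 # bright background (synthetic cell crop)
img[20:28, 30:40] = rng.normal(60, 5, (8, 10, 3))      # dark "parasite" patch
img = np.clip(img, 0, 255)

# PCA grayscale conversion: project each RGB pixel onto the first principal axis
pixels = img.reshape(-1, 3)
centered = pixels - pixels.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
v = Vt[0]
if v.sum() < 0:                                        # orient the axis so brighter pixels score higher
    v = -v
gray = (centered @ v).reshape(img.shape[:2])
gray = 255 * (gray - gray.min()) / (gray.max() - gray.min())

pixel_threshold, area_threshold = 128, 72              # the two tuned parameters (assumed roles)
stained_area = int(np.sum(gray < pixel_threshold))     # dark pixels after binarization
label = "infected" if stained_area > area_threshold else "normal"
print("stained area:", stained_area, "->", label)
```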

  8. Evaluation of locally established reference intervals for hematology and biochemistry parameters in Western Kenya.

    PubMed

    Odhiambo, Collins; Oyaro, Boaz; Odipo, Richard; Otieno, Fredrick; Alemnji, George; Williamson, John; Zeh, Clement

    2015-01-01

    Important differences have been demonstrated in laboratory parameters from healthy persons in different geographical regions and populations, mostly driven by a combination of genetic, demographic, nutritional, and environmental factors. Despite this, European and North American derived laboratory reference intervals are used in African countries for patient management, clinical trial eligibility, and toxicity determination, which can result in misclassification of healthy persons as having laboratory abnormalities. An observational prospective cohort study known as the Kisumu Incidence Cohort Study (KICoS) was conducted to estimate the incidence of HIV seroconversion and identify determinants of successful recruitment and retention in preparation for an HIV vaccine/prevention trial among young adults and adolescents in western Kenya. Laboratory values generated from the KICoS were compared to published region-specific reference intervals and the 2004 NIH DAIDS toxicity tables used for the trial. About 1106 participants were screened for the KICoS between January 2007 and June 2010. Nine hundred and fifty-three participants aged 16 to 34 years, HIV-seronegative, clinically healthy, and non-pregnant were selected for this analysis. Median and 95% reference intervals were calculated for hematological and biochemistry parameters. When compared with both published region-specific reference values and the 2004 NIH DAIDS toxicity table, it was shown that the use of locally established reference intervals would have resulted in fewer participants classified as having abnormal hematological or biochemistry values compared to US-derived reference intervals from DAIDS (10% classified as abnormal by local parameters vs. >40% by US DAIDS). Blood urea nitrogen was most often out of range if US-based intervals were used: <10% abnormal by local intervals compared to >83% by US-based reference intervals. Differences in reference intervals for hematological and biochemical parameters between Western and African populations highlight the importance of developing local reference intervals for clinical care and trials in Africa.

  9. Clumpak: a program for identifying clustering modes and packaging population structure inferences across K.

    PubMed

    Kopelman, Naama M; Mayzel, Jonathan; Jakobsson, Mattias; Rosenberg, Noah A; Mayrose, Itay

    2015-09-01

    The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present Clumpak (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, Clumpak identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software Clumpp. Next, Clumpak identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in Clumpp and simplifying the comparison of clustering results across different K values. Clumpak incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. Clumpak, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology. © 2015 John Wiley & Sons Ltd.

  10. A factorial assessment of the sensitivity of the BATS land-surface parameterization scheme. [BATS (Biosphere-Atmosphere Transfer Scheme)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henderson-Sellers, A.

    Land-surface schemes developed for incorporation into global climate models include parameterizations that are not yet fully validated and depend upon the specification of a large (20-50) number of ecological and soil parameters, the values of which are not yet well known. There are two methods of investigating the sensitivity of a land-surface scheme to prescribed values: simple one-at-a-time changes or factorial experiments. Factorial experiments offer information about interactions between parameters and are thus a more powerful tool. Here the results of a suite of factorial experiments are reported. These are designed (i) to illustrate the usefulness of this methodology and (ii) to identify factors important to the performance of complex land-surface schemes. The Biosphere-Atmosphere Transfer Scheme (BATS) is used and its sensitivity is considered (a) to prescribed ecological and soil parameters and (b) to atmospheric forcing used in the off-line tests undertaken. Results indicate that the most important atmospheric forcings are mean monthly temperature and the interaction between mean monthly temperature and total monthly precipitation, although fractional cloudiness and other parameters are also important. The most important ecological parameters are vegetation roughness length, soil porosity, and a factor describing the sensitivity of the stomatal resistance of vegetation to the amount of photosynthetically active solar radiation and, to a lesser extent, soil and vegetation albedos. Two-factor interactions including vegetation roughness length are more important than many of the 23 specified single factors. The results of factorial sensitivity experiments such as these could form the basis for intercomparison of land-surface parameterization schemes and for field experiments and satellite-based observation programs aimed at improving evaluation of important parameters.
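
    A minimal two-level full factorial sketch of the kind of sensitivity experiment described above: run a model at every +/- combination of a few factors and estimate main effects and two-factor interactions as average differences. The three factors and the toy response are placeholders, not BATS itself.

```python
# Hedged sketch: 2-level full factorial design with main effects and two-factor interactions.
import numpy as np
from itertools import product, combinations

factors = ["roughness_length", "soil_porosity", "stomatal_factor"]

def model(levels):
    """Toy response (e.g., annual evaporation) with a built-in two-factor interaction."""
    z, phi, st = levels
    return 500 + 40 * z + 25 * phi + 10 * st + 15 * z * phi

design = np.array(list(product([-1, 1], repeat=len(factors))))    # 2^3 = 8 runs
response = np.array([model(run) for run in design])

for j, name in enumerate(factors):                                 # main effects
    effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    print(f"main effect {name:>16}: {effect:+.1f}")

for (i, j) in combinations(range(len(factors)), 2):                # two-factor interactions
    contrast = design[:, i] * design[:, j]
    effect = response[contrast == 1].mean() - response[contrast == -1].mean()
    print(f"interaction {factors[i]} x {factors[j]}: {effect:+.1f}")
```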

  11. Cesarean section scar diverticulum evaluation by saline contrast-enhanced magnetic resonance imaging: The relationship between variable parameters and longer menstrual bleeding.

    PubMed

    Yao, Min; Wang, Wenjing; Zhou, Jieru; Sun, Minghua; Zhu, Jialiang; Chen, Pin; Wang, Xipeng

    2017-04-01

    This study was conducted to determine a more accurate imaging method for the diagnosis of cesarean scar diverticulum (CSD) and to identify the parameters of CSD strongly associated with prolonged menstrual bleeding. We enrolled 282 women with a history of cesarean section (CS) who presented with prolonged menstrual bleeding between January 2012 and May 2015. Transvaginal ultrasound, general magnetic resonance imaging (MRI) and contrast-enhanced MRI were used to diagnose CSD. Five parameters were compared among the imaging modalities: length, width, depth and thickness of the remaining muscular layer (TRM) of CSD and the depth/TRM ratio. Correlation between the five parameters and days of menstrual bleeding was performed. Finally, multivariate analysis was used to determine the parameters associated with menstrual bleeding longer than 14 days. Contrast-enhanced MRI yielded greater length or width or thinner TRM of CSD compared with MRI and transvaginal ultrasound. CSD size did not significantly differ between women who had undergone one and two CSs. Correlation analysis revealed that CSD (P = 0.038) and TRM (P = 0.003) lengths were significantly associated with days of menstrual bleeding. Longer than 14 days of bleeding was defined by cut-off values of 2.15 mm for TRM and 13.85 mm for length. TRM and number of CSs were strongly associated with menstrual bleeding longer than 14 days. CE-MRI is a relatively accurate and efficient imaging method for the diagnosis of CSD. A cut-off value of TRM of 2.15 mm is the most important parameter associated with menstrual bleeding longer than 14 days. © 2017 Japan Society of Obstetrics and Gynecology.

  12. Risk of ultrasound-detected neonatal brain abnormalities in intrauterine growth-restricted fetuses born between 28 and 34 weeks' gestation: relationship with gestational age at birth and fetal Doppler parameters.

    PubMed

    Cruz-Martinez, R; Tenorio, V; Padilla, N; Crispi, F; Figueras, F; Gratacos, E

    2015-10-01

    To estimate the value of gestational age at birth and fetal Doppler parameters in predicting the risk of neonatal cranial abnormalities in intrauterine growth-restricted (IUGR) fetuses born between 28 and 34 weeks' gestation. Fetal Doppler parameters including umbilical artery (UA), middle cerebral artery (MCA), aortic isthmus, ductus venosus and myocardial performance index were evaluated in a cohort of 90 IUGR fetuses with abnormal UA Doppler delivered between 28 and 34 weeks' gestation and in 90 control fetuses matched for gestational age. The value of gestational age at birth and fetal Doppler parameters in predicting the risk of ultrasound-detected cranial abnormalities (CUA), including intraventricular hemorrhage, periventricular leukomalacia and basal ganglia lesions, was analyzed. Overall, IUGR fetuses showed a significantly higher incidence of CUA than did control fetuses (40.0% vs 12.2%, respectively; P < 0.001). Within the IUGR group, all predictive variables were associated individually with the risk of CUA, but fetal Doppler parameters rather than gestational age at birth were identified as the best predictor. MCA Doppler distinguished two groups with different degrees of risk of CUA (48.5% vs 13.6%, respectively; P < 0.01). In the subgroup with MCA vasodilation, presence of aortic isthmus retrograde net blood flow, compared to antegrade flow, allowed identification of a subgroup of cases with the highest risk of CUA (66.7% vs 38.6%, respectively; P < 0.05). Evaluation of fetal Doppler parameters, rather than gestational age at birth, allows identification of IUGR preterm fetuses at risk of neonatal brain abnormalities. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.

  13. Parameter interdependence and uncertainty induced by lumping in a hydrologic model

    NASA Astrophysics Data System (ADS)

    Gallagher, Mark R.; Doherty, John

    2007-05-01

    Throughout the world, watershed modeling is undertaken using lumped parameter hydrologic models that represent real-world processes in a manner that is at once abstract, but nevertheless relies on algorithms that reflect real-world processes and parameters that reflect real-world hydraulic properties. In most cases, values are assigned to the parameters of such models through calibration against flows at watershed outlets. One criterion by which the utility of the model and the success of the calibration process are judged is that realistic values are assigned to parameters through this process. This study employs regularization theory to examine the relationship between lumped parameters and corresponding real-world hydraulic properties. It demonstrates that any kind of parameter lumping or averaging can induce a substantial amount of "structural noise," which devices such as Box-Cox transformation of flows and autoregressive moving average (ARMA) modeling of residuals are unlikely to render homoscedastic and uncorrelated. Furthermore, values estimated for lumped parameters are unlikely to represent average values of the hydraulic properties after which they are named and are often contaminated to a greater or lesser degree by the values of hydraulic properties which they do not purport to represent at all. As a result, the question of how rigidly they should be bounded during the parameter estimation process is still an open one.

  14. Evaluating focused ion beam patterning for position-controlled nanowire growth using computer vision

    NASA Astrophysics Data System (ADS)

    Mosberg, A. B.; Myklebost, S.; Ren, D.; Weman, H.; Fimland, B. O.; van Helvoort, A. T. J.

    2017-09-01

    To efficiently evaluate the novel approach of focused ion beam (FIB) direct patterning of substrates for nanowire growth, a reference matrix of hole arrays has been used to study the effect of ion fluence and hole diameter on nanowire growth. Self-catalyzed GaAsSb nanowires were grown using molecular beam epitaxy and studied by scanning electron microscopy (SEM). To ensure an objective analysis, SEM images were analyzed with computer vision to automatically identify nanowires and characterize each array. It is shown that FIB milling parameters can be used to control the nanowire growth. Lower ion fluence and smaller diameter holes result in a higher yield (up to 83%) of single vertical nanowires, while higher fluences and hole diameters lead to a regime of multiple nanowires. The catalyst size distribution and placement uniformity of vertical nanowires are best for low-value parameter combinations, indicating how to improve the FIB parameters for position-controlled nanowire growth.
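
    A minimal sketch of the image-analysis step, assuming a simple threshold-and-label approach with scipy.ndimage on a synthetic top-view image; the real pipeline presumably differs, and the intensities, blob positions and hole count used here are illustrative.

```python
# Hedged sketch: count bright blobs (candidate vertical nanowires) in a synthetic SEM-like image.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(11)
img = rng.normal(0.1, 0.02, (200, 200))               # dark substrate background
for (r, c) in [(40, 40), (40, 120), (120, 40), (120, 120), (160, 160)]:
    rr, cc = np.ogrid[:200, :200]
    img[(rr - r)**2 + (cc - c)**2 < 16] += 0.8        # bright nanowire tips

mask = img > 0.5                                      # intensity threshold
labels, n_blobs = ndimage.label(mask)                 # connected-component labelling
sizes = ndimage.sum(mask, labels, index=range(1, n_blobs + 1))
print("detected nanowires:", n_blobs, " blob areas (px):", sizes.astype(int))
print(f"yield assuming 5 patterned holes: {n_blobs / 5:.0%}")
```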

  15. Origin of Disagreements in Tandem Mass Spectra Interpretation by Search Engines.

    PubMed

    Tessier, Dominique; Lollier, Virginie; Larré, Colette; Rogniaux, Hélène

    2016-10-07

    Several proteomic database search engines that interpret LC-MS/MS data do not identify the same set of peptides. These disagreements occur even when the scores of the peptide-to-spectrum matches suggest good confidence in the interpretation. Our study shows that these disagreements observed for the interpretations of a given spectrum are almost exclusively due to the variation of what we call the "peptide space", i.e., the set of peptides that are actually compared to the experimental spectra. We discuss the potential difficulties of precisely defining the "peptide space." Indeed, although several parameters that are generally reported in publications can easily be set to the same values, many additional parameters-with much less straightforward user access-might impact the "peptide space" used by each program. Moreover, in a configuration where each search engine identifies the same candidates for each spectrum, the inference of the proteins may remain quite different depending on the false discovery rate selected.

  16. A consistent framework to predict mass fluxes and depletion times for DNAPL contaminations in heterogeneous aquifers under uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Jonas; Nowak, Wolfgang

    2013-04-01

    At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long-term. In industrial countries the number of such contaminated sites is so high that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the designing of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups where we vary the physical and stochastic dependencies of the input parameters and simulated processes. Under these changes, the probability density functions demonstrate strong statistical shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model based decision support for DNAPL contaminated sites.

  17. Global sensitivity analysis of a local water balance model predicting evaporation, water yield and drought

    NASA Astrophysics Data System (ADS)

    Speich, Matthias; Zappa, Massimiliano; Lischke, Heike

    2017-04-01

    Evaporation and transpiration affect both catchment water yield and the growing conditions for vegetation. They are driven by climate, but also depend on vegetation, soil and land surface properties. In hydrological and land surface models, these properties may be included as constant parameters, or as state variables. Often, little is known about the effect of these variables on model outputs. In the present study, the effect of surface properties on evaporation was assessed in a global sensitivity analysis. To this effect, we developed a simple local water balance model combining state-of-the-art process formulations for evaporation, transpiration and soil water balance. The model is vertically one-dimensional, and the relative simplicity of its process formulations makes it suitable for integration in a spatially distributed model at regional scale. The main model outputs are annual total evaporation (TE, i.e. the sum of transpiration, soil evaporation and interception), and a drought index (DI), which is based on the ratio of actual and potential transpiration. This index represents the growing conditions for forest trees. The sensitivity analysis was conducted in two steps. First, a screening analysis was applied to identify unimportant parameters out of an initial set of 19 parameters. In a second step, a statistical meta-model was applied to a sample of 800 model runs, in which the values of the important parameters were varied. Parameter effect and interactions were analyzed with effects plots. The model was driven with forcing data from ten meteorological stations in Switzerland, representing a wide range of precipitation regimes across a strong temperature gradient. Of the 19 original parameters, eight were identified as important in the screening analysis. Both steps highlighted the importance of Plant Available Water Capacity (AWC) and Leaf Area Index (LAI). However, their effect varies greatly across stations. For example, while a transition from a sparse to a closed forest canopy has almost no effect on annual TE at warm and dry sites, it increases TE by up to 100 mm/year at cold-humid and warm-humid sites. Further parameters of importance describe infiltration, as well as canopy resistance and its response to environmental variables. This study offers insights for future development of hydrological and ecohydrological models. First, it shows that although local water balance is primarily controlled by climate, the vegetation and soil parameters may have a large impact on the outputs. Second, it indicates that modeling studies should prioritize a realistic parameterization of LAI and AWC, while other parameters may be set to fixed values. Third, it illustrates to which extent parameter effect and interactions depend on local climate.

  18. Relationships and redundancies of selected hemodynamic and structural parameters for characterizing virtual treatment of cerebral aneurysms with flow diverter devices.

    PubMed

    Karmonik, C; Anderson, J R; Beilner, J; Ge, J J; Partovi, S; Klucznik, R P; Diaz, O; Zhang, Y J; Britz, G W; Grossman, R G; Lv, N; Huang, Q

    2016-07-26

    To quantify the relationship and to demonstrate redundancies between hemodynamic and structural parameters before and after virtual treatment with a flow diverter device (FDD) in cerebral aneurysms. Steady computational fluid dynamics (CFD) simulations were performed for 10 cerebral aneurysms where FDD treatment with the SILK device was simulated by virtually reducing the porosity at the aneurysm ostium. Velocity and pressure values proximal to, distal to, and at the aneurysm ostium, as well as inside the aneurysm, were quantified. In addition, dome-to-neck ratios and size ratios were determined. Multiple correlation analysis (MCA) and hierarchical cluster analysis (HCA) were conducted to demonstrate dependencies among structural and hemodynamic parameters. Velocities in the aneurysm were reduced by 0.14 m/s on average and correlated significantly (p<0.05) with velocity values in the parent artery (average correlation coefficient: 0.70). Pressure changes in the aneurysm correlated significantly with pressure values in the parent artery and aneurysm (average correlation coefficient: 0.87). MCA found statistically significant correlations between velocity values and between pressure values, respectively. HCA sorted velocity parameters, pressure parameters and structural parameters into different hierarchical clusters. HCA of aneurysms based on the parameter values yielded similar results whether all parameters (n=22) or only non-redundant parameters (n=2, 3 and 4) were included. Hemodynamic and structural parameters before and after virtual FDD treatment show strong inter-correlations. Redundancy of parameters was demonstrated with hierarchical cluster analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
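    The sketch below shows, in general terms, how redundant parameters can be grouped by hierarchical clustering on a correlation-based distance; it is not the authors' analysis, and the synthetic data, parameter names, and cluster count are assumptions chosen only to mirror the kinds of quantities mentioned in the abstract.

      # Grouping redundant parameters via hierarchical clustering on 1 - |correlation|.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      rng = np.random.default_rng(1)
      n_cases = 10                                   # e.g. ten aneurysm cases

      # Synthetic parameter table: correlated velocity-type and pressure-type columns
      # plus structural measures (dome-to-neck ratio, size ratio).
      v_parent   = rng.normal(0.6, 0.1, n_cases)
      v_aneurysm = 0.7 * v_parent + rng.normal(0, 0.02, n_cases)
      p_parent   = rng.normal(90.0, 5.0, n_cases)
      p_aneurysm = 0.9 * p_parent + rng.normal(0, 1.0, n_cases)
      dome_neck  = rng.normal(1.8, 0.3, n_cases)
      size_ratio = 1.2 * dome_neck + rng.normal(0, 0.1, n_cases)

      names = ["v_parent", "v_aneurysm", "p_parent", "p_aneurysm", "dome_neck", "size_ratio"]
      X = np.column_stack([v_parent, v_aneurysm, p_parent, p_aneurysm, dome_neck, size_ratio])

      corr = np.corrcoef(X, rowvar=False)            # parameter-by-parameter correlations
      dist = 1.0 - np.abs(corr)                      # strongly correlated -> small distance
      np.fill_diagonal(dist, 0.0)

      Z = linkage(squareform(dist, checks=False), method="average")
      labels = fcluster(Z, t=3, criterion="maxclust")
      for name, lab in zip(names, labels):
          print(f"{name:11s} -> cluster {lab}")

    Parameters landing in the same cluster carry largely redundant information, so a reduced, non-redundant subset can be used in downstream comparisons, analogous to the n=2, 3 and 4 parameter subsets reported above.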

  19. Complete set of homogeneous isotropic analytic solutions in scalar-tensor cosmology with radiation and curvature

    NASA Astrophysics Data System (ADS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-10-01

    We study a model of a scalar field minimally coupled to gravity, with a specific potential energy for the scalar field, and include curvature and radiation as two additional parameters. Our goal is to obtain analytically the complete set of configurations of a homogeneous and isotropic universe as a function of time. This leads to a geodesically complete description of the Universe, including the passage through the cosmological singularities, at the classical level. We give all the solutions analytically without any restrictions on the parameter space of the model or initial values of the fields. We find that for generic solutions the Universe goes through a singular (zero-size) bounce by entering a period of antigravity at each big crunch and exiting from it at the following big bang. This happens cyclically again and again without violating the null-energy condition. There is a special subset of geodesically complete nongeneric solutions which perform zero-size bounces without ever entering the antigravity regime in all cycles. For these, initial values of the fields are synchronized and quantized but the parameters of the model are not restricted. There is also a subset of spatial curvature-induced solutions that have finite-size bounces in the gravity regime and never enter the antigravity phase. These exist only within a small continuous domain of parameter space without fine-tuning the initial conditions. To obtain these results, we identified 25 regions of a 6-parameter space in which the complete set of analytic solutions are explicitly obtained.
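    For orientation, the following is a minimal sketch (in units 8πG = 1) of the standard homogeneous, isotropic background equations for a minimally coupled scalar field with radiation and spatial curvature, which is the general setting the abstract describes; the specific scalar potential studied by the authors and the parameterization of their 6-parameter solution space are not reproduced here.

      H^2 \equiv \left(\frac{\dot a}{a}\right)^2
        = \frac{1}{3}\left[\tfrac{1}{2}\dot\phi^2 + V(\phi) + \rho_r\right] - \frac{k}{a^2},
      \qquad
      \ddot\phi + 3H\dot\phi + V'(\phi) = 0,
      \qquad
      \rho_r = \rho_{r,0}\, a^{-4}.

    Here the spatial curvature k and the radiation density ρ_{r,0} play the role of the two additional parameters mentioned in the abstract, alongside the parameters of V(φ) and the initial field values.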

  20. Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.

    PubMed

    Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E

    2013-12-01

    Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.
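    To make the kind of one-parameter sweep described above concrete, the sketch below varies a minimum valve resistance and records the time-average outflow of a deliberately simplified single-lymphangion lumped model (one contracting chamber between two ideal valves). The model, all coefficients, and the parameter values are illustrative assumptions, not the authors' chain model.

      # Illustrative parameter sweep (not the authors' model): mean outflow of a toy
      # single-lymphangion pump as the minimum valve resistance R_min is varied.
      import numpy as np

      def average_flow(R_min, dp_adverse=2.0, E=10.0, V0=1.0, A=8.0, f=0.5,
                       t_end=60.0, dt=1e-3):
          """Toy lumped model: elastic chamber with sinusoidal active pressure."""
          p_in, p_out = 0.0, dp_adverse          # external pressures (adverse difference)
          R_closed = 1e6                         # very high resistance for a closed valve
          V = V0
          t = np.arange(0.0, t_end, dt)
          q_out = np.zeros_like(t)
          for i, ti in enumerate(t):
              p = E * (V - V0) + 0.5 * A * (1.0 + np.sin(2.0 * np.pi * f * ti))
              q_in_i  = max((p_in - p) / (R_min if p_in > p else R_closed), 0.0)   # inlet valve
              q_out_i = max((p - p_out) / (R_min if p > p_out else R_closed), 0.0)  # outlet valve
              V += (q_in_i - q_out_i) * dt       # forward-Euler volume update
              q_out[i] = q_out_i
          return q_out.mean()

      for R_min in [0.1, 0.5, 1.0, 5.0, 10.0]:
          print(f"R_min = {R_min:5.1f}  ->  mean outflow = {average_flow(R_min):.4f}")

    Even this crude sketch reproduces the qualitative behaviour the abstract reports: the time-average flow rate is highly sensitive to the open-valve resistance, which motivates measuring that quantity more accurately in experiments.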
