Sample records for ii parameter estimation

  1. Novel methods to estimate the enantiomeric ratio and the kinetic parameters of enantiospecific enzymatic reactions.

    PubMed

    Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.

    2001-03-08

    The Enantiomeric Ratio (E) of enzymes acting as specific catalysts in the resolution of enantiomers is an important parameter in the quantitative description of these chiral resolution processes. In the present work, two novel methods, hereby called Method I and Method II, for estimating E and the kinetic parameters Km and Vm of enantiomers were developed. These methods are based upon initial rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental data" and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers be available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of the estimates. The theoretical predictions were experimentally confirmed by using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km and Vm were estimated by Methods I and II with precision and were not significantly different from those obtained experimentally by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.
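
    A rough sketch of this kind of estimation is given below: the two enantiomers' Vm and Km (and hence E) are fitted to initial rates measured on mixtures, assuming the standard competitive Michaelis-Menten rate law for two substrates sharing one enzyme. The rate law, parameter values and synthetic data are illustrative assumptions, not the paper's actual Method I or II.

    ```python
    # Hedged sketch: fit Vm, Km of two competing enantiomers (and E) to initial
    # rates of mixtures, assuming the competitive Michaelis-Menten rate law.
    import numpy as np
    from scipy.optimize import curve_fit

    def mixture_rate(X, Vr, Kr, Vs, Ks):
        """Initial rate for enantiomers R and S competing for one enzyme."""
        C, x = X                        # total concentration and molar fraction of R
        R, S = x * C, (1.0 - x) * C
        return (Vr * R / Kr + Vs * S / Ks) / (1.0 + R / Kr + S / Ks)

    # Simulated "experimental" design: several concentrations and molar fractions
    C = np.repeat([0.5, 1.0, 2.0, 5.0, 10.0], 5)
    x = np.tile([0.0, 0.25, 0.5, 0.75, 1.0], 5)
    true = (10.0, 2.0, 4.0, 1.0)                 # Vr, Kr, Vs, Ks (arbitrary units)
    rng = np.random.default_rng(0)
    v = mixture_rate((C, x), *true) * (1 + 0.03 * rng.standard_normal(C.size))

    popt, pcov = curve_fit(mixture_rate, (C, x), v, p0=(5, 1, 5, 1))
    Vr, Kr, Vs, Ks = popt
    E = (Vr / Kr) / (Vs / Ks)                    # enantiomeric ratio
    print("Vm, Km (R):", Vr, Kr, " Vm, Km (S):", Vs, Ks, " E =", E)
    ```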

  2. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    PubMed

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

    The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was supported by intraclass correlation coefficients ≥0.73; no significant differences between corresponding mean parameter estimates and predictions of HID rate; and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter estimated worst by SAAM II and in maintaining all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure was suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
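
    The alternation of Gauss-Newton and Levenberg-Marquardt steps can be illustrated with the minimal hand-rolled sketch below; the exponential test model, step-acceptance rule and damping schedule are assumptions for illustration, not the procedure distributed with the paper.

    ```python
    # Hedged sketch: alternate a Gauss-Newton step with a damped (Levenberg-
    # Marquardt) step in nonlinear least squares; model and data are illustrative.
    import numpy as np

    def residuals(theta, t, y):
        a, b = theta
        return y - a * np.exp(-b * t)

    def jac(theta, t, y, eps=1e-6):
        """Forward-difference Jacobian of the residual vector."""
        r0 = residuals(theta, t, y)
        J = np.empty((t.size, theta.size))
        for j in range(theta.size):
            d = np.zeros_like(theta)
            d[j] = eps
            J[:, j] = (residuals(theta + d, t, y) - r0) / eps
        return J

    def fit(theta, t, y, n_iter=50, lam=1e-2):
        cost = 0.5 * np.sum(residuals(theta, t, y) ** 2)
        for _ in range(n_iter):
            r, J = residuals(theta, t, y), jac(theta, t, y)
            for damping in (0.0, lam):          # try Gauss-Newton first, then LM
                H = J.T @ J + damping * np.eye(theta.size)
                step = np.linalg.solve(H, -J.T @ r)
                new_cost = 0.5 * np.sum(residuals(theta + step, t, y) ** 2)
                if new_cost < cost:
                    theta, cost = theta + step, new_cost
                    lam = max(lam / 2.0, 1e-8)  # relax damping after a success
                    break
            else:
                lam *= 10.0                     # neither step helped: more damping
        return theta

    t = np.linspace(0, 10, 40)
    y = 3.0 * np.exp(-0.4 * t) + 0.02 * np.random.default_rng(1).standard_normal(t.size)
    print(fit(np.array([1.0, 1.0]), t, y))      # should approach [3.0, 0.4]
    ```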

  3. A longitudinal study of serum ferritin in 319 adolescent Danish boys and girls examined in 1986 and 1992.

    PubMed

    Milman, N; Byg, K E; Backer, V; Ulrik, C; Graudal, N

    1999-10-01

    This study examined trends in iron status in adolescents. Serum ferritin was measured in 1986 and 1992 in 319 Danes (161 males) stratified into 5 groups: I. median age 9 yr in 1986 vs. 15 yr in 1992; II. 11 vs. 17 yr; III. 13 vs. 19 yr; IV. 15 vs. 21 yr; V. 17 vs. 23 yr. Males in group I demonstrated no change in ferritin or estimated iron stores in mg/kg; groups II-V displayed an increase in iron status parameters. All groups showed an increase in estimated total iron stores. Changes in iron status parameters were inversely correlated with height velocity in group III, and positively correlated with height velocity in group V. Females in age groups I and II demonstrated a fall in ferritin and estimated iron stores in mg/kg in association with menarche; values were unchanged in groups III and IV, and increased in group V. All groups showed an increase in estimated total iron stores. Changes in iron status parameters were inversely correlated with height velocity in groups I and II. In conclusion, ferritin levels in adolescents display great variation during the growth spurt and at menarche. Changes in ferritin showed no consistent association with growth velocity. In both genders, estimated total iron stores increased with age.

  4. Confocal arthroscopy-based patient-specific constitutive models of cartilaginous tissues - II: prediction of reaction force history of meniscal cartilage specimens.

    PubMed

    Taylor, Zeike A; Kirk, Thomas B; Miller, Karol

    2007-10-01

    The theoretical framework developed in a companion paper (Part I) is used to derive estimates of the mechanical response of two meniscal cartilage specimens. The previously developed framework consisted of a constitutive model capable of incorporating confocal image-derived tissue microstructural data. In the present paper (Part II) fibre and matrix constitutive parameters are first estimated from mechanical testing of a batch of specimens similar to, but independent from, those under consideration. Image analysis techniques which allow estimation of tissue microstructural parameters from confocal images are presented. The constitutive model and image-derived structural parameters are then used to predict the reaction force history of the two meniscal specimens subjected to partially confined compression. The predictions are made on the basis of the specimens' individual structural condition as assessed by confocal microscopy and involve no tuning of material parameters. Although the model does not reproduce all features of the experimental curves, as an unfitted estimate of mechanical response the prediction is quite accurate. In light of the obtained results it is judged that more general non-invasive estimation of tissue mechanical properties is possible using the developed framework.

  5. Statistics of equivalent width data and new oscillator strengths for Si II, Fe II, and Mn II. [in interstellar medium]

    NASA Technical Reports Server (NTRS)

    Van Buren, Dave

    1986-01-01

    Equivalent width data from Copernicus and IUE appear to have an exponential, rather than a Gaussian distribution of errors. This is probably because there is one dominant source of error: the assignment of the background continuum shape. The maximum likelihood method of parameter estimation is presented for the case of exponential statistics, in enough generality for application to many problems. The method is applied to global fitting of Si II, Fe II, and Mn II oscillator strengths and interstellar gas parameters along many lines of sight. The new values agree in general with previous determinations but are usually much more tightly constrained. Finally, it is shown that care must be taken in deriving acceptable regions of parameter space because the probability contours are not generally ellipses whose axes are parallel to the coordinate axes.
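
    The key practical consequence of exponential error statistics is that maximum likelihood reduces to minimizing the sum of absolute deviations rather than the sum of squares. A minimal sketch, assuming a two-sided exponential (Laplace) error distribution and a synthetic straight-line problem:

    ```python
    # Hedged sketch: maximum-likelihood line fit under two-sided exponential
    # (Laplace) errors = minimize the sum of absolute residuals. Data are synthetic.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    x = np.linspace(0, 1, 50)
    y = 1.5 * x + 0.3 + rng.laplace(scale=0.1, size=x.size)   # exponential-statistics noise

    def neg_loglike(p):
        a, b = p
        return np.sum(np.abs(y - (a * x + b)))   # proportional to -log L for Laplace errors

    fit_l1 = minimize(neg_loglike, x0=[1.0, 0.0], method="Nelder-Mead")
    fit_l2 = np.polyfit(x, y, 1)                 # Gaussian (least-squares) fit for comparison
    print("Laplace MLE:", fit_l1.x, " least squares:", fit_l2)
    ```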

  6. Evaluation of design flood estimates with respect to sample size

    NASA Astrophysics Data System (ADS)

    Kobierska, Florian; Engeland, Kolbjorn

    2016-04-01

    Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If less than 30 years of local data are available, an index flood approach is recommended where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2-parameter distribution is recommended, and for more than 50 years of data, a 3-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log Pearson III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not excessively depend on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.
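
    As a toy illustration of the kind of estimate being evaluated, the sketch below fits a 3-parameter GEV distribution to a synthetic annual-maximum series by maximum likelihood and reads off a return level; the record length, return period and distribution choice are illustrative, not the Norwegian guideline values.

    ```python
    # Hedged sketch: fit a GEV to an annual-maximum flood series and compute a
    # design flood (return level). Record and return period are illustrative.
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(3)
    annual_max = genextreme.rvs(c=-0.1, loc=100.0, scale=30.0, size=60, random_state=rng)

    c, loc, scale = genextreme.fit(annual_max)        # ML estimates of shape/location/scale
    T = 200.0                                         # return period in years
    design_flood = genextreme.ppf(1.0 - 1.0 / T, c, loc, scale)
    print(f"Q{int(T)} design flood estimate: {design_flood:.1f} m^3/s")
    ```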

  7. The circuit parameters measurement of the SABALAN-I plasma focus facility and comparison with Lee Model

    NASA Astrophysics Data System (ADS)

    Karimi, F. S.; Saviz, S.; Ghoranneviss, M.; Salem, M. K.; Aghamir, F. M.

    The circuit parameters are investigated in a Mather-type plasma focus device. The experiments are performed in the SABALAN-I plasma focus facility (2 kJ, 20 kV, 10 μF). A 12-turn Rogowski coil is built and used to measure the time derivative of the discharge current (dI/dt). A high pressure test has been performed in this work as an alternative technique to the short circuit test for determining the machine circuit parameters and the calibration factor of the Rogowski coil. The operating parameters are calculated by two methods, and the results show that the relative errors of the parameters determined by method I are very low in comparison to method II; thus method I produces more accurate results than method II. The high pressure test is performed under the assumption that there is no plasma motion, so that the circuit parameters may be estimated using R-L-C theory given that C0 is known. However, for a plasma focus, even at the highest permissible pressure it is found that there is significant plasma motion, so the estimated circuit parameters are not accurate. Therefore, the Lee Model code is used in short circuit mode to generate a computed current trace for fitting to the current waveform integrated from the current derivative signal taken with the Rogowski coil. Hence, the plasma dynamics are accounted for in the estimation and the static bank parameters are determined accurately.
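
    A hedged sketch of the classical short-circuit analysis mentioned above: with C0 and the charging voltage known, the under-damped series R-L-C current waveform is fitted to recover L0 and R0. The waveform model is the textbook solution and all numbers are illustrative, not SABALAN-I measurements.

    ```python
    # Hedged sketch: estimate static bank inductance L0 and resistance R0 from a
    # short-circuit discharge current, assuming the under-damped series R-L-C
    # solution with C0 and V0 known. Values are illustrative only.
    import numpy as np
    from scipy.optimize import curve_fit

    C0, V0 = 10e-6, 20e3              # bank capacitance (F) and charging voltage (V)

    def rlc_current(t, L0, R0):
        w = np.sqrt(1.0 / (L0 * C0) - (R0 / (2.0 * L0)) ** 2)
        return (V0 / (w * L0)) * np.exp(-R0 * t / (2.0 * L0)) * np.sin(w * t)

    # Synthetic "measured" trace standing in for the integrated Rogowski-coil signal
    t = np.linspace(0, 30e-6, 600)
    i_meas = rlc_current(t, 120e-9, 15e-3) + 2e3 * np.random.default_rng(4).standard_normal(t.size)

    (L0, R0), _ = curve_fit(rlc_current, t, i_meas, p0=(100e-9, 10e-3))
    print(f"L0 = {L0*1e9:.1f} nH, R0 = {R0*1e3:.1f} mOhm, "
          f"peak current = {i_meas.max()/1e3:.0f} kA")
    ```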

  8. Assessment of type II diabetes mellitus using irregularly sampled measurements with missing data.

    PubMed

    Barazandegan, Melissa; Ekram, Fatemeh; Kwok, Ezra; Gopaluni, Bhushan; Tulsyan, Aditya

    2015-04-01

    Diabetes mellitus is one of the leading diseases in the developed world. In order to better regulate blood glucose in a diabetic patient, improved modelling of insulin-glucose dynamics is a key factor in the treatment of diabetes mellitus. In the current work, the insulin-glucose dynamics in type II diabetes mellitus are modelled using a stochastic nonlinear state-space model. Estimating the parameters of such a model is difficult as only a few blood glucose and insulin measurements per day are available in a non-clinical setting. Therefore, developing a predictive model of the blood glucose of a person with type II diabetes mellitus is important when the glucose and insulin concentrations are only available at irregular intervals. To overcome these difficulties, we resort to online sequential Monte Carlo (SMC) estimation of states and parameters of the state-space model for type II diabetic patients under various levels of randomly missing clinical data. Our results show that this method is efficient in monitoring and estimating the dynamics of the peripheral glucose, insulin and incretin concentrations when 10, 25 and 50% of the simulated clinical data were randomly removed.
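
    A minimal sketch of the sequential Monte Carlo idea, assuming a scalar toy state-space model in place of the glucose-insulin model: the unknown parameter is appended to the state, particles are propagated at every step, and the weighting/resampling update is applied only at time steps where an observation is available.

    ```python
    # Hedged sketch: bootstrap particle filter with parameter augmentation and
    # randomly missing observations; the scalar model is illustrative only.
    import numpy as np

    rng = np.random.default_rng(5)
    T, N = 200, 2000
    a_true, q, r = 0.9, 0.3, 0.5

    # Simulate data and randomly remove 50% of the observations
    x = np.zeros(T); y = np.full(T, np.nan)
    for t in range(1, T):
        x[t] = a_true * x[t-1] + q * rng.standard_normal()
    obs = rng.random(T) < 0.5
    y[obs] = x[obs] + r * rng.standard_normal(obs.sum())

    # Particles: state and (artificially jittered) parameter a
    xp = rng.standard_normal(N)
    ap = rng.uniform(0.5, 1.0, N)
    a_est = []
    for t in range(1, T):
        ap += 0.005 * rng.standard_normal(N)            # parameter roughening
        xp = ap * xp + q * rng.standard_normal(N)       # propagate states
        if not np.isnan(y[t]):                          # update only if y[t] observed
            w = np.exp(-0.5 * ((y[t] - xp) / r) ** 2)
            w /= w.sum()
            idx = rng.choice(N, size=N, p=w)            # multinomial resampling
            xp, ap = xp[idx], ap[idx]
        a_est.append(np.mean(ap))

    print("true a = 0.9, final estimate =", round(a_est[-1], 3))
    ```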

  9. Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.

    PubMed

    Dosso, Stan E; Nielsen, Peter L

    2002-01-01

    This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
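
    For reference, the simplifying assumption described above leads to the following profiled likelihood (generic notation, not necessarily the paper's):

    ```latex
    L(\mathbf{m},\sigma) \propto \sigma^{-N}
      \exp\!\left[-\frac{|\mathbf{d}-\mathbf{d}(\mathbf{m})|^{2}}{2\sigma^{2}}\right],
    \qquad
    \hat{\sigma}^{2}(\mathbf{m}) = \frac{|\mathbf{d}-\mathbf{d}(\mathbf{m})|^{2}}{N},
    \qquad
    L\bigl(\mathbf{m},\hat{\sigma}(\mathbf{m})\bigr) \propto |\mathbf{d}-\mathbf{d}(\mathbf{m})|^{-N}.
    ```

    Sampling over the model parameters m alone therefore already incorporates the maximum-likelihood estimate of the data variance.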

  10. The Radio-Loud Narrow-Line Quasar SDSS J172206.03+565451.6

    NASA Astrophysics Data System (ADS)

    Komossa, Stefanie; Voges, Wolfgang; Adorf, Hans-Martin; Xu, Dawei; Mathur, Smita; Anderson, Scott F.

    2006-03-01

    We report identification of the radio-loud narrow-line quasar SDSS J172206.03+565451.6, which we found in the course of a search for radio-loud narrow-line active galactic nuclei (AGNs). SDSS J172206.03+565451.6 is only about the fourth securely identified radio-loud narrow-line quasar and the second-most radio loud, with a radio index R1.4 ≈ 100-700. Its black hole mass, MBH ≈ (2-3)×10^7 Msolar, estimated from the Hβ line width and 5100 Å luminosity, is unusually small given its radio loudness, and the combination of mass and radio index puts SDSS J172206.03+565451.6 in a scarcely populated region of MBH-R diagrams. SDSS J172206.03+565451.6 is a classical narrow-line Seyfert 1-type object with FWHM(Hβ) ≈ 1490 km s^-1, an intensity ratio of [O III]/Hβ ≈ 0.7, and Fe II emission complexes with Fe II λ4570/Hβ ≈ 0.7. The ionization parameter of its narrow-line region, estimated from the line ratio [O II]/[O III], is similar to Seyferts, and its high ratio of [Ne V]/[Ne III] indicates a strong EUV-to-soft X-ray excess. We advertise the combined usage of [O II]/[O III] and [Ne V]/[Ne III] diagrams as a useful diagnostic tool to estimate ionization parameters and to constrain the EUV-soft X-ray continuum shape relatively independently from other parameters.
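
    The black hole mass quoted above is a single-epoch virial estimate; in generic form (with the virial factor f and the radius-luminosity calibration left unspecified, since the exact calibration used by the authors is not given here):

    ```latex
    M_{\rm BH} \simeq f\,\frac{R_{\rm BLR}\,(\Delta V)^{2}}{G},
    \qquad \Delta V = \mathrm{FWHM}(\mathrm{H}\beta),
    \qquad R_{\rm BLR} \propto \bigl[\lambda L_{\lambda}(5100\,\mathrm{\AA})\bigr]^{\alpha},
    \quad \alpha \approx 0.5\text{--}0.7 .
    ```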

  11. Fuzzy C-mean clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT

    NASA Astrophysics Data System (ADS)

    Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

    2006-03-01

    Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Mean (FCM) clustering and modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels and processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (KI) and volume of distribution (Vd) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (KI-k4) as well as macro parameters, such as volume of distribution (Vd) and binding potential (BPI & BPII) and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but improves noise in the parametric images. These findings indicated that it is desirable for pre-segmentation with traditional FCM clustering to generate voxel-wise parametric images with GLLS from dynamic SPECT data.
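
    A compact sketch of standard fuzzy c-means clustering applied to voxel time-activity curves is given below; cluster-average curves of this kind can then be used to stabilise the voxel-wise GLLS fits. The data and settings are illustrative, not the simulated SPECT study.

    ```python
    # Hedged sketch of standard fuzzy c-means clustering on synthetic
    # time-activity curves (one row per voxel).
    import numpy as np

    def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.random((n_clusters, X.shape[0]))
        U /= U.sum(axis=0)                                   # memberships sum to 1 per voxel
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
            d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
            U = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
        return centers, U

    # 300 synthetic time-activity curves, 20 time frames, three activity levels
    rng = np.random.default_rng(6)
    tacs = np.concatenate([rng.normal(mu, 0.3, (100, 20)) for mu in (1.0, 2.0, 4.0)])
    centers, U = fuzzy_c_means(tacs)
    print("cluster mean levels:", np.round(centers.mean(axis=1), 2))
    ```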

  12. Effect of substituents on prediction of TLC retention of tetra-dentate Schiff bases and their Copper(II) and Nickel(II) complexes.

    PubMed

    Stevanović, Nikola R; Perušković, Danica S; Gašić, Uroš M; Antunović, Vesna R; Lolić, Aleksandar Đ; Baošić, Rada M

    2017-03-01

    The objectives of this study were to gain insights into structure-retention relationships and to propose a model for estimating retention. Chromatographic investigation of a series of 36 Schiff bases and their copper(II) and nickel(II) complexes was performed under both normal- and reverse-phase conditions. Chemical structures of the compounds were characterized by molecular descriptors which are calculated from the structure and related to the chromatographic retention parameters by multiple linear regression analysis. Effects of chelation on retention parameters of the investigated compounds, under normal- and reverse-phase chromatographic conditions, were analyzed by principal component analysis. Quantitative structure-retention relationship and quantitative structure-activity relationship models were developed on the basis of theoretical molecular descriptors, calculated exclusively from molecular structure, and parameters of retention and lipophilicity. Copyright © 2016 John Wiley & Sons, Ltd.
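
    The descriptor-to-retention step can be sketched as an ordinary multiple linear regression; the descriptor names and values below are invented for illustration and are not the study's actual data.

    ```python
    # Hedged sketch: multiple linear regression of a retention parameter on
    # calculated molecular descriptors (illustrative values only).
    import numpy as np

    # rows: compounds; columns: e.g. logP, polar surface area, molar refractivity
    X = np.array([[2.1, 45.0, 88.0],
                  [3.4, 30.0, 95.0],
                  [1.2, 60.0, 72.0],
                  [2.8, 38.0, 90.0],
                  [0.9, 70.0, 65.0]])
    retention = np.array([0.55, 0.82, 0.31, 0.68, 0.22])

    A = np.column_stack([np.ones(len(X)), X])            # add intercept column
    coef, *_ = np.linalg.lstsq(A, retention, rcond=None)
    pred = A @ coef
    r2 = 1 - np.sum((retention - pred) ** 2) / np.sum((retention - retention.mean()) ** 2)
    print("intercept and coefficients:", np.round(coef, 3), " R^2 =", round(r2, 3))
    ```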

  13. Geochemical Characterization Using Geophysical Data and Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Chen, J.; Hubbard, S.; Rubin, Y.; Murray, C.; Roden, E.; Majer, E.

    2002-12-01

    Although the spatial distribution of geochemical parameters is extremely important for many subsurface remediation approaches, traditional characterization of those parameters is invasive and laborious, and thus is rarely performed sufficiently to describe natural hydrogeological variability at the field-scale. This study is an effort to jointly use multiple sources of information, including noninvasive geophysical data, for geochemical characterization of the saturated and anaerobic portion of the DOE South Oyster Bacterial Transport Site in Virginia. Our data set includes hydrogeological and geochemical measurements from five boreholes and ground-penetrating radar (GPR) and seismic tomographic data along two profiles that traverse the boreholes. The primary geochemical parameters are the concentrations of extractable ferrous iron Fe(II) and ferric iron Fe(III). Since iron-reducing bacteria can reduce Fe(III) to Fe(II) under certain conditions, information about the spatial distributions of Fe(II) and Fe(III) may indicate both where microbial iron reduction has occurred and in which zone it is likely to occur in the future. In addition, as geochemical heterogeneity influences bacterial transport and activity, estimates of the geochemical parameters provide important input to numerical flow and contaminant transport models geared toward bioremediation. Motivated by our previous research, which demonstrated that crosshole geophysical data could be very useful for estimating hydrogeological parameters, we hypothesize in this study that geochemical and geophysical parameters may be linked through their mutual dependence on hydrogeological parameters such as lithofacies. We attempt to estimate geochemical parameters using both hydrogeological and geophysical measurements in a Bayesian framework. Within the two-dimensional study domain (12m x 6m vertical cross section divided into 0.25m x 0.25m pixels), geochemical and hydrogeological parameters were considered as data if they were available from direct measurements or as variables otherwise. To estimate the geochemical parameters, we first assigned a prior model for each variable and a likelihood model for each type of data, which together define posterior probability distributions for each variable on the domain. Since the posterior probability distribution may involve hundreds of variables, we used a Markov Chain Monte Carlo (MCMC) method to explore each variable by generating and subsequently evaluating hundreds of realizations. Results from this case study showed that although geophysical attributes are not necessarily directly related to geochemical parameters, geophysical data could be very useful for providing accurate and high-resolution information about geochemical parameter distribution through their joint and indirect connections with hydrogeological properties such as lithofacies. This case study also demonstrated that MCMC methods were particularly useful for geochemical parameter estimation using geophysical data because they allow incorporation into the procedure of spatial correlation information, measurement errors, and cross correlations among different types of parameters.
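
    A minimal random-walk Metropolis sketch of the kind of posterior sampling described above, with a toy linear forward relation standing in for the geophysical-hydrogeological-geochemical link; the prior, likelihood and numbers are illustrative assumptions.

    ```python
    # Hedged sketch: random-walk Metropolis sampling of one variable given a prior
    # and an indirect (toy) measurement; everything here is illustrative.
    import numpy as np

    rng = np.random.default_rng(7)

    def log_prior(theta):                      # e.g. Fe(II) concentration ~ N(5, 2^2)
        return -0.5 * ((theta - 5.0) / 2.0) ** 2

    def log_likelihood(theta, d_obs, sigma=0.2):
        d_pred = 0.1 * theta + 1.0             # toy forward/petrophysical relation
        return -0.5 * ((d_obs - d_pred) / sigma) ** 2

    d_obs = 1.62
    theta, logp = 5.0, log_prior(5.0) + log_likelihood(5.0, d_obs)
    chain = []
    for _ in range(20000):
        prop = theta + 0.5 * rng.standard_normal()
        logp_prop = log_prior(prop) + log_likelihood(prop, d_obs)
        if np.log(rng.random()) < logp_prop - logp:    # accept/reject
            theta, logp = prop, logp_prop
        chain.append(theta)

    post = np.array(chain[5000:])                      # discard burn-in
    print("posterior mean, std:", post.mean().round(2), post.std().round(2))
    ```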

  14. Simultaneous versus sequential optimal experiment design for the identification of multi-parameter microbial growth kinetics as a function of temperature.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2010-05-21

    Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
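
    A sketch of the two ingredients involved: the CTMI growth-rate model of Rosso et al. (1993) and a simple D-optimality criterion (maximizing the determinant of the Fisher information matrix) over candidate measurement temperatures. The nominal parameter values, noise level and brute-force search are illustrative, not the designs computed in the paper.

    ```python
    # Hedged sketch: CTMI model plus a D-optimal choice of measurement temperatures
    # via a coarse exhaustive search; settings are illustrative only.
    import numpy as np
    from itertools import combinations

    def ctmi(T, mu_opt, Tmin, Topt, Tmax):
        num = (T - Tmax) * (T - Tmin) ** 2
        den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                               - (Topt - Tmax) * (Topt + Tmin - 2.0 * T))
        mu = mu_opt * num / den
        return np.where((T > Tmin) & (T < Tmax), mu, 0.0)

    theta0 = np.array([2.0, 5.0, 37.0, 45.0])        # mu_opt, Tmin, Topt, Tmax (nominal)

    def fim(temps, theta=theta0, sigma=0.05, eps=1e-5):
        """Fisher information for growth-rate measurements at the given temperatures."""
        J = np.empty((len(temps), len(theta)))
        for j in range(len(theta)):
            d = np.zeros_like(theta); d[j] = eps
            J[:, j] = (ctmi(temps, *(theta + d)) - ctmi(temps, *(theta - d))) / (2 * eps)
        return J.T @ J / sigma ** 2

    # Pick the 6-temperature design with the largest det(FIM) from a coarse grid
    grid = np.arange(8.0, 45.0, 2.0)
    best = max(combinations(grid, 6), key=lambda ts: np.linalg.det(fim(np.array(ts))))
    print("D-optimal temperatures (coarse search):", best)
    ```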

  15. Use of algal fluorescence for determination of phytotoxicity of heavy metals and pesticides as environmental pollutants.

    PubMed

    Samson, G; Popovic, R

    1988-12-01

    The phytotoxicity of heavy metals and pesticides was studied by using the fluorescence induction from the alga Dunaliella tertiolecta. The complementary area calculated from the variable fluorescence induction was used as a direct parameter to estimate phytotoxicity. The value of this parameter was affected when algae were treated with different concentrations of mercury, copper, atrazine, DCMU, Dutox, and Soilgard. The toxic effect of these pollutants was estimated by monitoring the decrease in the complementary area, which reflects photosystem II photochemistry. Further, the authors have demonstrated the advantage of using the complementary area as a parameter of phytotoxicity over using variable fluorescence yield. The complementary area of algal fluorescence can be used as a simple and sensitive parameter in the estimation of the phytotoxicity of polluted water.

  16. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: sum of squared errors, and maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction obtained was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.
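
    One common thermodynamic form of the retention model, fitted under the two criteria discussed above (sum of squared errors and maximum absolute error), can be sketched as follows; the model form, phase ratio, reference temperature and data are illustrative assumptions rather than the paper's exact formulation.

    ```python
    # Hedged sketch: fit dH, dS, dCp of one common thermodynamic retention model
    # under both a least-squares and a minimax criterion; values are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    R, T0, ln_beta = 8.314, 400.0, np.log(250.0)   # gas constant, reference T (K), phase ratio

    def ln_k(T, dH0_kJ, dS0, dCp):
        dH = dH0_kJ * 1e3 + dCp * (T - T0)          # assumed dH(T) in J/mol
        dS = dS0 + dCp * np.log(T / T0)             # assumed dS(T) in J/(mol K)
        return -dH / (R * T) + dS / R - ln_beta

    T = np.linspace(360.0, 460.0, 11)
    rng = np.random.default_rng(8)
    lnk_obs = ln_k(T, -45.0, -95.0, -60.0) + 0.01 * rng.standard_normal(T.size)

    sse = lambda p: np.sum((lnk_obs - ln_k(T, *p)) ** 2)        # least-squares criterion
    maxabs = lambda p: np.max(np.abs(lnk_obs - ln_k(T, *p)))    # minimax criterion

    p0 = np.array([-40.0, -90.0, -50.0])           # a "specialized" initial guess would go here
    fit_sse = minimize(sse, p0, method="Nelder-Mead", options={"maxiter": 20000, "xatol": 1e-9})
    fit_max = minimize(maxabs, fit_sse.x, method="Nelder-Mead")
    print("SSE fit (dH0 kJ/mol, dS0, dCp):", np.round(fit_sse.x, 2))
    print("minimax fit:                   ", np.round(fit_max.x, 2))
    ```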

  17. 40 CFR 63.1414 - Test methods and emission estimation equations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (D) Design analysis based on accepted chemical engineering principles, measurable process parameters.... Engineering assessment may be used to estimate organic HAP emissions from a batch emission episode only under... (d)(5) of this section; through engineering assessment, as defined in paragraph (d)(6)(ii) of this...

  18. 40 CFR 63.1414 - Test methods and emission estimation equations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (D) Design analysis based on accepted chemical engineering principles, measurable process parameters.... Engineering assessment may be used to estimate organic HAP emissions from a batch emission episode only under... (d)(5) of this section; through engineering assessment, as defined in paragraph (d)(6)(ii) of this...

  19. 40 CFR 63.1414 - Test methods and emission estimation equations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (D) Design analysis based on accepted chemical engineering principles, measurable process parameters.... Engineering assessment may be used to estimate organic HAP emissions from a batch emission episode only under... (d)(5) of this section; through engineering assessment, as defined in paragraph (d)(6)(ii) of this...

  20. Estimation of Ecosystem Parameters of the Community Land Model with DREAM: Evaluation of the Potential for Upscaling Net Ecosystem Exchange

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.

    2015-12-01

    Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition to large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate that the estimated parameters possibly mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
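
    The core mechanism of DREAM-type samplers, a differential-evolution Markov chain, can be sketched on a toy two-parameter "NEE" model as below; this is not the DREAM(zs) implementation used in the study, and the model, priors and data are illustrative.

    ```python
    # Hedged sketch: minimal differential-evolution Markov chain (DE-MC), the idea
    # at the core of DREAM-type samplers, on a toy two-parameter model.
    import numpy as np

    rng = np.random.default_rng(9)
    t = np.linspace(0, 1, 100)
    nee_obs = 2.0 * np.sin(2 * np.pi * t) + 0.5 + 0.2 * rng.standard_normal(t.size)

    def log_post(p):
        amp, off = p
        if not (0 < amp < 10 and -5 < off < 5):          # uniform prior bounds
            return -np.inf
        resid = nee_obs - (amp * np.sin(2 * np.pi * t) + off)
        return -0.5 * np.sum((resid / 0.2) ** 2)

    n_chains, n_gen, d = 8, 3000, 2
    X = np.column_stack([rng.uniform(0, 10, n_chains), rng.uniform(-5, 5, n_chains)])
    logp = np.array([log_post(x) for x in X])
    gamma = 2.38 / np.sqrt(2 * d)
    samples = []
    for g in range(n_gen):
        for i in range(n_chains):
            a, b = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
            prop = X[i] + gamma * (X[a] - X[b]) + 1e-6 * rng.standard_normal(d)
            lp = log_post(prop)
            if np.log(rng.random()) < lp - logp[i]:
                X[i], logp[i] = prop, lp
        if g > n_gen // 2:
            samples.append(X.copy())

    post = np.concatenate(samples)
    print("posterior means (amp, offset):", post.mean(axis=0).round(3))
    ```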

  1. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot

    PubMed Central

    Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.

    2018-01-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
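
    The non-parametric step that precedes fitting a balance-control model, estimating a frequency response function from stimulus and sway signals, might look as follows; the synthetic signals and second-order low-pass "balance" dynamics are illustrative stand-ins, not the IC model or PostuRob II data.

    ```python
    # Hedged sketch: frequency response function (FRF) estimation from a
    # pseudorandom stimulus and a simulated sway response via spectral densities.
    import numpy as np
    from scipy import signal

    fs = 100.0                                     # sampling rate in Hz
    rng = np.random.default_rng(10)
    t = np.arange(0, 600, 1 / fs)
    stim = rng.standard_normal(t.size)             # pseudorandom support-surface rotation
    b, a = signal.butter(2, 1.0, fs=fs)            # toy 2nd-order "balance" dynamics, 1 Hz cutoff
    sway = signal.lfilter(b, a, stim) + 0.1 * rng.standard_normal(t.size)

    f, Pxy = signal.csd(stim, sway, fs=fs, nperseg=2048)
    _, Pxx = signal.welch(stim, fs=fs, nperseg=2048)
    frf = Pxy / Pxx                                # H1 estimate of the FRF
    i1 = np.argmin(np.abs(f - 1.0))
    print("gain, phase at ~1 Hz:", np.abs(frf[i1]).round(3), np.angle(frf[i1], deg=True).round(1))
    ```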

  2. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot.

    PubMed

    Pasma, Jantsje H; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C

    2018-01-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control.

  3. GGOS and the EOP - the key role of SLR for a stable estimation of highly accurate Earth orientation parameters

    NASA Astrophysics Data System (ADS)

    Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael

    2016-04-01

    The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups since the observed satellite orbit dynamics are sensitive to the above mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single satellite SLR solutions and (ii) multi-satellite SLR solutions. Therefore, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus in this investigation is put on the de-correlation of different geodetic parameter groups due to the combination of SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as, e.g., the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy how to obtain reliable estimates of station coordinates, EOP, orbit parameter and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.

  4. Inverse modeling of geochemical and mechanical compaction in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto

    2015-04-01

    We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. Processes we consider are mechanical compaction of the host rock and the geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantify a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbons withdrawal, and (e) formation of ore deposits. Main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters through literature field cases. The quality of parameter estimates is then analyzed as a function of number, type and location of data.

  5. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  6. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  7. An MCMC determination of the primordial helium abundance

    NASA Astrophysics Data System (ADS)

    Aver, Erik; Olive, Keith A.; Skillman, Evan D.

    2012-04-01

    Spectroscopic observations of the chemical abundances in metal-poor H II regions provide an independent method for estimating the primordial helium abundance. H II regions are described by several physical parameters such as electron density, electron temperature, and reddening, in addition to y, the ratio of helium to hydrogen. It had been customary to estimate or determine self-consistently these parameters to calculate y. Frequentist analyses of the parameter space have been shown to be successful in these parameter determinations, and Markov Chain Monte Carlo (MCMC) techniques have proven to be very efficient in sampling this parameter space. Nevertheless, accurate determination of the primordial helium abundance from observations of H II regions is constrained by both systematic and statistical uncertainties. In an attempt to better reduce the latter, and continue to better characterize the former, we apply MCMC methods to the large dataset recently compiled by Izotov, Thuan, & Stasińska (2007). To improve the reliability of the determination, a high quality dataset is needed. In pursuit of this, a variety of cuts are explored. The efficacy of the He I λ4026 emission line as a constraint on the solutions is first examined, revealing the introduction of systematic bias through its absence. As a clear measure of the quality of the physical solution, a χ2 analysis proves instrumental in the selection of data compatible with the theoretical model. Nearly two-thirds of the observations fall outside a standard 95% confidence level cut, which highlights the care necessary in selecting systems and warrants further investigation into potential deficiencies of the model or data. In addition, the method also allows us to exclude systems for which parameter estimations are statistical outliers. As a result, the final selected dataset gains in reliability and exhibits improved consistency. Regression to zero metallicity yields Yp = 0.2534 ± 0.0083, in broad agreement with the WMAP result. The inclusion of more observations shows promise for further reducing the uncertainty, but more high quality spectra are required.

  8. Fundamental properties of Fanaroff-Riley type II radio galaxies investigated via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kapińska, A. D.; Uttley, P.; Kaiser, C. R.

    2012-08-01

    Radio galaxies and quasars are among the largest and most powerful single objects known and are believed to have had a significant impact on the evolving Universe and its large-scale structure. We explore the intrinsic and extrinsic properties of the population of Fanaroff-Riley type II (FR II) objects, i.e. their kinetic luminosities, lifetimes and the central densities of their environments. In particular, the radio and kinetic luminosity functions of these powerful radio sources are investigated using the complete, flux-limited radio catalogues of the Third Cambridge Revised Revised Catalogue (3CRR) and Best et al. We construct multidimensional Monte Carlo simulations using semi-analytical models of FR II source time evolution to create artificial samples of radio galaxies. Unlike previous studies, we compare radio luminosity functions found with both the observed and simulated data to explore the best-fitting fundamental source parameters. The new Monte Carlo method we present here allows us to (i) set better limits on the predicted fundamental parameters of which confidence intervals estimated over broad ranges are presented and (ii) generate the most plausible underlying parent populations of these radio sources. Moreover, as has not been done before, we allow the source physical properties (kinetic luminosities, lifetimes and central densities) to co-evolve with redshift, and we find that all the investigated parameters most likely undergo cosmological evolution. Strikingly, we find that the break in the kinetic luminosity function must undergo redshift evolution of at least (1 + z)^3. The fundamental parameters are strongly degenerate, and independent constraints are necessary to draw more precise conclusions. We use the estimated kinetic luminosity functions to set constraints on the duty cycles of these powerful radio sources. A comparison of the duty cycles of powerful FR IIs with those determined from radiative luminosities of active galactic nuclei of comparable black hole mass suggests a transition in behaviour from high to low redshifts, corresponding to either a drop in the typical black hole mass of powerful FR IIs at low redshifts, or a transition to a kinetically dominated, radiatively inefficient FR II population.

  9. Mortality in Code Blue; can APACHE II and PRISM scores be used as markers for prognostication?

    PubMed

    Bakan, Nurten; Karaören, Gülşah; Tomruk, Şenay Göksu; Keskin Kayalar, Sinem

    2018-03-01

    Code blue (CB) is an emergency call system developed to respond to cardiac and respiratory arrest in hospitals. However, in the literature, no scoring system has been reported that can predict mortality in CB procedures. In this study, we aimed to investigate the effectiveness of estimated APACHE II and PRISM scores in the prediction of mortality in patients assessed by the CB team, and to retrospectively analyze CB calls. We retrospectively examined 1195 patients who were evaluated by the CB team at our hospital between 2009 and 2013. The demographic data of the patients, diagnosis and relevant departments, reasons for CB, cardiopulmonary resuscitation duration, mortality calculated from the APACHE II and PRISM scores, and the actual mortality rates were retrospectively recorded from CB notification forms and the hospital database. In all age groups, there was a significant difference between the actual mortality rate and the expected mortality rate as estimated using APACHE II and PRISM scores in CB calls (p<0.05). The actual mortality rate was significantly lower than the expected mortality. APACHE II and PRISM scores with the available parameters will not help predict mortality in CB procedures. Therefore, novel scoring systems using different parameters are needed.

  10. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies have shown that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization, with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques, with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.

  11. Improved importance sampling technique for efficient simulation of digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, Dingqing; Yao, Kung

    1988-01-01

    A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evolutions of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evolutions are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communications systems.
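
    The basic mechanism behind importance-sampling error-rate estimation, drawing from a translated density and reweighting, can be seen in the toy tail-probability sketch below; the Gaussian problem is illustrative and not the communication system analysed in the paper.

    ```python
    # Hedged sketch: plain Monte Carlo vs translated-density importance sampling
    # for a small tail probability (illustrative Gaussian toy problem).
    import numpy as np
    from math import erfc, sqrt

    rng = np.random.default_rng(11)
    thresh, n = 4.0, 100_000
    p_true = 0.5 * erfc(thresh / sqrt(2.0))        # P(X > 4) for X ~ N(0, 1)

    # Plain Monte Carlo
    x = rng.standard_normal(n)
    p_mc = np.mean(x > thresh)

    # Importance sampling: draw from N(thresh, 1) and reweight by the density ratio f/g
    y = rng.normal(loc=thresh, scale=1.0, size=n)
    w = np.exp(-0.5 * y ** 2) / np.exp(-0.5 * (y - thresh) ** 2)
    p_is = np.mean((y > thresh) * w)

    print(f"true {p_true:.2e}   plain MC {p_mc:.2e}   IS {p_is:.2e}")
    ```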

  12. A novel procedure for detecting and focusing moving objects with SAR based on the Wigner-Ville distribution

    NASA Astrophysics Data System (ADS)

    Barbarossa, S.; Farina, A.

    A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their motion kinematic parameters. The estimation plays a key role for focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) the detection is efficiently performed on the samples in the time-frequency domain, provided the WVD, without resorting to the use of a bank of filters, each one matched to possible values of the unknown target motion parameters; (ii) the estimation of the target motion parameters can be done on the same time-frequency domain by locating the line where the maximum energy of the WVD is concentrated. A validation of the approach is given by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are also demonstrated.
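
    A discrete pseudo Wigner-Ville sketch on an analytic chirp is shown below: the WVD energy concentrates along the instantaneous-frequency line, and the slope of that ridge recovers the chirp-rate (kinematic) parameter. The signal and sampling values are illustrative, not an actual SAR geometry.

    ```python
    # Hedged sketch: pseudo Wigner-Ville distribution of an analytic chirp and
    # chirp-rate estimation from the slope of its time-frequency ridge.
    import numpy as np

    fs, T = 1000.0, 1.0
    t = np.arange(0, T, 1 / fs)
    f0, rate = 50.0, 100.0                        # start frequency (Hz) and chirp rate (Hz/s)
    x = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * rate * t ** 2))   # analytic chirp

    N, half = len(x), 128
    wvd = np.zeros((N, 2 * half))
    for n in range(N):
        m = np.arange(-half, half)
        idx1, idx2 = n + m, n - m
        ok = (idx1 >= 0) & (idx1 < N) & (idx2 >= 0) & (idx2 < N)
        kernel = np.zeros(2 * half, dtype=complex)
        kernel[ok] = x[idx1[ok]] * np.conj(x[idx2[ok]])   # instantaneous autocorrelation
        wvd[n] = np.abs(np.fft.fftshift(np.fft.fft(kernel)))

    freqs = np.fft.fftshift(np.fft.fftfreq(2 * half, d=2 / fs))  # effective lag step is 2/fs
    peak_f = freqs[wvd.argmax(axis=1)]                           # ridge of the WVD
    slope = np.polyfit(t[half:-half], peak_f[half:-half], 1)[0]
    print("estimated chirp rate ~", round(slope, 1), "Hz/s (true 100)")
    ```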

  13. Ensemble-based simultaneous state and parameter estimation for treatment of mesoscale model error: A real-data study

    NASA Astrophysics Data System (ADS)

    Hu, Xiao-Ming; Zhang, Fuqing; Nielsen-Gammon, John W.

    2010-04-01

    This study explores the treatment of model error and uncertainties through simultaneous state and parameter estimation (SSPE) with an ensemble Kalman filter (EnKF) in the simulation of a 2006 air pollution event over the greater Houston area during the Second Texas Air Quality Study (TexAQS-II). Two parameters in the atmospheric boundary layer parameterization associated with large model sensitivities are combined with standard prognostic variables in an augmented state vector to be continuously updated through assimilation of wind profiler observations. It is found that forecasts of the atmosphere with EnKF/SSPE are markedly improved over experiments with no state and/or parameter estimation. More specifically, the EnKF/SSPE is shown to help alleviate a near-surface cold bias and to alter the momentum mixing in the boundary layer to produce more realistic wind profiles.
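
    A minimal sketch of the augmented-state idea used in EnKF/SSPE: a scalar toy model's uncertain parameter is appended to the state vector and updated by the same stochastic EnKF analysis step as the state. The model, noise levels and settings are illustrative, not the boundary-layer parameterization of the study.

    ```python
    # Hedged sketch: stochastic EnKF with an augmented state (state + parameter)
    # on a scalar toy relaxation model; all values are illustrative.
    import numpy as np

    rng = np.random.default_rng(12)
    n_ens, n_steps = 50, 300
    c_true, dt, obs_err = 0.5, 0.1, 0.05

    def step(x, c):
        # toy relaxation model dx/dt = -c (x - 1), forward Euler
        return x + dt * (-c * (x - 1.0))

    # Truth run and noisy observations of the state
    x_t, obs = 0.0, []
    for _ in range(n_steps):
        x_t = step(x_t, c_true)
        obs.append(x_t + obs_err * rng.standard_normal())

    # Augmented ensemble: column 0 = state, column 1 = uncertain parameter c
    ens = np.column_stack([rng.normal(0.0, 0.2, n_ens), rng.uniform(0.1, 1.0, n_ens)])
    for k in range(n_steps):
        ens[:, 0] = step(ens[:, 0], ens[:, 1]) + 0.01 * rng.standard_normal(n_ens)
        ens[:, 1] += 0.002 * rng.standard_normal(n_ens)        # keep parameter spread alive
        # analysis step: the observation operator picks out the state component
        A = ens - ens.mean(axis=0)
        P_xy = A.T @ A[:, 0] / (n_ens - 1)       # cov of (state, parameter) with predicted obs
        P_yy = np.var(A[:, 0], ddof=1) + obs_err ** 2
        K = P_xy / P_yy                           # Kalman gain for the augmented vector
        y_pert = obs[k] + obs_err * rng.standard_normal(n_ens)  # perturbed observations
        ens += np.outer(y_pert - ens[:, 0], K)

    print("true c = 0.5, ensemble-mean estimate =", round(ens[:, 1].mean(), 3))
    ```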

  14. Women use voice parameters to assess men's characteristics

    PubMed Central

    Bruckert, Laetitia; Liénard, Jean-Sylvain; Lacroix, André; Kreutzer, Michel; Leboucher, Gérard

    2005-01-01

    The purpose of this study was: (i) to provide additional evidence regarding the existence of human voice parameters, which could be reliable indicators of a speaker's physical characteristics and (ii) to examine the ability of listeners to judge voice pleasantness and a speaker's characteristics from speech samples. We recorded 26 men enunciating five vowels. Voices were played to 102 female judges who were asked to assess vocal attractiveness and speakers' age, height and weight. Statistical analyses were used to determine: (i) which physical component predicted which vocal component and (ii) which vocal component predicted which judgment. We found that men with low-frequency formants and small formant dispersion tended to be older, taller and tended to have a high level of testosterone. Female listeners were consistent in their pleasantness judgment and in their height, weight and age estimates. Pleasantness judgments were based mainly on intonation. Female listeners were able to correctly estimate age by using formant components. They were able to estimate weight but we could not explain which acoustic parameters they used. However, female listeners were not able to estimate height, possibly because they used intonation incorrectly. Our study confirms that in all mammal species examined thus far, including humans, formant components can provide a relatively accurate indication of a vocalizing individual's characteristics. Human listeners have the necessary information at their disposal; however, they do not necessarily use it. PMID:16519239

  15. Performance in population models for count data, part II: a new SAEM algorithm

    PubMed Central

    Savic, Radojka; Lavielle, Marc

    2009-01-01

    Analysis of count data from clinical trials using mixed effect analysis has recently become widely used. However, algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% and 4.13% for fixed and random effects, respectively, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for the analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta version available in July 2009). PMID:19680795
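
    For readers unfamiliar with SAEM, the following toy sketch applies the idea to a Poisson model with a single random intercept, for which the M-step is available in closed form; it is a didactic simplification under assumed settings, not the Monolix implementation described in the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy count model: y_ij ~ Poisson(exp(theta + b_i)), b_i ~ N(0, omega^2).
      N, n_per = 100, 10
      theta_true, omega_true = 1.0, 0.5
      b_true = rng.normal(0, omega_true, N)
      y = rng.poisson(np.exp(theta_true + b_true)[:, None], size=(N, n_per))

      def cond_logpdf(b, yi, theta, omega2):
          # log p(y_i | b) + log p(b), up to additive constants
          return np.sum(yi * (theta + b) - np.exp(theta + b)) - b ** 2 / (2 * omega2)

      theta, omega2, b = 0.0, 1.0, np.zeros(N)
      S1 = y.sum()                        # sufficient statistic fixed by the data
      s2, s3 = N * n_per * 1.0, N * 1.0   # running SA estimates of the other two statistics
      K, K_burn = 400, 200

      for k in range(K):
          # Simulation step: one Metropolis-Hastings sweep over the random effects.
          for i in range(N):
              prop = b[i] + rng.normal(0, 0.3)
              if np.log(rng.uniform()) < (cond_logpdf(prop, y[i], theta, omega2)
                                          - cond_logpdf(b[i], y[i], theta, omega2)):
                  b[i] = prop
          # Stochastic approximation of the sufficient statistics.
          gamma = 1.0 if k < K_burn else 1.0 / (k - K_burn + 1)
          s2 += gamma * (n_per * np.exp(b).sum() - s2)
          s3 += gamma * (np.sum(b ** 2) - s3)
          # M-step (closed form for this toy model).
          theta = np.log(S1 / s2)
          omega2 = s3 / N

      print("theta:", theta, "omega:", np.sqrt(omega2))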

  16. Methane emission estimation from landfills in Korea (1978-2004): quantitative assessment of a new approach.

    PubMed

    Kim, Hyun-Sun; Yi, Seung-Muk

    2009-01-01

    Quantifying methane emission from landfills is important for evaluating measures for reduction of greenhouse gas (GHG) emissions. To quantify GHG emissions and identify sensitive parameters for their measurement, a new assessment approach consisting of six different scenarios was developed using Tier 1 (mass balance method) and Tier 2 (the first-order decay method) methodologies for GHG estimation from landfills, suggested by the Intergovernmental Panel on Climate Change (IPCC). Methane emissions using Tier 1 correspond to trends in disposed waste amount, whereas emissions from Tier 2 gradually increase as disposed waste decomposes over time. The results indicate that the amount of disposed waste and the decay rate for anaerobic decomposition were decisive parameters for emission estimation using Tier 1 and Tier 2. As for the different scenarios, methane emissions were highest under Scope 1 (scenarios I and II), in which all landfills in Korea were regarded as one landfill. Methane emissions under scenarios III, IV, and V, which separated the dissimilated fraction of degradable organic carbon (DOC(F)) by waste type and/or revised the methane correction factor (MCF) by waste layer, were underestimated compared with scenarios I and II. This indicates that the methodology of scenario I, which has been used in most previous studies, may lead to an overestimation of methane emissions. Additionally, separate DOC(F) and revised MCF were shown to be important parameters for methane emission estimation from landfills, and revised MCF by waste layer played an important role in emission variations. Therefore, more precise information on each landfill and careful determination of parameter values and characteristics of disposed waste in Korea should be used to accurately estimate methane emissions from landfills.
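
    A simplified sketch of a Tier-2-style first-order-decay calculation for a single landfill; all factor values below are placeholders, not IPCC defaults or the parameters used for the Korean landfills.

      import numpy as np

      years = np.arange(1978, 2005)
      waste = np.full(len(years), 1.0e5)   # tonnes landfilled per year (placeholder)
      DOC, DOC_f, MCF = 0.15, 0.5, 1.0     # degradable organic C, dissimilated fraction, correction factor
      F, k = 0.5, 0.09                     # CH4 fraction in landfill gas, decay rate (1/yr)

      ddocm = waste * DOC * DOC_f * MCF    # decomposable organic carbon deposited each year

      ch4 = np.zeros(len(years))
      for T in range(len(years)):
          for t in range(T + 1):
              age = T - t
              # carbon decomposing during year T from waste deposited in year t
              decomposed = ddocm[t] * (np.exp(-k * age) - np.exp(-k * (age + 1)))
              ch4[T] += decomposed * F * 16.0 / 12.0   # convert C mass to CH4 mass

      print(dict(zip(years[-3:], np.round(ch4[-3:], 1))))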

  17. An estimator for the standard deviation of a natural frequency. II.

    NASA Technical Reports Server (NTRS)

    Schiff, A. J.; Bogdanoff, J. L.

    1971-01-01

    A method has been presented for estimating the variability of a system's natural frequencies arising from the variability of the system's parameters. The only information required to obtain the estimates is the member variability, in the form of second-order properties, and the natural frequencies and mode shapes of the mean system. It has also been established for the systems studied by means of Monte Carlo estimates that the specification of second-order properties is an adequate description of member variability.
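
    A brief Monte Carlo illustration of the quantity being estimated: the standard deviation of the natural frequencies of a two-degree-of-freedom spring-mass system whose member stiffnesses are random. The system and variability levels are illustrative; the Monte Carlo estimate is the benchmark against which such second-order estimators are checked.

      import numpy as np

      rng = np.random.default_rng(2)

      # Two-DOF spring-mass chain; member stiffnesses have a 5% coefficient of variation.
      m = np.diag([1.0, 1.0])
      k_mean, cov = np.array([1000.0, 1000.0]), 0.05

      def natural_freqs(k1, k2):
          K = np.array([[k1 + k2, -k2],
                        [-k2,      k2]])
          lam = np.linalg.eigvals(np.linalg.solve(m, K))
          return np.sqrt(np.sort(lam.real))     # rad/s

      samples = np.array([natural_freqs(*rng.normal(k_mean, cov * k_mean))
                          for _ in range(5000)])
      print("mean natural frequencies:", samples.mean(axis=0))
      print("std of natural frequencies:", samples.std(axis=0))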

  18. Extreme longevity in freshwater mussels revisited: sources of bias in age estimates derived from mark-recapture experiments

    Treesearch

    Wendell R. Haag

    2009-01-01

    There may be bias associated with mark–recapture experiments used to estimate age and growth of freshwater mussels. Using subsets of a mark–recapture dataset for Quadrula pustulosa, I examined how age and growth parameter estimates are affected by (i) the range and skew of the data and (ii) growth reduction due to handling. I compared predictions...
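
    As an illustration of how growth parameters are typically obtained from mark-recapture increments, the sketch below fits a von Bertalanffy model to length increments in the Fabens form; the growth model, parameter values and error structure are assumptions, not the dataset or fitting procedure of the study.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(3)

      # Fabens-style increment model for von Bertalanffy growth:
      # dL = (Linf - L1) * (1 - exp(-k * dt)).
      Linf_true, k_true, n = 80.0, 0.08, 200
      L1 = rng.uniform(20, 70, n)              # shell length at marking (mm)
      dt = rng.uniform(1, 5, n)                # years at liberty
      dL = (Linf_true - L1) * (1 - np.exp(-k_true * dt)) + rng.normal(0, 1.0, n)

      def fabens(X, Linf, k):
          l1, t_lib = X
          return (Linf - l1) * (1 - np.exp(-k * t_lib))

      popt, pcov = curve_fit(fabens, (L1, dt), dL, p0=[60.0, 0.2])
      print("Linf, k estimates:", popt)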

  19. Estimation of forest biomass using remote sensing

    NASA Astrophysics Data System (ADS)

    Sarker, Md. Latifur Rahman

    Forest biomass estimation is essential for greenhouse gas inventories, terrestrial carbon accounting and climate change modelling studies. The availability of new SAR (C-band RADARSAT-2 and L-band PALSAR) and optical sensors (SPOT-5 and AVNIR-2) has opened new possibilities for biomass estimation because these new SAR sensors can provide data with varying polarizations, incidence angles and fine spatial resolutions. Therefore, this study investigated the potential of two SAR sensors (RADARSAT-2 with C-band and PALSAR with L-band) and two optical sensors (SPOT-5 and AVNIR-2) for the estimation of biomass in Hong Kong. Three common major processing steps were used for data processing, namely (i) spectral reflectance/intensity, (ii) texture measurements and (iii) polarization or band ratios of texture parameters. Simple linear and stepwise multiple regression models were developed to establish a relationship between the image parameters and the biomass of field plots. The results demonstrate the ineffectiveness of raw data. However, significant improvements in performance (r²) (RADARSAT-2=0.78; PALSAR=0.679; AVNIR-2=0.786; SPOT-5=0.854; AVNIR-2 + SPOT-5=0.911) were achieved using texture parameters of all sensors. The performances were further improved and very promising performances (r²) were obtained using the ratio of texture parameters (RADARSAT-2=0.91; PALSAR=0.823; PALSAR two-date=0.921; AVNIR-2=0.899; SPOT-5=0.916; AVNIR-2 + SPOT-5=0.939). These performances suggest four main contributions arising from this research, namely (i) biomass estimation can be significantly improved by using texture parameters, (ii) further improvements can be obtained using the ratio of texture parameters, (iii) multisensor texture parameters and their ratios have more potential than texture from a single sensor, and (iv) biomass can be accurately estimated far beyond the previously perceived saturation levels of SAR and optical data using texture parameters or the ratios of texture parameters. A further important contribution resulting from the fusion of SAR and optical images produced accuracies (r²) of 0.706 and 0.77 from the simple fusion and the texture processing of the fused image, respectively. Although these performances were not as attractive as the performances obtained from the other four processing steps, the wavelet fusion procedure improved the saturation level of the optical (AVNIR-2) image very significantly after fusion with the SAR image. Keywords: biomass, climate change, SAR, optical, multisensors, RADARSAT-2, PALSAR, AVNIR-2, SPOT-5, texture measurement, ratio of texture parameters, wavelets, fusion, saturation
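
    A minimal sketch of the texture-plus-regression workflow described above, assuming scikit-image (graycomatrix/graycoprops, version >= 0.19) and scikit-learn are available; the image windows and biomass values are random placeholders that only exercise the pipeline, so no real relationship is expected here.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(4)

      def plot_texture(window):
          """Mean GLCM contrast of an 8-bit image window (one texture measure of many)."""
          glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          return graycoprops(glcm, "contrast").mean()

      # Stand-in 'field plots': image windows plus their measured biomass (t/ha).
      windows = [rng.integers(0, 256, (25, 25), dtype=np.uint8) for _ in range(40)]
      biomass = rng.uniform(50, 400, 40)

      X = np.array([[plot_texture(w)] for w in windows])
      model = LinearRegression().fit(X, biomass)
      print("r2 between texture and biomass:", model.score(X, biomass))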

  20. [Diagnostic value of integral scoring systems in assessing the severity of acute pancreatitis and patient's condition].

    PubMed

    Vinnik, Y S; Dunaevskaya, S S; Antufrieva, D A

    2015-01-01

    The aim of the study was to evaluate the diagnostic value of the specific and nonspecific scoring systems (Tolstoy-Krasnogorov score, Ranson, BISAP, Glasgow, MODS 2, APACHE II and CTSI) used in urgent pancreatology for estimating the severity of acute pancreatitis and the patient's condition. 1550 case reports of patients who had inpatient surgical treatment at the Road Clinical Hospital at Krasnoyarsk station from 2009 till 2013 were analyzed. Diagnosis of severe acute pancreatitis and its complications was based on anamnestic data, physical examination, clinical indexes, ultrasonic examination and computed tomography angiography. Specific and nonspecific scores (Tolstoy-Krasnogorov, Ranson, Glasgow, BISAP, MODS 2, APACHE II, CTSI) were used to estimate the severity of acute pancreatitis and the patient's general condition. The effectiveness of these scoring systems was determined from several parameters: accuracy (Ac), sensitivity (Se), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV). The most valuable score for estimating the severity of acute pancreatitis is BISAP (Se--98.10%); for estimating organ failure, MODS 2 (Sp--100%, PPV--100%) and APACHE II (Sp--100%, PPV--100%); for detecting signs of pancreatic necrosis, CTSI (Sp--100%, NPV--100%); for estimating the need for intensive care, MODS 2 (Sp--100%, PPV--100%, NPV--96.29%) and APACHE II (Sp--100%, PPV--100%, NPV--97.21%); for predicting lethality, MODS 2 (Se--100%, Sp--98.14%, NPV--100%) and APACHE II (Se--95.00%, NPV--99.86%). The most effective scores for estimating the severity of acute pancreatitis are the Tolstoy-Krasnogorov, Ranson, Glasgow and BISAP scores; the high specificity and positive predictive value of the MODS 2 and APACHE II scoring systems allow their use in clinical practice.
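
    The effectiveness parameters quoted above follow directly from a 2x2 table of score outcomes against the reference diagnosis; the counts in this short sketch are hypothetical.

      def diagnostic_metrics(tp, fp, fn, tn):
          """Accuracy, sensitivity, specificity, PPV and NPV from a 2x2 table."""
          return {
              "Ac": (tp + tn) / (tp + fp + fn + tn),
              "Se": tp / (tp + fn),
              "Sp": tn / (tn + fp),
              "PPV": tp / (tp + fp),
              "NPV": tn / (tn + fn),
          }

      # Hypothetical counts for one scoring system against the reference diagnosis.
      print(diagnostic_metrics(tp=103, fp=12, fn=2, tn=620))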

  1. Advanced Aeroservoelastic Testing and Data Analysis (Les Essais Aeroservoelastiques et l’Analyse des Donnees).

    DTIC Science & Technology

    1995-11-01

    This DTIC record's abstract is a fragmented text excerpt. The recoverable content concerns neural-network-based methods for parameter estimation: artificial neural networks are used to estimate the unknown parameters of a postulated state-space model, with two architectures considered, (i) a feed-forward neural network and (ii) a recurrent neural network [117-119]. Other fragments mention neural-network-based AFS concepts and the addition of vanes in each engine exhaust for thrust vectoring.

  2. Cosmological evolution of supermassive black holes in the centres of galaxies

    NASA Astrophysics Data System (ADS)

    Kapinska, Anna D.

    2012-06-01

    Radio galaxies and quasars are among the largest and most powerful single objects known and are believed to have had a significant impact on the evolving Universe and its large scale structure. Their jets inject a significant amount of energy into the surrounding medium, hence they can provide useful information in the study of the density and evolution of the intergalactic and intracluster medium. The jet activity is also believed to regulate the growth of massive galaxies via the AGN feedback. In this thesis I explore the intrinsic and extrinsic physical properties of the population of Fanaroff-Riley II (FR II) objects, i.e. their kinetic luminosities, lifetimes, and central densities of their environments. In particular, the radio and kinetic luminosity functions of these powerful radio sources are investigated using the complete, flux limited radio catalogues of 3CRR and BRL. I construct multidimensional Monte Carlo simulations using semi-analytical models of FR II source time evolution to create artificial samples of radio galaxies. Unlike previous studies, I compare radio luminosity functions found with both the observed and simulated data to explore the best-fitting fundamental source parameters. The Monte Carlo method presented here allows one to: (i) set better limits on the predicted fundamental parameters of which confidence intervals estimated over broad ranges are presented, and (ii) generate the most plausible underlying parent populations of these radio sources. Moreover, I allow the source physical properties to co-evolve with redshift, and I find that all the investigated parameters most likely undergo cosmological evolution; however these parameters are strongly degenerate, and independent constraints are necessary to draw more precise conclusions. Furthermore, since it has been suggested that low luminosity FR IIs may be distinct from their powerful equivalents, I attempt to investigate fundamental properties of a sample of low redshift, low radio luminosity density radio galaxies. Based on SDSS-FIRST-NVSS radio sample I construct a low frequency (325 MHz) sample of radio galaxies and attempt to explore the fundamental properties of these low luminosity radio sources. The results are discussed through comparison with the results from the powerful radio sources of the 3CRR and BRL samples. Finally, I investigate the total power injected by populations of these powerful radio sources at various cosmological epochs and discuss the significance of the impact of these sources on the evolving Universe. Remarkably, sets of two degenerate fundamental parameters, the kinetic luminosity and maximum lifetimes of radio sources, despite the degeneracy provide particularly robust estimates of the total power produced by FR IIs during their lifetimes. This can be also used for robust estimations of the quenching of the cooling flows in cluster of galaxies.

  3. The logic of comparative life history studies for estimating key parameters, with a focus on natural mortality rate

    USGS Publications Warehouse

    Hoenig, John M; Then, Amy Y.-H.; Babcock, Elizabeth A.; Hall, Norman G.; Hewitt, David A.; Hesp, Sybrand A.

    2016-01-01

    There are a number of key parameters in population dynamics that are difficult to estimate, such as natural mortality rate, intrinsic rate of population growth, and stock-recruitment relationships. Often, these parameters of a stock are, or can be, estimated indirectly on the basis of comparative life history studies. That is, the relationship between a difficult to estimate parameter and life history correlates is examined over a wide variety of species in order to develop predictive equations. The form of these equations may be derived from life history theory or simply be suggested by exploratory data analysis. Similarly, population characteristics such as potential yield can be estimated by making use of a relationship between the population parameter and bio-chemico–physical characteristics of the ecosystem. Surprisingly, little work has been done to evaluate how well these indirect estimators work and, in fact, there is little guidance on how to conduct comparative life history studies and how to evaluate them. We consider five issues arising in such studies: (i) the parameters of interest may be ill-defined idealizations of the real world, (ii) true values of the parameters are not known for any species, (iii) selecting data based on the quality of the estimates can introduce a host of problems, (iv) the estimates that are available for comparison constitute a non-random sample of species from an ill-defined population of species of interest, and (v) the hierarchical nature of the data (e.g. stocks within species within genera within families, etc., with multiple observations at each level) warrants consideration. We discuss how these issues can be handled and how they shape the kinds of questions that can be asked of a database of life history studies.

  4. Estimation of Salivary and Serum Biomarkers in Diabetic and Non Diabetic Patients - A Comparative Study

    PubMed Central

    Ladgotra, Amit; Raj, Seetharamaiah Sunder

    2016-01-01

    Introduction Blood is the gold standard body fluid for the diagnosis of Diabetes Mellitus (DM), but saliva offers an alternative to serum as a biological fluid for diagnostic purposes because it contains serum constituents. Aim The study was conducted to estimate and compare serum and salivary glucose, amylase, proteins, calcium and phosphorus levels in DM and healthy subjects and to evaluate whether saliva can be used as a diagnostic fluid in DM patients. Materials and Methods The study consisted of 120 subjects from the OPD of Surendera Dental College, Sriganganagar, Rajasthan, India. The subjects were divided into Group I, 60 DM patients (Type I & II), and Group II, 60 healthy subjects. Saliva and serum samples were collected from each subject and the levels of different biochemical parameters were estimated. Results Mean serum levels of glucose (211.50 ± 43.82), amylase (79.86 ± 16.23), total proteins (6.65 ± 0.84), calcium (7.17 ± 0.91) and phosphorus (3.68 ± 0.65) were observed in Group I, while in Group II glucose (88.81 ± 11.29), amylase (77.67 ± 14.88), total proteins (6.35 ± 0.76), calcium (7.52 ± 0.97) and phosphorus (3.96 ± 0.91) were noted. Mean salivary levels of glucose (14.10 ± 6.99), amylase (1671.42 ± 569.86), total proteins (1.33 ± 1.11), calcium (10.06 ± 2.76) and phosphorus (13.75 ± 4.45) were observed in Group I, while in Group II glucose (5.87 ± 2.42), amylase (1397.59 ± 415.97), total proteins (1.36 ± 0.81), calcium (7.73 ± 2.78) and phosphorus (8.39 ± 1.95) were noted. On comparing values in saliva and serum between the two groups, an insignificant difference (p>0.005) was found for a few of them. Conclusion Blood and salivary biochemical parameters were distinctly different between the two groups, suggesting that salivary parameters can be used as a diagnostic alternative to blood parameters for diabetes mellitus. PMID:27504412

  5. Market-Based Coordination of Thermostatically Controlled Loads—Part II: Unknown Parameters and Case Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sen; Zhang, Wei; Lian, Jianming

    This two-part paper considers the coordination of a population of Thermostatically Controlled Loads (TCLs) with unknown parameters to achieve group objectives. The problem involves designing the bidding and market clearing strategy to motivate self-interested users to realize efficient energy allocation subject to a peak power constraint. The companion paper (Part I) formulates the problem and proposes a load coordination framework using the mechanism design approach. To address the unknown parameters, Part II of this paper presents a joint state and parameter estimation framework based on the expectation maximization algorithm. The overall framework is then validated using real-world weather data and price data, and is compared with other approaches in terms of aggregated power response. Simulation results indicate that our coordination framework can effectively improve the efficiency of the power grid operations and reduce power congestion at key times.

  6. Revised Planning Methodology For Signalized Intersections And Operational Analysis Of Exclusive Left-Turn Lanes, Part-II: Models And Procedures (Final Report)

    DOT National Transportation Integrated Search

    1996-04-01

    This report also describes the procedures for direct estimation of intersection capacity with simulation, including a set of rigorous statistical tests for calibrating simulation parameters from field data.

  7. Determination of rainfall losses in Virginia, phase II : final report.

    DOT National Transportation Integrated Search

    1982-01-01

    A procedure is presented by which regional unit hydrograph and loss rate parameters are estimated for the generation of design storm hydrographs for watersheds in Virginia. The state is divided into seven hydrological regions, and unit hydrograph and ...

  8. Removal of Pb(II) and Cd(II) from water by adsorption on peels of banana.

    PubMed

    Anwar, Jamil; Shafique, Umer; Waheed-uz-Zaman; Salman, Muhammad; Dar, Amara; Anwar, Shafique

    2010-03-01

    The adsorption of lead(II) and cadmium(II) on banana peels has been studied in batch mode, using flame atomic absorption spectroscopy for metal estimation. The relevant parameters, namely adsorbent dose, pH, contact time and agitation speed, were investigated. Langmuir, Freundlich and Temkin isotherms were employed to describe the adsorption equilibrium. The maximum amounts of cadmium(II) and lead(II) adsorbed (qm), as evaluated by the Langmuir isotherm, were 5.71 mg and 2.18 mg per gram of banana peel powder, respectively. The study concluded that banana peels, a waste material, have good potential as an adsorbent to remove toxic metals like lead and cadmium from water. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
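
    The qm values quoted above come from fitting the Langmuir isotherm to equilibrium data; a minimal curve-fitting sketch with hypothetical data points is shown below.

      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(Ce, qm, KL):
          """Langmuir isotherm: uptake q (mg/g) vs. equilibrium concentration Ce (mg/L)."""
          return qm * KL * Ce / (1 + KL * Ce)

      # Hypothetical equilibrium data (not the paper's measurements).
      Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
      q = np.array([0.9, 1.7, 2.6, 3.6, 4.4, 4.9])

      (qm, KL), _ = curve_fit(langmuir, Ce, q, p0=[5.0, 0.05])
      print(f"qm = {qm:.2f} mg/g, KL = {KL:.3f} L/mg")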

  9. Atmospheric seeing measurements obtained with MISOLFA in the framework of the PICARD Mission

    NASA Astrophysics Data System (ADS)

    Ikhlef, R.; Corbard, T.; Irbah, A.; Morand, F.; Fodil, M.; Chauvineau, B.; Assus, P.; Renaud, C.; Meftah, M.; Abbaki, S.; Borgnino, J.; Cissé, E. M.; D'Almeida, E.; Hauchecorne, A.; Laclare, F.; Lesueur, P.; Lin, M.; Martin, F.; Poiet, G.; Rouzé, M.; Thuillier, G.; Ziad, A.

    2012-09-01

    PICARD is a space mission launched in June 2010 to study mainly the geometry of the Sun. The PICARD mission has a ground program consisting mainly of four instruments based at the Calern Observatory (Observatoire de la Côte d’Azur). They allow simultaneous recording of solar images and various atmospheric data from the ground. The ground instruments consist of the qualification model of the PICARD space instrument (SODISM II: Solar Diameter Imager and Surface Mapper), standard sun-photometers, a pyranometer for estimating a global sky quality index, and MISOLFA, a generalized daytime seeing monitor. Indeed, astrometric observations of the Sun using ground-based telescopes require accurate modeling of the optical effects induced by atmospheric turbulence. MISOLFA is based on the observation of Angle-of-Arrival (AA) fluctuations and allows us to analyze the optical effects of atmospheric turbulence on measurements performed by SODISM II. It gives estimates of the coherence parameters characterizing wave-fronts degraded by atmospheric turbulence (Fried parameter r0, size of the isoplanatic patch, spatial coherence outer scale L0 and atmospheric correlation times). We present in this paper simulations showing how the Fried parameter inferred from MISOLFA records can be used to interpret radius measurements extracted from SODISM II images. We show an example of the daily and monthly evolution of r0 and present its statistics over 2 years at the Calern Observatory, with a global mean value of 3.5 cm.

  10. Quantification of groundwater infiltration and surface water inflows in urban sewer networks based on a multiple model approach.

    PubMed

    Karpf, Christian; Krebs, Peter

    2011-05-01

    The management of sewer systems requires information about discharge and variability of typical wastewater sources in urban catchments. Especially the infiltration of groundwater and the inflow of surface water (I/I) are important for making decisions about the rehabilitation and operation of sewer networks. This paper presents a methodology to identify I/I and estimate its quantity. For each flow fraction in sewer networks, an individual model approach is formulated whose parameters are optimised by the method of least squares. This method was applied to estimate the contributions to the wastewater flow in the sewer system of the City of Dresden (Germany), where data availability is good. Absolute flows of I/I and their temporal variations are estimated. Further information on the characteristics of infiltration is gained by clustering and grouping sewer pipes according to the attributes construction year and groundwater influence and relating these resulting classes to infiltration behaviour. Further, it is shown that condition classes based on CCTV-data can be used to estimate the infiltration potential of sewer pipes. Copyright © 2011 Elsevier Ltd. All rights reserved.
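
    A minimal sketch of the least-squares separation of flow components; the individual component models below (fixed diurnal wastewater pattern, constant infiltration, rainfall-proportional inflow) are stand-in assumptions, not the formulations used for the Dresden network.

      import numpy as np

      rng = np.random.default_rng(5)
      hours = np.arange(24 * 14)                      # two weeks of hourly flow

      # Assumed component shapes for the three wastewater sources.
      diurnal = 1 + 0.5 * np.sin(2 * np.pi * (hours % 24) / 24)
      rain = np.where(rng.uniform(size=hours.size) > 0.95,
                      rng.uniform(0, 5, hours.size), 0.0)

      # Synthetic flow record: foul flow + constant infiltration + rain-driven inflow.
      q_obs = 120 * diurnal + 40 + 8 * rain + rng.normal(0, 5, hours.size)

      # Least-squares estimate of the three component magnitudes.
      A = np.column_stack([diurnal, np.ones_like(hours, dtype=float), rain])
      coef, *_ = np.linalg.lstsq(A, q_obs, rcond=None)
      print("wastewater scale, infiltration, inflow per unit rain:", np.round(coef, 1))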

  11. Critically evaluating the theory and performance of Bayesian analysis of macroevolutionary mixtures

    PubMed Central

    Moore, Brian R.; Höhna, Sebastian; May, Michael R.; Rannala, Bruce; Huelsenbeck, John P.

    2016-01-01

    Bayesian analysis of macroevolutionary mixtures (BAMM) has recently taken the study of lineage diversification by storm. BAMM estimates the diversification-rate parameters (speciation and extinction) for every branch of a study phylogeny and infers the number and location of diversification-rate shifts across branches of a tree. Our evaluation of BAMM reveals two major theoretical errors: (i) the likelihood function (which estimates the model parameters from the data) is incorrect, and (ii) the compound Poisson process prior model (which describes the prior distribution of diversification-rate shifts across branches) is incoherent. Using simulation, we demonstrate that these theoretical issues cause statistical pathologies; posterior estimates of the number of diversification-rate shifts are strongly influenced by the assumed prior, and estimates of diversification-rate parameters are unreliable. Moreover, the inability to correctly compute the likelihood or to correctly specify the prior for rate-variable trees precludes the use of Bayesian approaches for testing hypotheses regarding the number and location of diversification-rate shifts using BAMM. PMID:27512038

  12. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

    State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameter automatically at different aging stages, a multi-step model parameter identification method based on the lexicographic optimization is especially designed for the electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current as well as the average squared current is modeled. The SOE with different operating conditions and different aging stages are estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for the electric vehicle online applications.

  13. Optimization and experimental validation of a thermal cycle that maximizes entropy coefficient fisher identifiability for lithium iron phosphate cells

    NASA Astrophysics Data System (ADS)

    Mendoza, Sergio; Rothenberger, Michael; Hake, Alison; Fathy, Hosam

    2016-03-01

    This article presents a framework for optimizing the thermal cycle to estimate a battery cell's entropy coefficient at 20% state of charge (SOC). Our goal is to maximize Fisher identifiability: a measure of the accuracy with which a parameter can be estimated. Existing protocols in the literature for estimating entropy coefficients demand excessive laboratory time. Identifiability optimization makes it possible to achieve comparable accuracy levels in a fraction of the time. This article demonstrates this result for a set of lithium iron phosphate (LFP) cells. We conduct a 24-h experiment to obtain benchmark measurements of their entropy coefficients. We optimize a thermal cycle to maximize parameter identifiability for these cells. This optimization proceeds with respect to the coefficients of a Fourier discretization of this thermal cycle. Finally, we compare the estimated parameters using (i) the benchmark test, (ii) the optimized protocol, and (iii) a 15-h test from the literature (by Forgez et al.). The results are encouraging for two reasons. First, they confirm the simulation-based prediction that the optimized experiment can produce accurate parameter estimates in 2 h, compared to 15-24. Second, the optimized experiment also estimates a thermal time constant representing the effects of thermal capacitance and convection heat transfer.
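
    A toy sketch of what maximizing Fisher identifiability means in practice: the information an experiment carries about a parameter scales with the squared sensitivity of the measured output to that parameter, so candidate input cycles can be ranked by this quantity. The lumped thermal model and all numbers below are illustrative assumptions, not the authors' cell model or Fourier-parameterized cycle.

      import numpy as np

      def model_temperature(entropy_coeff, current, dt=1.0, R=0.01, C=500.0, h=0.2, T_amb=25.0):
          """Toy lumped thermal model; the entropy coefficient scales the reversible heat."""
          T = np.full(current.size, T_amb)
          for k in range(1, current.size):
              q = current[k] ** 2 * R + entropy_coeff * current[k] * (T[k - 1] + 273.15)
              T[k] = T[k - 1] + dt * (q - h * (T[k - 1] - T_amb)) / C
          return T

      def fisher_info(entropy_coeff, current, sigma=0.05, eps=1e-6):
          """Scalar Fisher information of the entropy coefficient for a given current cycle."""
          dT = (model_temperature(entropy_coeff + eps, current)
                - model_temperature(entropy_coeff - eps, current)) / (2 * eps)
          return np.sum(dT ** 2) / sigma ** 2

      t = np.arange(0, 3600.0)
      gentle = 2.0 * np.sin(2 * np.pi * t / 1800)                 # low-rate cycle
      aggressive = 10.0 * np.sign(np.sin(2 * np.pi * t / 300))    # square-wave cycle

      theta = 1e-4   # V/K, illustrative magnitude
      print("FI gentle:", fisher_info(theta, gentle))
      print("FI aggressive:", fisher_info(theta, aggressive))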

  14. State and parameter estimation of the heat shock response system using Kalman and particle filters.

    PubMed

    Liu, Xin; Niranjan, Mahesan

    2012-06-01

    Traditional models of systems biology describe dynamic biological phenomena as solutions to ordinary differential equations, which, when parameters in them are set to correct values, faithfully mimic observations. Often parameter values are tweaked by hand until desired results are achieved, or computed from biochemical experiments carried out in vitro. Of interest in this article, is the use of probabilistic modelling tools with which parameters and unobserved variables, modelled as hidden states, can be estimated from limited noisy observations of parts of a dynamical system. Here we focus on sequential filtering methods and take a detailed look at the capabilities of three members of this family: (i) extended Kalman filter (EKF), (ii) unscented Kalman filter (UKF) and (iii) the particle filter, in estimating parameters and unobserved states of cellular response to sudden temperature elevation of the bacterium Escherichia coli. While previous literature has studied this system with the EKF, we show that parameter estimation is only possible with this method when the initial guesses are sufficiently close to the true values. The same turns out to be true for the UKF. In this thorough empirical exploration, we show that the non-parametric method of particle filtering is able to reliably estimate parameters and states, converging from initial distributions relatively far away from the underlying true values. Software implementation of the three filters on this problem can be freely downloaded from http://users.ecs.soton.ac.uk/mn/HeatShock
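
    A minimal bootstrap particle filter sketch for joint state and parameter estimation via state augmentation, using a toy one-dimensional model instead of the heat-shock ODE system; the small artificial parameter jitter is a common practical device and an assumption here.

      import numpy as np

      rng = np.random.default_rng(6)

      # Toy model: x[t+1] = a*x[t] + 1 + process noise, observed with noise; 'a' unknown.
      a_true, q, r, T = 0.9, 0.05, 0.1, 200
      x = np.zeros(T); y = np.zeros(T)
      for t in range(1, T):
          x[t] = a_true * x[t - 1] + 1.0 + rng.normal(0, np.sqrt(q))
          y[t] = x[t] + rng.normal(0, np.sqrt(r))

      n_p = 2000
      particles = np.column_stack([rng.normal(0, 1, n_p),        # state
                                   rng.uniform(0.5, 1.2, n_p)])  # parameter a
      for t in range(1, T):
          # Propagate: states via the model, parameters via a small jitter.
          particles[:, 1] += rng.normal(0, 0.002, n_p)
          particles[:, 0] = (particles[:, 1] * particles[:, 0] + 1.0
                             + rng.normal(0, np.sqrt(q), n_p))
          # Weight by the likelihood of the new observation and resample.
          w = np.exp(-0.5 * (y[t] - particles[:, 0]) ** 2 / r)
          w /= w.sum()
          idx = rng.choice(n_p, size=n_p, p=w)
          particles = particles[idx]

      print("posterior mean of a:", particles[:, 1].mean(), "truth:", a_true)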

  15. Estimates of atmospheric O2 in the Paleoproterozoic from paleosols

    NASA Astrophysics Data System (ADS)

    Kanzaki, Yoshiki; Murakami, Takashi

    2016-02-01

    A weathering model was developed to constrain the partial pressure of atmospheric O2 (PO2) in the Paleoproterozoic from the Fe records in paleosols. The model describes the Fe behavior in a weathering profile by dissolution/precipitation of Fe-bearing minerals, oxidation of dissolved Fe(II) to Fe(III) by oxygen and transport of dissolved Fe by water flow, in steady state. The model calculates the ratio of the precipitated Fe(III)-(oxyhydr)oxides from the dissolved Fe(II) to the dissolved Fe(II) during weathering (ϕ), as a function of PO2. An advanced kinetic expression for Fe(II) oxidation by O2 was introduced into the model from the literature to calculate accurate ϕ-PO2 relationships. The model's validity is supported by the consistency of the calculated ϕ-PO2 relationships with those in the literature. The model can calculate PO2 for a given paleosol, once a ϕ value and values of the other parameters relevant to weathering, namely, pH of porewater, partial pressure of carbon dioxide (PCO2), water flow, temperature and O2 diffusion into soil, are obtained for the paleosol. The above weathering-relevant parameters were scrutinized for individual Paleoproterozoic paleosols. The values of ϕ, temperature, pH and PCO2 were obtained from the literature on the Paleoproterozoic paleosols. The parameter value of water flow was constrained for each paleosol from the mass balance of Si between water and rock phases and the relationships between water saturation ratio and hydraulic conductivity. The parameter value of O2 diffusion into soil was calculated for each paleosol based on the equation for soil O2 concentration with the O2 transport parameters in the literature. Then, we conducted comprehensive PO2 calculations for individual Paleoproterozoic paleosols which reflect all uncertainties in the weathering-relevant parameters. Consequently, robust estimates of PO2 in the Paleoproterozoic were obtained: 10^-7.1 to 10^-5.4 atm at ∼2.46 Ga, 10^-5.0 to 10^-2.5 atm at ∼2.15 Ga, 10^-5.2 to 10^-1.7 atm at ∼2.08 Ga and more than 10^-4.6 to 10^-2.0 atm at ∼1.85 Ga. Comparison of the present PO2 estimates to those in the literature suggests that a drastic rise of oxygen would not have occurred at ∼2.4 Ga, supporting a slightly rapid rise of oxygen at ∼2.4 Ga and a gradual rise of oxygen in the Paleoproterozoic in the long term.

  16. New selection effect in statistical investigations of supernova remnants

    NASA Astrophysics Data System (ADS)

    Allakhverdiev, A. O.; Guseinov, O. Kh.; Kasumov, F. K.

    1986-01-01

    The influence of H II regions on the parameters of supernova remnants (SNR) is investigated. It has been shown that the projection of such regions on the SNRs leads to: a) local changes of the morphological structure of young shell-type SNRs and b) considerable distortions of the integral parameters of evolved shell-type SNRs (with D > 10 pc) and plerions, up to their complete undetectability against the background of classical and giant H II regions. A new selection effect thus arises from these factors, connected with the additional limitations that the real structure of the interstellar medium places on statistical investigations of SNRs. The influence of this effect on the statistical completeness of objects has been estimated.

  17. Multivariate meta-analysis with an increasing number of parameters

    PubMed Central

    Boca, Simina M.; Pfeiffer, Ruth M.; Sampson, Joshua N.

    2017-01-01

    Summary Meta-analysis can average estimates of multiple parameters, such as a treatment’s effect on multiple outcomes, across studies. Univariate meta-analysis (UVMA) considers each parameter individually, while multivariate meta-analysis (MVMA) considers the parameters jointly and accounts for the correlation between their estimates. The performance of MVMA and UVMA has been extensively compared in scenarios with two parameters. Our objective is to compare the performance of MVMA and UVMA as the number of parameters, p, increases. Specifically, we show that (i) for fixed-effect meta-analysis, the benefit from using MVMA can substantially increase as p increases; (ii) for random effects meta-analysis, the benefit from MVMA can increase as p increases, but the potential improvement is modest in the presence of high between-study variability and the actual improvement is further reduced by the need to estimate an increasingly large between study covariance matrix; and (iii) when there is little to no between study variability, the loss of efficiency due to choosing random effects MVMA over fixed-effect MVMA increases as p increases. We demonstrate these three features through theory, simulation, and a meta-analysis of risk factors for Non-Hodgkin Lymphoma. PMID:28195655
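
    For reference, fixed-effect MVMA reduces to generalized-least-squares pooling of the study-specific estimate vectors with their within-study covariance matrices; the numbers below are illustrative.

      import numpy as np

      def mvma_fixed(thetas, covs):
          """Fixed-effect multivariate meta-analysis: inverse-covariance (GLS) pooling."""
          W = [np.linalg.inv(S) for S in covs]
          V = np.linalg.inv(sum(W))                     # covariance of the pooled estimate
          pooled = V @ sum(Wi @ ti for Wi, ti in zip(W, thetas))
          return pooled, V

      # Two studies, p = 3 correlated outcome effects (illustrative numbers).
      thetas = [np.array([0.30, 0.10, 0.25]), np.array([0.20, 0.05, 0.15])]
      covs = [0.02 * (np.eye(3) + 0.5 * (np.ones((3, 3)) - np.eye(3))),
              0.03 * (np.eye(3) + 0.5 * (np.ones((3, 3)) - np.eye(3)))]

      pooled, V = mvma_fixed(thetas, covs)
      print("pooled effects:", pooled.round(3))
      print("standard errors:", np.sqrt(np.diag(V)).round(3))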

  18. Posterior uncertainty of GEOS-5 L-band radiative transfer model parameters and brightness temperatures after calibration with SMOS observations

    NASA Astrophysics Data System (ADS)

    De Lannoy, G. J.; Reichle, R. H.; Vrugt, J. A.

    2012-12-01

    Simulated L-band (1.4 GHz) brightness temperatures are very sensitive to the values of the parameters in the radiative transfer model (RTM). We assess the optimum RTM parameter values and their (posterior) uncertainty in the Goddard Earth Observing System (GEOS-5) land surface model using observations of multi-angular brightness temperature over North America from the Soil Moisture Ocean Salinity (SMOS) mission. Two different parameter estimation methods are being compared: (i) a particle swarm optimization (PSO) approach, and (ii) an MCMC simulation procedure using the differential evolution adaptive Metropolis (DREAM) algorithm. Our results demonstrate that both methods provide similar "optimal" parameter values. Yet, DREAM exhibits better convergence properties, resulting in a reduced spread of the posterior ensemble. The posterior parameter distributions derived with both methods are used for predictive uncertainty estimation of brightness temperature. This presentation will highlight our model-data synthesis framework and summarize our initial findings.

  19. Electron requirements for carbon incorporation along a diel light cycle in three marine diatom species.

    PubMed

    Morelle, Jérôme; Claquin, Pascal

    2018-02-23

    Diatoms account for about 40% of primary production in highly productive ecosystems. The development of a new generation of fluorometers has made it possible to improve estimation of the electron transport rate from photosystem II, which, when coupled with the carbon incorporation rate, enables estimation of the electrons required for carbon fixation. The aim of this study was to investigate the daily dynamics of these electron requirements as a function of the diel light cycle in three relevant diatom species and to determine whether the method used to estimate the electron transport rate can lead to different pictures of the dynamics. The results confirmed the species-dependent capacity for photoacclimation under increasing light levels. Despite daily variations in the photosynthetic parameters, the results of this study underline the low daily variability of the electron requirements estimated using functional absorption of photosystem II compared to an estimation based on a specific absorption cross section of chlorophyll a. The stability of the electron requirements throughout the day suggests that it is potentially possible to estimate high-frequency primary production by using autonomous variable fluorescence measurements from ships-of-opportunity or moorings, without taking potential daily variation in this parameter into consideration, but this result has to be confirmed on natural phytoplankton assemblages. The results obtained in this study confirm the low electron requirements of diatoms to perform photosynthesis, and suggest a potential additional source of energy for carbon fixation, as recently described in the literature for this class.

  20. Multiwavelength studies of H II region NGC 2467

    NASA Astrophysics Data System (ADS)

    Yadav, Ram Kesh; Pandey, A. K.; Sharma, Saurabh; Eswaraiah, C.

    2013-06-01

    We present multiwavelength studies of the H II region Sh2-311 to explore the effects of massive stars on low-mass star formation. In this study we have used optical (UBVI) data from the ESO 2.2m Wide Field Imager (WFI), Near-Infrared (NIR, JHKs) data from the CTIO 4m Blanco Telescope and archival Spitzer 8 μm data. Based on stellar density contours and dust distribution we have divided the complex into three regions, i.e., Haffner 19 (H19), Haffner 18 (H18) and NGC 2467. Using the UBVI data we have estimated the basic parameters of these regions. We have constructed the (J - H)/(H - Ks) color-color diagram and a J/(J - H) color-magnitude diagram to identify young stellar objects (YSOs) and to estimate their masses. The spatial distribution of the YSOs indicates that most of them lie at the periphery of the H II region, and the ionizing star may be responsible for triggering star formation there.

  1. The Pressure Coefficients of the Superconducting Order Parameters at the Ground State of Ferromagnetic Superconductors

    NASA Astrophysics Data System (ADS)

    Konno, R.; Hatayama, N.; Chaudhury, R.

    2014-04-01

    We investigated the pressure coefficients of the superconducting order parameters at the ground state of ferromagnetic superconductors based on the microscopic single-band model of Linder et al. Superconducting gaps (i) similar to the ones seen in thin films of the A2 phase of liquid 3He and (ii) with a line node were used. This study shows that it should be possible to estimate the pressure coefficients of the superconducting and magnetic order parameters at the ground state of ferromagnetic superconductors.

  2. A MULTIWAVELENGTH STUDY OF STAR FORMATION IN THE VICINITY OF GALACTIC H II REGION Sh 2-100

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samal, M. R.; Pandey, A. K.; Sagar, R.

    We present multiwavelength investigation of morphology, physical-environment, stellar contents, and star formation activity in the vicinity of star-forming region Sh 2-100. It is found that the Sh 2-100 region contains seven H II regions of ultracompact and compact nature. The present estimation of distance for three H II regions, along with the kinematic distance for others, suggests that all of them belong to the same molecular cloud complex. Using near-infrared photometry, we identified the most probable ionizing sources of six H II regions. Their approximate photometric spectral type estimates suggest that they are massive early-B to mid-O zero-age-main-sequence stars and agree well with radio continuum observations at 1280 MHz, for sources whose emissions are optically thin at this frequency. The morphology of the complex shows a non-uniform distribution of warm and hot dust, well mixed with the ionized gas, which correlates well with the variation of average visual extinction (~4.2-97 mag) across the region. We estimated the physical parameters of ionized gas with the help of radio continuum observations. We detected an optically visible compact nebula located to the south of the 850 μm emission associated with one of the H II regions and the diagnostic of the optical emission line ratios gives electron density and electron temperature of ~0.67 x 10^3 cm^-3 and ~10^4 K, respectively. The physical parameters suggest that all the H II regions are in different stages of evolution, which correlate well with the probable ages in the range ~0.01-2 Myr of the ionizing sources. The spatial distribution of infrared excess stars, selected from near-infrared and Infrared Array Camera color-color diagrams, correlates well with the association of gas and dust. The positions of infrared excess stars, ultracompact and compact H II regions at the periphery of an H I shell, possibly created by a WR star, indicate that star formation in Sh 2-100 region might have been induced by an expanding H I shell.

  3. Magnetic properties of type-I and type-II Weyl semimetals in the superconducting state

    NASA Astrophysics Data System (ADS)

    Rosenstein, Baruch; Shapiro, B. Ya.; Li, Dingping; Shapiro, I.

    2018-04-01

    Superconductivity was observed in a certain range of pressure and chemical composition in Weyl semimetals of both type I and type II (the latter when the Dirac cone tilt parameter κ > 1). Magnetic properties of these superconductors are studied on the basis of a microscopic phonon-mediated pairing model. The Ginzburg-Landau effective theory for the order parameter is derived using the Gorkov approach and used to determine the anisotropic coherence length and the penetration depth, which determine the Abrikosov parameter for a layered material; the theory is applied to recent extensive experiments on MoTe2. It is found that superconductivity is of the second kind near the topological transition at κ = 1. For a larger tilt parameter, superconductivity becomes of the first kind. For κ < 1, the Abrikosov parameter also tends to be reduced, often crossing over to the first kind. For superconductors of the second kind, the dependence of the critical fields Hc2 and Hc1 on the tilt parameter κ (governed by pressure) is compared with the experiments. The strength of thermal fluctuations is estimated, and it is found that they are strong enough to cause Abrikosov vortex lattice melting near Hc2. The melting line is calculated and is consistent with experiments, provided the fluctuations are three-dimensional in the type-I phase (large pressure) and two-dimensional in the type-II phase (small pressure).

  4. CO OBSERVATIONS AND INVESTIGATION OF TRIGGERED STAR FORMATION TOWARD THE N10 INFRARED BUBBLE AND SURROUNDINGS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gama, D. R. G.; Lepine, J. R. D.; Mendoza, E.

    We studied the environment of the dust bubble N10 in molecular emission. Infrared bubbles, first detected by the GLIMPSE survey at 8.0 μm, are ideal regions to investigate the effect of the expansion of the H II region on its surroundings and the eventual triggering of star formation at its borders. In this work, we present a multi-wavelength study of N10. This bubble is especially interesting because infrared studies of the young stellar content suggest a scenario of ongoing star formation, possibly triggered on the edge of the H II region. We carried out observations of ¹²CO(1-0) and ¹³CO(1-0) emission with the PMO 13.7 m telescope toward N10. We also analyzed the IR and sub-millimeter emission in this region and compared these different tracers to obtain a detailed view of the interaction between the expanding H II region and the molecular gas. Bright CO emission was detected and two molecular clumps were identified, from which we derived physical parameters. We also estimated the parameters of the densest cold dust condensation and of the ionized gas inside the shell. The comparison between the dynamical age of this region and the fragmentation timescale favors the “Radiation-Driven Implosion” mechanism of star formation. N10 is a case of particular interest, with gas structures in a narrow frontier between the H II region and the surrounding molecular material, and with a range of ages of YSOs situated in the region, indicating triggered star formation.

  5. The "covariation method" for estimating the parameters of the standard Dynamic Energy Budget model II: Properties and preliminary patterns

    NASA Astrophysics Data System (ADS)

    Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.

    2011-11-01

    The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effects of the pseudo-value for the allocation fraction κ is reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) require data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. We recognise this as the reason why two very different parameter sets must exist that fit most data set reasonably well, and give arguments why, in most cases, the set with the large value of κ should be preferred. The continued development of a parameter database through the estimation procedures described here will provide a strong basis for understanding evolutionary patterns in metabolic organisation across the diversity of life.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bahrdt, J.; Frentrup, W.; Gaupp, A.

    BESSY plans to build a SASE-FEL facility for the energy range from 20 eV to 1000 eV. The energy range will be covered by three APPLE II type undulators with a magnetic length of about 60 m each. This paper summarizes the basic parameters of the FEL-undulators. The magnetic design will be presented. A modified APPLE II design will be discussed which provides higher fields at the expense of reduced horizontal access. GENESIS simulations give an estimate on the tolerances for the beam wander and for gap errors.

  7. A Bayesian inverse modeling approach to estimate soil hydraulic properties of a toposequence in southeastern Amazonia.

    NASA Astrophysics Data System (ADS)

    Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel

    2016-04-01

    Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key point for environmental modeling is the knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as the one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the one most commonly used for modeling the soil water retention curve (SWRC). The use of a Bayesian approach in combination with Markov chain Monte Carlo to estimate the VG parameters has several advantages compared to the widely used global optimization techniques. The Bayesian approach provides posterior distributions of parameters that are independent of the initial values and allow for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with pedologically similar soils to those used for the estimate. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period from pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme 10,000 times, and only the last 2,500 runs were used to calculate the posterior distributions of each hydraulic parameter along with 95% confidence intervals (CI) of the volumetric water content and tension time series. Then, the posterior distributions were used to generate hydraulic parameters for two nearby toposequences composed of six soil profiles, three under forest and three under pasture. The parameters of the nearby site were accepted when the predicted tension time series were within the 95% CI derived from the calibration site using the DREAM scheme.
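
    A minimal sketch of Bayesian estimation of van Genuchten parameters from retention data, using a plain random-walk Metropolis sampler in place of DREAM (which is a population-based MCMC variant) and synthetic observations in place of the field data.

      import numpy as np

      rng = np.random.default_rng(7)

      def vg_theta(h, theta_r, theta_s, alpha, n):
          """van Genuchten retention curve: water content at tension h (cm)."""
          m = 1 - 1 / n
          return theta_r + (theta_s - theta_r) / (1 + (alpha * np.abs(h)) ** n) ** m

      # Synthetic observations standing in for the field water-content/tension pairs.
      h_obs = np.array([10, 30, 100, 300, 1000, 3000], dtype=float)
      true = dict(theta_r=0.10, theta_s=0.45, alpha=0.02, n=1.6)
      wc_obs = vg_theta(h_obs, **true) + rng.normal(0, 0.01, h_obs.size)

      def log_post(p):
          theta_r, theta_s, alpha, n = p
          if not (0 < theta_r < theta_s < 0.6 and 0.001 < alpha < 0.5 and 1.1 < n < 4):
              return -np.inf                      # flat prior over a plausible box
          resid = wc_obs - vg_theta(h_obs, theta_r, theta_s, alpha, n)
          return -0.5 * np.sum(resid ** 2) / 0.01 ** 2

      p = np.array([0.15, 0.40, 0.05, 1.4])
      step = np.array([0.01, 0.01, 0.005, 0.05])
      chain = []
      for _ in range(20000):
          prop = p + rng.normal(0, step)
          if np.log(rng.uniform()) < log_post(prop) - log_post(p):
              p = prop
          chain.append(p.copy())

      posterior = np.array(chain[10000:])
      print("posterior means:", posterior.mean(axis=0).round(3))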

  8. The effect of concentration- and temperature-dependent dielectric constant on the activity coefficient of NaCl electrolyte solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valiskó, Mónika; Boda, Dezső, E-mail: boda@almos.vein.hu

    2014-06-21

    Our implicit-solvent model for the estimation of the excess chemical potential (or, equivalently, the activity coefficient) of electrolytes is based on using a dielectric constant that depends on the thermodynamic state, namely, the temperature and concentration of the electrolyte, ε(c, T). As a consequence, the excess chemical potential is split into two terms corresponding to ion-ion (II) and ion-water (IW) interactions. The II term is obtained from computer simulation using the Primitive Model of electrolytes, while the IW term is estimated from the Born treatment. In our previous work [J. Vincze, M. Valiskó, and D. Boda, “The nonmonotonic concentration dependence of the mean activity coefficient of electrolytes is a result of a balance between solvation and ion-ion correlations,” J. Chem. Phys. 133, 154507 (2010)], we showed that the nonmonotonic concentration dependence of the activity coefficient can be reproduced qualitatively with this II+IW model without using any adjustable parameter. The Pauling radii were used in the calculation of the II term, while experimental solvation free energies were used in the calculation of the IW term. In this work, we analyze the effect of the parameters (dielectric constant, ionic radii, solvation free energy) on the concentration and temperature dependence of the mean activity coefficient of NaCl. We conclude that the II+IW model can explain the experimental behavior using a concentration-dependent dielectric constant and that we do not need the artificial concept of “solvated ionic radius” assumed by earlier studies.

  9. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    NASA Astrophysics Data System (ADS)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several fields of industry the product comes from more than one production line, which makes comparative life tests necessary. This problem requires sampling from the different production lines, and a joint censoring scheme therefore arises. In this article we consider the lifetime Pareto distribution with a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals of the model parameters, are obtained. Also, Bayesian point and credible intervals of the model parameters are presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
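
    As a single-sample simplification of this setting (the paper's scheme jointly censors two Pareto populations), the shape-parameter MLE under type-II censoring has a closed form when the scale is known, illustrated below.

      import numpy as np

      rng = np.random.default_rng(8)

      # Type-II censoring of one Pareto(alpha, sigma) sample: the test stops at the
      # r-th failure. With sigma known, log(X/sigma) is Exponential(alpha), so
      # alpha_hat = r / [sum_{i<=r} log(x_(i)/sigma) + (n - r) log(x_(r)/sigma)].
      alpha_true, sigma, n, r = 2.5, 1.0, 60, 40

      x = sigma * (1 - rng.uniform(size=n)) ** (-1 / alpha_true)   # inverse-CDF Pareto draws
      x_sorted = np.sort(x)
      obs = x_sorted[:r]                                           # observed failures
      t_total = np.sum(np.log(obs / sigma)) + (n - r) * np.log(x_sorted[r - 1] / sigma)
      alpha_hat = r / t_total

      print("alpha MLE from type-II censored sample:", round(alpha_hat, 3))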

  10. Demonstrations in Solute Transport Using Dyes: Part II. Modeling.

    ERIC Educational Resources Information Center

    Butters, Greg; Bandaranayake, Wije

    1993-01-01

    A solution of the convection-dispersion equation is used to describe the solute breakthrough curves generated in the demonstrations in the companion paper. Estimation of the best fit model parameters (solute velocity, dispersion, and retardation) is illustrated using the method of moments for an example data set. (Author/MDH)
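
    A sketch of the method of moments for a breakthrough curve: the temporal mean and variance of the curve are converted to velocity and dispersion using common pulse-input approximations (t_bar = L/v and sigma_t^2 = 2DL/v^3 for a non-reactive solute); these relations and the synthetic curve are assumptions whose exact form depends on the input shape and boundary conditions.

      import numpy as np

      L = 30.0                                  # cm, column length
      t = np.linspace(0, 50, 501)               # h
      v_true, D_true = 2.0, 1.5                 # cm/h, cm^2/h (retardation R = 1 assumed)

      # Synthetic pulse-input breakthrough curve observed at x = L.
      C = (np.exp(-(L - v_true * t) ** 2 / (4 * D_true * t + 1e-12))
           / np.sqrt(4 * np.pi * D_true * t + 1e-12))

      m0 = np.trapz(C, t)                       # zeroth temporal moment
      t_bar = np.trapz(t * C, t) / m0           # mean breakthrough time
      var_t = np.trapz((t - t_bar) ** 2 * C, t) / m0

      v_hat = L / t_bar
      D_hat = var_t * v_hat ** 3 / (2 * L)
      print(f"v ~ {v_hat:.2f} cm/h, D ~ {D_hat:.2f} cm^2/h")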

  11. LIMITATIONS ON THE USES OF MULTIMEDIA EXPOSURE MEASUREMENTS FOR MULTIPATHWAY EXPOSURE ASSESSMENT - PART II: EFFECTS OF MISSING DATA AND IMPRECISION

    EPA Science Inventory

    Multimedia data from two probability-based exposure studies were investigated in terms of how missing data and measurement-error imprecision affected estimation of population parameters and associations. Missing data resulted mainly from individuals' refusing to participate in c...

  12. Type II supernovae in low luminosity host galaxies

    NASA Astrophysics Data System (ADS)

    Gutiérrez, C. P.; Anderson, J. P.; Sullivan, M.; Dessart, L.; González-Gaitan, S.; Galbany, L.; Dimitriadis, G.; Arcavi, I.; Bufano, F.; Chen, T.-W.; Dennefeld, M.; Gromadzki, M.; Haislip, J. B.; Hosseinzadeh, G.; Howell, D. A.; Inserra, C.; Kankare, E.; Leloudas, G.; Maguire, K.; McCully, C.; Morrell, N.; E, F. Olivares; Pignata, G.; Reichart, D. E.; Reynolds, T.; Smartt, S. J.; Sollerman, J.; Taddia, F.; Takáts, K.; Terreran, G.; Valenti, S.; Young, D. R.

    2018-06-01

    We present an analysis of a new sample of type II core-collapse supernovae (SNe II) occurring within low-luminosity galaxies, comparing these with a sample of events in brighter hosts. Our analysis is performed comparing SN II spectral and photometric parameters and estimating the influence of metallicity (inferred from host luminosity differences) on SN II transient properties. We measure the SN absolute magnitude at maximum, the light-curve plateau duration, the optically thick duration, and the plateau decline rate in the V -band, together with expansion velocities and pseudo-equivalent-widths (pEWs) of several absorption lines in the SN spectra. For the SN host galaxies, we estimate the absolute magnitude and the stellar mass, a proxy for the metallicity of the host galaxy. SNe II exploding in low luminosity galaxies display weaker pEWs of Fe II λ5018, confirming the theoretical prediction that metal lines in SN II spectra should correlate with metallicity. We also find that SNe II in low-luminosity hosts have generally slower declining light curves and display weaker absorption lines. We find no relationship between the plateau duration or the expansion velocities with SN environment, suggesting that the hydrogen envelope mass and the explosion energy are not correlated with the metallicity of the host galaxy. This result supports recent predictions that mass-loss for red supergiants is independent of metallicity.

  13. Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holland, K. Todd

    2011-01-01

    A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided from a shore-based observer or from remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave height dependence consistent with results of previous studies but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.

  14. Multivariate meta-analysis with an increasing number of parameters.

    PubMed

    Boca, Simina M; Pfeiffer, Ruth M; Sampson, Joshua N

    2017-05-01

    Meta-analysis can average estimates of multiple parameters, such as a treatment's effect on multiple outcomes, across studies. Univariate meta-analysis (UVMA) considers each parameter individually, while multivariate meta-analysis (MVMA) considers the parameters jointly and accounts for the correlation between their estimates. The performance of MVMA and UVMA has been extensively compared in scenarios with two parameters. Our objective is to compare the performance of MVMA and UVMA as the number of parameters, p, increases. Specifically, we show that (i) for fixed-effect (FE) meta-analysis, the benefit from using MVMA can substantially increase as p increases; (ii) for random effects (RE) meta-analysis, the benefit from MVMA can increase as p increases, but the potential improvement is modest in the presence of high between-study variability and the actual improvement is further reduced by the need to estimate an increasingly large between study covariance matrix; and (iii) when there is little to no between-study variability, the loss of efficiency due to choosing RE MVMA over FE MVMA increases as p increases. We demonstrate these three features through theory, simulation, and a meta-analysis of risk factors for non-Hodgkin lymphoma. © Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
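
    A minimal sketch of the fixed-effect multivariate estimator being compared here: each study's effect vector is weighted by the inverse of its within-study covariance matrix and the weighted estimates are pooled. This is the textbook FE MVMA estimator, not the authors' exact implementation.

    ```python
    import numpy as np

    def fe_mvma(estimates, covariances):
        """Fixed-effect multivariate meta-analysis by inverse-variance weighting.
        estimates   : list of length-p effect vectors, one per study
        covariances : list of p x p within-study covariance matrices
        Returns the pooled effect vector and its covariance matrix."""
        weights = [np.linalg.inv(np.asarray(S, float)) for S in covariances]
        w_sum = sum(weights)
        pooled_cov = np.linalg.inv(w_sum)
        pooled = pooled_cov @ sum(W @ np.asarray(b, float)
                                  for W, b in zip(weights, estimates))
        return pooled, pooled_cov
    ```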

  15. ApoA-I/A-II-HDL positively associates with apoB-lipoproteins as a potential atherogenic indicator.

    PubMed

    Kido, Toshimi; Kondo, Kazuo; Kurata, Hideaki; Fujiwara, Yoko; Urata, Takeyoshi; Itakura, Hiroshige; Yokoyama, Shinji

    2017-11-29

    We recently reported distinct nature of high-density lipoproteins (HDL) subgroup particles with apolipoprotein (apo) A-I but not apoA-II (LpAI) and HDL having both (LpAI:AII) based on the data from 314 Japanese. While plasma HDL level almost exclusively depends on concentration of LpAI having 3 to 4 apoA-I molecules, LpAI:AII appeared with almost constant concentration regardless of plasma HDL levels having stable structure with two apoA-I and one disulfide-dimeric apoA-II molecules (Sci. Rep. 6; 31,532, 2016). The aim of this study is further characterization of LpAI:AII with respect to its role in atherogenesis. Association of LpAI, LpAI:AII and other HDL parameters with apoB-lipoprotein parameters was analyzed among the cohort data above. ApoA-I in LpAI negatively correlated with the apoB-lipoprotein parameters such as apoB, triglyceride, nonHDL-cholesterol, and nonHDL-cholesterol + triglyceride, which are apparently reflected in the relations of the total HDL parameters to apoB-lipoproteins. In contrast, apoA-I in LpAI:AII and apoA-II positively correlated to the apoB-lipoprotein parameters even within their small range of variation. These relationships are independent of sex, but may slightly be influenced by the activity-related CETP mutations. The study suggested that LpAI:AII is an atherogenic indicator rather than antiatherogenic. These sub-fractions of HDL are to be evaluated separately for estimating atherogenic risk of the patients.

  16. State and Parameter Estimation for a Coupled Ocean-Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Ghil, M.; Kondrashov, D.; Sun, C.

    2006-12-01

    The El Niño/Southern Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed.
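
    Joint state-parameter estimation of this kind is commonly implemented by augmenting the state vector with the uncertain parameters and running a standard extended Kalman filter over the augmented state. The sketch below shows one generic predict/update cycle under that assumption; it is not the intermediate coupled ENSO model used in the study.

    ```python
    import numpy as np

    def ekf_step(x, P, y, f, F_jac, h, H_jac, Q, R):
        """One predict/update cycle of an extended Kalman filter applied to an
        augmented state x = [model state, uncertain parameters].  Giving the
        parameter entries a small random-walk variance in Q lets the filter
        track slowly varying parameter values."""
        x_pred = f(x)                      # nonlinear model propagation
        F = F_jac(x)                       # Jacobian of f at x
        P_pred = F @ P @ F.T + Q
        H = H_jac(x_pred)                  # observation Jacobian
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (y - h(x_pred))
        P_new = (np.eye(x.size) - K @ H) @ P_pred
        return x_new, P_new
    ```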

  17. Estimation of Mass-Loss Rates from Emission Line Profiles in the UV Spectra of Cool Stars

    NASA Technical Reports Server (NTRS)

    Carpenter, K. G.; Robinson, R. D.; Harper, G. M.

    1999-01-01

    The photon-scattering winds of cool, low-gravity stars (K-M giants and supergiants) produce absorption features in the strong chromospheric emission lines. This provides us with an opportunity to assess important parameters of the wind, including flow and turbulent velocities, the optical depth of the wind above the region of photon creation, and the star's mass-loss rate. We have used the Lamers et al. Sobolev with Exact Integration (SEI) radiative transfer code along with simple models of the outer atmospheric structure to compute synthetic line profiles for comparison with the observed line profiles. The SEI code has the advantage of being computationally fast and allows a great number of possible wind models to be examined. We therefore use it here to obtain initial first-order estimates of the wind parameters. More sophisticated, but more time-consuming and resource-intensive calculations will be performed at a later date, using the SEI-deduced wind parameters as a starting point. A comparison of the profiles over a range of wind velocity laws, turbulence values, and line opacities allows us to constrain the wind parameters, and to estimate the mass-loss rates. We have applied this analysis technique (using lines of Mg II, O I, and Fe II) so far to four stars: the normal K5-giant alpha Tau, the hybrid K-giant gamma Dra, the K5 supergiant lambda Vel, and the M-giant gamma Cru. We present in this paper a description of the technique, including the assumptions which go into its use, an assessment of its robustness, and the results of our analysis.

  18. Estimation of Mass-Loss Rate for M Giants from UV Emission Line Profiles

    NASA Technical Reports Server (NTRS)

    Carpenter, Kenneth G.; Robinson, R. D.; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    The photon-scattering winds of M giants produce absorption features in the strong chromospheric emission lines. These provide us with an opportunity to assess important parameters of the wind, including flow and turbulent velocities, the optical depth of the wind above the region of photon creation, and the star's mass-loss rate. We have used the Lamers et al. (1987) Sobolev with Exact Integration (SEI) radiative transfer code, along with simple models of the outer atmospheric structure and wind, to determine the wind characteristics of two M-giant stars, gamma Cru (M3.4) and mu Gem (M3IIIab). The SEI code has the advantage of being computationally fast and allows a great number of possible wind models to be examined. The analysis procedure involves specifying wind parameters and then using the program to calculate line profiles for the Mg II (UV1) lines and a range of unblended Fe II lines. These lines have a wide range of wind opacities and therefore probe different heights in the atmosphere. The assumed wind properties are iterated until the predicted profiles match the observations over as many lines as possible. We present estimates of the wind parameters for these stars and offer a comparison to wind properties previously derived for low-gravity K stars using the same technique.

  19. Ground-based solar astrometric measurements during the PICARD mission

    NASA Astrophysics Data System (ADS)

    Irbah, A.; Meftah, M.; Corbard, T.; Ikhlef, R.; Morand, F.; Assus, P.; Fodil, M.; Lin, M.; Ducourt, E.; Lesueur, P.; Poiet, G.; Renaud, C.; Rouze, M.

    2011-11-01

    PICARD is a space mission developed mainly to study the geometry of the Sun. The satellite was launched in June 2010. The PICARD mission has a ground program based at the Calern Observatory (Observatoire de la Côte d'Azur), which allows simultaneous solar images to be recorded from the ground. Astrometric observations of the Sun using ground-based telescopes require, however, accurate modelling of the optical effects induced by atmospheric turbulence. Previous work has revealed a dependence of solar radius measurements on the observation conditions (Fried's parameter, atmospheric correlation time(s) ...). The ground instruments consist mainly of SODISM II, a replica of the PICARD space instrument, and MISOLFA, a generalized daytime seeing monitor. They are complemented by standard sun-photometers and a pyranometer for estimating a global sky quality index. MISOLFA is based on the observation of Angle-of-Arrival (AA) fluctuations and allows us to analyze the optical effects of atmospheric turbulence on measurements performed by SODISM II. It gives estimates of the coherence parameters characterizing wave-fronts degraded by the atmospheric turbulence (Fried's parameter, size of the isoplanatic patch, the spatial coherence outer scale and atmospheric correlation times). This paper presents an overview of the ground-based instruments of PICARD and some results obtained from observations performed at Calern observatory in 2011.

  20. “Transference Ratios” to Predict Total Oxidized Sulfur and Nitrogen Deposition – Part II, Modeling Results

    EPA Science Inventory

    The current study examines predictions of transference ratios and related modeled parameters for oxidized sulfur and oxidized nitrogen using five years (2002-2006) of 12-km grid cell-specific annual estimates from EPA’s Community Air Quality Model (CMAQ) for five selected sub-re...

  1. Exploiting active subspaces to quantify uncertainty in the numerical simulation of the HyShot II scramjet

    NASA Astrophysics Data System (ADS)

    Constantine, P. G.; Emory, M.; Larsson, J.; Iaccarino, G.

    2015-12-01

    We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
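
    The dimension reduction described here rests on an eigendecomposition of the uncentered covariance of output gradients; a minimal sketch, assuming gradient samples of the quantity of interest are available (e.g., from adjoints or finite differences), is given below. Names are illustrative and this is not the HyShot II study code.

    ```python
    import numpy as np

    def active_subspace(grad_samples, k=1):
        """Estimate an active subspace from sampled gradients of the quantity
        of interest.  grad_samples is an (M, m) array of gradient vectors at M
        input samples; returns the eigenvalues and the first k eigenvectors of
        the uncentered gradient covariance C = E[grad grad^T]."""
        G = np.asarray(grad_samples, float)
        C = G.T @ G / G.shape[0]
        eigvals, eigvecs = np.linalg.eigh(C)
        order = np.argsort(eigvals)[::-1]
        return eigvals[order], eigvecs[:, order[:k]]

    # The derived "active variable" for a new (normalized) input x is y = W1.T @ x.
    ```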

  2. U.S. Workshop on the Physics and Chemistry of II-VI Materials (a.k.a. II-VI Workshop) - ARO Research Area 9: Materials Science - Physical Properties of Materials

    DTIC Science & Technology

    2015-12-21

    5.5: Evaluation of MBE-Grown MCT on GaAs for HOT Applications. J. Wenisch, W. Schirmacher, R...on-p architecture and is well adapted for low flux detection or high operating temperature. This architecture has been evaluated for space...estimate the ER of the Hg1-xCdxTe in real time is described. In this work, the output parameters from the ICP etcher are evaluated for their correlation

  3. An adaptive observer for on-line tool wear estimation in turning, Part I: Theory

    NASA Astrophysics Data System (ADS)

    Danai, Kourosh; Ulsoy, A. Galip

    1987-04-01

    On-line sensing of tool wear has been a long-standing goal of the manufacturing engineering community. In the absence of any reliable on-line tool wear sensors, a new model-based approach for tool wear estimation has been proposed. This approach is an adaptive observer, based on force measurement, which uses both parameter and state estimation techniques. The design of the adaptive observer is based upon a dynamic state model of tool wear in turning. This paper (Part I) presents the model, and explains its use as the basis for the adaptive observer design. This model uses flank wear and crater wear as state variables, feed as the input, and the cutting force as the output. The suitability of the model as the basis for adaptive observation is also verified. The implementation of the adaptive observer requires the design of a state observer and a parameter estimator. To obtain the model parameters for tuning the adaptive observer, procedures for linearisation of the non-linear model are specified. The implementation of the adaptive observer in turning and experimental results are presented in a companion paper (Part II).

  4. Rapid estimation of high-parameter auditory-filter shapes

    PubMed Central

    Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.

    2014-01-01

    A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086

  5. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
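
    A highly simplified caricature of steps (i)-(ii), assuming a Gaussian proposal over the closure parameters and a user-supplied likelihood score for each ensemble member, is sketched below; the actual EPPES algorithm (QJRMS 2011) is considerably more elaborate.

    ```python
    import numpy as np

    def eppes_like_update(mean, cov, score_fn, n_members=50, rng=None):
        """One generation of a toy ensemble parameter update:
        (i) draw parameter vectors for the ensemble from the proposal N(mean, cov);
        (ii) weight each member by its likelihood score against verifying
        observations, then re-estimate the proposal from the weighted ensemble.
        score_fn(theta) is assumed to return a non-negative likelihood value."""
        rng = rng or np.random.default_rng()
        theta = rng.multivariate_normal(mean, cov, size=n_members)
        w = np.array([score_fn(th) for th in theta])
        w = w / w.sum()
        new_mean = w @ theta                       # weighted mean of parameters
        centered = theta - new_mean
        new_cov = (centered.T * w) @ centered      # weighted covariance
        return new_mean, new_cov
    ```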

  6. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions

    NASA Technical Reports Server (NTRS)

    Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong

    2016-01-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.

  7. Estimation of water quality parameters of inland and coastal waters with the use of a toolkit for processing of remote sensing data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dekker, A.G.; Hoogenboom, H.J.; Rijkeboer, M.

    1997-06-01

    Deriving thematic maps of water quality parameters from a remote sensing image requires a number of processing steps, such as calibration, atmospheric correction, air/water interface correction, and application of water quality algorithms. A prototype software environment has recently been developed that enables the user to perform and control these processing steps. The main parts of this environment are: (i) access to the MODTRAN 3 radiative transfer code for removing atmospheric and air-water interface influences, (ii) a tool for analyzing algorithms for estimating water quality, and (iii) a spectral database containing apparent and inherent optical properties and associated water quality parameters. The use of the software is illustrated by applying implemented algorithms for estimating chlorophyll to data from a spectral library of Dutch inland waters with CHL ranging from 1 to 500 µg l(-1). The algorithms currently implemented in the Toolkit software are recommended for optically simple waters, but for optically complex waters development of more advanced retrieval methods is required.

  8. Bayesian Assessment of the Uncertainties of Estimates of a Conceptual Rainfall-Runoff Model Parameters

    NASA Astrophysics Data System (ADS)

    Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.

    2014-12-01

    This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin, in southeastern Brazil, was selected for developing the studies. In this paper, we used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic [Normal likelihood - r ~ N(0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach proved adequate for the proposed objectives, demonstrating and reinforcing the importance of assessing the uncertainties associated with hydrological modeling.

  9. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions

    PubMed Central

    Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong

    2018-01-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a “large-sample” approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances. PMID:29697706

  10. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions.

    PubMed

    Nearing, Grey S; Mocko, David M; Peters-Lidard, Christa D; Kumar, Sujay V; Xia, Youlong

    2016-03-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a "large-sample" approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.

  11. Probabilistic inference of ecohydrological parameters using observations from point to satellite scales

    NASA Astrophysics Data System (ADS)

    Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.

    2018-06-01

    Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from < 1 to 15 %. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.

  12. Computational analysis of liquid chromatography-tandem mass spectrometric steroid profiling in NCI H295R cells following angiotensin II, forskolin and abiraterone treatment.

    PubMed

    Mangelis, Anastasios; Dieterich, Peter; Peitzsch, Mirko; Richter, Susan; Jühlen, Ramona; Hübner, Angela; Willenberg, Holger S; Deussen, Andreas; Lenders, Jacques W M; Eisenhofer, Graeme

    2016-01-01

    Adrenal steroid hormones, which regulate a plethora of physiological functions, are produced via tightly controlled pathways. Investigations of these pathways, based on experimental data, can be facilitated by computational modeling for calculations of metabolic rate alterations. We therefore used a model system, based on mass balance and mass reaction equations, to kinetically evaluate adrenal steroidogenesis in human adrenal cortex-derived NCI H295R cells. For this purpose a panel of 10 steroids was measured by liquid chromatographic-tandem mass spectrometry. Time-dependent changes in cell incubate concentrations of steroids - including cortisol, aldosterone, dehydroepiandrosterone and their precursors - were measured after incubation with angiotensin II, forskolin and abiraterone. Model parameters were estimated based on experimental data using weighted least square fitting. Time-dependent angiotensin II- and forskolin-induced changes were observed for incubate concentrations of precursor steroids with peaks that preceded maximal increases in aldosterone and cortisol. Inhibition of 17-alpha-hydroxylase/17,20-lyase with abiraterone resulted in increases in upstream precursor steroids and decreases in downstream products. Derived model parameters, including rate constants of enzymatic processes, appropriately quantified observed and expected changes in metabolic pathways at multiple conversion steps. Our data demonstrate limitations of single time point measurements and the importance of assessing pathway dynamics in studies of adrenal cortical cell line steroidogenesis. Our analysis provides a framework for evaluation of steroidogenesis in adrenal cortical cell culture systems and demonstrates that computational modeling-derived estimates of kinetic parameters are an effective tool for describing perturbations in associated metabolic pathways. Copyright © 2015 Elsevier Ltd. All rights reserved.
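
    The general fitting strategy referred to here (mass-balance ODEs whose rate constants are estimated by weighted least squares against timed concentration measurements) can be sketched for a toy two-step pathway as below; the model structure, parameter names, and data variables are placeholders, not the study's steroidogenesis network.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    # Toy two-step pathway A -> B -> C standing in for precursor/product pools;
    # k = (k1, k2) are the conversion rate constants to be estimated from timed
    # concentration measurements.

    def simulate(k, t_eval, y0=(1.0, 0.0, 0.0)):
        k1, k2 = k
        rhs = lambda t, y: [-k1 * y[0], k1 * y[0] - k2 * y[1], k2 * y[1]]
        sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), y0, t_eval=t_eval)
        return sol.y.T                                        # shape (n_times, 3)

    def residuals(k, t_obs, c_obs, sigma):
        return ((simulate(k, t_obs) - c_obs) / sigma).ravel() # weighted residuals

    # fit = least_squares(residuals, x0=[0.5, 0.5], args=(t_obs, c_obs, sigma),
    #                     bounds=(0.0, np.inf))
    ```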

  13. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system need to be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks, and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technology that provides both real-time checking and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We take into account the distribution of features in actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data show that the method improves the robustness and accuracy of the fundamental matrix estimation. Finally, we carry out an experiment computing the relationship of a pair of stereo cameras to demonstrate the accuracy of the algorithm.
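
    Steps (i)-(iii) map onto standard multiple-view-geometry routines; a minimal sketch using OpenCV with a plain RANSAC fundamental-matrix fit (the paper's regional weighted normalization is not reproduced), assuming matched point arrays and known intrinsics:

    ```python
    import cv2
    import numpy as np

    def recalibrate_extrinsics(pts1, pts2, K1, K2):
        """Re-estimate rotation R and unit-scale translation t between two
        calibrated cameras from matched scene points: fundamental matrix ->
        essential matrix -> decomposition.  recoverPose assumes both views
        share (approximately) the intrinsics K1."""
        pts1 = np.asarray(pts1, np.float64)
        pts2 = np.asarray(pts2, np.float64)
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
        E = K2.T @ F @ K1                 # essential matrix from F and intrinsics
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K1)
        return R, t, mask
    ```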

  14. Impact of Uncertainties in Meteorological Forcing Data and Land Surface Parameters on Global Estimates of Terrestrial Water Balance Components

    NASA Astrophysics Data System (ADS)

    Nasonova, O. N.; Gusev, Ye. M.; Kovalev, Ye. E.

    2009-04-01

    Global estimates of the components of terrestrial water balance depend on the estimation technique and on the global observational data sets used for this purpose. Land surface modelling is an up-to-date and powerful tool for such estimates. However, the results of modelling are affected by the quality of both the model and the input information (including meteorological forcing data and model parameters). The latter is based on available global data sets containing meteorological data, land-use information, and soil and vegetation characteristics. There are now many global data sets, which differ in spatial and temporal resolution, as well as in accuracy and reliability. Evidently, uncertainties in global data sets will influence the results of model simulations, but to what extent? The present work is an attempt to investigate this issue. The work is based on the land surface model SWAP (Soil Water - Atmosphere - Plants) and global 1-degree data sets on meteorological forcing data and the land surface parameters, provided within the framework of the Second Global Soil Wetness Project (GSWP-2). The 3-hourly near-surface meteorological data (for the period from 1 July 1982 to 31 December 1995) are based on reanalyses and gridded observational data used in the International Satellite Land-Surface Climatology Project (ISLSCP) Initiative II. Following the GSWP-2 strategy, we used a number of alternative global forcing data sets to perform different sensitivity experiments (with six alternative versions of precipitation, four versions of radiation, two pure reanalysis products and two fully hybridized products of meteorological data). To reveal the influence of model parameters on simulations, in addition to GSWP-2 parameter data sets, we produced two alternative global data sets with soil parameters on the basis of their relationships with the content of clay and sand in the soil. After this, sensitivity experiments with three different sets of parameters were performed. As a result, 16 variants of global annual estimates of water balance components were obtained. Application of alternative data sets on radiation, precipitation, and soil parameters allowed us to reveal the influence of uncertainties in input data on global estimates of water balance components.

  15. Optimization of Kinematic GPS Data Analysis for Large Surface Deformation from the July 2003 Dome Collapse at Soufrière Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Medina, R. B.; Mattioli, G. S.; Braun, J.

    2013-12-01

    Several volcanic systems in the western US and Alaska (part of PBO) as well as the Soufrière Hills volcano on Montserrat (CALIPSO) have spatially dense continuous GPS networks that have been operating for close to a decade. Because GPS signals are affected during transmission through the atmosphere, it is important to resolve any contribution of atmospheric effects to apparent changes in position and therefore to obtain the best estimate of both. This is especially critical in the Caribbean or other tropical regions, where the effect of tropospheric water vapor is large as well as spatially and temporally variable. Several proximal cGPS sites (<10 km from the vent) collected data at 30 sec intervals during the 12-13 July 2003 eruption and massive dome collapse of Soufrière Hills Volcano (SHV). Data were originally processed treating each antenna as a kinematic buoy using GIPSY-OASIS-II (v. 5) and high-rate (30 s) final, precise orbit, clock, and earth orientation parameter products from JPL. In the original GOA-II analysis, the parameters for the random walk of the wet zenith delay, elevation cutoff, troposphere horizontal gradient and the rate of change of the random walk of position were kept at the default values suggested by JPL for precise kinematic positioning. In reviewing the position time-series, we found that one GPS station, HERM, recorded a maximum vertical displacement of -1.98 m from its mean, with negligible horizontal movement, rebounding within an hour. This estimate of vertical site displacement was an order of magnitude larger than those estimated at other sites on SHV. We report here our revised processing using GOA-II (v. 6.2), updated processing procedures, including the use of VMF1 grid files and APCs for the antenna/radome combinations, and newly released IGS08 data products from JPL. We have reprocessed all available cGPS from the July 2003 dome collapse event on SHV using a grid-search method to examine the appropriate stochastic atmosphere and position parameters to increase the precision of GPS position estimates during the eruption. BGGY, a station located 48 km northeast on Antigua, was used as a control to optimize the parameters for modeling the atmospheric variations more accurately for this type of environment, since BGGY is subject to similar weather patterns but was unaffected by volcanic activity at SHV. The final stochastic parameters were selected to yield the lowest variance in the kinematic position time-series at BGGY; HERM was then reprocessed using the same parameters. The apparent vertical movement at HERM has been reduced substantially, and now has a maximum of 2.5 cm with a variation of 30 cm in the zenith wet troposphere estimate. We conclude that the original default parameters used to process the GPS observations over-constrained possible atmospheric variation for this tropical environment, producing apparently large dynamic position changes. Our new results now reflect actual dynamic ground deformation during the massive dome collapse and may be used to develop improved models for volcanic processes that occur over time scales of minutes to hours at SHV and other tropical volcanoes.

  16. Internal Variations in Empirical Oxygen Abundances for Giant H II Regions in the Galaxy NGC 2403

    NASA Astrophysics Data System (ADS)

    Mao, Ye-Wei; Lin, Lin; Kong, Xu

    2018-02-01

    This paper presents a spectroscopic investigation of 11 H II regions in the nearby galaxy NGC 2403. The H II regions are observed with a long-slit spectrograph mounted on the 2.16 m telescope at XingLong station of National Astronomical Observatories of China. For each of the H II regions, spectra are extracted at different nebular radii along the slit coverage. Oxygen abundances are empirically estimated from the strong-line indices R23, N2O2, O3N2, and N2 for each spectrophotometric unit, with both observation- and model-based calibrations adopted into the derivation. Radial profiles of these diversely estimated abundances are drawn for each nebula. In the results, the oxygen abundances separately estimated with the prescriptions on the basis of observations and models, albeit from the same spectral index, systematically deviate from each other; at the same time, the spectral indices R23 and N2O2 are distributed with flat profiles, whereas N2 and O3N2 exhibit apparent gradients with the nebular radius. Because our study naturally samples various ionization levels, which inherently decline at larger radii within individual H II regions, the radial distributions indicate not only the robustness of R23 and N2O2 against ionization variations but also the sensitivity of N2 and O3N2 to the ionization parameter. The results in this paper provide observational corroboration of the theoretical prediction about the deviation in the empirical abundance diagnostics. Our future work is planned to investigate metal-poor H II regions with measurable Te, in an attempt to recalibrate the strong-line indices and consequently disclose the cause of the discrepancies between the empirical oxygen abundances.
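
    For orientation, empirical abundance estimates of this kind are simple functions of the strong-line indices; the sketch below uses linear calibrations of the Pettini & Pagel (2004) type, with coefficients quoted as commonly cited values rather than the specific calibrations adopted in the paper.

    ```python
    import numpy as np

    # Illustrative observation-based strong-line abundance estimates; the
    # coefficients below are commonly quoted Pettini & Pagel (2004)-type values
    # and may differ from the calibrations used in the study.

    def oh_from_o3n2(oiii5007_hbeta, nii6584_halpha):
        o3n2 = np.log10(oiii5007_hbeta / nii6584_halpha)
        return 8.73 - 0.32 * o3n2          # 12 + log(O/H)

    def oh_from_n2(nii6584_halpha):
        n2 = np.log10(nii6584_halpha)
        return 8.90 + 0.57 * n2            # 12 + log(O/H)
    ```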

  17. Borderline personality disorder subscale (Chinese version) of the structured clinical interview for DSM-IV axis II personality disorders: a validation study in Cantonese-speaking Hong Kong Chinese.

    PubMed

    Wong, H M; Chow, L Y

    2011-06-01

    Borderline personality disorder is an important but under-recognised clinical entity, for which there are only a few available diagnostic instruments in the Chinese language. None has been tested for its psychometric properties in the Cantonese-speaking population in Hong Kong. The present study aimed to assess the validity of the Chinese version of the Borderline Personality Disorder subscale of the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders Axis II Personality Disorders (SCID-II) in Cantonese-speaking Hong Kong Chinese. A convenience sampling method was used. The subjects were first seen by a multidisciplinary clinical team, who arrived at a best-estimate diagnosis, and were then assessed by a SCID-II rater using the Chinese version of the Borderline Personality Disorder subscale. The study was carried out at the psychiatric clinic of the Prince of Wales Hospital in Hong Kong. A total of 87 patients of Chinese ethnicity aged 18 to 64 years who attended the clinic in April 2007 were recruited. These assessments were used to examine the internal consistency, best-estimate clinical diagnosis-SCID diagnosis agreement, sensitivity, and specificity of the Chinese version of the subscale. The Borderline Personality Disorder subscale (Chinese version) of SCID-II had an internal consistency of 0.82 (Cronbach's alpha coefficient), best-estimate clinical diagnosis-SCID diagnosis agreement of 0.82 (kappa), sensitivity of 0.92, and specificity of 0.94. The Borderline Personality Disorder subscale (Chinese version) of the SCID-II rater had reasonable validity when applied to Cantonese-speaking Chinese subjects in Hong Kong.

  18. Estimating chlorophyll content and photochemical yield of photosystem II (ΦPSII) using solar-induced chlorophyll fluorescence measurements at different growing stages of attached leaves

    PubMed Central

    Tubuxin, Bayaer; Rahimzadeh-Bajgiran, Parinaz; Ginnan, Yusaku; Hosoi, Fumiki; Omasa, Kenji

    2015-01-01

    This paper illustrates the possibility of measuring chlorophyll (Chl) content and Chl fluorescence parameters by the solar-induced Chl fluorescence (SIF) method using the Fraunhofer line depth (FLD) principle, and compares the results with the standard measurement methods. A high-spectral resolution HR2000+ and an ordinary USB4000 spectrometer were used to measure leaf reflectance under solar and artificial light, respectively, to estimate Chl fluorescence. Using leaves of Capsicum annuum cv. ‘Sven’ (paprika), the relationships between the Chl content and the steady-state Chl fluorescence near oxygen absorption bands of O2B (686nm) and O2A (760nm), measured under artificial and solar light at different growing stages of leaves, were evaluated. The Chl fluorescence yields of ΦF 686nm/ΦF 760nm ratios obtained from both methods correlated well with the Chl content (steady-state solar light: R2 = 0.73; artificial light: R2 = 0.94). The SIF method was less accurate for Chl content estimation when Chl content was high. The steady-state solar-induced Chl fluorescence yield ratio correlated very well with the artificial-light-induced one (R2 = 0.84). A new methodology is then presented to estimate photochemical yield of photosystem II (ΦPSII) from the SIF measurements, which was verified against the standard Chl fluorescence measurement method (pulse-amplitude modulated method). The high coefficient of determination (R2 = 0.74) between the ΦPSII of the two methods shows that photosynthesis process parameters can be successfully estimated using the presented methodology. PMID:26071530
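
    The FLD retrieval underlying the SIF measurements reduces to a single algebraic expression once reflectance and fluorescence are assumed constant inside and just outside the absorption band; a minimal sketch of that standard formula (not the paper's full processing chain):

    ```python
    def fld_fluorescence(e_in, e_out, l_in, l_out):
        """Standard Fraunhofer line depth (FLD) retrieval: with reflectance r
        and fluorescence F assumed constant inside and just outside the
        absorption band, L = r*E/pi + F at both wavelengths solves to the
        expression below.  e_* are downwelling irradiances and l_* upwelling
        radiances measured inside (in) and outside (out) the O2 band."""
        return (e_out * l_in - e_in * l_out) / (e_out - e_in)
    ```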

  19. Reduced uncertainty of regional scale CLM predictions of net carbon fluxes and leaf area indices with estimated plant-specific parameters

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2016-04-01

    Reliable estimates of carbon fluxes and states at regional scales are required to reduce uncertainties in regional carbon balance estimates and to support decision making in environmental politics. In this work the Community Land Model version 4.5 (CLM4.5-BGC) was applied at a high spatial resolution (1 km2) for the Rur catchment in western Germany. In order to improve the model-data consistency of net ecosystem exchange (NEE) and leaf area index (LAI) for this study area, five plant functional type (PFT)-specific CLM4.5-BGC parameters were estimated with time series of half-hourly NEE data for one year in 2011/2012, using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, a Markov Chain Monte Carlo (MCMC) approach. The parameters were estimated separately for four different plant functional types (needleleaf evergreen temperate tree, broadleaf deciduous temperate tree, C3-grass and C3-crop) at four different sites. The four sites are located inside or close to the Rur catchment. We evaluated modeled NEE for one year in 2012/2013 with NEE measured at seven eddy covariance sites in the catchment, including the four parameter estimation sites. Modeled LAI was evaluated by means of LAI derived from remotely sensed RapidEye images of about 18 days in 2011/2012. Performance indices were based on a comparison between measurements and (i) a reference run with CLM default parameters, and (ii) a 60 instance CLM ensemble with parameters sampled from the DREAM posterior probability density functions (pdfs). The difference between the observed and simulated NEE sum reduced 23% if estimated parameters instead of default parameters were used as input. The mean absolute difference between modeled and measured LAI was reduced by 59% on average. Simulated LAI was not only improved in terms of the absolute value but in some cases also in terms of the timing (beginning of vegetation onset), which was directly related to a substantial improvement of the NEE estimates in spring. In order to obtain a more comprehensive estimate of the model uncertainty, a second CLM ensemble was set up, where initial conditions and atmospheric forcings were perturbed in addition to the parameter estimates. This resulted in very high standard deviations (STD) of the modeled annual NEE sums for C3-grass and C3-crop PFTs, ranging between 24.1 and 225.9 gC m-2 y-1, compared to STD = 0.1 - 3.4 gC m-2 y-1 (effect of parameter uncertainty only, without additional perturbation of initial states and atmospheric forcings). The higher spread of modeled NEE for the C3-crop and C3-grass indicated that the model uncertainty was notably higher for those PFTs compared to the forest-PFTs. Our findings highlight the potential of parameter and uncertainty estimation to support the understanding and further development of land surface models such as CLM.

  20. Pressure-induced cooperative spin transition in iron(II) 2D coordination polymers: room-temperature visible spectroscopic study.

    PubMed

    Levchenko, G; Bukin, G V; Terekhov, S A; Gaspar, A B; Martínez, V; Muñoz, M C; Real, J A

    2011-06-30

    For the 2D coordination polymers [Fe(3-Fpy)(2)M(II)(CN)(4)] (M(II) = Ni, Pd, Pt), the pressure-induced spin crossover behavior has been investigated at 298 K by monitoring the distinct optical properties associated with each spin state. A cooperative first-order spin transition characterized by a piezohysteresis loop ca. 0.1 GPa wide was observed for the three derivatives. Application of the mean-field regular solution theory has enabled estimation of the cooperative parameter, Γ(p), and the enthalpy, ΔH(HL)(p), associated with the spin transition for each derivative. These values, found in the intervals 6.8-7.9 and 18.6-20.8 kJ mol(-1), respectively, are consistent with those previously reported for thermally induced spin transition at constant pressure for the title compounds (Chem. Eur. J. 2009, 15, 10960). The relevance of the elastic energy, Δ(elast), as a corrective parameter accounting for the pressure dependence of the critical temperature of thermally induced spin transitions (Clausius-Clapeyron equation) is also demonstrated and discussed.

  1. Reliable Real-Time Solution of Parametrized Partial Differential Equations: Reduced-Basis Output Bound Methods. Appendix 2

    NASA Technical Reports Server (NTRS)

    Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)

    2002-01-01

    We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.

  2. On the ab initio evaluation of Hubbard parameters. II. The κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal

    NASA Astrophysics Data System (ADS)

    Fortunelli, Alessandro; Painelli, Anna

    1997-05-01

    A previously proposed approach for the ab initio evaluation of Hubbard parameters is applied to BEDT-TTF dimers. The dimers are positioned according to four geometries taken as the first neighbors from the experimental data on the κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal. RHF-SCF, CAS-SCF and frozen-orbital calculations using the 6-31G** basis set are performed with different values of the total charge, allowing us to derive all the relevant parameters. It is found that the electronic structure of the BEDT-TTF planes is adequately described by the standard Extended Hubbard Model, with the off-diagonal electron-electron interaction terms (X and W) of negligible size. The derived parameters are in good agreement with available experimental data. Comparison with previous theoretical estimates shows that the t values compare well with those obtained from Extended Hückel Theory (whereas the minimal basis set estimates are completely unreliable). On the other hand, the Uaeff values exhibit an appreciable dependence on the chemical environment.

  3. Population Pharmacokinetic/Pharmacodynamic Analysis of Alirocumab in Healthy Volunteers or Hypercholesterolemic Subjects Using an Indirect Response Model to Predict Low-Density Lipoprotein Cholesterol Lowering: Support for a Biologics License Application Submission: Part II.

    PubMed

    Nicolas, Xavier; Djebli, Nassim; Rauch, Clémence; Brunet, Aurélie; Hurbin, Fabrice; Martinez, Jean-Marie; Fabre, David

    2018-05-03

    Alirocumab, a human monoclonal antibody against proprotein convertase subtilisin/kexin type 9 (PCSK9), significantly lowers low-density lipoprotein cholesterol levels. This analysis aimed to develop and qualify a population pharmacokinetic/pharmacodynamic model for alirocumab based on pooled data obtained from 13 phase I/II/III clinical trials. From a dataset of 2799 individuals (14,346 low-density lipoprotein-cholesterol values), individual pharmacokinetic parameters from the population pharmacokinetic model presented in Part I of this series were used to estimate alirocumab concentrations. As a second step, we then developed the current population pharmacokinetic/pharmacodynamic model using an indirect response model with a Hill coefficient, parameterized with increasing low-density lipoprotein cholesterol elimination, to relate alirocumab concentrations to low-density lipoprotein cholesterol values. The population pharmacokinetic/pharmacodynamic model allowed the characterization of the pharmacokinetic/pharmacodynamic properties of alirocumab in the target population and estimation of individual low-density lipoprotein cholesterol levels and derived pharmacodynamic parameters (the maximum decrease in low-density lipoprotein cholesterol values from baseline and the difference between baseline low-density lipoprotein cholesterol and the pre-dose value before the next alirocumab dose). Significant parameter-covariate relationships were retained in the model, with a total of ten covariates (sex, age, weight, free baseline PCSK9, total time-varying PCSK9, concomitant statin administration, total baseline PCSK9, co-administration of high-dose statins, disease status) included in the final population pharmacokinetic/pharmacodynamic model to explain between-subject variability. Nevertheless, the high number of covariates included in the model did not have a clinically meaningful impact on model-derived pharmacodynamic parameters. This model successfully allowed the characterization of the population pharmacokinetic/pharmacodynamic properties of alirocumab in its target population and the estimation of individual low-density lipoprotein cholesterol levels.
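
    The indirect-response structure described here can be sketched generically as a turnover equation in which drug concentration stimulates LDL-C elimination through a Hill function; all parameter values and the concentration function below are placeholders, not the published alirocumab model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def ldl_indirect_response(t, conc_fn, kin=1.0, kout=0.01,
                              emax=3.0, ec50=5.0, hill=1.0):
        """Generic indirect-response model in which the drug concentration
        conc_fn(t) stimulates LDL-C elimination through a Hill function.
        Parameter values are illustrative only."""
        ldl0 = kin / kout                                  # baseline at steady state
        def rhs(time, y):
            c = conc_fn(time)
            stim = 1.0 + emax * c**hill / (ec50**hill + c**hill)
            return [kin - kout * stim * y[0]]
        sol = solve_ivp(rhs, (t[0], t[-1]), [ldl0], t_eval=t)
        return sol.y[0]
    ```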

  4. Optimization and adsorption kinetic studies of aqueous manganese ion removal using chitin extracted from shells of edible Philippine crabs

    NASA Astrophysics Data System (ADS)

    Quimque, Mark Tristan J.; Jimenez, Marvin C.; Acas, Meg Ina S.; Indoc, Danrelle Keth L.; Gomez, Enjelyn C.; Tabuñag, Jenny Syl D.

    2017-01-01

    Manganese is a common contaminant in drinking water, along with other metal pollutants. This paper investigates the use of chitin, extracted from crab shells obtained as restaurant waste, as an adsorbent for removing manganese ions from aqueous media. In particular, it aims to optimize the adsorption parameters and examine the kinetics of the process. The adsorption experiments in this study employed the batch equilibration method. In the optimization, the following parameters were considered: pH and concentration of the Mn(II) sorbate solution, particle size and dosage of the chitin adsorbent, and adsorbent-adsorbate contact time. At the optimal condition, the order of the adsorption reaction was estimated using the kinetic model that best describes the process. It was found that the adsorption of aqueous Mn(II) ions onto chitin obeys the pseudo-second-order model, which assumes that the adsorption occurs via chemisorption.
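
    The pseudo-second-order model mentioned here is usually fitted in its linearized form t/q_t = 1/(k2·qe²) + t/qe; a minimal sketch of that fit, with hypothetical variable names:

    ```python
    import numpy as np

    def pseudo_second_order_fit(t, q):
        """Linearized pseudo-second-order fit: t/q = 1/(k2*qe^2) + t/qe.
        Regressing t/q on t gives slope = 1/qe and intercept = 1/(k2*qe^2).
        t: contact times (> 0); q: amount adsorbed per unit mass at time t."""
        t = np.asarray(t, float)
        q = np.asarray(q, float)
        slope, intercept = np.polyfit(t, t / q, 1)
        qe = 1.0 / slope                      # equilibrium adsorption capacity
        k2 = 1.0 / (intercept * qe**2)        # pseudo-second-order rate constant
        return qe, k2
    ```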

  5. Theoretical study of the accuracy of the elution by characteristic points method for bi-langmuir isotherms.

    PubMed

    Ravald, L; Fornstedt, T

    2001-01-26

    The bi-Langmuir equation has recently been proven essential for describing chiral chromatographic surfaces, and we therefore investigated the accuracy of the elution by characteristic points (ECP) method for estimation of bi-Langmuir isotherm parameters. The ECP calculations were performed on elution profiles generated by the equilibrium-dispersive model of chromatography for five different sets of bi-Langmuir parameters. The ECP method generates two different errors: (i) the error of the ECP-calculated isotherm and (ii) the model error of the fitting to the ECP isotherm. Both errors decreased with increasing column efficiency. Moreover, the model error was strongly affected by the weight of the bi-Langmuir function fitted. For some bi-Langmuir compositions the error of the ECP-calculated isotherm is too large even at high column efficiencies. Guidelines are given on surface types to be avoided and on the column efficiencies and loading factors required for adequate parameter estimation with ECP.
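
    As a rough illustration of the final fitting step only (not the ECP calculation of the isotherm from the elution profile), the sketch below fits the bi-Langmuir isotherm q(C) = a1·C/(1 + b1·C) + a2·C/(1 + b2·C) to synthetic adsorption data.

    ```python
    # Least-squares fit of a bi-Langmuir isotherm to synthetic data (illustrative only).
    import numpy as np
    from scipy.optimize import curve_fit

    def bi_langmuir(C, a1, b1, a2, b2):
        return a1 * C / (1.0 + b1 * C) + a2 * C / (1.0 + b2 * C)

    C = np.linspace(0.1, 10.0, 25)                                   # mobile-phase concentration
    q_obs = bi_langmuir(C, 20.0, 0.5, 5.0, 5.0)                      # "true" isotherm ...
    q_obs = q_obs + np.random.default_rng(1).normal(0, 0.1, C.size)  # ... plus noise

    params, _ = curve_fit(bi_langmuir, C, q_obs, p0=[10, 1, 1, 1], maxfev=10000)
    print("a1, b1, a2, b2 =", np.round(params, 2))
    ```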

  6. Biochemical methane potential (BMP) tests: Reducing test time by early parameter estimation.

    PubMed

    Da Silva, C; Astals, S; Peces, M; Campos, J L; Guerrero, L

    2018-01-01

    Biochemical methane potential (BMP) testing is a key analytical technique to assess the implementation and optimisation of anaerobic biotechnologies. However, this technique is characterised by long testing times (from 20 to >100 days), which is not suitable for waste utilities, consulting companies or plant operators whose decision-making processes cannot be delayed for such a long time. This study develops a statistically robust mathematical strategy using sensitivity functions for early prediction of BMP first-order model parameters, i.e. the methane yield (B0) and the first-order kinetic rate constant (k). The minimum testing time for early parameter estimation showed a potential correlation with the k value, where (i) slowly biodegradable substrates (k ≤ 0.1 d⁻¹) have minimum testing times of ≥15 days, (ii) moderately biodegradable substrates (0.1
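
    The first-order BMP model referred to above is B(t) = B0·(1 − exp(−k·t)); a minimal sketch of estimating B0 and k from an early, truncated portion of a methane-yield curve is given below, with purely illustrative data.

    ```python
    # First-order BMP model fitted to an illustrative, truncated methane-yield curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def bmp_first_order(t, B0, k):
        return B0 * (1.0 - np.exp(-k * t))

    t = np.array([1, 2, 4, 6, 8, 10, 12, 15], dtype=float)             # days
    B = np.array([40, 75, 130, 170, 200, 220, 235, 250], dtype=float)  # mL CH4/g VS (illustrative)

    (B0_hat, k_hat), _ = curve_fit(bmp_first_order, t, B, p0=[300.0, 0.1])
    print(f"B0 = {B0_hat:.0f} mL CH4/g VS, k = {k_hat:.3f} 1/d")
    ```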

  7. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    NASA Astrophysics Data System (ADS)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with the use of a statistical law with two parameters (here generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
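
    A minimal sketch of the data-resampling (bootstrap) idea used for the hydrological-parameter uncertainty is shown below; the calibration function and the flow sample are placeholders, not the SHYREG implementation.

    ```python
    # Bootstrap confidence interval for a single calibrated parameter (toy example).
    import numpy as np

    rng = np.random.default_rng(0)
    obs = rng.gamma(shape=2.0, scale=30.0, size=40)   # illustrative annual flow maxima

    def calibrate(sample):
        # Stand-in for calibrating the single hydrological-model parameter on a flow sample.
        return np.mean(sample)

    boot = np.array([calibrate(rng.choice(obs, size=obs.size, replace=True))
                     for _ in range(2000)])
    lo, hi = np.percentile(boot, [5, 95])
    print(f"estimate: {calibrate(obs):.1f}, 90% bootstrap interval: [{lo:.1f}, {hi:.1f}]")
    ```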

  8. On the in vivo photochemical rate parameters for PDT reactive oxygen species modeling

    NASA Astrophysics Data System (ADS)

    Kim, Michele M.; Ghogare, Ashwini A.; Greer, Alexander; Zhu, Timothy C.

    2017-03-01

    Photosensitizer photochemical parameters are crucial data in accurate dosimetry for photodynamic therapy (PDT) based on photochemical modeling. Progress has been made in the last few decades in determining the photochemical properties of commonly used photosensitizers (PS), but mostly in solution or in vitro. Recent developments allow for the estimation of some of these photochemical parameters in vivo. This review will cover the currently available in vivo photochemical properties of photosensitizers as well as the techniques for measuring those parameters. Furthermore, photochemical parameters that are independent of environmental factors or are universal for different photosensitizers will be examined. Most photosensitizers discussed in this review are of the type II (singlet oxygen) photooxidation category, although type I photosensitizers that involve other reactive oxygen species (ROS) will be discussed as well. The compilation of these parameters will be essential for ROS modeling of PDT.

  9. On the in-vivo photochemical rate parameters for PDT reactive oxygen species modeling

    PubMed Central

    Kim, Michele M.; Ghogare, Ashwini A.; Greer, Alexander; Zhu, Timothy C.

    2017-01-01

    Photosensitizer photochemical parameters are crucial data in accurate dosimetry for photodynamic therapy (PDT) based on photochemical modeling. Progress has been made in the last few decades in determining the photochemical properties of commonly used photosensitizers (PS), but mostly in solution or in-vitro. Recent developments allow for the estimation of some of these photochemical parameters in-vivo. This review will cover the currently available in-vivo photochemical properties of photosensitizers as well as the techniques for measuring those parameters. Furthermore, photochemical parameters that are independent of environmental factors or are universal for different photosensitizers will be examined. Most photosensitizers discussed in this review are of the type II (singlet oxygen) photooxidation category, although type I photosensitizers that involve other reactive oxygen species (ROS) will be discussed as well. The compilation of these parameters will be essential for ROS modeling of PDT. PMID:28166056

  10. Online estimation of the wavefront outer scale profile from adaptive optics telemetry

    NASA Astrophysics Data System (ADS)

    Guesalaga, A.; Neichel, B.; Correia, C. M.; Butterley, T.; Osborn, J.; Masciadri, E.; Fusco, T.; Sauvage, J.-F.

    2017-02-01

    We describe an online method to estimate the wavefront outer scale profile, L0(h), for very large and future extremely large telescopes. The stratified information on this parameter impacts the estimation of the main turbulence parameters [turbulence strength, Cn2(h); Fried's parameter, r0; isoplanatic angle, θ0; and coherence time, τ0] and determines the performance of wide-field adaptive optics (AO) systems. This technique estimates L0(h) using data from the AO loop available at the facility instruments by constructing the cross-correlation functions of the slopes between two or more wavefront sensors, which are later fitted to a linear combination of the simulated theoretical layers having different altitudes and outer scale values. We analyse some limitations found in the estimation process: (I) its insensitivity to large values of L0(h), as the telescope becomes blind to outer scales larger than its diameter; (II) the maximum number of observable layers given the limited number of independent inputs that the cross-correlation functions provide; and (III) the minimum length of data required for a satisfactory convergence of the turbulence parameters without breaking the assumption of statistical stationarity of the turbulence. The method is applied to the Gemini South multiconjugate AO system that comprises five wavefront sensors and two deformable mirrors. Statistics of L0(h) at Cerro Pachón from data acquired during 3 yr of campaigns show interesting resemblance to other independent results in the literature. A final analysis suggests that the impact of error sources will be substantially reduced in instruments of the next generation of giant telescopes.

  11. Adaptive Detection and Parameter Estimation for Multidimensional Signal Models

    DTIC Science & Technology

    1989-04-19


  12. Analysis of disconnected diallel mating designs II: results from a third generation progeny test of the New Zealand radiata pine improvement programme.

    Treesearch

    J.N. King; M.J. Carson; G.R. Johnson

    1998-01-01

    Genetic parameters from a second generation (F2) disconnected diallel progeny test of the New Zealand radiata pine improvement programme are presented. Heritability estimates of growth and yield traits of 0.2 are similar to progeny test results of the previous (F1) generation tests. A trend of declining dominance...

  13. Chairside CAD/CAM materials. Part 3: Cyclic fatigue parameters and lifetime predictions.

    PubMed

    Wendler, Michael; Belli, Renan; Valladares, Diana; Petschelt, Anselm; Lohbauer, Ulrich

    2018-06-01

    Chemical and mechanical degradation play a key role in the lifetime of dental restorative materials. Therefore, prediction of their long-term performance in the oral environment should be based on fatigue data rather than inert strength data, as is commonly the practice in the dental materials field. The objective of the present study was to provide mechanistic fatigue parameters of current dental CAD/CAM materials under cyclic biaxial flexure and to assess their suitability in predicting clinical fracture behaviors. Eight CAD/CAM materials, including polycrystalline zirconia (IPS e.max ZirCAD), reinforced glasses (Vitablocs Mark II, IPS Empress CAD), glass-ceramics (IPS e.max CAD, Suprinity PC, Celtra Duo), as well as hybrid materials (Enamic, Lava Ultimate), were evaluated. Rectangular plates (12 × 12 × 1.2 mm³) with highly polished surfaces were prepared and tested in biaxial cyclic fatigue in water until fracture using the Ball-on-Three-Balls (B3B) test. Cyclic fatigue parameters n and A* were obtained from the lifetime data for each material and further used to build SPT diagrams. The latter were used to compare in-vitro with in-vivo fracture distributions for IPS e.max CAD and IPS Empress CAD. Susceptibility to subcritical crack growth under cyclic loading was observed for all materials, being more severe (n ≤ 20) in the lithium-based glass-ceramics and Vitablocs Mark II. Strength degradations of 40% up to 60% were predicted after only 1 year of service. Threshold stress intensity factors (K_th), representing the onset of subcritical crack growth (SCG), were estimated to lie in the range of 0.37-0.44 of K_Ic for the lithium-based glass-ceramics and Vitablocs Mark II and between 0.51-0.59 of K_Ic for the other materials. Failure distributions associated with mechanistic estimations of strength degradation in-vitro proved useful in interpreting failure behavior in-vivo. The parameter K_th stood out as a better predictor of clinical performance than the SCG parameter n. Fatigue parameters obtained from cyclic loading experiments are more reliable predictors of the mechanical performance of contemporary dental CAD/CAM restoratives than quasi-static mechanical properties. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.

  14. Automatic corn-soybean classification using Landsat MSS data. I - Near-harvest crop proportion estimation. II - Early season crop proportion estimation

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.

    1984-01-01

    The techniques initially used for the identification of cultivated crops from Landsat imagery depended greatly on the interpretation of film products by a human analyst. This approach was neither very effective nor objective. Since 1978, new methods for crop identification have been developed. Badhwar et al. (1982) showed that multitemporal-multispectral data could be reduced to a simple feature space of alpha and beta and that these features would separate corn and soybean very well. However, there are disadvantages related to the use of the alpha and beta parameters. The present investigation is concerned with a suitable method for extracting the required features. Attention is given to a profile model for crop discrimination, corn-soybean separation using profile parameters, and an automatic labeling (target recognition) method. The developed technique is extended to obtain a procedure which makes it possible to estimate the crop proportions of corn and soybean from Landsat data early in the growing season.

  15. An Experimental Investigation of Damage Resistances and Damage Tolerance of Composite Materials

    NASA Technical Reports Server (NTRS)

    Prabhakaran, R.

    2003-01-01

    The project included three lines of investigation, aimed at a better understanding of the damage resistance and damage tolerance of pultruded composites. The three lines of investigation were: (i) measurement of permanent dent depth after transverse indentation at different load levels, and correlation with other damage parameters such as damage area (from x-radiography) and back surface crack length, (ii) estimation of point stress and average stress characteristic dimensions corresponding to measured damage parameters, and (iii) an attempt to measure the damage area by a reflection photoelastic technique. All the three lines of investigation were pursued.

  16. Simultaneous emission and transmission scanning in PET oncology: the effect on parameter estimation

    NASA Astrophysics Data System (ADS)

    Meikle, S. R.; Eberl, S.; Hooper, P. K.; Fulham, M. J.

    1997-02-01

    The authors investigated potential sources of bias due to simultaneous emission and transmission (SET) scanning and their effect on parameter estimation in dynamic positron emission tomography (PET) oncology studies. The sources of bias considered include: i) variation in transmission spillover (into the emission window) throughout the field of view, ii) increased scatter arising from rod sources, and iii) inaccurate deadtime correction. Net bias was calculated as a function of the emission count rate and used to predict distortion in [18F]2-fluoro-2-deoxy-D-glucose (FDG) and [11C]thymidine tissue curves simulating the normal liver and metastatic involvement of the liver. The effect on parameter estimates was assessed by spectral analysis and compartmental modeling. The various sources of bias approximately cancel during the early part of the study when count rate is maximal. Scatter dominates in the latter part of the study, causing apparently decreased tracer clearance which is more marked for thymidine than for FDG. The irreversible disposal rate constant, K_i, was overestimated by <10% for FDG and >30% for thymidine. The authors conclude that SET has a potential role in dynamic FDG PET but is not suitable for 11C-labeled compounds.

  17. Estimating the Biodegradability of Treated Sewage Samples Using Synchronous Fluorescence Spectra

    PubMed Central

    Lai, Tien M.; Shin, Jae-Ki; Hur, Jin

    2011-01-01

    Synchronous fluorescence spectra (SFS) and the first derivative spectra of the influent versus the effluent wastewater samples were compared and the use of fluorescence indices is suggested as a means to estimate the biodegradability of the effluent wastewater. Three distinct peaks were identified from the SFS of the effluent wastewater samples. Protein-like fluorescence (PLF) was reduced, whereas fulvic and/or humic-like fluorescence (HLF) were enhanced, suggesting that the two fluorescence characteristics may represent biodegradable and refractory components, respectively. Five fluorescence indices were selected for the biodegradability estimation based on the spectral features changing from the influent to the effluent. Among the selected indices, the relative distribution of PLF to the total fluorescence area of SFS (Index II) exhibited the highest correlation coefficient with total organic carbon (TOC)-based biodegradability, which was even higher than those obtained with the traditional oxygen demand-based parameters. A multiple regression analysis using Index II and the area ratio of PLF to HLF (Index III) demonstrated the enhancement of the correlations from 0.558 to 0.711 for TOC-based biodegradability. The multiple regression equations finally obtained were 0.148 × Index II − 4.964 × Index III − 0.001 and 0.046 × Index II − 1.128 × Index III + 0.026. The fluorescence indices proposed here are expected to be utilized for successful development of real-time monitoring using a simple fluorescence sensing device for the biodegradability of treated sewage. PMID:22164023

  18. Estimating the biodegradability of treated sewage samples using synchronous fluorescence spectra.

    PubMed

    Lai, Tien M; Shin, Jae-Ki; Hur, Jin

    2011-01-01

    Synchronous fluorescence spectra (SFS) and the first derivative spectra of the influent versus the effluent wastewater samples were compared and the use of fluorescence indices is suggested as a means to estimate the biodegradability of the effluent wastewater. Three distinct peaks were identified from the SFS of the effluent wastewater samples. Protein-like fluorescence (PLF) was reduced, whereas fulvic and/or humic-like fluorescence (HLF) were enhanced, suggesting that the two fluorescence characteristics may represent biodegradable and refractory components, respectively. Five fluorescence indices were selected for the biodegradability estimation based on the spectral features changing from the influent to the effluent. Among the selected indices, the relative distribution of PLF to the total fluorescence area of SFS (Index II) exhibited the highest correlation coefficient with total organic carbon (TOC)-based biodegradability, which was even higher than those obtained with the traditional oxygen demand-based parameters. A multiple regression analysis using Index II and the area ratio of PLF to HLF (Index III) demonstrated the enhancement of the correlations from 0.558 to 0.711 for TOC-based biodegradability. The multiple regression equations finally obtained were 0.148 × Index II - 4.964 × Index III - 0.001 and 0.046 × Index II - 1.128 × Index III + 0.026. The fluorescence indices proposed here are expected to be utilized for successful development of real-time monitoring using a simple fluorescence sensing device for the biodegradability of treated sewage.

  19. Comparison of least squares and exponential sine sweep methods for Parallel Hammerstein Models estimation

    NASA Astrophysics Data System (ADS)

    Rebillat, Marc; Schoukens, Maarten

    2018-05-01

    Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret as well as to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters and thus that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but that it is also less accurate. Furthermore, the LS method needs parameters to be set in advance, whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
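
    Because a Parallel Hammerstein Model with fixed basis nonlinearities is linear in its branch impulse responses, ordinary least squares applies directly; the sketch below illustrates this on a synthetic system (polynomial branches of orders 1-3 followed by short FIR filters). It is a plain LS example, not the regularized estimator proposed in the article.

    ```python
    # Ordinary least-squares estimation of a synthetic Parallel Hammerstein Model.
    import numpy as np

    rng = np.random.default_rng(5)
    N, nb, order = 2000, 5, 3
    u = rng.uniform(-1, 1, N)

    # Simulate a "true" PHM: branch p filters u**p with impulse response g_p.
    g_true = [rng.normal(size=nb) * 0.5**p for p in range(1, order + 1)]
    y = sum(np.convolve(u**p, g, mode="full")[:N] for p, g in zip(range(1, order + 1), g_true))
    y = y + 0.01 * rng.normal(size=N)

    # Regression matrix: delayed copies of u**p for each branch p and lag k.
    cols = [np.concatenate([np.zeros(k), (u**p)[:N - k]])
            for p in range(1, order + 1) for k in range(nb)]
    Phi = np.column_stack(cols)
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    print("max coefficient error:", np.abs(theta - np.concatenate(g_true)).max())
    ```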

  20. Transverse single spin asymmetry in e+p↑→e+J/ψ+X and Q2 evolution of Sivers function-II

    NASA Astrophysics Data System (ADS)

    Godbole, Rohini M.; Kaushik, Abhiram; Misra, Anuradha; Rawoot, Vaibhav S.

    2015-01-01

    We present estimates of single spin asymmetry in the electroproduction of J/ψ taking into account the transverse momentum-dependent (TMD) evolution of the gluon Sivers function. We estimate single spin asymmetry for JLab, HERMES, COMPASS and eRHIC energies using the color evaporation model of J/ψ. We have calculated the asymmetry using recent parameters extracted by Echevarria et al. using the Collins-Soper-Sterman approach to TMD evolution. These recent TMD evolution fits are based on the evolution kernel in which the perturbative part is resummed up to next-to-leading logarithmic accuracy. We have also estimated the asymmetry by using parameters which had been obtained by a fit by Anselmino et al., using both an exact numerical and an approximate analytical solution of the TMD evolution equations. We find that the variation among the different estimates obtained using TMD evolution is much smaller than between these on one hand and the estimates obtained using DGLAP evolution on the other. Even though the use of TMD evolution causes an overall reduction in asymmetries compared to the ones obtained without it, they remain sizable. Overall, upon use of TMD evolution, predictions for asymmetries stabilize.

  1. Time scale variation of MgII resonance lines of HD 41335 in UV region

    NASA Astrophysics Data System (ADS)

    Nikolaou, I.

    2012-01-01

    It is known that hot emission stars (Be and Oe) present peculiar and very complex spectral line profiles. Because of these complex profiles, it is difficult to fit a classical distribution to the observed lines. As a result, many physical parameters of the regions where these lines are created cannot be determined. In this paper, we study the Ultraviolet (UV) MgII (λλ 2795.523, 2802.698 Å) resonance lines of the star HD 41335 at three different periods. Considering that these profiles consist of a number of independent Discrete or Satellite Absorption Components (DACs, SACs), we use the Gauss-Rotation model (GR-model). From this analysis we can estimate the values of a group of physical parameters, such as the apparent rotational and radial velocities, the random velocities of the thermal motions of the ions, as well as the Full Width at Half Maximum (FWHM), the column density and the absorbed energy of the independent regions of matter which produce the main and the satellite components of the studied spectral lines. Eventually, we calculate the time-scale variations of the above physical parameters.

  2. Biokinetic modelling development and analysis of arsenic dissolution into the gastrointestinal tract using SAAM II

    NASA Astrophysics Data System (ADS)

    Perama, Yasmin Mohd Idris; Siong, Khoo Kok

    2018-04-01

    A mathematical model comprising 8 compartments was designed to describe the kinetic dissolution of arsenic (As) from water leach purification (WLP) waste samples ingested into the gastrointestinal system. The completely re-engineered software system Simulation, Analysis and Modelling II (SAAM II) was employed to aid in the experimental design and data analysis. As a tool that creates, simulates and analyzes data accurately and rapidly, SAAM II computationally builds a system of ordinary differential equations according to the specified compartmental model structure and simulates the solutions based upon the parameter and model inputs provided. An in vitro DIN experimental design was applied to create artificial gastric and gastrointestinal fluids. These synthetic fluid assays were used to determine the concentrations of As ingested into the gastrointestinal tract. The model outputs were generated from the experimental inputs and the recommended fractional transfer rate parameters. The measured and predicted As concentrations in the gastric fluid agreed closely over the time of study. In contrast, the measured and predicted As concentrations in the gastrointestinal fluid were similar only during the first hour and then diverged up to the fifth hour of the study. This is attributed to the loss of As through the fractional transfer rates from compartment q2 to compartments q3 and q5, which represent excretion and distribution to the whole body, respectively. The model outputs obtained after the best fit to the data were influenced significantly by the fractional transfer rates between the compartments. Therefore, a compartmental model built with appropriate fractional transfer rate parameters with the aid of SAAM II provides a better estimation that simulates the kinetic behavior of As ingested into the gastrointestinal system.

  3. The SEGUE Stellar Parameter Pipeline. II. Validation with Galactic Globular and Open Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Y.S.; Beers, T.C.; Sivarani, T.

    2007-10-01

    The authors validate the performance and accuracy of the current SEGUE (Sloan Extension for Galactic Understanding and Exploration) Stellar Parameter Pipeline (SSPP), which determines stellar atmospheric parameters (effective temperature, surface gravity, and metallicity) by comparing derived overall metallicities and radial velocities from selected likely members of three globular clusters (M 13, M 15, and M 2) and two open clusters (NGC 2420 and M 67) to the literature values. Spectroscopic and photometric data obtained during the course of the original Sloan Digital Sky Survey (SDSS-I) and its first extension (SDSS-II/SEGUE) are used to determine stellar radial velocities and atmospheric parameter estimates for stars in these clusters. Based on the scatter in the metallicities derived for the members of each cluster, they quantify the typical uncertainty of the SSPP values, σ([Fe/H]) = 0.13 dex for stars in the range of 4500 K ≤ Teff ≤ 7500 K and 2.0 ≤ log g ≤ 5.0, at least over the metallicity interval spanned by the clusters studied (-2.3 ≤ [Fe/H] < 0). The surface gravities and effective temperatures derived by the SSPP are also compared with those estimated from the comparison of the color-magnitude diagrams with stellar evolution models; they find satisfactory agreement. At present, the SSPP underestimates [Fe/H] for near-solar-metallicity stars, represented by members of M 67 in this study, by ≈ 0.3 dex.

  4. Income dynamics with a stationary double Pareto distribution.

    PubMed

    Toda, Alexis Akira

    2011-04-01

    Once controlled for the trend, the distribution of personal income appears to be double Pareto, a distribution that obeys the power law exactly in both the upper and the lower tails. I propose a model of income dynamics with a stationary distribution that is consistent with this fact. Using US male wage data for 1970-1993, I estimate the power law exponent in two ways--(i) from each cross section, assuming that the distribution has converged to the stationary distribution, and (ii) from a panel directly estimating the parameters of the income dynamics model--and obtain the same value of 8.4.
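
    A minimal sketch of route (i), the cross-sectional estimate, is given below: the maximum-likelihood (Hill) estimator of the upper-tail power-law exponent applied to synthetic Pareto draws, with the tail threshold assumed known.

    ```python
    # Hill / maximum-likelihood estimate of a Pareto tail exponent from synthetic data.
    import numpy as np

    rng = np.random.default_rng(42)
    alpha_true, xmin = 8.4, 1.0
    x = xmin * (1.0 - rng.uniform(size=5000)) ** (-1.0 / alpha_true)  # Pareto(alpha) sample

    alpha_hat = x.size / np.sum(np.log(x / xmin))                     # MLE for the exponent
    print(f"estimated tail exponent: {alpha_hat:.2f}")
    ```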

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    Traditionally power distribution networks are either not observable or only partially observable. This complicates development and implementation of new smart grid technologies, such as those related to demand response, outage detection and management, and improved load-monitoring. In this two part paper, inspired by proliferation of the metering technology, we discuss estimation problems in structurally loopy but operationally radial distribution grids from measurements, e.g. voltage data, which are either already available or can be made available with a relatively minor investment. In Part I, the objective is to learn the operational layout of the grid. Part II of this paper presents algorithms that estimate load statistics or line parameters in addition to learning the grid structure. Further, Part II discusses the problem of structure estimation for systems with incomplete measurement sets. Our newly suggested algorithms apply to a wide range of realistic scenarios. The algorithms are also computationally efficient (polynomial in time), which is proven theoretically and illustrated computationally on a number of test cases. The technique developed can be applied to detect line failures in real time as well as to understand the scope of possible adversarial attacks on the grid.

  6. Comparison of Serum Levels of Endothelin-1 in Chronic Periodontitis Patients Before and After Treatment

    PubMed Central

    Varghese, Sheeja S; Sankari, M.; Jayakumar, ND.

    2017-01-01

    Introduction Endothelin-1 (ET-1) is a potent vasoconstrictive peptide with multifunctional activity in various systemic diseases. Previous studies indicate the detection of ET-1 in gingival tissues and gingival crevicular fluid. Aim The aim of this study was to estimate the serum ET-1 levels in clinically healthy subjects and subjects with chronic periodontitis, before and after treatment, and correlate them with the clinical parameters. Materials and Methods A total of 44 patients were included in the study. Group I comprised 20 subjects with clinically healthy periodontium. Group II comprised 24 subjects with chronic periodontitis. Group III comprised the same Group II subjects following periodontal management. Serum samples were collected from the subjects and an Enzyme Linked Immunosorbent Assay (ELISA) was done to estimate the ET-1 levels. The ET-1 levels were then correlated among the three groups with the clinical parameters, namely Plaque Index (PI), Sulcus Bleeding Index (SBI), probing pocket depth, clinical attachment loss and Periodontally Inflamed Surface Area (PISA). The independent t-test and paired t-test were used for comparison of clinical parameters and Pearson’s correlation coefficient test was used for correlating the ET-1 levels. Results ET-1 levels in chronic periodontitis subjects were significantly higher compared to healthy subjects (p<0.001). However, the clinical parameters did not statistically correlate with the ET-1 levels. There was a significant decrease in ET-1 levels following treatment (p<0.001). Conclusion Serum ET-1 is increased in chronic periodontitis and reduces after periodontal therapy. Further studies are required to establish ET-1 as a biomarker for periodontal disease. PMID:28571268

  7. Restoration of acidic mine spoils with sewage sludge: II measurement of solids applied

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stucky, D.J.; Zoeller, A.L.

    1980-01-01

    Sewage sludge was incorporated in acidic strip mine spoils at rates equivalent to 0, 224, 336, and 448 dry metric tons (dmt)/ha and placed in pots in a greenhouse. Spoil parameters were determined 48 hours after sludge incorporation, Time Planting (P), and five months after orchardgrass (Dactylis glomerata L.) was planted, Time Harvest (H), in the pots. Parameters measured were: pH, organic matter content (OM), cation exchange capacity (CEC), electrical conductivity (EC) and yield. Values for each parameter were significantly different at the two sampling times. Correlation coefficient values were calculated for all parameters versus rates of applied sewage sludge and all parameters versus each other. Multiple regressions were performed, stepwise, for all parameters versus rates of applied sewage sludge. Equations to predict amounts of sewage sludge incorporated in spoils were derived for individual and multiple parameters. Generally, measurements made at Time P achieved the highest correlation coefficient and multiple correlation coefficient values; therefore, the authors concluded data from Time P had the greatest predictability value. The most important value measured to predict rate of applied sewage sludge was pH and some additional accuracy was obtained by including CEC in the equation. This experiment indicated that soil properties can be used to estimate amounts of sewage sludge solids required to reclaim acidic mine spoils and to estimate quantities incorporated.

  8. Evolution of a mini-scale biphasic dissolution model: Impact of model parameters on partitioning of dissolved API and modelling of in vivo-relevant kinetics.

    PubMed

    Locher, Kathrin; Borghardt, Jens M; Frank, Kerstin J; Kloft, Charlotte; Wagner, Karl G

    2016-08-01

    Biphasic dissolution models are proposed to have good predictive power for in vivo absorption. The aim of this study was to improve our previously introduced mini-scale dissolution model to mimic in vivo situations more realistically and to increase the robustness of the experimental model. Six dissolved APIs (BCS II) were tested using the improved mini-scale biphasic dissolution model (miBIdi-pH-II). The influence of experimental model parameters including various excipients, API concentrations, the dual paddle and its rotation speed was investigated. The kinetics in the biphasic model were described using one- and four-compartment pharmacokinetic (PK) models. The improved biphasic dissolution model was robust with respect to differing APIs and excipient concentrations. The dual paddle guaranteed homogeneous mixing in both phases; the optimal rotation speeds were 25 and 75 rpm for the aqueous and the octanol phase, respectively. A one-compartment PK model adequately characterised the data of fully dissolved APIs. A four-compartment PK model best quantified dissolution, precipitation, and partitioning, including of undissolved amounts arising from realistic pH profiles. The improved dissolution model is a powerful tool for investigating the interplay between dissolution, precipitation and partitioning of various poorly soluble APIs (BCS II). In vivo-relevant PK parameters could be estimated by applying the respective PK models. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Critical bounds on noise and SNR for robust estimation of real-time brain activity from functional near infra-red spectroscopy.

    PubMed

    Aqil, Muhammad; Jeong, Myung Yung

    2018-04-24

    The robust characterization of real-time brain activity carries potential for many applications. However, the contamination of measured signals by various instrumental, environmental, and physiological sources of noise introduces a substantial amount of signal variance and, consequently, challenges real-time estimation of contributions from underlying neuronal sources. Functional near infra-red spectroscopy (fNIRS) is an emerging imaging modality whose real-time potential is yet to be fully explored. The objectives of the current study are to (i) validate a time-dependent linear model of hemodynamic responses in fNIRS, and (ii) test the robustness of this approach against measurement noise (instrumental and physiological) and mis-specification of the hemodynamic response basis functions (amplitude, latency, and duration). We propose a linear hemodynamic model with time-varying parameters, which are estimated (adapted and tracked) using a dynamic recursive least-squares algorithm. Owing to the linear nature of the activation model, the problem of achieving robust convergence to an accurate estimation of the model parameters is recast as a problem of parameter error stability around the origin. We show that robust convergence of the proposed method is guaranteed in the presence of an acceptable degree of model misspecification and we derive an upper bound on noise under which reliable parameters can still be inferred. We also derive a lower bound on the signal-to-noise ratio above which reliable parameters can still be inferred from a channel/voxel. Whilst here applied to fNIRS, the proposed methodology is applicable to other hemodynamic-based imaging technologies such as functional magnetic resonance imaging. Copyright © 2018 Elsevier Inc. All rights reserved.
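
    The sketch below shows a generic recursive-least-squares update with a forgetting factor for tracking time-varying linear model parameters, in the spirit of the dynamic estimation described above; the regressors and noise level are illustrative, not the authors' fNIRS design matrix.

    ```python
    # Recursive least squares (RLS) with a forgetting factor for a time-varying linear model.
    import numpy as np

    rng = np.random.default_rng(3)
    n, p, lam = 500, 2, 0.98                      # samples, parameters, forgetting factor
    theta_true = np.array([1.0, -0.5])            # "true" parameters to be tracked

    P = np.eye(p) * 1000.0                        # inverse regressor-correlation matrix
    theta = np.zeros(p)
    for t in range(n):
        x = rng.normal(size=p)                    # regressor (e.g., basis-function values)
        y = x @ theta_true + 0.05 * rng.normal()  # noisy measurement
        k = P @ x / (lam + x @ P @ x)             # gain vector
        theta = theta + k * (y - x @ theta)       # parameter update
        P = (P - np.outer(k, x @ P)) / lam        # covariance update
    print("tracked parameters:", np.round(theta, 3))
    ```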

  10. Optimization of Transmit Parameters in Cardiac Strain Imaging With Full and Partial Aperture Coherent Compounding.

    PubMed

    Sayseng, Vincent; Grondin, Julien; Konofagou, Elisa E

    2018-05-01

    Coherent compounding methods using the full or partial transmit aperture have been investigated as a possible means of increasing strain measurement accuracy in cardiac strain imaging; however, the optimal transmit parameters in either compounding approach have yet to be determined. The relationship between strain estimation accuracy and transmit parameters (specifically the subaperture, angular aperture, tilt angle, number of virtual sources, and frame rate) in partial-aperture (subaperture compounding) and full-aperture (steered compounding) fundamental-mode cardiac imaging was thus investigated and compared. A Field II simulation of a 3-D cylindrical annulus undergoing deformation and twist was developed to evaluate the accuracy of 2-D strain estimation in cross-sectional views. The tradeoff between frame rate and number of virtual sources was then investigated via transthoracic imaging in the parasternal short-axis view of five healthy human subjects, using the strain filter to quantify estimation precision. Finally, the optimized subaperture compounding sequence (25-element subaperture, 90° angular aperture, 10 virtual sources, 300-Hz frame rate) was compared to the optimized steered compounding sequence (60° angular aperture, 15° tilt, 10 virtual sources, 300-Hz frame rate) via transthoracic imaging of five healthy subjects. Both approaches were determined to estimate cumulative radial strain with statistically equivalent precision (subaperture compounding E(SNRe%) = 3.56, steered compounding E(SNRe%) = 4.26).

  11. A variable temperature EPR study of Mn(2+)-doped NH(4)Cl(0.9)I(0.1) single crystal at 170 GHz: zero-field splitting parameter and its absolute sign.

    PubMed

    Misra, Sushil K; Andronenko, Serguei I; Chand, Prem; Earle, Keith A; Paschenko, Sergei V; Freed, Jack H

    2005-06-01

    EPR measurements have been carried out on a single crystal of Mn(2+)-doped NH(4)Cl(0.9)I(0.1) at 170 GHz in the temperature range 312-4.2 K. The spectra have been analyzed (i) to estimate the spin-Hamiltonian parameters; (ii) to study the temperature variation of the zero-field splitting (ZFS) parameter; (iii) to confirm the negative absolute sign of the ZFS parameter unequivocally from the temperature-dependent relative intensities of hyperfine sextets at temperatures below 10 K; and (iv) to detect the occurrence of a structural phase transition at 4.35 K from the change in the structure of the EPR lines with temperature below 10 K.

  12. Estimating unknown parameters in haemophilia using expert judgement elicitation.

    PubMed

    Fischer, K; Lewandowski, D; Janssen, M P

    2013-09-01

    The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life-expectancy. For each parameter experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients aged ≤ 18 years, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma-related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design. © 2013 John Wiley & Sons Ltd.
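
    One common way to turn an elicited (P10, median, P90) triple into a working distribution before pooling experts is to fit a lognormal to the quantiles; a minimal sketch with illustrative values is shown below (this is not the graphical combination method used in the study).

    ```python
    # Fit a lognormal to one expert's elicited P10 / median / P90 (illustrative values).
    import numpy as np
    from scipy import stats

    p10, median, p90 = 4.0, 12.0, 36.0                  # e.g., joint bleeds per year (made up)
    mu = np.log(median)
    sigma = (np.log(p90) - np.log(p10)) / (2 * 1.2816)  # 1.2816 = standard normal 90th percentile
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))

    print("implied mean:", round(dist.mean(), 1),
          "implied P10/P90:", np.round(dist.ppf([0.1, 0.9]), 1))
    ```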

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. Then the structure learning algorithm is extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.

  14. Density-based global sensitivity analysis of sheet-flow travel time: Kinematic wave-based formulations

    NASA Astrophysics Data System (ADS)

    Hosseini, Seiyed Mossa; Ataie-Ashtiani, Behzad; Simmons, Craig T.

    2018-04-01

    Despite advancements in developing physics-based formulations to estimate the sheet-flow travel time (tSHF), the quantification of the relative impacts of influential parameters on tSHF has not previously been considered. In this study, a brief review of the physics-based formulations to estimate tSHF including kinematic wave (K-W) theory in combination with Manning's roughness (K-M) and with Darcy-Weisbach friction formula (K-D) over single and multiple planes is provided. Then, the relative significance of input parameters to the developed approaches is quantified by a density-based global sensitivity analysis (GSA). The performance of K-M considering zero-upstream and uniform flow depth (so-called K-M1 and K-M2), and K-D formulae to estimate the tSHF over single plane surface were assessed using several sets of experimental data collected from the previous studies. The compatibility of the developed models to estimate tSHF over multiple planes considering temporal rainfall distributions of Natural Resources Conservation Service, NRCS (I, Ia, II, and III) are scrutinized by several real-world examples. The results obtained demonstrated that the main controlling parameters of tSHF through K-D and K-M formulae are the length of surface plane (mean sensitivity index T̂i = 0.72) and flow resistance (mean T̂i = 0.52), respectively. Conversely, the flow temperature and initial abstraction ratio of rainfall have the lowest influence on tSHF (mean T̂i is 0.11 and 0.12, respectively). The significant role of the flow regime on the estimation of tSHF over a single and a cascade of planes are also demonstrated. Results reveal that the K-D formulation provides more precise tSHF over the single plane surface with an average percentage of error, APE equal to 9.23% (the APE for K-M1 and K-M2 formulae were 13.8%, and 36.33%, respectively). The superiority of Manning-jointed formulae in estimation of tSHF is due to the incorporation of effects from different flow regimes as flow moves downgradient that is affected by one or more factors including high excess rainfall intensities, low flow resistance, high degrees of imperviousness, long surfaces, steep slope, and domination of rainfall distribution as NRCS Type I, II, or III.
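
    For orientation, the kinematic wave-Manning travel time used in formulations of this kind follows, in consistent SI units, from equating the equilibrium sheet-flow discharge with the rainfall excess, giving tSHF = (n·L)^0.6 / (ie^0.4 · S^0.3); the sketch below evaluates this expression for illustrative inputs (the notation and values are not taken from the paper).

    ```python
    # Kinematic wave-Manning sheet-flow travel time in consistent SI units (illustrative inputs).
    def km_travel_time(n, L, ie_mm_per_h, S):
        """Sheet-flow travel time in seconds.

        n            : Manning roughness coefficient (-)
        L            : plane length (m)
        ie_mm_per_h  : excess rainfall intensity (mm/h)
        S            : surface slope (m/m)
        """
        ie = ie_mm_per_h / 1000.0 / 3600.0              # convert to m/s
        return (n * L) ** 0.6 / (ie ** 0.4 * S ** 0.3)

    print(f"t_SHF = {km_travel_time(0.05, 50.0, 30.0, 0.02) / 60.0:.1f} minutes")
    ```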

  15. High resolution modelling of soil moisture patterns with TerrSysMP: A comparison with sensor network data

    NASA Astrophysics Data System (ADS)

    Gebler, S.; Hendricks Franssen, H.-J.; Kollet, S. J.; Qu, W.; Vereecken, H.

    2017-04-01

    The prediction of the spatial and temporal variability of land surface states and fluxes with land surface models at high spatial resolution is still a challenge. This study compares simulation results from TerrSysMP, including a 3D variably saturated groundwater flow model (ParFlow) coupled to the Community Land Model (CLM), for a 38 ha managed grassland headwater catchment in the Eifel (Germany) with soil water content (SWC) measurements from a wireless sensor network, actual evapotranspiration recorded by lysimeters and eddy covariance stations, and discharge observations. TerrSysMP was discretized with a 10 × 10 m lateral resolution, variable vertical resolution (0.025-0.575 m), and the following parameterization strategies of the subsurface soil hydraulic parameters: (i) completely homogeneous, (ii) homogeneous parameters for different soil horizons, (iii) different parameters for each soil unit and soil horizon and (iv) heterogeneous stochastic realizations. Hydraulic conductivity and Mualem-Van Genuchten parameters in these simulations were sampled from probability density functions, constructed from either (i) soil texture measurements and Rosetta pedotransfer functions (ROS), or (ii) soil hydraulic parameters estimated by 1D inverse modelling using shuffled complex evolution (SCE). The results indicate that the spatial variability of SWC at the scale of a small headwater catchment is dominated by topography and spatially heterogeneous soil hydraulic parameters. The spatial variability of the soil water content thereby increases as a function of heterogeneity of soil hydraulic parameters. For lower levels of complexity, spatial variability of the SWC was underrepresented, in particular for the ROS simulations. Whereas all model simulations were able to reproduce the seasonal evapotranspiration variability, the poor discharge simulations with high model bias are likely related to short-term ET dynamics and the lack of information about bedrock characteristics and an on-site drainage system in the uncalibrated model. In general, simulation performance was better for the SCE setups. The SCE simulations had a higher inverse air entry parameter, resulting in SWC dynamics in better correspondence with data than the ROS simulations during dry periods. This illustrates that small-scale measurements of soil hydraulic parameters cannot be transferred to the larger scale and that interpolated 1D inverse parameter estimates result in an acceptable performance for the catchment.

  16. VizieR Online Data Catalog: Spectroscopic analysis of 348 red giants (Zielinski+, 2012)

    NASA Astrophysics Data System (ADS)

    Zielinski, P.; Niedzielski, A.; Wolszczan, A.; Adamow, M.; Nowak, G.

    2012-10-01

    The atmospheric parameters were derived using a strictly spectroscopic method based on the LTE analysis of equivalent widths of FeI and FeII lines. With existing photometric data and the Hipparcos parallaxes, we estimated stellar masses and ages via evolutionary tracks fitting. The stellar radii were calculated from either estimated masses and the spectroscopic logg or from the spectroscopic Teff and estimated luminosities. The absolute radial velocities were obtained by cross-correlating spectra with a numerical template. Our high-quality, high-resolution optical spectra have been collected since 2004 with the Hobby-Eberly Telescope (HET), located in the McDonald Observatory. The telescope was equipped with the High Resolution Spectrograph (HRS; R~60000 resolution). (2 data files).

  17. Impact of the HERA I+II combined data on the CT14 QCD global analysis

    NASA Astrophysics Data System (ADS)

    Dulat, S.; Hou, T.-J.; Gao, J.; Guzzi, M.; Huston, J.; Nadolsky, P.; Pumplin, J.; Schmidt, C.; Stump, D.; Yuan, C.-P.

    2016-11-01

    A brief description of the impact of the recent HERA run I+II combination of inclusive deep inelastic scattering cross-section data on the CT14 global analysis of PDFs is given. The new CT14HERA2 PDFs at NLO and NNLO are illustrated. They employ the same parametrization used in the CT14 analysis, but with an additional shape parameter for describing the strange quark PDF. The HERA I+II data are reasonably well described by both CT14 and CT14HERA2 PDFs, and differences are smaller than the PDF uncertainties of the standard CT14 analysis. Both sets are acceptable when the error estimates are calculated in the CTEQ-TEA (CT) methodology and the standard CT14 PDFs are recommended to be continuously used for the analysis of LHC measurements.

  18. Confirmation of model-based dose selection for a Japanese phase III study of rivaroxaban in non-valvular atrial fibrillation patients.

    PubMed

    Kaneko, Masato; Tanigawa, Takahiko; Hashizume, Kensei; Kajikawa, Mariko; Tajiri, Masahiro; Mueck, Wolfgang

    2013-01-01

    This study was designed to confirm the appropriateness of the dose setting for a Japanese phase III study of rivaroxaban in patients with non-valvular atrial fibrillation (NVAF), which had been based on model simulation employing phase II study data. The previously developed mixed-effects pharmacokinetic/pharmacodynamic (PK-PD) model, which consisted of an oral one-compartment model parameterized in terms of clearance, volume and a first-order absorption rate, was rebuilt and optimized using the data for 597 subjects from the Japanese phase III study, J-ROCKET AF. A mixed-effects modeling technique in NONMEM was used to quantify both unexplained inter-individual variability and inter-occasion variability, which are random effect parameters. The final PK and PK-PD models were evaluated to identify influential covariates. The empirical Bayes estimates of AUC and C(max) from the final PK model were consistent with the simulated results from the Japanese phase II study. There was no clear relationship between individual estimated exposures and safety-related events, and the estimated exposure levels were consistent with the global phase III data. Therefore, it was concluded that the dose selected for the phase III study with Japanese NVAF patients by means of model simulation employing phase II study data had been appropriate from the PK-PD perspective.

  19. Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology

    PubMed Central

    Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal

    2009-01-01

    The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
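
    A bare-bones sketch of the phase-two IPW (Horvitz-Thompson) step is shown below: a stratified subsample with known sampling fractions is reweighted to estimate a cohort total. The data are simulated, and the weight-adjustment (calibration) step discussed in the article is not included.

    ```python
    # Horvitz-Thompson / IPW estimation of a cohort total from a stratified phase-two sample.
    import numpy as np

    rng = np.random.default_rng(7)
    N = 10000
    stratum = rng.integers(0, 2, size=N)                  # two phase-two strata
    y = rng.normal(loc=np.where(stratum == 0, 1.0, 3.0))  # toy influence-function contributions

    frac = np.array([0.05, 0.5])                          # known phase-two sampling fractions
    sampled = rng.uniform(size=N) < frac[stratum]
    weights = 1.0 / frac[stratum[sampled]]

    ht_total = np.sum(weights * y[sampled])
    print(f"true total: {y.sum():.0f}, IPW estimate: {ht_total:.0f}")
    ```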

  20. Evaluation of SAGE II and Balloon-Borne Stratospheric Aerosol Measurements: Evaluation of Aerosol Measurements from SAGE II, HALOE, and Balloonborne Optical Particle Counters

    NASA Technical Reports Server (NTRS)

    Hervig, Mark; Deshler, Terry; Moddrea, G. (Technical Monitor)

    2002-01-01

    Stratospheric aerosol measurements from the University of Wyoming balloonborne optical particle counters (OPCs), the Stratospheric Aerosol and Gas Experiment (SAGE) II, and the Halogen Occultation Experiment (HALOE) were compared in the period 1982-2000, when measurements were available. The OPCs measure aerosol size distributions, and HALOE multiwavelength (2.45-5.26 micrometers) extinction measurements can be used to retrieve aerosol size distributions. Aerosol extinctions at the SAGE II wavelengths (0.386-1.02 micrometers) were computed from these size distributions and compared to SAGE II measurements. In addition, surface areas derived from all three experiments were compared. While the overall impression from these results is encouraging, the agreement can change with latitude, altitude, time, and parameter. In the broadest sense, these comparisons fall into two categories: high aerosol loading (volcanic periods) and low aerosol loading (background periods and altitudes above 25 km). When the aerosol amount was low, SAGE II and HALOE extinctions were higher than the OPC estimates, while the SAGE II surface areas were lower than HALOE and the OPCs. Under high loading conditions all three instruments mutually agree to within 50%.

  1. Long-rising Type II supernovae from Palomar Transient Factory and Caltech Core-Collapse Project

    DOE PAGES

    Taddia, Francesco; Sollerman, J.; Fremling, C.; ...

    2016-03-09

    Context. Supernova (SN) 1987A was a peculiar hydrogen-rich event with a long-rising (~84 d) light curve, stemming from the explosion of a compact blue supergiant star. Only a few similar events have been presented in the literature in recent decades. Aims. We present new data for a sample of six long-rising Type II SNe (SNe II), three of which were discovered and observed by the Palomar Transient Factory (PTF) and three observed by the Caltech Core-Collapse Project (CCCP). Our aim is to enlarge this small family of long-rising SNe II, characterizing their differences in terms of progenitor and explosion parameters. We also study the metallicity of their environments. Methods. Optical light curves, spectra, and host-galaxy properties of these SNe are presented and analyzed. Detailed comparisons with known SN 1987A-like events in the literature are shown, with particular emphasis on the absolute magnitudes, colors, expansion velocities, and host-galaxy metallicities. Bolometric properties are derived from the multiband light curves. By modeling the early-time emission with scaling relations derived from the SuperNova Explosion Code (SNEC) models of MESA progenitor stars, we estimate the progenitor radii of these transients. The modeling of the bolometric light curves also allows us to estimate other progenitor and explosion parameters, such as the ejected 56Ni mass, the explosion energy, and the ejecta mass. Results. We present PTF12kso, a long-rising SN II that is estimated to have the largest amount of ejected 56Ni mass measured for this class. PTF09gpn and PTF12kso are found at the lowest host metallicities observed for this SN group. The variety of early light-curve luminosities depends on the wide range of progenitor radii of these SNe, from a few tens of R⊙ (SN 2005ci) up to thousands (SN 2004ek) with some intermediate cases between 100 R⊙ (PTF09gpn) and 300 R⊙ (SN 2004em). Conclusions. We confirm that long-rising SNe II with light-curve shapes closely resembling that of SN 1987A generally arise from blue supergiant (BSG) stars. However, some of them, such as SN 2004em, likely have progenitors with larger radii (~300 R⊙, typical of yellow supergiants) and can thus be regarded as intermediate cases between normal SNe IIP and SN 1987A-like SNe. Some extended red supergiant (RSG) stars such as the progenitor of SN 2004ek can also produce long-rising SNe II if they synthesized a large amount of 56Ni in the explosion. Lastly, low host metallicity is confirmed as a characteristic of the SNe arising from compact BSG stars.

  2. Prediction of Petermann I and II Spot Sizes for Single-mode Dispersion-shifted and Dispersion-flattened Fibers by a Simple Technique

    NASA Astrophysics Data System (ADS)

    Kamila, Kiranmay; Panda, Anup Kumar; Gangopadhyay, Sankar

    2013-09-01

    Employing the series expression for the fundamental modal field of dispersion-shifted trapezoidal and dispersion-flattened graded and step W fibers, we present simple but accurate analytical expressions for the Petermann I and II spot sizes of such kinds of fibers. Choosing some typical dispersion-shifted trapezoidal and dispersion-flattened graded and step W fibers as examples, we show that our estimates match the exact numerical results excellently. The evaluation of the concerned propagation parameters by our formalism requires very little computation. This accurate but simple formalism will benefit system engineers working in the field of all-optical technology.
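
    The paper's own series-expansion formulas are not reproduced here; as a point of reference, the sketch below numerically evaluates the standard integral definitions of the Petermann I (near-field moment) and Petermann II (field-gradient) spot sizes for an assumed radially symmetric modal field. For a Gaussian field both definitions reduce to the 1/e field radius, which provides a self-check; the field shape and radius used are illustrative assumptions, not a fiber from the paper.

      # Minimal numerical sketch of the standard Petermann I and II spot-size
      # definitions for a radially symmetric fundamental modal field psi(r).
      import numpy as np

      def petermann_spot_sizes(r, psi):
          """Return (w_I, w_II) from samples of the modal field psi on radii r."""
          dpsi_dr = np.gradient(psi, r)
          def integral(f):                      # trapezoidal rule over 0..r_max
              return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
          i_psi2_r  = integral(psi**2 * r)      # integral of psi^2 r dr
          i_psi2_r3 = integral(psi**2 * r**3)   # integral of psi^2 r^3 dr
          i_grad2_r = integral(dpsi_dr**2 * r)  # integral of (dpsi/dr)^2 r dr
          w_I  = np.sqrt(2.0 * i_psi2_r3 / i_psi2_r)   # Petermann I (near-field moments)
          w_II = np.sqrt(2.0 * i_psi2_r  / i_grad2_r)  # Petermann II (far-field / gradient)
          return w_I, w_II

      if __name__ == "__main__":
          w0 = 4.5e-6                            # hypothetical 1/e field radius [m]
          r = np.linspace(1e-9, 10 * w0, 20000)
          psi = np.exp(-(r / w0) ** 2)           # Gaussian fundamental mode
          print(petermann_spot_sizes(r, psi))    # both should be close to w0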

  3. Bridging the Global Precipitation and Soil Moisture Active Passive Missions: Variability of Microwave Surface Emissivity from In situ and Remote Sensing Perspectives

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Kirstetter, P.; Hong, Y.; Turk, J.

    2016-12-01

    The overland precipitation retrievals from satellite passive microwave (PMW) sensors such as the Global Precipitation Measurement (GPM) mission microwave imager (GMI) are impacted by the land surface emissivity. The estimation of PMW emissivity faces challenges because it is highly variable under the influence of surface properties such as soil moisture, surface roughness and vegetation. This study proposes an improved quantitative understanding of the relationship between the emissivity and surface parameters. Surface parameter information is obtained through (i) in-situ measurements from the International Soil Moisture Network and (ii) satellite measurements from the Soil Moisture Active Passive (SMAP) mission, which provides global-scale soil moisture estimates. The variation of emissivity is quantified with soil moisture, surface temperature and vegetation at various frequencies/polarizations and over different types of land surfaces to shed light on the processes governing the emission of the land surface. This analysis is used to estimate the emissivity under rainy conditions. The framework built with in-situ measurements serves as a benchmark for satellite-based analyses, which paves the way toward global-scale emissivity estimates using SMAP.

  4. Sampling for Air Chemical Emissions from the Life Sciences Laboratory II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballinger, Marcel Y.; Lindberg, Michael J.

    Sampling for air chemical emissions from the Life Science Laboratory II (LSL-II) ventilation stack was performed in an effort to determine potential exposure of maintenance staff to laboratory exhaust on the building roof. The concern about worker exposure was raised in December 2015 and several activities were performed to assist in estimating exposure concentrations. Data quality objectives were developed to determine the need for and scope and parameters of a sampling campaign to measure chemical emissions from research and development activities to the outside air. The activities provided data on temporal variation of air chemical concentrations and a basis for evaluating calculated emissions. Sampling for air chemical emissions was performed in the LSL-II ventilation stack over the 6-week period from July 26 to September 1, 2016. A total of 12 sampling events were carried out using 16 sample media. Resulting analysis provided concentration data on 49 analytes. All results were below occupational exposure limits and most results were below detection limits. When compared to calculated emissions, only 5 of the 49 chemicals had measured concentrations greater than predicted. This sampling effort will inform other study components to develop a more complete picture of a worker’s potential exposure from LSL-II rooftop activities. Mixing studies were conducted to inform spatial variation in concentrations at other rooftop locations and can be used in conjunction with these results to provide temporal variations in concentrations for estimating the potential exposure to workers working in and around the LSL-II stack.

  5. PREVALENCE OF METABOLIC SYNDROME IN YOUNG MEXICANS: A SENSITIVITY ANALYSIS ON ITS COMPONENTS.

    PubMed

    Murguía-Romero, Miguel; Jiménez-Flores, J Rafael; Sigrist-Flores, Santiago C; Tapia-Pancardo, Diana C; Jiménez-Ramos, Arnulfo; Méndez-Cruz, A René; Villalobos-Molina, Rafael

    2015-07-28

    Obesity is a worldwide epidemic, and the high prevalence of type II diabetes (DM2) and cardiovascular disease (CVD) is in great part a consequence of that epidemic. Metabolic syndrome (MetS) is a useful tool to estimate the risk of a young population evolving to DM2 and CVD. The aims were to estimate the MetS prevalence in young Mexicans, and to evaluate each parameter as an independent indicator through a sensitivity analysis. The prevalence of MetS was estimated in 6063 young people of the Mexico City metropolitan area. A sensitivity analysis was conducted to estimate the performance of each of the components of MetS as an indicator of the presence of MetS itself. Five statistics of the sensitivity analysis were calculated for each MetS component and for the other parameters included: sensitivity, specificity, positive predictive value or precision, negative predictive value, and accuracy. The prevalence of MetS in the young Mexican population was estimated to be 13.4%. Waist circumference presented the highest sensitivity (96.8% women; 90.0% men), blood pressure presented the highest specificity for women (97.7%) and glucose for men (91.0%). When all five statistics are considered, triglycerides is the component with the highest values, showing a value of 75% or more in four of them. Differences by sex are detected in the averages of all MetS components in young people without alterations. Young Mexicans are highly prone to acquire MetS: 71% have at least one and up to five MetS parameters altered, and 13.4% of them have MetS. Of all five components of MetS, waist circumference presented the highest sensitivity as a predictor of MetS, and triglycerides is the best parameter if a single factor is to be taken as the sole predictor of MetS in the young Mexican population; triglycerides is also the parameter with the highest accuracy. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
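
    For reference, the five screening statistics named above follow directly from a 2x2 confusion matrix; the minimal sketch below computes them. The counts used in the example are hypothetical and are not the study's data.

      # Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 confusion matrix.
      def screening_statistics(tp, fp, fn, tn):
          sensitivity = tp / (tp + fn)          # true-positive rate
          specificity = tn / (tn + fp)          # true-negative rate
          ppv = tp / (tp + fp)                  # positive predictive value (precision)
          npv = tn / (tn + fn)                  # negative predictive value
          accuracy = (tp + tn) / (tp + fp + fn + tn)
          return dict(sensitivity=sensitivity, specificity=specificity,
                      ppv=ppv, npv=npv, accuracy=accuracy)

      # Example: a single component used as predictor of MetS (made-up counts)
      print(screening_statistics(tp=780, fp=1400, fn=30, tn=3853))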

  6. SN 2012ec: mass of the progenitor from PESSTO follow-up of the photospheric phase

    NASA Astrophysics Data System (ADS)

    Barbarino, C.; Dall'Ora, M.; Botticella, M. T.; Della Valle, M.; Zampieri, L.; Maund, J. R.; Pumo, M. L.; Jerkstrand, A.; Benetti, S.; Elias-Rosa, N.; Fraser, M.; Gal-Yam, A.; Hamuy, M.; Inserra, C.; Knapic, C.; LaCluyze, A. P.; Molinaro, M.; Ochner, P.; Pastorello, A.; Pignata, G.; Reichart, D. E.; Ries, C.; Riffeser, A.; Schmidt, B.; Schmidt, M.; Smareglia, R.; Smartt, S. J.; Smith, K.; Sollerman, J.; Sullivan, M.; Tomasella, L.; Turatto, M.; Valenti, S.; Yaron, O.; Young, D.

    2015-04-01

    We present the results of a photometric and spectroscopic monitoring campaign of SN 2012ec, which exploded in the spiral galaxy NGC 1084, during the photospheric phase. The photometric light curve exhibits a plateau with luminosity L = 0.9 × 10^42 erg s^-1 and duration ~90 d, which is somewhat shorter than that of standard Type II-P supernovae (SNe). We estimate the nickel mass M(56Ni) = 0.040 ± 0.015 M⊙ from the luminosity at the beginning of the radioactive tail of the light curve. The explosion parameters of SN 2012ec were estimated from the comparison of the bolometric light curve and the observed temperature and velocity evolution of the ejecta with predictions from hydrodynamical models. We derived an envelope mass of 12.6 M⊙, an initial progenitor radius of 1.6 × 10^13 cm and an explosion energy of 1.2 foe. These estimates agree with an independent study of the progenitor star identified in pre-explosion images, for which an initial mass of M = 14-22 M⊙ was determined. We have applied the same analysis to two other Type II-P SNe (SNe 2012aw and 2012A), and carried out a comparison with the properties of SN 2012ec derived in this paper. We find a reasonable agreement between the masses of the progenitors obtained from pre-explosion images and masses derived from hydrodynamical models. We estimate the distance to SN 2012ec with the standardized candle method (SCM) and compare it with other estimates based on other primary and secondary indicators. SNe 2012A, 2012aw and 2012ec all follow the standard SCM relations for the use of Type II-P SNe as distance indicators.
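
    Estimating the nickel mass from the tail luminosity, as described above, is commonly done by equating the observed bolometric luminosity to the energy deposition rate of the 56Ni -> 56Co -> 56Fe decay chain, assuming full gamma-ray trapping. The sketch below uses the widely quoted deposition coefficients of Nadyozhin (1994); this is a generic illustration, not necessarily the calibration used by the authors, and the input luminosity and epoch are made up.

      import numpy as np

      def ni56_mass(L_tail_erg_s, t_days):
          """56Ni mass [Msun] implied by the bolometric luminosity on the radioactive
          tail, assuming full trapping of the decay energy."""
          # Energy deposition per solar mass of 56Ni (erg/s): Ni and Co decay terms
          eps = 6.45e43 * np.exp(-t_days / 8.8) + 1.45e43 * np.exp(-t_days / 111.3)
          return L_tail_erg_s / eps

      # Hypothetical tail point: L = 1.0e41 erg/s measured 150 d after explosion
      print(ni56_mass(1.0e41, 150.0))   # ~0.03 Msun for these made-up numbers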

  7. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.

    PubMed

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming

    2016-08-01

    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, hybridizing a known algorithm called NSGA-II with an adaptive population-based simulated annealing (APBSA) method, is developed to solve systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate the appropriate parameters of the algorithm. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management.
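
    Of the four performance measures listed, the mean ideal distance is the simplest to reproduce: it is usually computed as the average Euclidean distance of the nondominated solutions from the ideal point. The sketch below follows that common definition; normalizations differ between papers, so it may not match the authors' exact formula, and the example front is invented.

      import numpy as np

      def mean_ideal_distance(front, ideal=None):
          """Mean Euclidean distance of Pareto-front points from the ideal point
          (minimization objectives). `front` has shape (n_solutions, n_objectives)."""
          front = np.asarray(front, dtype=float)
          if ideal is None:
              ideal = front.min(axis=0)          # component-wise best values
          return np.mean(np.linalg.norm(front - ideal, axis=1))

      # Hypothetical bi-objective front: (system unreliability, cost)
      front = np.array([[0.02, 9.0], [0.05, 6.5], [0.10, 5.0]])
      print(mean_ideal_distance(front))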

  8. Detecting the Lμ-Lτ gauge boson at Belle II

    NASA Astrophysics Data System (ADS)

    Araki, Takeshi; Hoshino, Shihori; Ota, Toshihiko; Sato, Joe; Shimomura, Takashi

    2017-03-01

    We discuss the feasibility of detecting the gauge boson of the U(1) Lμ-Lτ symmetry, which possesses a mass in the range between MeV and GeV, at the Belle-II experiment. The kinetic mixing between the new gauge boson Z' and the photon is forbidden at the tree level and is radiatively induced. The leptonic force mediated by such a light boson is motivated by the discrepancy in the muon anomalous magnetic moment and also the gap in the energy spectrum of cosmic neutrinos. Defining the process e+e- → γ Z' → γ ν ν̄ (missing energy) to be the signal, we estimate the numbers of the signal and the background events and show the parameter region to which the Belle-II experiment will be sensitive. The signal process in the Lμ-Lτ model is enhanced with a light Z', which is a characteristic feature differing from the dark photon models with a constant kinetic mixing. We find that the Belle-II experiment with the design luminosity will be sensitive to the Z' with the mass MZ' ≲ 1 GeV and the new gauge coupling gZ' ≳ 8 × 10^-4, which covers a half of the unconstrained parameter region that explains the discrepancy in the muon anomalous magnetic moment. The possibilities to improve the significance of the detection are also discussed.

  9. The contribution of NOAA/CMDL ground-based measurements to understanding long-term stratospheric changes

    NASA Astrophysics Data System (ADS)

    Montzka, S. A.; Butler, J. H.; Dutton, G.; Thompson, T. M.; Hall, B.; Mondeel, D. J.; Elkins, J. W.

    2005-05-01

    The El-Nino/Southern-Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed.
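
    The joint state and parameter estimation described here is conventionally implemented by state augmentation: the uncertain parameters are appended to the state vector, given (near-)random-walk dynamics, and estimated by the same EKF that tracks the physical state. The sketch below illustrates this on a toy scalar map rather than the intermediate coupled ENSO model; the model, noise levels, and parameter values are assumptions for illustration only.

      import numpy as np

      def fd_jacobian(f, x, eps=1e-6):
          """Finite-difference Jacobian of f evaluated at x."""
          fx = np.atleast_1d(f(x))
          J = np.zeros((fx.size, x.size))
          for j in range(x.size):
              dx = np.zeros_like(x); dx[j] = eps
              J[:, j] = (np.atleast_1d(f(x + dx)) - fx) / eps
          return J

      def ekf_step(x, P, z, f, h, Q, R):
          """One predict/update cycle of an EKF on the augmented state x = [state, parameters]."""
          F = fd_jacobian(f, x)
          x = f(x); P = F @ P @ F.T + Q                     # predict
          H = fd_jacobian(h, x)
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
          x = x + K @ (z - h(x))                            # update with the innovation
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      # Toy stand-in model: logistic map with an unknown growth parameter mu,
      # observed directly (mu here mimics an uncertain coupling coefficient).
      def f(x):            # x = [physical state s, parameter mu]; mu follows a random walk
          s, mu = x
          return np.array([mu * s * (1.0 - s), mu])

      def h(x):            # observe the physical state only
          return np.array([x[0]])

      rng = np.random.default_rng(0)
      true_mu, s = 3.5, 0.3
      x, P = np.array([0.3, 3.0]), np.diag([0.1, 0.5])
      Q, R = np.diag([1e-5, 1e-4]), np.array([[1e-3]])
      for _ in range(200):
          s = true_mu * s * (1.0 - s)
          z = np.array([s + 0.03 * rng.standard_normal()])
          x, P = ekf_step(x, P, z, f, h, Q, R)
      print(x)   # the second component should move toward the true value (~3.5)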

  10. Event-scale power law recession analysis: quantifying methodological uncertainty

    NASA Astrophysics Data System (ADS)

    Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.

    2017-01-01

    The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
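
    As a concrete example of one common event-scale fitting choice (one of several compared in the paper), the power-law recession model -dQ/dt = a Q^b can be fitted to a single event by estimating -dQ/dt with finite differences and regressing log(-dQ/dt) on log Q. The sketch below does exactly that on a synthetic recession with known parameters; the differencing scheme and data-selection threshold are illustrative assumptions, not the study's definitions.

      import numpy as np

      def fit_recession(q, dt_days=1.0):
          """Fit -dQ/dt = a * Q**b to one recession event (daily streamflow q)."""
          q = np.asarray(q, dtype=float)
          dqdt = np.diff(q) / dt_days             # forward differences
          qm = 0.5 * (q[1:] + q[:-1])             # flow at the interval midpoint
          mask = (dqdt < 0) & (qm > 0)            # keep strictly receding, positive flows
          slope, intercept = np.polyfit(np.log(qm[mask]), np.log(-dqdt[mask]), 1)
          return np.exp(intercept), slope         # (a, b)

      # Synthetic recession generated with known parameters as a quick self-check
      a_true, b_true = 0.05, 1.5
      q = [10.0]
      for _ in range(30):
          q.append(q[-1] - a_true * q[-1] ** b_true)   # explicit Euler step, dt = 1 day
      print(fit_recession(q))                           # roughly (0.05, 1.5)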

  11. Fast spacecraft adaptive attitude tracking control through immersion and invariance design

    NASA Astrophysics Data System (ADS)

    Wen, Haowei; Yue, Xiaokui; Li, Peng; Yuan, Jianping

    2017-10-01

    This paper presents a novel non-certainty-equivalence adaptive control method for the attitude tracking control problem of spacecraft with inertia uncertainties. The proposed immersion and invariance (I&I) based adaptation law provides a more direct and flexible approach to circumvent the limitations of the basic I&I method without employing any filter signal. By virtue of the adaptation high-gain equivalence property derived from the proposed adaptive method, the closed-loop adaptive system with a low adaptation gain can recover the high adaptation gain performance of the filter-based I&I method, and the resulting control torque demands during the initial transient are significantly reduced. A special feature of this method is that the convergence of the parameter estimation error is noticeably improved by utilizing an adaptation gain matrix instead of a single adaptation gain value. Numerical simulations are presented to highlight the various benefits of the proposed method compared with the certainty-equivalence-based control method and filter-based I&I control schemes.

  12. Inverse modeling with RZWQM2 to predict water quality

    USGS Publications Warehouse

    Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.

    2011-01-01

    This chapter presents guidelines for autocalibration of the Root Zone Water Quality Model (RZWQM2) by inverse modeling using PEST parameter estimation software (Doherty, 2010). Two sites with diverse climate and management were considered for simulation of N losses by leaching and in drain flow: an almond [Prunus dulcis (Mill.) D.A. Webb] orchard in the San Joaquin Valley, California and the Walnut Creek watershed in central Iowa, which is predominantly in corn (Zea mays L.)–soybean [Glycine max (L.) Merr.] rotation. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals and sensitivities. We describe operation of PEST in both parameter estimation and predictive analysis modes. The goal of parameter estimation is to identify a unique set of parameters that minimize a weighted least squares objective function, and the goal of predictive analysis is to construct a nonlinear confidence interval for a prediction of interest by finding a set of parameters that maximizes or minimizes the prediction while maintaining the model in a calibrated state. We also describe PEST utilities (PAR2PAR, TSPROC) for maintaining ordered relations among model parameters (e.g., soil root growth factor) and for post-processing of RZWQM2 outputs representing different cropping practices at the Iowa site. Inverse modeling provided reasonable fits to observed water and N fluxes and directly benefitted the modeling through: (i) simultaneous adjustment of multiple parameters versus one-at-a-time adjustment in manual approaches; (ii) clear indication by convergence criteria of when calibration is complete; (iii) straightforward detection of nonunique and insensitive parameters, which can affect the stability of PEST and RZWQM2; and (iv) generation of confidence intervals for uncertainty analysis of parameters and model predictions. Composite scaled sensitivities, which reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation is less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03–0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and median nitrate concentration in a collocated monitoring well as part of model evaluation. 
Both the observed recharge (42.3 cm yr−1) and nitrate concentration (24.3 mg L−1) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.
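
    Composite scaled sensitivities of the kind referred to above are commonly computed as css_j = sqrt( (1/ND) * sum_i [ (dy_i/db_j) * b_j * w_i^(1/2) ]^2 ), i.e., dimensionless sensitivities scaled by the parameter value and the observation weights. The sketch below evaluates this with a finite-difference Jacobian of a stand-in two-parameter model; it is not RZWQM2 or PEST, and the model, weights, and parameter values are assumptions.

      import numpy as np

      def composite_scaled_sensitivities(model, params, weights, rel_step=1e-4):
          """css_j = sqrt( (1/ND) * sum_i [ (dy_i/db_j) * b_j * sqrt(w_i) ]^2 ).
          `model(params)` returns the ND simulated equivalents of the observations."""
          params = np.asarray(params, dtype=float)
          y0 = np.asarray(model(params), dtype=float)
          css = np.zeros(params.size)
          for j in range(params.size):
              p = params.copy()
              db = rel_step * max(abs(p[j]), 1e-12)
              p[j] += db
              dy_db = (np.asarray(model(p)) - y0) / db            # sensitivity column
              css[j] = np.sqrt(np.mean((dy_db * params[j] * np.sqrt(weights)) ** 2))
          return css

      # Stand-in two-parameter model (a first-order leaching curve), not RZWQM2
      times = np.linspace(1, 10, 20)
      model = lambda b: b[0] * np.exp(-b[1] * times)
      print(composite_scaled_sensitivities(model, params=[50.0, 0.3],
                                           weights=np.full(times.size, 1.0)))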

  13. Escherichia coli Survival in, and Release from, White-Tailed Deer Feces

    PubMed Central

    Fry, Jessica; Ives, Rebecca L.; Rose, Joan B.

    2014-01-01

    White-tailed deer are an important reservoir for pathogens that can contribute a large portion of microbial pollution in fragmented agricultural and forest landscapes. The scarcity of experimental data on survival of microorganisms in and release from deer feces makes prediction of their fate and transport less reliable and development of efficient strategies for environment protection more difficult. The goal of this study was to estimate parameters for modeling Escherichia coli survival in and release from deer (Odocoileus virginianus) feces. Our objectives were as follows: (i) to measure survival of E. coli in deer pellets at different temperatures, (ii) to measure kinetics of E. coli release from deer pellets at different rainfall intensities, and (iii) to estimate parameters of models describing survival and release of microorganisms from deer feces. Laboratory experiments were conducted to study E. coli survival in deer pellets at three temperatures and to estimate parameters of Chick's exponential model with temperature correction based on the Arrhenius equation. Kinetics of E. coli release from deer pellets were measured at two rainfall intensities and used to derive the parameters of the Bradford-Schijven model of bacterial release. The results showed that parameters of the survival and release models obtained for E. coli in this study substantially differed from those obtained by using other source materials, e.g., feces of domestic animals and manures. This emphasizes the necessity of comprehensive studies of survival of naturally occurring populations of microorganisms in and release from wildlife animal feces in order to achieve better predictions of microbial fate and transport in fragmented agricultural and forest landscapes. PMID:25480751
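
    Chick's exponential model with an Arrhenius temperature correction, named above as the survival model, can be written as C(t) = C0 exp(-k(T) t) with k(T) following the Arrhenius law. A minimal sketch is given below; the reference rate, activation energy, and temperatures are placeholder values, not the parameters fitted in the study.

      import numpy as np

      R_GAS = 8.314          # J mol^-1 K^-1

      def inactivation_rate(temp_c, k_ref=0.15, temp_ref_c=20.0, ea=60_000.0):
          """First-order rate k(T) [1/day], Arrhenius-corrected relative to a reference temperature."""
          t, t_ref = temp_c + 273.15, temp_ref_c + 273.15
          return k_ref * np.exp(-ea / R_GAS * (1.0 / t - 1.0 / t_ref))

      def survival(c0, temp_c, t_days):
          """Chick's law: C(t) = C0 * exp(-k(T) * t)."""
          return c0 * np.exp(-inactivation_rate(temp_c) * np.asarray(t_days))

      t = np.arange(0, 29, 7)
      print(survival(1e6, temp_c=4.0, t_days=t))    # slower die-off at low temperature
      print(survival(1e6, temp_c=23.0, t_days=t))   # faster die-off at higher temperature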

  14. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results from a global top-end NWP model tuning exercise are presented.
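
    Based only on the two-step description given above (sample each ensemble member's parameters from a proposal distribution, then feed back their relative merits through a likelihood against verifying observations), a generic importance-weighted update of a Gaussian proposal might look like the sketch below. This is not the actual EPPES algorithm or its equations; the toy forecast-error function and all numbers are assumptions.

      import numpy as np
      rng = np.random.default_rng(1)

      def eppes_like_cycle(mean, cov, n_members, forecast_error, obs_sigma=1.0):
          """One assimilation-window cycle: sample parameters, score forecasts,
          update the Gaussian proposal with likelihood-weighted sample moments."""
          theta = rng.multivariate_normal(mean, cov, size=n_members)       # (i) perturb members
          errs = np.array([forecast_error(t) for t in theta])              # verify against observations
          logw = -0.5 * (errs / obs_sigma) ** 2                            # Gaussian likelihood
          w = np.exp(logw - logw.max()); w /= w.sum()
          new_mean = w @ theta                                             # (ii) feed back relative merits
          d = theta - new_mean
          new_cov = (w[:, None] * d).T @ d + 1e-6 * np.eye(len(mean))      # keep the proposal proper
          return new_mean, new_cov

      # Toy closure parameter with an unknown "true" value 2.5; the forecast error
      # shrinks as the sampled parameter approaches it.
      true_val = np.array([2.5])
      ferr = lambda t: float(np.abs(t - true_val))
      mean, cov = np.array([1.0]), np.array([[1.0]])
      for _ in range(15):
          mean, cov = eppes_like_cycle(mean, cov, n_members=50, forecast_error=ferr)
      print(mean)   # should move toward ~2.5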

  15. Theoretical Advances in Sequential Data Assimilation for the Atmosphere and Oceans

    NASA Astrophysics Data System (ADS)

    Ghil, M.

    2007-05-01

    We concentrate here on two aspects of advanced Kalman-filter-related methods: (i) the stability of the forecast-assimilation cycle, and (ii) parameter estimation for the coupled ocean-atmosphere system. The nonlinear stability of a prediction-assimilation system guarantees the uniqueness of the sequentially estimated solutions in the presence of partial and inaccurate observations, distributed in space and time; this stability is shown to be a necessary condition for the convergence of the state estimates to the true evolution of the turbulent flow. The stability properties of the governing nonlinear equations and of several data assimilation systems are studied by computing the spectrum of the associated Lyapunov exponents. These ideas are applied to a simple and an intermediate model of atmospheric variability and we show that the degree of stabilization depends on the type and distribution of the observations, as well as on the data assimilation method. These results represent joint work with A. Carrassi, A. Trevisan and F. Uboldi. Much is known by now about the main physical mechanisms that give rise to and modulate the El-Nino/Southern-Oscillation (ENSO), but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. Model behavior is very sensitive to two key parameters: (a) "mu", the ocean-atmosphere coupling coefficient between the sea-surface temperature (SST) and wind stress anomalies; and (b) "delta-s", the surface-layer coefficient. Previous work has shown that "delta-s" determines the period of the model's self-sustained oscillation, while "mu" measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed. These results arise from joint work with D. Kondrashov and C.-j. Sun.

  16. Atomic Force Microscopy for Investigation of Ribosome-inactivating Proteins' Type II Tetramerization

    NASA Astrophysics Data System (ADS)

    Savvateev, M.; Kozlovskaya, N.; Moisenovich, M.; Tonevitsky, A.; Agapov, I.; Maluchenko, N.; Bykov, V.; Kirpichnikov, M.

    2003-12-01

    The biology of these toxins depends strongly on the organization of their carbohydrate-binding centres. Toxin tetramerization can lead both to an increase in the number of lectin-binding centres and to changes in their structural organization. The number and three-dimensional localization of such centres per molecule strongly influence the toxins' biological properties. Ricin was used to obtain AFM images of natural dimeric RIPsII structures, while ricinus agglutinin was used to obtain AFM images of natural tetrameric RIPsII forms. It is well known that viscumin (60 kDa) can form tetrameric structures depending on ambient conditions and its concentration. Use of the dimer-tetramer model based on ricin and ricinus agglutinin allowed us to identify viscumin tetramers in AFM scans and to distinguish them from dimeric viscumin structures. Quantitative analysis performed with the NT-MDT software allowed us to estimate the geometrical parameters of ricin, ricinus agglutinin and viscumin molecules.

  17. The NASA F-15 Intelligent Flight Control Systems: Generation II

    NASA Technical Reports Server (NTRS)

    Buschbacher, Mark; Bosworth, John

    2006-01-01

    The Second Generation (Gen II) control system for the F-15 Intelligent Flight Control System (IFCS) program implements direct adaptive neural networks to demonstrate robust tolerance to faults and failures. The direct adaptive tracking controller integrates learning neural networks (NNs) with a dynamic inversion control law. The term direct adaptive is used because the error between the reference model and the aircraft response is being compensated or directly adapted to minimize error without regard to knowing the cause of the error. No parameter estimation is needed for this direct adaptive control system. In the Gen II design, the feedback errors are regulated with a proportional-plus-integral (PI) compensator. This basic compensator is augmented with an online NN that changes the system gains via an error-based adaptation law to improve aircraft performance at all times, including normal flight, system failures, mispredicted behavior, or changes in behavior resulting from damage.

  18. MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields

    NASA Astrophysics Data System (ADS)

    Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria

    2015-08-01

    We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to study real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale and that a multihomogeneous model is needed to explain their complex scaling behaviour. In order to perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similar to some integer-degree models, such as the infinite line mass; and (iii) unlike the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales, by a simple search, at any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimations that we call a multihomogeneous model. This defines a new source-parameter estimation technique (Multi-HOmogeneity Depth Estimation, MHODE) that permits retrieval of the source parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness also in a real-case example. These applications show the usefulness of the new concepts, multihomogeneity and fractional homogeneity degree, to obtain valid estimates of the source parameters in a consistent theoretical framework, so overcoming the limitations imposed by global homogeneity on widespread methods, such as Euler deconvolution.
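
    For comparison, the widespread method mentioned at the end of the abstract, Euler deconvolution, solves Euler's homogeneity equation (x - x0) df/dx + (z - z0) df/dz = N (B - f) by least squares in each data window, given an assumed structural index N. The sketch below does this for synthetic profile data from a degree -1 (point-like) source; sign conventions and the choice of N vary between implementations, so this is an illustrative sketch rather than the MHODE procedure itself.

      import numpy as np

      def euler_window_solution(x, z, f, fx, fz, N):
          """Solve Euler's homogeneity equation in one window of profile data.
          Unknowns: source position (x0, z0) and regional background B, for a given
          structural index N. Returns (x0, z0, B)."""
          # Rearranged: x0*fx + z0*fz + N*B = x*fx + z*fz + N*f
          A = np.column_stack([fx, fz, N * np.ones_like(f)])
          d = x * fx + z * fz + N * f
          sol, *_ = np.linalg.lstsq(A, d, rcond=None)
          return sol

      # Synthetic check: field of a point-like (degree -1) source at x0 = 50, depth z0 = 10
      x = np.linspace(0.0, 100.0, 201)
      z = np.zeros_like(x)                       # observations on the surface z = 0
      x0_true, z0_true = 50.0, 10.0
      r2 = (x - x0_true) ** 2 + (z - z0_true) ** 2
      f = 1.0 / np.sqrt(r2)                      # homogeneous of degree -1 => N = 1
      fx = -(x - x0_true) / r2 ** 1.5            # analytic horizontal gradient
      fz = -(z - z0_true) / r2 ** 1.5            # analytic vertical gradient
      print(euler_window_solution(x, z, f, fx, fz, N=1.0))   # ~[50, 10, 0]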

  19. Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure

    NASA Astrophysics Data System (ADS)

    Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang

    2018-04-01

    S-velocity and density are very important parameters in distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-velocity and density is very difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving the results of inversion in estimating elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method because of its high efficiency (it can be run in parallel), although it often suffers from a lack of clarity. This paper describes a two-stage inversion method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. Our proposed method has several advantages: (1) thanks to the exact Zoeppritz equations, the joint inversion method is applicable for wide-angle amplitude-versus-angle inversion; (2) the use of both P- and S-wave information can further enhance the stability and accuracy of parameter estimation, especially for S-velocity and density; and (3) the two-stage inversion procedure proposed in this paper achieves a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel so that it has high computational efficiency. On the other hand, to deal with the indistinctness of and undesired disturbances to the inversion results obtained from the first stage, we apply the second stage: total variation (TV) regularization. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimation with greater precision. Notably, the computational cost of the TV regularization stage is negligible compared with the first stage because it is solved using fast split Bregman iterations. Numerical examples using a well log and the Marmousi II model show that the proposed joint inversion is a reliable method capable of accurately estimating the density parameter as well as P-wave velocity and S-wave velocity, even when the seismic data are noisy, with a signal-to-noise ratio of 5.

  20. An independent determination of the local Hubble constant

    NASA Astrophysics Data System (ADS)

    Fernández Arenas, David; Terlevich, Elena; Terlevich, Roberto; Melnick, Jorge; Chávez, Ricardo; Bresolin, Fabio; Telles, Eduardo; Plionis, Manolis; Basilakos, Spyros

    2018-02-01

    The relationship between the integrated H β line luminosity and the velocity dispersion of the ionized gas of H II galaxies and giant H II regions represents an exciting standard candle that presently can be used up to redshifts z ~ 4. Locally it is used to obtain precise measurements of the Hubble constant by combining the slope of the relation obtained from nearby (z ≤ 0.2) H II galaxies with the zero-point determined from giant H II regions belonging to an 'anchor sample' of galaxies for which accurate redshift-independent distance moduli are available. We present new data for 36 giant H II regions in 13 galaxies of the anchor sample that includes the megamaser galaxy NGC 4258. Our data are the result of the first 4 yr of observation of our primary sample of 130 giant H II regions in 73 galaxies with Cepheid-determined distances. Our best estimate of the Hubble parameter is 71.0 ± 2.8 (random) ± 2.1 (systematic) km s^-1 Mpc^-1. This result is the product of an independent approach and, although at present less precise than the latest SNIa results, it is amenable to substantial improvement.

  1. Missile Manufacturing Technology Conference Held at Hilton Head Island, South Carolina on 22-26 September 1975. Panel Presentations: Guidance

    DTIC Science & Technology

    1975-01-01

    Instead of the current three. Some detail on each component follows. II. POTENTIAL MANUFACTURING TECHNOLOGY PROJECTS Gyro Because of the...ranges of environment. With embedded microprocessors, it is possible that parameters, once defined, can be placed within the microprocessor memory...Project cost: $53,000. Estimated duration of the project is nine months. Benefits: Benefits to be derived from this project are a reduction

  2. On the adequacy of identified Cole Cole models

    NASA Astrophysics Data System (ADS)

    Xiang, Jianping; Cheng, Daizhan; Schlindwein, F. S.; Jones, N. B.

    2003-06-01

    The Cole-Cole model has been widely used to interpret electrical geophysical data. Normally an iterative computer program is used to invert the frequency domain complex impedance data and simple error estimation is obtained from the squared difference of the measured (field) and calculated values over the full frequency range. Recently, a new direct inversion algorithm was proposed for the 'optimal' estimation of the Cole-Cole parameters, which differs from existing inversion algorithms in that the estimated parameters are direct solutions of a set of equations without the need for an initial guess for initialisation. This paper first briefly investigates the advantages and disadvantages of the new algorithm compared to the standard Levenberg-Marquardt "ridge regression" algorithm. Then, and more importantly, we address the adequacy of the models resulting from both the "ridge regression" and the new algorithm, using two different statistical tests and we give objective statistical criteria for acceptance or rejection of the estimated models. The first is the standard χ2 technique. The second is a parameter-accuracy based test that uses a joint multi-normal distribution. Numerical results that illustrate the performance of both testing methods are given. The main goals of this paper are (i) to provide the source code for the new "direct inversion" algorithm in Matlab and (ii) to introduce and demonstrate two methods to determine the reliability of a set of data before data processing, i.e., to consider the adequacy of the resulting Cole-Cole model.
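
    For context, the conventional iterative fit that the paper contrasts with its direct inversion can be set up as follows, using the Pelton form of the Cole-Cole complex resistivity, rho(omega) = rho0 [1 - m (1 - 1/(1 + (i omega tau)^c))]. The sketch below fits synthetic data with SciPy's iterative least-squares routine; the parameter values, frequency range, and noise level are illustrative assumptions, not field data.

      import numpy as np
      from scipy.optimize import least_squares

      def cole_cole(omega, rho0, m, tau, c):
          """Pelton-form Cole-Cole complex resistivity."""
          return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

      def residuals(p, omega, data):
          misfit = cole_cole(omega, *p) - data
          return np.concatenate([misfit.real, misfit.imag])   # stack real and imaginary parts

      # Synthetic spectrum with known parameters plus a little noise
      rng = np.random.default_rng(0)
      omega = 2.0 * np.pi * np.logspace(-2, 3, 40)            # 0.01 Hz .. 1 kHz
      true = (100.0, 0.3, 0.05, 0.6)                          # rho0, m, tau [s], c
      data = cole_cole(omega, *true)
      data += 0.2 * (rng.standard_normal(omega.size) + 1j * rng.standard_normal(omega.size))

      fit = least_squares(residuals, x0=[80.0, 0.2, 0.01, 0.5], args=(omega, data),
                          bounds=([1.0, 0.0, 1e-5, 0.1], [1e4, 1.0, 10.0, 1.0]))
      print(fit.x)    # should be close to (100, 0.3, 0.05, 0.6)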

  3. Urban flood simulation and prioritization of critical urban sub-catchments using SWMM model and PROMETHEE II approach

    NASA Astrophysics Data System (ADS)

    Babaei, Sahar; Ghazavi, Reza; Erfanian, Mahdi

    2018-06-01

    Urban runoff has increased due to the growth of impervious surfaces. To mitigate flooding during the rainy season, identifying critical urban sub-catchments is very important for urban planners. Given the lack of information, adopting a simulation approach is one of the practical ways to identify surcharged junctions and critical sub-catchments. The occurrence of destructive floods in the rainy seasons indicates the inadequacy of the urban drainage system in Urmia. The main aims of this study were to estimate the surface runoff of urban sub-catchments using SWMM, to evaluate the adequacy of the drainage system of the study area, and to prioritize sub-catchments using the PROMETHEE II approach and SWMM. In the present study, rainfall events in the city of Urmia (West Azerbaijan province, Iran) were used to estimate runoff depth. The study area was divided into 22 sub-catchments. For calibration and validation of model parameters, 3 rainfall events and their related runoff were measured. According to the sensitivity analysis, CN was the most sensitive parameter for model calibration. The number of surcharged conduits and junctions indicates that the drainage system of the study area does not have enough capacity to convey the runoff. For a 10-year return period, the depth of the channels should increase by 20% to prevent flooding in these sub-catchments. Sub-catchments were prioritized using the PROMETHEE II approach and the results were compared with SWMM simulation outcomes. Based on the SWMM simulation, sub-catchments S11, S7, S18, S16 and S1 are the most critical, respectively, while according to the PROMETHEE method, S1, S11, S16, S14 and S18 are determined as the critical areas.
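
    PROMETHEE II ranks alternatives by their net outranking flow, the difference between the leaving and entering flows computed from pairwise, weighted preference degrees. The sketch below uses the simplest ("usual") preference function and an invented sub-catchment evaluation table; the criteria, weights, and preference functions of the actual study may differ.

      import numpy as np

      def promethee_ii(X, weights, maximize):
          """Net outranking flows (PROMETHEE II) with the 'usual' preference function.
          X: (n_alternatives, n_criteria) evaluation table."""
          X = np.asarray(X, dtype=float)
          n, m = X.shape
          X = np.where(np.asarray(maximize), X, -X)            # orient criteria so larger = better
          pref = np.zeros((n, n))
          for k in range(m):
              d = X[:, k][:, None] - X[:, k][None, :]          # pairwise differences
              pref += weights[k] * (d > 0).astype(float)       # usual criterion: P = 1 if d > 0
          pref /= np.sum(weights)
          phi_plus = pref.sum(axis=1) / (n - 1)                # leaving flow
          phi_minus = pref.sum(axis=0) / (n - 1)               # entering flow
          return phi_plus - phi_minus                           # net flow

      # Hypothetical sub-catchment table: [peak runoff (m3/s), flood volume (1e3 m3), imperviousness (%)]
      X = [[4.2, 12.0, 55.0],
           [2.8,  7.5, 40.0],
           [5.1, 15.2, 62.0]]
      phi = promethee_ii(X, weights=np.array([0.5, 0.3, 0.2]), maximize=[True, True, True])
      print(np.argsort(-phi))   # ranking from most to least critical sub-catchment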

  4. Evaluation of SAGE II and Balloon-Borne Stratospheric Aerosol Measurements

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Under funding from this proposal we evaluated measurements of stratospheric sulfate aerosols from three platforms. Two were satellite platforms providing solar extinction measurements, the Stratospheric Aerosol and Gas Experiment (SAGE) II using wavelengths from 0.386 - 1.02 microns, and the Halogen Occultation Experiment (HALOE) using wavelengths from 2.45 to 5.26 microns. The third set of measurements was from in situ sampling by balloonborne optical particle counters (OPCs). The goal was to determine the consistency among these data sets. This was accomplished through analysis of the existing measurement records, and through additional balloonborne OPC flights coinciding with new SAGE II observations over Laramie, Wyoming. All analyses used the SAGE II v 6.0 data. This project supported two balloon flights per year over Laramie dedicated to SAGE II coincidence. Because logistical factors, such as poor surface weather or unfavorable payload impact location, can make it difficult to routinely obtain close coincidences with SAGE II, we attempt to conduct nearly every Laramie flight (roughly one per month) in conjunction with a SAGE II overpass. The Laramie flight frequency has varied over the years depending on field commitments and funding sources. Current support for the Laramie measurements is from the National Science Foundation in addition to support from this NASA grant. We have also completed a variety of comparisons using aerosol measurements from SAGE II, OPCs, and HALOE. The instruments were compared for their various estimates of aerosol extinction at the SAGE II wavelengths and for aerosol surface area. Additional results, such as those illustrated here, can be found in a recently accepted manuscript describing comparisons between SAGE II, HALOE, and OPCs for the period 1982 - 2000. While the overall impression from these results is encouraging, the agreement of the measurements changes with latitude, altitude, time, and parameter. In the broadest sense, these comparisons fall into two categories: high aerosol loading (volcanic periods) and low aerosol loading (background periods and altitudes above 25 km). When the aerosol amount is low, SAGE II and HALOE extinctions are higher than the OPC estimates, while the SAGE II surface areas are lower than those from HALOE and the OPCs. Under high loading conditions, all three instruments mutually agree to within 50%.

  5. Avalanche weak layer shear fracture parameters from the cohesive crack model

    NASA Astrophysics Data System (ADS)

    McClung, David

    2014-05-01

    Dry slab avalanches release by mode II shear fracture within thin weak layers under cohesive snow slabs. The important fracture parameters include: nominal shear strength, mode II fracture toughness and mode II fracture energy. Alpine snow is not an elastic material unless the rate of deformation is very high. For natural avalanche release, the fracture parameters cannot be treated as classical fracture-mechanics quantities within an elastic framework. The strong rate dependence of alpine snow implies that it is a quasi-brittle material (Bažant et al., 2003) with an important size effect on nominal shear strength. Further, the rate of deformation for release of an avalanche is unknown, so it is not possible to calculate the fracture parameters for avalanche release from any model which requires the effective elastic modulus. The cohesive crack model does not require the modulus to be known to estimate the fracture energy. In this paper, the cohesive crack model was used to calculate the mode II fracture energy as a function of a brittleness number and nominal shear strength values calculated from slab avalanche fracture line data (60 with natural triggers; 191 with a mix of triggers). The brittleness number models the ratio of the approximate peak value of shear strength to nominal shear strength. A high brittleness number (> 10) represents large size relative to fracture process zone (FPZ) size and the implications of LEFM (Linear Elastic Fracture Mechanics). A low brittleness number (e.g. 0.1) represents small sample size and primarily plastic response. An intermediate value (e.g. 5) implies non-linear fracture mechanics with intermediate relative size. The calculations also implied effective values for the modulus and the critical shear fracture toughness as functions of the brittleness number. The results showed that the effective mode II fracture energy may vary by two orders of magnitude for alpine snow, with median values ranging from 0.08 N/m (non-linear) to 0.18 N/m (LEFM) for median slab density around 200 kg/m3. Schulson and Duval (2009) estimated the fracture energy of solid ice (mode I) to be about 0.22-1 N/m, which yields rough theoretical limits of about 0.05-0.2 N/m for density 200 kg/m3 when the ice volume fraction is accounted for. Mode I results from lab tests (Sigrist, 2006) gave 0.1 N/m (200 kg/m3). The median effective mode II shear fracture toughness was calculated to be between 0.31 and 0.35 kPa m^1/2 for the avalanche data. All the fracture energy results are much lower than previously calculated from propagation saw test (PST) results for a weak layer collapse model (1.3 N/m) (Schweizer et al., 2011). The differences are related to model assumptions and estimates of the effective slab modulus. The calculations in this paper apply to quasi-static deformation and mode II weak layer fracture, whereas the weak layer collapse model is more appropriate for dynamic conditions which follow fracture initiation (McClung and Borstad, 2012). References: Bažant, Z.P. et al. (2003) Size effect law and fracture mechanics of the triggering of dry snow slab avalanches, J. Geophys. Res. 108(B2): 2119, doi:10.1029/2002JB001884. McClung, D.M. and C.P. Borstad (2012) Deformation and energy of dry snow slabs prior to fracture propagation, J. Glaciol. 58(209), doi:10.3189/2012JoG11J009. Schulson, E.M. and P. Duval (2009) Creep and fracture of ice, Cambridge University Press, 401 pp. Schweizer, J. et al. (2011) Measurements of weak layer fracture energy, Cold Reg. Sci. and Tech. 69: 139-144. Sigrist, C. (2006) Measurement of fracture mechanical properties of snow and application to dry snow slab avalanche release, Ph.D. thesis: 16736, ETH, Zuerich: 139 pp.

  6. Precise Orbital and Geodetic Parameter Estimation using SLR Observations for ILRS AAC

    NASA Astrophysics Data System (ADS)

    Kim, Young-Rok; Park, Eunseo; Oh, Hyungjik Jay; Park, Sang-Young; Lim, Hyung-Chul; Park, Chandeok

    2013-12-01

    In this study, we present results of precise orbital and geodetic parameter estimation using satellite laser ranging (SLR) observations for the International Laser Ranging Service (ILRS) associate analysis center (AAC). Using normal point observations of LAGEOS-1, LAGEOS-2, ETALON-1, and ETALON-2 in SLR consolidated laser ranging data format, the NASA/GSFC GEODYN II and SOLVE software programs were utilized for precise orbit determination (POD) and finding solutions of a terrestrial reference frame (TRF) and Earth orientation parameters (EOPs). For POD, a weekly-based orbit determination strategy was employed to process SLR observations taken from 20 weeks in 2013. For solutions of TRF and EOPs, a loosely constrained scheme was used to integrate POD results of the four geodetic SLR satellites. The coordinates of 11 ILRS core sites were determined and daily polar motion and polar motion rates were estimated. The root mean square (RMS) value of post-fit residuals was used for orbit quality assessment, and both the stability of the TRF and the precision of the EOPs by external comparison were analyzed for verification of our solutions. Results of post-fit residuals show that the RMS values for the orbits of LAGEOS-1 and LAGEOS-2 are 1.20 and 1.12 cm, and those of ETALON-1 and ETALON-2 are 1.02 and 1.11 cm, respectively. The stability analysis of the TRF shows that the mean value of the 3D stability of the coordinates of the 11 ILRS core sites is 7.0 mm. An external comparison, with respect to International Earth Rotation and Reference Systems Service (IERS) 08 C04 results, shows that the standard deviations of polar motion XP and YP are 0.754 milliarcseconds (mas) and 0.576 mas, respectively. Our results of precise orbital and geodetic parameter estimation are reasonable and help advance research at the ILRS AAC.

  7. Composing problem solvers for simulation experimentation: a case study on steady state estimation.

    PubMed

    Leye, Stefan; Ewald, Roland; Uhrmacher, Adelinde M

    2014-01-01

    Simulation experiments involve various sub-tasks, e.g., parameter optimization, simulation execution, or output data analysis. Many algorithms can be applied to such tasks, but their performance depends on the given problem. Steady state estimation in systems biology is a typical example for this: several estimators have been proposed, each with its own (dis-)advantages. Experimenters, therefore, must choose from the available options, even though they may not be aware of the consequences. To support those users, we propose a general scheme to aggregate such algorithms to so-called synthetic problem solvers, which exploit algorithm differences to improve overall performance. Our approach subsumes various aggregation mechanisms, supports automatic configuration from training data (e.g., via ensemble learning or portfolio selection), and extends the plugin system of the open source modeling and simulation framework James II. We show the benefits of our approach by applying it to steady state estimation for cell-biological models.

  8. PERSEUS QC: preparing statistic data sets

    NASA Astrophysics Data System (ADS)

    Belokopytov, Vladimir; Khaliulin, Alexey; Ingerov, Andrey; Zhuk, Elena; Gertman, Isaac; Zodiatis, George; Nikolaidis, Marios; Nikolaidis, Andreas; Stylianou, Stavros

    2017-09-01

    The Desktop Oceanographic Data Processing Module was developed for visual analysis of interdisciplinary cruise measurements. The program provides the possibility of data selection based on different criteria, map plotting, sea horizontal sections, and sea depth vertical profiles. The data selection in the area of interest can be specified according to a set of different physical and chemical parameters, complemented by additional parameters such as the cruise number, ship name, and time period. The visual analysis of a set of vertical profiles in the selected area allows users to determine the quality of the data, their location and the time of the in-situ measurements, and to exclude any questionable data from the statistical analysis. For each selected set of profiles, the average vertical profile, the minimal and maximal values of the parameter under examination and the root mean square (r.m.s.) are estimated. These estimates are compared with the parameter ranges set for each sub-region by the MEDAR/MEDATLAS-II and SeaDataNet2 projects. In the framework of the PERSEUS project, ranges were calculated from scratch for certain parameters that lacked one, while some of the previously used ranges were re-defined using more comprehensive data sets based on the SeaDataNet2, SESAME and PERSEUS projects. In some cases we have used additional sub-regions to redefine the ranges more precisely. The recalculated ranges are used to improve the PERSEUS Data Quality Control.
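
    The range-check style of quality control described above can be sketched as follows: each value is flagged against the accepted [min, max] range of its sub-region, and the mean profile, extreme values, and r.m.s. are computed for the selected set of profiles. The data layout, flag codes, and thresholds below are illustrative assumptions, not the PERSEUS or SeaDataNet conventions.

      import numpy as np

      def range_check(values, vmin, vmax):
          """Return a QC flag per value: 1 = good, 4 = outside the accepted regional range."""
          values = np.asarray(values, dtype=float)
          return np.where((values >= vmin) & (values <= vmax), 1, 4)

      def profile_statistics(profiles):
          """Mean profile, min, max and r.m.s. over a set of co-located vertical profiles.
          `profiles` has shape (n_profiles, n_levels); NaN marks missing levels."""
          mean = np.nanmean(profiles, axis=0)
          return dict(mean=mean,
                      vmin=np.nanmin(profiles, axis=0),
                      vmax=np.nanmax(profiles, axis=0),
                      rms=np.sqrt(np.nanmean((profiles - mean) ** 2, axis=0)))

      # Hypothetical temperature profiles (deg C) at 4 depth levels in one sub-region
      profiles = np.array([[24.1, 20.3, 15.8, 14.0],
                           [23.8, 19.9, 15.5, 13.9],
                           [39.0, 20.1, 15.7, 14.1]])        # the first value is suspicious
      stats = profile_statistics(profiles)
      print(range_check(profiles[:, 0], vmin=10.0, vmax=30.0))   # flags the 39.0 reading
      print(stats["mean"], stats["rms"])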

  9. On parametrized cold dense matter equation-of-state inference

    NASA Astrophysics Data System (ADS)

    Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.

    2018-07-01

    Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrized dense matter equations of state. In particular, we generalize and examine two inference paradigms from the literature: (i) direct posterior equation-of-state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective while the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilizing archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation-of-state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.

  10. On parametrised cold dense matter equation of state inference

    NASA Astrophysics Data System (ADS)

    Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.

    2018-04-01

    Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrised dense matter equations of state. In particular we generalise and examine two inference paradigms from the literature: (i) direct posterior equation of state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective whilst the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilising archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation of state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.

  11. FR II radio galaxies at low frequencies - I. Morphology, magnetic field strength and energetics.

    PubMed

    Harwood, Jeremy J; Croston, Judith H; Intema, Huib T; Stewart, Adam J; Ineson, Judith; Hardcastle, Martin J; Godfrey, Leith; Best, Philip; Brienza, Marisa; Heesen, Volker; Mahony, Elizabeth K; Morganti, Raffaella; Murgia, Matteo; Orrú, Emanuela; Röttgering, Huub; Shulevski, Aleksandar; Wise, Michael W

    2016-06-01

    Due to their steep spectra, low-frequency observations of Fanaroff-Riley type II (FR II) radio galaxies potentially provide key insights into the morphology, energetics and spectrum of these powerful radio sources. However, limitations imposed by the previous generation of radio interferometers at metre wavelengths have meant that this region of parameter space remains largely unexplored. In this paper, the first in a series examining FR IIs at low frequencies, we use LOFAR (LOw Frequency ARray) observations between 50 and 160 MHz, along with complementary archival radio and X-ray data, to explore the properties of two FR II sources, 3C 452 and 3C 223. We find that the morphology of 3C 452 is that of a standard FR II rather than of a double-double radio galaxy as had previously been suggested, with no remnant emission being observed beyond the active lobes. We find that the low-frequency integrated spectra of both sources are much steeper than expected based on traditional assumptions and, using synchrotron/inverse-Compton model fitting, show that the total energy content of the lobes is greater than previous estimates by a factor of around 5 for 3C 452 and 2 for 3C 223. We go on to discuss possible causes of these steeper-than-expected spectra and provide revised estimates of the internal pressures and magnetic field strengths for the intrinsically steep case. We find that the ratio between the equipartition magnetic field strengths and those derived through synchrotron/inverse-Compton model fitting remains consistent with previous findings and show that the observed departure from equipartition may in some cases provide a solution to the spectral versus dynamical age disparity.

  12. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows the designer (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds to the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, along with conditional sampling. In addition, an l(sub infinity) formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
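
    As a minimal illustration of the soft-constraint side only (not the paper's hybrid bound-plus-conditional-sampling method), the sketch below estimates the probability of violating an inequality constraint under a componentwise-bounded uncertainty model by plain Monte Carlo sampling; the constraint function and bounds are invented for the example.

```python
# Sketch: Monte Carlo estimate of the probability of soft-constraint violation
# under a componentwise-bounded uncertainty model. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
lower = np.array([0.8, -0.5, 1.0])        # hypothetical componentwise bounds
upper = np.array([1.2,  0.5, 3.0])

def constraint(p):
    """Hypothetical design requirement, violated when g(p) > 0."""
    return p[..., 0] * p[..., 2] - 2.0 + 0.5 * p[..., 1] ** 2

n = 200_000
samples = rng.uniform(lower, upper, size=(n, 3))
p_fail = (constraint(samples) > 0.0).mean()
stderr = np.sqrt(p_fail * (1.0 - p_fail) / n)
print(f"P(violation) ~ {p_fail:.4f} +/- {stderr:.4f}")
```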

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taddia, Francesco; Sollerman, J.; Fremling, C.

    Context. Supernova (SN) 1987A was a peculiar hydrogen-rich event with a long-rising (~84 d) light curve, stemming from the explosion of a compact blue supergiant star. Only a few similar events have been presented in the literature in recent decades. Aims. We present new data for a sample of six long-rising Type II SNe (SNe II), three of which were discovered and observed by the Palomar Transient Factory (PTF) and three observed by the Caltech Core-Collapse Project (CCCP). Our aim is to enlarge this small family of long-rising SNe II, characterizing their differences in terms of progenitor and explosion parameters. We also study the metallicity of their environments. Methods. Optical light curves, spectra, and host-galaxy properties of these SNe are presented and analyzed. Detailed comparisons with known SN 1987A-like events in the literature are shown, with particular emphasis on the absolute magnitudes, colors, expansion velocities, and host-galaxy metallicities. Bolometric properties are derived from the multiband light curves. By modeling the early-time emission with scaling relations derived from the SuperNova Explosion Code (SNEC) models of MESA progenitor stars, we estimate the progenitor radii of these transients. The modeling of the bolometric light curves also allows us to estimate other progenitor and explosion parameters, such as the ejected 56Ni mass, the explosion energy, and the ejecta mass. Results. We present PTF12kso, a long-rising SN II that is estimated to have the largest amount of ejected 56Ni mass measured for this class. PTF09gpn and PTF12kso are found at the lowest host metallicities observed for this SN group. The variety of early light-curve luminosities depends on the wide range of progenitor radii of these SNe, from a few tens of R⊙ (SN 2005ci) up to thousands (SN 2004ek), with some intermediate cases between 100 R⊙ (PTF09gpn) and 300 R⊙ (SN 2004em). Conclusions. We confirm that long-rising SNe II with light-curve shapes closely resembling that of SN 1987A generally arise from blue supergiant (BSG) stars. However, some of them, such as SN 2004em, likely have progenitors with larger radii (~300 R⊙, typical of yellow supergiants) and can thus be regarded as intermediate cases between normal SNe IIP and SN 1987A-like SNe. Some extended red supergiant (RSG) stars such as the progenitor of SN 2004ek can also produce long-rising SNe II if they synthesized a large amount of 56Ni in the explosion. Lastly, low host metallicity is confirmed as a characteristic of the SNe arising from compact BSG stars.

  14. The error and bias of supplementing a short, arid climate, rainfall record with regional vs. global frequency analysis

    NASA Astrophysics Data System (ADS)

    Endreny, Theodore A.; Pashiardis, Stelios

    2007-02-01

    Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; however, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. Analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to sea and elevation. Regional statistical algorithms found the sites passed discordancy tests of coefficient of variation, skewness and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and then goodness-of-fit tests identified the best candidate distribution as the generalized extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate location, shape, and scale parameters. In the global-based analysis, the distribution was a priori prescribed as GEV Type II, the shape parameter was a priori set to 0.15, and a time-interval term was constructed to use one set of parameters for all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared the global method RMSE had a parabolic-shaped time-interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time-interval trend was found for the global method. The global method relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
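
    For readers unfamiliar with the L-moment step, the sketch below computes sample L-moments from a synthetic annual-maximum series and converts them to GEV location, scale and shape parameters using Hosking's standard approximation; it is a generic single-site illustration, not the study's regional algorithm.

```python
# Sketch: fit a GEV distribution by the method of L-moments (Hosking's
# approximation). The rainfall series is synthetic.
import numpy as np
from math import gamma, log

def sample_l_moments(x):
    """Unbiased sample L-moments (lambda1, lambda2, lambda3) via PWMs."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0

def gev_from_l_moments(l1, l2, l3):
    """Location xi, scale alpha, shape k in Hosking's convention
    (k < 0 corresponds to the heavy-tailed GEV Type II case)."""
    t3 = l3 / l2
    c = 2.0 / (3.0 + t3) - log(2.0) / log(3.0)
    k = 7.8590 * c + 2.9554 * c ** 2
    alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * gamma(1.0 + k))
    xi = l1 - alpha * (1.0 - gamma(1.0 + k)) / k
    return xi, alpha, k

annual_max = np.array([23.1, 31.0, 18.4, 42.7, 27.5, 35.9, 22.0, 51.3,
                       29.8, 33.4, 25.6, 46.2, 20.9, 38.1, 28.7])   # mm, synthetic
xi, alpha, k = gev_from_l_moments(*sample_l_moments(annual_max))
print(f"location = {xi:.2f}, scale = {alpha:.2f}, shape k = {k:.3f}")
```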

  15. Normalized vertical ice mass flux profiles from vertically pointing 8-mm-wavelength Doppler radar

    NASA Technical Reports Server (NTRS)

    Orr, Brad W.; Kropfli, Robert A.

    1993-01-01

    During the FIRE 2 (First International Satellite Cloud Climatology Project Regional Experiment) project, NOAA's Wave Propagation Laboratory (WPL) operated its 8-mm wavelength Doppler radar extensively in the vertically pointing mode. This allowed for the calculation of a number of important cirrus cloud parameters, including cloud boundary statistics, cloud particle characteristic sizes and concentrations, and ice mass content (imc). The flux of imc, or, alternatively, ice mass flux (imf), is also an important parameter of a cirrus cloud system. Ice mass flux is important in the vertical redistribution of water substance and thus, in part, determines the cloud evolution. It is important for the development of cloud parameterizations to be able to define the essential physical characteristics of large populations of clouds in the simplest possible way. One method would be to normalize profiles of observed cloud properties, such as those mentioned above, in ways similar to those used in the convective boundary layer. The height then scales from 0.0 at cloud base to 1.0 at cloud top, and the measured cloud parameter scales by its maximum value so that all normalized profiles have 1.0 as their maximum value. The goal is that there will be a 'universal' shape to profiles of the normalized data. This idea was applied to estimates of imf calculated from data obtained by the WPL cloud radar during FIRE II. Other quantities such as median particle diameter, concentration, and ice mass content can also be estimated with this radar, and we expect to also examine normalized profiles of these quantities in time for the 1993 FIRE II meeting.

  16. Nonspinning numerical relativity waveform surrogates: assessing the model

    NASA Astrophysics Data System (ADS)

    Field, Scott; Blackman, Jonathan; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

    Recently, multi-modal gravitational waveform surrogate models have been built directly from data numerically generated by the Spectral Einstein Code (SpEC). I will describe ways in which the surrogate model error can be quantified. This task, in turn, requires (i) characterizing differences between waveforms computed by SpEC with those predicted by the surrogate model and (ii) estimating errors associated with the SpEC waveforms from which the surrogate is built. Both pieces can have numerous sources of numerical and systematic errors. We make an attempt to study the most dominant error sources and, ultimately, the surrogate model's fidelity. These investigations yield information about the surrogate model's uncertainty as a function of time (or frequency) and parameter, and could be useful in parameter estimation studies which seek to incorporate model error. Finally, I will conclude by comparing the numerical relativity surrogate model to other inspiral-merger-ringdown models. A companion talk will cover the building of multi-modal surrogate models.

  17. Dispersive estimates for rational symbols and local well-posedness of the nonzero energy NV equation. II

    NASA Astrophysics Data System (ADS)

    Kazeykina, Anna; Muñoz, Claudio

    2018-04-01

    We continue our study on the Cauchy problem for the two-dimensional Novikov-Veselov (NV) equation, integrable via the inverse scattering transform for the two-dimensional Schrödinger operator at a fixed energy parameter. This work is concerned with the more involved case of a positive energy parameter. For the solution of the linearized equation we derive smoothing and Strichartz estimates by combining new estimates for two different frequency regimes, extending our previous results for the negative energy case [18]. The low frequency regime, which our previous result was not able to treat, is studied in detail. At non-low frequencies we also derive improved smoothing estimates with gain of almost one derivative. Then we combine the linear estimates with a Fourier decomposition method and X^{s,b} spaces to obtain local well-posedness of NV at positive energy in H^s, s > 1/2. Our result implies, in particular, that at least for s > 1/2, NV does not change its behavior from semilinear to quasilinear as energy changes sign, in contrast to the closely related Kadomtsev-Petviashvili equations. As a complement to our LWP results, we also provide some new explicit solutions of NV at zero energy, generalizations of the lump solutions, which exhibit new and nonstandard long-time behavior. In particular, these solutions blow up in infinite time in L^2.

  18. Counting defects in an instantaneous quench.

    PubMed

    Ibaceta, D; Calzetta, E

    1999-09-01

    We consider the formation of defects in a nonequilibrium second-order phase transition induced by an instantaneous quench to zero temperature in a type II superconductor. We perform a full nonlinear simulation where we follow the evolution in time of the local order parameter field. We determine how far into the phase transition theoretical estimates of the defect density based on the Gaussian approximation yield a reliable prediction for the actual density. We also characterize quantitatively some aspects of the out of equilibrium phase transition.

  19. A SLAM II simulation model for analyzing space station mission processing requirements

    NASA Technical Reports Server (NTRS)

    Linton, D. G.

    1985-01-01

    Space station mission processing is modeled via the SLAM 2 simulation language on an IBM 4381 mainframe and an IBM PC microcomputer with 620K RAM, two double-sided disk drives and an 8087 coprocessor chip. Using a time phased mission (payload) schedule and parameters associated with the mission, orbiter (space shuttle) and ground facility databases, estimates for ground facility utilization are computed. Simulation output associated with the science and applications database is used to assess alternative mission schedules.

  20. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel II. Distribution functions and moments.

    PubMed

    Langenbucher, Frieder

    2003-01-01

    MS Excel is a useful tool to handle in vitro/in vivo correlation (IVIVC) distribution functions, with emphasis on the Weibull and the biexponential distribution, which are most useful for the presentation of cumulative profiles, e.g. release in vitro or urinary excretion in vivo, and differential profiles such as the plasma response in vivo. The discussion includes moments (AUC and mean) as summarizing statistics, and data-fitting algorithms for parameter estimation.
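
    As a generic illustration of the cumulative profile and moment summaries discussed (not the article's spreadsheet implementation), the sketch below evaluates a Weibull release profile and checks the numerically integrated mean against the closed-form Weibull mean; all parameter values are invented.

```python
# Sketch: Weibull cumulative release profile and its moment summaries.
# Parameter values are illustrative, not taken from the article.
import numpy as np
from math import gamma

f_max, tau, beta = 100.0, 4.0, 1.3       # plateau (%), time scale (h), shape

t = np.linspace(0.0, 48.0, 481)
cumulative = f_max * (1.0 - np.exp(-(t / tau) ** beta))   # release-vs-time
differential = np.gradient(cumulative, t)                 # rate profile

mean_numeric = np.trapz(t * differential, t) / np.trapz(differential, t)
mean_analytic = tau * gamma(1.0 + 1.0 / beta)              # Weibull mean
print(f"mean dissolution time: numeric {mean_numeric:.2f} h, "
      f"analytic {mean_analytic:.2f} h")
```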

  1. Modeling and Bayesian Parameter Estimation for Shape Memory Alloy Bending Actuators

    DTIC Science & Technology

    2012-02-01

    prosthetic hand,” Technology and Health Care 10, 91–106 (2002). 4. Hartl, D., Lagoudas, D., Calkins, F., and Mabe, J., “Use of a Ni60Ti shape memory...alloy for active jet engine chevron application: I. Thermomechanical characterization,” Smart Materials and Structures 19, 1–14 (2010). 5. Hartl, D...Lagoudas, D., Calkins, F., and Mabe, J., “Use of a Ni60Ti shape memory alloy for active jet engine chevron application: II. Experimentally validated

  2. Forcing Regression through a Given Point Using Any Familiar Computational Routine.

    DTIC Science & Technology

    1983-03-01

    a linear model, Y = α + βX + ε (Model I), then adopt the principle of least squares and use sample data to estimate the unknown parameters, α and β...has an expected value of zero indicates that the "average" response is considered linear. If ε varies widely, Model I, though conceptually correct, may...relationship is linear from the maximum observed x to x = a, then Model II should be used. To proceed with the customary evaluation of Model I would be

  3. The Strings of Eta Carinae: The HST/STIS Spectra and [Ca II]

    NASA Technical Reports Server (NTRS)

    Melendez, M. B.; Gull, T. R.; Bautista, M. A.; Badnell, N. R.

    2006-01-01

    Long, linear, filamentary ejecta are found to move at very high velocity external to the Homunculus, the circumstellar hourglass-shaped ejecta surrounding Eta Carinae. The origin of the strings is a puzzle. As an example, the Weigelt Blobs have N at 10X solar and C, O at 0.01X solar abundance, along with He/H significantly enhanced. This abundance pattern is evidence for extreme CNO-processing. Similarly, the Strontium Filament has Ti/Ni at 100X solar, presumably due to the lack of oxygen to form Ti-oxide precipitates onto dust grains. We have obtained 2-D spectra with the HST/STIS of the Strontium Filament and a portion of a string. These deep spectral exposures, at moderate dispersion, span much of the near-red spectral region from 5000 to 9000 A. We have identified twelve emission lines in these spectra with proper velocities and spatial structure of this string and obtained line ratios for [Ca II] (7293/7325 A) and [Fe II] (7157/8619 A) which are useful for determining physical conditions in this nebulosity. In an attempt to use the [Ca II] ratio to determine the physical parameters, and ultimately the abundances in the strings, we have constructed a statistical equilibrium model for Ca II, including radiative and collisional rates. These results incorporate our newly calculated atomic data for the n = 3, 4, 5, and 6 configurations of Ca II. The aim is to compute the [Ca II] line ratios and use them as a diagnostic of the physical parameters. Using the [Fe II] ratio we find that for Te = 10,000 K, the electron density is Ne ≈ 10^6 cm^-3. We plan to use the [Ca II] ratio to confirm this result. Then, we will extend the use of this multilevel model Ca II atom to study the physical conditions of the Strontium Filament, where eight lines of Ca II, both allowed and forbidden, had been identified. With the physical conditions determined, we will be able to derive reliable estimates for the gas-phase abundances in the strings.
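
    To make the statistical-equilibrium step concrete, the sketch below solves steady-state level populations for a small model ion and forms a line-emissivity ratio as a density diagnostic; the level structure and all rates are invented placeholders, not the newly calculated Ca II atomic data.

```python
# Sketch: steady-state level populations and a line-ratio density diagnostic
# from a small statistical-equilibrium model. All atomic rates below are
# placeholders, NOT the Ca II data computed in the study.
import numpy as np

def level_populations(A, q_exc, q_dex, n_e):
    """Solve sum_j n_j R_ji = n_i sum_j R_ij with normalisation sum_i n_i = 1.
    A[i, j]     : spontaneous rate i -> j (s^-1), nonzero only for i > j
    q_exc[i, j] : collisional excitation rate coefficient (cm^3 s^-1), i < j
    q_dex[i, j] : collisional de-excitation rate coefficient, i > j
    """
    R = A + n_e * (q_exc + q_dex)            # total rate matrix, R[i, j]: i -> j
    M = R.T - np.diag(R.sum(axis=1))         # dn/dt = M @ n
    M[-1, :] = 1.0                           # replace last equation by normalisation
    rhs = np.zeros(A.shape[0]); rhs[-1] = 1.0
    return np.linalg.solve(M, rhs)

# Hypothetical 3-level ion (all numbers illustrative).
A = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.5, 0.8, 0.0]])              # s^-1
q_exc = np.array([[0.0, 2e-8, 5e-9],
                  [0.0, 0.0, 1e-8],
                  [0.0, 0.0, 0.0]])
q_dex = np.array([[0.0, 0.0, 0.0],
                  [4e-8, 0.0, 0.0],
                  [1e-8, 2e-8, 0.0]])

for n_e in (1e4, 1e6, 1e8):                  # electron density, cm^-3
    n = level_populations(A, q_exc, q_dex, n_e)
    ratio = (n[2] * A[2, 0]) / (n[1] * A[1, 0])   # emissivity-ratio proxy
    print(f"n_e = {n_e:.0e} cm^-3 -> line ratio ~ {ratio:.3f}")
```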

  4. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
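
    The sketch below illustrates the general idea of borrowing strength across variables with a simple common-value variance-shrinkage estimator and a regularized t-like statistic; it is a generic illustration of shrinkage, not the MVR procedure, and the fixed shrinkage weight is an arbitrary choice.

```python
# Sketch: common-value variance shrinkage and a regularized t-like statistic
# for many variables with tiny sample sizes. Generic illustration, not MVR.
import numpy as np

rng = np.random.default_rng(2)
p, n1, n2 = 2000, 4, 4                        # many variables, few samples
group1 = rng.normal(0.0, 1.0, size=(p, n1))
group2 = rng.normal(0.2, 1.0, size=(p, n2))   # small mean shift in group 2

m1, m2 = group1.mean(axis=1), group2.mean(axis=1)
v1, v2 = group1.var(axis=1, ddof=1), group2.var(axis=1, ddof=1)
pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)

w = 0.5                                       # arbitrary shrinkage weight
v_shrunk = w * pooled.mean() + (1.0 - w) * pooled   # shrink toward the grand mean

t_raw = (m2 - m1) / np.sqrt(pooled * (1 / n1 + 1 / n2))
t_reg = (m2 - m1) / np.sqrt(v_shrunk * (1 / n1 + 1 / n2))
print("sd of raw t:", round(float(t_raw.std()), 2),
      " sd of regularized t:", round(float(t_reg.std()), 2))
```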

  6. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    PubMed

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.

  7. Escherichia coli survival in, and release from, white-tailed deer feces.

    PubMed

    Guber, Andrey K; Fry, Jessica; Ives, Rebecca L; Rose, Joan B

    2015-02-01

    White-tailed deer are an important reservoir for pathogens that can contribute a large portion of microbial pollution in fragmented agricultural and forest landscapes. The scarcity of experimental data on survival of microorganisms in and release from deer feces makes prediction of their fate and transport less reliable and development of efficient strategies for environment protection more difficult. The goal of this study was to estimate parameters for modeling Escherichia coli survival in and release from deer (Odocoileus virginianus) feces. Our objectives were as follows: (i) to measure survival of E. coli in deer pellets at different temperatures, (ii) to measure kinetics of E. coli release from deer pellets at different rainfall intensities, and (iii) to estimate parameters of models describing survival and release of microorganisms from deer feces. Laboratory experiments were conducted to study E. coli survival in deer pellets at three temperatures and to estimate parameters of Chick's exponential model with temperature correction based on the Arrhenius equation. Kinetics of E. coli release from deer pellets were measured at two rainfall intensities and used to derive the parameters of Bradford-Schijven model of bacterial release. The results showed that parameters of the survival and release models obtained for E. coli in this study substantially differed from those obtained by using other source materials, e.g., feces of domestic animals and manures. This emphasizes the necessity of comprehensive studies of survival of naturally occurring populations of microorganisms in and release from wildlife animal feces in order to achieve better predictions of microbial fate and transport in fragmented agricultural and forest landscapes. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
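
    A minimal sketch of the survival part only, assuming first-order (Chick) die-off with an Arrhenius-type temperature dependence of the rate constant; the data points and starting values are invented, and the Bradford-Schijven release model is not shown.

```python
# Sketch: fit first-order (Chick) die-off with an Arrhenius temperature
# correction to survival data pooled over temperatures. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314            # gas constant, J mol^-1 K^-1
T_REF = 293.15       # reference temperature, K

def log_survival(X, k_ref, Ea):
    """ln(C/C0) = -k(T)*t, with k(T) = k_ref * exp(-Ea/R * (1/T - 1/T_REF))."""
    t, T = X
    k = k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_REF))
    return -k * t

# Synthetic observations: time (days), temperature (K), ln(C/C0).
t = np.array([0, 5, 10, 20] * 3, dtype=float)
T = np.array([277.15] * 4 + [293.15] * 4 + [303.15] * 4)
lnC = np.array([0, -0.3, -0.7, -1.3, 0, -0.9, -1.8, -3.7, 0, -1.6, -3.1, -6.2])

(k_ref, Ea), _ = curve_fit(log_survival, (t, T), lnC, p0=[0.1, 5e4])
print(f"k_ref = {k_ref:.3f} 1/day at 20 C, Ea = {Ea / 1000:.0f} kJ/mol")
```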

  8. Multi-objective calibration and uncertainty analysis of hydrologic models; A comparative study between formal and informal methods

    NASA Astrophysics Data System (ADS)

    Shafii, M.; Tolson, B.; Matott, L. S.

    2012-04-01

    Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in building of higher levels of complexity into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden, and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
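
    A minimal sketch of the GLUE step only, assuming uniform parameter sampling and a Nash-Sutcliffe efficiency threshold to separate behavioral from non-behavioral parameter sets; `run_model` is a hypothetical stand-in for the 5-parameter rainfall-runoff model, and the threshold and ranges are invented.

```python
# Sketch: GLUE-style uncertainty analysis with a Nash-Sutcliffe behavioral
# threshold. `run_model` is a hypothetical stand-in for HYMOD.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_sets, threshold = 100, 5000, 0.5
time = np.linspace(0.0, 6.0, n_obs)
observed = np.sin(time) + 1.5                       # synthetic "observed flows"

def run_model(theta):
    """Hypothetical 5-parameter model returning a simulated flow series."""
    return (theta[0] * np.sin(time + theta[1]) + theta[2]
            + 0.05 * theta[3] * time + 0.02 * theta[4] * np.cos(3.0 * time))

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

lower = np.array([0.5, -0.5, 1.0, -1.0, 0.0])
upper = np.array([1.5,  0.5, 2.0,  1.0, 1.0])
thetas = rng.uniform(lower, upper, size=(n_sets, 5))
sims = np.array([run_model(th) for th in thetas])
scores = np.array([nse(s, observed) for s in sims])

behavioral = scores > threshold                     # keep behavioral sets only
bsims = sims[behavioral]
weights = scores[behavioral] / scores[behavioral].sum()

lo, hi = [], []                                     # weighted 5%/95% limits
for j in range(n_obs):
    order = np.argsort(bsims[:, j])
    w = np.cumsum(weights[order])
    lo.append(np.interp(0.05, w, bsims[order, j]))
    hi.append(np.interp(0.95, w, bsims[order, j]))
print(f"{behavioral.sum()} behavioral sets; mean 90% band width = "
      f"{np.mean(np.array(hi) - np.array(lo)):.3f}")
```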

  9. Three-dimensional whole-brain perfusion quantification using pseudo-continuous arterial spin labeling MRI at multiple post-labeling delays: accounting for both arterial transit time and impulse response function.

    PubMed

    Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M

    2014-02-01

    Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model which characterizes the effective impulse response function (IRF) is introduced, which allows the determination of CBF, the arterial transit time (ATT) and T1,eff. The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models that contain two, four or five unknown parameters. For the two-parameter model, T1,eff values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T1,eff values; (ii) the proposed three-parameter model achieves the optimal balance between the goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T1,eff and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged from different brain regions are close to the literature reports; the estimated T1,eff values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment. Copyright © 2013 John Wiley & Sons, Ltd.
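
    The sketch below shows the general shape of a voxel-wise three-parameter fit (relative CBF, ATT and an effective T1) to multi-delay ASL data, using one plausible boxcar-label-convolved-with-exponential-IRF parametrisation; this is not the published model, and the labelling duration, delays and noise level are invented.

```python
# Sketch: voxel-wise fit of a three-parameter ASL kinetic curve (relative CBF,
# ATT, effective T1). The IRF parametrisation and all constants are
# illustrative, not the model introduced in the study.
import numpy as np
from scipy.optimize import curve_fit

TAU_LABEL = 1.8                      # assumed labelling duration, s

def signal(pld, cbf_rel, att, t1_eff):
    """Boxcar label convolved with an exponential effective IRF, evaluated
    at each post-labelling delay by numerical integration."""
    out = np.zeros_like(pld, dtype=float)
    for i, t in enumerate(pld + TAU_LABEL):          # time since label start
        tau = np.linspace(0.0, t, 200)
        label = ((tau >= 0.0) & (tau <= TAU_LABEL)).astype(float)
        irf = np.where(t - tau >= att, np.exp(-(t - tau - att) / t1_eff), 0.0)
        out[i] = cbf_rel * np.trapz(label * irf, tau)
    return out

plds = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])      # post-labelling delays, s
voxel = signal(plds, 1.0, 1.4, 1.9)
voxel += np.random.default_rng(4).normal(0.0, 0.02, plds.size)   # noisy "voxel"

popt, _ = curve_fit(signal, plds, voxel, p0=[0.8, 1.0, 1.5],
                    bounds=([0.0, 0.2, 0.5], [5.0, 3.0, 4.0]))
print("fitted (relative CBF, ATT, T1_eff):", np.round(popt, 2))
```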

  10. Genetic variability and heritability of chlorophyll a fluorescence parameters in Scots pine (Pinus sylvestris L.).

    PubMed

    Čepl, Jaroslav; Holá, Dana; Stejskal, Jan; Korecký, Jiří; Kočová, Marie; Lhotáková, Zuzana; Tomášková, Ivana; Palovská, Markéta; Rothová, Olga; Whetten, Ross W; Kaňák, Jan; Albrechtová, Jana; Lstibůrek, Milan

    2016-07-01

    Current knowledge of the genetic mechanisms underlying the inheritance of photosynthetic activity in forest trees is generally limited, yet it is essential both for various practical forestry purposes and for better understanding of broader evolutionary mechanisms. In this study, we investigated genetic variation underlying selected chlorophyll a fluorescence (ChlF) parameters in structured populations of Scots pine (Pinus sylvestris L.) grown on two sites under non-stress conditions. These parameters were derived from the OJIP part of the ChlF kinetics curve and characterize individual parts of primary photosynthetic processes associated, for example, with the exciton trapping by light-harvesting antennae, energy utilization in photosystem II (PSII) reaction centers (RCs) and its transfer further down the photosynthetic electron-transport chain. An additive relationship matrix was estimated based on pedigree reconstruction, utilizing a set of highly polymorphic single sequence repeat markers. Variance decomposition was conducted using the animal genetic evaluation mixed-linear model. The majority of ChlF parameters in the analyzed pine populations showed significant additive genetic variation. Statistically significant heritability estimates were obtained for most ChlF indices, with the exception of DI0/RC, φD0 and φP0 (Fv/Fm) parameters. Estimated heritabilities varied around the value of 0.15 with the maximal value of 0.23 in the ET0/RC parameter, which indicates electron-transport flux from QA to QB per PSII RC. No significant correlation was found between these indices and selected growth traits. Moreover, no genotype × environment interaction (G × E) was detected, i.e., no differences in genotypes' performance between sites. The absence of significant G × E in our study is interesting, given the relatively low heritability found for the majority of parameters analyzed. Therefore, we infer that polygenic variability of these indices is selectively neutral. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Charge relaxation and dynamics in organic semiconductors

    NASA Astrophysics Data System (ADS)

    Kwok, H. L.

    2006-08-01

    Charge relaxation in dispersive materials is often described in terms of the stretched exponential function (Kohlrausch law). The process can be explained using a "hopping" model which, in principle, also applies to charge transport such as current conduction. This work analyzed reported transient photoconductivity data on functionalized pentacene single crystals using a geometric hopping model developed by B. Sturman et al. and extracted values (or ranges of values) for the materials parameters relevant to charge relaxation as well as charge transport. Using the correlated disorder model (CDM), we estimated values of the carrier mobility for the pentacene samples. From these results, we observed the following: (i) the transport site density appeared to be of the same order of magnitude as the carrier density; (ii) it was possible to extract lower-bound values on the materials parameters linked to the transport process; and (iii) by matching the simulated charge decay to the transient photoconductivity data, we were able to refine estimates of the materials parameters. The data also allowed us to simulate the stretched exponential decay. Our observations suggested that the stretching index and the carrier mobility were related. Physically, such interdependence would allow one to demarcate between localized molecular interactions and distant Coulomb interactions.
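
    A minimal sketch of fitting a Kohlrausch stretched exponential to a decay transient, for readers who want to see the functional form used; the data are synthetic, not the published pentacene measurements.

```python
# Sketch: fit a Kohlrausch (stretched-exponential) relaxation to a transient
# decay. Data are synthetic, not the pentacene photoconductivity data.
import numpy as np
from scipy.optimize import curve_fit

def kww(t, a0, tau, beta):
    """Stretched exponential: a0 * exp(-(t/tau)**beta)."""
    return a0 * np.exp(-(t / tau) ** beta)

t = np.linspace(0.01, 50.0, 200)                           # arbitrary time units
rng = np.random.default_rng(5)
y = kww(t, 1.0, 5.0, 0.6) + rng.normal(0.0, 0.01, t.size)  # synthetic transient

(a0, tau, beta), _ = curve_fit(kww, t, y, p0=[1.0, 1.0, 0.8],
                               bounds=([0.0, 1e-3, 0.1], [10.0, 1e3, 1.0]))
print(f"amplitude = {a0:.2f}, tau = {tau:.2f}, stretching index beta = {beta:.2f}")
```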

  12. Recognition and characterization of hierarchical interstellar structure. II - Structure tree statistics

    NASA Technical Reports Server (NTRS)

    Houlahan, Padraig; Scalo, John

    1992-01-01

    A new method of image analysis is described, in which images partitioned into 'clouds' are represented by simplified skeleton images, called structure trees, that preserve the spatial relations of the component clouds while disregarding information concerning their sizes and shapes. The method can be used to discriminate between images of projected hierarchical (multiply nested) and random three-dimensional simulated collections of clouds constructed on the basis of observed interstellar properties, and even intermediate systems formed by combining random and hierarchical simulations. For a given structure type, the method can distinguish between different subclasses of models with different parameters and reliably estimate their hierarchical parameters: average number of children per parent, scale reduction factor per level of hierarchy, density contrast, and number of resolved levels. An application to a column density image of the Taurus complex constructed from IRAS data is given. Moderately strong evidence for a hierarchical structural component is found, and parameters of the hierarchy, as well as the average volume filling factor and mass efficiency of fragmentation per level of hierarchy, are estimated. The existence of nested structure contradicts models in which large molecular clouds are supposed to fragment, in a single stage, into roughly stellar-mass cores.

  13. Evaluating the impact of lower resolutions of digital elevation model on rainfall-runoff modeling for ungauged catchments.

    PubMed

    Ghumman, Abul Razzaq; Al-Salamah, Ibrahim Saleh; AlSaleem, Saleem Saleh; Haider, Husnain

    2017-02-01

    Geomorphological instantaneous unit hydrograph (GIUH) models usually use geomorphologic parameters of the catchment estimated from a digital elevation model (DEM) for rainfall-runoff modeling of ungauged watersheds with limited data. Higher resolutions (e.g., 5 or 10 m) of DEM play an important role in the accuracy of rainfall-runoff models; however, such resolutions are expensive to obtain and require much greater effort and time for preparation of inputs. In this research, a modeling framework is developed to evaluate the impact of lower resolutions (i.e., 30 and 90 m) of DEM on the accuracy of the Clark GIUH model. Observed rainfall-runoff data of a 202 km² catchment in a semiarid region were used to develop direct runoff hydrographs for nine rainfall events. A geographical information system was used to process both DEMs. Model accuracy and errors were estimated by comparing the model results with the observed data. The study found (i) high model efficiencies greater than 90% for both resolutions, and (ii) that the efficiency of the Clark GIUH model does not significantly increase by enhancing the resolution of the DEM from 90 to 30 m. Thus, it is feasible to use lower resolutions (i.e., 90 m) of DEM in the estimation of peak runoff in ungauged catchments with relatively less effort. Through sensitivity analysis (Monte Carlo simulations), the kinematic wave parameter and stream length ratio are found to be the most significant parameters in velocity and peak flow estimations, respectively; thus, they need to be carefully estimated for calculation of direct runoff in ungauged watersheds using the Clark GIUH model.

  14. Monochloramine Disinfection Kinetics of Nitrosomonas europaea by Propidium Monoazide Quantitative PCR and Live/Dead BacLight Methods

    PubMed Central

    Wahman, David G.; Wulfeck-Kleier, Karen A.; Pressman, Jonathan G.

    2009-01-01

    Monochloramine disinfection kinetics were determined for the pure-culture ammonia-oxidizing bacterium Nitrosomonas europaea (ATCC 19718) by two culture-independent methods, namely, Live/Dead BacLight (LD) and propidium monoazide quantitative PCR (PMA-qPCR). Both methods were first verified with mixtures of heat-killed (nonviable) and non-heat-killed (viable) cells before a series of batch disinfection experiments with stationary-phase cultures (batch grown for 7 days) at pH 8.0, 25°C, and 5, 10, and 20 mg Cl2/liter monochloramine. Two data sets were generated based on the viability method used, either (i) LD or (ii) PMA-qPCR. These two data sets were used to estimate kinetic parameters for the delayed Chick-Watson disinfection model through a Bayesian analysis implemented in WinBUGS. This analysis provided parameter estimates of 490 mg Cl2-min/liter for the lag coefficient (b) and 1.6 × 10⁻³ to 4.0 × 10⁻³ liter/mg Cl2-min for the Chick-Watson disinfection rate constant (k). While estimates of b were similar for both data sets, the LD data set resulted in a greater k estimate than that obtained with the PMA-qPCR data set, implying that the PMA-qPCR viability measure was more conservative than LD. For N. europaea, the lag phase was not previously reported for culture-independent methods and may have implications for nitrification in drinking water distribution systems. This is the first published application of a PMA-qPCR method for disinfection kinetic model parameter estimation as well as its application to N. europaea or monochloramine. Ultimately, this PMA-qPCR method will allow evaluation of monochloramine disinfection kinetics for mixed-culture bacteria in drinking water distribution systems. PMID:19561179
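
    A minimal sketch of the delayed Chick-Watson form used for the kinetic fits, with log-inactivation flat up to a lag in CT and linear beyond it; an ordinary least-squares fit stands in for the study's Bayesian (WinBUGS) estimation, and the data points are synthetic.

```python
# Sketch: delayed Chick-Watson disinfection model fitted by least squares to
# synthetic survival data (the study used a Bayesian fit in WinBUGS).
import numpy as np
from scipy.optimize import curve_fit

def delayed_chick_watson(ct, b, k):
    """ln(N/N0) = 0 for CT <= b (lag), and -k*(CT - b) afterwards."""
    ct = np.asarray(ct, dtype=float)
    return np.where(ct <= b, 0.0, -k * (ct - b))

ct = np.array([0, 100, 250, 400, 500, 600, 800, 1000, 1200], dtype=float)  # mg Cl2-min/L
ln_surv = np.array([0.0, -0.02, 0.01, -0.03, -0.05, -0.3, -0.9, -1.5, -2.2])

(b, k), _ = curve_fit(delayed_chick_watson, ct, ln_surv, p0=[400.0, 2e-3])
print(f"lag b = {b:.0f} mg Cl2-min/L, rate k = {k:.2e} L/(mg Cl2-min)")
```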

  15. A new method for predicting response in complex linear systems. II. [under random or deterministic steady state excitation

    NASA Technical Reports Server (NTRS)

    Bogdanoff, J. L.; Kayser, K.; Krieger, W.

    1977-01-01

    The paper describes convergence and response studies in the low frequency range of complex systems, particularly with low values of damping of different distributions, and reports on the modification of the relaxation procedure required under these conditions. A new method is presented for response estimation in complex lumped parameter linear systems under random or deterministic steady state excitation. The essence of the method is the use of relaxation procedures with a suitable error function to find the estimated response; natural frequencies and normal modes are not computed. For a 45 degree of freedom system, and two relaxation procedures, convergence studies and frequency response estimates were performed. The low frequency studies are considered in the framework of earlier studies (Kayser and Bogdanoff, 1975) involving the mid to high frequency range.

  16. TOXNET: Toxicology Data Network

    MedlinePlus

    Record contents (excerpt): Supporting Data for Carcinogenicity; II.B. Quantitative Estimate of Carcinogenic Risk from Oral Exposure; ... of Confidence (Carcinogenicity, Oral Exposure); II.C. Quantitative Estimate of Carcinogenic Risk from Inhalation Exposure.

  17. Effect of gamma-irradiation on thermal decomposition kinetics, X-ray diffraction pattern and spectral properties of tris(1,2-diaminoethane)nickel(II)sulphate

    NASA Astrophysics Data System (ADS)

    Jayashri, T. A.; Krishnan, G.; Rema Rani, N.

    2014-12-01

    Tris(1,2-diaminoethane)nickel(II)sulphate was prepared, and characterised by various chemical and spectral techniques. The sample was irradiated with 60Co gamma rays for varying doses. Sulphite ion and ammonia were detected and estimated in the irradiated samples. Non-isothermal decomposition kinetics, X-ray diffraction pattern, Fourier transform infrared spectroscopy, electronic, fast atom bombardment mass spectra, and surface morphology of the complex were studied before and after irradiation. Kinetic parameters were evaluated by integral, differential, and approximation methods. Irradiation enhanced thermal decomposition, lowering thermal and kinetic parameters. The mechanism of decomposition is controlled by R3 function. From X-ray diffraction studies, change in lattice parameters and subsequent changes in unit cell volume and average crystallite size were observed. Both unirradiated and irradiated samples of the complex belong to trigonal crystal system. Decrease in the intensity of the peaks was observed in the infrared spectra of irradiated samples. Electronic spectral studies revealed that the M-L interaction is unaffected by irradiation. Mass spectral studies showed that the fragmentation patterns of the unirradiated and irradiated samples are similar. The additional fragment with m/z 256 found in the irradiated sample is attributed to S8+. Surface morphology of the complex changed upon irradiation.

  18. A Multiple-star Combined Solution Program - Application to the Population II Binary μ Cas

    NASA Astrophysics Data System (ADS)

    Gudehus, D. H.

    2001-05-01

    A multiple-star combined-solution computer program which can simultaneously fit astrometric, speckle, and spectroscopic data, and solve for the orbital parameters, parallax, proper motion, and masses has been written and is now publicly available. Some features of the program are the ability to scale the weights at run time, hold selected parameters constant, handle up to five spectroscopic subcomponents for the primary and the secondary each, account for the light travel time across the system, account for apsidal motion, plot the results, and write the residuals in position to a standard file for further analysis. The spectroscopic subcomponent data can be represented by reflex velocities and/or by independent measurements. A companion editing program which can manage the data files is included in the package. The program has been applied to the Population II binary μ Cas to derive improved masses and an estimate of the primordial helium abundance. The source code, executables, sample data files, and documentation for OpenVMS and Unix, including Linux, are available at http://www.chara.gsu.edu/~gudehus/binary.html.

  19. A Comparison of Methods for a Priori Bias Correction in Soil Moisture Data Assimilation

    NASA Technical Reports Server (NTRS)

    Kumar, Sujay V.; Reichle, Rolf H.; Harrison, Kenneth W.; Peters-Lidard, Christa D.; Yatheendradas, Soni; Santanello, Joseph A.

    2011-01-01

    Data assimilation is being increasingly used to merge remotely sensed land surface variables such as soil moisture, snow and skin temperature with estimates from land models. Its success, however, depends on unbiased model predictions and unbiased observations. Here, a suite of continental-scale, synthetic soil moisture assimilation experiments is used to compare two approaches that address typical biases in soil moisture prior to data assimilation: (i) parameter estimation to calibrate the land model to the climatology of the soil moisture observations, and (ii) scaling of the observations to the model's soil moisture climatology. To enable this research, an optimization infrastructure was added to the NASA Land Information System (LIS) that includes gradient-based optimization methods and global, heuristic search algorithms. The land model calibration eliminates the bias but does not necessarily result in more realistic model parameters. Nevertheless, the experiments confirm that model calibration yields assimilation estimates of surface and root zone soil moisture that are as skillful as those obtained through scaling of the observations to the model's climatology. Analysis of innovation diagnostics underlines the importance of addressing bias in soil moisture assimilation and confirms that both approaches adequately address the issue.
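
    A minimal sketch of the observation-scaling approach, assuming standard CDF matching of the observed soil moisture climatology to the model's climatology; the two climatologies are synthetic, and this is not the LIS implementation.

```python
# Sketch: CDF matching that rescales observed soil moisture to the model's
# climatology before assimilation. Series are synthetic, not LIS output.
import numpy as np

rng = np.random.default_rng(6)
model_clim = np.clip(rng.normal(0.25, 0.05, 3000), 0.02, 0.45)   # model climatology
obs_clim = np.clip(rng.normal(0.32, 0.08, 3000), 0.02, 0.60)     # biased observations

obs_sorted = np.sort(obs_clim)
model_sorted = np.sort(model_clim)
probs = (np.arange(obs_sorted.size) + 0.5) / obs_sorted.size

def scale_obs(obs_value):
    """Map an observation through the obs CDF onto the model quantile."""
    p = np.interp(obs_value, obs_sorted, probs)
    return np.interp(p, probs, model_sorted)

new_obs = [0.18, 0.32, 0.50]
print("scaled observations:", np.round([scale_obs(o) for o in new_obs], 3))
```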

  20. Vaccine approaches to malaria control and elimination: Insights from mathematical models.

    PubMed

    White, Michael T; Verity, Robert; Churcher, Thomas S; Ghani, Azra C

    2015-12-22

    A licensed malaria vaccine would provide a valuable new tool for malaria control and elimination efforts. Several candidate vaccines targeting different stages of the malaria parasite's lifecycle are currently under development, with one candidate, RTS,S/AS01 for the prevention of Plasmodium falciparum infection, having recently completed Phase III trials. Predicting the public health impact of a candidate malaria vaccine requires using clinical trial data to estimate the vaccine's efficacy profile--the initial efficacy following vaccination and the pattern of waning of efficacy over time. With an estimated vaccine efficacy profile, the effects of vaccination on malaria transmission can be simulated with the aid of mathematical models. Here, we provide an overview of methods for estimating the vaccine efficacy profiles of pre-erythrocytic vaccines and transmission-blocking vaccines from clinical trial data. In the case of RTS,S/AS01, model estimates from Phase II clinical trial data indicate a bi-phasic exponential profile of efficacy against infection, with efficacy waning rapidly in the first 6 months after vaccination followed by a slower rate of waning over the next 4 years. Transmission-blocking vaccines have yet to be tested in large-scale Phase II or Phase III clinical trials so we review ongoing work investigating how a clinical trial might be designed to ensure that vaccine efficacy can be estimated with sufficient statistical power. Finally, we demonstrate how parameters estimated from clinical trials can be used to predict the impact of vaccination campaigns on malaria using a mathematical model of malaria transmission. Copyright © 2015 Elsevier Ltd. All rights reserved.
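
    To make the "bi-phasic exponential profile" concrete, the sketch below evaluates one common parametrisation, a weighted sum of a fast and a slow exponential decay of efficacy against infection; the half-lives, weight and initial efficacy are invented and are not the RTS,S/AS01 estimates.

```python
# Sketch: a bi-phasic exponential vaccine-efficacy waning profile.
# Parameter values are illustrative, not the RTS,S/AS01 Phase II estimates.
import numpy as np

def efficacy(t_years, v0, rho, half_fast, half_slow):
    """V(t) = V0*(rho*exp(-ln2*t/half_fast) + (1-rho)*exp(-ln2*t/half_slow))."""
    ln2 = np.log(2.0)
    return v0 * (rho * np.exp(-ln2 * t_years / half_fast)
                 + (1.0 - rho) * np.exp(-ln2 * t_years / half_slow))

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
print(np.round(efficacy(t, v0=0.8, rho=0.6, half_fast=0.3, half_slow=3.0), 3))
```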

  1. Uncertainty and the Social Cost of Methane Using Bayesian Constrained Climate Models

    NASA Astrophysics Data System (ADS)

    Errickson, F. C.; Anthoff, D.; Keller, K.

    2016-12-01

    Social cost estimates of greenhouse gases are important for the design of sound climate policies and are also plagued by uncertainty. One major source of uncertainty stems from the simplified representation of the climate system used in the integrated assessment models that provide these social cost estimates. We explore how uncertainty over the social cost of methane varies with the way physical processes and feedbacks in the methane cycle are modeled by (i) coupling three different methane models to a simple climate model, (ii) using MCMC to perform a Bayesian calibration of the three coupled climate models that simulates direct sampling from the joint posterior probability density function (pdf) of model parameters, and (iii) producing probabilistic climate projections that are then used to calculate the Social Cost of Methane (SCM) with the DICE and FUND integrated assessment models. We find that including a temperature feedback in the methane cycle acts as an additional constraint during the calibration process and results in a correlation between the tropospheric lifetime of methane and several climate model parameters. This correlation is not seen in the models lacking this feedback. Several of the estimated marginal pdfs of the model parameters also exhibit different distributional shapes and expected values depending on the methane model used. As a result, probabilistic projections of the climate system out to the year 2300 exhibit different levels of uncertainty and magnitudes of warming for each of the three models under an RCP8.5 scenario. We find these differences in climate projections result in differences in the distributions and expected values for our estimates of the SCM. We also examine uncertainty about the SCM by performing a Monte Carlo analysis using a distribution for the climate sensitivity while holding all other climate model parameters constant. Our SCM estimates using the Bayesian calibration are lower and exhibit less uncertainty about extremely high values in the right tail of the distribution compared to the Monte Carlo approach. This finding has important climate policy implications and suggests previous work that accounts for climate model uncertainty by only varying the climate sensitivity parameter may overestimate the SCM.

  2. VizieR Online Data Catalog: Formation of MW halo and its dwarf satellites (Mashonkina+, 2017)

    NASA Astrophysics Data System (ADS)

    Mashonkina, L.; Jablonka, P.; Pakhomov, Yu; Sitnova, T.; North, P.

    2017-04-01

    Tables A.1 and A.2 from the article are presented. The first table contains atomic parameters of the Fe I/II and Ti I/II lines. The second contains atmospheric parameters and Fe I/II and Ti I/II NLTE abundances. (2 data files).

  3. Designing a Pediatric Study for an Antimalarial Drug by Using Information from Adults

    PubMed Central

    Jullien, Vincent; Samson, Adeline; Guedj, Jérémie; Kiechel, Jean-René; Zohar, Sarah; Comets, Emmanuelle

    2015-01-01

    The objectives of this study were to design a pharmacokinetic (PK) study by using information about adults and to evaluate the robustness of the recommended design through a case study of mefloquine. PK data for adults and children were available from two different randomized studies of the treatment of malaria with the same artesunate-mefloquine combination regimen. A recommended design for pediatric studies of mefloquine was optimized on the basis of an extrapolated model built from adult data through the following approach. (i) An adult PK model was built, and parameters were estimated by using the stochastic approximation expectation-maximization algorithm. (ii) Pediatric PK parameters were then obtained by adding allometry and maturation to the adult model. (iii) A D-optimal design for children was obtained with PFIM by assuming the extrapolated design. Finally, the robustness of the recommended design was evaluated in terms of the relative bias and relative standard errors (RSE) of the parameters in a simulation study with four different models and was compared to the empirical design used for the pediatric study. Combining PK modeling, extrapolation, and design optimization led to a design for children with five sampling times. PK parameters were well estimated by this design with small RSE. Although the extrapolated model did not predict the observed mefloquine concentrations in children very accurately, it allowed precise and unbiased estimates across various model assumptions, contrary to the empirical design. Using information from adult studies combined with allometry and maturation can help provide robust designs for pediatric studies. PMID:26711749
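
    A minimal sketch of the allometry-plus-maturation step used to extrapolate an adult clearance to a child, with the commonly used 0.75 weight exponent and a sigmoid maturation function of postmenstrual age; the maturation constants are placeholders, not the mefloquine values.

```python
# Sketch: scale an adult clearance to a child via allometry plus a sigmoid
# maturation function. Maturation constants are placeholders.

def pediatric_clearance(cl_adult, weight_kg, pma_weeks,
                        tm50=45.0, hill=3.0, wt_ref=70.0):
    """CL_child = CL_adult * (WT/70)^0.75 * PMA^h / (PMA^h + TM50^h)."""
    allometry = (weight_kg / wt_ref) ** 0.75
    maturation = pma_weeks ** hill / (pma_weeks ** hill + tm50 ** hill)
    return cl_adult * allometry * maturation

for wt, pma in [(10.0, 92.0), (20.0, 300.0), (35.0, 560.0)]:
    print(f"WT = {wt:4.0f} kg, PMA = {pma:4.0f} wk -> CL = "
          f"{pediatric_clearance(1.0, wt, pma):.3f} x adult")
```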

  4. Emittance and lifetime measurement with damping wigglers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, G. M.; Shaftan, T., E-mail: shaftan@bnl.gov; Cheng, W. X.

    National Synchrotron Light Source II (NSLS-II) is a new third-generation storage ring light source at Brookhaven National Laboratory. The storage ring design calls for small horizontal emittance (<1 nm-rad) and diffraction-limited vertical emittance at 12 keV (8 pm-rad). Achieving a low value of the beam size will enable novel user experiments with nm-range spatial and meV-energy resolution. The high-brightness NSLS-II lattice has been realized by implementing a 30-cell double-bend achromat lattice producing a horizontal emittance of 2 nm rad and then halving it further by using several Damping Wigglers (DWs). This paper is focused on characterization of the DW effects on the storage ring performance, namely, on reduction of the beam emittance, and corresponding changes in the energy spread and beam lifetime. The relevant beam parameters have been measured by the X-ray pinhole camera, beam position monitors, beam filling pattern monitor, and current transformers. In this paper, we compare the measured results of the beam performance with analytic estimates for the complement of the 3 DWs installed at the NSLS-II.

  5. Raman Microspectroscopic Mapping with Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) Applied to the High-Pressure Polymorph of Titanium Dioxide, TiO2-II.

    PubMed

    Smith, Joseph P; Smith, Frank C; Ottaway, Joshua; Krull-Davatzes, Alexandra E; Simonson, Bruce M; Glass, Billy P; Booksh, Karl S

    2017-08-01

    The high-pressure, α-PbO2-structured polymorph of titanium dioxide (TiO2-II) was recently identified in micrometer-sized grains recovered from four Neoarchean spherule layers deposited between ∼2.65 and ∼2.54 billion years ago. Several lines of evidence support the interpretation that these layers represent distal impact ejecta layers. The presence of shock-induced TiO2-II provides physical evidence to further support an impact origin for these spherule layers. Detailed characterization of the distribution of TiO2-II in these grains may be useful for correlating the layers, estimating the paleodistances of the layers from their source craters, and providing insight into the formation of the TiO2-II. Here we report the investigation of TiO2-II-bearing grains from these four spherule layers using multivariate curve resolution-alternating least squares (MCR-ALS) applied to Raman microspectroscopic mapping. Raman spectra provide evidence of grains consisting primarily of rutile (TiO2) and TiO2-II, as shown by Raman bands at 174 cm⁻¹ (TiO2-II), 426 cm⁻¹ (TiO2-II), 443 cm⁻¹ (rutile), and 610 cm⁻¹ (rutile). Principal component analysis (PCA) yielded a predominantly three-phase system comprised of rutile, TiO2-II, and substrate-adhesive epoxy. Scanning electron microscopy (SEM) suggests heterogeneous grains containing polydispersed micrometer- and submicrometer-sized particles. Multivariate curve resolution-alternating least squares applied to the Raman microspectroscopic mapping yielded up to five distinct chemical components: three phases of TiO2 (rutile, TiO2-II, and anatase), quartz (SiO2), and substrate-adhesive epoxy. Spectral profiles and spatially resolved chemical maps of the pure chemical components were generated using MCR-ALS applied to the Raman microspectroscopic maps. The spatial resolution of the Raman microspectroscopic maps was enhanced in comparable, cost-effective analysis times by limiting spectral resolution and optimizing spectral acquisition parameters. Using the resolved spectra of TiO2-II generated from MCR-ALS analysis, a Raman spectrum for pure TiO2-II was estimated to further facilitate its identification.
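
    A minimal sketch of the MCR-ALS decomposition itself (D ≈ C S, with component spectra as the rows of S and non-negativity imposed by clipping), applied to a synthetic two-component data matrix; it is a bare-bones alternating least squares loop, not the constrained pipeline used in the study.

```python
# Sketch: bare-bones MCR-ALS (D ~= C @ S) with non-negativity clipping, on a
# synthetic two-component spectral map. Not the study's constrained pipeline.
import numpy as np

rng = np.random.default_rng(7)
wavenumbers = np.linspace(100.0, 800.0, 350)
s_true = np.vstack([np.exp(-0.5 * ((wavenumbers - 443.0) / 12.0) ** 2),   # rutile-like band
                    np.exp(-0.5 * ((wavenumbers - 174.0) / 10.0) ** 2)])  # TiO2-II-like band
c_true = np.abs(rng.normal(1.0, 0.5, size=(200, 2)))                      # pixel scores
D = c_true @ s_true + rng.normal(0.0, 0.01, size=(200, 350))              # data matrix

C = np.abs(rng.normal(1.0, 0.5, size=(200, 2)))                           # initial guess
for _ in range(100):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0.0, None)          # spectra (rows)
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0.0, None)    # scores

residual = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
print(f"relative residual after ALS: {residual:.3f}")
```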

  6. VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data

    PubMed Central

    Daunizeau, Jean; Adam, Vincent; Rigoux, Lionel

    2014-01-01

    This work is in line with an on-going effort tending toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to pair with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work presents a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization. PMID:24465198

  7. Evaluation of solar Type II radio burst estimates of initial solar wind shock speed using a kinematic model of the solar wind on the April 2001 solar event swarm

    NASA Astrophysics Data System (ADS)

    Sun, W.; Dryer, M.; Fry, C. D.; Deehr, C. S.; Smith, Z.; Akasofu, S.-I.; Kartalev, M. D.; Grigorov, K. G.

    2002-04-01

    We compare simulation results of real-time shock arrival time prediction with observations by the ACE satellite for a series of solar flares/coronal mass ejections that took place between 28 March and 18 April 2001, on the basis of the Hakamada-Akasofu-Fry, version 2 (HAFv.2) model. It is found, via an ex post facto calculation, that the initial speed of the shock waves used as an input parameter of the modeling is crucial for the agreement between observation and simulation. The initial speed determined by metric Type II radio burst observations must be substantially reduced (by 30 percent on average) for most high-speed shock waves.

  8. Synthesis and physicochemical properties of bis(L-asparaginato) zinc(II): A promising new semiorganic crystal with high laser damage threshold for shorter wavelength generation

    NASA Astrophysics Data System (ADS)

    Subhashini, R.; Arjunan, S.

    2018-05-01

    A promising nonlinear semiorganic optical crystal, bis(L-asparaginato)zinc(II) [BLAZ], was synthesized by a traditional slow-evaporation solution growth technique. The cell parameters were estimated from single-crystal X-ray diffraction analysis. Spectroscopic study substantiates the presence of the functional groups. The UV spectrum shows a wide transparency window, and several optical constants, such as the extinction coefficient (K), refractive index, optical conductivity and electric susceptibility, together with the real and imaginary parts of the dielectric constant, were calculated using the transmittance data. The fluorescence emission spectrum of the crystal shows red emission. The laser-induced surface damage threshold of the crystal was measured using an Nd:YAG laser. The output intensity of second harmonic generation was estimated using the Kurtz and Perry powder method. The hardness stability was investigated by the Vickers microhardness test. The decomposition and thermal stability of the compound were examined by TGA-DSC studies. Dielectric studies were carried out to analyze the electrical properties of the crystal. SEM analysis reveals the existence of minute crystallites on the growth surface.

  9. Generic NICA-Donnan model parameters for metal-ion binding by humic substances.

    PubMed

    Milne, Christopher J; Kinniburgh, David G; van Riemsdijk, Willem H; Tipping, Edward

    2003-03-01

    A total of 171 datasets of literature and experimental data for metal-ion binding by fulvic and humic acids have been digitized and re-analyzed using the NICA-Donnan model. Generic parameter values have been derived that can be used for modeling in the absence of specific metal-ion binding measurements. These values complement the previously derived generic descriptions of proton binding. For ions where the ranges of pH, concentration, and ionic strength conditions are well covered by the available data, the generic parameters successfully describe the metal-ion binding behavior across a very wide range of conditions and for different humic and fulvic acids. Where published data for other metal ions are too sparse to constrain the model well, generic parameters have been estimated by interpolating trends observable in the parameter values of the well-defined data. Recommended generic NICA-Donnan model parameters are provided for 23 metal ions (Al, Am, Ba, Ca, Cd, Cm, Co, Cr(III), Cu, Dy, Eu, Fe(II), Fe(III), Hg, Mg, Mn, Ni, Pb, Sr, Th(IV), U(VI)O2, V(III)O, and Zn) for both fulvic and humic acids. These parameters probably represent the best NICA-Donnan description of metal-ion binding that can be achieved using existing data.

  10. Rapid determination of vial heat transfer parameters using tunable diode laser absorption spectroscopy (TDLAS) in response to step-changes in pressure set-point during freeze-drying.

    PubMed

    Kuu, Wei Y; Nail, Steven L; Sacha, Gregory

    2009-03-01

    The purpose of this study was to perform a rapid determination of vial heat transfer parameters, that is, the contact parameter K(cs) and the separation distance l(v), using the sublimation rate profiles measured by tunable diode laser absorption spectroscopy (TDLAS). In this study, each size of vial was filled with pure water and subjected to a freeze-drying cycle using a LyoStar II dryer (FTS Systems) with step-changes of the chamber pressure set-point to 25, 50, 100, 200, 300, and 400 mTorr. K(cs) was independently determined by nonlinear parameter estimation using the sublimation rates measured at the pressure set-point of 25 mTorr. After obtaining K(cs), the l(v) value for each vial size was determined by nonlinear parameter estimation using the pooled sublimation rate profiles obtained at 25 to 400 mTorr. The vial heat transfer coefficient K(v), as a function of the chamber pressure, was readily calculated using the obtained K(cs) and l(v) values. It is interesting to note the significant difference in K(v) of two similar types of 10 mL Schott tubing vials, primarily due to the geometry of the vial-bottom, as demonstrated by the images of the contact areas of the vial-bottom. (c) 2008 Wiley-Liss, Inc. and the American Pharmacists Association

  11. Correlation dimension and phase space contraction via extreme value theory

    NASA Astrophysics Data System (ADS)

    Faranda, Davide; Vaienti, Sandro

    2018-04-01

    We show how to obtain theoretical and numerical estimates of correlation dimension and phase space contraction by using extreme value theory. The maxima of suitable observables sampled along the trajectory of a chaotic dynamical system converge asymptotically to classical extreme value laws where: (i) the inverse of the scale parameter gives the correlation dimension, and (ii) the extremal index is associated with the rate of phase space contraction for backward iteration, which, in dimensions 1 and 2, is closely related to the positive Lyapunov exponent, and in higher dimensions is related to the metric entropy. We call it the Dynamical Extremal Index. Numerical estimates are straightforward to obtain, as they require just a simple fit to a univariate distribution. Numerical tests range from low-dimensional maps to generalized Hénon maps and climate data. The estimates of the indicators are particularly robust even with relatively short time series.
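
    As a concrete illustration of this recipe (not the authors' code), the sketch below applies it to the classical Hénon map: the observable is minus the log-distance to a reference point on the attractor, block maxima are fitted with a generalized extreme value law, and the reciprocal of the fitted scale parameter gives a local dimension estimate. The trajectory length, block size and reference point are arbitrary choices.

```python
import numpy as np
from scipy.stats import genextreme

# Trajectory of the Henon map (a = 1.4, b = 0.3)
n = 200_000
x = np.empty((n, 2))
x[0] = (0.1, 0.1)
for t in range(n - 1):
    x[t + 1] = (1.0 - 1.4 * x[t, 0] ** 2 + x[t, 1], 0.3 * x[t, 0])
traj = x[1000:]                      # discard transient

zeta = traj[5000]                    # reference point on the attractor
dist = np.linalg.norm(traj - zeta, axis=1)
dist[dist == 0] = np.inf             # exclude the reference point itself
obs = -np.log(dist)                  # observable whose maxima follow an EV law

# Block maxima and GEV fit; 1/scale estimates the local (correlation) dimension
block = 500
maxima = obs[: (obs.size // block) * block].reshape(-1, block).max(axis=1)
shape, loc, scale = genextreme.fit(maxima)
print("local dimension estimate:", 1.0 / scale)   # roughly 1.2 for the Henon attractor
```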

  12. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study

    PubMed Central

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-01-01

    Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of monetary valuation against the health-effect uncertainties. Results The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only a minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. Conclusion When estimating life-expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without any major effect on the results. PMID:17714598
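
    The rank-order-correlation sensitivity analysis referred to above amounts to sampling the uncertain inputs, running the model once per sample, and computing Spearman correlations between each input and the output. The snippet sketches this with a stand-in output function; the parameter names, distributions and model form are illustrative only and are not taken from the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 5000

# Illustrative Monte Carlo samples of uncertain inputs (not the study's values)
inputs = {
    "exposure_response": rng.lognormal(mean=np.log(0.01), sigma=0.3, size=n),
    "discount_rate":     rng.uniform(0.0, 0.05, size=n),
    "exposure_ugm3":     rng.normal(9.0, 1.5, size=n),
}

# Stand-in for the life-table model: monetised health impact (arbitrary form)
output = (inputs["exposure_response"] * inputs["exposure_ugm3"]
          / (1.0 + inputs["discount_rate"]) ** 10)

# Rank-order (Spearman) correlation between each input and the output
for name, values in inputs.items():
    rho, _ = spearmanr(values, output)
    print(f"{name:>20s}: rho = {rho:+.2f}")
```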

  13. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study.

    PubMed

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-08-23

    The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of monetary valuation against the health-effect uncertainties. The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only a minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. When estimating life-expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without any major effect on the results.

  14. Spectral, thermal, kinetic, molecular modeling and eukaryotic DNA degradation studies for a new series of albendazole (HABZ) complexes

    NASA Astrophysics Data System (ADS)

    El-Metwaly, Nashwa M.; Refat, Moamen S.

    2011-01-01

    This work presents a detailed investigation of the ligational behavior of the albendazole ligand through its coordination with Cu(II), Mn(II), Ni(II), Co(II) and Cr(III) ions. Elemental analysis, molar conductance, magnetic moments, spectral studies (IR, UV-Vis and ESR) and thermogravimetric analysis (TG and DTG) have been used to characterize the isolated complexes. A careful comparison of the IR spectra reveals that the ligand coordinates with all of the mentioned metal ions in the same manner, as a neutral bidentate, through the carbonyl of the ester moiety and the NH group. This chelation mode is expected under the preparation conditions, in a relatively acidic medium. The powder XRD study reflects the amorphous nature of the investigated complexes except for the Mn(II) one. The conductivity measurements reflect the non-electrolytic character of all complexes. On the basis of the magnetic measurements and the electronic spectral data, an octahedral structure is proposed for the Cr(III) and Ni(II) complexes, a tetrahedral structure for the Co(II) and Mn(II) complexes, and a square-pyramidal structure for the Cu(II) one. The thermogravimetric analysis confirms the presence or absence of water molecules and their mode of attachment. The kinetic parameters are also estimated from the DTG and TG curves. ESR data for the solid Cu(II) complex confirm that the square-pyramidal geometry best fits its coordination structure. The albendazole ligand and its complexes were biologically investigated against two bacteria, as well as for their effect on the degradation of calf thymus DNA.

  15. BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.

    PubMed

    Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R

    2015-02-20

    Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .
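
    The benchmark problems themselves have to be obtained from the BioPreDyn-bench site; as a generic illustration of the calibration task they pose (simulate a kinetic model, compare with data, adjust parameters), the sketch below fits a toy two-state ODE model with SciPy. The model, rate constants and noise level are invented for the example and are not one of the benchmark cases.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, k1, k2):
    # Toy kinetic model: S -> P at rate k1*S, P degraded at rate k2*P
    s, p = y
    return [-k1 * s, k1 * s - k2 * p]

t_obs = np.linspace(0, 10, 25)
true = (0.8, 0.3)
rng = np.random.default_rng(2)
y_obs = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_obs, args=true).y
y_obs = y_obs + 0.02 * rng.standard_normal(y_obs.shape)   # synthetic noisy data

def residuals(theta):
    sim = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_obs, args=tuple(theta)).y
    return (sim - y_obs).ravel()

fit = least_squares(residuals, x0=[0.1, 0.1], bounds=([1e-6, 1e-6], [10, 10]))
print("estimated (k1, k2):", fit.x)   # should recover roughly (0.8, 0.3)
```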

  16. Long-term variations of the upper atmosphere parameters on Rome ionosonde observations and their interpretation

    NASA Astrophysics Data System (ADS)

    Perrone, Loredana; Mikhailov, Andrey; Cesaroni, Claudio; Alfonsi, Lucilla; Santis, Angelo De; Pezzopane, Michael; Scotto, Carlo

    2017-09-01

    A recently proposed self-consistent approach to the analysis of thermospheric and ionospheric long-term trends has been applied to Rome ionosonde summer noontime observations for the (1957-2015) period. This approach includes: (i) a method to extract ionospheric parameter long-term variations; (ii) a method to retrieve from observed foF1 neutral composition (O, O2, N2), exospheric temperature, Tex and the total solar EUV flux with λ < 1050 Å; and (iii) a combined analysis of the ionospheric and thermospheric parameter long-term variations using the theory of ionospheric F-layer formation. Atomic oxygen, [O] and [O]/[N2] ratio control foF1 and foF2 while neutral temperature, Tex controls hmF2 long-term variations. Noontime foF2 and foF1 long-term variations demonstrate a negative linear trend estimated over the (1962-2010) period which is mainly due to atomic oxygen decrease after ˜1990. A linear trend in (δhmF2)11y estimated over the (1962-2010) period is very small and insignificant reflecting the absence of any significant trend in neutral temperature. The retrieved neutral gas density, ρ atomic oxygen, [O] and exospheric temperature, Tex long-term variations are controlled by solar and geomagnetic activity, i.e. they have a natural origin. The residual trends estimated over the period of ˜5 solar cycles (1957-2015) are very small (<0.5% per decade) and statistically insignificant.

  17. Chromosome aberrations and cell death by ionizing radiation: Evolution of a biophysical model

    NASA Astrophysics Data System (ADS)

    Ballarini, Francesca; Carante, Mario P.

    2016-11-01

    The manuscript summarizes and discusses the various versions of a radiation damage biophysical model, implemented as a Monte Carlo simulation code, originally developed for chromosome aberrations and subsequently extended to cell death. This extended version has been called BIANCA (BIophysical ANalysis of Cell death and chromosome Aberrations). According to the basic assumptions, complex double-strand breaks (called "Cluster Lesions", or CLs) produce independent chromosome free-ends, mis-rejoining within a threshold distance d (or un-rejoining) leads to chromosome aberrations, and "lethal aberrations" (i.e., dicentrics plus rings plus large deletions) lead to clonogenic cell death. The mean number of CLs per Gy and per cell is an adjustable parameter. While in BIANCA the threshold distance d was the second parameter, in a subsequent version, called BIANCA II, d has been fixed as the mean distance between two adjacent interphase chromosome territories, and a new parameter, f, has been introduced to represent the chromosome free-end un-rejoining probability. Simulated dose-response curves for chromosome aberrations and cell survival obtained by the various model versions were compared with literature experimental data. Such comparisons provided indications on some open questions, including the role of energy deposition clustering at the nm and the μm level, the probability for a chromosome free-end to remain un-rejoined, and the relationship between chromosome aberrations and cell death. Although both BIANCA and BIANCA II provided cell survival curves in general agreement with human and hamster fibroblast survival data, BIANCA II allowed for a better reproduction of dicentrics, rings and deletions considered separately. Furthermore, the approach adopted in BIANCA II for d is more consistent with estimates reported in the literature. After testing against aberration and survival data, BIANCA II was applied to investigate the depth-dependence of the radiation effectiveness for a proton SOBP used to treat eye melanoma in Catania, Italy. The survival of AG01522 cells at different depths was reproduced, and the survival of V79 cells was predicted. For both cell lines, the simulations also predicted yields of chromosome aberrations, some of which can be regarded as indicators of the risk to normal tissues.

  18. FR II radio galaxies in the Sloan Digital Sky Survey: observational facts

    NASA Astrophysics Data System (ADS)

    Kozieł-Wierzbowska, D.; Stasińska, G.

    2011-08-01

    Starting from the Cambridge Catalogues of radio sources, we have created a sample of 401 Fanaroff-Riley type II (FR II) radio sources that have counterparts in the main galaxy sample of the seventh Data Release of the Sloan Digital Sky Survey (SDSS) and analyse their radio and optical properties. We find that the luminosity in the Hα line - which we argue gives a better measure of the total emission-line flux than the widely used luminosity in [O III] - is strongly correlated with the radio luminosity P1.4 GHz. We show that the absence of emission lines in about one third of our sample is likely due to a detection threshold and not to a lack of optical activity. We also find a very strong correlation between the values of LHα and P1.4 GHz when scaled by ‘MBH’, an estimate of the black hole mass. We find that the properties of FR II galaxies are mainly driven by the Eddington parameter LHα/‘MBH’ or, equivalently, P1.4 GHz/‘MBH’. Radio galaxies with hotspots are found among the ones with the highest values of P1.4 GHz/‘MBH’. Compared to classical active galactic nuclei (AGN) hosts in the main galaxy sample of the SDSS, our FR II galaxies show a larger proportion of objects with very hard ionizing radiation field and large ionization parameter. A few objects are, on the contrary, ionized by a softer radiation field. Two of them have double-peaked emission lines and deserve more attention. We find that the black hole masses and stellar masses in FR II galaxies are very closely related: ‘MBH’ ∝ M*^1.13 with very little scatter. A comparison sample of line-less galaxies in the SDSS follows exactly the same relation, although the masses are, on average, smaller. This suggests that the FR II radio phenomenon occurs in normal elliptical galaxies, preferentially in the most massive ones. Although most FR II galaxies are old, some contain traces of young stellar populations. Such young populations are not seen in normal line-less galaxies, suggesting that the radio (and optical) activity in some FR II galaxies may be triggered by recent star formation. The ‘MBH’-M* relation in a comparison sample of radio-quiet AGN hosts from the SDSS is very different, suggesting that galaxies which are still forming stars are also still building their central black holes. Globally, our study indicates that, while radio and optical activity are strongly related in FR II galaxies, the features of the optical activity in FR IIs are distinct from those of the bulk of radio-quiet active galaxies. An appendix (available as Supporting Information with the online version of the article) gives the radio maps of our FR II galaxies, superimposed on the SDSS images, and the parameters derived for our analysis that were not publicly available.

  19. Data-driven Techniques to Estimate Parameters in the Homogenized Energy Model for Shape Memory Alloys

    DTIC Science & Technology

    2011-11-01

    sensor. Volume 79781K, Proceedings of the SPIE 7978, 2011. [9] D.J. Hartl, D.C. Lagoudas, F.T. Calkins, and J.H. Mabe. Use of a Ni60Ti shape memory alloy for active jet engine chevron application: I. Thermomechanical characterization. Smart Materials and Structures, 19:1-14, 2010. [10] D.J. Hartl, D.C. Lagoudas, F.T. Calkins, and J.H. Mabe. Use of a Ni60Ti shape memory alloy for active jet engine chevron application: II. Experimentally validated

  20. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    PubMed

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of those sequences that maximize likelihood under the Jukes-Cantor model is uninformative in the worst possible sense. For all inputs, all trees optimize the likelihood score. Second, we show that a greedy heuristic that uses GTR+Gamma ML to optimize the alignment and the tree can produce very poor alignments and trees. Therefore, the excellent performance of SATé-II and SATé-I is not because ML is used as an optimization criterion for choosing the best tree/alignment pair but rather due to the particular divide-and-conquer realignment techniques employed.

  1. Estimate of the soil water retention curve from the sorptivity and β parameter calculated from an upward infiltration experiment

    NASA Astrophysics Data System (ADS)

    Moret-Fernández, D.; Latorre, B.

    2017-01-01

    The water retention curve (θ(h)), which defines the relationship between the volumetric water content (θ) and the matric potential (h), is of paramount importance to characterize the hydraulic behaviour of soils. Because current methods to estimate θ(h) are, in general, tedious and time consuming, alternative procedures to determine θ(h) are needed. Using an upward infiltration curve, the main objective of this work is to present a method to determine the parameters of the van Genuchten (1980) water retention curve (α and n) from the sorptivity (S) and the β parameter defined in the 1D infiltration equation proposed by Haverkamp et al. (1994). The first specific objective is to present an equation, based on the Haverkamp et al. (1994) analysis, which describes an upward infiltration process. Second, assuming a known saturated hydraulic conductivity, Ks, calculated on a finite soil column by Darcy's law, a numerical procedure to calculate S and β by inverse analysis of an exfiltration curve is presented. Finally, the α and n values are numerically calculated from Ks, S and β. To accomplish the first specific objective, cumulative upward infiltration curves simulated with HYDRUS-1D for sand, loam, silt and clay soils were compared to those calculated with the proposed equation, after applying the corresponding β and S calculated from the theoretical Ks, α and n. The same curves were used to: (i) study the influence of the exfiltration time on S and β estimations, (ii) evaluate the limits of the inverse analysis, and (iii) validate the feasibility of the method to estimate α and n. Next, the θ(h) parameters estimated with the numerical method on experimental soils were compared to those obtained with pressure cells. The results showed that the upward infiltration curve could be correctly described by the modified Haverkamp et al. (1994) equation. While S was only affected by early-time exfiltration data, the β parameter had a significant influence on the long-time exfiltration curve, and its estimation accuracy increased with time. The 1D infiltration model was only suitable for β < 1.7 (sand, loam and silt). After omitting the clay soil, an excellent relationship (R2 = 0.99, p < 0.005) was observed between the theoretical α and n values of the synthetic soils and those estimated from the inverse analysis. Consistent results, with a significant relationship (p < 0.001) between the n values estimated with the pressure cell and the upward infiltration analysis, were also obtained on the experimental soils.
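
    For reference, the van Genuchten (1980) retention curve that the α and n parameters enter is θ(h) = θr + (θs − θr)[1 + (α|h|)^n]^(−m) with m = 1 − 1/n. The sketch below evaluates it and fits α and n directly to retention data with non-linear least squares; the data values, θr and θs are illustrative, and this is a direct fit to pressure-cell-style data rather than the sorptivity-based inverse procedure of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, alpha, n, theta_r=0.05, theta_s=0.45):
    """van Genuchten (1980) retention curve, m = 1 - 1/n, h taken in absolute value."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
    return theta_r + (theta_s - theta_r) * se

# Illustrative retention data (|h| in cm, theta in cm3/cm3), not from the study
h_data = np.array([10, 30, 100, 300, 1000, 5000, 15000], dtype=float)
theta_data = np.array([0.43, 0.40, 0.33, 0.26, 0.18, 0.11, 0.08])

(alpha_hat, n_hat), _ = curve_fit(van_genuchten, h_data, theta_data,
                                  p0=[0.02, 1.5], bounds=([1e-4, 1.01], [1.0, 10.0]))
print(f"alpha = {alpha_hat:.4f} 1/cm, n = {n_hat:.2f}")
```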

  2. Molecular hydrogen absorption systems in Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Balashev, S. A.; Klimenko, V. V.; Ivanchik, A. V.; Varshalovich, D. A.; Petitjean, P.; Noterdaeme, P.

    2014-05-01

    We present a systematic search for molecular hydrogen absorption systems at high redshift in quasar spectra from the Sloan Digital Sky Survey (SDSS)-II Data Release 7 and SDSS-III Data Release 9. We have selected candidates using a modified profile fitting technique taking into account that the Lyα forest can effectively mimic H2 absorption systems at the resolution of SDSS data. To estimate the confidence level of the detections, we use two methods: a Monte Carlo sampling and an analysis of control samples. The analysis of control samples allows us to define regions of the spectral quality parameter space where H2 absorption systems can be confidently identified. We find that H2 absorption systems with column densities log NH2 > 19 can be detected in fewer than 3 per cent of SDSS quasar spectra. We estimate the upper limit on the detection rate of saturated H2 absorption systems (log NH2 > 19) in damped Lyα (DLA) systems to be about 7 per cent. We provide a sample of 23 confident H2 absorption system candidates that would be interesting to follow up with high-resolution spectrographs. There is a 1σ r - i colour excess and a non-significant AV extinction excess in quasar spectra with an H2 candidate compared to standard DLA-bearing quasar spectra. The equivalent widths of C II, Si II and Al III (but not Fe II) absorptions associated with H2 candidate DLAs are larger compared to standard DLAs. This is probably related to a larger spread in velocity of the absorption lines in the H2-bearing sample.

  3. Inverse modeling and uncertainty analysis of potential groundwater recharge to the confined semi-fossil Ohangwena II Aquifer, Namibia

    NASA Astrophysics Data System (ADS)

    Wallner, Markus; Houben, Georg; Lohe, Christoph; Quinger, Martin; Himmelsbach, Thomas

    2017-12-01

    The identification of potential recharge areas and estimation of recharge rates to the confined semi-fossil Ohangwena II Aquifer (KOH-2) is crucial for its future sustainable use. The KOH-2 is located within the endorheic transboundary Cuvelai-Etosha-Basin (CEB), shared by Angola and Namibia. The main objective was the development of a strategy to tackle the problem of data scarcity, which is a well-known problem in semi-arid regions. In a first step, conceptual geological cross sections were created to illustrate the possible geological setting of the system. Furthermore, groundwater travel times were estimated by simple hydraulic calculations. A two-dimensional numerical groundwater model was set up to analyze flow patterns and potential recharge zones. The model was optimized against local observations of hydraulic heads and groundwater age. The sensitivity of the model against different boundary conditions and internal structures was tested. Parameter uncertainty and recharge rates were estimated. Results indicate that groundwater recharge to the KOH-2 mainly occurs from the Angolan Highlands in the northeastern part of the CEB. The sensitivity of the groundwater model to different internal structures is relatively small in comparison to changing boundary conditions in the form of influent or effluent streams. Uncertainty analysis underlined previous results, indicating groundwater recharge originating from the Angolan Highlands. The estimated recharge rates are less than 1% of mean yearly precipitation, which are reasonable for semi-arid regions.

  4. Combining Spitzer Parallax and Keck II Adaptive Optics Imaging to Measure the Mass of a Solar-like Star Orbited by a Cold Gaseous Planet Discovered by Microlensing

    NASA Astrophysics Data System (ADS)

    Beaulieu, J.-P.; Batista, V.; Bennett, D. P.; Marquette, J.-B.; Blackman, J. W.; Cole, A. A.; Coutures, C.; Danielski, C.; Dominis Prester, D.; Donatowicz, J.; Fukui, A.; Koshimoto, N.; Lončarić, K.; Morales, J. C.; Sumi, T.; Suzuki, D.; Henderson, C.; Shvartzvald, Y.; Beichman, C.

    2018-02-01

    To obtain accurate mass measurements for cold planets discovered by microlensing, it is usually necessary to combine light curve modeling with at least two lens mass–distance relations. The physical parameters of the planetary system OGLE-2014-BLG-0124L have been constrained thanks to accurate parallax effect between ground-based and simultaneous space-based Spitzer observations. Here, we resolved the source+lens star from sub-arcsecond blends in H-band using adaptive optics (AO) observations with NIRC2 mounted on Keck II telescope. We identify additional flux, coincident with the source to within 160 mas. We estimate the potential contributions to this blended light (chance-aligned star, additional companion to the lens or to the source) and find that 85% of the NIR flux is due to the lens star at H_L = 16.63 ± 0.06 and K_L = 16.44 ± 0.06. We combined the parallax constraint and the AO constraint to derive the physical parameters of the system. The lensing system is composed of a mid-late type G main sequence star of M_L = 0.9 ± 0.05 M_⊙ located at D_L = 3.5 ± 0.2 kpc in the Galactic disk. Taking the mass ratio and projected separation from the original study leads to a planet of M_p = 0.65 ± 0.044 M_Jupiter at 3.48 ± 0.22 au. Excellent parallax measurements from simultaneous ground-space observations have been obtained on the microlensing event OGLE-2014-BLG-0124, but it is only when they are combined with ∼30 minutes of Keck II AO observations that the physical parameters of the host star are well measured.

  5. [CIP and CAP fragments of parathormone and selected parameters of calcium-phosphate balance in patients with chronic kidney disease treated with repeated haemodialysis].

    PubMed

    Polak-Jonkisz, Dorota; Zwolińska, Danuta; Nahaczewska, Wiesława

    2010-01-01

    Chronic kidney disease (CKD) leads to bone and mineral complications, which are manifested, among others, by hyperparathyroidism and disturbances of calcium-phosphate and vitamin D balance. The results of investigations assessing the usefulness of the CAP/CIP ratio (cyclase-activating PTH/cyclase-inactive PTH) as a marker of bone turnover and bone disturbances in this group of patients are contradictory. The aim of the study was to estimate the concentrations of the CAP and CIP fragments of parathormone, their connection with selected calcium-phosphate balance parameters, and the usefulness of the CAP/CIP ratio to differentiate bone mineral density in patients with CKD treated with repeated haemodialysis. The study included 31 children aged 5 to 18 years: group I - 15 haemodialysed children; group II - 16 healthy children. The patients underwent the following serum measurements: calcium concentration (Ca), inorganic phosphate (P), 1,25-dihydroxyvitamin D and parathormone (intact PTH); CAP and CIP were evaluated with the Scantibodies Laboratory Inc. test. In group I a densitometric examination was performed using the Lunar DPX-L system, with overall bone measurement. In children from group I the average values of iPTH concentration and of both the CIP and CAP components were significantly elevated (p<0.05) as compared to group II. The CAP/CIP ratio in group I was <1; in healthy children it was >1. Average serum concentrations of Ca and 1,25(OH)2D in group I were lowered, although without statistical significance in comparison with group II. The CAP/CIP ratio does not differentiate the children with bone disturbances. Densitometric examination revealed osteopenic changes in 3 children and osteoporosis in 2 children. There were no statistically significant correlations between the examined parameters. 1. The CIP/CAP ratio does not differentiate bone mineral density status and is not associated with biochemical parameters of calcium-phosphate metabolism. 2. This indicates its poor diagnostic utility with reference to mineralization disturbances in children with chronic kidney disease.

  6. In-vivo study for anti-hyperglycemic potential of aqueous extract of Basil seeds (Ocimum basilicum Linn) and its influence on biochemical parameters, serum electrolytes and haematological indices.

    PubMed

    Chaudhary, Sachin; Semwal, Amit; Kumar, Hitesh; Verma, Harish Chandra; Kumar, Amit

    2016-12-01

    The study investigated the anti-hyperglycemic influence of an aqueous extract of Ocimum basilicum seeds (AEOBS) in streptozotocin (STZ)-induced diabetic rats, estimating its potential to ameliorate altered levels of biochemical parameters, serum electrolytes and haematological indices, along with its effect on the body weight of treated rats. Albino rats were selected for an oral glucose tolerance test by oral intake of aqueous glucose solution (4 g/kg body weight) in normal rats, with estimation of blood glucose levels after administration of AEOBS at 250 mg/kg and 500 mg/kg and of the standard drug glibenclamide at 0.6 mg/kg body weight. Antidiabetic activity was evaluated in a chronic study model of STZ-induced diabetes in rats followed by blood glucose estimation. The chronic study model was used for further studies to evaluate the effect of AEOBS at 250 mg/kg and 500 mg/kg and of the standard drug on body weight; on alterations in biochemical parameters including AST, ALT, ALP, total bilirubin and total protein; on alterations in serum electrolytes such as Na+, K+, Cl- and HCO3-; and on haematological indices such as red blood cells (RBC), white blood cells (WBC), hemoglobin (Hb), lymphocytes, neutrophils, eosinophils, monocytes and basophils. AEOBS significantly reduced the blood glucose level of diabetic rats at both doses. Body weight was also improved significantly. Similarly, the levels of biochemical parameters, serum electrolytes, and haematological indices were significantly ameliorated at both doses of AEOBS. The histopathological results revealed reconstitution of pancreatic islets towards normal cellular architecture in rats treated with AEOBS. The results illustrate that AEOBS has notable antidiabetic potential in STZ-induced diabetes in rats and can be extensively used for the treatment of type II diabetes mellitus and its associated complications, including anaemia, diabetic nephropathy, liver dysfunction, and immunosuppression. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  7. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Abstract Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  8. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  9. The effect of ISM absorption on stellar activity measurements and its relevance for exoplanet studies

    NASA Astrophysics Data System (ADS)

    Fossati, L.; Marcelja, S. E.; Staab, D.; Cubillos, P. E.; France, K.; Haswell, C. A.; Ingrassia, S.; Jenkins, J. S.; Koskinen, T.; Lanza, A. F.; Redfield, S.; Youngblood, A.; Pelzmann, G.

    2017-05-01

    Past ultraviolet and optical observations of stars hosting close-in Jupiter-mass planets have shown that some of these stars present an anomalously low chromospheric activity, significantly below the basal level. For the hot Jupiter planet host WASP-13, observations have shown that the apparent lack of activity is possibly caused by absorption from the intervening interstellar medium (ISM). Inspired by this result, we study the effect of ISM absorption on activity measurements (S and log R 'HK indices) for main-sequence late-type stars. To this end, we employ synthetic stellar photospheric spectra combined with varying amounts of chromospheric emission and ISM absorption. We present the effect of ISM absorption on activity measurements by varying several instrumental (spectral resolution), stellar (projected rotational velocity, effective temperature, and chromospheric emission flux), and ISM parameters (relative velocity between stellar and ISM Ca II lines, broadening b-parameter, and Ca II column density). We find that for relative velocities between the stellar and ISM lines smaller than 30-40 km s-1 and for ISM Ca II column densities log NCaII ⪆ 12, the ISM absorption has a significant influence on activity measurements. Direct measurements and three dimensional maps of the Galactic ISM absorption indicate that an ISM Ca II column density of log NCaII = 12 is typically reached by a distance of about 100 pc along most sight lines. In particular, for a Sun-like star lying at a distance greater than 100 pc, we expect a depression (bias) in the log R'HK value larger than 0.05-0.1 dex, about the same size as the typical measurement and calibration uncertainties on this parameter. This work shows that the bias introduced by ISM absorption must always be considered when measuring activity for stars lying beyond 100 pc. We also consider the effect of multiple ISM absorption components. We discuss the relevance of this result for exoplanet studies and revise the latest results on stellar activity versus planet surface gravity correlation. We finally describe methods with which it would be possible to account for ISM absorption in activity measurements and provide a code to roughly estimate the magnitude of the bias. Correcting for the ISM absorption bias may allow one to identify the origin of the anomaly in the activity measured for some planet-hosting stars.

  10. Measurement of ocean water optical properties and seafloor reflectance with scanning hydrographic operational airborne lidar survey (SHOALS): II. Practical results and comparison with independent data

    NASA Astrophysics Data System (ADS)

    Tuell, Grady H.; Feygels, Viktor; Kopilevich, Yuri; Weidemann, Alan D.; Cunningham, A. Grant; Mani, Reza; Podoba, Vladimir; Ramnath, Vinod; Park, J. Y.; Aitken, Jen

    2005-08-01

    Estimation of water column optical properties and seafloor reflectance (532 nm) is demonstrated using recent SHOALS data collected at Fort Lauderdale, Florida (November, 2003). To facilitate this work, the first radiometric calibrations of SHOALS were performed. These calibrations permit a direct normalization of recorded data by converting digitized counts at the output of the SHOALS receivers to input optical power. For estimation of environmental parameters, this normalization is required to compensate for the logarithmic compression of the signals and the finite frequency of the bandpass of the detector/amplifier. After normalization, the SHOALS data are used to estimate the backscattering coefficient, the beam attenuation coefficient, the single-scattering albedo, the VSF asymmetry, and seafloor reflectance by fitting simulated waveforms to actual waveforms measured by the SHOALS APD and PMT receivers. The resulting estimates of these water column optical properties are compared to in-situ measurements acquired at the time of the airborne data collections. Images of green laser bottom reflectance are also presented and compared to reflectance estimated from simultaneously acquired passive spectral data.

  11. Modeling As(III) oxidation and removal with iron electrocoagulation in groundwater.

    PubMed

    Li, Lei; van Genuchten, Case M; Addy, Susan E A; Yao, Juanjuan; Gao, Naiyun; Gadgil, Ashok J

    2012-11-06

    Understanding the chemical kinetics of arsenic during electrocoagulation (EC) treatment is essential for a deeper understanding of arsenic removal using EC under a variety of operating conditions and solution compositions. We describe a highly constrained, simple chemical dynamic model of As(III) oxidation and As(III,V), Si, and P sorption for the EC system using model parameters extracted from some of our experimental results and previous studies. Our model predictions agree well with both data extracted from previous studies and our observed experimental data over a broad range of operating conditions (charge dosage rate) and solution chemistry (pH, co-occurring ions) without free model parameters. Our model provides insights into why higher pH and lower charge dosage rate (Coulombs/L/min) facilitate As(III) removal by EC and sheds light on the debate in the recent published literature regarding the mechanism of As(III) oxidation during EC. Our model also provides practically useful estimates of the minimum amount of iron required to remove 500 μg/L As(III) to <50 μg/L. Parameters measured in this work include the ratio of rate constants for the Fe(II) and As(III) reactions with Fe(IV) in synthetic groundwater (k1/k2 = 1.07) and the apparent rate constant of Fe(II) oxidation with dissolved oxygen at pH 7 (kapp = 10^0.22 M^-1 s^-1).
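
    As a quick order-of-magnitude use of the reported apparent rate constant, treating dissolved oxygen as constant turns Fe(II) oxidation into a pseudo-first-order decay; the dissolved-oxygen concentration assumed below is an illustrative air-saturation value, and the full competition model of the paper (partitioning of Fe(IV) between Fe(II) and As(III)) is not reproduced here.

```python
import numpy as np

k_app = 10 ** 0.22   # M^-1 s^-1, apparent Fe(II) + O2 rate constant at pH 7 (reported above)
O2 = 2.5e-4          # M, assumed air-saturated dissolved oxygen (~8 mg/L), illustrative

# Pseudo-first-order decay of Fe(II) at constant dissolved O2:
#   d[Fe(II)]/dt = -k_app * [O2] * [Fe(II)]
k1 = k_app * O2      # effective first-order rate constant, 1/s
half_life_min = np.log(2) / k1 / 60
print(f"pseudo-first-order k = {k1:.2e} 1/s, Fe(II) half-life ~ {half_life_min:.0f} min")
```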

  12. Gravity-darkening exponents in semi-detached binary systems from their photometric observations. II.

    NASA Astrophysics Data System (ADS)

    Djurašević, G.; Rovithis-Livaniou, H.; Rovithis, P.; Georgiades, N.; Erkapić, S.; Pavlović, R.

    2006-01-01

    This second part of our study concerning gravity-darkening presents the results for 8 semi-detached close binary systems. From the light-curve analysis of these systems, the gravity-darkening exponent (GDE) for the Roche-lobe-filling components has been empirically derived. The method used for the light-curve analysis is based on Roche geometry and enables simultaneous estimation of the systems' parameters and the gravity-darkening exponents. Our analysis is restricted to the black-body approximation, which can influence the parameter estimation to some degree. The results of our analysis are: 1) For four of the systems, namely TX UMa, β Per, AW Cam and TW Cas, there is very good agreement between the empirically estimated and theoretically predicted values for purely convective envelopes. 2) For the AI Dra system, the estimated value of the gravity-darkening exponent is greater, and for UX Her, TW And and XZ Pup lower, than the corresponding theoretical predictions, but for all of these systems the obtained values of the gravity-darkening exponent are quite close to the theoretically expected values. 3) Our analysis has generally shown that, with the correction of the previously estimated mass ratios of the components in some of the analysed systems, the theoretical predictions of the gravity-darkening exponents for stars with convective envelopes are highly reliable. The anomalous values of the GDE found in some earlier studies of these systems can be considered a consequence of the inappropriate method used to estimate the GDE. 4) The empirical estimations of GDE given in Paper I and in the present study indicate that in light-curve analysis one can apply the recent theoretical predictions of GDE with high confidence for stars with both convective and radiative envelopes.

  13. Ensemble-Based Parameter Estimation in a Coupled General Circulation Model

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-09-10

    Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
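
    The principle behind such ensemble-based parameter estimation is state augmentation: each ensemble member carries its own parameter value, and the ensemble covariance between the parameter and the observed state supplies the update. The toy sketch below applies a perturbed-observation ensemble Kalman filter to a scalar AR(1) model; the model, noise levels and ensemble size are stand-ins for the paper's coupled data assimilation system, and practical applications additionally need covariance inflation and localization, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ens, n_steps = 100, 200
a_true, obs_err, model_err = 0.9, 0.1, 0.3

x_true = 0.0
a_ens = rng.normal(0.5, 0.2, n_ens)            # biased initial parameter ensemble
x_ens = rng.normal(0.0, 1.0, n_ens)

for _ in range(n_steps):
    # Truth run and a noisy observation of the state
    x_true = a_true * x_true + rng.normal(0, model_err)
    y = x_true + rng.normal(0, obs_err)

    # Forecast: each member propagates with its own parameter value
    x_ens = a_ens * x_ens + rng.normal(0, model_err, n_ens)

    # Analysis: perturbed-observation EnKF update of the augmented state (x, a)
    innov = y + rng.normal(0, obs_err, n_ens) - x_ens
    var_x = np.var(x_ens, ddof=1)
    gain_x = var_x / (var_x + obs_err**2)
    gain_a = np.cov(a_ens, x_ens, ddof=1)[0, 1] / (var_x + obs_err**2)
    x_ens = x_ens + gain_x * innov
    a_ens = a_ens + gain_a * innov

print(f"estimated parameter: {a_ens.mean():.2f} (should move toward the truth, {a_true})")
```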

  14. Oil-generation kinetics for organic facies with Type-II and -IIS kerogen in the Menilite Shales of the Polish Carpathians

    USGS Publications Warehouse

    Lewan, M.D.; Kotarba, M.J.; Curtis, John B.; Wieclaw, D.; Kosakowski, P.

    2006-01-01

    The Menilite Shales (Oligocene) of the Polish Carpathians are the source of low-sulfur oils in the thrust belt and some high-sulfur oils in the Carpathian Foredeep. These oil occurrences indicate that the high-sulfur oils in the Foredeep were generated and expelled before major thrusting and the low-sulfur oils in the thrust belt were generated and expelled during or after major thrusting. Two distinct organic facies have been observed in the Menilite Shales. One organic facies has a high clastic sediment input and contains Type-II kerogen. The other organic facies has a lower clastic sediment input and contains Type-IIS kerogen. Representative samples of both organic facies were used to determine kinetic parameters for immiscible oil generation by isothermal hydrous pyrolysis and S2 generation by non-isothermal open-system pyrolysis. The derived kinetic parameters showed that timing of S2 generation was not as different between the Type-IIS and -II kerogen based on open-system pyrolysis as compared with immiscible oil generation based on hydrous pyrolysis. Applying these kinetic parameters to a burial history in the Skole unit showed that some expelled oil would have been generated from the organic facies with Type-IIS kerogen before major thrusting with the hydrous-pyrolysis kinetic parameters but not with the open-system pyrolysis kinetic parameters. The inability of open-system pyrolysis to determine earlier petroleum generation from Type-IIS kerogen is attributed to the large polar-rich bitumen component in S2 generation, rapid loss of sulfur free-radical initiators in the open system, and diminished radical selectivity and rate constant differences at higher temperatures. Hydrous-pyrolysis kinetic parameters are determined in the presence of water at lower temperatures in a closed system, which allows differentiation of bitumen and oil generation, interaction of free-radical initiators, greater radical selectivity, and more distinguishable rate constants as would occur during natural maturation. Kinetic parameters derived from hydrous pyrolysis show good correlations with one another (compensation effect) and kerogen organic-sulfur contents. These correlations allow for indirect determination of hydrous-pyrolysis kinetic parameters on the basis of the organic-sulfur mole fraction of an immature Type-II or -IIS kerogen. © 2006 Elsevier Inc. All rights reserved.
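
    Kinetic parameters of this kind (a frequency factor A and an activation energy Ea) are applied to a burial history through first-order Arrhenius kinetics, with the transformation ratio TR = 1 − exp(−∫ A e^(−Ea/RT(t)) dt). The sketch below runs that integral for a constant geological heating rate; the A, Ea and heating-rate values are generic illustrations and are not the Menilite parameters reported in the paper.

```python
import numpy as np

R = 8.314               # J/mol/K
A = 1.0e14              # 1/s, illustrative frequency factor (not from the paper)
Ea = 220e3              # J/mol, illustrative activation energy (not from the paper)

# Thermal history: constant geological heating rate of 3 C/Myr from 20 C to 200 C
myr = 3.1557e13                        # seconds per million years
t = np.linspace(0, 60, 6000) * myr     # 60 Myr of burial
T = 293.15 + 3.0 * (t / myr)           # temperature in K

# First-order kinetics: TR = 1 - exp(-integral of A*exp(-Ea/RT) dt)
k = A * np.exp(-Ea / (R * T))
TR = 1.0 - np.exp(-np.cumsum(k) * (t[1] - t[0]))
onset_c = np.interp(0.1, TR, T) - 273.15
print(f"10% conversion reached near {onset_c:.0f} C for this illustrative (A, Ea) pair")
```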

  15. Two-parameter model of total solar irradiance variation over the solar cycle

    NASA Technical Reports Server (NTRS)

    Pap, Judit M.; Willson, Richard C.; Donnelly, Richard F.

    1991-01-01

    Total solar irradiance measured by the SMM/ACRIM radiometer is modelled from the Photometric Sunspot Index and the Mg II core-to-wing ratio with multiple regression analysis. Considering that the formation of the Mg II line is very similar to that of the Ca II K line, the Mg II core-to-wing ratio, measured by the Nimbus-7 and NOAA9 satellites, is used as a proxy for the bright magnetic elements, including faculae and the magnetic network. It is shown that the relationship between the variations in total solar irradiance and the above solar activity indices depends upon the phase of the solar cycle. Thus, a better fit between total irradiance and its model estimates can be achieved if the irradiance models are calculated for the declining portion and minimum of solar cycle 21, and the rising portion of solar cycle 22, respectively. There is an indication that during the rising portion of solar cycle 22, similar to the maximum time of solar cycle 21, the modelled total irradiance values underestimate the measured values. This suggests that there is an asymmetry in the long-term total irradiance variability.
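
    In code, the two-parameter fit described above is an ordinary multiple linear regression of the irradiance time series on the two proxies. The sketch uses synthetic stand-ins for the ACRIM irradiance, the Photometric Sunspot Index and the Mg II core-to-wing ratio; the numbers are invented and carry no physical calibration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Synthetic proxies standing in for PSI (sunspot darkening) and the Mg II index
psi = rng.gamma(2.0, 0.5, n)                    # non-negative, arbitrary units
mg2 = 0.27 + 0.01 * rng.random(n)               # dimensionless core-to-wing ratio
tsi = 1365.5 - 1.0 * psi + 300.0 * (mg2 - 0.27) + 0.05 * rng.standard_normal(n)

# Multiple linear regression: TSI ~ a0 + a1*PSI + a2*MgII
X = np.column_stack([np.ones(n), psi, mg2])
coef, *_ = np.linalg.lstsq(X, tsi, rcond=None)
tsi_model = X @ coef

print("a0, a1, a2 =", coef)
print("rms residual:", np.sqrt(np.mean((tsi - tsi_model) ** 2)))
```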

  16. A Liver-centric Multiscale Modeling Framework for Xenobiotics ...

    EPA Pesticide Factsheets

    We describe a multi-scale framework for modeling acetaminophen-induced liver toxicity. Acetaminophen is a widely used analgesic. Overdose of acetaminophen can result in liver injury via its biotransformation into a toxic product, which further induces massive necrosis. Our study focuses on developing a multi-scale computational model to characterize both phase I and phase II metabolism of acetaminophen, by bridging Physiologically Based Pharmacokinetic (PBPK) modeling at the whole body level, cell movement and blood flow at the tissue level and cell signaling and drug metabolism at the sub-cellular level. To validate the model, we estimated our model parameters by fitting serum concentrations of acetaminophen and its glucuronide and sulfate metabolites to experiments, and carried out sensitivity analysis on 35 parameters selected from three modules. This multiscale model bridges the CompuCell3D tool used by the Virtual Tissue project with the httk tool developed by the Rapid Exposure and Dosimetry project.

  17. A Test of General Relativity with MESSENGER Mission Data

    NASA Astrophysics Data System (ADS)

    Genova, A.; Mazarico, E.; Goossens, S. J.; Lemoine, F. G.; Neumann, G. A.; Nicholas, J. B.; Rowlands, D. D.; Smith, D. E.; Zuber, M. T.; Solomon, S. C.

    2016-12-01

    The MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft initiated collection of scientific data from the innermost planet during its first flyby of Mercury in January 2008. After two additional Mercury flybys, MESSENGER was inserted into orbit around Mercury on 18 March 2011 and operated for more than four Earth years through 30 April 2015. Data acquired during the flyby and orbital phases have provided crucial information on the formation and evolution of Mercury. The Mercury Laser Altimeter (MLA) and the radio science system, for example, obtained geodetic observations of the topography, gravity field, orientation, and tides of Mercury, which helped constrain its surface and deep interior structure. X-band radio tracking data collected by the NASA Deep Space Network (DSN) allowed the determination of Mercury's gravity field to spherical harmonic degree and order 100, as well as refinement of the planet's obliquity and estimation of the tidal Love number k2. These geophysical parameters are derived from the range-rate observables that measure precisely the motion of the spacecraft in orbit around the planet. However, the DSN stations acquired two other kinds of radio tracking data, range and delta-differential one-way ranging, which also provided precise measurements of Mercury's ephemeris. The proximity of Mercury's orbit to the Sun leads to a significant perihelion precession, which was used by Einstein as confirmation of general relativity (GR) because of its inconsistency with the effects predicted from classical Newtonian theory. MESSENGER data allow the estimation of the GR parameterized post-Newtonian (PPN) coefficients γ and β. Furthermore, determination of Mercury's orbit also allows estimation of the gravitational parameter (GM) and the flattening (J2) of the Sun. We modified our orbit determination software, NASA GSFC's GEODYN II, to enable simultaneous orbit integration of both MESSENGER and the planet Mercury. The combined estimation of both orbits leads to a more accurate estimation of Mercury's gravity field, orientation, and tides. Results for these geophysical parameters, GM and J2 for the Sun, and the PPN parameters constitute updates for all of these quantities.
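
    For reference, the quantity that makes Mercury's orbit a sensitive test of general relativity is the secular perihelion advance per orbit, which in the parameterized post-Newtonian framework takes the standard textbook form (not quoted from the abstract):

        \Delta\varpi = \frac{2 + 2\gamma - \beta}{3}\,\frac{6\pi G M_\odot}{a\,(1 - e^2)\,c^2},

    where a and e are the semimajor axis and eccentricity of Mercury's orbit; general relativity corresponds to γ = β = 1, which reproduces the well-known 43 arcseconds per century for Mercury.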

  18. An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II. A Posteriori Error Estimates and Adaptivity.

    DTIC Science & Technology

    1983-03-01

    An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II: A Posteriori Error Estimates and Adaptivity, by W. G. Szymczak and I. Babuška. (Only the report title and authors are recoverable from the scanned front matter.)

  19. The Detection Rate of Early UV Emission from Supernovae: A Dedicated Galex/PTF Survey and Calibrated Theoretical Estimates

    NASA Astrophysics Data System (ADS)

    Ganot, Noam; Gal-Yam, Avishay; Ofek, Eran. O.; Sagiv, Ilan; Waxman, Eli; Lapid, Ofer; Kulkarni, Shrinivas R.; Ben-Ami, Sagi; Kasliwal, Mansi M.; The ULTRASAT Science Team; Chelouche, Doron; Rafter, Stephen; Behar, Ehud; Laor, Ari; Poznanski, Dovi; Nakar, Ehud; Maoz, Dan; Trakhtenbrot, Benny; WTTH Consortium, The; Neill, James D.; Barlow, Thomas A.; Martin, Christofer D.; Gezari, Suvi; the GALEX Science Team; Arcavi, Iair; Bloom, Joshua S.; Nugent, Peter E.; Sullivan, Mark; Palomar Transient Factory, The

    2016-03-01

    The radius and surface composition of an exploding massive star, as well as the explosion energy per unit mass, can be measured using early UV observations of core-collapse supernovae (SNe). We present the first results from a simultaneous GALEX/PTF search for early ultraviolet (UV) emission from SNe. Six SNe II and one Type II superluminous SN (SLSN-II) are clearly detected in the GALEX near-UV (NUV) data. We compare our detection rate with theoretical estimates based on early, shock-cooling UV light curves calculated from models that fit existing Swift and GALEX observations well, combined with volumetric SN rates. We find that our observations are in good agreement with calculated rates assuming that red supergiants (RSGs) explode with fiducial radii of 500 R⊙, explosion energies of 10^51 erg, and ejecta masses of 10 M⊙. Exploding blue supergiants and Wolf-Rayet stars are poorly constrained. We describe how such observations can be used to derive the progenitor radius, surface composition, and explosion energy per unit mass of such SN events, and we demonstrate why UV observations are critical for such measurements. We use the fiducial RSG parameters to estimate the detection rate of SNe during the shock-cooling phase (<1 day after explosion) for several ground-based surveys (PTF, ZTF, and LSST). We show that the proposed wide-field UV explorer ULTRASAT mission is expected to find >85 SNe per year (˜0.5 SN per deg²), independent of host galaxy extinction, down to an NUV detection limit of 21.5 mag AB. Our pilot GALEX/PTF project thus convincingly demonstrates that a dedicated, systematic SN survey at the NUV band is a compelling method to study how massive stars end their life.

  20. Radiance And Irradiance Of The Solar HeII 304 Emission Line

    NASA Astrophysics Data System (ADS)

    McMullin, D. R.; Floyd, L. E.; Auchère, F.

    2013-12-01

    For over 17 years, EIT and the later EUVI instruments aboard SoHO and STEREO, respectively, have provided a time series of radiant images in the HeII 30.4 nm transition-region line and three coronal emission lines (FeIX/X, FeXII, and FeXV). While the EIT measurements were gathered from positions approximately on the Earth-Sun axis, EUVI images have been gathered at angles ranging to more than ±90 degrees in solar longitude relative to the Earth-Sun axis. Using a Differential Emission Measure (DEM) model, these measurements provide a basis for estimates of the spectral irradiance for the solar spectrum of wavelengths between 15 and 50 nm at any position in the heliosphere. In particular, we generate the HeII 30.4 nm spectral irradiance in all directions in the heliosphere and examine its time series in selected directions. Such spectra are utilized for two distinct purposes. First, the photoionization rate of neutral He at each position is calculated. Neutral He is of interest because it traverses the heliopause relatively undisturbed and therefore provides a measure of isotopic parameters beyond the heliosphere. Second, we use these to generate a time series of estimates of the solar spectral luminosity in the HeII 30.4 nm emission line extending from the recent solar cycle 23 minimum into the current weak solar cycle 24, enabling an estimate of its variation over the solar cycle. Because this 30.4 nm spectral luminosity is the sum of such radiation in all directions, its time series is devoid of the 27-day solar rotation periodicity present in indices typically used to represent solar activity.

  1. Modelling of clay diagenesis using a combined approach of crystal chemistry and thermochemistry: a case study of smectite illitization.

    NASA Astrophysics Data System (ADS)

    Geloni, Claudio; Previde Massara, Elisabetta; Di Paola, Eleonora; Ortenzi, Andrea; Gherardi, Fabrizio; Blanc, Philippe

    2017-04-01

    Diagenetic transformations occurring in clayey and arenaceous sediments are investigated in a number of hydrocarbon reservoirs with an integrated approach that combines mineralogical analysis, crystal chemistry, estimation of thermochemical parameters of clay minerals, and geochemical modelling. Because of the extremely variable crystal chemistry of clays, especially in the smectite - illite compositional range, the estimation of thermochemical parameters of site-specific clay-rich rocks is crucial to investigate water-rock equilibria and to predict mineralogical evolutionary patterns at the clay-sandstone interface. The task of estimating the thermochemical properties of clay minerals and predicting diagenetic reactions in natural reservoirs is accomplished through the implementation of an informatized procedure (IP) that consists of: (i) laboratory analysis of smectite, illite and mixed layers (I/S) for the determination of their textural characteristics and chemical composition; (ii) estimation of the thermodynamic and structural parameters (enthalpy, entropy, and free energy of formation, thermal capacity, molar volume, molar weight) with an MS Excel tool (XLS) specifically developed at the French Bureau of Geological and Mining Researches (BRGM); (iii) usage of the SUPCRT (Johnson et al., 1992) software package (hereinafter, SSP) to derive log K values to be incorporated in thermodynamic databases of the standard geochemical codes; (iv) check of the consistency of the stability domains calculated with these log K values with relevant predominance diagrams; (v) final application of geochemical and reactive transport models to investigate the reactive mechanisms under different thermal conditions (40-150°C). All the simulations consider pore waters having roughly the same chemical composition as reservoir pore waters, and are performed with The Geochemist Workbench (Bethke and Yeakel, 2015), PHREEQC (Parkhurst, 1999) and TOUGHREACT (Xu, 2006). The overall procedure benefits from: (i) (minor) improvements of the I/O structure of the SSP; (ii) the development of a suite of python scripts to automate the steps needed to augment the thermodynamic database by integrating the external information provided by potential users with the XLS tool and the SSP; (iii) the creation of specific outputs to allow for more convenient handling and inspection of computed parameters of the thermodynamic database. A case study focused on non-isothermal smectite-illite transformation is presented to show the capability of our numerical models to account for clay compaction under 1D geometry conditions. This model considers fluid flow driven by the compaction of a clay layer, and chemistry-fluid flow mutual feedback with the underlying sandstone during the advancement of the diagenesis. Due to this complex interaction, as a result of the smectite-illite transformation in the clays, significant quartz cementation affects the sandstone adjacent to the compacting clay.
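
    Step (iii) of the procedure, deriving log K values from the estimated thermodynamic properties, ultimately rests on the standard relation between the Gibbs free energy of reaction and the equilibrium constant (a textbook relation, independent of the SUPCRT implementation details):

        \log K(T) = -\frac{\Delta_r G^{\circ}(T)}{R\,T\,\ln 10},

    where R is the gas constant and Δ_rG°(T) is assembled from the estimated free energies of formation of the clay end-members and aqueous species at the temperature and pressure of interest.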

  2. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated compared with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. Results obtained demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) The Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which indicates better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). Flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
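
    For readers unfamiliar with GLUE, the sketch below shows the generic step that any sampler (LHS or ɛ-NSGAII) feeds into: parameter sets whose informal likelihood exceeds a threshold are kept as behavioral and used to weight the ensemble of simulated hydrographs. The Nash-Sutcliffe likelihood, the threshold value, and the model interface are illustrative assumptions, not the configuration used in the paper.

      import numpy as np

      def nse(sim, obs):
          """Nash-Sutcliffe efficiency, used here as an informal GLUE likelihood."""
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

      def glue_bounds(param_sets, run_model, obs, threshold=0.6, quantiles=(0.05, 0.95)):
          """Keep behavioral parameter sets and return likelihood-weighted
          uncertainty bounds of the simulated flows at every time step."""
          sims = np.array([run_model(p) for p in param_sets])
          like = np.array([nse(s, obs) for s in sims])
          keep = like > threshold                       # behavioral sets
          w = like[keep] / like[keep].sum()             # normalized weights
          lower, upper = [], []
          for t in range(sims.shape[1]):                # weighted quantiles per time step
              order = np.argsort(sims[keep, t])
              cdf = np.cumsum(w[order])
              lower.append(sims[keep, t][order][np.searchsorted(cdf, quantiles[0])])
              upper.append(sims[keep, t][order][np.searchsorted(cdf, quantiles[1])])
          return np.array(lower), np.array(upper), keep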

  3. Estimation of Stability and Control Derivatives of an F-15

    NASA Technical Reports Server (NTRS)

    Smith, Mark; Moes, Tim

    2006-01-01

    A technique for real-time estimation of stability and control derivatives (derivatives of moment coefficients with respect to control-surface deflection angles) was used to support a flight demonstration of a concept of an indirect-adaptive intelligent flight control system (IFCS). Traditionally, parameter identification, including estimation of stability and control derivatives, is done post-flight. However, for the indirect-adaptive IFCS concept, parameter identification is required during flight so that the system can modify control laws for a damaged aircraft. The flight demonstration was carried out on a highly modified F-15 airplane (see Figure 1). The main objective was to estimate the stability and control derivatives of the airplane in nearly real time. A secondary goal was to develop a system to automatically assess the quality of the results, so as to be able to tell a learning neural network which data to use. Parameter estimation was performed by use of Fourier-transform regression (FTR), a technique developed at NASA Langley Research Center. FTR is an equation-error technique that operates in the frequency domain. Data are put into the frequency domain by use of a recursive Fourier transform for a discrete frequency set. This calculation simplifies many subsequent calculations, removes biases, and automatically filters out data beyond the chosen frequency range. FTR as applied here was tailored to work with pilot inputs, which produce correlated surface positions that prevent accurate parameter estimates, by replacing half the derivatives with predicted values. FTR was also set up to work only on a recent window of data, to accommodate changes in flight condition. A system of confidence measures was developed to identify high-quality parameter estimates that a learning neural network could use. This system judged the estimates primarily on the basis of their estimated variances and of the level of aircraft response. The resulting FTR system was implemented in the Simulink software system and auto-coded in the C programming language for use on the Airborne Research Test System (ARTS II) computer installed in the F-15 airplane. The Simulink model was also used in a control room that utilizes the Ring Buffered Network Bus hardware and software, making it possible to evaluate test points during flights. In-flight parameter estimation was done for piloted and automated maneuvers, primarily at three test conditions. Figure 2 shows results for pitching moment due to symmetric stabilator actuations for a series of three pitch doublet maneuvers (in a doublet maneuver, a command to change attitude in a given direction by a given amount is followed immediately by a command to change attitude in the opposite direction by the same amount). A time window of 5 seconds was used. The portions of the curves shown in red are those that passed the confidence tests. The technique showed good convergence for most derivatives for both kinds of maneuvers - typically within a few seconds. The confidence tests were marginally successful, and it would be necessary to refine them for use in an IFCS.
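
    The core of the FTR technique described above is a recursive discrete Fourier transform over a fixed frequency set, followed by a frequency-domain least-squares (equation-error) fit. The sketch below illustrates that idea in generic form; the frequency set, signal names, and regressor layout are placeholders and are not taken from the flight software.

      import numpy as np

      class RecursiveDFT:
          """Accumulate the Fourier transform of a signal at fixed frequencies, one sample at a time."""
          def __init__(self, freqs_hz, dt):
              self.w = 2.0 * np.pi * np.asarray(freqs_hz)
              self.dt = dt
              self.n = 0
              self.X = np.zeros(len(self.w), dtype=complex)

          def update(self, x):
              t = self.n * self.dt
              self.X += x * np.exp(-1j * self.w * t) * self.dt   # running Fourier sums
              self.n += 1
              return self.X

      def equation_error_fit(Y, regressors):
          """Frequency-domain least squares: Y(jw) ~ regressors(jw) @ theta, with real theta."""
          A = np.vstack([regressors.real, regressors.imag])
          b = np.concatenate([Y.real, Y.imag])
          theta, *_ = np.linalg.lstsq(A, b, rcond=None)
          return theta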

  4. Inference of directional selection and mutation parameters assuming equilibrium.

    PubMed

    Vogl, Claus; Bergman, Juraj

    2015-12-01

    In a classical study, Wright (1931) proposed a model for the evolution of a biallelic locus under the influence of mutation, directional selection and drift. He derived the equilibrium distribution of the allelic proportion conditional on the scaled mutation rate, the mutation bias and the scaled strength of directional selection. The equilibrium distribution can be used for inference of these parameters with genome-wide datasets of "site frequency spectra" (SFS). Assuming that the scaled mutation rate is low, Wright's model can be approximated by a boundary-mutation model, where mutations are introduced into the population exclusively from sites fixed for the preferred or unpreferred allelic states. With the boundary-mutation model, inference can be partitioned: (i) the shape of the SFS distribution within the polymorphic region is determined by random drift and directional selection, but not by the mutation parameters, such that inference of the selection parameter relies exclusively on the polymorphic sites in the SFS; (ii) the mutation parameters can be inferred from the amount of polymorphic and monomorphic preferred and unpreferred alleles, conditional on the selection parameter. Herein, we derive maximum likelihood estimators for the mutation and selection parameters in equilibrium and apply the method to simulated SFS data as well as empirical data from a Madagascar population of Drosophila simulans. Copyright © 2015 Elsevier Inc. All rights reserved.
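
    For orientation, the equilibrium density underlying this inference is Wright's stationary distribution of the allelic proportion x. In one common parameterization (scaled mutation rate θ, mutation bias β, scaled directional selection γ; this is a standard textbook form rather than the notation of the paper) it reads:

        \phi(x) \propto e^{\gamma x}\, x^{\theta\beta - 1}\, (1 - x)^{\theta(1-\beta) - 1}, \qquad 0 < x < 1.

    The boundary-mutation approximation used above corresponds to the small-θ limit of this density, which splits the likelihood into contributions from polymorphic and monomorphic sites.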

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Liu, Z.; Zhang, S.

    Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.

  6. Study of Parameters and Methods of the LL-IV Distributed Hydrological Model in DMIP2

    NASA Astrophysics Data System (ADS)

    Li, L.; Wu, J.; Wang, X.; Yang, C.; Zhao, Y.; Zhou, H.

    2008-05-01

    The physics-based distributed hydrological model is considered an important step in the development from traditional empirical hydrology to physically based hydrology. The Hydrology Laboratory of the NOAA National Weather Service proposed the first and second phases of the Distributed Model Intercomparison Project (DMIP), a landmark undertaking. The LL distributed hydrological model has evolved to its fourth generation since it was established in 1997 for the Fengman-I reservoir district (11000 km²). The LL-I distributed hydrological model originated in flood-control applications at Fengman-I in China. LL-II was developed with DMIP-I support and combines GIS, RS, GPS, and radar rainfall measurement. LL-III was established within the project Applications of the LL Distributed Model on Water Resources, supported by the 973 Program of the Ministry of Science and Technology of the People's Republic of China. LL-IV was developed to address China's water problems. For the Blue River and Baron Fork River basins of DMIP-II, the convection-diffusion equation for unsaturated and saturated seepage was derived from soil water dynamics and the continuity equation. In view of the technical characteristics of the model, the advantages of using the convection-diffusion equation for overall flow routing include a longer predictable lead time, reduced memory requirements, fast computation, and clear physical concepts. The determination of the hydrological model parameters is the key step, covering both empirical coefficients and physical parameters. Empirical, inversion, and optimization methods are available to determine the model parameters, each with advantages and disadvantages. This paper briefly introduces the LL-IV distributed hydrological model equations, and presents in detail the parameter-determination methods and the simulation results for the Blue River and Baron Fork River basins in DMIP-II. The soil moisture diffusion coefficient and the hydraulic conductivity coefficient appear throughout the LL-IV runoff generation and hillslope routing model and are determined mainly by empirical formulas. Optimization methods are used to calibrate the two evaporation capacity parameters (coefficients for bare land and vegetated land), the two interception parameters, and the wave velocities of overland flow, interflow, and groundwater. The approach for determining the wave velocity and diffusion coefficient of river network routing is: (1) estimate roughness based mainly on digital information such as land use and soil texture; (2) establish the empirical formula. An alternative method is convection-diffusion numerical inversion.
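
    The convection-diffusion routing referred to above is, in its generic one-dimensional (diffusive wave) form, the textbook equation below; the LL-IV coefficients and boundary treatment are not reproduced here:

        \frac{\partial Q}{\partial t} + c\,\frac{\partial Q}{\partial x} = D\,\frac{\partial^2 Q}{\partial x^2},

    where Q is discharge, c is the wave celerity, and D is the diffusion coefficient; the wave velocities and diffusion coefficients discussed in the parameter-determination steps play the roles of c and D.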

  7. Intensity limits of the PSI Injector II cyclotron

    NASA Astrophysics Data System (ADS)

    Kolano, A.; Adelmann, A.; Barlow, R.; Baumgarten, C.

    2018-03-01

    We investigate limits on the current of the PSI Injector II high intensity separate-sector isochronous cyclotron, in its present configuration and after a proposed upgrade. Accelerator Driven Subcritical Reactors, neutron and neutrino experiments, and medical isotope production all benefit from increases in current, even at the ∼ 10% level: the PSI cyclotrons provide relevant experience. As space charge dominates at low beam energy, the injector is critical. Understanding space charge effects and halo formation through detailed numerical modelling gives clues on how to maximise the extracted current. Simulation of a space-charge dominated low energy high intensity (9.5 mA DC) machine, with a complex collimator setup in the central region shaping the bunch, is not trivial. We use the OPAL code, a tool for charged-particle optics calculations in large accelerator structures and beam lines, including 3D space charge. We have a precise model of the present (production) Injector II, operating at 2.2 mA current. A simple model of the proposed future (upgraded) configuration of the cyclotron is also investigated. We estimate intensity limits based on the developed models, supported by fitted scaling laws and measurements. We have been able to perform a more detailed analysis of the bunch parameters and halo development than any previous study. Optimisation techniques enable better matching of the simulation set-up with Injector II parameters and measurements. We show that in the production configuration the beam current scales with the third power of the beam size. However, at higher intensities, fourth-power scaling is a better fit, setting the limit at approximately 3 mA. Currents of over 5 mA, higher than have been achieved to date, can be produced if the collimation scheme is adjusted.

  8. Immunogenicity of Anti-HLA Antibodies in Pancreas and Islet Transplantation.

    PubMed

    Chaigne, Benjamin; Geneugelijk, Kirsten; Bédat, Benoît; Ahmed, Mohamed Alibashe; Hönger, Gideon; De Seigneux, Sophie; Demuylder-Mischler, Sandrine; Berney, Thierry; Spierings, Eric; Ferrari-Lacraz, Sylvie; Villard, Jean

    2016-11-01

    The aim of the current study was to characterize the anti-HLA antibodies before and after pancreatic islet or pancreas transplantation. We assessed the risk of anti-donor-specific antibody (DSA) sensitization in a single-center, retrospective clinical study at Geneva University Hospital. Data regarding clinical characteristics, graft outcome, HLA mismatch, donor HLA immunogenicity, and anti-HLA antibody characteristics were collected. Between January 2008 and July 2014, 18 patients received islet transplants, and 26 patients received a pancreas transplant. Eleven out of 18 patients (61.1%) in the islet group and 12 out of 26 patients (46.2%) in the pancreas group had anti-HLA antibodies. Six patients (33.3%) developed DSAs against HLA of the islets, and 10 patients (38.4%) developed DSAs against HLA of the pancreas. Most of the DSAs were at a low level. Several parameters such as gender, number of times cells were transplanted, HLA mismatch, eplet mismatch and PIRCHE-II numbers, rejection, and infection were analyzed. Only the number of PIRCHE-II was associated with the development of anti-HLA class II de novo DSAs. Overall, the development of de novo DSAs did not influence graft survival as estimated by insulin independence. Our results indicated that pretransplant DSAs at low levels do not restrict islet or pancreas transplantation [especially islet transplantation (27.8% vs. 15.4%)]. De novo DSAs do occur at a similar rate in both pancreas and islet transplant recipients (mainly of class II), and the immunogenicity of donor HLA is a parameter that should be taken into consideration. When combined with an immunosuppressive regimen and close follow-up, development of low levels of DSAs was not found to result in reduced graft survival or graft function in the current study.

  9. Right mini-parasternotomy may be a good minimally invasive alternative to full sternotomy for cardiac valve operations-a propensity-adjusted analysis.

    PubMed

    Chiu, K M; Chen, R J; Lin, T Y; Chen, J S; Huang, J H; Huang, C Y; Chu, S H

    2014-03-26

    Limited real-world data existed for the mini-parasternotomy approach with a good sample size in Asian cohorts, and most previous studies were eclipsed by case heterogeneity. The goal of this study was to compare safety and quality outcomes of cardiac non-coronary valve operations by mini-parasternotomy and full sternotomy approaches on a risk-adjusted basis. From our hospital database, we retrieved the cases of non-coronary valve operations from 1 January 2005 to 31 December 2012, including re-do, emergent, and combined procedures. Estimated EuroScore-II and propensity score for choosing mini-parasternotomy were adjusted for in the regression models on hospital mortality, complications (pneumonia, stroke, sepsis, etc.), and quality parameters (length of stay, ICU time, ventilator time, etc.). Non-complicated cases, defined as survival to discharge, ventilator use not over one week, and intensive care unit stay not over two weeks, were used for quality parameters. There were 283 mini-parasternotomy and 177 full sternotomy cases. EuroScore-II differed significantly (medians 2.1 vs. 4.7, p<0.001). Propensity scores for choosing mini-parasternotomy were higher with lower EuroScore-II (OR=0.91 per 1%, p<0.001), aortic regurgitation (OR=2.3, p=0.005), and aortic non-mitral valve disease (OR=3.9, p<0.001). Adjusted for propensity score and EuroScore-II, the mini-parasternotomy group had less pneumonia (OR=0.32, p=0.043), less sepsis (OR=0.31, p=0.045), and shorter non-complicated length of stay (coefficient=7.2 (day), p<0.001) than the full sternotomy group, whereas Kaplan-Meier survival, non-complicated ICU time, non-complicated ventilator time, and 30-day mortality did not differ significantly. The propensity-adjusted analysis demonstrated encouraging safety and quality outcomes for mini-parasternotomy valve operation in carefully selected patients.

  10. Pharmacokinetic analysis of flomoxef in children undergoing cardiopulmonary bypass and modified ultrafiltration.

    PubMed

    Masuda, Zenichi; Kurosaki, Yuji; Ishino, Kozo; Yamauchi, Keita; Sano, Shunji

    2008-04-01

    Cardiopulmonary bypass (CPB) induces changes in the pharmacokinetics of drugs. The purpose of this study was to model the pharmacokinetics of flomoxef, a cephalosporin antibiotic, in pediatric cardiac surgery. Each patient received a flomoxef dose of 30 mg/kg as a bolus after the induction of anesthesia, and an additional dose (1 g for a child weighing <10 kg, 2 g for ≥10 kg) was injected into the CPB prime. Modified ultrafiltration (MUF) was routinely performed. Blood samples, urine, and ultrafiltrate were collected. In seven patients (group I), serum flomoxef concentration-time courses were analyzed by a modified two-compartment model. Utilizing the estimated parameters, serum concentrations were simulated in another eight patients (group II). The initiation of CPB resulted in an abrupt increase in serum flomoxef concentrations in group I; however, concentrations declined biexponentially. The amount of excreted flomoxef in the urine and by MUF was 47% +/- 8% of the total administered dose. In group II, an excellent fit was found between the values calculated by the program and the observed serum concentrations; most of the performance errors were <1.0. There was no difference in any kinetic parameter between group I and groups I + II (n = 15). The pharmacokinetics of flomoxef in children undergoing CPB and MUF were well fitted to a modified two-compartment model. Using the kinetic data from this study, the individualization of dosage regimens for prophylactic use of flomoxef might be possible.
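
    A minimal sketch of a two-compartment bolus-dose model of the kind fitted above, assuming hypothetical rate constants and a hypothetical central volume; the published model additionally modifies the structure for the CPB prime volume and ultrafiltration losses, which are not represented here.

      import numpy as np
      from scipy.integrate import solve_ivp

      def two_compartment(t, y, k12, k21, ke):
          """y = [central amount, peripheral amount]; first-order transfer and elimination."""
          c, p = y
          return [-(k12 + ke) * c + k21 * p, k12 * c - k21 * p]

      # hypothetical parameters (not the flomoxef estimates reported in the study)
      k12, k21, ke, vc = 0.8, 0.5, 0.6, 0.25   # 1/h, 1/h, 1/h, L/kg
      dose = 30.0                              # mg/kg bolus into the central compartment

      sol = solve_ivp(two_compartment, (0.0, 8.0), [dose, 0.0],
                      args=(k12, k21, ke), dense_output=True)
      times = np.linspace(0.0, 8.0, 50)
      conc = sol.sol(times)[0] / vc            # serum concentration in mg/L
      print(conc[:5])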

  11. Parameters of Six Selected Galactic Potential Models

    NASA Astrophysics Data System (ADS)

    Bajkova, Anisa; Bobylev, Vadim

    2017-11-01

    This paper is devoted to the refinement of the parameters of the six three-component (bulge, disk, halo) axisymmetric Galactic gravitational potential models on the basis of modern data on circular velocities of Galactic objects located at distances up to 200 kpc from the Galactic center. In all models the bulge and disk are described by the Miyamoto-Nagai expressions. To describe the halo, the models of Allen-Santillán (I), Wilkinson-Evans (II), Navarro-Frenk-White (III), Binney (IV), Plummer (V), and Hernquist (VI) are used. The sought-for parameters of the potential models are determined by fitting the model rotation curves to the measured velocities, taking into account restrictions on the local dynamical matter density ρ⊙ ~ 0.1 M⊙ pc⁻³ and the vertical force |K_{z=1.1}|/2πG = 77 M⊙ pc⁻². A comparative analysis of the refined potential models is made, and for each of the models estimates of a number of Galactic characteristics are presented.

  12. PET Pharmacokinetic Modelling

    NASA Astrophysics Data System (ADS)

    Müller-Schauenburg, Wolfgang; Reimold, Matthias

    Positron Emission Tomography is a well-established technique that allows imaging and quantification of tissue properties in vivo. The goal of pharmacokinetic modelling is to estimate physiological parameters, e.g. perfusion or receptor density, from the measured time course of a radiotracer. After a brief overview of clinical application of PET, we summarize the fundamentals of modelling: distribution volume, Fick's principle of local balancing, extraction and perfusion, and how to calculate equilibrium data from measurements after bolus injection. Three fundamental models are considered: (i) the 1-tissue compartment model, e.g. for regional cerebral blood flow (rCBF) with the short-lived tracer [15O]water, (ii) the 2-tissue compartment model accounting for trapping (one exponential + constant), e.g. for glucose metabolism with [18F]FDG, (iii) the reversible 2-tissue compartment model (two exponentials), e.g. for receptor binding. Arterial blood sampling is required for classical PET modelling, but can often be avoided by comparing regions with specific binding with so-called reference regions with negligible specific uptake, e.g. in receptor imaging. To estimate the model parameters, non-linear least-squares fits are the standard. Various linearizations have been proposed for rapid parameter estimation, e.g. on a pixel-by-pixel basis, at the price of a bias. Such linear approaches exist for all three models; e.g. the Patlak plot for trapping substances like FDG, and the Logan plot to obtain distribution volumes for reversibly binding tracers. The description of receptor modelling is dedicated to the approaches of the subsequent lecture (chapter) of Millet, who works in the tradition of Delforge with multiple-injection investigations.
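
    As an illustration of the linearizations mentioned above, the sketch below implements the Patlak graphical analysis for an irreversibly trapped tracer such as FDG: plotting Ct(t)/Cp(t) against the normalized integral of the input function gives a late-time straight line whose slope estimates the net influx constant Ki. The input curves here are synthetic placeholders, not measured PET data.

      import numpy as np

      def patlak(t, ct, cp, t_star=20.0):
          """Patlak plot: slope = net influx constant Ki, intercept = initial distribution volume.
          t in minutes, ct = tissue activity curve, cp = arterial plasma input function."""
          cum_cp = np.array([np.trapz(cp[: i + 1], t[: i + 1]) for i in range(len(t))])
          x = cum_cp / cp
          y = ct / cp
          late = t >= t_star                        # use only the late, linear portion
          ki, v0 = np.polyfit(x[late], y[late], 1)
          return ki, v0

      # synthetic example: exponential input function and an irreversibly trapping tissue
      t = np.linspace(0.5, 60.0, 120)
      cp = 100.0 * np.exp(-0.1 * t) + 5.0
      cum_cp = np.array([np.trapz(cp[: i + 1], t[: i + 1]) for i in range(len(t))])
      ct = 0.02 * cum_cp + 0.3 * cp                 # true Ki = 0.02, V0 = 0.3
      print(patlak(t, ct, cp))                      # slope should recover ~0.02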

  13. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator

    PubMed Central

    Ballesteros, Joaquin; Urdiales, Cristina; Martinez, Antonio B.; van Dieën, Jaap H.

    2016-01-01

    Gait analysis can provide valuable information on a person’s condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters, while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars (related to the user condition) and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation. PMID:27834911

  14. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator.

    PubMed

    Ballesteros, Joaquin; Urdiales, Cristina; Martinez, Antonio B; van Dieën, Jaap H

    2016-11-10

    Gait analysis can provide valuable information on a person's condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters, while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars (related to the user condition) and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.

  15. An offline approach for output-only Bayesian identification of stochastic nonlinear systems using unscented Kalman filtering

    NASA Astrophysics Data System (ADS)

    Erazo, Kalil; Nagarajaiah, Satish

    2017-06-01

    In this paper an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing state augmentation as required by existing online methods. In applications expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown, and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) an increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.

  16. Experimental investigation of mode I fracture for brittle tube-shaped particles

    NASA Astrophysics Data System (ADS)

    Stasiak, Marta; Combe, Gaël; Desrues, Jacques; Richefeu, Vincent; Villard, Pascal; Armand, Gilles; Zghondi, Jad

    2017-06-01

    We focus herein on the mechanical behavior of highly crushable grains. The object of our interest, named shell, is a hollow cylinder grain with a ring cross-section, made of baked clay. The objective is to model the fragmentation of such shells by means of a discrete element (DE) approach. To this end, fracture modes I (opening fracture) and II (in-plane shear fracture) have to be investigated experimentally. This paper is essentially dedicated to mode I fracture. Therefore, a campaign of Brazilian-like compression tests, which result in crack opening, has been performed. The tensile strength of the studied shells is shown to obey a Weibull distribution, and the Weibull modulus was quantified. Finally, an estimate of the numerical/physical parameters required in a DE model (local strength) is proposed on the basis of the energy required to fracture through a given surface in mode I or II.
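
    The Weibull description mentioned above is the usual weakest-link form for brittle strength; written generically (symbols are the standard textbook ones, not the fitted values of the paper):

        P_f(\sigma) = 1 - \exp\!\left[ -\left( \frac{\sigma}{\sigma_0} \right)^{m} \right],

    where P_f is the failure probability at tensile stress σ, σ_0 is a scale parameter, and m is the Weibull modulus quantified in the study; a local strength assigned to a DE bond can then be drawn from this distribution.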

  17. Wavefront propagation simulations for a UV/soft x-ray beamline: Electron Spectro-Microscopy beamline at NSLS-II

    NASA Astrophysics Data System (ADS)

    Canestrari, N.; Bisogni, V.; Walter, A.; Zhu, Y.; Dvorak, J.; Vescovo, E.; Chubar, O.

    2014-09-01

    A "source-to-sample" wavefront propagation analysis of the Electron Spectro-Microscopy (ESM) UV / soft X-ray beamline, which is under construction at the National Synchrotron Light Source II (NSLS-II) in the Brookhaven National Laboratory, has been conducted. All elements of the beamline - insertion device, mirrors, variable-line-spacing gratings and slits - are included in the simulations. Radiation intensity distributions at the sample position are displayed for representative photon energies in the UV range (20 - 100 eV) where diffraction effects are strong. The finite acceptance of the refocusing mirrors is the dominating factor limiting the spatial resolution at the sample (by ~3 μm at 20 eV). Absolute estimates of the radiation flux and energy resolution at the sample are also obtained from the electromagnetic calculations. The analysis of the propagated UV range undulator radiation at different deflection parameter values demonstrates that within the beamline angular acceptance a slightly "red-shifted" radiation provides higher flux at the sample and better energy resolution compared to the on-axis resonant radiation of the fundamental harmonic.

  18. Volume changes and electrostriction in the primary photoreactions of various photosynthetic systems: estimation of dielectric coefficient in bacterial reaction centers and of the observed volume changes with the Drude-Nernst equation.

    PubMed

    Mauzerall, David; Hou, Jian-Min; Boichenko, Vladimir A

    2002-01-01

    Photoacoustics (PA) allows the determination of enthalpy and volume changes of photoreactions in photosynthetic reaction centers on the 0.1-10 μs time scale. These include the bacterial centers from Rb. sphaeroides, PS I and PS II centers from Synechocystis and in whole cells. In vitro and in vivo PA data on PS I and PS II revealed that both the volume change (-26 Å³) and reaction enthalpy (-0.4 eV) in PS I are the same as those in the bacterial centers. However, the volume change in PS II is small and the enthalpy far larger, -1 eV. Assigning the volume changes to electrostriction allows a coherent explanation of these observations. One can explain the large volume decrease in the bacterial centers with an effective dielectric coefficient of approximately 4. This is a unique approach to this parameter so important in estimation of protein energetics. The value of the volume contraction for PS I can only be explained if the acceptor is the supercluster (Fe(4)S(4))(Cys(4)) with charge change from -1 to -2. The small volume change in PS II is explained by sub-μs electron transfer from the Y(Z) anion to the P(680) cation, in which charge is only moved from the Y(Z) anion to the Q(A) with no charge separation, or with rapid proton transfer from oxidized Y(Z) to a polar region and thus very little change in electrostriction. At more acid pH equally rapid proton transfer from a neighboring histidine to a polar region may be caused by the electric field of the P(680) cation.

  19. Biases in Metallicity Measurements from Global Galaxy Spectra: The Effects of Flux Weighting and Diffuse Ionized Gas Contamination

    NASA Astrophysics Data System (ADS)

    Sanders, Ryan L.; Shapley, Alice E.; Zhang, Kai; Yan, Renbin

    2017-12-01

    Galaxy metallicity scaling relations provide a powerful tool for understanding galaxy evolution, but obtaining unbiased global galaxy gas-phase oxygen abundances requires proper treatment of the various line-emitting sources within spectroscopic apertures. We present a model framework that treats galaxies as ensembles of H II and diffuse ionized gas (DIG) regions of varying metallicities. These models are based upon empirical relations between line ratios and electron temperature for H II regions, and DIG strong-line ratio relations from SDSS-IV MaNGA IFU data. Flux-weighting effects and DIG contamination can significantly affect properties inferred from global galaxy spectra, biasing metallicity estimates by more than 0.3 dex in some cases. We use observationally motivated inputs to construct a model matched to typical local star-forming galaxies, and quantify the biases in strong-line ratios, electron temperatures, and direct-method metallicities as inferred from global galaxy spectra relative to the median values of the H II region distributions in each galaxy. We also provide a generalized set of models that can be applied to individual galaxies or galaxy samples in atypical regions of parameter space. We use these models to correct for the effects of flux-weighting and DIG contamination in the local direct-method mass-metallicity and fundamental metallicity relations, and in the mass-metallicity relation based on strong-line metallicities. Future photoionization models of galaxy line emission need to include DIG emission and represent galaxies as ensembles of emitting regions with varying metallicity, instead of as single H II regions with effective properties, in order to obtain unbiased estimates of key underlying physical properties.

  20. A proof of concept phase II non-inferiority criterion.

    PubMed

    Neuenschwander, Beat; Rouyrre, Nicolas; Hollaender, Norbert; Zuber, Emmanuel; Branson, Michael

    2011-06-15

    Traditional phase III non-inferiority trials require compelling evidence that the treatment vs control effect θ is better than a pre-specified non-inferiority margin θ(NI). The standard approach compares this margin to the 95 per cent confidence interval of the effect parameter. In the phase II setting, in order to declare Proof of Concept (PoC) for non-inferiority and proceed in the development of the drug, different criteria that are specifically tailored toward company internal decision making may be more appropriate. For example, less evidence may be needed as long as the effect estimate is reasonably convincing. We propose a non-inferiority design that addresses the specifics of the phase II setting. The requirements are that (1) the effect estimate be better than a critical threshold θ(C), and (2) the type I error with regard to θ(NI) is controlled at a pre-specified level. This design is compared with the traditional design from a frequentist as well as a Bayesian perspective, where the latter relies on the Level of Proof (LoP) metric, i.e. the probability that the true effect is better than effect values of interest. Clinical input is required to establish the value θ(C), which makes the design transparent and improves interactions within clinical teams. The proposed design is illustrated for a non-inferiority trial for a time-to-event endpoint in oncology. Copyright © 2011 John Wiley & Sons, Ltd.
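
    One way to operationalize the two requirements for a normally distributed effect estimate (a schematic reading of the criterion, not the authors' exact formulation) is to declare PoC when

        \hat{\theta} > \theta_C \quad\text{and}\quad \hat{\theta} > \theta_{NI} + z_{1-\alpha}\,\mathrm{SE}(\hat{\theta}),

    so that the point estimate clears the clinically derived threshold θ(C) while the comparison against the margin θ(NI) keeps the type I error at the pre-specified level α.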

  1. O/H-N/O: the curious case of NGC 4670

    NASA Astrophysics Data System (ADS)

    Kumari, Nimisha; James, Bethan L.; Irwin, Mike J.; Amorín, Ricardo; Pérez-Montero, Enrique

    2018-05-01

    We use integral field spectroscopic (IFS) observations from the Gemini Multi-Object Spectrograph North (GMOS-N) of a group of four H II regions and the surrounding gas in the central region of the blue compact dwarf (BCD) galaxy NGC 4670. At spatial scales of ˜9 pc, we map the spatial distribution of a variety of physical properties of the ionized gas: internal dust attenuation, kinematics, stellar age, star formation rate, emission-line ratios, and chemical abundances. The region of study is found to be photoionized. Using the robust direct Te method, we estimate the metallicity, nitrogen-to-oxygen ratio, and helium abundance of the four H II regions. The same parameters are also mapped for the entire region using the HII-CHI-mistry code. We find that log(N/O) is increased in the region where the Wolf-Rayet bump is detected. The region coincides with the continuum region, around which we detect a slight increase in He abundance. We estimate the number of WC4, WN2-4, and WN7-9 stars from the integrated spectrum of the WR bump region. We study the relation between log(N/O) and 12 + log(O/H) using the spatially resolved data of the field of view as well as the integrated data of the H II regions from 10 BCDs. We find an unexpected negative trend between N/O and metallicity. Several scenarios are explored to explain this trend, including nitrogen enrichment and variations in star formation efficiency via chemical evolution models.

  2. Spectral studies, thermal investigation and biological activity of some metal complexes derived from (E)-N′-(1-(4-aminophenyl)ethylidene)morpholine-4-carbothiohydrazide

    NASA Astrophysics Data System (ADS)

    El-Samanody, El-Sayed A.; Polis, Magdy W.; Emara, Esam M.

    2017-09-01

    A new series of biologically active Co(II), Ni(II), Cu(II), Zn(II) and Cd(II) complexes derived from the novel thiosemicarbazone ligand, (E)-N′-(1-(4-aminophenyl)ethylidene)morpholine-4-carbothiohydrazide (HL), were synthesized. The mode of bonding of the ligand and the geometrical structures of its metal complexes were determined by different analytical and spectral methods. The ligand coordinated with metal ions in a neutral bidentate fashion through the thione sulfur and azomethine nitrogen atoms. All metal complexes adopted octahedral geometry, except the Cu(II) complexes (3, 6, 7), which have a square-planar structure. The general thermal decomposition pathways of the ligand along with its metal complexes were explained. The thermal stability of the complexes is controlled by the number of outer- and inner-sphere water molecules, the ionic radii, and the steric hindrance. The activation thermodynamic parameters (activation energy (E*), enthalpy of activation (ΔH*), entropy of activation (ΔS*), and Gibbs free energy (ΔG*)), along with the order of reaction (n), were estimated from DTG curves. The ESR spectra of the Cu(II) complexes indicated that (dx²-y²)¹ is the ground state, with covalent character of the metal-ligand bonds. The molluscicidal and biochemical effects of the ligand and its Ni(II) and Cu(II) complexes (2; 3, 5, 7), along with their combinations with metaldehyde, were screened in vitro on the mucous gland of Eobania vermiculata. The tested compounds exhibited significant toxicity against the tested animals and have almost the same toxic effect as metaldehyde, which increases the mucous secretion of the snails and leads to death.

  3. Oral salmon calcitonin induced suppression of urinary collagen type II degradation in postmenopausal women: a new potential treatment of osteoarthritis.

    PubMed

    Bagger, Yu Z; Tankó, László B; Alexandersen, Peter; Karsdal, Morten A; Olson, Melvin; Mindeholm, Linda; Azria, Moïse; Christiansen, Claus

    2005-09-01

    To assess the efficacy of 3 months of oral salmon calcitonin (sCT) on cartilage degradation as estimated by the changes in the urinary excretion of C-terminal telopeptide of collagen type II (CTX-II), and to investigate whether the response of urinary CTX-II to oral sCT depends on the baseline level of cartilage turnover. This was a randomized, double-blind, placebo-controlled clinical trial including 152 Danish postmenopausal women aged 55-85 years. The subjects received treatment with different doses of sCT (0.15, 0.4, 1.0, or 2.5 mg) combined with an Eligen technology-based carrier molecule (200 mg), or placebo, for 3 months. The efficacy parameter was the change in the 24-h excretion of urinary CTX-I/II corrected for creatinine excretion at month 3. sCT induced a significant dose-dependent decrease in 24-h urinary CTX-II excretion. Similar dose-dependent responses were found in 24-h urinary CTX-I. When stratifying the study population into tertiles of baseline urinary CTX-II, present osteoarthritic symptoms and definite cases of osteoarthritis (OA) were significantly more frequent in women in the highest tertile of CTX-II (mean 391 +/- 18 ng/mmol). Women who received 1.0 mg of sCT and had the highest cartilage turnover presented the greatest decrease in urinary CTX-II after 3 months of treatment. In addition to its pronounced effect on bone resorption, this novel oral sCT formulation may also reduce cartilage degradation and thereby provide therapeutic benefit in terms of chondroprotection. Women with high cartilage turnover are more likely to benefit from oral sCT treatment.

  4. Simultaneous Estimation of Microphysical Parameters and Atmospheric State Variables With Radar Data and Ensemble Square-root Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tong, M.; Xue, M.

    2006-12-01

    An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of the drop size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation (OSS) experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
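
    The usual mechanism for simultaneous state and parameter estimation in ensemble filters is state augmentation: the uncertain microphysical parameters are appended to the model state so that the same Kalman analysis updates both. The sketch below shows a plain stochastic ensemble Kalman analysis step in that augmented form; the EnSRF used in the study applies a deterministic square-root update instead, and the observation operator here is a placeholder.

      import numpy as np

      def enkf_analysis(X_state, X_param, y_obs, H, R, rng):
          """One perturbed-observation EnKF analysis with parameters appended to the state.
          X_state: (n, N) state ensemble, X_param: (p, N) parameter ensemble,
          y_obs: (m,) observations, H: (m, n + p) observation operator, R: (m, m) obs error cov."""
          Z = np.vstack([X_state, X_param])                 # augmented ensemble, (n + p, N)
          N = Z.shape[1]
          A = Z - Z.mean(axis=1, keepdims=True)             # ensemble anomalies
          HA = H @ A
          K = (A @ HA.T / (N - 1)) @ np.linalg.inv(HA @ HA.T / (N - 1) + R)
          for i in range(N):                                # update each ensemble member
              d = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R) - H @ Z[:, i]
              Z[:, i] += K @ d
          n = X_state.shape[0]
          return Z[:n], Z[n:]                               # updated state and parameter ensembles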

  5. The estimation of lower refractivity uncertainty from radar sea clutter using the Bayesian—MCMC method

    NASA Astrophysics Data System (ADS)

    Sheng, Zheng

    2013-02-01

    The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper deals with the RFC problem in a Bayesian framework. It uses the unbiased Markov Chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters by using an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model within a Bayesian inversion framework. In contrast to a global optimization algorithm, the Bayesian-MCMC approach can obtain not only approximate solutions, but also the probability distributions of the solutions, that is, an uncertainty analysis of the solutions. The Bayesian-MCMC algorithm is applied to both simulated and real radar sea-clutter data. Reference data consist of the assumed simulation data and refractivity profiles obtained using a helicopter. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles with the assumed simulation and helicopter sounding data, and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.
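
    A generic random-walk Metropolis sampler of the kind used in such Bayesian-MCMC inversions is sketched below; the refractivity parameterization, the parabolic-equation forward model, and the clutter likelihood are stand-ins, not the implementation of the paper.

      import numpy as np

      def metropolis(log_post, theta0, n_iter=5000, step=0.1, seed=0):
          """Random-walk Metropolis sampling of a log posterior density."""
          rng = np.random.default_rng(seed)
          theta = np.array(theta0, dtype=float)
          lp = log_post(theta)
          samples = np.empty((n_iter, theta.size))
          for i in range(n_iter):
              proposal = theta + step * rng.standard_normal(theta.size)
              lp_prop = log_post(proposal)
              if np.log(rng.uniform()) < lp_prop - lp:      # accept with probability min(1, ratio)
                  theta, lp = proposal, lp_prop
              samples[i] = theta
          return samples

      def log_post(theta, obs=np.array([1.0, 0.5]), sigma=0.2):
          """Stand-in posterior: Gaussian misfit between a placeholder forward model and observed clutter."""
          model = np.array([theta[0] + theta[1], theta[0] - theta[1]])
          return -0.5 * np.sum((model - obs) ** 2) / sigma ** 2

      chain = metropolis(log_post, theta0=[0.0, 0.0])
      print(chain.mean(axis=0))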

  6. Intraocular lens power estimation by accurate ray tracing for eyes that underwent previous refractive surgeries

    NASA Astrophysics Data System (ADS)

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without a history of any ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, and SRK-II, were all relatively accurate. However, for eyes that underwent refractive surgeries, such as LASIK, or eyes diagnosed with keratoconus, these equations may produce significant postoperative refractive error, which may lead to poor satisfaction after cataract surgery. Although some methods have been proposed to solve this problem, such as the Haigis-L equation[1] or using preoperative data (data before LASIK) to estimate the K value[2], no precise equations were available for these eyes. Here, we introduced a novel intraocular lens power estimation method by accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopted the exact measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens powers for a patient with keratoconus and another post-LASIK patient agreed well with their visual outcomes after cataract surgery.

  7. Newly developed vaginal atrophy symptoms II and vaginal pH: a better correlation in vaginal atrophy?

    PubMed

    Tuntiviriyapun, P; Panyakhamlerd, K; Triratanachat, S; Chatsuwan, T; Chaikittisilpa, S; Jaisamrarn, U; Taechakraichana, N

    2015-04-01

    The primary objective of this study was to evaluate the correlation among symptoms, signs, and the number of lactobacilli in postmenopausal vaginal atrophy. The secondary objective was to develop a new parameter to improve the correlation. A cross-sectional descriptive study. Naturally postmenopausal women aged 45-70 years with at least one clinical symptom of vaginal atrophy of moderate to severe intensity were included in this study. All of the objective parameters (vaginal atrophy score, vaginal pH, the number of lactobacilli, vaginal maturation index, and vaginal maturation value) were evaluated and correlated with vaginal atrophy symptoms. A new parameter of vaginal atrophy, vaginal atrophy symptoms II, was developed and consists of the two most bothersome symptoms (vaginal dryness and dyspareunia). Vaginal atrophy symptoms II was analyzed for correlation with the objective parameters. A total of 132 naturally postmenopausal women were recruited for analysis. Vaginal pH was the only objective parameter found to have a weak correlation with vaginal atrophy symptoms (r = 0.273, p = 0.002). The newly developed vaginal atrophy symptoms II parameter showed moderate correlation with vaginal pH (r = 0.356, p < 0.001) and a weak correlation with the vaginal atrophy score (r = 0.230, p < 0.001). History of sexual intercourse within 3 months was associated with a better correlation between vaginal atrophy symptoms and the objective parameters. Vaginal pH was significantly correlated with vaginal atrophy symptoms. The newly developed vaginal atrophy symptoms II was associated with a better correlation. The vaginal atrophy symptoms II and vaginal pH may be better tools for clinical evaluation and future study of the vaginal ecosystem.

  8. The influence of the call with a mobile phone on heart rate variability parameters in healthy volunteers.

    PubMed

    Andrzejak, Ryszard; Poreba, Rafal; Poreba, Malgorzata; Derkacz, Arkadiusz; Skalik, Robert; Gac, Pawel; Beck, Boguslaw; Steinmetz-Beck, Aleksandra; Pilecki, Witold

    2008-08-01

    It is possible that the electromagnetic field (EMF) generated by mobile phones (MP) may influence the autonomic nervous system (ANS) and modulate the function of the circulatory system. The aim of the study was to estimate the influence of a call with a mobile phone on heart rate variability (HRV) in young healthy people. Time and frequency domain HRV analyses were performed to assess the changes in sympathovagal balance in a group of 32 healthy students with normal electrocardiogram (ECG) and echocardiogram at rest. The frequency domain variables computed were ultra low frequency (ULF) power, very low frequency (VLF) power, low frequency (LF) power and high frequency (HF) power, and the LF/HF ratio was determined. ECG Holter monitoring was recorded in standardized conditions, from 08:00 to 09:00 in the morning in a sitting position, within 20-min periods: before the telephone call (period I), during the call with use of the mobile phone (period II), and after the telephone call (period III). During the 20-min call with a mobile phone, time domain parameters such as the standard deviation of all normal sinus RR intervals (SDNN [ms] - period I: 73.94±25.02, period II: 91.63±35.99, period III: 75.06±27.62; I-II: p<0.05, II-III: p<0.05) and the standard deviation of the averaged normal sinus RR intervals for all 5-min segments (SDANN [ms] - period I: 47.78±22.69, period II: 60.72±27.55, period III: 47.12±23.21; I-II: p<0.05, II-III: p<0.05) were significantly increased. The very low frequency (VLF [ms²] - period I: 456.62±214.13, period II: 566.84±216.99, period III: 477.43±203.94; I-II: p<0.05), low frequency (LF [ms²] - period I: 607.97±201.33, period II: 758.28±307.90, period III: 627.09±220.33; I-II: p<0.01, II-III: p<0.05) and high frequency (HF [ms²] - period I: 538.44±290.63, period II: 730.31±445.78, period III: 590.94±301.64; I-II: p<0.05) components were also highest, and the LF/HF ratio (period I: 1.48±0.38, period II: 1.16±0.35, period III: 1.46±0.40; I-II: p<0.05, II-III: p<0.05) was lowest, during the call with the mobile phone. The tone of the parasympathetic system, measured indirectly by analysis of heart rate variability, was increased while sympathetic tone was lowered during the call with the mobile phone. It was shown that a call with a mobile phone may change the autonomic balance in healthy subjects. Changes in heart rate variability during the call with a mobile phone could be affected by the electromagnetic field, but the influence of speaking cannot be excluded.
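
    To make the time-domain quantities concrete, here is a minimal Python sketch of how SDNN and SDANN are conventionally computed from a series of normal RR intervals. It assumes an idealized, artifact-free interval series and a 5-min segment length; it is not the clinical Holter analysis software used in the study.

      import numpy as np

      def sdnn_sdann(rr_ms, segment_s=300):
          """Time-domain HRV indices from normal-to-normal RR intervals (ms).

          SDNN  : standard deviation of all RR intervals
          SDANN : standard deviation of the mean RR interval computed over
                  consecutive segments (conventionally 5 min = 300 s)
          """
          rr_ms = np.asarray(rr_ms, dtype=float)
          sdnn = rr_ms.std(ddof=1)
          t = np.cumsum(rr_ms) / 1000.0                  # beat times in seconds
          seg_means = [rr_ms[(t >= s) & (t < s + segment_s)].mean()
                       for s in np.arange(0.0, t[-1], segment_s)
                       if np.any((t >= s) & (t < s + segment_s))]
          sdann = np.std(seg_means, ddof=1) if len(seg_means) > 1 else float("nan")
          return sdnn, sdann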

  9. An Empirical Calibration of the Mixing-Length Parameter α

    NASA Astrophysics Data System (ADS)

    Ferraro, Francesco R.; Valenti, Elena; Straniero, Oscar; Origlia, Livia

    2006-05-01

    We present an empirical calibration of the mixing-length free parameter α based on a homogeneous infrared database of 28 Galactic globular clusters spanning a wide metallicity range (-2.15<[Fe/H]<-0.2). Empirical estimates of the red giant effective temperatures have been obtained from infrared colors. Suitable relations linking these temperatures to the cluster metallicity have been obtained and compared to theoretical predictions. An appropriate set of models for the Sun and Population II giants has been computed by using both the standard solar metallicity (Z/X)solar=0.0275 and the most recently proposed value (Z/X)solar=0.0177. We find that when the standard solar metallicity is adopted, a unique value of α=2.17 can be used to reproduce both the solar radius and the Population II red giant temperature. Conversely, when the new solar metallicity is adopted, two different values of α are required: α=1.86 to fit the solar radius and α~2.0 to fit the red giant temperatures. However, it must be noted that regardless of the adopted solar reference, the α-parameter does not show any significant dependence on metallicity. Based on observations collected at the European Southern Observatory (ESO), La Silla, Chile. Also based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundacion Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.

  10. Modeling Physiological Processes That Relate Toxicant Exposure and Bacterial Population Dynamics

    PubMed Central

    Klanjscek, Tin; Nisbet, Roger M.; Priester, John H.; Holden, Patricia A.

    2012-01-01

    Quantifying effects of toxicant exposure on metabolic processes is crucial to predicting microbial growth patterns in different environments. Mechanistic models, such as those based on Dynamic Energy Budget (DEB) theory, can link physiological processes to microbial growth. Here we expand the DEB framework to include explicit consideration of the role of reactive oxygen species (ROS). Extensions considered are: (i) additional terms in the equation for the “hazard rate” that quantifies mortality risk; (ii) a variable representing environmental degradation; (iii) a mechanistic description of toxic effects linked to increase in ROS production and aging acceleration, and to non-competitive inhibition of transport channels; (iv) a new representation of the “lag time” based on energy required for acclimation. We estimate model parameters using calibrated Pseudomonas aeruginosa optical density growth data for seven levels of cadmium exposure. The model reproduces growth patterns for all treatments with a single common parameter set, and bacterial growth for treatments of up to 150 mg(Cd)/L can be predicted reasonably well using parameters estimated from cadmium treatments of 20 mg(Cd)/L and lower. Our approach is an important step towards connecting levels of biological organization in ecotoxicology. The presented model reveals possible connections between processes that are not obvious from purely empirical considerations, enables validation and hypothesis testing by creating testable predictions, and identifies research required to further develop the theory. PMID:22328915

  11. Patient-specific biomechanical model of hypoplastic left heart to predict post-operative cardio-circulatory behaviour.

    PubMed

    Cutrì, Elena; Meoli, Alessio; Dubini, Gabriele; Migliavacca, Francesco; Hsia, Tain-Yen; Pennati, Giancarlo

    2017-09-01

    Hypoplastic left heart syndrome is a complex congenital heart disease characterised by the underdevelopment of the left ventricle and is normally treated with a three-stage surgical repair. In this study, a multiscale closed-loop cardio-circulatory model is created to reproduce the pre-operative condition of a patient suffering from this pathology, and virtual surgery is performed. Firstly, cardio-circulatory parameters are estimated using a fully closed-loop cardio-circulatory lumped parameter model. Secondly, a standalone 3D FEA model is built to obtain the active and passive ventricular characteristics and the unloaded reference state. Lastly, the 3D model of the single ventricle is coupled to the lumped parameter model of the circulation, obtaining a multiscale closed-loop pre-operative model. Lacking any information on the fibre orientation, two cases were simulated: (i) fibres distributed as in the physiological right ventricle and (ii) fibres as in the physiological left ventricle. Once the pre-operative condition is satisfactorily simulated for the two cases, virtual surgery is performed. The post-operative results in the two cases highlighted similar hemodynamic behaviour but different local mechanics. This finding suggests that knowledge of the patient-specific fibre arrangement is important to correctly estimate the single ventricle's working condition and consequently can be valuable in supporting clinical decisions. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  12. Attitude determination and parameter estimation using vector observations - Theory

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
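
    As a compact illustration of the q-method referred to above (optimizing Wahba's loss over the attitude without an a priori estimate), the following Python sketch builds Davenport's K matrix from weighted vector observations and returns the quaternion associated with its largest eigenvalue. This is the generic textbook q-method, not Markley's extended estimator with additional parameters, and all inputs are placeholders.

      import numpy as np

      def q_method(body_vecs, ref_vecs, weights):
          """Davenport's q-method: quaternion minimizing Wahba's loss function.

          body_vecs, ref_vecs : (n, 3) unit vectors measured in the body frame
                                and known in the reference frame
          weights             : (n,) nonnegative observation weights
          Returns the quaternion [q1, q2, q3, q4] (vector part first) of the
          attitude that best maps the reference vectors onto the body vectors.
          """
          B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
          z = sum(w * np.cross(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
          sigma = np.trace(B)
          K = np.zeros((4, 4))
          K[:3, :3] = B + B.T - sigma * np.eye(3)
          K[:3, 3] = z
          K[3, :3] = z
          K[3, 3] = sigma
          eigvals, eigvecs = np.linalg.eigh(K)          # K is symmetric
          return eigvecs[:, np.argmax(eigvals)]         # eigenvector of largest eigenvalue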

  13. To Evaluate the Correlation Between Skeletal and Dental Parameters to the Amount of Crowding in Class II Div. 1 Malocclusions.

    PubMed

    Singh, Shivani; Shivaprakash, G

    2017-09-01

    Crowding of teeth is one of the most common problems that motivate patients to seek orthodontic treatment. Determining the etiology of crowding could have a significant effect on treatment planning and prognosis of Class II malocclusion. The aim of this study was to evaluate the relationship of skeletal and dental parameters to the amount of dental crowding in patients with Class II Division 1 (div. 1) malocclusion. Pretreatment lateral cephalograms and dental casts of 60 patients with skeletal Class II malocclusion were collected for the study. The sample was divided into two groups according to the severity of pretreatment mandibular crowding: Group I consisted of cases with crowding ≥3 mm and Group II of cases with crowding <3 mm. Lateral cephalograms for each patient were manually traced, and skeletal parameters (effective maxillary and mandibular length, mandibular plane angle, Y axis, lower anterior face height) and dental parameters (axial inclination of the lower incisor, inclination of the lower incisor to the mandibular plane, interincisal angle) were measured. An unpaired t-test was used for intergroup comparison, and the relationship between different measurements was investigated using the Pearson correlation coefficient. Among the skeletal parameters measured, only effective mandibular length exhibited a statistically significant difference between the two groups. No statistically significant difference was found between the two groups for any of the dental parameters. A significant inverse correlation was found between mandibular crowding and effective mandibular length. Subjects with Class II div. 1 malocclusion and moderate to severe mandibular crowding have a significantly smaller effective mandibular base length than subjects with the same malocclusion and slight mandibular crowding.

  14. Estimation of parameters of constant elasticity of substitution production functional model

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi

    2017-11-01

    Nonlinear model building has become an increasingly important and powerful tool in mathematical economics. In recent years the popularity of applications of nonlinear models has risen dramatically. Researchers in econometrics are often interested in the inferential aspects of nonlinear regression models [6]. The present study gives a distinct method of estimation for a more complicated and highly nonlinear model, the Constant Elasticity of Substitution (CES) production function. In 2012, Henningsen et al. [5] proposed three solutions to avoid serious problems when estimating CES functions: (i) removing discontinuities by using the limits of the CES function and its derivatives; (ii) circumventing large rounding errors by local linear approximations; and (iii) handling ill-behaved objective functions by a multi-dimensional grid search. Joel Chongeh et al. [7] discussed estimating the impact of capital and labour inputs on the gross output of agri-food products using a constant elasticity of substitution production function in the Tanzanian context. Pol Antras [8] presented new estimates of the elasticity of substitution between capital and labour using data from the private sector of the U.S. economy for the period 1948-1998.
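
    Since the record discusses estimating the CES function without giving its form, a hedged illustration may help: the sketch below fits the standard two-input CES specification Q = γ[δK^(-ρ) + (1-δ)L^(-ρ)]^(-ν/ρ) to synthetic data by nonlinear least squares. The data, starting values and bounds are invented for the example and are not taken from the studies cited above.

      import numpy as np
      from scipy.optimize import curve_fit

      def ces(X, gamma, delta, rho, nu):
          """Two-input CES production function Q(K, L)."""
          K, L = X
          return gamma * (delta * K ** (-rho) + (1.0 - delta) * L ** (-rho)) ** (-nu / rho)

      # synthetic capital/labour/output data generated from known parameters plus noise
      rng = np.random.default_rng(1)
      K = rng.uniform(1.0, 10.0, 200)
      L = rng.uniform(1.0, 10.0, 200)
      true = dict(gamma=2.0, delta=0.4, rho=0.8, nu=1.0)
      Q = ces((K, L), **true) * rng.lognormal(sigma=0.05, size=200)

      # nonlinear least-squares estimation of (gamma, delta, rho, nu);
      # rho is kept away from 0 (the Cobb-Douglas limit) for numerical stability
      p0 = [1.0, 0.5, 0.5, 1.0]
      bounds = ([0.0, 0.0, 0.01, 0.1], [np.inf, 1.0, 10.0, 5.0])
      popt, pcov = curve_fit(ces, (K, L), Q, p0=p0, bounds=bounds)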

  15. Integral field spectroscopy of H II regions in M33

    NASA Astrophysics Data System (ADS)

    López-Hernández, Jesús; Terlevich, Elena; Terlevich, Roberto; Rosa-González, Daniel; Díaz, Ángeles; García-Benito, Rubén; Vílchez, José; Hägele, Guillermo

    2013-03-01

    Integral field spectroscopy is presented for star-forming regions in M33. A central area of 300 × 500 pc2 and the external H II region IC 132, at a galactocentric distance ˜19 arcmin (4.69 kpc), were observed with the Potsdam Multi-Aperture Spectrophotometer instrument at the 3.5-m telescope of the Centro Astronómico Hispano-Alemán (CAHA, aka Calar Alto Observatory). The spectral coverage goes from 3600 Å to 1 μm to include from [O II] λ3727 Å to the near-infrared lines required for deriving sulphur electron temperature and abundance diagnostics. Local conditions within individual H II regions are presented in the form of emission-line fluxes and physical conditions for each spatial resolution element (spaxel) and for segments with similar Hα surface brightness. A clear dichotomy is observed when comparing the central to outer disc H II regions. While the external H II region has higher electron temperature plus larger Hβ equivalent width, size and excitation, the central region has higher extinction and metal content. The dichotomy extends to the Baldwin-Phillips-Terlevich (BPT) diagnostic diagrams that show two orthogonal broad distributions of points. By comparing with pseudo-3D photoionization models, we conclude that the bulk of observed differences are probably related to a different ionization parameter and metallicity. Wolf-Rayet (WR) features are detected in IC 132, and resolved into two concentrations whose integrated spectra were used to estimate the characteristic number of WR stars. No WR features were detected in the central H II regions despite their higher metallicity.

  16. Cosmology with gamma-ray bursts. II. Cosmography challenges and cosmological scenarios for the accelerated Universe

    NASA Astrophysics Data System (ADS)

    Demianski, Marek; Piedipalumbo, Ester; Sawant, Disha; Amati, Lorenzo

    2017-02-01

    Context. Explaining the accelerated expansion of the Universe is one of the fundamental challenges in physics today. Cosmography provides information about the evolution of the Universe derived from measured distances, assuming only that the space-time geometry is described by the Friedmann-Lemaître-Robertson-Walker metric, and adopting an approach that effectively uses only Taylor expansions of basic observables. Aims: We perform a high-redshift analysis to constrain the cosmographic expansion up to the fifth order. It is based on the Union2 type Ia supernovae data set, the gamma-ray burst Hubble diagram, a data set of 28 independent measurements of the Hubble parameter, baryon acoustic oscillation measurements from galaxy clustering and the Lyman-α forest in the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS), and some Gaussian priors on h and ΩM. Methods: We performed a statistical analysis and explored the probability distributions of the cosmographic parameters. By building up their regions of confidence, we maximized our likelihood function using the Markov chain Monte Carlo method. Results: Our high-redshift analysis confirms that the expansion of the Universe currently accelerates; the estimation of the jerk parameter indicates a possible deviation from the standard ΛCDM cosmological model. Moreover, we investigate implications of our results for the reconstruction of the dark energy equation of state (EOS) by comparing the standard technique of cosmography with an alternative approach based on generalized Padé approximations of the same observables. Because these expansions converge better, it is possible to improve the constraints on the cosmographic parameters and also on the dark energy EOS. Conclusions: The estimation of the jerk and the DE parameters indicates at 1σ a possible deviation from the ΛCDM cosmological model.

  17. Quantification of CO2 generation in sedimentary basins through carbonate/clays reactions with uncertain thermodynamic parameters

    NASA Astrophysics Data System (ADS)

    Ceriotti, G.; Porta, G. M.; Geloni, C.; Dalla Rosa, M.; Guadagnini, A.

    2017-09-01

    We develop a methodological framework and mathematical formulation that yield estimates of the uncertainty associated with the amounts of CO2 generated by Carbonate-Clay Reactions (CCR) in large-scale subsurface systems, to assist characterization of the main features of this geochemical process. Our approach couples a one-dimensional compaction model, providing the dynamics of the evolution of porosity, temperature and pressure along the vertical direction, with a chemical model able to quantify the partial pressure of CO2 resulting from mineral and pore water interaction. The modeling framework we propose allows (i) estimating the depth at which the source of gases is located and (ii) quantifying the amount of CO2 generated, based on the mineralogy of the sediments involved in the basin formation process. A distinctive objective of the study is the quantification of the way the uncertainty affecting chemical equilibrium constants propagates to model outputs, i.e., the flux of CO2. These parameters are considered key sources of uncertainty in our modeling approach because temperature and pressure distributions associated with deep burial depths typically fall outside the range of validity of commonly employed geochemical databases and software. We also analyze the impact of the relative abundance of primary phases in the sediments on the activation of CCR processes. As a test bed, we consider a computational study where pressure and temperature conditions are representative of those observed in real sedimentary formations. Our results are conducive to the probabilistic assessment of (i) the characteristic pressure and temperature at which CCR leads to generation of CO2 in sedimentary systems, and (ii) the order of magnitude of the CO2 generation rate that can be associated with CCR processes.

  18. Mass and p-factor of the Type II Cepheid OGLE-LMC-T2CEP-098 in a Binary System

    NASA Astrophysics Data System (ADS)

    Pilecki, Bogumił; Gieren, Wolfgang; Smolec, Radosław; Pietrzyński, Grzegorz; Thompson, Ian B.; Anderson, Richard I.; Bono, Giuseppe; Soszyński, Igor; Kervella, Pierre; Nardetto, Nicolas; Taormina, Mónica; Stȩpień, Kazimierz; Wielgórski, Piotr

    2017-06-01

    We present the results of a study of the type II Cepheid (P_puls = 4.974 days) in the eclipsing binary system OGLE-LMC-T2CEP-098 (P_orb = 397.2 days). The Cepheid belongs to the peculiar W Vir group, for which the evolutionary status is virtually unknown. It is the first single-lined system with a pulsating component analyzed using the method developed by Pilecki et al. We show that the presence of a pulsator makes it possible to derive accurate physical parameters of the stars even if radial velocities can be measured for only one of the components. We have used four different methods to limit and estimate the physical parameters, eventually obtaining precise results by combining pulsation theory with the spectroscopic and photometric solutions. The Cepheid radius, mass, and temperature are 25.3 ± 0.2 R⊙, 1.51 ± 0.09 M⊙, and 5300 ± 100 K, respectively, while its companion has a similar size (26.3 R⊙), but is more massive (6.8 M⊙) and hotter (9500 K). Our best estimate for the p-factor of the Cepheid is 1.30 ± 0.03. The mass, position on the period-luminosity diagram, and pulsation amplitude indicate that the pulsating component is very similar to the Anomalous Cepheids, although it has a much longer period and is redder in color. The very unusual combination of the components suggests that the system has passed through a mass-transfer phase in its evolution. A more complicated internal structure would then explain its peculiarity. This paper includes data gathered with the 6.5 m Magellan Clay Telescope at Las Campanas Observatory, Chile.

  19. Therapeutic effect of umbelliferon-α-D-glucopyranosyl-(2(I)→1(II))-α-D-glucopyranoside on adjuvant-induced arthritic rats.

    PubMed

    Kumar, Vikas; Anwar, Firoz; Verma, Amita; Mujeeb, Mohd

    2015-06-01

    The aim of the present investigation was to evaluate the antiarthritic and antioxidant effects of umbelliferon-α-D-glucopyranosyl-(2I→1II)-α-D-glucopyranoside (UFD) in chemically induced arthritic rats. Different doses of UFD were tested against turpentine oil (TO) and formaldehyde induced acute arthritis and complete Freund's adjuvant (CFA) induced chronic arthritis in Wistar rats. Arthritic assessment and body weight were measured at regular intervals up to 28 days. On day 28, the animals in all groups were anaesthetized, blood was collected by retro-orbital puncture and hematological parameters were estimated. The animals were then sacrificed; synovial tissue was extracted and malondialdehyde (MDA), glutathione (GSH), glutathione peroxidase (GPx) and superoxide dismutase (SOD) were estimated. The different doses of UFD showed a protective effect against turpentine oil and formaldehyde induced acute arthritis and CFA induced chronic arthritis in a dose-dependent manner. In the acute models, the TO and formaldehyde induced inflammation, due to the release of inflammatory mediators, was significantly inhibited by UFD in a dose-dependent manner. CFA induced arthritic rats treated with the different doses of UFD showed an inhibitory effect on the delayed increase in joint diameter seen in the arthritic control group rats. UFD significantly improved the arthritic index and body weight, confirming its antiarthritic effect. UFD affected the hematological parameters, improving the levels of RBC and Hb and lowering the levels of EBC and ESR, confirming an immunosuppressive effect. UFD also significantly improved the levels of the endogenous antioxidants, confirming its antioxidant effect. The present investigation suggests that UFD has a prominent antiarthritic impact, which can be attributed to its antiarthritic and antioxidant effects.

  20. Application of nonlinear least-squares regression to ground-water flow modeling, west-central Florida

    USGS Publications Warehouse

    Yobbi, D.K.

    2000-01-01

    A nonlinear least-squares regression technique for estimating ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for the reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first and then, using the optimized values for these parameters, estimate the entire data set.
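
    A generic illustration of the regression idea, minimizing the misfit between measured and simulated water levels over the model parameters, is sketched below using SciPy's trust-region least-squares solver. The forward model, observations and parameter names are invented placeholders for the illustration, not the Florida aquifer model.

      import numpy as np
      from scipy.optimize import least_squares

      # hypothetical forward model: water level as a smooth function of two
      # stand-in parameters (a drawdown amplitude and a decay length)
      def simulated_heads(params, x):
          amplitude, length = params
          return 50.0 - amplitude * np.exp(-x / length)

      x_obs = np.linspace(0.0, 10.0, 25)                       # observation points
      rng = np.random.default_rng(2)
      measured = simulated_heads([3.0, 4.0], x_obs) + rng.normal(0.0, 0.05, x_obs.size)

      def residuals(params):
          """Measured minus simulated water levels, the quantity being minimized."""
          return measured - simulated_heads(params, x_obs)

      fit = least_squares(residuals, x0=[1.0, 1.0],
                          bounds=([0.1, 0.1], [np.inf, np.inf]))   # trust-region solve
      # unscaled covariance proxy from the Jacobian, the basis for the kind of
      # sensitivity and correlation statistics discussed in the abstract
      covariance_proxy = np.linalg.inv(fit.jac.T @ fit.jac)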

  1. Temperature variation effects on stochastic characteristics for low-cost MEMS-based inertial sensor error

    NASA Astrophysics Data System (ADS)

    El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.

    2007-11-01

    We examine the effect of varying the temperature points on MEMS inertial sensors' noise models using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing root-mean-square random drift error as a function of averaging times. LSSA is an alternative to the classical Fourier methods and has been applied successfully by a number of researchers in the study of the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely MotionPakII and Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimation results at different temperature points, and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA, and that the sensors' stochastic model parameters are temperature dependent. Also, the Kaiser window FIR low-pass filter is used to investigate the effect of the de-noising stage on the stochastic model. It is shown that the stochastic model is also dependent on the chosen cut-off frequency.
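
    For readers unfamiliar with the Allan variance mentioned above, here is a minimal non-overlapping implementation in Python. It reproduces the definition (half the mean squared difference of successive cluster averages) but is not the authors' processing chain; the sampling rate, averaging times and white-noise test signal are placeholders.

      import numpy as np

      def allan_variance(samples, fs, taus):
          """Non-overlapping Allan variance of an inertial-sensor output series.

          samples : 1-D array of equally spaced static sensor readings
          fs      : sampling frequency in Hz
          taus    : iterable of averaging times in seconds
          Returns an array of Allan variances, one per averaging time.
          """
          samples = np.asarray(samples, dtype=float)
          result = []
          for tau in taus:
              m = int(round(tau * fs))                 # samples per cluster
              n = samples.size // m if m > 0 else 0    # number of clusters
              if m < 1 or n < 2:
                  result.append(np.nan)
                  continue
              cluster_means = samples[:n * m].reshape(n, m).mean(axis=1)
              result.append(0.5 * np.mean(np.diff(cluster_means) ** 2))
          return np.array(result)

      # example: white noise sampled at 100 Hz; the Allan deviation should fall as tau**-0.5
      taus = np.logspace(-1, 2, 20)
      avar = allan_variance(np.random.default_rng(3).normal(size=200_000), fs=100.0, taus=taus)
      adev = np.sqrt(avar)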

  2. Multifactorial inheritance with cultural transmission and assortative mating. II. a general model of combined polygenic and cultural inheritance.

    PubMed Central

    Cloninger, C R; Rice, J; Reich, T

    1979-01-01

    A general linear model of combined polygenic-cultural inheritance is described. The model allows for phenotypic assortative mating, common environment, maternal and paternal effects, and genic-cultural correlation. General formulae for phenotypic correlation between family members in extended pedigrees are given for both primary and secondary assortative mating. A FORTRAN program BETA, available upon request, is used to provide maximum likelihood estimates of the parameters from reported correlations. American data about IQ and Burks' culture index are analyzed. Both cultural and genetic components of phenotypic variance are observed to make significant and substantial contributions to familial resemblance in IQ. The correlation between the environments of DZ twins is found to equal that of singleton sibs, not that of MZ twins. Burks' culture index is found to be an imperfect measure of midparent IQ rather than an index of home environment as previously assumed. Conditions under which the parameters of the model may be uniquely and precisely estimated are discussed. Interpretation of variance components in the presence of assortative mating and genic-cultural covariance is reviewed. A conservative, but robust, approach to the use of environmental indices is described. PMID:453202

  3. Evaluation of bond strength of resin cements using different general-purpose statistical software packages for two-parameter Weibull statistics.

    PubMed

    Roos, Malgorzata; Stawarczyk, Bogna

    2012-07-01

    This study evaluated and compared Weibull parameters of resin bond strength values using six different general-purpose statistical software packages for the two-parameter Weibull distribution. Two hundred human teeth were randomly divided into 4 groups (n=50), prepared and bonded on dentin according to the manufacturers' instructions using the following resin cements: (i) Variolink (VAN, conventional resin cement), (ii) Panavia21 (PAN, conventional resin cement), (iii) RelyX Unicem (RXU, self-adhesive resin cement) and (iv) G-Cem (GCM, self-adhesive resin cement). Subsequently, all specimens were stored in water for 24 h at 37°C. Shear bond strength was measured and the data were analyzed using the Anderson-Darling goodness-of-fit test (MINITAB 16) and two-parameter Weibull statistics with the following statistical software packages: Excel 2011, SPSS 19, MINITAB 16, R 2.12.1, SAS 9.1.3 and STATA 11.2 (p≤0.05). Additionally, the three-parameter Weibull was fitted using MINITAB 16. Two-parameter Weibull estimates calculated with MINITAB and STATA can be compared using an omnibus test and 95% CIs. In SAS, only 95% CIs were directly obtained from the output. R provided no estimates of 95% CIs. In both SAS and R the global comparison of the characteristic bond strength among groups is provided by means of Weibull regression. EXCEL and SPSS provided no default information about 95% CIs and no significance test for the comparison of Weibull parameters among the groups. In summary, the conventional resin cement VAN showed the highest Weibull modulus and characteristic bond strength. There are discrepancies in the Weibull statistics depending on the software package and the estimation method. The information content in the default output provided by the software packages differs to a very high extent. Copyright © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
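
    As a reproducible counterpart to the software comparison, the following Python/SciPy sketch fits a two-parameter Weibull distribution to bond-strength data by maximum likelihood and reports the shape (Weibull modulus) and scale (characteristic strength). The numbers are simulated placeholders, and SciPy was not one of the packages compared in the study.

      import numpy as np
      from scipy import stats

      # simulated shear bond strengths in MPa (placeholder data, not study results)
      rng = np.random.default_rng(4)
      strengths = stats.weibull_min.rvs(c=6.0, scale=18.0, size=50, random_state=rng)

      # two-parameter fit: location fixed at zero so only shape and scale are estimated
      shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)
      weibull_modulus = shape            # often denoted m
      characteristic_strength = scale    # sigma_0, the 63.2nd percentile

      # a simple percentile-bootstrap 95% CI for the characteristic strength
      # (illustrative only; the packages in the study use their own interval methods)
      boot = [stats.weibull_min.fit(rng.choice(strengths, strengths.size), floc=0)[2]
              for _ in range(500)]
      ci_low, ci_high = np.percentile(boot, [2.5, 97.5])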

  4. IRIS Observations of Spicules and Structures Near the Solar Limb

    NASA Astrophysics Data System (ADS)

    Alissandrakis, C. E.; Vial, J.-C.; Koukras, A.; Buchlin, E.; Chane-Yook, M.

    2018-02-01

    We have analyzed Interface Region Imaging Spectrograph (IRIS) spectral and slit-jaw observations of a quiet region near the South Pole. In this article we present an overview of the observations, the corrections, and the absolute calibration of the intensity. We focus on the average profiles of strong (Mg ii h and k, C ii and Si iv), as well as of weak spectral lines in the near ultraviolet (NUV) and the far ultraviolet (FUV), including the Mg ii triplet, thus probing the solar atmosphere from the low chromosphere to the transition region. We give the radial variation of bulk spectral parameters as well as line ratios and turbulent velocities. We present measurements of the formation height in lines and in the NUV continuum from which we find a linear relationship between the position of the limb and the intensity scale height. We also find that low forming lines, such as the Mg ii triplet, show no temporal variations above the limb associated with spicules, suggesting that such lines are formed in a homogeneous atmospheric layer and, possibly, that spicules are formed above the height of 2''. We discuss the spatio-temporal structure of the atmosphere near the limb from images of intensity as a function of position and time. In these images, we identify p-mode oscillations in the cores of lines formed at low heights above the photosphere, slow-moving bright features in O i and fast-moving bright features in C ii. Finally, we compare the Mg ii k and h line profiles, together with intensity values of the Balmer lines from the literature, with computations from the PROM57Mg non-LTE model, developed at the Institut d'Astrophysique Spatiale, and estimated values of the physical parameters. We obtain electron temperatures in the range of ~8000 K at small heights to ~20 000 K at large heights, electron densities from 1.1 × 10^11 to 4 × 10^10 cm^-3 and a turbulent velocity of ~24 km s^-1.

  5. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in the dynamical equations to estimate the parameters in a single component one at a time, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
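
    The paper estimates the parameters with an evolutionary algorithm; purely to illustrate the component-by-component idea on the Rössler system, the sketch below takes the simpler route of linear least squares on finite-difference derivatives, exploiting the fact that each Rössler equation is linear in its own parameters once all state time series are treated as known. The parameter values are the standard textbook choices, not those of the cited study.

      import numpy as np
      from scipy.integrate import solve_ivp

      # generate a reference Rössler trajectory with known parameters (a, b, c)
      a_true, b_true, c_true = 0.2, 0.2, 5.7
      def rossler(t, s):
          x, y, z = s
          return [-y - z, x + a_true * y, b_true + z * (x - c_true)]

      sol = solve_ivp(rossler, (0, 200), [1.0, 1.0, 1.0], max_step=0.01)
      t, (x, y, z) = sol.t, sol.y

      # stage-by-stage estimation: use all known time series to recover the
      # parameters of one component at a time via linear least squares
      dy = np.gradient(y, t)
      dz = np.gradient(z, t)
      a_est = np.linalg.lstsq(y[:, None], dy - x, rcond=None)[0][0]      # from dy/dt = x + a*y
      A = np.column_stack([np.ones_like(z), -z])
      b_est, c_est = np.linalg.lstsq(A, dz - z * x, rcond=None)[0]       # from dz/dt = b + z*(x - c)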

  6. Optimal estimation retrieval of aerosol microphysical properties from SAGE II satellite observations in the volcanically unperturbed lower stratosphere

    NASA Astrophysics Data System (ADS)

    Wurl, D.; Grainger, R. G.; McDonald, A. J.; Deshler, T.

    2010-05-01

    Stratospheric aerosol particles under non-volcanic conditions are typically smaller than 0.1 μm. Due to fundamental limitations of the scattering theory in the Rayleigh limit, these tiny particles are hard to measure by satellite instruments. As a consequence, current estimates of global aerosol properties retrieved from spectral aerosol extinction measurements tend to be strongly biased. Aerosol surface area densities, for instance, are observed to be about 40% smaller than those derived from correlative in situ measurements (Deshler et al., 2003). An accurate knowledge of the global distribution of aerosol properties is, however, essential to better understand and quantify the role they play in atmospheric chemistry, dynamics, radiation and climate. To address this need a new retrieval algorithm was developed, which employs a nonlinear Optimal Estimation (OE) method to iteratively solve for the monomodal size distribution parameters which are statistically most consistent with both the satellite-measured multi-wavelength aerosol extinction data and a priori information. By thus combining spectral extinction measurements (at visible to near infrared wavelengths) with prior knowledge of aerosol properties at background level, even the smallest particles are taken into account which are practically invisible to optical remote sensing instruments. The performance of the OE retrieval algorithm was assessed based on synthetic spectral extinction data generated from both monomodal and small-mode-dominant bimodal sulphuric acid aerosol size distributions. For monomodal background aerosol, the new algorithm was shown to fairly accurately retrieve the particle sizes and associated integrated properties (surface area and volume densities), even in the presence of large extinction uncertainty. The associated retrieved uncertainties are a good estimate of the true errors. In the case of bimodal background aerosol, where the retrieved (monomodal) size distributions naturally differ from the correct bimodal values, the associated surface area (A) and volume densities (V) are, nevertheless, fairly accurately retrieved, except at values larger than 1.0 μm² cm⁻³ (A) and 0.05 μm³ cm⁻³ (V), where they tend to underestimate the true bimodal values. Due to the limited information content in the SAGE II spectral extinction measurements this kind of forward model error cannot be avoided here. Nevertheless, the retrieved uncertainties are a good estimate of the true errors in the retrieved integrated properties, except where the surface area density exceeds the 1.0 μm² cm⁻³ threshold. When applied to near-global SAGE II satellite extinction measured in 1999 the retrieved OE surface area and volume densities are observed to be larger by, respectively, 20-50% and 10-40% compared to those estimates obtained by the SAGE II operational retrieval algorithm. An examination of the OE algorithm biases with in situ data indicates that the new OE aerosol property estimates tend to be more realistic than previous estimates obtained from remotely sensed data through other retrieval techniques. Based on the results of this study we therefore suggest that the new Optimal Estimation retrieval algorithm is able to contribute to an advancement in aerosol research by considerably improving current estimates of aerosol properties in the lower stratosphere under low aerosol loading conditions.
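
    As a minimal sketch of the optimal estimation idea (combining measurements with a priori knowledge weighted by their covariances), the Python function below gives the closed-form maximum a posteriori solution for a linear forward model. The actual retrieval iterates a nonlinear aerosol extinction forward model; the matrices and names here are generic placeholders.

      import numpy as np

      def optimal_estimation_linear(y, K, x_a, S_a, S_e):
          """Maximum a posteriori retrieval for a linear forward model y = K x + e.

          y   : measurement vector (e.g. multi-wavelength extinctions)
          K   : Jacobian / weighting-function matrix
          x_a : a priori state (e.g. size-distribution parameters at background level)
          S_a : a priori covariance,  S_e : measurement-error covariance
          Returns the retrieved state and its posterior covariance.
          """
          G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)   # gain matrix
          x_hat = x_a + G @ (y - K @ x_a)
          S_hat = S_a - G @ K @ S_a                            # posterior (retrieved) covariance
          return x_hat, S_hat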

  7. Synergistic estimation of surface parameters from jointly using optical and microwave observations in EOLDAS

    NASA Astrophysics Data System (ADS)

    Timmermans, Joris; Gomez-Dans, Jose; Lewis, Philip; Loew, Alexander; Schlenz, Florian

    2017-04-01

    The large amount of remote sensing data now available offers great potential for monitoring crop development, drought conditions and water efficiency. This potential has, however, not yet been realized because algorithms for land surface parameter retrieval mostly use data from only a single sensor. Consequently, products that combine low-level observations from different sensors are hard to find. The lack of synergistic retrieval arises because it is easier to focus on a single sensor type, footprint and observation time than to find a way to compensate for the differences. Different sensor types (microwave/optical) require different radiative transfer (RT) models, and consistency between the models is needed for optical observations to have any impact on the retrieval of soil moisture by a microwave instrument. Varying spatial footprints first require proper collocation of the data before one can scale between different resolutions. Owing to these problems, a merger of optical and microwave observations has not yet been performed. The goal of this research was to investigate the potential of synergistically integrating optical and microwave RT models within the Earth Observation Land Data Assimilation System (EOLDAS) to derive biophysical parameters. This system uses a Bayesian data assimilation approach together with observation operators such as the PROSAIL model to estimate land surface parameters. To enable the system to integrate passive microwave radiation (from an ELBARRA II passive microwave radiometer), the Community Microwave Emission Model (CMEM) RT model was integrated within the EOLDAS system. To quantify the potential, a variety of land surface parameters was chosen to be retrieved, in particular variables that (a) impact only the optical RT (such as leaf water content and leaf dry matter), (b) impact only the microwave RT (such as soil moisture and soil temperature), and (c) the Leaf Area Index (LAI), which impacts both the optical and the microwave RT. The results show a high potential when optical and microwave data are used independently. Using only RapidEye data with the SAIL RT model, LAI was estimated with R = 0.68 (p = 0.09), although leaf water content and dry matter showed lower correlations (|R| < 0.4). Retrievals of soil temperature and leaf area index using only (passive microwave) ELBARRA II observations were good, with R = 0.85 and 0.79 respectively (p = 0.0); when focusing on dry spells of at least 9 days, the results were R = 0.73 (p = 0.0), and R = 0.89 and R = 0.77 for the trend and the anomalies, respectively. Synergistic use of optical and microwave data also shows good potential. This scenario shows that absolute errors improved (RMSE = 1.22 and S = 0.89), but with degraded correlations (R = 0.59, p = 0.04); the sparse optical observations only improved part of the temporal domain. In general, however, the synergistic retrieval showed good potential: microwave data provide better information on the overall trend of the retrieved LAI owing to the regular acquisitions, while optical data provide better information on the absolute values of the LAI.

  8. Application of terrestrial laser scanning for coastal geomorphologic research questions in western Greece

    NASA Astrophysics Data System (ADS)

    Hoffmeister, Dirk; Curdt, Constanze; Tilly, Nora; Ntageretzis, Konstantin; Aasen, Helge; Vött, Andreas; Bareth, Georg

    2013-04-01

    Coasts are areas of permanent change, influenced by gradual changes and sudden impacts. In particular, western Greece is a tectonically active region, due to the nearby plate boundary of the Hellenic Arc. The region has suffered from numerous earthquakes and tsunamis during prehistoric and historic times and is thus characterized by a high seismic and tsunami hazard risk. Additionally, strong winter storms may reach considerable dimensions. In this study, terrestrial laser scanning was applied for (i) annual change detection at seven coastal areas of western Greece for three years (2009-2011) and (ii) accurate parameter detection of large boulders dislocated by high-energy wave impacts. The Riegl LMS-Z420i laser scanner was used in combination with a precise DGPS system (Topcon HiPer Pro) for all surveys. Each scan position and a further target were recorded for georeferencing and merging of the point clouds. (i) For the annual detection of changes, reference points for the base station of the DGPS system were marked. High-resolution digital elevation models (HRDEM) were generated from each dataset of the different years and compared to each other, resulting in mass balances. (ii) 3D models of dislocated boulders were reconstructed and parameters (e.g. volume in combination with density measurements, distance and height above present sea level) were derived for the solution of wave transport equations, which estimate the minimum wave height or velocity necessary for boulder movement. (i) Our results show that annual changes are detectable by multi-temporal terrestrial laser scanning. In general, volumetric changes and affected areas are quantifiable and maps of changes can be established. On exposed beach areas, larger changes were detectable where seagrass and sand were eroded and gravel accumulated. In contrast, only minor changes were derived for elevated areas. Dislocated boulders at several sites showed no movement. At coastal areas with a high surface roughness and along recent beaches, post-processing of the point clouds turned out to be more difficult, due to noise caused by water and to shadowing effects. A point-to-point comparison was used in addition to check the results. (ii) Furthermore, it is possible to obtain highly accurate volumetric data of dislocated boulders by 3D reconstruction. Further parameters, such as inclination, elevation above sea level or the distance of the boulder to the sea, can be extracted from the 3D model of the study site. Accurate maps of the geomorphological settings are established. All parameters were incorporated into selected wave transport equations, which regard the variable "mass" as a direct input parameter for the calculation of the wave heights and velocities needed for boulder dislocation. Our results were compared to data based on manual measurement of boulder axes and roughly estimated rock density values, which showed a combined, general overestimation of ~40%.

  9. [C II] 158 μm EMISSION AS A STAR FORMATION TRACER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrera-Camus, R.; Bolatto, A. D.; Wolfire, M. G.

    2015-02-10

    The [C II] 157.74 μm transition is the dominant coolant of the neutral interstellar gas, and has great potential as a star formation rate (SFR) tracer. Using the Herschel KINGFISH sample of 46 nearby galaxies, we investigate the relation of [C II] surface brightness and luminosity with SFR. We conclude that [C II] can be used for measurements of SFR on both global and kiloparsec scales in normal star-forming galaxies in the absence of strong active galactic nuclei (AGNs). The uncertainty of the Σ_[C II]-Σ_SFR calibration is ±0.21 dex. The main source of scatter in the correlation is associated with regions that exhibit warm IR colors, and we provide an adjustment based on IR color that reduces the scatter. We show that the color-adjusted Σ_[C II]-Σ_SFR correlation is valid over almost five orders of magnitude in Σ_SFR, holding for both normal star-forming galaxies and non-AGN luminous infrared galaxies. Using [C II] luminosity instead of surface brightness to estimate SFR suffers from worse systematics, frequently underpredicting SFR in luminous infrared galaxies even after IR color adjustment (although this depends on the SFR measure employed). We suspect that surface brightness relations are better behaved than the luminosity relations because the former are more closely related to the local far-UV field strength, most likely the main parameter controlling the efficiency of the conversion of far-UV radiation into gas heating. A simple model based on the Starburst99 population-synthesis code to connect SFR to [C II] finds that heating efficiencies are 1%-3% in normal galaxies.

  10. Lymphangiogenesis assessed using three methods is related to tumour grade, breast cancer subtype and expression of basal marker.

    PubMed

    Niemiec, Joanna; Adamczyk, Agnieszka; Ambicka, Aleksandra; Mucha-Małecka, Anna; Wysocki, Wojciech; Mituś, Jerzy; Ryś, Janusz

    2012-11-01

    Lymphangiogenesis is a potential indicator of cancer patients' survival. However, there is no standardisation of the methodologies applied to the assessment of lymphatic vessel density. In 156 invasive ductal breast cancers (T 1/N+/M0), lymphatic and blood vessels were visualised using podoplanin and CD34, respectively. Based on the expression of these markers, four parameters were assessed: (i) distribution of podoplanin-stained vessels (DPV) - the percentage of fields with at least one lymphatic vessel (a simple method proposed by us), (ii) lymphatic vessel density (LVD), (iii) the LVD to microvessel density ratio (LVD/MVD) and (iv) the expression of podoplanin in cancer-associated fibroblasts. Next, we estimated relations between the above-mentioned parameters and: (i) breast cancer subtype, (ii) tumour grade, and (iii) basal marker expression. We found that intensive lymphangiogenesis, assessed using all studied methods, is positively related to high tumour grade, triple negative or HER2 subtype and expression of basal markers. In contrast, the absence of podoplanin expression in fibroblasts of the cancer stroma is related to luminal A subtype, low tumour grade or lack of basal marker expression. Distribution of podoplanin-stained vessels, assessed by a simple method proposed by us (indicating the percentage of fields with at least one lymphatic vessel), might be used instead of the "hot-spot" method.

  11. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The various kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: (i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR and PARMA), and disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; (ii) nonparametric models (examples are bootstrap/kernel-based methods), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws (k-nearest neighbor (k-NN), matched block bootstrap (MABB)), and non-parametric disaggregation models; (iii) hybrid models, which blend parametric and non-parametric models advantageously to model the streamflows effectively. Despite the many developments that have taken place in the field of stochastic modeling of streamflows over the last four decades, accurate prediction of the storage and the critical drought characteristics has remained a persistent challenge for the stochastic modeler. This is partly because, usually, the stochastic streamflow model parameters are estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation) and subsequently the efficacy of the models is validated based on the accuracy of prediction of the estimates of the water-use characteristics, which requires a large number of trial simulations and inspection of many plots and tables; even then, accurate prediction of the storage and the critical drought characteristics may not be ensured. In this study a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric model, the PAR(1) model, and the matched block bootstrap (MABB)) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by a search over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and non-parametric (MABB) components). This is achieved using an efficient evolutionary search based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps in reducing the drudgery involved in the manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and the River Weber of the USA. For both rivers, the proposed GA-based hybrid model (where simultaneous exploration of both parametric and non-parametric components is done) yields a much better prediction of the storage capacity when compared with the MLE-based hybrid models (where the hybrid model selection is done in two stages, thus probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.

  12. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  13. Quantifying the bias in the estimated treatment effect in randomized trials having interim analyses and a rule for early stopping for futility.

    PubMed

    Walter, S D; Han, H; Briel, M; Guyatt, G H

    2017-04-30

    In this paper, we consider the potential bias in the estimated treatment effect obtained from clinical trials, the protocols of which include the possibility of interim analyses and an early termination of the study for reasons of futility. In particular, by considering the conditional power at an interim analysis, we derive analytic expressions for various parameters of interest: (i) the underestimation or overestimation of the treatment effect in studies that stop for futility; (ii) the impact of the interim analyses on the estimation of treatment effect in studies that are completed, i.e. that do not stop for futility; (iii) the overall estimation bias in the estimated treatment effect in a single study with such a stopping rule; and (iv) the probability of stopping at an interim analysis. We evaluate these general expressions numerically for typical trial scenarios. Results show that the parameters of interest depend on a number of factors, including the true underlying treatment effect, the difference that the trial is designed to detect, the study power, the number of planned interim analyses and what assumption is made about future data to be observed after an interim analysis. Because the probability of stopping early is small for many practical situations, the overall bias is often small, but a more serious issue is the potential for substantial underestimation of the treatment effect in studies that actually stop for futility. We also consider these ideas using data from an illustrative trial that did stop for futility at an interim analysis. Copyright © 2017 John Wiley & Sons, Ltd.
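
    To make the estimation-bias mechanism concrete, here is a small hedged simulation in Python: trials with one interim look stop for futility when the interim z-statistic falls below a threshold, and the average estimated effect is compared between stopped and completed trials. The threshold, effect size and sample sizes are arbitrary illustration values, not those analyzed in the paper, and the stopping rule is a crude stand-in for a conditional-power rule.

      import numpy as np

      rng = np.random.default_rng(5)
      n_trials, n_per_stage, true_effect, sd = 20_000, 100, 0.2, 1.0
      futility_z = 0.0          # stop if the interim z-statistic is below this value

      stopped_estimates, completed_estimates = [], []
      for _ in range(n_trials):
          stage1 = rng.normal(true_effect, sd, n_per_stage)      # per-patient treatment effects
          z_interim = stage1.mean() / (sd / np.sqrt(n_per_stage))
          if z_interim < futility_z:                             # early stop for futility
              stopped_estimates.append(stage1.mean())
          else:                                                  # continue to the full sample
              stage2 = rng.normal(true_effect, sd, n_per_stage)
              completed_estimates.append(np.concatenate([stage1, stage2]).mean())

      print("mean estimate, stopped trials:  ", np.mean(stopped_estimates))    # below 0.2
      print("mean estimate, completed trials:", np.mean(completed_estimates))  # slightly above 0.2
      print("overall mean estimate:          ", np.mean(stopped_estimates + completed_estimates))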

  14. What is the impact of different VLBI analysis setups of the tropospheric delay on precipitable water vapor trends?

    NASA Astrophysics Data System (ADS)

    Balidakis, Kyriakos; Nilsson, Tobias; Heinkelmann, Robert; Glaser, Susanne; Zus, Florian; Deng, Zhiguo; Schuh, Harald

    2017-04-01

    The quality of the parameters estimated by global navigation satellite systems (GNSS) and very long baseline interferometry (VLBI) is distorted by erroneous meteorological observations applied to model the propagation delay in the electrically neutral atmosphere. For early VLBI sessions with poor geometry, unsuitable constraints imposed on the a priori tropospheric gradients are an additional source of difficulty in VLBI analysis. Therefore, climate change indicators deduced from the geodetic analysis, such as the long-term precipitable water vapor (PWV) trends, are strongly affected. In this contribution we investigate the impact of different modeling and parameterization of the propagation delay in the troposphere on the estimates of long-term PWV trends from geodetic VLBI analysis results. We address the influence of the meteorological data source, and of the a priori non-hydrostatic delays and gradients employed in the VLBI processing, on the estimated PWV trends. In particular, we assess the effect of employing temperature and pressure from (i) homogenized in situ observations, (ii) the model levels of the ERA Interim reanalysis numerical weather model and (iii) our own blind model in the style of GPT2w with enhanced parameterization, calculated using the latter data set. Furthermore, we utilize non-hydrostatic delays and gradients estimated from (i) a GNSS reprocessing at GeoForschungsZentrum Potsdam, rigorously considering tropospheric ties, and (ii) direct ray-tracing through ERA Interim, as additional observations. To evaluate the above, the least-squares module of the VieVS@GFZ VLBI software was appropriately modified. Additionally, we study the noise characteristics of the non-hydrostatic delays and gradients estimated from our VLBI and GNSS analyses as well as from ray-tracing. We have modified the Theil-Sen estimator appropriately to robustly deduce PWV trends from VLBI, GNSS, ray-tracing and direct numerical integration in ERA Interim. We disseminate all our solutions in the latest Tropo-SINEX format.
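
    Because the record mentions a modified Theil-Sen estimator without giving details, the sketch below shows the basic, unmodified Theil-Sen slope (the median of all pairwise slopes) applied to a synthetic PWV series. It illustrates the robustness to outliers that motivates such estimators, not the authors' exact modification; the series, units and trend are invented.

      import numpy as np
      from itertools import combinations

      def theil_sen_slope(t, y):
          """Basic Theil-Sen estimator: median of the slopes over all point pairs."""
          slopes = [(y[j] - y[i]) / (t[j] - t[i])
                    for i, j in combinations(range(len(t)), 2) if t[j] != t[i]]
          return np.median(slopes)

      # synthetic monthly PWV anomalies with a small trend and a few gross outliers
      rng = np.random.default_rng(6)
      t = np.arange(240) / 12.0                         # 20 years, in years
      pwv = 0.02 * t + rng.normal(0.0, 0.5, t.size)     # hypothetical anomalies in kg m^-2
      pwv[[30, 31, 150]] += 5.0                         # gross outliers

      trend = theil_sen_slope(t, pwv)                   # robust trend in kg m^-2 per year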

  15. Diurnal variability of regional cloud and clear-sky radiative parameters derived from GOES data. I - Analysis method. II - November 1978 cloud distributions. III - November 1978 radiative parameters

    NASA Technical Reports Server (NTRS)

    Minnis, P.; Harrison, E. F.

    1984-01-01

    Cloud cover is one of the most important variables affecting the earth radiation budget (ERB) and, ultimately, the global climate. The present investigation is concerned with several aspects of the effects of extended cloudiness, taking into account hourly visible and infrared data from the Geostationary Operational Environmental Satellite (GOES). A methodology called the hybrid bispectral threshold method is developed to extract regional cloud amounts at three levels in the atmosphere, effective cloud-top temperatures, clear-sky temperature and cloud and clear-sky visible reflectance characteristics from GOES data. The diurnal variations are examined in low, middle, high, and total cloudiness determined with this methodology for November 1978. The bulk, broadband radiative properties of the resultant cloud and clear-sky data are estimated to determine the possible effect of the diurnal variability of regional cloudiness on the interpretation of ERB measurements.

  16. Optimal sampling theory and population modelling - Application to determination of the influence of the microgravity environment on drug distribution and elimination

    NASA Technical Reports Server (NTRS)

    Drusano, George L.

    1991-01-01

    The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by a traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with the NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.
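
    The sketch below illustrates the general idea behind optimal sampling design, choosing sample times that maximize the determinant of the Fisher information matrix, for a hypothetical one-compartment intravenous bolus model; it is not the SAMPLE module of ADAPT II, and all parameter values and names are illustrative.

```python
import numpy as np
from itertools import combinations

def conc(t, cl, v, dose=100.0):
    """One-compartment IV bolus model: C(t) = (dose/V) * exp(-(CL/V) t)."""
    return (dose / v) * np.exp(-(cl / v) * t)

def d_optimal_times(candidates, cl=5.0, v=30.0, n_samples=2, h=1e-4):
    """Pick the n_samples times maximizing det of the Fisher information
    for (CL, V), using numerical sensitivities and unit error variance."""
    best, best_det = None, -np.inf
    for times in combinations(candidates, n_samples):
        t = np.array(times, dtype=float)
        # numerical sensitivities dC/dCL and dC/dV
        s_cl = (conc(t, cl + h, v) - conc(t, cl - h, v)) / (2 * h)
        s_v = (conc(t, cl, v + h) - conc(t, cl, v - h)) / (2 * h)
        jac = np.column_stack([s_cl, s_v])
        det = np.linalg.det(jac.T @ jac)     # D-optimality criterion
        if det > best_det:
            best, best_det = times, det
    return best

print(d_optimal_times(candidates=np.arange(0.25, 12.25, 0.25)))
```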

  17. The log-periodic-AR(1)-GARCH(1,1) model for financial crashes

    NASA Astrophysics Data System (ADS)

    Gazola, L.; Fernandes, C.; Pizzinga, A.; Riera, R.

    2008-02-01

    This paper intends to meet recent claims for the attainment of more rigorous statistical methodology within the econophysics literature. To this end, we consider an econometric approach to investigate the outcomes of the log-periodic model of price movements, which has been largely used to forecast financial crashes. In order to accomplish reliable statistical inference for unknown parameters, we incorporate an autoregressive dynamic and a conditional heteroskedasticity structure in the error term of the original model, yielding the log-periodic-AR(1)-GARCH(1,1) model. Both the original and the extended models are fitted to financial indices of the U.S. market, namely the S&P500 and NASDAQ. Our analysis reveals two main points: (i) the log-periodic-AR(1)-GARCH(1,1) model has residuals with better statistical properties and (ii) the estimation of the parameter concerning the time of the financial crash has been improved.
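
    A minimal sketch of one common parameterization of the deterministic log-periodic (LPPL-type) component is given below; in the extended model of the paper the residual of this term would additionally follow an AR(1)-GARCH(1,1) process, which is not implemented here.

```python
import numpy as np

def log_periodic(t, A, B, C, tc, m, omega, phi):
    """One common log-periodic (LPPL-type) parameterization:
        ln p(t) = A + B*(tc - t)**m + C*(tc - t)**m * cos(omega*ln(tc - t) + phi)
    valid for t < tc, where tc is the estimated crash time."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) + phi)
```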

  18. Correlative Analysis of GRBs detected by Swift, Konus and HETE

    NASA Technical Reports Server (NTRS)

    Krimm, Hans A.; Barthelmy, S. D.; Gehrels, N.; Hullinger, D.; Sakamoto, T.; Donaghy, T.; Lamb, D. Q.; Pal'shin, V.; Golenetskii, S.; Ricker, G. R.

    2005-01-01

    Swift has now detected a large enough sample of gamma-ray bursts (GRBs) to allow correlation studies of burst parameters. Such studies of earlier data sets have yielded important results leading to further understanding of burst parameters and classifications. This work focuses on seventeen Swift bursts that have also been detected either by Konus-Wind or HETE-II, providing high energy spectra and fits to E(sub peak). Eight of these bursts have spectroscopic redshifts and for others we can estimate redshifts using the variability/luminosity relationship. We can also compare E(sub peak) with E(sub iso), and for those bursts for which a jet break was observed in the afterglow we can derive E(sub gamma) and test the relationship between E(sub peak) and E(sub gamma). For all bursts we can derive durations and hardness ratios from the prompt emission.

  19. Isothermogravimetric determination of the enthalpies of vaporization of 1-alkyl-3-methylimidazolium ionic liquids.

    PubMed

    Luo, Huimin; Baker, Gary A; Dai, Sheng

    2008-08-21

    Vaporization enthalpies for two series of ionic liquids (ILs) composed of 1-n-alkyl-3-methylimidazolium cations, [Imm1+] (m=2, 3, 4, 6, 8, or 10), paired with either the bis(trifluoromethanesulfonyl)amide, [Tf2N-], or the bis(perfluoroethylsulfonyl)amide anion, [beti-], were determined using a simple, convenient, and highly reproducible thermogravimetric approach, and from these values, Hildebrand solubility parameters were estimated. Our results reveal two interesting and unanticipated outcomes: (i) methylation at the C2 position of [Imm1+] affords a significantly higher vaporization enthalpy; (ii) in all cases, the [beti-] anion served to lower the enthalpy of vaporization relative to [Tf2N-]. The widespread availability of the apparatus required for these measurements coupled with the ease of automation suggests the broad potential of this methodology for determining this critical parameter in a multitude of ILs.
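
    The Hildebrand solubility parameter can be estimated from a vaporization enthalpy and a molar volume as sketched below; the numerical inputs in the example are placeholders, not data from the paper.

```python
R = 8.314  # gas constant, J mol^-1 K^-1

def hildebrand(delta_h_vap_kj, molar_volume_cm3, temperature=298.15):
    """Hildebrand solubility parameter (MPa^0.5) from the vaporization
    enthalpy (kJ/mol) and molar volume (cm^3/mol):
        delta = sqrt((dHvap - R*T) / Vm)."""
    cohesive_energy = delta_h_vap_kj * 1e3 - R * temperature   # J/mol
    # J/cm^3 = MPa, so dividing by Vm in cm^3/mol and taking the root gives MPa^0.5
    return (cohesive_energy / molar_volume_cm3) ** 0.5

# illustrative values only (not measured data from the paper)
print(hildebrand(delta_h_vap_kj=130.0, molar_volume_cm3=300.0))
```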

  20. An inexpensive technique for the time resolved laser induced plasma spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, Rizwan, E-mail: rizwan.ahmed@ncp.edu.pk; Ahmed, Nasar; Iqbal, J.

    We present an efficient and inexpensive method for calculating the time resolved emission spectrum from the time integrated spectrum by monitoring the time evolution of neutral and singly ionized species in the laser produced plasma. To validate our assertion of extracting time resolved information from the time integrated spectrum, the time evolution data of the Cu II line at 481.29 nm and the molecular bands of AlO in the wavelength region (450–550 nm) have been studied. The plasma parameters were also estimated from the time resolved and time integrated spectra. A comparison of the results clearly reveals that the time resolved information about the plasma parameters can be extracted from the spectra registered with a time integrated spectrograph. Our proposed method will make laser induced plasma spectroscopy a robust and low-cost technique which is attractive for industry and environmental monitoring.

  1. Photophysical study of meso-phenothiazinyl-porphyrins metallocomplexes

    NASA Astrophysics Data System (ADS)

    Starukhin, Aleksander; Gorski, Aleksander; Knyukshto, Valery; Panarin, Andrei; Pavich, Tatiana; Gaina, Luiza; Gal, Emese

    2017-10-01

    Photophysical parameters of a set of metallocomplexes of meso-phenothiazinylporphyrins with Zn(II), Pd(II) and Cu(II) ions were studied in different organic solvents, solid solutions and polymeric matrices at room and liquid nitrogen temperatures. The dependence of the spectral and photophysical parameters on changes in the molecular structure with an increasing number of branched substituents attached to aryl groups in different positions of the porphyrin macrocycle has been established.

  2. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

    Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo to generate parameter sets which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
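
    A minimal sketch of the kind of Metropolis-style sampler described above is given below, assuming a simple road-load equation (inertia, rolling resistance and aerodynamic drag) and a Gaussian likelihood; the equation form, noise level and step sizes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def road_load(params, v, a, rho=1.2, g=9.81):
    """Road-load force: inertia + rolling resistance + aerodynamic drag."""
    mass, cda, crr = params
    return mass * a + mass * g * crr + 0.5 * rho * cda * v**2

def log_likelihood(params, v, a, f_meas, sigma=200.0):
    """Gaussian log-likelihood of measured road load given a parameter set."""
    resid = f_meas - road_load(params, v, a)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(v, a, f_meas, start, steps=(200.0, 0.05, 0.001), n_iter=20000):
    """Random-walk Metropolis over (mass, CdA, Crr); returns the chain."""
    chain = [np.array(start, dtype=float)]
    ll = log_likelihood(chain[-1], v, a, f_meas)
    for _ in range(n_iter):
        proposal = chain[-1] + rng.normal(0.0, steps)
        ll_prop = log_likelihood(proposal, v, a, f_meas)
        # accept with probability min(1, likelihood ratio)
        if np.log(rng.uniform()) < ll_prop - ll:
            chain.append(proposal)
            ll = ll_prop
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)
```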

  3. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  4. Soil erosion modelled with USLE and PESERA using QuickBird derived vegetation parameters in an alpine catchment

    NASA Astrophysics Data System (ADS)

    Meusburger, K.; Konz, N.; Schaub, M.; Alewell, C.

    2010-06-01

    The focus of soil erosion research in the Alps has been in two categories: (i) on-site measurements, which are rather small-scale point measurements on selected plots, often constrained to irrigation experiments, or (ii) off-site quantification of sediment delivery at the outlet of the catchment. Results of both categories pointed towards the importance of an intact vegetation cover to prevent soil loss. With the recent availability of high-resolution satellites such as IKONOS and QuickBird, options for detecting and monitoring vegetation parameters in heterogeneous terrain have increased. The aim of this study is to evaluate the usefulness of QuickBird derived vegetation parameters in soil erosion models for alpine sites by comparison to Cesium-137 (Cs-137) derived soil erosion estimates. The study site (67 km²) is located in the Central Swiss Alps (Urseren Valley) and is characterised by scarce forest cover and strong anthropogenic influences due to grassland farming for centuries. A fractional vegetation cover (FVC) map for grassland and detailed land-cover maps are available from linear spectral unmixing and supervised classification of QuickBird imagery. The maps were introduced to the Pan-European Soil Erosion Risk Assessment (PESERA) model as well as to the Universal Soil Loss Equation (USLE). Regarding the latter model, the FVC was indirectly incorporated by adapting the C factor. Both models show an increase in absolute soil erosion values when FVC is considered. In contrast to USLE and the Cs-137 soil erosion rates, PESERA estimates are low. For the USLE model, the spatial patterns also improved and showed "hotspots" of high erosion of up to 16 t ha⁻¹ a⁻¹. In conclusion, field measurements of Cs-137 confirmed the improvement of soil erosion estimates using the satellite-derived vegetation data.
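
    To make the USLE step concrete, the sketch below computes soil loss A = R·K·LS·C·P with a cover factor that decreases with fractional vegetation cover; the exponential C-FVC relation and its coefficients are assumptions for illustration, not the adaptation used in the study.

```python
import numpy as np

def usle_soil_loss(R, K, LS, P, fvc, c_bare=0.45, decay=0.05):
    """USLE soil loss A = R*K*LS*C*P (t ha^-1 yr^-1), with the cover factor C
    reduced as fractional vegetation cover FVC (0-100 %) increases.
    C = c_bare * exp(-decay * FVC) is an assumed relation for illustration."""
    C = c_bare * np.exp(-decay * np.asarray(fvc, dtype=float))
    return R * K * LS * C * P
```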

  5. CEval: All-in-one software for data processing and statistical evaluations in affinity capillary electrophoresis.

    PubMed

    Dubský, Pavel; Ördögová, Magda; Malý, Michal; Riesová, Martina

    2016-05-06

    We introduce CEval software (downloadable for free at echmet.natur.cuni.cz) that was developed for quicker and easier electrophoregram evaluation and further data processing in (affinity) capillary electrophoresis. This software allows for automatic peak detection and evaluation of common peak parameters, such as migration time, area, width, etc. Additionally, the software includes a nonlinear regression engine that performs peak fitting with the Haarhoff-van der Linde (HVL) function, including an automated initial guess of the HVL function parameters. HVL is a fundamental peak-shape function in electrophoresis, based on which the correct effective mobility of the analyte represented by the peak is evaluated. Effective mobilities of an analyte at various concentrations of a selector can be further stored and plotted in an affinity CE mode. Consequently, the mobility of the free analyte, μA, the mobility of the analyte-selector complex, μAS, and the apparent complexation constant, K('), are first guessed automatically from the linearized data plots and subsequently estimated by means of nonlinear regression. An option that allows two complexation dependencies to be fitted at once is especially convenient for enantioseparations. Statistical processing of these data is also included, which allowed us to: i) express the 95% confidence intervals for the μA, μAS and K(') least-squares estimates, ii) do hypothesis testing on the estimated parameters for the first time. We demonstrate the benefits of the CEval software by inspecting complexation of tryptophan methyl ester with two cyclodextrins, neutral heptakis(2,6-di-O-methyl)-β-CD and charged heptakis(6-O-sulfo)-β-CD. Copyright © 2016 Elsevier B.V. All rights reserved.
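
    The complexation fit described above can be illustrated with the standard 1:1 binding model for affinity CE, in which the effective mobility interpolates between the free-analyte and complex mobilities; the sketch below is a generic nonlinear least-squares fit, not CEval's regression engine, and the initial-guess heuristics are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def effective_mobility(c, mu_a, mu_as, K):
    """1:1 complexation model for affinity CE:
    mu_eff = (mu_A + mu_AS*K*c) / (1 + K*c), with c the selector concentration."""
    return (mu_a + mu_as * K * c) / (1.0 + K * c)

def fit_complexation(c, mu_eff):
    """Nonlinear least-squares estimates of mu_A, mu_AS and the apparent
    complexation constant K', with asymptotic standard errors."""
    c = np.asarray(c, dtype=float)
    mu_eff = np.asarray(mu_eff, dtype=float)
    p0 = (mu_eff[0], mu_eff[-1], 1.0 / np.median(c[c > 0]))   # rough initial guess
    popt, pcov = curve_fit(effective_mobility, c, mu_eff, p0=p0)
    return popt, np.sqrt(np.diag(pcov))
```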

  6. Impact of spurious shear on cosmological parameter estimates from weak lensing observables

    DOE PAGES

    Petri, Andrea; May, Morgan; Haiman, Zoltán; ...

    2014-12-30

    Residual errors in shear measurements, remaining after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear. This allows us to quantify the errors and biases of the triplet (Ωm, w, σ8) derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MFs), low-order moments (LMs), and peak counts (PKs). Our main results are as follows: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LM yield biases much smaller than the morphological statistics (MF, PK). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of σ_sys² ≈ 10⁻⁷, biases from the PS and LM would be unimportant even for a survey with the statistical power of the Large Synoptic Survey Telescope. However, we find that for surveys larger than ≈ 100 deg², non-Gaussianity in the noise (not included in our analysis) will likely be important and must be quantified to assess the biases. (iv) The morphological statistics (MF, PK) introduce important biases even for Gaussian noise, which must be corrected in large surveys. The biases are in different directions in (Ωm, w, σ8) parameter space, allowing self-calibration by combining multiple statistics. Our results warrant follow-up studies with more extensive lensing simulations and more accurate spurious shear estimates.

  7. Application of the Junge- and Pankow-equation for estimating indoor gas/particle distribution and exposure to SVOCs

    NASA Astrophysics Data System (ADS)

    Salthammer, Tunga; Schripp, Tobias

    2015-04-01

    In the indoor environment, distribution and dynamics of an organic compound between gas phase, particle phase and settled dust must be known for estimating human exposure. This, however, requires a detailed understanding of the environmentally important compound parameters, their interrelation and of the algorithms for calculating partitioning coefficients. The parameters of major concern are: (I) saturation vapor pressure (PS) (of the subcooled liquid); (II) Henry's law constant (H); (III) octanol/water partition coefficient (KOW); (IV) octanol/air partition coefficient (KOA); (V) air/water partition coefficient (KAW) and (VI) settled dust properties like density and organic content. For most of the relevant compounds reliable experimental data are not available and calculated gas/particle distributions can widely differ due to the uncertainty in predicted PS and KOA values. This is not a big problem if the target compound is of low (<10⁻⁶ Pa) or high (>10⁻² Pa) volatility, but in the intermediate region even small changes in PS or KOA will have a strong impact on the result. Moreover, the related physical processes might bear large uncertainties. The KOA value can only be used for particle absorption from the gas phase if the organic portion of the particle or dust is high. The Junge and Pankow equations for calculating the gas/particle distribution coefficient KP do not consider the physical and chemical properties of the particle surface area. It is demonstrated by error propagation theory and Monte-Carlo simulations that parameter uncertainties from estimation methods for molecular properties and variations of indoor conditions might strongly influence the calculated distribution behavior of compounds in the indoor environment.
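
    The two partitioning models named in the title can be sketched as follows; the Junge constant c ≈ 17.2 Pa·cm and the KOA-based relation log Kp = log KOA + log f_om − 11.91 are commonly used literature values adopted here as assumptions, not values from the paper.

```python
import numpy as np

def junge_fraction(p_l, theta=1.1e-5, c=17.2):
    """Junge adsorption model: particle-bound fraction
    phi = c*theta / (p_L + c*theta), with p_L the subcooled-liquid vapor
    pressure (Pa), theta the aerosol surface area (cm^2 per cm^3 of air)
    and c ~ 17.2 Pa*cm."""
    return c * theta / (p_l + c * theta)

def pankow_fraction(log_koa, f_om, tsp_ug_m3):
    """KOA-based absorption model (used here as an assumption):
    log Kp = log KOA + log f_om - 11.91, Kp in m^3/ug;
    particle-bound fraction phi = Kp*TSP / (1 + Kp*TSP)."""
    kp = 10.0 ** (log_koa + np.log10(f_om) - 11.91)
    return kp * tsp_ug_m3 / (1.0 + kp * tsp_ug_m3)
```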

  8. The impact of reliable prebolus T 1 measurements or a fixed T 1 value in the assessment of glioma patients with dynamic contrast enhancing MRI.

    PubMed

    Tietze, Anna; Mouridsen, Kim; Mikkelsen, Irene Klærke

    2015-06-01

    Accurate quantification of hemodynamic parameters using dynamic contrast enhanced (DCE) MRI requires a measurement of tissue T1 prior to contrast injection (prebolus T1). We evaluate (i) T1 estimation using the variable flip angle (VFA) and the saturation recovery (SR) techniques and (ii) investigate whether accurate estimation of DCE parameters outperforms a time-saving approach with a predefined T1 value when differentiating high- from low-grade gliomas. The accuracy and precision of T1 measurements, acquired by VFA and SR, were investigated by computer simulations and in glioma patients using an equivalence test (p > 0.05 showing significant difference). The permeability measure, Ktrans, cerebral blood flow (CBF), and plasma volume, Vp, were calculated in 42 glioma patients, using a fixed T1 of 1500 ms or an individual T1 measurement obtained with SR. The areas under the receiver operating characteristic curves (AUCs) were used as measures for accuracy to differentiate tumor grade. The T1 values obtained by VFA showed larger variation compared to those obtained using SR both in the digital phantom and the human data (p > 0.05). Although a fixed T1 introduced a bias into the DCE calculation, this had only a minor impact on the accuracy of differentiating high-grade from low-grade gliomas (AUCfix = 0.906 and AUCind = 0.884 for Ktrans; AUCfix = 0.863 and AUCind = 0.856 for Vp; p for AUC comparison > 0.05). T1 measurements by VFA were less precise, and the SR method is preferable when accurate parameter estimation is required. Semiquantitative DCE values, based on predefined T1 values, were sufficient to perform tumor grading in our study.
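
    For reference, a VFA T1 estimate of the kind discussed above is commonly obtained from the linearized spoiled gradient-echo signal equation; the sketch below shows that standard linearization and is not the processing pipeline used in the study.

```python
import numpy as np

def t1_from_vfa(signals, flip_deg, tr_ms):
    """Variable flip angle (SPGR) T1 estimate via the standard linearization
    S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), where E1 = exp(-TR/T1)."""
    a = np.deg2rad(np.asarray(flip_deg, dtype=float))
    s = np.asarray(signals, dtype=float)
    y = s / np.sin(a)
    x = s / np.tan(a)
    slope, _ = np.polyfit(x, y, 1)          # slope estimates E1
    return -tr_ms / np.log(slope)           # T1 in ms
```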

  9. Reproducibility and relative stability in magnetic resonance imaging indices of tumor vascular physiology over a period of 24h in a rat 9L gliosarcoma model.

    PubMed

    Nagaraja, Tavarekere N; Elmghirbi, Rasha; Brown, Stephen L; Schultz, Lonni R; Lee, Ian Y; Keenan, Kelly A; Panda, Swayamprava; Cabral, Glauber; Mikkelsen, Tom; Ewing, James R

    2017-12-01

    The objective was to study temporal changes in tumor vascular physiological indices over a period of 24 h in a 9L gliosarcoma rat model. Fischer-344 rats (N=14) were orthotopically implanted with 9L cells. At 2 weeks post-implantation, they were imaged twice within a 24 h interval using dynamic contrast enhanced magnetic resonance imaging (DCE-MRI). Data-driven model-selection-based analysis was used to segment tumor regions with varying vascular permeability characteristics. The region with the maximum number of estimable parameters of vascular kinetics was chosen for comparison across the two time points. It provided estimates of three parameters for an MR contrast agent (MRCA): i) plasma volume (vp), ii) forward volumetric transfer constant (Ktrans) and iii) interstitial volume fraction (ve, the ratio of Ktrans to the reverse transfer constant, kep). In addition, MRCA extracellular distribution volume (VD) was estimated in the tumor and its borders, along with tumor blood flow (TBF) and peritumoral MRCA flux. Descriptors of parametric distributions were compared between the two times. Tumor extent was examined by hematoxylin and eosin (H&E) staining. Picrosirius red staining of secreted collagen was performed as an additional index for 9L cells. Test-retest differences between population summaries for any parameter were not significant (paired t and Wilcoxon signed rank tests). Bland-Altman plots showed no apparent trends between the differences and averages of the test-retest measures for all indices. The intraclass correlation coefficients showed moderate to almost perfect reproducibility for all of the parameters, except vp. H&E staining showed tumor infiltration in parenchyma, perivascular space and white matter tracts. Collagen staining was observed along the outer edges of the main tumor mass. The data suggest the relative stability of these MR indices of the tumor microenvironment over a 24 h duration in this gliosarcoma model. Copyright © 2017. Published by Elsevier Inc.

  10. Mathematical modelling of the growth of human fetus anatomical structures.

    PubMed

    Dudek, Krzysztof; Kędzia, Wojciech; Kędzia, Emilia; Kędzia, Alicja; Derkowski, Wojciech

    2017-09-01

    The goal of this study was to present a procedure that would enable mathematical analysis of the increase of linear sizes of human anatomical structures, estimate mathematical model parameters and evaluate their adequacy. Section material consisted of 67 foetuses (rectus abdominis muscle) and 75 foetuses (biceps femoris muscle). The following methods were incorporated into the study: preparation and anthropologic methods, digital image acquisition, Image J computer system measurements and statistical analysis methods. We used an anthropologic method based on age determination with the use of crown-rump length-CRL (V-TUB) by Scammon and Calkins. The choice of mathematical function should be based on the real course of the curve presenting the growth of an anatomical structure's linear size y in subsequent weeks t of pregnancy. Size changes can be described with a segmental-linear model or a one-function model with accuracy adequate for clinical purposes. The interdependence of size and age is described with many functions. However, the following functions are most often considered: linear, polynomial, spline, logarithmic, power, exponential, power-exponential, log-logistic I and II, Gompertz's I and II and von Bertalanffy's function. With the use of the procedures described above, mathematical model parameters were assessed for V-PL (the total length of body) and CRL body length increases, rectus abdominis total length h, its segments hI, hII, hIII, hIV, as well as biceps femoris length and width of long head (LHL and LHW) and of short head (SHL and SHW). The best adjustments to the measurement results were observed for the exponential and Gompertz models.
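
    As an example of fitting one of the growth functions listed above, the sketch below fits a Gompertz curve to size-versus-gestational-age data with nonlinear least squares; starting values and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, b, k):
    """Gompertz growth curve y(t) = A * exp(-b * exp(-k*t))."""
    return A * np.exp(-b * np.exp(-k * t))

def fit_gompertz(weeks, size_mm):
    """Least-squares fit of the Gompertz model to size-vs-gestational-age data;
    returns parameter estimates and their asymptotic standard errors."""
    p0 = (max(size_mm), 3.0, 0.1)                      # rough starting values
    popt, pcov = curve_fit(gompertz, weeks, size_mm, p0=p0, maxfev=10000)
    return popt, np.sqrt(np.diag(pcov))
```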

  11. An automation of design and modelling tasks in NX Siemens environment with original software - cost module

    NASA Astrophysics Data System (ADS)

    Zbiciak, R.; Grabowik, C.; Janik, W.

    2015-11-01

    The design-constructional process is a creative activity which strives to fulfil, as well as possible at a given moment in time, all demands and needs formulated by a user, taking into account social, technical and technological advances. An engineer's knowledge, skills and inborn abilities have the greatest influence on the final product quality and cost. They also have a deciding influence on the product's technical and economic value. Taking the above into account, it seems advisable to build software tools that support an engineer in the process of manufacturing cost estimation. The Cost module is built with analytical procedures which are used for relative manufacturing cost estimation. As in the case of the Generator module, the Cost module was written in the object-oriented programming language C# in the Visual Studio environment. During the research the following eight factors, which have the greatest influence on the overall manufacturing cost, were distinguished and defined: (i) the gear wheel teeth type, i.e. straight or helicoidal, (ii) the gear wheel design shape, A or B, with or without a wheel hub, (iii) the gear tooth module, (iv) the number of teeth, (v) the gear rim width, (vi) the gear wheel material, (vii) heat treatment or thermochemical treatment, (viii) the accuracy class. Knowledge of parameters (i) to (v) is indispensable for proper modelling of 3D gear wheel models in a CAD system environment. These parameters are also processed in the Cost module. The last three parameters, i.e. (vi) to (viii), are used exclusively in the Cost module. The estimation of the relative manufacturing cost is based on indexes calculated for each particular parameter. The relative manufacturing cost estimated in this way gives an overview of the influence of the design parameters on the final gear wheel manufacturing cost. This relative manufacturing cost takes values in the range from 0.00 to 1.00; the bigger the index value, the higher the relative manufacturing cost. Verification of whether the proposed algorithm for relative manufacturing cost estimation has been designed properly was made by comparing the results obtained from the algorithm with those obtained from industry. This verification indicated that in most cases both groups of results are similar. Taking the above into account, it can be concluded that the Cost module might play a significant role in the design-constructional process by aiding an engineer at the selection stage between alternative gear wheel designs. It should be remembered that the real manufacturing cost can differ significantly depending on the manufacturing techniques and the stock of machine tools available in a factory.

  12. Inter- and intra-species variability in heat resistance and the effect of heat treatment intensity on subsequent growth of Byssochlamys fulva and Byssochlamys nivea.

    PubMed

    Santos, Juliana L P; Samapundo, Simbarashe; Gülay, Sonay M; Van Impe, Jan; Sant'Ana, Anderson S; Devlieghere, Frank

    2018-04-21

    The major aims of this study were to assess inter- and intra-species variability of heat resistant moulds (HRMs), Byssochlamys fulva and Byssochlamys nivea, with regard to (i) heat resistance and (ii) the effect of heat treatment intensity on subsequent outgrowth. Four-week-old ascospores were suspended in buffered glucose solution (13° Brix, pH 3.5) and heat treated in a thermal cycler adjusted at 85 °C, 90 °C and 93 °C. Two variants of the Weibull model were fitted to the survival data and the following inactivation parameters estimated: b (inactivation rate, min⁻¹), n (curve shape) and δ (the time taken for the first decimal reduction, min). In addition to the assessment of heat resistance, outgrowth of Byssochlamys sp. from ascospores heated at 70 °C, 75 °C, 80 °C, 85 °C and 90 °C for 10 min and at 93 °C for 30 and 70 s was determined at 22 °C for up to 30 days. The Baranyi and Roberts model was fitted to the growth data to estimate the radial growth rates (μmax, mm·day⁻¹) and lag times (λ, days). Inter-species variability and significant differences (p < 0.05) were observed for both the estimated inactivation and growth parameters among B. fulva and B. nivea strains. The effect of heat treatment intensity on outgrowth of B. fulva strains was more apparent at the most intense heat treatment evaluated (90 °C/10 min), which was also the condition in which greater dispersion of the estimated kinetic parameters was observed. On the other hand, B. nivea strains were more affected by heating, resulting in greater variability of growth parameters estimated at different heating intensities and in very long lag phases (up to 25 days). The results show that inter- and intra-species variability in the kinetic parameters of Byssochlamys sp. needs to be taken into account for more accurate spoilage prediction. Furthermore, the effect of thermal treatments on subsequent outgrowth from ascospores should be explored in combination with other relevant factors such as °Brix and oxygen to develop thermal processes and storage conditions which can prevent the growth of HRMs and spoilage of heat treated food products. Copyright © 2018 Elsevier B.V. All rights reserved.
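
    The Weibull inactivation model referred to above can be fitted as sketched below, with δ (the time to the first decimal reduction) recovered from the estimated b and n; this is a generic fit, not the exact variant used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, b, n):
    """Weibull inactivation model: log10(N/N0) = -b * t**n."""
    return -b * t**n

def fit_weibull(time_min, log_survival):
    """Estimate b (min^-n), n and delta (time to the first decimal reduction)."""
    (b, n), _ = curve_fit(weibull_log_survival, time_min, log_survival,
                          p0=(0.1, 1.0), maxfev=10000)
    delta = (1.0 / b) ** (1.0 / n)    # time at which log10(N/N0) = -1
    return b, n, delta
```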

  13. Tuning of the ionization potential of paddlewheel diruthenium(II, II) complexes with fluorine atoms on the benzoate ligands.

    PubMed

    Miyasaka, Hitoshi; Motokawa, Natsuko; Atsuumi, Ryo; Kamo, Hiromichi; Asai, Yuichiro; Yamashita, Masahiro

    2011-01-21

    A series of paddlewheel diruthenium(II,II) complexes with various fluorine-substituted benzoate ligands were isolated as THF adducts and structurally characterized: [Ru(2)(F(x)PhCO(2))(4)(THF)(2)] (F(x)PhCO(2)(-) = o-fluorobenzoate, o-F; m-fluorobenzoate, m-F; p-fluorobenzoate, p-F; 2,6-difluorobenzoate, 2,6-F(2); 3,4-difluorobenzoate, 3,4-F(2); 3,5-difluorobenzoate, 3,5-F(2); 2,3,4-trifluorobenzoate, 2,3,4-F(3); 2,3,6-trifluorobenzoate, 2,3,6-F(3); 2,4,5-trifluorobenzoate, 2,4,5-F(3); 2,4,6-trifluorobenzoate, 2,4,6-F(3); 3,4,5-trifluorobenzoate, 3,4,5-F(3); 2,3,4,5-tetrafluorobenzoate, 2,3,4,5-F(4); 2,3,5,6-tetrafluorobenzoate, 2,3,5,6-F(4); pentafluorobenzoate, F(5)). By adding fluorine atoms to the benzoate ligands, it was possible to tune the redox potential (E(1/2)) for [Ru(2)(II,II)]/[Ru(2)(II,III)](+) over a wide range of potentials from -40 mV to 350 mV (vs. Ag/Ag(+) in THF). 2,3,6-F(3), 2,3,4,5-F(4), 2,3,5,6-F(4) and F(5) were relatively air-stable compounds even though they are [Ru(2)(II,II)] species. The redox potential in THF was dependent on an electronic effect rather than on a structural (steric) effect of the o-F atoms, although more than one substituent in the m- and p-positions shifted E(1/2) to higher potentials in relation to the general Hammett equation. A quasi-Hammett parameter for an o-F atom (σ(o)) was estimated to be ∼0.2, and a plot of E(1/2) vs. a sum of Hammett parameters including σ(o) was linear. In addition, the HOMO energy levels, which were calculated based on atomic coordinates of solid-state structures, as well as the redox potential were affected by adding F atoms. Nevertheless, a steric contribution stabilizing their static structures in the solid state was present in addition to the electronic effect. On the basis of the electronic effect, the redox potential of these complexes is correlated to the HOMO energy level, and the electronic effect of F atoms is the main factor controlling the ionization potential of the complexes with ligands free from the rotational constraint, i.e. complexes in solution.
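
    The quasi-Hammett analysis described above amounts to a linear regression of E(1/2) against the sum of substituent constants; a minimal sketch is given below, where including σ(o) ≈ 0.2 for each ortho-F substituent follows the paper's estimate, and everything else is illustrative.

```python
import numpy as np

def hammett_fit(sigma_sum, e_half_mV):
    """Linear fit E(1/2) = rho * (sum of Hammett sigma) + E0.
    sigma_sum should include the quasi-parameter sigma_o ~ 0.2 for each
    ortho-F substituent, as estimated in the paper."""
    rho, e0 = np.polyfit(np.asarray(sigma_sum, dtype=float),
                         np.asarray(e_half_mV, dtype=float), 1)
    return rho, e0
```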

  14. First Attempt of Orbit Determination of SLR Satellites and Space Debris Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Deleflie, F.; Coulot, D.; Descosta, R.; Fernier, A.; Richard, P.

    2013-08-01

    We present an orbit determination method based on genetic algorithms. Contrary to usual estimation methods mainly based on least-squares methods, these algorithms do not require any a priori knowledge of the initial state vector to be estimated. These algorithms can be applied when a new satellite is launched or for uncatalogued objects that appear in images obtained from robotic telescopes such as the TAROT ones. We show in this paper preliminary results obtained from an SLR satellite, for which tracking data acquired by the ILRS network enable accurate orbital arcs to be built at the few-centimeter level, which can be used as a reference orbit; in this case, the basic observations are made up of time series of ranges, obtained from various tracking stations. We show as well the results obtained from the observations acquired by the two TAROT telescopes on the Telecom-2D satellite operated by CNES; in that case, the observations are made up of time series of azimuths and elevations, seen from the two TAROT telescopes. The method is carried out in several steps: (i) an analytical propagation of the equations of motion, (ii) an estimation kernel based on genetic algorithms, which follows the usual steps of such approaches: initialization and evolution of a selected population, so as to determine the best parameters. Each parameter to be estimated, namely each initial keplerian element, has to be searched within an interval that is chosen beforehand. The algorithm is expected to converge towards an optimum within a reasonable computational time.

  15. Can we calibrate simultaneously groundwater recharge and aquifer hydrodynamic parameters ?

    NASA Astrophysics Data System (ADS)

    Hassane Maina, Fadji; Ackerer, Philippe; Bildstein, Olivier

    2017-04-01

    By groundwater model calibration we mean here fitting the measured piezometric heads by estimating the hydrodynamic parameters (storage term and hydraulic conductivity) and the recharge. It is traditionally recommended to avoid simultaneous calibration of groundwater recharge and flow parameters because of correlation between recharge and the flow parameters. From a physical point of view, little recharge associated with low hydraulic conductivity can produce piezometric changes very similar to those of higher recharge and higher hydraulic conductivity. While this correlation holds under steady-state conditions, we assume that it is much weaker under transient conditions because recharge varies in time and the parameters do not. Moreover, the recharge is negligible during summer time for many climatic conditions due to reduced precipitation, increased evaporation and transpiration by vegetation cover. We analyze our hypothesis through global sensitivity analysis (GSA) in conjunction with the polynomial chaos expansion (PCE) methodology. We perform GSA by calculating the Sobol indices, which provide a variance-based 'measure' of the effects of uncertain parameters (storage and hydraulic conductivity) and recharge on the piezometric heads computed by the flow model. The choice of PCE has the following two benefits: (i) it provides the global sensitivity indices in a straightforward manner, and (ii) PCE can serve as a surrogate model for the calibration of parameters. The coefficients of the PCE are computed by probabilistic collocation. We perform the GSA on simplified real conditions coming from an already built groundwater model dedicated to a subdomain of the Upper-Rhine aquifer (geometry, boundary conditions, climatic data). GSA shows that the simultaneous calibration of recharge and flow parameters is possible if the calibration is performed over at least one year. It also provides valuable information on the sensitivity versus time, depending on the aquifer inertia and climatic conditions. The groundwater level variations during recharge (increase) are sensitive to the storage coefficient, whereas the groundwater level variations after recharge (decrease) are sensitive to the hydraulic conductivity. Model calibration performed on synthetic data sets shows that the parameters and recharge are estimated quite accurately.
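
    The variance-based sensitivity analysis described above can be sketched with the SALib package, assuming it is available; the three-parameter problem definition and the toy head response below are placeholders standing in for the groundwater model (or its PCE surrogate), not the study's setup.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical problem: storage coefficient, hydraulic conductivity and a
# recharge multiplier, each with an assumed plausible range.
problem = {
    "num_vars": 3,
    "names": ["storage", "conductivity", "recharge_factor"],
    "bounds": [[1e-4, 1e-2], [1e-5, 1e-3], [0.5, 1.5]],
}

def head_model(x):
    """Toy response standing in for a simulated piezometric head."""
    s, k, r = x
    return 100.0 + 5.0 * r / k**0.2 - 2.0 * np.log10(s)

X = saltelli.sample(problem, 1024)            # Saltelli sampling scheme
Y = np.apply_along_axis(head_model, 1, X)
Si = sobol.analyze(problem, Y)                # first-order and total Sobol indices
print(Si["S1"], Si["ST"])
```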

  16. Solar magnetic field studies using the 12 micron emission lines. II - Stokes profiles and vector field samples in sunspots

    NASA Technical Reports Server (NTRS)

    Hewagama, Tilak; Deming, Drake; Jennings, Donald E.; Osherovich, Vladimir; Wiedemann, Gunter; Zipoy, David; Mickey, Donald L.; Garcia, Howard

    1993-01-01

    Polarimetric observations at 12 microns of two sunspots are reported. The horizontal distribution of parameters such as magnetic field strength, inclination, azimuth, and magnetic field filling factors are presented along with information about the height dependence of the magnetic field strength. Comparisons with contemporary magnetostatic sunspot models are made. The magnetic data are used to estimate the height of 12 micron line formation. From the data, it is concluded that within a stable sunspot there are no regions that are magnetically filamentary, in the sense of containing both strong-field and field-free regions.

  17. Source term model evaluations for the low-level waste facility performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  18. Evolution of Query Optimization Methods

    NASA Astrophysics Data System (ADS)

    Hameurlain, Abdelkader; Morvan, Franck

    Query optimization is the most critical phase in query processing. In this paper, we try to describe synthetically the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems through parallel, distributed and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).

  19. Exponential Models of Legislative Turnover. [and] The Dynamics of Political Mobilization, I: A Model of the Mobilization Process, II: Deductive Consequences and Empirical Application of the Model. Applications of Calculus to American Politics. [and] Public Support for Presidents. Applications of Algebra to American Politics. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 296-300.

    ERIC Educational Resources Information Center

    Casstevens, Thomas W.; And Others

    This document consists of five units which all view applications of mathematics to American politics. The first three view calculus applications, the last two deal with applications of algebra. The first module is geared to teach a student how to: 1) compute estimates of the value of the parameters in negative exponential models; and draw…

  20. Analysing neutron scattering data using McStas virtual experiments

    NASA Astrophysics Data System (ADS)

    Udby, L.; Willendrup, P. K.; Knudsen, E.; Niedermayer, Ch.; Filges, U.; Christensen, N. B.; Farhi, E.; Wells, B. O.; Lefmann, K.

    2011-04-01

    With the intention of developing a new data analysis method using virtual experiments we have built a detailed virtual model of the cold triple-axis spectrometer RITA-II at PSI, Switzerland, using the McStas neutron ray-tracing package. The parameters characterising the virtual instrument were carefully tuned against real experiments. In the present paper we show that virtual experiments reproduce experimentally observed linewidths within 1-3% for a variety of samples. Furthermore we show that the detailed knowledge of the instrumental resolution found from virtual experiments, including sample mosaicity, can be used for quantitative estimates of linewidth broadening resulting from, e.g., finite domain sizes in single-crystal samples.

  1. Fertility and Childlessness in the United States.

    PubMed

    Baudin, Thomas; de la Croix, David

    2015-06-01

    We develop a theory of fertility, distinguishing its intensive margin from its extensive margin. The deep parameters are identified using facts from the 1990 US Census: (i) fertility of mothers decreases with education; (ii) childlessness exhibits a U-shaped relationship with education; (iii) the relationship between marriage rates and education is hump-shaped for women and increasing for men. We estimate that 2.5 percent of women were childless because of poverty and 8.1 percent because of high opportunity cost of childrearing. Over time, historical trends in total factor productivity and in education led to a U-shaped response in childlessness rates while fertility of mothers decreased.

  2. Biological and analytical variations of 16 parameters related to coagulation screening tests and the activity of coagulation factors.

    PubMed

    Chen, Qian; Shou, Weiling; Wu, Wei; Guo, Ye; Zhang, Yujuan; Huang, Chunmei; Cui, Wei

    2015-04-01

    To accurately estimate longitudinal changes in individuals, it is important to take into consideration the biological variability of the measurement. The few studies available on the biological variations of coagulation parameters are mostly outdated. We confirmed the published results using modern, fully automated methods. Furthermore, we added data for additional coagulation parameters. At 8:00 am, 12:00 pm, and 4:00 pm on days 1, 3, and 5, venous blood was collected from 31 healthy volunteers. A total of 16 parameters related to coagulation screening tests as well as the activity of coagulation factors were analyzed; these included prothrombin time, fibrinogen (Fbg), activated partial thromboplastin time, thrombin time, international normalized ratio, prothrombin time activity, activated partial thromboplastin time ratio, fibrin(-ogen) degradation products, as well as the activity of factor II, factor V, factor VII, factor VIII, factor IX, and factor X. All intraindividual coefficients of variation (CVI) values for the parameters of the screening tests (except Fbg) were less than 5%. Conversely, the CVI values for the activity of coagulation factors were all greater than 5%. In addition, we calculated the reference change value to determine whether a significant difference exists between two test results from the same individual.
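
    The reference change value mentioned in the last sentence is conventionally computed from the analytical and intraindividual coefficients of variation as sketched below; the numerical example is illustrative, not data from the study.

```python
import math

def reference_change_value(cv_analytical, cv_intraindividual, z=1.96):
    """Reference change value (%) for two serial results from one individual:
    RCV = sqrt(2) * z * sqrt(CVa^2 + CVi^2), with both CVs in percent."""
    return math.sqrt(2.0) * z * math.sqrt(cv_analytical**2 + cv_intraindividual**2)

# e.g. CVa = 3 %, CVi = 8 %  ->  a change larger than ~23.7 % is significant
print(reference_change_value(3.0, 8.0))
```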

  3. Radiological Characterization Methodology of INEEL Stored RH-TRU Waste from ANL-E

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajiv N. Bhatt

    2003-02-01

    An Acceptable Knowledge (AK)-based radiological characterization methodology is being developed for RH TRU waste generated from ANL-E hot cell operations performed on fuel elements irradiated in the EBR-II reactor. The methodology relies on AK for composition of the fresh fuel elements, their irradiation history, and the waste generation and collection processes. Radiological characterization of the waste involves the estimates of the quantities of significant fission products and transuranic isotopes in the waste. Methods based on reactor and physics principles are used to achieve these estimates. Because of the availability of AK and the robustness of the calculation methods, the AK-based characterization methodology offers a superior alternative to traditional waste assay techniques. Using this methodology, it is shown that the radiological parameters of a test batch of ANL-E waste are well within the proposed WIPP Waste Acceptance Criteria limits.

  4. Radiological Characterization Methodology for INEEL-Stored Remote-Handled Transuranic (RH TRU) Waste from Argonne National Laboratory-East

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuan, P.; Bhatt, R.N.

    2003-01-14

    An Acceptable Knowledge (AK)-based radiological characterization methodology is being developed for RH TRU waste generated from ANL-E hot cell operations performed on fuel elements irradiated in the EBR-II reactor. The methodology relies on AK for composition of the fresh fuel elements, their irradiation history, and the waste generation and collection processes. Radiological characterization of the waste involves the estimates of the quantities of significant fission products and transuranic isotopes in the waste. Methods based on reactor and physics principles are used to achieve these estimates. Because of the availability of AK and the robustness of the calculation methods, the AK-based characterization methodology offers a superior alternative to traditional waste assay techniques. Using the methodology, it is shown that the radiological parameters of a test batch of ANL-E waste are well within the proposed WIPP Waste Acceptance Criteria limits.

  5. Generalized analytical model for benthic water flux forced by surface gravity waves

    USGS Publications Warehouse

    King, J.N.; Mehta, A.J.; Dean, R.G.

    2009-01-01

    A generalized analytical model for benthic water flux forced by linear surface gravity waves over a series of layered hydrogeologic units is developed by adapting a previous solution for a hydrogeologic unit with an infinite thickness (Case I) to a unit with a finite thickness (Case II) and to a dual-unit system (Case III). The model compares favorably with laboratory observations. The amplitude of wave-forced benthic water flux is shown to be directly proportional to the amplitude of the wave, the permeability of the hydrogeologic unit, and the wave number and inversely proportional to the kinematic viscosity of water. A dimensionless amplitude parameter is introduced and shown to reach a maximum where the product of water depth and the wave number is 1.2. Submarine groundwater discharge (SGD) is a benthic water discharge flux to a marine water body. The Case I model estimates an 11.5-cm/d SGD forced by a wave with a 1 s period and 5-cm amplitude in water that is 0.5-m deep. As this wave propagates into a region with a 0.3-m-thick hydrogeologic unit, with a no-flow bottom boundary, the Case II model estimates a 9.7-cm/d wave-forced SGD. As this wave propagates into a region with a 0.2-m-thick hydrogeologic unit over an infinitely thick, more permeable unit, the Case III quasi-confined model estimates a 15.7-cm/d wave-forced SGD. The quasi-confined model has benthic constituent flux implications in coral reef, karst, and clastic regions. Waves may undermine tracer and seepage meter estimates of SGD at some locations. Copyright 2009 by the American Geophysical Union.

  6. Is 'gut feeling' by medical staff better than validated scores in estimation of mortality in a medical intensive care unit? - The prospective FEELING-ON-ICU study.

    PubMed

    Radtke, Anne; Pfister, Roman; Kuhr, Kathrin; Kochanek, Matthias; Michels, Guido

    2017-10-01

    The aim of the FEELING-ON-ICU study was to compare mortality estimations of critically ill patients based on 'gut feeling' of medical staff and by Acute Physiology And Chronic Health Evaluation (APACHE) II, Simplified Acute Physiology Score (SAPS) II and Sequential Organ Failure Assessment (SOFA). Medical staff estimated patients' mortality risks via questionnaires. APACHE II, SAPS II and SOFA were calculated retrospectively from records. Estimations were compared with actual in-hospital mortality using receiver operating characteristic (ROC) curves and the area under the ROC curve (AUC). 66 critically ill patients (60.6% male, mean age 63 ± 15 years (range 30-86)) were evaluated each by a nurse (n=66, male 32.4%) and a physician (n=66, male 67.6%). 15 (22.7%) patients died in the intensive care unit. AUC was largest for estimations by physicians (AUC 0.814 (95% CI 0.705-0.923)), followed by SOFA (AUC 0.749 (95% CI 0.629-0.868)), SAPS II (AUC 0.723 (95% CI 0.597-0.849)), APACHE II (AUC 0.721 (95% CI 0.595-0.847)) and nursing staff (AUC 0.669 (95% CI 0.529-0.810)) (p<0.05 for all results). The concept of physicians' 'gut feeling' was comparable to classical objective scores in mortality estimations of critically ill patients. Concerning practicability, physicians' evaluations were advantageous compared to complex score calculation. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Preparation, spectroscopic, thermal, antihepatotoxicity, hematological parameters and liver antioxidant capacity characterizations of Cd(II), Hg(II), and Pb(II) mononuclear complexes of paracetamol anti-inflammatory drug

    NASA Astrophysics Data System (ADS)

    El-Megharbel, Samy M.; Hamza, Reham Z.; Refat, Moamen S.

    2014-10-01

    Keeping in view that some metal complexes are found to be more potent than their parent drugs, the present paper aimed to synthesize Cd(II), Hg(II) and Pb(II) complexes of the anti-inflammatory drug paracetamol (Para). Paracetamol complexes with the general formula [M(Para)2(H2O)2]·nH2O have been synthesized and characterized on the basis of elemental analysis, conductivity, IR, thermal (TG/DTG), 1H NMR and electronic spectral studies. The conductivity data indicate the non-electrolytic nature of these complexes. Comparative antimicrobial (bacteria and fungi) behaviors and molecular weights of paracetamol and its complexes have been studied. In vivo, the antihepatotoxic effect and the levels of some liver function parameters (serum total protein, ALT, AST, and LDH) were measured. Hematological parameters and liver antioxidant capacities of both Para and its complexes were determined. The Cd(II)-Para complex showed amelioration of antioxidant capacities in liver homogenates compared to the groups treated with the other Para complexes.

  8. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
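
    The hybrid strategy described above, a genetic algorithm to supply good starting values followed by a truncated-Newton refinement, can be sketched as follows; the GA operators, population size and the use of SciPy's TNC method are assumptions for illustration, not the authors' implementation, and `forward_model` is a user-supplied stand-in for the groundwater flow model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def objective(params, heads_obs, forward_model):
    """Sum of squared differences between observed and simulated heads."""
    return np.sum((heads_obs - forward_model(params)) ** 2)

def ga_then_newton(heads_obs, forward_model, bounds, pop=40, gens=50):
    """Very small GA (truncation selection + Gaussian mutation) to locate a
    promising region, followed by truncated-Newton (TNC) refinement."""
    lo, hi = np.array(bounds, dtype=float).T
    population = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fitness = np.array([objective(p, heads_obs, forward_model) for p in population])
        parents = population[np.argsort(fitness)[: pop // 2]]
        children = parents + rng.normal(0.0, 0.05 * (hi - lo), parents.shape)
        population = np.clip(np.vstack([parents, children]), lo, hi)
    scores = [objective(p, heads_obs, forward_model) for p in population]
    best = population[int(np.argmin(scores))]
    res = minimize(objective, best, args=(heads_obs, forward_model),
                   method="TNC", bounds=list(zip(lo, hi)))
    return res.x
```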

  9. Mechanisms for Reduction of Natural Waters Technogenic Pollution by Metals due to Complexions with Humus Substances (Zoning: Western Siberia and the European Territory of Russia)

    NASA Astrophysics Data System (ADS)

    Dinu, M. I.

    2017-11-01

    The article describes the complexation of metal ions with humus substances in natural waters (small lakes). Humus substances, as the major biochemical components of natural water, have a significant impact on the forms and migration of metals and the toxicity of natural objects. This article presents the results of large-scale chemical experiments: the study of the structural features (zonal aspects) of humus substances extracted from soil and water of natural climatic zones (more than 300 objects) in Russia (European Russia and West Siberia); the influence of structural features on the physico-chemical parameters of humus acids and, in particular, on their complexing ability. The functional specifics of humus matter extracted from soils are estimated using spectrometric techniques. The conditional stability constants for Fe(III), Cu(II), Pb(II), Cd(II), Zn(II), Ni(II), Co(II), Mn(II), Cr(III), Ca(II), Mg(II), Sr(II), and Al(III) are experimentally determined with electrochemical and spectroscopic analysis methods. The activities of metals are classified according to their affinity to humus compounds in soils and water. The determined conditional stability constants of the complexes are tested by model experiments, and it is demonstrated that Fe and Al ions have higher conditional stability constants than the ions of alkaline earth metals, Pb, Cu, and Zn. Furthermore, the influence of aluminium ions and iron on the complexation of copper and lead as well as the influence of lead and copper on the complexation of cobalt and nickel have been identified. The metal forms in a large number of lakes are calculated based on the results of the experiments. The main chemical mechanisms of the distribution of metals by forms in the water of the lakes in European Russia and West Siberia are described.

  10. Comparative absorption spectroscopy involving 4f-4f transitions to explore the kinetics of simultaneous coordination of uracil with Nd(III) and Zn(II) and its associated thermodynamics

    NASA Astrophysics Data System (ADS)

    Victory Devi, Ch.; Rajmuhon Singh, N.

    2011-10-01

    The interaction of uracil with Nd(III) has been explored in the presence and absence of Zn(II) using comparative absorption spectroscopy involving the 4f-4f transitions in different solvents. The complexation of uracil with Nd(III) is indicated by the change in intensity of 4f-4f bands, expressed in terms of significant changes in oscillator strength and Judd-Ofelt parameters. Intensification of these bands became more prominent in the presence of Zn(II), suggesting a stimulative effect of Zn(II) on the complexation of Nd(III) with uracil. Other spectral parameters, namely Slater-Condon (Fk's), nephelauxetic (β), bonding (b1/2), and percent covalency (δ) parameters, are computed to correlate the simultaneous binding of the metal ions with uracil. The sensitivity of the observed 4f-4f transitions to minor coordination changes around Nd(III) has been used to monitor the simultaneous coordination of uracil with Nd(III) and Zn(II). The variation of intensities (oscillator strengths and Judd-Ofelt parameters) of 4f-4f bands during the complexation has helped in following the heterobimetallic complexation of uracil. The rate of complexation with respect to the hypersensitive transition was evaluated. The energy of activation and thermodynamic parameters for the complexation reaction were also determined.

  11. Diet History Questionnaire II FAQs | EGRP/DCCPS/NCI/NIH

    Cancer.gov

    Answers to general questions about the Diet History Questionnaire II (DHQ II), as well as those related to DHQ II administration, validation, scanning, nutrient estimates, calculations, DHQ II modification, data quality, and more.

  12. Nickel(II) biosorption by Rhodotorula glutinis.

    PubMed

    Suazo-Madrid, Alicia; Morales-Barrera, Liliana; Aranda-García, Erick; Cristiani-Urbina, Eliseo

    2011-01-01

    The present study reports the feasibility of using Rhodotorula glutinis biomass as an alternative low-cost biosorbent to remove Ni(II) ions from aqueous solutions. Acetone-pretreated R. glutinis cells showed higher Ni(II) biosorption capacity than untreated cells at pH values ranging from 3 to 7.5, with an optimum pH of 7.5. The effects of other relevant environmental parameters, such as initial Ni(II) concentration, shaking contact time and temperature, on Ni(II) biosorption onto acetone-pretreated R. glutinis were evaluated. Significant enhancement of Ni(II) biosorption capacity was observed by increasing initial metal concentration and temperature. Kinetic studies showed that the kinetic data were best described by a pseudo-second-order kinetic model. Among the two-, three-, and four-parameter isotherm models tested, the Fritz-Schluender model exhibited the best fit to experimental data. Thermodynamic parameters (activation energy, and changes in activation enthalpy, activation entropy, and free energy of activation) revealed that the biosorption of Ni(II) ions onto acetone-pretreated R. glutinis biomass is an endothermic and non-spontaneous process, involving chemical sorption with weak interactions between the biosorbent and Ni(II) ions. The high sorption capacity (44.45 mg g(-1) at 25°C, and 63.53 mg g(-1) at 70°C) exhibited by acetone-pretreated R. glutinis biomass places this biosorbent among the best adsorbents currently available for removal of Ni(II) ions from aqueous effluents.
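    The pseudo-second-order model mentioned above is usually fitted in its linearized form, t/qt = 1/(k2·qe²) + t/qe, so a straight-line fit of t/qt against t yields qe from the slope and k2 from the intercept. The sketch below is a generic, self-checking illustration: it generates synthetic uptake data from assumed constants (not values from this study) and recovers them.

      import numpy as np

      # Synthetic example: generate uptake data from known constants, then recover
      # them with the linearized pseudo-second-order fit. Values are illustrative,
      # not taken from the study.
      qe_true, k2_true = 44.0, 0.002          # mg/g and g mg^-1 min^-1 (assumed)
      t = np.array([5., 10., 20., 40., 60., 120., 240.])       # contact time (min)
      qt = (k2_true * qe_true**2 * t) / (1.0 + k2_true * qe_true * t)  # integrated PSO model

      # Linearized form: t/qt = 1/(k2*qe**2) + t/qe  ->  straight line in t
      slope, intercept = np.polyfit(t, t / qt, 1)
      qe_fit = 1.0 / slope
      k2_fit = 1.0 / (intercept * qe_fit**2)
      print(f"recovered qe = {qe_fit:.1f} mg/g, k2 = {k2_fit:.4f} g mg^-1 min^-1")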

  13. Spectral characterization, thermal and biological activity studies of Schiff base complexes derived from 4,4‧-Methylenedianiline, ethanol amine and benzil

    NASA Astrophysics Data System (ADS)

    Emam, Sanaa Moustafa

    2017-04-01

    Some new metal(II) complexes of an asymmetric Schiff base ligand were prepared by a template technique. The resulting complexes have binuclear structures and were characterized by elemental analysis, molar conductivity, various spectroscopic methods (IR, UV-Vis, XRD, ESR), thermal analysis (TG), and magnetic moment measurements. The IR spectra demonstrate that the Schiff base ligand acts as a neutral tetradentate moiety in all metal complexes. The electronic absorption spectra indicate octahedral geometry for all complexes, while the ESR spectrum of the Cu(II) complex shows an axially symmetric g-tensor with g∥ > g⊥ > 2.0023, indicating a 2B1g ground state with a (dx2-y2)1 configuration. The nature of the solid residue formed in the TG measurements was confirmed using IR and XRD spectra. The biological activity of the prepared complexes was studied against land snails. Additionally, the in vitro antitumor activity of the synthesized complexes against the hepatocellular carcinoma cell line (Hep-G2) was examined. It was observed that Zn(II) complex (5) exhibits high inhibition of growth of the cell line, with an IC50 of 7.09 μg/mL.

  14. A Novel Protocol for Model Calibration in Biological Wastewater Treatment

    PubMed Central

    Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen

    2015-01-01

    Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application by using the conventional approaches. Hereby, we propose a novel calibration protocol, i.e. Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme flow: i) global factors sensitivity analysis for factors fixing; ii) pseudo-global parameter correlation analysis for non-identifiable factors detection; and iii) formation of a parameter subset and its estimation using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be used for automatic calibration of ASMs and could potentially be applied to other ordinary differential equation models. PMID:25682959

  15. Effect of hindlimb unloading on stereological parameters of the motor cortex and hippocampus in male rats.

    PubMed

    Salehi, Mohammad Saied; Mirzaii-Dizgah, Iraj; Vasaghi-Gharamaleki, Behnoosh; Zamiri, Mohammad Javad

    2016-11-09

    Hindlimb unloading (HU) can cause motion and cognition dysfunction, although its cellular and molecular mechanisms are not well understood. The aim of the present study was to determine the stereological parameters of the brain areas involved in motion (motor cortex) and spatial learning - memory (hippocampus) under an HU condition. Sixteen adult male rats, kept under a 12 : 12 h light-dark cycle, were divided into two groups of freely moving (n=8) and HU (n=8) rats. The volume of motor cortex and hippocampus, the numerical cell density of neurons in layers I, II-III, V, and VI of the motor cortex, the entire motor cortex as well as the primary motor cortex, and the numerical density of the CA1, CA3, and dentate gyrus subregions of the hippocampus were estimated. No significant differences were observed in the evaluated parameters. Our results thus indicated that motor cortical and hippocampal atrophy and cell loss may not necessarily be involved in the motion and spatial learning memory impairment in the rat.

  16. Simultaneously constraining the astrophysics of reionisation and the epoch of heating with 21CMMC

    NASA Astrophysics Data System (ADS)

    Greig, Bradley; Mesinger, Andrei

    2018-05-01

    We extend our MCMC sampler of 3D EoR simulations, 21CMMC, to perform parameter estimation directly on light-cones of the cosmic 21cm signal. This brings theoretical analysis one step closer to matching the expected 21-cm signal from next generation interferometers like HERA and the SKA. Using the light-cone version of 21CMMC, we quantify biases in the recovered astrophysical parameters obtained from the 21cm power spectrum when using the co-eval approximation to fit a mock 3D light-cone observation. While ignoring the light-cone effect does not bias the parameters under most assumptions, it can still underestimate their uncertainties. However, significant biases (~few - 10 σ) are possible if all of the following conditions are met: (i) foreground removal is very efficient, allowing large physical scales (k ~ 0.1 Mpc-1) to be used in the analysis; (ii) theoretical modelling is accurate to ~10 per cent in the power spectrum amplitude; and (iii) the 21cm signal evolves rapidly (i.e. the epochs of reionisation and heating overlap significantly).

  17. Interception loss, throughfall and stemflow in a maritime pine stand. II. An application of Gash's analytical model of interception

    NASA Astrophysics Data System (ADS)

    Loustau, D.; Berbigier, P.; Granier, A.

    1992-10-01

    Interception, throughfall and stemflow were determined in an 18-year-old maritime pine stand for a period of 30 months. This involved 71 rainfall events, each corresponding either to a single storm or to several storms. Gash's analytical model of interception was used to estimate the sensitivity of interception to canopy structure and climatic parameters. The seasonal cumulative interception loss corresponded to 12.6-21.0% of the amount of rainfall, whereas throughfall and stemflow accounted for 77-83% and 1-6%, respectively. On a seasonal basis, simulated data fitted the measured data satisfactorily ( r2 = 0.75). The rainfall partitioning between interception, throughfall and stemflow was shown to be sensitive to (1) the rainfall regime, i.e. the relative importance of light storms to total rainfall, (2) the climatic parameters, rainfall rate and average evaporation rate during storms, and (3) the canopy structure parameters of the model. The low interception rate of the canopy was attributed primarily to the low leaf area index of the stand.
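    Gash's analytical model estimates interception loss storm by storm from a few canopy and climate parameters. The sketch below implements a simplified version in which free throughfall and trunk terms are neglected (p = p_t = 0); the parameter values are assumptions for illustration, not those fitted for this stand.

      import numpy as np

      # Canopy and climate parameters (illustrative assumptions, not the fitted values)
      S = 0.8       # canopy storage capacity (mm)
      E_bar = 0.18  # mean evaporation rate during rainfall (mm/h)
      R_bar = 1.5   # mean rainfall rate (mm/h)

      # Gross rainfall needed to saturate the canopy (simplified Gash, p = p_t = 0)
      P_sat = -(R_bar * S / E_bar) * np.log(1.0 - E_bar / R_bar)

      def interception(P):
          """Interception loss (mm) for one storm of gross rainfall P (mm)."""
          if P < P_sat:
              return P                                   # canopy never saturates
          return P_sat + (E_bar / R_bar) * (P - P_sat)   # wetting-up + wet-canopy evaporation

      storms = np.array([0.5, 2.0, 6.0, 15.0, 30.0])     # gross rainfall per storm (mm)
      losses = np.array([interception(P) for P in storms])
      print("interception per storm (mm):", np.round(losses, 2))
      print("seasonal loss fraction:", losses.sum() / storms.sum())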

  18. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
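    The core idea — choosing which parameters to treat as tuners so that estimation error is minimized in an underdetermined problem — can be illustrated on a toy static example. This is not the paper's method (which constructs general linear combinations of health parameters and a full Kalman filter); it simply evaluates, by Monte Carlo, the mean-squared error obtained when each sensor-sized subset of parameters is estimated and the rest are held at their nominal values. The sensitivity matrix, covariances, and dimensions below are assumptions.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(1)
      n_params, n_sensors = 5, 3                    # more health parameters than sensors
      H = rng.normal(size=(n_sensors, n_params))    # assumed sensor-to-parameter sensitivities
      P_h = np.diag([1.0, 0.8, 0.5, 0.3, 0.1])      # assumed prior covariance of parameters
      R = 0.05 * np.eye(n_sensors)                  # assumed measurement-noise covariance

      def subset_mse(subset, n_trials=2000):
          """Monte Carlo MSE in all parameters when only `subset` is estimated."""
          idx = list(subset)
          Hs = H[:, idx]
          err2 = 0.0
          for _ in range(n_trials):
              h = rng.multivariate_normal(np.zeros(n_params), P_h)   # true deviations
              y = H @ h + rng.multivariate_normal(np.zeros(n_sensors), R)
              h_hat = np.zeros(n_params)
              h_hat[idx], *_ = np.linalg.lstsq(Hs, y, rcond=None)    # estimate tuner subset
              err2 += np.sum((h - h_hat) ** 2)
          return err2 / n_trials

      scores = {s: subset_mse(s) for s in combinations(range(n_params), n_sensors)}
      best = min(scores, key=scores.get)
      print("best tuner subset:", best, "MSE:", round(scores[best], 3))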

  19. A Whole-Body Physiologically Based Pharmacokinetic Model for Colistin and Colistin methanesulfonate (CMS) in Rat.

    PubMed

    Bouchene, Salim; Marchand, Sandrine; Couet, William; Friberg, Lena E; Gobin, Patrice; Lamarche, Isabelle; Grégoire, Nicolas; Björkman, Sven; Karlsson, Mats O

    2018-04-17

    Colistin is a polymyxin antibiotic used to treat patients infected with multidrug-resistant Gram negative bacteria (MDR-GNB). The objective of this work was to develop a whole-body physiologically based pharmacokinetic (WB-PBPK) model to predict tissue distribution of colistin in rat. The distribution of a drug in a tissue is commonly characterized by its tissue-to-plasma partition coefficient, Kp. Colistin and prodrug colistin methanesulfonate (CMS) Kp priors were measured experimentally from rat tissue homogenates or predicted in silico. The PK parameters of both compounds were estimated by fitting their in vivo plasma concentration-time profiles from six rats receiving an i.v. bolus of CMS. The variability in the data was quantified by applying a non-linear mixed effect (NLME) modelling approach. A WB-PBPK model was developed assuming a well-stirred and perfusion-limited distribution in tissue compartments. Prior information on tissue distribution of colistin and CMS was investigated following three scenarios: Kp were estimated using in silico Kp priors (I), Kp were estimated using experimental Kp priors (II), or Kp were fixed to the experimental values (III). The WB-PBPK model best described colistin and CMS plasma concentration-time profiles in scenario II. Colistin predicted concentrations in kidneys in scenario II were higher than in other tissues, which was consistent with its large experimental Kp prior. This might be explained by a high affinity of colistin for renal parenchyma and active reabsorption into the proximal tubular cells. In contrast, renal accumulation of colistin was not predicted in scenario I. Colistin and CMS clearance estimates were in agreement with published values. The developed model suggests using experimental priors over in silico Kp priors for kidneys to provide a better prediction of colistin renal distribution. Such models might serve in drug development for interspecies scaling and investigating the impact of disease state on colistin disposition. This article is protected by copyright. All rights reserved.

  20. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  1. Annual and average estimates of water-budget components based on hydrograph separation and PRISM precipitation for gaged basins in the Appalachian Plateaus Region, 1900-2011

    USGS Publications Warehouse

    Nelms, David L.; Messinger, Terence; McCoy, Kurt J.

    2015-07-14

    As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, annual and average estimates of water-budget components based on hydrograph separation and precipitation data from the parameter-elevation regressions on independent slopes model (PRISM) were determined at 849 continuous-record streamflow-gaging stations from Mississippi to New York, covering the period 1900 to 2011. Only complete calendar years (January to December) of streamflow record at each gage were used to determine estimates of base flow, which is that part of streamflow attributed to groundwater discharge; such estimates can serve as a proxy for annual recharge. For each year, estimates of annual base flow, runoff, and base-flow index were determined using computer programs—PART, HYSEP, and BFI—that have automated the separation procedures. These streamflow-hydrograph analysis methods are provided with version 1.0 of the U.S. Geological Survey Groundwater Toolbox, which is a new program that provides graphing, mapping, and analysis capabilities in a Windows environment. Annual values of precipitation were estimated by calculating the average of cell values intercepted by basin boundaries as previously defined in the GAGES–II dataset. Estimates of annual evapotranspiration were then calculated from the difference between precipitation and streamflow.
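    The water-budget bookkeeping in this record — base flow from hydrograph separation, a base-flow index, and evapotranspiration as precipitation minus streamflow — can be sketched with a simple stand-in separation method. PART, HYSEP, and BFI are not reproduced here; instead the widely used one-parameter Lyne-Hollick recursive digital filter is used for illustration, and the daily series are synthetic placeholders.

      import numpy as np

      def lyne_hollick_baseflow(Q, alpha=0.925):
          """One-pass Lyne-Hollick filter: returns a base-flow series (same units as Q)."""
          quick = np.zeros_like(Q)
          for t in range(1, len(Q)):
              quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (Q[t] - Q[t - 1])
              quick[t] = max(quick[t], 0.0)          # quickflow cannot be negative
          return np.clip(Q - quick, 0.0, Q)          # base flow bounded by streamflow

      # Synthetic daily series for one calendar year (mm/day over the basin, illustrative)
      rng = np.random.default_rng(3)
      Q = rng.gamma(2.0, 1.0, size=365)              # daily streamflow
      P = rng.gamma(2.5, 1.5, size=365)              # daily PRISM-style precipitation

      baseflow = lyne_hollick_baseflow(Q)
      annual_Q, annual_P = Q.sum(), P.sum()
      bfi = baseflow.sum() / annual_Q                # base-flow index
      et = annual_P - annual_Q                       # annual evapotranspiration estimate
      print(f"BFI = {bfi:.2f}, runoff = {annual_Q - baseflow.sum():.0f} mm, ET = {et:.0f} mm")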

  2. Toward On-line Parameter Estimation of Concentric Tube Robots Using a Mechanics-based Kinematic Model

    PubMed Central

    Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo

    2017-01-01

    Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554
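    A standard route toward on-line parameter estimation is to augment the state with the unknown parameters and run an extended Kalman filter. The sketch below does this for a deliberately simple scalar system (exponential decay with an unknown rate); the concentric-tube kinematic model is not reproduced, and all noise levels and initial guesses are assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      dt, n_steps = 0.05, 200
      theta_true = 1.5                      # unknown decay-rate parameter to recover

      # Simulate noisy measurements of x(t) obeying dx/dt = -theta * x
      x_true, measurements = 2.0, []
      for _ in range(n_steps):
          x_true += dt * (-theta_true * x_true)
          measurements.append(x_true + rng.normal(scale=0.02))

      # EKF on the augmented state z = [x, theta], with theta modelled as constant
      z = np.array([1.0, 0.5])              # assumed initial guesses
      P = np.diag([1.0, 1.0])
      Q = np.diag([1e-6, 1e-6])             # small process noise keeps the filter adaptive
      R = 0.02 ** 2
      H = np.array([[1.0, 0.0]])            # only x is measured

      for y in measurements:
          x, th = z
          # Predict: discretized dynamics and its Jacobian with respect to [x, theta]
          z = np.array([x + dt * (-th * x), th])
          F = np.array([[1.0 - dt * th, -dt * x],
                        [0.0, 1.0]])
          P = F @ P @ F.T + Q
          # Update with the scalar measurement
          S = H @ P @ H.T + R
          K = P @ H.T / S
          z = z + (K * (y - z[0])).ravel()
          P = (np.eye(2) - K @ H) @ P

      print(f"estimated decay rate: {z[1]:.3f}, true value: {theta_true}")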

  3. Quantitative estimation of antioxidant therapy efficiency in diabetes mellitus patients

    NASA Astrophysics Data System (ADS)

    Gurfinkel, Youri I.; Ishunina, Angela M.; Ovsyannickov, Konstantin V.; Strokov, Igor A.

    2000-11-01

    The aim of this work was to find out to what degree Tanakan affects the microcirculation parameters and the malonic dialdehyde level, a marker of intense lipid peroxidation, in insulin-independent diabetes patients with different disease durations. We used the computerized capillaroscope GY-0.04, designed by the Centre for Analysis of Substances, Russia, for the non-invasive measurement of capillary blood velocity as well as the size of the perivascular zone and the density of blood aggregates and lipid inclusions. The microcirculation parameters were studied in two groups of insulin-independent diabetes patients. The basic group included 58 patients (61±9.0 years, disease duration 14.7±7.8 years). These patients had late diabetic complications such as retinopathy, nephropathy, and neuropathy, confirmed by clinical and instrumental investigation. In this group we also studied the level of serum malonic dialdehyde as a marker of intense lipid peroxidation. The reference group included 31 patients (57±1.3 years, disease duration 3.6±0.6 years) with minimal diabetic complications. We show that Tanakan at a daily dosage of 120 mg for 2 months reduces the malonic dialdehyde level in the blood serum and the erythrocyte membranes of type II diabetes patients and improves the microcirculation parameters. There are correspondences between the density of lipid inclusions as determined with computerized capillaroscopy and the lipid exchange parameters as determined using a routine blood test. Thus, noninvasive blood lipid quantification is feasible and reliable.

  4. Bibliography for aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.; Maine, Richard E.

    1986-01-01

    An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.

  5. Two-dimensional advective transport in ground-water flow parameter estimation

    USGS Publications Warehouse

    Anderman, E.R.; Hill, M.C.; Poeter, E.P.

    1996-01-01

    Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
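    Step (1) of the procedure — judging the parameterization from sensitivities and correlations computed at initial parameter values — can be sketched generically. The code below builds a finite-difference Jacobian of simulated observations with respect to the parameters for a toy forward model (not MODFLOWP), then reports composite scaled sensitivities and the parameter correlation matrix derived from (JᵀωJ)⁻¹; all model choices are assumptions for illustration.

      import numpy as np

      def forward_model(params, xs):
          # Toy stand-in for a ground-water flow simulation: predicted "heads"
          # as a smooth function of two parameters (e.g., log-transmissivity, recharge).
          a, b = params
          return a * np.exp(-xs / 10.0) + b * xs

      xs = np.linspace(0.0, 50.0, 12)          # observation locations (illustrative)
      p0 = np.array([10.0, 0.2])               # initial parameter values
      weights = np.eye(len(xs))                # observation weight matrix (assumed equal)

      # Finite-difference Jacobian of observations with respect to parameters
      J = np.zeros((len(xs), len(p0)))
      base = forward_model(p0, xs)
      for j in range(len(p0)):
          dp = np.zeros_like(p0)
          dp[j] = 1e-6 * max(abs(p0[j]), 1.0)
          J[:, j] = (forward_model(p0 + dp, xs) - base) / dp[j]

      # Composite scaled sensitivities (larger => parameter better supported by the data)
      css = np.sqrt(np.mean((J * p0) ** 2, axis=0))

      # Parameter correlation matrix from the covariance approximation (J^T w J)^-1
      cov = np.linalg.inv(J.T @ weights @ J)
      corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
      print("composite scaled sensitivities:", np.round(css, 3))
      print("parameter correlation:\n", np.round(corr, 3))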

  6. Demonstration of extrapulmonary activity of angiotensin converting enzyme in intact tissue preparations.

    PubMed Central

    Lembeck, F.; Griesbacher, T.; Eckhardt, M.

    1990-01-01

    1. The activity of angiotensin converting enzyme (ACE) has been studied on functional parameters of intact isolated preparations of extrapulmonary tissues. The conversion of angiotensin I (A I) to angiotensin II (A II) and the cleavage of bradykinin (BK) were used as indicators of ACE activity. Captopril was employed as a specific inhibitor of ACE. 2. Captopril augmented the BK-induced contractions of the rat isolated uterus, the BK- and substance P-induced contractions of the guinea-pig ileum, and the BK-induced venoconstriction in the isolated perfused ear of the rabbit. Degradation of BK by ACE was calculated to be 52% in the rat uterus and 75% in the rabbit perfused ear. 3. Captopril inhibited the A I-induced contractions of the rat isolated colon, the A I-induced vasoconstriction in the isolated perfused ear of the rabbit and the rise in blood pressure induced by i.a. injections of A I in pithed rats. Conversion of A I to A II was calculated to be 13% in the rat colon and 26% in the rabbit perfused ear. 4. From estimations of the A II activity (bioassay on the rat colon) in the effluent of the perfused ear of the rabbit after injections of A I into the arterial inflow cannula it was calculated that approximately one tenth of A I was converted to A II during a single passage through the ear (less than 15 s).(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2164861

  7. Consumption and diffusion of dissolved oxygen in sedimentary rocks.

    PubMed

    Manaka, M; Takeda, M

    2016-10-01

    Fe(II)-bearing minerals (e.g., biotite, chlorite, and pyrite) are a promising reducing agent for the consumption of atmospheric oxygen in repositories for the geological disposal of high-level radioactive waste. To estimate effective diffusion coefficients (De, in m² s⁻¹) for dissolved oxygen (DO) and the reaction rates for the oxidation of Fe(II)-bearing minerals in a repository environment, we conducted diffusion-chemical reaction experiments using intact rock samples of Mizunami sedimentary rock. In addition, we conducted batch experiments on the oxidation of crushed sedimentary rock by DO in a closed system. From the results of the diffusion-chemical reaction experiments, we estimated the values of De for DO to lie within the range 2.69×10⁻¹¹

  8. THE DETECTION RATE OF EARLY UV EMISSION FROM SUPERNOVAE: A DEDICATED GALEX/PTF SURVEY AND CALIBRATED THEORETICAL ESTIMATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganot, Noam; Gal-Yam, Avishay; Ofek, Eran O.

    The radius and surface composition of an exploding massive star, as well as the explosion energy per unit mass, can be measured using early UV observations of core-collapse supernovae (SNe). We present the first results from a simultaneous GALEX/PTF search for early ultraviolet (UV) emission from SNe. Six SNe II and one Type II superluminous SN (SLSN-II) are clearly detected in the GALEX near-UV (NUV) data. We compare our detection rate with theoretical estimates based on early, shock-cooling UV light curves calculated from models that fit existing Swift and GALEX observations well, combined with volumetric SN rates. We find that our observations are in good agreement with calculated rates assuming that red supergiants (RSGs) explode with fiducial radii of 500 R⊙, explosion energies of 10⁵¹ erg, and ejecta masses of 10 M⊙. Exploding blue supergiants and Wolf-Rayet stars are poorly constrained. We describe how such observations can be used to derive the progenitor radius, surface composition, and explosion energy per unit mass of such SN events, and we demonstrate why UV observations are critical for such measurements. We use the fiducial RSG parameters to estimate the detection rate of SNe during the shock-cooling phase (<1 day after explosion) for several ground-based surveys (PTF, ZTF, and LSST). We show that the proposed wide-field UV explorer ULTRASAT mission is expected to find >85 SNe per year (~0.5 SN per deg²), independent of host galaxy extinction, down to an NUV detection limit of 21.5 mag AB. Our pilot GALEX/PTF project thus convincingly demonstrates that a dedicated, systematic SN survey at the NUV band is a compelling method to study how massive stars end their life.

  9. Advances in parameter estimation techniques applied to flexible structures

    NASA Technical Reports Server (NTRS)

    Maben, Egbert; Zimmerman, David C.

    1994-01-01

    In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes are contrasted using the NASA Mini-Mast as the focus structure.

  10. Impact of the time scale of model sensitivity response on coupled model parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu

    2017-11-01

    That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.

  11. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

  12. Improved Estimates of Thermodynamic Parameters

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

  13. Anchoring the Population II Distance Scale: Accurate Ages for Globular Clusters

    NASA Technical Reports Server (NTRS)

    Chaboyer, Brian C.; Chaboyer, Brian C.; Carney, Bruce W.; Latham, David W.; Dunca, Douglas; Grand, Terry; Layden, Andy; Sarajedini, Ataollah; McWilliam, Andrew; Shao, Michael

    2004-01-01

    The metal-poor stars in the halo of the Milky Way galaxy were among the first objects formed in our Galaxy. These Population II stars are the oldest objects in the universe whose ages can be accurately determined. Age determinations for these stars allow us to set a firm lower limit to the age of the universe and to probe the early formation history of the Milky Way. The age of the universe determined from studies of Population II stars may be compared to the expansion age of the universe and used to constrain cosmological models. The largest uncertainty in estimates for the ages of stars in our halo is due to the uncertainty in the distance scale to Population II objects. We propose to obtain accurate parallaxes to a number of Population II objects (globular clusters and field stars in the halo), resulting in a significant improvement in the Population II distance scale and greatly reducing the uncertainty in the estimated ages of the oldest stars in our galaxy. At the present time, the oldest stars are estimated to be 12.8 Gyr old, with an uncertainty of approx. 15%. The SIM observations obtained by this key project, combined with the supporting theoretical research and ground based observations outlined in this proposal, will reduce the estimated uncertainty in the age estimates to 5%.

  14. Estimates of general combining ability in Hevea breeding at the Rubber Research Institute of Malaysia : I. Phases II and III A.

    PubMed

    Tan, H

    1977-01-01

    Estimates of general combining ability of parents for yield and girth obtained separately from seedlings and their corresponding clonal families in Phases II and IIIA of the RRIM breeding programme are compared. A highly significant positive correlation (r = 0.71***) is found between GCA estimates from seedling and clonal families for yield in Phase IIIA, but not in Phase II (r = -0.03(NS)) nor for girth (r = -0.27(NS)) in Phase IIIA. The correlations for Phase II yield and Phase IIIA girth, however, improve when the GCA estimates based on small sample size or reversed rankings are excluded. When the best selections (based on present clonal and seedling information) are compared, all five of the parents top-ranking for yield are common in Phase IIIA, but only two parents are common for yield and girth in Phases II and IIIA respectively. However, only one parent for yield in Phase II and two parents for girth in Phase IIIA would, if selected on clonal performance, have been omitted from the top-ranking selections made by previous workers using seedling information. These findings, therefore, justify the choice of parents based on GCA estimates for yield obtained from seedling performance. Similar justification cannot be offered for girth, for which analysis is confounded by uninterpretable site and seasonal effects.

  15. Comparative evaluation of hemodynamic and respiratory parameters during mechanical ventilation with two tidal volumes calculated by demi-span based height and measured height in normal lungs

    PubMed Central

    Seresht, L. Mousavi; Golparvar, Mohammad; Yaraghi, Ahmad

    2014-01-01

    Background: Appropriate determination of tidal volume (VT) is important for preventing ventilation-induced lung injury. We compared hemodynamic and respiratory parameters in two conditions of receiving VTs calculated by using body weight (BW), which was estimated by measured height (HBW) or demi-span-based body weight (DBW). Materials and Methods: This controlled trial was conducted in St. Alzahra Hospital in 2009 on American Society of Anesthesiologists (ASA) I and II, 18-65-year-old patients. Standing height and weight were measured and then height was calculated using the demi-span method. BW and VT were calculated with the acute respiratory distress syndrome-net formula. Patients were randomized and then crossed over to receive ventilation with both calculated VTs for 20 min. Hemodynamic and respiratory parameters were analyzed with SPSS version 20.0 using univariate and multivariate analyses. Results: Forty-nine patients were studied. Demi-span-based body weight and thus VT (DTV) were lower than height-based body weight and VT (HTV) (P = 0.028), in male patients (P = 0.005). Differences were observed in peak airway pressure (PAP) and airway resistance (AR) changes, with higher PAP and AR at 20 min after receiving HTV compared with DTV. Conclusions: Estimated VT based on measured height is higher than that based on demi-span, and this difference exists only in females; this higher VT results in higher airway pressures during mechanical ventilation. PMID:24627845

  16. Comparative evaluation of hemodynamic and respiratory parameters during mechanical ventilation with two tidal volumes calculated by demi-span based height and measured height in normal lungs.

    PubMed

    Seresht, L Mousavi; Golparvar, Mohammad; Yaraghi, Ahmad

    2014-01-01

    Appropriate determination of tidal volume (VT) is important for preventing ventilation-induced lung injury. We compared hemodynamic and respiratory parameters in two conditions of receiving VTs calculated by using body weight (BW), which was estimated by measured height (HBW) or demi-span-based body weight (DBW). This controlled trial was conducted in St. Alzahra Hospital in 2009 on American Society of Anesthesiologists (ASA) I and II, 18-65-year-old patients. Standing height and weight were measured and then height was calculated using the demi-span method. BW and VT were calculated with the acute respiratory distress syndrome-net formula. Patients were randomized and then crossed over to receive ventilation with both calculated VTs for 20 min. Hemodynamic and respiratory parameters were analyzed with SPSS version 20.0 using univariate and multivariate analyses. Forty-nine patients were studied. Demi-span-based body weight and thus VT (DTV) were lower than height-based body weight and VT (HTV) (P = 0.028), in male patients (P = 0.005). Differences were observed in peak airway pressure (PAP) and airway resistance (AR) changes, with higher PAP and AR at 20 min after receiving HTV compared with DTV. Estimated VT based on measured height is higher than that based on demi-span, and this difference exists only in females; this higher VT results in higher airway pressures during mechanical ventilation.
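    For readers wanting the arithmetic behind the two tidal volumes, a minimal sketch follows. It uses the ARDSNet predicted-body-weight formulas and a 6 mL/kg setting; demi-span height equations differ between populations, so the coefficients shown for that step are illustrative assumptions only and are not necessarily those used in this trial.

      def predicted_body_weight(height_cm, sex):
          """ARDSNet predicted body weight (kg) from height in cm."""
          base = 50.0 if sex == "male" else 45.5
          return base + 0.91 * (height_cm - 152.4)

      def height_from_demispan(demispan_cm, sex):
          """Illustrative demi-span equations; coefficients are assumptions,
          not the trial's, and vary between populations."""
          if sex == "male":
              return 1.40 * demispan_cm + 57.8
          return 1.35 * demispan_cm + 60.1

      def tidal_volume(height_cm, sex, ml_per_kg=6.0):
          return ml_per_kg * predicted_body_weight(height_cm, sex)

      measured_height = 178.0           # cm, example male patient
      demispan = 84.0                   # cm, example measurement
      htv = tidal_volume(measured_height, "male")
      dtv = tidal_volume(height_from_demispan(demispan, "male"), "male")
      print(f"HTV = {htv:.0f} mL, DTV = {dtv:.0f} mL")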

  17. Aerobic and Anaerobic Swimming Force Evaluation in One Single Test Session for Young Swimmers.

    PubMed

    de Barros Sousa, Filipe Antônio; Rodrigues, Natalia Almeida; Messias, Leonardo Henrique Dalcheco; Queiroz, Jair Borges; Manchado-Gobatto, Fulvia Barros; Gobatto, Claudio Alexandre

    2017-05-01

    This study aims to propose and validate the tethered swimming lactate minimum test (TSLacmin), estimating aerobic and anaerobic capacity in one single test session, using force as the measurement parameter. 6 male and 6 female young swimmers (age=15.7±1.1 years; height=173.3±9.5 cm; weight=66.1±9.5 kg) performed 4 sessions comprising i) an all-out 30 s test and incremental test (TSLacmin); ii) 30 min of tethered swimming at constant intensity (2 sessions); iii) free-swimming time trials used to calculate critical velocity. Tethered swimming sessions used an acquisition system enabling maximum (Fmax) and mean (Fmean) force measurement and intensity variation. The tethered all-out test lasting 30 s resulted in hyperlactatemia of 7.9±2.0 mmol·l⁻¹. TSLacmin presented a 100% success applicability rate, with an intensity equivalent to aerobic capacity in 75% of cases. TSLacmin intensity was 37.7±7.3 N, while maximum force in the all-out test was 105±27 N. Aerobic and anaerobic TSLacmin parameters were significantly related to free-swimming performance (r=-0.67 for 100 m and r=-0.80 for 200 m) and critical velocity (r=0.80). TSLacmin estimates aerobic capacity in most cases, and both aerobic and anaerobic force parameters are well related to critical velocity and free swimming performance. © Georg Thieme Verlag KG Stuttgart · New York.

  18. Probability distributions of the electroencephalogram envelope of preterm infants.

    PubMed

    Saji, Ryoya; Hirasawa, Kyoko; Ito, Masako; Kusuda, Satoshi; Konishi, Yukuo; Taga, Gentaro

    2015-06-01

    To determine the stationary characteristics of electroencephalogram (EEG) envelopes for prematurely born (preterm) infants and investigate the intrinsic characteristics of early brain development in preterm infants. Twenty neurologically normal sets of EEGs recorded in infants with a post-conceptional age (PCA) range of 26-44 weeks (mean 37.5 ± 5.0 weeks) were analyzed. Hilbert transform was applied to extract the envelope. We determined the suitable probability distribution of the envelope and performed a statistical analysis. It was found that (i) the probability distributions for preterm EEG envelopes were best fitted by lognormal distributions at 38 weeks PCA or less, and by gamma distributions at 44 weeks PCA; (ii) the scale parameter of the lognormal distribution had positive correlations with PCA as well as a strong negative correlation with the percentage of low-voltage activity; (iii) the shape parameter of the lognormal distribution had significant positive correlations with PCA; (iv) the statistics of mode showed significant linear relationships with PCA, and, therefore, it was considered a useful index in PCA prediction. These statistics, including the scale parameter of the lognormal distribution and the skewness and mode derived from a suitable probability distribution, may be good indexes for estimating stationary nature in developing brain activity in preterm infants. The stationary characteristics, such as discontinuity, asymmetry, and unimodality, of preterm EEGs are well indicated by the statistics estimated from the probability distribution of the preterm EEG envelopes. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
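    A minimal version of the envelope analysis described here — Hilbert transform to extract the EEG envelope, then comparison of lognormal and gamma fits — can be written with SciPy. The signal below is synthetic noise used purely as a placeholder for the preterm EEG recordings, and the sampling rate is an assumption.

      import numpy as np
      from scipy.signal import hilbert
      from scipy.stats import lognorm, gamma

      rng = np.random.default_rng(0)
      fs = 200.0                                    # assumed sampling rate (Hz)
      eeg = rng.normal(size=int(60 * fs))           # placeholder signal, NOT real preterm EEG

      # Envelope via the analytic signal
      envelope = np.abs(hilbert(eeg))

      # Fit candidate distributions (location fixed at zero, as for amplitude data)
      ln_shape, _, ln_scale = lognorm.fit(envelope, floc=0)
      g_shape, _, g_scale = gamma.fit(envelope, floc=0)

      # Compare fits by log-likelihood (higher is better)
      ll_lognorm = np.sum(lognorm.logpdf(envelope, ln_shape, 0, ln_scale))
      ll_gamma = np.sum(gamma.logpdf(envelope, g_shape, 0, g_scale))
      print(f"lognormal: shape={ln_shape:.2f}, scale={ln_scale:.2f}, logL={ll_lognorm:.0f}")
      print(f"gamma:     shape={g_shape:.2f}, scale={g_scale:.2f}, logL={ll_gamma:.0f}")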

  19. Electron detachment energies in high-symmetry alkali halide solvated-electron anions

    NASA Astrophysics Data System (ADS)

    Anusiewicz, Iwona; Berdys, Joanna; Simons, Jack; Skurski, Piotr

    2003-07-01

    We decompose the vertical electron detachment energies (VDEs) in solvated-electron clusters of alkali halides in terms of (i) an electrostatic contribution that correlates with the dipole moment (μ) of the individual alkali halide molecule and (ii) a relaxation component that is related to the polarizability (α) of the alkali halide molecule. Detailed numerical ab initio results for twelve species (MX)n- (M=Li,Na; X=F,Cl,Br; n=2,3) are used to construct an interpolation model that relates the clusters' VDEs to their μ and α values as well as a cluster size parameter r that we show is closely related to the alkali cation's ionic radius. The interpolation formula is then tested by applying it to predict the VDEs of four systems [i.e., (KF)2-, (KF)3-, (KCl)2-, and (KCl)3-] that were not used in determining the parameters of the model. The average difference between the model's predicted VDEs and the ab initio calculated electron binding energies is less than 4% (for the twelve species studied). It is concluded that one can easily estimate the VDE of a given high-symmetry solvated electron system by employing the model put forth here if the α, μ and cation ionic radii are known. Alternatively, if VDEs are measured for an alkali halide cluster and the α and μ values are known, one can estimate the r parameter, which, in turn, determines the "size" of the cluster anion.

  20. Probabilistic description of probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin

    2017-04-01

    Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even if current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method, so-called moisture maximization. To this end, a probabilistic bivariate extreme values model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of a maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
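    Traditional moisture maximization scales an observed storm by the ratio of maximum to observed precipitable water; the probabilistic critique above targets exactly this single-valued calculation. A minimal sketch of the deterministic version follows; the numbers are illustrative assumptions, not values from CanRCM4.

      # Deterministic moisture maximization (the approach this study revisits probabilistically).
      # All numbers below are illustrative assumptions.
      observed_storm_depth = 180.0       # mm, largest observed storm at the site
      storm_precipitable_water = 45.0    # mm, precipitable water during that storm
      max_precipitable_water = 62.0      # mm, climatological maximum for that season/location

      maximization_ratio = max_precipitable_water / storm_precipitable_water
      pmp_estimate = observed_storm_depth * maximization_ratio
      print(f"maximization ratio = {maximization_ratio:.2f}, PMP ~ {pmp_estimate:.0f} mm")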

  1. Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation

    NASA Astrophysics Data System (ADS)

    Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei

    2018-04-01

    Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.

  2. Carbon footprint estimator, phase II : volume II - technical appendices.

    DOT National Transportation Integrated Search

    2014-03-01

    The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG : emissions associated with the construction and maintenance of transportation projects. This phase : of development included techniques for estimating emiss...

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yueyang; Deng Licai; Liu Chao

    A total of ~640,000 objects from the LAMOST pilot survey have been publicly released. In this work, we present a catalog of DA white dwarfs (DAWDs) from the entire pilot survey. We outline a new algorithm for the selection of white dwarfs (WDs) by fitting Sersic profiles to the Balmer Hβ, Hγ, and Hδ lines of the spectra, and calculating the equivalent width of the Ca II K line. Two thousand nine hundred sixty-four candidates are selected by constraining the fitting parameters and the equivalent width of the Ca II K line. All the spectra of candidates are visually inspected. We identify 230 DAWDs (59 of which are already included in the Villanova and SDSS WD catalogs), 20 of which are DAWDs with non-degenerate companions. In addition, 128 candidates are classified as DAWDs/subdwarfs, which means the classifications are ambiguous. The result is consistent with the expected DAWD number estimated based on the LEGUE target selection algorithm.

  4. Cardioprotective activity of flax lignan concentrate extracted from seeds of Linum usitatissimum in isoprenalin induced myocardial necrosis in rats

    PubMed Central

    Zanwar, Anand A.; Hegde, Mahabaleshwar V.; Bodhankar, Subhash L.

    2011-01-01

    The objective of the study was to evaluate the cardioprotective activity of flax lignan concentrate (FLC) in isoprenalin (ISO) induced cardiotoxicity in rats. Male Wistar rats (200–230 g) were divided into three groups. Group I: control; Group II: isoprenalin; Group III: FLC (500 mg/kg, p.o.) orally for 8 days. Groups II and III received isoprenalin (5.25 mg/kg, s.c.) on day 9 and 8.5 mg/kg on day 10. On day 10, serum marker enzymes were estimated and haemodynamic parameters were recorded. Animals were then sacrificed and heart histology was performed. Isoprenalin showed cardiotoxicity, manifested by increased levels of marker enzymes and increased heart rate. FLC treatment reversed these biochemical changes significantly compared with the ISO group. The cardiotoxic effect of isoprenalin was less in FLC-pretreated animals, which was confirmed by histopathological alterations. The haemodynamic, biochemical, and histopathological results suggest a cardioprotective effect of FLC in isoprenalin induced cardiotoxicity. PMID:21753905

  5. MT Ser, a binary blue subdwarf

    NASA Astrophysics Data System (ADS)

    Shimanskii, V. V.; Borisov, N. V.; Sakhibullin, N. A.; Sheveleva, D. V.

    2008-06-01

    We have classified and determined the parameters of the evolved close binary MT Ser. Our moderate-resolution spectra covering various phases of the orbital period were taken with the 6-m telescope of the Special Astrophysical Observatory. The spectra of MT Ser, freed from the contribution of the surrounding nebula Abell 41, contained no emission lines due to the reflection effect. The radial velocities measured from lines of different elements showed them to be constant on a time scale corresponding to the orbital period. At the same time, we find broadening of the HeII absorption lines due to the orbital motion of two hot stars of similar types. As a result, we classify MT Ser as a system with two blue subdwarfs after the common-envelope stage. We estimate the component masses and the distance to the object from the Doppler broadening of the HeII lines. We demonstrate that the age of the ambient nebula, Abell 41, is about 35 000 years.

  6. Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter

    PubMed Central

    Reddy, Chinthala P.; Rathi, Yogesh

    2016-01-01

    Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion diffusion imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956

  7. Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter.

    PubMed

    Reddy, Chinthala P; Rathi, Yogesh

    2016-01-01

    Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion diffusion imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts.

  8. Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
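    The mechanics of the method — a running Fourier transform updated at each sample, followed by an equation-error least-squares fit in the frequency domain — can be sketched for a scalar model dx/dt = a·x + b·u, for which jωX(ω) = aX(ω) + bU(ω). The flight-test model structure and analysis frequencies used by the author are not reproduced; everything below is a toy illustration with assumed values.

      import numpy as np

      rng = np.random.default_rng(4)
      dt, n = 0.02, 2500
      a_true, b_true = -1.2, 3.0

      # Simulate a scalar system dx/dt = a*x + b*u with a band-limited input
      u = np.sin(0.8 * np.arange(n) * dt) + 0.5 * np.sin(2.3 * np.arange(n) * dt)
      x = np.zeros(n)
      for k in range(1, n):
          x[k] = x[k - 1] + dt * (a_true * x[k - 1] + b_true * u[k - 1])
      x_meas = x + rng.normal(scale=0.01, size=n)

      # Recursive (running) Fourier transform at a small set of analysis frequencies (rad/s)
      omegas = np.array([0.4, 0.8, 1.5, 2.3, 3.0])
      X = np.zeros(len(omegas), dtype=complex)
      U = np.zeros(len(omegas), dtype=complex)
      for k in range(n):
          phasor = np.exp(-1j * omegas * k * dt)
          X += x_meas[k] * phasor * dt        # one-sample-at-a-time DFT update
          U += u[k] * phasor * dt

      # Equation error in the frequency domain: jw*X = a*X + b*U, solved by least squares
      lhs = 1j * omegas * X
      A = np.column_stack([X, U])
      A_ri = np.vstack([A.real, A.imag])               # stack real/imag parts so the
      lhs_ri = np.concatenate([lhs.real, lhs.imag])    # unknowns (a, b) stay real
      (a_hat, b_hat), *_ = np.linalg.lstsq(A_ri, lhs_ri, rcond=None)
      print(f"estimated a = {a_hat:.2f} (true {a_true}), b = {b_hat:.2f} (true {b_true})")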

  9. Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine

    2002-01-01

    The design of a linear parameter varying (LPV) controller for an aircraft under actuator failure is presented. The controller synthesis for actuator failure cases is formulated as linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of an LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block which represents estimated parameter uncertainties. The fault parameter is estimated using a two-stage Kalman filter. Simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.

  10. Multi-objective optimization in quantum parameter estimation

    NASA Astrophysics Data System (ADS)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while control improves the precision of parameter estimation, it usually introduces a significant deformation of the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, which improves the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of Hamiltonian control in improving the precision of quantum parameter estimation.

  11. Incorporation of prior information on parameters into nonlinear regression groundwater flow models: 2. Applications

    USGS Publications Warehouse

    Cooley, Richard L.

    1983-01-01

    This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
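    The ridge-parameter idea in this record can be illustrated on a linear toy problem: prior parameter values of unknown reliability enter a penalized least-squares criterion whose weight k (the ridge parameter) controls how strongly the estimates are pulled toward the prior. The sketch below, with entirely synthetic data, traces the MSE of the estimates as a function of k.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 3
X = rng.normal(size=(n, p))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + rng.normal(scale=0.5, size=n)

# Prior estimates of unknown reliability (deliberately biased here).
theta_prior = theta_true + np.array([0.3, -0.2, 0.4])

def ridge_with_prior(X, y, theta_prior, k):
    """Penalized LS: minimize ||y - X@t||^2 + k*||t - theta_prior||^2."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p),
                           X.T @ y + k * theta_prior)

for k in [0.0, 1.0, 5.0, 20.0]:
    est = ridge_with_prior(X, y, theta_prior, k)
    mse = np.mean((est - theta_true) ** 2)
    print(f"k={k:5.1f}  estimate={np.round(est, 3)}  MSE={mse:.4f}")
```

    With k = 0 this reduces to ordinary least squares; increasing k trades variance against the bias carried by the prior, which is exactly the trade-off the MSE-versus-ridge-parameter plots in the abstract are used to diagnose.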

  12. Multi-wavelength campaign on NGC 7469. II. Column densities and variability in the X-ray spectrum

    NASA Astrophysics Data System (ADS)

    Peretz, U.; Behar, E.; Kriss, G. A.; Kaastra, J.; Arav, N.; Bianchi, S.; Branduardi-Raymont, G.; Cappi, M.; Costantini, E.; De Marco, B.; Di Gesu, L.; Ebrero, J.; Kaspi, S.; Mehdipour, M.; Middei, R.; Paltani, S.; Petrucci, P. O.; Ponti, G.; Ursini, F.

    2018-01-01

    We have investigated the ionic column density variability of the ionized outflows associated with NGC 7469, to estimate their location and power. This could allow a better understanding of galactic feedback from AGNs to their host galaxies. Analysis of seven XMM-Newton grating observations from 2015 is reported. We used an individual-ion spectral fitting approach, and compared different epochs to accurately determine variability on timescales of years, months, and days. We find no significant column density variability over a ten-year period, implying that the outflow is far from the ionizing source. The implied lower bound on the ionization equilibrium time, ten years, constrains the distance to be at least 12 pc, and up to 31 pc: much less than, but consistent with, the 1 kpc wide starburst ring. The ionization distribution of column density is reconstructed from the measured column densities, nicely matching the results of two 2004 observations, with one large high ionization parameter (ξ) component at 2 < log ξ < 3.5 and one at 0.5 < log ξ < 1 in cgs units. The strong dependence of the expression for kinetic power, ∝ 1/ξ, hampers tight constraints on the feedback mechanism of outflows with a large range in ionization parameter, which is often observed and indicates a non-conical outflow. The kinetic power of the outflow is estimated here to be between 0.4% and 60% of the Eddington luminosity, depending on the ion used to estimate ξ.

  13. Functional Response of Eretmocerus delhiensis on Trialeurodes vaporariorum by Parasitism and Host Feeding

    PubMed Central

    Ebrahimifar, Jafar; Allahyari, Hossein

    2017-01-01

    The parasitoid wasp Eretmocerus delhiensis (Hymenoptera, Aphelinidae) is a thelytokous and synovigenic parasitoid. To evaluate E. delhiensis as a biocontrol agent in greenhouses, the killing efficiency of this parasitoid by parasitism and host feeding was studied. Killing efficiency can be compared by estimating functional response parameters. Laboratory experiments were performed under controlled conditions to evaluate the functional response of E. delhiensis at eight densities (2, 4, 8, 16, 32, 64, 100, and 120 third-instar nymphs) of Trialeurodes vaporariorum (Hemiptera, Aleyrodidae) on two host plants: tomato and prickly lettuce. Maximum likelihood estimates from logistic regression analysis revealed a type II functional response on both host plants, and the type of functional response was not affected by host plant. Rogers' model was used to fit the data. The attack rate (a) of E. delhiensis was 0.0286 and 0.0144 per hour on tomato and 0.0434 and 0.0170 per hour on prickly lettuce for parasitism and host feeding, respectively. The estimated handling times (Th) were 0.4911 and 1.4453 h on tomato and 0.5713 and 1.5001 h on prickly lettuce for parasitism and host feeding, respectively. Based on 95% confidence intervals, functional response parameters differed significantly between the host plants only for parasitism. The results open new insight into host-parasitoid interactions, but further investigation is needed before using E. delhiensis for the management and reduction of the greenhouse whitefly. PMID:28423420
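    For illustration, the attack rate and handling time of a type II functional response can be estimated by nonlinear least squares. The sketch below fits Holling's disc equation (a simpler relative of the Rogers random-predator model used in the study, which additionally corrects for host depletion) to hypothetical counts at the eight densities listed above; none of the numbers are from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

T = 24.0  # total exposure time (h), hypothetical

def holling_type2(N, a, Th):
    """Holling type II disc equation: hosts attacked vs. host density N."""
    return a * N * T / (1.0 + a * Th * N)

# Hypothetical mean numbers attacked at the eight densities used in the study
N = np.array([2, 4, 8, 16, 32, 64, 100, 120], dtype=float)
attacked = np.array([1.8, 3.4, 6.5, 11.0, 17.0, 22.0, 24.5, 25.0])

(a_hat, Th_hat), cov = curve_fit(holling_type2, N, attacked, p0=[0.05, 0.5])
se = np.sqrt(np.diag(cov))
print(f"attack rate a = {a_hat:.4f} +/- {se[0]:.4f} per hour")
print(f"handling time Th = {Th_hat:.4f} +/- {se[1]:.4f} h")
```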

  14. Pathognomonic features of Angle's Class II division 2 malocclusion: A comparative cephalometric and arch width study

    PubMed Central

    Prasad, Singamsetty E.R.V.; Indukuri, Ravikishore Reddy; Singh, Rupesh; Nooney, Anitha; Palagiri, Firoz Babu; Narayana, Veera

    2014-01-01

    Background: A thorough knowledge of the salient features of a malocclusion helps the clinician to arrive at a proper diagnosis and treatment plan, and also to predict the prognosis, prior to the onset of the treatment process. Among the four classes of Angle's classification of malocclusion, Class II division 2 occurs with the least frequency. There is continuing debate in the literature as to whether Class II division 2 patients exhibit pathognomonic skeletal and dental features. Aim of the study: The aim of this study is to describe the unique features of Angle's Class II division 2 malocclusion that differentiate it from Angle's Class II division 1 malocclusion. Materials and Methods: A total of 582 pre-treatment records (study models and cephalograms), with patient ages ranging from 15 to 22 years, were obtained from the hospital records of Vishnu Dental College, Bhimavaram and Geetam's Dental College, Visakhapatnam. Of these, 11 pre-treatment records were excluded because of lack of clarity. In the remaining sample, 283 were Class II division 1 and 288 were Class II division 2. The lateral cephalograms were analyzed using digiceph, and the arch width analysis was based on the anatomical points described by Staley et al. and Sergl et al. Results: An intergroup evaluation was done using the unpaired Student's "t" test. The skeletal vertical parameters, dental parameters, and the maxillary arch width parameters revealed statistically significant differences between the two groups of malocclusion. Conclusion: Angle's Class II division 2 malocclusion has a pronounced horizontal growth pattern with decreased lower anterior facial height, retroclined upper anteriors, and significantly increased maxillary arch width parameters. PMID:25558449

  15. A closed form of a kurtosis parameter of a hypergeometric-Gaussian type-II beam

    NASA Astrophysics Data System (ADS)

    F, Khannous; A, A. A. Ebrahim; A, Belafhal

    2016-04-01

    Based on the irradiance moment definition and the analytical expression for waveform propagation of hypergeometric-Gaussian type-II beams passing through an ABCD system, the kurtosis parameter is derived analytically and illustrated numerically. The kurtosis parameters of the Gaussian beam, the modified Bessel-modulated Gaussian beam with quadratic radial dependence, and elegant Laguerre-Gaussian beams are obtained as special cases of the present treatment. The results show that the kurtosis parameter depends on the beam order m and the hollowness parameter p, decreasing with increasing m and increasing with increasing p.

  16. On the biosorption, by brown seaweed, Lobophora variegata, of Ni(II) from aqueous solutions: equilibrium and thermodynamic studies.

    PubMed

    Basha, Shaik; Jaiswar, Santlal; Jha, Bhavanath

    2010-09-01

    The biosorption equilibrium isotherms of Ni(II) onto the marine brown alga Lobophora variegata, chemically modified with CaCl2, were studied and modeled. To predict the biosorption isotherms and to determine the characteristic parameters for process design, twenty-three one-, two-, three-, four- and five-parameter isotherm models were applied to the experimental data. The interaction among biosorbed molecules is attractive, and biosorption occurs on energetically different sites and is an endothermic process. The five-parameter Fritz-Schluender model gives the most accurate fit, with a high regression coefficient R² (0.9911-0.9975) and F-ratio (118.03-179.96), and low standard error SE (0.0902-0.1556) and residual sum of squares error SSE (0.0012-0.1789) values for all experimental data in comparison to the other models. The biosorption isotherm models fitted the experimental data in the order: Fritz-Schluender (five-parameter) > Freundlich (two-parameter) > Langmuir (two-parameter) > Khan (three-parameter) > Fritz-Schluender (four-parameter). The thermodynamic parameters ΔG°, ΔH° and ΔS° were determined, indicating that the sorption of Ni(II) onto L. variegata was spontaneous and endothermic in nature.
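    The model-comparison workflow described above can be sketched for the two simplest isotherms named in the abstract; the five-parameter Fritz-Schluender model would be fit the same way with more parameters. All data values below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * Ce ** (1.0 / n)

# Hypothetical equilibrium data: Ce (mg/L) and uptake qe (mg/g)
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([8.1, 16.5, 26.2, 36.0, 44.1, 48.9])

for name, model, p0 in [("Langmuir", langmuir, [60.0, 0.05]),
                        ("Freundlich", freundlich, [5.0, 2.0])]:
    popt, _ = curve_fit(model, Ce, qe, p0=p0)
    resid = qe - model(Ce, *popt)
    sse = np.sum(resid ** 2)
    r2 = 1.0 - sse / np.sum((qe - qe.mean()) ** 2)
    print(f"{name}: parameters={np.round(popt, 3)}, SSE={sse:.3f}, R^2={r2:.4f}")
```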

  17. Right miniparasternotomy may be a good minimally invasive alternative to full sternotomy for cardiac valve operations: a propensity-adjusted analysis.

    PubMed

    Chiu, Kuan M; Chen, Robert J; Lin, Tzu Y; Chen, Jer S; Huang, Jin H; Huang, Chun Y; Chu, Shu H

    2016-02-01

    Limited real-world data with adequate sample sizes exist for the mini-parasternotomy approach in Asian cohorts, and most previous studies were confounded by case heterogeneity. The goal of this study was to compare safety and quality outcomes of cardiac non-coronary valve operations by mini-parasternotomy and full sternotomy approaches on a risk-adjusted basis. Methods: From our hospital database, we retrieved the cases of non-coronary valve operations from 1 January 2005 to 31 December 2012, including re-do, emergent, and combined procedures. Estimated EuroScore-II and the propensity score for choosing mini-parasternotomy were adjusted for in the regression models on hospital mortality, complications (pneumonia, stroke, sepsis, etc.), and quality parameters (length of stay, ICU time, ventilator time, etc.). Non-complicated cases, defined as survival to discharge, ventilator use not over one week, and intensive care unit stay not over two weeks, were used for quality parameters. There were 283 mini-parasternotomy and 177 full sternotomy cases. EuroScore-II differed significantly (medians 2.1 vs. 4.7, P<0.001). Propensity scores for choosing mini-parasternotomy were higher with lower EuroScore-II (OR=0.91 per 1%, P<0.001), aortic regurgitation (OR=2.3, P=0.005), and aortic non-mitral valve disease (OR=3.9, P<0.001). Adjusted for propensity score and EuroScore-II, the mini-parasternotomy group had less pneumonia (OR=0.32, P=0.043), less sepsis (OR=0.31, P=0.045), and shorter non-complicated length of stay (coefficient=-7.2 days, P<0.001) than the full sternotomy group, whereas Kaplan-Meier survival, non-complicated ICU time, non-complicated ventilator time, and 30-day mortality did not differ significantly. The propensity-adjusted analysis demonstrated encouraging safety and quality outcomes for mini-parasternotomy valve operations in carefully selected patients.

  18. Structural, molecular orbital and optical characterizations of solvatochromic mixed ligand copper(II) complex of 5,5-Dimethyl cyclohexanate 1,3-dione and N,N,N',N'N″-pentamethyldiethylenetriamine.

    PubMed

    Taha, A; Farag, A A M; Ammar, A H; Ahmed, H M

    2014-03-25

    In this work, a new solvatochromic mononuclear mixed-ligand complex with the formula Cu(DMCHD)(Me5dien)NO3 (where DMCHD = 5,5-dimethyl cyclohexanate 1,3-dione and Me5dien = N,N,N',N',N″-pentamethyldiethylenetriamine) was synthesized and characterized by analytical, spectral, magnetic, molar conductance, thermal gravimetric analysis (TGA), X-ray diffraction (XRD) and transmission electron microscopy (TEM) measurements. The formation constant for copper(II)-DMCHD was found to be much lower than expected for similar β-diketones, revealing the monobasic unidentate nature of this ligand. The d-d absorption bands of the prepared complex exhibit color changes in various solvents (solvatochromism). Specific and non-specific interactions of solvent molecules with the complex were investigated using Multi Parametric Linear Regression Analysis (MLRA). Structural parameters of the free ligands and their Cu(II) complex were calculated at the semi-empirical PM3 level and compared with the experimental data. The crystallite size and morphology of Cu(DMCHD)(Me5dien)NO3 were examined using XRD analysis and TEM, revealing that the complex is well crystalline and corresponds to a monoclinic crystal structure. The lattice strain and mean crystallite size were estimated by the Williamson-Hall (W-H) plot using X-ray diffraction data. The main absorption parameters, such as the molar extinction coefficient, oscillator strength and electric dipole strength of the principal optical transitions in the UV-Vis region, were calculated. The analysis of the absorption coefficient near the fundamental absorption edge reveals that the optical band gaps are direct allowed transitions with values of 2.78 eV and 3.59 eV. The present copper(II) complex was screened for its antimicrobial activity against Staphylococcus aureus and Bacillus subtilis as Gram-positive bacteria, Escherichia coli and Salmonella Typhimurium as Gram-negative bacteria, and Candida albicans as a fungal strain. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Estimates of solar variability using the solar backscatter ultraviolet (SBUV) 2 Mg II index from the NOAA 9 satellite

    NASA Technical Reports Server (NTRS)

    Cebula, Richard P.; Deland, Matthew T.; Schlesinger, Barry M.

    1992-01-01

    The Mg II core to wing index was first developed for the Nimbus 7 solar backscatter ultraviolet (SBUV) instrument as an indicator of solar variability on both solar 27-day rotational and solar cycle time scales. This work extends the Mg II index to the NOAA 9 SBUV 2 instrument and shows that the variations in absolute value between Mg II index data sets caused by interinstrument differences do not affect the ability to track temporal variations. The NOAA 9 Mg II index accurately represents solar rotational modulation but contains more day-to-day noise than the Nimbus 7 Mg II index. Solar variability at other UV wavelengths is estimated by deriving scale factors between the Mg II index rotational variations and at those selected wavelengths. Based on the 27-day average of the NOAA 9 Mg II index and the NOAA 9 scale factors, the solar irradiance change from solar minimum in September 1986 to the beginning of the maximum of solar cycle 22 in 1989 is estimated to be 8.6 percent at 205 nm, 3.5 percent at 250 nm, and less than 1 percent beyond 300 nm.

  20. Estimation of confidence limits for descriptive indexes derived from autoregressive analysis of time series: Methods and application to heart rate variability.

    PubMed

    Beda, Alessandro; Simpson, David M; Faes, Luca

    2017-01-01

    The growing interest in personalized medicine requires making inferences from descriptive indexes estimated from individual recordings of physiological signals, with statistical analyses focused on individual differences between/within subjects, rather than comparing supposedly homogeneous cohorts. To this end, methods to compute confidence limits of individual estimates of descriptive indexes are needed. This study introduces numerical methods to compute such confidence limits and perform statistical comparisons between indexes derived from autoregressive (AR) modeling of individual time series. Analytical approaches are generally not viable, because the indexes are usually nonlinear functions of the AR parameters. We exploit Monte Carlo (MC) and Bootstrap (BS) methods to reproduce the sampling distribution of the AR parameters and indexes computed from them. Here, these methods are implemented for spectral and information-theoretic indexes of heart-rate variability (HRV) estimated from AR models of heart-period time series. First, the MC and BS methods are tested in a wide range of synthetic HRV time series, showing good agreement with a gold-standard approach (i.e. multiple realizations of the "true" process driving the simulation). Then, real HRV time series measured from volunteers performing cognitive tasks are considered, documenting (i) the strong variability of confidence limits' width across recordings, (ii) the diversity of individual responses to the same task, and (iii) frequent disagreement between the cohort-average response and that of many individuals. We conclude that MC and BS methods are robust in estimating confidence limits of these AR-based indexes and are thus recommended for short-term HRV analysis. Moreover, the strong inter-individual differences in the response to tasks shown by AR-based indexes evidence the need for individual-by-individual assessments of HRV features. Given their generality, MC and BS methods are promising for applications in biomedical signal processing and beyond, providing a powerful new tool for assessing the confidence limits of indexes estimated from individual recordings.
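    A minimal sketch of the parametric-bootstrap idea is given below for an AR(2) process, using the spectral power at one frequency as the descriptive index instead of the paper's HRV indexes; the series and model order are synthetic choices for illustration only. Roughly speaking, the Monte Carlo variant would instead draw parameter vectors from the estimated sampling distribution of the AR coefficients and recompute the index for each draw.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ar2(x):
    """Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2] + e; returns (a, sigma2)."""
    X = np.column_stack([x[1:-1], x[:-2]])
    y = x[2:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ a
    return a, resid.var()

def ar2_spectrum(a, s2, f):
    """AR(2) power spectral density at normalized frequency f (cycles/sample)."""
    z = np.exp(-2j * np.pi * f)
    return s2 / np.abs(1 - a[0] * z - a[1] * z**2) ** 2

def simulate_ar2(a, s2, n, rng):
    x = np.zeros(n + 200)                                  # 200-sample burn-in
    e = rng.normal(scale=np.sqrt(s2), size=x.size)
    for t in range(2, x.size):
        x[t] = a[0] * x[t-1] + a[1] * x[t-2] + e[t]
    return x[200:]

# One "recording" from a stationary AR(2) process
x = simulate_ar2(np.array([0.8, -0.5]), 1.0, 300, rng)
a_hat, s2_hat = fit_ar2(x)
point = ar2_spectrum(a_hat, s2_hat, 0.1)                   # index point estimate

# Parametric bootstrap: resimulate from the fitted model and refit
boot = []
for _ in range(500):
    xb = simulate_ar2(a_hat, s2_hat, x.size, rng)
    ab, s2b = fit_ar2(xb)
    boot.append(ar2_spectrum(ab, s2b, 0.1))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"index = {point:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```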

  1. Estimation of confidence limits for descriptive indexes derived from autoregressive analysis of time series: Methods and application to heart rate variability

    PubMed Central

    2017-01-01

    The growing interest in personalized medicine requires making inferences from descriptive indexes estimated from individual recordings of physiological signals, with statistical analyses focused on individual differences between/within subjects, rather than comparing supposedly homogeneous cohorts. To this end, methods to compute confidence limits of individual estimates of descriptive indexes are needed. This study introduces numerical methods to compute such confidence limits and perform statistical comparisons between indexes derived from autoregressive (AR) modeling of individual time series. Analytical approaches are generally not viable, because the indexes are usually nonlinear functions of the AR parameters. We exploit Monte Carlo (MC) and Bootstrap (BS) methods to reproduce the sampling distribution of the AR parameters and indexes computed from them. Here, these methods are implemented for spectral and information-theoretic indexes of heart-rate variability (HRV) estimated from AR models of heart-period time series. First, the MC and BS methods are tested in a wide range of synthetic HRV time series, showing good agreement with a gold-standard approach (i.e. multiple realizations of the "true" process driving the simulation). Then, real HRV time series measured from volunteers performing cognitive tasks are considered, documenting (i) the strong variability of confidence limits' width across recordings, (ii) the diversity of individual responses to the same task, and (iii) frequent disagreement between the cohort-average response and that of many individuals. We conclude that MC and BS methods are robust in estimating confidence limits of these AR-based indexes and are thus recommended for short-term HRV analysis. Moreover, the strong inter-individual differences in the response to tasks shown by AR-based indexes evidence the need for individual-by-individual assessments of HRV features. Given their generality, MC and BS methods are promising for applications in biomedical signal processing and beyond, providing a powerful new tool for assessing the confidence limits of indexes estimated from individual recordings. PMID:28968394

  2. The effect on esophagus after different radiotherapy techniques for early stage Hodgkin's lymphoma.

    PubMed

    Jørgensen, Anni Y S; Maraldo, Maja V; Brodin, Nils Patrik; Aznar, Marianne C; Vogelius, Ivan R; Rosenschöld, Per Munck Af; Petersen, Peter M; Specht, Lena

    2013-10-01

    The cure rate of early stage Hodgkin's lymphoma (HL) is excellent; investigating the late effects of treatment is thus important. Esophageal toxicity is a known side effect in patients receiving radiotherapy (RT) to the mediastinum, although little is known of this in HL survivors. This study investigates the dose to the esophagus in the treatment of early stage HL using different RT techniques. Estimated risks of early esophagitis, esophageal stricture and cancer are compared between treatments. We included 46 patients ≥ 15 years with supradiaphragmatic, clinical stage I-II HL, who received chemotherapy followed by involved node RT (INRT) to 30.6 Gy at our institution. INRT was planned with three-dimensional conformal RT (3DCRT). For each patient a volumetric modulated arc therapy (VMAT), proton therapy (PT) and mantle field (MF) treatment plan was simulated. Mean, maximum and minimum dose to the esophagus were extracted from the treatment plans. Risk estimates were based on dose-response models from clinical series with long-term follow-up. Statistical analyses were performed with repeated measures ANOVA using Bonferroni corrections. Mean dose to the esophagus was 16.4, 16.4, 14.7 and 34.2 Gy (p < 0.001) with 3DCRT, VMAT, PT and MF treatment, respectively. No differences were seen in the estimated risk of developing esophagitis, stricture or cancer with 3DCRT compared to VMAT (p = 1.000, p = 1.000, p = 0.356). PT performed significantly better with the lowest risk estimates on all parameters compared to the photon treatments, except compared to 3DCRT for stricture (p = 0.066). On all parameters the modern techniques were superior to MF treatment (p < 0.001). The estimated dose to the esophagus and the corresponding estimated risks of esophageal complications are decreased significantly with highly conformal RT compared to MF treatment. The number of patients presenting with late esophageal side effects will, thus, likely be minimal in the future.

  3. Spatial sparsity based indoor localization in wireless sensor network for assistive healthcare.

    PubMed

    Pourhomayoun, Mohammad; Jin, Zhanpeng; Fowler, Mark

    2012-01-01

    Indoor localization is one of the key topics in the area of wireless networks with increasing applications in assistive healthcare, where tracking the position and actions of the patient or elderly are required for medical observation or accident prevention. Most of the common indoor localization methods are based on estimating one or more location-dependent signal parameters like TOA, AOA or RSS. However, some difficulties and challenges caused by the complex scenarios within a closed space significantly limit the applicability of those existing approaches in an indoor assistive environment, such as the well-known multipath effect. In this paper, we develop a new one-stage localization method based on spatial sparsity of the x-y plane. In this method, we directly estimate the location of the emitter without going through the intermediate stage of TOA or signal strength estimation. We evaluate the performance of the proposed method using Monte Carlo simulation. The results show that the proposed method is (i) very accurate even with a small number of sensors and (ii) very effective in addressing the multi-path issues.

  4. Quantitative analysis of production traits in saltwater crocodiles (Crocodylus porosus): II. age at slaughter.

    PubMed

    Isberg, S R; Thomson, P C; Nicholas, F W; Barker, S G; Moran, C

    2005-12-01

    Crocodile morphometric measurements (head, snout-vent and total length) were recorded at three stages of the production chain: hatching, inventory [mean age (±SE) 265.1 ± 0.4 days] and slaughter (mean age 1037.8 ± 0.4 days). Crocodile skins are used for the manufacture of exclusive leather products, with the most commonly sold skin size having a belly width of 35-45 cm. One of the breeding objectives for inclusion in a multitrait genetic improvement programme for saltwater crocodiles is the time taken for a juvenile to reach this size, i.e. age at slaughter. A multivariate restricted maximum likelihood analysis provided (co)variance components yielding the first published genetic parameter estimates for these traits. Heritability (±SE) estimates for hatchling snout-vent length, inventory head length and age at slaughter were 0.60 (0.15), 0.59 (0.12) and 0.40 (0.10), respectively. There were strong negative genetic (-0.81 ± 0.08) and phenotypic (-0.82 ± 0.02) correlations between age at slaughter and inventory head length.

  5. Packed-bed column biosorption of chromium(VI) and nickel(II) onto Fenton modified Hydrilla verticillata dried biomass.

    PubMed

    Mishra, Ashutosh; Tripathi, Brahma Dutt; Rai, Ashwani Kumar

    2016-10-01

    The present study represents the first attempt to investigate the biosorption potential of Fenton-modified Hydrilla verticillata dried biomass (FMB) in removing chromium(VI) and nickel(II) ions from wastewater using an up-flow packed-bed column reactor. The effects of different packed-bed column parameters such as bed height, flow rate, influent metal ion concentration and particle size were examined. The column experiments showed that the highest bed height (25 cm), lowest flow rate (10 mL min(-1)), lowest influent metal concentration (5 mg L(-1)) and smallest particle size range (0.25-0.50 mm) are favourable for biosorption. The maximum biosorption capacities of FMB for chromium(VI) and nickel(II) removal were estimated to be 89.32 and 87.18 mg g(-1), respectively. The breakthrough curves were analyzed using the Bed Depth Service Time (BDST) and Thomas models, and the experimental results agree with both models. Column regeneration experiments were also carried out using 0.1 M HNO3, revealing good reusability of FMB over ten cycles of sorption and desorption. The performance of the FMB-packed column in treating secondary effluent was also tested under identical experimental conditions, demonstrating a significant reduction in chromium(VI) and nickel(II) ion concentrations after the biosorption process. Copyright © 2016 Elsevier Inc. All rights reserved.
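    As an illustration of the breakthrough-curve analysis mentioned above, the Thomas model can be fit to column data by nonlinear least squares; the operating conditions and breakthrough values below are hypothetical and not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical run: influent C0 (mg/mL), flow Q (mL/min), sorbent mass M (g)
C0, Q, M = 0.005, 10.0, 20.0

def thomas(t, k_th, q0):
    """Thomas model: Ct/C0 vs service time t (min).
    k_th in mL/(mg*min), q0 (sorption capacity) in mg/g; V = Q*t is throughput (mL)."""
    return 1.0 / (1.0 + np.exp(k_th / Q * (q0 * M - C0 * Q * t)))

# Hypothetical breakthrough data (time in min, Ct/C0 dimensionless)
t = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600], dtype=float)
ct_c0 = np.array([0.02, 0.05, 0.12, 0.28, 0.50, 0.72, 0.88, 0.96])

(k_hat, q0_hat), _ = curve_fit(thomas, t, ct_c0, p0=[0.5, 4.0])
print(f"k_Th = {k_hat:.4f} mL/(mg*min), q0 = {q0_hat:.2f} mg/g")
```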

  6. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-01-01

    In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in the mirt package (Chalmers, 2012).
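    For readers unfamiliar with the model, the four-parameter logistic item response function adds an upper asymptote d (slipping) to the usual discrimination a, difficulty b, and lower asymptote c (guessing). The sketch below evaluates it and computes a Bayesian modal (MAP) person estimate on a grid with a standard normal prior; the item parameters and response pattern are hypothetical, and this is not the authors' estimation code.

```python
import numpy as np

def irf_4pm(theta, a, b, c, d):
    """Four-parameter logistic IRF: lower asymptote c (guessing),
    upper asymptote d (slipping), discrimination a, difficulty b."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item parameters for a 5-item scale and one response pattern
a = np.array([1.2, 0.8, 1.5, 1.0, 2.0])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
c = np.full(5, 0.10)
d = np.full(5, 0.95)
resp = np.array([1, 1, 1, 0, 0])

# Bayesian modal (MAP) person estimate on a grid, standard normal prior
grid = np.linspace(-4, 4, 801)
P = irf_4pm(grid[:, None], a, b, c, d)                    # (grid points, items)
loglik = (resp * np.log(P) + (1 - resp) * np.log(1 - P)).sum(axis=1)
logpost = loglik - 0.5 * grid ** 2
print("MAP theta estimate:", grid[np.argmax(logpost)])
```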

  7. Performance comparison of first-order conditional estimation with interaction and Bayesian estimation methods for estimating the population parameters and its distribution from data sets with a low number of subjects.

    PubMed

    Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol

    2017-12-01

    Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of the classical first-order conditional estimation with interaction (FOCE-I) method and of expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distribution from data sets having a low number of subjects. One hundred data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical theophylline data available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed-effect and random-effect) estimates showed that all four methods performed equally well at the lower IIV levels, while the FOCE-I method performed better than the EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
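    The comparison metrics named above are straightforward to compute across simulation replicates; a small sketch with hypothetical clearance estimates (assumed true value 2.5 L/h) follows.

```python
import numpy as np

def ree(estimates, true_value):
    """Relative estimation error (%) for each replicate."""
    return 100.0 * (estimates - true_value) / true_value

def rrmse(estimates, true_value):
    """Relative root mean squared error (%) across replicates."""
    return 100.0 * np.sqrt(np.mean((estimates - true_value) ** 2)) / true_value

# Hypothetical clearance estimates from 100 simulated data sets (true CL = 2.5 L/h)
rng = np.random.default_rng(7)
cl_hat = rng.normal(loc=2.6, scale=0.4, size=100)
print(f"median REE = {np.median(ree(cl_hat, 2.5)):.1f}%")
print(f"rRMSE      = {rrmse(cl_hat, 2.5):.1f}%")
```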

  8. Control system estimation and design for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Stefani, R. T.; Williams, T. L.; Yakowitz, S. J.

    1972-01-01

    The selection of an estimator which is unbiased when applied to structural parameter estimation is discussed. The mathematical relationships for structural parameter estimation are defined. It is shown that a conventional weighted least squares (CWLS) estimate is biased when applied to structural parameter estimation. Two approaches to bias removal are suggested: (1) change the CWLS estimator or (2) change the objective function. The advantages of each approach are analyzed.

  9. Rasch Model Parameter Estimation in the Presence of a Nonnormal Latent Trait Using a Nonparametric Bayesian Approach

    ERIC Educational Resources Information Center

    Finch, Holmes; Edwards, Julianne M.

    2016-01-01

    Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…

  10. Towards a library of synthetic galaxy spectra and preliminary results of classification and parametrization of unresolved galaxies for Gaia. II

    NASA Astrophysics Data System (ADS)

    Tsalmantza, P.; Kontizas, M.; Rocca-Volmerange, B.; Bailer-Jones, C. A. L.; Kontizas, E.; Bellas-Velidis, I.; Livanou, E.; Korakitis, R.; Dapergolas, A.; Vallenari, A.; Fioc, M.

    2009-09-01

    Aims: This paper is the second in a series, implementing a classification system for Gaia observations of unresolved galaxies. Our goals are to determine spectral classes and estimate intrinsic astrophysical parameters via synthetic templates. Here we describe (1) a new extended library of synthetic galaxy spectra; (2) its comparison with various observations; and (3) first results of classification and parametrization experiments using simulated Gaia spectrophotometry of this library. Methods: Using the PÉGASE.2 code, based on galaxy evolution models that take account of metallicity evolution, extinction correction, and emission lines (with stellar spectra based on the BaSeL library), we improved our first library and extended it to cover the domain of most of the SDSS catalogue. Our classification and regression models were support vector machines (SVMs). Results: We produce an extended library of 28 885 synthetic galaxy spectra at zero redshift covering four general Hubble types of galaxies, over the wavelength range between 250 and 1050 nm at a sampling of 1 nm or less. The library is also produced for 4 random values of redshift in the range of 0-0.2. It is computed on a random grid of four key astrophysical parameters (infall timescale and 3 parameters defining the SFR) and, depending on the galaxy type, on two values of the age of the galaxy. The synthetic library was compared and found to be in good agreement with various observations. The first results from the SVM classifiers and parametrizers are promising, indicating that Hubble types can be reliably predicted and several parameters estimated with low bias and variance.

  11. Reconstruction of signals with unknown spectra in information field theory with parameter uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ensslin, Torsten A.; Frommert, Mona

    2011-05-15

    The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power spectrum using five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power-spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.

  12. HICOSMO: cosmology with a complete sample of galaxy clusters - II. Cosmological results

    NASA Astrophysics Data System (ADS)

    Schellenberger, G.; Reiprich, T. H.

    2017-10-01

    The X-ray bright, hot gas in the potential well of a galaxy cluster enables systematic X-ray studies of samples of galaxy clusters to constrain cosmological parameters. HIFLUGCS consists of the 64 X-ray brightest galaxy clusters in the Universe, building up a local sample. Here, we utilize this sample to determine, for the first time, individual hydrostatic mass estimates for all the clusters of the sample and, by making use of the completeness of the sample, we quantify constraints on the two interesting cosmological parameters, Ωm and σ8. We apply our total hydrostatic and gas mass estimates from the X-ray analysis to a Bayesian cosmological likelihood analysis and leave several parameters free to be constrained. We find Ωm = 0.30 ± 0.01 and σ8 = 0.79 ± 0.03 (statistical uncertainties, 68 per cent credibility level) using our default analysis strategy combining both a mass function analysis and the gas mass fraction results. The main sources of biases that we correct here are (1) the influence of galaxy groups (incompleteness in parent samples and differing behaviour of the Lx-M relation), (2) the hydrostatic mass bias, (3) the extrapolation of the total mass (comparing various methods), (4) the theoretical halo mass function and (5) other physical effects (non-negligible neutrino mass). We find that galaxy groups introduce a strong bias, since their number density seems to be overpredicted by the halo mass function. On the other hand, incorporating baryonic effects does not result in a significant change in the constraints. The total (uncorrected) systematic uncertainties (∼20 per cent) clearly dominate the statistical uncertainties on cosmological parameters for our sample.

  13. Bayesian LASSO, scale space and decision making in association genetics.

    PubMed

    Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J

    2015-01-01

    LASSO is a penalized regression method that facilitates model fitting in situations where there are as many, or even more explanatory variables than observations, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e. sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale space view and consider a whole range of fixed tuning parameters, instead. The effect estimates and the associated inference are considered for all tuning parameters in the selected range and the results are visualized with color maps that provide useful insights into data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.

  14. Stark broadening parameters and transition probabilities of persistent lines of Tl II

    NASA Astrophysics Data System (ADS)

    de Andrés-García, I.; Colón, C.; Fernández-Martínez, F.

    2018-05-01

    The presence of singly ionized thallium in the stellar atmosphere of the chemically peculiar star χ Lupi was reported by Leckrone et al. in 1999 by analysis of its stellar spectrum obtained with the Goddard High Resolution Spectrograph (GHRS) on board the Hubble Space Telescope. Atomic data about the spectral line of 1307.50 Å and about the hyperfine components of the spectral lines of 1321.71 Å and 1908.64 Å were taken from different sources and used to analyse the isotopic abundance of thallium II in the star χ Lupi. From their results the authors concluded that the photosphere of the star presents an anomalous isotopic composition of Tl II. A study of the atomic parameters of Tl II and of the broadening by the Stark effect of its spectral lines (and therefore of the possible overlaps of these lines) can help to clarify the conclusions about the spectral abundance of Tl II in different stars. In this paper we present calculated values of the atomic transition probabilities and Stark broadening parameters for 49 spectral lines of Tl II obtained by using the Cowan code including core polarization effects and the Griem semiempirical approach. Theoretical values of radiative lifetimes for 11 levels (eight with experimental values in the bibliography) are calculated and compared with the experimental values in order to test the quality of our results. Theoretical trends of the Stark width and shift parameters versus the temperature for spectral lines of astrophysical interest are displayed. Trends of our calculated Stark width for the isoelectronic sequence Tl II-Pb III-Bi IV are also displayed.

  15. 10 CFR Appendix II to Part 504 - Fuel Price Computation

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    Title 10 (Energy), Department of Energy (continued), Alternate Fuels, Existing Powerplants, Pt. 504, App. II - Appendix II to Part 504, Fuel Price Computation. (a) Introduction. This appendix provides the equations and parameters...

  16. An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.

    2012-01-01

    A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
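    A highly simplified sketch of the underdetermined estimation problem described above is given below: more health parameters than sensors, a reduced tuning vector sized to the number of sensors, and a single Kalman measurement update. The influence matrix, noise levels, and the pseudo-inverse choice of the transformation matrix are all illustrative assumptions, not the optimally designed selection used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical linear engine model: 3 sensed outputs respond to 6 health params
H = rng.normal(size=(3, 6))           # health-parameter influence on sensors
h_true = np.array([0.0, -0.02, 0.01, 0.0, 0.03, 0.0])   # deterioration/fault
R = 0.001 * np.eye(3)                 # sensor noise covariance
y = H @ h_true + rng.multivariate_normal(np.zeros(3), R)

# Tuning-parameter approach: estimate q (dim = #sensors) with G = H @ V,
# where V (6x3) maps tuners back to the larger health vector. Here V is a
# simple least-norm choice (pseudo-inverse), purely for illustration.
V = np.linalg.pinv(H)                 # 6x3 transformation (illustrative only)
G = H @ V                             # 3x3 reduced-order influence matrix

# Single Kalman measurement update from prior q = 0 with prior covariance P0
P0 = 0.01 * np.eye(3)
K = P0 @ G.T @ np.linalg.inv(G @ P0 @ G.T + R)
q_hat = K @ y
h_hat = V @ q_hat                     # expanded health-parameter estimate
print("estimated health parameters:", np.round(h_hat, 4))
```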

  17. Optimizing Cu(II) removal from aqueous solution by magnetic nanoparticles immobilized on activated carbon using Taguchi method.

    PubMed

    Ebrahimi Zarandi, Mohammad Javad; Sohrabi, Mahmoud Reza; Khosravi, Morteza; Mansouriieh, Nafiseh; Davallo, Mehran; Khosravan, Azita

    2016-01-01

    This study synthesized magnetic nanoparticles (Fe3O4) immobilized on activated carbon (AC) and used them as an effective adsorbent for Cu(II) removal from aqueous solution. The effects of three parameters, namely the concentration of Cu(II), the dosage of the Fe3O4/AC magnetic nanocomposite, and pH, on the removal of Cu(II) using the Fe3O4/AC nanocomposite were studied. In order to examine and describe the optimum condition for each of these parameters, Taguchi's optimization method was used in a batch system and an L9 orthogonal array was used for the experimental design. The removal percentage (R%) of Cu(II) and uptake capacity (q) were transformed into a signal-to-noise ratio (S/N) for a 'larger-the-better' response. Taguchi results, which were analyzed by choosing the best run by examining the S/N, were statistically tested using analysis of variance; the tests showed that the main effects of all parameters were significant within a 95% confidence level. The best conditions for removal of Cu(II) were determined to be pH 7, a nanocomposite dosage of 0.1 g L(-1) and an initial Cu(II) concentration of 20 mg L(-1) at a constant temperature of 25 °C. Generally, the results showed that the simple Taguchi method is suitable for optimizing the Cu(II) removal experiments.
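    The 'larger-the-better' signal-to-noise ratio used in Taguchi analysis is simply S/N = -10*log10(mean(1/y^2)) over the replicate responses of a run; a short sketch with hypothetical removal percentages follows.

```python
import numpy as np

def sn_larger_the_better(y):
    """Taguchi signal-to-noise ratio for a 'larger-the-better' response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical removal percentages from replicate runs at three conditions
runs = {"run1": [78.2, 80.1], "run2": [91.5, 90.3], "run3": [85.0, 84.2]}
for name, y in runs.items():
    print(f"{name}: S/N = {sn_larger_the_better(y):.2f} dB")
```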

  18. Estimation of soil saturated hydraulic conductivity by artificial neural networks ensemble in smectitic soils

    NASA Astrophysics Data System (ADS)

    Sedaghat, A.; Bayat, H.; Safari Sinegani, A. A.

    2016-03-01

    The saturated hydraulic conductivity (Ks) of the soil is one of the main soil physical properties. Indirect estimation of this parameter using pedo-transfer functions (PTFs) has received considerable attention. The purpose of this study was to improve the estimation of Ks using fractal parameters of particle and micro-aggregate size distributions in smectitic soils. In this study, 260 disturbed and undisturbed soil samples were collected from Guilan province in the north of Iran. The fractal model of Bird and Perrier was used to compute the fractal parameters of the particle and micro-aggregate size distributions. The PTFs were developed by an artificial neural network (ANN) ensemble to estimate Ks using available soil data and fractal parameters. Significant correlations were found between Ks and the fractal parameters of particles and micro-aggregates. The estimation of Ks was improved significantly by using fractal parameters of soil micro-aggregates as predictors, whereas using the geometric mean and geometric standard deviation of particle diameter did not improve Ks estimates significantly. Using the fractal parameters of particles and micro-aggregates simultaneously had the greatest effect on the estimation of Ks. Generally, fractal parameters can be successfully used as input parameters to improve the estimation of Ks in PTFs for smectitic soils. As a result, the ANN ensemble successfully related the fractal parameters of particles and micro-aggregates to Ks.
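    A pedo-transfer function built from an ensemble of neural networks can be sketched with scikit-learn: several small networks are trained on the same predictors and their predictions averaged. The predictors and the synthetic relationship below are placeholders, not the soil data or fractal parameters of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Hypothetical predictors: bulk density, clay fraction, and a "fractal dimension"
n = 300
Xd = np.column_stack([rng.uniform(1.1, 1.7, n),      # bulk density (g/cm3)
                      rng.uniform(0.1, 0.6, n),      # clay fraction
                      rng.uniform(2.0, 3.0, n)])     # micro-aggregate fractal D
log_ks = 2.0 - 1.5 * Xd[:, 0] - 2.0 * Xd[:, 1] + 0.8 * Xd[:, 2] \
         + rng.normal(scale=0.2, size=n)             # synthetic log10(Ks)

X_tr, X_te, y_tr, y_te = train_test_split(Xd, log_ks, random_state=0)

# Ensemble of small networks differing only in initialization; predictions averaged
members = [MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                        random_state=seed).fit(X_tr, y_tr) for seed in range(5)]
pred = np.mean([m.predict(X_te) for m in members], axis=0)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"ensemble RMSE on held-out data: {rmse:.3f} (log10 Ks units)")
```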

  19. Volcanic Signatures in Estimates of Stratospheric Aerosol Size, Distribution Width, Surface Area, and Volume Deduced from Global Satellite-Based Observations

    NASA Technical Reports Server (NTRS)

    Bauman, J. J.; Russell, P. B.

    2000-01-01

    Volcanic signatures in the stratospheric aerosol layer are revealed by two independent techniques which retrieve aerosol information from global satellite-based observations of particulate extinction. Both techniques combine the 4-wavelength Stratospheric Aerosol and Gas Experiment (SAGE) II extinction measurements (0.385 ≤ λ ≤ 1.02 μm) with the 7.96 μm and 12.82 μm extinction measurements from the Cryogenic Limb Array Etalon Spectrometer (CLAES) instrument. The algorithms use the SAGE II/CLAES composite extinction spectra in month-latitude-altitude bins to retrieve values and uncertainties of particle effective radius R_eff, surface area S, volume V and size distribution width σ_R. The first technique is a multi-wavelength Look-Up-Table (LUT) algorithm which retrieves values and uncertainties of R_eff by comparing ratios of extinctions from SAGE II and CLAES (e.g., E_λ/E_1.02) to pre-computed extinction ratios which are based on a range of unimodal lognormal size distributions. The pre-computed ratios are presented as a function of R_eff for a given σ_g; thus the comparisons establish the range of R_eff consistent with the measured spectra for that σ_g. The fact that no solutions are found for certain σ_g values provides information on the acceptable range of σ_g, which is found to evolve in response to volcanic injections and removal periods. Analogous comparisons using absolute extinction spectra and error bars establish the range of S and V. The second technique is a Parameter Search Technique (PST) which estimates R_eff and σ_g within a month-latitude-altitude bin by minimizing the chi-squared values obtained by comparing the SAGE II/CLAES extinction spectra and error bars with spectra calculated by varying the lognormal fitting parameters: R_eff, σ_g, and the total number of particles N_0. For both techniques, possible biases in retrieved parameters caused by assuming a unimodal functional form are removed using correction factors computed from representative in situ measurements of bimodal size distributions. Some interesting features revealed by the LUT and PST retrievals include: (1) increases in S and V (but not R_eff) after the Ruiz and Kelut injections, (2) increases in S, V, and R_eff after Pinatubo, (3) post-Pinatubo increases in S, V, and R_eff that are more rapid in the tropics than elsewhere, (4) mid-latitude post-Pinatubo increases in R_eff that lag increases in S and V, (5) S and V returning to pre-Pinatubo values sooner than R_eff does, (6) sharp increases in σ_g after Pinatubo and slight increases in σ_g after Ruiz, Etna, Kelut, Spurr and Rabaul, and (7) gradual declines in the heights at which R_eff, S and V peak after Pinatubo.

  20. Adaptive Modal Identification for Flutter Suppression Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Drew, Michael; Swei, Sean S.

    2016-01-01

    In this paper, we will develop an adaptive modal identification method for identifying the frequencies and damping of a flutter mode based on model-reference adaptive control (MRAC) and least-squares methods. The least-squares parameter estimation will achieve parameter convergence in the presence of persistent excitation whereas the MRAC parameter estimation does not guarantee parameter convergence. Two adaptive flutter suppression control approaches are developed: one based on MRAC and the other based on the least-squares method. The MRAC flutter suppression control is designed as an integral part of the parameter estimation where the feedback signal is used to estimate the modal information. On the other hand, the separation principle of control and estimation is applied to the least-squares method. The least-squares modal identification is used to perform parameter estimation.

  1. Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1996-01-01

    Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. The ranking of parameter values on the basis of their estimates is a difficult statistical problem. We use data from the BBS and simulations to illustrate the potentially misleading effects of sampling variation in rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures which provide partial solutions to the problem of ranking in the presence of sampling variation.
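    The shrinkage idea behind the empirical Bayes procedures mentioned above can be sketched with a normal-normal model: raw estimates with large standard errors are pulled strongly toward the overall mean, so they no longer dominate the extremes. Everything below is simulated for illustration; it is not the BBS analysis.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated "true" population trends for 50 species and noisy survey estimates
n = 50
true_trend = rng.normal(0.0, 1.0, n)
se = rng.uniform(0.2, 2.5, n)                 # some species very poorly estimated
est = true_trend + rng.normal(0.0, se)

# Empirical Bayes (normal-normal) shrinkage toward the overall mean:
# between-species variance tau2 estimated by the method of moments.
mu = np.average(est, weights=1.0 / se**2)
tau2 = max(np.var(est) - np.mean(se**2), 0.0)
shrink = tau2 / (tau2 + se**2)                # weight on each raw estimate
eb = mu + shrink * (est - mu)

# The most extreme raw estimates tend to be the poorly estimated ones
print("top-3 raw extremes, their SEs:", se[np.argsort(-np.abs(est))[:3]].round(2))
print("top-3 EB extremes, their SEs :", se[np.argsort(-np.abs(eb))[:3]].round(2))
```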

  2. Estimation of parameters in rational reaction rates of molecular biological systems via weighted least squares

    NASA Astrophysics Data System (ADS)

    Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke

    2010-01-01

    The models of gene regulatory networks are often derived from the statistical thermodynamics principle or the Michaelis-Menten kinetic equation. As a result, the models contain rational reaction rates which are nonlinear in both parameters and states. Estimating parameters that enter a model nonlinearly is challenging, despite the many traditional nonlinear parameter estimation methods such as the Gauss-Newton iteration method and its variants. In this article, we develop a two-step method to estimate the parameters in rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration: in a rational reaction rate, the numerator and the denominator are each linear in the parameters. By designing a special weight matrix for the linear least squares, the parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it produces analytical solutions for the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show superior performance over the Gauss-Newton method.
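    A minimal sketch of the two-step idea on the Michaelis-Menten rate is given below: the rearranged equation error is linear in (Km, Vm), and a second pass reweights it by the estimated denominator so that it approximates the output error. The exact weight matrix of the paper is not reproduced here, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Michaelis-Menten rate v = Vm*S/(Km + S): the numerator and denominator are
# both linear in the unknown parameters (Vm in the numerator, Km in the denominator).
Vm_true, Km_true = 10.0, 4.0
S = np.array([0.5, 1, 2, 4, 8, 16, 32, 64], dtype=float)
v = Vm_true * S / (Km_true + S) * (1 + rng.normal(scale=0.03, size=S.size))

def linearized_fit(S, v, weights):
    """Rearranged equation error  Km*v - Vm*S = -v*S, solved by weighted
    linear least squares for theta = (Km, Vm)."""
    A = np.column_stack([v, -S])
    b = -v * S
    W = np.sqrt(weights)
    theta, *_ = np.linalg.lstsq(A * W[:, None], b * W, rcond=None)
    return theta

# Step 1: unweighted equation-error fit
Km1, Vm1 = linearized_fit(S, v, np.ones_like(S))
# Step 2: refit with weights 1/(Km1 + S)^2 so the equation error approximates
# the output error v - Vm*S/(Km + S)
Km2, Vm2 = linearized_fit(S, v, 1.0 / (Km1 + S) ** 2)
print(f"step 1: Km={Km1:.3f}, Vm={Vm1:.3f}")
print(f"step 2: Km={Km2:.3f}, Vm={Vm2:.3f}  (true: Km=4, Vm=10)")
```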

  3. Comparative absorption, electroabsorption and electrochemical studies of intervalence electron transfer and electronic coupling in cyanide-bridged bimetallic systems: ancillary ligand effects

    NASA Astrophysics Data System (ADS)

    Vance, Fredrick W.; Slone, Robert V.; Stern, Charlotte L.; Hupp, Joseph T.

    2000-03-01

    Electroabsorption or Stark spectroscopy has been used to evaluate the systems [(NC)5M(II)-CN-Ru(III)(NH3)5]1- and [(NC)5M(II)-CN-Ru(III)(NH3)4py]1-, where M(II) = Fe(II) or Ru(II). When a pyridine ligand is present in the axial position on the Ru(III) acceptor, the effective optical electron transfer distance - as measured by the change in dipole moment, |Δμ| - is increased by more than 35% relative to the ammine-substituted counterpart. Comparison of the charge transfer distances to the crystal structure of Na[(CN)5Fe-CN-Ru(NH3)4py]·6H2O reveals that the Stark-derived distances are ~50% to ~90% of the geometric separation of the metal centers. The differences result in an upward revision of the Hush delocalization parameter, c_b^2, and of the electronic coupling matrix element, H_ab, relative to those parameters obtained exclusively from electronic absorption measurements. The revised parameters are compared to those obtained via electrochemical techniques and are found to be in only fair agreement. We conclude that the absorption/electroabsorption analysis likely yields a more reliable set of mixing and coupling parameters.

  4. Evaluating the predictive performance of empirical estimators of natural mortality rate using information on over 200 fish species

    USGS Publications Warehouse

    Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.

    2015-01-01

    Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson–Carney method based on tmax and the von Bertalanffy K coefficient, Pauly’s method based on growth parameters and water temperature and methods based just on K. It is possible to combine two independent methods by computing a weighted mean but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899 tmax^-0.916, prediction error = 0.32) when possible and a growth-based method (M = 4.118 K^0.73 L∞^-0.33, prediction error = 0.6, length in cm) otherwise.
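
    A small sketch that simply transcribes the two recommended empirical formulas from the abstract (tmax in years, K in 1/yr, L∞ in cm); the example inputs are invented.

      # The two recommended empirical estimators of natural mortality M.
      import numpy as np

      def m_from_tmax(tmax):
          """Longevity-based estimator: M = 4.899 * tmax**-0.916."""
          return 4.899 * np.asarray(tmax, dtype=float) ** -0.916

      def m_from_growth(K, L_inf):
          """Growth-based estimator: M = 4.118 * K**0.73 * L_inf**-0.33."""
          return 4.118 * np.asarray(K, float) ** 0.73 * np.asarray(L_inf, float) ** -0.33

      print(m_from_tmax(20))          # e.g. a fish with a 20-yr maximum age
      print(m_from_growth(0.2, 80))   # e.g. K = 0.2 /yr, L_inf = 80 cm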

  5. A new Bayesian recursive technique for parameter estimation

    NASA Astrophysics Data System (ADS)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper to two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well-known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
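
    A rough sketch of the bound-narrowing idea, assuming candidate parameter sets are sampled within the current bounds, scored against data, and the bounds are updated to enclose the best-scoring subset before resampling; the objective, sample counts, and keep fraction are illustrative, not the LOBARE specification.

      # Illustrative iterative narrowing of parameter bounds around the best samples.
      import numpy as np

      def narrow_bounds(objective, lo, hi, n_samples=200, keep=0.2, n_iters=10, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(lo, float), np.array(hi, float)
          for _ in range(n_iters):
              samples = rng.uniform(lo, hi, size=(n_samples, lo.size))
              scores = np.array([objective(s) for s in samples])
              best = samples[np.argsort(scores)[: int(keep * n_samples)]]
              lo, hi = best.min(axis=0), best.max(axis=0)   # bounds enclosing the best sets
          return (lo + hi) / 2, (lo, hi)

      # Toy calibration problem with optimum at (1.0, -2.0)
      obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
      print(narrow_bounds(obj, lo=[-5, -5], hi=[5, 5])[0])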

  6. Population subdivision and molecular sequence variation: theory and analysis of Drosophila ananassae data.

    PubMed

    Vogl, Claus; Das, Aparup; Beaumont, Mark; Mohanty, Sujata; Stephan, Wolfgang

    2003-11-01

    Population subdivision complicates analysis of molecular variation. Even if neutrality is assumed, three evolutionary forces need to be considered: migration, mutation, and drift. Simplification can be achieved by assuming that the processes of migration among and drift within subpopulations occur fast compared to mutation and drift in the entire population. This allows a two-step approach in the analysis: (i) analysis of population subdivision and (ii) analysis of molecular variation in the migrant pool. We model population subdivision using an infinite island model, where we allow the migration/drift parameter Theta to vary among populations. Thus, central and peripheral populations can be differentiated. For inference of Theta, we use a coalescence approach, implemented via a Markov chain Monte Carlo (MCMC) integration method that allows estimation of allele frequencies in the migrant pool. The second step of this approach then uses the estimated allele frequencies in the migrant pool to study molecular variation. We apply this method to a Drosophila ananassae sequence data set. We find little indication of isolation by distance, but large differences in the migration parameter among populations. The population as a whole seems to be expanding. A population from Bogor (Java, Indonesia) shows the highest variation and seems closest to the species center.

  7. Self-motion magnitude estimation during linear oscillation - Changes with head orientation and following fatigue

    NASA Technical Reports Server (NTRS)

    Parker, D. E.; Wood, D. L.; Gulledge, W. L.; Goodrich, R. L.

    1979-01-01

    Two types of experiments concerning the estimated magnitude of self-motion during exposure to linear oscillation on a parallel swing are described in this paper. Experiment I examined changes in magnitude estimation as a function of variation of the subject's head orientation, and Experiments II a, II b, and II c assessed changes in magnitude estimation performance following exposure to sustained, 'intense' linear oscillation (fatigue-inducing stimulation). The subjects' performance was summarized employing Stevens' power law (R = k·S^n), where R is perceived self-motion magnitude, k is a constant, S is the amplitude of linear oscillation, and n is an exponent. The results of Experiment I indicated that the exponents, n, for the magnitude estimation functions varied with head orientation and were greatest when the head was oriented 135 deg off the vertical. In Experiments II a-c, the magnitude estimation function exponents were increased following fatigue. Both types of experiments suggest ways in which the vestibular system's contribution to a spatial orientation perceptual system may vary. This variability may be a contributing factor to the development of pilot/astronaut disorientation and may also be implicated in the occurrence of motion sickness.
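
    An illustrative sketch of how the Stevens power-law exponent n in R = k·S^n can be estimated, assuming a simple log-log linear regression; the data below are synthetic, not the experiment's measurements.

      # Fit the Stevens power-law exponent by linear regression of log(R) on log(S).
      import numpy as np

      def stevens_fit(S, R):
          n, log_k = np.polyfit(np.log(S), np.log(R), 1)
          return np.exp(log_k), n

      rng = np.random.default_rng(2)
      S = np.linspace(0.5, 4.0, 30)                              # oscillation amplitude
      R = 1.3 * S**0.8 * np.exp(rng.normal(0, 0.05, S.size))     # noisy magnitude estimates
      print(stevens_fit(S, R))                                   # roughly (1.3, 0.8)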

  8. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  9. A Comparative Study of Distribution System Parameter Estimation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  10. Relative Leukocyte Telomere Length, Hematological Parameters and Anemia - Data from the Berlin Aging Study II (BASE-II).

    PubMed

    Meyer, Antje; Salewsky, Bastian; Buchmann, Nikolaus; Steinhagen-Thiessen, Elisabeth; Demuth, Ilja

    2016-01-01

    The length of the chromosome ends, telomeres, is widely accepted as a biomarker of aging. However, the dynamics of the relationship between telomere length and hematopoietic parameters in the normal aging process, which is of particular interest with respect to age-related anemia, is not well understood. We have analyzed the relationship between relative leukocyte telomere length (rLTL) and several hematological parameters in the older group of the Berlin Aging Study II (BASE-II) participants. This paper also compares rLTL between both BASE-II age groups (22-37 and 60-83 years). Genomic DNA was extracted from peripheral blood leukocytes of BASE-II participants and used to determine rLTL by a quantitative PCR protocol. Standard methods were used to determine blood parameters, and the WHO criteria were used to identify anemic participants. Telomere length data were available for 444 younger participants (28.4 ± 3.1 years old; 52% women) and 1,460 older participants (68.2 ± 3.7 years old; 49.4% women). rLTL was significantly shorter in BASE-II participants of the older group (p = 3.7 × 10^-12) and in women (p = 4.2 × 10^-31). rLTL of older men exhibited a statistically significant, positive partial correlation with mean corpuscular hemoglobin (MCH; p = 0.012) and MCH concentration (p = 0.002). While these correlations were only observed in men, the rLTL of older women was negatively correlated with the number of thrombocytes (p = 0.015) in the same type of analysis. Among all older participants, 6% met the criteria to be categorized as 'anemic'; however, there was no association between anemia and rLTL. In the present study, we have detected isolated correlations between rLTL and hematological parameters; however, in all cases, rLTL explained only a small part of the variation of the analyzed parameters. In disagreement with some other studies showing similar data, we interpret the association between rLTL and some of the hematological parameters studied here to be at most marginal. This applies also to the role of rLTL in anemia, at least in the age group investigated here. Since BASE-II is yet another large cohort in which women have on average shorter telomeres than men, this finding will be addressed in the discussion with respect to the ongoing debate on gender differences in telomere length. © 2016 S. Karger AG, Basel.

  11. Elemental gas-phase abundances of intermediate redshift type Ia supernova star-forming host galaxies

    NASA Astrophysics Data System (ADS)

    Moreno-Raya, M. E.; Galbany, L.; López-Sánchez, Á. R.; Mollá, M.; González-Gaitán, S.; Vílchez, J. M.; Carnero, A.

    2018-05-01

    The maximum luminosity of type Ia supernovae (SNe Ia) depends on the oxygen abundance of the regions of the host galaxies where they explode. This metallicity dependence reduces the dispersion in the Hubble diagram (HD) when included with the traditional two-parameter calibration of SN Ia light-curve parameters and absolute magnitude. In this work, we use empirical calibrations to carefully estimate the oxygen abundance of galaxies hosting SNe Ia from the SDSS-II/SN (Sloan Digital Sky Survey-II Supernova) survey at intermediate redshift by measuring their emission-line intensities. We also derive the electron temperature with the direct method for a small fraction of objects as a consistency check. We find a trend of decreasing oxygen abundance with increasing redshift for the most massive galaxies. Moreover, we study the dependence of the HD residuals (HR) on galaxy oxygen abundance, obtaining a correlation in line with those found in other works. In particular, the HR versus oxygen abundance shows a slope of -0.186 ± 0.123 mag dex^-1 (1.52σ), in good agreement with theoretical expectations. This implies smaller distance moduli after corrections for SNe Ia in metal-rich galaxies. Based on our previous results on local SNe Ia, we propose this dependence to be due to the lower luminosity of the SNe Ia produced in more metal-rich environments.

  12. Role of salivary matrix metalloproteinase-8 (MMP-8) in chronic periodontitis diagnosis.

    PubMed

    Gupta, Namita; Gupta, N D; Gupta, Akash; Khan, Saif; Bansal, Neha

    2015-03-01

    Periodontitis is an inflammatory disease of the periodontium. Any imbalance between the matrix metalloproteinases (MMPs) secreted by neutrophils and tissue inhibitors initiates the destruction of collagen in gum tissue, leading to chronic periodontitis. This study aimed to correlate salivary levels of MMP-8 with periodontal parameters of chronic periodontitis to establish MMP-8 as a noninvasive marker for the early diagnosis of chronic periodontitis. The study involved 40 subjects visiting the periodontic OPD of Dr. Ziauddin Ahmad Dental College and Hospital, located in Aligarh, U.P., India, from 2011 to 2012. The subjects were divided into two groups: group I consisted of 20 periodontally healthy subjects (controls), while group II consisted of 20 patients with chronic periodontitis. Chronic periodontitis was assessed on the basis of several periodontal parameters, including pocket probing depth (PPD), clinical attachment level (CAL), gingival index (GI), and plaque index (PI). Approximately 3 ml of unstimulated, whole expectorated saliva was collected for MMP-8 estimation by ELISA using Quantikine human total MMP-8 immunoassay kits. Data were analyzed using STATISTICA (Windows version 6) software. Salivary MMP-8 levels of groups I and II were 190.91 ± 143.89 ng/ml and 348.26 ± 202.1 ng/ml, respectively. The MMP-8 levels and periodontal status (PPD, CAL, GI, and PI) of groups I and II showed positive and significant correlations (for PPD, r = 0.63, P < 0.001; for CAL, r = 0.54, P < 0.001; for GI, r = 0.49, P < 0.001; and for PI, r = 0.63, P < 0.001). The results of this study demonstrate elevated concentrations of MMP-8 in individuals with chronic periodontitis.

  13. Assessing Forest NPP: BIOME-BGC Predictions versus BEF Derived Estimates

    NASA Astrophysics Data System (ADS)

    Hasenauer, H.; Pietsch, S. A.; Petritsch, R.

    2007-05-01

    Forest productivity has always been a major issue within sustainable forest management. While in the past terrestrial forest inventory data have been the major source for assessing forest productivity, recent developments in ecosystem modeling offer an alternative approach using ecosystem models such as Biome-BGC to estimate Net Primary Production (NPP). In this study we compare two terrestrially driven approaches for assessing NPP: (i) estimates from a species-specific adaptation of the biogeochemical ecosystem model BIOME-BGC calibrated for Alpine conditions; and (ii) NPP estimates derived from inventory data using biomass expansion factors (BEF). The forest inventory data come from 624 sample plots across Austria, consist of repeated individual tree observations, and include growth as well as soil and humus information. These locations are covered with spruce, beech, oak, pine and larch stands, thus addressing the main Austrian forest types. 144 locations were previously used in a validation effort to produce species-specific parameter estimates of the ecosystem model. The remaining 480 sites are from the Austrian National Forest Soil Survey carried out at the Federal Research and Training Centre for Forests, Natural Hazards and Landscape (BFW). Using diameter at breast height (dbh) and height (h), the volume and subsequently the biomass of individual trees were calculated, aggregated for the whole forest stand, and compared with the model output. Regression analyses were performed for both volume and biomass estimates.

  14. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    PubMed Central

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
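
    A hedged sketch of one orthogonal-projection style of local identifiability ranking: columns of a parameter sensitivity matrix are selected one at a time by largest residual norm, with the remaining columns projected onto the orthogonal complement of the selected set. The sensitivity matrix and cutoff below are invented placeholders, not the sugar cane model or the paper's exact algorithm.

      # Illustrative orthogonal-projection ranking of identifiable parameters.
      import numpy as np

      def orthogonal_identifiability(S, tol=1e-2):
          """S: (n_outputs x n_parameters) sensitivity matrix. Returns the indices
          of parameters judged identifiable, in order of selection."""
          S = np.asarray(S, dtype=float)
          residual = S.copy()
          selected = []
          for _ in range(S.shape[1]):
              norms = np.linalg.norm(residual, axis=0)
              norms[selected] = 0.0
              j = int(np.argmax(norms))
              if norms[j] < tol:
                  break                               # remaining parameters not identifiable
              selected.append(j)
              Q, _ = np.linalg.qr(S[:, selected])     # basis of selected sensitivity columns
              residual = S - Q @ (Q.T @ S)            # project out selected directions
          return selected

      rng = np.random.default_rng(10)
      S = rng.normal(size=(50, 5))
      S[:, 4] = 0.5 * S[:, 0] + 0.5 * S[:, 1]         # a linearly dependent (nonidentifiable) column
      print(orthogonal_identifiability(S))             # column 4 should be excluded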

  15. A Fresh Start for Flood Estimation in Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Woods, R. A.

    2017-12-01

    The two standard methods for flood estimation in ungauged basins, regression-based statistical models and rainfall-runoff models using a design rainfall event, have survived relatively unchanged as the methods of choice for more than 40 years. Their technical implementation has developed greatly, but the models' representation of hydrological processes has not, despite a large volume of hydrological research. I suggest it is time to introduce more hydrology into flood estimation. The reliability of the current methods can be unsatisfactory. For example, despite the UK's relatively straightforward hydrology, regression estimates of the index flood are uncertain by +/- a factor of two (for a 95% confidence interval), an impractically large uncertainty for design. The standard error of rainfall-runoff model estimates is not usually known, but available assessments indicate poorer reliability than for statistical methods. There is a practical need for improved reliability in flood estimation. Two promising candidates to supersede the existing methods are (i) continuous simulation by rainfall-runoff modelling and (ii) event-based derived distribution methods. The main challenge with continuous simulation methods in ungauged basins is to specify the model structure and parameter values when calibration data are not available. This has been an active area of research for more than a decade, and this activity is likely to continue. The major challenges for the derived distribution method in ungauged catchments include not only the correct specification of model structure and parameter values, but also antecedent conditions (e.g. seasonal soil water balance). However, a much smaller community of researchers is active in developing or applying the derived distribution approach, and as a result slower progress is being made. A change is needed: surely we have learned enough about hydrology in the last 40 years that we can make a practical hydrological advance on our methods for flood estimation! A shift to new methods for flood estimation will not be taken lightly by practitioners. However, the standard for change is clear - can we develop new methods which give significant improvements in reliability over those existing methods which are demonstrably unsatisfactory?

  16. Hybrid Cubature Kalman filtering for identifying nonlinear models from sampled recording: Estimation of neuronal dynamics.

    PubMed

    Madi, Mahmoud K; Karameh, Fadi N

    2017-01-01

    Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. The recent advent of the Cubature Kalman filter (CKF) has extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques, therefore, carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potentials (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, a sharp accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF. Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate under CD-CKF. In conclusion, and with the CKF recently benchmarked against other advanced Bayesian techniques, the CD-CKF framework could provide significant gains in robustness and accuracy when estimating models of a variety of biological phenomena where the underlying process dynamics unfold at time scales faster than those seen in collected measurements.

  17. Hybrid Cubature Kalman filtering for identifying nonlinear models from sampled recording: Estimation of neuronal dynamics

    PubMed Central

    2017-01-01

    Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. The recent advent of the Cubature Kalman filter (CKF) has extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques, therefore, carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potentials (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, a sharp accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF. Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate under CD-CKF. In conclusion, and with the CKF recently benchmarked against other advanced Bayesian techniques, the CD-CKF framework could provide significant gains in robustness and accuracy when estimating models of a variety of biological phenomena where the underlying process dynamics unfold at time scales faster than those seen in collected measurements. PMID:28727850

  18. Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior

    NASA Technical Reports Server (NTRS)

    Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.

    2017-01-01

    A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
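
    A minimal sketch of the "parameter filter" ingredient alone, assuming a Kalman filter that tracks time-varying parameters modeled as a random walk in a scalar measurement equation; this illustrates the random-walk parameter model, not the paper's full dual-filter setup or its human-operator model.

      # Kalman filter tracking random-walk parameters in y_k = phi_k^T theta_k + v_k.
      import numpy as np

      def random_walk_kf(Phi, y, q=1e-4, r=1e-2):
          n = Phi.shape[1]
          theta = np.zeros(n)
          P = np.eye(n)
          est = []
          for phi, yk in zip(Phi, y):
              P = P + q * np.eye(n)                  # random-walk prediction
              S = phi @ P @ phi + r
              K = P @ phi / S                        # Kalman gain
              theta = theta + K * (yk - phi @ theta)
              P = P - np.outer(K, phi) @ P
              est.append(theta.copy())
          return np.array(est)

      # Example: a gain that drifts slowly over time
      rng = np.random.default_rng(3)
      t = np.arange(500)
      true_gain = 1.0 + 0.002 * t
      u = rng.normal(size=t.size)
      y = true_gain * u + rng.normal(0, 0.1, t.size)
      est = random_walk_kf(u[:, None], y)
      print(est[-1], true_gain[-1])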

  19. The Konus-Wind Catalog of Gamma-Ray Bursts with Known Redshifts. I. Bursts Detected in the Triggered Mode

    NASA Astrophysics Data System (ADS)

    Tsvetkova, A.; Frederiks, D.; Golenetskii, S.; Lysenko, A.; Oleynik, P.; Pal'shin, V.; Svinkin, D.; Ulanov, M.; Cline, T.; Hurley, K.; Aptekar, R.

    2017-12-01

    In this catalog, we present the results of a systematic study of gamma-ray bursts (GRBs) with reliable redshift estimates detected in the triggered mode of the Konus-Wind (KW) experiment during the period from 1997 February to 2016 June. The sample consists of 150 GRBs (including 12 short/hard bursts) and represents the largest set of cosmological GRBs studied to date over a broad energy band. From the temporal and spectral analyses of the sample, we provide the burst durations, the spectral lags, the results of spectral fits with two model functions, the total energy fluences, and the peak energy fluxes. Based on the GRB redshifts, which span the range 0.1 ≤ z ≤ 5, we estimate the rest-frame, isotropic-equivalent energy and peak luminosity. For 32 GRBs with reasonably constrained jet breaks, we provide the collimation-corrected values of the energetics. We consider the behavior of the rest-frame GRB parameters in the hardness-duration and hardness-intensity planes, and confirm the “Amati” and “Yonetoku” relations for Type II GRBs. The correction for the jet collimation does not improve these correlations for the KW sample. We discuss the influence of instrumental selection effects on the GRB parameter distributions and estimate the KW GRB detection horizon, which extends to z ≈ 16.6, stressing the importance of GRBs as probes of the early universe. Accounting for the instrumental bias, we estimate the KW GRB luminosity evolution, luminosity and isotropic-energy functions, and the evolution of the GRB formation rate, which are in general agreement with those obtained in previous studies.

  20. Approximating Smooth Step Functions Using Partial Fourier Series Sums

    DTIC Science & Technology

    2012-09-01

    [The record's abstract excerpt is an extraction fragment of MATLAB code that builds a smoothed step function by evaluating a Bezier-based smoothstep helper (smoothstepbez) and interpolating it with interp1 using linear extrapolation ('extrap'); the fragment ends mid-sentence: "In this case, because x is also defined as a function of the independent parameter".]

  1. Non-Linear System Identification for Aeroelastic Systems with Application to Experimental Data

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2008-01-01

    Representation and identification of a non-linear aeroelastic pitch-plunge system as a model of the NARMAX class is considered. A non-linear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to the aeroelastic dynamics, and their properties are demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (i) the outputs of the NARMAX model match closely those generated using continuous-time methods and (ii) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.
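
    A hedged sketch of the general NARX/NARMAX identification idea: a polynomial difference-equation model whose regressors are nonlinear in the data but linear in the parameters, so ordinary least squares applies. The regressor set and coefficients below are illustrative, not the pitch-plunge model of the paper.

      # Polynomial NARX identification by linear least squares (illustrative).
      import numpy as np

      def narx_fit(u, y):
          # Regressor matrix: [y[k-1], u[k-1], y[k-1]^2, y[k-1]*u[k-1]]
          Y1, U1 = y[:-1], u[:-1]
          X = np.column_stack([Y1, U1, Y1**2, Y1 * U1])
          theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
          return theta

      rng = np.random.default_rng(4)
      u = rng.normal(size=2000)
      y = np.zeros(2000)
      for k in range(1, 2000):       # simulate a "true" NARX system
          y[k] = 0.5 * y[k-1] + 0.3 * u[k-1] - 0.05 * y[k-1]**2 + rng.normal(0, 0.01)
      print(narx_fit(u, y))          # expect approximately [0.5, 0.3, -0.05, 0.0]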

  2. Controlling directed transport of matter-wave solitons using the ratchet effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, M.; Carretero-Gonzalez, R.; Chacon, R.

    2011-05-15

    We demonstrate that directed transport of bright solitons formed in a quasi-one-dimensional Bose-Einstein condensate can be reliably controlled by tailoring a weak optical lattice potential, biharmonic in both space and time, in accordance with the degree of symmetry breaking mechanism. By considering the regime where matter-wave solitons are narrow compared to the lattice period, (i) we propose an analytical estimate for the dependence of the directed soliton current on the biharmonic potential parameters that is in good agreement with numerical experiments, and (ii) we show that the dependence of the directed soliton current on the number of atoms is a consequence of the ratchet universality.

  3. A computer program (MODFLOWP) for estimating parameters of a transient, three-dimensional ground-water flow model using nonlinear regression

    USGS Publications Warehouse

    Hill, Mary Catherine

    1992-01-01

    This report documents a new version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW) which, with the new Parameter-Estimation Package that also is documented in this report, can be used to estimate parameters by nonlinear regression. The new version of MODFLOW is called MODFLOWP (pronounced MOD-FLOW-P), and functions nearly identically to MODFLOW when the Parameter-Estimation Package is not used. Parameters are estimated by minimizing a weighted least-squares objective function by the modified Gauss-Newton method or by a conjugate-direction method. Parameters used to calculate the following MODFLOW model inputs can be estimated: Transmissivity and storage coefficient of confined layers; hydraulic conductivity and specific yield of unconfined layers; vertical leakance; vertical anisotropy (used to calculate vertical leakance); horizontal anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge rates; maximum evapotranspiration; pumpage rates; and the hydraulic head at constant-head boundaries. Any spatial variation in parameters can be defined by the user. Data used to estimate parameters can include existing independent estimates of parameter values, observed hydraulic heads or temporal changes in hydraulic heads, and observed gains and losses along head-dependent boundaries (such as streams). Model output includes statistics for analyzing the parameter estimates and the model; these statistics can be used to quantify the reliability of the resulting model, to suggest changes in model construction, and to compare results of models constructed in different ways.
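
    A generic sketch of minimizing a weighted least-squares objective S(b) = (y - f(b))^T W (y - f(b)) by Gauss-Newton iteration, the kind of scheme the Parameter-Estimation Package applies; the toy two-parameter model, weights, and starting values below are illustrative, not a ground-water flow model.

      # Gauss-Newton minimization of a weighted least-squares objective (toy model).
      import numpy as np

      def gauss_newton(f, jac, y, W, b0, iters=15):
          b = np.asarray(b0, dtype=float)
          for _ in range(iters):
              r = y - f(b)                           # residuals
              J = jac(b)                             # sensitivity (Jacobian) matrix
              step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
              b = b + step
          return b

      # Toy model: y = b0 * exp(-b1 * x) observed with equal weights 1/sigma^2
      x = np.linspace(0, 5, 30)
      f = lambda b: b[0] * np.exp(-b[1] * x)
      jac = lambda b: np.column_stack([np.exp(-b[1] * x), -b[0] * x * np.exp(-b[1] * x)])
      rng = np.random.default_rng(5)
      y = f([2.0, 0.7]) + rng.normal(0, 0.01, x.size)
      W = np.eye(x.size) / 0.01**2
      print(gauss_newton(f, jac, y, W, [1.5, 0.5]))   # expect about [2.0, 0.7]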

  4. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    NASA Astrophysics Data System (ADS)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and the capability to directly control the transient response time of the estimates. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the system property of parametric identifiability introduced here. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied to a nonlinear adaptive speed-tracking vector control of a three-phase induction motor.

  5. Data-Adaptive Bias-Reduced Doubly Robust Estimation.

    PubMed

    Vermeulen, Karel; Vansteelandt, Stijn

    2016-05-01

    Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.
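
    A minimal sketch of a standard doubly robust (AIPW) estimator of a population mean under missingness at random, which combines an outcome regression and a propensity-score model and is consistent if either working model is correct; the simple working models and simulated data are illustrative assumptions, not the article's bias-reduced or data-adaptive construction.

      # Augmented inverse-probability-weighted (AIPW) estimator of E[Y] (illustrative).
      import numpy as np

      def aipw_mean(Y, R, pi_hat, m_hat):
          """R=1 if Y observed; pi_hat: P(R=1|X); m_hat: E[Y|X] predictions."""
          Y_filled = np.where(R == 1, Y, 0.0)
          return np.mean(m_hat + R * (Y_filled - m_hat) / pi_hat)

      rng = np.random.default_rng(6)
      n = 5000
      X = rng.normal(size=n)
      Y = 2.0 + X + rng.normal(size=n)
      p = 1 / (1 + np.exp(-(0.5 + X)))              # true missingness mechanism
      R = rng.binomial(1, p)
      # Working models (fit crudely here for brevity): outcome regression on the
      # observed cases and, for illustration, the true propensity score.
      coef = np.polyfit(X[R == 1], Y[R == 1], 1)
      m_hat = np.polyval(coef, X)
      print(aipw_mean(Y, R, p, m_hat), Y.mean())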

  6. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

    PubMed

    da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

    2016-07-08

    Process modeling can lead to advantages such as helping in process control, reducing process costs, and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Also, parameter estimation with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters considering different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable and that better results were obtained with the removal of these parameters from the estimation procedure. Therefore, PIA can be useful to the estimation procedure, since it may reduce the number of parameters to be estimated. Further, PIA improved the model results, proving to be an important step to take. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.

  7. Estimation of the Parameters in a Two-State System Coupled to a Squeezed Bath

    NASA Astrophysics Data System (ADS)

    Hu, Yao-Hua; Yang, Hai-Feng; Tan, Yong-Gang; Tao, Ya-Ping

    2018-04-01

    Estimation of the phase and weight parameters of a two-state system in a squeezed bath by calculating the quantum Fisher information is investigated. The results show that, both for the phase estimation and for the weight estimation, the quantum Fisher information always decays with time and changes periodically with the phases. The estimation precision can be enhanced by choosing proper values of the phases and the squeezing parameter. These results provide a reference for the practical application of parameter estimation in a squeezed bath.

  8. Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.

    PubMed

    Ette, E I; Howie, C A; Kelman, A W; Whiting, B

    1995-05-01

    A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three and four time point designs was evaluated in terms of the percent prediction error, design number, coverage of individual and joint confidence intervals for the parameter estimates, and correlation analysis. The data sets contained random terms for both inter- and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time point for the three and four time point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
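
    An illustrative Monte Carlo sketch of evaluating a destructive sampling design for a one-compartment IV-bolus model C(t) = (Dose/V)·exp(-(CL/V)·t), where each animal contributes one concentration and the population parameters are recovered from a pooled log-linear fit; the time points, variability magnitudes, and fitting shortcut are invented assumptions, not the study's design or estimation method.

      # Percent prediction error of clearance for one candidate 3-point design.
      import numpy as np

      rng = np.random.default_rng(7)
      dose, CL_pop, V_pop = 10.0, 1.0, 5.0
      times = np.array([0.5, 2.0, 6.0])             # a candidate 3-time-point design
      n_per_time, n_sim = 4, 1000
      err_CL = []
      for _ in range(n_sim):
          t_all, c_all = [], []
          for t in times:
              CL = CL_pop * np.exp(rng.normal(0, 0.2, n_per_time))   # ~20% inter-animal CV
              V = V_pop * np.exp(rng.normal(0, 0.2, n_per_time))
              c = dose / V * np.exp(-CL / V * t) * np.exp(rng.normal(0, 0.1, n_per_time))
              t_all.append(np.full(n_per_time, t))
              c_all.append(c)
          t_all, c_all = np.concatenate(t_all), np.concatenate(c_all)
          slope, intercept = np.polyfit(t_all, np.log(c_all), 1)
          V_hat = dose / np.exp(intercept)
          CL_hat = -slope * V_hat
          err_CL.append(100 * (CL_hat - CL_pop) / CL_pop)
      print(np.mean(err_CL), np.std(err_CL))        # percent prediction error for CL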

  9. Robust gaze-steering of an active vision system against errors in the estimated parameters

    NASA Astrophysics Data System (ADS)

    Han, Youngmo

    2015-01-01

    Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.

  10. An Evaluation of Hierarchical Bayes Estimation for the Two- Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho

    Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item parameters. Simulated data sets were analyzed using two different Bayes estimation procedures, the two-stage hierarchical Bayes estimation (HB2) and the marginal Bayesian with known hyperparameters (MB), and marginal maximum…

  11. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  12. Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.

    PubMed

    Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B

    2005-06-01

    This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment-model-dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and Visual Basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained with those from a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
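
    A minimal sketch of the one-compartment case of such a conversion, assuming IV-bolus dosing and the standard relations CL = Dose/AUC and V = CL/k with k = ln2/t_half; the two-compartment case solved with Excel's Solver in the paper is not reproduced here, and the example numbers are invented.

      # Convert non-compartmental variables (AUC, terminal half-life) to CL and V.
      import numpy as np

      def back_analysis_1cpt(dose, auc, t_half):
          CL = dose / auc                 # clearance from AUC
          k = np.log(2) / t_half          # elimination rate constant
          V = CL / k                      # volume of distribution
          return CL, V, k

      print(back_analysis_1cpt(dose=100.0, auc=50.0, t_half=4.0))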

  13. SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.

    PubMed

    Zi, Zhike

    2011-04-01

    Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.

  14. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  15. Kr photoionized plasma induced by intense extreme ultraviolet pulses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartnik, A., E-mail: andrzej.bartnik@wat.edu.pl; Wachulak, P.; Fiedorowicz, H.

    Irradiation of any gas with an intense EUV (extreme ultraviolet) radiation beam can result in creation of photoionized plasmas. The parameters of such plasmas can be significantly different when compared with those of the laser produced plasmas (LPP) or discharge plasmas. In this work, the photoionized plasmas were created in a krypton gas irradiated using an LPP EUV source operating at a 10 Hz repetition rate. The Kr gas was injected into the vacuum chamber synchronously with the EUV radiation pulses. The EUV beam was focused onto a Kr gas stream using an axisymmetrical ellipsoidal collector. The resulting low temperature Kr plasmas emitted electromagnetic radiation in the wide spectral range. The emission spectra were measured either in the EUV or an optical range. The EUV spectrum was dominated by emission lines originating from Kr III and Kr IV ions, and the UV/VIS spectra were composed from Kr II and Kr I lines. The spectral lines recorded in EUV, UV, and VIS ranges were used for the construction of Boltzmann plots to be used for the estimation of the electron temperature. It was shown that for the lowest Kr III and Kr IV levels, the local thermodynamic equilibrium (LTE) conditions were not fulfilled. The electron temperature was thus estimated based on Kr II and Kr I species where the partial LTE conditions could be expected.

  16. Kr photoionized plasma induced by intense extreme ultraviolet pulses

    NASA Astrophysics Data System (ADS)

    Bartnik, A.; Wachulak, P.; Fiedorowicz, H.; Skrzeczanowski, W.

    2016-04-01

    Irradiation of any gas with an intense EUV (extreme ultraviolet) radiation beam can result in creation of photoionized plasmas. The parameters of such plasmas can be significantly different when compared with those of the laser produced plasmas (LPP) or discharge plasmas. In this work, the photoionized plasmas were created in a krypton gas irradiated using an LPP EUV source operating at a 10 Hz repetition rate. The Kr gas was injected into the vacuum chamber synchronously with the EUV radiation pulses. The EUV beam was focused onto a Kr gas stream using an axisymmetrical ellipsoidal collector. The resulting low temperature Kr plasmas emitted electromagnetic radiation in the wide spectral range. The emission spectra were measured either in the EUV or an optical range. The EUV spectrum was dominated by emission lines originating from Kr III and Kr IV ions, and the UV/VIS spectra were composed from Kr II and Kr I lines. The spectral lines recorded in EUV, UV, and VIS ranges were used for the construction of Boltzmann plots to be used for the estimation of the electron temperature. It was shown that for the lowest Kr III and Kr IV levels, the local thermodynamic equilibrium (LTE) conditions were not fulfilled. The electron temperature was thus estimated based on Kr II and Kr I species where the partial LTE conditions could be expected.
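
    A sketch of the Boltzmann-plot estimate of the electron temperature: for lines of one ionization stage, ln(I·λ/(g·A)) plotted against the upper-level energy E is fit with a straight line whose slope is -1/(k_B·T_e). The line data below are fabricated placeholders for illustration, not measured Kr values.

      # Electron temperature from a Boltzmann plot (illustrative data).
      import numpy as np

      K_B_EV = 8.617333e-5            # Boltzmann constant in eV/K

      def boltzmann_temperature(intensity, wavelength_nm, g, A, E_upper_eV):
          ylog = np.log(intensity * wavelength_nm / (g * A))
          slope, _ = np.polyfit(E_upper_eV, ylog, 1)
          return -1.0 / (slope * K_B_EV)          # T_e in kelvin

      # Fabricated example lines for illustration only
      E = np.array([12.0, 13.1, 14.0, 14.8])      # upper-level energies, eV
      g = np.array([5, 3, 7, 5])                  # statistical weights
      A = np.array([2.0e7, 1.2e7, 3.1e7, 0.8e7])  # transition probabilities, s^-1
      lam = np.array([450.0, 435.0, 465.0, 473.0])
      T_true = 11600.0                            # ~1 eV
      I = g * A / lam * np.exp(-E / (K_B_EV * T_true))
      print(boltzmann_temperature(I, lam, g, A, E))   # recovers ~11600 K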

  17. Comparison of stability and control parameters for a light, single-engine, high-winged aircraft using different flight test and parameter estimation techniques

    NASA Technical Reports Server (NTRS)

    Suit, W. T.; Cannaday, R. L.

    1979-01-01

    The longitudinal and lateral stability and control parameters for a high-wing general aviation airplane are examined. Estimates obtained from flight data at various flight conditions within the normal operating range of the aircraft are presented. The estimation techniques, an output error technique (maximum likelihood) and an equation error technique (linear regression), are described. The longitudinal static parameters are estimated from climbing, descending, and quasi-steady-state flight data. The lateral excitations involve a combination of rudder and ailerons. The sensitivity of the aircraft modes of motion to variations in the parameter estimates is discussed.

  18. Determination of stability and control parameters of a light airplane from flight data using two estimation methods. [equation error and maximum likelihood methods

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1979-01-01

    Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.

  19. A z-Vertex Trigger for Belle II

    NASA Astrophysics Data System (ADS)

    Skambraks, S.; Abudinén, F.; Chen, Y.; Feindt, M.; Frühwirth, R.; Heck, M.; Kiesling, C.; Knoll, A.; Neuhaus, S.; Paul, S.; Schieck, J.

    2015-08-01

    The Belle II experiment will go into operation at the upgraded SuperKEKB collider in 2016. SuperKEKB is designed to deliver an instantaneous luminosity L = 8 × 10^35 cm^-2 s^-1. The experiment will therefore have to cope with a much larger machine background than its predecessor Belle, in particular from events outside of the interaction region. We present the concept of a track trigger, based on a neural network approach, that is able to suppress a large fraction of this background by reconstructing the z (longitudinal) position of the event vertex within the latency of the first level trigger. The trigger uses the hit information from the Central Drift Chamber (CDC) of Belle II within narrow cones in polar and azimuthal angle as well as in transverse momentum (“sectors”), and estimates the z-vertex without explicit track reconstruction. The preprocessing for the track trigger is based on the track information provided by the standard CDC trigger. It takes input from the 2D track finder, adds information from the stereo wires of the CDC, and finds the appropriate sectors in the CDC for each track. Within the sector, the z-vertex is estimated by a specialized neural network, with the drift times from the CDC as input and a continuous output corresponding to the scaled z-vertex. The neural algorithm will be implemented in programmable hardware. To this end a Virtex 7 FPGA board will be used, which provides at present the most promising solution for a fully parallelized implementation of neural networks or alternative multivariate methods. A high speed interface for external memory will be integrated into the platform, to be able to store the O(10^9) parameters required. The contribution presents the results of our feasibility studies and discusses the details of the envisaged hardware solution.

  20. ALMA Imaging and Gravitational Lens Models of South Pole Telescope—Selected Dusty, Star-Forming Galaxies at High Redshifts

    NASA Astrophysics Data System (ADS)

    Spilker, J. S.; Marrone, D. P.; Aravena, M.; Béthermin, M.; Bothwell, M. S.; Carlstrom, J. E.; Chapman, S. C.; Crawford, T. M.; de Breuck, C.; Fassnacht, C. D.; Gonzalez, A. H.; Greve, T. R.; Hezaveh, Y.; Litke, K.; Ma, J.; Malkan, M.; Rotermund, K. M.; Strandet, M.; Vieira, J. D.; Weiss, A.; Welikala, N.

    2016-08-01

    The South Pole Telescope has discovered 100 gravitationally lensed, high-redshift, dusty, star-forming galaxies (DSFGs). We present 0.5″ resolution 870 μm Atacama Large Millimeter/submillimeter Array imaging of a sample of 47 DSFGs spanning z = 1.9-5.7, and construct gravitational lens models of these sources. Our visibility-based lens modeling incorporates several sources of residual interferometric calibration uncertainty, allowing us to properly account for noise in the observations. At least 70% of the sources are strongly lensed by foreground galaxies (μ_870μm > 2), with a median magnification of μ_870μm = 6.3, extending to μ_870μm > 30. We compare the intrinsic size distribution of the strongly lensed sources to a similar number of unlensed DSFGs and find no significant differences in spite of a bias between the magnification and intrinsic source size. This may indicate that the true size distribution of DSFGs is relatively narrow. We use the source sizes to constrain the wavelength at which the dust optical depth is unity and find this wavelength to be correlated with the dust temperature. This correlation leads to discrepancies in dust mass estimates of a factor of two compared to estimates using a single value for this wavelength. We investigate the relationship between the [C II] line and the far-infrared luminosity and find that the same correlation between the [C II]/L_FIR ratio and Σ_FIR found for low-redshift star-forming galaxies applies to high-redshift galaxies and extends at least two orders of magnitude higher in Σ_FIR. This lends further credence to the claim that the compactness of the IR-emitting region is the controlling parameter in establishing the “[C II] deficit.”

  1. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter could vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and enhanced signal-to-noise ratio.
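    A minimal sketch of the selection-and-averaging step described above, assuming an ensemble of spatially varying posterior parameter values on a model grid; the spread threshold and array layout are illustrative, not the published ASA configuration.

```python
import numpy as np

def adaptive_spatial_average(param_ens, spread_quantile=0.25):
    """Collapse spatially varying posterior parameter values to one global value.

    param_ens : array (n_ens, n_grid) of posterior parameter values per ensemble
                member and grid cell (hypothetical layout).
    Grid cells whose ensemble spread lies in the lowest `spread_quantile`
    fraction are treated as the "good" estimates and averaged.
    """
    spread = param_ens.std(axis=0)                      # ensemble spread per cell
    good = spread <= np.quantile(spread, spread_quantile)
    return param_ens[:, good].mean()                    # single global posterior value

# Toy usage: the true parameter is 0.8; spread varies across the grid
rng = np.random.default_rng(1)
n_ens, n_grid = 20, 500
cell_noise = rng.uniform(0.02, 0.5, size=n_grid)        # spatially varying spread
ens = 0.8 + cell_noise * rng.normal(size=(n_ens, n_grid))
print(adaptive_spatial_average(ens))
```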

  2. Accurate step-hold tracking of smoothly varying periodic and aperiodic probability.

    PubMed

    Ricci, Matthew; Gallistel, Randy

    2017-07-01

    Subjects observing many samples from a Bernoulli distribution are able to perceive an estimate of the generating parameter. A question of fundamental importance is how the current percept (what we think the probability now is) depends on the sequence of observed samples. Answers to this question are strongly constrained by the manner in which the current percept changes in response to changes in the hidden parameter. Subjects do not update their percept trial-by-trial when the hidden probability undergoes unpredictable and unsignaled step changes; instead, they update it only intermittently in a step-hold pattern. It could be that the step-hold pattern is not essential to the perception of probability and is only an artifact of step changes in the hidden parameter. However, we now report that the step-hold pattern obtains even when the parameter varies slowly and smoothly. It obtains even when the smooth variation is periodic (sinusoidal) and perceived as such. We elaborate on a previously published theory that accounts for: (i) the quantitative properties of the step-hold update pattern; (ii) subjects' quick and accurate reporting of changes; (iii) subjects' second thoughts about previously reported changes; (iv) subjects' detection of higher-order structure in patterns of change. We also call attention to the challenges these results pose for trial-by-trial updating theories.

  3. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

    A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
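    The underlying idea can be illustrated by brute force: for each candidate sensor subset, solve the steady-state Kalman filtering Riccati equation and score the subset by the trace of the estimation-error covariance. The toy matrices and exhaustive search below are assumptions for illustration; the paper's tuning-parameter optimization is not reproduced.

```python
import numpy as np
from itertools import combinations
from scipy.linalg import solve_discrete_are

def sensor_subset_score(A, C, Q, R, rows):
    """Steady-state a priori estimation-error trace for a subset of sensor rows."""
    Cs = C[list(rows), :]
    Rs = R[np.ix_(rows, rows)]
    # Duality: the filtering ARE is the control ARE evaluated with (A.T, C.T)
    P = solve_discrete_are(A.T, Cs.T, Q, Rs)
    return np.trace(P)

# Toy system: 3 health states, 4 candidate sensors, pick the best pair
rng = np.random.default_rng(4)
A = 0.95 * np.eye(3) + 0.01 * rng.normal(size=(3, 3))
C = rng.normal(size=(4, 3))
Q = 0.01 * np.eye(3)
R = 0.1 * np.eye(4)

best = min(combinations(range(4), 2), key=lambda s: sensor_subset_score(A, C, Q, R, s))
print("best sensor pair:", best)
```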

  4. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    NASA Astrophysics Data System (ADS)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from the forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting-up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting-up of initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
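    Gradient matching can be illustrated on a much simpler ODE than the Richards equation: fit a smoother to the observations, differentiate it, and choose parameters so that the model right-hand side matches the smoothed derivative, with no forward integration. The spline-based sketch below is a stand-in for the Gaussian-process machinery of GPODE/AGM and uses a logistic ODE purely as an illustration.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize

# Synthetic observations of a logistic ODE dy/dt = r*y*(1 - y/K)
t = np.linspace(0, 10, 40)
r_true, K_true, y0 = 0.8, 5.0, 0.5
y = K_true / (1 + (K_true - y0) / y0 * np.exp(-r_true * t))
y_obs = y + 0.05 * np.random.default_rng(2).normal(size=t.size)

spline = UnivariateSpline(t, y_obs, s=t.size * 0.05**2)  # smoother fitted to the data
dy_dt = spline.derivative()(t)                           # "observed" gradients
y_fit = spline(t)                                        # smoothed states

def mismatch(params):
    r, K = params
    rhs = r * y_fit * (1 - y_fit / K)     # model gradients at the smoothed states
    return np.sum((rhs - dy_dt) ** 2)     # gradient-matching objective

res = minimize(mismatch, x0=[0.5, 3.0], method="Nelder-Mead")
print(res.x)  # estimates of (r, K) obtained without solving the ODE
```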

  5. A variational approach to parameter estimation in ordinary differential equations.

    PubMed

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, neither of which is constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.

  6. Estimation of distributional parameters for censored trace level water quality data: 2. Verification and applications

    USGS Publications Warehouse

    Helsel, Dennis R.; Gilliom, Robert J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.
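    A highly simplified sketch in the spirit of log probability regression: regress the logarithms of the uncensored values on normal scores of their plotting positions, impute the censored observations from the fitted line, and compute the moments. The plotting-position formula and the single detection limit are simplifying assumptions, not the exact procedure evaluated in the paper.

```python
import numpy as np
from scipy import stats

def lr_censored_moments(values, censored):
    """Estimate mean and standard deviation from singly censored data.

    values   : observed concentrations (censored entries hold the detection limit)
    censored : boolean array, True where the value is below the detection limit
    """
    n = len(values)
    order = np.argsort(values)
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)
    z = stats.norm.ppf(ranks / (n + 1))          # normal scores of plotting positions

    # Fit log(value) vs normal score on the uncensored data only
    slope, intercept = np.polyfit(z[~censored], np.log(values[~censored]), 1)

    # Impute censored observations from the fitted line, then take sample moments
    filled = np.where(censored, np.exp(intercept + slope * z), values)
    return filled.mean(), filled.std(ddof=1)

# Toy usage with a detection limit of 1.0
vals = np.array([1.0, 1.0, 1.0, 1.2, 1.8, 2.4, 3.1, 4.0, 6.5, 9.0])
cens = vals <= 1.0
print(lr_censored_moments(vals, cens))
```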

  7. Iron and manganese removal: Recent advances in modelling treatment efficiency by rapid sand filtration.

    PubMed

    Vries, D; Bertelkamp, C; Schoonenberg Kegel, F; Hofs, B; Dusseldorp, J; Bruins, J H; de Vet, W; van den Akker, B

    2017-02-01

    A model has been developed that takes into account the main characteristics of (submerged) rapid filtration: the water quality parameters of the influent water, notably pH, iron(II) and manganese(II) concentrations, homogeneous oxidation in the supernatant layer, surface sorption and heterogeneous oxidation kinetics in the filter, and filter media adsorption characteristics. Simplifying assumptions are made to enable validation in practice, while maintaining the main mechanisms involved in iron(II) and manganese(II) removal. Adsorption isotherm data collected from different Dutch treatment sites show that Fe(II)/Mn(II) adsorption may vary substantially between them, but generally increases with higher pH. The model is sensitive to (experimentally) determined adsorption parameters and the heterogeneous oxidation rate. Model results coincide with experimental values when the heterogeneous rate constants are calibrated. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Parameter estimation of qubit states with unknown phase parameter

    NASA Astrophysics Data System (ADS)

    Suzuki, Jun

    2015-02-01

    We discuss the problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for mean square errors (MSEs) when estimating relevant parameters with separable measurements based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement which attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.

  9. Youth Attitude Tracking Study II Wave 17 -- Fall 1986.

    DTIC Science & Technology

    1987-06-01

    No abstract is available for this record; the entry reproduces fragments of the report's front matter and table of contents. Recoverable entries cover segmentation analyses, the methodology of YATS II (sampling design overview), Appendix A (sampling design, estimation procedures and estimated sampling errors), and Appendix B (data collection procedures).

  10. A dye-binding assay for measurement of the binding of Cu(II) to proteins.

    PubMed

    Wilkinson-White, Lorna E; Easterbrook-Smith, Simon B

    2008-10-01

    We analysed the theory of the coupled equilibria between a metal ion, a metal ion-binding dye and a metal ion-binding protein in order to develop a procedure for estimating the apparent affinity constant of a metal ion:protein complex. This can be done by analysing measurements of the change in the concentration of the metal ion:dye complex with variation in the concentration of either the metal ion or the protein. Using experimentally determined values for the affinity constant of Cu(II) for the dye, 2-(5-bromo-2-pyridylazo)-5-(N-propyl-N-sulfopropylamino)aniline (5-Br-PSAA), this procedure was used to estimate the apparent affinity constants for formation of Cu(II):transthyretin, yielding values which were in agreement with literature values. An apparent affinity constant for Cu(II) binding to alpha-synuclein of approximately 1 x 10(9) M(-1) was obtained from measurements of tyrosine fluorescence quenching by Cu(II). This value was in good agreement with that obtained using 5-Br-PSAA. Our analysis and data therefore show that measurement of changes in the equilibria between Cu(II) and 5-Br-PSAA by Cu(II)-binding proteins provides a general procedure for estimating the affinities of proteins for Cu(II).

  11. Image informative maps for component-wise estimating parameters of signal-dependent noise

    NASA Astrophysics Data System (ADS)

    Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem

    2013-01-01

    We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameters estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.
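    A stripped-down sketch of estimating a signal-dependent noise law var(I) = a + b·I from local patch statistics by least squares. The fBm texture model, maximum-likelihood estimation and informative-map weighting of the paper are not reproduced; the patch size and the linear variance model are assumptions.

```python
import numpy as np

def fit_signal_dependent_noise(image, patch=8):
    """Fit var = a + b * intensity from local means/variances of small patches.

    Assumes roughly homogeneous patches dominate; strongly textured patches
    inflate the variance and would be down-weighted by an informative-map scheme.
    """
    h, w = image.shape
    means, variances = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = image[i:i + patch, j:j + patch]
            means.append(block.mean())
            variances.append(block.var(ddof=1))
    means, variances = np.asarray(means), np.asarray(variances)
    A = np.column_stack([np.ones_like(means), means])
    coeffs, *_ = np.linalg.lstsq(A, variances, rcond=None)
    return coeffs  # [a, b] of var(I) = a + b*I

# Toy usage: Poisson-like noise (a = 0, b = 1) on a smooth intensity ramp
rng = np.random.default_rng(3)
ramp = np.tile(np.linspace(50, 200, 256), (256, 1))
noisy = ramp + rng.normal(scale=np.sqrt(ramp))
print(fit_signal_dependent_noise(noisy))
```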

  12. Radionuclide transfer in marine coastal ecosystems, a modelling study using metabolic processes and site data.

    PubMed

    Konovalenko, L; Bradshaw, C; Kumblad, L; Kautsky, U

    2014-07-01

    This study implements new site-specific data and an improved process-based transport model for 26 elements (Ac, Ag, Am, Ca, Cl, Cm, Cs, Ho, I, Nb, Ni, Np, Pa, Pb, Pd, Po, Pu, Ra, Se, Sm, Sn, Sr, Tc, Th, U, Zr), and validates model predictions with site measurements and literature data. The model was applied in the safety assessment of a planned nuclear waste repository in Forsmark, Öregrundsgrepen (Baltic Sea). Radionuclide transport models are central in radiological risk assessments to predict radionuclide concentrations in biota and doses to humans. Usually concentration ratios (CRs), the ratio of the measured radionuclide concentration in an organism to the concentration in water, drive such models. However, CRs vary with space and time and CR estimates for many organisms are lacking. In the model used in this study, radionuclides were assumed to follow the circulation of organic matter in the ecosystem and to be regulated by radionuclide-specific mechanisms and metabolic rates of the organisms. Most input parameters were represented by log-normally distributed probability density functions (PDFs) to account for parameter uncertainty. Generally, modelled CRs for grazers, benthos, zooplankton and fish for the 26 elements were in good agreement with site-specific measurements. The uncertainty was reduced when the model was parameterized with site data, and modelled CRs were most similar to measured values for particle reactive elements and for primary consumers. This study clearly demonstrated that it is necessary to validate models with more than just a few elements (e.g. Cs, Sr) in order to make them robust. The use of PDFs as input parameters, rather than averages or best estimates, enabled the estimation of the probable range of modelled CR values for the organism groups, an improvement over models that only estimate means. Using a mechanistic model that is constrained by ecological processes enables (i) the evaluation of the relative importance of food and water uptake pathways and processes such as assimilation and excretion, (ii) the possibility to extrapolate within element groups (a common requirement in many risk assessments when initial model parameters are scarce) and (iii) predictions of radionuclide uptake in the ecosystem after changes in ecosystem structure or environmental conditions. These features are important for the long-term (>1000 year) risk assessments that need to be considered for a deep nuclear waste repository. Copyright © 2013. Published by Elsevier Ltd.

  13. Evaluating performances of simplified physically based landslide susceptibility models.

    NASA Astrophysics Data System (ADS)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage, involving loss of life and property. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually, two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS based models for landslide susceptibility analysis. It was integrated in the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness of fit indices (GOF) by comparing pixel-by-pixel model results and measurement data. Moreover, the package integration in NewAge-JGrass allows the use of other components such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and the robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each GOF index separately, ii) model evaluation in the ROC plane using each optimal parameter set, and iii) GOF robustness evaluation by assessing their sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk Monitoring, Early Warning and Mitigation Along the Main Lifelines", CUP B31H11000370005, in the framework of the National Operational Program for "Research and Competitiveness" 2007-2013.

  14. The Model Parameter Estimation Experiment (MOPEX): Its structure, connection to other international initiatives and future directions

    USGS Publications Warehouse

    Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.

  15. Reliability estimation of a N- M-cold-standby redundancy system in a multicomponent stress-strength model with generalized half-logistic distribution

    NASA Astrophysics Data System (ADS)

    Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei

    2018-01-01

    In this paper, we study the estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censoring sample. In the system, there are N subsystems consisting of M statistically independent, identically distributed strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one from the standbys takes its place. The system fails when all of the subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated by using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed by using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed by using the Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.

  16. Transit Project Planning Guidance : Estimation of Transit Supply Parameters

    DOT National Transportation Integrated Search

    1984-04-01

    This report discusses techniques applicable to the estimation of transit vehicle fleet requirements, vehicle-hours and vehicle-miles, and other related transit supply parameters. These parameters are used for estimating operating costs and certain ca...

  17. Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo

    2016-04-01

    Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
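    A compact sketch of the Monte Carlo idea: fix the characteristic (arterial) resistance obtained from the flow data, draw random compliance and peripheral-resistance values, simulate the three-element Windkessel pressure for each draw, and keep the draw that best matches the measured pressure. Parameter ranges, units and the explicit Euler integration are illustrative assumptions.

```python
import numpy as np

def windkessel_pressure(q, dt, Rc, Rp, C, p0=80.0):
    """Three-element Windkessel: P = Rc*q + Pc, with C dPc/dt = q - Pc/Rp."""
    pc = np.empty_like(q)
    pc[0] = p0
    for k in range(1, q.size):                       # simple explicit Euler step
        pc[k] = pc[k - 1] + dt * (q[k - 1] - pc[k - 1] / Rp) / C
    return Rc * q + pc

def estimate_parameters(q, p_meas, dt, Rc, n_draws=5000, rng=None):
    """Monte Carlo search over (C, Rp) with the characteristic resistance fixed."""
    rng = rng or np.random.default_rng(5)
    best, best_err = None, np.inf
    for _ in range(n_draws):
        C = rng.uniform(0.5, 3.0)                    # mL/mmHg, illustrative range
        Rp = rng.uniform(0.5, 2.0)                   # mmHg*s/mL, illustrative range
        err = np.sqrt(np.mean((windkessel_pressure(q, dt, Rc, Rp, C) - p_meas) ** 2))
        if err < best_err:
            best, best_err = (C, Rp), err
    return best, best_err

# Toy usage: recover parameters from a synthetic pressure trace
dt, t = 0.01, np.arange(0, 2.0, 0.01)
q = np.clip(np.sin(2 * np.pi * t), 0, None) * 100    # idealized aortic flow
p_true = windkessel_pressure(q, dt, Rc=0.05, Rp=1.1, C=1.4)
(C_hat, Rp_hat), rmse = estimate_parameters(q, p_true, dt, Rc=0.05)
print(C_hat, Rp_hat, rmse)
```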

  18. SBML-PET: a Systems Biology Markup Language-based parameter estimation tool.

    PubMed

    Zi, Zhike; Klipp, Edda

    2006-11-01

    The estimation of model parameters from experimental data remains a bottleneck for a major breakthrough in systems biology. We present a Systems Biology Markup Language (SBML) based Parameter Estimation Tool (SBML-PET). The tool is designed to enable parameter estimation for biological models including signaling pathways, gene regulation networks and metabolic pathways. SBML-PET supports import and export of the models in the SBML format. It can estimate the parameters by fitting a variety of experimental data from different experimental conditions. SBML-PET has a unique feature of supporting event definition in the SBML model. SBML models can also be simulated in SBML-PET. Stochastic Ranking Evolution Strategy (SRES) is incorporated in SBML-PET for parameter estimation jobs. A classic ODE solver called ODEPACK is used to solve the Ordinary Differential Equation (ODE) system. http://sysbio.molgen.mpg.de/SBML-PET/. The website also contains detailed documentation for SBML-PET.

  19. Can high pressure I-II transitions in semiconductors be affected by plastic flow and nanocrystal precipitation in phase I?

    NASA Astrophysics Data System (ADS)

    Weinstein, B. A.; Lindberg, G. P.

    Pressure-Raman spectroscopy in ZnSe and ZnTe single crystals reveals that Se and Te nano-crystals (NCs) precipitate in these II-VI hosts at pressures far below their I-II phase transitions. The inclusions are evident from the appearance and negative pressure-shift of the A1 Raman peaks of Se and Te (trigonal phase). The Se and Te NCs nucleate at dislocations and grain boundaries that arise from pressure-induced plastic flow. This produces chemical and structural inhomogeneities in the zincblende phase of the host. At substantially higher pressures, the I-II transition proceeds in the presence of these inhomogeneities. This can affect the transition's onset pressure Pt and width ΔPt, and the occurrence of metastable phases along the transition path. Precipitation models in metals show that nucleation of inclusions depends on the Peierls stress τp and a parameter α related to the net free energy gained on nucleation. For favorable values of τp and α, NC precipitation at pressures below the I-II transition could occur in other compounds. We propose criteria to judge whether this is likely, based on the observed ranges of τp in the hosts and estimates of α derived from the cohesive energy densities of the NC materials. One finds trends that can serve as a useful guide, both to test the proposed criteria and to decide when closer scrutiny of phase transition experiments is warranted, e.g., in powders where high dislocation densities are initially created.

  20. Synthesis and spectral studies of platinum metal complexes of benzoin thiosemicarbazone

    NASA Astrophysics Data System (ADS)

    Offiong, Offiong E.

    1994-11-01

    The platinum metal chelates of benzoin thiosemicarbazone obtained with Ru(III), Rh(III), Ir(III), Pd(II) and Pt(II) were prepared from their corresponding halide salts. The complexes were characterized by elemental analysis, conductance measurement, IR, Raman, 1H-NMR, 13C-NMR and UV-visible spectra studies. Various ligand field parameters and nephelauxetic parameters were also calculated. The mode of bonding and the geometry of the ligand environment around the metal ion have been discussed in the light of the available data obtained. Complexes of Ru(III), Rh(III) and Ir(III) are six-coordinate octahedral, while Pd(II) and Pt(II) halide complexes are four-coordinated with halides bridging.

  1. The complementary relationship (CR) approach aids evapotranspiration estimation in the data scarce region of Tibetan Plateau: symmetric and asymmetric perspectives

    NASA Astrophysics Data System (ADS)

    Ma, N.; Zhang, Y.; Szilagyi, J.; Xu, C. Y.

    2015-12-01

    While the land surface latent and sensible heat release in the Tibetan Plateau (TP) could greatly influence the Asian monsoon circulation, knowledge of actual evapotranspiration (ETa) in the TP has been largely limited by its extremely sparse ground observation network. The complementary relationship (CR) theory therefore holds great potential for estimating ETa since it relies solely on routine meteorological observations. Using in-situ energy/water flux observations over the highest semiarid alpine steppe in the TP, modifications of specific components within the CR were first implemented. We found that the symmetry of the CR could be achieved for dry regions of the TP when (i) the Priestley-Taylor coefficient, (ii) the slope of the saturation vapor pressure curve and (iii) the wind function were locally calibrated by using the ETa observations on wet days, an estimate of the wet surface temperature and the Monin-Obukhov Similarity (MOS) theory, respectively. In this way, the error of the ETa simulated by the symmetric AA model could be decreased to a large extent. Besides, the asymmetric CR was confirmed in the TP when the D20 above-ground and/or E601B sunken pan evaporation (Epan) were used as a proxy of the ETp. Thus daily ETa could also be estimated by coupling the D20 above-ground and/or E601B sunken pans through the CR. Additionally, to avoid the need to modify specific components of the CR, we also evaluated the Nonlinear-CR model and Morton's CRAE model. The former does not need the pre-determination of the asymmetry of the CR, while the latter does not require wind speed data as input. We found that both models are also able to simulate daily ETa well provided their parameter values have been locally calibrated. The sensitivity analysis shows that, if measured ETa data are not available to calibrate the models' parameter values, the Nonlinear-CR model may be a particularly good choice for estimating ETa because of its mild sensitivity to the parameter values, making it possible to employ published parameter values derived under similar climatic and land cover conditions. The CRAE model should also be highlighted in the TP since the special topography makes the wind speed data suffer large uncertainties when advanced geo-statistical methods are used to spatially interpolate the point-based meteorological records.
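    A minimal sketch of the symmetric complementary relationship (AA-type) estimate, assuming the wet-environment term follows Priestley-Taylor and the potential term follows Penman with a linear wind function; the coefficients shown are textbook defaults standing in for the locally calibrated values discussed above, and all fluxes are taken in evaporation-equivalent units (mm/day).

```python
import numpy as np

def sat_vapour_pressure(T):
    """Saturation vapour pressure in kPa (Tetens-type formula), T in deg C."""
    return 0.6108 * np.exp(17.27 * T / (T + 237.3))

def symmetric_cr_eta(Rn_minus_G, T, e_act, u2, alpha=1.26, gamma=0.066):
    """Daily ETa from the symmetric CR: ETa = 2*ETwet - ETpot.

    Rn_minus_G : available energy already converted to mm/day of evaporation
    T, e_act, u2 : air temperature (degC), actual vapour pressure (kPa), wind (m/s)
    alpha, gamma and the linear wind function are illustrative defaults that
    would be locally calibrated in practice.
    """
    T = np.asarray(T, dtype=float)
    delta = 4098.0 * sat_vapour_pressure(T) / (T + 237.3) ** 2   # kPa/degC
    et_wet = alpha * delta / (delta + gamma) * Rn_minus_G        # Priestley-Taylor
    f_u = 2.6 * (1.0 + 0.54 * u2)                                # Penman wind function
    et_pot = (delta / (delta + gamma) * Rn_minus_G
              + gamma / (delta + gamma) * f_u * (sat_vapour_pressure(T) - e_act))
    return 2.0 * et_wet - et_pot                                 # symmetric CR

# Toy usage for one semiarid day
print(symmetric_cr_eta(Rn_minus_G=5.0, T=12.0, e_act=0.7, u2=3.0))
```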

  2. Economics of immunization information systems in the United States: assessing costs and efficiency.

    PubMed

    Bartlett, Diana L; Molinari, Noelle-Angelique M; Ortega-Sanchez, Ismael R; Urquhart, Gary A

    2006-08-22

    One of the United States' national health objectives for 2010 is that 95% of children aged <6 years participate in fully operational population-based immunization information systems (IIS). Despite important progress, child participation in most IIS has increased slowly, in part due to limited economic knowledge about IIS operations. Should IIS need further improvement, characterizing costs and identifying factors that affect IIS efficiency become crucial. Data were collected from a national sampling frame of the 56 states/cities that received federal immunization grants under U.S. Public Health Service Act 317b and completed the federal 1999 Immunization Registry Annual Report. The sampling frame was stratified by IIS functional status, children's enrollment in the IIS, and whether the IIS had been developed as an independent system or was integrated into a larger system. These sites self-reported IIS developmental and operational program costs for calendar years 1998-2002 using a standardized data collection tool and underwent on-site interviews to verify reported data with information from the state/city financial management system and other financial records. A parametric cost-per-patient-record (CPR) model was estimated. The model assessed the impact of labor and non-labor resources used in development and operations tasks, as well as the impact of information technology, local providers' participation and compliance with federal IIS performance standards (e.g., ensuring the confidentiality and security of information, ensuring timely vaccination data at the time of patient encounter, and producing official immunization records). Given the number of records minimizing CPR, the additional amount of resources needed to meet national health goals for the year 2010 was also calculated. Estimated CPR was as high as $10.30 and as low as $0.09 in operating IIS. About 20% of IIS had between 2.9 and 3.2 million records and showed CPR estimates of $0.09. Overall, CPR was highly sensitive to local providers' participation. To achieve the 2010 goals, additional aggregated costs were estimated to be $75.6 million nationwide. Efficiently increasing the number of records in IIS would require additional resources and careful consideration of various strategies to minimize CPR, such as boosting providers' participation.

  3. Optical assessment of intravascular and intracellular parameters related to tissue viability

    NASA Astrophysics Data System (ADS)

    Mayevsky, Avraham; Sherman, Efrat; Cohen-Kashi, Meir; Dekel, Nava; Pewzner, Eliyahu

    2007-02-01

    Tissue viability represents the balance between O2 supply and demand. In our previous paper (Mayevsky et al; Proc. SPIE 6083: z1-z10, 2006) the HbO2 parameter was added to the multiparametric tissue spectroscope (Mayevsky et al, J. Biomedical Optics 9:1028-1045, 2004). This parameter provides relative values of microcirculatory blood oxygenation (MC-HbO2) evaluated by the two-wavelength reflectometry principle. The advantage of this approach compared to pulse oximetry is that the measurement does not depend on the existence of the heart pulse. Also, in the MC-HbO2 the information is collected from small vessels providing O2 to the mitochondria, whereas the pulse oximeter indicates blood oxygenation by the respiratory and cardiovascular systems. In the present study we compared the level of blood oxygenation measured by the pulse oximeter to that measured by the CritiView in the brain exposed to various systemic and localized perturbations of O2 supply or demand. We exposed gerbils to anoxia, hypoxia, ischemia and terminal anoxia. In addition we measured mitochondrial NADH (surface fluorometry), tissue reflectance, and tissue blood flow (laser Doppler flowmetry) from the same site as the MC-HbO2 measurement. A clear connection was found between the two blood oxygenation parameters only when systemic perturbations were used (anoxia, hypoxia and terminal anoxia). Under local events (ischemia) the MC-HbO2 was responsive while the systemic oxygenation was unchanged. We concluded that MC-HbO2 has a significant value in the interpretation of tissue energy metabolism under pathophysiological conditions.

  4. Estimation of genetic parameters and their sampling variances of quantitative traits in the type 2 modified augmented design

    USDA-ARS?s Scientific Manuscript database

    We proposed a method to estimate the error variance among non-replicated genotypes, thus to estimate the genetic parameters by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...

  5. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include deviations of aerodynamic coefficients, inertia, center-of-gravity, atmospheric wind, etc., from referenced values. Propulsion parameter state elements have been included not as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are non-linear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.

  6. Data dependent systems approach to modal analysis Part 1: Theory

    NASA Astrophysics Data System (ADS)

    Pandit, S. M.; Mehta, N. P.

    1988-05-01

    The concept of Data Dependent Systems (DDS) and its applicability in the context of modal vibration analysis is presented. The ability of the DDS difference equation models to provide a complete representation of a linear dynamic system from its sampled response data forms the basis of the approach. The models are decomposed into deterministic and stochastic components so that system characteristics are isolated from noise effects. The modelling strategy is outlined, and the method of analysis associated with modal parameter identification is described in detail. Advantages and special features of the DDS methodology are discussed. Since the correlated noise is appropriately and automatically modelled by the DDS, the modal parameters are shown to be estimated very accurately and hence no preprocessing of the data is needed. Complex mode shapes and non-classical damping are as easily analyzed as the classical normal mode analysis. These features are illustrated by using simulated data in this Part I and real data on a disc-brake rotor in Part II.

  7. Contamination of packaged food by substances migrating from a direct-contact plastic layer: Assessment using a generic quantitative household scale methodology.

    PubMed

    Vitrac, Olivier; Challe, Blandine; Leblanc, Jean-Charles; Feigenbaum, Alexandre

    2007-01-01

    The contamination risk in 12 packaged foods by substances released from the plastic contact layer has been evaluated using a novel modeling technique, which predicts migration while accounting for (i) possible variations in the time of contact between foodstuffs and packaging and (ii) uncertainty in the physico-chemical parameters used to predict migration. Contamination data, which are subject to variability and uncertainty, are derived through a stochastic resolution of the transport equations that control migration into food. Distributions of contact times between packaging materials and foodstuffs were reconstructed from the volumes and frequencies of purchases of a given panel of 6422 households, making assumptions about household storage behaviour. The risk of contamination of the packaged foods was estimated for styrene (a monomer found in polystyrene yogurt pots) and 2,6-di-tert-butyl-4-hydroxytoluene (a representative of the widely used phenolic antioxidants). The results are analysed and discussed regarding the sensitivity of the model to the set parameters and chosen assumptions.

  8. Catchment Tomography - Joint Estimation of Surface Roughness and Hydraulic Conductivity with the EnKF

    NASA Astrophysics Data System (ADS)

    Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.

    2017-12-01

    Parameter estimation for physically based, distributed hydrological models becomes increasingly challenging with increasing model complexity. The number of parameters is usually large and the number of observations relatively small, which results in large uncertainties. Catchment tomography presents a moving transmitter-receiver concept for estimating spatially distributed hydrological parameters. In this concept, precipitation, highly variable in time and space, serves as a moving transmitter. As a response to precipitation, runoff and stream discharge are generated along different paths and time scales, depending on surface and subsurface flow properties. Stream water levels are thus an integrated signal of upstream parameters, measured by stream gauges which serve as the receivers. These stream water level observations are assimilated into a distributed hydrological model, which is forced with high resolution, radar based precipitation estimates. Applying a joint state-parameter update with the Ensemble Kalman Filter, the spatially distributed Manning's roughness coefficient and saturated hydraulic conductivity are estimated jointly. The sequential data assimilation continuously integrates new information into the parameter estimation problem, especially during precipitation events. Every precipitation event constrains the possible parameter space. In the approach, forward simulations are performed with ParFlow, a variably saturated subsurface and overland flow model. ParFlow is coupled to the Parallel Data Assimilation Framework for the data assimilation and the joint state-parameter update. In synthetic, 3-dimensional experiments including surface and subsurface flow, hydraulic conductivity and the Manning's coefficient are efficiently estimated with the catchment tomography approach. A joint update of the Manning's coefficient and hydraulic conductivity tends to improve the parameter estimation compared to a single parameter update, especially in cases of biased initial parameter ensembles. The computational experiments additionally show up to which degree of spatial heterogeneity, and of uncertainty in subsurface flow parameters, the Manning's coefficient and hydraulic conductivity can still be estimated efficiently.
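    A minimal sketch of the joint state-parameter update: augment the state vector with the (log-)parameters, build the Kalman gain from ensemble covariances, and update every member with perturbed observations. This is a generic stochastic EnKF for illustration, not the ParFlow/PDAF implementation used in the study.

```python
import numpy as np

def enkf_joint_update(states, params, H, y_obs, obs_err_std, rng=None):
    """Stochastic EnKF update of an augmented [state; parameter] ensemble.

    states : (n_ens, n_state)  e.g. water levels per ensemble member
    params : (n_ens, n_param)  e.g. log Manning's n, log K_sat per member
    H      : (n_obs, n_state)  observation operator picking the gauged cells
    """
    rng = rng or np.random.default_rng(6)
    X = np.hstack([states, params])                     # augmented ensemble
    Y = states @ H.T                                    # predicted observations
    Xa, Ya = X - X.mean(axis=0), Y - Y.mean(axis=0)
    n_ens = X.shape[0]
    R = obs_err_std**2 * np.eye(H.shape[0])
    P_xy = Xa.T @ Ya / (n_ens - 1)                      # cross covariance
    P_yy = Ya.T @ Ya / (n_ens - 1)                      # obs-space covariance
    K = P_xy @ np.linalg.inv(P_yy + R)                  # Kalman gain
    y_pert = y_obs + obs_err_std * rng.normal(size=Y.shape)
    X_new = X + (y_pert - Y) @ K.T                      # update each member
    n_state = states.shape[1]
    return X_new[:, :n_state], X_new[:, n_state:]       # updated states, parameters
```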

  9. Quantifying Transmission Heterogeneity Using Both Pathogen Phylogenies and Incidence Time Series

    PubMed Central

    Li, Lucy M.; Grassly, Nicholas C.; Fraser, Christophe

    2017-01-01

    Heterogeneity in individual-level transmissibility can be quantified by the dispersion parameter k of the offspring distribution. Quantifying heterogeneity is important as it affects other parameter estimates, it modulates the degree of unpredictability of an epidemic, and it needs to be accounted for in models of infection control. Aggregated data such as incidence time series are often not sufficiently informative to estimate k. Incorporating phylogenetic analysis can help to estimate k concurrently with other epidemiological parameters. We have developed an inference framework that uses particle Markov Chain Monte Carlo to estimate k and other epidemiological parameters using both incidence time series and the pathogen phylogeny. Using the framework to fit a modified compartmental transmission model that includes the parameter k to simulated data, we found that more accurate and less biased estimates of the reproductive number were obtained by combining epidemiological and phylogenetic analyses. However, k was most accurately estimated using pathogen phylogeny alone. Accurately estimating k was necessary for unbiased estimates of the reproductive number, but it did not affect the accuracy of reporting probability and epidemic start date estimates. We further demonstrated that inference was possible in the presence of phylogenetic uncertainty by sampling from the posterior distribution of phylogenies. Finally, we used the inference framework to estimate transmission parameters from epidemiological and genetic data collected during a poliovirus outbreak. Despite the large degree of phylogenetic uncertainty, we demonstrated that incorporating phylogenetic data in parameter inference improved the accuracy and precision of estimates. PMID:28981709
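    The dispersion parameter k is most simply illustrated outside the particle MCMC framework: if counts of secondary cases per infected individual were observed directly, R0 and k could be estimated by fitting a negative-binomial offspring distribution by maximum likelihood, as in the sketch below (synthetic data; the paper's joint use of incidence series and phylogenies is far richer than this).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

def neg_loglik(theta, offspring):
    """Negative log-likelihood of offspring counts under NB(mean=R0, dispersion=k)."""
    log_R0, log_k = theta
    R0, k = np.exp(log_R0), np.exp(log_k)
    p = k / (k + R0)                          # scipy's (n, p) parameterization
    return -np.sum(nbinom.logpmf(offspring, k, p))

# Toy usage: synthetic counts of secondary cases per infected individual
rng = np.random.default_rng(7)
true_R0, true_k = 2.0, 0.3
counts = nbinom.rvs(true_k, true_k / (true_k + true_R0), size=500, random_state=rng)

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
R0_hat, k_hat = np.exp(fit.x)
print(R0_hat, k_hat)   # small k indicates strong transmission heterogeneity
```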

  10. TRIGGERING COLLAPSE OF THE PRESOLAR DENSE CLOUD CORE AND INJECTING SHORT-LIVED RADIOISOTOPES WITH A SHOCK WAVE. II. VARIED SHOCK WAVE AND CLOUD CORE PARAMETERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boss, Alan P.; Keiser, Sandra A., E-mail: boss@dtm.ciw.edu, E-mail: keiser@dtm.ciw.edu

    2013-06-10

    A variety of stellar sources have been proposed for the origin of the short-lived radioisotopes that existed at the time of the formation of the earliest solar system solids, including Type II supernovae (SNe), asymptotic giant branch (AGB) and super-AGB stars, and Wolf-Rayet star winds. Our previous adaptive mesh hydrodynamics models with the FLASH2.5 code have shown which combinations of shock wave parameters are able to simultaneously trigger the gravitational collapse of a target dense cloud core and inject significant amounts of shock wave gas and dust, showing that thin SN shocks may be uniquely suited for the task. However, recent meteoritical studies have weakened the case for a direct SN injection to the presolar cloud, motivating us to re-examine a wider range of shock wave and cloud core parameters, including rotation, in order to better estimate the injection efficiencies for a variety of stellar sources. We find that SN shocks remain as the most promising stellar source, though planetary nebulae resulting from AGB star evolution cannot be conclusively ruled out. Wolf-Rayet (WR) star winds, however, are likely to lead to cloud core shredding, rather than to collapse. Injection efficiencies can be increased when the cloud is rotating about an axis aligned with the direction of the shock wave, by as much as a factor of ~10. The amount of gas and dust accreted from the post-shock wind can exceed that injected from the shock wave, with implications for the isotopic abundances expected for a SN source.

  11. Adiposity as a full mediator of the influence of cardiorespiratory fitness and inflammation in schoolchildren: The FUPRECOL Study.

    PubMed

    Garcia-Hermoso, A; Agostinis-Sobrinho, C; Mota, J; Santos, R M; Correa-Bautista, J E; Ramírez-Vélez, R

    2017-06-01

    Studies in the paediatric population have shown inconsistent associations between cardiorespiratory fitness and inflammation independently of adiposity. The purpose of this study was (i) to analyse the combined association of cardiorespiratory fitness and adiposity with high-sensitivity C-reactive protein (hs-CRP), and (ii) to determine whether adiposity acts as a mediator of the association between cardiorespiratory fitness and hs-CRP in children and adolescents. This cross-sectional study included 935 (54.7% girls) healthy children and adolescents from Bogotá, Colombia. The 20 m shuttle run test was used to estimate cardiorespiratory fitness. We assessed the following adiposity parameters: body mass index, waist circumference, fat mass index, and the sum of subscapular and triceps skinfold thickness. High sensitivity assays were used to obtain hs-CRP. Linear regression models were fitted for mediation analyses to examine whether the association between cardiorespiratory fitness and hs-CRP was mediated by each of the adiposity parameters according to the Baron and Kenny procedure. Lower levels of hs-CRP were associated with the best schoolchildren profiles (high cardiorespiratory fitness + low adiposity) (p for trend <0.001 for the four adiposity parameters), compared with unfit and overweight (low cardiorespiratory fitness + high adiposity) counterparts. Linear regression models suggest a full mediation by adiposity of the association between cardiorespiratory fitness and hs-CRP levels. Our findings seem to emphasize the importance of obesity prevention in childhood, suggesting that having high levels of cardiorespiratory fitness may not counteract the negative consequences ascribed to adiposity on hs-CRP. Copyright © 2017 The Italian Society of Diabetology, the Italian Society for the Study of Atherosclerosis, the Italian Society of Human Nutrition, and the Department of Clinical Medicine and Surgery, Federico II University. Published by Elsevier B.V. All rights reserved.

  12. Combination of Vlbi, GPS and Slr Observations At The Observation Level For The Realization of Terrestrial and Celestial Reference Frames

    NASA Astrophysics Data System (ADS)

    Andersen, P. H.

    Forsvarets forskningsinstitutt (FFI, the Norwegian Defence Research Establishment) has during the last 17 years developed a software system called GEOSAT, for the analysis of any type of high precision space geodetic observations. A unique feature of GEOSAT is the possibility of combining any combination of different space geodetic data at the observation level with one consistent model and one consistent strategy. This is a much better strategy than the strategy in use today where different types of observations are processed separately using analysis software developed specifically for each technique. The results from each technique are finally combined a posteriori. In practice the models implemented in the software packages differ at the 1-cm level which is almost one order of magnitude larger than the internal precision of the most precise techniques. Another advantage of the new proposed combination method is that for example VLBI and GPS can use the same tropospheric model with common parameterization. The same is the case for the Earth orientation parameters, the geocenter coordinates and other geodetic or geophysical parameters where VLBI, GPS and SLR can have a common estimate for each of the parameters. The analysis with GEOSAT is automated for the combination of VLBI, SLR and GPS observations. The data are analyzed in batches of one day where the result from each daily arc is a SRIF array (Square Root Information Filter). A large number of SRIF arrays can be combined into a multi-year solution using the CSRIFS program (Combination Square Root Information Filter and Smoother). Four parameter levels are available and any parameter can, at each level, either be represented as a constant or a stochastic parameter (white noise, colored noise, or random walk). The batch length (i.e. the time interval between the addition of noise to the SRIF array) can be made time- and parameter dependent. GEOSAT and CSRIFS have been applied in the analysis of selected VLBI and SLR data (LAGEOS I & II) from the period January 1993 to July 2001. A selected number of arcs also include GPS data. Earth orientation parameters, geocenter motion, station coordinates and velocities were estimated simultaneously with the coordinates of the radio sources and satellite orbital parameters. Recent software improvements and results of analyses will be presented at the meeting.

  13. The Impact of Patient Education on Anthropometric, Lipidemic, and Glycemic Parameters Among Patients With Poorly Controlled Type II Diabetes Mellitus: A 3-Month Prospective Single-Center Turkish Study.

    PubMed

    Cander, Soner; Gul, Ozen Oz; Gul, Cuma B; Keles, Saadet B; Yavas, Sibel; Ersoy, Canan

    2014-12-01

    This study evaluated the impact of patient education on adherence to a diabetes care plan (e.g., anthropometric, lipidemic, and glycemic parameters) among adults with type II diabetes mellitus without adequate glycemic control. A total of 61 ambulatory adults with type II diabetes mellitus (mean age: 53.6 ± 8.2 years, 70.5% female) were evaluated for anthropometrics, duration of diabetes mellitus, type of anti-diabetic treatment, blood biochemistry, and glycemic parameters in this 3-month prospective observational single-center study. During the course of the study, participants demonstrated a significant decrease in body weight, fat percentage, and HbA1c (p < .001 for each). None of the factors evaluated was a significant determinant of glycemic parameters. These findings revealed that adults with type II diabetes mellitus who received education on adherence to routine self-monitoring of blood glucose, a standard diabetic diet, and an exercise program delivered by certified diabetes educators had better glycemic control and a significant decrease in body weight and fat percentage over a 3-month monitoring period. Copyright 2014, SLACK Incorporated.

  14. Effects of control inputs on the estimation of stability and control parameters of a light airplane

    NASA Technical Reports Server (NTRS)

    Cannaday, R. L.; Suit, W. T.

    1977-01-01

    The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.

  15. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.

  16. Laser dimpling process parameters selection and optimization using surrogate-driven process capability space

    NASA Astrophysics Data System (ADS)

    Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz

    2017-08-01

    Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in a real manufacturing system taking into consideration inherent changes in the variability of process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization considering a multiple-input (i.e. key control characteristics) and multiple-output (i.e. key performance indicators) system by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space) based on the developed surrogate model that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and, (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model quality requirements with multiple and coupled quality requirements; and (iii) optimize process parameters under competing quality requirements such as maximizing the dimple height while minimizing the dimple lower surface area.

  17. Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.

    PubMed

    Munir, Mohammad

    2018-06-01

    Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that insulin measurements taken beyond 62 min after the administration of the glucose bolus into the experimental subject's body possess no information about the parameter estimates, whereas the glucose measurements carry information about the parameter estimates for up to three hours. These observations have been verified by parameter estimation of the minimal model; the standard errors of the estimates and a crude Monte Carlo analysis also confirm them. Copyright © 2018 Elsevier Inc. All rights reserved.
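
    A hedged sketch of how generalized sensitivity functions can be computed, following the usual Thomaseth-Cobelli formulation, for a toy two-parameter model standing in for the minimal model; flat portions of the cumulative curves correspond to sampling times that carry no information about a parameter. The model, sampling grid and noise variance are assumptions.

      import numpy as np

      t = np.linspace(1.0, 180.0, 60)                    # minutes after the bolus
      theta = np.array([0.025, 250.0])                   # assumed [decay rate, initial level]

      def output(th):
          k, g0 = th
          return g0 * np.exp(-k * t)

      # Local output sensitivities dy/dtheta by central finite differences
      eps = 1e-6
      S = np.column_stack([(output(theta + eps * np.eye(2)[i]) -
                            output(theta - eps * np.eye(2)[i])) / (2 * eps)
                           for i in range(2)])
      sigma2 = 2.0 ** 2                                  # assumed measurement variance

      # Fisher information over the whole observation window and its inverse
      F = (S.T @ S) / sigma2
      Finv = np.linalg.inv(F)

      # Generalized sensitivity: cumulative, elementwise product of F^-1 s(t_i) with s(t_i);
      # each component rises from 0 to 1, and flat stretches mark uninformative sampling times
      gs = np.cumsum((S @ Finv) * S / sigma2, axis=0)
      print("final values (should be ~1):", gs[-1])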

  18. Probability of success for phase III after exploratory biomarker analysis in phase II.

    PubMed

    Götte, Heiko; Kirchner, Marietta; Sailer, Martin Oliver

    2017-05-01

    The probability of success or average power describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to obtain the necessary information about the risks and chances associated with such subgroup selection designs. Copyright © 2017 John Wiley & Sons, Ltd.
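
    The core quantity, probability of success (often called assurance), is the phase III power averaged over a distribution of the treatment effect centred at the phase II estimate; selecting the most extreme subgroup in phase II would shift that estimate upward and hence bias the computed value. A minimal Monte Carlo sketch with assumed trial numbers:

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)

      delta_hat, se_phase2 = 0.30, 0.12   # phase II estimate and its standard error
      n_per_arm, sd = 200, 1.0            # planned phase III size and outcome SD
      alpha = 0.025                       # one-sided significance level

      def power(delta):
          # power of a one-sided two-sample z-test at the given true effect
          se3 = sd * np.sqrt(2.0 / n_per_arm)
          return norm.sf(norm.isf(alpha) - delta / se3)

      # Weight the power with the phase II distribution of the treatment effect
      draws = rng.normal(delta_hat, se_phase2, size=100_000)
      pos = power(draws).mean()
      print(f"fixed-effect power at delta_hat: {power(delta_hat):.3f}")
      print(f"probability of success:          {pos:.3f}")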

  19. Chamber transport for heavy ion fusion

    NASA Astrophysics Data System (ADS)

    Olson, Craig L.

    2014-01-01

    A brief review is given of research on chamber transport for HIF (heavy ion fusion) dating from the first HIF Workshop in 1976 to the present. Chamber transport modes are categorized into ballistic transport modes and channel-like modes. Four major HIF reactor studies are summarized (HIBALL-II, HYLIFE-II, Prometheus-H, OSIRIS), with emphasis on the chamber transport environment. In general, many beams are used to provide the required symmetry and to permit focusing to the required small spots. Target parameters are then discussed, with a summary of the individual heavy ion beam parameters required for HIF. The beam parameters are then classified as to their line charge density and perveance, with special emphasis on the perveance limits for radial space charge spreading, for the space charge limiting current, and for the magnetic (Alfven) limiting current. The major experiments on ballistic transport (SFFE, Sabre beamlets, GAMBLE II, NTX, NDCX) are summarized, with specific reference to the axial electron trapping limit for charge neutralization. The major experiments on channel-like transport (GAMBLE II channel, GAMBLE II self-pinch, LBNL channels, GSI channels) are discussed. The status of current research on HIF chamber transport is summarized, and the value of future NDCX-II transport experiments for the future of HIF is noted.

  20. Estimating system parameters for solvent-water and plant cuticle-water using quantum chemically estimated Abraham solute parameters.

    PubMed

    Liang, Yuzhen; Torralba-Sanchez, Tifany L; Di Toro, Dominic M

    2018-04-18

    Polyparameter Linear Free Energy Relationships (pp-LFERs) using Abraham system parameters have many useful applications. However, developing the Abraham system parameters depends on the availability and quality of the Abraham solute parameters. Using Quantum Chemically estimated Abraham solute Parameters (QCAP) is shown to produce pp-LFERs that have lower root mean square errors (RMSEs) of predictions for solvent-water partition coefficients than parameters that are estimated using other presently available methods. pp-LFER system parameters are estimated for solvent-water and plant cuticle-water systems, and for novel compounds, using QCAP solute parameters and experimental partition coefficients. Refitting the system parameters improves the calculation accuracy and eliminates bias. Refitted models for solvent-water partition coefficients using QCAP solute parameters give better results (RMSE = 0.278 to 0.506 log units for 24 systems) than those based on ABSOLV (0.326 to 0.618) and QSPR (0.294 to 0.700) solute parameters. For munition constituents and munition-like compounds not included in the calibration of the refitted model, QCAP solute parameters produce pp-LFER models with much lower RMSEs for solvent-water partition coefficients (RMSE = 0.734 and 0.664 for the original and refitted model, respectively) than ABSOLV (4.46 and 5.98) and QSPR (2.838 and 2.723). Refitting the plant cuticle-water pp-LFER including munition constituents using QCAP solute parameters also results in lower RMSE (RMSE = 0.386) than that using ABSOLV (0.778) and QSPR (0.512) solute parameters. Therefore, for fitting a model in situations for which experimental data exist and system parameters can be re-estimated, or for which system parameters do not exist and need to be developed, QCAP is the quantum chemical method of choice.
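
    For readers unfamiliar with the form of these models, the sketch below fits Abraham-type system parameters, log K = c + eE + sS + aA + bB + vV, by ordinary least squares from solute descriptors and partition coefficients; the descriptors could come from QCAP, ABSOLV or QSPR, but the data here are synthetic and purely illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 60
      descriptors = rng.uniform(0.0, 2.0, size=(n, 5))          # columns: E, S, A, B, V
      system_true = np.array([0.2, 0.4, -1.1, 0.1, -3.5, 3.8])  # c, e, s, a, b, v
      logK = np.column_stack([np.ones(n), descriptors]) @ system_true \
             + 0.2 * rng.standard_normal(n)

      X = np.column_stack([np.ones(n), descriptors])
      coef, *_ = np.linalg.lstsq(X, logK, rcond=None)
      rmse = np.sqrt(np.mean((X @ coef - logK) ** 2))
      print("fitted system parameters (c, e, s, a, b, v):", np.round(coef, 3))
      print("fit RMSE (log units):", round(rmse, 3))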

  1. Predictive mapping of soil organic carbon in wet cultivated lands using classification-tree based models: the case study of Denmark.

    PubMed

    Bou Kheir, Rania; Greve, Mogens H; Bøcher, Peder K; Greve, Mette B; Larsen, René; McCloy, Keith

    2010-05-01

    Soil organic carbon (SOC) is one of the most important carbon stocks globally and has large potential to affect global climate. Distribution patterns of SOC in Denmark constitute a nation-wide baseline for studies on soil carbon changes (with respect to the Kyoto protocol). This paper predicts and maps the geographic distribution of SOC across Denmark using remote sensing (RS), geographic information systems (GISs) and decision-tree modeling (un-pruned and pruned classification trees). Seventeen parameters, i.e. parent material, soil type, landscape type, elevation, slope gradient, slope aspect, mean curvature, plan curvature, profile curvature, flow accumulation, specific catchment area, tangent slope, tangent curvature, steady-state wetness index, Normalized Difference Vegetation Index (NDVI), Normalized Difference Wetness Index (NDWI) and Soil Color Index (SCI), were generated to statistically explain SOC field measurements in the area of interest (Denmark). A large number of tree-based classification models (588) were developed using (i) all of the parameters, (ii) all Digital Elevation Model (DEM) parameters only, (iii) the primary DEM parameters only, (iv) the remote sensing (RS) indices only, (v) selected pairs of parameters, (vi) soil type, parent material and landscape type only, and (vii) the parameters having a high impact on SOC distribution in the built pruned trees. The three best classification tree models, having both the lowest misclassification error (ME) and the fewest nodes (N), are: (i) the tree (T1) combining all of the parameters (ME=29.5%; N=54); (ii) the tree (T2) based on the parent material, soil type and landscape type (ME=31.5%; N=14); and (iii) the tree (T3) constructed using parent material, soil type, landscape type, elevation, tangent slope and SCI (ME=30%; N=39). The SOC maps produced at 1:50,000 cartographic scale using these trees match closely, with coincidence values of 90.5% (Map T1/Map T2), 95% (Map T1/Map T3) and 91% (Map T2/Map T3). The overall accuracies of these maps, when compared with field observations, were estimated to be 69.54% (Map T1), 68.87% (Map T2) and 69.41% (Map T3). The proposed tree models are relatively simple and may also be applied to other areas. Copyright 2010 Elsevier Ltd. All rights reserved.
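
    A minimal sketch of the tree-based modelling step under stated assumptions: a pruned and an un-pruned classification tree are fitted to synthetic stand-ins for the terrain and remote-sensing covariates and compared by misclassification error and node count, mirroring the ME/N trade-off described above. Feature names, class definitions and data are hypothetical.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(5)
      n = 1000
      X = np.column_stack([
          rng.normal(50, 20, n),     # elevation
          rng.normal(5, 2, n),       # slope gradient
          rng.uniform(-1, 1, n),     # NDVI
          rng.integers(0, 4, n),     # parent material class (coded)
      ])
      # Hypothetical SOC classes (e.g. low / medium / high) loosely tied to the covariates
      y = np.digitize(0.02 * X[:, 0] - 0.1 * X[:, 1] + X[:, 2] +
                      rng.standard_normal(n), bins=[0.0, 1.5])

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      unpruned = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
      pruned = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X_tr, y_tr)

      for name, tree in [("un-pruned", unpruned), ("pruned", pruned)]:
          me = 1.0 - tree.score(X_te, y_te)
          print(f"{name:9s}  nodes={tree.tree_.node_count:3d}  misclassification={me:.2%}")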

  2. Dynamics of cellular level function and regulation derived from murine expression array data.

    PubMed

    de Bivort, Benjamin; Huang, Sui; Bar-Yam, Yaneer

    2004-12-21

    A major open question of systems biology is how genetic and molecular components interact to create phenotypes at the cellular level. Although much recent effort has been dedicated to inferring effective regulatory influences within small networks of genes, the power of microarray bioinformatics has yet to be used to determine functional influences at the cellular level. In all cases of data-driven parameter estimation, the number of model parameters estimable from a set of data is strictly limited by the size of that set. Rather than infer parameters describing the detailed interactions of just a few genes, we chose a larger-scale investigation so that the cumulative effects of all gene interactions could be analyzed to identify the dynamics of cellular-level function. By aggregating genes into large groups with related behaviors (megamodules), we were able to determine the effective aggregate regulatory influences among 12 major gene groups in murine B lymphocytes over a variety of time steps. Intriguing observations about the behavior of cells at this high level of abstraction include: (i) a medium-term critical global transcriptional dependence on ATP-generating genes in the mitochondria, (ii) a longer-term dependence on glycolytic genes, (iii) the dual role of chromatin-reorganizing genes in transcriptional activation and repression, (iv) homeostasis-favoring influences, (v) the indication that, as a group, G protein-mediated signals are not concentration-dependent in their influence on target gene expression, and (vi) short-term-activating/long-term-repressing behavior of the cell-cycle system that reflects its oscillatory behavior.

  3. Estimating outflow facility through pressure dependent pathways of the human eye

    PubMed Central

    Gardiner, Bruce S.

    2017-01-01

    We develop and test a new theory for pressure dependent outflow from the eye. The theory comprises three main parameters: (i) a constant hydraulic conductivity, (ii) an exponential decay constant and (iii) a no-flow intraocular pressure, from which the total pressure dependent outflow, average outflow facilities and local outflow facilities for the whole eye may be evaluated. We use a new notation to specify precisely the meaning of model parameters and so model outputs. Drawing on a range of published data, we apply the theory to animal eyes, enucleated eyes and in vivo human eyes, and demonstrate how to evaluate model parameters. It is shown that the theory can fit high quality experimental data remarkably well. The new theory predicts that outflow facilities and total pressure dependent outflow for the whole eye are more than twice as large as estimates based on the Goldman equation and fluorometric analysis of anterior aqueous outflow. It appears likely that this discrepancy can be largely explained by pseudofacility and aqueous flow through the retinal pigmented epithelium, while any residual discrepancy may be due to pathological processes in aged eyes. The model predicts that if the hydraulic conductivity is too small, or the exponential decay constant is too large, then intraocular eye pressure may become unstable when subjected to normal circadian changes in aqueous production. The model also predicts relationships between variables that may be helpful when planning future experiments, and the model generates many novel testable hypotheses. With additional research, the analysis described here may find application in the differential diagnosis, prognosis and monitoring of glaucoma. PMID:29261696
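
    The sketch below is not the paper's exact formulation; it assumes an illustrative three-parameter pressure-outflow relation built from a constant hydraulic conductivity Lp, an exponential decay constant b and a no-flow pressure P0, and shows how such parameters could be recovered from (pressure, outflow) data by nonlinear least squares. Local outflow facility then follows as dQ/dP.

      import numpy as np
      from scipy.optimize import curve_fit

      def outflow(P, Lp, b, P0):
          # Assumed illustrative form: Q(P) = (Lp / b) * (1 - exp(-b * (P - P0)))
          return (Lp / b) * (1.0 - np.exp(-b * (P - P0)))

      rng = np.random.default_rng(6)
      P = np.linspace(8.0, 30.0, 12)                        # mmHg
      true = (0.6, 0.08, 6.0)                               # Lp, b, P0 (illustrative)
      Q = outflow(P, *true) + 0.05 * rng.standard_normal(P.size)   # microlitre/min

      params, cov = curve_fit(outflow, P, Q, p0=(0.3, 0.05, 5.0),
                              bounds=([0.0, 1e-3, 0.0], [10.0, 1.0, 8.0]))
      Lp, b, P0 = params
      print("Lp, b, P0:", np.round(params, 3))
      print("local outflow facility at P = 15 mmHg:", round(Lp * np.exp(-b * (15 - P0)), 4))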

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Latour, M.; Fontaine, G.; Brassard, P.

    As part of a multifaceted effort to better exploit the asteroseismological potential of the pulsating sdB star Feige 48, we present an improved spectroscopic analysis of that star based on new grids of NLTE, fully line-blanketed model atmospheres. To that end, we gathered four high signal-to-noise ratio time-averaged optical spectra of varying spectral resolutions from 1.0 Å to 8.7 Å, and we made use of the results of four independent studies to fix the abundances of the most important metals in the atmosphere of Feige 48. The mean atmospheric parameters we obtained from our four spectra of Feige 48 are: T_eff = 29,850 ± 60 K, log g = 5.46 ± 0.01, and log N(He)/N(H) = –2.88 ± 0.02. We also modeled, for the first time, the He II line at 1640 Å from the STIS archive spectrum of the star, and with this line we found an effective temperature and a surface gravity that match well with the values obtained with the optical data. With some fine tuning of the abundances of the metals visible in the optical domain, we were able to achieve a very good agreement between our best available spectrum and our best-fitting synthetic one. Our derived atmospheric parameters for Feige 48 are in rather good agreement with previous estimates based on less sophisticated models. This underlines the relatively small effects of the NLTE approach combined with line blanketing in the atmosphere of this particular star, implying that the current estimates of the atmospheric parameters of Feige 48 are reliable and secure.

  5. Evaluation of metal biouptake from the analysis of bulk metal depletion kinetics at various cell concentrations: theory and application.

    PubMed

    Rotureau, Elise; Billard, Patrick; Duval, Jérôme F L

    2015-01-20

    Bioavailability of trace metals is a key parameter for assessment of toxicity on living organisms. Proper evaluation of metal bioavailability requires monitoring the various interfacial processes that control metal partitioning dynamics at the biointerface, which includes metal transport from solution to cell membrane, adsorption at the biosurface, internalization, and possible excretion. In this work, a methodology is proposed to quantitatively describe the dynamics of Cd(II) uptake by Pseudomonas putida. The analysis is based on the kinetic measurement of Cd(II) depletion from bulk solution at various initial cell concentrations using electroanalytical probes. On the basis of a recent formalism on the dynamics of metal uptake by complex biointerphases, the cell concentration-dependent depletion time scales and plateau values reached by metal concentrations at long exposure times (>3 h) are successfully rationalized in terms of limiting metal uptake flux, rate of excretion, and metal affinity to internalization sites. The analysis shows the limits of approximate depletion models valid in the extremes of high and weak metal affinities. The contribution of conductive diffusion transfer of metals from the solution to the cell membrane in governing the rate of Cd(II) uptake is further discussed on the basis of estimated resistances for metal membrane transfer and extracellular mass transport.

  6. Cloning, overexpression, purification and preliminary crystallographic studies of a mitochondrial type II peroxiredoxin from Pisum sativum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barranco-Medina, Sergio; López-Jaramillo, Francisco Javier, E-mail: fjljara@ugr.es; Bernier-Villamor, Laura

    2006-07-01

    The isolation, purification, crystallization and molecular-replacement solution of mitochondrial type II peroxiredoxin from P. sativum is reported. A cDNA encoding an open reading frame of 199 amino acids corresponding to a type II peroxiredoxin from Pisum sativum with its transit peptide was isolated by RT-PCR. The 171-amino-acid mature protein (estimated molecular weight 18.6 kDa) was cloned into the pET3d vector and overexpressed in Escherichia coli. The recombinant protein was purified and crystallized by the hanging-drop vapour-diffusion technique. A full data set (98.2% completeness) was collected using a rotating-anode generator to a resolution of 2.8 Å from a single crystal flash-cooled at 100 K. X-ray data revealed that the protein crystallizes in space group P1, with unit-cell parameters a = 61.88, b = 66.40, c = 77.23 Å, α = 102.90, β = 104.40, γ = 99.07°, and molecular replacement using a theoretical model predicted from the primary structure as a search model confirmed the presence of six molecules in the unit cell as expected from the Matthews coefficient. Refinement of the structure is in progress.

  7. Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions.

    PubMed

    Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu

    2017-11-01

    This paper concerns the folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses on local minimizers or on specific FCPSLR-based learning algorithms, it remains an open question whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure the statistical performance, and whether that statistical performance can be non-contingent on the specific design of the computing procedure. To address these questions, this paper presents the following threefold results: (i) any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S3ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S3ONC solution likely recovers the oracle solution. This result also explicates that the goal of improving the statistical performance is consistent with the optimization criterion of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with the minimax concave penalty (MCP) and show that, under the restricted eigenvalue condition, any S3ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to the Lasso in general. Furthermore, computing a solution guaranteed to satisfy the S3ONC admits an FPTAS.
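
    As a hedged illustration of the kind of local solutions discussed, the snippet below runs generic proximal-gradient iterations for MCP-penalized least squares; its fixed points are stationary points of the non-convex objective. This is a standard first-order scheme using the usual MCP proximal operator, not the authors' algorithm, and the data are synthetic.

      import numpy as np

      def prox_mcp(z, step, lam, gamma):
          # Proximal operator of the MCP penalty for step size `step` (requires gamma > step)
          soft = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
          inner = soft / (1.0 - step / gamma)       # solution on the region |x| <= gamma*lam
          return np.where(np.abs(z) <= gamma * lam, inner, z)

      rng = np.random.default_rng(7)
      n, p, k = 100, 200, 5
      A = rng.standard_normal((n, p))
      x_true = np.zeros(p)
      x_true[:k] = 2.0
      y = A @ x_true + 0.5 * rng.standard_normal(n)

      lam, gamma = 0.5, 3.0
      step = n / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the fit term
      x = np.zeros(p)
      for _ in range(500):
          grad = A.T @ (A @ x - y) / n              # gradient of (1/2n)||y - Ax||^2
          x = prox_mcp(x - step * grad, step, lam, gamma)

      print("largest-magnitude coefficients at indices:", np.argsort(-np.abs(x))[:k])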

  8. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    ERIC Educational Resources Information Center

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  9. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    PubMed

    Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo

    2015-05-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  10. Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process

    NASA Astrophysics Data System (ADS)

    Nakanishi, W.; Fuse, T.; Ishikawa, T.

    2015-05-01

    This paper aims at estimating the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations these parameters may change according to the observation conditions and the difficulty of predicting the person's position. Thus, in this paper we formulate adaptive parameter estimation using a general state space model. First, we explain how to formulate human tracking in a general state space model and describe its components. Then, referring to previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally, we sequentially estimate this parameter on a real dataset under several settings. Results show that the sequential parameter estimation succeeded and that the estimates were consistent with observation conditions such as occlusions.
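
    A minimal sketch of the kind of observation model described, under stated assumptions: the Bhattacharyya coefficient between a reference appearance histogram and a candidate histogram is mapped to a particle weight through one unknown sharpness parameter (here called lam), which is the quantity the paper estimates sequentially. Histograms and parameter values are illustrative.

      import numpy as np

      def bhattacharyya(p, q):
          p = p / p.sum()
          q = q / q.sum()
          return np.sum(np.sqrt(p * q))

      def observation_likelihood(p, q, lam):
          # Common Bhattacharyya-distance likelihood: exp(-lam * (1 - BC))
          return np.exp(-lam * (1.0 - bhattacharyya(p, q)))

      rng = np.random.default_rng(8)
      reference = rng.random(16)                        # e.g. a 16-bin colour histogram
      candidates = [reference + 0.1 * rng.random(16),   # likely the same person
                    rng.random(16)]                     # likely a different person

      for lam in (5.0, 20.0):                           # the unknown model parameter
          weights = [observation_likelihood(reference, c, lam) for c in candidates]
          print(f"lam={lam:5.1f}  weights={np.round(weights, 3)}")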

  11. Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics

    PubMed Central

    Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna

    2011-01-01

    A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157

  12. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models

    PubMed Central

    Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo

    2015-01-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  13. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
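
    A much-simplified illustration of the tuner-selection idea, not the patented algorithm: each candidate subset of health parameters is appended to the state as random-walk states, the steady-state Kalman error covariance is obtained from the filter Riccati equation via the usual duality with the control DARE, and the subset with the smallest theoretical error is kept. The system matrices are random stand-ins, and the score ignores the error contributed by the parameters left out of the tuning vector, which the actual method also accounts for.

      import itertools
      import numpy as np
      from scipy.linalg import solve_discrete_are

      rng = np.random.default_rng(9)
      nx, npar, ny = 2, 4, 3                      # states, health parameters, sensors
      A = np.array([[0.9, 0.1], [0.0, 0.8]])
      L = 0.1 * rng.standard_normal((nx, npar))   # effect of parameters on the dynamics
      C = rng.standard_normal((ny, nx))
      M = rng.standard_normal((ny, npar))         # effect of parameters on the outputs
      Q, R, q_tuner = 0.01 * np.eye(nx), 0.05 * np.eye(ny), 1e-4

      def steady_state_cov(subset):
          cols = list(subset)
          k = len(cols)
          A_aug = np.block([[A, L[:, cols]], [np.zeros((k, nx)), np.eye(k)]])
          C_aug = np.hstack([C, M[:, cols]])
          Q_aug = np.diag(np.r_[np.diag(Q), q_tuner * np.ones(k)])
          # Duality: the filter Riccati equation is the control DARE of (A.T, C.T)
          return solve_discrete_are(A_aug.T, C_aug.T, Q_aug, R)

      # Simplified score: trace of the steady-state augmented error covariance
      best = min(itertools.combinations(range(npar), ny),
                 key=lambda s: np.trace(steady_state_cov(s)))
      print("selected tuner subset:", best)
      print(f"steady-state error trace: {np.trace(steady_state_cov(best)):.4f}")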

  14. Multi-physics and multi-scale characterization of shale anisotropy

    NASA Astrophysics Data System (ADS)

    Sarout, J.; Nadri, D.; Delle Piane, C.; Esteban, L.; Dewhurst, D.; Clennell, M. B.

    2012-12-01

    Shales are the most abundant sedimentary rock type in the Earth's shallow crust. In the past decade or so, they have attracted increased attention from the petroleum industry as reservoirs, as well as more traditionally for their sealing capacity for hydrocarbon/CO2 traps or underground waste repositories. The effectiveness of both fundamental and applied shale research is currently limited by (i) the extreme variability of physical, mechanical and chemical properties observed for these rocks, and by (ii) the scarce data currently available. The variability in observed properties is poorly understood due to many factors that are often irrelevant for other sedimentary rocks. The relationships between these properties and the petrophysical measurements performed at the field and laboratory scales are not straightforward, translating to a scale dependency typical of shale behaviour. In addition, the complex and often anisotropic micro-/meso-structures of shales give rise to a directional dependency of some of the measured physical properties that are tensorial by nature, such as permeability or elastic stiffness. Currently, fundamental understanding of the parameters controlling the directional and scale dependency of shale properties is far from complete. Selected results of a multi-physics laboratory investigation of the directional and scale dependency of some critical shale properties are reported. In particular, anisotropic features of shale micro-/meso-structures are related to the directional dependency of elastic and fluid transport properties: - Micro-/meso-structure (μm to cm scale) characterization by electron microscopy and X-ray tomography; - Estimation of elastic anisotropy parameters on a single specimen using elastic wave propagation (cm scale); - Estimation of the permeability tensor using the steady-state method on orthogonal specimens (cm scale); - Estimation of the low-frequency diffusivity tensor using the NMR method on orthogonal specimens (<μm scale). For each of the above properties, leading-edge experimental techniques have been associated with novel interpretation tools. In this contribution, these experimental and interpretation methods are described. Relationships between the measured properties and the corresponding micro-/meso-structural features are discussed. For example, P-wave velocity was measured along 100 different propagation paths on a single cylindrical shale specimen using miniature ultrasonic transducers. Assuming that (i) the elastic tensor of this shale is transversely isotropic and (ii) the sample has been cored perfectly perpendicular to the bedding plane (symmetry plane is horizontal), Thomsen's anisotropy parameters inverted from the measured velocities are: - P-wave velocity along the symmetry axis (perpendicular to the bedding plane) αo=3.45km/s; - P-wave anisotropy ɛ=0.12; - Parameter controlling the wave front geometry δ=0.058. A novel inversion algorithm allows for recovering these parameters without assuming a priori a horizontal bedding (symmetry) plane. The inversion of the same data set using this algorithm yields (i) αo=3.23km/s, ɛ=0.25 and δ=0.18, and (ii) an elastic symmetry axis inclined at ω=30° with respect to the specimen's axis. Such a difference can have a strong impact on field applications (AVO, ray tracing, tomography).
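
    As an illustrative sketch of the velocity-inversion step (assuming, unlike the paper's more general algorithm, that the symmetry axis orientation is known), Thomsen's weak-anisotropy expression for the P-wave phase velocity can be fitted to (angle, velocity) pairs by nonlinear least squares; the synthetic data below are generated from the values quoted in the abstract plus assumed noise.

      import numpy as np
      from scipy.optimize import curve_fit

      def vp_thomsen(theta, alpha0, delta, epsilon):
          # Weak-anisotropy P-wave phase velocity:
          # Vp(theta) ~ alpha0 * (1 + delta*sin^2(theta)*cos^2(theta) + epsilon*sin^4(theta))
          s2, c2 = np.sin(theta) ** 2, np.cos(theta) ** 2
          return alpha0 * (1.0 + delta * s2 * c2 + epsilon * s2 ** 2)

      rng = np.random.default_rng(10)
      theta = np.deg2rad(np.linspace(0.0, 90.0, 25))           # angle from the symmetry axis
      true = (3.45, 0.058, 0.12)                               # alpha0 (km/s), delta, epsilon
      v_obs = vp_thomsen(theta, *true) + 0.02 * rng.standard_normal(theta.size)

      params, _ = curve_fit(vp_thomsen, theta, v_obs, p0=(3.0, 0.0, 0.0))
      print("alpha0, delta, epsilon:", np.round(params, 3))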

  15. Effect of urine urea nitrogen and protein intake adjusted by using the estimated urine creatinine excretion rate on the antiproteinuric effect of angiotensin II type I receptor blockers.

    PubMed

    Chin, Ho Jun; Kim, Dong Ki; Park, Jung Hwan; Shin, Sung Joon; Lee, Sang Ho; Choi, Bum Soon; Kim, Suhnggwon; Lim, Chun Soo

    2015-01-01

    The aim of this study was to determine the role of protein intake in proteinuria in chronic kidney disease (CKD), which is presently not conclusive. This is a subanalysis of data from an open-label, case-controlled, randomized clinical trial on education about low-salt diets (NCT01552954). We estimated the daily urinary excretion rates of several parameters, adjusted using the equation for estimating urine creatinine excretion, and analyzed the effect of urine urea nitrogen (UUN) and estimated protein intake on the level of albuminuria in hypertensive patients with chronic kidney disease. Among 174 participants from whom complete 24-h urine specimens were collected, the estimates from the Tanaka equation resulted in the highest accuracy for the urinary excretion rates of creatinine, sodium, albumin, and UUN. Among 227 participants, the baseline value of estimated urine albumin excretion (eUalb) was positively correlated with the estimated UUN (eUUN) and with protein intake according to eUUN (P = 0.012 and P = 0.038, respectively). We were able to calculate the ratios of eUalb and eUUN in 221 participants and grouped them according to the ratio of eUUN during the 16-wk trial period. The proportion of patients that achieved a decrement of eUalb ≥25% during 16 wk of angiotensin II type I receptor blocker (ARB) medication was 80% (24 of 30) in group 1, with an eUUN ratio ≤-25%; 82.2% (111 of 135) in group 2, with an eUUN ratio between -25% and 25%; and 66.1% (37 of 56) in group 3, with an eUUN ratio ≥25% (P = 0.048). The probability of a decrease in albuminuria with ARB treatment was lower in patients with an increase of eUUN or protein intake during the 16 wk of ARB treatment, as also observed in the multiple logistic regression analysis. The estimated urine urea excretion rate showed a positive association with the level of albuminuria in hypertensive patients with chronic kidney disease. An increase in eUUN excretion attenuated the antiproteinuric effect of the ARB. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    ERIC Educational Resources Information Center

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  17. INTERSPECIES CORRELATION ESTIMATION (ICE) FOR ACUTE TOXICITY TO AQUATIC ORGANISMS AND WILDLIFE. II. USER MANUAL AND SOFTWARE

    EPA Science Inventory

    Asfaw, Amha, Mark R. Ellersieck and Foster L. Mayer. 2003. Interspecies Correlation Estimations (ICE) for Acute Toxicity to Aquatic Organisms and Wildlife. II. User Manual and Software. EPA/600/R-03/106. U.S. Environmental Protection Agency, National Health and Environmental Effe...

  18. Impacts of different types of measurements on estimating unsaturated flow parameters

    NASA Astrophysics Data System (ADS)

    Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru

    2015-05-01

    This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of the EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher-accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information for inferring the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data helps to improve the parameter estimation.
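
    A minimal sketch of the joint state-parameter EnKF update used in studies of this kind: the unknown parameter is carried in the ensemble and corrected whenever an observation is assimilated, using the ensemble cross-covariance to form the gain. The forward model, noise levels and ensemble size below are assumptions, and a trivial scalar model stands in for the unsaturated flow simulator.

      import numpy as np

      rng = np.random.default_rng(11)
      Ne = 100                                   # ensemble size
      theta_true = 0.8                           # "true" hydraulic parameter
      obs_err = 0.05

      def forward(theta):
          # Stand-in forward model mapping the parameter to an observable quantity
          return np.exp(-theta)

      # Initial ensemble of parameter guesses
      theta_ens = rng.normal(0.4, 0.3, Ne)

      for step in range(10):
          # Predicted observations for each ensemble member (perturbed-observation EnKF)
          y_pred = forward(theta_ens)
          y_obs = forward(theta_true) + obs_err * rng.standard_normal()
          y_perturbed = y_obs + obs_err * rng.standard_normal(Ne)

          # Kalman gain from ensemble covariances
          cov_ty = np.cov(theta_ens, y_pred)[0, 1]
          var_y = np.var(y_pred, ddof=1) + obs_err ** 2
          K = cov_ty / var_y

          theta_ens = theta_ens + K * (y_perturbed - y_pred)
          print(f"step {step}: mean={theta_ens.mean():.3f}  sd={theta_ens.std():.3f}")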

  19. Impacts of Different Types of Measurements on Estimating Unsaturated Flow Parameters

    NASA Astrophysics Data System (ADS)

    Shi, L.

    2015-12-01

    This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of the EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher-accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information for inferring the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data helps to improve the parameter estimation.

  20. Accuracy Estimation and Parameter Advising for Protein Multiple Sequence Alignment

    PubMed Central

    DeBlasio, Dan

    2013-01-01

    We develop a novel and general approach to estimating the accuracy of multiple sequence alignments without knowledge of a reference alignment, and use our approach to address a new task that we call parameter advising: the problem of choosing values for alignment scoring function parameters from a given set of choices to maximize the accuracy of a computed alignment. For protein alignments, we consider twelve independent features that contribute to a quality alignment. An accuracy estimator is learned that is a polynomial function of these features; its coefficients are determined by minimizing its error with respect to true accuracy using mathematical optimization. Compared to prior approaches for estimating accuracy, our new approach (a) introduces novel feature functions that measure nonlocal properties of an alignment yet are fast to evaluate, (b) considers more general classes of estimators beyond linear combinations of features, and (c) develops new regression formulations for learning an estimator from examples; in addition, for parameter advising, we (d) determine the optimal parameter set of a given cardinality, which specifies the best parameter values from which to choose. Our estimator, which we call Facet (for “feature-based accuracy estimator”), yields a parameter advisor that on the hardest benchmarks provides more than a 27% improvement in accuracy over the best default parameter choice, and for parameter advising significantly outperforms the best prior approaches to assessing alignment quality. PMID:23489379
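
    A hedged sketch of the two pieces described, under stated assumptions: an accuracy estimator is learned as a degree-2 polynomial of alignment feature values by regression against known accuracies, and the resulting advisor picks, for each input, the candidate parameter setting whose computed alignment receives the highest estimated accuracy. Features, accuracies and parameter choices are synthetic stand-ins, not the Facet feature set.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(12)

      # Training benchmarks: feature vectors of computed alignments and their true accuracies
      n_train, n_feat = 300, 4
      F = rng.random((n_train, n_feat))
      true_acc = np.clip(0.2 + 0.5 * F[:, 0] + 0.3 * F[:, 1] ** 2 +
                         0.05 * rng.standard_normal(n_train), 0.0, 1.0)

      poly = PolynomialFeatures(degree=2, include_bias=True)
      estimator = LinearRegression().fit(poly.fit_transform(F), true_acc)

      # Parameter advising: several candidate parameter settings each produce an alignment
      # with its own feature vector; advise the setting with the highest estimated accuracy
      candidate_settings = ["default", "gap-heavy", "gap-light"]
      candidate_features = rng.random((len(candidate_settings), n_feat))
      scores = estimator.predict(poly.transform(candidate_features))
      print("advised parameter setting:", candidate_settings[int(np.argmax(scores))])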
