Sample records for linear combination fitting

  1. Synchrotron speciation data for zero-valent iron nanoparticles

    EPA Pesticide Factsheets

    This data set encompasses a complete analysis of synchrotron speciation data for 5 iron nanoparticle samples (P1, P2, P3, S1, S2) and metallic iron, including linear combination fitting results (Table 6 and Figure 9) and ab-initio extended X-ray absorption fine structure spectroscopy fitting (Figure 10 and Table 7). Table 6: Linear combination fitting of the XAS data for the 5 commercial nZVI/ZVI products tested. Species proportions are presented as percentages. Goodness of fit is indicated by the chi^2 value. Figure 9: Normalised Fe K-edge k3-weighted EXAFS of the 5 commercial nZVI/ZVI products tested. Dotted lines show the best 4-component linear combination fit of reference spectra. Figure 10: Fourier transformed radial distribution functions (RDFs) of the five samples and an iron metal foil. The black lines in Fig. 10 represent the sample data and the red dotted curves represent the non-linear fitting results of the EXAFS data. Table 7: Coordination parameters of Fe in the samples. This dataset is associated with the following publication: Chekli, L., B. Bayatsarmadi, R. Sekine, B. Sarkar, A. Maoz Shen, K. Scheckel, W. Skinner, R. Naidu, H. Shon, E. Lombi, and E. Donner. Analytical Characterisation of Nanoscale Zero-Valent Iron: A Methodological Review. ANALYTICA CHIMICA ACTA, Elsevier Science Ltd, New York, NY, USA, 903: 13-35, (2016).
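    As a rough illustration of the linear combination fitting (LCF) step summarized in Table 6, the sketch below expresses a measured spectrum as a non-negative weighted sum of reference spectra and reports the residual chi^2. The k-grid, the four reference spectra and the "true" fractions are synthetic placeholders, not the data behind this record.

```python
# Minimal LCF sketch: fit a normalized spectrum as a non-negative, sum-to-one
# mixture of reference spectra. Synthetic data only; illustrative of the idea,
# not the actual analysis behind Table 6.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
k = np.linspace(2.0, 12.0, 400)                       # hypothetical k-grid (1/Angstrom)

# Hypothetical reference spectra standing in for Fe species end-members
refs = np.vstack([np.sin(f * k) * np.exp(-0.05 * k) for f in (3.1, 3.6, 4.2, 4.9)])

true_w = np.array([0.55, 0.25, 0.15, 0.05])           # "true" species fractions
sample = true_w @ refs + rng.normal(0.0, 0.01, k.size)

w, _ = nnls(refs.T, sample)                           # non-negative least squares
frac = w / w.sum()                                    # report as proportions

resid = sample - w @ refs
chi2 = float(resid @ resid)                           # goodness-of-fit measure
print("fitted fractions:", np.round(frac, 3), " chi^2:", round(chi2, 4))
```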

  2. Effects of combined linear and nonlinear periodic training on physical fitness and competition times in finswimmers.

    PubMed

    Yu, Kyung-Hun; Suk, Min-Hwa; Kang, Shin-Woo; Shin, Yun-A

    2014-10-01

    The purpose of this study was to investigate the effect of combined linear and nonlinear periodic training on physical fitness and competition times in finswimmers. The linear resistance training model (6 days/week) and nonlinear underwater training (4 days/week) were applied to 12 finswimmers (age, 16.08 ± 1.44 yr; career, 3.78 ± 1.90 yr) for 12 weeks. Body composition measures included weight, body mass index (BMI), percent fat, and fat-free mass. Physical fitness measures included trunk flexion forward, trunk extension backward, Sargent jump, 1-repetition-maximum (1 RM) squat, 1 RM dead lift, knee extension, knee flexion, trunk extension, trunk flexion, and competition times. Body composition and physical fitness were improved after the 12-week periodic training program. Weight, BMI, and percent fat were significantly decreased, and trunk flexion forward, trunk extension backward, Sargent jump, 1 RM squat, 1 RM dead lift, and knee extension (right) were significantly increased. The 50- and 100-m times significantly decreased in all 12 athletes. After 12 weeks of training, all finswimmers who participated in this study improved their times in a public competition. These data indicate that combined linear and nonlinear periodic training enhanced the physical fitness and competition times in finswimmers.

  3. Using Stocking or Harvesting to Reverse Period-Doubling Bifurcations in Discrete Population Models

    Treesearch

    James F. Selgrade

    1998-01-01

    This study considers a general class of 2-dimensional, discrete population models where each per capita transition function (fitness) depends on a linear combination of the densities of the interacting populations. The fitness functions are either monotone decreasing functions (pioneer fitnesses) or one-humped functions (climax fitnesses). Four sets of necessary...

  4. Reversing Period-Doubling Bifurcations in Models of Population Interactions Using Constant Stocking or Harvesting

    Treesearch

    James F. Selgrade; James H. Roberds

    1998-01-01

    This study considers a general class of two-dimensional, discrete population models where each per capita transition function (fitness) depends on a linear combination of the densities of the interacting populations. The fitness functions are either monotone decreasing functions (pioneer fitnesses) or one-humped functions (climax fitnesses). Conditions are derived...

  5. Quantification of Liver Proton-Density Fat Fraction in 7.1 Tesla preclinical MR Systems: Impact of the Fitting Technique

    PubMed Central

    Mahlke, C; Hernando, D; Jahn, C; Cigliano, A; Ittermann, T; Mössler, A; Kromrey, ML; Domaska, G; Reeder, SB; Kühn, JP

    2016-01-01

    Purpose To investigate the feasibility of estimating the proton-density fat fraction (PDFF) using a 7.1 Tesla magnetic resonance imaging (MRI) system and to compare the accuracy of liver fat quantification using different fitting approaches. Materials and Methods Fourteen leptin-deficient ob/ob mice and eight intact controls were examined in a 7.1 Tesla animal scanner using a 3-dimensional six-echo chemical shift-encoded pulse sequence. Confounder-corrected PDFF was calculated using magnitude fitting (magnitude data alone) and combined fitting (complex and magnitude data). Differences between fitting techniques were compared using Bland-Altman analysis. In addition, PDFFs derived with both reconstructions were correlated with histopathological fat content and triglyceride mass fraction using linear regression analysis. Results The PDFFs determined with both reconstructions correlated very strongly (r=0.91). However, a small mean bias between the reconstructions (3.9%; CI 2.7%-5.1%) showed that their results diverge. For both reconstructions, there was linear correlation with histopathology (combined fitting: r=0.61; magnitude fitting: r=0.64) and triglyceride content (combined fitting: r=0.79; magnitude fitting: r=0.70). Conclusion Liver fat quantification using the PDFF derived from MRI performed at 7.1 Tesla is feasible. PDFF has strong correlations with histopathologically determined fat and with triglyceride content. However, small differences between PDFF reconstruction techniques may impair the robustness and reliability of the biomarker at 7.1 Tesla. PMID:27197806

  6. The Routine Fitting of Kinetic Data to Models

    PubMed Central

    Berman, Mones; Shahn, Ezra; Weiss, Marjory F.

    1962-01-01

    A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
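    A minimal sketch of the "linear coefficients" step described above: once the nonlinear model parameters are held at their current values, the coefficients of a sum of functions follow from an ordinary linear least-squares solve. The biexponential example and all numbers are hypothetical, not the original 1962 program.

```python
# If the measurement is c1*f1(t; p) + c2*f2(t; p), then for fixed nonlinear
# parameters p the linear coefficients c are a least-squares solution.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 60)
p = (0.7, 0.15)                                   # nonlinear rate constants (assumed fixed here)
y = 2.0 * np.exp(-p[0] * t) + 0.5 * np.exp(-p[1] * t) + rng.normal(0, 0.02, t.size)

A = np.column_stack([np.exp(-p[0] * t), np.exp(-p[1] * t)])   # basis functions
c, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted linear coefficients:", np.round(c, 3))          # close to [2.0, 0.5]
```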

  7. Linear Combination Fitting (LCF)-XANES analysis of As speciation in selected mine-impacted materials

    EPA Pesticide Factsheets

    This table provides sample identification labels and classification of sample type (tailings, calcinated, grey slime). For each sample, total arsenic and iron concentrations determined by acid digestion and ICP analysis are provided along with arsenic in-vitro bioaccessibility (As IVBA) values to estimate arsenic risk. Lastly, the table provides linear combination fitting results from synchrotron XANES analysis showing the distribution of arsenic speciation phases present in each sample along with fitting error (R-factor). This dataset is associated with the following publication: Ollson, C., E. Smith, K. Scheckel, A. Betts, and A. Juhasz. Assessment of arsenic speciation and bioaccessibility in mine-impacted materials. JOURNAL OF HAZARDOUS MATERIALS, Elsevier Science Ltd, New York, NY, USA, 313: 130-137, (2016).

  8. A policy-capturing study of the simultaneous effects of fit with jobs, groups, and organizations.

    PubMed

    Kristof-Brown, Amy L; Jansen, Karen J; Colbert, Amy E

    2002-10-01

    The authors report an experimental policy-capturing study that examines the simultaneous impact of person-job (PJ), person-group (PG), and person-organization (PO) fit on work satisfaction. Using hierarchical linear modeling, the authors determined that all 3 types of fit had important, independent effects on satisfaction. Work experience explained systematic differences in how participants weighted each type of fit. Multiple interactions also showed participants used complex strategies for combining fit cues.

  9. Determining polarizable force fields with electrostatic potentials from quantum mechanical linear response theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hao; Yang, Weitao, E-mail: weitao.yang@duke.edu; Department of Physics, Duke University, Durham, North Carolina 27708

    We developed a new method to calculate the atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within the linear response theory. This parallels the conventional approach of fitting atomic charges based on electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations for the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which itself is a linear response model. The orientation of the uniform external electric fields is integrated in all directions. The integration of orientation and QM linear response calculations together makes the fitting results independent of the orientations and magnitudes of the uniform external electric fields applied. Another advantage of our method is that QM calculation is only needed once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show comparable accuracy with those from fitting directly to the experimental or theoretical molecular polarizabilities. Since ESP is directly fitted, atomic polarizabilities obtained from our method are expected to reproduce the electrostatic interactions better. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics’ force fields and nontransferable molecule-specific atomic polarizabilities.

  10. Correlation of Respirator Fit Measured on Human Subjects and a Static Advanced Headform

    PubMed Central

    Bergman, Michael S.; He, Xinjian; Joseph, Michael E.; Zhuang, Ziqing; Heimbuch, Brian K.; Shaffer, Ronald E.; Choe, Melanie; Wander, Joseph D.

    2015-01-01

    This study assessed the correlation of N95 filtering face-piece respirator (FFR) fit between a Static Advanced Headform (StAH) and 10 human test subjects. Quantitative fit evaluations were performed on test subjects who made three visits to the laboratory. On each visit, one fit evaluation was performed on eight different FFRs of various model/size variations. Additionally, subject breathing patterns were recorded. Each fit evaluation comprised three two-minute exercises: “Normal Breathing,” “Deep Breathing,” and again “Normal Breathing.” The overall test fit factors (FF) for human tests were recorded. The same respirator samples were later mounted on the StAH and the overall test manikin fit factors (MFF) were assessed utilizing the recorded human breathing patterns. Linear regression was performed on the mean log10-transformed FF and MFF values to assess the relationship between the values obtained from humans and the StAH. This is the first study to report a positive correlation of respirator fit between a headform and test subjects. The linear regression by respirator resulted in R2 = 0.95, indicating a strong linear correlation between FF and MFF. For all respirators the geometric mean (GM) FF values were consistently higher than those of the GM MFF. For 50% of respirators, GM FF and GM MFF values were significantly different between humans and the StAH. For data grouped by subject/respirator combinations, the linear regression resulted in R2 = 0.49. A weaker correlation (R2 = 0.11) was found using only data paired by subject/respirator combination where both the test subject and StAH had passed a real-time leak check before performing the fit evaluation. For six respirators, the difference in passing rates between the StAH and humans was < 20%, while two respirators showed a difference of 29% and 43%. For data by test subject, GM FF and GM MFF values were significantly different for 40% of the subjects. Overall, the advanced headform system has potential for assessing fit for some N95 FFR model/sizes. PMID:25265037

  11. A distributed lag approach to fitting non-linear dose-response models in particulate matter air pollution time series investigations.

    PubMed

    Roberts, Steven; Martin, Michael A

    2007-06-01

    The majority of studies that have investigated the relationship between particulate matter (PM) air pollution and mortality have assumed a linear dose-response relationship and have used either a single-day's PM or a 2- or 3-day moving average of PM as the measure of PM exposure. Both of these modeling choices have come under scrutiny in the literature, the linear assumption because it does not allow for non-linearities in the dose-response relationship, and the use of the single- or multi-day moving average PM measure because it does not allow for differential PM-mortality effects spread over time. These two problems have been dealt with on a piecemeal basis with non-linear dose-response models used in some studies and distributed lag models (DLMs) used in others. In this paper, we propose a method for investigating the shape of the PM-mortality dose-response relationship that combines a non-linear dose-response model with a DLM. This combined model will be shown to produce satisfactory estimates of the PM-mortality dose-response relationship in situations where non-linear dose-response models and DLMs alone do not; that is, the combined model did not systematically underestimate or overestimate the effect of PM on mortality. The combined model is applied to ten cities in the US and a pooled dose-response model was formed. When fitted with a change-point value of 60 microg/m(3), the pooled model provides evidence for a positive association between PM and mortality. The combined model produced larger estimates for the effect of PM on mortality than when using a non-linear dose-response model or a DLM in isolation. For the combined model, the estimated percentage increases in mortality for PM concentrations of 25 and 75 microg/m(3) were 3.3% and 5.4%, respectively. In contrast, the corresponding values from a DLM used in isolation were 1.2% and 3.5%, respectively.
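    The sketch below illustrates, on synthetic data, the combination this abstract describes: the PM effect is spread over several lag days (the DLM part) and an extra "hinge" slope is allowed above a change-point of 60 µg/m³ (the non-linear part). Ordinary least squares on log counts is a crude stand-in for the Poisson time-series models actually used in such studies; the series and coefficients are invented.

```python
# Distributed-lag + change-point sketch: lagged PM terms plus an extra slope
# above 60 (a "hinge"), fit by crude OLS on log counts. Synthetic data only.
import numpy as np

rng = np.random.default_rng(2)
n, L = 1000, 3                                        # days and maximum lag
pm = rng.gamma(shape=4.0, scale=12.0, size=n)         # synthetic daily PM series

def lagged(x, lag):
    out = np.roll(x, lag)
    out[:lag] = x[0]
    return out

lags = np.column_stack([lagged(pm, l) for l in range(L + 1)])   # lags 0..3
hinge = np.clip(lags - 60.0, 0.0, None)                         # contribution above 60 only

true = 0.002 * lags.sum(axis=1) + 0.004 * hinge.sum(axis=1)     # steeper above the change-point
deaths = rng.poisson(np.exp(3.0 + true))

X = np.column_stack([np.ones(n), lags, hinge])
beta, *_ = np.linalg.lstsq(X, np.log(deaths + 0.5), rcond=None)
print("per-lag slopes below 60:", np.round(beta[1:L + 2], 4))
print("extra per-lag slopes above 60:", np.round(beta[L + 2:], 4))
```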

  12. Fitting and forecasting coupled dark energy in the non-linear regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casas, Santiago; Amendola, Luca; Pettorino, Valeria

    2016-01-01

    We consider cosmological models in which dark matter feels a fifth force mediated by the dark energy scalar field, also known as coupled dark energy. Our interest resides in estimating forecasts for future surveys like Euclid when we take into account non-linear effects, relying on new fitting functions that reproduce the non-linear matter power spectrum obtained from N-body simulations. We obtain fitting functions for models in which the dark matter-dark energy coupling is constant. Their validity is demonstrated for all available simulations in the redshift range z = 0–1.6 and wave modes below k = 1 h/Mpc. These fitting formulas can be used to test the predictions of the model in the non-linear regime without the need for additional computing-intensive N-body simulations. We then use these fitting functions to perform forecasts on the constraining power that future galaxy-redshift surveys like Euclid will have on the coupling parameter, using the Fisher matrix method for galaxy clustering (GC) and weak lensing (WL). We find that by using information in the non-linear power spectrum, and combining the GC and WL probes, we can constrain the dark matter-dark energy coupling constant squared, β², with precision smaller than 4% and all other cosmological parameters better than 1%, which is a considerable improvement of more than an order of magnitude compared to corresponding linear power spectrum forecasts with the same survey specifications.

  13. Realistic Subsurface Anomaly Discrimination Using Electromagnetic Induction and an SVM Classifier

    DTIC Science & Technology

    2010-01-01

    proposed by Pasion and Oldenburg [25]: Q(t) = k t^(−β) e^(−γt). (10) Various combinations of these fitting parameters can be used as inputs to classifier... Pasion-Oldenburg parameters k, β, and γ for each anomaly by a direct nonlinear least-squares fit of (10) and by linear (pseudo)inversion of its...combinations of the Pasion-Oldenburg parameters. Combining k and γ yields results similar to those of k and R, as Figure 7 and Table 2 show. Figure 8 and

  14. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-05-13

    Here, we propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. Finally, the method has been successfully demonstrated on the NSLS-II storage ring.
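    As a rough sketch of the ICA step this record describes, the code below separates a betatron-like oscillation from synthetic turn-by-turn BPM data and reads the mode amplitude and phase at each BPM from the mixing matrix. FastICA from scikit-learn stands in for the authors' ICA implementation; the lattice, tune and noise level are invented and this is not the NSLS-II analysis.

```python
# Isolate a betatron normal mode from synthetic turn-by-turn BPM data with ICA,
# then extract per-BPM amplitude and phase from the mixing matrix.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_bpm, n_turn, tune = 30, 1024, 0.22
phase_adv = np.cumsum(rng.uniform(0.1, 0.3, n_bpm))       # betatron phase at each BPM (rad)
beta_amp = 1.0 + 0.3 * rng.random(n_bpm)                   # sqrt(beta)-like amplitude at each BPM

turns = np.arange(n_turn)
x = beta_amp[:, None] * np.cos(2 * np.pi * tune * turns + phase_adv[:, None])
x += rng.normal(0, 0.02, x.shape)                           # BPM noise

ica = FastICA(n_components=2, random_state=0)               # cosine/sine pair of one mode
ica.fit(x.T)
mix = ica.mixing_                                           # (BPMs, 2) projections of the mode

amp = np.hypot(mix[:, 0], mix[:, 1])                        # mode amplitude per BPM
phase = np.unwrap(np.arctan2(mix[:, 1], mix[:, 0]))         # phase per BPM, up to offset/sign
ratio = amp / beta_amp
print("relative spread of amplitude ratio:", round(float(np.std(ratio) / np.mean(ratio)), 3))
```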

  15. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-08-01

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. Furthermore, the fitting results are used for lattice correction. Our method has been successfully demonstrated on the NSLS-II storage ring.

  16. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-08-01

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. The method has been successfully demonstrated on the NSLS-II storage ring.

  17. A Fifth-order Symplectic Trigonometrically Fitted Partitioned Runge-Kutta Method

    NASA Astrophysics Data System (ADS)

    Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.

    2007-09-01

    Trigonometrically fitted symplectic Partitioned Runge Kutta (EFSPRK) methods for the numerical integration of Hamiltonian systems with oscillatory solutions are derived. These methods integrate exactly differential systems whose solutions can be expressed as linear combinations of the set of functions sin(wx), cos(wx), w ∈ R. We modify a fifth order symplectic PRK method with six stages so as to derive an exponentially fitted SPRK method. The methods are tested on the numerical integration of the two-body problem.

  18. Assessing the impact of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping

    2014-05-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a-posteriori approach, (i) the NTAL model derived from the National Centre for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter based approach, two distinct linear TRFs are estimated combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effect of different weighting structure adopted while estimating the linear fits to the NTAL displacements. Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained restoring the NTAL displacements and the standard stacked linear reference frames are discussed.

  19. Analytical methods in multivariate highway safety exposure data estimation

    DOT National Transportation Integrated Search

    1984-01-01

    Three general analytical techniques which may be of use in extending, enhancing, and combining highway accident exposure data are discussed. The techniques are log-linear modelling, iterative proportional fitting and the expectation maximization...
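    Of the three techniques named, iterative proportional fitting is the most self-contained to illustrate: a seed table is rescaled until its row and column sums match target margins. The sketch below uses invented numbers, not anything from the report.

```python
# Iterative proportional fitting (IPF): scale a seed contingency table until
# its margins match target row and column totals. Illustrative values only.
import numpy as np

seed = np.array([[10.0, 20.0, 30.0],
                 [15.0,  5.0, 25.0]])
row_targets = np.array([70.0, 35.0])            # desired row sums
col_targets = np.array([30.0, 20.0, 55.0])      # desired column sums (same grand total)

table = seed.copy()
for _ in range(100):
    table *= (row_targets / table.sum(axis=1))[:, None]   # match row margins
    table *= (col_targets / table.sum(axis=0))[None, :]   # match column margins
    if np.allclose(table.sum(axis=1), row_targets, atol=1e-9):
        break

print(np.round(table, 3))
print("row sums:", np.round(table.sum(axis=1), 3), " col sums:", np.round(table.sum(axis=0), 3))
```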

  20. [ASSOCIATION BETWEEN HEALTH RELATED QUALITY OF LIFE, BODYWEIGHT STATUS (BMI) AND PHYSICAL ACTIVITY AND FITNESS LEVELS IN CHILEAN ADOLESCENTS].

    PubMed

    García-Rubio, Javier; Olivares, Pedro R; Lopez-Legarrea, Patricia; Gómez-Campos, Rossana; Cossio-Bolaños, Marco A; Merellano-Navarro, Eugenio

    2015-10-01

    The objective of this study was to analyze the potential relationships of Health Related Quality of Life (HRQoL) with weight status, physical activity (PA) and fitness in Chilean adolescents, in both independent and combined analyses. A sample of 767 participants (47.5% females) aged between 12 and 18 (mean age 15.5) was employed. All measurements were carried out using self-reported instruments; Kidscreen-10, iPAQ and IFIS were used to assess HRQoL, PA and fitness, respectively. One-factor ANOVA and linear regression models were applied to analyze associations between HRQoL, weight status, PA and fitness using age and sex as confounders. Body mass index, level of PA and fitness were independently associated with HRQoL in Chilean adolescents. However, the combined analysis of these associations, adjusted by sex and age, showed that only fitness was significantly related to HRQoL. General fitness is associated with HRQoL independently of sex, age, bodyweight status and level of PA. The relationships of nutritional status and weekly PA with HRQoL are mediated by sex, age and general fitness. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  1. Standard and goodness-of-fit parameter estimation methods for the three-parameter lognormal distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1982-01-01

    A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method, which has some undesirable characteristics for estimation from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined and example data sets are analyzed.

  2. The development of a combined mathematical model to forecast the incidence of hepatitis E in Shanghai, China.

    PubMed

    Ren, Hong; Li, Jian; Yuan, Zheng-An; Hu, Jia-Yu; Yu, Yan; Lu, Yi-Han

    2013-09-08

    Sporadic hepatitis E has become an important public health concern in China. Accurate forecasting of the incidence of hepatitis E is needed to better plan future medical needs. Few mathematical models can be used because hepatitis E morbidity data have both linear and nonlinear patterns. We developed a combined mathematical model using an autoregressive integrated moving average model (ARIMA) and a back propagation neural network (BPNN) to forecast the incidence of hepatitis E. The morbidity data of hepatitis E in Shanghai from 2000 to 2012 were retrieved from the China Information System for Disease Control and Prevention. The ARIMA-BPNN combined model was trained with 144 months of morbidity data from January 2000 to December 2011, validated with 12 months of data from January 2012 to December 2012, and then employed to forecast hepatitis E incidence from January 2013 to December 2013 in Shanghai. Residual analysis, Root Mean Square Error (RMSE), normalized Bayesian Information Criterion (BIC), and stationary R square methods were used to compare the goodness-of-fit among ARIMA models. The Bayesian regularization back-propagation algorithm was used to train the network. The mean error rate (MER) was used to assess the validity of the combined model. A total of 7,489 hepatitis E cases were reported in Shanghai from 2000 to 2012. Goodness-of-fit (stationary R2=0.531, BIC= -4.768, Ljung-Box Q statistics=15.59, P=0.482) and parameter estimates were used to determine the best-fitting model as ARIMA (0,1,1)×(0,1,1)12. Predicted morbidity values in 2012 from the best-fitting ARIMA model and actual morbidity data from 2000 to 2011 were used to further construct the combined model. The MER of the ARIMA model and the ARIMA-BPNN combined model were 0.250 and 0.176, respectively. The forecasted incidence of hepatitis E in 2013 was 0.095 to 0.372 per 100,000 population. There was a seasonal variation with a peak during January-March and a nadir during August-October. Time series analysis suggested a seasonal pattern of hepatitis E morbidity in Shanghai, China. An ARIMA-BPNN combined model was used to fit the linear and nonlinear patterns of the time series data, and accurately forecast hepatitis E infections.
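    A rough sketch of the hybrid idea on synthetic monthly data: a seasonal ARIMA model captures the linear part, a small neural network is trained on its residuals, and the two forecasts are summed. SARIMAX and MLPRegressor are stand-ins for the tools actually used, the series is invented, and the mean-error-rate line is just one common way to express that summary, not necessarily the paper's exact formula.

```python
# ARIMA + neural-network hybrid sketch: seasonal ARIMA for the linear/seasonal
# structure, a small MLP for its residuals, forecasts added together.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
months = np.arange(156)                                        # 13 years of monthly data
y = 0.2 + 0.1 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.03, months.size)
train, test = y[:144], y[144:]

arima = SARIMAX(train, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
resid = arima.resid

p = 12                                                         # lagged residuals as NN inputs
X = np.column_stack([resid[i:len(resid) - p + i] for i in range(p)])
z = resid[p:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, z)

linear_fc = arima.forecast(steps=12)
nonlin_fc = np.zeros(12)
window = resid[-p:].copy()
for i in range(12):                                            # recursive residual forecasts
    nonlin_fc[i] = nn.predict(window.reshape(1, -1))[0]
    window = np.roll(window, -1)
    window[-1] = nonlin_fc[i]

combined = linear_fc + nonlin_fc
mer = np.mean(np.abs(combined - test)) / np.mean(test)         # one common mean-error-rate form
print("MER of combined forecast:", round(float(mer), 3))
```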

  3. Investigation of empirical damping laws for the space shuttle

    NASA Technical Reports Server (NTRS)

    Bernstein, E. L.

    1973-01-01

    An analysis of dynamic test data from vibration testing of a number of aerospace vehicles was made to develop an empirical structural damping law. A systematic attempt was made to fit dissipated energy/cycle to combinations of all dynamic variables. The best-fit laws for bending, torsion, and longitudinal motion are given, with error bounds. A discussion and estimate are made of error sources. Programs are developed for predicting equivalent linear structural damping coefficients and finding the response of nonlinearly damped structures.

  4. Performance improvement for optimization of the non-linear geometric fitting problem in manufacturing metrology

    NASA Astrophysics Data System (ADS)

    Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano

    2014-08-01

    Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, by both speeding up acquisition and increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting the substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, sphere and cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and a cylinder is the most relevant geometry for a pin-hole relation as an assembly feature to construct a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This yields a performance improvement in the optimization of a non-linear function fitting the three geometries. The results show that, with this combination, higher-quality fitting results, i.e. a smaller norm of the residuals, can be obtained while preserving the computational cost. Fitting an ‘incomplete point cloud’, which is a situation where the point cloud does not cover a complete feature, e.g. only half of the total part surface, is also investigated. Finally, a case study of fitting a hemisphere is presented.
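    The sketch below shows the core non-linear least-squares step for one of the three geometries, a circle: the residuals are point-to-circle distances and SciPy's Levenberg-Marquardt solver minimizes them. A simple randomized multi-start stands in for the chaos-optimization initial guess the article proposes; the half-circle point cloud is synthetic.

```python
# Non-linear least-squares circle fit (LM) with a randomized multi-start in
# place of the chaos-optimization initial guess. Synthetic "incomplete" cloud.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
theta = rng.uniform(0, np.pi, 200)                       # only half the circle is sampled
pts = np.column_stack([4.0 + 2.5 * np.cos(theta), -1.0 + 2.5 * np.sin(theta)])
pts += rng.normal(0, 0.02, pts.shape)

def residuals(p, xy):
    cx, cy, r = p
    return np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r    # signed distance to the circle

best = None
for _ in range(20):                                      # multi-start over random initial guesses
    p0 = np.array([rng.uniform(-10, 10), rng.uniform(-10, 10), rng.uniform(0.5, 10)])
    sol = least_squares(residuals, p0, args=(pts,), method="lm")
    if best is None or sol.cost < best.cost:
        best = sol

print("center:", np.round(best.x[:2], 3), " radius:", round(float(best.x[2]), 3),
      " residual norm:", round(float(np.sqrt(2 * best.cost)), 4))
```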

  5. Comparison of the A-Cc curve fitting methods in determining maximum ribulose 1,5-bisphosphate carboxylase/oxygenase carboxylation rate, potential light saturated electron transport rate and leaf dark respiration.

    PubMed

    Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei

    2009-02-01

    A review of the literature revealed that a variety of methods are currently used for fitting net CO2 assimilation versus chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in estimating the A-Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimations of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm with the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Vcmax, Jmax and/or TPU with Vcmax, Rd and/or gm held as constants.

  6. Can a Linear Sigma Model Describe Walking Gauge Theories at Low Energies?

    NASA Astrophysics Data System (ADS)

    Gasbarro, Andrew

    2018-03-01

    In recent years, many investigations of confining Yang-Mills gauge theories near the edge of the conformal window have been carried out using lattice techniques. These studies have revealed that the spectrum of hadrons in nearly conformal ("walking") gauge theories differs significantly from the QCD spectrum. In particular, a light singlet scalar appears in the spectrum which is nearly degenerate with the PNGBs at the lightest currently accessible quark masses. This state is a viable candidate for a composite Higgs boson. Presently, an acceptable effective field theory (EFT) description of the light states in walking theories has not been established. Such an EFT would be useful for performing chiral extrapolations of lattice data and for serving as a bridge between lattice calculations and phenomenology. It has been shown that the chiral Lagrangian fails to describe the IR dynamics of a theory near the edge of the conformal window. Here we assess a linear sigma model as an alternate EFT description by performing explicit chiral fits to lattice data. In a combined fit to the Goldstone (pion) mass and decay constant, a tree-level linear sigma model has a Χ2/d.o.f. = 0.5 compared to Χ2/d.o.f. = 29.6 from fitting next-to-leading order chiral perturbation theory. When the 0++ (σ) mass is included in the fit, Χ2/d.o.f. = 4.9. We remark on future directions for providing better fits to the σ mass.

  7. Analytical solution of Luedeking-Piret equation for a batch fermentation obeying Monod growth kinetics.

    PubMed

    Garnier, Alain; Gaillet, Bruno

    2015-12-01

    Few mathematical fermentation models allow analytical solutions of batch process dynamics. The most widely used is the combination of logistic microbial growth kinetics with the Luedeking-Piret bioproduct synthesis relation. However, the logistic equation is principally based on formalistic similarities and only fits a limited range of fermentation types. In this article, we have developed an analytical solution for the combination of Monod growth kinetics with the Luedeking-Piret relation, which can be identified by linear regression and used to simulate batch fermentation evolution. Two classical examples are used to show the quality of fit and the simplicity of the method proposed. A solution combining the Haldane substrate-limited growth model with the Luedeking-Piret relation is also provided. These models could prove useful for the analysis of fermentation data in industry as well as academia. © 2015 Wiley Periodicals, Inc.
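    The linear-regression identification mentioned above can be sketched directly from the Luedeking-Piret relation dP/dt = alpha*dX/dt + beta*X: regressing the product formation rate on the biomass growth rate and the biomass recovers alpha and beta. The batch data below are synthetic, with a logistic-like biomass curve standing in for the Monod-limited solution.

```python
# Identify Luedeking-Piret coefficients by linear regression on synthetic data:
# dP/dt = alpha*dX/dt + beta*X, so regress dP/dt on [dX/dt, X].
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 24, 100)
X = 0.2 * np.exp(0.25 * t) / (1 + 0.2 * (np.exp(0.25 * t) - 1) / 5.0)   # biomass (logistic-like)
alpha_true, beta_true = 1.8, 0.05
dXdt = np.gradient(X, t)
dPdt = alpha_true * dXdt + beta_true * X + rng.normal(0, 0.002, t.size)

A = np.column_stack([dXdt, X])
(alpha, beta), *_ = np.linalg.lstsq(A, dPdt, rcond=None)
print("alpha ~", round(float(alpha), 3), " beta ~", round(float(beta), 3))
```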

  8. Planar Cubics Through a Point in a Direction

    NASA Technical Reports Server (NTRS)

    Chou, J. J.; Blake, M. W.

    1993-01-01

    It is shown that the planar cubics through three points and the associated tangent directions can be found by solving a cubic equation and a 2 x 2 system of linear equations. The result is combined with a previously published scheme to produce a better curve-fitting method.

  9. Comparison of all atom, continuum, and linear fitting empirical models for charge screening effect of aqueous medium surrounding a protein molecule

    NASA Astrophysics Data System (ADS)

    Takahashi, Takuya; Sugiura, Junnnosuke; Nagayama, Kuniaki

    2002-05-01

    To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in an aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly consider every atom (i.e., an all atom model). This all atom calculated potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant which increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. Correlation between the all atom model and the continuum models was found to be better than the respective correlation calculated for linear fitting to the two models. This confirms that the continuum model is better at treating the complicated shapes of protein conformations than the simple linear fitting empirical model. We have tried a sigmoid fitting empirical model in addition to the linear one. When weights of all data were treated equally, the sigmoid model, which requires two fitting parameters, fits results of both the all atom and the continuum models less accurately than the linear model, which requires only one fitting parameter. When potential values are chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slope of both linear fitting curves became smaller. This suggests that the screening effect of an aqueous medium within a short range, where potential values are relatively large, is smaller than that expected from the linear fitting curve whose slope is almost 4. To investigate the linear increase of the effective relative dielectric constant, the Poisson equation of a low-dielectric sphere in a high-dielectric medium was solved, and charges distributed near the molecular surface were indicated to lead to the apparent linearity.

  10. Data Series Subtraction with Unknown and Unmodeled Background Noise

    NASA Technical Reports Server (NTRS)

    Vitale, Stefano; Congedo, Giuseppe; Dolesi, Rita; Ferroni, Valerio; Hueller, Mauro; Vetrugno, Daniele; Weber, William Joseph; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo

    2014-01-01

    LISA Pathfinder (LPF), the precursor mission to a gravitational wave observatory of the European Space Agency, will measure the degree to which two test masses can be put into free fall, aiming to demonstrate a suppression of disturbance forces corresponding to a residual relative acceleration with a power spectral density (PSD) below (30 fm/s²/√Hz)² around 1 mHz. In LPF data analysis, the disturbance forces are obtained as the difference between the acceleration data and a linear combination of other measured data series. In many circumstances, the coefficients for this linear combination are obtained by fitting these data series to the acceleration, and the disturbance forces appear then as the data series of the residuals of the fit. Thus the background noise or, more precisely, its PSD, whose knowledge is needed to build up the likelihood function in ordinary maximum likelihood fitting, is here unknown, and its estimate constitutes instead one of the goals of the fit. In this paper we present a fitting method that does not require the knowledge of the PSD of the background noise. The method is based on the analytical marginalization of the posterior parameter probability density with respect to the background noise PSD, and returns an estimate both for the fitting parameters and for the PSD. We show that both these estimates are unbiased, and that, when using averaged Welch's periodograms for the residuals, the estimate of the PSD is consistent, as its error tends to zero with the inverse square root of the number of averaged periodograms. Additionally, we find that the method is equivalent to some implementations of iteratively reweighted least-squares fitting. We have tested the method both on simulated data of known PSD and on data from several experiments performed with the LISA Pathfinder end-to-end mission simulator.
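    A much-simplified sketch of the fitting problem described here, on synthetic data: the coefficients of a linear combination of measured series are fit to an "acceleration" series by iteratively reweighted least squares in the frequency domain, with per-bin weights taken from a smoothed periodogram of the current residuals (a crude stand-in for Welch averaging). This is only meant to convey the reweighting idea, not the LPF analysis pipeline or its marginalization formalism.

```python
# IRLS sketch: fit coefficients of a linear combination of data series to an
# acceleration series when the background-noise PSD is unknown, reweighting by
# the smoothed residual periodogram each iteration. Synthetic colored noise.
import numpy as np

rng = np.random.default_rng(7)
n = 4096
regressors = rng.normal(size=(n, 2))                     # auxiliary measured data series
true_c = np.array([0.8, -0.3])
colored = np.convolve(rng.normal(size=n), np.ones(20) / 20, mode="same")   # unknown-PSD noise
accel = regressors @ true_c + colored

A = np.fft.rfft(regressors, axis=0)
y = np.fft.rfft(accel)

c = np.zeros(2)
for _ in range(10):                                       # IRLS iterations
    resid = y - A @ c
    psd = np.convolve(np.abs(resid) ** 2, np.ones(33) / 33, mode="same") + 1e-12
    w = 1.0 / psd                                         # per-bin weights from residual PSD
    lhs = (A.conj().T * w) @ A
    rhs = (A.conj().T * w) @ y
    c = np.linalg.solve(lhs, rhs).real

print("fitted coefficients:", np.round(c, 3), " true:", true_c)
```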

  11. Non-linearity of the collagen triple helix in solution and implications for collagen function.

    PubMed

    Walker, Kenneth T; Nan, Ruodan; Wright, David W; Gor, Jayesh; Bishop, Anthony C; Makhatadze, George I; Brodsky, Barbara; Perkins, Stephen J

    2017-06-16

    Collagen adopts a characteristic supercoiled triple helical conformation which requires a repeating (Xaa-Yaa-Gly)n sequence. Despite the abundance of collagen, a combined experimental and atomistic modelling approach has not so far quantitated the degree of flexibility seen experimentally in the solution structures of collagen triple helices. To address this question, we report an experimental study on the flexibility of varying lengths of collagen triple helical peptides, composed of six, eight, ten and twelve repeats of the most stable Pro-Hyp-Gly (POG) units. In addition, one unblocked peptide, (POG)10unblocked, was compared with the blocked (POG)10 as a control for the significance of end effects. Complementary analytical ultracentrifugation and synchrotron small angle X-ray scattering data showed that the conformations of the longer triple helical peptides were not well explained by a linear structure derived from crystallography. To interpret these data, molecular dynamics simulations were used to generate 50 000 physically realistic collagen structures for each of the helices. These structures were fitted against their respective scattering data to reveal the best fitting structures from this large ensemble of possible helix structures. This curve fitting confirmed a small degree of non-linearity to exist in these best fit triple helices, with the degree of bending approximated as 4-17° from linearity. Our results open the way for further studies of other collagen triple helices with different sequences and stabilities in order to clarify the role of molecular rigidity and flexibility in collagen extracellular and immune function and disease. © 2017 The Author(s).

  12. Assessment of combined antiandrogenic effects of binary parabens mixtures in a yeast-based reporter assay.

    PubMed

    Ma, Dehua; Chen, Lujun; Zhu, Xiaobiao; Li, Feifei; Liu, Cong; Liu, Rui

    2014-05-01

    To date, toxicological studies of endocrine disrupting chemicals (EDCs) have typically focused on single chemical exposures and associated effects. However, exposure to EDC mixtures in the environment is common. Antiandrogens represent a group of EDCs, which draw increasing attention due to their resultant demasculinization and sexual disruption of aquatic organisms. Although there are a number of in vivo and in vitro studies investigating the combined effects of antiandrogen mixtures, these studies are mainly on selected model compounds such as flutamide, procymidone, and vinclozolin. The aim of the present study is to investigate the combined antiandrogenic effects of parabens, which are widely used antiandrogens in industrial and domestic commodities. A yeast-based human androgen receptor (hAR) assay (YAS) was applied to assess the antiandrogenic activities of n-propylparaben (nPrP), iso-propylparaben (iPrP), methylparaben (MeP), and 4-n-pentylphenol (PeP), as well as the binary mixtures of nPrP with each of the other three antiandrogens. All of the four compounds could exhibit antiandrogenic activity via the hAR. A linear interaction model was applied to quantitatively analyze the interaction between nPrP and each of the other three antiandrogens. The isoboles method was modified to show the variation of combined effects as the concentrations of mixed antiandrogens were changed. Graphs were constructed to show isoeffective curves of three binary mixtures based on the fitted linear interaction model and to evaluate the interaction of the mixed antiandrogens (synergism or antagonism). The combined effect of equimolar combinations of the three mixtures was also considered with the nonlinear isoboles method. The main effect parameters and interaction effect parameters in the linear interaction models of the three mixtures were different from zero. The results showed that any two antiandrogens in their binary mixtures tended to exert equal antiandrogenic activity in the linear concentration ranges. The antiandrogenicity of the binary mixture and the concentration of nPrP were fitted to a sigmoidal model if the concentrations of the other antiandrogens (iPrP, MeP, and PeP) in the mixture were lower than the AR saturation concentrations. Some concave isoboles above the additivity line appeared in all the three mixtures. There were some synergistic effects of the binary mixture of nPrP and MeP at low concentrations in the linear concentration ranges. Interestingly, when the antiandrogen concentrations approached saturation, the interaction between chemicals was antagonistic for all three mixtures tested. When the toxicity of the three mixtures was assessed using nonlinear isoboles, only antagonism was observed for equimolar combinations of nPrP and iPrP as the concentrations were increased from the no-observed-effect-concentration (NOEC) to effective concentration of 80%. In addition, the interactions were changed from synergistic to antagonistic as effective concentrations were increased in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP. The combined effects of three binary antiandrogen mixtures in the linear ranges were successfully evaluated by curve fitting and isoboles. The combined effects of specific binary mixtures varied depending on the concentrations of the chemicals in the mixtures. At low concentrations in the linear concentration ranges, there was a synergistic interaction in the binary mixture of nPrP and MeP. The interaction tended to be antagonistic as the antiandrogens approached saturation concentrations in mixtures of nPrP with each of the other three antiandrogens. The synergistic interaction was also found in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP, at low concentrations with another method of nonlinear isoboles. The mixture activities of binary antiandrogens had a tendency towards antagonism at high concentrations and synergism at low concentrations.

  13. Variational and robust density fitting of four-center two-electron integrals in local metrics

    NASA Astrophysics Data System (ADS)

    Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjærgaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Høst, Stinne; Salek, Paweł

    2008-09-01

    Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.

  14. Variational and robust density fitting of four-center two-electron integrals in local metrics.

    PubMed

    Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjaergaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Host, Stinne; Salek, Paweł

    2008-09-14

    Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.

  15. Adiposity as a full mediator of the influence of cardiorespiratory fitness and inflammation in schoolchildren: The FUPRECOL Study.

    PubMed

    Garcia-Hermoso, A; Agostinis-Sobrinho, C; Mota, J; Santos, R M; Correa-Bautista, J E; Ramírez-Vélez, R

    2017-06-01

    Studies in the paediatric population have shown inconsistent associations between cardiorespiratory fitness and inflammation independently of adiposity. The purpose of this study was (i) to analyse the combined association of cardiorespiratory fitness and adiposity with high-sensitivity C-reactive protein (hs-CRP), and (ii) to determine whether adiposity acts as a mediator on the association between cardiorespiratory fitness and hs-CRP in children and adolescents. This cross-sectional study included 935 (54.7% girls) healthy children and adolescents from Bogotá, Colombia. The 20 m shuttle run test was used to estimate cardiorespiratory fitness. We assessed the following adiposity parameters: body mass index, waist circumference, fat mass index, and the sum of subscapular and triceps skinfold thicknesses. High sensitivity assays were used to obtain hs-CRP. Linear regression models were fitted for mediation analyses that examined whether the association between cardiorespiratory fitness and hs-CRP was mediated by each of the adiposity parameters, following the Baron and Kenny procedure. Lower levels of hs-CRP were associated with the best schoolchildren profiles (high cardiorespiratory fitness + low adiposity) (p for trend <0.001 in the four adiposity parameters), compared with unfit and overweight (low cardiorespiratory fitness + high adiposity) counterparts. Linear regression models suggest a full mediation of adiposity on the association between cardiorespiratory fitness and hs-CRP levels. Our findings seem to emphasize the importance of obesity prevention in childhood, suggesting that having high levels of cardiorespiratory fitness may not counteract the negative consequences ascribed to adiposity on hs-CRP. Copyright © 2017 The Italian Society of Diabetology, the Italian Society for the Study of Atherosclerosis, the Italian Society of Human Nutrition, and the Department of Clinical Medicine and Surgery, Federico II University. Published by Elsevier B.V. All rights reserved.

  16. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…

  17. Universal Linear Fit Identification: A Method Independent of Data, Outliers and Noise Distribution Model and Free of Missing or Removed Data Imputation.

    PubMed

    Adikaram, K K L B; Hussein, M A; Effenberger, M; Becker, T

    2015-01-01

    Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses an indicator 2/n to identify linear fit, where n is the number of terms in a series. The ratio Rmax = (amax - amin)/(Sn - amin*n) and the ratio Rmin = (amax - amin)/(amax*n - Sn) are always equal to 2/n, where amax is the maximum element, amin is the minimum element and Sn is the sum of all elements. If any series expected to follow y = c consists of data that do not agree with the y = c form, Rmax > 2/n and Rmin > 2/n imply that the maximum and minimum elements, respectively, do not agree with the linear fit. We define threshold values for outlier and noise detection as 2/n * (1 + k1) and 2/n * (1 + k2), respectively, where k1 > k2 and 0 ≤ k1 ≤ n/2 - 1. Given this relation and a transformation technique, which transforms data into the form y = c, we show that removing all data that do not agree with the linear fit is possible. Furthermore, the method is independent of the number of data points, missing data, removed data points and the nature of the distribution (Gaussian or non-Gaussian) of outliers, noise and clean data. These are major advantages over the existing linear fit methods. Since having a perfect linear relation between two variables in the real world is impossible, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit when the percentage of data agreeing with the linear fit is less than 50%, and the deviation of data that do not agree with the linear fit is very small, of the order of ±10⁻⁴%. The method results in incorrect detections only when numerical accuracy is insufficient in the calculation process.
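    The indicator described above is simple enough to state in a few lines: for an ideal arithmetic (linear) series, both ratios equal 2/n, and a value above 2/n*(1 + k) flags the maximum or minimum element as disagreeing with the linear fit. The series, the injected outlier and the threshold factor below are illustrative choices, not the paper's test cases.

```python
# The 2/n indicator: Rmax = (amax - amin)/(Sn - amin*n) and
# Rmin = (amax - amin)/(amax*n - Sn) both equal 2/n for a clean linear series;
# larger values flag the max/min element. Illustrative data and threshold.
import numpy as np

def rmax_rmin(a):
    n, s = a.size, a.sum()
    spread = a.max() - a.min()
    return spread / (s - a.min() * n), spread / (a.max() * n - s)

series = np.linspace(1.0, 10.0, 10)                  # clean linear series
print("clean:", np.round(rmax_rmin(series), 4), " 2/n =", round(2 / series.size, 4))

corrupted = series.copy()
corrupted[7] = 40.0                                  # inject one outlier
rmax, rmin = rmax_rmin(corrupted)
k1 = 0.5                                             # outlier threshold factor (illustrative)
thresh = (2 / corrupted.size) * (1 + k1)
print("Rmax:", round(rmax, 4), " Rmin:", round(rmin, 4),
      " max flagged:", rmax > thresh, " min flagged:", rmin > thresh)
```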

  18. Testing the dose-response specification in epidemiology: public health and policy consequences for lead.

    PubMed

    Rothenberg, Stephen J; Rothenberg, Jesse C

    2005-09-01

    Statistical evaluation of the dose-response function in lead epidemiology is rarely attempted. Economic evaluation of health benefits of lead reduction usually assumes a linear dose-response function, regardless of the outcome measure used. We reanalyzed a previously published study, an international pooled data set combining data from seven prospective lead studies examining contemporaneous blood lead effect on IQ (intelligence quotient) of 7-year-old children (n = 1,333). We constructed alternative linear multiple regression models with linear blood lead terms (linear-linear dose response) and natural-log-transformed blood lead terms (log-linear dose response). We tested the two lead specifications for nonlinearity in the models, compared the two lead specifications for significantly better fit to the data, and examined the effects of possible residual confounding on the functional form of the dose-response relationship. We found that a log-linear lead-IQ relationship was a significantly better fit than was a linear-linear relationship for IQ (p = 0.009), with little evidence of residual confounding of included model variables. We substituted the log-linear lead-IQ effect in a previously published health benefits model and found that the economic savings due to U.S. population lead decrease between 1976 and 1999 (from 17.1 microg/dL to 2.0 microg/dL) was 2.2 times (319 billion dollars) that calculated using a linear-linear dose-response function (149 billion dollars). The Centers for Disease Control and Prevention action limit of 10 microg/dL for children fails to protect against most damage and economic cost attributable to lead exposure.

  19. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    An optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates considerably the optimization process because the linear parameters are not the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi^2, obtained from the Taylor series expansion of chi^2, is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones and it does not apply to the model functions which are multi-linear combinations of nonlinear functions.
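    The division of labor in GA-MLR can be sketched with a biexponential decay: a search over the nonlinear lifetimes evaluates each candidate by solving for the linear amplitudes with least squares (the MLR step) and scoring the residual. The toy select-and-mutate loop below stands in for a proper genetic algorithm, and all decay parameters are invented.

```python
# GA-MLR style separable fit for a biexponential decay: nonlinear lifetimes are
# searched by a toy evolutionary loop; amplitudes come from linear least squares.
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0, 20, 400)
y = 1.5 * np.exp(-t / 1.2) + 0.4 * np.exp(-t / 6.0) + rng.normal(0, 0.01, t.size)

def score(taus):
    A = np.exp(-t[:, None] / taus[None, :])          # basis set by nonlinear parameters
    amps, *_ = np.linalg.lstsq(A, y, rcond=None)     # MLR step: linear amplitudes
    r = y - A @ amps
    return float(r @ r), amps

pop = rng.uniform(0.2, 15.0, size=(40, 2))           # population of lifetime pairs
for _ in range(60):                                   # toy GA: select the best, mutate
    fitness = np.array([score(p)[0] for p in pop])
    parents = pop[np.argsort(fitness)[:10]]
    children = parents[rng.integers(0, 10, 30)] * rng.normal(1.0, 0.1, (30, 2))
    pop = np.vstack([parents, np.abs(children) + 1e-3])

best = pop[np.argmin([score(p)[0] for p in pop])]
print("lifetimes:", np.round(best, 3), " amplitudes:", np.round(score(best)[1], 3))
```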

  20. The relationship of gross upper and lower limb motor competence to measures of health and fitness in adolescents aged 13-14 years.

    PubMed

    Weedon, Benjamin David; Liu, Francesca; Mahmoud, Wala; Metz, Renske; Beunder, Kyle; Delextrat, Anne; Morris, Martyn G; Esser, Patrick; Collett, Johnny; Meaney, Andy; Howells, Ken; Dawes, Helen

    2018-01-01

    Motor competence (MC) is an important factor in the development of health and fitness in adolescence. This cross-sectional study aims to explore the distribution of MC across school students aged 13-14 years old and the extent of the relationship of MC to measures of health and fitness across genders. A total of 718 participants were tested from three different schools in the UK, 311 girls and 407 boys (aged 13-14 years); pairwise deletion for correlation variables reduced this to 555 (245 girls, 310 boys). Assessments consisted of body mass index, aerobic capacity, anaerobic power, and upper limb and lower limb MC. The distribution of MC and the strength of the relationships between MC and health/fitness measures were explored. Girls scored lower than boys on MC and health/fitness measures. Both measures of MC showed a normal distribution and a significant linear relationship of MC to all health and fitness measures for boys, girls and combined genders. A stronger relationship was reported for upper limb MC and aerobic capacity when compared with lower limb MC and aerobic capacity in boys (t=-2.21, degrees of freedom=307, P=0.03, 95% CI -0.253 to -0.011). Normally distributed measures of upper and lower limb MC are linearly related to health and fitness measures in adolescents in a UK sample. NCT02517333.

  1. Non-Targeted Effects and the Dose Response for Heavy Ion Tumorigenesis

    NASA Technical Reports Server (NTRS)

    Chappelli, Lori J.; Cucinotta, Francis A.

    2010-01-01

    BACKGROUND: There is no human epidemiology data available to estimate the heavy ion cancer risks experienced by astronauts in space. Studies of tumor induction in mice are a necessary step to estimate risks to astronauts. Previous experimental data can be better utilized to model dose response for heavy ion tumorigenesis and plan future low dose studies. DOSE RESPONSE MODELS: The Harderian Gland data of Alpen et al. [1-3] was re-analyzed [4] using non-linear least-squares regression. The data set measured the induction of Harderian gland tumors in mice by high-energy protons, helium, neon, iron, niobium and lanthanum with LETs ranging from 0.4 to 950 keV/micron. We were able to strengthen the individual ion models by combining data for all ions into a model that relates both radiation dose and LET for the ion to tumor prevalence. We compared models based on Targeted Effects (TE) to one motivated by Non-targeted Effects (NTE) that included a bystander term that increased tumor induction at low doses non-linearly. When comparing fitted models to the experimental data, we considered the adjusted R2, the Akaike Information Criterion (AIC), and the Bayesian Information Criterion (BIC) to test for goodness of fit. In the adjusted R2 test, the model with the highest R2 value provides a better fit to the available data. In the AIC and BIC tests, the model with the smaller summary value provides the better fit. The non-linear NTE models fit the combined data better than the TE models that are linear at low doses. We evaluated the differences in the relative biological effectiveness (RBE) and found the NTE model provides a higher RBE at low dose compared to the TE model. POWER ANALYSIS: The final NTE model estimates were used to simulate example data to consider the design of new experiments to detect NTE at low dose for validation. Power and sample sizes were calculated for a variety of radiation qualities, including some not considered in the Harderian Gland data set, and with different background tumor incidences. We considered different experimental designs with varying numbers of doses and varying low doses dependent on the LET of the radiation. The optimal design to detect NTE for an individual ion had 4 doses equally spaced below a maximal dose where bending due to cell sterilization was < 2%. For example, at 100 keV/micron we would irradiate at 0.03 Gy, 0.065 Gy, 0.13 Gy, and 0.26 Gy and require 850 mice, including a control dose, for a sensitivity to detect NTE with 80% power. Sample sizes could be improved by combining ions, similar to the methods used with the Harderian Gland data.
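
    As a rough illustration of the model-selection step described above (the functional forms and data below are illustrative placeholders, not the authors' TE/NTE models or the Harderian gland data), one can fit two candidate dose-response curves by least squares and compare them with an AIC computed from the residual sum of squares; the smaller AIC indicates the preferred model.

        import numpy as np
        from scipy.optimize import curve_fit

        # Illustrative tumor-prevalence-vs-dose data (fractions).
        dose = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
        prev = np.array([0.03, 0.10, 0.13, 0.16, 0.22, 0.31, 0.45])

        def te_model(D, p0, a):
            # Targeted-effects-style model: linear in dose at low dose.
            return p0 + a * D

        def nte_model(D, p0, a, k, d0):
            # Non-targeted-effects-style model: adds a term that saturates at low dose.
            return p0 + a * D + k * (1.0 - np.exp(-D / d0))

        def aic(y, yhat, n_params):
            n = y.size
            rss = float(np.sum((y - yhat) ** 2))
            return n * np.log(rss / n) + 2 * n_params

        p_te, _ = curve_fit(te_model, dose, prev, p0=[0.03, 0.2])
        p_nte, _ = curve_fit(nte_model, dose, prev, p0=[0.03, 0.2, 0.1, 0.1], maxfev=10000)

        print("AIC (TE): ", aic(prev, te_model(dose, *p_te), 2))
        print("AIC (NTE):", aic(prev, nte_model(dose, *p_nte), 4))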

  2. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    PubMed

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
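
    A minimal sketch of the general idea (not the VisKin code itself): for a first-order scheme the concentrations obey dc/dt = K c, which has the analytical solution c(t) = expm(K t) c0, and the rate constants can then be adjusted to data with a Levenberg-Marquardt least-squares fit. The two-state scheme, names, and synthetic data below are illustrative assumptions.

        import numpy as np
        from scipy.linalg import expm
        from scipy.optimize import least_squares

        t = np.linspace(0.0, 5.0, 60)
        c0 = np.array([1.0, 0.0])          # start entirely in state A

        def rate_matrix(k12, k21):
            # Two-state scheme A <-> B; columns sum to zero (mass conservation).
            return np.array([[-k12,  k21],
                             [ k12, -k21]])

        def concentrations(params, times):
            K = rate_matrix(*params)
            return np.array([expm(K * ti) @ c0 for ti in times])   # analytical solution

        # Synthetic "observed" trace of species B with noise.
        rng = np.random.default_rng(2)
        obs_B = concentrations([1.5, 0.4], t)[:, 1] + rng.normal(0, 0.01, t.size)

        def residuals(params):
            return concentrations(params, t)[:, 1] - obs_B

        fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")  # Levenberg-Marquardt
        print("fitted rate constants:", fit.x)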

  3. Fundamental Analysis of the Linear Multiple Regression Technique for Quantification of Water Quality Parameters from Remote Sensing Data. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H., III

    1977-01-01

    Constituents with linear radiance gradients with concentration may be quantified from signals which contain nonlinear atmospheric and surface reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used which give an indication of bias as well as total squared error to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least-squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.

  4. Analytical optimal pulse shapes obtained with the aid of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés

    2015-09-28

    We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.

  5. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example.

    PubMed

    Helgesson, P; Sjöstrand, H

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
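
    One common and simple way to realize "least squares with a prior" in the spirit described above is to append the Gaussian prior as extra, weighted pseudo-observations to the residual vector before running the Levenberg-Marquardt fit. The single-Gaussian peak, parameter names, and prior values below are hypothetical, not the authors' nuclear-reaction example.

        import numpy as np
        from scipy.optimize import least_squares

        x = np.linspace(-5, 5, 120)
        rng = np.random.default_rng(3)
        y = 4.0 * np.exp(-0.5 * ((x - 0.3) / 1.2) ** 2) + rng.normal(0, 0.1, x.size)
        sigma_y = 0.1 * np.ones_like(y)

        # Gaussian prior on the parameters (amplitude, centre, width).
        theta_prior = np.array([3.5, 0.0, 1.0])
        sigma_prior = np.array([1.0, 0.5, 0.3])

        def model(theta):
            amp, mu, sig = theta
            return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

        def residuals(theta):
            data_res = (model(theta) - y) / sigma_y            # ordinary weighted residuals
            prior_res = (theta - theta_prior) / sigma_prior    # prior as pseudo-observations
            return np.concatenate([data_res, prior_res])

        fit = least_squares(residuals, x0=theta_prior, method="lm")
        print("posterior-mode parameters:", fit.x)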

  6. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example

    NASA Astrophysics Data System (ADS)

    Helgesson, P.; Sjöstrand, H.

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.

  7. Linear Models for Systematics and Nuisances

    NASA Astrophysics Data System (ADS)

    Luger, Rodrigo; Foreman-Mackey, Daniel; Hogg, David W.

    2017-12-01

    The target of many astronomical studies is the recovery of tiny astrophysical signals living in a sea of uninteresting (but usually dominant) noise. In many contexts (i.e., stellar time-series, or high-contrast imaging, or stellar spectroscopy), there are structured components in this noise caused by systematic effects in the astronomical source, the atmosphere, the telescope, or the detector. More often than not, evaluation of the true physical model for these nuisances is computationally intractable and dependent on too many (unknown) parameters to allow rigorous probabilistic inference. Sometimes, housekeeping data---and often the science data themselves---can be used as predictors of the systematic noise. Linear combinations of simple functions of these predictors are often used as computationally tractable models that can capture the nuisances. These models can be used to fit and subtract systematics prior to investigation of the signals of interest, or they can be used in a simultaneous fit of the systematics and the signals. In this Note, we show that if a Gaussian prior is placed on the weights of the linear components, the weights can be marginalized out with an operation in pure linear algebra, which can (often) be made fast. We illustrate this model by demonstrating the applicability of a linear model for the non-linear systematics in K2 time-series data, where the dominant noise source for many stars is spacecraft motion and variability.
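
    A compact numerical sketch of the linear-algebra step the Note describes, under illustrative assumptions (toy polynomial/sinusoid basis, white noise, a synthetic transit-like dip): with data y = A w + signal + noise and a Gaussian prior on the weights, the systematics can be fit with the prior acting as a ridge-like regularizer and then subtracted; the same prior is what allows the weights to be marginalized analytically.

        import numpy as np

        rng = np.random.default_rng(4)
        t = np.linspace(0, 1, 500)

        # Toy "systematics" basis built from housekeeping-style predictors.
        A = np.column_stack([np.ones_like(t), t, t**2, np.sin(7 * t)])
        w_true = np.array([0.5, -2.0, 1.5, 0.8])
        signal = -0.02 * np.exp(-0.5 * ((t - 0.6) / 0.01) ** 2)   # tiny transit-like dip

        sigma = 0.005
        y = A @ w_true + signal + rng.normal(0, sigma, t.size)

        # Gaussian prior on the weights, w ~ N(0, lam * I); white-noise data covariance.
        lam = 10.0
        ATA = A.T @ A / sigma**2
        ATy = A.T @ y / sigma**2
        w_hat = np.linalg.solve(ATA + np.eye(A.shape[1]) / lam, ATy)  # posterior-mean weights

        detrended = y - A @ w_hat        # systematics-subtracted time series
        print("recovered weights:", w_hat)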

  8. Linear combination fitting results for lead speciation in amended soils

    EPA Pesticide Factsheets

    Table listing the location, amendment type, distribution (percentage) of lead phases identified, and fitting error (R-factor). BM=bone meal, FB=fish bone, DAP=diammonium phosphate, MAP=monoammonium phosphate, TSP=triple super phosphate, PL=poultry litter. This dataset is associated with the following publication: Obrycki, J., N. Basta, K. Scheckel , B. Stevens, and K. Minca. Phosphorus Amendment Efficacy for In Situ Remediation of Soil Lead Depends on the Bioaccessible Method. Elizabeth Guertal, David Myroid, and C. Wayne Smith JOURNAL OF ENVIRONMENTAL QUALITY. American Society of Agronomy, MADISON, WI, USA, 45(1): 37-44, (2016).

  9. Improved genetic algorithm for the protein folding problem by use of a Cartesian combination operator.

    PubMed Central

    Rabow, A. A.; Scheraga, H. A.

    1996-01-01

    We have devised a Cartesian combination operator and coding scheme for improving the performance of genetic algorithms applied to the protein folding problem. The genetic coding consists of the C alpha Cartesian coordinates of the protein chain. The recombination of the genes of the parents is accomplished by: (1) a rigid superposition of one parent chain on the other, to make the relation of Cartesian coordinates meaningful, then, (2) the chains of the children are formed through a linear combination of the coordinates of their parents. The children produced with this Cartesian combination operator scheme have similar topology and retain the long-range contacts of their parents. The new scheme is significantly more efficient than the standard genetic algorithm methods for locating low-energy conformations of proteins. The considerable superiority of genetic algorithms over Monte Carlo optimization methods is also demonstrated. We have also devised a new dynamic programming lattice fitting procedure for use with the Cartesian combination operator method. The procedure finds excellent fits of real-space chains to the lattice while satisfying bond-length, bond-angle, and overlap constraints. PMID:8880904
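
    A rough sketch of the crossover idea described above (rigid superposition followed by a coordinate-wise linear combination of the parents), using the Kabsch algorithm for the superposition; the toy C-alpha coordinates and function names are illustrative assumptions, not the authors' code, and the lattice-fitting step is omitted.

        import numpy as np

        def kabsch_align(P, Q):
            """Return a copy of P rigidly superposed onto Q (both N x 3)."""
            Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
            U, _, Vt = np.linalg.svd(Pc.T @ Qc)
            d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid improper rotations
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return Pc @ R.T + Q.mean(axis=0)

        def cartesian_crossover(parent_a, parent_b, w=0.5):
            """Child C-alpha trace as a linear combination of superposed parents."""
            b_on_a = kabsch_align(parent_b, parent_a)
            return w * parent_a + (1.0 - w) * b_on_a

        # Toy 5-residue C-alpha traces (angstroms), purely illustrative.
        rng = np.random.default_rng(5)
        A = np.cumsum(rng.normal(0, 1.5, (5, 3)), axis=0)
        B = A + rng.normal(0, 0.5, A.shape)
        child = cartesian_crossover(A, B, w=0.5)
        print(child)

    Because the parents are superposed before averaging, the child inherits a sensible topology rather than an average of two arbitrarily oriented chains, which is the point the abstract makes.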

  10. The relationship of gross upper and lower limb motor competence to measures of health and fitness in adolescents aged 13–14 years

    PubMed Central

    Liu, Francesca; Mahmoud, Wala; Metz, Renske; Beunder, Kyle; Delextrat, Anne; Morris, Martyn G; Esser, Patrick; Collett, Johnny; Meaney, Andy; Howells, Ken; Dawes, Helen

    2018-01-01

    Introduction Motor competence (MC) is an important factor in the development of health and fitness in adolescence. Aims This cross-sectional study aims to explore the distribution of MC across school students aged 13–14 years old and the extent of the relationship of MC to measures of health and fitness across genders. Methods A total of 718 participants were tested from three different schools in the UK, 311 girls and 407 boys (aged 13–14 years), pairwise deletion for correlation variables reduced this to 555 (245 girls, 310 boys). Assessments consisted of body mass index, aerobic capacity, anaerobic power, and upper limb and lower limb MC. The distribution of MC and the strength of the relationships between MC and health/fitness measures were explored. Results Girls performed lower for MC and health/fitness measures compared with boys. Both measures of MC showed a normal distribution and a significant linear relationship of MC to all health and fitness measures for boys, girls and combined genders. A stronger relationship was reported for upper limb MC and aerobic capacity when compared with lower limb MC and aerobic capacity in boys (t=−2.21, degrees of freedom=307, P=0.03, 95% CI −0.253 to –0.011). Conclusion Normally distributed measures of upper and lower limb MC are linearly related to health and fitness measures in adolescents in a UK sample. Trial registration number NCT02517333. PMID:29629179

  11. Binocular combination of stimulus orientation.

    PubMed

    Yehezkel, O; Ding, J; Sterkin, A; Polat, U; Levi, D M

    2016-11-01

    When two sine waves that differ slightly in orientation are presented to the two eyes separately, a single cyclopean sine wave is perceived. However, it is unclear how the brain calculates its orientation. Here, we used a signal detection rating method to estimate the perceived orientation when the two eyes were presented with Gabor patches that differed in both orientation and contrast. We found a nearly linear combination of orientation when both targets had the same contrast. However, the binocular percept shifted away from the linear prediction towards the orientation with the higher contrast, depending on both the base contrast and the contrast ratio. We found that stimuli that differ slightly in orientation are combined into a single percept, similarly for monocular and binocular presentation, with a bias that depends on the interocular contrast ratio. Our results are well fitted by gain-control models, and are consistent with a previous study that favoured the DSKL model that successfully predicts binocular phase and contrast combination and binocular contrast discrimination. In this model, the departures from linearity may be explained on the basis of mutual suppression and mutual enhancement, both of which are stronger under dichoptic than monocular conditions.

  12. Copula Entropy coupled with Wavelet Neural Network Model for Hydrological Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Yin; Yue, JiGuang; Liu, ShuGuang; Wang, Li

    2018-02-01

    Artificial neural networks (ANNs) have been widely used in hydrological forecasting. In this paper an attempt has been made to find an alternative method for hydrological prediction by combining Copula Entropy (CE) with a Wavelet Neural Network (WNN). CE theory permits the calculation of mutual information (MI) to select input variables, which avoids the limitations of traditional linear correlation (LCC) analysis. Wavelet analysis can provide the exact locality of any changes in the dynamical patterns of the sequence and, coupled with the strong non-linear fitting ability of the ANN, the WNN model was able to provide a good fit to the hydrological data. Finally, the hybrid model (CE+WNN) was applied to daily water levels of the Taihu Lake Basin and compared with CE ANN, LCC WNN and LCC ANN. Results showed that the hybrid model produced better results in estimating the hydrograph properties than the latter models.

  13. Characterization of Sulfur Compounds in Coffee Beans by Sulfur K-XANES Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lichtenberg, H.; Hormes, J.; Institute of Physics, University of Bonn, Nussallee 12, 53115 Bonn

    2007-02-02

    In this 'feasibility study' the influence of roasting on the sulfur speciation in Mexican coffee beans was investigated by sulfur K-XANES Spectroscopy. Spectra of green and slightly roasted beans could be fitted to a linear combination of 'standard' reference spectra for biological samples, whereas longer roasting obviously involves formation of additional sulfur compounds in considerable amounts.
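
    Linear combination fitting of a measured spectrum against reference spectra, as used here and in several of the other records in this listing, is at its core a constrained linear least-squares problem. Below is a minimal sketch under stated assumptions (synthetic spectra on a common energy grid, not the beamline data), using non-negative least squares and renormalizing the weights to species fractions.

        import numpy as np
        from scipy.optimize import nnls

        # Synthetic normalized spectra on a common energy grid (references as columns).
        E = np.linspace(2460, 2490, 300)                      # eV, sulfur K-edge region
        def peak(center, width):
            return 1.0 / (1.0 + ((E - center) / width) ** 2)  # Lorentzian-like feature

        refs = np.column_stack([peak(2472, 1.0), peak(2476, 1.5), peak(2481, 2.0)])
        true_fracs = np.array([0.6, 0.3, 0.1])
        rng = np.random.default_rng(6)
        sample = refs @ true_fracs + rng.normal(0, 0.005, E.size)

        # Non-negative linear combination fit, then renormalize to species fractions.
        weights, resid_norm = nnls(refs, sample)
        fractions = weights / weights.sum()
        r_factor = np.sum((sample - refs @ weights) ** 2) / np.sum(sample ** 2)
        print("fractions:", fractions, "R-factor:", r_factor)

    In practice the references and the sample are normalized identically before fitting, and the R-factor (or chi^2) reported in the tables of such studies is the residual measure printed here.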

  14. GWAS with longitudinal phenotypes: performance of approximate procedures

    PubMed Central

    Sikorska, Karolina; Montazeri, Nahid Mostafavi; Uitterlinden, André; Rivadeneira, Fernando; Eilers, Paul HC; Lesaffre, Emmanuel

    2015-01-01

    Analysis of genome-wide association studies with longitudinal data using standard procedures, such as linear mixed model (LMM) fitting, leads to discouragingly long computation times. There is a need to speed up the computations significantly. In our previous work (Sikorska et al: Fast linear mixed model computations for genome-wide association studies with longitudinal data. Stat Med 2012; 32.1: 165–180), we proposed the conditional two-step (CTS) approach as a fast method providing an approximation to the P-value for the longitudinal single-nucleotide polymorphism (SNP) effect. In the first step a reduced conditional LMM is fit, omitting all the SNP terms. In the second step, the estimated random slopes are regressed on SNPs. The CTS has been applied to the bone mineral density data from the Rotterdam Study and proved to work very well even in unbalanced situations. In another article (Sikorska et al: GWAS on your notebook: fast semi-parallel linear and logistic regression for genome-wide association studies. BMC Bioinformatics 2013; 14: 166), we suggested semi-parallel computations, greatly speeding up fitting many linear regressions. Combining CTS with fast linear regression reduces the computation time from several weeks to a few minutes on a single computer. Here, we explore further the properties of the CTS both analytically and by simulations. We investigate the performance of our proposal in comparison with a related but different approach, the two-step procedure. It is analytically shown that for the balanced case, under mild assumptions, the P-value provided by the CTS is the same as from the LMM. For unbalanced data and in realistic situations, simulations show that the CTS method does not inflate the type I error rate and implies only a minimal loss of power. PMID:25712081
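
    A condensed sketch of the conditional two-step idea, under illustrative assumptions (simulated phenotypes, a single SNP, and statsmodels' mixed-model API; variable names are hypothetical): fit one reduced linear mixed model with random intercepts and slopes and no SNP terms, then regress the estimated per-subject random slopes on each SNP, a step that can be run semi-parallel across many SNPs.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n_subj, n_visits = 200, 4
        subj = np.repeat(np.arange(n_subj), n_visits)
        time = np.tile(np.arange(n_visits, dtype=float), n_subj)
        snp = rng.integers(0, 3, n_subj)                        # minor-allele counts 0/1/2
        slope = -0.5 + 0.2 * snp + rng.normal(0, 0.3, n_subj)   # SNP affects the slope
        y = 10 + slope[subj] * time + rng.normal(0, 0.5, subj.size)
        df = pd.DataFrame({"y": y, "time": time, "subj": subj})

        # Step 1: reduced LMM with random intercept and slope, omitting all SNP terms.
        lmm = smf.mixedlm("y ~ time", df, groups=df["subj"], re_formula="~time").fit()
        rand_slopes = np.array([lmm.random_effects[g]["time"] for g in range(n_subj)])

        # Step 2: regress the estimated random slopes on the SNP.
        ols = sm.OLS(rand_slopes, sm.add_constant(snp.astype(float))).fit()
        print(ols.params, ols.pvalues)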

  15. Determining the turnover time of groundwater systems with the aid of environmental tracers. 1. Models and their applicability

    NASA Astrophysics Data System (ADS)

    Małoszewski, P.; Zuber, A.

    1982-06-01

    Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which more adequately represents the real systems than the conventional solution generally applied so far. The applicability of models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model give better fitting than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fitting as the multiparameter finite state mixing-cell models. It has been shown that in the case of a constant tracer input a prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in the cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when 14C method is used for mixed-water systems a serious mistake may arise by neglecting the different bicarbonate contents in particular water components.

  16. A quasi-chemical model for the growth and death of microorganisms in foods by non-thermal and high-pressure processing.

    PubMed

    Doona, Christopher J; Feeherry, Florence E; Ross, Edward W

    2005-04-15

    Predictive microbial models generally rely on the growth of bacteria in laboratory broth to approximate the microbial growth kinetics expected to take place in actual foods under identical environmental conditions. Sigmoidal functions such as the Gompertz or logistic equation accurately model the typical microbial growth curve from the lag to the stationary phase and provide the mathematical basis for estimating parameters such as the maximum growth rate (MGR). Stationary phase data can begin to show a decline and make it difficult to discern which data to include in the analysis of the growth curve, a factor that influences the calculated values of the growth parameters. In contradistinction, the quasi-chemical kinetics model provides additional capabilities in microbial modelling and fits growth-death kinetics (all four phases of the microbial lifecycle continuously) for a general set of microorganisms in a variety of actual food substrates. The quasi-chemical model is a set of ordinary differential equations (ODEs) that derives from a hypothetical four-step chemical mechanism involving an antagonistic metabolite (quorum sensing) and successfully fits the kinetics of pathogens (Staphylococcus aureus, Escherichia coli and Listeria monocytogenes) in various foods (bread, turkey meat, ham and cheese) as functions of different hurdles (a(w), pH, temperature and anti-microbial lactate). The calculated value of the MGR depends on whether growth-death data or only growth data are used in the fitting procedure. The quasi-chemical kinetics model is also exploited for use with the novel food processing technology of high-pressure processing. The high-pressure inactivation kinetics of E. coli are explored in a model food system over the pressure (P) range of 207-345 MPa (30,000-50,000 psi) and the temperature (T) range of 30-50 degrees C. At relatively low combinations of P and T, the inactivation curves are non-linear and exhibit a shoulder prior to a more rapid rate of microbial destruction. In the higher P, T regime, the inactivation plots tend to be linear. In all cases, the quasi-chemical model successfully fit the linear and curvi-linear inactivation plots for E. coli in model food systems. The experimental data and the quasi-chemical mathematical model described herein are candidates for inclusion in ComBase, the developing database that combines data and models from the USDA Pathogen Modeling Program and the UK Food MicroModel.

  17. Trait-fitness relationships determine how trade-off shapes affect species coexistence.

    PubMed

    Ehrlich, Elias; Becks, Lutz; Gaedke, Ursula

    2017-12-01

    Trade-offs between functional traits are ubiquitous in nature and can promote species coexistence depending on their shape. Classic theory predicts that convex trade-offs facilitate coexistence of specialized species with extreme trait values (extreme species) while concave trade-offs promote species with intermediate trait values (intermediate species). We show here that this prediction becomes insufficient when the traits translate non-linearly into fitness which frequently occurs in nature, e.g., an increasing length of spines reduces grazing losses only up to a certain threshold resulting in a saturating or sigmoid trait-fitness function. We present a novel, general approach to evaluate the effect of different trade-off shapes on species coexistence. We compare the trade-off curve to the invasion boundary of an intermediate species invading the two extreme species. At this boundary, the invasion fitness is zero. Thus, it separates trait combinations where invasion is or is not possible. The invasion boundary is calculated based on measurable trait-fitness relationships. If at least one of these relationships is not linear, the invasion boundary becomes non-linear, implying that convex and concave trade-offs not necessarily lead to different coexistence patterns. Therefore, we suggest a new ecological classification of trade-offs into extreme-favoring and intermediate-favoring which differs from a purely mathematical description of their shape. We apply our approach to a well-established model of an empirical predator-prey system with competing prey types facing a trade-off between edibility and half-saturation constant for nutrient uptake. We show that the survival of the intermediate prey depends on the convexity of the trade-off. Overall, our approach provides a general tool to make a priori predictions on the outcome of competition among species facing a common trade-off in dependence of the shape of the trade-off and the shape of the trait-fitness relationships. © 2017 by the Ecological Society of America.

  18. Constraints on deep moonquake focal mechanisms through analyses of tidal stress

    USGS Publications Warehouse

    Weber, R.C.; Bills, B.G.; Johnson, C.L.

    2009-01-01

    A relationship between deep moonquake occurrence and tidal forcing is suggested by the monthly periodicities observed in the occurrence times of events recorded by the Apollo Passive Seismic Experiment. In addition, the typically large S wave to P wave arrival amplitude ratios observed on deep moonquake seismograms are indicative of shear failure. Tidal stress, induced in the lunar interior by the gravitational influence of the Earth, may influence moonquake activity. We investigate the relationship between tidal stress and deep moonquake occurrence by searching for a linear combination of the normal and shear components of tidal stress that best approximates a constant value when evaluated at the times of moonquakes from 39 different moonquake clusters. We perform a grid search at each cluster location, computing the stresses resolved onto a suite of possible failure planes, to obtain the best fitting fault orientation at each location. We find that while linear combinations of stresses (and in some cases stress rates) can fit moonquake occurrence at many clusters quite well, for other clusters the fit is not strongly dependent on plane orientation. This suggests that deep moonquakes may occur in response to factors other than, or in addition to, tidal stress. Several of our inferences support the hypothesis that deep moonquakes might be related to transformational faulting, in which shear failure is induced by mineral phase changes at depth. The occurrence of this process would have important implications for the lunar interior. Copyright 2009 by the American Geophysical Union.

  19. More memory under evolutionary learning may lead to chaos

    NASA Astrophysics Data System (ADS)

    Diks, Cees; Hommes, Cars; Zeppini, Paolo

    2013-02-01

    We show that an increase of memory of past strategy performance in a simple agent-based innovation model, with agents switching between costly innovation and cheap imitation, can be quantitatively stabilising while at the same time qualitatively destabilising. As memory in the fitness measure increases, the amplitude of price fluctuations decreases, but at the same time a bifurcation route to chaos may arise. The core mechanism leading to the chaotic behaviour in this model with strategy switching is that the map obtained for the system with memory is a convex combination of an increasing linear function and a decreasing non-linear function.

  20. Fitting a three-parameter lognormal distribution with applications to hydrogeochemical data from the National Uranium Resource Evaluation Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1979-10-01

    The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered include the Shapiro-Wilk and Shapiro-Francia tests which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets analyzed, including geochemical data from the National Uranium Resource Evaluation Program.

  1. Effects of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Dach, R.; Heflin, M. B.; Gross, R. S.; König, R.; Lemoine, F. G.; MacMillan, D. S.; Parker, J. W.; van Dam, T. M.; Wu, X.

    2013-12-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS global networks used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, the effect of non-tidal atmospheric loading (NTAL) corrections on the TRF is assessed adopting a Remove/Restore approach: (i) Focusing on the a-posteriori approach, the NTAL model derived from the National Center for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations. (ii) Adopting a Kalman-filter based approach, a linear TRF is estimated combining the 4 SG solutions free from NTAL displacements. (iii) Linear fits to the NTAL displacements removed at step (i) are restored to the linear reference frame estimated at (ii). The velocity fields of the (standard) linear reference frame in which the NTAL model has not been removed and the one in which the model has been removed/restored are compared and discussed.

  2. Joint estimation of preferential attachment and node fitness in growing complex networks

    NASA Astrophysics Data System (ADS)

    Pham, Thong; Sheridan, Paul; Shimodaira, Hidetoshi

    2016-09-01

    Complex network growth across diverse fields of science is hypothesized to be driven in the main by a combination of preferential attachment and node fitness processes. For measuring the respective influences of these processes, previous approaches make strong and untested assumptions on the functional forms of either the preferential attachment function or fitness function or both. We introduce a Bayesian statistical method called PAFit that estimates preferential attachment and node fitness without imposing such functional constraints; it works by maximizing a log-likelihood function with suitably added regularization terms. We use PAFit to investigate the interplay between preferential attachment and node fitness processes in a Facebook wall-post network. While we uncover evidence for both preferential attachment and node fitness, thus validating the hypothesis that these processes together drive complex network evolution, we also find that node fitness plays the bigger role in determining the degree of a node. This is the first validation of its kind on real-world network data. Surprisingly, however, the rate of preferential attachment is found to deviate from the conventional log-linear form when node fitness is taken into account. The proposed method is implemented in the R package PAFit.

  3. Joint estimation of preferential attachment and node fitness in growing complex networks

    PubMed Central

    Pham, Thong; Sheridan, Paul; Shimodaira, Hidetoshi

    2016-01-01

    Complex network growth across diverse fields of science is hypothesized to be driven in the main by a combination of preferential attachment and node fitness processes. For measuring the respective influences of these processes, previous approaches make strong and untested assumptions on the functional forms of either the preferential attachment function or fitness function or both. We introduce a Bayesian statistical method called PAFit that estimates preferential attachment and node fitness without imposing such functional constraints; it works by maximizing a log-likelihood function with suitably added regularization terms. We use PAFit to investigate the interplay between preferential attachment and node fitness processes in a Facebook wall-post network. While we uncover evidence for both preferential attachment and node fitness, thus validating the hypothesis that these processes together drive complex network evolution, we also find that node fitness plays the bigger role in determining the degree of a node. This is the first validation of its kind on real-world network data. Surprisingly, however, the rate of preferential attachment is found to deviate from the conventional log-linear form when node fitness is taken into account. The proposed method is implemented in the R package PAFit. PMID:27601314

  4. Molecular Clock of Neutral Mutations in a Fitness-Increasing Evolutionary Process

    PubMed Central

    Iijima, Leo; Suzuki, Shingo; Hashimoto, Tomomi; Oyake, Ayana; Kobayashi, Hisaka; Someya, Yuki; Narisawa, Dai; Yomo, Tetsuya

    2015-01-01

    The molecular clock of neutral mutations, which represents linear mutation fixation over generations, is theoretically explained by genetic drift in fitness-steady evolution or hitchhiking in adaptive evolution. The present study is the first experimental demonstration for the molecular clock of neutral mutations in a fitness-increasing evolutionary process. The dynamics of genome mutation fixation in the thermal adaptive evolution of Escherichia coli were evaluated in a prolonged evolution experiment in duplicated lineages. The cells from the continuously fitness-increasing evolutionary process were subjected to genome sequencing and analyzed at both the population and single-colony levels. Although the dynamics of genome mutation fixation were complicated by the combination of the stochastic appearance of adaptive mutations and clonal interference, the mutation fixation in the population was simply linear over generations. Each genome in the population accumulated 1.6 synonymous and 3.1 non-synonymous neutral mutations, on average, by the spontaneous mutation accumulation rate, while only a single genome in the population occasionally acquired an adaptive mutation. The neutral mutations that preexisted on the single genome hitchhiked on the domination of the adaptive mutation. The successive fixation processes of the 128 mutations demonstrated that hitchhiking and not genetic drift were responsible for the coincidence of the spontaneous mutation accumulation rate in the genome with the fixation rate of neutral mutations in the population. The molecular clock of neutral mutations to the fitness-increasing evolution suggests that the numerous neutral mutations observed in molecular phylogenetic trees may not always have been fixed in fitness-steady evolution but in adaptive evolution. PMID:26177190

  5. Molecular Clock of Neutral Mutations in a Fitness-Increasing Evolutionary Process.

    PubMed

    Kishimoto, Toshihiko; Ying, Bei-Wen; Tsuru, Saburo; Iijima, Leo; Suzuki, Shingo; Hashimoto, Tomomi; Oyake, Ayana; Kobayashi, Hisaka; Someya, Yuki; Narisawa, Dai; Yomo, Tetsuya

    2015-07-01

    The molecular clock of neutral mutations, which represents linear mutation fixation over generations, is theoretically explained by genetic drift in fitness-steady evolution or hitchhiking in adaptive evolution. The present study is the first experimental demonstration for the molecular clock of neutral mutations in a fitness-increasing evolutionary process. The dynamics of genome mutation fixation in the thermal adaptive evolution of Escherichia coli were evaluated in a prolonged evolution experiment in duplicated lineages. The cells from the continuously fitness-increasing evolutionary process were subjected to genome sequencing and analyzed at both the population and single-colony levels. Although the dynamics of genome mutation fixation were complicated by the combination of the stochastic appearance of adaptive mutations and clonal interference, the mutation fixation in the population was simply linear over generations. Each genome in the population accumulated 1.6 synonymous and 3.1 non-synonymous neutral mutations, on average, by the spontaneous mutation accumulation rate, while only a single genome in the population occasionally acquired an adaptive mutation. The neutral mutations that preexisted on the single genome hitchhiked on the domination of the adaptive mutation. The successive fixation processes of the 128 mutations demonstrated that hitchhiking and not genetic drift were responsible for the coincidence of the spontaneous mutation accumulation rate in the genome with the fixation rate of neutral mutations in the population. The molecular clock of neutral mutations to the fitness-increasing evolution suggests that the numerous neutral mutations observed in molecular phylogenetic trees may not always have been fixed in fitness-steady evolution but in adaptive evolution.

  6. Testing the Dose–Response Specification in Epidemiology: Public Health and Policy Consequences for Lead

    PubMed Central

    Rothenberg, Stephen J.; Rothenberg, Jesse C.

    2005-01-01

    Statistical evaluation of the dose–response function in lead epidemiology is rarely attempted. Economic evaluation of health benefits of lead reduction usually assumes a linear dose–response function, regardless of the outcome measure used. We reanalyzed a previously published study, an international pooled data set combining data from seven prospective lead studies examining contemporaneous blood lead effect on IQ (intelligence quotient) of 7-year-old children (n = 1,333). We constructed alternative linear multiple regression models with linear blood lead terms (linear–linear dose response) and natural-log–transformed blood lead terms (log-linear dose response). We tested the two lead specifications for nonlinearity in the models, compared the two lead specifications for significantly better fit to the data, and examined the effects of possible residual confounding on the functional form of the dose–response relationship. We found that a log-linear lead–IQ relationship was a significantly better fit than was a linear–linear relationship for IQ (p = 0.009), with little evidence of residual confounding of included model variables. We substituted the log-linear lead–IQ effect in a previously published health benefits model and found that the economic savings due to U.S. population lead decrease between 1976 and 1999 (from 17.1 μg/dL to 2.0 μg/dL) was 2.2 times ($319 billion) that calculated using a linear–linear dose–response function ($149 billion). The Centers for Disease Control and Prevention action limit of 10 μg/dL for children fails to protect against most damage and economic cost attributable to lead exposure. PMID:16140626

  7. Fitting program for linear regressions according to Mahon (1996)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trappitsch, Reto G.

    2018-01-09

    This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface.

  8. Warping of a computerized 3-D atlas to match brain image volumes for quantitative neuroanatomical and functional analysis

    NASA Astrophysics Data System (ADS)

    Evans, Alan C.; Dai, Weiqian; Collins, D. Louis; Neelin, Peter; Marrett, Sean

    1991-06-01

    We describe the implementation, experience and preliminary results obtained with a 3-D computerized brain atlas for topographical and functional analysis of brain sub-regions. A volume-of-interest (VOI) atlas was produced by manual contouring on 64 adjacent 2 mm-thick MRI slices to yield 60 brain structures in each hemisphere which could be adjusted, originally by global affine transformation or local interactive adjustments, to match individual MRI datasets. We have now added a non-linear deformation (warp) capability (Bookstein, 1989) into the procedure for fitting the atlas to the brain data. Specific target points are identified in both atlas and MRI spaces which define a continuous 3-D warp transformation that maps the atlas on to the individual brain image. The procedure was used to fit MRI brain image volumes from 16 young normal volunteers. Regional volume and positional variability were determined, the latter in such a way as to assess the extent to which previous linear models of brain anatomical variability fail to account for the true variation among normal individuals. Using a linear model for atlas deformation yielded 3-D fits of the MRI data which, when pooled across subjects and brain regions, left a residual mis-match of 6 - 7 mm as compared to the non-linear model. The results indicate a substantial component of morphometric variability is not accounted for by linear scaling. This has profound implications for applications which employ stereotactic coordinate systems which map individual brains into a common reference frame: quantitative neuroradiology, stereotactic neurosurgery and cognitive mapping of normal brain function with PET. In the latter case, the combination of a non-linear deformation algorithm would allow for accurate measurement of individual anatomic variations and the inclusion of such variations in inter-subject averaging methodologies used for cognitive mapping with PET.
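
    The Bookstein (1989) landmark warp referenced above is a thin-plate spline. A minimal sketch of the idea under illustrative assumptions (randomly generated landmark coordinates rather than atlas/MRI data), using scipy's RBFInterpolator to build a continuous 3-D mapping from atlas space to an individual's MRI space and then apply it to arbitrary atlas points:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(11)

        # Homologous target points identified in atlas space and in the individual MRI (mm).
        atlas_landmarks = rng.uniform(-60, 60, (20, 3))
        mri_landmarks = (atlas_landmarks * 1.05 + np.array([2.0, -1.0, 3.0])
                         + rng.normal(0, 0.5, (20, 3)))       # affine part plus local distortion

        # Thin-plate-spline warp: interpolates the atlas -> MRI mapping between landmarks.
        warp = RBFInterpolator(atlas_landmarks, mri_landmarks, kernel="thin_plate_spline")

        # Apply the continuous 3-D warp to any atlas structure contour points.
        atlas_contour = rng.uniform(-60, 60, (100, 3))
        warped_contour = warp(atlas_contour)
        print(warped_contour.shape)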

  9. Accurate and noninvasive embryos screening during in vitro fertilization (IVF) assisted by Raman analysis of embryos culture medium

    NASA Astrophysics Data System (ADS)

    Shen, A. G.; Peng, J.; Zhao, Q. H.; Su, L.; Wang, X. H.; Hu, J. M.; Yang, J.

    2012-04-01

    In combination with morphological evaluation tests, we employ Raman spectroscopy to select higher-potential reproductive embryos during in vitro fertilization (IVF) based on the chemical composition of the embryo culture medium. In this study, 57 Raman spectra are acquired from both higher- and lower-quality embryo culture medium (ECM) from 10 patients, as preliminarily confirmed by clinical assay. Data are fitted using a linear combination model solved by the least-squares method, in which 12 basis spectra represent the chemical features of the ECM. The final fitting coefficients provide insight into the chemical compositions of the culture medium samples and are subsequently used as a criterion to evaluate the quality of embryos. The relative fitting-coefficient ratios of sodium pyruvate/albumin and phenylalanine/albumin seem to play key roles in embryo screening, attaining 85.7% accuracy in comparison with clinical pregnancy. These results demonstrate that Raman spectroscopy is therefore a promising candidate for accurate and noninvasive screening of higher-quality embryos, potentially reducing the need for time-consuming clinical trials during IVF.

  10. A combined averaging and frequency mixing approach for force identification in weakly nonlinear high-Q oscillators: Atomic force microscope

    NASA Astrophysics Data System (ADS)

    Sah, Si Mohamed; Forchheimer, Daniel; Borgani, Riccardo; Haviland, David

    2018-02-01

    We present a polynomial force reconstruction of the tip-sample interaction force in Atomic Force Microscopy. The method uses analytical expressions for the slow-time amplitude and phase evolution, obtained from time-averaging over the rapidly oscillating part of the cantilever dynamics. The slow-time behavior can be easily obtained in either the numerical simulations or the experiment in which a high-Q resonator is perturbed by a weak nonlinearity and a periodic driving force. A direct fit of the theoretical expressions to the simulated and experimental data gives the best-fit parameters for the force model. The method combines and complements previous works (Platz et al., 2013; Forchheimer et al., 2012 [2]) and it allows for computationally more efficient parameter mapping with AFM. Results for the simulated asymmetric piecewise linear force and VdW-DMT force models are compared with the reconstructed polynomial force and show a good agreement. It is also shown that the analytical amplitude and phase modulation equations fit well with the experimental data.

  11. Binocular combination of phase and contrast explained by a gain-control and gain-enhancement model

    PubMed Central

    Ding, Jian; Klein, Stanley A.; Levi, Dennis M.

    2013-01-01

    We investigated suprathreshold binocular combination, measuring both the perceived phase and perceived contrast of a cyclopean sine wave. We used a paradigm adapted from Ding and Sperling (2006, 2007) to measure the perceived phase by indicating the apparent location (phase) of the dark trough in the horizontal cyclopean sine wave relative to a black horizontal reference line, and we used the same stimuli to measure perceived contrast by matching the binocular combined contrast to a standard contrast presented to one eye. We found that under normal viewing conditions (high contrast and long stimulus duration), perceived contrast is constant, independent of the interocular contrast ratio and the interocular phase difference, while the perceived phase shifts smoothly from one eye to the other eye depending on the contrast ratios. However, at low contrasts and short stimulus durations, binocular combination is more linear and contrast summation is phase-dependent. To account for phase-dependent contrast summation, we incorporated a fusion remapping mechanism into our model, using disparity energy to shift the monocular phases towards the cyclopean phase in order to align the two eyes' images through motor/sensory fusion. The Ding-Sperling model with motor/sensory fusion mechanism gives a reasonable account of the phase dependence of binocular contrast combination and can account for either the perceived phase or the perceived contrast of a cyclopean sine wave separately; however it requires different model parameters for the two. However, when fit to both phase and contrast data simultaneously, the Ding-Sperling model fails. Incorporating interocular gain enhancement into the model results in a significant improvement in fitting both phase and contrast data simultaneously, successfully accounting for both linear summation at low contrast energy and strong nonlinearity at high contrast energy. PMID:23397038

  12. Non-linear Multidimensional Optimization for use in Wire Scanner Fitting

    NASA Astrophysics Data System (ADS)

    Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration

    2014-03-01

    To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement of the fit to the data. NCG is the fastest method, so it is the first to attempt data fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for an optimal signal fit and can be easily applied to similar problems.
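
    A stripped-down sketch of the escalation strategy described above (not the Jefferson Lab code): attempt a fast local least-squares Gaussian fit first, and fall back to a globally convergent optimizer only if the local fit fails. scipy's curve_fit and differential_evolution stand in for the NCG and nature-inspired methods; the profile data are synthetic.

        import numpy as np
        from scipy.optimize import curve_fit, differential_evolution

        def gaussian(x, amp, mu, sigma, offset):
            return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

        def chi_square(params, x, y):
            return float(np.sum((y - gaussian(x, *params)) ** 2))

        def fit_wire_scan(x, y):
            p0 = [y.max() - y.min(), x[np.argmax(y)], (x[-1] - x[0]) / 10, y.min()]
            try:
                popt, _ = curve_fit(gaussian, x, y, p0=p0)      # fast local fit first
                return popt, "local"
            except RuntimeError:
                # Escalate to a globally convergent search only if the local fit fails.
                bounds = [(0, 2 * (y.max() - y.min())), (x[0], x[-1]),
                          (1e-6, x[-1] - x[0]), (y.min() - 1, y.max() + 1)]
                res = differential_evolution(chi_square, bounds, args=(x, y), seed=0)
                return res.x, "global"

        # Illustrative wire-scanner-like profile.
        rng = np.random.default_rng(8)
        x = np.linspace(-5, 5, 200)
        y = gaussian(x, 3.0, 0.4, 0.8, 0.1) + rng.normal(0, 0.05, x.size)
        params, route = fit_wire_scan(x, y)
        print(route, params)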

  13. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.

  14. Experimental demonstration of deep frequency modulation interferometry.

    PubMed

    Isleif, Katharina-Sophie; Gerberding, Oliver; Schwarze, Thomas S; Mehmet, Moritz; Heinzel, Gerhard; Cervantes, Felipe Guzmán

    2016-01-25

    Experiments for space and ground-based gravitational wave detectors often require a large dynamic range interferometric position readout of test masses with 1 pm/√Hz precision over long time scales. Heterodyne interferometer schemes that achieve such precisions are available, but they require complex optical set-ups, limiting their scalability for multiple channels. This article presents the first experimental results on deep frequency modulation interferometry, a new technique that combines sinusoidal laser frequency modulation in unequal arm length interferometers with a non-linear fit algorithm. We have tested the technique in a Michelson and a Mach-Zehnder interferometer topology, respectively, demonstrated continuous phase tracking of a moving mirror and achieved a performance equivalent to a displacement sensitivity of 250 pm/√Hz at 1 mHz between the phase measurements of two photodetectors monitoring the same optical signal. By performing time series fitting of the extracted interference signals, we measured that the linearity of the laser frequency modulation is on the order of 2% for the laser source used.

  15. Fitting the High-Resolution Spectroscopic Data for NCNCS

    NASA Astrophysics Data System (ADS)

    Kisiel, Zbigniew; Winnewisser, Brenda P.; Winnewisser, Manfred; De Lucia, Frank C.; Tokaryk, Dennis; Ross, Stephen Cary; Billinghurst, Brant E.

    2014-06-01

    NCNCS is a quasi-linear molecule that displays plentiful spectroscopic signatures of the transition from the asymmetric top to the linear rotor regime. The transition takes place on successive excitation of the ν_7 bending mode at ca. 80 cm⁻¹. The unusual spectroscopic manifestations on crossing the barrier to linearity are explained by quantum monodromy and described quantitatively by the generalised semi-rigid bender Hamiltonian. Nevertheless, analysis to experimental accuracy of the extensive mm-wave spectrum of NCNCS recorded with the FASSST technique has so far only been achieved with the use of separate J(J+1) expansions for each (v_7, K_a) transition sequence. In addition, several selective perturbations identified between transition sequences in different vibrational levels are still unfitted. Presently we seek effective approximations to the vibration-rotation Hamiltonian that would allow combining multiple sequences into a fit, would allow a perturbation analysis, and could use mm-wave data together with high-resolution infrared measurements of NCNCS made at the Canadian Light Source. The understanding of effective fits to low-K_a subsets of rotational transitions in the FASSST spectrum has already allowed confident assignment of the ³⁴S and both ¹³C isotopic species of NCNCS in natural abundance, as will be described. B. P. Winnewisser, et al., Phys. Rev. Lett. 95, 243002 (2005). M. Winnewisser, et al., J. Mol. Struct. 798, 1 (2006). B. P. Winnewisser, et al., Phys. Chem. Chem. Phys. 12, 8158 (2010).

  16. Pseudo second order kinetics and pseudo isotherms for malachite green onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2006-08-25

    Pseudo second order kinetic expressions of Ho, Sobkowsk and Czerwinski, Blanchard et al. and Ritchie were fitted to the experimental kinetic data of malachite green onto activated carbon by non-linear and linear methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and Ritchie pseudo second order models were the same. Non-linear regression analysis showed that both Blanchard et al. and Ho have similar ideas on the pseudo second order model but with different assumptions. The best fit of experimental data by Ho's pseudo second order expression by linear and non-linear regression methods showed that the Ho pseudo second order model was a better kinetic expression when compared to other pseudo second order kinetic expressions. The amount of dye adsorbed at equilibrium, q(e), was predicted from the Ho pseudo second order expression, and the values were fitted to the Langmuir, Freundlich and Redlich-Peterson expressions by both linear and non-linear methods to obtain the pseudo isotherms. The best-fitting pseudo isotherms were found to be the Langmuir and Redlich-Peterson isotherms. Redlich-Peterson is a special case of Langmuir when the constant g equals unity.
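
    A minimal sketch of the linear versus non-linear comparison for Ho's pseudo-second-order model, q(t) = k·qe²·t/(1 + k·qe·t), whose common linearisation is t/q = 1/(k·qe²) + t/qe; the kinetic data below are placeholders rather than the malachite green measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.array([5.0, 10, 20, 40, 60, 90, 120])    # contact time, min (illustrative)
      q = np.array([12.0, 19, 27, 33, 36, 38, 39])    # amount adsorbed, mg/g (illustrative)

      def ho_model(t, qe, k):
          # Ho pseudo-second-order kinetics
          return (k * qe ** 2 * t) / (1.0 + k * qe * t)

      # linear method: regress t/q on t; slope = 1/qe, intercept = 1/(k*qe^2)
      slope, intercept = np.polyfit(t, t / q, 1)
      qe_lin = 1.0 / slope
      k_lin = 1.0 / (intercept * qe_lin ** 2)

      # non-linear method: fit the rate equation directly
      (qe_nl, k_nl), _ = curve_fit(ho_model, t, q, p0=[qe_lin, k_lin])

      print(f"linear:     qe = {qe_lin:.2f}, k = {k_lin:.4f}")
      print(f"non-linear: qe = {qe_nl:.2f}, k = {k_nl:.4f}")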

  17. A method for atomic-level noncontact thermometry with electron energy distribution

    NASA Astrophysics Data System (ADS)

    Kinoshita, Ikuo; Tsukada, Chiharu; Ouchi, Kohei; Kobayashi, Eiichi; Ishii, Juntaro

    2017-04-01

    We devised a new method of determining the temperatures of materials with their electron-energy distributions. The Fermi-Dirac distribution convoluted with a linear combination of Gaussian and Lorentzian distributions was fitted to the photoelectron spectrum measured for the Au(110) single-crystal surface at liquid N2-cooled temperature. The fitting successfully determined the surface-local thermodynamic temperature and the energy resolution simultaneously from the photoelectron spectrum, without any preliminary results of other measurements. The determined thermodynamic temperature was 99 ± 2.1 K, which was in good agreement with the reference temperature of 98.5 ± 0.5 K measured using a silicon diode sensor attached to the sample holder.
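
    A hedged sketch of the fitting idea, assuming a Fermi-Dirac edge convolved with a pseudo-Voigt (Gaussian/Lorentzian mixture) kernel and synthetic data in place of the measured Au(110) spectrum; the parameter values and kernel construction are illustrative only.

      import numpy as np
      from scipy.optimize import curve_fit

      kB = 8.617333e-5  # Boltzmann constant, eV/K

      def edge_model(E, EF, T, sigma, gamma, eta, amp, bg):
          # Fermi-Dirac step convolved with a Gaussian/Lorentzian mixture kernel
          fd = 1.0 / (np.exp((E - EF) / (kB * T)) + 1.0)
          dE = E[1] - E[0]
          Ek = np.arange(-200, 201) * dE                     # symmetric kernel grid
          gauss = np.exp(-0.5 * (Ek / sigma) ** 2)
          lorentz = gamma ** 2 / (Ek ** 2 + gamma ** 2)
          kern = eta * gauss / gauss.sum() + (1 - eta) * lorentz / lorentz.sum()
          return amp * np.convolve(fd, kern, mode="same") + bg

      # synthetic spectrum near the Fermi edge (placeholder for the measured data)
      E = np.linspace(-0.3, 0.3, 600)                        # energy relative to EF, eV
      data = edge_model(E, 0.0, 99.0, 0.010, 0.004, 0.7, 1.0, 0.02)
      data = data + 0.01 * np.random.randn(E.size)

      p0 = [0.0, 120.0, 0.02, 0.005, 0.5, 1.0, 0.0]
      popt, pcov = curve_fit(edge_model, E, data, p0=p0)
      print(f"fitted temperature: {popt[1]:.1f} K, Gaussian width: {popt[2]*1e3:.1f} meV")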

  18. The Dynamics of Power laws: Fitness and Aging in Preferential Attachment Trees

    NASA Astrophysics Data System (ADS)

    Garavaglia, Alessandro; van der Hofstad, Remco; Woeginger, Gerhard

    2017-09-01

    Continuous-time branching processes describe the evolution of a population whose individuals generate a random number of children according to a birth process. Such branching processes can be used to understand preferential attachment models in which the birth rates are linear functions. We are motivated by citation networks, where power-law citation counts are observed as well as aging in the citation patterns. To model this, we introduce fitness and age-dependence in these birth processes. The multiplicative fitness moderates the rate at which children are born, while the aging is integrable, so that individuals receive a finite number of children in their lifetime. We show the existence of a limiting degree distribution for such processes. In the preferential attachment case, where fitness and aging are absent, this limiting degree distribution is known to have power-law tails. We show that the limiting degree distribution has exponential tails for bounded fitnesses in the presence of integrable aging, while the power-law tail is restored when integrable aging is combined with fitness with unbounded support with at most exponential tails. In the absence of integrable aging, such processes are explosive.

  19. Genomic estimation of additive and dominance effects and impact of accounting for dominance on accuracy of genomic evaluation in sheep populations.

    PubMed

    Moghaddar, N; van der Werf, J H J

    2017-12-01

    The objectives of this study were to estimate the additive and dominance variance components of several weight and ultrasound-scanned body composition traits in purebred and combined cross-bred sheep populations based on single nucleotide polymorphism (SNP) marker genotypes and then to investigate the effect of fitting additive and dominance effects on the accuracy of genomic evaluation. Additive and dominance variance components were estimated in a mixed model equation based on "average information restricted maximum likelihood" using additive and dominance (co)variances between animals calculated from 48,599 SNP marker genotypes. Genomic prediction was based on genomic best linear unbiased prediction (GBLUP), and the accuracy of prediction was assessed based on a random 10-fold cross-validation. Across different weight and scanned body composition traits, dominance variance ranged from 0.0% to 7.3% of the phenotypic variance in the purebred population and from 7.1% to 19.2% in the combined cross-bred population. In the combined cross-bred population, the dominance variance decreased to between 3.1% and 9.9% after accounting for heterosis effects. Accounting for dominance effects significantly improved the likelihood of the fitted model in the combined cross-bred population. This study showed a substantial dominance genetic variance for weight and ultrasound-scanned body composition traits, particularly in the cross-bred population; however, the improvement in the accuracy of genomic breeding values was small and statistically not significant. Dominance variance estimates in the combined cross-bred population could be overestimated if heterosis is not fitted in the model. © 2017 Blackwell Verlag GmbH.

  20. Searching for oscillations in the primordial power spectrum. II. Constraints from Planck data

    NASA Astrophysics Data System (ADS)

    Meerburg, P. Daniel; Spergel, David N.; Wandelt, Benjamin D.

    2014-03-01

    In this second of two papers we apply our recently developed code to search for resonance features in the Planck CMB temperature data. We search for both log-spaced and linear-spaced oscillations and compare our findings with the results of our WMAP9 analysis and the Planck team analysis [P. A. R. Ade et al. (Planck Collaboration), arXiv:1303.5082]. While there are hints of log-spaced resonant features present in the WMAP9 data, the significance of these features weakens with more data. With more accurate small scale measurements, we also find that the best-fit frequency has shifted and the amplitude has been reduced. We confirm the presence of several low frequency peaks, earlier identified by the Planck team, but with a better improvement of fit (Δχ²_eff ≈ 12). We further investigate this improvement by allowing the lensing potential to vary as well, showing mild correlation between the amplitude of the oscillations and the lensing amplitude. We find that the improvement of the fit increases even more (Δχ²_eff ≈ 14) for the low frequencies that modify the spectrum in a way that mimics the lensing effect. Since these features were not present in the WMAP data, they are primarily due to better measurements by Planck at small angular scales. For linear-spaced oscillations we find a maximum Δχ²_eff ≈ 13 scanning two orders of magnitude in frequency space, and the biggest improvements are at extremely high frequencies. Again, we recover a best-fit frequency very close to the one found in WMAP9, which confirms that the fit improvement is driven by low ℓ. Further comparisons with WMAP9 show that Planck contains many more features, both for linear- and log-spaced oscillations, but with a smaller improvement of fit. We discuss the improvement as a function of the number of modes and study the effect of the 217 GHz map, which appears to drive most of the improvement for log-spaced oscillations. Two points strongly suggest that the detected features are fitting a combination of the noise and the dip at ℓ ≈ 1800 in the 217 GHz map: the fit improvement mostly comes from a small range of ℓ, and comparison with simulations shows that the fit improvement is consistent with a statistical fluctuation. We conclude that none of the detected features are statistically significant.

  1. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2014-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use. PMID:25419006

  2. Low-memory iterative density fitting.

    PubMed

    Grajciar, Lukáš

    2015-07-30

    A new low-memory modification of the density fitting approximation based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver is presented. Iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix that decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with the linear scaling memory requirements only. Compared with the standard density fitting implementation, up to 15-fold reduction of the memory requirements is achieved for the most efficient preconditioner at a cost of only 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. © 2015 Wiley Periodicals, Inc.

  3. Non-linear Multidimensional Optimization for use in Wire Scanner Fitting

    NASA Astrophysics Data System (ADS)

    Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; CASA and Accelerator Ops Collaboration

    2013-10-01

    To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle-swarm optimization. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement of the fit with the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method yields an optimal signal fit and can be easily applied to similar problems. Financial support from DoE, NSF, ODU, DoD, and Jefferson Lab.

  4. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    PubMed

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and it can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.

  5. Analysis of facial motion patterns during speech using a matrix factorization algorithm

    PubMed Central

    Lucero, Jorge C.; Munhall, Kevin G.

    2008-01-01

    This paper presents an analysis of facial motion during speech to identify linearly independent kinematic regions. The data consists of three-dimensional displacement records of a set of markers located on a subject’s face while producing speech. A QR factorization with column pivoting algorithm selects a subset of markers with independent motion patterns. The subset is used as a basis to fit the motion of the other facial markers, which determines facial regions of influence of each of the linearly independent markers. Those regions constitute kinematic “eigenregions” whose combined motion produces the total motion of the face. Facial animations may be generated by driving the independent markers with collected displacement records. PMID:19062866
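
    An illustrative sketch of the marker-selection step using SciPy's pivoted QR factorization; the displacement matrix below is randomly generated low-rank data standing in for the recorded 3D marker trajectories.

      import numpy as np
      from scipy.linalg import qr

      rng = np.random.default_rng(0)
      # rows = time samples, columns = marker displacement channels (placeholder data)
      X = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 30))   # rank-5 motion
      X += 0.01 * rng.standard_normal(X.shape)

      # pivoted QR orders columns by how much independent variance they carry
      Q, R, piv = qr(X, pivoting=True)
      k = 5                                  # number of independent markers to keep
      basis_idx = piv[:k]                    # indices of the selected markers
      B = X[:, basis_idx]

      # fit every remaining marker as a linear combination of the selected ones
      coeffs, *_ = np.linalg.lstsq(B, X, rcond=None)
      rel_error = np.linalg.norm(X - B @ coeffs) / np.linalg.norm(X)
      print("selected markers:", basis_idx, "relative reconstruction error:", rel_error)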

  6. Optimization of isotherm models for pesticide sorption on biopolymer-nanoclay composite by error analysis.

    PubMed

    Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M

    2017-04-01

    A carboxymethyl cellulose-nano organoclay (nano montmorillonite modified with 35-45 wt% dimethyl dialkyl (C14-C18) amine (DMDA)) composite was prepared by the solution intercalation method. The prepared composite was characterized by infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM). The composite was evaluated for its pesticide sorption efficiency for atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to the Langmuir and Freundlich isotherms using linear and non-linear methods. The linear regression method suggested best fitting of the sorption data to the Type II Langmuir and Freundlich isotherms. In order to avoid the bias resulting from linearization, seven different error parameters were also analyzed by the non-linear regression method. The non-linear error analysis suggested that the sorption data fitted well to the Langmuir model rather than the Freundlich model. The maximum sorption capacity, Q0 (μg/g), was highest for imidacloprid (2000) followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination of linear regression alone cannot be used for comparing the fit of the Langmuir and Freundlich models, and non-linear error analysis needs to be done to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Study of the observational compatibility of an inhomogeneous cosmology with linear expansion according to SNe Ia

    NASA Astrophysics Data System (ADS)

    Monjo, R.

    2017-11-01

    Most current cosmological theories are built by combining an isotropic and homogeneous manifold with a scale factor that depends on time. If one supposes a hyperconical universe with linear expansion, an inhomogeneous metric can be obtained by an appropriate transformation that preserves the proper time. This model locally tends to a flat Friedmann-Robertson-Walker metric with linear expansion. The objective of this work is to analyze the observational compatibility of the inhomogeneous metric considered. For this purpose, the corresponding luminosity distance was obtained and compared with the observations of 580 SNe Ia, taken from the Supernova Cosmology Project. The best fit of the hyperconical model obtains χ₀² = 562, the same value as the standard ΛCDM model. Finally, a possible relationship is found between both theories.

  8. Comparative analysis of linear and non-linear method of estimating the sorption isotherm parameters for malachite green onto activated carbon.

    PubMed

    Kumar, K Vasanth

    2006-08-21

    The experimental equilibrium data of malachite green onto activated carbon were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherms by linear and non-linear methods. A comparison between the linear and non-linear methods of estimating the isotherm parameters is discussed. The four different linearized forms of the Langmuir isotherm are also discussed. The results confirmed the non-linear method as a better way to obtain the isotherm parameters. The best-fitting isotherms were the Langmuir and Redlich-Peterson isotherms. Redlich-Peterson is a special case of Langmuir when the Redlich-Peterson isotherm constant g is unity.

  9. Parameterizing sorption isotherms using a hybrid global-local fitting procedure.

    PubMed

    Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J

    2017-05-01

    Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of state-of-the-art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions - where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Non-linear Growth Models in Mplus and SAS

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
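
    Outside of Mplus or SAS, the same kind of sigmoid fit can be sketched for a single series with SciPy; the Gompertz parameterisation and the data points below are invented for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      def gompertz(t, asymptote, displacement, rate):
          # Gompertz growth: approaches the asymptote with a decelerating rate
          return asymptote * np.exp(-displacement * np.exp(-rate * t))

      t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])   # years since school entry (illustrative)
      y = np.array([22.0, 35, 55, 74, 88, 104, 112])       # achievement scores (illustrative)

      popt, _ = curve_fit(gompertz, t, y, p0=[120.0, 2.0, 1.0])
      print("asymptote, displacement, rate:", popt)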

  11. Non-Linear Concentration-Response Relationships between Ambient Ozone and Daily Mortality.

    PubMed

    Bae, Sanghyuk; Lim, Youn-Hee; Kashima, Saori; Yorifuji, Takashi; Honda, Yasushi; Kim, Ho; Hong, Yun-Chul

    2015-01-01

    Ambient ozone (O3) concentration has been reported to be significantly associated with mortality. However, the linearity of the relationship and the presence of a threshold have been controversial. The aim of the present study was to examine the concentration-response relationship and threshold of the association between ambient O3 concentration and non-accidental mortality in 13 Japanese and Korean cities from 2000 to 2009. We selected Japanese and Korean cities which have populations of over 1 million. We constructed Poisson regression models adjusting for daily mean temperature, daily mean PM10, humidity, time trend, season, year, day of the week, holidays and yearly population. The association between O3 concentration and mortality was examined using linear, spline and linear-threshold models. The thresholds were estimated for each city by constructing linear-threshold models. We also examined the city-combined association using a generalized additive mixed model. The mean O3 concentration did not differ greatly between Korea and Japan: 26.2 ppb and 24.2 ppb, respectively. Seven out of 13 cities showed better fits for the spline model compared with the linear model, supporting a non-linear relationship between O3 concentration and mortality. All of the 7 cities showed J- or U-shaped associations suggesting the existence of thresholds. The range of city-specific thresholds was from 11 to 34 ppb. The city-combined analysis also showed a non-linear association with a threshold around 30-40 ppb. We observed a non-linear concentration-response relationship, with thresholds, between daily mean ambient O3 concentration and the daily number of non-accidental deaths in Japanese and Korean cities.
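
    A minimal sketch of a linear-threshold (hockey-stick) Poisson model with a scan over candidate thresholds, assuming the statsmodels GLM interface and synthetic daily data; it omits the smooth terms, lags, and confounders used in the actual study.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 1000
      o3 = rng.uniform(0, 80, n)                    # daily mean O3, ppb (synthetic)
      temp = rng.normal(15, 8, n)                   # daily mean temperature (synthetic)
      # synthetic mortality counts with a true threshold near 30 ppb
      lam = np.exp(3.0 + 0.004 * np.maximum(o3 - 30, 0) + 0.002 * temp)
      deaths = rng.poisson(lam)

      best = None
      for thr in range(5, 60, 5):                   # scan candidate thresholds
          X = sm.add_constant(np.column_stack([np.maximum(o3 - thr, 0), temp]))
          fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
          if best is None or fit.deviance < best[1]:
              best = (thr, fit.deviance, fit.params[1])

      print(f"best-fitting threshold: {best[0]} ppb, slope above threshold: {best[2]:.4f}")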

  12. JMOSFET: A MOSFET parameter extractor with geometry-dependent terms

    NASA Technical Reports Server (NTRS)

    Buehler, M. G.; Moore, B. T.

    1985-01-01

    The parameters of the metal-oxide-silicon field-effect transistors (MOSFETs) included on the Combined Release and Radiation Effects Satellite (CRRES) test chips need to be extracted with a method that is simple but comprehensive enough to be used in wafer acceptance, and sufficiently accurate to be used for integrated circuits. A set of MOSFET parameter extraction procedures is developed that is directly linked to the MOSFET model equations and facilitates the use of simple, direct curve-fitting techniques. In addition, the major physical effects that affect MOSFET operation in the linear and saturation regions are included for devices fabricated in 1.2 to 3.0 micrometer CMOS technology. The fitting procedures were designed to establish single values for such parameters as threshold voltage and transconductance and to provide for slope matching between the linear and saturation regions of the MOSFET output current-voltage curves. Four different sizes of transistors that cover a rectangular-shaped region of the channel length-width plane are analyzed.
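
    For illustration, a generic linear-region extraction of threshold voltage and transconductance from a straight-line fit of the transfer curve might look as follows; the bias values and currents are hypothetical, and this is not the JMOSFET procedure itself.

      import numpy as np

      # hypothetical linear-region transfer curve at small Vds: Id ≈ beta * (Vgs - Vt) * Vds
      vds = 0.1                                          # drain-source voltage, V
      vgs = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])     # gate voltages, V (illustrative)
      ids = np.array([8e-6, 33e-6, 58e-6, 83e-6, 108e-6, 133e-6])  # drain currents, A (illustrative)

      slope, intercept = np.polyfit(vgs, ids, 1)         # straight-line fit above threshold
      beta = slope / vds                                 # transconductance parameter, A/V^2
      vt = -intercept / slope                            # threshold voltage from the x-intercept
      print(f"beta = {beta:.2e} A/V^2, Vt = {vt:.2f} V")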

  13. Predicting phenotypes of asthma and eczema with machine learning

    PubMed Central

    2014-01-01

    Background There is increasing recognition that asthma and eczema are heterogeneous diseases. We investigated the predictive ability of a spectrum of machine learning methods to disambiguate clinical sub-groups of asthma, wheeze and eczema, using a large heterogeneous set of attributes in an unselected population. The aim was to identify to what extent such heterogeneous information can be combined to reveal specific clinical manifestations. Methods The study population comprised a cross-sectional sample of adults, and included representatives of the general population enriched by subjects with asthma. Linear and non-linear machine learning methods, from logistic regression to random forests, were fit on a large attribute set including demographic, clinical and laboratory features, genetic profiles and environmental exposures. Outcome of interest were asthma, wheeze and eczema encoded by different operational definitions. Model validation was performed via bootstrapping. Results The study population included 554 adults, 42% male, 38% previous or current smokers. Proportion of asthma, wheeze, and eczema diagnoses was 16.7%, 12.3%, and 21.7%, respectively. Models were fit on 223 non-genetic variables plus 215 single nucleotide polymorphisms. In general, non-linear models achieved higher sensitivity and specificity than other methods, especially for asthma and wheeze, less for eczema, with areas under receiver operating characteristic curve of 84%, 76% and 64%, respectively. Our findings confirm that allergen sensitisation and lung function characterise asthma better in combination than separately. The predictive ability of genetic markers alone is limited. For eczema, new predictors such as bio-impedance were discovered. Conclusions More usefully-complex modelling is the key to a better understanding of disease mechanisms and personalised healthcare: further advances are likely with the incorporation of more factors/attributes and longitudinal measures. PMID:25077568

  14. Parametrization of free ion levels of four isoelectronic 4f2 systems: Insights into configuration interaction parameters

    NASA Astrophysics Data System (ADS)

    Yeung, Yau Yuen; Tanner, Peter A.

    2013-12-01

    The experimental free ion 4f² energy level data sets comprising 12 or 13 J-multiplets of La⁺, Ce²⁺, Pr³⁺ and Nd⁴⁺ have been fitted by a semiempirical atomic Hamiltonian comprising 8, 10, or 12 freely-varying parameters. The root mean square errors were 16.1, 1.3, 0.3 and 0.3 cm⁻¹, respectively, for fits with 10 parameters. The fitted inter-electronic repulsion and magnetic parameters vary linearly with ionic charge, i, but better linear fits are obtained with (4-i)², although the reason is unclear at present. The two-body configuration interaction parameters α and β exhibit a linear relation with [ΔE(bc)]⁻¹, where ΔE(bc) is the energy difference between the 4f² barycentre and that of the interacting configuration, namely 4f6p for La⁺, Ce²⁺, and Pr³⁺, and 5p⁵4f³ for Nd⁴⁺. The linear fit provides the rationale for the negative value of α for the case of La⁺, where the interacting configuration is located below 4f².

  15. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.

  16. A systematic way for the cost reduction of density fitting methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kállay, Mihály, E-mail: kallay@mail.bme.hu

    2014-12-28

    We present a simple approach for the reduction of the size of auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting out of the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with the system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.
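
    A toy sketch of the NAF construction, assuming the three-center integrals have already been assembled into a matrix over (AO pair, auxiliary function) indices; a random matrix stands in for the real integrals, and the truncation threshold is arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      n_ao_pairs, n_aux = 400, 120
      # placeholder for the three-center two-electron integrals (mu nu | P), reshaped to a matrix
      J3 = rng.standard_normal((n_ao_pairs, n_aux)) @ np.diag(np.exp(-0.1 * np.arange(n_aux)))

      # singular value decomposition over the auxiliary index
      U, s, Vt = np.linalg.svd(J3, full_matrices=False)
      keep = s > 1e-4 * s[0]                     # truncation threshold (illustrative)
      W = Vt[keep].T                             # columns = natural auxiliary functions (NAFs)

      # three-center integrals expressed in the compressed NAF basis
      J3_naf = J3 @ W
      print(f"kept {W.shape[1]} NAFs out of {n_aux} original fitting functions")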

  17. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    NASA Astrophysics Data System (ADS)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
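
    The final detection step can be illustrated with scikit-image's Hough transform on a synthetic star-subtracted image containing a single trail; the paper's own pipeline (object removal, line enhancement, rectangle comparison) is not reproduced here.

      import numpy as np
      from skimage.transform import hough_line, hough_line_peaks

      # synthetic star-subtracted image containing one bright linear trail (placeholder)
      img = np.zeros((200, 200))
      rr = np.arange(200)
      img[rr, np.clip((0.7 * rr + 30).astype(int), 0, 199)] = 1.0

      tested_angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
      h, theta, d = hough_line(img > 0.5, theta=tested_angles)

      # strongest accumulator peaks correspond to candidate trails
      for _, angle, dist in zip(*hough_line_peaks(h, theta, d, num_peaks=1)):
          print(f"detected line: angle = {np.degrees(angle):.1f} deg, distance = {dist:.1f} px")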

  18. Aerosol Impacts on Cirrus Clouds and High-Power Laser Transmission: A Combined Satellite Observation and Modeling Approach

    DTIC Science & Technology

    2009-03-22

    indirect effect (AIE) index determined from the slope of the fitted linear equation involving cloud particle size vs. aerosol optical depth is about a ... raindrop. The model simulations were performed for a 48-hour period, starting at 00Z on 29 March 2007, about 20 hours prior to ABL test flight time.

  19. Pb Speciation Data to Estimate Lead Bioavailability to Quail

    EPA Pesticide Factsheets

    Linear combination fitting data for lead speciation of soil samples evaluated through an in-vivo/in-vitro correlation for quail exposure. This dataset is associated with the following publication: Beyer, W.N., N. Basta, R. Chaney, P. Henry, D. Mosby, B. Rattner, K. Scheckel, D. Sprague, and J. Weber. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL. G.A. Burton, Jr., and C. H. Ward ENVIRONMENTAL TOXICOLOGY AND CHEMISTRY. Society of Environmental Toxicology and Chemistry, Pensacola, FL, USA, 35(9): 2311-2319, (2016).

  20. Linear combination fitting data

    EPA Pesticide Factsheets

    The dataset shows the weighted percentage of arsenic speciation for untreated and treated soil samples with amendments designed to immobilize arsenic in soils. This dataset is associated with the following publication: Mele, E., E. Donner, A. Juhasz, G. Brunetti, E. Smith, A. Betts, P. Castaldi, S. Deiana, K. Scheckel, and E. Lombi. In situ fixation of metal(loid)s in contaminated soils: a comparison of conventional, by product and engineered soil amendments. David L. Sedlak ENVIRONMENTAL SCIENCE & TECHNOLOGY. American Chemical Society, Washington, DC, USA, 49: 13501-13509, (2015).

  1. Spatially resolved regression analysis of pre-treatment FDG, FLT and Cu-ATSM PET from post-treatment FDG PET: an exploratory study

    PubMed Central

    Bowen, Stephen R; Chappell, Richard J; Bentzen, Søren M; Deveau, Michael A; Forrest, Lisa J; Jeraj, Robert

    2012-01-01

    Purpose To quantify associations between pre-radiotherapy and post-radiotherapy PET parameters via spatially resolved regression. Materials and methods Ten canine sinonasal cancer patients underwent PET/CT scans of [18F]FDG (FDGpre), [18F]FLT (FLTpre), and [61Cu]Cu-ATSM (Cu-ATSMpre). Following radiotherapy regimens of 50 Gy in 10 fractions, veterinary patients underwent FDG PET/CT scans at three months (FDGpost). Regression of standardized uptake values in baseline FDGpre, FLTpre and Cu-ATSMpre tumour voxels to those in FDGpost images was performed for linear, log-linear, generalized-linear and mixed-fit linear models. Goodness-of-fit in regression coefficients was assessed by R². Hypothesis testing of coefficients over the patient population was performed. Results Multivariate linear model fits of FDGpre to FDGpost were significantly positive over the population (FDGpost ~ 0.17 FDGpre, p=0.03), and classified slopes of RECIST non-responders and responders to be different (0.37 vs. 0.07, p=0.01). Generalized-linear model fits related FDGpre to FDGpost by a linear power law (FDGpost ~ FDGpre^0.93, p<0.001). Univariate mixture model fits of FDGpre improved R² from 0.17 to 0.52. Neither baseline FLT PET nor Cu-ATSM PET uptake contributed statistically significant multivariate regression coefficients. Conclusions Spatially resolved regression analysis indicates that pre-treatment FDG PET uptake is most strongly associated with three-month post-treatment FDG PET uptake in this patient population, though associations are histopathology-dependent. PMID:22682748

  2. A Hybrid Method to Estimate Specific Differential Phase and Rainfall With Linear Programming and Physics Constraints

    DOE PAGES

    Huang, Hao; Zhang, Guifu; Zhao, Kun; ...

    2016-10-20

    A hybrid method combining linear programming (LP) and physical constraints is developed to estimate specific differential phase (KDP) and to improve rain estimation. Moreover, the hybrid KDP estimator and the existing estimators of LP, least squares fitting, and a self-consistent relation of polarimetric radar variables are evaluated and compared using simulated data. Our simulation results indicate the new estimator's superiority, particularly in regions where backscattering phase (δhv) dominates. Further, a quantitative comparison between auto-weather-station rain-gauge observations and KDP-based radar rain estimates for a Meiyu event also demonstrates the superiority of the hybrid KDP estimator over existing methods.

  3. Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Santoso, Noviyanti; Wibowo, Wahyu

    2018-03-01

    A financial difficulty is the early stage before bankruptcy. Bankruptcies caused by financial distress can be seen in the financial statements of the company. The ability to predict financial distress has become an important research topic because it can provide early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. This research builds a prediction model of financial distress for industrial companies in Indonesia by comparing the performance of Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) combined with a variable selection technique. The result of this research is that the prediction model based on hybrid Stepwise-SVM obtains a better balance among fitting ability, generalization ability and model stability than the other models.

  4. Megaplumes on Venus

    NASA Technical Reports Server (NTRS)

    Kaula, W. M.

    1993-01-01

    The geoid and topography heights of Atla Regio and Beta Regio, both peaks and slopes, appear explicable as steady-state plumes, if non-linear viscosity η(τ, ε) is taken into account. Strongly constrained by the data are an effective plume depth of about 700 km, with a temperature anomaly thereat of about 30 degrees, leading to more than 400 degrees at the plume head. Also well constrained is the combination Qη/s₀⁴ = (volume flow rate)(viscosity)/(plume radius): about 11 Pa/m/sec. The topographic slopes dh/ds constrain the combination Q/A, where A is the thickness of the spreading layer, since the slope varies inversely with velocity. The geoid slopes dN/ds require enhancement of the deeper flow, as expected from non-linear viscosity. The Beta data are best fit by Q = 500 m³/sec and A = 140 km; the Atla, by Q = 440 m³/sec and A = 260 km. The dynamic contribution to the topographic slope is minor.

  5. Determination of time zero from a charged particle detector

    DOEpatents

    Green, Jesse Andrew [Los Alamos, NM

    2011-03-15

    A method, system and computer program are used to determine a linear track having a good fit to the most likely or expected path of a charged particle passing through a charged particle detector having a plurality of drift cells. Hit signals from the charged particle detector are associated with a particular charged particle track. An initial estimate of time zero is made from these hit signals, and linear tracks are then fit to drift radii for each particular time-zero estimate. The linear track having the best fit is then selected, and errors in fit and tracking parameters are computed. By adopting this method and system, the use of the large and expensive fast detectors otherwise needed to determine time zero in charged particle detectors can be avoided.

  6. Optimization with Fuzzy Data via Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Kosiński, Witold

    2010-09-01

    Ordered fuzzy numbers (OFN), which make it possible to deal with fuzzy inputs quantitatively, exactly in the same way as with real numbers, have been recently defined by the author and his two coworkers. The set of OFN forms a normed space and is a partially ordered ring. The case when the numbers are presented in the form of step functions, with finite resolution, simplifies all operations and the representation of defuzzification functionals. A general optimization problem with fuzzy data is formulated. Its fitness function attains fuzzy values. Since the adjoint space to the space of OFN is finite dimensional, a convex combination of all linear defuzzification functionals may be used to introduce a total order and a real-valued fitness function. Genetic operations on individuals representing fuzzy data are defined.

  7. Estimation of the linear mixed integrated Ornstein–Uhlenbeck model

    PubMed Central

    Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate

    2017-01-01

    ABSTRACT The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536

  8. Observational bounds on the cosmic radiation density

    NASA Astrophysics Data System (ADS)

    Hamann, J.; Hannestad, S.; Raffelt, G. G.; Wong, Y. Y. Y.

    2007-08-01

    We consider the inference of the cosmic radiation density, traditionally parametrized as the effective number of neutrino species Neff, from precision cosmological data. Paying particular attention to systematic effects, notably scale-dependent biasing in the galaxy power spectrum, we find no evidence for a significant deviation of Neff from the standard value of Neff0 = 3.046 in any combination of cosmological data sets, in contrast to some recent conclusions of other authors. The combination of all available data in the linear regime favours, in the context of a 'vanilla+Neff' cosmological model, 1.1

  9. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    PubMed

    Brown, A M

    2001-06-01

    The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.

  10. Histological Grading of Hepatocellular Carcinomas with Intravoxel Incoherent Motion Diffusion-weighted Imaging: Inconsistent Results Depending on the Fitting Method.

    PubMed

    Ichikawa, Shintaro; Motosugi, Utaroh; Hernando, Diego; Morisaka, Hiroyuki; Enomoto, Nobuyuki; Matsuda, Masanori; Onishi, Hiroshi

    2018-04-10

    To compare the abilities of three intravoxel incoherent motion (IVIM) imaging approximation methods to discriminate the histological grade of hepatocellular carcinomas (HCCs). Fifty-eight patients (60 HCCs) underwent IVIM imaging with 11 b-values (0-1000 s/mm²). Slow (D) and fast diffusion coefficients (D*) and the perfusion fraction (f) were calculated for the HCCs using the mean signal intensities in regions of interest drawn by two radiologists. Three approximation methods were used. First, all three parameters were obtained simultaneously using non-linear fitting (method A). Second, D was obtained using linear fitting (b = 500 and 1000), followed by non-linear fitting for D* and f (method B). Third, D was obtained by linear fitting, f was obtained using the regression line intersection and signals at b = 0, and non-linear fitting was used for D* (method C). A receiver operating characteristic analysis was performed to reveal the abilities of these methods to distinguish poorly-differentiated from well-to-moderately-differentiated HCCs. Inter-reader agreements were assessed using intraclass correlation coefficients (ICCs). The measurements of D, D*, and f in methods B and C (Az-value, 0.658-0.881) had better discrimination abilities than did those in method A (Az-value, 0.527-0.607). The ICCs of D and f were good to excellent (0.639-0.835) with all methods. The ICCs of D* were moderate with methods B (0.580) and C (0.463) and good with method A (0.705). The IVIM parameters may vary depending on the fitting methods, and therefore, further technical refinement may be needed.
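
    A simplified sketch in the spirit of method B: D from a linear fit of the log-signal at high b-values, then a non-linear fit of f and D* with D held fixed; the b-values and signal fractions below are invented, and the real analysis fits ROI-averaged signals per reader.

      import numpy as np
      from scipy.optimize import curve_fit

      b = np.array([0, 10, 20, 40, 80, 150, 300, 500, 700, 900, 1000], float)          # s/mm^2
      S = np.array([1.00, 0.96, 0.93, 0.89, 0.84, 0.78, 0.66, 0.55, 0.47, 0.40, 0.37])  # illustrative

      # step 1: slow diffusion coefficient D from a linear fit of log(S) at high b-values
      hi = b >= 500
      D = -np.polyfit(b[hi], np.log(S[hi]), 1)[0]

      # step 2: with D fixed, fit the perfusion fraction f and fast coefficient D*
      def ivim_fixed_D(b, f, Dstar, S0):
          return S0 * (f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D))

      (f, Dstar, S0), _ = curve_fit(ivim_fixed_D, b, S, p0=[0.1, 0.01, 1.0],
                                    bounds=([0, 1e-4, 0.5], [0.5, 0.5, 1.5]))
      print(f"D = {D:.2e} mm^2/s, D* = {Dstar:.2e} mm^2/s, f = {f:.2f}")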

  11. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    DOE PAGES

    Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...

    2015-12-10

    We report that the declared linear density of ²³⁸U and ²³⁵U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of ²³⁵U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative approaches (linear) to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
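
    For illustration only, the sketch below compares a direct non-linear fit of a generic rational (Padé-type) calibration curve with its linearised form; the functional form and numbers are assumptions, not the exact UNCL calibration expression.

      import numpy as np
      from scipy.optimize import curve_fit

      def pade(rho, a, b):
          # generic rational (Pade-type) calibration: coincidence rate vs 235U linear density
          return a * rho / (1.0 + b * rho)

      rho = np.array([5.0, 10, 15, 20, 25, 30])            # 235U linear density (illustrative units)
      rate = pade(rho, 12.0, 0.03) * (1 + 0.02 * np.random.default_rng(2).standard_normal(6))

      # non-linear approach: fit the rational form directly
      (a_nl, b_nl), _ = curve_fit(pade, rho, rate, p0=[10.0, 0.01])

      # linearised approach: rho/rate = 1/a + (b/a) * rho is a straight line in rho
      slope, intercept = np.polyfit(rho, rho / rate, 1)
      a_lin, b_lin = 1.0 / intercept, slope / intercept

      print(f"non-linear: a = {a_nl:.2f}, b = {b_nl:.4f}")
      print(f"linearised: a = {a_lin:.2f}, b = {b_lin:.4f}")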

  12. Parametric resonance in the early Universe—a fitting analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Daniel G.; Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es

    Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. In order to surpass this circumstance in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.

  13. Iterative fitting method for the evaluation and quantification of PAES spectra

    NASA Astrophysics Data System (ADS)

    Zimnik, Samantha; Hackenberg, Mathias; Hugenschmidt, Christoph

    2017-01-01

    The elemental composition of surfaces is of great importance for the understanding of many surface processes such as catalysis. For a reliable analysis and a comparison of results, the quantification of the measured data is indispensable. Positron annihilation induced Auger Electron Spectroscopy (PAES) is a spectroscopic technique that measures the elemental composition with outstanding surface sensitivity, but up to now, no standardized evaluation procedure for PAES spectra is available. In this paper we present a new approach for the evaluation of PAES spectra of compounds, using the spectra obtained for the pure elements as reference. The measured spectrum is then fitted by a linear combination of the reference spectra by varying their intensities. The comparison of the results of the fitting routine with a calculation of the full parameter range shows an excellent agreement. We present the results of the new analysis method to evaluate the PAES spectra of sub-monolayers of Ni on a Pd substrate.
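
    The core of the evaluation, fitting a measured spectrum as a non-negative linear combination of pure-element reference spectra, can be sketched with non-negative least squares; the reference and compound spectra below are synthetic stand-ins for the Ni and Pd data.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(3)
      energy = np.linspace(0, 100, 400)

      def peak(center, width):
          return np.exp(-0.5 * ((energy - center) / width) ** 2)

      # reference Auger spectra of the pure elements (placeholders for measured Ni and Pd spectra)
      ref_ni = peak(61, 3) + 0.4 * peak(68, 4)
      ref_pd = peak(43, 4) + 0.3 * peak(48, 3)
      A = np.column_stack([ref_ni, ref_pd])

      # "measured" compound spectrum: 35% Ni-like + 65% Pd-like plus noise
      measured = 0.35 * ref_ni + 0.65 * ref_pd + 0.01 * rng.standard_normal(energy.size)

      weights, residual = nnls(A, measured)      # intensities constrained to be non-negative
      weights /= weights.sum()                   # normalise to fractional contributions
      print(f"fitted fractions: Ni-like = {weights[0]:.2f}, Pd-like = {weights[1]:.2f}")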

  14. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    PubMed

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, predicting the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear time Simulation using Best-fit Algorithms (SimBA) for two classes of problems where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.

  15. Temperature, concentration, and frequency dependence of the dielectric constant near the critical point of the binary liquid mixture nitrobenzene-tetradecane

    NASA Astrophysics Data System (ADS)

    Leys, Jan; Losada-Pérez, Patricia; Cordoyiannis, George; Cerdeiriña, Claudio A.; Glorieux, Christ; Thoen, Jan

    2010-03-01

    Detailed results are reported for the dielectric constant ɛ as a function of temperature, concentration, and frequency near the upper critical point of the binary liquid mixture nitrobenzene-tetradecane. The data have been analyzed in the context of the recently developed concept of complete scaling. It is shown that the amplitude of the low frequency critical Maxwell-Wagner relaxation (with a relaxation frequency around 10 kHz) along the critical isopleth is consistent with the predictions of a droplet model for the critical fluctuations. The temperature dependence of ɛ in the homogeneous phase can be well described with a combination of a (1-α) power law term (with α the heat capacity critical exponent) and a linear term in reduced temperature with the Ising value for α. For the proper description of the temperature dependence of the difference Δɛ between the two coexisting phases below the critical temperature, it turned out that good fits with the Ising value for the order parameter exponent β required the addition of a corrections-to-scaling contribution or a linear term in reduced temperature. Good fits to the dielectric diameter ɛd require a (1-α) power law term, a 2β power law term (in the past considered as spurious), and a linear term in reduced temperature, consistent with complete scaling.

  16. Two Aspects of the Simplex Model: Goodness of Fit to Linear Growth Curve Structures and the Analysis of Mean Trends.

    ERIC Educational Resources Information Center

    Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)

  17. Transonic Compressor: Program System TXCO for Data Acquisition and On-Line Reduction.

    DTIC Science & Technology

    1980-10-01

    Fragment of a scanned program listing and flow chart for subroutine CALIB; the legible portions reference date and time handling (IMON, IDAY, IYEAR, IHOUR, IMIN, ISEC), linear curve fits, and SECON, the real intercept of the linear curve fit (as from CURVE).

  18. Isolating the cow-specific part of residual energy intake in lactating dairy cows using random regressions.

    PubMed

    Fischer, A; Friggens, N C; Berry, D P; Faverdin, P

    2018-07-01

    The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include both, model fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted, in one the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation average net energy intake (NEI) on lactation average milk energy output, average metabolic BW, as well as lactation loss and gain of body condition score. In the other, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow levels on the biological traits and intercept using fortnight repeated measures for the variables. This method split the predicted NEI in two parts: one quantifying the population mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows; all fed a constant energy-rich diet. Mixed models fitting cow-specific intercept and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared. The variance of REI estimated with the lactation average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation average model or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation average model may therefore reflect model fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method to isolate the cow-specific component of REI in dairy cows.

  19. A non-linear regression analysis program for describing electrophysiological data with multiple functions using Microsoft Excel.

    PubMed

    Brown, Angus M

    2006-04-01

    The objective of the present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the sum of the squared differences between the data to be fitted and the function(s) describing the data, using an iterative generalized reduced gradient method. While it is a straightforward procedure to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions and usually requires specialized, expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
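    The same multi-Gaussian decomposition can be reproduced outside Excel. The sketch below is a minimal, hypothetical example using SciPy's curve_fit in place of SOLVER (so it is not the program described in the paper); the waveform is synthetic and the number of components and initial guesses are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def multi_gauss(t, *p):
        """Sum of Gaussians; p = (a1, mu1, sigma1, a2, mu2, sigma2, ...)."""
        y = np.zeros_like(t, dtype=float)
        for a, mu, sigma in zip(p[0::3], p[1::3], p[2::3]):
            y += a * np.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))
        return y

    # Synthetic stand-in for a recorded waveform (three overlapping components).
    t = np.linspace(0.0, 10.0, 500)
    rng = np.random.default_rng(0)
    v = multi_gauss(t, 1.0, 2.0, 0.3, 0.6, 3.5, 0.5, 0.3, 5.0, 0.8)
    v += 0.02 * rng.normal(size=t.size)

    p0 = [0.9, 1.8, 0.4, 0.5, 3.3, 0.6, 0.2, 5.2, 0.7]   # initial guesses, 3 peaks
    popt, pcov = curve_fit(multi_gauss, t, v, p0=p0)
    rss = np.sum((v - multi_gauss(t, *popt)) ** 2)        # the quantity SOLVER minimizes
    ```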

  20. Reducing motion artifacts in 4D MR images using principal component analysis (PCA) combined with linear polynomial fitting model

    PubMed Central

    Yang, Juan; Yin, Yong; Li, Dengwang

    2015-01-01

    We have previously developed a retrospective 4D-MRI technique using body area as the respiratory surrogate, but generally, the reconstructed 4D MR images suffer from severe or mild artifacts mainly caused by irregular motion during image acquisition. Those image artifacts may potentially affect the accuracy of tumor target delineation or the shape representation of surrounding nontarget tissues and organs. The purpose of this study was therefore to propose an approach employing principal component analysis (PCA), combined with a linear polynomial fitting model, to remodel the displacement vector fields (DVFs) obtained from deformable image registration (DIR), with the main goal of reducing the motion artifacts in 4D MR images. Seven patients with hepatocellular carcinoma (2/7) or liver metastases (5/7) in the liver, as well as a patient with non-small cell lung cancer (NSCLC), were enrolled in an IRB-approved prospective study. Both CT and MR simulations were performed for each patient for treatment planning. Multiple-slice, multiple-phase, cine-MRI images were acquired in the axial plane for 4D-MRI reconstruction. Single-slice 2D cine-MR images were acquired across the center of the tumor in the axial, coronal, and sagittal planes. For a 4D MR image dataset, the DVFs in three orthogonal directions (superior-inferior (SI), anterior-posterior (AP), and medial-lateral (ML)) relative to a specific reference phase were calculated using an in-house DIR algorithm. The DVFs were preprocessed in the temporal and spatial dimensions using a polynomial fitting model, with the goal of correcting the potential registration errors introduced by three-dimensional DIR. PCA was then used to decompose each fitted DVF into a linear combination of three principal motion bases, whose spanned subspaces combined with their projections had been validated to be sufficient to represent the regular respiratory motion. By warping the reference MR image using the remodeled DVFs, 'synthetic' MR images with reduced motion artifacts were generated at the selected phases. Tumor motion trajectories derived from cine-MRI, 4D CT, original 4D MRI, and 'synthetic' 4D MRI were analyzed in the SI, AP, and ML directions, respectively. Their correlation coefficient (CC) and difference (D) in motion amplitude were calculated for comparison. Over all the patients, the means and standard deviations (SDs) of CC comparing 'synthetic' 4D MRI and cine-MRI were 0.98±0.01, 0.98±0.01, and 0.99±0.01 in the SI, AP, and ML directions, respectively. The mean±SD Ds were 0.59±0.09 mm, 0.29±0.10 mm, and 0.15±0.05 mm in the SI, AP, and ML directions, respectively. The means and SDs of CC comparing 'synthetic' 4D MRI and 4D CT were 0.96±0.01, 0.95±0.01, and 0.95±0.01 in the SI, AP, and ML directions, respectively. The mean±SD Ds were 0.76±0.20 mm, 0.33±0.14 mm, and 0.19±0.07 mm in the SI, AP, and ML directions, respectively. The means and SDs of CC comparing 'synthetic' 4D MRI and original 4D MRI were 0.98±0.01, 0.98±0.01, and 0.97±0.01 in the SI, AP, and ML directions, respectively. The mean±SD Ds were 0.58±0.10 mm, 0.30±0.09 mm, and 0.17±0.04 mm in the SI, AP, and ML directions, respectively. In this study we have proposed an approach employing PCA combined with a linear polynomial fitting model to capture the regular respiratory motion from a 4D MR image dataset, and its potential usefulness in reducing motion artifacts and improving image quality has been demonstrated by preliminary results in oncological patients. PACS numbers: 87.57.cp, 87.57.nj, 87.61.-c PMID:26103185
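    A schematic of the DVF remodelling step is sketched below; it assumes the per-direction DVFs are already available as an array of shape (phases, voxels), and the polynomial order and the use of scikit-learn's PCA are illustrative choices rather than the authors' implementation.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def remodel_dvf(dvf, phase_times, order=2, n_components=3):
        """dvf: array of shape (n_phases, n_voxels) with displacements in one
        direction (SI, AP or ML) relative to the reference phase.

        Step 1: per-voxel polynomial fit over the respiratory phases, to damp
                registration errors in the temporal dimension.
        Step 2: PCA keeping the leading motion bases; projecting onto them and
                reconstructing gives the remodelled, artifact-reduced DVF."""
        coeffs = np.polyfit(phase_times, dvf, deg=order)   # fits all voxels at once
        smoothed = np.stack(
            [np.polyval(coeffs[:, j], phase_times) for j in range(dvf.shape[1])], axis=1
        )
        pca = PCA(n_components=n_components)
        scores = pca.fit_transform(smoothed)               # (n_phases, n_components)
        return pca.inverse_transform(scores)               # remodelled DVF
    ```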

  1. Comparison of linear and non-linear method in estimating the sorption isotherm parameters for safranin onto activated carbon.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2005-08-31

    Comparison of the linear least-squares method and the non-linear method for estimating isotherm parameters was made using experimental equilibrium data for safranin sorption onto activated carbon at two solution temperatures, 305 and 313 K. The equilibrium data were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherm equations. All three isotherm equations fitted the experimental equilibrium data well. The results showed that the non-linear method is a better way to obtain the isotherm parameters. The Redlich-Peterson isotherm reduces to the Langmuir isotherm when the Redlich-Peterson constant g is unity.
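    A minimal sketch of the linear-versus-non-linear comparison for the Langmuir case is given below; the equilibrium data and initial guesses are invented for illustration. The point of the comparison is that linearisation (here Ce/qe versus Ce) distorts the error structure, so the two routes can yield different parameter estimates.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import linregress

    # Hypothetical equilibrium data: Ce (mg/L), qe (mg/g) at one temperature.
    Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 120.0])
    qe = np.array([8.1, 13.5, 19.8, 25.2, 28.9, 30.1])

    def langmuir(Ce, qm, KL):
        return qm * KL * Ce / (1.0 + KL * Ce)

    # Non-linear fit works directly on the untransformed data.
    (qm_nl, KL_nl), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.05])

    # Linearised fit: Ce/qe = 1/(qm*KL) + Ce/qm.
    res = linregress(Ce, Ce / qe)
    qm_lin, KL_lin = 1.0 / res.slope, res.slope / res.intercept
    ```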

  2. Detecting outliers when fitting data with nonlinear regression – a new method based on robust nonlinear regression and the false discovery rate

    PubMed Central

    Motulsky, Harvey J; Brown, Ronald E

    2006-01-01

    Background Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression. Results We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers, and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method. When analyzing simulated data, where all scatter is Gaussian, our method detects (falsely) one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate less than 1%. Conclusion Our method, which combines a new method of robust nonlinear regression with a new method of outlier identification, identifies outliers from nonlinear curve fits with reasonable power and few false positives. PMID:16526949
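    A rough sketch of the two-stage idea (robust fit, flag outliers, refit) follows. It is an approximation, not the published ROUT algorithm: the Lorentzian-based adaptive robust fit is replaced by SciPy's soft-L1 loss, and the outlier test uses a simple MAD-scaled residual p-value with a Benjamini-Hochberg correction; the example model and data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import least_squares, curve_fit
    from scipy.stats import t as t_dist
    from statsmodels.stats.multitest import multipletests

    def model(x, a, b):                  # example model: exponential decay
        return a * np.exp(-b * x)

    def residuals(p, x, y):
        return y - model(x, *p)

    # Synthetic data with a few injected outliers.
    x = np.linspace(0.0, 5.0, 40)
    rng = np.random.default_rng(1)
    y = model(x, 2.0, 0.8) + 0.05 * rng.normal(size=x.size)
    y[::13] += 0.8

    # 1) Robust fit: the soft-L1 loss down-weights large residuals.
    rob = least_squares(residuals, x0=[1.0, 1.0], loss="soft_l1", args=(x, y))

    # 2) Flag outliers: p-values from MAD-scaled residuals, controlled by FDR.
    r = residuals(rob.x, x, y)
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust sigma estimate
    pvals = 2.0 * t_dist.sf(np.abs(r) / scale, df=len(y) - len(rob.x))
    is_outlier = multipletests(pvals, alpha=0.01, method="fdr_bh")[0]

    # 3) Refit the cleaned data with ordinary least squares.
    popt, _ = curve_fit(model, x[~is_outlier], y[~is_outlier], p0=rob.x)
    ```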

  3. Employing general fit-bases for construction of potential energy surfaces with an adaptive density-guided approach

    NASA Astrophysics Data System (ADS)

    Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove

    2018-02-01

    We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested and no other choices with regards to the fit-basis functions need to be taken. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to that of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
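    As a toy illustration of a fit-basis whose inherent shape matches a potential cut, a single Morse function can be fitted to a one-dimensional grid of single-point energies. The grid and energies below are synthetic placeholders, and SciPy's curve_fit stands in for the non-linear conjugate gradient optimisation mentioned above.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def morse(r, De, a, re, e0):
        """Morse fit-basis function: well depth De, width a, minimum position re."""
        return De * (1.0 - np.exp(-a * (r - re))) ** 2 + e0

    # Synthetic stand-in for single-point energies along one coordinate (a.u.).
    r = np.linspace(1.2, 4.0, 15)
    rng = np.random.default_rng(2)
    E = morse(r, 0.18, 1.1, 1.9, 0.0) + 1e-4 * rng.normal(size=r.size)

    popt, _ = curve_fit(morse, r, E, p0=[0.2, 1.0, 1.8, 0.0])
    ```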

  4. Quantifying heterogeneity of lesion uptake in dynamic contrast enhanced MRI for breast cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Karahaliou, A.; Vassiou, K.; Skiadopoulos, S.; Kanavou, T.; Yiakoumelos, A.; Costaridou, L.

    2009-07-01

    The current study investigates whether texture features extracted from lesion kinetics feature maps can be used for breast cancer diagnosis. Fifty-five women with 57 breast lesions (27 benign, 30 malignant) were subjected to dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) on a 1.5 T system. A linear-slope model was fitted pixel-wise to a representative lesion slice time series, and the fitted parameters were used to create three kinetic maps (washout, time to peak enhancement, and peak enhancement). Twenty-eight grey-level co-occurrence matrix features were extracted from each lesion kinetic map. The ability of the texture features of each map to discriminate malignant from benign lesions was investigated using a Probabilistic Neural Network classifier. Additional classification was performed by combining the classification outputs of the most discriminating feature subsets from the three maps via majority voting. The combined scheme outperformed classification based on individual maps, achieving an area under the receiver operating characteristic curve of 0.960±0.029. The results suggest that heterogeneity of breast lesion kinetics, as quantified by texture analysis, may contribute to computer-assisted tissue characterization in DCE-MRI.

  5. Field dependent magnetic anisotropy of Fe1-xZnx thin films

    NASA Astrophysics Data System (ADS)

    Resnick, Damon A.; McClure, A.; Kuster, C. M.; Rugheimer, P.; Idzerda, Y. U.

    2013-05-01

    Using longitudinal magneto-optical Kerr effect in combination with a variable strength rotating magnetic field, called the Rotational Magneto-Optic Kerr Effect (ROTMOKE) method, we show that the magnetic anisotropy for thin Fe82Zn18 single crystal films, grown on MgO(001) substrates, depends linearly on the strength of the applied magnetic field at low fields but is constant (saturates) at fields greater than 350 Oe. The torque moment curves generated using ROTMOKE are well fit with a model that accounts for the uniaxial and cubic anisotropy with the addition of a cubic anisotropy that depends linearly on the applied magnetic field. The field dependent term is evidence of a large effect on the effective magnetic anisotropy in Fe1-xZnx thin films by the magnetostriction.

  6. Inclusion of fluorophores in cyclodextrins: a closer look at the fluorometric determination of association constants by linear and nonlinear fitting procedures

    NASA Astrophysics Data System (ADS)

    Hutterer, Rudi

    2018-01-01

    The author discusses methods for the fluorometric determination of affinity constants by linear and nonlinear fitting procedures. This is outlined in particular for the interaction between cyclodextrins and several anesthetic drugs, including benzocaine. Special emphasis is given to the limitations of certain fits, and the impact of such studies on enzyme-substrate interactions is demonstrated. Both the experimental part and the methods of analysis are well suited to students in an advanced lab.
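    For a 1:1 inclusion complex, the binding constant can be extracted either by a direct non-linear fit of the fluorescence titration or from a linearised double-reciprocal (Benesi-Hildebrand-type) plot. The sketch below uses invented intensities and assumes the cyclodextrin is in large excess over the fluorophore; it is illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import linregress

    cd = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0]) * 1e-3      # [cyclodextrin], M
    F = np.array([100.0, 128.0, 149.0, 178.0, 205.0, 226.0])   # invented intensities

    def binding_1to1(cd, F0, Finf, K):
        """1:1 model, cyclodextrin in large excess over the fluorophore."""
        return (F0 + Finf * K * cd) / (1.0 + K * cd)

    (F0_fit, Finf_fit, K_nonlinear), _ = curve_fit(binding_1to1, cd, F, p0=[100.0, 250.0, 500.0])

    # Linearised double-reciprocal plot: 1/(F - F0) vs 1/[CD]; K = intercept / slope.
    mask = cd > 0
    res = linregress(1.0 / cd[mask], 1.0 / (F[mask] - F[0]))
    K_linear = res.intercept / res.slope
    ```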

  7. On Least Squares Fitting Nonlinear Submodels.

    ERIC Educational Resources Information Center

    Bechtel, Gordon G.

    Three simplifying conditions are given for obtaining least squares (LS) estimates for a nonlinear submodel of a linear model. If these are satisfied, and if the subset of nonlinear parameters may be LS fit to the corresponding LS estimates of the linear model, then one attains the desired LS estimates for the entire submodel. Two illustrative…

  8. A study of data analysis techniques for the multi-needle Langmuir probe

    NASA Astrophysics Data System (ADS)

    Hoang, H.; Røed, K.; Bekkeng, T. A.; Moen, J. I.; Spicher, A.; Clausen, L. B. N.; Miloch, W. J.; Trondsen, E.; Pedersen, A.

    2018-06-01

    In this paper we evaluate two data analysis techniques for the multi-needle Langmuir probe (m-NLP). The instrument uses several cylindrical Langmuir probes, which are positively biased with respect to the plasma potential in order to operate in the electron saturation region. Since the currents collected by these probes can be sampled at kilohertz rates, the instrument is capable of resolving the ionospheric plasma structure down to the meter scale. The two data analysis techniques, a linear fit and a non-linear least squares fit, are discussed in detail using data from the Investigation of Cusp Irregularities 2 sounding rocket. It is shown that each technique has pros and cons with respect to the m-NLP implementation. Even though the linear fitting technique seems to be better than measurements from incoherent scatter radar and in situ instruments, m-NLPs can be longer and can be cleaned during operation to improve instrument performance. The non-linear least squares fitting technique would be more reliable provided that a higher number of probes are deployed.

  9. Are ethnic and gender specific equations needed to derive fat free mass from bioelectrical impedance in children of South asian, black african-Caribbean and white European origin? Results of the assessment of body composition in children study.

    PubMed

    Nightingale, Claire M; Rudnicka, Alicja R; Owen, Christopher G; Donin, Angela S; Newton, Sian L; Furness, Cheryl A; Howard, Emma L; Gillings, Rachel D; Wells, Jonathan C K; Cook, Derek G; Whincup, Peter H

    2013-01-01

    Bioelectrical impedance analysis (BIA) is a potentially valuable method for assessing lean mass and body fat levels in children from different ethnic groups. We examined the need for ethnic- and gender-specific equations for estimating fat free mass (FFM) from BIA in children from different ethnic groups and examined their effects on the assessment of ethnic differences in body fat. Cross-sectional study of children aged 8-10 years in London Primary schools including 325 South Asians, 250 black African-Caribbeans and 289 white Europeans with measurements of height, weight and arm-leg impedance (Z; Bodystat 1500). Total body water was estimated from deuterium dilution and converted to FFM. Multilevel models were used to derive three types of equation {A: FFM = linear combination(height+weight+Z); B: FFM = linear combination(height(2)/Z); C: FFM = linear combination(height(2)/Z+weight)}. Ethnicity and gender were important predictors of FFM and improved model fit in all equations. The models of best fit were ethnicity and gender specific versions of equation A, followed by equation C; these provided accurate assessments of ethnic differences in FFM and FM. In contrast, the use of generic equations led to underestimation of both the negative South Asian-white European FFM difference and the positive black African-Caribbean-white European FFM difference (by 0.53 kg and by 0.73 kg respectively for equation A). The use of generic equations underestimated the positive South Asian-white European difference in fat mass (FM) and overestimated the positive black African-Caribbean-white European difference in FM (by 4.7% and 10.1% respectively for equation A). Consistent results were observed when the equations were applied to a large external data set. Ethnic- and gender-specific equations for predicting FFM from BIA provide better estimates of ethnic differences in FFM and FM in children, while generic equations can misrepresent these ethnic differences.
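    Fitting equation A and its ethnicity- and gender-specific counterpart can be sketched as below. This is a simplification of the study's approach: the column names are hypothetical and ordinary least squares stands in for the multilevel (school-level) models actually used.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per child, with fat free mass (ffm, kg) from
    # deuterium dilution, height (cm), weight (kg), impedance Z (ohm), plus group labels.
    df = pd.read_csv("bia_children.csv")

    # Generic equation A: FFM as a linear combination of height, weight and Z.
    generic = smf.ols("ffm ~ height + weight + Z", data=df).fit()

    # Ethnicity- and gender-specific version: the intercept and all coefficients are
    # allowed to differ by group (equivalent to fitting each group separately).
    specific = smf.ols(
        "ffm ~ (height + weight + Z) * C(ethnic_group) * C(sex)", data=df
    ).fit()

    print(generic.rsquared_adj, specific.rsquared_adj)
    ```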

  10. Are Ethnic and Gender Specific Equations Needed to Derive Fat Free Mass from Bioelectrical Impedance in Children of South Asian, Black African-Caribbean and White European Origin? Results of the Assessment of Body Composition in Children Study

    PubMed Central

    Nightingale, Claire M.; Rudnicka, Alicja R.; Owen, Christopher G.; Donin, Angela S.; Newton, Sian L.; Furness, Cheryl A.; Howard, Emma L.; Gillings, Rachel D.; Wells, Jonathan C. K.; Cook, Derek G.; Whincup, Peter H.

    2013-01-01

    Background Bioelectrical impedance analysis (BIA) is a potentially valuable method for assessing lean mass and body fat levels in children from different ethnic groups. We examined the need for ethnic- and gender-specific equations for estimating fat free mass (FFM) from BIA in children from different ethnic groups and examined their effects on the assessment of ethnic differences in body fat. Methods Cross-sectional study of children aged 8–10 years in London Primary schools including 325 South Asians, 250 black African-Caribbeans and 289 white Europeans with measurements of height, weight and arm-leg impedance (Z; Bodystat 1500). Total body water was estimated from deuterium dilution and converted to FFM. Multilevel models were used to derive three types of equation {A: FFM = linear combination(height+weight+Z); B: FFM = linear combination(height2/Z); C: FFM = linear combination(height2/Z+weight)}. Results Ethnicity and gender were important predictors of FFM and improved model fit in all equations. The models of best fit were ethnicity and gender specific versions of equation A, followed by equation C; these provided accurate assessments of ethnic differences in FFM and FM. In contrast, the use of generic equations led to underestimation of both the negative South Asian-white European FFM difference and the positive black African-Caribbean-white European FFM difference (by 0.53 kg and by 0.73 kg respectively for equation A). The use of generic equations underestimated the positive South Asian-white European difference in fat mass (FM) and overestimated the positive black African-Caribbean-white European difference in FM (by 4.7% and 10.1% respectively for equation A). Consistent results were observed when the equations were applied to a large external data set. Conclusions Ethnic- and gender-specific equations for predicting FFM from BIA provide better estimates of ethnic differences in FFM and FM in children, while generic equations can misrepresent these ethnic differences. PMID:24204625

  11. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    PubMed

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.

  12. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written under the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.

  13. Application of physical parameter identification to finite-element models

    NASA Technical Reports Server (NTRS)

    Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.

    1987-01-01

    The time domain parameter identification method described previously is applied to TRW's Large Space Structure Truss Experiment. Only control sensors and actuators are employed in the test procedure. The fit of the linear structural model to the test data is improved by more than an order of magnitude using a physically reasonable parameter set. The electro-magnetic control actuators are found to contribute significant damping due to a combination of eddy current and back electro-motive force (EMF) effects. Uncertainties in both estimated physical parameters and modal behavior variables are given.

  14. Fitting dynamic models to the Geosat sea level observations in the tropical Pacific Ocean. I - A free wave model

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Vazquez, Jorge; Perigaud, Claire

    1991-01-01

    Free, equatorially trapped sinusoidal wave solutions to a linear model on an equatorial beta plane are used to fit the Geosat altimetric sea level observations in the tropical Pacific Ocean. The Kalman filter technique is used to estimate the wave amplitude and phase from the data. The estimation is performed at each time step by combining the model forecast with the observation in an optimal fashion utilizing the respective error covariances. The model error covariance is determined such that the performance of the model forecast is optimized. It is found that the dominant observed features can be described qualitatively by basin-scale Kelvin waves and the first meridional-mode Rossby waves. Quantitatively, however, only 23 percent of the signal variance can be accounted for by this simple model.

  15. Cardiorespiratory Fitness and Muscular Strength as Mediators of the Influence of Fatness on Academic Achievement.

    PubMed

    García-Hermoso, Antonio; Esteban-Cornejo, Irene; Olloquequi, Jordi; Ramírez-Vélez, Robinson

    2017-08-01

    To examine the combined association of fatness and physical fitness components (cardiorespiratory fitness [CRF] and muscular strength) with academic achievement, and to determine whether CRF and muscular strength are mediators of the association between fatness and academic achievement in a nationally representative sample of adolescents from Chile. Data were obtained for a sample of 36 870 adolescents (mean age, 13.8 years; 55.2% boys) from the Chilean System for the Assessment of Educational Quality test for eighth grade in 2011, 2013, and 2014. Physical fitness tests included CRF (20-m shuttle run) and muscular strength (standing long jump). Weight, height, and waist circumference were assessed, and body mass index and waist circumference-to-height ratio were calculated. Academic achievement in language and mathematics was assessed using standardized tests. The PROCESS script developed by Hayes was used for mediation analysis. Compared with unfit and high-fatness adolescents, fit and low-fatness adolescents had significantly higher odds for attaining high academic achievement in language and mathematics. However, in language, unfit and low-fatness adolescents did not have significantly higher odds for obtaining high academic achievement. Those with high fatness had higher academic achievement (both language and mathematics) if they were fit. Linear regression models suggest a partial or full mediation of physical fitness in the association of fatness variables with academic achievement. CRF and muscular strength may attenuate or even counteract the adverse influence of fatness on academic achievement in adolescents. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO

    PubMed Central

    Zhu, Zhichuan; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results therein. When based on grid search, however, the MKL-SVM algorithm requires a long parameter-optimization time, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, so as to achieve rapid global optimization of the parameters. In order to obtain the global optimal solution, different inertia weights, such as constant, linear, and nonlinear inertia weights, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of that of the MKL-SVM grid search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the averages of 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the nonlinear inertia weight gives a shorter parameter-optimization time and an average fitness value after convergence that is closer to the optimal fitness value, outperforming the linear inertia weight. In addition, a better nonlinear inertia weight is verified. PMID:29853983

  17. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO.

    PubMed

    Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results therein. When based on grid search, however, the MKL-SVM algorithm requires a long parameter-optimization time, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, so as to achieve rapid global optimization of the parameters. In order to obtain the global optimal solution, different inertia weights, such as constant, linear, and nonlinear inertia weights, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of that of the MKL-SVM grid search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the averages of 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the nonlinear inertia weight gives a shorter parameter-optimization time and an average fitness value after convergence that is closer to the optimal fitness value, outperforming the linear inertia weight. In addition, a better nonlinear inertia weight is verified.

  18. The identification of high potential archers based on fitness and motor ability variables: A Support Vector Machine approach.

    PubMed

    Taha, Zahari; Musa, Rabiu Muazu; P P Abdul Majeed, Anwar; Alim, Muhammad Muaz; Abdullah, Mohamad Razali

    2018-02-01

    Support Vector Machine (SVM) has been shown to be an effective learning algorithm for classification and prediction. However, the application of SVM for prediction and classification in specific sport has rarely been used to quantify/discriminate low and high-performance athletes. The present study classified and predicted high and low-potential archers from a set of fitness and motor ability variables trained on different SVMs kernel algorithms. 50 youth archers with the mean age and standard deviation of 17.0 ± 0.6 years drawn from various archery programmes completed a six arrows shooting score test. Standard fitness and ability measurements namely hand grip, vertical jump, standing broad jump, static balance, upper muscle strength and the core muscle strength were also recorded. Hierarchical agglomerative cluster analysis (HACA) was used to cluster the archers based on the performance variables tested. SVM models with linear, quadratic, cubic, fine RBF, medium RBF, as well as the coarse RBF kernel functions, were trained based on the measured performance variables. The HACA clustered the archers into high-potential archers (HPA) and low-potential archers (LPA), respectively. The linear, quadratic, cubic, as well as the medium RBF kernel functions models, demonstrated reasonably excellent classification accuracy of 97.5% and 2.5% error rate for the prediction of the HPA and the LPA. The findings of this investigation can be valuable to coaches and sports managers to recognise high potential athletes from a combination of the selected few measured fitness and motor ability performance variables examined which would consequently save cost, time and effort during talent identification programme. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Individual differences in long-range time representation.

    PubMed

    Agostino, Camila S; Caetano, Marcelo S; Balci, Fuat; Claessens, Peter M E; Zana, Yossi

    2017-04-01

    On the basis of experimental data, long-range time representation has been proposed to follow a highly compressed power function, which has been hypothesized to explain the time inconsistency found in financial discount rate preferences. The aim of this study was to evaluate how well linear and power function models explain empirical data from individual participants tested in different procedural settings. The line paradigm was used in five different procedural variations with 35 adult participants. Data aggregated over the participants showed that fitted linear functions explained more than 98% of the variance in all procedures. A linear regression fit also outperformed a power model fit for the aggregated data. An individual-participant-based analysis showed better fits of a linear model to the data of 14 participants; better fits of a power function with an exponent β > 1 to the data of 12 participants; and better fits of a power function with β < 1 to the data of the remaining nine participants. Of the 35 volunteers, the null hypothesis β = 1 was rejected for 20. The dispersion of the individual β values was approximated well by a normal distribution. These results suggest that, on average, humans perceive long-range time intervals not in a highly compressed, biased manner, but rather in a linear pattern. However, individuals differ considerably in their subjective time scales. This contribution sheds new light on the average and individual psychophysical functions of long-range time representation, and suggests that any attribution of deviation from exponential discount rates in intertemporal choice to the compressed nature of subjective time must entail the characterization of subjective time on an individual-participant basis.

  20. Modeling workplace bullying using catastrophe theory.

    PubMed

    Escartin, J; Ceja, L; Navarro, J; Zapf, D

    2013-10-01

    Workplace bullying is defined as negative behaviors directed at organizational members or their work context that occur regularly and repeatedly over a period of time. Employees' perceptions of psychosocial safety climate, workplace bullying victimization, and workplace bullying perpetration were assessed within a sample of nearly 5,000 workers. Linear and nonlinear approaches were applied in order to model both continuous and sudden changes in workplace bullying. More specifically, the present study examines whether a nonlinear dynamical systems model (i.e., a cusp catastrophe model) is superior to the linear combination of variables for predicting the effect of psychosocial safety climate and workplace bullying victimization on workplace bullying perpetration. According to the AICc and BIC indices, the linear regression model fits the data better than the cusp catastrophe model. The study concludes that some phenomena, especially unhealthy behaviors at work (like workplace bullying), may be better studied using linear approaches as opposed to nonlinear dynamical systems models. This can be explained through the healthy variability hypothesis, which argues that positive organizational behavior is likely to present nonlinear behavior, while a decrease in such variability may indicate the occurrence of negative behaviors at work.

  1. Reference spectra of important adsorbed organic and inorganic phosphate binding forms for soil P speciation using synchrotron-based K-edge XANES spectroscopy.

    PubMed

    Prietzel, Jörg; Harrington, Gertraud; Häusler, Werner; Heister, Katja; Werner, Florian; Klysubun, Wantana

    2016-03-01

    Direct speciation of soil phosphorus (P) by linear combination fitting (LCF) of P K-edge XANES spectra requires a standard set of spectra representing all major P species supposed to be present in the investigated soil. Here, available spectra of free- and cation-bound inositol hexakisphosphate (IHP), representing organic P, and of Fe, Al and Ca phosphate minerals are supplemented with spectra of adsorbed P binding forms. First, various soil constituents assumed to be potentially relevant for P sorption were compared with respect to their retention efficiency for orthophosphate and IHP at P levels typical for soils. Then, P K-edge XANES spectra for orthophosphate and IHP retained by the most relevant constituents were acquired. The spectra were compared with each other as well as with spectra of Ca, Al or Fe orthophosphate and IHP precipitates. Orthophosphate and IHP were retained particularly efficiently by ferrihydrite, boehmite, Al-saturated montmorillonite and Al-saturated soil organic matter (SOM), but far less efficiently by hematite, Ca-saturated montmorillonite and Ca-saturated SOM. P retention by dolomite was negligible. Calcite retained a large portion of the applied IHP, but no orthophosphate. The respective P K-edge XANES spectra of orthophosphate and IHP adsorbed to ferrihydrite, boehmite, Al-saturated montmorillonite and Al-saturated SOM differ from each other. They also are different from the spectra of amorphous FePO4, amorphous or crystalline AlPO4, Ca phosphates and free IHP. Inclusion of reference spectra of orthophosphate as well as IHP adsorbed to P-retaining soil minerals in addition to spectra of free or cation-bound IHP, AlPO4, FePO4 and Ca phosphate minerals in linear combination fitting exercises results in improved fit quality and a more realistic soil P speciation. A standard set of P K-edge XANES spectra of the most relevant adsorbed P binding forms in soils is presented.
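    Linear combination fitting against such a standard set reduces, in its simplest form, to a constrained least-squares problem on a common energy grid. The sketch below uses non-negative least squares with hypothetical file names and renormalises the weights to species fractions; real LCF software adds energy alignment and component-selection steps not shown here.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Sample and reference spectra, interpolated to a common energy grid and
    # normalised beforehand (hypothetical files; one column per reference species).
    sample = np.loadtxt("soil_sample_xanes.txt")        # shape (n_energies,)
    refs = np.loadtxt("reference_spectra.txt")          # shape (n_energies, n_refs)

    weights, _ = nnls(refs, sample)                     # non-negative least squares
    fractions = weights / weights.sum()                 # species proportions
    residual = sample - refs @ weights
    r_factor = np.sum(residual**2) / np.sum(sample**2)  # common goodness-of-fit metric
    ```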

  2. Photometric Follow-up Transit (Primary Eclipse) Observations of WASP-43 b and TrES-3b and A Study on Their Transit Timing Variations

    NASA Astrophysics Data System (ADS)

    Zhao, Sun; Jiang-hui, Ji; Yao, Dong

    2018-01-01

    Two photometric follow-up transit (primary eclipse) observations of WASP-43 b and four observations of TrES-3 b are performed using the Xuyi Near-Earth Object Survey Telescope. After differential photometry and light curve analysis, the physical parameters of the two systems are obtained and are in good agreement with the literature. Combined with transit data from the literature, the residuals (O - C) of the transit observations of both systems are fitted with linear and quadratic functions. From the linear fitting, the periods and transit timing variations (TTVs) of the planets are obtained, and no obvious periodic TTV signal is found in either system. The maximum mass of a perturbing planet located at the 1:2 mean motion resonance (MMR) for WASP-43 b and TrES-3 b is estimated to be 1.826 and 1.504 Earth masses, respectively. The quadratic fitting confirms that WASP-43 b may have a long-term TTV, which implies an orbital decay. The decay rate is Ṗ = (-0.005248 ± 0.001714) s·yr⁻¹, which is compared with previous results. Based on this, the lower limit of the stellar tidal quality parameter of WASP-43 is calculated to be Q′* ≥ 1.5 × 10⁵, and the remaining lifetimes of the planets are presented for different Q′* values of the two systems.
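    The linear and quadratic ephemeris fits described above can be sketched with simple polynomial fitting; the arrays of epochs and mid-transit times below are hypothetical placeholders, and the conversion of the quadratic term to a decay rate in seconds per year follows the standard relation Ṗ ≈ (dP/dE)/P.

    ```python
    import numpy as np

    # Hypothetical arrays: transit epoch numbers and observed mid-transit times (BJD).
    epoch = np.loadtxt("epochs.txt")
    t_obs = np.loadtxt("mid_transit_times.txt")

    # Linear ephemeris T(E) = T0 + P*E: refined period and O-C residuals.
    lin = np.polyfit(epoch, t_obs, 1)
    o_minus_c = t_obs - np.polyval(lin, epoch)

    # Quadratic ephemeris T(E) = T0 + P*E + 0.5*(dP/dE)*E**2: orbital decay term.
    quad = np.polyfit(epoch, t_obs, 2)
    dP_dE = 2.0 * quad[0]                                # days per epoch
    P = quad[1]                                          # days
    decay_rate_s_per_yr = dP_dE / P * 365.25 * 86400.0   # Pdot in seconds per year
    ```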

  3. Landsat test of diffuse reflectance models for aquatic suspended solids measurement

    NASA Technical Reports Server (NTRS)

    Munday, J. C., Jr.; Alfoldi, T. T.

    1979-01-01

    Landsat radiance data were used to test mathematical models relating diffuse reflectance to aquatic suspended solids concentration. Digital CCT data for Landsat passes over the Bay of Fundy, Nova Scotia were analyzed on a General Electric Co. Image 100 multispectral analysis system. Three data sets were studied separately and together in all combinations with and without solar angle correction. Statistical analysis and chromaticity analysis show that a nonlinear relationship between Landsat radiance and suspended solids concentration is better at curve-fitting than a linear relationship. In particular, the quasi-single-scattering diffuse reflectance model developed by Gordon and coworkers is corroborated. The Gordon model applied to 33 points of MSS 5 data combined from three dates produced r = 0.98.

  4. [Equilibrium sorption isotherm for Cu2+ onto Hydrilla verticillata Royle and Myriophyllum spicatum].

    PubMed

    Yan, Chang-zhou; Zeng, A-yan; Jin, Xiang-can; Wang, Sheng-rui; Xu, Qiu-jin; Zhao, Jing-zhu

    2006-06-01

    Equilibrium sorption isotherms for Cu2+ onto Hydrilla verticillata Royle and Myriophyllum spicatum were studied. Both linear and non-linear fitting were applied to describe the sorption isotherms, and their applicability was analyzed and compared. The results were: (1) When an equilibrium sorption model is used to quantify and compare the performance of different biosorbents, the applicability of the fitted equations cannot be judged by R2 and chi2 alone. Both linear and non-linear fitting should be applied to the different isotherm equations in order to obtain realistic and credible fitting results, so that the equation that best accords with the experimental data can be selected; (2) In this experiment, the Langmuir model is more suitable for describing the isotherm of Cu2+ biosorption by H. verticillata and M. spicatum, whereas the Freundlich model, especially its linear form, shows a larger difference between the experimental data and the calculated values; (3) The crude cellulose content of the dry matter is one of the main factors affecting the biosorption capacity of a submerged aquatic plant, and the -OH and -CONH2 groups of the cell-wall polysaccharides may be the active centers of biosorption; (4) According to the coefficient qm of the linear form of the Langmuir model, the maximum sorption capacity for Cu2+ was 21.55 mg/g for H. verticillata and 10.80 mg/g for M. spicatum; the corresponding maximum specific surface areas for binding Cu2+ were 3.23 m2/g and 1.62 m2/g, respectively.

  5. Did ASAS-SN Kill the Supermassive Black Hole Binary Candidate PG1302-102?

    NASA Astrophysics Data System (ADS)

    Liu, Tingting; Gezari, Suvi; Miller, M. Coleman

    2018-05-01

    Graham et al. reported a periodically varying quasar and supermassive black hole binary candidate, PG1302-102 (hereafter PG1302), which was discovered in the Catalina Real-time Transient Survey (CRTS). Its combined Lincoln Near-Earth Asteroid Research (LINEAR) and CRTS optical light curve is well fitted to a sinusoid of an observed period of ≈1884 days and well modeled by the relativistic Doppler boosting of the secondary mini-disk. However, the LINEAR+CRTS light curve from MJD ≈52,700 to MJD ≈56,400 covers only ∼2 cycles of periodic variation, which is a short baseline that can be highly susceptible to normal, stochastic quasar variability. In this Letter, we present a reanalysis of PG1302 using the latest light curve from the All-sky Automated Survey for Supernovae (ASAS-SN), which extends the observational baseline to the present day (MJD ≈58,200), and adopting a maximum likelihood method that searches for a periodic component in addition to stochastic quasar variability. When the ASAS-SN data are combined with the previous LINEAR+CRTS data, the evidence for periodicity decreases. For genuine periodicity one would expect that additional data would strengthen the evidence, so the decrease in significance may be an indication that the binary model is disfavored.

  6. HOLEGAGE 1.0 - Strain-Gauge Drilling Analysis Program

    NASA Technical Reports Server (NTRS)

    Hampton, Roy V.

    1992-01-01

    Interior stresses inferred from changes in surface strains as hole is drilled. Computes stresses using strain data from each drilled-hole depth layer. Planar stresses computed in three ways: least-squares fit for linear variation with depth, integral method to give incremental stress data for each layer, and/or linear fit to integral data. Written in FORTRAN 77.

  7. Carbon dioxide stripping in aquaculture -- part III: model verification

    USGS Publications Warehouse

    Colt, John; Watten, Barnaby; Pfeiffer, Tim

    2012-01-01

    Based on conventional mass transfer models developed for oxygen, the non-linear ASCE method, the 2-point method, and the one-parameter linear-regression method were evaluated for carbon dioxide stripping data. For values of KLaCO2 < approximately 1.5/h, the 2-point and ASCE methods fit the experimental data well, but the fit breaks down at higher values of KLaCO2. How to correct KLaCO2 for gas phase enrichment remains to be determined. The one-parameter linear regression model was used to vary C*CO2 over the test, but it did not result in a better fit to the experimental data when compared to the ASCE or fixed C*CO2 assumptions.

  8. Combining Mixture Components for Clustering*

    PubMed Central

    Baudry, Jean-Patrick; Raftery, Adrian E.; Celeux, Gilles; Lo, Kenneth; Gottardo, Raphaël

    2010-01-01

    Model-based clustering consists of fitting a mixture model to data and identifying each cluster with one of its components. Multivariate normal distributions are typically used. The number of clusters is usually determined from the data, often using BIC. In practice, however, individual clusters can be poorly fitted by Gaussian distributions, and in that case model-based clustering tends to represent one non-Gaussian cluster by a mixture of two or more Gaussian distributions. If the number of mixture components is interpreted as the number of clusters, this can lead to overestimation of the number of clusters. This is because BIC selects the number of mixture components needed to provide a good approximation to the density, rather than the number of clusters as such. We propose first selecting the total number of Gaussian mixture components, K, using BIC and then combining them hierarchically according to an entropy criterion. This yields a unique soft clustering for each number of clusters less than or equal to K. These clusterings can be compared on substantive grounds, and we also describe an automatic way of selecting the number of clusters via a piecewise linear regression fit to the rescaled entropy plot. We illustrate the method with simulated data and a flow cytometry dataset. Supplemental Materials are available on the journal Web site and described at the end of the paper. PMID:20953302
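    A minimal sketch of the first stage (selecting the total number of Gaussian components K by BIC) is given below with a hypothetical data file; the subsequent hierarchical, entropy-based merging of components into clusters is not reproduced, only the soft-assignment entropy that drives it is computed.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    X = np.loadtxt("flow_cytometry.txt")   # hypothetical (n_samples, n_features) array

    # Stage 1: choose the total number of Gaussian mixture components K by BIC.
    fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 11)}
    K = min(fits, key=lambda k: fits[k].bic(X))
    gmm = fits[K]

    # Soft assignment probabilities; their entropy is the criterion that the
    # hierarchical merging stage (not shown) uses when combining components.
    resp = gmm.predict_proba(X)
    entropy = -np.sum(resp * np.log(np.clip(resp, 1e-12, None)))
    ```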

  9. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    PubMed Central

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. Multiple linear regression technique is used with a procedure of variable selection for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in a reasonable agreement with observation values. PMID:23226984
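    The regression described above can be sketched as follows. The column names, power exponent and exponential rate constant are illustrative assumptions, not the fitted values from the paper; they merely show how the curvilinear temperature and humidity relations can be linearised before a multiple linear regression.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical daily records: pan evaporation E (mm), temperature T (deg C),
    # relative humidity RH (%), and wind speed W (km/h).
    df = pd.read_csv("kuwait_daily_met.csv")

    # Linearise the curvilinear relations before the multiple linear regression:
    # a power transform for temperature and an exponential transform for humidity
    # (the exponent and rate constant here are illustrative, not the fitted ones).
    df["T_pow"] = df["T"] ** 1.5
    df["RH_exp"] = np.exp(-0.02 * df["RH"])

    model = smf.ols("E ~ T_pow + RH_exp + W", data=df).fit()
    print(model.summary())
    ```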

  10. The effect of combining two echo times in automatic brain tumor classification by MRS.

    PubMed

    García-Gómez, Juan M; Tortajada, Salvador; Vidal, César; Julià-Sapé, Margarida; Luts, Jan; Moreno-Torres, Angel; Van Huffel, Sabine; Arús, Carles; Robles, Montserrat

    2008-11-01

    (1)H MRS is becoming an accurate, non-invasive technique for initial examination of brain masses. We investigated whether the combination of single-voxel (1)H MRS at 1.5 T at two different echo times (TEs), a short TE (PRESS or STEAM, 20-32 ms) and a long TE (PRESS, 135-136 ms), improves the classification of brain tumors over using only one TE. A clinically validated dataset of 50 low-grade meningiomas, 105 aggressive tumors (glioblastoma and metastasis), and 30 low-grade glial tumors (astrocytomas grade II, oligodendrogliomas and oligoastrocytomas) was used to fit predictive models based on the combination of features from short-TE and long-TE spectra. A new approach that combines the two spectra consecutively was used to produce a single data vector from which relevant features of the two TE spectra could be extracted by means of three algorithms: stepwise selection, reliefF, and principal components analysis. Least squares support vector machines and linear discriminant analysis were applied to fit the pairwise and multiclass classifiers, respectively. Significant differences in performance were found when short-TE, long-TE, or both spectra combined were used as input. In our dataset, to discriminate meningiomas, the combination of the two TE acquisitions produced optimal performance. To discriminate aggressive tumors from low-grade glial tumors, the use of the short-TE acquisition alone was preferable. The classifier development strategy used here lends itself to automated learning and test performance processes, which may be of use for future web-based multicentric classifier development studies. Copyright (c) 2008 John Wiley & Sons, Ltd.

  11. Pseudo-second order models for the adsorption of safranin onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth

    2007-04-02

    Kinetic experiments were carried out for the sorption of safranin onto activated carbon particles. The kinetic data were fitted to the pseudo-second order models of Ho, of Sobkowsk and Czerwinski, of Blanchard et al., and of Ritchie by linear and non-linear regression methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski model and Ritchie's pseudo-second order model are the same. Non-linear regression analysis showed that Blanchard et al. and Ho had similar ideas on the pseudo-second order model but with different assumptions. The best fit of the experimental data to Ho's pseudo-second order expression by both linear and non-linear regression showed that Ho's pseudo-second order model is a better kinetic expression than the other pseudo-second order expressions.

  12. Probabilistic estimation of splitting coefficients of normal modes of the Earth, and their uncertainties, using an autoregressive technique

    NASA Astrophysics Data System (ADS)

    Pachhai, S.; Masters, G.; Laske, G.

    2017-12-01

    Earth's normal-mode spectra are crucial to studying the long wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients which requires that the earthquake source is known. However, it is challenging to know the source details, particularly for big events as used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle and core sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core sensitive mode (13S2). This approach explores the parameter space efficiently without any need of regularization and finds the structure coefficients which best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 and more recent. This approach combines the data (through likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic) required to explain the data.

  13. Meta-regression analysis of the effect of trans fatty acids on low-density lipoprotein cholesterol.

    PubMed

    Allen, Bruce C; Vincent, Melissa J; Liska, DeAnn; Haber, Lynne T

    2016-12-01

    We conducted a meta-regression of controlled clinical trial data to investigate quantitatively the relationship between dietary intake of industrial trans fatty acids (iTFA) and increased low-density lipoprotein cholesterol (LDL-C). Previous regression analyses included insufficient data to determine the nature of the dose response in the low-dose region and have nonetheless assumed a linear relationship between iTFA intake and LDL-C levels. This work contributes to the previous work by 1) including additional studies examining low-dose intake (identified using an evidence mapping procedure); 2) investigating a range of curve shapes, including both linear and nonlinear models; and 3) using Bayesian meta-regression to combine results across trials. We found that, contrary to previous assumptions, the linear model does not acceptably fit the data, while the nonlinear, S-shaped Hill model fits the data well. Based on a conservative estimate of the degree of intra-individual variability in LDL-C (0.1 mmoL/L), as an estimate of a change in LDL-C that is not adverse, a change in iTFA intake of 2.2% of energy intake (%en) (corresponding to a total iTFA intake of 2.2-2.9%en) does not cause adverse effects on LDL-C. The iTFA intake associated with this change in LDL-C is substantially higher than the average iTFA intake (0.5%en). Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
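    A simple Hill-model fit of study-level data can be sketched as below; the arrays are hypothetical placeholders and ordinary non-linear least squares stands in for the Bayesian meta-regression actually used (which also weights and pools results across trials), so this is only a shape illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(dose, e0, emax, ed50, n):
        """S-shaped Hill model: response rises from e0 towards e0 + emax."""
        return e0 + emax * dose**n / (ed50**n + dose**n)

    # Hypothetical study-level data: iTFA intake (%en) and change in LDL-C (mmol/L).
    itfa = np.loadtxt("itfa_intake.txt")
    dldl = np.loadtxt("ldl_change.txt")

    popt, _ = curve_fit(hill, itfa, dldl, p0=[0.0, 0.5, 3.0, 2.0], maxfev=10000)
    print(hill(2.2, *popt))   # predicted LDL-C change at an intake of 2.2 %en
    ```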

  14. Variation in the post-mating fitness landscape in fruit flies.

    PubMed

    Fricke, C; Chapman, T

    2017-07-01

    Sperm competition is pervasive and fundamental to determining a male's overall fitness. Sperm traits and seminal fluid proteins (Sfps) are key factors. However, studies of sperm competition may often exclude females that fail to remate during a defined period. Hence, the resulting data sets contain fewer data from the potentially fittest males that have most success in preventing female remating. It is also important to consider a male's reproductive success before entering sperm competition, which is a major contributor to fitness. The exclusion of these data can both hinder our understanding of the complete fitness landscapes of competing males and lessen our ability to assess the contribution of different determinants of reproductive success to male fitness. We addressed this here, using the Drosophila melanogaster model system, by (i) capturing a comprehensive range of intermating intervals that define the fitness of interacting wild-type males and (ii) analysing outcomes of sperm competition using selection analyses. We conducted additional tests using males lacking the sex peptide (SP) ejaculate component vs. genetically matched (SP + ) controls. This allowed us to assess the comprehensive fitness effects of this important Sfp on sperm competition. The results showed a signature of positive, linear selection in wild-type and SP + control males on the length of the intermating interval and on male sperm competition defence. However, the fitness surface for males lacking SP was distinct, with local fitness peaks depending on contrasting combinations of remating intervals and offspring numbers. The results suggest that there are alternative routes to success in sperm competition and provide an explanation for the maintenance of variation in sperm competition traits. © 2017 The Authors. Journal of Evolutionary Biology published by John Wiley & Sons Ltd on behalf of European Society for Evolutionary Biology.

  15. Regression-Based Norms for a Bi-factor Model for Scoring the Brief Test of Adult Cognition by Telephone (BTACT).

    PubMed

    Gurnani, Ashita S; John, Samantha E; Gavett, Brandon E

    2015-05-01

    The current study developed regression-based normative adjustments for a bi-factor model of the Brief Test of Adult Cognition by Telephone (BTACT). Archival data from the Midlife Development in the United States-II Cognitive Project were used to develop eight separate linear regression models that predicted bi-factor BTACT scores, accounting for age, education, gender, and occupation, alone and in various combinations. All regression models provided statistically significant fit to the data. A three-predictor regression model fit best and accounted for 32.8% of the variance in the global bi-factor BTACT score. The fit of the regression models was not improved by gender. Eight different regression models are presented to allow the user flexibility in applying demographic corrections to the bi-factor BTACT scores. Occupation corrections, while not widely used, may provide useful demographic adjustments for adult populations or for those individuals who have attained an occupational status not commensurate with expected educational attainment. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Silicon Drift Detector response function for PIXE spectra fitting

    NASA Astrophysics Data System (ADS)

    Calzolai, G.; Tapinassi, S.; Chiari, M.; Giannoni, M.; Nava, S.; Pazzi, G.; Lucarelli, F.

    2018-02-01

    The correct determination of the X-ray peak areas in PIXE spectra by fitting with a computer program depends crucially on accurate parameterization of the detector peak response function. In the Guelph PIXE software package, GUPIXWin, one of the most widely used PIXE spectrum analysis codes, the response of a semiconductor detector to monochromatic X-ray radiation is described by a linear combination of several analytical functions: a Gaussian profile for the X-ray line itself, and additional tail contributions (exponential tails and step functions) on the low-energy side of the X-ray line to describe incomplete charge collection effects. The literature on the spectral response of silicon X-ray detectors for PIXE applications is rather scarce; in particular, data for Silicon Drift Detectors (SDDs) and for a large range of X-ray energies are missing. Using a set of analytical functions, the SDD response functions were satisfactorily reproduced for the X-ray energy range 1-15 keV. The behaviour of the parameters involved in the SDD tailing functions with X-ray energy is described by simple polynomial functions, which permit an easy implementation in PIXE spectra fitting codes.
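
    The line-shape parameterisation described here (a Gaussian plus low-energy tail and step contributions) can be written down compactly. The sketch below uses a Hypermet-style exponential tail and an erfc step with invented parameter values; the exact functional forms and parameters used in GUPIXWin or by the authors may differ.

    ```python
    import numpy as np
    from scipy.special import erfc

    def sdd_response(E, E0, area, sigma, tail_frac, tail_slope, step_frac):
        """Illustrative monochromatic-line response: Gaussian + exponential low-energy tail + step."""
        gauss = area * np.exp(-0.5 * ((E - E0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        # Exponential tail convolved with the Gaussian resolution (Hypermet-like form)
        tail = (area * tail_frac / (2 * tail_slope)
                * np.exp((E - E0) / tail_slope + sigma**2 / (2 * tail_slope**2))
                * erfc((E - E0) / (np.sqrt(2) * sigma) + sigma / (np.sqrt(2) * tail_slope)))
        # Flat shelf below the line energy, smoothed by the detector resolution
        step = area * step_frac * 0.5 * erfc((E - E0) / (np.sqrt(2) * sigma))
        return gauss + tail + step

    E = np.linspace(4.0, 7.0, 600)  # keV
    model = sdd_response(E, E0=5.9, area=1e4, sigma=0.06,
                         tail_frac=0.02, tail_slope=0.3, step_frac=2e-4)
    ```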

  17. The Impact of Model Misspecification on Parameter Estimation and Item-Fit Assessment in Log-Linear Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver

    2012-01-01

    Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…

  18. A dual estimate method for aeromagnetic compensation

    NASA Astrophysics Data System (ADS)

    Ma, Ming; Zhou, Zhijian; Cheng, Defu

    2017-11-01

    Scalar aeromagnetic surveys have played a vital role in prospecting. However, before analysis of the surveys’ aeromagnetic data is possible, the aircraft’s magnetic interference should be removed. The extensively adopted linear model for aeromagnetic compensation is computationally efficient but faces an underfitting problem. On the other hand, the neural model proposed by Williams is more powerful at fitting but always suffers from an overfitting problem. This paper starts off with an analysis of these two models and then proposes a dual estimate method to combine them together to improve accuracy. This method is based on an unscented Kalman filter, but a gradient descent method is implemented over the iteration so that the parameters of the linear model are adjustable during flight. The noise caused by the neural model’s overfitting problem is suppressed by introducing an observation noise.

  19. VRF ("Visual RobFit") — nuclear spectral analysis with non-linear full-spectrum nuclide shape fitting

    NASA Astrophysics Data System (ADS)

    Lasche, George; Coldwell, Robert; Metzger, Robert

    2017-09-01

    A new application (known as "VRF", or "Visual RobFit") for analysis of high-resolution gamma-ray spectra has been developed using non-linear fitting techniques to fit full-spectrum nuclide shapes. In contrast to conventional methods based on the results of an initial peak-search, the VRF analysis method forms, at each of many automated iterations, a spectrum-wide shape for each nuclide and, also at each iteration, it adjusts the activities of each nuclide, as well as user-enabled parameters of energy calibration, attenuation by up to three intervening or self-absorbing materials, peak width as a function of energy, full-energy peak efficiency, and coincidence summing until no better fit to the data can be obtained. This approach, which employs a new and significantly advanced underlying fitting engine especially adapted to nuclear spectra, allows identification, not otherwise possible, of minor peaks that are masked by larger, overlapping peaks. The application and method are briefly described and two examples are presented.

  20. Under which climate and soil conditions the plant productivity-precipitation relationship is linear or nonlinear?

    PubMed

    Ye, Jian-Sheng; Pei, Jiu-Ying; Fang, Chao

    2018-03-01

    Understanding under which climate and soil conditions the plant productivity-precipitation relationship is linear or nonlinear is useful for accurately predicting the response of ecosystem function to global environmental change. Using long-term (2000-2016) net primary productivity (NPP)-precipitation datasets derived from satellite observations, we identify >5600 pixels in the Northern Hemisphere landmass that fit either linear or nonlinear temporal NPP-precipitation relationships. Differences in climate (precipitation, radiation, ratio of actual to potential evapotranspiration, temperature) and soil factors (nitrogen, phosphorous, organic carbon, field capacity) between the linear and nonlinear types are evaluated. Our analysis shows that both linear and nonlinear types exhibit similar interannual precipitation variabilities and occurrences of extreme precipitation. Permutational multivariate analysis of variance suggests that linear and nonlinear types differ significantly with respect to radiation, the ratio of actual to potential evapotranspiration, and soil factors. The nonlinear type has lower radiation and/or fewer soil nutrients than the linear type, suggesting that the nonlinear type is more strongly limited by resources other than precipitation. This study identifies several factors that limit the response of plant productivity to changes in precipitation and thus cause a nonlinear NPP-precipitation pattern. Precipitation manipulation and modeling experiments should be combined with changes in other climate and soil factors to better predict the response of plant productivity under future climates. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Spatio-temporal modelling of rainfall in the Murray-Darling Basin

    NASA Astrophysics Data System (ADS)

    Nowak, Gen; Welsh, A. H.; O'Neill, T. J.; Feng, Lingbing

    2018-02-01

    The Murray-Darling Basin (MDB) is a large geographical region in southeastern Australia that contains many rivers and creeks, including Australia's three longest rivers, the Murray, the Murrumbidgee and the Darling. Understanding rainfall patterns in the MDB is very important due to the significant impact major events such as droughts and floods have on agricultural and resource productivity. We propose a model for modelling a set of monthly rainfall data obtained from stations in the MDB and for producing predictions in both the spatial and temporal dimensions. The model is a hierarchical spatio-temporal model fitted to geographical data that utilises both deterministic and data-derived components. Specifically, rainfall data at a given location are modelled as a linear combination of these deterministic and data-derived components. A key advantage of the model is that it is fitted in a step-by-step fashion, enabling appropriate empirical choices to be made at each step.

  2. Inversion for the driving forces of plate tectonics

    NASA Technical Reports Server (NTRS)

    Richardson, R. M.

    1983-01-01

    Inverse modeling techniques have been applied to the problem of determining the roles of various forces that may drive and resist plate tectonic motions. Separate linear inverse problems have been solved to find the best fitting pole of rotation for finite element grid point velocities and to find the best combination of force models to fit the observed relative plate velocities for the earth's twelve major plates using the generalized inverse operator. Variance-covariance data on plate motion have also been included. Results emphasize the relative importance of ridge push forces in the driving mechanism. Convergent margin forces are smaller by at least a factor of two, and perhaps by as much as a factor of twenty. Slab pull, apparently, is poorly transmitted to the surface plate as a driving force. Drag forces at the base of the plate are smaller than ridge push forces, although the sign of the force remains in question.

  3. Temperature Effects and Compensation-Control Methods

    PubMed Central

    Xia, Dunzhu; Chen, Shuling; Wang, Shourong; Li, Hongsheng

    2009-01-01

    In the analysis of the effects of temperature on the performance of microgyroscopes, it is found that the resonant frequency of the microgyroscope decreases linearly as the temperature increases, and the quality factor changes drastically at low temperatures. Moreover, the zero bias changes greatly with temperature variations. To reduce the temperature effects on the microgyroscope, temperature compensation-control methods are proposed. In the first place, a BP (Back Propagation) neural network and polynomial fitting are utilized for building the temperature model of the microgyroscope. Considering the simplicity and real-time requirements, piecewise polynomial fitting is applied in the temperature compensation system. Then, an integral-separated PID (Proportion Integration Differentiation) control algorithm is adopted in the temperature control system, which can stabilize the temperature inside the microgyroscope so that it operates at optimal performance. Experimental results reveal that the combination of microgyroscope temperature compensation and control methods is both realizable and effective in a miniaturized microgyroscope prototype. PMID:22408509
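
    A minimal sketch of the piecewise polynomial compensation idea, assuming hypothetical calibration data for zero bias versus temperature and an arbitrary breakpoint; it does not reproduce the authors' BP-network model or their PID controller.

    ```python
    import numpy as np

    # Hypothetical calibration data: temperature (deg C) vs. gyro zero bias (deg/s)
    rng = np.random.default_rng(0)
    temp = np.linspace(-20, 60, 17)
    bias = 0.02 * temp - 1e-4 * temp**2 + rng.normal(0, 0.01, temp.size)

    # Piecewise polynomial temperature model: separate low-order fits below/above a breakpoint
    breakpoint = 10.0
    low, high = temp <= breakpoint, temp > breakpoint
    coef_low = np.polyfit(temp[low], bias[low], 2)
    coef_high = np.polyfit(temp[high], bias[high], 2)

    def compensate(raw_rate, t):
        """Subtract the temperature-predicted zero bias from a raw gyro reading."""
        coef = coef_low if t <= breakpoint else coef_high
        return raw_rate - np.polyval(coef, t)

    print(compensate(0.15, 25.0))
    ```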

  4. Fast, scalable prediction of deleterious noncoding variants from functional and population genomic data.

    PubMed

    Huang, Yi-Fei; Gulko, Brad; Siepel, Adam

    2017-04-01

    Many genetic variants that influence phenotypes of interest are located outside of protein-coding genes, yet existing methods for identifying such variants have poor predictive power. Here we introduce a new computational method, called LINSIGHT, that substantially improves the prediction of noncoding nucleotide sites at which mutations are likely to have deleterious fitness consequences, and which, therefore, are likely to be phenotypically important. LINSIGHT combines a generalized linear model for functional genomic data with a probabilistic model of molecular evolution. The method is fast and highly scalable, enabling it to exploit the 'big data' available in modern genomics. We show that LINSIGHT outperforms the best available methods in identifying human noncoding variants associated with inherited diseases. In addition, we apply LINSIGHT to an atlas of human enhancers and show that the fitness consequences at enhancers depend on cell type, tissue specificity, and constraints at associated promoters.

  5. Elastic and viscoelastic mechanical properties of brain tissues on the implanting trajectory of sub-thalamic nucleus stimulation.

    PubMed

    Li, Yan; Deng, Jianxin; Zhou, Jun; Li, Xueen

    2016-11-01

    Corresponding to pre-puncture and post-puncture insertion, elastic and viscoelastic mechanical properties of brain tissues on the implanting trajectory of sub-thalamic nucleus stimulation are investigated, respectively. Elastic mechanical properties in pre-puncture are investigated through pre-puncture needle insertion experiments using whole porcine brains. A linear polynomial and a second order polynomial are fitted to the average insertion force in pre-puncture. The Young's modulus in pre-puncture is calculated from the slopes of the two fittings. Viscoelastic mechanical properties of brain tissues in post-puncture insertion are investigated through indentation stress relaxation tests for six regions of interest along a planned trajectory. A linear viscoelastic model with a Prony series approximation is fitted to the average load trace of each region using the Boltzmann hereditary integral. Shear relaxation moduli of each region are calculated using the parameters of the Prony series approximation. The results show that, in pre-puncture insertion, needle force increases almost linearly with needle displacement. Both fitting lines can perfectly fit the average insertion force. The Young's moduli calculated from the slopes of the two fittings can be used with confidence to model the linear and nonlinear instantaneous elastic responses of brain tissue, respectively. In post-puncture insertion, both region and time significantly affect the viscoelastic behaviors. The six tested regions can be classified into three stiffness categories. Shear relaxation moduli decay dramatically in short time scales but equilibrium is never truly achieved. The regional and temporal viscoelastic mechanical properties in post-puncture insertion are valuable for guiding probe insertion into each region on the implanting trajectory.

  6. Chromatic summation and receptive field properties of blue-on and blue-off cells in marmoset lateral geniculate nucleus.

    PubMed

    Eiber, C D; Pietersen, A N J; Zeater, N; Solomon, S G; Martin, P R

    2017-11-22

    The "blue-on" and "blue-off" receptive fields in retina and dorsal lateral geniculate nucleus (LGN) of diurnal primates combine signals from short-wavelength sensitive (S) cone photoreceptors with signals from medium/long wavelength sensitive (ML) photoreceptors. Three questions about this combination remain unresolved. Firstly, is the combination of S and ML signals in these cells linear or non-linear? Secondly, how does the timing of S and ML inputs to these cells influence their responses? Thirdly, is there spatial antagonism within S and ML subunits of the receptive field of these cells? We measured contrast sensitivity and spatial frequency tuning for four types of drifting sine gratings: S cone isolating, ML cone isolating, achromatic (S + ML), and counterphase chromatic (S - ML), in extracellular recordings from LGN of marmoset monkeys. We found that responses to stimuli which modulate both S and ML cones are well predicted by a linear sum of S and ML signals, followed by a saturating contrast-response relation. Differences in sensitivity and timing (i.e. vector combination) between S and ML inputs are needed to explain the amplitude and phase of responses to achromatic (S + ML) and counterphase chromatic (S - ML) stimuli. Best-fit spatial receptive fields for S and/or ML subunits in most cells (>80%) required antagonistic surrounds, usually in the S subunit. The surrounds were however generally weak and had little influence on spatial tuning. The sensitivity and size of S and ML subunits were correlated on a cell-by-cell basis, adding to evidence that blue-on and blue-off receptive fields are specialised to signal chromatic but not spatial contrast. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Understanding the relationship between duration of untreated psychosis and outcomes: A statistical perspective.

    PubMed

    Hannigan, Ailish; Bargary, Norma; Kinsella, Anthony; Clarke, Mary

    2017-06-14

    Although the relationships between duration of untreated psychosis (DUP) and outcomes are often assumed to be linear, few studies have explored the functional form of these relationships. The aim of this study is to demonstrate the potential of recent advances in curve fitting approaches (splines) to explore the form of the relationship between DUP and global assessment of functioning (GAF). Curve fitting approaches were used in models to predict change in GAF at long-term follow-up using DUP for a sample of 83 individuals with schizophrenia. The form of the relationship between DUP and GAF was non-linear. Accounting for non-linearity increased the percentage of variance in GAF explained by the model, resulting in better prediction and understanding of the relationship. The relationship between DUP and outcomes may be complex and model fit may be improved by accounting for the form of the relationship. This should be routinely assessed and new statistical approaches for non-linear relationships exploited, if appropriate. © 2017 John Wiley & Sons Australia, Ltd.

  8. Multipath calibration in GPS pseudorange measurements

    NASA Technical Reports Server (NTRS)

    Kee, Changdon (Inventor); Parkinson, Bradford W. (Inventor)

    1998-01-01

    Novel techniques are disclosed for eliminating multipath errors, including mean bias errors, in pseudorange measurements made by conventional global positioning system receivers. By correlating the multipath signals of different satellites at their cross-over points in the sky, multipath mean bias errors are effectively eliminated. By then taking advantage of the geometrical dependence of multipath, a linear combination of spherical harmonics is fit to the satellite multipath data to create a hemispherical model of the multipath. This calibration model can then be used to compensate for multipath in subsequent measurements and thereby obtain GPS positioning to centimeter accuracy.
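
    As a sketch of the geometric fitting step described in this record, the code below builds a truncated real spherical-harmonic basis over satellite azimuth/elevation and solves for the coefficients by linear least squares; the multipath residuals, the maximum degree, and the use of scipy's sph_harm are illustrative assumptions, not the patented implementation.

    ```python
    import numpy as np
    from scipy.special import sph_harm

    def design_matrix(az, el, lmax=4):
        """Real-valued spherical-harmonic basis evaluated at azimuth/elevation (radians)."""
        theta = az                   # azimuthal angle
        phi = np.pi / 2 - el         # polar angle measured from the zenith
        cols = []
        for l in range(lmax + 1):
            for m in range(0, l + 1):
                ylm = sph_harm(m, l, theta, phi)
                cols.append(ylm.real)
                if m > 0:
                    cols.append(ylm.imag)
        return np.column_stack(cols)

    # Hypothetical multipath residuals (metres) at observed satellite directions
    rng = np.random.default_rng(0)
    az = rng.uniform(0, 2 * np.pi, 500)
    el = rng.uniform(np.deg2rad(10), np.deg2rad(85), 500)
    mp = 0.3 * np.sin(el) * np.cos(2 * az) + rng.normal(0, 0.05, 500)

    A = design_matrix(az, el)
    coeffs, *_ = np.linalg.lstsq(A, mp, rcond=None)
    predicted = A @ coeffs           # hemispherical multipath correction model
    ```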

  9. Improved absorption cross-sections of oxygen in the wavelength region 205-240 nm of the Herzberg continuum

    NASA Technical Reports Server (NTRS)

    Yoshino, K.; Cheung, A. S.-C.; Esmond, J. R.; Parkinson, W. H.; Freeman, D. E.

    1988-01-01

    The laboratory values of the Herzberg continuum absorption cross-section of oxygen at room temperature from Cheung et al. (1986) and Jenouvrier et al. (1986) are compared and analyzed. It is found that there is no discrepancy between the absolute values of these two sets of independent measurements. The values are combined in a linear least-squares fit to obtain improved values of the Herzberg continuum cross-section of oxygen at room temperature throughout the wavelength region 205-240 nm. The results are compared with in situ and other laboratory measurements.

  10. A Bayesian model averaging method for the derivation of reservoir operating rules

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai

    2015-09-01

    Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, functional forms of reservoir operating rules are always determined subjectively. As a result, the uncertainty of selecting the form and/or model of reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov Chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, which is superior to any operating rule, provides the samples used to derive the rules; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.

  11. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation (ODE) Models with Mixed Effects

    PubMed Central

    Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam

    2016-01-01

    Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255

  12. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.

  13. Design Optimization and Fabrication of a Novel Structural SOI Piezoresistive Pressure Sensor with High Accuracy

    PubMed Central

    Li, Chuang; Cordovilla, Francisco; Jagdheesh, R.

    2018-01-01

    This paper presents a novel structural piezoresistive pressure sensor with four-grooved membrane combined with rood beam to measure low pressure. In this investigation, the design, optimization, fabrication, and measurements of the sensor are involved. By analyzing the stress distribution and deflection of sensitive elements using finite element method, a novel structure featuring high concentrated stress profile (HCSP) and locally stiffened membrane (LSM) is built. Curve fittings of the mechanical stress and deflection based on FEM simulation results are performed to establish the relationship between mechanical performance and structure dimension. A combination of FEM and curve fitting method is carried out to determine the structural dimensions. The optimized sensor chip is fabricated on a SOI wafer by traditional MEMS bulk-micromachining and anodic bonding technology. When the applied pressure is 1 psi, the sensor achieves a sensitivity of 30.9 mV/V/psi, a pressure nonlinearity of 0.21% FSS and an accuracy of 0.30%, and thereby the contradiction between sensitivity and linearity is alleviated. In terms of size, accuracy and high temperature characteristic, the proposed sensor is a proper choice for measuring pressure of less than 1 psi. PMID:29393916

  14. Investigating and Modelling Effects of Climatically and Hydrologically Indicators on the Urmia Lake Coastline Changes Using Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Ahmadijamal, M.; Hasanlou, M.

    2017-09-01

    Studying the hydrological parameters of lakes and examining variations in water level are important for managing water resources. The purpose of this study is to investigate and model changes in the Urmia Lake water level caused by changes in the climatic and hydrological indicators that drive variation in the level and area of this lake. For this purpose, Landsat satellite images, hydrological data, daily precipitation, daily surface evaporation and daily discharge over the whole lake basin during the period 2010-2016 were used. Based on time-series analyses conducted independently on each dataset with the same procedure, variation in the Urmia Lake level was modelled using polynomial regression and a polynomial combined with a periodic component. In the first scenario, we fit a multivariate linear polynomial to our datasets and computed RMSE, NRMSE and R² values. We found that a fourth-degree polynomial fit the datasets best, with the lowest RMSE of about 9 cm. In the second scenario, we combined the polynomial with a periodic component. The second scenario outperformed the first, with an RMSE of about 3 cm.
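
    A minimal sketch of the second scenario described above (a polynomial trend combined with a periodic term), fitted to a hypothetical monthly lake-level series; the polynomial degree, annual period, and data are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def poly_periodic(t, a0, a1, a2, a3, a4, amp, phase):
        """Fourth-degree polynomial trend plus an annual sinusoid (t in years)."""
        trend = a0 + a1*t + a2*t**2 + a3*t**3 + a4*t**4
        return trend + amp * np.sin(2 * np.pi * t + phase)

    # Hypothetical monthly water-level anomalies (metres) over a six-year window
    rng = np.random.default_rng(1)
    t = np.arange(0, 6, 1 / 12.0)
    level = 1.5 - 0.4*t + 0.02*t**3 + 0.12*np.sin(2*np.pi*t + 0.8) + rng.normal(0, 0.03, t.size)

    params, _ = curve_fit(poly_periodic, t, level, p0=[1, 0, 0, 0, 0, 0.1, 0])
    rmse = np.sqrt(np.mean((poly_periodic(t, *params) - level) ** 2))
    print(f"RMSE = {rmse:.3f} m")
    ```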

  15. Hybrid methodology for tuberculosis incidence time-series forecasting based on ARIMA and a NAR neural network.

    PubMed

    Wang, K W; Deng, C; Li, J P; Zhang, Y Y; Li, X Y; Wu, M C

    2017-04-01

    Tuberculosis (TB) affects people globally and is being reconsidered as a serious public health problem in China. Reliable forecasting is useful for the prevention and control of TB. This study proposes a hybrid model combining autoregressive integrated moving average (ARIMA) with a nonlinear autoregressive (NAR) neural network for forecasting the incidence of TB from January 2007 to March 2016. Prediction performance was compared between the hybrid model and the ARIMA model. The best-fit hybrid model combined an ARIMA (3,1,0) × (0,1,1)12 model with a NAR neural network with four delays and 12 neurons in the hidden layer. The ARIMA-NAR hybrid model, which exhibited lower mean square error, mean absolute error, and mean absolute percentage error of 0.2209, 0.1373, and 0.0406, respectively, in the modelling performance, could produce more accurate forecasting of TB incidence compared to the ARIMA model. This study shows that developing and applying the ARIMA-NAR hybrid model is an effective method to fit the linear and nonlinear patterns of time-series data, and this model could be helpful in the prevention and control of TB.
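
    The hybrid idea can be sketched in two stages: a seasonal ARIMA for the linear structure and an autoregressive network fitted to its residuals. The example below uses statsmodels and a scikit-learn MLP as a stand-in for the NAR network, with a synthetic monthly series; it mirrors the ARIMA (3,1,0) × (0,1,1)12 order and four-delay/12-neuron structure reported, but it is not the authors' implementation.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.neural_network import MLPRegressor

    # Hypothetical monthly incidence series with a seasonal component
    rng = np.random.default_rng(1)
    y = 20 + 5 * np.sin(np.arange(120) * 2 * np.pi / 12) + rng.normal(0, 1, 120)

    # Stage 1: seasonal ARIMA captures the linear/seasonal structure
    arima = ARIMA(y, order=(3, 1, 0), seasonal_order=(0, 1, 1, 12)).fit()
    resid = arima.resid

    # Stage 2: an autoregressive neural network (an MLP on 4 lagged residuals)
    lags = 4
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    target = resid[lags:]
    nar = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=0).fit(X, target)

    # Hybrid fit = ARIMA fitted values + NAR-predicted residuals
    hybrid = arima.fittedvalues[lags:] + nar.predict(X)
    ```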

  16. Rapid bedrock uplift in the Antarctic Peninsula explained by viscoelastic response to recent ice unloading

    NASA Astrophysics Data System (ADS)

    Nield, Grace A.; Barletta, Valentina R.; Bordoni, Andrea; King, Matt A.; Whitehouse, Pippa L.; Clarke, Peter J.; Domack, Eugene; Scambos, Ted A.; Berthier, Etienne

    2014-07-01

    Since 1995 several ice shelves in the Northern Antarctic Peninsula have collapsed and triggered ice-mass unloading, invoking a solid Earth response that has been recorded at continuous GPS (cGPS) stations. A previous attempt to model the observation of rapid uplift following the 2002 breakup of Larsen B Ice Shelf was limited by incomplete knowledge of the pattern of ice unloading and possibly the assumption of an elastic-only mechanism. We make use of a new high resolution dataset of ice elevation change that captures ice-mass loss north of 66°S to first show that non-linear uplift of the Palmer cGPS station since 2002 cannot be explained by elastic deformation alone. We apply a viscoelastic model with linear Maxwell rheology to predict uplift since 1995 and test the fit to the Palmer cGPS time series, finding a well constrained upper mantle viscosity but less sensitivity to lithospheric thickness. We further constrain the best fitting Earth model by including six cGPS stations deployed after 2009 (the LARISSA network), with vertical velocities in the range 1.7 to 14.9 mm/yr. This results in a best fitting Earth model with lithospheric thickness of 100-140 km and upper mantle viscosity of 6×10^17-2×10^18 Pa s - much lower than previously suggested for this region. Combining the LARISSA time series with the Palmer cGPS time series offers a rare opportunity to study the time-evolution of the low-viscosity solid Earth response to a well-captured ice unloading event.

  17. SU-F-BRD-16: Relative Biological Effectiveness of Double-Strand Break Induction for Modeling Cell Survival in Pristine Proton Beams of Different Dose-Averaged Linear Energy Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, C; Bronk, L; UT Graduate School of Biomedical Sciences at Houston, Houston, TX

    2015-06-15

    Purpose: High throughput in vitro experiments assessing cell survival following proton radiation indicate that both the alpha and the beta parameters of the linear quadratic model increase with increasing proton linear energy transfer (LET). We investigated the relative biological effectiveness (RBE) of double-strand break (DSB) induction as a means of explaining the experimental results. Methods: Experiments were performed with two lung cancer cell lines and a range of proton LET values (0.94 – 19.4 keV/µm) using an experimental apparatus designed to irradiate cells in a 96 well plate such that each column encounters protons of different dose-averaged LET (LETd). Traditional linear quadratic survival curve fitting was performed, and alpha, beta, and RBE values obtained. Survival curves were also fit with a model incorporating RBE of DSB induction as the sole fit parameter. Fitted values of the RBE of DSB induction were then compared to values obtained using Monte Carlo Damage Simulation (MCDS) software and energy spectra calculated with Geant4. Other parameters including alpha, beta, and number of DSBs were compared to those obtained from traditional fitting. Results: Survival curve fitting with RBE of DSB induction yielded alpha and beta parameters that increase with proton LETd, which follows from the standard method of fitting; however, relying on a single fit parameter provided more consistent trends. The fitted values of RBE of DSB induction increased beyond what is predicted from MCDS data above proton LETd of approximately 10 keV/µm. Conclusion: In order to accurately model in vitro proton irradiation experiments performed with high throughput methods, the RBE of DSB induction must increase more rapidly than predicted by MCDS above LETd of 10 keV/µm. This can be explained by considering the increased complexity of DSBs or the nature of intra-track pairwise DSB interactions in this range of LETd values. NIH Grant 2U19CA021239-35.
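
    A minimal sketch of conventional linear-quadratic survival-curve fitting for a single LETd condition, using invented dose/survival values; the DSB-induction-RBE reparameterisation discussed in the record is not implemented here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lq_survival(dose, alpha, beta):
        """Linear-quadratic cell survival model."""
        return np.exp(-(alpha * dose + beta * dose**2))

    # Hypothetical clonogenic survival data for one proton LETd condition
    dose = np.array([0, 1, 2, 3, 4, 6, 8], dtype=float)   # Gy
    sf = np.array([1.0, 0.72, 0.48, 0.30, 0.17, 0.05, 0.012])

    (alpha, beta), _ = curve_fit(lq_survival, dose, sf, p0=[0.2, 0.03], bounds=(0, np.inf))
    print(f"alpha = {alpha:.3f} /Gy, beta = {beta:.4f} /Gy^2")
    ```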

  18. Statistical physics of topological emulsions and expanding populations

    NASA Astrophysics Data System (ADS)

    Korolev, Kirill Sergeevich

    This thesis studies how microscopic interactions lead to large scale phenomena in two very different systems: two-dimensional liquid crystals and expanding populations. First, we explore the interactions among circular droplets embedded in a two-dimensional liquid crystal. The interactions arise due to anchoring boundary conditions on the surface of the inclusions and the elastic deformations of the orientational order parameter in the continuous phase. We analytically compute the texture around a single droplet and the far-field droplet-droplet pair potential. The near-field pair potential is computed numerically. We find that droplets attract at long separations and repel at short separations, which results in a well-defined preferred distance between the droplets and stabilization of the emulsion. Self-organization, barriers to coalescence, and the effects of thermal fluctuations are also discussed. Second, we study the role of randomness in the number of offspring on the evolutionary dynamics of expanding populations. Several equally fit genetic variants (alleles) are considered. We find that spatial expansion combined with demographic fluctuations leads to a substantial loss of genetic diversity and spatial segregation of the alleles. The effects of these processes on recurring mutations and selective sweeps are studied as well. Third, the competition between two alleles of different fitness is investigated. We find that the essential features of this competition can be captured by a non-linear reaction-diffusion equation. During a range expansion the fitter allele forms growing sectors that eventually engulf the less fit allele. The applications to measuring relative fitness in microbiological experiments are discussed. Finally, we analyze how a combination of strong stochasticity and weak competition affects the spreading of beneficial mutations in stationary, non-expanding, populations.

  19. Feasibility of Using Linearly Polarized Rotating Birdcage Transmitters and Close-Fitting Receive Arrays in MRI to Reduce SAR in the Vicinity of Deep Brain Stimulation Implants

    PubMed Central

    Golestanirad, Laleh; Keil, Boris; Angelone, Leonardo M.; Bonmassar, Giorgio; Mareyam, Azma; Wald, Lawrence L.

    2016-01-01

    Purpose MRI of patients with deep brain stimulation (DBS) implants is strictly limited due to safety concerns, including high levels of local specific absorption rate (SAR) of radiofrequency (RF) fields near the implant and related RF-induced heating. This study demonstrates the feasibility of using a rotating linearly polarized birdcage transmitter and a 32-channel close-fit receive array to significantly reduce local SAR in MRI of DBS patients. Methods Electromagnetic simulations and phantom experiments were performed with generic DBS lead geometries and implantation paths. The technique was based on mechanically rotating a linear birdcage transmitter to align its zero electric-field region with the implant while using a close-fit receive array to significantly increase signal to noise ratio of the images. Results It was found that the zero electric-field region of the transmitter is thick enough at 1.5 Tesla to encompass DBS lead trajectories with wire segments that were up to 30 degrees out of plane, as well as leads with looped segments. Moreover, SAR reduction was not sensitive to tissue properties, and insertion of a close-fit 32-channel receive array did not degrade the SAR reduction performance. Conclusion The ensemble of rotating linear birdcage and 32-channel close-fit receive array introduces a promising technology for future improvement of imaging in patients with DBS implants. PMID:27059266

  20. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
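
    As an illustration of one of the candidate curves named above, the sketch below fits Wood's incomplete-gamma function to a hypothetical set of monthly fat-to-protein ratios with ordinary non-linear least squares; the mixed-effects estimation the study performed in SAS PROC NLMIXED is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def wood(t, a, b, c):
        """Wood's incomplete-gamma lactation curve: y = a * t**b * exp(-c * t)."""
        return a * t**b * np.exp(-c * t)

    # Hypothetical monthly test-day fat:protein ratios (months in milk 1-10)
    dim = np.arange(1, 11, dtype=float)
    fpr = np.array([1.25, 1.12, 1.05, 1.02, 1.01, 1.02, 1.04, 1.06, 1.09, 1.12])

    (a, b, c), _ = curve_fit(wood, dim, fpr, p0=[1.2, -0.1, -0.02], maxfev=10000)
    t_stat = b / c   # stationary point of the curve (here the predicted FPR minimum)
    print(a, b, c, t_stat)
    ```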

  1. Standardized ileal digestible valine:lysine dose response effects in 25- to 45-kg pigs under commercial conditions.

    PubMed

    Gonçalves, Marcio A D; Tokach, Mike D; Dritz, Steve S; Bello, Nora M; Touchette, Kevin J; Goodband, Robert D; DeRouchey, Joel M; Woodworth, Jason C

    2018-03-06

    Two experiments were conducted to estimate the standardized ileal digestible valine:lysine (SID Val:Lys) dose response effects in 25- to 45-kg pigs under commercial conditions. In experiment 1, a total of 1,134 gilts (PIC 337 × 1050), initially 31.2 kg ± 2.0 kg body weight (BW; mean ± SD) were used in a 19-d growth trial with 27 pigs per pen and seven pens per treatment. In experiment 2, a total of 2,100 gilts (PIC 327 × 1050), initially 25.4 ± 1.9 kg BW were used in a 22-d growth trial with 25 pigs per pen and 12 pens per treatment. Treatments were blocked by initial BW in a randomized complete block design. In experiment 1, there were a total of six dietary treatments with SID Val at 59.0, 62.5, 65.9, 69.6, 73.0, and 75.5% of Lys and for experiment 2 there were a total of seven dietary treatments with SID Val at 57.0, 60.6, 63.9, 67.5, 71.1, 74.4, and 78.0% of Lys. Experimental diets were formulated to ensure that Lys was the second limiting amino acid throughout the experiments. Initially, linear mixed models were fitted to data from each experiment. Then, data from the two experiments were combined to estimate dose-responses using a broken-line linear ascending (BLL) model, broken-line quadratic ascending (BLQ) model, or quadratic polynomial (QP). Model fit was compared using Bayesian information criterion (BIC). In experiment 1, ADG increased linearly (P = 0.009) with increasing SID Val:Lys with no apparent significant impact on G:F. In experiment 2, ADG and ADFI increased in a quadratic manner (P < 0.002) with increasing SID Val:Lys whereas G:F increased linearly (P < 0.001). Overall, the best-fitting model for ADG was a QP, whereby the maximum mean ADG was estimated at a 73.0% (95% CI: [69.5, >78.0%]) SID Val:Lys. For G:F, the overall best-fitting model was a QP with maximum estimated mean G:F at 69.0% (95% CI: [64.0, >78.0]) SID Val:Lys ratio. However, 99% of the maximum mean performance for ADG and G:F was achieved at 68% and 63% SID Val:Lys, respectively. Therefore, the SID Val:Lys requirement ranged from 73.0% for maximum ADG to 63.2% SID Val:Lys to achieve 99% of maximum G:F in 25- to 45-kg BW pigs.
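
    The model comparison described here can be illustrated with a broken-line linear (BLL) fit versus a quadratic polynomial (QP) fit compared by BIC; the pen-mean values below are invented and the BIC is computed only up to an additive constant, so this is a sketch of the approach rather than the reported analysis.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def broken_line_linear(x, plateau, slope, breakpoint):
        """BLL: linear increase up to a breakpoint, flat plateau beyond it."""
        return np.where(x < breakpoint, plateau - slope * (breakpoint - x), plateau)

    def quadratic(x, b0, b1, b2):
        return b0 + b1 * x + b2 * x**2

    def bic(resid, k, n):
        """Gaussian BIC up to an additive constant."""
        return n * np.log(np.sum(resid**2) / n) + k * np.log(n)

    # Hypothetical pen-mean ADG (g/d) at the tested SID Val:Lys ratios (%)
    ratio = np.array([57.0, 60.6, 63.9, 67.5, 71.1, 74.4, 78.0])
    adg = np.array([655, 675, 690, 700, 706, 707, 706], dtype=float)

    p_bll, _ = curve_fit(broken_line_linear, ratio, adg, p0=[705, 3.0, 70.0])
    p_qp, _ = curve_fit(quadratic, ratio, adg, p0=[0, 1, -0.01])

    n = len(ratio)
    print("BLL BIC:", bic(adg - broken_line_linear(ratio, *p_bll), 3, n))
    print("QP  BIC:", bic(adg - quadratic(ratio, *p_qp), 3, n))
    ```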

  2. The development and validation of a numerical integration method for non-linear viscoelastic modeling

    PubMed Central

    Ramo, Nicole L.; Puttlitz, Christian M.

    2018-01-01

    Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
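
    The key idea, updating a history state variable recursively so the full strain history never has to be stored, can be shown for the simpler linear case. The sketch below implements a one-term linear Prony-series hereditary integral with an update that is exact for strain varying linearly within each step; the authors' strain-dependent (fully non-linear) formulation and their parameter values are not reproduced.

    ```python
    import numpy as np

    def prony_stress(strain, dt, g_inf, g1, tau1):
        """Recursive hereditary-integral update for a one-term linear Prony series.

        Only the state variable h from the previous step is carried forward,
        so no loading-history storage is required.
        """
        stress = np.empty_like(strain)
        h = 0.0
        stress[0] = g_inf * strain[0]
        decay = np.exp(-dt / tau1)
        for n in range(1, len(strain)):
            d_eps = strain[n] - strain[n - 1]
            # exact update for a strain that varies linearly within the step
            h = decay * h + g1 * (tau1 / dt) * (1.0 - decay) * d_eps
            stress[n] = g_inf * strain[n] + h
        return stress

    # Hypothetical stress-relaxation test: ramp to 10% strain over 0.5 s, then hold
    dt = 0.01
    t = np.arange(0, 20, dt)
    strain = np.clip(t / 0.5, 0, 1) * 0.10
    sigma = prony_stress(strain, dt, g_inf=2.0, g1=6.0, tau1=3.0)   # illustrative moduli (kPa)
    ```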

  3. An approximation of herd effect due to vaccinating children against seasonal influenza - a potential solution to the incorporation of indirect effects into static models.

    PubMed

    Van Vlaenderen, Ilse; Van Bellinghen, Laure-Anne; Meier, Genevieve; Nautrup, Barbara Poulsen

    2013-01-22

    Indirect herd effect from vaccination of children offers potential for improving the effectiveness of influenza prevention in the remaining unvaccinated population. Static models used in cost-effectiveness analyses cannot dynamically capture herd effects. The objective of this study was to develop a methodology to allow herd effect associated with vaccinating children against seasonal influenza to be incorporated into static models evaluating the cost-effectiveness of influenza vaccination. Two previously published linear equations for approximation of herd effects in general were compared with the results of a structured literature review undertaken using PubMed searches to identify data on herd effects specific to influenza vaccination. A linear function was fitted to point estimates from the literature using the sum of squared residuals. The literature review identified 21 publications on 20 studies for inclusion. Six studies provided data on a mathematical relationship between effective vaccine coverage in subgroups and reduction of influenza infection in a larger unvaccinated population. These supported a linear relationship when effective vaccine coverage in a subgroup population was between 20% and 80%. Three studies evaluating herd effect at a community level, specifically induced by vaccinating children, provided point estimates for fitting linear equations. The fitted linear equation for herd protection in the target population for vaccination (children) was slightly less conservative than a previously published equation for herd effects in general. The fitted linear equation for herd protection in the non-target population was considerably less conservative than the previously published equation. This method of approximating herd effect requires simple adjustments to the annual baseline risk of influenza in static models: (1) for the age group targeted by the childhood vaccination strategy (i.e. children); and (2) for other age groups not targeted (e.g. adults and/or elderly). Two approximations provide a linear relationship between effective coverage and reduction in the risk of infection. The first is a conservative approximation, recommended as a base-case for cost-effectiveness evaluations. The second, fitted to data extracted from a structured literature review, provides a less conservative estimate of herd effect, recommended for sensitivity analyses.
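
    A minimal sketch of fitting a line to published point estimates by minimising the sum of squared residuals, then using it to adjust a baseline risk; the coverage and risk-reduction pairs below are invented placeholders, not the values extracted in the review.

    ```python
    import numpy as np

    # Hypothetical point estimates: effective vaccine coverage in children (%)
    # vs. relative reduction in influenza risk in the unvaccinated community (%)
    coverage = np.array([20.0, 35.0, 47.0, 60.0, 80.0])
    risk_reduction = np.array([8.0, 18.0, 24.0, 33.0, 45.0])

    # Fit a straight line by minimising the sum of squared residuals
    slope, intercept = np.polyfit(coverage, risk_reduction, 1)

    def herd_adjustment(effective_coverage):
        """Approximate % reduction in baseline influenza risk for the non-target group."""
        return np.clip(intercept + slope * effective_coverage, 0, 100)

    print(herd_adjustment(50.0))
    ```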

  4. Order-constrained linear optimization.

    PubMed

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.

  5. Least median of squares and iteratively re-weighted least squares as robust linear regression methods for fluorimetric determination of α-lipoic acid in capsules in ideal and non-ideal cases of linearity.

    PubMed

    Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F

    2018-06-01

    This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS) to investigate their application in instrument analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the non-ideal condition and linearity intercept. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
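
    An IRLS fit can be sketched with Tukey bisquare weights and a MAD-based robust scale, as below; LMS would instead minimise the median of the squared residuals (typically by subset search). The calibration data, tuning constant, and iteration count are illustrative assumptions, not the authors' settings.

    ```python
    import numpy as np

    def irls_line(x, y, n_iter=25, c=4.685):
        """Iteratively re-weighted least squares for y = b0 + b1*x with bisquare weights."""
        X = np.column_stack([np.ones_like(x), x])
        w = np.ones_like(y)
        for _ in range(n_iter):
            sw = np.sqrt(w)
            beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
            r = y - X @ beta
            s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12   # robust scale (MAD)
            u = r / (c * s)
            w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)          # Tukey bisquare weights
        return beta

    # Hypothetical calibration: lipoic acid concentration vs. fluorescence quenching (dF),
    # with one gross outlier at concentration 12
    conc = np.array([2, 4, 6, 8, 10, 12, 14], dtype=float)
    dF = np.array([11, 22, 31, 44, 52, 95, 73], dtype=float)
    print(irls_line(conc, dF))
    ```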

  6. Field dependent magnetic anisotropy of Ga0.2Fe0.8 thin films

    NASA Astrophysics Data System (ADS)

    Resnick, Damon A.; McClure, A.; Kuster, C. M.; Rugheimer, P.; Idzerda, Y. U.

    2011-04-01

    Using longitudinal MOKE in combination with a variable strength rotating magnetic field, called the rotational MOKE (ROTMOKE) method, we show that the magnetic anisotropy for a Ga0.2Fe0.8 single crystal film with a thickness of 17 nm, grown on GaAs (001) with a thick ZnSe buffer layer, depends linearly on the strength of the applied magnetic field. The torque moment curves generated using ROTMOKE are well fit with a model that accounts for the uniaxial, cubic, or fourfold anisotropy, as well as additional terms with a linear dependence on the applied magnetic field. The uniaxial and cubic anisotropy fields, taken from both the hard and the easy axis scans, are seen to remain field independent. The field dependent terms are evidence of a large effect of magnetostriction and its contribution to the effective magnetic anisotropy in GaxFe1-x thin films.

  7. A systems approach to model the relationship between aflatoxin gene cluster expression, environmental factors, growth and toxin production by Aspergillus flavus

    PubMed Central

    Abdel-Hadi, Ahmed; Schmidt-Heydt, Markus; Parra, Roberto; Geisen, Rolf; Magan, Naresh

    2012-01-01

    A microarray analysis was used to examine the effect of combinations of water activity (aw, 0.995–0.90) and temperature (20–42°C) on the activation of aflatoxin biosynthetic genes (30 genes) in Aspergillus flavus grown on a conducive YES (20 g yeast extract, 150 g sucrose, 1 g MgSO4·7H2O) medium. The relative expression of 10 key genes (aflF, aflD, aflE, aflM, aflO, aflP, aflQ, aflX, aflR and aflS) in the biosynthetic pathway was examined in relation to different environmental factors and phenotypic aflatoxin B1 (AFB1) production. These data, plus data on relative growth rates and AFB1 production under different aw × temperature conditions, were used to develop a mixed-growth-associated product formation model. The gene expression data were normalized, used as a linear combination of the data for all 10 genes, and combined with the physical model. This was used to relate gene expression to aw and temperature conditions to predict AFB1 production. The observed AFB1 production showed a good linear regression fit to the production predicted by the model. The model was then validated by examining datasets outside the model fitting conditions used (37°C, 40°C and different aw levels). The relationship between structural genes (aflD, aflM) in the biosynthetic pathway and the regulatory genes (aflS, aflJ) was examined in relation to aw and temperature by developing ternary diagrams of relative expression. These findings are important in developing a more integrated systems approach by combining gene expression, ecophysiological influences and growth data to predict mycotoxin production. This could help in developing a more targeted approach to prevention strategies for such carcinogenic natural metabolites, which are prevalent in many staple food products. The model could also be used to predict the impact of climate change on toxin production. PMID:21880616

  8. Prediction of Injuries and Injury Types in Army Basic Training, Infantry, Armor, and Cavalry Trainees Using a Common Fitness Screen.

    PubMed

    Sefton, JoEllen M; Lohse, K R; McAdam, J S

    2016-11-01

     Context: Musculoskeletal injuries (MSIs) are among the most important challenges facing our military. They influence career success and directly affect military readiness. Several methods of screening initial entry training (IET) soldiers are being tested in an effort to predict which soldiers will sustain an MSI and to develop injury-prevention programs. The Army 1-1-1 Fitness Assessment was examined to determine if it could be used as a screening and MSI prediction mechanism in male IET soldiers. Objective: To determine if a relationship existed among the Army 1-1-1 Fitness Assessment results and MSI, MSI type, and program of instruction (POI) in male IET soldiers. Design: Retrospective cohort study. Setting: Fort Benning, Georgia. Patients or Other Participants: Male Army IET soldiers (N = 1788). Main Outcome Measure(s): The likelihood of sustaining acute and overuse MSI was modelled using separate logistic regression analyses. The POI, run time, push-ups and sit-ups (combined into a single score), and IET soldier age were tested as predictors in a series of linear models. Results: With POI controlled, slower run time, fewer push-ups and sit-ups, and older age were positively correlated with acute MSI; only slower run time was correlated with overuse MSI. For both MSI types, cavalry POIs had a higher risk of acute and overuse MSIs than did basic combat training, armor, or infantry POIs. Conclusions: The 1-1-1 Fitness Assessment predicted both the likelihood of MSI occurrence and type of MSI (acute or overuse). One-mile (1.6-km) run time predicted both overuse and acute MSIs, whereas the combined push-up and sit-up score predicted only acute MSIs. The MSIs varied by type of training (infantry, basic, armor, cavalry), which allowed the development of prediction equations by POI. We determined 1-1-1 Fitness Assessment cutoff scores for each event, thereby allowing the evaluation to be used as an MSI screening mechanism for IET soldiers.

  9. Prediction of Injuries and Injury Types in Army Basic Training, Infantry, Armor, and Cavalry Trainees Using a Common Fitness Screen

    PubMed Central

    Sefton, JoEllen M.; Lohse, K. R.; McAdam, J. S.

    2016-01-01

    Context: Musculoskeletal injuries (MSIs) are among the most important challenges facing our military. They influence career success and directly affect military readiness. Several methods of screening initial entry training (IET) soldiers are being tested in an effort to predict which soldiers will sustain an MSI and to develop injury-prevention programs. The Army 1-1-1 Fitness Assessment was examined to determine if it could be used as a screening and MSI prediction mechanism in male IET soldiers. Objective: To determine if a relationship existed among the Army 1-1-1 Fitness Assessment results and MSI, MSI type, and program of instruction (POI) in male IET soldiers. Design: Retrospective cohort study. Setting: Fort Benning, Georgia. Patients or Other Participants: Male Army IET soldiers (N = 1788). Main Outcome Measure(s): The likelihood of sustaining acute and overuse MSI was modelled using separate logistic regression analyses. The POI, run time, push-ups and sit-ups (combined into a single score), and IET soldier age were tested as predictors in a series of linear models. Results: With POI controlled, slower run time, fewer push-ups and sit-ups, and older age were positively correlated with acute MSI; only slower run time was correlated with overuse MSI. For both MSI types, cavalry POIs had a higher risk of acute and overuse MSIs than did basic combat training, armor, or infantry POIs. Conclusions: The 1-1-1 Fitness Assessment predicted both the likelihood of MSI occurrence and type of MSI (acute or overuse). One-mile (1.6-km) run time predicted both overuse and acute MSIs, whereas the combined push-up and sit-up score predicted only acute MSIs. The MSIs varied by type of training (infantry, basic, armor, cavalry), which allowed the development of prediction equations by POI. We determined 1-1-1 Fitness Assessment cutoff scores for each event, thereby allowing the evaluation to be used as an MSI screening mechanism for IET soldiers. PMID:28068160

  10. Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI

    NASA Astrophysics Data System (ADS)

    Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.

    2017-12-01

    Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least square (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured and a theoretical population based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by usage of the differential equations. Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.

  11. A 2500 deg² CMB Lensing Map from Combined South Pole Telescope and Planck Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omori, Y.; Chown, R.; Simard, G.

    Here, we present a cosmic microwave background (CMB) lensing map produced from a linear combination of South Pole Telescope (SPT) and Planck temperature data. The 150 GHz temperature data from the 2500 deg² SPT-SZ survey is combined with the Planck 143 GHz data in harmonic space to obtain a temperature map that has a broader ℓ coverage and less noise than either individual map. Using a quadratic estimator technique on this combined temperature map, we produce a map of the gravitational lensing potential projected along the line of sight. We measure the auto-spectrum of the lensing potential $C_L^{\phi\phi}$, and compare it to the theoretical prediction for a ΛCDM cosmology consistent with the Planck 2015 data set, finding a best-fit amplitude of $0.95^{+0.06}_{-0.06}\,(\mathrm{stat.})\,^{+0.01}_{-0.01}\,(\mathrm{sys.})$. The null hypothesis of no lensing is rejected at a significance of 24σ. One important use of such a lensing potential map is in cross-correlations with other dark matter tracers. We demonstrate this cross-correlation in practice by calculating the cross-spectrum $C_L^{\phi G}$ between the SPT+Planck lensing map and Wide-field Infrared Survey Explorer (WISE) galaxies. We fit $C_L^{\phi G}$ to a power law of the form $p_L = a(L/L_0)^{-b}$ with $a$, $L_0$, and $b$ fixed, and find $\eta^{\phi G} = C_L^{\phi G}/p_L = 0.94^{+0.04}_{-0.04}$, which is marginally lower, but in good agreement with $\eta^{\phi G} = 1.00^{+0.02}_{-0.01}$, the best-fit amplitude for the cross-correlation of Planck-2015 CMB lensing and WISE galaxies over ~67% of the sky. Finally, the lensing potential map presented here will be used for cross-correlation studies with the Dark Energy Survey, whose footprint nearly completely covers the SPT 2500 deg² field.

  12. A 2500 deg² CMB Lensing Map from Combined South Pole Telescope and Planck Data

    DOE PAGES

    Omori, Y.; Chown, R.; Simard, G.; ...

    2017-11-07

    Here, we present a cosmic microwave background (CMB) lensing map produced from a linear combination of South Pole Telescope (SPT) and Planck temperature data. The 150 GHz temperature data from the 2500 deg² SPT-SZ survey is combined with the Planck 143 GHz data in harmonic space to obtain a temperature map that has a broader ℓ coverage and less noise than either individual map. Using a quadratic estimator technique on this combined temperature map, we produce a map of the gravitational lensing potential projected along the line of sight. We measure the auto-spectrum of the lensing potential $C_L^{\phi\phi}$, and compare it to the theoretical prediction for a ΛCDM cosmology consistent with the Planck 2015 data set, finding a best-fit amplitude of $0.95^{+0.06}_{-0.06}\,(\mathrm{stat.})\,^{+0.01}_{-0.01}\,(\mathrm{sys.})$. The null hypothesis of no lensing is rejected at a significance of 24σ. One important use of such a lensing potential map is in cross-correlations with other dark matter tracers. We demonstrate this cross-correlation in practice by calculating the cross-spectrum $C_L^{\phi G}$ between the SPT+Planck lensing map and Wide-field Infrared Survey Explorer (WISE) galaxies. We fit $C_L^{\phi G}$ to a power law of the form $p_L = a(L/L_0)^{-b}$ with $a$, $L_0$, and $b$ fixed, and find $\eta^{\phi G} = C_L^{\phi G}/p_L = 0.94^{+0.04}_{-0.04}$, which is marginally lower, but in good agreement with $\eta^{\phi G} = 1.00^{+0.02}_{-0.01}$, the best-fit amplitude for the cross-correlation of Planck-2015 CMB lensing and WISE galaxies over ~67% of the sky. Finally, the lensing potential map presented here will be used for cross-correlation studies with the Dark Energy Survey, whose footprint nearly completely covers the SPT 2500 deg² field.

  13. Fecal immunochemical tests in combination with blood tests for colorectal cancer and advanced adenoma detection—systematic review

    PubMed Central

    Niedermaier, Tobias; Weigl, Korbinian; Hoffmeister, Michael; Brenner, Hermann

    2017-01-01

    Background Colorectal cancer (CRC) is a common but largely preventable cancer. Although fecal immunochemical tests (FITs) detect the majority of CRCs, they miss some of the cancers and most advanced adenomas (AAs). The potential of blood tests in complementing FITs for the detection of CRC or AA has not yet been systematically investigated. Methods We conducted a systematic review of performance of FIT combined with an additional blood test for CRC and AA detection versus FIT alone. PubMed and Web of Science were searched until June 9, 2017. Results Some markers substantially increased sensitivity for CRC when combined with FIT, albeit typically at a major loss of specificity. For AA, no relevant increase in sensitivity could be achieved. Conclusion Combining FIT and blood tests might be a promising approach to enhance sensitivity of CRC screening, but comprehensive evaluation of promising marker combinations in screening populations is needed. PMID:29435309

  14. Inversion group (IG) fitting: A new T1 mapping method for modified look-locker inversion recovery (MOLLI) that allows arbitrary inversion groupings and rest periods (including no rest period).

    PubMed

    Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J

    2016-06-01

    The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback with IG fitting was a loss of precision, approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.

  15. Fitting aerodynamic forces in the Laplace domain: An application of a nonlinear nongradient technique to multilevel constrained optimization

    NASA Technical Reports Server (NTRS)

    Tiffany, S. H.; Adams, W. M., Jr.

    1984-01-01

    A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented which separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.

  16. Time Series Analysis and Forecasting of Wastewater Inflow into Bandar Tun Razak Sewage Treatment Plant in Selangor, Malaysia

    NASA Astrophysics Data System (ADS)

    Abunama, Taher; Othman, Faridah

    2017-06-01

    Analysing the fluctuations of wastewater inflow rates in sewage treatment plants (STPs) is essential to guarantee sufficient treatment of wastewater before it is discharged to the environment. The main objectives of this study are to statistically analyse and forecast the wastewater inflow rates into the Bandar Tun Razak STP in Kuala Lumpur, Malaysia. A time series analysis of three years of weekly influent data (156 weeks) was conducted using the Auto-Regressive Integrated Moving Average (ARIMA) model. Various combinations of ARIMA orders (p, d, q) were tried in order to select the best-fitting model, which was then used to forecast the wastewater inflow rates. Linear regression analysis was applied to test the correlation between the observed and predicted influents. The ARIMA(3, 1, 3) model was selected for its highest significant R-square and lowest normalized Bayesian Information Criterion (BIC) value, and the wastewater inflow rates were accordingly forecast for an additional 52 weeks. The linear regression analysis between the observed and predicted values of the wastewater inflow rates showed a positive linear correlation with a coefficient of 0.831.
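    The workflow described above can be sketched in a few lines of Python; this is a hedged illustration rather than the authors' code, and the synthetic inflow series, dates, and seed below are assumptions made only so the example runs.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        # Synthetic stand-in for the 156 weekly inflow observations (m3/day).
        rng = np.random.default_rng(0)
        weeks = pd.date_range("2013-01-06", periods=156, freq="W")
        inflow = pd.Series(50_000 + np.cumsum(rng.normal(0, 500, 156)), index=weeks)

        # Fit the selected order (p, d, q) = (3, 1, 3); BIC was the selection criterion in the study.
        result = ARIMA(inflow, order=(3, 1, 3)).fit()
        print(result.aic, result.bic)

        # Forecast 52 additional weeks, then check the observed-vs-fitted correlation
        # with an ordinary linear regression, as done in the paper.
        forecast = result.forecast(steps=52)
        r = np.corrcoef(inflow.values, result.fittedvalues.values)[0, 1]
        print(forecast.head())
        print(f"linear correlation r = {r:.3f}")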

  17. Push-off tests and strength evaluation of joints combining shrink fitting with bonding

    NASA Astrophysics Data System (ADS)

    Yoneno, Masahiro; Sawa, Toshiyuki; Shimotakahara, Ken; Motegi, Yoichi

    1997-03-01

    Shrink-fitted joints have been used in mechanical structures. Recently, joints combining shrink fitting with anaerobic adhesives bonded between the shrink-fitted surfaces have appeared in order to increase the joint strength. In this paper, push-off tests were carried out on the strength of joints combining shrink fitting with bonding, using a material testing machine. In addition, the push-off strength of shrink-fitted joints without an anaerobic adhesive was also measured. In the experiments, the effects of the shrinking allowance and the outer diameter of the rings on the joint strength are examined. The interface stress distribution in bonded shrink-fitted joints subjected to a push-off load is analyzed using the axisymmetric theory of elasticity as a four-body contact problem. Using the interface stress distribution, a method for estimating joint strength is proposed. The experimental results are in fairly good agreement with the numerical results. It is found that the strength of the combination joints is greater than that of shrink-fitted joints alone.

  18. A new approach to correct the QT interval for changes in heart rate using a nonparametric regression model in beagle dogs.

    PubMed

    Watanabe, Hiroyuki; Miyazaki, Hiroyasu

    2006-01-01

    Over- and/or under-correction of QT intervals for changes in heart rate may lead to misleading conclusions and/or mask the potential of a drug to prolong the QT interval. This study examines a nonparametric regression model (loess smoother) to adjust the QT interval for differences in heart rate, with improved fit over a wide range of heart rates. A total of 240 sets of (QT, RR) observations collected from each of 8 conscious, non-treated beagle dogs were used as the material for investigation. The fit of the nonparametric regression model to the QT-RR relationship was compared with four models (individual linear regression, common linear regression, and Bazett's and Fridericia's correction models) with reference to Akaike's Information Criterion (AIC). Residuals were visually assessed. The bias-corrected AIC of the nonparametric regression model was the best of the models examined in this study. Although the parametric models did not fit, the nonparametric regression model improved the fit at both fast and slow heart rates. The nonparametric regression model is the more flexible method compared with the parametric methods. The mathematical fit of the linear regression models was unsatisfactory at both fast and slow heart rates, whereas the nonparametric regression model showed significant improvement at all heart rates in beagle dogs.

  19. A scientific and statistical analysis of accelerated aging for pharmaceuticals. Part 1: accuracy of fitting methods.

    PubMed

    Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L

    2014-10-01

    Three competing mathematical fitting models (a point-by-point estimation method, a linear fit method, and an isoconversion method) of chemical stability (related substance growth) when using high temperature data to predict room temperature shelf-life were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
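    As a hedged illustration of the isoconversion idea discussed above (not the authors' code), the times needed to reach a degradant specification limit at a few accelerated temperatures can be fitted to the linear (logarithmic) form of the Arrhenius equation and extrapolated to room temperature; the temperatures and isoconversion times below are invented for the example.

        import numpy as np

        R = 8.314                                         # gas constant, J/(mol*K)
        T_accel = np.array([50.0, 60.0, 70.0]) + 273.15   # accelerated storage temperatures, K
        t_iso_days = np.array([180.0, 60.0, 22.0])        # hypothetical times to reach the spec limit

        # Linear Arrhenius form: ln(t_iso) = intercept + (Ea/R) * (1/T).
        slope, intercept = np.polyfit(1.0 / T_accel, np.log(t_iso_days), 1)
        Ea = slope * R                                    # apparent activation energy, J/mol

        T_room = 25.0 + 273.15
        shelf_life_days = np.exp(intercept + slope / T_room)
        print(f"Ea ~ {Ea / 1000:.1f} kJ/mol, projected shelf-life ~ {shelf_life_days:.0f} days")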

  20. HYDRORECESSION: A toolbox for streamflow recession analysis

    NASA Astrophysics Data System (ADS)

    Arciniega, S.

    2015-12-01

    Streamflow recession curves are hydrological signatures that allow the relationship between groundwater storage and baseflow and/or low flows to be studied at the catchment scale. Recent studies have shown that streamflow recession analysis can be quite sensitive to the combination of different models, extraction techniques and parameter estimation methods. In order to better characterize streamflow recession curves, new methodologies combining multiple approaches have been recommended. The HYDRORECESSION toolbox, presented here, is a Matlab graphical user interface developed to analyse streamflow recession time series with different tools for parameterizing linear and nonlinear storage-outflow relationships through four of the most useful recession models (Maillet, Boussinesq, Coutagne and Wittenberg). The toolbox includes four parameter-fitting techniques (linear regression, lower envelope, data binning and mean squared error) and three different methods to extract hydrograph recession segments (Vogel, Brutsaert and Aksoy). In addition, the toolbox has a module that separates the baseflow component from the observed hydrograph using the inverse reservoir algorithm. Potential applications of HYDRORECESSION include model parameter analysis, hydrological regionalization and classification, baseflow index estimates, catchment-scale recharge and low-flow modelling, among others. HYDRORECESSION is freely available for non-commercial and academic purposes.
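    HYDRORECESSION itself is a Matlab GUI; purely as a hedged sketch of one of the fitting techniques it offers, the snippet below estimates the parameters of a nonlinear storage-outflow relation -dQ/dt = a*Q^b by linear regression in log space, using an invented recession segment.

        import numpy as np

        # Synthetic recession limb (daily streamflow, m3/s).
        q = np.array([12.0, 10.1, 8.7, 7.6, 6.7, 6.0, 5.4, 4.9, 4.5, 4.1])
        dqdt = np.diff(q)                       # negative during recession
        q_mid = 0.5 * (q[:-1] + q[1:])          # flow at interval midpoints

        # log(-dQ/dt) = log(a) + b * log(Q), fitted by ordinary least squares.
        b, log_a = np.polyfit(np.log(q_mid), np.log(-dqdt), 1)
        a = np.exp(log_a)
        print(f"-dQ/dt ~ {a:.3f} * Q^{b:.2f}")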

  1. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm by which one can both capture the non-linearity in the data and find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that captures complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. The algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method can offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subset selection with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829

  2. [Combined application of multiple fluorescence in research on the degradation of fluoranthene by potassium ferrate].

    PubMed

    Li, Si; Yu, Dan-Ni; Ji, Fang-Ying; Zhou, Guang-Ming; He, Qiang

    2012-11-01

    The degradation of fluoranthene was studied by combined means of multiple fluorescence spectra, including emission, synchronous, excitation-emission matrix (EEM), time-scan and photometric measurements. The characteristics of the degradation and the changes to the fluoranthene molecule during the degradation process were also discussed according to the information provided by all of the fluorescence spectra mentioned above. The equations of fluoranthene degradation by potassium ferrate were obtained by fitting time-scan fluorescence curves at different times, and the degradation kinetics were inferred accordingly. The experimental results from the multiple fluorescence data consistently indicated the same degradation rate at the same reaction time: at t = 10 s the degradation rate was 55%; at t = 25 s, 81%; and at t = 40 s, 91%. No new fluorescent characteristic was observed within any stage of the degradation. The reaction stage at t ≤ 20 s was crucial, during which the degradation process was closest to a linear relationship. After this initial stage, the linear relationship deviated gradually as the degradation progressed. The degradation of fluoranthene by potassium ferrate was nearly in accord with first-order reaction kinetics.

  3. Recycling slaughterhouse waste into fertilizer: how do pyrolysis temperature and biomass additions affect phosphorus availability and chemistry?

    PubMed

    Zwetsloot, Marie J; Lehmann, Johannes; Solomon, Dawit

    2015-01-01

    Pyrolysis of slaughterhouse waste could promote more sustainable phosphorus (P) usage through the development of alternative P fertilizers. This study investigated how pyrolysis temperature (220, 350, 550 and 750 °C), rendering before pyrolysis, and wood or corn biomass additions affect P chemistry in bone char, plant availability, and its potential as P fertilizer. Linear combination fitting of synchrotron-based X-ray absorption near edge structure spectra demonstrated that higher pyrolysis temperatures decreased the fit with organic P references, but increased the fit with a hydroxyapatite (HA) reference, used as an indicator of high calcium phosphate (CaP) crystallinity. The fit to the HA reference increased from 0% to 69% in bone with meat residue and from 20% to 95% in rendered bone. Biomass additions to the bone with meat residue reduced the fit to the HA reference by 83% for wood and 95% for corn, and additions to rendered bone by 37% for wood. No detectable aromatic P forms were generated by pyrolysis. High CaP crystallinity was correlated with low water-extractable P, but high formic acid-extractable P indicative of high plant availability. Bone char supplied available P which was only 24% lower than Triple Superphosphate fertilizer and two- to five-fold higher than rock phosphate. Pyrolysis temperature and biomass additions can be used to design P fertilizer characteristics of bone char through changing CaP crystallinity that optimize P availability to plants. © 2014 Society of Chemical Industry.
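    The linear combination fitting step used above can be illustrated with a short, hedged sketch: a sample XANES spectrum is modelled as a non-negative weighted sum of reference spectra and the weights are reported as percentages. Real LCF is normally done in dedicated software (e.g. Athena); the reference and sample spectra below are synthetic stand-ins.

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical energy grid and two synthetic "reference" spectra.
        energy = np.linspace(7100, 7200, 200)                  # eV
        refs = np.column_stack([
            0.5 * (1 + np.tanh((energy - 7120) / 5)),          # stand-in for e.g. a hydroxyapatite reference
            0.5 * (1 + np.tanh((energy - 7128) / 9)),          # stand-in for e.g. an organic-P reference
        ])

        # Sample spectrum = 70/30 mixture of the references plus noise.
        true_w = np.array([0.7, 0.3])
        sample = refs @ true_w + np.random.default_rng(1).normal(0, 0.005, energy.size)

        # Non-negative least squares gives the component weights; the residual
        # norm plays the role of the goodness-of-fit statistic.
        weights, residual = nnls(refs, sample)
        print(100 * weights / weights.sum(), residual)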

  4. A Combined SRTM Digital Elevation Model for Zanjan State of Iran Based on the Corrective Surface Idea

    NASA Astrophysics Data System (ADS)

    Kiamehr, Ramin

    2016-04-01

    A one arc-second, high-resolution version of the SRTM model was recently published for Iran in the US Geological Survey database. Digital Elevation Models (DEMs) are widely used by geoscientists in different disciplines and applications. They are essential data in the geoid computation procedure, e.g., to determine the topographic, downward continuation (DWC) and atmospheric corrections. A DEM can also be used in road location and design in civil engineering and in hydrological analysis. However, a DEM is only a model of the elevation surface and is subject to errors. The most important errors can come from bias in the height datum. On the other hand, the accuracy of a DEM is usually published in a global sense, and it is important to estimate its accuracy in the area of interest before using it. One of the best ways to obtain a reasonable indication of DEM accuracy is to compare its heights against precise national GPS/levelling data. This can be done by determining the root-mean-square (RMS) of the fit between the DEM and levelling heights. The errors in the DEM can be approximated by different kinds of functions in order to fit the DEM to a set of GPS/levelling data using least-squares adjustment. In the current study, several models ranging from a simple linear regression to a seven-parameter similarity transformation model are used in the fitting procedure. The seven-parameter model gives the best fit, with the minimum standard deviation, among the selected DEMs in the study area. Based on the 35 precise GPS/levelling points, we obtain an RMS of the seven-parameter fit for the SRTM DEM of 5.5 m. The corrective surface model is generated from the transformation parameters and added to the original SRTM model. The fit of the combined model is then assessed again with independent GPS/levelling data. The result shows a great improvement in the absolute accuracy of the model, with a standard deviation of 3.4 m.
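    The corrective-surface idea can be sketched with a much simpler model than the seven-parameter transformation used in the study: the snippet below (a hedged illustration with invented coordinates and height differences) fits a tilted-plane correction to DEM-minus-levelling differences by least squares and reports the RMS before and after correction.

        import numpy as np

        rng = np.random.default_rng(2)
        lon = rng.uniform(47.5, 49.5, 35)        # hypothetical GPS/levelling point longitudes
        lat = rng.uniform(35.5, 37.0, 35)        # and latitudes in the study region
        dh = 4.0 + 1.5 * (lon - 48.5) - 0.8 * (lat - 36.2) + rng.normal(0, 1.0, 35)  # DEM minus levelling, m

        # Design matrix: bias, east-west tilt, north-south tilt, and a cross term.
        A = np.column_stack([
            np.ones_like(lon),
            lon - lon.mean(),
            lat - lat.mean(),
            (lon - lon.mean()) * (lat - lat.mean()),
        ])
        params, *_ = np.linalg.lstsq(A, dh, rcond=None)
        corrected = dh - A @ params               # residuals after subtracting the corrective surface

        rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
        print(f"RMS before: {rms(dh):.2f} m, after: {rms(corrected):.2f} m")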

  5. Structural Equation Models in a Redundancy Analysis Framework With Covariates.

    PubMed

    Lovaglio, Pietro Giorgio; Vittadini, Giorgio

    2014-01-01

    A recent method to specify and fit structural equation modeling in the Redundancy Analysis framework based on so-called Extended Redundancy Analysis (ERA) has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA we propose a simulation study of small samples. Moreover, we propose an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.

  6. The linear sizes tolerances and fits system modernization

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.

    2018-04-01

    The study addresses the urgent topic of ensuring technical product quality in the tolerancing process of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, first, to classify as linear sizes the linear coordinating sizes that additionally determine the location of detail elements and, second, to justify the basic deviation of the tolerance interval for an element's linear size. The geometrical modeling method for real detail elements, together with analytical and experimental methods, is used in the research. It is shown that the linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The main deviation of this coordinating tolerance is the average zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system remain for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: it becomes the maximum deviation corresponding to the limit of the element material, i.e., EI, the lower tolerance deviation, for the sizes of internal elements (holes), and es, the upper tolerance deviation, for the sizes of external elements (shafts). It is the maximum-material sizes that are involved in the mating of the dimensional elements (shafts and holes) and that determine the fit type.

  7. Analyzing CRISM Data from mound B in Juventae Chasma, Mars, with the Multiple-Endmember Linear Spectral Unmixing Model MELSUM

    NASA Astrophysics Data System (ADS)

    Wendt, L.; Gross, C.; McGuire, P. C.; Combe, J.-P.; Neukum, G.

    2009-04-01

    Juventae Chasma, just north of Valles Marineris on Mars, contains several light-toned deposits (LTD), one of which is labelled mound B. Based on IR data from the imaging spectrometer OMEGA on Mars Express, [1] suggested kieserite for the lower part and gypsum for the upper part of the mound. In this study, we analyzed NIR data from the Compact Reconnaissance Imaging Spectrometer CRISM on MRO with the Multiple-Endmember Linear Spectral Unmixing Model MELSUM developed by Combe et al. [2]. We used CRISM data product FRT00009C0A from 1 to 2.6 µm. A novel, time-dependent volcano-scan technique [3] was applied to remove absorption bands related to CO2 much more effectively than the volcano-scan technique [4] that has been applied to CRISM and OMEGA data so far. In classic SMA, a solution for the measured spectrum is calculated by a linear combination of all input spectra (which may come from a spectral library or from the image itself) at once. This can lead to negative coefficients, which have no physical meaning. MELSUM avoids this by calculating a solution for each possible combination of a subset of the reference spectra, with the maximum number of library spectra in the subset defined by the user. The solution with the lowest residual to the input spectrum is returned. We used MELSUM in a first step as a similarity measure within the image by using averaged spectra from the image itself as input. This showed that three spectral units are enough to describe the variability in the data to first order: a lower light-toned unit, an upper light-toned unit and a dark-toned unit. We then chose 34 laboratory spectra of sulfates, mafic minerals and iron oxides plus a spectrum for H2O ice as reference spectra for the unmixing of averaged spectra for each of these spectral regions. The best fit for the dark material was a combination of olivine, pyroxene and ice (present as clouds in the atmosphere and not on the surface). In agreement with [5], the lower unit was best modeled by a mix of the monohydrated sulfates szomolnokite and kieserite plus olivine and ice. The upper unit fits best with a combination of romerite and rozenite (two polyhydrated iron sulfates), olivine and ice. Gypsum is not present. The excellent fit between modeled and measured spectra demonstrates the effectiveness of MELSUM as a tool to analyze hyperspectral data from CRISM. This research has been supported by the Helmholtz Association through the research alliance "Planetary Evolution and Life" and the German Space Agency under the Mars Express programme. References: [1] Gendrin, A. et al. (2005), Science, 307(5751), 1587-1591. [2] Combe, J.-P. et al. (2008), PSS, 56, 951-975. [3] McGuire et al. (2009, in preparation), "A new volcano-scan algorithm for atmospheric correction of CRISM and OMEGA spectral data". [4] Langevin et al. (2005), Science, 307(5715), 1584-1586. [5] Bishop, J. L. et al. (2008), LPSC XXXIX, #1391.
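    The MELSUM strategy described above (exhaustive linear unmixing over small subsets of a spectral library, rejecting solutions with negative coefficients) can be sketched as follows; this is not the original MELSUM code, and the toy library and mixture are invented.

        import itertools
        import numpy as np

        def subset_unmix(measured, library, max_members=3):
            """Return (residual, endmember indices, coefficients) of the best
            physically admissible least-squares mixture over all small subsets."""
            best = (np.inf, None, None)
            for k in range(1, max_members + 1):
                for idx in itertools.combinations(range(library.shape[1]), k):
                    A = library[:, list(idx)]
                    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
                    if np.any(coeffs < 0):          # negative abundances have no physical meaning
                        continue
                    resid = float(np.linalg.norm(A @ coeffs - measured))
                    if resid < best[0]:
                        best = (resid, idx, coeffs)
            return best

        rng = np.random.default_rng(3)
        library = rng.random((240, 6))              # 240 spectral channels, 6 library spectra
        measured = 0.6 * library[:, 1] + 0.4 * library[:, 4]
        print(subset_unmix(measured, library))      # should recover endmembers 1 and 4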

  8. Optimizing complex phenotypes through model-guided multiplex genome engineering

    DOE PAGES

    Kuznetsov, Gleb; Goodman, Daniel B.; Filsinger, Gabriel T.; ...

    2017-05-25

    Here, we present a method for identifying genomic modifications that optimize a complex phenotype through multiplex genome engineering and predictive modeling. We apply our method to identify six single nucleotide mutations that recover 59% of the fitness defect exhibited by the 63-codon E. coli strain C321.ΔA. By introducing targeted combinations of changes in multiplex we generate rich genotypic and phenotypic diversity and characterize clones using whole-genome sequencing and doubling time measurements. Regularized multivariate linear regression accurately quantifies individual allelic effects and overcomes bias from hitchhiking mutations and context-dependence of genome editing efficiency that would confound other strategies.
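    As a hedged sketch of the regularized-regression step described above (not the authors' pipeline), allelic effects can be estimated from a 0/1 genotype matrix and measured doubling times with a closed-form ridge solution; the genotype matrix, effect sizes, and regularization strength below are all invented.

        import numpy as np

        rng = np.random.default_rng(4)
        n_clones, n_alleles = 60, 12
        X = rng.integers(0, 2, size=(n_clones, n_alleles)).astype(float)   # 1 = edit present in clone

        true_effect = np.zeros(n_alleles)
        true_effect[[0, 3, 7]] = [-4.0, -2.5, -1.0]          # minutes of doubling time recovered (hypothetical)
        y = 75.0 + X @ true_effect + rng.normal(0, 1.0, n_clones)

        lam = 1.0                                            # ridge penalty (assumed value)
        Xc, yc = X - X.mean(axis=0), y - y.mean()
        beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_alleles), Xc.T @ yc)
        print(np.round(beta, 2))                             # per-allele effect estimates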

  9. Optimizing complex phenotypes through model-guided multiplex genome engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuznetsov, Gleb; Goodman, Daniel B.; Filsinger, Gabriel T.

    Here, we present a method for identifying genomic modifications that optimize a complex phenotype through multiplex genome engineering and predictive modeling. We apply our method to identify six single nucleotide mutations that recover 59% of the fitness defect exhibited by the 63-codon E. coli strain C321.ΔA. By introducing targeted combinations of changes in multiplex we generate rich genotypic and phenotypic diversity and characterize clones using whole-genome sequencing and doubling time measurements. Regularized multivariate linear regression accurately quantifies individual allelic effects and overcomes bias from hitchhiking mutations and context-dependence of genome editing efficiency that would confound other strategies.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Hao; Zhang, Guifu; Zhao, Kun

    A hybrid method combining linear programming (LP) and physical constraints is developed to estimate specific differential phase (KDP) and to improve rain estimation. Moreover, the hybrid KDP estimator and the existing estimators based on LP, least-squares fitting, and a self-consistent relation of polarimetric radar variables are evaluated and compared using simulated data. Our simulation results indicate the new estimator's superiority, particularly in regions where the backscattering phase (δhv) dominates. Further, a quantitative comparison between auto-weather-station rain-gauge observations and KDP-based radar rain estimates for a Meiyu event also demonstrates the superiority of the hybrid KDP estimator over existing methods.

  11. ACCELERATED FITTING OF STELLAR SPECTRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ting, Yuan-Sen; Conroy, Charlie; Rix, Hans-Walter

    2016-07-20

    Stellar spectra are often modeled and fitted by interpolating within a rectilinear grid of synthetic spectra to derive the stars' labels: stellar parameters and elemental abundances. However, the number of synthetic spectra needed for a rectilinear grid grows exponentially with the label space dimensions, precluding the simultaneous and self-consistent fitting of more than a few elemental abundances. Shortcuts such as fitting subsets of labels separately can introduce unknown systematics and do not produce correct error covariances in the derived labels. In this paper we present a new approach, Convex Hull Adaptive Tessellation (chat), which includes several new ideas for inexpensively generating a sufficient stellar synthetic library, using linear algebra and the concept of an adaptive, data-driven grid. A convex hull approximates the region where the data lie in the label space. A variety of tests with mock data sets demonstrate that chat can reduce the number of required synthetic model calculations by three orders of magnitude in an eight-dimensional label space. The reduction will be even larger for higher dimensional label spaces. In chat the computational effort increases only linearly with the number of labels that are fit simultaneously. Around each of these grid points in the label space an approximate synthetic spectrum can be generated through linear expansion using a set of “gradient spectra” that represent flux derivatives at every wavelength point with respect to all labels. These techniques provide new opportunities to fit the full stellar spectra from large surveys with 15–30 labels simultaneously.
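    Purely as a hedged illustration of the "gradient spectra" idea (not the chat code), the spectrum near an anchor grid point with labels theta0 can be approximated by a first-order expansion, after which fitting labels to an observed spectrum reduces to linear least squares; the array sizes and numbers below are invented.

        import numpy as np

        rng = np.random.default_rng(5)
        n_pix, n_labels = 1000, 8
        f0 = rng.random(n_pix)                                  # model flux at the anchor grid point
        G = 0.01 * rng.standard_normal((n_pix, n_labels))       # gradient spectra: d(flux)/d(label)
        theta0 = np.zeros(n_labels)

        def approx_spectrum(theta):
            """First-order approximation: f(theta) ~ f0 + G @ (theta - theta0)."""
            return f0 + G @ (theta - theta0)

        # With this linearization, label fitting becomes ordinary least squares.
        observed = approx_spectrum(np.array([0.1, -0.2, 0.05, 0.0, 0.0, 0.3, 0.0, -0.1]))
        theta_fit, *_ = np.linalg.lstsq(G, observed - f0, rcond=None)
        print(np.round(theta_fit, 3))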

  12. A model for the influences of soluble and insoluble solids, and treated volume on the ultraviolet-C resistance of heat-stressed Salmonella enterica in simulated fruit juices.

    PubMed

    Estilo, Emil Emmanuel C; Gabriel, Alonzo A

    2018-02-01

    This study was conducted to determine the effects of intrinsic juice characteristics, namely insoluble solids (IS, 0-3% w/v) and soluble solids (SS, 0-70 °Brix), and the extrinsic process parameter treated volume (250-1000 mL) on the UV-C inactivation rates of heat-stressed Salmonella enterica in simulated fruit juices (SFJs). A Rotatable Central Composite Design of Experiment (CCRD) was used to determine combinations of the test variables, while Response Surface Methodology (RSM) was used to characterize and quantify the influences of the test variables on microbial inactivation. The heat-stressed cells exhibited log-linear UV-C inactivation behavior (R² = 0.952 to 0.999) in all CCRD combinations, with D_UV-C values ranging from 10.0 to 80.2 mJ/cm². The D_UV-C values obtained from the CCRD significantly fitted a quadratic model (P < 0.0001). RSM results showed that individual linear terms (IS, SS, volume), individual quadratic terms (IS² and volume²), and factor interactions (IS × volume and SS × volume) significantly influenced UV-C inactivation. Validation of the model in SFJs with combinations not included in the CCRD showed that the predictions were within acceptable error margins. Copyright © 2017. Published by Elsevier Ltd.

  13. Model-Free CUSUM Methods for Person Fit

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; Shi, Min

    2009-01-01

    This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…

  14. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
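    A hedged sketch of the underlying idea (ordinary least squares rather than the paper's objective Bayesian regression, with invented data and window): a straight line is fitted to I-V points near V = 0, and the parameter covariance gives a fit-only uncertainty for the intercept, i.e. Isc.

        import numpy as np

        rng = np.random.default_rng(6)
        v = np.linspace(-0.05, 0.15, 21)                        # volts, near short circuit
        i = 9.80 - 2.0 * v + rng.normal(0, 0.005, v.size)       # amps, locally ~linear

        window = np.abs(v) <= 0.10                              # data window (the choice matters)
        coeffs, cov = np.polyfit(v[window], i[window], 1, cov=True)
        isc = np.polyval(coeffs, 0.0)                           # intercept at V = 0
        isc_std = float(np.sqrt(cov[1, 1]))                     # its standard uncertainty from the fit
        print(f"Isc = {isc:.3f} A +/- {isc_std:.3f} A (fit uncertainty only; excludes model discrepancy)")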

  15. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  16. Analysis technique for controlling system wavefront error with active/adaptive optics

    NASA Astrophysics Data System (ADS)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.

  17. Nonparametric Model of Smooth Muscle Force Production During Electrical Stimulation.

    PubMed

    Cole, Marc; Eikenberry, Steffen; Kato, Takahide; Sandler, Roman A; Yamashiro, Stanley M; Marmarelis, Vasilis Z

    2017-03-01

    A nonparametric model of smooth muscle tension response to electrical stimulation was estimated using the Laguerre expansion technique of nonlinear system kernel estimation. The experimental data consisted of force responses of smooth muscle to energy-matched alternating single pulse and burst current stimuli. The burst stimuli led to at least a 10-fold increase in peak force in smooth muscle from Mytilus edulis, despite the constant energy constraint. A linear model did not fit the data. However, a second-order model fit the data accurately, so the higher-order models were not required to fit the data. Results showed that smooth muscle force response is not linearly related to the stimulation power.

  18. Inference of gene regulatory networks from genome-wide knockout fitness data

    PubMed Central

    Wang, Liming; Wang, Xiaodong; Arkin, Adam P.; Samoilov, Michael S.

    2013-01-01

    Motivation: Genome-wide fitness is an emerging type of high-throughput biological data generated for individual organisms by creating libraries of knockouts, subjecting them to broad ranges of environmental conditions, and measuring the resulting clone-specific fitnesses. Since fitness is an organism-scale measure of gene regulatory network behaviour, it may offer certain advantages when insights into such phenotypical and functional features are of primary interest over individual gene expression. Previous works have shown that genome-wide fitness data can be used to uncover novel gene regulatory interactions, when compared with results of more conventional gene expression analysis. Yet, to date, few algorithms have been proposed for systematically using genome-wide mutant fitness data for gene regulatory network inference. Results: In this article, we describe a model and propose an inference algorithm for using fitness data from knockout libraries to identify underlying gene regulatory networks. Unlike most prior methods, the presented approach captures not only structural, but also dynamical and non-linear nature of biomolecular systems involved. A state–space model with non-linear basis is used for dynamically describing gene regulatory networks. Network structure is then elucidated by estimating unknown model parameters. Unscented Kalman filter is used to cope with the non-linearities introduced in the model, which also enables the algorithm to run in on-line mode for practical use. Here, we demonstrate that the algorithm provides satisfying results for both synthetic data as well as empirical measurements of GAL network in yeast Saccharomyces cerevisiae and TyrR–LiuR network in bacteria Shewanella oneidensis. Availability: MATLAB code and datasets are available to download at http://www.duke.edu/∼lw174/Fitness.zip and http://genomics.lbl.gov/supplemental/fitness-bioinf/ Contact: wangx@ee.columbia.edu or mssamoilov@lbl.gov Supplementary information: Supplementary data are available at Bioinformatics online PMID:23271269

  19. Local Fitting of the Kohn-Sham Density in a Gaussian and Plane Waves Scheme for Large-Scale Density Functional Theory Simulations.

    PubMed

    Golze, Dorothea; Iannuzzi, Marcella; Hutter, Jürg

    2017-05-09

    A local resolution-of-the-identity (LRI) approach is introduced in combination with the Gaussian and plane waves (GPW) scheme to enable large-scale Kohn-Sham density functional theory calculations. In GPW, the computational bottleneck is typically the description of the total charge density on real-space grids. Introducing the LRI approximation, the linear scaling of the GPW approach with respect to system size is retained, while the prefactor for the grid operations is reduced. The density fitting is an O(N) scaling process implemented by approximating the atomic pair densities by an expansion in one-center fit functions. The computational cost for the grid-based operations becomes negligible in LRIGPW. The self-consistent field iteration is up to 30 times faster for periodic systems dependent on the symmetry of the simulation cell and on the density of grid points. However, due to the overhead introduced by the local density fitting, single point calculations and complete molecular dynamics steps, including the calculation of the forces, are effectively accelerated by up to a factor of ∼10. The accuracy of LRIGPW is assessed for different systems and properties, showing that total energies, reaction energies, intramolecular and intermolecular structure parameters are well reproduced. LRIGPW yields also high quality results for extended condensed phase systems such as liquid water, ice XV, and molecular crystals.

  20. Representativeness of the ground observational sites and up-scaling of the point soil moisture measurements

    NASA Astrophysics Data System (ADS)

    Chen, Jinlei; Wen, Jun; Tian, Hui

    2016-02-01

    Soil moisture plays an increasingly important role in the cycle of energy-water exchange, climate change, and hydrologic processes. It is usually measured at a point site, but regional soil moisture is essential for validating remote sensing products and numerical modeling results. In the study reported in this paper, the minimal number of required sites (NRS) for establishing a research observational network and the representative single sites for regional soil moisture estimation are discussed using soil moisture data derived from the "Maqu soil moisture observational network" (101°40′-102°40′E, 33°30′-35°45′N), which is supported by the Chinese Academy of Sciences. Furthermore, the best up-scaling method suitable for this network has been studied by evaluating four commonly used up-scaling methods. The results showed that (1) under a given accuracy requirement (R ⩾ 0.99, RMSD ⩽ 0.02 m³/m³), the NRS at both 5 and 10 cm depth is 10. (2) Representativeness of the sites has been validated by time stability analysis (TSA), time sliding correlation analysis (TSCA) and optimal combination of sites (OCS). NST01 is the most representative site at 5 cm depth for the first two methods; NST07 and NST02 are the most representative sites at 10 cm depth. The optimum combination of sites at 5 cm depth is NST01, NST02, and NST07; NST05, NST08, and NST13 are the best group at 10 cm depth. (3) Linear fitting, compared with the other three methods, is the best up-scaling method for all types of representative sites obtained above, and linear regression equations between a single site and regional soil moisture are established accordingly. The "single site" obtained by OCS has the greatest up-scaling effect, and TSCA takes second place. (4) The linear fitting equations show good practicability in estimating the variation of regional soil moisture from July 3, 2013 to July 3, 2014, a period when a large number of observed soil moisture data are missing.
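    The "linear fitting" up-scaling step can be illustrated with a short, hedged sketch (synthetic values standing in for the Maqu observations): regional soil moisture, taken here as the network mean, is regressed on one representative site and the fitted line is then used to estimate the regional value from that single site.

        import numpy as np

        rng = np.random.default_rng(7)
        site = 0.20 + 0.08 * rng.random(365)                      # daily VWC at one site, m3/m3
        regional = 0.03 + 0.85 * site + rng.normal(0, 0.01, 365)  # network-mean VWC (synthetic)

        slope, intercept = np.polyfit(site, regional, 1)          # the up-scaling equation
        predicted = slope * site + intercept
        r = np.corrcoef(regional, predicted)[0, 1]
        rmsd = float(np.sqrt(np.mean((regional - predicted) ** 2)))
        print(f"regional ~ {slope:.2f} * site + {intercept:.3f}  (r = {r:.2f}, RMSD = {rmsd:.3f} m3/m3)")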

  1. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods, but few of them have been applied in the field of LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. Experiments on simulated background correction indicated that the spline interpolation method achieved the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method after background correction. All of these background correction methods achieve larger SBR values than that obtained before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods yield improved quantitative results for Cu compared with those obtained before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the linear correlation coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and the model-free method. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
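    A hedged sketch of the spline-interpolation idea (not the authors' implementation): points taken to be pure background, here simply local minima of a synthetic spectrum, are interpolated with a cubic spline, and subtracting the interpolated curve removes the continuous background. The spectrum, the Cu I line position, and the knot-selection rule are illustrative assumptions.

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.signal import argrelmin

        rng = np.random.default_rng(8)
        wavelength = np.linspace(200, 800, 3000)                            # nm
        background = 50 + 30 * np.exp(-(wavelength - 450) ** 2 / 2e4)       # smooth continuum
        lines = 200 * np.exp(-(wavelength - 324.7) ** 2 / 0.05)             # e.g. a Cu I emission line
        spectrum = background + lines + rng.normal(0, 1.0, wavelength.size)

        # Use well-separated local minima as background anchor points for the spline.
        minima = argrelmin(spectrum, order=50)[0]
        spline = CubicSpline(wavelength[minima], spectrum[minima])
        corrected = spectrum - spline(wavelength)                           # background-corrected spectrum
        print(f"estimated background near 324.7 nm: {float(spline(324.7)):.1f} counts")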

  2. Low energy constants of SU(2) partially quenched chiral perturbation theory from Nf = 2 + 1 domain wall QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyle, P. A.; Christ, N. H.; Garron, N.

    2016-03-09

    Here, we have performed fits of the pseudoscalar masses and decay constants, from a variety of the RBC-UKQCD Collaboration's domain wall fermion ensembles, to SU(2) partially quenched chiral perturbation theory at next-to-leading order (NLO) and next-to-next-to-leading order (NNLO). We report values for 9 NLO and 8 linearly independent combinations of NNLO partially quenched low-energy constants, which we compare to other lattice and phenomenological determinations. We discuss the size of successive terms in the chiral expansion and use our large set of low-energy constants to make predictions for mass splittings due to QCD isospin-breaking effects and the S-wave ππ scattering lengths. Lastly, we conclude that, for the range of pseudoscalar masses explored in this work, 115 MeV ≲ m_PS ≲ 430 MeV, the NNLO SU(2) expansion is quite robust and can fit lattice data with percent-scale accuracy.

  3. Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin

    NASA Astrophysics Data System (ADS)

    He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu

    2017-06-01

    This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta from a tin sample with a V-shaped groove etched in its free surface are collected by a soft recovery technique. Subsequently, the produced fragments are automatically detected with fine post-shot analysis techniques, including X-ray micro-tomography and an improved watershed method. To characterize the size distributions of the fragments, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison shows that the proposed model provides a far more reasonable fit for the laser shock-loaded tin.
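    Purely as a hedged illustration of fitting a linear combination of exponential distributions to fragment sizes (not the authors' estimator), the snippet below fits a two-component exponential mixture to a synthetic histogram of fragment sizes with scipy's curve_fit.

        import numpy as np
        from scipy.optimize import curve_fit

        def mixture_pdf(s, w, lam1, lam2):
            """Weighted sum of two exponential densities over fragment size s."""
            return w * lam1 * np.exp(-lam1 * s) + (1 - w) * lam2 * np.exp(-lam2 * s)

        rng = np.random.default_rng(9)
        sizes = np.concatenate([rng.exponential(5.0, 3000),     # fine fragments (arbitrary units)
                                rng.exponential(40.0, 1000)])   # coarse fragments
        counts, edges = np.histogram(sizes, bins=60, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])

        params, _ = curve_fit(mixture_pdf, centers, counts,
                              p0=[0.5, 0.1, 0.02],
                              bounds=([0.0, 0.0, 0.0], [1.0, np.inf, np.inf]))
        print(params)   # mixture weight and the two decay constants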

  4. Multivariate meta-analysis using individual participant data

    PubMed Central

    Riley, R. D.; Price, M. J.; Jackson, D.; Wardle, M.; Gueyffier, F.; Wang, J.; Staessen, J. A.; White, I. R.

    2016-01-01

    When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment–covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. PMID:26099484

  5. Surface complexation modeling of zinc sorption onto ferrihydrite.

    PubMed

    Dyer, James A; Trivedi, Paras; Scrivner, Noel C; Sparks, Donald L

    2004-02-01

    A previous study involving lead(II) [Pb(II)] sorption onto ferrihydrite over a wide range of conditions highlighted the advantages of combining molecular- and macroscopic-scale investigations with surface complexation modeling to predict Pb(II) speciation and partitioning in aqueous systems. In this work, an extensive collection of new macroscopic and spectroscopic data was used to assess the ability of the modified triple-layer model (TLM) to predict single-solute zinc(II) [Zn(II)] sorption onto 2-line ferrihydrite in NaNO3 solutions as a function of pH, ionic strength, and concentration. Regression of constant-pH isotherm data, together with potentiometric titration and pH edge data, was a much more rigorous test of the modified TLM than fitting pH edge data alone. When coupled with valuable input from spectroscopic analyses, good fits of the isotherm data were obtained with a one-species, one-Zn-sorption-site model using the bidentate-mononuclear surface complex (≡FeO)2Zn; however, surprisingly, both the density of Zn(II) sorption sites and the value of the best-fit equilibrium "constant" for the bidentate-mononuclear complex had to be adjusted with pH to adequately fit the isotherm data. Although spectroscopy provided some evidence for multinuclear surface complex formation at surface loadings approaching site saturation at pH ≥ 6.5, the assumption of a bidentate-mononuclear surface complex provided acceptable fits of the sorption data over the entire range of conditions studied. Regressing edge data in the absence of isotherm and spectroscopic data resulted in a fair number of surface-species/site-type combinations that provided acceptable fits of the edge data, but unacceptable fits of the isotherm data. A linear relationship between log K((≡FeO)2Zn) and pH was found, given by log K((≡FeO)2Zn, at 1 g/L) = 2.058(pH) − 6.131. In addition, a surface activity coefficient term was introduced to the model to reduce the ionic strength dependence of sorption. The results of this research and previous work with Pb(II) indicate that the existing thermodynamic framework for the modified TLM is able to reproduce the metal sorption data only over a limited range of conditions. For this reason, much work still needs to be done in fine-tuning the thermodynamic framework and databases for the TLM.

  6. An approximation of herd effect due to vaccinating children against seasonal influenza – a potential solution to the incorporation of indirect effects into static models

    PubMed Central

    2013-01-01

    Background Indirect herd effect from vaccination of children offers potential for improving the effectiveness of influenza prevention in the remaining unvaccinated population. Static models used in cost-effectiveness analyses cannot dynamically capture herd effects. The objective of this study was to develop a methodology to allow herd effect associated with vaccinating children against seasonal influenza to be incorporated into static models evaluating the cost-effectiveness of influenza vaccination. Methods Two previously published linear equations for approximation of herd effects in general were compared with the results of a structured literature review undertaken using PubMed searches to identify data on herd effects specific to influenza vaccination. A linear function was fitted to point estimates from the literature using the sum of squared residuals. Results The literature review identified 21 publications on 20 studies for inclusion. Six studies provided data on a mathematical relationship between effective vaccine coverage in subgroups and reduction of influenza infection in a larger unvaccinated population. These supported a linear relationship when effective vaccine coverage in a subgroup population was between 20% and 80%. Three studies evaluating herd effect at a community level, specifically induced by vaccinating children, provided point estimates for fitting linear equations. The fitted linear equation for herd protection in the target population for vaccination (children) was slightly less conservative than a previously published equation for herd effects in general. The fitted linear equation for herd protection in the non-target population was considerably less conservative than the previously published equation. Conclusions This method of approximating herd effect requires simple adjustments to the annual baseline risk of influenza in static models: (1) for the age group targeted by the childhood vaccination strategy (i.e. children); and (2) for other age groups not targeted (e.g. adults and/or elderly). Two approximations provide a linear relationship between effective coverage and reduction in the risk of infection. The first is a conservative approximation, recommended as a base-case for cost-effectiveness evaluations. The second, fitted to data extracted from a structured literature review, provides a less conservative estimate of herd effect, recommended for sensitivity analyses. PMID:23339290
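
    A minimal sketch of the fitting step described above: a straight line is fitted by least squares (minimising the sum of squared residuals) to hypothetical (coverage, risk-reduction) point estimates, and the fitted line is then used to adjust a baseline risk in a static model. The numbers below are invented for illustration and are not the study's extracted data.

```python
import numpy as np

# hypothetical point estimates: (effective vaccine coverage in children,
# relative reduction in infection risk observed in the unvaccinated population)
coverage = np.array([0.2, 0.35, 0.5, 0.65, 0.8])
risk_reduction = np.array([0.10, 0.22, 0.33, 0.47, 0.58])

# least-squares line through the points
slope, intercept = np.polyfit(coverage, risk_reduction, 1)

def herd_adjustment(effective_coverage):
    """Approximate relative reduction in the baseline influenza risk of the
    unvaccinated, clipped to the 0-1 range usable in a static model."""
    return float(np.clip(slope * effective_coverage + intercept, 0.0, 1.0))

print(slope, intercept, herd_adjustment(0.6))
```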

  7. Copper Nanoparticle Induced Cytotoxicity to Nitrifying Bacteria ...

    EPA Pesticide Factsheets

    With the inclusion of engineered nanomaterials in industrial processes and consumer products, wastewater treatment plants (WWTPs) will serve as a major sink for these emerging contaminants. Previous research has demonstrated that nanomaterials are potentially toxic to microbial communities utilized in biological wastewater treatment (BWT). Copper-based nanoparticles (CuNPs) are of particular interest based on their increasing use in wood treatment, paints, household products, coatings, and byproducts of semiconductor manufacturing. A critical step in BWT is nutrient removal via denitrification. This study examined the potential toxicity of bare and polyvinylpyrrolidone (PVP)-coated CuO and Cu2O nanoparticles, as well as Cu ions, to microbial communities responsible for nitrogen removal in BWT. Inhibition was inferred from changes to the specific oxygen uptake rate (sOUR) in the absence and presence of Cu ions and CuNPs. X-ray absorption fine structure spectroscopy, with Linear Combination Fitting (LCF), was utilized to track changes to Cu speciation throughout exposure. Results indicate that the dissolution of Cu ions from CuNPs drives microbial inhibition. The presence of a PVP coating on CuNPs has little effect on inhibition. LCF analysis of the biomass combined with metal partitioning analysis supports the current hypothesis that Cu-induced cytotoxicity is primarily caused by reactive oxygen species formed from ionic Cu in solution via catalytic reaction inter
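
    Linear combination fitting of this kind is commonly implemented as a non-negative least-squares problem over a set of reference spectra. The sketch below is a generic illustration with synthetic Gaussian "spectra"; the reference identities in the comments are placeholders, not the standards used in the study.

```python
import numpy as np
from scipy.optimize import nnls

def linear_combination_fit(sample, references):
    """Fit a measured spectrum as a non-negative linear combination of reference
    spectra (columns of `references`), then renormalise the weights so the
    reported species fractions sum to 1."""
    weights, residual = nnls(references, sample)
    fractions = weights / weights.sum()
    return fractions, residual

# toy spectra on a common energy grid: three Cu-like reference species
energy = np.linspace(8970, 9050, 400)
ref = np.column_stack([
    np.exp(-0.5 * ((energy - 8985) / 4.0) ** 2),   # placeholder "Cu(0)-like" feature
    np.exp(-0.5 * ((energy - 8997) / 5.0) ** 2),   # placeholder "Cu2O-like" feature
    np.exp(-0.5 * ((energy - 9006) / 6.0) ** 2),   # placeholder "CuO-like" feature
])
true_fractions = np.array([0.1, 0.3, 0.6])
sample = ref @ true_fractions + 0.01 * np.random.default_rng(1).normal(size=energy.size)

fractions, residual = linear_combination_fit(sample, ref)
print(np.round(fractions, 2), residual)
```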

  8. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting non-linear models using least squares fitting (simple searches obtain ~10,000 references - this doesn't include those who use it, but don't know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; on the other hand the downward gradient methods have a much wider domain of convergence, but converge extremely slowly nearer the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. Only those that are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. There have been ways found to use successive non-linear least squares fitting to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE estimator for Poisson deviates that has convergence domains and rates comparable to the non-linear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure not to minimize the least squares measure, but the MLE for Poisson deviates.
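
    One common way to reuse a Levenberg-Marquardt-style least-squares routine for the Poisson MLE is to hand it the square roots of the per-bin deviance terms, so that the minimised sum of squares equals the Poisson MLE measure. The sketch below does this with SciPy's trust-region least_squares (a close relative of L-M, not the authors' code) on a synthetic single-exponential histogram; all parameter values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, t):
    """Single-exponential decay plus constant background (e.g. a lifetime histogram)."""
    amp, tau, bkg = params
    return amp * np.exp(-t / tau) + bkg

def poisson_deviance_residuals(params, t, counts):
    """Square roots of the per-bin Poisson deviance terms, so that the sum of squares
    minimised by the routine equals the Poisson MLE measure 2*sum(m - d + d*ln(d/m))."""
    m = model(params, t)
    d = counts
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(d > 0, d * np.log(d / m), 0.0)
    dev = m - d + term
    return np.sqrt(2.0 * np.clip(dev, 0.0, None))

rng = np.random.default_rng(2)
t = np.arange(0.0, 20.0, 0.1)
true_params = (50.0, 3.0, 1.0)
counts = rng.poisson(model(true_params, t))

fit = least_squares(poisson_deviance_residuals, x0=(30.0, 1.0, 0.5),
                    args=(t, counts), bounds=([0.0, 0.1, 1e-6], np.inf))
print(np.round(fit.x, 2))   # should land near (50, 3, 1)
```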

  9. Quantum algorithm for linear regression

    NASA Astrophysics Data System (ADS)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

  10. Mixture Model for Determination of Shock Equation of State

    DTIC Science & Technology

    2012-07-25

    not considered in this paper. III. COMPARISON WITH EXPERIMENTAL DATA A. Two-constituent composites 1. Uranium-rhodium composite Uranium-rhodium (U...sound speed, Co, and S were determined from linear least squares fit to the available data [22] as shown in Figs. 1(a) and 1(b) for uranium and rhodium ...overpredicts the experimental data, with an average deviation, dUs/Us, of 0.05, shown in Fig. 2(b). The linear fits for uranium and rhodium are shown for

  11. Arsenic species in weathering mine tailings and biogenic solids at the Lava Cap Mine Superfund Site, Nevada City, CA

    PubMed Central

    2011-01-01

    Background A realistic estimation of the health risk of human exposure to solid-phase arsenic (As) derived from historic mining operations is a major challenge to redevelopment of California's famed "Mother Lode" region. Arsenic, a known carcinogen, occurs in multiple solid forms that vary in bioaccessibility. X-ray absorption fine-structure spectroscopy (XAFS) was used to identify and quantify the forms of As in mine wastes and biogenic solids at the Lava Cap Mine Superfund (LCMS) site, a historic "Mother Lode" gold mine. Principal component analysis (PCA) was used to assess variance within water chemistry, solids chemistry, and XAFS spectral datasets. Linear combination, least-squares fits constrained in part by PCA results were then used to quantify arsenic speciation in XAFS spectra of tailings and biogenic solids. Results The highest dissolved arsenic concentrations were found in Lost Lake porewater and in a groundwater-fed pond in the tailings deposition area. Iron, dissolved oxygen, alkalinity, specific conductivity, and As were the major variables in the water chemistry PCA. Arsenic was, on average, 14 times more concentrated in biologically-produced iron (hydr)oxide than in mine tailings. Phosphorous, manganese, calcium, aluminum, and As were the major variables in the solids chemistry PCA. Linear combination fits to XAFS spectra indicate that arsenopyrite (FeAsS), the dominant form of As in ore material, remains abundant (average: 65%) in minimally-weathered ore samples and water-saturated tailings at the bottom of Lost Lake. However, tailings that underwent drying and wetting cycles contain an average of only 30% arsenopyrite. The predominant products of arsenopyrite weathering were identified by XAFS to be As-bearing Fe (hydr)oxide and arseniosiderite (Ca2Fe(AsO4)3O3•3H2O). Existence of the former species is not in question, but the presence of the latter species was not confirmed by additional measurements, so its identification is less certain. The linear combination, least-squares fits totals of several samples deviate by more than ± 20% from 100%, suggesting that additional phases may be present that were not identified or evaluated in this study. Conclusions Sub- to anoxic conditions minimize dissolution of arsenopyrite at the LCMS site, but may accelerate the dissolution of As-bearing secondary iron phases such as Fe3+-oxyhydroxides and arseniosiderite, if sufficient organic matter is present to spur anaerobic microbial activity. Oxidizing, dry conditions favor the stabilization of secondary phases, while promoting oxidative breakdown of the primary sulfides. The stability of both primary and secondary As phases is likely to be at a minimum under cyclic wet-dry conditions. Biogenic iron (hydr)oxide flocs can sequester significant amounts of arsenic; this property may be useful for treatment of perpetual sources of As such as mine adit water, but the fate of As associated with natural accumulations of floc material needs to be assessed. PMID:21261983

  12. Arsenic species in weathering mine tailings and biogenic solids at the Lava Cap Mine Superfund Site, Nevada City, CA.

    PubMed

    Foster, Andrea L; Ashley, Roger P; Rytuba, James J

    2011-01-24

    A realistic estimation of the health risk of human exposure to solid-phase arsenic (As) derived from historic mining operations is a major challenge to redevelopment of California's famed "Mother Lode" region. Arsenic, a known carcinogen, occurs in multiple solid forms that vary in bioaccessibility. X-ray absorption fine-structure spectroscopy (XAFS) was used to identify and quantify the forms of As in mine wastes and biogenic solids at the Lava Cap Mine Superfund (LCMS) site, a historic "Mother Lode" gold mine. Principal component analysis (PCA) was used to assess variance within water chemistry, solids chemistry, and XAFS spectral datasets. Linear combination, least-squares fits constrained in part by PCA results were then used to quantify arsenic speciation in XAFS spectra of tailings and biogenic solids. The highest dissolved arsenic concentrations were found in Lost Lake porewater and in a groundwater-fed pond in the tailings deposition area. Iron, dissolved oxygen, alkalinity, specific conductivity, and As were the major variables in the water chemistry PCA. Arsenic was, on average, 14 times more concentrated in biologically-produced iron (hydr)oxide than in mine tailings. Phosphorous, manganese, calcium, aluminum, and As were the major variables in the solids chemistry PCA. Linear combination fits to XAFS spectra indicate that arsenopyrite (FeAsS), the dominant form of As in ore material, remains abundant (average: 65%) in minimally-weathered ore samples and water-saturated tailings at the bottom of Lost Lake. However, tailings that underwent drying and wetting cycles contain an average of only 30% arsenopyrite. The predominant products of arsenopyrite weathering were identified by XAFS to be As-bearing Fe (hydr)oxide and arseniosiderite (Ca2Fe(AsO4)3O3•3H2O). Existence of the former species is not in question, but the presence of the latter species was not confirmed by additional measurements, so its identification is less certain. The linear combination, least-squares fits totals of several samples deviate by more than ± 20% from 100%, suggesting that additional phases may be present that were not identified or evaluated in this study. Sub- to anoxic conditions minimize dissolution of arsenopyrite at the LCMS site, but may accelerate the dissolution of As-bearing secondary iron phases such as Fe3+-oxyhydroxides and arseniosiderite, if sufficient organic matter is present to spur anaerobic microbial activity. Oxidizing, dry conditions favor the stabilization of secondary phases, while promoting oxidative breakdown of the primary sulfides. The stability of both primary and secondary As phases is likely to be at a minimum under cyclic wet-dry conditions. Biogenic iron (hydr)oxide flocs can sequester significant amounts of arsenic; this property may be useful for treatment of perpetual sources of As such as mine adit water, but the fate of As associated with natural accumulations of floc material needs to be assessed.

  13. Modeling of boldine alkaloid adsorption onto pure and propyl-sulfonic acid-modified mesoporous silicas. A comparative study.

    PubMed

    Geszke-Moritz, Małgorzata; Moritz, Michał

    2016-12-01

    The present study deals with the adsorption of boldine onto pure and propyl-sulfonic acid-functionalized SBA-15, SBA-16 and mesocellular foam (MCF) materials. Siliceous adsorbents were characterized by nitrogen sorption analysis, transmission electron microscopy (TEM), scanning electron microscopy (SEM), Fourier-transform infrared (FT-IR) spectroscopy and thermogravimetric analysis. The equilibrium adsorption data were analyzed using the Langmuir, Freundlich, Redlich-Peterson, and Temkin isotherms. Moreover, the Dubinin-Radushkevich and Dubinin-Astakhov isotherm models based on the Polanyi adsorption potential were employed. The latter was calculated using two alternative formulas including solubility-normalized (S-model) and empirical C-model. In order to find the best-fit isotherm, both linear regression and nonlinear fitting analysis were carried out. The Dubinin-Astakhov (S-model) isotherm revealed the best fit to the experimental points for adsorption of boldine onto pure mesoporous materials using both linear and nonlinear fitting analysis. Meanwhile, the process of boldine sorption onto modified silicas was described the best by the Langmuir and Temkin isotherms using linear regression and nonlinear fitting analysis, respectively. The values of adsorption energy (below 8kJ/mol) indicate the physical nature of boldine adsorption onto unmodified silicas whereas the ionic interactions seem to be the main force of alkaloid adsorption onto functionalized sorbents (energy of adsorption above 8kJ/mol). Copyright © 2016 Elsevier B.V. All rights reserved.
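
    For context, the linearised-versus-nonlinear comparison mentioned above can be reproduced generically for a Langmuir isotherm. The sketch below uses invented equilibrium data and the classical Ce/qe linearisation; it is not the study's measurements or full set of isotherm models.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# hypothetical equilibrium data (Ce in mg/L, qe in mg/g)
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([18.0, 35.0, 52.0, 68.0, 80.0, 88.0])

# non-linear fit directly on the isotherm equation
(qmax_nl, KL_nl), _ = curve_fit(langmuir, Ce, qe, p0=(100.0, 0.05))

# linearised fit: Ce/qe = 1/(qmax*KL) + Ce/qmax, fitted by linear regression
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax_lin, KL_lin = 1.0 / slope, slope / intercept

print("nonlinear:", qmax_nl, KL_nl)
print("linearised:", qmax_lin, KL_lin)
```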

  14. A non-linear piezoelectric actuator calibration using N-dimensional Lissajous figure

    NASA Astrophysics Data System (ADS)

    Albertazzi, A.; Viotti, M. R.; Veiga, C. L. N.; Fantin, A. V.

    2016-08-01

    Piezoelectric translators (PZTs) are very often used as phase shifters in interferometry. However, they typically present a non-linear behavior and strong hysteresis. The use of an additional resistive or capacitive sensor makes it possible to linearize the response of the PZT by feedback control. This approach works well, but makes the device more complex and expensive. A less expensive approach uses a non-linear calibration. In this paper, the authors used data from at least five interferograms to form N-dimensional Lissajous figures to establish the actual relationship between the applied voltages and the resulting phase shifts [1]. N-dimensional Lissajous figures are formed when N sinusoidal signals are combined in an N-dimensional space, where one signal is assigned to each axis. It can be verified that the resulting N-dimensional ellipse lies in a 2D plane. By fitting an ellipse equation to the resulting 2D ellipse it is possible to accurately compute the resulting phase value for each interferogram. In this paper, the relationship between the resulting phase shift and the applied voltage is simultaneously established for a set of 12 increments by a fourth degree polynomial. The results in speckle interferometry show that, after two or three iterations, the calibration error is usually smaller than 1°.
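
    A simple way to fit the 2D ellipse is a direct linear least-squares fit of a general conic to the projected points. The sketch below assumes the ellipse does not pass through the origin (so the constant term can be normalised to 1) and uses synthetic phase-shifted signals rather than real interferogram intensities; it is one possible fitting step, not the authors' full calibration procedure.

```python
import numpy as np

def fit_ellipse(x, y):
    """Direct linear least-squares fit of the conic A*x^2 + B*x*y + C*y^2 + D*x + E*y = 1,
    valid as long as the ellipse does not pass through the origin."""
    design = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(design, np.ones_like(x), rcond=None)
    return coeffs  # (A, B, C, D, E)

# synthetic 2D projection of a Lissajous figure: two phase-shifted sinusoidal
# signals trace an ellipse over 12 phase increments
phase = np.linspace(0, 2 * np.pi, 12, endpoint=False)
x = 1.0 + 0.8 * np.cos(phase)
y = 1.2 + 0.5 * np.cos(phase + np.pi / 3)   # relative shift of 60 degrees

print(np.round(fit_ellipse(x, y), 3))
```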

  15. SU-F-T-130: [18F]-FDG Uptake Dose Response in Lung Correlates Linearly with Proton Therapy Dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, D; Titt, U; Mirkovic, D

    2016-06-15

    Purpose: Analysis of clinical outcomes in lung cancer patients treated with protons using 18F-FDG uptake in lung as a measure of dose response. Methods: A test case lung cancer patient was selected in an unbiased way. The test patient’s treatment planning and post treatment positron emission tomography (PET) were collected from picture archiving and communication system at the UT M.D. Anderson Cancer Center. Average computerized tomography scan was registered with post PET/CT through both rigid and deformable registrations for selected region of interest (ROI) via VelocityAI imaging informatics software. For the voxels in the ROI, a system that extracts the Standard Uptake Value (SUV) from PET was developed, and the corresponding relative biological effectiveness (RBE) weighted (both variable and constant) dose was computed using the Monte Carlo (MC) methods. The treatment planning system (TPS) dose was also obtained. Using histogram analysis, the voxel average normalized SUV vs. 3 different doses was obtained and linear regression fit was performed. Results: From the registration process, there were some regions that showed significant artifacts near the diaphragm and heart region, which yielded poor r-squared values when the linear regression fit was performed on normalized SUV vs. dose. Excluding these values, TPS fit yielded mean r-squared value of 0.79 (range 0.61–0.95), constant RBE fit yielded 0.79 (range 0.52–0.94), and variable RBE fit yielded 0.80 (range 0.52–0.94). Conclusion: A system that extracts SUV from PET to correlate between normalized SUV and various dose calculations was developed. A linear relation between normalized SUV and all three different doses was found.
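
    The per-ROI regression and r-squared reported above amount to an ordinary least-squares fit of normalised SUV against dose. A minimal sketch with invented per-voxel values follows; the variable names and numbers are hypothetical.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)

# hypothetical per-voxel values: planned dose (Gy) and normalised FDG uptake (SUV)
dose = rng.uniform(0.0, 60.0, 500)
suv = 0.01 * dose + 0.4 + rng.normal(0.0, 0.1, 500)

fit = linregress(dose, suv)
print(f"slope={fit.slope:.4f}  intercept={fit.intercept:.3f}  r^2={fit.rvalue**2:.2f}")
```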

  16. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin

    2017-02-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
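
    The comparison of Cole-Cole fits on the two frequency scales can be sketched generically with a nonlinear least-squares fit to complex permittivity data on logarithmically and linearly spaced grids. The single-pole model, the roughly tissue-like parameter values, and the noise model below are all assumptions for illustration, not the paper's statistical framework.

```python
import numpy as np
from scipy.optimize import least_squares

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cole_cole(f, eps_inf, d_eps, tau_ps, alpha, sigma):
    """Single-pole Cole-Cole complex relative permittivity with a conductivity term."""
    w = 2 * np.pi * f
    tau = tau_ps * 1e-12
    return eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha)) + sigma / (1j * w * EPS0)

def residuals(p, f, eps_meas):
    diff = cole_cole(f, *p) - eps_meas
    return np.concatenate([diff.real, diff.imag])

true = (4.0, 50.0, 7.0, 0.1, 0.7)        # invented, roughly tissue-like values
f_log = np.logspace(8, 10, 25)           # logarithmic scale, 100 MHz to 10 GHz
f_lin = np.linspace(1e8, 1e10, 25)       # linear scale over the same band
rng = np.random.default_rng(4)

for label, f in (("log", f_log), ("lin", f_lin)):
    eps = cole_cole(f, *true) * (1 + 0.01 * rng.normal(size=f.size))
    fit = least_squares(residuals, x0=(5.0, 40.0, 5.0, 0.05, 0.5), args=(f, eps),
                        bounds=([0, 0, 0, 0, 0], [np.inf, np.inf, np.inf, 1.0, np.inf]))
    print(label, np.round(fit.x, 3))
```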

  17. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data

    PubMed Central

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M.; O’Halloran, Martin

    2016-01-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues. PMID:28191324

  18. Assessment of Poisson, probit and linear models for genetic analysis of presence and number of black spots in Corriedale sheep.

    PubMed

    Peñagaricano, F; Urioste, J I; Naya, H; de los Campos, G; Gianola, D

    2011-04-01

    Black skin spots are associated with pigmented fibres in wool, an important quality fault. Our objective was to assess alternative models for genetic analysis of presence (BINBS) and number (NUMBS) of black spots in Corriedale sheep. During 2002-08, 5624 records from 2839 animals in two flocks, aged 1 through 6 years, were taken at shearing. Four models were considered: linear and probit for BINBS and linear and Poisson for NUMBS. All models included flock-year and age as fixed effects and animal and permanent environmental as random effects. Models were fitted to the whole data set and were also compared based on their predictive ability in cross-validation. Estimates of heritability ranged from 0.154 to 0.230 for BINBS and 0.269 to 0.474 for NUMBS. For BINBS, the probit model fitted slightly better to the data than the linear model. Predictions of random effects from these models were highly correlated, and both models exhibited similar predictive ability. For NUMBS, the Poisson model, with a residual term to account for overdispersion, performed better than the linear model in goodness of fit and predictive ability. Predictions of random effects from the Poisson model were more strongly correlated with those from BINBS models than those from the linear model. Overall, the use of probit or linear models for BINBS and of a Poisson model with a residual for NUMBS seems a reasonable choice for genetic selection purposes in Corriedale sheep. © 2010 Blackwell Verlag GmbH.

  19. An Inquiry-Based Linear Algebra Class

    ERIC Educational Resources Information Center

    Wang, Haohao; Posey, Lisa

    2011-01-01

    Linear algebra is a standard undergraduate mathematics course. This paper presents an overview of the design and implementation of an inquiry-based teaching material for the linear algebra course which emphasizes discovery learning, analytical thinking and individual creativity. The inquiry-based teaching material is designed to fit the needs of a…

  20. Ion strength limit of computed excess functions based on the linearized Poisson-Boltzmann equation.

    PubMed

    Fraenkel, Dan

    2015-12-05

    The linearized Poisson-Boltzmann (L-PB) equation is examined for its κ-range of validity (κ, Debye reciprocal length). This is done for the Debye-Hückel (DH) theory, i.e., using a single ion size, and for the SiS treatment (D. Fraenkel, Mol. Phys. 2010, 108, 1435), which extends the DH theory to the case of ion-size dissimilarity (therefore dubbed DH-SiS). The linearization of the PB equation has been claimed responsible for the DH theory's failure to fit with experiment at > 0.1 m; but DH-SiS fits with data of the mean ionic activity coefficient, γ± (molal), against m, even at m > 1 (κ > 0.33 Å⁻¹). The SiS expressions combine the overall extra-electrostatic potential energy of the smaller ion, as central ion-Ψa>b (κ), with that of the larger ion, as central ion-Ψb>a (κ); a and b are, respectively, the counterion and co-ion distances of closest approach. Ψa>b and Ψb>a are derived from the L-PB equation, which appears to conflict with their being effective up to moderate electrolyte concentrations (≈1 m). However, the L-PB equation can be valid up to κ ≥ 1.3 Å⁻¹ if one abandons the 1/κ criterion for its effectiveness and, instead, uses, as a criterion, the mean-field electrostatic interaction potential of the central ion with its ion cloud, at a radial distance dividing the cloud charge into two equal parts. The DH theory's failure is, thus, not because of using the L-PB equation; the lethal approximation is assigning a single size to the positive and negative ions. © 2015 Wiley Periodicals, Inc.

  1. Anthropometric measures as fitness indicators in primary school children: The Health Oriented Pedagogical Project (HOPP).

    PubMed

    Mamen, Asgeir; Fredriksen, Per Morten

    2018-05-01

    As children's fitness continues to decline, frequent and systematic monitoring of fitness is important. Easy-to-use and low-cost methods with acceptable accuracy are essential in screening situations. This study aimed to investigate how the measurements of body mass index (BMI), waist circumference (WC) and waist-to-height ratio (WHtR) relate to selected measurements of fitness in children. A total of 1731 children from grades 1 to 6 were selected who had a complete set of height, body mass, running performance, handgrip strength and muscle mass measurements. A composite fitness score was established from the sum of sex- and age-specific z-scores for the variables running performance, handgrip strength and muscle mass. This fitness z-score was compared to z-scores and quartiles of BMI, WC and WHtR using analysis of variance, linear regression and receiver operator characteristic analysis. The regression analysis showed that z-scores for BMI, WC and WHtR all were linearly related to the composite fitness score, with WHtR having the highest R 2 at 0.80. The correct classification of fit and unfit was relatively high for all three measurements. WHtR had the best prediction of fitness of the three with an area under the curve of 0.92 ( p < 0.001). BMI, WC and WHtR were all found to be feasible measurements, but WHtR had a higher precision in its classification into fit and unfit in this population.
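
    The composite fitness score described above is a sum of sex- and age-specific z-scores. The pandas sketch below builds such a score from simulated children with hypothetical column names and then checks how it tracks WHtR; it is an illustration of the scoring step only, not the study's dataset or full analysis.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n = 300
df = pd.DataFrame({
    "sex": rng.choice(["girl", "boy"], n),
    "grade": rng.integers(1, 7, n),
    "run": rng.normal(900, 120, n),      # running performance, hypothetical units
    "grip": rng.normal(15, 4, n),        # handgrip strength, kg
    "muscle": rng.normal(20, 5, n),      # muscle mass, kg
    "whtr": rng.normal(0.46, 0.05, n),   # waist-to-height ratio
})

def group_z(s):
    """z-score within a sex-by-grade group."""
    return (s - s.mean()) / s.std()

# sex- and age(grade)-specific z-scores, summed into a composite fitness score
for col in ("run", "grip", "muscle"):
    df[f"z_{col}"] = df.groupby(["sex", "grade"])[col].transform(group_z)
df["fitness_z"] = df[["z_run", "z_grip", "z_muscle"]].sum(axis=1)

# crude look at how WHtR tracks the composite score in this simulated sample
print(df["whtr"].corr(df["fitness_z"]))
```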

  2. Speciation of Soil Phosphorus Assessed by XANES Spectroscopy at Different Spatial Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hesterberg, Dean; McNulty, Ian; Thieme, Juergen

    Precise management of soil phosphorus (P) to meet competing demands of agriculture and environmental protection can benefit from more comprehensive characterization of P speciation in soils. Our objectives were to provide spatial context for spectroscopic analyses of soil P speciation in relation to molecular-scale species and landscape-scale management of P, and to compare soil P-species diversity from spectroscopic measurements at submicron and millimeter scales. The spatial range of ~26 orders of magnitude between atomic and field scales presents a challenge to upscaling and downscaling information from spectroscopic analyses of soils. Scanning fluorescence X-ray microscopy images of a 50-mm × 45-mm area of an organic soil sample showed heterogeneous distributions of P, Al, and Si. Microscale X-ray absorption near edge structure (μ-XANES) spectra collected at the P K-edge from 12 spots on the soil sample exhibited diverse features that indicated variations in highly localized P speciation. Linear combination fitting analysis of the μ-XANES spectra included various proportions of three standards that appeared in fits for most spots and five standards that appeared in fits for one spot each. The fit to a bulk-soil spectrum was dominated by two of the common standards in the μ-XANES fits, and a fit to the sum of μ-XANES spectra included four of the standards. Lastly, these results illustrate a gain in P species sensitivity from spatially resolved XANES analysis. Integrating spectroscopic analyses from multiple scales determines soil P species diversity and will ultimately help connect speciation to the chemical reactivity and mobility of P in soils.

  3. Speciation of Soil Phosphorus Assessed by XANES Spectroscopy at Different Spatial Scales

    DOE PAGES

    Hesterberg, Dean; McNulty, Ian; Thieme, Juergen

    2017-07-27

    Precise management of soil phosphorus (P) to meet competing demands of agriculture and environmental protection can benefit from more comprehensive characterization of P speciation in soils. Our objectives were to provide spatial context for spectroscopic analyses of soil P speciation in relation to molecular-scale species and landscape-scale management of P, and to compare soil P-species diversity from spectroscopic measurements at submicron and millimeter scales. The spatial range of ~26 orders of magnitude between atomic and field scales presents a challenge to upscaling and downscaling information from spectroscopic analyses of soils. Scanning fluorescence X-ray microscopy images of a 50-mm × 45-mm area of an organic soil sample showed heterogeneous distributions of P, Al, and Si. Microscale X-ray absorption near edge structure (μ-XANES) spectra collected at the P K-edge from 12 spots on the soil sample exhibited diverse features that indicated variations in highly localized P speciation. Linear combination fitting analysis of the μ-XANES spectra included various proportions of three standards that appeared in fits for most spots and five standards that appeared in fits for one spot each. The fit to a bulk-soil spectrum was dominated by two of the common standards in the μ-XANES fits, and a fit to the sum of μ-XANES spectra included four of the standards. Lastly, these results illustrate a gain in P species sensitivity from spatially resolved XANES analysis. Integrating spectroscopic analyses from multiple scales determines soil P species diversity and will ultimately help connect speciation to the chemical reactivity and mobility of P in soils.

  4. Fitting a Point Cloud to a 3d Polyhedral Surface

    NASA Astrophysics Data System (ADS)

    Popov, E. V.; Rotkov, S. I.

    2017-05-01

    The ability to measure parameters of large-scale objects in a contactless fashion has a tremendous potential in a number of industrial applications. However, this problem is usually associated with an ambiguous task to compare two data sets specified in two different co-ordinate systems. This paper deals with the study of fitting a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and Stretched grid method (SGM) to substitute a non-linear problem solution with several linear steps. The squared distance (SD) is a general criterion to control the process of convergence of a set of points to a target surface. The described numerical experiment concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the fitting process of a point cloud to a target surface converges in several linear steps. The method is applicable to the geometry remote measurement of large-scale objects in a contactless fashion.

  5. Separation of detector non-linearity issues and multiple ionization satellites in alpha-particle PIXE

    NASA Astrophysics Data System (ADS)

    Campbell, John L.; Ganly, Brianna; Heirwegh, Christopher M.; Maxwell, John A.

    2018-01-01

    Multiple ionization satellites are prominent features in X-ray spectra induced by MeV energy alpha particles. It follows that the accuracy of PIXE analysis using alpha particles can be improved if these features are explicitly incorporated in the peak model description when fitting the spectra with GUPIX or other codes for least-squares fitting PIXE spectra and extracting element concentrations. A method for this incorporation is described and is tested using spectra recorded on Mars by the Curiosity rover's alpha particle X-ray spectrometer. These spectra are induced by both PIXE and X-ray fluorescence, resulting in a spectral energy range from ∼1 to ∼25 keV. This range is valuable in determining the energy-channel calibration, which departs from linearity at low X-ray energies. It makes it possible to separate the effects of the satellites from an instrumental non-linearity component. The quality of least-squares spectrum fits is significantly improved, raising the level of confidence in analytical results from alpha-induced PIXE.

  6. Potential Fifty Percent Reduction in Saturation Diving Decompression Time Using a Combination of Intermittent Recompression and Exercise

    NASA Technical Reports Server (NTRS)

    Gernhardt, Michael I.; Abercromby, Andrew; Conklin, Johnny

    2007-01-01

    Conventional saturation decompression protocols use linear decompression rates that become progressively slower at shallower depths, consistent with free gas phase control vs. dissolved gas elimination kinetics. If decompression is limited by control of free gas phase, linear decompression is an inefficient strategy. The NASA prebreathe reduction program demonstrated that exercise during O2 prebreathe resulted in a 50% reduction (2 h vs. 4 h) in the saturation decompression time from 14.7 to 4.3 psi and a significant reduction in decompression sickness (DCS: 0 vs. 23.7%). Combining exercise with intermittent recompression, which controls gas phase growth and eliminates supersaturation before exercising, may enable more efficient saturation decompression schedules. A tissue bubble dynamics model (TBDM) was used in conjunction with a NASA exercise prebreathe model (NEPM) that relates tissue inert gas exchange rate constants to exercise (ml O2/kg-min), to develop a schedule for decompression from helium saturation at 400 fsw. The models provide significant prediction (p < 0.001) and goodness of fit with 430 cases of DCS in 6437 laboratory dives for TBDM (p = 0.77) and with 22 cases of DCS in 159 altitude exposures for NEPM (p = 0.70). The models have also been used operationally in over 25,000 dives (TBDM) and 40 spacewalks (NEPM). The standard U.S. Navy (USN) linear saturation decompression schedule from saturation at 400 fsw required 114.5 h with a maximum Bubble Growth Index (BGImax) of 17.5. Decompression using intermittent recompression combined with two 10 min exercise periods (75% VO2peak) per day required 54.25 h (BGImax: 14.7). Combined intermittent recompression and exercise resulted in a theoretical 53% (2.5 day) reduction in decompression time and theoretically lower DCS risk compared to the standard USN decompression schedule. These results warrant future decompression trials to evaluate the efficacy of this approach.

  7. Estimation of Thalamocortical and Intracortical Network Models from Joint Thalamic Single-Electrode and Cortical Laminar-Electrode Recordings in the Rat Barrel System

    PubMed Central

    Blomquist, Patrick; Devor, Anna; Indahl, Ulf G.; Ulbert, Istvan; Einevoll, Gaute T.; Dale, Anders M.

    2009-01-01

    A new method is presented for extraction of population firing-rate models for both thalamocortical and intracortical signal transfer based on stimulus-evoked data from simultaneous thalamic single-electrode and cortical recordings using linear (laminar) multielectrodes in the rat barrel system. Time-dependent population firing rates for granular (layer 4), supragranular (layer 2/3), and infragranular (layer 5) populations in a barrel column and the thalamic population in the homologous barreloid are extracted from the high-frequency portion (multi-unit activity; MUA) of the recorded extracellular signals. These extracted firing rates are in turn used to identify population firing-rate models formulated as integral equations with exponentially decaying coupling kernels, allowing for straightforward transformation to the more common firing-rate formulation in terms of differential equations. Optimal model structures and model parameters are identified by minimizing the deviation between model firing rates and the experimentally extracted population firing rates. For the thalamocortical transfer, the experimental data favor a model with fast feedforward excitation from thalamus to the layer-4 laminar population combined with a slower inhibitory process due to feedforward and/or recurrent connections and mixed linear-parabolic activation functions. The extracted firing rates of the various cortical laminar populations are found to exhibit strong temporal correlations for the present experimental paradigm, and simple feedforward population firing-rate models combined with linear or mixed linear-parabolic activation function are found to provide excellent fits to the data. The identified thalamocortical and intracortical network models are thus found to be qualitatively very different. While the thalamocortical circuit is optimally stimulated by rapid changes in the thalamic firing rate, the intracortical circuits are low-pass and respond most strongly to slowly varying inputs from the cortical layer-4 population. PMID:19325875

  8. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.

  9. The Dangers of Estimating V˙O2max Using Linear, Nonexercise Prediction Models.

    PubMed

    Nevill, Alan M; Cooke, Carlton B

    2017-05-01

    This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs allometric) when estimating V˙O2max (mL·kg⁻¹·min⁻¹) using nonexercise prediction models. The two competing models were fitted to the V˙O2max (mL·kg⁻¹·min⁻¹) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V˙O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V˙O2max (mL·kg⁻¹·min⁻¹) was superior using allometric rather than linear (additive) models based on all criteria (R, maximum log-likelihood, and Akaike information criteria). Results suggest that linear models will systematically overestimate V˙O2max for participants in their 20s and underestimate V˙O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values nor age. This will probably explain the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models identified either a stature-to-body mass ratio (study 1) or a fat-free mass-to-body mass ratio (study 2), both associated with leanness when estimating V˙O2max. Adopting allometric models will provide more accurate predictions of V˙O2max (mL·kg⁻¹·min⁻¹) using plausible, biologically sound, and interpretable models.
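
    The linear-versus-allometric contrast can be illustrated with two ordinary least-squares fits, one on the raw scale and one on the log scale with a quadratic age term. The simulated data-generating formula below is purely hypothetical and is not drawn from either survey or the authors' fitted models.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
age = rng.uniform(20, 75, n)
mass = rng.normal(75, 12, n)        # body mass, kg
stature = rng.normal(1.72, 0.09, n) # stature, m

# hypothetical "true" relationship with a curvilinear age decline
vo2max = (95 * (stature / mass**0.33)
          * np.exp(-0.5 * ((age - 15) / 45) ** 2)
          * np.exp(rng.normal(0, 0.08, n)))

# linear additive model: VO2max ~ b0 + b1*age + b2*mass + b3*stature
X_lin = np.column_stack([np.ones(n), age, mass, stature])
beta_lin, *_ = np.linalg.lstsq(X_lin, vo2max, rcond=None)

# allometric model on the log scale:
# ln(VO2max) ~ b0 + b1*ln(mass) + b2*ln(stature) + b3*age + b4*age^2
X_allo = np.column_stack([np.ones(n), np.log(mass), np.log(stature), age, age**2])
beta_allo, *_ = np.linalg.lstsq(X_allo, np.log(vo2max), rcond=None)

print("linear coefficients:    ", np.round(beta_lin, 3))
print("allometric coefficients:", np.round(beta_allo, 4))
```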

  10. Evaluating flow cytometer performance with weighted quadratic least squares analysis of LED and multi-level bead data

    PubMed Central

    Parks, David R.; Khettabi, Faysal El; Chase, Eric; Hoffman, Robert A.; Perfetto, Stephen P.; Spidlen, Josef; Wood, James C.S.; Moore, Wayne A.; Brinkman, Ryan R.

    2017-01-01

    We developed a fully automated procedure for analyzing data from LED pulses and multi-level bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all of the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than for multi-level bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fits the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. PMID:28160404
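
    The quadratic variance-versus-mean fit at the heart of this procedure can be sketched as a weighted polynomial fit. The bead statistics below are invented, and reading the photoelectron scale off the linear coefficient is a commonly assumed interpretation, not a detail taken from the flowQB implementation.

```python
import numpy as np

# hypothetical per-population statistics from a multi-level bead set on one channel:
# mean fluorescence intensity and variance for each bead peak
mean = np.array([120.0, 480.0, 1900.0, 7600.0, 30500.0, 122000.0])
var = np.array([5.5e3, 7.0e3, 1.3e4, 3.6e4, 1.3e5, 5.1e5])

# weighted quadratic fit: var ~ b0 + b1*mean + b2*mean^2
# (weights proportional to 1/var roughly equalise the relative precision per peak)
b2, b1, b0 = np.polyfit(mean, var, 2, w=1.0 / var)

spe_per_unit = 1.0 / b1              # statistical photoelectrons per intensity unit (assumed reading)
background_sd = np.sqrt(max(b0, 0.0))
print(f"b0={b0:.3g}  b1={b1:.3g}  b2={b2:.3g}  Spe/unit={spe_per_unit:.3g}  bkg SD={background_sd:.3g}")
```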

  11. A TVSCAD approach for image deblurring with impulsive noise

    NASA Astrophysics Data System (ADS)

    Gu, Guoyong; Jiang, Suhong; Yang, Junfeng

    2017-12-01

    We consider image deblurring problem in the presence of impulsive noise. It is known that total variation (TV) regularization with L1-norm penalized data fitting (TVL1 for short) works reasonably well only when the level of impulsive noise is relatively low. For high level impulsive noise, TVL1 works poorly. The reason is that all data, both corrupted and noise free, are equally penalized in data fitting, leading to insurmountable difficulty in balancing regularization and data fitting. In this paper, we propose to combine TV regularization with nonconvex smoothly clipped absolute deviation (SCAD) penalty for data fitting (TVSCAD for short). Our motivation is simply that data fitting should be enforced only when an observed data is not severely corrupted, while for those data more likely to be severely corrupted, less or even null penalization should be enforced. A difference of convex functions algorithm is adopted to solve the nonconvex TVSCAD model, resulting in solving a sequence of TVL1-equivalent problems, each of which can then be solved efficiently by the alternating direction method of multipliers. Theoretically, we establish global convergence to a critical point of the nonconvex objective function. The R-linear and at-least-sublinear convergence rate results are derived for the cases of anisotropic and isotropic TV, respectively. Numerically, experimental results are given to show that the TVSCAD approach improves those of the TVL1 significantly, especially for cases with high level impulsive noise, and is comparable with the recently proposed iteratively corrected TVL1 method (Bai et al 2016 Inverse Problems 32 085004).

  12. Quantitative analysis of Ni2+/Ni3+ in Li[NixMnyCoz]O2 cathode materials: Non-linear least-squares fitting of XPS spectra

    NASA Astrophysics Data System (ADS)

    Fu, Zewei; Hu, Juntao; Hu, Wenlong; Yang, Shiyu; Luo, Yunfeng

    2018-05-01

    Quantitative analysis of Ni2+/Ni3+ using X-ray photoelectron spectroscopy (XPS) is important for evaluating the crystal structure and electrochemical performance of Lithium-nickel-cobalt-manganese oxide (Li[NixMnyCoz]O2, NMC). However, quantitative analysis based on Gaussian/Lorentzian (G/L) peak fitting suffers from the challenges of reproducibility and effectiveness. In this study, the Ni2+ and Ni3+ standard samples and a series of NMC samples with different Ni doping levels were synthesized. The Ni2+/Ni3+ ratios in NMC were quantitatively analyzed by non-linear least-squares fitting (NLLSF). Two Ni 2p overall spectra of synthesized Li [Ni0.33Mn0.33Co0.33]O2(NMC111) and bulk LiNiO2 were used as the Ni2+ and Ni3+ reference standards. Compared to G/L peak fitting, the fitting parameters required no adjustment, meaning that the spectral fitting process was free from operator dependence and the reproducibility was improved. Comparison of residual standard deviation (STD) showed that the fitting quality of NLLSF was superior to that of G/L peaks fitting. Overall, these findings confirmed the reproducibility and effectiveness of the NLLSF method in XPS quantitative analysis of Ni2+/Ni3+ ratio in Li[NixMnyCoz]O2 cathode materials.

  13. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
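
    A stripped-down version of the broken-line linear (BLL) dose-response fit, without the random effects and heteroskedastic residual structure used in the article, can be written as a plain nonlinear least-squares problem. The Trp:Lys ratios and G:F values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line_linear(x, plateau, slope, breakpoint):
    """Broken-line linear ascending model: rises linearly up to the breakpoint,
    then stays at the plateau."""
    return np.where(x < breakpoint, plateau + slope * (x - breakpoint), plateau)

# hypothetical G:F responses of nursery pigs to SID Trp:Lys ratios (%)
ratio = np.array([14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 17.5, 18.0])
gf = np.array([0.58, 0.60, 0.62, 0.64, 0.655, 0.657, 0.656, 0.658])

params, _ = curve_fit(broken_line_linear, ratio, gf, p0=(0.65, 0.03, 16.0))
plateau, slope, breakpoint = params
print(f"breakpoint = {breakpoint:.2f}%  plateau G:F = {plateau:.3f}")
```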

  14. Temperature dependence of elastic and strength properties of T300/5208 graphite-epoxy

    NASA Technical Reports Server (NTRS)

    Milkovich, S. M.; Herakovich, C. T.

    1984-01-01

    Experimental results are presented for the elastic and strength properties of T300/5208 graphite-epoxy at room temperature, 116K (-250 F), and 394K (+250 F). Results are presented for unidirectional 0, 90, and 45 degree laminates, and ±30, ±45, and ±60 degree angle-ply laminates. The stress-strain behavior of the 0 and 90 degree laminates is essentially linear for all three temperatures, and the stress-strain behavior of all other laminates is linear at 116K. A second-order curve provides the best fit for the temperature dependence of the elastic modulus of all laminates and for the principal shear modulus. Poisson's ratio appears to vary linearly with temperature. All moduli decrease with increasing temperature except for E1, which exhibits a small increase. The strength temperature dependence is also quadratic for all laminates except the 0 degree laminate, which exhibits linear temperature dependence. In many cases the temperature dependence of properties is nearly linear.

  15. Tracking performance under time sharing conditions with a digit processing task: A feedback control theory analysis. [attention sharing effect on operator performance

    NASA Technical Reports Server (NTRS)

    Gopher, D.; Wickens, C. D.

    1975-01-01

    A one-dimensional compensatory tracking task and a digit processing reaction time task were combined in a three-phase experiment designed to investigate tracking performance in time sharing. Adaptive techniques, elaborate feedback devices, and on-line standardization procedures were used to adjust task difficulty to the ability of each individual subject and to manipulate time sharing demands. Feedback control analysis techniques were employed in the description of tracking performance. The experimental results show that when the dynamics of a system are constrained in such a manner that man-machine system stability is no longer a major concern of the operator, he tends to adopt a first-order control describing function, even with tracking systems of higher order. Attention diversion to a concurrent task leads to an increase in remnant level, or nonlinear power. This decrease in linearity is reflected both in the output magnitude spectra of the subjects and in the linear fit of the amplitude ratio functions.

  16. Does childhood motor skill proficiency predict adolescent fitness?

    PubMed

    Barnett, Lisa M; Van Beurden, Eric; Morgan, Philip J; Brooks, Lyndon O; Beard, John R

    2008-12-01

    To determine whether childhood fundamental motor skill proficiency predicts subsequent adolescent cardiorespiratory fitness. In 2000, children's proficiency in a battery of skills was assessed as part of an elementary school-based intervention. Participants were followed up during 2006/2007 as part of the Physical Activity and Skills Study, and cardiorespiratory fitness was measured using the Multistage Fitness Test. Linear regression was used to examine the relationship between childhood fundamental motor skill proficiency and adolescent cardiorespiratory fitness, controlling for gender. Composite object control (kick, catch, throw) and locomotor skill (hop, side gallop, vertical jump) scores were constructed for analysis. A separate linear regression examined the ability of the sprint run to predict cardiorespiratory fitness. Of the 928 original intervention participants, 481 were in 28 schools, 276 (57%) of whom were assessed. Two hundred and forty-four students (88.4%) completed the fitness test. One hundred and twenty-seven (52.1%) were female; 60.1% were in grade 10 and 39.0% in grade 11. As children, almost all of the 244 had completed each motor assessment, except for the sprint run (n = 154, 55.8%). The mean composite skill score in 2000 was 17.7 (SD 5.1). In 2006/2007, the mean number of laps on the Multistage Fitness Test was 50.5 (SD 24.4). Object control proficiency in childhood, adjusting for gender (P = 0.000), was associated with adolescent cardiorespiratory fitness (P = 0.012), accounting for 26% of fitness variation. Children with good object control skills are more likely to become fit adolescents. Fundamental motor skill development in childhood may be an important component of interventions aiming to promote long-term fitness.

  17. Parameter expansion for estimation of reduced rank covariance matrices (Open Access publication)

    PubMed Central

    Meyer, Karin

    2008-01-01

    Parameter expanded and standard expectation maximisation algorithms are described for reduced rank estimation of covariance matrices by restricted maximum likelihood, fitting the leading principal components only. Convergence behaviour of these algorithms is examined for several examples and contrasted to that of the average information algorithm, and implications for practical analyses are discussed. It is shown that expectation maximisation type algorithms are readily adapted to reduced rank estimation and converge reliably. However, as is well known for the full rank case, the convergence is linear and thus slow. Hence, these algorithms are most useful in combination with the quadratically convergent average information algorithm, in particular in the initial stages of an iterative solution scheme. PMID:18096112

  18. Prediction Analysis for Measles Epidemics

    NASA Astrophysics Data System (ADS)

    Sumi, Ayako; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi; Olsen, Lars Folke; Kobayashi, Nobumichi

    2003-12-01

    A newly devised prediction analysis procedure, a linearized version of the nonlinear least squares method combined with the maximum entropy spectral analysis method, is proposed. This method was applied to time series data of measles case notifications in several communities in the UK, USA and Denmark. The dominant spectral lines observed in each power spectral density (PSD) can be safely assigned as fundamental periods. The optimum least squares fitting (LSF) curve calculated using these fundamental periods can essentially reproduce the underlying variation of the measles data. An extension of the LSF curve can be used to predict measles case notifications quantitatively. Some discussion, including the predictability of chaotic time series, is presented.
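
    As an illustration of the least-squares fitting (LSF) step, the sketch below fits sinusoids at a few assumed fundamental periods to a synthetic weekly series by linear least squares and extrapolates the fitted curve forward; the periods and data are hypothetical, and the maximum entropy spectral step is not reproduced.

    ```python
    # Sketch: least-squares fitting (LSF) of a time series with sinusoids at a few
    # assumed fundamental periods. The periods and synthetic series are illustrative.
    import numpy as np

    t = np.arange(0, 520, 1.0)                          # weeks (hypothetical)
    periods = [52.0, 26.0, 104.0]                       # assumed fundamental periods

    # Design matrix: constant term plus a sin/cos pair per period.
    cols = [np.ones_like(t)]
    for p in periods:
        cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
    X = np.column_stack(cols)

    rng = np.random.default_rng(0)
    y = 100 + 40 * np.sin(2 * np.pi * t / 52.0) + 10 * rng.normal(size=t.size)

    coef, *_ = np.linalg.lstsq(X, y, rcond=None)        # linear LSF of the series
    fit = X @ coef

    # Extrapolate the fitted curve beyond the observed interval as a prediction.
    t_future = np.arange(520, 572, 1.0)
    cols_f = [np.ones_like(t_future)]
    for p in periods:
        cols_f += [np.sin(2 * np.pi * t_future / p), np.cos(2 * np.pi * t_future / p)]
    prediction = np.column_stack(cols_f) @ coef
    print(prediction[:5])
    ```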

  19. A growing social network model in geographical space

    NASA Astrophysics Data System (ADS)

    Antonioni, Alberto; Tomassini, Marco

    2017-09-01

    In this work we propose a new model for the generation of social networks that includes their often ignored spatial aspects. The model is a growing one and links are created either taking space into account, or disregarding space and only considering the degree of target nodes. These two effects can be mixed linearly in arbitrary proportions through a parameter. We numerically show that for a given range of the combination parameter, and for given mean degree, the generated network class shares many important statistical features with those observed in actual social networks, including the spatial dependence of connections. Moreover, we show that the model provides a good qualitative fit to some measured social networks.
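
    The sketch below is a toy version of such a growth rule, in which each new node attaches to existing nodes with probability given by a linear mix, controlled by a parameter `alpha`, of a spatial-proximity term and a degree term; it illustrates the general mechanism only and is not the authors' exact model.

    ```python
    # Sketch: a toy growing network in which each new node attaches to existing
    # nodes chosen either by spatial proximity or by degree, mixed through `alpha`.
    import numpy as np

    rng = np.random.default_rng(1)
    n_nodes, m_links, alpha = 500, 2, 0.5   # alpha: weight of the spatial term

    pos = [rng.random(2)]                   # node positions in the unit square
    degree = [0]
    edges = []

    for new in range(1, n_nodes):
        p_new = rng.random(2)
        pos.append(p_new)
        degree.append(0)
        d = np.linalg.norm(np.array(pos[:new]) - p_new, axis=1)
        spatial = 1.0 / (d + 1e-9)                         # closer nodes preferred
        spatial /= spatial.sum()
        deg = np.array(degree[:new], dtype=float) + 1.0    # degree preference
        deg /= deg.sum()
        prob = alpha * spatial + (1 - alpha) * deg         # linear mix of the two effects
        targets = rng.choice(new, size=min(m_links, new), replace=False, p=prob)
        for tgt in targets:
            edges.append((new, int(tgt)))
            degree[new] += 1
            degree[tgt] += 1

    print(f"{len(edges)} edges, mean degree {2 * len(edges) / n_nodes:.2f}")
    ```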

  20. Predictors of Hearing-Aid Outcomes

    PubMed Central

    Johannesen, Peter T.; Pérez-González, Patricia; Blanco, José L.; Kalluri, Sridhar; Edwards, Brent

    2017-01-01

    Over 360 million people worldwide suffer from disabling hearing loss. Most of them can be treated with hearing aids. Unfortunately, performance with hearing aids and the benefit obtained from using them vary widely across users. Here, we investigate the reasons for such variability. Sixty-eight hearing-aid users or candidates were fitted bilaterally with nonlinear hearing aids using standard procedures. Treatment outcome was assessed by measuring aided speech intelligibility in a time-reversed two-talker background and self-reported improvement in hearing ability. Statistical predictive models of these outcomes were obtained using linear combinations of 19 predictors, including demographic and audiological data, indicators of cochlear mechanical dysfunction and auditory temporal processing skills, hearing-aid settings, working memory capacity, and pretreatment self-perceived hearing ability. Aided intelligibility tended to be better for younger hearing-aid users with good unaided intelligibility in quiet and with good temporal processing abilities. Intelligibility tended to improve by increasing amplification for low-intensity sounds and by using more linear amplification for high-intensity sounds. Self-reported improvement in hearing ability was hard to predict but tended to be smaller for users with better working memory capacity. Indicators of cochlear mechanical dysfunction, alone or in combination with hearing settings, did not affect outcome predictions. The results may be useful for improving hearing aids and setting patients’ expectations. PMID:28929903

  1. The relationship between compressive strength and flexural strength of pavement geopolymer grouting material

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Han, X. X.; Ge, J.; Wang, C. H.

    2018-01-01

    To determine the relationship between the compressive strength and flexural strength of pavement geopolymer grouting material, 20 groups of geopolymer grouting materials were prepared and their compressive and flexural strengths were determined by mechanical property tests. After excluding abnormal values using boxplots, the compressive strength results were normal, but there were two mild outliers in the 7-day flexural strength results. Compressive strength and flexural strength were then fitted in SPSS, yielding six regression models. The relationship between compressive strength and flexural strength was best expressed by the cubic curve model, with a correlation coefficient of 0.842.

  2. Prediction of optimum sorption isotherm: comparison of linear and non-linear method.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2005-11-11

    Equilibrium parameters for Bismarck brown sorption onto rice husk were estimated by linear least squares and by a trial-and-error non-linear method using the Freundlich, Langmuir and Redlich-Peterson isotherms. A comparison between the linear and non-linear methods of estimating the isotherm parameters is reported. The Langmuir and Redlich-Peterson isotherm equations gave the best fit. The results show that the non-linear method could be a better way to obtain the parameters. The Redlich-Peterson isotherm reduces to the Langmuir isotherm when the Redlich-Peterson constant g equals unity.
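
    A minimal sketch of this kind of comparison, using invented Ce/qe data and the Langmuir isotherm only: the linearized Langmuir-1 form is fitted by ordinary least squares and contrasted with a direct non-linear fit of the same equation.

    ```python
    # Sketch: comparing a linearized Langmuir fit with a direct non-linear fit.
    # The Ce/qe data are invented for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 120.0])      # equilibrium conc. (mg/L)
    qe = np.array([11.0, 18.5, 27.0, 34.0, 38.5, 40.0])      # uptake (mg/g), illustrative

    def langmuir(c, qmax, kl):
        return qmax * kl * c / (1.0 + kl * c)

    # Linearized form (Langmuir-1): Ce/qe = Ce/qmax + 1/(KL*qmax)
    slope, intercept = np.polyfit(ce, ce / qe, 1)
    qmax_lin, kl_lin = 1.0 / slope, slope / intercept

    # Non-linear least squares on the original equation
    (qmax_nl, kl_nl), _ = curve_fit(langmuir, ce, qe, p0=[qe.max(), 0.05])

    for name, qm, kl in [("linearized", qmax_lin, kl_lin), ("non-linear", qmax_nl, kl_nl)]:
        resid = qe - langmuir(ce, qm, kl)
        print(f"{name}: qmax={qm:.1f} mg/g, KL={kl:.3f} L/mg, RSS={np.sum(resid**2):.2f}")
    ```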

  3. Equilibrium, kinetics and process design of acid yellow 132 adsorption onto red pine sawdust.

    PubMed

    Can, Mustafa

    2015-01-01

    Linear and non-linear regression procedures have been applied to the Langmuir, Freundlich, Tempkin, Dubinin-Radushkevich, and Redlich-Peterson isotherms for adsorption of acid yellow 132 (AY132) dye onto red pine (Pinus resinosa) sawdust. The effects of parameters such as particle size, stirring rate, contact time, dye concentration, adsorbent dose, pH, and temperature were investigated, and the adsorbent-dye interaction was characterized by Fourier transform infrared spectroscopy and field emission scanning electron microscopy. The non-linear form of the Langmuir isotherm equation was found to be the best fitting model for the equilibrium data. The maximum monolayer adsorption capacity was found to be 79.5 mg/g. The calculated thermodynamic results suggested that AY132 adsorption onto red pine sawdust was an exothermic, physisorption, and spontaneous process. Kinetics were analyzed by four different kinetic equations using non-linear regression analysis. The pseudo-second-order equation provides the best fit to the experimental data.

  4. Combined pressure-thermal inactivation effect on spores in lu-wei beef--a traditional Chinese meat product.

    PubMed

    Wang, B-S; Li, B-S; Du, J-Z; Zeng, Q-X

    2015-08-01

    This study investigated the inactivation effect and kinetics of Bacillus coagulans and Geobacillus stearothermophilus spores suspended in lu-wei beef by combining high pressure (500 and 600 MPa) and moderate heat (70 and 80 °C or 80 and 90 °C). During pressurization, the temperature of pressure-transmitting fluid was tested with a K-type thermocouple, and the number of surviving cells was determined by a plate count method. The pressure come-up time and corresponding inactivation of Bacillus coagulans and G. stearothermophilus spores were considered during the pressure-thermal treatment. For the two types of spores, the results showed a higher inactivation effect in phosphate buffer solution than that in lu-wei beef. Among the bacteria evaluated, G. stearothermophilus spores had a higher resistance than B. coagulans spores during the pressure-thermal processing. One linear model and two nonlinear models (i.e. the Weibull and log-logistic models) were fitted to the survivor data to obtain relevant kinetic parameters, and the performance of these models was compared. The results suggested that the survival curve of the spores could be accurately described utilizing the log-logistic model, which produced the best fit for all inactivation data. The compression heating characteristics of different pressure-transmitting fluids should be considered when using high pressure to sterilize spores, particularly while the pressure is increasing. Spores can be inactivated by combining high pressure and moderate heat. The study demonstrates the synergistic inactivation effect of moderate heat in combination with high pressure in real-life food. The use of mathematical models to predict the inactivation for spores could help the food industry further to develop optimum process conditions. © 2015 The Society for Applied Microbiology.

  5. Maternal heterozygosity and progeny fitness association in an inbred Scots pine population.

    PubMed

    Abrahamsson, S; Ahlinder, J; Waldmann, P; García-Gil, M R

    2013-03-01

    Associations between heterozygosity and fitness traits have typically been investigated in populations characterized by low levels of inbreeding. We investigated the associations between standardized multilocus heterozygosity (stMLH) in mother trees (obtained from 12 nuclear microsatellite markers) and five fitness traits measured in progenies from an inbred Scots pine population. The traits studied were proportion of sound seed, mean seed weight, germination rate, mean family height of one-year-old seedlings under greenhouse conditions (GH) and mean family height of three-year-old seedlings under field conditions (FH). The relatively high average inbreeding coefficient (F) in the population under study corresponds to a mixture of trees with different levels of co-ancestry, potentially resulting from a recent bottleneck. We used both frequentist and Bayesian methods of polynomial regression to investigate the presence of linear and non-linear relations between stMLH and each of the fitness traits. No significant associations were found for any of the traits except for GH, which displayed a negative linear effect with stMLH. The negative heterozygosity-fitness correlation (HFC) for GH could potentially be explained by heterosis caused by the mating of two inbred mother trees (Lippman and Zamir 2006), or by outbreeding depression at the most heterozygous trees and its negative impact on the fitness of the progeny, while their simultaneous action is also possible (Lynch 1991). However, since this effect was not detected for FH, we cannot rule out that the greenhouse conditions introduce artificial effects that disappear under more realistic field conditions.

  6. Neurovascular coupling in normal aging: a combined optical, ERP and fMRI study.

    PubMed

    Fabiani, Monica; Gordon, Brian A; Maclin, Edward L; Pearson, Melanie A; Brumback-Peltz, Carrie R; Low, Kathy A; McAuley, Edward; Sutton, Bradley P; Kramer, Arthur F; Gratton, Gabriele

    2014-01-15

    Brain aging is characterized by changes in both hemodynamic and neuronal responses, which may be influenced by the cardiorespiratory fitness of the individual. To investigate the relationship between neuronal and hemodynamic changes, we studied the brain activity elicited by visual stimulation (checkerboard reversals at different frequencies) in younger adults and in older adults varying in physical fitness. Four functional brain measures were used to compare neuronal and hemodynamic responses obtained from BA17: two reflecting neuronal activity (the event-related optical signal, EROS, and the C1 response of the ERP), and two reflecting functional hemodynamic changes (functional magnetic resonance imaging, fMRI, and near-infrared spectroscopy, NIRS). The results indicated that both younger and older adults exhibited a quadratic relationship between neuronal and hemodynamic effects, with reduced increases of the hemodynamic response at high levels of neuronal activity. Although older adults showed reduced activation, similar neurovascular coupling functions were observed in the two age groups when fMRI and deoxy-hemoglobin measures were used. However, the coupling between oxy- and deoxy-hemoglobin changes decreased with age and increased with increasing fitness. These data indicate that departures from linearity in neurovascular coupling may be present when using hemodynamic measures to study neuronal function. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Retrospective Correction of Physiological Noise in DTI Using an Extended Tensor Model and Peripheral Measurements

    PubMed Central

    Mohammadi, Siawoosh; Hutton, Chloe; Nagy, Zoltan; Josephs, Oliver; Weiskopf, Nikolaus

    2013-01-01

    Diffusion tensor imaging is widely used in research and clinical applications, but this modality is highly sensitive to artefacts. We developed an easy-to-implement extension of the original diffusion tensor model to account for physiological noise in diffusion tensor imaging using measures of peripheral physiology (pulse and respiration), the so-called extended tensor model. Within the framework of the extended tensor model two types of regressors, which respectively modeled small (linear) and strong (nonlinear) variations in the diffusion signal, were derived from peripheral measures. We tested the performance of four extended tensor models with different physiological noise regressors on nongated and gated diffusion tensor imaging data, and compared it to an established data-driven robust fitting method. In the brainstem and cerebellum the extended tensor models reduced the noise in the tensor-fit by up to 23% in accordance with previous studies on physiological noise. The extended tensor model addresses both large-amplitude outliers and small-amplitude signal-changes. The framework of the extended tensor model also facilitates further investigation into physiological noise in diffusion tensor imaging. The proposed extended tensor model can be readily combined with other artefact correction methods such as robust fitting and eddy current correction. PMID:22936599

  8. Biotransformations of anticancer ruthenium(III) complexes: an X-ray absorption spectroscopic study.

    PubMed

    Levina, Aviva; Aitken, Jade B; Gwee, Yee Yen; Lim, Zhi Jun; Liu, Mimi; Singharay, Anannya Mitra; Wong, Pok Fai; Lay, Peter A

    2013-03-11

    An anti-metastatic drug, NAMI-A ((ImH)[Ru(III) Cl4 (Im)(dmso)]; Im=imidazole, dmso=S-bound dimethylsulfoxide), and a cytotoxic drug, KP1019 ((IndH)[Ru(III) Cl4 (Ind)2 ]; Ind=indazole), are two Ru-based anticancer drugs in human clinical trials. Their reactivities under biologically relevant conditions, including aqueous buffers, protein solutions or gels (e.g., albumin, transferrin and collagen), undiluted blood serum, cell-culture medium and human liver (HepG2) cancer cells, were studied by Ru K-edge X-ray absorption spectroscopy (XAS). These XAS data were fitted with linear combinations of spectra of well-characterised Ru compounds. The absence of contributions from the parent drugs in these fits points to profound changes in the coordination environments of Ru(III). The fits point to the presence of Ru(IV/III) clusters and binding of Ru(III) to S-donor groups, amine/imine and carboxylato groups of proteins. Cellular uptake of KP1019 is approximately 20-fold higher than that of NAMI-A under the same conditions, but it diminishes drastically after the decomposition of KP1019 in cell-culture media, which indicates that the parent complex is taken up by cells through passive diffusion. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
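
    The linear combination fitting step can be sketched as a non-negative least squares problem, as below; the energy grid and reference spectra are placeholders rather than real Ru K-edge data.

    ```python
    # Sketch: linear combination fitting of a normalized XAS spectrum against a set
    # of reference spectra, using non-negative least squares so the fractions are
    # physically meaningful. The spectra here are placeholder arrays.
    import numpy as np
    from scipy.optimize import nnls

    energy = np.linspace(22100, 22200, 200)                 # eV grid (placeholder)
    refs = {
        "Ru(III)-Cl": np.exp(-((energy - 22120) / 8.0) ** 2),
        "Ru-S bound": np.exp(-((energy - 22135) / 8.0) ** 2),
        "Ru(IV/III) cluster": np.exp(-((energy - 22150) / 8.0) ** 2),
    }
    A = np.column_stack(list(refs.values()))                # references as columns

    # Hypothetical measured spectrum: a mixture of the references plus noise
    rng = np.random.default_rng(0)
    sample = 0.6 * A[:, 1] + 0.4 * A[:, 2] + 0.01 * rng.normal(size=energy.size)

    weights, resid = nnls(A, sample)                        # non-negative components
    fractions = weights / weights.sum()                     # normalize to 100%
    for name, frac in zip(refs, fractions):
        print(f"{name}: {100 * frac:.1f}%")
    print(f"residual norm: {resid:.3f}")
    ```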

  9. A 2500 deg 2 CMB Lensing Map from Combined South Pole Telescope and Planck Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omori, Y.; Chown, R.; Simard, G.

    We present a cosmic microwave background (CMB) lensing map produced from a linear combination of South Pole Telescope (SPT) and \emph{Planck} temperature data. The 150 GHz temperature data from the $$2500\ {\rm deg}^{2}$$ SPT-SZ survey is combined with the \emph{Planck} 143 GHz data in harmonic space, to obtain a temperature map that has a broader $$\ell$$ coverage and less noise than either individual map. Using a quadratic estimator technique on this combined temperature map, we produce a map of the gravitational lensing potential projected along the line of sight. We measure the auto-spectrum of the lensing potential $$C_{L}^{\phi\phi}$$, and compare it to the theoretical prediction for a $$\Lambda$$CDM cosmology consistent with the \emph{Planck} 2015 data set, finding a best-fit amplitude of $$0.95_{-0.06}^{+0.06}({\rm Stat.})_{-0.01}^{+0.01}({\rm Sys.})$$. The null hypothesis of no lensing is rejected at a significance of $$24\,\sigma$$. One important use of such a lensing potential map is in cross-correlations with other dark matter tracers. We demonstrate this cross-correlation in practice by calculating the cross-spectrum, $$C_{L}^{\phi G}$$, between the SPT+\emph{Planck} lensing map and Wide-field Infrared Survey Explorer (\emph{WISE}) galaxies. We fit $$C_{L}^{\phi G}$$ to a power law of the form $$p_{L}=a(L/L_{0})^{-b}$$ with $$a=2.15 \times 10^{-8}$$, $$b=1.35$$, $$L_{0}=490$$, and find $$\eta^{\phi G}=0.94^{+0.04}_{-0.04}$$, which is marginally lower, but in good agreement with $$\eta^{\phi G}=1.00^{+0.02}_{-0.01}$$, the best-fit amplitude for the cross-correlation of \emph{Planck}-2015 CMB lensing and \emph{WISE} galaxies over $$\sim67\%$$ of the sky. The lensing potential map presented here will be used for cross-correlation studies with the Dark Energy Survey (DES), whose footprint nearly completely covers the SPT $$2500\ {\rm deg}^2$$ field.

  10. A 2500 deg 2 CMB Lensing Map from Combined South Pole Telescope and Planck Data

    DOE PAGES

    Omori, Y.; Chown, R.; Simard, G.; ...

    2017-11-07

    We present a cosmic microwave background (CMB) lensing map produced from a linear combination of South Pole Telescope (SPT) and \emph{Planck} temperature data. The 150 GHz temperature data from the $$2500\ {\rm deg}^{2}$$ SPT-SZ survey is combined with the \emph{Planck} 143 GHz data in harmonic space, to obtain a temperature map that has a broader $$\ell$$ coverage and less noise than either individual map. Using a quadratic estimator technique on this combined temperature map, we produce a map of the gravitational lensing potential projected along the line of sight. We measure the auto-spectrum of the lensing potential $$C_{L}^{\phi\phi}$$, and compare it to the theoretical prediction for a $$\Lambda$$CDM cosmology consistent with the \emph{Planck} 2015 data set, finding a best-fit amplitude of $$0.95_{-0.06}^{+0.06}({\rm Stat.})_{-0.01}^{+0.01}({\rm Sys.})$$. The null hypothesis of no lensing is rejected at a significance of $$24\,\sigma$$. One important use of such a lensing potential map is in cross-correlations with other dark matter tracers. We demonstrate this cross-correlation in practice by calculating the cross-spectrum, $$C_{L}^{\phi G}$$, between the SPT+\emph{Planck} lensing map and Wide-field Infrared Survey Explorer (\emph{WISE}) galaxies. We fit $$C_{L}^{\phi G}$$ to a power law of the form $$p_{L}=a(L/L_{0})^{-b}$$ with $$a=2.15 \times 10^{-8}$$, $$b=1.35$$, $$L_{0}=490$$, and find $$\eta^{\phi G}=0.94^{+0.04}_{-0.04}$$, which is marginally lower, but in good agreement with $$\eta^{\phi G}=1.00^{+0.02}_{-0.01}$$, the best-fit amplitude for the cross-correlation of \emph{Planck}-2015 CMB lensing and \emph{WISE} galaxies over $$\sim67\%$$ of the sky. The lensing potential map presented here will be used for cross-correlation studies with the Dark Energy Survey (DES), whose footprint nearly completely covers the SPT $$2500\ {\rm deg}^2$$ field.

  11. Isotherm investigation for the sorption of fluoride onto Bio-F: comparison of linear and non-linear regression method

    NASA Astrophysics Data System (ADS)

    Yadav, Manish; Singh, Nitin Kumar

    2017-12-01

    A comparison of the linear and non-linear regression methods for selecting the optimum isotherm among three commonly used adsorption isotherms (Langmuir, Freundlich, and Redlich-Peterson) was made for the experimental data of fluoride (F) sorption onto Bio-F at a solution temperature of 30 ± 1 °C. The coefficient of determination (r2) was used to select the best theoretical isotherm among the investigated ones. A total of four linearized Langmuir equations were discussed, of which the widely used Langmuir-1 and Langmuir-2 forms showed higher coefficients of determination (0.976 and 0.989) than the other linearized Langmuir equations. The Freundlich and Redlich-Peterson isotherms showed a better fit to the experimental data with the linear least-squares method, while with the non-linear method the Redlich-Peterson isotherm showed the best fit to the tested data set. The present study showed that the non-linear method could be a better way to obtain the isotherm parameters and to represent the most suitable isotherm. The Redlich-Peterson isotherm was found to be the best representative (r2 = 0.999) for this sorption system. It is also observed that the values of β are not close to unity, which means the isotherms approach the Freundlich rather than the Langmuir isotherm.

  12. Least Squares Procedures.

    ERIC Educational Resources Information Center

    Hester, Yvette

    Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…

  13. Comparing the Fit of Item Response Theory and Factor Analysis Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo

    2011-01-01

    Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…

  14. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions

    NASA Astrophysics Data System (ADS)

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-01

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
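
    A highly simplified two-mode illustration of the direct-product idea is sketched below: the design matrix for a grid-based least-squares fit factorizes as a Kronecker product of 1D basis matrices. The toy potential and polynomial basis are assumptions, and the production algorithm's efficiency tricks are not reproduced.

    ```python
    # Sketch: sum-of-products fit of a 2D grid "PES" using products of 1D polynomial
    # basis functions, exploiting the Kronecker structure of the design matrix.
    import numpy as np

    x = np.linspace(-1, 1, 15)                      # grid for mode 1
    y = np.linspace(-1, 1, 17)                      # grid for mode 2
    V = np.outer(x**2, np.ones_like(y)) + np.outer(np.ones_like(x), y**2) + \
        0.3 * np.outer(x, y)                        # toy potential on the grid

    deg = 4
    Ax = np.vander(x, deg, increasing=True)         # 1D basis for mode 1
    Ay = np.vander(y, deg, increasing=True)         # 1D basis for mode 2

    # Full design matrix is the Kronecker product of the 1D bases; the normal
    # equations could also be factorized as (Ax^T Ax) kron (Ay^T Ay) for efficiency.
    A = np.kron(Ax, Ay)
    coef, *_ = np.linalg.lstsq(A, V.ravel(), rcond=None)

    V_fit = (A @ coef).reshape(V.shape)
    print(f"max fit error: {np.abs(V - V_fit).max():.2e}")
    ```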

  15. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions.

    PubMed

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-21

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.

  16. Broadband distortion modeling in Lyman-α forest BAO fitting

    DOE PAGES

    Blomqvist, Michael; Kirkby, David; Bautista, Julian E.; ...

    2015-11-23

    Recently, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≃ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. Here, we describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. In implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter b_F and the redshift-space distortion parameter β_F for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on β_F and the combination b_F(1+β_F) by more than a factor of seven. The measured values at redshift z=2.3 are β_F = 1.39^{+0.11 +0.24 +0.38}_{-0.10 -0.19 -0.28} and b_F(1+β_F) = -0.374^{+0.007 +0.013 +0.020}_{-0.007 -0.014 -0.022} (1σ, 2σ and 3σ statistical errors). Our fitting software and the input files needed to reproduce our main results are publicly available.

  17. Broadband distortion modeling in Lyman-α forest BAO fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blomqvist, Michael; Kirkby, David; Margala, Daniel, E-mail: cblomqvi@uci.edu, E-mail: dkirkby@uci.edu, E-mail: dmargala@uci.edu

    2015-11-01

    In recent years, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≅ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. We describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. Implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter b_F and the redshift-space distortion parameter β_F for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on β_F and the combination b_F(1+β_F) by more than a factor of seven. The measured values at redshift z=2.3 are β_F = 1.39^{+0.11 +0.24 +0.38}_{−0.10 −0.19 −0.28} and b_F(1+β_F) = −0.374^{+0.007 +0.013 +0.020}_{−0.007 −0.014 −0.022} (1σ, 2σ and 3σ statistical errors). Our fitting software and the input files needed to reproduce our main results are publicly available.

  18. Graphical and PC-software analysis of volcano eruption precursors according to the Materials Failure Forecast Method (FFM)

    NASA Astrophysics Data System (ADS)

    Cornelius, Reinold R.; Voight, Barry

    1995-03-01

    The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by an accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω¨ = A(Ω˙)^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. The rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths is examined. Using cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique. This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring too early an eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
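
    A minimal sketch of the graphical inverse-rate technique for α = 2, using synthetic rate data: the inverse rate is fitted with a straight line and extrapolated to zero to forecast the failure time.

    ```python
    # Sketch: the graphical FFM technique for alpha = 2, where the inverse rate
    # declines linearly with time and the failure time is found by extrapolating
    # a straight-line fit to zero inverse rate. The rate data are synthetic.
    import numpy as np

    t_failure_true = 100.0
    t = np.arange(0, 90, 2.0)                           # observation times (days)
    rate = 1.0 / (0.02 * (t_failure_true - t))          # alpha = 2 accelerating rate
    rng = np.random.default_rng(3)
    rate *= 1.0 + 0.05 * rng.normal(size=t.size)        # add measurement scatter

    inv_rate = 1.0 / rate
    slope, intercept = np.polyfit(t, inv_rate, 1)       # linear fit of inverse rate
    t_forecast = -intercept / slope                     # where the fit crosses zero
    print(f"forecast failure time: {t_forecast:.1f} days (true {t_failure_true})")
    ```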

  19. Linear and Poisson models for genetic evaluation of tick resistance in cross-bred Hereford x Nellore cattle.

    PubMed

    Ayres, D R; Pereira, R J; Boligon, A A; Silva, F F; Schenkel, F S; Roso, V M; Albuquerque, L G

    2013-12-01

    Cattle resistance to ticks is measured by the number of ticks infesting the animal. The model used for the genetic analysis of cattle resistance to ticks frequently requires logarithmic transformation of the observations. The objective of this study was to evaluate the predictive ability and goodness of fit of different models for the analysis of this trait in cross-bred Hereford x Nellore cattle. Three models were tested: a linear model using logarithmic transformation of the observations (MLOG); a linear model without transformation of the observations (MLIN); and a generalized linear Poisson model with a residual term (MPOI). All models included the classificatory effects of contemporary group and genetic group and the covariates age of animal at the time of recording and individual heterozygosity, as well as additive genetic effects as random effects. Heritability estimates were 0.08 ± 0.02, 0.10 ± 0.02 and 0.14 ± 0.04 for the MLIN, MLOG and MPOI models, respectively. The model fit quality, assessed by the deviance information criterion (DIC) and residual mean square, indicated the superior fit of the MPOI model. The predictive ability of the models was compared by a validation test in an independent sample. The MPOI model was slightly superior in terms of goodness of fit and predictive ability, whereas the correlations between observed and predicted tick counts were practically the same for all models. A higher rank correlation between breeding values was observed between the MLOG and MPOI models. The Poisson model can be used for the selection of tick-resistant animals. © 2013 Blackwell Verlag GmbH.

  20. Development of non-linear models predicting daily fine particle concentrations using aerosol optical depth retrievals and ground-based measurements at a municipality in the Brazilian Amazon region

    NASA Astrophysics Data System (ADS)

    Gonçalves, Karen dos Santos; Winkler, Mirko S.; Benchimol-Barbosa, Paulo Roberto; de Hoogh, Kees; Artaxo, Paulo Eduardo; de Souza Hacon, Sandra; Schindler, Christian; Künzli, Nino

    2018-07-01

    Epidemiological studies generally use measurements of particulate matter with diameter less than 2.5 μm (PM2.5) from monitoring networks. Satellite aerosol optical depth (AOD) data have considerable potential for predicting PM2.5 concentrations, and thus provide an alternative method for producing knowledge regarding the level of pollution and its health impact in areas where no ground PM2.5 measurements are available. This is the case in the Brazilian Amazon rainforest region, where forest fires are frequent sources of high pollution. In this study, we applied a non-linear model for predicting PM2.5 concentration from AOD retrievals using interaction terms between average temperature, relative humidity, the sine and cosine of the date over a period of 365.25 days, and the square of the lagged relative residual. Regression performance was assessed by comparing the goodness of fit and R2 of linear and non-linear regressions for six different models. The non-linear predictions showed the best performance, explaining on average 82% of the daily PM2.5 concentrations over the whole period studied. In the context of Amazonia, this was the first study to predict PM2.5 concentrations using the latest high-resolution AOD products in combination with testing the performance of a non-linear model. Our results permitted a reliable prediction based on the AOD-PM2.5 relationship and set the basis for further investigations of air pollution impacts in the complex context of the Brazilian Amazon region.

  1. Potential uncertainty reduction in model-averaged benchmark dose estimates informed by an additional dose study.

    PubMed

    Shao, Kan; Small, Mitchell J

    2011-10-01

    A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
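
    A much-simplified sketch of the model-averaging idea, using maximum-likelihood fits and BIC-based weights in place of the MCMC/BMA machinery described above; the dose-response counts below are invented.

    ```python
    # Sketch: model-averaging a benchmark dose (BMD) over two dose-response models
    # (logistic and quantal-linear) using BIC-based weights. Simplified
    # maximum-likelihood stand-in for a full Bayesian analysis; data are invented.
    import numpy as np
    from scipy.optimize import minimize

    dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
    n = np.array([50, 50, 50, 50, 50])
    k = np.array([2, 4, 8, 19, 36])            # responders at each dose (invented)
    bmr = 0.10                                  # benchmark response (10% extra risk)

    def logistic(d, a, b):
        return 1.0 / (1.0 + np.exp(-(a + b * d)))

    def quantal_linear(d, g, b):
        return g + (1.0 - g) * (1.0 - np.exp(-b * d))

    def fit(p_fun, x0):
        def nll(theta):
            p = np.clip(p_fun(dose, *theta), 1e-9, 1 - 1e-9)
            return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))
        res = minimize(nll, x0, method="Nelder-Mead")
        bic = 2 * res.fun + len(x0) * np.log(n.sum())
        return res.x, bic

    def bmd_logistic(theta):
        a, b = theta
        p0 = logistic(0.0, a, b)
        p_target = p0 + bmr * (1.0 - p0)        # dose giving BMR extra risk
        return (np.log(p_target / (1.0 - p_target)) - a) / b

    def bmd_quantal_linear(theta):
        _, b = theta
        return -np.log(1.0 - bmr) / b

    theta_l, bic_l = fit(logistic, [-3.0, 0.1])
    theta_q, bic_q = fit(quantal_linear, [0.05, 0.05])

    bics = np.array([bic_l, bic_q])
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()                                # BIC-based model weights
    bmds = np.array([bmd_logistic(theta_l), bmd_quantal_linear(theta_q)])
    print(f"weights: {w.round(3)}, model-averaged BMD: {np.dot(w, bmds):.2f}")
    ```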

  2. Model-free estimation of the psychometric function

    PubMed Central

    Żychaluk, Kamila; Foster, David H.

    2009-01-01

    A subject's response to the strength of a stimulus is described by the psychometric function, from which summary measures, such as a threshold or slope, may be derived. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not. In practice, the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data, except that the function is smooth. The critical role of the bandwidth is identified, and its optimum value estimated by a cross-validation procedure. As a demonstration, seven vision and hearing data sets were fitted by the local linear method and by several parametric models. The local linear method frequently performed better and never worse than the parametric ones. Supplemental materials for this article can be downloaded from app.psychonomic-journals.org/content/supplemental. PMID:19633355
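
    A minimal sketch of local linear estimation of a psychometric function with a leave-one-out cross-validated bandwidth; stimulus levels, trial counts and successes are invented, and the kernel and error criterion are simple choices rather than those of the cited method.

    ```python
    # Sketch: nonparametric local linear estimation of a psychometric function with
    # a leave-one-out cross-validated bandwidth. Data are invented.
    import numpy as np

    x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)   # stimulus levels
    n = np.array([40, 40, 40, 40, 40, 40, 40, 40])        # trials per level
    s = np.array([21, 22, 25, 29, 33, 36, 38, 39])        # successes (invented)
    p_obs = s / n

    def local_linear(x0, xd, yd, w):
        # Weighted straight-line fit around x0; returns the fitted value at x0.
        W = np.diag(w)
        X = np.column_stack([np.ones_like(xd), xd - x0])
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ yd)
        return beta[0]

    def smooth(x_eval, xd, yd, nd, h):
        out = []
        for x0 in np.atleast_1d(x_eval):
            w = nd * np.exp(-0.5 * ((xd - x0) / h) ** 2)   # Gaussian kernel, binomial weights
            out.append(local_linear(x0, xd, yd, w))
        return np.array(out)

    # Leave-one-out cross-validation over a grid of bandwidths
    best_h, best_err = None, np.inf
    for h in np.linspace(0.5, 4.0, 15):
        err = 0.0
        for i in range(len(x)):
            keep = np.arange(len(x)) != i
            pred = smooth(x[i], x[keep], p_obs[keep], n[keep], h)[0]
            err += n[i] * (p_obs[i] - pred) ** 2
        if err < best_err:
            best_h, best_err = h, err

    fit = smooth(x, x, p_obs, n, best_h)
    print(f"bandwidth: {best_h:.2f}, fitted proportions: {fit.round(3)}")
    ```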

  3. Auxiliary basis expansions for large-scale electronic structure calculations.

    PubMed

    Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin

    2005-05-10

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.

  4. Multi-site Observations of Pulsation in the Accreting White Dwarf SDSS J161033.64-010223.3 (V386 Ser)

    NASA Astrophysics Data System (ADS)

    Mukadam, Anjum S.; Townsley, D. M.; Gänsicke, B. T.; Szkody, P.; Marsh, T. R.; Robinson, E. L.; Bildsten, L.; Aungwerojwit, A.; Schreiber, M. R.; Southworth, J.; Schwope, A.; For, B.-Q.; Tovmassian, G.; Zharikov, S. V.; Hidas, M. G.; Baliber, N.; Brown, T.; Woudt, P. A.; Warner, B.; O'Donoghue, D.; Buckley, D. A. H.; Sefako, R.; Sion, E. M.

    2010-05-01

    Non-radial pulsations in the primary white dwarfs of cataclysmic variables can now potentially allow us to explore the stellar interior of these accretors using stellar seismology. In this context, we conducted a multi-site campaign on the accreting pulsator SDSS J161033.64-010223.3 (V386 Ser) using seven observatories located around the world in 2007 May over a duration of 11 days. We report the best-fit periodicities here, which were also previously observed in 2004, suggesting their underlying stability. Although we did not uncover a sufficient number of independent pulsation modes for a unique seismological fit, our campaign revealed that the dominant pulsation mode at 609 s is an evenly spaced triplet. The even nature of the triplet is suggestive of rotational splitting, implying an enigmatic rotation period of about 4.8 days. There are two viable alternatives assuming the triplet is real: either the period of 4.8 days is representative of the rotation period of the entire star with implications for the angular momentum evolution of these systems, or it is perhaps an indication of differential rotation with a fast rotating exterior and slow rotation deeper in the star. Investigating the possibility that a changing period could mimic a triplet suggests that this scenario is improbable, but not impossible. Using time-series spectra acquired in 2009 May, we determine the orbital period of SDSS J161033.64-010223.3 to be 83.8 ± 2.9 minutes. Three of the observed photometric frequencies from our 2007 May campaign appear to be linear combinations of the 609 s pulsation mode with the first harmonic of the orbital period at 41.5 minutes. This is the first discovery of a linear combination between non-radial pulsation and orbital motion for a variable white dwarf.

  5. Identifying intervals of temporally invariant field-aligned currents from Swarm: Assessing the validity of single-spacecraft methods

    NASA Astrophysics Data System (ADS)

    Forsyth, C.; Rae, I. J.; Mann, I. R.; Pakhotin, I. P.

    2017-03-01

    Field-aligned currents (FACs) are a fundamental component of the coupled solar wind-magnetosphere-ionosphere system. By assuming that FACs can be approximated by stationary infinite current sheets that do not change over the spacecraft crossing time, single-spacecraft magnetic field measurements can be used to estimate the currents flowing in space. By combining data from multiple spacecraft on similar orbits, these stationarity assumptions can be tested. In this technical report, we present a new technique that combines cross correlation and linear fitting of multiple spacecraft measurements to determine the reliability of the FAC estimates. We show that this technique can identify those intervals in which the currents estimated from single-spacecraft techniques are both well correlated and have similar amplitudes, thus meeting the spatial and temporal stationarity requirements. Using data from the European Space Agency's Swarm mission from 2014 to 2015, we show that larger-scale currents (>450 km) are well correlated and have a one-to-one fit up to 50% of the time, whereas small-scale (<50 km) currents show similar amplitudes only 1% of the time despite there being a good correlation 18% of the time. It is thus imperative to examine both the correlation and amplitude of the calculated FACs in order to assess the validity of the underlying assumptions and hence ultimately the reliability of such single-spacecraft FAC estimates.
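
    The combined cross-correlation and linear-fit check can be sketched as follows, using two synthetic current series in place of Swarm data; a high peak correlation together with a fitted slope near one indicates both temporal agreement and similar amplitudes.

    ```python
    # Sketch: assessing whether two field-aligned current (FAC) time series from
    # spacecraft on similar orbits are both well correlated and of similar
    # amplitude, via cross-correlation followed by a linear fit at the best lag.
    # The two series here are synthetic.
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.arange(1000)
    fac_a = np.sin(2 * np.pi * t / 120.0) + 0.2 * rng.normal(size=t.size)
    fac_b = np.roll(fac_a, 5) + 0.2 * rng.normal(size=t.size)   # lagged, noisy copy

    def xcorr_peak(a, b, max_lag=50):
        lags = np.arange(-max_lag, max_lag + 1)
        cc = [np.corrcoef(a[max(0, -l):len(a) - max(0, l)],
                          b[max(0, l):len(b) - max(0, -l)])[0, 1] for l in lags]
        i = int(np.argmax(cc))
        return lags[i], cc[i]

    lag, corr = xcorr_peak(fac_a, fac_b)
    # Align the series at the best lag and fit b = slope * a + intercept;
    # a slope near one indicates similar current amplitudes.
    a_al = fac_a[max(0, -lag):len(fac_a) - max(0, lag)]
    b_al = fac_b[max(0, lag):len(fac_b) - max(0, -lag)]
    slope, intercept = np.polyfit(a_al, b_al, 1)
    print(f"lag={lag}, correlation={corr:.2f}, slope={slope:.2f}")
    ```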

  6. Faraday rotation at low frequencies: magnetoionic material of the large FRII radio galaxy PKS J0636-2036

    NASA Astrophysics Data System (ADS)

    O'Sullivan, S. P.; Lenc, E.; Anderson, C. S.; Gaensler, B. M.; Murphy, T.

    2018-04-01

    We present a low-frequency, broad-band polarization study of the FRII radio galaxy PKS J0636-2036 (z = 0.0551), using the Murchison Widefield Array (MWA) from 70 to 230 MHz. The northern and southern hotspots (separated by ˜14.5 arcmin on the sky) are resolved by the MWA (3.3 arcmin resolution) and both are detected in linear polarization across the full frequency range. A combination of Faraday rotation measure (RM) synthesis and broad-band polarization model fitting is used to constrain the Faraday depolarization properties of the source. For the integrated southern hotspot emission, two-RM-component models are strongly favoured over a single RM component, and the best-fitting model requires Faraday dispersions of approximately 0.7 and 1.2 rad m-2 (with a mean RM of ˜50 rad m-2). High-resolution imaging at 5 arcsec with the Australia Telescope Compact Array shows significant sub-structure in the southern hotspot and highlights some of the limitations in the polarization modelling of the MWA data. Based on the observed depolarization, combined with extrapolations of gas density scaling relations for group environments, we estimate magnetic field strengths in the intergalactic medium between ˜0.04 and 0.5 μG. We also comment on future prospects of detecting more polarized sources at low frequencies.

  7. Experimental vibration damping characteristics of the third-stage rotor of a three-stage transonic axial-flow compressor

    NASA Technical Reports Server (NTRS)

    Newman, Frederick A.

    1988-01-01

    Rotor blade aerodynamic damping is experimentally determined in a three-stage transonic axial flow compressor having design aerodynamic performance goals of 4.5:1 pressure ratio and 65.5 lbm/sec weight flow. The combined damping associated with each mode is determined by a least squares fit of a single degree of freedom system transfer function to the nonsynchronous portion of the rotor blade strain gage output power spectra. The combined damping consists of the aerodynamic damping and the structural and mechanical damping. The aerodynamic damping varies linearly with the inlet total pressure for a given corrected speed, weight flow, and pressure ratio while the structural and mechanical damping is assumed to remain constant. The combined damping is determined at three inlet total pressure levels to obtain the aerodynamic damping. The third-stage rotor blade aerodynamic damping is presented and discussed for the design equivalent speed with the stator blades reset for maximum efficiency. The compressor overall performance and experimental Campbell diagrams for the third-stage rotor blade row are also presented.

  8. Experimental Determination of Aerodynamic Damping in a Three-Stage Transonic Axial-Flow Compressor. Degree awarded by Case Western Reserve Univ.

    NASA Technical Reports Server (NTRS)

    Newman, Frederick A.

    1988-01-01

    Rotor blade aerodynamic damping is experimentally determined in a three-stage transonic axial flow compressor having design aerodynamic performance goals of 4.5:1 pressure ratio and 65.5 lbm/sec weight flow. The combined damping associated with each mode is determined by a least squares fit of a single degree of freedom system transfer function to the nonsynchronous portion of the rotor blade strain gauge output power spectra. The combined damping consists of aerodynamic and structural and mechanical damping. The aerodynamic damping varies linearly with the inlet total pressure for a given equivalent speed, equivalent mass flow, and pressure ratio while structural and mechanical damping are assumed to be constant. The combined damping is determined at three inlet total pressure levels to obtain the aerodynamic damping. The third stage rotor blade aerodynamic damping is presented and discussed for 70, 80, 90, and 100 percent design equivalent speed. The compressor overall performance and experimental Campbell diagrams for the third stage rotor blade row are also presented.

  9. Experimental Vibration Damping Characteristics of the Third-stage Rotor of a Three-stage Transonic Axial-flow Compressor

    NASA Technical Reports Server (NTRS)

    Newman, Frederick A.

    1988-01-01

    Rotor blade aerodynamic damping is experimentally determined in a three-stage transonic axial flow compressor having design aerodynamic performance goals of 4.5:1 pressure ratio and 65.5 lbm/sec weight flow. The combined damping associated with each mode is determined by a least squares fit of a single degree of freedom system transfer function to the nonsynchronous portion of the rotor blade strain gage output power spectra. The combined damping consists of the aerodynamic damping and the structural and mechanical damping. The aerodynamic damping varies linearly with the inlet total pressure for a given corrected speed, weight flow, and pressure ratio while the structural and mechanical damping is assumed to remain constant. The combined damping is determined at three inlet total pressure levels to obtain the aerodynamic damping. The third-stage rotor blade aerodynamic damping is presented and discussed for the design equivalent speed with the stator blades reset for maximum efficiency. The compressor overall performance and experimental Campbell diagrams for the third-stage rotor blade row are also presented.

  10. Linear FBG Temperature Sensor Interrogation with Fabry-Perot ITU Multi-wavelength Reference.

    PubMed

    Park, Hyoung-Jun; Song, Minho

    2008-10-29

    The equidistantly spaced multi-passbands of a Fabry-Perot ITU filter are used as an efficient multi-wavelength reference for fiber Bragg grating sensor demodulation. To compensate for the nonlinear wavelength tuning effect in the FBG sensor demodulator, a polynomial fitting algorithm was applied to the temporal peaks of the wavelength-scanned ITU filter. The fitted wavelength values are assigned to the peak locations of the FBG sensor reflections, obtaining constant accuracy, regardless of the wavelength scan range and frequency. A linearity error of about 0.18% against a reference thermocouple thermometer was obtained with the suggested method.

  11. Evaluating flow cytometer performance with weighted quadratic least squares analysis of LED and multi-level bead data.

    PubMed

    Parks, David R; El Khettabi, Faysal; Chase, Eric; Hoffman, Robert A; Perfetto, Stephen P; Spidlen, Josef; Wood, James C S; Moore, Wayne A; Brinkman, Ryan R

    2017-03-01

    We developed a fully automated procedure for analyzing data from LED pulses and multilevel bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than that from multilevel bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fit the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. © 2017 International Society for Advancement of Cytometry.
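
    A minimal sketch of a weighted quadratic least-squares fit of variance against mean for a multi-level bead data set; the values and the variance-based weights are assumptions chosen for illustration, not the flowQB implementation.

    ```python
    # Sketch: weighted quadratic least-squares fit of signal variance against mean
    # for a multi-level bead (or LED) data set. The linear coefficient sets the
    # statistical-photoelectron (Spe) scale and the constant term the background.
    # All values are invented.
    import numpy as np

    mean = np.array([50, 150, 400, 1000, 2500, 6000, 15000], dtype=float)
    var = np.array([900, 1400, 2600, 5800, 13500, 33000, 93000], dtype=float)
    n_events = 5000                                     # events per peak (assumed)

    # Approximate standard error of a sample variance ~ var * sqrt(2/(n-1));
    # np.polyfit takes weights proportional to 1/sigma.
    sigma = var * np.sqrt(2.0 / (n_events - 1))
    coef = np.polyfit(mean, var, deg=2, w=1.0 / sigma)  # [c2, c1, c0]
    c2, c1, c0 = coef

    fitted = np.polyval(coef, mean)
    resid = (var - fitted) / sigma                      # weighted (standardized) residuals
    print(f"background variance c0 = {c0:.0f}")
    print(f"Spe scale: ~{1.0 / c1:.2f} photoelectrons per intensity unit")
    print(f"max |weighted residual| = {np.abs(resid).max():.2f}")
    ```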

  12. X-Ray Absorption Near Edge Structure And Extended X-Ray Absorption Fine Structure Analysis of Standards And Biological Samples Containing Mixed Oxidation States of Chromium(III) And Chromium(VI)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, J.G.; Dokken, K.; Peralta-Videa, J.R.

    For the first time a method has been developed for the extended X-ray absorption fine structure (EXAFS) data analyses of biological samples containing multiple oxidation states of chromium. In this study, the first shell coordination and interatomic distances based on the data analysis of known standards of potassium chromate (Cr(VI)) and chromium nitrate hexahydrate (Cr(III)) were investigated. The standards examined were mixtures of the following molar ratios of Cr(VI):Cr(III), 0:1, 0.25:0.75, 0.5:0.5, 0.75:0.25, and 1:0. It was determined from the calibration data that the fitting error associated with linear combination X-ray absorption near edge structure (LC-XANES) fittings was approximately ±10% of the total fitting. The peak height of the Cr(VI) pre-edge feature after normalization of the X-ray absorption (XAS) spectra was used to prepare a calibration curve. The EXAFS fittings of the standards were also investigated and fittings to lechuguilla biomass samples laden with different ratios of Cr(III) and Cr(VI) were performed as well. An excellent agreement between the XANES data and the data presented in the EXAFS spectra was observed. The EXAFS data also presented mean coordination numbers directly related to the ratios of the different chromium oxidation states in the sample. The chromium-oxygen interactions had two different bond lengths at approximately 1.68 and 1.98 Å for the Cr(VI) and Cr(III) in the sample, respectively.

  13. Using web search query data to monitor dengue epidemics: a new model for neglected tropical disease surveillance.

    PubMed

    Chan, Emily H; Sahai, Vikram; Conrad, Corrie; Brownstein, John S

    2011-05-01

    A variety of obstacles including bureaucracy and lack of resources have interfered with timely detection and reporting of dengue cases in many endemic countries. Surveillance efforts have turned to modern data sources, such as Internet search queries, which have been shown to be effective for monitoring influenza-like illnesses. However, few have evaluated the utility of web search query data for other diseases, especially those of high morbidity and mortality or where a vaccine may not exist. In this study, we aimed to assess whether web search queries are a viable data source for the early detection and monitoring of dengue epidemics. Bolivia, Brazil, India, Indonesia and Singapore were chosen for analysis based on available data and adequate search volume. For each country, a univariate linear model was then built by fitting a time series of the fraction of Google search query volume for specific dengue-related queries from that country against a time series of official dengue case counts for a time-frame within 2003-2010. The specific combination of queries used was chosen to maximize model fit. Spurious spikes in the data were also removed prior to model fitting. The final models, fit using a training subset of the data, were cross-validated against both the overall dataset and a holdout subset of the data. All models were found to fit the data quite well, with validation correlations ranging from 0.82 to 0.99. Web search query data were found to be capable of tracking dengue activity in Bolivia, Brazil, India, Indonesia and Singapore. Whereas traditional dengue data from official sources are often not available until after some substantial delay, web search query data are available in near real-time. These data represent a valuable complement to traditional dengue surveillance.
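
    The following Python sketch illustrates the general shape of such an analysis: a univariate linear fit of case counts against a search-query fraction on a training period, followed by a correlation check on a holdout period. The query fractions and case counts are simulated placeholders, not the study's data.

```python
# Sketch: univariate linear model of dengue case counts vs. search-query
# fraction, validated by correlation on a chronological holdout (simulated data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
query_frac = rng.uniform(0.0, 1.0, 96)                       # e.g. monthly values
cases = 500 + 4000 * query_frac + rng.normal(0, 200, 96)     # synthetic counts

train, test = slice(0, 72), slice(72, 96)                    # train / holdout split
slope, intercept = np.polyfit(query_frac[train], cases[train], 1)
predicted = intercept + slope * query_frac[test]

r, _ = pearsonr(predicted, cases[test])
print(f"holdout validation correlation r = {r:.2f}")
```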

  14. Probing primordial features with future galaxy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballardini, M.; Fedeli, C.; Moscardini, L.

    2016-10-01

    We study the capability of future measurements of the galaxy clustering power spectrum to probe departures from a power-law spectrum for primordial fluctuations. On considering the information from the galaxy clustering power spectrum up to quasi-linear scales, i.e. k < 0.1 h Mpc⁻¹, we present forecasts for DESI, Euclid and SPHEREx in combination with CMB measurements. As examples of departures in the primordial power spectrum from a simple power-law, we consider four Planck 2015 best-fits motivated by inflationary models with different breaking of the slow-roll approximation. At present, these four representative models provide an improved fit to CMB temperature anisotropies, although not at a statistically significant level. As for other extensions in the matter content of the simplest ΛCDM model, the complementarity of the information in the resulting matter power spectrum expected from these galaxy surveys and in the primordial power spectrum from CMB anisotropies can be effective in constraining cosmological models. We find that the three galaxy surveys can add significant information to the CMB data to better constrain the extra parameters of the four models considered.

  15. Compositional Models of Glass/Melt Properties and their Use for Glass Formulation

    DOE PAGES

    Vienna, John D.; USA, Richland Washington

    2014-12-18

    Nuclear waste glasses must simultaneously meet a number of criteria related to their processability, product quality, and cost factors. The properties that must be controlled in glass formulation and waste vitrification plant operation tend to vary smoothly with composition, allowing glass property-composition models to be developed and used. Models have been fit to the key glass properties. The properties are transformed so that simple functions of composition (e.g., linear, polynomial, or component ratios) can be used as model forms. The model forms are fit to experimental data designed statistically to efficiently cover the composition space of interest. Examples of these models are found in the literature. The glass property-composition models, their uncertainty definitions, property constraints, and optimality criteria are combined to formulate optimal glass compositions, control composition in vitrification plants, and to qualify waste glasses for disposal. An overview of current glass property-composition modeling techniques is summarized in this paper along with an example of how those models are applied to glass formulation and product qualification at the planned Hanford high-level waste vitrification plant.

  16. Multivariate meta-analysis using individual participant data.

    PubMed

    Riley, R D; Price, M J; Jackson, D; Wardle, M; Gueyffier, F; Wang, J; Staessen, J A; White, I R

    2015-06-01

    When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment-covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. © 2014 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
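
    For readers unfamiliar with the bootstrap route to within-study correlations, the Python sketch below resamples participants of a single simulated trial, re-estimates two treatment effects in each replicate, and correlates the replicate estimates; in practice each outcome's effect would come from its own regression or survival model, as described above.

```python
# Sketch: bootstrap estimate of the within-study correlation between two
# treatment-effect estimates from individual participant data (simulated).
import numpy as np

rng = np.random.default_rng(42)
n = 200
treat = rng.integers(0, 2, n)
errors = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n)
y1 = 0.5 * treat + errors[:, 0]          # outcome 1 (continuous)
y2 = 0.3 * treat + errors[:, 1]          # outcome 2 (continuous)

def effect_pair(idx):
    """Mean-difference treatment effects for both outcomes in one resample."""
    t, c = treat[idx] == 1, treat[idx] == 0
    return (y1[idx][t].mean() - y1[idx][c].mean(),
            y2[idx][t].mean() - y2[idx][c].mean())

boot = np.array([effect_pair(rng.integers(0, n, n)) for _ in range(2000)])
within_study_corr = np.corrcoef(boot[:, 0], boot[:, 1])[0, 1]
print(f"estimated within-study correlation = {within_study_corr:.2f}")
```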

  17. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
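
    The reparameterization idea can be illustrated with a small Python sketch: a fixed-knot basis in which the response is quadratic below the knot and linear above it, with the function and its first derivative continuous at the knot by construction. This shows only the fixed-effects design matrix on simulated data; it is not the authors' SAS/S-plus implementation and ignores the random-effects part of the mixed model.

```python
# Sketch: implicitly constrained fixed-knot spline basis with varying
# polynomial order (quadratic below the knot, linear above), fit by ordinary
# least squares on simulated longitudinal-style data.
import numpy as np

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 10, 120))                       # measurement times
y = 1.0 + np.sin(t / 3.0) + rng.normal(0, 0.1, t.size)     # synthetic response
knot = 4.0

# The column t**2 - (t - knot)_+**2 is quadratic for t < knot and linear for
# t >= knot, with the function and first derivative continuous at the knot.
trunc = np.clip(t - knot, 0.0, None)
X = np.column_stack([np.ones_like(t), t, t**2 - trunc**2])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ beta) ** 2)
print("spline coefficients (intercept, linear, constrained quadratic):", beta)
print("residual sum of squares:", round(rss, 3))
```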

  18. Effects of Physical Activity Intervention on Motor Proficiency and Physical Fitness in Children With ADHD: An Exploratory Study.

    PubMed

    Pan, Chien-Yu; Chang, Yu-Kai; Tsai, Chia-Liang; Chu, Chia-Hua; Cheng, Yun-Wen; Sung, Ming-Chih

    2017-07-01

    This study explored how a 12-week simulated developmental horse-riding program (SDHRP) combined with fitness training influenced the motor proficiency and physical fitness of children with ADHD. Twelve children with ADHD received the intervention, whereas 12 children with ADHD and 24 typically developing (TD) children did not. The fitness levels and motor skills of the participants were assessed using standardized tests before and after the 12-week training program. Significant improvements were observed in the motor proficiency, cardiovascular fitness, and flexibility of the ADHD training group following the intervention. Children with ADHD exhibit low levels of motor proficiency and cardiovascular fitness; the combined 12-week SDHRP and fitness training positively affected both of these domains in this group.

  19. Paper-cutting operations using scissors in Drury's law tasks.

    PubMed

    Yamanaka, Shota; Miyashita, Homei

    2018-05-01

    Human performance modeling is a core topic in ergonomics. In addition to deriving models, it is important to verify the kinds of tasks that can be modeled. Drury's law is promising for path tracking tasks such as navigating a path with pens or driving a car. We conducted an experiment based on the observation that paper-cutting tasks using scissors resemble such tasks. The results showed that cutting arc-like paths (1/4 of a circle) showed an excellent fit with Drury's law (R² > 0.98), whereas cutting linear paths showed a worse fit (R² > 0.87). Since linear paths yielded better fits when path amplitudes were divided (R² > 0.99 for all amplitudes), we discuss the characteristics of paper-cutting operations using scissors. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Assessing Competencies Needed to Engage With Digital Health Services: Development of the eHealth Literacy Assessment Toolkit.

    PubMed

    Karnoe, Astrid; Furstrand, Dorthe; Christensen, Karl Bang; Norgaard, Ole; Kayser, Lars

    2018-05-10

    To achieve full potential in user-oriented eHealth projects, we need to ensure a match between the eHealth technology and the user's eHealth literacy, described as knowledge and skills. However, there is a lack of multifaceted eHealth literacy assessment tools suitable for screening purposes. The objective of our study was to develop and validate an eHealth literacy assessment toolkit (eHLA) that assesses individuals' health literacy and digital literacy using a mix of existing and newly developed scales. From 2011 to 2015, scales were continuously tested and developed in an iterative process, which led to 7 tools being included in the validation study. The eHLA validation version consisted of 4 health-related tools (tool 1: "functional health literacy," tool 2: "health literacy self-assessment," tool 3: "familiarity with health and health care," and tool 4: "knowledge of health and disease") and 3 digitally-related tools (tool 5: "technology familiarity," tool 6: "technology confidence," and tool 7: "incentives for engaging with technology") that were tested in 475 respondents from a general population sample and an outpatient clinic. Statistical analyses examined floor and ceiling effects, interitem correlations, item-total correlations, and Cronbach coefficient alpha (CCA). Rasch models (RM) examined the fit of data. Tools were reduced in items to secure robust tools fit for screening purposes. Reductions were made based on psychometrics, face validity, and content validity. Tool 1 was not reduced in items; it consequently consists of 10 items. The overall fit to the RM was acceptable (Anderson conditional likelihood ratio, CLR=10.8; df=9; P=.29), and CCA was .67. Tool 2 was reduced from 20 to 9 items. The overall fit to a log-linear RM was acceptable (Anderson CLR=78.4, df=45, P=.002), and CCA was .85. Tool 3 was reduced from 23 to 5 items. The final version showed excellent fit to a log-linear RM (Anderson CLR=47.7, df=40, P=.19), and CCA was .90. Tool 4 was reduced from 12 to 6 items. The fit to a log-linear RM was acceptable (Anderson CLR=42.1, df=18, P=.001), and CCA was .59. Tool 5 was reduced from 20 to 6 items. The fit to the RM was acceptable (Anderson CLR=30.3, df=17, P=.02), and CCA was .94. Tool 6 was reduced from 5 to 4 items. The fit to a log-linear RM taking local dependency (LD) into account was acceptable (Anderson CLR=26.1, df=21, P=.20), and CCA was .91. Tool 7 was reduced from 6 to 4 items. The fit to a log-linear RM taking LD and differential item functioning into account was acceptable (Anderson CLR=23.0, df=29, P=.78), and CCA was .90. The eHLA consists of 7 short, robust scales that assess individual's knowledge and skills related to digital literacy and health literacy. ©Astrid Karnoe, Dorthe Furstrand, Karl Bang Christensen, Ole Norgaard, Lars Kayser. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 10.05.2018.

  1. Curve Fitting via the Criterion of Least Squares. Applications of Algebra and Elementary Calculus to Curve Fitting. [and] Linear Programming in Two Dimensions: I. Applications of High School Algebra to Operations Research. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 321, 453.

    ERIC Educational Resources Information Center

    Alexander, John W., Jr.; Rosenberg, Nancy S.

    This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…

  2. Study of the anticorrelations between ozone and UV-B radiation using linear and exponential fits in Southern Brazil

    NASA Astrophysics Data System (ADS)

    Guarnieri, R.; Padilha, L.; Guarnieri, F.; Echer, E.; Makita, K.; Pinheiro, D.; Schuch, A.; Boeira, L.; Schuch, N.

    Ultraviolet radiation type B (UV-B, 280-315 nm) is well known for its damaging effects on life on Earth, including the possibility of causing skin cancer in humans. However, atmospheric ozone has absorption bands in this spectral region, reducing its incidence at the Earth's surface. Therefore, the ozone amount is one of the parameters, besides clouds, aerosols, solar zenith angle, altitude, and albedo, that determine the UV-B radiation intensity reaching the Earth's surface. The total ozone column, in Dobson Units, determined by the TOMS spectrometer on board a NASA satellite, and UV-B radiation measurements obtained by a UV-B radiometer model MS-210W (Eko Instruments) were correlated. The measurements were obtained at the Observatório Espacial do Sul - Instituto Nacional de Pesquisas Espaciais (OES/CRSPE/INPE-MCT), coordinates: Lat. 29.44°S, Long. 53.82°W. The correlations were made using UV-B measurements at fixed solar zenith angles, and only days with clear sky were selected in a period from July 1999 to December 2001. Moreover, the mathematical behavior of the correlation at different angles was examined, and correlation coefficients were determined by linear and first-order exponential fits. In both fits, high correlation coefficient values were obtained, and the difference between the linear and exponential fits can be considered small.
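
    As a simple illustration of comparing the two fit forms mentioned above, the Python sketch below fits both a straight line and a first-order exponential to clear-sky UV-B irradiance versus total ozone column at a fixed solar zenith angle; the ozone and UV-B values are synthetic placeholders.

```python
# Sketch: linear vs. first-order exponential fit of UV-B irradiance against
# total ozone column (synthetic clear-sky data at a fixed solar zenith angle).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
ozone = rng.uniform(250, 330, 80)                               # Dobson Units
uvb = 2.5 * np.exp(-0.006 * ozone) + rng.normal(0, 0.01, 80)    # arbitrary units

lin_coef = np.polyfit(ozone, uvb, 1)                            # linear fit
exp_coef, _ = curve_fit(lambda x, a, b: a * np.exp(b * x),
                        ozone, uvb, p0=(2.0, -0.005))           # exponential fit

r_lin = pearsonr(np.polyval(lin_coef, ozone), uvb)[0]
r_exp = pearsonr(exp_coef[0] * np.exp(exp_coef[1] * ozone), uvb)[0]
print(f"linear fit r = {r_lin:.3f}, exponential fit r = {r_exp:.3f}")
```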

  3. Conceptualization of the Sexual Response Models in Men: Are there Differences Between Sexually Functional and Dysfunctional Men?

    PubMed

    Connaughton, Catherine; McCabe, Marita; Karantzas, Gery

    2016-03-01

    Research to validate models of sexual response empirically in men with and without sexual dysfunction (MSD), as currently defined, is limited. To explore the extent to which the traditional linear or the Basson circular model best represents male sexual response for men with MSD and sexually functional men. In total, 573 men completed an online questionnaire to assess sexual function and aspects of the models of sexual response. In total, 42.2% of men (242) were sexually functional, and 57.8% (331) had at least one MSD. Models were built and tested using bootstrapping and structural equation modeling. Fit of models for men with and without MSD. The linear model and the initial circular model were a poor fit for men with and without MSD. A modified version of the circular model demonstrated adequate fit for the two groups and showed important interactions between psychological factors and sexual response for men with and without MSD. Male sexual response was not represented by the linear model for men with or without MSD, excluding possible healthy responsive desire. The circular model provided a better fit for the two groups of men but demonstrated that the relations between psychological factors and phases of sexual response were different for men with and without MSD as currently defined. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.

  4. Dynamical properties of maps fitted to data in the noise-free limit

    PubMed Central

    Lindström, Torsten

    2013-01-01

    We argue that any attempt to classify dynamical properties from nonlinear finite time-series data requires a mechanistic model fitting the data better than piecewise linear models according to standard model selection criteria. Such a procedure seems necessary but still not sufficient. PMID:23768079

  5. Some Statistics for Assessing Person-Fit Based on Continuous-Response Models

    ERIC Educational Resources Information Center

    Ferrando, Pere Joan

    2010-01-01

    This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…

  6. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  7. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  8. 40 CFR 89.319 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...

  9. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  10. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  11. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  12. 40 CFR 89.320 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...

  13. An empirical test of the relative and combined effects of land-cover and climate change on local colonization and extinction.

    PubMed

    Yalcin, Semra; Leroux, Shawn James

    2018-04-14

    Land-cover and climate change are two main drivers of changes in species ranges. Yet, the majority of studies investigating the impacts of global change on biodiversity focus on one global change driver and usually use simulations to project biodiversity responses to future conditions. We conduct an empirical test of the relative and combined effects of land-cover and climate change on species occurrence changes. Specifically, we examine whether observed local colonization and extinctions of North American birds between 1981-1985 and 2001-2005 are correlated with land-cover and climate change and whether bird life history and ecological traits explain interspecific variation in observed occurrence changes. We fit logistic regression models to test the impact of physical land-cover change, changes in net primary productivity, winter precipitation, mean summer temperature, and mean winter temperature on the probability of Ontario breeding bird local colonization and extinction. Models with climate change, land-cover change, and the combination of these two drivers were the top ranked models of local colonization for 30%, 27%, and 29% of species, respectively. Conversely, models with climate change, land-cover change, and the combination of these two drivers were the top ranked models of local extinction for 61%, 7%, and 9% of species, respectively. The quantitative impacts of land-cover and climate change variables also vary among bird species. We then fit linear regression models to test whether the variation in regional colonization and extinction rate could be explained by mean body mass, migratory strategy, and habitat preference of birds. Overall, species traits were weakly correlated with heterogeneity in species occurrence changes. We provide empirical evidence showing that land-cover change, climate change, and the combination of multiple global change drivers can differentially explain observed species local colonization and extinction. © 2018 John Wiley & Sons Ltd.

  14. Optimizing methods for linking cinematic features to fMRI data.

    PubMed

    Kauttonen, Janne; Hlushchuk, Yevhen; Tikka, Pia

    2015-04-15

    One of the challenges of naturalistic neuroscience using movie-viewing experiments is how to interpret observed brain activations in relation to the multiplicity of time-locked stimulus features. As previous studies have shown less inter-subject synchronization across viewers of random video footage than of story-driven films, new methods need to be developed for the analysis of less story-driven content. To optimize the linkage between our fMRI data, collected during viewing of the deliberately non-narrative silent film 'At Land' by Maya Deren (1944), and its annotated content, we combined elastic-net regularization with model-driven linear regression and the well-established data-driven independent component analysis (ICA) and inter-subject correlation (ISC) methods. In the linear regression analysis, both IC and region-of-interest (ROI) time-series were fitted with the time-series of a total of 36 binary-valued annotations and one real-valued tactile annotation of film features. Elastic-net regularization and cross-validation were applied in the ordinary least-squares linear regression to avoid over-fitting due to the multicollinearity of regressors; the results were compared against both partial least-squares (PLS) regression and un-regularized full-model regression. A non-parametric permutation testing scheme was applied to evaluate the statistical significance of the regression. We found a statistically significant correlation between the annotation model and 9 of the 40 ICs. The regression analysis was also repeated for a large set of cubic ROIs covering the grey matter. Both IC- and ROI-based regression analyses revealed activations in parietal and occipital regions, with additional smaller clusters in the frontal lobe. Furthermore, we found elastic-net-based regression more sensitive than PLS and un-regularized regression, since it detected a larger number of significant ICs and ROIs. Along with the ISC ranking methods, our regression analysis proved to be a feasible method for ordering the ICs based on their functional relevance to the annotated cinematic features. In contrast to hypothesis-driven manual pre-selection of individual regressors, which is biased by choice, the novelty of our method lies in applying a data-driven approach to all content features simultaneously. We found the combination of regularized regression and ICA especially useful when analyzing fMRI data obtained using a non-narrative movie stimulus with a large set of complex and correlated features. Copyright © 2015. Published by Elsevier Inc.
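
    The core step, elastic-net regularized regression of one time series on many correlated regressors with cross-validated penalty selection, can be sketched in a few lines of Python using scikit-learn; the design matrix and ROI time series below are simulated stand-ins for the annotation regressors and fMRI data.

```python
# Sketch: elastic-net regression with cross-validation for one ROI time
# series against many (correlated, mostly binary) annotation regressors.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n_volumes, n_features = 300, 37                  # fMRI volumes x annotations
X = rng.integers(0, 2, (n_volumes, n_features)).astype(float)   # binary regressors
true_beta = np.zeros(n_features)
true_beta[[2, 5, 11]] = [1.0, -0.8, 0.5]         # a few truly relevant features
roi_ts = X @ true_beta + rng.normal(0, 1.0, n_volumes)   # simulated ROI signal

model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5).fit(X, roi_ts)
print("selected alpha:", round(model.alpha_, 4), "l1_ratio:", model.l1_ratio_)
print("regressors retained (non-zero coefficients):", np.flatnonzero(model.coef_))
```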

  15. Single π+ electroproduction on the proton in the first and second resonance regions at 0.25GeV2

    NASA Astrophysics Data System (ADS)

    Egiyan, H.; Aznauryan, I. G.; Burkert, V. D.; Griffioen, K. A.; Joo, K.; Minehart, R.; Smith, L. C.; Adams, G.; Ambrozewicz, P.; Anciant, E.; Anghinolfi, M.; Asavapibhop, B.; Audit, G.; Auger, T.; Avakian, H.; Bagdasaryan, H.; Ball, J. P.; Baltzel, N.; Barrow, S.; Battaglieri, M.; Beard, K.; Bektasoglu, M.; Bellis, M.; Benmouna, N.; Bianchi, N.; Biselli, A. S.; Boiarinov, S.; Bonner, B. E.; Bouchigny, S.; Bradford, R.; Branford, D.; Briscoe, W. J.; Brooks, W. K.; Butuceanu, C.; Calarco, J. R.; Careccia, S. L.; Carman, D. S.; Carnahan, B.; Cetina, C.; Chen, S.; Cole, P. L.; Coleman, A.; Cords, D.; Corvisiero, P.; Crabb, D.; Crannell, H.; Cummings, J. P.; Desanctis, E.; Devita, R.; Degtyarenko, P. V.; Denizli, H.; Dennis, L.; Dharmawardane, K. V.; Djalali, C.; Dodge, G. E.; Donnely, J.; Doughty, D.; Dragovitsch, P.; Dugger, M.; Dytman, S.; Dzyubak, O. P.; Eckhause, M.; Egiyan, K. S.; Elouadrhiri, L.; Empl, A.; Eugenio, P.; Fatemi, R.; Fedotov, G.; Feldman, G.; Feuerbach, R. J.; Forest, T. A.; Funsten, H.; Gaff, S. J.; Gai, M.; Gavalian, G.; Gilad, S.; Gilfoyle, G. P.; Giovanetti, K. L.; Girard, P.; Goetz, G. T.; Gordon, C. I.; Gothe, R.; Guidal, M.; Guillo, M.; Guler, N.; Guo, L.; Gyurjyan, V.; Hadjidakis, C.; Hakobyan, R. S.; Hardie, J.; Heddle, D.; Hersman, F. W.; Hicks, K.; Hicks, R. S.; Hleiqawi, I.; Holtrop, M.; Hu, J.; Hyde-Wright, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B.; Ito, M. M.; Jenkins, D.; Juengst, H. G.; Kelley, J. H.; Kellie, J. D.; Khandaker, M.; Kim, D. H.; Kim, K. Y.; Kim, K.; Kim, M. S.; Kim, W.; Klein, A.; Klein, F. J.; Klimenko, A. V.; Klusman, M.; Kossov, M.; Kramer, L. H.; Kuang, Y.; Kubarovsky, V.; Kuhn, S. E.; Kuhn, J.; Lachniet, J.; Laget, J. M.; Langheinrich, J.; Lawrence, D.; Li, Ji; Livingston, K.; Longhi, A.; Lukashin, K.; Manak, J. J.; Marchand, C.; McAleer, S.; McKinnon, B.; McNabb, J. W.; Mecking, B. A.; Mehrabyan, S.; Melone, J. J.; Mestayer, M. D.; Meyer, C. A.; Mikhailov, K.; Mirazita, M.; Miskimen, R.; Mokeev, V.; Morand, L.; Morrow, S. A.; Muccifora, V.; Mueller, J.; Murphy, L. Y.; Mutchler, G. S.; Napolitano, J.; Nasseripour, R.; Nelson, S. O.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Niczyporuk, B. B.; Niyazov, R. A.; Nozar, M.; O'Rielly, G. V.; Osipenko, M.; Park, K.; Pasyuk, E.; Peterson, G.; Philips, S. A.; Pivnyuk, N.; Pocanic, D.; Pogorelko, O.; Polli, E.; Pozdniakov, S.; Preedom, B. M.; Price, J. W.; Prok, Y.; Protopopescu, D.; Qin, L. M.; Raue, B. A.; Riccardi, G.; Ricco, G.; Ripani, M.; Ritchie, B. G.; Ronchetti, F.; Rosner, G.; Rossi, P.; Rowntree, D.; Rubin, P. D.; Sabatié, F.; Sabourov, K.; Salgado, C.; Santoro, J. P.; Sapunenko, V.; Sargsyan, M.; Schumacher, R. A.; Serov, V. S.; Shafi, A.; Sharabian, Y. G.; Shaw, J.; Simionatto, S.; Skabelin, A. V.; Smith, E. S.; Sober, D. I.; Spraker, M.; Stavinsky, A.; Stepanyan, S.; Stoler, P.; Strakovsky, I. I.; Strauch, S.; Taiuti, M.; Taylor, S.; Tedeschi, D. J.; Thoma, U.; Thompson, R.; Tkabladze, A.; Todor, L.; Tur, C.; Ungaro, M.; Vineyard, M. F.; Vlassov, A. V.; Wang, K.; Weinstein, L. B.; Weller, H.; Weygand, D. P.; Whisnant, C. S.; Wolin, E.; Wood, M. H.; Yegneswaran, A.; Yun, J.; Zhang, J.; Zhao, J.; Zhou, Z.

    2006-02-01

    The ep→e'π+n reaction was studied in the first and second nucleon resonance regions in the 0.25 GeV2

  16. Fitting of the Thomson scattering density and temperature profiles on the COMPASS tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stefanikova, E.; Division of Fusion Plasma Physics, KTH Royal Institute of Technology, SE-10691 Stockholm; Peterka, M.

    2016-11-15

    A new technique for fitting the full radial profiles of electron density and temperature obtained by the Thomson scattering diagnostic in H-mode discharges on the COMPASS tokamak is described. The technique combines the conventionally used modified hyperbolic tangent function for fitting the edge transport barrier (pedestal) with a modification of a Gaussian function for fitting the core plasma. The low number of parameters of this combined function, together with their straightforward interpretability and controllability, provides a robust method for obtaining physically reasonable profile fits. Deconvolution with the diagnostic instrument function is applied to the profile fit, taking into account the dependence on the actual magnetic configuration.
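
    The sketch below shows one possible combined fit function of this type in Python: a modified hyperbolic tangent (mtanh) pedestal plus a Gaussian core term, fitted to a synthetic profile. The exact parameterization used on COMPASS may differ, and no instrument-function deconvolution is included; this is only an illustration of the functional form.

```python
# Sketch: mtanh pedestal + Gaussian core fit to a synthetic H-mode-like profile.
import numpy as np
from scipy.optimize import curve_fit

def pedestal_core(psi, height, pos, width, slope, core_amp, core_width):
    x = (pos - psi) / width
    mtanh = ((1 + slope * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
    pedestal = 0.5 * height * (mtanh + 1.0)             # edge transport barrier
    core = core_amp * np.exp(-(psi / core_width) ** 2)  # Gaussian-like core term
    return pedestal + core

psi = np.linspace(0.0, 1.1, 60)                         # normalized radius
true_profile = pedestal_core(psi, 1.0, 0.98, 0.02, 0.02, 2.0, 0.6)
data = true_profile + np.random.default_rng(5).normal(0, 0.05, psi.size)

p0 = (0.9, 0.97, 0.025, 0.01, 1.8, 0.55)                # rough initial guess
popt, _ = curve_fit(pedestal_core, psi, data, p0=p0, maxfev=20000)
print("fitted pedestal height, position, width:", np.round(popt[:3], 3))
```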

  17. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
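
    A minimal Python version of the two analyses described in this chapter, Pearson correlation and simple linear regression with slope inference, is sketched below on simulated data for two continuous variables.

```python
# Sketch: Pearson correlation and simple linear regression on simulated data.
import numpy as np
from scipy.stats import pearsonr, linregress

rng = np.random.default_rng(11)
x = rng.normal(0, 1, 50)                    # first continuous variable
y = 1.2 * x + rng.normal(0, 0.8, 50)        # second continuous variable

r, p_value = pearsonr(x, y)
fit = linregress(x, y)
print(f"Pearson r = {r:.2f} (p = {p_value:.3g})")
print(f"slope = {fit.slope:.2f} +/- {fit.stderr:.2f}, intercept = {fit.intercept:.2f}")
print(f"R^2 = {fit.rvalue ** 2:.2f}")
```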

  18. Modelling the isometric force response to multiple pulse stimuli in locust skeletal muscle.

    PubMed

    Wilson, Emma; Rustighi, Emiliano; Mace, Brian R; Newland, Philip L

    2011-02-01

    An improved model of locust skeletal muscle will inform on the general behaviour of invertebrate and mammalian muscle, with the eventual aim of improving biomedical models of human muscles, embracing prosthetic construction and muscle therapy. In this article, the isometric response of the locust hind leg extensor muscle to input pulse trains is investigated. Experimental data were collected by stimulating the muscle directly and measuring the force at the tibia. The responses to constant-frequency stimulus trains of various frequencies and numbers of pulses were decomposed into the response to each individual stimulus. Each individual pulse response was then fitted to a model, it being assumed that the response to each pulse could be approximated as a linear impulse response; no assumptions were made about the model order. When the interpulse frequency (IPF) was low and the number of pulses in the train small, a second-order model provided a good fit to each pulse. For moderate IPF or for long pulse trains, a linear third-order model provided a better fit to the response to each pulse. The fit using a second-order model deteriorated with increasing IPF. When the input comprised higher IPFs with a large number of pulses, the assumption that the response was linear could not be confirmed. A generalised model is also presented. This model is second-order and contains two nonlinear terms. The model is able to capture the force response to a range of inputs, including cases where the input comprised higher-frequency pulse trains and the assumption of quasi-linear behaviour could not be confirmed.

  19. Improvements in Calibration and Analysis of the CTBT-relevant Radioxenon Isotopes with High Resolution SiPIN-based Electron Detectors

    NASA Astrophysics Data System (ADS)

    Khrustalev, K.

    2016-12-01

    Current process for the calibration of the beta-gamma detectors used for radioxenon isotope measurements for CTBT purposes is laborious and time consuming. It uses a combination of point sources and gaseous sources resulting in differences between energy and resolution calibrations. The emergence of high resolution SiPIN based electron detectors allows improvements in the calibration and analysis process to be made. Thanks to high electron resolution of SiPIN detectors ( 8-9 keV@129 keV) compared to plastic scintillators ( 35 keV@129keV) there are a lot more CE peaks (from radioxenon and radon progenies) can be resolved and used for energy and resolution calibration in the energy range of the CTBT-relevant radioxenon isotopes. The long term stability of the SiPIN energy calibration allows one to significantly reduce the time of the QC measurements needed for checking the stability of the E/R calibration. The currently used second order polynomials for the E/R calibration fitting are unphysical and shall be replaced by a linear energy calibration for NaI and SiPIN, owing to high linearity and dynamic range of the modern digital DAQ systems, and resolution calibration functions shall be modified to reflect the underlying physical processes. Alternatively, one can completely abandon the use of fitting functions and use only point-values of E/R (similar to the efficiency calibration currently used) at the energies relevant for the isotopes of interest (ROI - Regions Of Interest ). Current analysis considers the detector as a set of single channel analysers, with an established set of coefficients relating the positions of ROIs with the positions of the QC peaks. The analysis of the spectra can be made more robust using peak and background fitting in the ROIs with a single free parameter (peak area) of the potential peaks from the known isotopes and a fixed E/R calibration values set.

  20. Comparison of aerobic capacity in annually certified and uncertified volunteer firefighters.

    PubMed

    Hammer, Rodney L; Heath, Edward M

    2013-05-01

    The leading cause of mortality among firefighters has been cardiac arrest precipitated by stress and overexertion, with volunteer firefighters having double the death rate from this cause compared with career firefighters. In an attempt to reduce on-duty sudden cardiac deaths, annual fitness testing and certification has been widely implemented for wildland firefighters, who have half the cardiac arrest death rate of structural firefighters. The hypothesis was that annual fitness testing would serve as motivation to produce higher cardiorespiratory fitness. This study compared predicted aerobic capacity in annually certified and uncertified volunteer firefighters. Each firefighter performed a submaximal treadmill test to predict V̇O2max. Certified volunteer firefighters, who participated in annual fitness testing, had a predicted V̇O2max of 39.9 ± 8.4 ml·kg⁻¹·min⁻¹. Uncertified volunteer firefighters had a predicted V̇O2max of 37.8 ± 8.5 ml·kg⁻¹·min⁻¹. Annual fitness testing during the certification process did not contribute to statistically higher (F(2,78) = 0.627, p = 0.431) V̇O2max levels in certified volunteer firefighters. Although there was no significant difference in predicted V̇O2max values for certified and uncertified volunteer firefighters, it was reported that 30% of volunteer firefighters had predicted aerobic capacities below the recommended minimum V̇O2max level of 33.5 ml·kg⁻¹·min⁻¹. Current annual fitness testing for volunteer firefighters does not seem to be effective. Thus, the study emphasizes the need for a higher priority for firefighter fitness programs to best ensure the safety of firefighters and the public.

  1. Impact of kerogen heterogeneity on sorption of organic pollutants. 2. Sorption equilibria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, C.; Yu, Z.Q.; Xiao, B.H.

    2009-08-15

    Phenanthrene and naphthalene sorption isotherms were measured for three different series of kerogen materials using completely mixed batch reactors. Sorption isotherms were nonlinear for each sorbate-sorbent system, and the Freundlich isotherm equation fit the sorption data well. The Freundlich isotherm linearity parameter n ranged from 0.192 to 0.729 for phenanthrene and from 0.389 to 0.731 for naphthalene. The n values correlated linearly with rigidity and aromaticity of the kerogen matrix, but the single-point, organic carbon-normalized distribution coefficients varied dramatically among the tested sorbents. A dual-mode sorption equation consisting of a linear partitioning domain and a Langmuir adsorption domain adequately quantified the overall sorption equilibrium for each sorbent-sorbate system. Both models fit the data well, with r² values of 0.965 to 0.996 for the Freundlich model and 0.963 to 0.997 for the dual-mode model for the phenanthrene sorption isotherms. The dual-mode model fitting results showed that as the rigidity and aromaticity of the kerogen matrix increased, the contribution of the linear partitioning domain to the overall sorption equilibrium decreased, whereas the contribution of the Langmuir adsorption domain increased. The present study suggested that kerogen materials found in soils and sediments should not be treated as a single, unified, carbonaceous sorbent phase.
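
    For orientation, a short Python sketch of fitting the two isotherm forms named above, the Freundlich equation q = Kf·Cⁿ and a dual-mode model combining linear partitioning with a Langmuir term, is given below on synthetic sorption data; parameter names and values are illustrative only.

```python
# Sketch: Freundlich and dual-mode (linear + Langmuir) isotherm fits on
# synthetic sorption data, compared by r^2.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(C, Kf, n):
    return Kf * C ** n

def dual_mode(C, Kp, Q0, b):
    return Kp * C + Q0 * b * C / (1.0 + b * C)   # partitioning + Langmuir domains

C = np.linspace(0.01, 1.0, 25)                   # aqueous concentration
q = dual_mode(C, 20.0, 15.0, 8.0) + np.random.default_rng(2).normal(0, 0.3, C.size)

p_fr, _ = curve_fit(freundlich, C, q, p0=(30.0, 0.5))
p_dm, _ = curve_fit(dual_mode, C, q, p0=(10.0, 10.0, 5.0))

def r_squared(pred):
    return 1.0 - np.sum((q - pred) ** 2) / np.sum((q - q.mean()) ** 2)

print("Freundlich Kf, n:", np.round(p_fr, 2),
      "r^2 =", round(r_squared(freundlich(C, *p_fr)), 3))
print("Dual-mode Kp, Q0, b:", np.round(p_dm, 2),
      "r^2 =", round(r_squared(dual_mode(C, *p_dm)), 3))
```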

  2. Right-Sizing Statistical Models for Longitudinal Data

    PubMed Central

    Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.

    2015-01-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507

  3. Right-sizing statistical models for longitudinal data.

    PubMed

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved).

  4. Quantitative analysis of crystalline pharmaceuticals in powders and tablets by a pattern-fitting procedure using X-ray powder diffraction data.

    PubMed

    Yamamura, S; Momose, Y

    2001-01-16

    A pattern-fitting procedure for quantitative analysis of crystalline pharmaceuticals in solid dosage forms using X-ray powder diffraction data is described. This method is based on a procedure for pattern-fitting in crystal structure refinement, and observed X-ray scattering intensities were fitted to analytical expressions including some fitting parameters, i.e. scale factor, peak positions, peak widths and degree of preferred orientation of the crystallites. All fitting parameters were optimized by the non-linear least-squares procedure. Then the weight fraction of each component was determined from the optimized scale factors. In the present study, well-crystallized binary systems, zinc oxide-zinc sulfide (ZnO-ZnS) and salicylic acid-benzoic acid (SA-BA), were used as the samples. In analysis of the ZnO-ZnS system, the weight fraction of ZnO or ZnS could be determined quantitatively in the range of 5-95% in the case of both powders and tablets. In analysis of the SA-BA systems, the weight fraction of SA or BA could be determined quantitatively in the range of 20-80% in the case of both powders and tablets. Quantitative analysis applying this pattern-fitting procedure showed better reproducibility than other X-ray methods based on the linear or integral intensities of particular diffraction peaks. Analysis using this pattern-fitting procedure also has the advantage that the preferred orientation of the crystallites in solid dosage forms can be also determined in the course of quantitative analysis.
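
    A greatly simplified Python sketch of the core idea, modelling the observed powder pattern as a scaled sum of pure-component patterns and deriving weight fractions from the optimized scale factors, is shown below. A real pattern-fitting refinement would also optimize peak positions, widths, and preferred orientation, as described above; the patterns here are synthetic.

```python
# Sketch: weight fractions from scale factors in a linear pattern fit of a
# two-component powder diffraction mixture (synthetic peak profiles).
import numpy as np
from scipy.optimize import nnls

two_theta = np.linspace(10, 60, 500)

def peak(center, width=0.3):
    return np.exp(-((two_theta - center) / width) ** 2)

pattern_a = peak(15) + 0.6 * peak(28) + 0.4 * peak(44)      # mock phase A pattern
pattern_b = 0.8 * peak(21) + peak(33) + 0.5 * peak(51)      # mock phase B pattern

observed = 0.7 * pattern_a + 0.3 * pattern_b                # 70:30 mixture
observed += np.random.default_rng(9).normal(0, 0.01, two_theta.size)

scales, _ = nnls(np.column_stack([pattern_a, pattern_b]), observed)
weights = scales / scales.sum()
print(f"estimated weight fraction A = {weights[0]:.1%}, B = {weights[1]:.1%}")
```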

  5. Music preferences with hearing aids: effects of signal properties, compression settings, and listener characteristics.

    PubMed

    Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M

    2014-01-01

    Current knowledge of how to design and fit hearing aids to optimize music listening is limited. Many hearing-aid users listen to recorded music, which often undergoes compression limiting (CL) in the music industry. Therefore, hearing-aid users may experience twofold effects of compression when listening to recorded music: music-industry CL and hearing-aid wide dynamic-range compression (WDRC). The goal of this study was to examine the roles of input-signal properties, hearing-aid processing, and individual variability in the perception of recorded music, with a focus on the effects of dynamic-range compression. A group of 18 experienced hearing-aid users made paired-comparison preference judgments for classical and rock music samples using simulated hearing aids. Music samples were either unprocessed before hearing-aid input or had different levels of music-industry CL. Hearing-aid conditions included linear gain and individually fitted WDRC. Combinations of four WDRC parameters were included: fast release time (50 msec), slow release time (1,000 msec), three channels, and 18 channels. Listeners also completed several psychophysical tasks. Acoustic analyses showed that CL and WDRC reduced temporal envelope contrasts, changed amplitude distributions across the acoustic spectrum, and smoothed the peaks of the modulation spectrum. Listener judgments revealed that fast WDRC was least preferred for both genres of music. For classical music, linear processing and slow WDRC were equally preferred, and the main effect of number of channels was not significant. For rock music, linear processing was preferred over slow WDRC, and three channels were preferred to 18 channels. Heavy CL was least preferred for classical music, but the amount of CL did not change the patterns of WDRC preferences for either genre. Auditory filter bandwidth as estimated from psychophysical tuning curves was associated with variability in listeners' preferences for classical music. Fast, multichannel WDRC often leads to poor music quality, whereas linear processing or slow WDRC are generally preferred. Furthermore, the effect of WDRC is more important for music preferences than music-industry CL applied to signals before the hearing-aid input stage. Variability in hearing-aid users' perceptions of music quality may be partially explained by frequency resolution abilities.

  6. Modelling the 3D post-seismic deformation signal of the Maule 2010 earthquake: Viscosity heterogeneity or non-linear creep?

    NASA Astrophysics Data System (ADS)

    Peña, C.; Heidbach, O.; Moreno, M.; Li, S.; Bedford, J. R.; Oncken, O.

    2017-12-01

    The surface deformation associated with the 2010 Mw 8.8 Maule earthquake, Chile, was recorded in great detail before, during and after the event. The quality of the post-seismic continuous GPS time series has facilitated a number of studies that have modelled the horizontal signal with a combination of after-slip and viscoelastic relaxation using linear Newtonian rheology. Li et al. (2017, GRL), one of the first studies that also looked into the details of the vertical post-seismic signal, showed that a homogeneous viscosity structure cannot explain the vertical signal well, whereas a heterogeneous viscosity distribution produces a better fit. It is, however, difficult to argue why viscous rock properties should change significantly with distance to the trench. Thus, here we investigate whether a non-linear, strain-rate dependent power-law rheology can fit the post-seismic signal in all three components, in particular the vertical one. We use the first 6 years of post-seismic cGPS data and investigate, with a 2D geomechanical-numerical model along a profile at 36°S, whether non-linear creep can explain the deformation signal as well using reasonable rock properties and a temperature field derived for this region from Springer (1999). The 2D model geometry considers the slab as well as the Moho geometry. Our results show that with our model the post-seismic surface deformation signal can be reproduced as well as in the study of Li et al. (2017). These findings suggest that the largest deformations are produced by dislocation creep. Such a process would take place below the Andes (~40 km depth) at the interface between the deeper, colder crust and the olivine-rich upper mantle, where the lowest effective viscosity results from the relaxation of tensional stresses imposed by the co-seismic displacement. Additionally, we present preliminary results from a 3D geomechanical-numerical model with the same rheology that provides more details of the post-seismic deformation, especially along strike of the subduction zone.

  7. Combined Iron Deficiency and Low Aerobic Fitness Doubly Burden Academic Performance among Women Attending University.

    PubMed

    Scott, Samuel P; De Souza, Mary Jane; Koehler, Karsten; Murray-Kolb, Laura E

    2017-01-01

    Academic success is a key determinant of future prospects for students. Cognitive functioning has been related to nutritional and physical factors. Here, we focus on iron status and aerobic fitness in young-adult female students given the high rate of iron deficiency and declines in fitness reported in this population. We sought to explore the combined effects of iron status and fitness on academic success and to determine whether these associations are mediated by cognitive performance. Women (n = 105) aged 18-35 y were recruited for this cross-sectional study. Data were obtained for iron biomarkers, peak oxygen uptake (VO2peak), grade point average (GPA), performance on computerized attention and memory tasks, and motivation and parental occupation. We compared the GPA of groups 1) with low compared with normal iron status, 2) among different fitness levels, and 3) by using a combined iron status and fitness designation. Mediation analysis was applied to determine whether iron status and VO2peak influence GPA through attentional and mnemonic function. After controlling for age, parental occupation, and motivation, GPA was higher in women with normal compared with low ferritin (3.66 ± 0.06 compared with 3.39 ± 0.06; P = 0.01). In analyses of combined effects of iron status and fitness, GPA was higher in women with normal ferritin and higher fitness (3.70 ± 0.08) than in those with 1) low ferritin and lower fitness (3.36 ± 0.08; P = 0.02) and 2) low ferritin and higher fitness (3.44 ± 0.09; P = 0.04). Path analysis revealed that working memory mediated the association between VO2peak and GPA. Low iron stores and low aerobic fitness may prevent female college students from achieving their full academic potential. Investigators should explore whether integrated lifestyle interventions targeting nutritional status and fitness can benefit cognitive function, academic success, and postgraduate prospects. © 2017 American Society for Nutrition.

  8. Revisiting Isotherm Analyses Using R: Comparison of Linear, Non-linear, and Bayesian Techniques

    EPA Science Inventory

    Extensive adsorption isotherm data exist for an array of chemicals of concern on a variety of engineered and natural sorbents. Several isotherm models exist that can accurately describe these data from which the resultant fitting parameters may subsequently be used in numerical ...

  9. Induction of Chromosomal Aberrations at Fluences of Less Than One HZE Particle per Cell Nucleus

    NASA Technical Reports Server (NTRS)

    Hada, Megumi; Chappell, Lori J.; Wang, Minli; George, Kerry A.; Cucinotta, Francis A.

    2014-01-01

    The assumption of a linear dose response used to describe the biological effects of high-LET radiation is fundamental in radiation protection methodologies. We investigated the dose response for chromosomal aberrations for exposures corresponding to less than one particle traversal per cell nucleus by high energy and charge (HZE) nuclei. Human fibroblast and lymphocyte cells were irradiated with several low doses of <0.1 Gy, and several higher doses of up to 1 Gy, with O (77 keV/µm), Si (99 keV/µm), Fe (175 keV/µm), Fe (195 keV/µm) or Fe (240 keV/µm) particles. Chromosomal aberrations at first mitosis were scored using fluorescence in situ hybridization (FISH) with chromosome-specific paints for chromosomes 1, 2 and 4 and DAPI staining of background chromosomes. Non-linear regression was used to evaluate possible linear and non-linear dose response models based on these data. Dose responses for simple exchanges for human fibroblasts irradiated under confluent culture conditions were best fit by non-linear models motivated by a non-targeted effect (NTE). For human lymphocytes irradiated in blood tubes, an NTE model fit the O-particle data best, while a linear response model fit best for Si and Fe particles. Additional evidence for NTE was found in low-dose experiments measuring gamma-H2AX foci, a marker of double strand breaks (DSB), and in split-dose experiments with human fibroblasts. Our results suggest that simple exchanges in normal human fibroblasts have an important NTE contribution at low particle fluence. The current and prior experimental studies provide important evidence against the linear dose response assumption used in radiation protection for HZE particles and other high-LET radiation at the relevant range of low doses.
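
    To make the model comparison concrete, the Python sketch below contrasts a purely linear dose-response fit with one simple NTE-style form in which a bystander-like term saturates at very low dose; both the functional form and the aberration yields are illustrative assumptions, not the study's fitted models or data.

```python
# Sketch: linear vs. simple non-targeted-effect (NTE) style dose-response fit,
# compared by AIC, on synthetic aberration yields.
import numpy as np
from scipy.optimize import curve_fit

def linear(D, c, a):
    return c + a * D

def nte(D, c, a, k, d0):
    return c + a * D + k * (1.0 - np.exp(-D / d0))   # low-dose saturating term

dose = np.array([0.0, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0])                 # Gy
yields = np.array([0.005, 0.018, 0.021, 0.024, 0.033, 0.055, 0.09])    # synthetic

p_lin, _ = curve_fit(linear, dose, yields)
p_nte, _ = curve_fit(nte, dose, yields, p0=(0.005, 0.06, 0.015, 0.03), maxfev=10000)

def aic(pred, n_params):
    rss = np.sum((yields - pred) ** 2)
    n = yields.size
    return n * np.log(rss / n) + 2 * n_params

print("AIC linear:", round(aic(linear(dose, *p_lin), 2), 1))
print("AIC NTE   :", round(aic(nte(dose, *p_nte), 4), 1))
```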

  10. Quantification and parametrization of non-linearity effects by higher-order sensitivity terms in scattered light differential optical absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Puķīte, Jānis; Wagner, Thomas

    2016-05-01

    We address the application of differential optical absorption spectroscopy (DOAS) of scattered light observations in the presence of strong absorbers (in particular ozone), for which the absorption optical depth is a non-linear function of the trace gas concentration. This is the case because Beer-Lambert law generally does not hold for scattered light measurements due to many light paths contributing to the measurement. While in many cases linear approximation can be made, for scenarios with strong absorptions non-linear effects cannot always be neglected. This is especially the case for observation geometries, for which the light contributing to the measurement is crossing the atmosphere under spatially well-separated paths differing strongly in length and location, like in limb geometry. In these cases, often full retrieval algorithms are applied to address the non-linearities, requiring iterative forward modelling of absorption spectra involving time-consuming wavelength-by-wavelength radiative transfer modelling. In this study, we propose to describe the non-linear effects by additional sensitivity parameters that can be used e.g. to build up a lookup table. Together with widely used box air mass factors (effective light paths) describing the linear response to the increase in the trace gas amount, the higher-order sensitivity parameters eliminate the need for repeating the radiative transfer modelling when modifying the absorption scenario even in the presence of a strong absorption background. While the higher-order absorption structures can be described as separate fit parameters in the spectral analysis (so-called DOAS fit), in practice their quantitative evaluation requires good measurement quality (typically better than that available from current measurements). Therefore, we introduce an iterative retrieval algorithm correcting for the higher-order absorption structures not yet considered in the DOAS fit as well as the absorption dependence on temperature and scattering processes.

  11. Canine cancer screening via ultraviolet absorbance and fluorescence spectroscopy of serum proteins

    NASA Astrophysics Data System (ADS)

    Dickerson, Bryan D.; Geist, Brian L.; Spillman, William B., Jr.; Robertson, John L.

    2007-11-01

    A cost-effective optical cancer screening and monitoring technique was demonstrated in a pilot study of canine serum samples and was patented for commercialization. Compared to conventional blood chemistry analysis methods, more accurate estimations of the concentrations of albumin, globulins, and hemoglobin in serum were obtained by fitting the near UV absorbance and photoluminescence spectra of diluted serum as a linear combination of component reference spectra. Tracking these serum proteins over the course of treatment helped to monitor patient immune response to carcinoma and therapy. For cancer screening, 70% of dogs with clinical presentation of cancer displayed suppressed serum hemoglobin levels (below 20 mg/dL) in combination with atypical serum protein compositions, that is, albumin levels outside of a safe range (from 4 to 8 g/dL) and globulin levels above or below a more normal range (from 1.7 to 3.7 g/dL). Of the dogs that met these criteria, only 20% were given a false positive label by this cancer screening test.

  12. Manipulating motions of targeted single cells in solution by an integrated double-ring magnetic tweezers imaging microscope.

    PubMed

    Wu, Meiling; Yadav, Rajeev; Pal, Nibedita; Lu, H Peter

    2017-07-01

    Controlling and manipulating the motion of living cells in solution holds high promise for developing new biotechnology and biological science. Here, we developed a magnetic tweezers device that employs a combination of two permanent magnets in an up-down double-ring configuration axially aligned with a microscope objective, allowing piconewton (pN) bidirectional force and motion control on the sample beyond a single upward pulling direction. The experimental force calibration and magnetic field simulation using finite element method magnetics demonstrate that the designed magnetic tweezers delivers a linearly combined pN force with positive-negative polarization changes and a tunability at the sub-pN scale, which can be utilized to further achieve motion manipulation by shifting the force balance. We demonstrate an application of the up-down double-ring magnetic tweezers for single-cell manipulation, showing that cells with internalized paramagnetic beads can be selectively picked up and guided in a controlled fine motion.

  13. X-ray-induced apoptosis of BEL-7402 cell line enhanced by extremely low frequency electromagnetic field in vitro.

    PubMed

    Jian, Wen; Wei, Zhao; Zhiqiang, Cheng; Zheng, Fang

    2009-02-01

    This study was designed to test whether an extremely low frequency electromagnetic field (ELF-EMF) could enhance the apoptosis-inducing effect of X-ray radiotherapy on the liver cancer cell line BEL-7402 in vitro. EMF exposure was performed inside an energized solenoid coil. X-ray irradiation was performed using a linear accelerator. Apoptosis rates of BEL-7402 cells were analyzed using an Annexin V-FITC Apoptosis Detection kit. Apoptosis rates of the EMF group and the sham-EMF group were compared when combined with X-ray irradiation. Our results suggested that the apoptosis rate of BEL-7402 cells exposed to low doses of X-ray irradiation could be significantly increased by EMF. More EMF exposures yielded significantly higher apoptosis rates than fewer exposures when combined with 2 Gy X-ray irradiation. These findings suggest that ELF-EMF can augment the apoptotic effects of low-dose X-ray irradiation on BEL-7402 cells in a synergistic and cumulative way. Copyright 2008 Wiley-Liss, Inc.

  14. Three-Dimensional Mapping of Microenvironmental Control of Methyl Rotational Barriers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hembree, William I; Baudry, Jerome Y

    2011-01-01

    Sterical (van der Waals-induced) rotational barriers of methyl groups are investigated theoretically, using ab initio and empirical force field calculations, for various three-dimensional microenvironmental conditions around the methyl group rotator of a model neopentane molecule. The destabilization (reducing methyl rotational barriers) or stabilization (increasing methyl rotational barriers) of the staggered conformation of the methyl rotator depends on a combination of microenvironmental contributions from (i) the number of atoms around the rotator, (ii) the distance between the rotator and the microenvironmental atoms, and (iii) the dihedral angle between the stator, rotator, and molecular environment around the rotator. These geometrical criteria combine their respective effects in a linearly additive fashion, with no apparent cooperative effects, and their combination in space around a rotator may increase, decrease, or leave the rotator's rotational barrier unmodified. This is exemplified in a geometrical analysis of the alanine dipeptide crystal, where microenvironmental effects on methyl rotators' barriers of rotation fit the geometrical mapping described for the neopentane model.

  15. Manipulating motions of targeted single cells in solution by an integrated double-ring magnetic tweezers imaging microscope

    NASA Astrophysics Data System (ADS)

    Wu, Meiling; Yadav, Rajeev; Pal, Nibedita; Lu, H. Peter

    2017-07-01

    Controlling and manipulating the motion of living cells in solution holds high promise for developing new biotechnology and biological science. Here, we developed a magnetic tweezers device that employs a combination of two permanent magnets in an up-down double-ring configuration axially aligned with a microscope objective, allowing piconewton (pN) bidirectional force and motion control on the sample beyond a single upward pulling direction. The experimental force calibration and magnetic field simulation using finite element method magnetics demonstrate that the designed magnetic tweezers delivers a linearly combined pN force with positive-negative polarization changes and a tunability at the sub-pN scale, which can be utilized to further achieve motion manipulation by shifting the force balance. We demonstrate an application of the up-down double-ring magnetic tweezers for single-cell manipulation, showing that cells with internalized paramagnetic beads can be selectively picked up and guided in a controlled fine motion.

  16. Prediction of cancer incidence and mortality in Korea, 2014.

    PubMed

    Jung, Kyu-Won; Won, Young-Joo; Kong, Hyun-Joo; Oh, Chang-Mo; Lee, Duk Hyoung; Lee, Jin Soo

    2014-04-01

    We studied and reported on cancer incidence and mortality rates as projected for the year 2014 in order to estimate Korea's current cancer burden. Cancer incidence data from 1999 to 2011 were obtained from the Korea National Cancer Incidence Database, and cancer mortality data from 1993 to 2012 were acquired from Statistics Korea. Cancer incidence in 2014 was projected by fitting a linear regression model to observed age-specific cancer incidence rates against observed years, then multiplying the projected age-specific rates by the age-specific population. For cancer mortality, a similar procedure was employed, except that a Joinpoint regression model was used to determine at which year the linear trend changed significantly. A total of 265,813 new cancer cases and 74,981 cancer deaths are expected to occur in Korea in 2014. Further, the crude incidence rate per 100,000 of all sites combined will likely reach 524.7 and the age-standardized incidence rate, 338.5. Meanwhile, the crude mortality rate of all sites combined and age-standardized rate are projected to be 148.0 and 84.6, respectively. Given the rapid rise in prostate cancer cases, it is anticipated to be the fourth most frequently occurring cancer site in men for the first time. Cancer has become the most prominent public health concern in Korea, and as the population ages, the nation's cancer burden will continue to increase.
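
    A minimal sketch of the projection step described above: a linear trend is fit to an observed age-specific rate over calendar years and extrapolated to the target year, then multiplied by the projected age-group population to give an expected case count. The numbers are illustrative, not registry data.

      import numpy as np

      # Observed age-specific incidence rate (per 100,000) for one age group
      years = np.arange(1999, 2012)
      rates = 200 + 3.5 * (years - 1999) + np.random.randn(years.size)

      # Fit a linear trend and project it to 2014
      slope, intercept = np.polyfit(years, rates, 1)
      rate_2014 = intercept + slope * 2014

      # Expected cases = projected rate x projected age-group population
      population_2014 = 750_000                       # hypothetical population
      expected_cases = rate_2014 / 100_000 * population_2014
      print(round(rate_2014, 1), round(expected_cases))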

  17. Evaluation of a method for enhancing interaural level differences at low frequencies.

    PubMed

    Moore, Brian C J; Kolarik, Andrew; Stone, Michael A; Lee, Young-Woo

    2016-10-01

    A method (called binaural enhancement) for enhancing interaural level differences at low frequencies, based on estimates of interaural time differences, was developed and evaluated. Five conditions were compared, all using simulated hearing-aid processing: (1) Linear amplification with frequency-response shaping; (2) binaural enhancement combined with linear amplification and frequency-response shaping; (3) slow-acting four-channel amplitude compression with independent compression at the two ears (AGC4CH); (4) binaural enhancement combined with four-channel compression (BE-AGC4CH); and (5) four-channel compression but with the compression gains synchronized across ears. Ten hearing-impaired listeners were tested, and gains and compression ratios for each listener were set to match targets prescribed by the CAM2 fitting method. Stimuli were presented via headphones, using virtualization methods to simulate listening in a moderately reverberant room. The intelligibility of speech at ±60° azimuth in the presence of competing speech on the opposite side of the head at ±60° azimuth was not affected by the binaural enhancement processing. Sound localization was significantly better for condition BE-AGC4CH than for condition AGC4CH for a sentence, but not for broadband noise, lowpass noise, or lowpass amplitude-modulated noise. The results suggest that the binaural enhancement processing can improve localization for sounds with distinct envelope fluctuations.

  18. Whorled hairless nevus of the scalp, linear hyperpigmentation, and telangiectatic nevi of the lower limbs: a novel variant of the "phacomatosis complex".

    PubMed

    Castori, Marco; Scarciolla, Oronzo; Morlino, Silvia; Manente, Liborio; Biscaglia, Assunta; Fragasso, Alberto; Grammatico, Paola

    2012-02-01

    The term "phacomatosis" refers to a growing number of sporadic genetic skin disorders characterized by the combination of two or more different nevi and possibly resulting from non-allelic twin spotting. While phacomatosis pigmentovascularis (PPV) and pigmentokeratotica represent the most common patterns, some patients do not fit with either condition and are temporarily classified as unique phenotypes. We report on an 8-year-old boy with striking right hemihypoplasia, resulting in limb asymmetry and fixed dislocation of right hip. Skin on the affected side showed three distinct nevi: (i) A whorled, hairless nevus of the scalp in close proximity with (ii) epidermal hyperpigmentation following lines of Blaschko on the neck and right upper limb, and (iii) multiple telangiectatic nevi of the right lower limb and hemiscrotum. Didymosis atricho-melanotica was proposed for the combination of adjacent patchy congenital alopecia and linear hyperpigmentation, while phacomatosis atricho-pigmento-vascularis appears to define the entire cutaneous phenotype, thus implying the involvement of three neighboring loci influencing the development of distinct constituents of the skin. Given the striking asymmetry of the observed phenotype, the effect of mosaicism (either genomic or functional) for a mutation in a single gene with pleiotropic action and influenced by the lateralization pattern of early development cannot be excluded. Copyright © 2012 Wiley Periodicals, Inc.

  19. Estimation of time- and state-dependent delays and other parameters in functional differential equations

    NASA Technical Reports Server (NTRS)

    Murphy, K. A.

    1988-01-01

    A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.

  20. A New Metrics for Countries' Fitness and Products' Complexity

    NASA Astrophysics Data System (ADS)

    Tacchella, Andrea; Cristelli, Matthieu; Caldarelli, Guido; Gabrielli, Andrea; Pietronero, Luciano

    2012-10-01

    Classical economic theories prescribe specialization of countries' industrial production. Inspection of country databases of exported products shows that this is not the case: successful countries are extremely diversified, in analogy with biosystems evolving in a competitive dynamical environment. The challenge is to assess quantitatively the non-monetary competitive advantage of diversification, which represents the hidden potential for development and growth. Here we develop a new statistical approach based on coupled non-linear maps, whose fixed point defines a new metrics for country Fitness and product Complexity. We show that a non-linear iteration is necessary to bound the complexity of products by the fitness of the least competitive countries exporting them. We show that, given the paradigm of economic complexity, the correct and simplest approach to measuring the competitiveness of countries is the one presented in this work. Furthermore, our metrics appears to be economically well-grounded.
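
    A minimal sketch of one common formulation of the coupled non-linear map described above, applied to a toy binary country-product export matrix: country fitness sums the complexities of exported products, while product complexity is bounded by the fitness of the least competitive exporters through the reciprocal sum. Normalisation by the mean at each step and the toy matrix are assumptions for illustration.

      import numpy as np

      def fitness_complexity(M, n_iter=200):
          """Iterate the coupled fitness-complexity map on a binary matrix M
          (countries x products) and return the fixed-point F and Q."""
          F = np.ones(M.shape[0])
          Q = np.ones(M.shape[1])
          for _ in range(n_iter):
              F_new = M @ Q                     # fitness: sum of product complexities
              Q_new = 1.0 / (M.T @ (1.0 / F))   # complexity bounded by least fit exporters
              F = F_new / F_new.mean()          # normalise at each iteration
              Q = Q_new / Q_new.mean()
          return F, Q

      # Toy matrix: country 0 is highly diversified, country 2 exports only
      # the product that everyone exports
      M = np.array([[1, 1, 1],
                    [1, 1, 0],
                    [1, 0, 0]], dtype=float)
      F, Q = fitness_complexity(M)
      print(F.round(3), Q.round(3))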

  1. Estimation of time- and state-dependent delays and other parameters in functional differential equations

    NASA Technical Reports Server (NTRS)

    Murphy, K. A.

    1990-01-01

    A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.

  2. Validation of the Neurological Fatigue Index for stroke (NFI-Stroke)

    PubMed Central

    2012-01-01

    Background: Fatigue is a common symptom in stroke. Several self-report scales are available to measure this debilitating symptom but concern has been expressed about their construct validity. Objective: To examine the reliability and validity of a recently developed scale for multiple sclerosis (MS) fatigue, the Neurological Fatigue Index (NFI-MS), in a sample of stroke patients. Method: Six patients with stroke participated in qualitative interviews which were analysed and the themes compared for equivalence to those derived from existing data on MS fatigue. 999 questionnaire packs were sent to those with a stroke within the past four years. Data from the four subscales, and the Summary scale of the NFI-MS, were fitted to the Rasch measurement model. Results: Themes identified by stroke patients were consistent with those identified by those with MS. 282 questionnaires were returned and respondents had a mean age of 67.3 years; 62% were male, and were on average 17.2 (SD 11.4, range 2–50) months post stroke. The Physical, Cognitive and Summary scales all showed good fit to the model, were unidimensional, and free of differential item functioning by age, sex and time. The sleep scales failed to show adequate fit in their current format. Conclusion: Post-stroke fatigue appears to be represented by a combination of physical and cognitive components, confirmed by both qualitative and quantitative processes. The NFI-Stroke, comprising a Physical and Cognitive subscale, and a 10-item Summary scale, meets the strictest measurement requirements. Fit to the Rasch model allows conversion of ordinal raw scores to a linear metric. PMID:22587411

  3. Improving the Fitness of High-Dimensional Biomechanical Models via Data-Driven Stochastic Exploration

    PubMed Central

    Bustamante, Carlos D.; Valero-Cuevas, Francisco J.

    2010-01-01

    The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility of, and what is to our knowledge the first use of, an MCMC approach to improve the fitness of realistically large biomechanical models. We used a Metropolis–Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a “truth model” of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian “measurement noise.” Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima but the majority of chains converged to the true posterior distribution (confirmed using a cross-validation dataset), thus demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906
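
    A minimal sketch of a random-walk Metropolis-Hastings search, shown here for a single parameter of a toy sinusoidal model rather than the 3- to 36-dimensional anatomical parameter spaces discussed above; the proposal width, noise level, and chain length are arbitrary assumptions.

      import numpy as np

      def log_likelihood(theta, x, y, sigma=0.1):
          # Toy forward model y = sin(theta * x) with Gaussian measurement noise
          return -0.5 * np.sum((y - np.sin(theta * x)) ** 2) / sigma ** 2

      rng = np.random.default_rng(0)
      x = np.linspace(0, 2 * np.pi, 50)
      y = np.sin(1.3 * x) + 0.1 * rng.normal(size=x.size)   # simulated "truth" data

      theta, chain = 0.5, []
      ll = log_likelihood(theta, x, y)
      for _ in range(5000):
          proposal = theta + 0.05 * rng.normal()             # random-walk proposal
          ll_prop = log_likelihood(proposal, x, y)
          if np.log(rng.uniform()) < ll_prop - ll:           # Metropolis accept/reject
              theta, ll = proposal, ll_prop
          chain.append(theta)

      posterior = np.array(chain[1000:])                     # discard burn-in
      print(round(posterior.mean(), 3), round(posterior.std(), 3))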

  4. Cuckoo Search with Lévy Flights for Weighted Bayesian Energy Functional Optimization in Global-Support Curve Data Fitting

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
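
    A minimal sketch of cuckoo search with Lévy flights, applied to a toy quadratic objective standing in for the weighted Bayesian energy functional; the step scaling, search bounds, and Mantegna-style Lévy step generator are standard choices assumed here, not details taken from the paper.

      import numpy as np
      from math import gamma, sin, pi

      def levy_step(beta, size, rng):
          # Mantegna's algorithm for heavy-tailed (Levy-like) step lengths
          sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0, sigma, size)
          v = rng.normal(0, 1, size)
          return u / np.abs(v) ** (1 / beta)

      def cuckoo_search(objective, dim, n_nests=15, pa=0.25, n_iter=500, seed=0):
          rng = np.random.default_rng(seed)
          nests = rng.uniform(-5, 5, (n_nests, dim))
          fitness = np.array([objective(n) for n in nests])
          best = nests[fitness.argmin()].copy()
          for _ in range(n_iter):
              for i in range(n_nests):
                  # New candidate solution via a Levy flight around the current best
                  trial = nests[i] + 0.01 * levy_step(1.5, dim, rng) * (nests[i] - best)
                  f, j = objective(trial), rng.integers(n_nests)
                  if f < fitness[j]:              # replace a randomly chosen worse nest
                      nests[j], fitness[j] = trial, f
              # Abandon a fraction pa of the worst nests and re-seed them randomly
              worst = fitness.argsort()[-int(pa * n_nests):]
              nests[worst] = rng.uniform(-5, 5, (worst.size, dim))
              fitness[worst] = [objective(n) for n in nests[worst]]
              best = nests[fitness.argmin()].copy()
          return best, fitness.min()

      best, value = cuckoo_search(lambda p: np.sum((p - 1.2) ** 2), dim=3)
      print(best.round(3), value)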

  5. Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.

  6. Prevalence and correlates of resistance training skill competence in adolescents.

    PubMed

    Smith, Jordan J; DeMarco, Matthew; Kennedy, Sarah G; Kelson, Mark; Barnett, Lisa M; Faigenbaum, Avery D; Lubans, David R

    2018-06-01

    The aim of this study is to examine the prevalence and correlates of adolescents' resistance training (RT) skill competence. Participants were 548 adolescents (14.1 ± 0.5 years) from 16 schools in New South Wales, Australia. RT skills were assessed using the Resistance Training Skills Battery. Demographics, BMI, muscular fitness, perceived strength, RT self-efficacy, and motivation for RT were also assessed. The proportions of participants demonstrating "competence" and "near competence" in each of the six RT skills were calculated and sex differences explored. Associations between the combined RT skill score and potential correlates were examined using multi-level linear mixed models. Overall, the prevalence of competence was low (range = 3.3% to 27.9%). Females outperformed males on the squat, lunge and overhead press, whereas males performed better on the push-up (p < .05). Significant associations were seen for a number of correlates, which largely differed by sex. Muscular fitness was moderately and positively associated with RT skills among both males (β = 0.34, 95%CIs = 0.23 to 0.46) and females (β = 0.36, 95%CIs = 0.23 to 0.48). Our findings support a link between RT skills and muscular fitness. Other associations were statistically significant but small in magnitude, and should therefore be interpreted cautiously.

  7. A Method For Modeling Discontinuities In A Microwave Coaxial Transmission Line

    NASA Technical Reports Server (NTRS)

    Otoshi, Tom Y.

    1994-01-01

    A methodology for modeling discontinuities in a coaxial transmission line is presented. The method uses a non-linear least-squares fit program to optimize the fit between a theoretical model and experimental data. When the method was applied to model discontinuities in a damaged S-band antenna cable, excellent agreement was obtained.

  8. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...

  9. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...

  10. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  11. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...

  12. 40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...

  13. 40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...

  14. 40 CFR 86.123-78 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-squares best-fit straight line is 2 percent or less of the value at each data point, concentration values... percent at any point, the best-fit non-linear equation which represents the data to within 2 percent of... may be necessary to clean the analyzer frequently to prevent interference with NOX measurements (see...

  15. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...

  16. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...

  17. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...

  18. 40 CFR 86.1324-84 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent or less of the... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (d) The initial and periodic interference, system check...

  19. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  20. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...

  1. 40 CFR 90.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... chemiluminescent oxides of nitrogen analyzer as described in this section. (b) Initial and Periodic Interference...-squares best-fit straight line is two percent or less of the value at each data point, calculate... at any point, use the best-fit non-linear equation which represents the data to within two percent of...

  2. 40 CFR 91.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... nitrogen analyzer as described in this section. (b) Initial and periodic interference. Prior to its...-squares best-fit straight line is two percent or less of the value at each data point, concentration... two percent at any point, use the best-fit non-linear equation which represents the data to within two...

  3. 40 CFR 90.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... chemiluminescent oxides of nitrogen analyzer as described in this section. (b) Initial and Periodic Interference...-squares best-fit straight line is two percent or less of the value at each data point, calculate... at any point, use the best-fit non-linear equation which represents the data to within two percent of...

  4. 40 CFR 91.318 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... nitrogen analyzer as described in this section. (b) Initial and periodic interference. Prior to its...-squares best-fit straight line is two percent or less of the value at each data point, concentration... two percent at any point, use the best-fit non-linear equation which represents the data to within two...

  5. 40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...

  6. 40 CFR 86.1323-84 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calibrated, if the deviation from a least-squares best-fit straight line is within ±2 percent of the value at... exceeds these limits, then the best-fit non-linear equation which represents the data within these limits shall be used to determine concentration values. (c) The initial and periodic interference, system check...

  7. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  8. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...

  9. 40 CFR 89.322 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...

  10. Analytic solutions to modelling exponential and harmonic functions using Chebyshev polynomials: fitting frequency-domain lifetime images with photobleaching.

    PubMed

    Malachowski, George C; Clegg, Robert M; Redford, Glen I

    2007-12-01

    A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
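
    A minimal sketch of using a Chebyshev basis for non-iterative decay analysis: the signal is projected onto Chebyshev polynomials by linear least squares in a single pass, the series is differentiated analytically, and the decay rate is read off from the ratio of derivative to signal. This illustrates the general idea with NumPy's Chebyshev routines; it is not the authors' algorithm, and the signal is synthetic.

      import numpy as np
      from numpy.polynomial import chebyshev as C

      # Synthetic single-exponential decay: amplitude 2.0, rate 3.0, plus noise
      rng = np.random.default_rng(1)
      t = np.linspace(0, 2, 400)
      y = 2.0 * np.exp(-3.0 * t) + 0.01 * rng.normal(size=t.size)

      # Single pass: project the data onto a Chebyshev basis (linear least squares)
      coeffs = C.chebfit(t, y, deg=12)
      y_fit = C.chebval(t, coeffs)

      # Differentiate the series analytically and estimate the decay rate
      dy_fit = C.chebval(t, C.chebder(coeffs))
      mask = y_fit > 0.1                   # avoid the noise-dominated tail
      k_est = -np.mean(dy_fit[mask] / y_fit[mask])
      print(round(k_est, 3))               # should be close to the true rate of 3.0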

  11. Basilar-membrane responses to broadband noise modeled using linear filters with rational transfer functions.

    PubMed

    Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A

    2011-05-01

    Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz and first-order Wiener kernels were computed by cross correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters with transfer functions including zeroes located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms were different from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing, which incorporate minimum-phase behavior. © 2011 IEEE

  12. An in-situ Raman study on pristane at high pressure and ambient temperature

    NASA Astrophysics Data System (ADS)

    Wu, Jia; Ni, Zhiyong; Wang, Shixia; Zheng, Haifei

    2018-01-01

    The C-H Raman spectroscopic band (2800-3000 cm-1) of pristane was measured in a diamond anvil cell at 1.1-1532 MPa and ambient temperature. Three models were used for peak-fitting of this C-H Raman band, and the linear correlations between pressure and the corresponding peak positions were calculated as well. The results demonstrate that 1) the number of peaks chosen to fit the spectrum affects the results, which indicates that spectroscopic barometry based on a functional group of organic matter suffers significant limitations; and 2) the linear correlation between pressure and fitted peak positions from the one-peak model is superior to that from the multiple-peak models, while the standard error of the latter is much higher than that of the former. This indicates that the Raman shift of the C-H band fitted with a one-peak model, which could be treated as a spectroscopic barometer, is more realistic in mixture systems than the traditional strategy of using the Raman characteristic shift of one functional group.
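
    A minimal sketch of the one-peak strategy described above: fit a single Lorentzian to the C-H band at each pressure and regress the fitted peak position against pressure to obtain a linear barometric calibration. The synthetic spectra, the assumed pressure shift of 0.004 cm-1 per MPa, and the Lorentzian line shape are illustrative assumptions, not the pristane data.

      import numpy as np
      from scipy.optimize import curve_fit

      def lorentzian(x, amp, center, width, base):
          return amp * width ** 2 / ((x - center) ** 2 + width ** 2) + base

      rng = np.random.default_rng(2)
      shift = np.linspace(2800, 3000, 400)                   # Raman shift axis, cm-1
      pressures = np.array([1, 200, 400, 800, 1200, 1500])   # MPa, illustrative

      peak_positions = []
      for p in pressures:
          true_center = 2905 + 0.004 * p                     # assumed linear shift
          spectrum = lorentzian(shift, 1.0, true_center, 12, 0.02)
          spectrum += 0.01 * rng.normal(size=shift.size)
          popt, _ = curve_fit(lorentzian, shift, spectrum, p0=[1.0, 2905.0, 10.0, 0.0])
          peak_positions.append(popt[1])                     # fitted peak position

      # Linear correlation between pressure and fitted peak position
      slope, intercept = np.polyfit(pressures, peak_positions, 1)
      print(round(slope, 5), round(intercept, 2))            # cm-1/MPa and cm-1 at 0 MPa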

  13. Revision of laser-induced damage threshold evaluation from damage probability data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas

    2013-04-15

    In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. A widely accepted linear fitting resulted in systematic errors when estimating LIDT and its error bars. For the same purpose, a Bayesian approach was proposed. A novel concept of parametric regression based on a varying kernel and a maximum-likelihood fitting technique is introduced and studied. Such an approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars with realistic values are obtained as a natural outcome of the parametric fitting. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).

  14. Solving for source parameters using nested array data: A case study from the Canterbury, New Zealand earthquake sequence

    USGS Publications Warehouse

    Neighbors, Corrie; Cochran, Elizabeth S.; Ryan, Kenneth; Kaiser, Anna E.

    2017-01-01

    The seismic spectrum can be constructed by assuming a Brune spectral model and estimating the parameters of seismic moment (M0), corner frequency (fc), and high-frequency site attenuation (κ). Using seismic data collected during the 2010–2011 Canterbury, New Zealand, earthquake sequence, we apply the non-linear least-squares Gauss–Newton method, a deterministic downhill optimization technique, to simultaneously determine M0, fc, and κ for each event-station pair. We fit the Brune spectral acceleration model to Fourier-transformed S-wave records following application of path and site corrections to the data. For each event, we solve for a single M0 and fc, while any remaining residual kappa, κr, is allowed to differ per station record to reflect varying high-frequency falloff due to path and site attenuation. We use a parametric forward modeling method, calculating initial M0 and fc values from the local GNS New Zealand catalog Mw,GNS magnitudes and measuring an initial κr using an automated high-frequency linear regression method. Final solutions for M0, fc, and κr are iteratively computed through minimization of the residual function, and the Brune model stress drop is then calculated from the final, best-fit fc. We perform the spectral fitting routine on nested array seismic data that include the permanent GeoNet accelerometer network as well as a dense network of nearly 200 Quake Catcher Network (QCN) MEMS accelerometers, analyzing over 180 aftershocks with Mw,GNS ≥ 3.5 that occurred from 9 September 2010 to 31 July 2011. QCN stations were hosted by public volunteers and served to fill spatial gaps between existing GeoNet stations. Moment magnitudes determined using the spectral fitting procedure (Mw,SF) range from 3.5 to 5.7 and agree well with Mw,GNS, with a median difference of 0.09 and 0.17 for GeoNet and QCN records, respectively, and 0.11 when data from both networks are combined. The majority of events are calculated to have stress drops between 1.7 and 13 MPa (20th and 80th percentiles, respectively) for the combined networks. The overall median stress drop for the combined networks is 3.2 MPa, which is similar to median stress drops previously reported for the Canterbury sequence. We do not observe a correlation between stress drop and depth for this region, nor a relationship between stress drop and magnitude over the catalog considered. Lateral spatial patterns in stress drop, such as a cluster of aftershocks near the eastern extent of the Greendale fault with higher stress drops and lower stress drops for aftershocks of the 2011 Mw,GNS 6.2 Christchurch mainshock, are found to be in agreement with previous reports. As stress drop is arguably a method-dependent calculation and subject to high spatial variability, our results using the parametric Gauss–Newton algorithm strengthen conclusions that the Canterbury sequence has stress drops more similar to those found in intraplate regions, with overall higher stress drops than those typically observed in tectonically active areas.
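
    A minimal sketch of fitting a Brune-type omega-squared acceleration spectrum with a corner frequency and a high-frequency kappa decay to a synthetic spectrum. It uses scipy's general non-linear least-squares routine rather than the Gauss-Newton solver of the study, the fit is done in log amplitude for numerical balance, and all spectral values are invented; the stress drop would then follow from M0 and the cube of the fitted fc.

      import numpy as np
      from scipy.optimize import curve_fit

      def brune_accel(f, omega0, fc, kappa):
          # Omega-squared source spectrum times exp(-pi*kappa*f) attenuation
          return omega0 * (2 * np.pi * f) ** 2 / (1 + (f / fc) ** 2) * np.exp(-np.pi * kappa * f)

      rng = np.random.default_rng(3)
      f = np.logspace(-0.5, 1.5, 200)                        # ~0.3 to ~30 Hz
      obs = brune_accel(f, 1e-4, 2.5, 0.04) * np.exp(0.1 * rng.normal(size=f.size))

      # Fit in log-amplitude space so low and high frequencies are weighted evenly
      def log_model(f, log_omega0, fc, kappa):
          return np.log(brune_accel(f, np.exp(log_omega0), fc, kappa))

      popt, _ = curve_fit(log_model, f, np.log(obs), p0=[np.log(1e-5), 1.0, 0.02])
      print(round(popt[1], 2), round(popt[2], 3))            # fitted fc (Hz) and kappa (s)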

  15. PyFDAP: automated analysis of fluorescence decay after photoconversion (FDAP) experiments.

    PubMed

    Bläßle, Alexander; Müller, Patrick

    2015-03-15

    We developed the graphical user interface PyFDAP for the fitting of linear and non-linear decay functions to data from fluorescence decay after photoconversion (FDAP) experiments. PyFDAP structures and analyses large FDAP datasets and features multiple fitting and plotting options. PyFDAP was written in Python and runs on Ubuntu Linux, Mac OS X and Microsoft Windows operating systems. The software, a user guide and a test FDAP dataset are freely available for download from http://people.tuebingen.mpg.de/mueller-lab. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Linear FBG Temperature Sensor Interrogation with Fabry-Perot ITU Multi-wavelength Reference

    PubMed Central

    Park, Hyoung-Jun; Song, Minho

    2008-01-01

    The equidistantly spaced multi-passbands of a Fabry-Perot ITU filter are used as an efficient multi-wavelength reference for fiber Bragg grating sensor demodulation. To compensate for the nonlinear wavelength tuning effect in the FBG sensor demodulator, a polynomial fitting algorithm was applied to the temporal peaks of the wavelength-scanned ITU filter. The fitted wavelength values are assigned to the peak locations of the FBG sensor reflections, obtaining constant accuracy, regardless of the wavelength scan range and frequency. A linearity error of about 0.18% against a reference thermocouple thermometer was obtained with the suggested method. PMID:27873898
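
    A minimal sketch of the calibration step described above: the temporal peak positions of the scanned ITU filter are fitted against their known, equally spaced wavelengths with a polynomial to compensate the nonlinear sweep, and the resulting time-to-wavelength map is evaluated at the FBG reflection peak times. The grid spacing, peak times, and polynomial degree are illustrative assumptions.

      import numpy as np

      # Known ITU grid wavelengths (nm) and the scan times (ms) of the filter peaks
      itu_wavelengths = 1545.32 + 0.8 * np.arange(10)        # ~100 GHz spacing near 1545 nm
      peak_times = np.array([0.9, 2.1, 3.4, 4.8, 6.3, 7.9, 9.6, 11.4, 13.3, 15.3])

      # Polynomial fit compensates the nonlinear wavelength tuning of the source
      time_to_wavelength = np.poly1d(np.polyfit(peak_times, itu_wavelengths, deg=3))

      # Assign wavelengths to FBG sensor reflection peaks detected at these times
      fbg_peak_times = np.array([5.5, 10.2])
      print(time_to_wavelength(fbg_peak_times).round(3))     # Bragg wavelengths in nm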

  17. Auxiliary basis expansions for large-scale electronic structure calculations

    PubMed Central

    Jung, Yousung; Sodt, Alex; Gill, Peter M. W.; Head-Gordon, Martin

    2005-01-01

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems. PMID:15845767
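
    A crude toy sketch, not a quantum chemistry calculation, of how the choice of metric enters the fitting equations: the expansion coefficients c solve the linear system J c = b, where J holds the metric matrix elements between auxiliary functions and b the metric projection of the fitted distribution. Here the auxiliary functions are idealised as points on a line, the long-range 1/r metric is compared with a short-range attenuated erfc(omega*r)/r metric, and the printed tail coefficients give a rough impression of how quickly the fit coefficients decay far from a localised target; all numerical choices are arbitrary assumptions.

      import numpy as np
      from scipy.special import erfc

      # Auxiliary "functions" idealised as unit charges on a line (toy model only)
      positions = np.arange(40, dtype=float)
      r = np.abs(positions[:, None] - positions[None, :])
      np.fill_diagonal(r, 0.5)                      # crude regularisation of the self term

      coulomb_metric = 1.0 / r                      # long-range Coulomb metric
      attenuated_metric = erfc(0.6 * r) / r         # short-range attenuated metric

      # Toy target distribution localised on the first few centres
      b = np.exp(-0.5 * positions)

      c_coulomb = np.linalg.solve(coulomb_metric, b)
      c_attenuated = np.linalg.solve(attenuated_metric, b)

      # Magnitudes of the coefficients farthest from the target, to compare decay
      print(np.abs(c_coulomb[-5:]).round(8))
      print(np.abs(c_attenuated[-5:]).round(8))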

  18. Numerical simulation of a relaxation test designed to fit a quasi-linear viscoelastic model for temporomandibular joint discs.

    PubMed

    Commisso, Maria S; Martínez-Reina, Javier; Mayo, Juana; Domínguez, Jaime

    2013-02-01

    The main objectives of this work are: (a) to introduce an algorithm for adjusting the quasi-linear viscoelastic model to fit a material using a stress relaxation test and (b) to validate a protocol for performing such tests in temporomandibular joint discs. This algorithm is intended for fitting the Prony series coefficients and the hyperelastic constants of the quasi-linear viscoelastic model by considering that the relaxation test is performed with an initial ramp loading at a certain rate. This algorithm was validated before being applied to achieve the second objective. Generally, the complete three-dimensional formulation of the quasi-linear viscoelastic model is very complex. Therefore, it is necessary to design an experimental test to ensure a simple stress state, such as uniaxial compression to facilitate obtaining the viscoelastic properties. This work provides some recommendations about the experimental setup, which are important to follow, as an inadequate setup could produce a stress state far from uniaxial, thus, distorting the material constants determined from the experiment. The test considered is a stress relaxation test using unconfined compression performed in cylindrical specimens extracted from temporomandibular joint discs. To validate the experimental protocol, the test was numerically simulated using finite-element modelling. The disc was arbitrarily assigned a set of quasi-linear viscoelastic constants (c1) in the finite-element model. Another set of constants (c2) was obtained by fitting the results of the simulated test with the proposed algorithm. The deviation of constants c2 from constants c1 measures how far the stresses are from the uniaxial state. The effects of the following features of the experimental setup on this deviation have been analysed: (a) the friction coefficient between the compression plates and the specimen (which should be as low as possible); (b) the portion of the specimen glued to the compression plates (smaller areas glued are better); and (c) the variation in the thickness of the specimen. The specimen's faces should be parallel to ensure a uniaxial stress state. However, this is not possible in real specimens, and a criterion must be defined to accept the specimen in terms of the specimen's thickness variation and the deviation of the fitted constants arising from such a variation.

  19. Prediction of the sorption capacities and affinities of organic chemicals by XAD-7.

    PubMed

    Yang, Kun; Qi, Long; Wei, Wei; Wu, Wenhao; Lin, Daohui

    2016-01-01

    Macro-porous resins are widely used as adsorbents for the treatment of organic contaminants in wastewater and for the pre-concentration of organic solutes from water. However, the sorption mechanisms for organic contaminants on such adsorbents have not been systematically investigated so far. Therefore, in this study, the sorption capacities and affinities of 24 organic chemicals by XAD-7 were investigated and the experimentally obtained sorption isotherms were fitted to the Dubinin-Astakhov model. Linear positive correlations were observed between the sorption capacities and the solubilities (SW) of the chemicals in water or octanol and between the sorption affinities and the solvatochromic parameters of the chemicals, indicating that the sorption of various organic compounds by XAD-7 occurred by non-linear partitioning into XAD-7, rather than by adsorption on XAD-7 surfaces. Both specific interactions (i.e., hydrogen-bonding interactions) as well as nonspecific interactions were considered to be responsible for the non-linear partitioning. The correlation equations obtained in this study allow the prediction of non-linear partitioning using well-known chemical parameters, namely SW, octanol-water partition coefficients (KOW), and the hydrogen-bonding donor parameter (αm). The effect of pH on the sorption of ionizable organic compounds (IOCs) could also be predicted by combining the correlation equations with additional equations developed from the estimation of IOC dissociation rates. The prediction equations developed in this study and the proposed non-linear partition mechanism shed new light on the selective removal and pre-concentration of organic solutes from water and on the regeneration of exhausted XAD-7 using solvent extraction.

  20. Modeling of thermal degradation kinetics of the C-glucosyl xanthone mangiferin in an aqueous model solution as a function of pH and temperature and protective effect of honeybush extract matrix.

    PubMed

    Beelders, Theresa; de Beer, Dalene; Kidd, Martin; Joubert, Elizabeth

    2018-01-01

    Mangiferin, a C-glucosyl xanthone, abundant in mango and honeybush, is increasingly targeted for its bioactive properties and thus to enhance functional properties of food. The thermal degradation kinetics of mangiferin at pH 3, 4, 5, 6 and 7 were each modeled at five temperatures ranging between 60 and 140°C. First-order reaction models were fitted to the data using non-linear regression to determine the reaction rate constant at each pH-temperature combination. The reaction rate constant increased with increasing temperature and pH. Comparison of the reaction rate constants at 100°C revealed an exponential relationship between the reaction rate constant and pH. The data for each pH were also modeled with the Arrhenius equation using non-linear and linear regression to determine the activation energy and pre-exponential factor. Activation energies decreased slightly with increasing pH. Finally, a multi-linear model taking into account both temperature and pH was developed for mangiferin degradation. Sterilization (121°C for 4 min) of honeybush extracts dissolved at pH 4, 5 and 7 did not cause noticeable degradation of mangiferin, although the multi-linear model predicted 34% degradation at pH 7. The extract matrix is postulated to exert a protective effect as changes in potential precursor content could not fully explain the stability of mangiferin. Copyright © 2017 Elsevier Ltd. All rights reserved.
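
    A minimal sketch of the two-stage kinetic analysis described above: a first-order decay model is fitted by non-linear regression at each temperature to obtain a rate constant, and the rate constants are then fitted to the Arrhenius equation (here via the linear form ln k versus 1/T). The activation energy, pre-exponential factor, noise level, and sampling times are invented for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      R = 8.314                                       # J/(mol*K)

      def first_order(t, c0, k):
          return c0 * np.exp(-k * t)

      rng = np.random.default_rng(4)
      temps_K = np.array([333.15, 353.15, 373.15, 393.15, 413.15])   # 60-140 degC
      true_Ea, true_A = 90e3, 2.5e10                                  # assumed values

      rate_constants = []
      for T in temps_K:
          k_true = true_A * np.exp(-true_Ea / (R * T))
          t = np.linspace(0, 3 / k_true, 12)          # sampling times for this run
          conc = first_order(t, 1.0, k_true) * (1 + 0.02 * rng.normal(size=t.size))
          popt, _ = curve_fit(first_order, t, conc, p0=[1.0, 2 * k_true])
          rate_constants.append(popt[1])              # fitted rate constant

      # Arrhenius plot: ln k versus 1/T is linear with slope -Ea/R
      slope, intercept = np.polyfit(1.0 / temps_K, np.log(rate_constants), 1)
      print(round(-slope * R / 1000, 1), "kJ/mol")    # recovered activation energy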

  1. The relationship between health-related fitness and quality of life in postmenopausal women from Southern Taiwan

    PubMed Central

    Hsu, Wei-Hsiu; Chen, Chi-lung; Kuo, Liang Tseng; Fan, Chun-Hao; Lee, Mel S; Hsu, Robert Wen-Wei

    2014-01-01

    Background Health-related fitness has been reported to be associated with improved quality of life (QoL) in the elderly. Health-related fitness is comprised of several dimensions that could be enhanced by specific training regimens. It has remained unclear how various dimensions of health-related fitness interact with QoL in postmenopausal women. Objective The purpose of the current study was to investigate the relationship between the dimensions of health-related fitness and QoL in elderly women. Methods A cohort of 408 postmenopausal women in a rural area of Taiwan was prospectively collected. Dimensions of health-related fitness, consisting of muscular strength, balance, cardiorespiratory endurance, flexibility, muscle endurance, and agility, were assessed. QoL was determined using the Short Form Health Survey (SF-36). Differences between age groups (stratified by decades) were calculated using a one-way analysis of variance (ANOVA) and multiple comparisons using a Scheffé test. A Spearman’s correlation analysis was performed to examine differences between QoL and each dimension of fitness. Multiple linear regression with forced-entry procedure was performed to evaluate the effects of health-related fitness. A P-value of <0.05 was considered statistically significant. Results Age-related decreases in health-related fitness were shown for sit-ups, back strength, grip strength, side steps, trunk extension, and agility (P<0.05). An age-related decrease in QoL, specifically in physical functioning, role limitation due to physical problems, and physical component score, was also demonstrated (P<0.05). Multiple linear regression analyses demonstrated that back strength significantly contributed to the physical component of QoL (adjusted beta of 0.268 [P<0.05]). Conclusion Back strength was positively correlated with the physical component of QoL among the examined dimensions of health-related fitness. Health-related fitness, as well as the physical component of QoL, declined with increasing age. PMID:25258526

  2. Self-organising mixture autoregressive model for non-stationary time series modelling.

    PubMed

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  3. An M-estimator for reduced-rank system identification.

    PubMed

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T

    2017-01-15

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.

  4. An M-estimator for reduced-rank system identification

    PubMed Central

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S.; Vogelstein, Joshua T.

    2018-01-01

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models. PMID:29391659

  5. Dependency of Tearing Mode Stability on Current and Pressure Profiles in DIII-D Hybrid Discharges

    NASA Astrophysics Data System (ADS)

    Kim, K.; Park, J. M.; Murakami, M.; La Haye, R. J.; Na, Y.-S.; SNU/ORAU; ORNL; General Atomics; SNU; DIII-D Team

    2016-10-01

    Understanding the physics of the onset and evolution of tearing modes (TMs) in tokamak plasmas is important for high-β steady-state operation. Based on DIII-D steady-state hybrid experiments with accurate equilibrium reconstruction and well-measured plasma profiles, the 2/1 tearing mode can be more stable with increasing local current and pressure gradient at the rational surface and with lower pressure peaking and plasma inductance. The tearing stability index Δ', estimated from the Rutherford equation using the experimental mode growth rate, was validated against Δ' calculated by a linear eigenvalue solver (PEST3); preliminary comprehensive MHD modeling by NIMROD reproduced the TM onset reasonably well. We present a novel integrated modeling approach for predicting TM onset in experiments, combining model equilibrium reconstruction using IPS/FASTRAN, linear stability Δ' calculations using PEST3, and a fitting formula for the critical Δ' from NIMROD. Work supported in part by the US DoE under DE-AC05-06OR23100, DE-AC05-00OR22725, and DE-FC02-04ER54698.

  6. Skyshine radiation resulting from 6 MV and 10 MV photon beams from a medical accelerator.

    PubMed

    Elder, Deirdre H; Harmon, Joseph F; Borak, Thomas B

    2010-07-01

    Skyshine radiation scattered in the atmosphere above a radiation therapy accelerator facility can result in measurable dose rates at locations near the facility on the ground and at roof level. A Reuter Stokes RSS-120 pressurized ion chamber was used to measure exposure rates in the vicinity of a Varian Trilogy Linear Accelerator at the Colorado State University Veterinary Medical Center. The linear accelerator was used to deliver bremsstrahlung photons from 6 MeV and 10 MeV electron beams with several combinations of field sizes and gantry angles. An equation for modeling skyshine radiation in the vicinity of medical accelerators was published by the National Council on Radiation Protection and Measurements in 2005. However, this model did not provide a good fit to the observed dose rates at ground level or on the roof. A more accurate method of estimating skyshine may be to measure the exposure rate of the radiation exiting the roof of the facility and to scale the results using the graphs presented in this paper.

  7. Levels of naturally occurring gamma radiation measured in British homes and their prediction in particular residences.

    PubMed

    Kendall, G M; Wakeford, R; Athanson, M; Vincent, T J; Carter, E J; McColl, N P; Little, M P

    2016-03-01

    Gamma radiation from natural sources (including directly ionising cosmic rays) is an important component of background radiation. In the present paper, indoor measurements of naturally occurring gamma rays that were undertaken as part of the UK Childhood Cancer Study are summarised, and it is shown that these are broadly compatible with an earlier UK National Survey. The distribution of indoor gamma-ray dose rates in Great Britain is approximately normal with mean 96 nGy/h and standard deviation 23 nGy/h. Directly ionising cosmic rays contribute about one-third of the total. The expanded dataset allows a more detailed description than previously of indoor gamma-ray exposures and in particular their geographical variation. Various strategies for predicting indoor natural background gamma-ray dose rates were explored. In the first of these, a geostatistical model was fitted, which assumes an underlying geologically determined spatial variation, superimposed on which is a Gaussian stochastic process with Matérn correlation structure that models the observed tendency of dose rates in neighbouring houses to correlate. In the second approach, a number of dose-rate interpolation measures were first derived, based on averages over geologically or administratively defined areas or using distance-weighted averages of measurements at nearest-neighbour points. Linear regression was then used to derive an optimal linear combination of these interpolation measures. The predictive performances of the two models were compared via cross-validation, using a randomly selected 70% of the data to fit the models and the remaining 30% to test them. The mean square error (MSE) of the linear-regression model was lower than that of the Gaussian-Matérn model (MSE 378 and 411, respectively). The predictive performance of the two candidate models was also evaluated via simulation; the linear-regression (OLS) model performed significantly better than the Gaussian-Matérn model.
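
    A minimal sketch of the second (regression-based) approach described above: several dose-rate interpolation measures are combined into a single linear predictor, trained on a random 70% of the houses and scored by the mean square error on the remaining 30%. The interpolation measures, their coefficients, and the simulated dose rates below are hypothetical placeholders, not the survey data.

        # Sketch: optimal linear combination of interpolation measures, 70/30 validation.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        n = 2000
        # Hypothetical interpolation measures for each house (nGy/h):
        # geological-area mean, administrative-area mean, distance-weighted neighbour mean.
        X = rng.normal(96.0, 15.0, size=(n, 3))
        dose = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0.0, 20.0, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, dose, test_size=0.3, random_state=1)
        model = LinearRegression().fit(X_tr, y_tr)            # optimal linear combination
        mse = mean_squared_error(y_te, model.predict(X_te))   # hold-out MSE
        print(model.coef_, mse)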

  8. Modelling Schumann resonances from ELF measurements using non-linear optimization methods

    NASA Astrophysics Data System (ADS)

    Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo

    2017-04-01

    Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements are obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station located in the national park of Sierra Nevada. The first three modes, contained in the frequency band between 6 and 25 Hz, will be considered. For each time series recorded by the station, the amplitude spectrum was estimated by using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of non-linear unconstrained optimization methods applied to the estimation of the Schumann resonances will be presented. Non-linear fitting, also known as an optimization process, is the procedure followed to obtain the Schumann resonances from the natural electromagnetic noise. The optimization methods that have been analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The functions that the different methods fit to the data are three Lorentzian curves plus a straight line. Gaussian curves have also been considered. The conclusions of this study are outlined in the following paragraphs: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the Gradient method converges more slowly and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE among the parameters that define the fit function, within an interval from 1% to 5%.
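
    A hedged sketch of the fit described above: three Lorentzian peaks plus a straight line fitted to an amplitude spectrum by unconstrained non-linear least squares (SciPy's default Levenberg-Marquardt). The synthetic spectrum, mode frequencies and starting values are illustrative assumptions, not the station's data.

        # Sketch: fit three Lorentzians plus a straight line to an ELF amplitude spectrum.
        import numpy as np
        from scipy.optimize import curve_fit

        def lorentzian(f, a, f0, w):
            return a * w**2 / ((f - f0)**2 + w**2)

        def model(f, a1, f1, w1, a2, f2, w2, a3, f3, w3, m, b):
            return (lorentzian(f, a1, f1, w1) + lorentzian(f, a2, f2, w2)
                    + lorentzian(f, a3, f3, w3) + m * f + b)

        f = np.linspace(6.0, 25.0, 400)                    # band containing the first three modes
        true = model(f, 1.0, 7.8, 1.0, 0.6, 14.1, 1.3, 0.4, 20.3, 1.5, -0.005, 0.2)
        spectrum = true + np.random.default_rng(0).normal(0.0, 0.02, f.size)

        p0 = [1, 7.8, 1, 0.5, 14, 1, 0.5, 20, 1, 0, 0]     # starting guesses near nominal SR modes
        popt, pcov = curve_fit(model, f, spectrum, p0=p0)  # Levenberg-Marquardt (unconstrained default)
        print(popt[[1, 4, 7]])                             # fitted central frequencies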

  9. Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time

    NASA Technical Reports Server (NTRS)

    Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.

    1993-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
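
    The abstract only states the problem, so the following is a generic, hedged illustration of that setup rather than the paper's algorithm: observed counts are Poisson with means given by a linear combination of known component shapes, and the component fluxes are recovered by maximising the Poisson log-likelihood. The response matrix and fluxes are invented for illustration.

        # Sketch: Poisson maximum-likelihood decomposition of counts into linear components.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n_bins, n_comp = 50, 3
        A = rng.uniform(0.1, 1.0, size=(n_bins, n_comp))   # known component count-rate shapes
        flux_true = np.array([5.0, 2.0, 8.0])
        counts = rng.poisson(A @ flux_true)                # observed Poisson counts

        def neg_loglike(flux):
            lam = A @ flux
            return np.sum(lam - counts * np.log(lam))      # Poisson NLL up to a constant

        res = minimize(neg_loglike, x0=np.ones(n_comp),
                       bounds=[(1e-6, None)] * n_comp)     # keep fluxes positive
        print(res.x)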

  10. Number Games, Magnitude Representation, and Basic Number Skills in Preschoolers

    ERIC Educational Resources Information Center

    Whyte, Jemma Catherine; Bull, Rebecca

    2008-01-01

    The effect of 3 intervention board games (linear number, linear color, and nonlinear number) on young children's (mean age = 3.8 years) counting abilities, number naming, magnitude comprehension, accuracy in number-to-position estimation tasks, and best-fit numerical magnitude representations was examined. Pre- and posttest performance was…

  11. A Battery Test to Evaluate Life-Time Physical Fitness With Same Test Items.

    ERIC Educational Resources Information Center

    Meshizuka, Tetsuo

    A combination of physical fitness tests designed to be administered to a wide spectrum of the population, male and female, children and adults, is described. Three tests are included in this battery--motor fitness, physical fitness, and sports fitness. The philosophy behind this test structure is that motor fitness tests only measure and indicate…

  12. A simplified competition data analysis for radioligand specific activity determination.

    PubMed

    Venturino, A; Rivera, E S; Bergoc, R M; Caro, R A

    1990-01-01

    Non-linear regression and two-step linear fit methods were developed to determine the actual specific activity of 125I-ovine prolactin by radioreceptor self-displacement analysis. The experimental results obtained by the different methods are superposable. The non-linear regression method is considered to be the most adequate procedure to calculate the specific activity, but if its software is not available, the other described methods are also suitable.

  13. An Investigation of the Fit of Linear Regression Models to Data from an SAT[R] Validity Study. Research Report 2011-3

    ERIC Educational Resources Information Center

    Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael

    2011-01-01

    This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT[R] scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine if it is possible to improve on the linear regression model. The results…

  14. The added value of eye-tracking in diagnosing dyscalculia: a case study

    PubMed Central

    van Viersen, Sietske; Slot, Esther M.; Kroesbergen, Evelyn H.; van't Noordende, Jaccoline E.; Leseman, Paul P. M.

    2013-01-01

    The present study compared eye movements and performance of a 9-year-old girl with Developmental Dyscalculia (DD) on a series of number line tasks to those of a group of typically developing (TD) children (n = 10), in order to answer the question whether eye-tracking data from number line estimation tasks can be a useful tool to discriminate between TD children and children with a number processing deficit. Quantitative results indicated that the child with dyscalculia performed worse on all symbolic number line tasks compared to the control group, indicated by a low linear fit (R2) and a low accuracy measured by mean percent absolute error. In contrast to the control group, her magnitude representations seemed to be better represented by a logarithmic than a linear fit. Furthermore, qualitative analyses on the data of the child with dyscalculia revealed more unidentifiable fixation patterns in the processing of multi-digit numbers and more dysfunctional estimation strategy use in one third of the estimation trials as opposed to ~10% in the control group. In line with her dyscalculia diagnosis, these results confirm the difficulties with spatially representing and manipulating numerosities on a number line, resulting in inflexible and inadequate estimation or processing strategies. It can be concluded from this case study that eye-tracking data can be used to discern different number processing and estimation strategies in TD children and children with a number processing deficit. Hence, eye-tracking data in combination with number line estimation tasks might be a valuable and promising addition to current diagnostic measures. PMID:24098294

  15. Raman micro-spectroscopy analysis of human lens epithelial cells exposed to a low-dose-range of ionizing radiation.

    PubMed

    Allen, Christian Harry; Kumar, Achint; Qutob, Sami; Nyiri, Balazs; Chauhan, Vinita; Murugkar, Sangeeta

    2018-01-09

    Recent findings in populations exposed to ionizing radiation (IR) indicate dose-related lens opacification occurs at much lower doses (<2 Gy) than indicated in radiation protection guidelines. As a result, research efforts are now being directed towards identifying early predictors of lens degeneration resulting in cataractogenesis. In this study, Raman micro-spectroscopy was used to investigate the effects of varying doses of radiation, ranging from 0.01 Gy to 5 Gy, on human lens epithelial (HLE) cells which were chemically fixed 24 h post-irradiation. Raman spectra were acquired from the nucleus and cytoplasm of the HLE cells. Spectra were collected from points in a 3 × 3 grid pattern and then averaged. The raw spectra were preprocessed, and principal component analysis followed by linear discriminant analysis was used to discriminate between dose and control for 0.25, 0.5, 2, and 5 Gy. Using leave-one-out cross-validation, accuracies greater than 74% were attained for each dose/control combination. The ultra-low doses 0.01 and 0.05 Gy were included in an analysis of band intensities for Raman bands found to be significant in the linear discrimination, and an induced repair model survival curve was fit to a band-difference-ratio plot of these data, suggesting HLE cells undergo a nonlinear response to low doses of IR. A survival curve was also fit to clonogenic assay data obtained from the irradiated HLE cells, showing a similar nonlinear response.
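
    A minimal sketch of the discrimination step described above: principal component analysis followed by linear discriminant analysis, scored with leave-one-out cross-validation. The spectra below are random stand-ins for the preprocessed dose/control Raman spectra; the band shift, sample counts and number of components are assumptions.

        # Sketch: PCA + LDA classifier with leave-one-out cross-validation.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score, LeaveOneOut

        rng = np.random.default_rng(0)
        n_per_class, n_wavenumbers = 20, 300
        control = rng.normal(0.0, 1.0, size=(n_per_class, n_wavenumbers))
        dosed = rng.normal(0.0, 1.0, size=(n_per_class, n_wavenumbers))
        dosed[:, 100:110] += 0.8                            # a hypothetical dose-related band change
        X = np.vstack([control, dosed])
        y = np.array([0] * n_per_class + [1] * n_per_class)

        clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
        acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
        print(f"leave-one-out accuracy: {acc:.2f}")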

  16. The added value of eye-tracking in diagnosing dyscalculia: a case study.

    PubMed

    van Viersen, Sietske; Slot, Esther M; Kroesbergen, Evelyn H; Van't Noordende, Jaccoline E; Leseman, Paul P M

    2013-01-01

    The present study compared eye movements and performance of a 9-year-old girl with Developmental Dyscalculia (DD) on a series of number line tasks to those of a group of typically developing (TD) children (n = 10), in order to answer the question whether eye-tracking data from number line estimation tasks can be a useful tool to discriminate between TD children and children with a number processing deficit. Quantitative results indicated that the child with dyscalculia performed worse on all symbolic number line tasks compared to the control group, indicated by a low linear fit (R2) and a low accuracy measured by mean percent absolute error. In contrast to the control group, her magnitude representations seemed to be better represented by a logarithmic than a linear fit. Furthermore, qualitative analyses on the data of the child with dyscalculia revealed more unidentifiable fixation patterns in the processing of multi-digit numbers and more dysfunctional estimation strategy use in one third of the estimation trials as opposed to ~10% in the control group. In line with her dyscalculia diagnosis, these results confirm the difficulties with spatially representing and manipulating numerosities on a number line, resulting in inflexible and inadequate estimation or processing strategies. It can be concluded from this case study that eye-tracking data can be used to discern different number processing and estimation strategies in TD children and children with a number processing deficit. Hence, eye-tracking data in combination with number line estimation tasks might be a valuable and promising addition to current diagnostic measures.

  17. Raman micro-spectroscopy analysis of human lens epithelial cells exposed to a low-dose-range of ionizing radiation

    NASA Astrophysics Data System (ADS)

    Allen, Christian Harry; Kumar, Achint; Qutob, Sami; Nyiri, Balazs; Chauhan, Vinita; Murugkar, Sangeeta

    2018-01-01

    Recent findings in populations exposed to ionizing radiation (IR) indicate dose-related lens opacification occurs at much lower doses (<2 Gy) than indicated in radiation protection guidelines. As a result, research efforts are now being directed towards identifying early predictors of lens degeneration resulting in cataractogenesis. In this study, Raman micro-spectroscopy was used to investigate the effects of varying doses of radiation, ranging from 0.01 Gy to 5 Gy, on human lens epithelial (HLE) cells which were chemically fixed 24 h post-irradiation. Raman spectra were acquired from the nucleus and cytoplasm of the HLE cells. Spectra were collected from points in a 3 × 3 grid pattern and then averaged. The raw spectra were preprocessed, and principal component analysis followed by linear discriminant analysis was used to discriminate between dose and control for 0.25, 0.5, 2, and 5 Gy. Using leave-one-out cross-validation, accuracies greater than 74% were attained for each dose/control combination. The ultra-low doses 0.01 and 0.05 Gy were included in an analysis of band intensities for Raman bands found to be significant in the linear discrimination, and an induced repair model survival curve was fit to a band-difference-ratio plot of these data, suggesting HLE cells undergo a nonlinear response to low doses of IR. A survival curve was also fit to clonogenic assay data obtained from the irradiated HLE cells, showing a similar nonlinear response.

  18. Characterizing L1-norm best-fit subspaces

    NASA Astrophysics Data System (ADS)

    Brooks, J. Paul; Dulá, José H.

    2017-05-01

    Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.
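
    A hedged sketch of the hyperplane case mentioned above: the L1-norm best-fit hyperplane can be found by solving a series of linear programs, one per coordinate treated as the "dependent" variable. The function below solves one such linear program (a least-absolute-deviations fit) with scipy.optimize.linprog; looping it over all coordinate choices and keeping the smallest objective would give the hyperplane. Data and dimensions are illustrative.

        # Sketch: one LP of the series -- L1 (least-absolute-deviations) fit via linprog.
        import numpy as np
        from scipy.optimize import linprog

        def lad_fit(X, y):
            """Minimise sum_i |y_i - X_i @ beta - c| as a linear program."""
            n, p = X.shape
            # variables: beta (p), intercept c (1), residual bounds u (n)
            cost = np.concatenate([np.zeros(p + 1), np.ones(n)])
            A1 = np.hstack([X, np.ones((n, 1)), -np.eye(n)])    #  X b + c - u <=  y
            A2 = np.hstack([-X, -np.ones((n, 1)), -np.eye(n)])  # -X b - c - u <= -y
            A_ub = np.vstack([A1, A2])
            b_ub = np.concatenate([y, -y])
            bounds = [(None, None)] * (p + 1) + [(0, None)] * n
            res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            return res.x[:p], res.x[p], res.fun                 # beta, intercept, L1 error

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 2))
        y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.laplace(0.0, 0.1, 60)
        beta, c, err = lad_fit(X, y)
        print(beta, c, err)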

  19. Improving the Accuracy of Mapping Urban Vegetation Carbon Density by Combining Shadow Remove, Spectral Unmixing Analysis and Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Qie, G.; Wang, G.; Wang, M.

    2016-12-01

    Mixed pixels and shadows due to buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies, these factors are ignored, which results in underestimation of city vegetation carbon density. In this study we presented an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow remove analysis (LSRA) on remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and utilized and compared the integrated models on shadow-removed images to map vegetation carbon density. This methodology was examined in Shenzhen City of Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables statistically significantly contributing to improving the fit of the models to the data and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added into the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that higher accuracies were obtained from the integrated models compared with traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: Urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images

  20. Comparison of statistical models to estimate parasite growth rate in the induced blood stage malaria model.

    PubMed

    Wockner, Leesa F; Hoffmann, Isabell; O'Rourke, Peter; McCarthy, James S; Marquart, Louise

    2017-08-25

    The efficacy of vaccines aimed at inhibiting the growth of malaria parasites in the blood can be assessed by comparing the growth rate of parasitaemia in the blood of subjects treated with a test vaccine to that of controls. In studies using induced blood stage malaria (IBSM), a type of controlled human malaria infection, parasite growth rate has been measured using models with the intercept on the y-axis fixed to the inoculum size. A set of statistical models was evaluated to determine an optimal methodology to estimate parasite growth rate in IBSM studies. Parasite growth rates were estimated using data from 40 subjects published in three IBSM studies. Data were fitted using 12 statistical models: log-linear, and sine-wave with the period either fixed to 48 h or not fixed; these models were fitted with the intercept either fixed to the inoculum size or not fixed. All models were fitted by individual, and overall by study using a mixed effects model with a random effect for the individual. Log-linear models and sine-wave models, with the period fixed or not fixed, resulted in similar parasite growth rate estimates (within 0.05 log10 parasites per mL/day). Average parasite growth rate estimates for models fitted by individual with the intercept fixed to the inoculum size were substantially lower, by an average of 0.17 log10 parasites per mL/day (range 0.06-0.24), compared with non-fixed intercept models. Variability of parasite growth rate estimates across the three studies analysed was substantially higher (3.5 times) for fixed-intercept models compared with non-fixed intercept models. The same tendency was observed in models fitted overall by study. Modelling data by individual or overall by study had minimal effect on parasite growth estimates. The analyses presented in this report confirm that fixing the intercept to the inoculum size influences parasite growth estimates. The most appropriate statistical model to estimate the growth rate of blood-stage parasites in IBSM studies appears to be a log-linear model fitted by individual and with the intercept estimated in the log-linear regression. Future studies should use this model to estimate parasite growth rates.
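
    A minimal sketch contrasting the two intercept choices discussed above for a log-linear fit to one subject's parasitaemia time course. The sampling days, log-parasitaemia values and inoculum size are invented for illustration.

        # Sketch: log-linear growth-rate estimate with estimated vs inoculum-fixed intercept.
        import numpy as np

        days = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
        log10_parasites = np.array([2.4, 3.1, 3.7, 4.4, 5.0])   # log10 parasites per mL
        log10_inoculum = 1.3                                     # hypothetical inoculum size

        # Intercept estimated in the regression (the recommended model):
        slope_free, intercept_free = np.polyfit(days, log10_parasites, 1)

        # Intercept fixed to the inoculum size: fit only the slope through (0, inoculum).
        slope_fixed = np.sum(days * (log10_parasites - log10_inoculum)) / np.sum(days**2)

        print(f"growth rate, free intercept : {slope_free:.2f} log10 parasites/mL/day")
        print(f"growth rate, fixed intercept: {slope_fixed:.2f} log10 parasites/mL/day")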

  1. Stress analysis method for clearance-fit joints with bearing-bypass loads

    NASA Technical Reports Server (NTRS)

    Naik, R. A.; Crews, J. H., Jr.

    1989-01-01

    Within a multi-fastener joint, fastener holes may be subjected to the combined effects of bearing loads and loads that bypass the hole to be reacted elsewhere in the joint. The analysis of a joint subjected to such combined bearing and bypass loads is complicated by the usual clearance between the hole and the fastener. A simple analysis method for such clearance-fit joints subjected to bearing-bypass loading has been developed in the present study. It uses an inverse formulation with a linear elastic finite-element analysis. Conditions along the bolt-hole contact arc are specified by displacement constraint equations. The present method is simple to apply and can be implemented with most general purpose finite-element programs since it does not use complicated iterative-incremental procedures. The method was used to study the effects of bearing-bypass loading on bolt-hole contact angles and local stresses. In this study, a rigid, frictionless bolt was used with a plate having the properties of a quasi-isotropic graphite/epoxy laminate. Results showed that the contact angle as well as the peak stresses around the hole and their locations were strongly influenced by the ratio of bearing and bypass loads. For single contact, tension and compression bearing-bypass loading had opposite effects on the contact angle. For some compressive bearing-bypass loads, the hole tended to close on the fastener leading to dual contact. It was shown that dual contact reduces the stress concentration at the fastener and would, therefore, increase joint strength in compression. The results illustrate the general importance of accounting for bolt-hole clearance and contact to accurately compute local bolt-hole stresses for combined bearing and bypass loading.

  2. Participant Adherence Indicators Predict Changes in Blood Pressure, Anthropometric Measures, and Self-Reported Physical Activity in a Lifestyle Intervention: HUB City Steps

    PubMed Central

    Thomson, Jessica L.; Landry, Alicia S.; Zoellner, Jamie M.; Connell, Carol; Madson, Michael B.; Molaison, Elaine Fontenot; Yadrick, Kathy

    2014-01-01

    The objective of this secondary analysis was to evaluate the utility of several participant adherence indicators for predicting changes in clinical, anthropometric, dietary, fitness, and physical activity (PA) outcomes in a lifestyle intervention, HUB City Steps, conducted in a southern, African American cohort in 2010. HUB City Steps was a 6-month, community-engaged, multi-component, non-controlled intervention targeting hypertension risk factors. Descriptive indicators were constructed using 2 participant adherence measures, education session attendance (ESA) and weekly steps/day pedometer diary submission (PDS), separately and in combination. Analyses, based on data from 269 primarily African American adult participants, included bivariate tests of association and multivariable linear regression to determine significant relationships between 7 adherence indicators and health outcome changes, including clinical, anthropometric, dietary, fitness, and PA measures. ESA indicators were significantly correlated with 4 health outcomes: body mass index (BMI), fat mass, low-density lipoprotein (LDL), and PA ( .29≤ r ≤ .23; P<.05). PDS indicators were significantly correlated with PA (r=.27; P<.001). Combination ESA/PDS indicators were significantly correlated with 5 health outcomes: BMI, % body fat (%BF), fat mass, LDL, and PA (r= .26 to .29; P<.05). Results from the multivariate models indicated that the combination ESA/PDS indicators were the most significant predictors of changes for 5 outcomes (%BF, fat mass, LDL, diastolic blood pressure (DBP), and PA), while ESA performed best for BMI only. For DBP, a 1-unit increase in the continuous categorical ESA/PDS indicator resulted in a .3 mm Hg decrease. Implications for assessing participant adherence in community-based, multi-component lifestyle intervention research are discussed. PMID:24986913

  3. Random regression models using Legendre polynomials or linear splines for test-day milk yield of dairy Gyr (Bos indicus) cattle.

    PubMed

    Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G

    2013-01-01

    Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and Bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and the one applying third-order Legendre polynomials for both additive genetic and permanent environmental effects indicates that, in practice, the same bulls would be selected by both models. The latter, less parameterized model is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  4. Testing goodness of fit in regression: a general approach for specified alternatives.

    PubMed

    Solari, Aldo; le Cessie, Saskia; Goeman, Jelle J

    2012-12-10

    When fitting generalized linear models or the Cox proportional hazards model, it is important to have tools to test for lack of fit. Because lack of fit comes in all shapes and sizes, distinguishing among different types of lack of fit is of practical importance. We argue that an adequate diagnosis of lack of fit requires a specified alternative model. Such specification identifies the type of lack of fit the test is directed against, so that if we reject the null hypothesis, we know the direction of the departure from the model. The goodness-of-fit approach of this paper allows different types of lack of fit to be treated within a unified general framework and many existing tests to be considered as special cases. Connections with penalized likelihood and random effects are discussed, and the application of the proposed approach is illustrated with medical examples. Tailored functions for goodness-of-fit testing have been implemented in the R package globaltest. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting

    NASA Astrophysics Data System (ADS)

    Yan, Y. T.; Cai, Y.

    2006-03-01

    A singular value decomposition (SVD)-enhanced Least-Square fitting technique is discussed. By automatically identifying, ordering, and selecting the dominant SVD modes of the derivative matrix that describes the response to variations of the variables, the convergence of the Least-Square fitting is significantly enhanced. Thus the fitting speed can be fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs as well as all BPM gains and BPM cross-plane couplings through Least-Square fitting of the phase advances and the local Green's functions as well as the coupling ellipses among BPMs. The local Green's functions are specified by 4 local transfer matrix components R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
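
    A hedged sketch of the core numerical idea: solve a least-squares step using only the dominant SVD modes of the derivative (response) matrix, which stabilises and speeds up the fit when the matrix is nearly degenerate. The matrix, the degenerate direction, and the truncation level are placeholders, not the PEP-II model.

        # Sketch: truncated-SVD least-squares step keeping only the dominant modes.
        import numpy as np

        def svd_lstsq(J, dy, n_modes):
            """Least-squares solution of J dx = dy using the n_modes largest SVD modes."""
            U, s, Vt = np.linalg.svd(J, full_matrices=False)
            U, s, Vt = U[:, :n_modes], s[:n_modes], Vt[:n_modes]  # modes are ordered by singular value
            return Vt.T @ ((U.T @ dy) / s)

        rng = np.random.default_rng(0)
        J = rng.normal(size=(200, 50))                       # derivative of measurables w.r.t. variables
        J[:, -1] = J[:, 0] + 1e-8 * rng.normal(size=200)     # a nearly degenerate direction
        dx_true = rng.normal(size=50)
        dy = J @ dx_true + 1e-3 * rng.normal(size=200)

        dx = svd_lstsq(J, dy, n_modes=40)                    # truncation suppresses the ill-conditioned mode
        print(np.linalg.norm(J @ dx - dy))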

  6. PREdator: a python based GUI for data analysis, evaluation and fitting

    PubMed Central

    2014-01-01

    The analysis of a series of experimental data is an essential procedure in virtually every field of research. The information contained in the data is extracted by fitting the experimental data to a mathematical model. The type of the mathematical model (linear, exponential, logarithmic, etc.) reflects the physical laws that underlie the experimental data. Here, we aim to provide a readily accessible, user-friendly Python script for data analysis, evaluation and fitting. PREdator is presented using the example of NMR paramagnetic relaxation enhancement analysis.

  7. Analytical Energy Dispersive X-Ray Fluorescence Measurements with a Scanty Amounts of Plant and Soil Materials

    NASA Astrophysics Data System (ADS)

    Mittal, R.; Rao, P.; Kaur, P.

    2018-01-01

    Elemental evaluations in scanty powdered material have been made using energy dispersive X-ray fluorescence (EDXRF) measurements, for which formulations, along with a specific procedure for sample target preparation, have been developed. Fractional amount evaluation involves a sequence of steps: (i) collection of elemental characteristic X-ray counts in EDXRF spectra recorded with different weights of material, (ii) a search for linearity between X-ray counts and material weights, (iii) calculation of elemental fractions from the linear fit, and (iv) linear fitting of the calculated fractions against sample weights and extrapolation to zero weight. Thus, elemental fractions at zero weight are free from material self-absorption effects for incident and emitted photons. The analytical procedure, after verification with known synthetic samples of the macro-nutrients potassium and calcium, was used for wheat plant/soil samples obtained from a pot experiment.
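
    A minimal sketch of steps (ii)-(iv) above: check linearity of the characteristic X-ray counts against sample weight, convert counts to elemental fractions, then fit the fractions against weight and extrapolate to zero weight to remove self-absorption effects. The counts, weights and counts-to-fraction sensitivity factor are invented for illustration.

        # Sketch: linearity check, fraction calculation and extrapolation to zero weight.
        import numpy as np

        weights_mg = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
        counts = np.array([1510.0, 2950.0, 4310.0, 5600.0, 6820.0])   # characteristic X-ray counts

        slope, intercept = np.polyfit(weights_mg, counts, 1)           # (ii) linearity check
        fractions = counts / (weights_mg * 75.0)                       # (iii) hypothetical sensitivity factor

        a, b = np.polyfit(weights_mg, fractions, 1)                    # (iv) fractions vs weight
        fraction_at_zero_weight = b                                    # intercept = extrapolation to zero weight
        print(fraction_at_zero_weight)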

  8. Power-Law Template for IR Point Source Clustering

    NASA Technical Reports Server (NTRS)

    Addison, Graeme E.; Dunkley, Joanna; Hajian, Amir; Viero, Marco; Bond, J. Richard; Das, Sudeep; Devlin, Mark; Halpern, Mark; Hincks, Adam; Hlozek, Renee

    2011-01-01

    We perform a combined fit to angular power spectra of unresolved infrared (IR) point sources from the Planck satellite (at 217, 353, 545 and 857 GHz, over angular scales 100 < ℓ < 2200), the Balloon-borne Large-Aperture Submillimeter Telescope (BLAST; 250, 350 and 500 microns; 1000 < ℓ < 9000), and from correlating BLAST and Atacama Cosmology Telescope (ACT; 148 and 218 GHz) maps. We find that the clustered power over the range of angular scales and frequencies considered is well fit by a simple power law of the form C_ℓ ∝ ℓ^(−n) with n = 1.25 ± 0.06. While the IR sources are understood to lie at a range of redshifts, with a variety of dust properties, we find that the frequency dependence of the clustering power can be described by the square of a modified blackbody, ν^β B_ν(T_eff), with a single emissivity index β = 2.20 ± 0.07 and effective temperature T_eff = 9.7 K. Our predictions for the clustering amplitude are consistent with existing ACT and South Pole Telescope results at around 150 and 220 GHz, as is our prediction for the effective dust spectral index, which we find to be α_150-220 = 3.68 ± 0.07 between 150 and 220 GHz. Our constraints on the clustering shape and frequency dependence can be used to model the IR clustering as a contaminant in Cosmic Microwave Background anisotropy measurements. The combined Planck and BLAST data also rule out a linear bias clustering model.

  9. Remote sensing of PM2.5 from ground-based optical measurements

    NASA Astrophysics Data System (ADS)

    Li, S.; Joseph, E.; Min, Q.

    2014-12-01

    Remote sensing of particulate matter concentration with aerodynamic diameter smaller than 2.5 μm (PM2.5) by using ground-based optical measurements of aerosols is investigated based on 6 years of hourly average measurements of aerosol optical properties, PM2.5, ceilometer backscatter coefficients and meteorological factors from the Howard University Beltsville Campus facility (HUBC). The accuracy of quantitative retrieval of PM2.5 using aerosol optical depth (AOD) is limited due to changes in aerosol size distribution and vertical distribution. In this study, ceilometer backscatter coefficients are used to provide vertical information on the aerosol. It is found that the PM2.5-AOD ratio can vary widely for different aerosol vertical distributions. The ratio is also sensitive to the mode parameters of a bimodal lognormal aerosol size distribution when the geometric mean radius of the fine mode is small. Two Angstrom exponents calculated from the three wavelengths of 415, 500 and 860 nm are found to represent aerosol size distributions better than a single Angstrom exponent. A regression model is proposed to assess the impacts of different factors on the retrieval of PM2.5. Compared to a simple linear regression model, the new model combining AOD and ceilometer backscatter markedly improves the fit to PM2.5. The contribution of further introducing Angstrom coefficients is apparent. Using combined measurements of AOD, ceilometer backscatter, Angstrom coefficients and meteorological parameters in the regression model yields a correlation coefficient of 0.79 between fitted and expected PM2.5.

  10. US EPA OPTIMAL WELL LOCATOR (OWL): A SCREENING TOOL FOR EVALUATING LOCATIONS OF MONITORING WELLS

    EPA Science Inventory

    The Optimal Well Locator (OWL) uses linear regression to fit a plane to the elevation of the water table in monitoring wells in each round of sampling. The slope of the plane fit to the water table is used to predict the direction and gradient of ground water flow. Along with ...
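
    A hedged sketch of the calculation described above: fit a plane z = a·x + b·y + c to the water-table elevations at the monitoring wells by least squares, then read the flow gradient and downgradient direction from the plane's slope. The well coordinates and elevations are invented for illustration.

        # Sketch: least-squares plane fit to water-table elevations at monitoring wells.
        import numpy as np

        x = np.array([0.0, 50.0, 100.0, 30.0, 80.0])     # well easting (m)
        y = np.array([0.0, 20.0, 10.0, 90.0, 70.0])      # well northing (m)
        z = np.array([10.2, 10.0, 9.8, 10.1, 9.9])       # water-table elevation (m)

        A = np.column_stack([x, y, np.ones_like(x)])
        (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

        gradient = np.hypot(a, b)                        # magnitude of the water-table slope
        direction = np.degrees(np.arctan2(-b, -a))       # downgradient (flow) direction from the x-axis
        print(gradient, direction)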

  11. Deriving the Regression Equation without Using Calculus

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    2004-01-01

    Probably the one "new" mathematical topic that is most responsible for modernizing courses in college algebra and precalculus over the last few years is the idea of fitting a function to a set of data in the sense of a least squares fit. Whether it be simple linear regression or nonlinear regression, this topic opens the door to applying the…

  12. 40 CFR 89.321 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...

  13. 40 CFR 89.321 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...

  14. 40 CFR 89.321 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...

  15. 40 CFR 89.321 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3... factor for that range. If the deviation exceeds these limits, the best-fit non-linear equation which... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be...

  16. A microcomputer program for analysis of nucleic acid hybridization data

    PubMed Central

    Green, S.; Field, J.K.; Green, C.D.; Beynon, R.J.

    1982-01-01

    The study of nucleic acid hybridization is facilitated by computer mediated fitting of theoretical models to experimental data. This paper describes a non-linear curve fitting program, using the `Patternsearch' algorithm, written in BASIC for the Apple II microcomputer. The advantages and disadvantages of using a microcomputer for local data processing are discussed. PMID:7071017

  17. Exponential Correlation of IQ and the Wealth of Nations

    ERIC Educational Resources Information Center

    Dickerson, Richard E.

    2006-01-01

    Plots of mean IQ and per capita real Gross Domestic Product for groups of 81 and 185 nations, as collected by Lynn and Vanhanen, are best fitted by an exponential function of the form GDP = a · 10^(b·IQ), where a and b are empirical constants. Exponential fitting yields markedly higher correlation coefficients than either linear or…

  18. Age, Sex, and Body Composition as Predictors of Children's Performance on Basic Motor Abilities and Health-Related Fitness Items.

    ERIC Educational Resources Information Center

    Pissanos, Becky W.; And Others

    1983-01-01

    Step-wise linear regressions were used to relate children's age, sex, and body composition to performance on basic motor abilities including balance, speed, agility, power, coordination, and reaction time, and to health-related fitness items including flexibility, muscle strength and endurance and cardiovascular functions. Eighty subjects were in…

  19. "Getting Fit Basically Just Means, Like, Nonfat": Children's Lessons in Fitness and Fatness

    ERIC Educational Resources Information Center

    Powell, Darren; Fitzpatrick, Katie

    2015-01-01

    Current concerns about a childhood obesity crisis and children's physical activity levels have combined to justify fitness lessons as a physical education practice in New Zealand primary (elementary) schools. Researchers focused on children's understandings of fitness lessons argue that they construct fitness as a quest for an "ideal"…

  20. Calibrating the Decline Rate - Peak Luminosity Relation for Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Rust, Bert W.; Pruzhinskaya, Maria V.; Thijsse, Barend J.

    2015-08-01

    The correlation between peak luminosity and rate of decline in luminosity for Type I supernovae was first studied by B. W. Rust [Ph.D. thesis, Univ. of Illinois (1974) ORNL-4953] and Yu. P. Pskovskii [Sov. Astron., 21 (1977) 675] in the 1970s. Their work was little-noted until Phillips rediscovered the correlation in 1993 [ApJ, 413 (1993) L105] and attempted to derive a calibration relation using a difference quotient approximation Δm15(B) to the decline rate after peak luminosity Mmax(B). Numerical differentiation of data containing measuring errors is a notoriously unstable calculation, but Δm15(B) remains the parameter of choice for most calibration methods developed since 1993. To succeed, it should be computed from good functional fits to the lightcurves, but most workers never exhibit their fits. In the few instances where they have, the fits are not very good. Some of the 9 supernovae in the Phillips study required extinction corrections in their estimates of Mmax(B), and so were not appropriate for establishing a calibration relation. Although the relative uncertainties in his Δm15(B) estimates were comparable to those in his Mmax(B) estimates, he nevertheless used simple linear regression of the latter on the former, rather than major-axis regression (total least squares) which would have been more appropriate. Here we determine some new calibration relations using a sample of nearby "pure" supernovae suggested by M. V. Pruzhinskaya [Astron. Lett., 37 (2011) 663]. Their parent galaxies are all in the NED collection, with good distance estimates obtained by several different methods. We fit each lightcurve with an optimal regression spline obtained by B. J. Thijsse's spline2 [Comp. in Sci. & Eng., 10 (2008) 49]. The fits, which explain more than 99% of the variance in each case, are better than anything heretofore obtained by stretching "template" lightcurves or fitting combinations of standard lightcurves. We use the fits to compute estimates of Δm15(B) and some other calibration parameters suggested by Pskovskii [Sov. Astron., 28 (1984) 858] and compare their utility for cosmological testing.

  1. SU-E-I-07: Response Characteristics and Signal Conversion Modeling of KV Flat-Panel Detector in Cone Beam CT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yu; Cao, Ruifen; Pei, Xi

    2015-06-15

    Purpose: The flat-panel detector response characteristics are investigated to optimize the scanning parameters, balancing image quality against radiation dose. A signal conversion model is also established to predict tumor shape and physical thickness changes. Methods: With the ELEKTA XVI system, planar images of a 10 cm water phantom were obtained under different image acquisition conditions, including tube voltage, electric current, exposure time and number of frames. The averaged responses of a square area in the center were analyzed using Origin 8.0. The response characteristics for each scanning parameter were described by different fitting types. The transmission measured for 10 cm of water was compared to Monte Carlo simulation. Using the quadratic calibration method, a series of variable-thickness water phantom images was acquired to derive the signal conversion model. A 20 cm wedge water phantom with 2 cm step thickness was used to verify the model. Finally, the stability and reproducibility of the model were explored over a four-week period. Results: The gray values at the image center all decreased with increasing values of the different image acquisition parameter presets. The fitting types adopted were linear, quadratic polynomial, Gaussian and logarithmic fitting, with R-squared values of 0.992, 0.995, 0.997 and 0.996, respectively. For the 10 cm water phantom, the measured transmission showed better uniformity than the Monte Carlo simulation. The wedge phantom experiment showed that the radiological thickness change prediction error was in the range of (-4 mm, 5 mm). The signal conversion model remained consistent over a period of four weeks. Conclusion: The flat-panel response decreases with increasing values of the different scanning parameters. The preferred scanning parameter combination was 100 kV, 10 mA, 10 ms, 15 frames. It is suggested that the signal conversion model could effectively be used for tumor shape change and radiological thickness prediction. Supported by National Natural Science Foundation of China (81101132, 11305203) and Natural Science Foundation of Anhui Province (11040606Q55, 1308085QH138)

  2. Ambient temperature and coronary heart disease mortality in Beijing, China: a time series study

    PubMed Central

    2012-01-01

    Background Many studies have examined the association between ambient temperature and mortality. However, less evidence is available on the temperature effects on coronary heart disease (CHD) mortality, especially in China. In this study, we examined the relationship between ambient temperature and CHD mortality in Beijing, China during 2000 to 2011. In addition, we compared time series and time-stratified case-crossover models for the non-linear effects of temperature. Methods We examined the effects of temperature on CHD mortality using both time series and time-stratified case-crossover models. We also assessed the effects of temperature on CHD mortality by subgroups: gender (female and male) and age (age ≥ 65 and age < 65). We used a distributed lag non-linear model to examine the non-linear effects of temperature on CHD mortality up to 15 lag days. We used the Akaike information criterion to assess the model fit for the two designs. Results The time series models had a better model fit than the time-stratified case-crossover models. Both designs showed that the relationships between temperature and group-specific CHD mortality were non-linear. Extreme cold and hot temperatures significantly increased the risk of CHD mortality. Hot effects were acute and short-term, while cold effects were delayed by two days and lasted for five days. Older people and women were more sensitive to extreme cold and hot temperatures than younger people and men. Conclusions This study suggests that time series models performed better than time-stratified case-crossover models according to model fit, even though they produced similar non-linear effects of temperature on CHD mortality. In addition, our findings indicate that extreme cold and hot temperatures increase the risk of CHD mortality in Beijing, China, particularly for women and older people. PMID:22909034

  3. Hearing aid fitting for visual and hearing impaired patients with Usher syndrome type IIa.

    PubMed

    Hartel, B P; Agterberg, M J H; Snik, A F; Kunst, H P M; van Opstal, A J; Bosman, A J; Pennings, R J E

    2017-08-01

    Usher syndrome is the leading cause of hereditary deaf-blindness. Most patients with Usher syndrome type IIa start using hearing aids from a young age. A serious complaint concerns interference between sound localisation abilities and the adaptive sound processing (compression) present in today's hearing aids. The aim of this study was to investigate the effect of advanced signal processing on binaural hearing, including sound localisation. In this prospective study, patients were fitted with hearing aids offering both a nonlinear (compression) and a linear amplification program. Data logging was used to objectively evaluate the use of either program. Performance was evaluated with a speech-in-noise test, a sound localisation test and two questionnaires focussing on self-reported benefit. Data logging confirmed that the reported use of hearing aids was high. The linear program was used significantly more often (average use: 77%) than the nonlinear program (average use: 17%). The results for speech intelligibility in noise and sound localisation did not show a significant difference between the types of amplification. However, the self-reported outcomes showed higher scores on 'ease of communication' and overall benefit, and significantly lower scores on disability for the new hearing aids when compared to their previous hearing aids with compression amplification. Patients with Usher syndrome type IIa prefer linear amplification over nonlinear amplification when fitted with novel hearing aids. Apart from a significantly higher logged use, no difference in speech in noise and sound localisation was observed between linear and nonlinear amplification with the currently used tests. Further research is needed to evaluate the reasons behind the preference for the linear settings. © 2016 The Authors. Clinical Otolaryngology Published by John Wiley & Sons Ltd.

  4. Linear Self-Referencing Techniques for Short-Optical-Pulse Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorrer, C.; Kang, I.

    2008-04-04

    Linear self-referencing techniques for the characterization of the electric field of short optical pulses are presented. The theoretical and practical advantages of these techniques are developed. Experimental implementations are described, and their performance is compared to the performance of their nonlinear counterparts. Linear techniques demonstrate unprecedented sensitivity and are a perfect fit in many domains where the precise, accurate measurement of the electric field of an optical pulse is required.

  5. Analysis of a Hybrid Wing Body Center Section Test Article

    NASA Technical Reports Server (NTRS)

    Wu, Hsi-Yung T.; Shaw, Peter; Przekop, Adam

    2013-01-01

    The hybrid wing body center section test article is an all-composite structure made of crown, floor, keel, bulkhead, and rib panels utilizing the Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) design concept. The primary goal of this test article is to prove that PRSEUS components are capable of carrying combined loads that are representative of a hybrid wing body pressure cabin design regime. This paper summarizes the analytical approach, analysis results, and failure predictions of the test article. A global finite element model of composite panels, metallic fittings, mechanical fasteners, and the Combined Loads Test System (COLTS) test fixture was used to conduct linear structural strength and stability analyses to validate the specimen under the most critical combination of bending and pressure loading conditions found in the hybrid wing body pressure cabin. Local detail analyses were also performed at locations with high stress concentrations, at Tee-cap noodle interfaces with surrounding laminates, and at fastener locations with high bearing/bypass loads. Failure predictions for different composite and metallic failure modes were made, and nonlinear analyses were also performed to study the structural response of the test article under combined bending and pressure loading. This large-scale specimen test will be conducted at the COLTS facility at the NASA Langley Research Center.

  6. Long-Term Effects of Changes in Cardiorespiratory Fitness and Body Mass Index on All-Cause and Cardiovascular Disease Mortality in Men: The Aerobics Center Longitudinal Study

    PubMed Central

    Lee, Duck-chul; Sui, Xuemei; Artero, Enrique G.; Lee, I-Min; Church, Timothy S.; McAuley, Paul A.; Stanford, Fatima C.; Kohl, Harold W.; Blair, Steven N.

    2011-01-01

    Background The combined associations of changes in cardiorespiratory fitness and body mass index (BMI) with mortality remain controversial and uncertain. Methods and Results We examined the independent and combined associations of changes in fitness and BMI with all-cause and cardiovascular disease (CVD) mortality in 14 345 men (mean age 44 years) with at least two medical examinations. Fitness, in metabolic equivalents (METs), was estimated from a maximal treadmill test. BMI was calculated using measured weight and height. Changes in fitness and BMI between the baseline and last examinations over 6.3 years were classified into loss, stable, or gain groups. During 11.4 years of follow-up after the last examination, 914 all-cause and 300 CVD deaths occurred. The hazard ratios (95% confidence intervals) of all-cause and CVD mortality were 0.70 (0.59 to 0.83) and 0.73 (0.54 to 0.98) for stable fitness, and 0.61 (0.51 to 0.73) and 0.58 (0.42 to 0.80) for fitness gain, respectively, compared with fitness loss in multivariable analyses including BMI change. Every 1-MET improvement was associated with 15% and 19% lower risk of all-cause and CVD mortality, respectively. BMI change was not associated with all-cause or CVD mortality after adjusting for possible confounders and fitness change. In the combined analyses, men who lost fitness had higher all-cause and CVD mortality risks regardless of BMI change. Conclusions Maintaining or improving fitness is associated with a lower risk of all-cause and CVD mortality in men. Preventing age-associated fitness loss is important for longevity regardless of BMI change. PMID:22144631

  7. Magnetic properties of Zn0.9(Mn0.05,Ni0.05)O nanoparticle: Experimental and theoretical investigation

    NASA Astrophysics Data System (ADS)

    Mounkachi, O.; Lakhal, M.; Labrim, H.; Hamedoun, M.; Benyoussef, A.; El Kenz, A.; Loulidi, M.; Bhihi, M.

    2012-06-01

    The crystalline and magnetic properties of 5% Mn and 5% Ni co-doped nanocrystalline ZnO particles, obtained by the co-precipitation method, are investigated. X-ray diffraction data revealed that Zn0.90Mn0.05Ni0.05O crystallizes in the monophasic wurtzite structure. DC magnetization measurements showed that the samples are paramagnetic at room temperature. However, a large increase in the magnetization is observed below 50 K. This behavior, along with the negative value of the Weiss constant obtained from the linear fit of the magnetic susceptibility data below room temperature, indicates ferrimagnetic behavior. The ferrimagnetic properties observed at low temperature are explained and confirmed by ab-initio calculations using the Korringa-Kohn-Rostoker method combined with the coherent potential approximation.
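
    A hedged sketch of how a Weiss constant is extracted from susceptibility data as described above: fit the paramagnetic (high-temperature) part of 1/χ versus T with a straight line; the temperature-axis intercept gives the Weiss constant. The Curie constant, temperature range and noise level are illustrative values, not the measured data.

        # Sketch: Curie-Weiss linear fit of inverse susceptibility versus temperature.
        import numpy as np

        T = np.linspace(100.0, 300.0, 40)                 # K, paramagnetic regime
        theta_true, C_true = -30.0, 0.05                  # a negative Weiss constant (ferrimagnetic-like)
        chi = C_true / (T - theta_true) * (1 + 0.01 * np.random.default_rng(0).normal(size=T.size))

        slope, intercept = np.polyfit(T, 1.0 / chi, 1)    # 1/chi = (T - theta)/C
        theta = -intercept / slope                        # Weiss constant from the linear fit
        print(theta)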

  8. A Simple Simulation Technique for Nonnormal Data with Prespecified Skewness, Kurtosis, and Covariance Matrix.

    PubMed

    Foldnes, Njål; Olsson, Ulf Henning

    2016-01-01

    We present and investigate a simple way to generate nonnormal data using linear combinations of independent generator (IG) variables. The simulated data have prespecified univariate skewness and kurtosis and a given covariance matrix. In contrast to the widely used Vale-Maurelli (VM) transform, the obtained data are shown to have a non-Gaussian copula. We analytically obtain asymptotic robustness conditions for the IG distribution. We show empirically that popular test statistics in covariance analysis tend to reject true models more often under the IG transform than under the VM transform. This implies that overly optimistic evaluations of estimators and fit statistics in covariance structure analysis may be tempered by including the IG transform for nonnormal data generation. We provide an implementation of the IG transform in the R environment.

  9. Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method

    ERIC Educational Resources Information Center

    Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev

    2018-01-01

    The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…

  10. Orthogonal Regression: A Teaching Perspective

    ERIC Educational Resources Information Center

    Carr, James R.

    2012-01-01

    A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
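
    For illustration, a minimal sketch of the major-axis (orthogonal regression) slope alongside the reduced major axis slope; the toy data, noise levels, and seed are assumptions, not taken from the article:

    ```python
    import numpy as np

    # Hypothetical paired measurements with comparable errors in both variables.
    rng = np.random.default_rng(0)
    x_true = np.linspace(0.0, 10.0, 80)
    x = x_true + rng.normal(0, 0.5, x_true.size)
    y = 1.8 * x_true + 3.0 + rng.normal(0, 0.5, x_true.size)

    # Major axis (orthogonal regression): the line through the centroid along the
    # principal eigenvector of the sample covariance matrix, i.e. the line that
    # minimises the summed squared orthogonal distances of the points.
    cov = np.cov(x, y)
    eigvals, eigvecs = np.linalg.eigh(cov)
    vx, vy = eigvecs[:, np.argmax(eigvals)]
    ma_slope = vy / vx
    ma_intercept = y.mean() - ma_slope * x.mean()

    # Reduced major axis slope, for comparison: sign(r) * sy / sx.
    rma_slope = np.sign(np.corrcoef(x, y)[0, 1]) * np.std(y, ddof=1) / np.std(x, ddof=1)
    print(ma_slope, ma_intercept, rma_slope)
    ```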

  11. Dose Response for Chromosome Aberrations in Human Lymphocytes and Fibroblasts after Exposure to Very Low Doses of High LET Radiation

    NASA Technical Reports Server (NTRS)

    Hada, M.; George, Kerry; Cucinotta, Francis A.

    2011-01-01

    The relationship between biological effects and low doses of absorbed radiation is still uncertain, especially for high LET radiation exposure. Estimates of risks from low doses and low dose rates are often extrapolated using data from Japanese atomic bomb survivors with either linear or linear-quadratic model fits. In this study, chromosome aberrations were measured in human peripheral blood lymphocytes and normal skin fibroblast cells after exposure to very low doses (1-20 cGy) of 170 MeV/u Si-28 ions or 600 MeV/u Fe-56 ions. Chromosomes were analyzed using the whole chromosome fluorescence in situ hybridization (FISH) technique during the first cell division after irradiation, and chromosome aberrations were identified as either simple exchanges (translocations and dicentrics) or complex exchanges (involving greater than 2 breaks in 2 or more chromosomes). The curves for doses above 10 cGy were fitted with linear or linear-quadratic functions. For Si-28 ions no dose response was observed in the 2-10 cGy dose range, suggesting a non-targeted effect in this range.

  12. Using integrated models to minimize environmentally induced wavefront error in optomechanical design and analysis

    NASA Astrophysics Data System (ADS)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate design goal of an optical system subjected to dynamic loads is to minimize system level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.

  13. Method for Making Measurements of the Post-Combustion Residence Time in a Gas Turbine Engine

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey H (Inventor)

    2015-01-01

    A system and method of measuring a residence time in a gas-turbine engine is provided, whereby the method includes placing pressure sensors at a combustor entrance and at a turbine exit of the gas-turbine engine and measuring a combustor pressure at the combustor entrance and a turbine exit pressure at the turbine exit. The method further includes computing cross-spectrum functions between a combustor pressure sensor signal from the measured combustor pressure and a turbine exit pressure sensor signal from the measured turbine exit pressure, applying a linear curve fit to the cross-spectrum functions, and computing a post-combustion residence time from the linear curve fit.
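
    A minimal sketch, in Python, of the cross-spectrum and linear-phase-fit idea: for a pure time delay, the cross-spectral phase between the two pressure signals is a linear function of frequency whose slope gives the delay. The sample rate, delay, frequency band, and synthetic signals are assumptions for the demo, not values from the patent:

    ```python
    import numpy as np
    from scipy import signal

    # Synthetic example: a broadband "combustor" signal and a delayed copy at the "turbine exit".
    fs = 10_000.0                    # sample rate, Hz (assumed)
    tau_true = 3.0e-3                # 3 ms post-combustion residence time (assumed)
    rng = np.random.default_rng(0)
    n = 2 ** 16
    x = rng.standard_normal(n)
    delay = int(round(tau_true * fs))
    y = np.roll(x, delay) + 0.1 * rng.standard_normal(n)   # delayed + measurement noise

    # Cross-spectrum between the two pressure sensor signals.
    f, Pxy = signal.csd(x, y, fs=fs, nperseg=4096)

    # Linear fit of the unwrapped cross-spectral phase vs. frequency over a low-frequency band.
    band = (f > 10) & (f < 500)
    phase = np.unwrap(np.angle(Pxy))
    slope, intercept = np.polyfit(f[band], phase[band], 1)

    # For y(t) = x(t - tau), the phase is -2*pi*f*tau, so tau = -slope / (2*pi).
    tau_est = -slope / (2 * np.pi)
    print(f"estimated residence time: {tau_est * 1e3:.2f} ms (true {tau_true * 1e3:.2f} ms)")
    ```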

  14. A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Shengzhi; Ming, Bo; Huang, Qiang

    Accurate prediction of NDVI (Normalized Difference Vegetation Index) is critically important because it helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR has a poor performance in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results through combining the strengths while reducing the weaknesses of individual models; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.
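
    A sketch of the entropy-weighting step under one common formulation (an assumption here; the paper's exact normalisation may differ, and the calibration errors and individual forecasts below are synthetic placeholders):

    ```python
    import numpy as np

    def entropy_weights(errors):
        """Entropy-weight coefficients for combining forecasts.

        errors: (n_samples, n_models) array of absolute calibration errors.
        In this variant, a model whose errors are more evenly spread across samples
        (higher entropy) carries less information and receives a smaller weight.
        """
        p = errors / errors.sum(axis=0, keepdims=True)                  # column-normalise
        k = 1.0 / np.log(errors.shape[0])
        e = -k * np.sum(np.where(p > 0, p * np.log(p), 0.0), axis=0)    # entropy per model
        d = 1.0 - e                                                     # degree of diversification
        return d / d.sum()

    # Toy usage with three hypothetical individual NDVI forecasts of different quality.
    rng = np.random.default_rng(1)
    truth = rng.uniform(0.2, 0.8, size=50)
    preds = np.column_stack([truth + rng.normal(0, s, 50) for s in (0.02, 0.05, 0.10)])
    w = entropy_weights(np.abs(preds - truth[:, None]))
    combined = preds @ w
    print("weights:", np.round(w, 3))
    ```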

  15. Water-quality trend analysis and sampling design for the Devils Lake Basin, North Dakota, January 1965 through September 2003

    USGS Publications Warehouse

    Ryberg, Karen R.; Vecchia, Aldo V.

    2006-01-01

    This report presents the results of a study conducted by the U.S. Geological Survey, in cooperation with the North Dakota State Water Commission, the Devils Lake Basin Joint Water Resource Board, and the Red River Joint Water Resource District, to analyze historical water-quality trends in three dissolved major ions, three nutrients, and one dissolved trace element for eight stations in the Devils Lake Basin in North Dakota and to develop an efficient sampling design to monitor the future trends. A multiple-regression model was used to detect and remove streamflow-related variability in constituent concentrations. To separate the natural variability in concentration as a result of variability in streamflow from the variability in concentration as a result of other factors, the base-10 logarithm of daily streamflow was divided into four components-a 5-year streamflow anomaly, an annual streamflow anomaly, a seasonal streamflow anomaly, and a daily streamflow anomaly. The constituent concentrations then were adjusted for streamflow-related variability by removing the 5-year, annual, seasonal, and daily variability. Constituents used for the water-quality trend analysis were evaluated for a step trend to examine the effect of Channel A on water quality in the basin and a linear trend to detect gradual changes with time from January 1980 through September 2003. The fitted upward linear trends for dissolved calcium concentrations during 1980-2003 for two stations were significant. The fitted step trends for dissolved sulfate concentrations for three stations were positive and similar in magnitude. Of the three upward trends, one was significant. The fitted step trends for dissolved chloride concentrations were positive but insignificant. The fitted linear trends for the upstream stations were small and insignificant, but three of the downward trends that occurred during 1980-2003 for the remaining stations were significant. The fitted upward linear trends for dissolved nitrite plus nitrate as nitrogen concentrations during 1987-2003 for two stations were significant. However, concentrations during recent years appear to be lower than those for the 1970s and early 1980s but higher than those for the late 1980s and early 1990s. The fitted downward linear trend for dissolved ammonia concentrations for one station was significant. The fitted linear trends for total phosphorus concentrations for two stations were significant. Upward trends for total phosphorus concentrations occurred from the late 1980s to 2003 for most stations, but a small and insignificant downward trend occurred for one station. Continued monitoring will be needed to determine if the recent trend toward higher dissolved nitrite plus nitrate as nitrogen and total phosphorus concentrations continues in the future. For continued monitoring of water-quality trends in the upper Devils Lake Basin, an efficient sampling design consists of five major-ion, nutrient, and trace-element samples per year at three existing stream stations and at three existing lake stations. This sampling design requires the collection of 15 stream samples and 15 lake samples per year rather than 16 stream samples and 20 lake samples per year as in the 1992-2003 program. Thus, the design would result in a program that is less costly and more efficient than the 1992-2003 program but that still would provide the data needed to monitor water-quality trends in the Devils Lake Basin.
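
    A sketch of one way to build the nested streamflow anomalies from moving averages of log streamflow; the window lengths and the synthetic flow series are assumptions for illustration, and the report's exact anomaly definitions may differ:

    ```python
    import numpy as np
    import pandas as pd

    # Hypothetical daily streamflow series (m^3/s); in practice this comes from the gage record.
    idx = pd.date_range("1965-01-01", "2003-09-30", freq="D")
    rng = np.random.default_rng(2)
    flow = pd.Series(np.exp(rng.normal(2.0, 0.5, len(idx))), index=idx)

    logq = np.log10(flow)
    mean = logq.mean()

    # Nested moving averages; the window lengths here are illustrative assumptions.
    ma5y = logq.rolling(5 * 365, center=True, min_periods=1).mean()
    ma1y = logq.rolling(365, center=True, min_periods=1).mean()
    ma30 = logq.rolling(30, center=True, min_periods=1).mean()

    anom_5yr = ma5y - mean           # 5-year streamflow anomaly
    anom_annual = ma1y - ma5y        # annual streamflow anomaly
    anom_seasonal = ma30 - ma1y      # seasonal streamflow anomaly
    anom_daily = logq - ma30         # daily streamflow anomaly

    # The four components plus the long-term mean reconstruct log10(flow) exactly.
    assert np.allclose(mean + anom_5yr + anom_annual + anom_seasonal + anom_daily, logq)
    ```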

  16. The non-linear power spectrum of the Lyman alpha forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arinyo-i-Prats, Andreu; Miralda-Escudé, Jordi; Viel, Matteo

    2015-12-01

    The Lyman alpha forest power spectrum has been measured on large scales by the BOSS survey in SDSS-III at z ∼ 2.3, has been shown to agree well with linear theory predictions, and has provided the first measurement of Baryon Acoustic Oscillations at this redshift. However, the power at small scales, affected by non-linearities, has not been well examined so far. We present results from a variety of hydrodynamic simulations to predict the redshift space non-linear power spectrum of the Lyα transmission for several models, testing the dependence on resolution and box size. A new fitting formula is introduced to facilitate the comparison of our simulation results with observations and other simulations. The non-linear power spectrum has a generic shape determined by a transition scale from linear to non-linear anisotropy, and a Jeans scale below which the power drops rapidly. In addition, we predict the two linear bias factors of the Lyα forest and provide a better physical interpretation of their values and redshift evolution. The dependence of these bias factors and the non-linear power on the amplitude and slope of the primordial fluctuations power spectrum, the temperature-density relation of the intergalactic medium, and the mean Lyα transmission, as well as the redshift evolution, is investigated and discussed in detail. A preliminary comparison to the observations shows that the predicted redshift distortion parameter is in good agreement with the recent determination of Blomqvist et al., but the density bias factor is lower than observed. We make all our results publicly available in the form of tables of the non-linear power spectrum that is directly obtained from all our simulations, and parameters of our fitting formula.

  17. Hyper-Fit: Fitting Linear Models to Multidimensional Data with Multivariate Gaussian Uncertainties

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Obreschkow, D.

    2015-09-01

    Astronomical data is often uncertain with errors that are heteroscedastic (different for each data point) and covariant between different dimensions. Assuming that a set of D-dimensional data points can be described by a (D - 1)-dimensional plane with intrinsic scatter, we derive the general likelihood function to be maximised to recover the best fitting model. Alongside the mathematical description, we also release the hyper-fit package for the R statistical language (http://github.com/asgr/hyper.fit) and a user-friendly web interface for online fitting (http://hyperfit.icrar.org). The hyper-fit package offers access to a large number of fitting routines, includes visualisation tools, and is fully documented in an extensive user manual. Most of the hyper-fit functionality is accessible via the web interface. In this paper, we include applications to toy examples and to real astronomical data from the literature: the mass-size, Tully-Fisher, Fundamental Plane, and mass-spin-morphology relations. In most cases, the hyper-fit solutions are in good agreement with published values, but uncover more information regarding the fitted model.

  18. Real-Time Exponential Curve Fits Using Discrete Calculus

    NASA Technical Reports Server (NTRS)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = Ae(exp Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax(exp B) + C and the general geometric growth equation y = Ak(exp Bt) + C.
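
    A short non-iterative variant of this idea, shown as a hedged sketch rather than the record's exact algorithm: for uniformly sampled data, the successive differences are linear in y itself, y[i+1] - y[i] = (exp(B*dt) - 1)*(y[i] - C), so B and C follow from one linear fit and A from a final linear least-squares step. The parameters and noise level below are assumptions for the demo:

    ```python
    import numpy as np

    # Synthetic data from y = A*exp(B*t) + C with a little noise.
    A_true, B_true, C_true = 2.0, -1.5, 0.5
    t = np.linspace(0.0, 3.0, 200)                 # uniformly sampled, step dt
    rng = np.random.default_rng(3)
    y = A_true * np.exp(B_true * t) + C_true + rng.normal(0, 0.002, t.size)
    dt = t[1] - t[0]

    # Discrete-calculus step: one linear fit of the differences against y gives B and C.
    dy = np.diff(y)
    slope, intercept = np.polyfit(y[:-1], dy, 1)   # slope = exp(B*dt) - 1
    B_est = np.log1p(slope) / dt
    C_est = -intercept / slope

    # With B and C fixed, A follows from a closed-form least-squares step.
    A_est = np.sum((y - C_est) * np.exp(B_est * t)) / np.sum(np.exp(2 * B_est * t))
    print(f"B ≈ {B_est:.3f}, C ≈ {C_est:.3f}, A ≈ {A_est:.3f}")
    ```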

  19. A Systematic Review of Electric-Acoustic Stimulation

    PubMed Central

    Ching, Teresa Y. C.; Cowan, Robert

    2013-01-01

    Cochlear implant systems that combine electric and acoustic stimulation in the same ear are now commercially available and the number of patients using these devices is steadily increasing. In particular, electric-acoustic stimulation is an option for patients with severe, high frequency sensorineural hearing impairment. There have been a range of approaches to combining electric stimulation and acoustic hearing in the same ear. To develop a better understanding of fitting practices for devices that combine electric and acoustic stimulation, we conducted a systematic review addressing three clinical questions: what is the range of acoustic hearing in the implanted ear that can be effectively preserved for an electric-acoustic fitting?; what benefits are provided by combining acoustic stimulation with electric stimulation?; and what clinical fitting practices have been developed for devices that combine electric and acoustic stimulation? A search of the literature was conducted and 27 articles that met the strict evaluation criteria adopted for the review were identified for detailed analysis. The range of auditory thresholds in the implanted ear that can be successfully used for an electric-acoustic application is quite broad. The effectiveness of combined electric and acoustic stimulation as compared with electric stimulation alone was consistently demonstrated, highlighting the potential value of preservation and utilization of low frequency hearing in the implanted ear. However, clinical procedures for best fitting of electric-acoustic devices were varied. This clearly identified a need for further investigation of fitting procedures aimed at maximizing outcomes for recipients of electric-acoustic devices. PMID:23539259

  20. A method to combine target volume data from 3D and 4D planned thoracic radiotherapy patient cohorts for machine learning applications.

    PubMed

    Johnson, Corinne; Price, Gareth; Khalifa, Jonathan; Faivre-Finn, Corinne; Dekker, Andre; Moore, Christopher; van Herk, Marcel

    2018-02-01

    The gross tumour volume (GTV) is predictive of clinical outcome and consequently features in many machine-learned models. 4D-planning, however, has prompted substitution of the GTV with the internal gross target volume (iGTV). We present and validate a method to synthesise GTV data from the iGTV, allowing the combination of 3D and 4D planned patient cohorts for modelling. Expert delineations in 40 non-small cell lung cancer patients were used to develop linear fit and erosion methods to synthesise the GTV volume and shape. Quality was assessed using Dice Similarity Coefficients (DSC) and closest point measurements; by calculating dosimetric features; and by assessing the quality of random forest models built on patient populations with and without synthetic GTVs. Volume estimates were within the magnitudes of inter-observer delineation variability. Shape comparisons produced mean DSCs of 0.8817 and 0.8584 for upper and lower lobe cases, respectively. A model trained on combined true and synthetic data performed significantly better than models trained on GTV alone, or combined GTV and iGTV data. Accurate synthesis of GTV size from the iGTV permits the combination of lung cancer patient cohorts, facilitating machine learning applications in thoracic radiotherapy. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Cosmic Star Formation History and Evolution of the Galaxy UV Luminosity Function for z < 1

    NASA Astrophysics Data System (ADS)

    Zhang, Keming; Schiminovich, David

    2018-01-01

    We present the latest constraints on the evolution of the far-ultraviolet luminosity function of galaxies (1500 Å, UVLF hereafter) for 0 < z < 1 based on GALEX photometry, with redshift measurements from four spectroscopic and photometric-redshift catalogs: NSA, GAMA, VIPERS, and COSMOS photo-z. Our final sample consists of ~170000 galaxies, which represents the largest sample used in such studies. By integrating wide NSA and GAMA data and deep VIPERS and COSMOS photo-z data, we have been able to constrain both the bright end and the faint end of the luminosity function with high accuracy over the entire redshift range. We fit a Schechter function to our measurements of the UVLF, both to parameterize its evolution, and to integrate for SFR densities. From z~1 to z~0, the characteristic absolute magnitude of the UVLF increases linearly by ~1.5 magnitudes, while the faint end slope remains shallow (alpha < 1.5). However, the Schechter function fit exhibits an excess of galaxies at the bright end, which is accounted for by contributions from AGN. We also describe our methodology, which can be applied more generally to any combination of wide-shallow and deep-narrow surveys.

  2. MULTIMODE quantum calculations of vibrational energies and IR spectrum of the NO⁺(H₂O) cluster using accurate potential energy and dipole moment surfaces.

    PubMed

    Homayoon, Zahra

    2014-09-28

    A new, full (nine)-dimensional potential energy surface and dipole moment surface to describe the NO(+)(H2O) cluster is reported. The PES is based on fitting of roughly 32,000 CCSD(T)-F12/aug-cc-pVTZ electronic energies. The surface is a linear least-squares fit using a permutationally invariant basis with Morse-type variables. The PES is used in a Diffusion Monte Carlo study of the zero-point energy and wavefunction of the NO(+)(H2O) and NO(+)(D2O) complexes. Using the calculated ZPE the dissociation energies of the clusters are reported. Vibrational configuration interaction calculations of NO(+)(H2O) and NO(+)(D2O) using the MULTIMODE program are performed. The fundamental, a number of overtone, and combination states of the clusters are reported. The IR spectrum of the NO(+)(H2O) cluster is calculated using 4, 5, 7, and 8 modes VSCF/CI calculations. The anharmonic, coupled vibrational calculations, and IR spectrum show very good agreement with experiment. Mode coupling of the water "antisymmetric" stretching mode with the low-frequency intermolecular modes results in intensity borrowing.

  3. MULTIMODE quantum calculations of vibrational energies and IR spectrum of the NO+(H2O) cluster using accurate potential energy and dipole moment surfaces

    NASA Astrophysics Data System (ADS)

    Homayoon, Zahra

    2014-09-01

    A new, full (nine)-dimensional potential energy surface and dipole moment surface to describe the NO+(H2O) cluster is reported. The PES is based on fitting of roughly 32 000 CCSD(T)-F12/aug-cc-pVTZ electronic energies. The surface is a linear least-squares fit using a permutationally invariant basis with Morse-type variables. The PES is used in a Diffusion Monte Carlo study of the zero-point energy and wavefunction of the NO+(H2O) and NO+(D2O) complexes. Using the calculated ZPE the dissociation energies of the clusters are reported. Vibrational configuration interaction calculations of NO+(H2O) and NO+(D2O) using the MULTIMODE program are performed. The fundamental, a number of overtone, and combination states of the clusters are reported. The IR spectrum of the NO+(H2O) cluster is calculated using 4, 5, 7, and 8 modes VSCF/CI calculations. The anharmonic, coupled vibrational calculations, and IR spectrum show very good agreement with experiment. Mode coupling of the water "antisymmetric" stretching mode with the low-frequency intermolecular modes results in intensity borrowing.

  4. In silico modelling of directed evolution: Implications for experimental design and stepwise evolution.

    PubMed

    Wedge, David C; Rowe, William; Kell, Douglas B; Knowles, Joshua

    2009-03-07

    We model the process of directed evolution (DE) in silico using genetic algorithms. Making use of the NK fitness landscape model, we analyse the effects of mutation rate, crossover and selection pressure on the performance of DE. A range of values of K, the epistatic interaction of the landscape, are considered, and high- and low-throughput modes of evolution are compared. Our findings suggest that for runs of around ten generations' duration, as is typical in DE, there is little difference between the way in which DE needs to be configured in the high- and low-throughput regimes, nor across different degrees of landscape epistasis. In all cases, a high selection pressure (but not an extreme one) combined with a moderately high mutation rate works best, while crossover provides some benefit but only on the less rugged landscapes. These genetic algorithms were also compared with a "model-based approach" from the literature, which uses sequential fixing of the problem parameters based on fitting a linear model. Overall, we find that purely evolutionary techniques fare better than do model-based approaches across all but the smoothest landscapes.

  5. A Combined Solar and Geomagnetic Index for Thermospheric Climate

    NASA Technical Reports Server (NTRS)

    Hunt, Linda; Mlynczak, Marty

    2015-01-01

    Infrared radiation from nitric oxide (NO) at 5.3 μm is a primary mechanism by which the thermosphere cools to space. The SABER instrument on the NASA TIMED satellite has been measuring thermospheric cooling by NO for over 13 years. Physically, changes in NO emission are due to changes in temperature, atomic oxygen, and the NO density. These physical changes, however, are driven by changes in solar irradiance and changes in geomagnetic conditions. We show that the SABER time series of globally integrated infrared power (Watts) radiated by NO can be replicated accurately by a multiple linear regression fit using the F10.7, Ap, and Dst indices. This fit enables several fundamental properties of NO cooling to be determined as well as their variability with time, permitting reconstruction of the NO power time series back nearly 70 years with extant databases of these indices. The relative roles of solar ultraviolet and geomagnetic processes in determining the NO cooling are derived and shown to be solar cycle dependent. This reconstruction provides a long-term time series of an integral radiative constraint on thermospheric climate that can be used to test climate models.
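
    A minimal sketch of such a multiple linear regression; the index values and coefficients below are synthetic placeholders, not SABER data or the authors' fitted values:

    ```python
    import numpy as np

    # Hypothetical monthly index values and NO radiated power; real values would come
    # from the SABER record and the F10.7/Ap/Dst archives.
    rng = np.random.default_rng(4)
    n = 160
    f107 = rng.uniform(70, 220, n)          # solar radio flux (sfu)
    ap = rng.gamma(2.0, 6.0, n)             # geomagnetic Ap index
    dst = -rng.gamma(2.0, 15.0, n)          # Dst index (nT), mostly negative
    power = 0.4e9 + 2.1e6 * f107 + 1.5e7 * ap - 0.8e7 * dst + rng.normal(0, 5e7, n)

    # Multiple linear regression: power ~ b0 + b1*F10.7 + b2*Ap + b3*Dst.
    X = np.column_stack([np.ones(n), f107, ap, dst])
    coef, *_ = np.linalg.lstsq(X, power, rcond=None)
    fitted = X @ coef
    r2 = 1 - np.sum((power - fitted) ** 2) / np.sum((power - power.mean()) ** 2)
    print("coefficients:", coef)
    print("R^2:", r2)
    ```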

  6. The Role of Adiposity in the Association between Muscular Fitness and Cardiovascular Disease.

    PubMed

    Pérez-Bey, Alejandro; Segura-Jiménez, Víctor; Fernández-Santos, Jorge Del Rosario; Esteban-Cornejo, Irene; Gómez-Martínez, Sonia; Veiga, Oscar L; Marcos, Ascensión; Castro-Piñero, José

    2018-05-11

    To test the associations of muscular fitness and body mass index (BMI), individually and combined, with clustered cardiovascular disease risk factors in children and adolescents and to analyze the mediator role of BMI in the association between muscular fitness and clustered cardiovascular disease risk factors. In total, 239 children (113 girls) and 270 adolescents (128 girls) participated in this cross-sectional study. Height and weight were assessed, and BMI was calculated. A cardiovascular disease risk factors index (CVDRF-I) was created from the combination of the following variables: waist circumference, systolic blood pressure, triglycerides, high-density lipoprotein cholesterol, and glucose. Handgrip strength/weight and standing long jump tests were used to assess muscular fitness. A muscular fitness index was computed from the combination of both tests. Muscular fitness index was associated with CVDRF-I in children of both sexes and adolescent boys; however, these associations disappeared after accounting for BMI. BMI was associated with CVDRF-I in both children and adolescents, even after adjusting for muscular fitness (all P < .001). In male and female children and in adolescent boys, the association between muscular fitness and CVDRF-I was mediated by BMI (all P < .001). Because there was no association between muscular fitness and CVDRF-I in adolescent girls, the mediation hypothesis was discarded. BMI is an independent predictor of CVDRF-I in children and adolescents of both sexes. Conversely, the effect of muscular fitness on CVDRF-I seems to be fully mediated by BMI levels in male and female children and in adolescent boys. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Study on mathematical model to predict aerated power consumption in a gas-liquid stirred tank

    NASA Astrophysics Data System (ADS)

    Luan, Deyu; Zhang, Shengfeng; Wei, Xing; Chen, Yiming

    The aerated power consumption characteristics of a transparent, flat-bottomed tank of 0.3 m diameter stirred by a Rushton impeller were investigated experimentally, with tap water as the liquid and air as the gas. Based on a Weibull model, a complete correlation between aerated power and aerated flow number was established through non-linear fit analysis, and the effects of aeration rate and impeller speed on aerated power consumption were explored. Results show that the trend of aerated power consumption is similar across impeller speeds and impeller diameters: the aerated power drops nearly linearly at the start of gas input, the rate of decrease then slows as the aeration rate increases, and the power finally levels off once the aeration rate reaches the loading state. The non-linear fit was performed in Origin using the experimental data, and its high precision indicates that the established mathematical model can be used to predict aerated power consumption accurately. The proposed work provides useful guidance and a reference for the design and scale-up of stirred vessels.

  8. U.S. EPA OPTIMAL WELL LOCATOR (OWL): A SCREENING TOOL FOR EVALUATING LOCATIONS OF MONITORING WELLS (ROCKY GAP, MD)

    EPA Science Inventory

    The Optimal Well Locator (OWL) uses linear regression to fit a plane to the elevation of the water table in monitoring wells in each round of sampling. The slope of the plane fit to the water table is used to predict the direction and gradient of ground water flow. Along with ...
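
    A sketch of the plane-fit step with hypothetical well coordinates and water-table elevations (not OWL's actual code or data): fit z = a*x + b*y + c by least squares, then read the gradient and the down-gradient flow direction from the slope coefficients.

    ```python
    import numpy as np

    # Hypothetical well coordinates (m) and water-table elevations (m) for one sampling round.
    x = np.array([0.0, 35.0, 60.0, 20.0, 80.0])
    y = np.array([0.0, 10.0, 45.0, 70.0, 85.0])
    z = np.array([101.2, 101.0, 100.6, 100.9, 100.3])

    # Least-squares plane z = a*x + b*y + c through the well heads.
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

    # Hydraulic gradient magnitude and down-gradient flow direction (degrees from +x axis);
    # ground water flows toward decreasing head, i.e. along -(a, b).
    gradient = np.hypot(a, b)
    direction = np.degrees(np.arctan2(-b, -a))
    print(f"gradient = {gradient:.4f}, flow direction = {direction:.1f} deg")
    ```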

  9. Analyser-based phase contrast image reconstruction using geometrical optics.

    PubMed

    Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A

    2007-07-21

    Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 microm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
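
    A sketch of fitting a symmetric Pearson type VII profile to a rocking curve with scipy; the parameterisation below (with w as the half-width at half-maximum) is one common form, and the synthetic data stand in for the measured curve:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def pearson_vii(theta, i0, theta0, w, m):
        """Symmetric Pearson type VII profile; w is the half-width at half-maximum."""
        return i0 * (1.0 + ((theta - theta0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** (-m)

    # Synthetic analyser rocking curve (angle in arcsec vs. normalised reflectivity).
    theta = np.linspace(-30, 30, 121)
    rng = np.random.default_rng(5)
    data = pearson_vii(theta, 1.0, 0.0, 8.0, 1.8) + rng.normal(0, 0.01, theta.size)

    popt, pcov = curve_fit(pearson_vii, theta, data, p0=[1.0, 0.0, 5.0, 1.5])
    print("fitted [I0, theta0, HWHM, m]:", np.round(popt, 3))
    ```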

  10. flexsurv: A Platform for Parametric Survival Modeling in R

    PubMed Central

    Jackson, Christopher H.

    2018-01-01

    flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450

  11. Combined approaches to flexible fitting and assessment in virus capsids undergoing conformational change☆

    PubMed Central

    Pandurangan, Arun Prasad; Shakeel, Shabih; Butcher, Sarah Jane; Topf, Maya

    2014-01-01

    Fitting of atomic components into electron cryo-microscopy (cryoEM) density maps is routinely used to understand the structure and function of macromolecular machines. Many fitting methods have been developed, but a standard protocol for successful fitting and assessment of fitted models has yet to be agreed upon among the experts in the field. Here, we created and tested a protocol that highlights important issues related to homology modelling, density map segmentation, rigid and flexible fitting, as well as the assessment of fits. As part of it, we use two different flexible fitting methods (Flex-EM and iMODfit) and demonstrate how combining the analysis of multiple fits and model assessment could result in an improved model. The protocol is applied to the case of the mature and empty capsids of Coxsackievirus A7 (CAV7) by flexibly fitting homology models into the corresponding cryoEM density maps at 8.2 and 6.1 Å resolution. As a result, and due to the improved homology models (derived from recently solved crystal structures of a close homolog – EV71 capsid – in mature and empty forms), the final models present an improvement over previously published models. In close agreement with the capsid expansion observed in the EV71 structures, the new CAV7 models reveal that the expansion is accompanied by ∼5° counterclockwise rotation of the asymmetric unit, predominantly contributed by the capsid protein VP1. The protocol could be applied not only to viral capsids but also to many other complexes characterised by a combination of atomic structure modelling and cryoEM density fitting. PMID:24333899

  12. Small-Caliber Projectile Target Impact Angle Determined From Close Proximity Radiographs

    DTIC Science & Technology

    2006-10-01

    discrete motion data that can be numerically modeled using linear aerodynamic theory or 6-degrees-of-freedom equations of motion. The values of Fφ ... Prediction Excel® Spreadsheet shown in figure 9. The Gamma at Impact Spreadsheet uses the linear aerodynamics model, equations 5 and 6, to calculate αT ... trajectory angle error via consideration of the RMS fit errors of the actual firings. However, the linear aerodynamics model does not include this effect

  13. On proper linearization, construction and analysis of the Boyle-van't Hoff plots and correct calculation of the osmotically inactive volume.

    PubMed

    Katkov, Igor I

    2011-06-01

    The Boyle-van't Hoff (BVH) law of physics has been widely used in cryobiology for calculation of the key osmotic parameters of cells and optimization of cryo-protocols. The proper use of linearization of the Boyle-van't Hoff relationship for the osmotically inactive volume (v(b)) has been discussed in a rigorous way in (Katkov, Cryobiology, 2008, 57:142-149). Nevertheless, scientists in the field have continued to use inappropriate methods of linearization (and curve fitting) of the BVH data, plotting of the BVH line, and calculation of v(b). Here, we discuss the sources of incorrect linearization of the BVH relationship using concrete examples of recent publications, analyze the properties of the correct BVH line (which is unique for a given v(b)), provide appropriate statistical formulas for calculation of v(b) from the experimental data, and propose simple instructions (a standard operating procedure, SOP) for proper normalization of the data, appropriate linearization and construction of the BVH plots, and correct calculation of v(b). The possible sources of non-linear behavior or poor fit of the data to the proper BVH line, such as active water and/or solute transport, which can result in a large discrepancy between the hyperosmotic and hypoosmotic parts of the BVH plot, are also discussed. Copyright © 2011 Elsevier Inc. All rights reserved.
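
    A minimal numerical sketch consistent with the constraint that the proper BVH line passes through (1, 1), so the osmotically inactive fraction v_b is the single fitted parameter; the data are illustrative, and the paper's SOP gives the full statistical treatment:

    ```python
    import numpy as np

    # Hypothetical Boyle-van't Hoff data: x = (isotonic osmolality)/(osmolality),
    # v = cell volume normalised to the isotonic volume.
    x = np.array([1.0, 0.75, 0.5, 0.4, 0.33, 0.25])
    v = np.array([1.0, 0.85, 0.70, 0.64, 0.60, 0.55])

    # The proper BVH line v = v_b + (1 - v_b)*x passes through (1, 1) by construction,
    # so one-parameter least squares gives the osmotically inactive volume fraction v_b.
    vb = np.sum((v - x) * (1 - x)) / np.sum((1 - x) ** 2)
    print(f"osmotically inactive volume fraction v_b ≈ {vb:.3f}")
    ```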

  14. Relationships between the antibacterial activity of sodium hypochlorite and treatment time and biofilm age in early Enterococcus faecalis biofilms.

    PubMed

    Chau, N P T; Chung, N H; Jeon, J G

    2015-08-01

    To determine the relationships between the antibacterial activity of NaOCl and treatment time and biofilm age in early Enterococcus faecalis biofilms using a linear fitting procedure. Enterococcus faecalis biofilms were formed on hydroxyapatite discs. To investigate the relationship between the antibacterial activity of NaOCl and biofilm age, 22-, 46-, 70- and 94-h-old biofilms were exposed to NaOCl (0-3%) for 5 min. To investigate the relationship between the antibacterial activity of NaOCl and treatment time, 70-h-old biofilms were exposed to NaOCl (0-3%) for 1, 3, 5 and 7 min. After treatment, colony-forming units (CFUs) were counted. To determine the relationships between these variables, linear fitting was performed. The change in the minimum biofilm eradication concentration (MBEC) of NaOCl followed a linear pattern of biofilm age (R = 0.941, R(2)  = 0.886) or treatment time dependence (R = -0.948, R(2)  = 0.898). Below the MBEC, the fitting lines for bacterial CFU count versus NaOCl concentration (R ≤ -0.973, R(2)  ≥ 0.948) in the 22-, 46-, 70- and 94-h-old biofilms implied that the antibacterial activity of NaOCl decreased as the biofilm age increased. The fitting lines for bacterial CFU count versus NaOCl concentration (R ≤ -0.970, R(2)  ≥ 0.942) in the 1-, 3-, 5- and 7-min treatments implied that the antibacterial activity of NaOCl increased with treatment time. These results suggest that the antibacterial activity of NaOCl against early E. faecalis biofilms in root canals may follow a linear pattern depending on biofilm age or treatment time. © 2014 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  15. GPS and Relative Sea-level Constraints on Glacial Isostatic Adjustment in North America

    NASA Astrophysics Data System (ADS)

    James, T. S.; Simon, K.; Henton, J. A.; Craymer, M.

    2015-12-01

    Recently, new GIA models have been developed for the Innuitian Ice Sheet and for the north-central portion of the Laurentide Ice Sheet (Simon, 2014; Simon et al., 2015). This new combined model, herein called Innu-Laur15, was developed from the ICE-5G model and load adjustments were made to improve the fit to relative sea-level observations and to GPS-constrained vertical crustal motion in the Canadian Arctic Archipelago and around Hudson Bay. Here, the predictions of Innu-Laur15 are compared to observations and other GIA models over an extended region comprising much of North America east of the Rocky Mountains. GIA predictions are made using compressible Maxwell Earth models with gravitationally self-consistent ocean loading, changing coastlines, and ocean-water inundation where marine ice retreats or floats. For this study, GPS time series are the NA12 solution (Blewitt et al., 2013) downloaded from http://geodesy.unr.edu/NGLStationPages/GlobalStationList and fit with a linear trend, annual and semi-annual terms, and offsets as indicated by station logs and by inspection of the time series. For example, a comparison of GPS observations of vertical crustal motion from the NA12 solution at 360 sites gives root-mean-square (RMS) residuals of 3.2 mm/yr (null hypothesis), 1.8 mm/yr (Innu-Laur15), and 2.9 mm/yr (ICE-5G) for the VM5a Earth model. Preliminary comparisons with other Earth models give similar patterns where Innu-Laur15 provides a better fit than ICE-5G. Further adjustments to the Innu-Laur15 ice sheet history could improve the fit to GPS rates in other regions of North America.
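
    A sketch of the time-series fit described above (intercept, linear trend, annual and semi-annual terms, and a step offset) using an ordinary least-squares design matrix; the synthetic series, noise level, and offset epoch are assumptions for the demo, not NA12 data:

    ```python
    import numpy as np

    # Hypothetical daily vertical positions (mm) over 10 years with one known offset epoch.
    rng = np.random.default_rng(6)
    t = np.arange(0, 10, 1 / 365.25)                 # time in years
    offset_epoch = 4.3                               # from the station log (assumed)
    truth = 3.0 * t + 2.0 * np.sin(2 * np.pi * t) + 1.0 * np.cos(4 * np.pi * t)
    truth += 5.0 * (t >= offset_epoch)
    u = truth + rng.normal(0, 2.0, t.size)

    # Design matrix: intercept, linear trend, annual and semi-annual sin/cos terms, step offset.
    X = np.column_stack([
        np.ones_like(t), t,
        np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
        np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),
        (t >= offset_epoch).astype(float),
    ])
    coef, *_ = np.linalg.lstsq(X, u, rcond=None)
    print(f"vertical rate ≈ {coef[1]:.2f} mm/yr, offset ≈ {coef[-1]:.2f} mm")
    ```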

  16. Concentration-response of short-term ozone exposure and hospital admissions for asthma in Texas.

    PubMed

    Zu, Ke; Liu, Xiaobin; Shi, Liuhua; Tao, Ge; Loftus, Christine T; Lange, Sabine; Goodman, Julie E

    2017-07-01

    Short-term exposure to ozone has been associated with asthma hospital admissions (HA) and emergency department (ED) visits, but the shape of the concentration-response (C-R) curve is unclear. We conducted a time series analysis of asthma HAs and ambient ozone concentrations in six metropolitan areas in Texas from 2001 to 2013. Using generalized linear regression models, we estimated the effect of daily 8-hour maximum ozone concentrations on asthma HAs for all ages combined, and for those aged 5-14, 15-64, and 65+years. We fit penalized regression splines to evaluate the shape of the C-R curves. Using a log-linear model, estimated risk per 10ppb increase in average daily 8-hour maximum ozone concentrations was highest for children (relative risk [RR]=1.047, 95% confidence interval [CI]: 1.025-1.069), lower for younger adults (RR=1.018, 95% CI: 1.005-1.032), and null for older adults (RR=1.002, 95% CI: 0.981-1.023). However, penalized spline models demonstrated significant nonlinear C-R relationships for all ages combined, children, and younger adults, indicating the existence of thresholds. We did not observe an increased risk of asthma HAs until average daily 8-hour maximum ozone concentrations exceeded approximately 40ppb. Ozone and asthma HAs are significantly associated with each other; susceptibility to ozone is age-dependent, with children at highest risk. C-R relationships between average daily 8-hour maximum ozone concentrations and asthma HAs are significantly curvilinear for all ages combined, children, and younger adults. These nonlinear relationships, as well as the lack of relationship between average daily 8-hour maximum and peak ozone concentrations, have important implications for assessing risks to human health in regulatory settings. Copyright © 2017. Published by Elsevier Ltd.

  17. [Sensitivity and specificity of nested PCR pyrosequencing in hepatitis B virus drug resistance gene testing].

    PubMed

    Sun, Shumei; Zhou, Hao; Zhou, Bin; Hu, Ziyou; Hou, Jinlin; Sun, Jian

    2012-05-01

    To evaluate the sensitivity and specificity of nested PCR combined with pyrosequencing in the detection of HBV drug-resistance gene. RtM204I (ATT) mutant and rtM204 (ATG) nonmutant plasmids mixed at different ratios were detected for mutations using nested-PCR combined with pyrosequencing, and the results were compared with those by conventional PCR pyrosequencing to analyze the linearity and consistency of the two methods. Clinical specimens with different viral loads were examined for drug-resistant mutations using nested PCR pyrosequencing and nested PCR combined with dideoxy sequencing (Sanger) for comparison of the detection sensitivity and specificity. The fitting curves demonstrated good linearity of both conventional PCR pyrosequencing and nested PCR pyrosequencing (R(2)>0.99, P<0.05). Nested PCR showed a better consistency with the predicted value than conventional PCR, and was superior to conventional PCR for detection of samples containing 90% mutant plasmid. In the detection of clinical specimens, Sanger sequencing had a significantly lower sensitivity than nested PCR pyrosequencing (92% vs 100%, P<0.01). The detection sensitivity of Sanger sequencing varied with the viral loads, especially in samples with low viral copies (HBV DNA ≤3log10 copies/ml), where the sensitivity was 78%, significantly lower than that of pyrosequencing (100%, P<0.01). Neither of the two methods yielded positive results for the negative control samples, suggesting their good specificity. Compared with nested PCR and Sanger sequencing method, nested PCR pyrosequencing has a higher sensitivity especially in clinical specimens with low viral copies, which can be important for early detection of HBV mutant strains and hence more effective clinical management.

  18. Meta-analysis of thirty-two case-control and two ecological radon studies of lung cancer.

    PubMed

    Dobrzynski, Ludwik; Fornalski, Krzysztof W; Reszczynska, Joanna

    2018-03-01

    A re-analysis has been carried out of thirty-two case-control and two ecological studies concerning the influence of radon, a radioactive gas, on the risk of lung cancer. Three mathematically simplest dose-response relationships (models) were tested: constant (zero health effect), linear, and parabolic (linear-quadratic). Health effect end-points reported in the analysed studies are odds ratios or relative risk ratios, related either to morbidity or mortality. In our preliminary analysis, we show that the results of dose-response fitting are qualitatively (within uncertainties, given as error bars) the same, whichever of these health effect end-points are applied. Therefore, we deemed it reasonable to aggregate all response data into the so-called Relative Health Factor and jointly analysed such mixed data, to obtain better statistical power. In the second part of our analysis, robust Bayesian and classical methods of analysis were applied to this combined dataset. In this part of our analysis, we selected different subranges of radon concentrations. In view of substantial differences between the methodology used by the authors of case-control and ecological studies, the mathematical relationships (models) were applied mainly to the thirty-two case-control studies. The degree to which the two ecological studies, analysed separately, affect the overall results when combined with the thirty-two case-control studies, has also been evaluated. In all, as a result of our meta-analysis of the combined cohort, we conclude that the analysed data concerning radon concentrations below ~1000 Bq/m3 (~20 mSv/year of effective dose to the whole body) do not support the thesis that radon may be a cause of any statistically significant increase in lung cancer incidence.

  19. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.

    PubMed

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José

    2018-03-28

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines ([Formula: see text]) and generated another three models. Each of these 6 models were fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDE with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the prediction accuracy of the model-method combination with G×E, MDs and MDe, including the random intercepts of the lines with GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances but with lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.
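
    For illustration, a sketch of building the linear (GBLUP-type) kernel and a Gaussian kernel from a marker matrix; the marker coding, bandwidth, and median scaling used here are common choices assumed for the example, not necessarily those of the study:

    ```python
    import numpy as np

    # Hypothetical marker matrix X (lines x markers), coded -1/0/1 and then standardised.
    rng = np.random.default_rng(7)
    X = rng.choice([-1.0, 0.0, 1.0], size=(100, 500))
    X = (X - X.mean(axis=0)) / X.std(axis=0)

    # Linear (GBLUP-type) kernel: G = X X' / p.
    G = X @ X.T / X.shape[1]

    # Gaussian kernel: K_ij = exp(-h * d_ij^2 / median(d^2)), with squared Euclidean
    # distances between lines and a bandwidth h (here h = 1).
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
    h = 1.0
    K = np.exp(-h * d2 / np.median(d2[np.triu_indices_from(d2, k=1)]))
    print(G.shape, K.shape)
    ```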

  20. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    PubMed Central

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models were fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDE with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the prediction accuracy of the model-method combination with G×E, MDs and MDe, including the random intercepts of the lines with GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances but with lower genomic prediction accuracy. PMID:29476023

  1. Relativity Parameters Determined from Lunar Laser Ranging

    NASA Technical Reports Server (NTRS)

    Williams, J. G.; Newhall, X. X.; Dickey, J. O.

    1996-01-01

    Analysis of 24 years of lunar laser ranging data is used to test the principle of equivalence, geodetic precession, the PPN parameters beta and gamma, and Ġ/G. Recent data can be fitted with a rms scatter of 3 cm. (a) Using the Nordtvedt effect to test the principle of equivalence, it is found that the Moon and Earth accelerate alike in the Sun's field. The relative accelerations match to within 5 x 10(exp -13). This limit, combined with an independent determination of gamma from planetary time delay, gives beta. Including the uncertainty due to compositional differences, the parameter beta differs from unity by no more than 0.0014; and, if the weak equivalence principle is satisfied, the difference is no more than 0.0006. (b) Geodetic precession matches its expected 19.2 marcsec/yr rate within 0.7%. This corresponds to a 1% test of gamma. (c) Apart from the Nordtvedt effect, beta and gamma can be tested from their influence on the lunar orbit. It is argued theoretically that the linear combination 0.8(beta) + 1.4(gamma) can be tested at the 1% level of accuracy. For solutions using numerically derived partial derivatives, higher sensitivity is found. Both beta and gamma match the values of general relativity to within 0.005, and the linear combination beta + gamma matches to within 0.003, but caution is advised due to the lack of theoretical understanding of these sensitivities. (d) No evidence for a changing gravitational constant is found, with absolute value of Ġ/G less than or equal to 8 x 10(exp -12)/yr. There is significant sensitivity to Ġ/G through solar perturbations on the lunar orbit.

  2. Regional variability among nonlinear chlorophyll-phosphorus relationships in lakes

    USGS Publications Warehouse

    Filstrup, Christopher T.; Wagner, Tyler; Soranno, Patricia A.; Stanley, Emily H.; Stow, Craig A.; Webster, Katherine E.; Downing, John A.

    2014-01-01

    The relationship between chlorophyll a (Chl a) and total phosphorus (TP) is a fundamental relationship in lakes that reflects multiple aspects of ecosystem function and is also used in the regulation and management of inland waters. The exact form of this relationship has substantial implications for its meaning and its use. We assembled a spatially extensive data set to examine whether nonlinear models are a better fit for Chl a—TP relationships than traditional log-linear models, whether there were regional differences in the form of the relationships, and, if so, which regional factors were related to these differences. We analyzed a data set from 2105 temperate lakes across 35 ecoregions by fitting and comparing two different nonlinear models and one log-linear model. The two nonlinear models fit the data better than the log-linear model. In addition, the parameters for the best-fitting model varied among regions: the maximum and lower Chl a asymptotes were positively and negatively related to percent regional pasture land use, respectively, and the rate at which chlorophyll increased with TP was negatively related to percent regional wetland cover. Lakes in regions with more pasture fields had higher maximum chlorophyll concentrations at high TP concentrations but lower minimum chlorophyll concentrations at low TP concentrations. Lakes in regions with less wetland cover showed a steeper Chl a—TP relationship than wetland-rich regions. Interpretation of Chl a—TP relationships depends on regional differences, and theory and management based on a monolithic relationship may be inaccurate.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Otake, M.; Schull, W.J.

    The occurrence of lenticular opacities among atomic bomb survivors in Hiroshima and Nagasaki detected in 1963-1964 has been examined in reference to their gamma and neutron doses. A lenticular opacity in this context implies an ophthalmoscopic and slit lamp biomicroscopic defect in the axial posterior aspect of the lens which may or may not interfere measurably with visual acuity. Several different dose-response models were fitted to the data after the effects of age at time of bombing (ATB) were examined. Some postulate the existence of a threshold(s), others do not. All models assume a "background" exists, that is, that some number of posterior lenticular opacities are ascribable to events other than radiation exposure. Among these alternatives we can show that a simple linear gamma-neutron relationship which assumes no threshold does not fit the data adequately under the T65 dosimetry, but does fit the recent Oak Ridge and Lawrence Livermore estimates. Other models which envisage quadratic terms in gamma and which may or may not assume a threshold are compatible with the data. The "best" fit, that is, the one with the smallest chi-square and largest tail probability, is with a "linear gamma:linear neutron" model which postulates a gamma threshold but no threshold for neutrons. It should be noted that the greatest difference in the dose-response models associated with the three different sets of doses involves the neutron component, as is, of course, to be expected. No effect of neutrons on the occurrence of lenticular opacities is demonstrable with either the Lawrence Livermore or Oak Ridge estimates.

  4. A new analytical method for estimating lumped parameter constants of linear viscoelastic models from strain rate tests

    NASA Astrophysics Data System (ADS)

    Mattei, G.; Ahluwalia, A.

    2018-04-01

    We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for the derivation of lumped parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates (ε̇). The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns viscoelastic constants similar to those of the original epsilon-dot method, which is based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.

  5. Linear model for fast background subtraction in oligonucleotide microarrays.

    PubMed

    Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico

    2009-11-16

    One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.

  6. Calculation of Hammett Equation parameters for some N,N′-bis (substituted-phenyl)-1,4-quinonediimines by density functional theory

    NASA Astrophysics Data System (ADS)

    Sein, Lawrence T.

    2011-08-01

    Hammett parameters σ' were determined from vertical ionization potentials, vertical electron affinities, adiabatic ionization potentials, adiabatic electron affinities, HOMO, and LUMO energies of a series of N,N′-bis(3',4'-substituted-phenyl)-1,4-quinonediimines computed at the B3LYP/6-311+G(2d,p) level on B3LYP/6-31G* molecular geometries. These parameters were then least-squares fit as a function of literature Hammett parameters. For N,N′-bis(4'-substituted-phenyl)-1,4-quinonediimines, the least-squares fits demonstrated excellent linearity, with the square of Pearson's correlation coefficient (r²) greater than 0.98 for all isomers. For N,N′-bis(3'-substituted-3'-aminophenyl)-1,4-quinonediimines, the least-squares fits were less nearly linear, with r² approximately 0.70 for all isomers when derived from calculated vertical ionization potentials, but those from calculated vertical electron affinities usually greater than 0.90.

  7. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    NASA Astrophysics Data System (ADS)

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify a tip-tilt disturbance model for Linear Quadratic Gaussian control. The identification, based on the Levenberg-Marquardt method, requires little prior information and no auxiliary system, so the tip-tilt disturbance model can be identified on-line for real-time control. This makes it straightforward to run Linear Quadratic Gaussian control efficiently for vibration mitigation in different adaptive optics systems. The validity of Linear Quadratic Gaussian control combined with this tip-tilt disturbance model identification method is verified against experimental data replayed in simulation.
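
    A hedged Python sketch of a Levenberg-Marquardt identification step of the kind described above: fitting a single-resonance power-spectrum shape to tip-tilt disturbance data. The one-resonance parametrization and all numbers are invented for illustration and are not the paper's disturbance model.

```python
# Levenberg-Marquardt fit of an illustrative vibration-peak PSD model.
import numpy as np
from scipy.optimize import least_squares

def psd_model(p, f):
    amp, f0, damping, floor = p
    # Second-order resonance magnitude plus a white-noise floor
    return amp / ((f**2 - f0**2) ** 2 + (damping * f) ** 2) + floor

def residuals(p, f, psd_meas):
    return psd_model(p, f) - psd_meas

f = np.linspace(1.0, 200.0, 400)                 # Hz
true = [1e6, 40.0, 30.0, 0.5]                    # hypothetical truth
rng = np.random.default_rng(1)
psd_meas = psd_model(true, f) * (1 + 0.05 * rng.normal(size=f.size))

fit = least_squares(residuals, x0=[5e5, 30.0, 10.0, 1.0],
                    args=(f, psd_meas), method="lm")
print("identified [amp, f0, damping, floor]:", fit.x)
```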

  8. On the use of the covariance matrix to fit correlated data

    NASA Astrophysics Data System (ADS)

    D'Agostini, G.

    1994-07-01

    Best fits to data which are affected by systematic uncertainties on the normalization factor have the tendency to produce curves lower than expected if the covariance matrix of the data points is used in the definition of the χ2. This paper shows that the effect is a direct consequence of the hypothesis used to estimate the empirical covariance matrix, namely the linearization on which the usual error propagation relies. The bias can become unacceptable if the normalization error is large, or a large number of data points are fitted.
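
    A worked numerical illustration of the bias discussed above, with invented numbers: two measurements of the same quantity sharing a fully correlated normalization uncertainty. Fitting a constant with the full covariance matrix, built by the usual linearized propagation, pulls the estimate below both points.

```python
import numpy as np

y = np.array([8.0, 8.5])          # measured values
sigma = np.array([0.1, 0.1])      # independent uncertainties
sigma_norm = 0.10                 # 10% common normalization uncertainty

# Linearized propagation: V_ij = delta_ij * sigma_i^2 + sigma_norm^2 * y_i * y_j
V = np.diag(sigma**2) + sigma_norm**2 * np.outer(y, y)
Vinv = np.linalg.inv(V)

# Best-fit constant k minimizing chi^2 = (y - k)^T V^-1 (y - k)
ones = np.ones_like(y)
k_hat = (ones @ Vinv @ y) / (ones @ Vinv @ ones)
print(f"best-fit constant: {k_hat:.3f} (below both 8.0 and 8.5)")
```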

  9. Entropy-based goodness-of-fit test: Application to the Pareto distribution

    NASA Astrophysics Data System (ADS)

    Lequesne, Justine

    2013-08-01

    Goodness-of-fit tests based on entropy have been introduced in [13] for testing normality. The maximum entropy distribution in a class of probability distributions defined by linear constraints induces a Pythagorean equality between the Kullback-Leibler information and an entropy difference. This allows one to propose a goodness-of-fit test for maximum entropy parametric distributions which is based on the Kullback-Leibler information. We will focus on the application of the method to the Pareto distribution. The power of the proposed test is computed through Monte Carlo simulation.
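
    A building-block Python sketch (not the published test): a spacing-based (Vasicek) entropy estimate compared with the differential entropy of a maximum-likelihood Pareto fit. A large gap between the two suggests a poor fit; actual critical values would have to come from Monte Carlo simulation, as in the study above.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 10                          # sample size and spacing window

x = (rng.pareto(2.5, n) + 1.0) * 1.0    # classic Pareto sample, shape 2.5, scale 1.0
xs = np.sort(x)

# Vasicek entropy estimator H_mn = mean of log( n * (X_(i+m) - X_(i-m)) / (2m) )
lo = np.clip(np.arange(n) - m, 0, n - 1)
hi = np.clip(np.arange(n) + m, 0, n - 1)
H_vasicek = np.mean(np.log(n * (xs[hi] - xs[lo]) / (2 * m)))

# ML Pareto fit and its differential entropy: ln(x_m/alpha) + 1 + 1/alpha
x_m = xs[0]
alpha = n / np.sum(np.log(xs / x_m))
H_pareto = np.log(x_m / alpha) + 1.0 + 1.0 / alpha

print(f"Vasicek estimate: {H_vasicek:.3f}, fitted-Pareto entropy: {H_pareto:.3f}")
```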

  10. What Physical Fitness Component Is Most Closely Associated With Adolescents' Blood Pressure?

    PubMed

    Nunes, Heloyse E G; Alves, Carlos A S; Gonçalves, Eliane C A; Silva, Diego A S

    2017-12-01

    This study aimed to determine which of four selected physical fitness variables would be most associated with blood pressure changes (systolic and diastolic) in a large sample of adolescents. This was a descriptive, cross-sectional epidemiological study of 1,117 adolescents aged 14-19 years from southern Brazil. Systolic and diastolic blood pressure were measured by a digital pressure device, and the selected physical fitness variables were body composition (body mass index), flexibility (sit-and-reach test), muscle strength/resistance (manual dynamometer), and aerobic fitness (Modified Canadian Aerobic Fitness Test). Simple and multiple linear regression analyses revealed that aerobic fitness and muscle strength/resistance best explained variations in systolic blood pressure for boys (17.3% and 7.4% of variance) and girls (7.4% of variance). Aerobic fitness, body composition, and muscle strength/resistance are all important indicators of blood pressure control, but aerobic fitness was a stronger predictor of systolic blood pressure in boys and of diastolic blood pressure in both sexes.

  11. Limitations of inclusive fitness.

    PubMed

    Allen, Benjamin; Nowak, Martin A; Wilson, Edward O

    2013-12-10

    Until recently, inclusive fitness has been widely accepted as a general method to explain the evolution of social behavior. Affirming and expanding earlier criticism, we demonstrate that inclusive fitness is instead a limited concept, which exists only for a small subset of evolutionary processes. Inclusive fitness assumes that personal fitness is the sum of additive components caused by individual actions. This assumption does not hold for the majority of evolutionary processes or scenarios. To sidestep this limitation, inclusive fitness theorists have proposed a method using linear regression. On the basis of this method, it is claimed that inclusive fitness theory (i) predicts the direction of allele frequency changes, (ii) reveals the reasons for these changes, (iii) is as general as natural selection, and (iv) provides a universal design principle for evolution. In this paper we evaluate these claims, and show that all of them are unfounded. If the objective is to analyze whether mutations that modify social behavior are favored or opposed by natural selection, then no aspect of inclusive fitness theory is needed.

  12. Multigenerational effects of inbreeding in Cucurbita pepo ssp. texana (Cucurbitaceae).

    PubMed

    Hayes, C Nelson; Winsor, James A; Stephenson, Andrew G

    2005-02-01

    The shape of the fitness function relating the decline in fitness with coefficient of inbreeding (f) can provide evidence concerning the genetic basis of inbreeding depression, but few studies have examined inbreeding depression across a range of f using noncultivated species. Furthermore, studies have rarely examined the effects of inbreeding depression in the maternal parent on offspring fitness. To estimate the shape of the fitness function, we examined the relationship between f and fitness across a range of f from 0.000 to 0.875 for components of both male and female fitness in Cucurbita pepo ssp. texana. Each measure of female fitness declined with f, including pistillate flower number, fruit number, seed number per fruit, seed mass per fruit, and percentage seed germination. Several aspects of male fitness also declined with f, including staminate flower number, pollen number per flower, and the number of days of flowering, although cumulative inbreeding depression was less severe for male (0.34) than for female function (0.39). Fitness tended to decline linearly with f between f = 0.00 and f = 0.75 for most traits and across cumulative lifetime fitness (mean = 0.66), suggesting that individual genes causing inbreeding depression are additive and the result of many alleles of small effect. However, most traits also showed a small reduction in inbreeding depression between f = 0.75 and f = 0.875, and evidence of purging or diminishing epistasis was found for in vitro pollen-tube growth rate. To examine inbreeding depression as a maternal effect, we performed outcross pollinations on f = 0.0 and f = 0.5 mothers and found that depression due to maternal inbreeding was 0.07, compared to 0.10 for offspring produced through one generation of selfing. In at least some families, maternal inbreeding reduced fruit number, seed number and mass, staminate flower number, pollen diameter, and pollen-tube growth rate. Collectively these results suggest that, while the fitness function appears to be largely linear for most traits, maternal effects may compound the effects of inbreeding depression in multigenerational studies, though this may be partially offset by purging or diminishing epistasis.

  13. Are non-linearity effects of absorption important for MAX-DOAS observations?

    NASA Astrophysics Data System (ADS)

    Pukite, Janis; Wang, Yang; Wagner, Thomas

    2017-04-01

    For scattered light observations the absorption optical depth depends non-linearly on the trace gas concentrations if their absorption is strong. This is the case because the Beer-Lambert law is generally not applicable for scattered light measurements due to many (i.e. more than one) light paths contributing to the measurement. While in many cases a linear approximation can be made, for scenarios with strong absorption non-linear effects cannot always be neglected. This is especially the case for observation geometries with spatially extended and diffuse light paths, most notably satellite limb geometry, but also for nadir measurements. Fortunately, non-linear effects can be quantified by expanding the radiative transfer equation in a Taylor series with respect to the trace gas absorption coefficients. If necessary, (1) the higher-order absorption structures can then be described as separate fit parameters in the DOAS fit and (2) the algorithm constraints of retrievals of VCDs and profiles can be improved by considering higher-order sensitivity parameters. In this study we investigate the contribution of the higher-order absorption structures for MAX-DOAS observation geometry for different atmospheric and ground properties (cloud and aerosol effects, trace gas amount, albedo) and geometry (different Sun and viewing angles).

  14. A two-dimensional spectrum analysis for sedimentation velocity experiments of mixtures with heterogeneity in molecular weight and shape.

    PubMed

    Brookes, Emre; Cao, Weiming; Demeler, Borries

    2010-02-01

    We report a model-independent analysis approach for fitting sedimentation velocity data which permits simultaneous determination of shape and molecular weight distributions for mono- and polydisperse solutions of macromolecules. Our approach allows for heterogeneity in the frictional domain, providing a more faithful description of the experimental data for cases where frictional ratios are not identical for all components. Because of increased accuracy in the frictional properties of each component, our method also provides more reliable molecular weight distributions in the general case. The method is based on a fine-grained two-dimensional grid search over s and f/f0, where the grid is a linear combination of whole boundary models represented by finite element solutions of the Lamm equation with sedimentation and diffusion parameters corresponding to the grid points. A Monte Carlo approach is used to characterize confidence limits for the determined solutes. Computational algorithms addressing the very large memory needs for a fine-grained search are discussed. The method is suitable for globally fitting multi-speed experiments, and constraints based on prior knowledge about the experimental system can be imposed. Time- and radially invariant noise can be eliminated. Serial and parallel implementations of the method are presented. We demonstrate with simulated and experimental data of known composition that our method provides superior accuracy and lower variance fits to experimental data compared to other methods in use today, and show that it can be used to identify modes of aggregation and slow polymerization.
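
    A schematic Python sketch of only the linear-combination step described above: experimental boundaries are fitted as a non-negative combination of precomputed whole-boundary models, one per grid point. Real basis curves come from finite-element Lamm-equation solutions over (s, f/f0); the sigmoid boundaries and grid below are placeholders.

```python
import numpy as np
from scipy.optimize import nnls

r = np.linspace(6.0, 7.2, 300)                 # radius (cm), illustrative

def boundary(r, s_value):
    """Placeholder boundary shape whose midpoint moves with s."""
    mid = 6.1 + 0.08 * s_value
    return 1.0 / (1.0 + np.exp(-(r - mid) / 0.02))

s_grid = np.arange(1.0, 11.0, 0.5)             # grid over sedimentation coefficient
A = np.column_stack([boundary(r, s) for s in s_grid])

# Synthetic "experimental" data: a 2:1 mixture of two species plus noise
data = 2.0 * boundary(r, 3.0) + 1.0 * boundary(r, 7.0)
data += np.random.default_rng(3).normal(0, 0.01, r.size)

conc, resid_norm = nnls(A, data)
for s, c in zip(s_grid, conc):
    if c > 0.05:
        print(f"s = {s:4.1f}  concentration = {c:.2f}")
```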

  15. Development of a dynamic growth-death model for Escherichia coli O157:H7 in minimally processed leafy green vegetables.

    PubMed

    McKellar, Robin C; Delaquis, Pascal

    2011-11-15

    Escherichia coli O157:H7, an occasional contaminant of fresh produce, can present a serious health risk in minimally processed leafy green vegetables. A good predictive model is needed for Quantitative Risk Assessment (QRA) purposes, which adequately describes the growth or die-off of this pathogen under variable temperature conditions experienced during processing, storage and shipping. Literature data on behaviour of this pathogen on fresh-cut lettuce and spinach was taken from published graphs by digitization, published tables or from personal communications. A three-phase growth function was fitted to the data from 13 studies, and a square root model for growth rate (μ) as a function of temperature was derived: μ = [0.023 × (Temperature − 1.20)]². Variability in the published data was incorporated into the growth model by the use of weighted regression and the 95% prediction limits. A log-linear die-off function was fitted to the data from 13 studies, and the resulting rate constants were fitted to a shifted lognormal distribution (Mean: 0.013; Standard Deviation, 0.010; Shift, 0.001). The combined growth-death model successfully predicted pathogen behaviour under both isothermal and non-isothermal conditions when compared to new published data. By incorporating variability, the resulting model is an improvement over existing ones, and is suitable for QRA applications. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
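
    A short worked example of evaluating the fitted secondary model above: the growth rate μ at a given storage temperature from the square-root relation, plus a log-linear reduction at the reported mean rate constant. The time span and units in the die-off line are assumptions for demonstration only.

```python
import numpy as np

def growth_rate(temp_c):
    """sqrt(mu) = 0.023 * (T - 1.20)  ->  mu = (0.023 * (T - 1.20))**2."""
    return (0.023 * (temp_c - 1.20)) ** 2

temp = 10.0                               # deg C
mu = growth_rate(temp)                    # growth rate in the units of the fitted model
print(f"growth rate at {temp} C: {mu:.4f}")

# Log-linear die-off using the reported mean rate constant 0.013;
# a 24-hour span is an assumed illustration, not from the source.
k = 0.013
log_reduction = k * 24.0
print(f"assumed 24 h die-off: {log_reduction:.2f} log units")
```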

  16. Stellar Absorption Line Analysis of Local Star-forming Galaxies: The Relation between Stellar Mass, Metallicity, Dust Attenuation, and Star Formation Rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jabran Zahid, H.; Kudritzki, Rolf-Peter; Ho, I-Ting

    We analyze the optical continuum of star-forming galaxies in the Sloan Digital Sky Survey by fitting stacked spectra with stellar population synthesis models to investigate the relation between stellar mass, stellar metallicity, dust attenuation, and star formation rate. We fit models calculated with star formation and chemical evolution histories that are derived empirically from multi-epoch observations of the stellar mass–star formation rate and the stellar mass–gas-phase metallicity relations, respectively. We also fit linear combinations of single-burst models with a range of metallicities and ages. Star formation and chemical evolution histories are unconstrained for these models. The stellar mass–stellar metallicity relations obtained from the two methods agree with the relation measured from individual supergiant stars in nearby galaxies. These relations are also consistent with the relation obtained from emission-line analysis of gas-phase metallicity after accounting for systematic offsets in the gas-phase metallicity. We measure dust attenuation of the stellar continuum and show that its dependence on stellar mass and star formation rate is consistent with previously reported results derived from nebular emission lines. However, stellar continuum attenuation is smaller than nebular emission line attenuation. The continuum-to-nebular attenuation ratio depends on stellar mass and is smaller in more massive galaxies. Our consistent analysis of stellar continuum and nebular emission lines paves the way for a comprehensive investigation of stellar metallicities of star-forming and quiescent galaxies.

  17. Near-infrared Raman spectroscopy for estimating biochemical changes associated with different pathological conditions of cervix

    NASA Astrophysics Data System (ADS)

    Daniel, Amuthachelvi; Prakasarao, Aruna; Ganesan, Singaravelu

    2018-02-01

    The molecular level changes associated with oncogenesis precede the morphological changes in cells and tissues. Hence molecular level diagnosis would promote early diagnosis of the disease. Raman spectroscopy is capable of providing specific spectral signatures of various biomolecules present in the cells and tissues under various pathological conditions. The aim of this work is to develop a non-linear multi-class statistical methodology for discrimination of normal, neoplastic and malignant cells/tissues. The tissues were classified as normal, pre-malignant and malignant by employing Principal Component Analysis followed by Artificial Neural Network (PC-ANN). The overall accuracy achieved was 99%. Further, to get an insight into the quantitative biochemical composition of the normal, neoplastic and malignant tissues, a linear combination of the major biochemicals was fitted to the measured Raman spectra of the tissues using a non-negative least-squares technique. This technique confirms the changes in the major biomolecules such as lipids, nucleic acids, actin, glycogen and collagen associated with the different pathological conditions. To study the efficacy of this technique in comparison with histopathology, we have utilized Principal Component followed by Linear Discriminant Analysis (PC-LDA) to discriminate the well differentiated, moderately differentiated and poorly differentiated squamous cell carcinoma with an accuracy of 94.0%. The results demonstrate that Raman spectroscopy has the potential to complement the established technique of histopathology.
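
    A minimal Python sketch of the non-negative least-squares decomposition described above: a measured spectrum fitted as a non-negative combination of reference biochemical spectra. The Gaussian "reference" bands and mixture weights below are placeholders for measured spectra of lipids, nucleic acids, collagen, etc.

```python
import numpy as np
from scipy.optimize import nnls

shift = np.linspace(800, 1800, 500)            # Raman shift (cm^-1)

def band(center, width):
    return np.exp(-0.5 * ((shift - center) / width) ** 2)

references = {"lipid": band(1440, 20), "nucleic_acid": band(1340, 25),
              "collagen": band(1660, 30)}
A = np.column_stack(list(references.values()))

measured = 0.7 * references["lipid"] + 0.2 * references["collagen"]
measured += np.random.default_rng(4).normal(0, 0.005, shift.size)

weights, _ = nnls(A, measured)
for name, w in zip(references, weights):
    print(f"{name:12s} {w:.2f}")
```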

  18. The Evolution of El Nino-Precipitation Relationships from Satellites and Gauges

    NASA Technical Reports Server (NTRS)

    Curtis, Scott; Adler, Robert F.; Starr, David OC (Technical Monitor)

    2002-01-01

    This study uses a twenty-three year (1979-2001) satellite-gauge merged community data set to further describe the relationship between El Nino Southern Oscillation (ENSO) and precipitation. The globally complete precipitation fields reveal coherent bands of anomalies that extend from the tropics to the polar regions. Also, ENSO-precipitation relationships were analyzed during the six strongest El Ninos from 1979 to 2001. Seasons of evolution, Pre-onset, Onset, Peak, Decay, and Post-decay, were identified based on the strength of the El Nino. Then two simple and independent models, first order harmonic and linear, were fit to the monthly time series of normalized precipitation anomalies for each grid block. The sinusoidal model represents a three-phase evolution of precipitation, either dry-wet-dry or wet-dry-wet. This model is also highly correlated with the evolution of sea surface temperatures in the equatorial Pacific. The linear model represents a two-phase evolution of precipitation, either dry-wet or wet-dry. These models combine to account for over 50% of the precipitation variability for over half the globe during El Nino. Most regions, especially away from the Equator, favor the linear model. Areas that show the largest trend from dry to wet are southeastern Australia, eastern Indian Ocean, southern Japan, and off the coast of Peru. The northern tropical Pacific and Southeast Asia show the opposite trend.
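
    A hedged Python sketch of the two simple evolution models described above, a first-order harmonic and a straight line, each fitted by least squares to a monthly series of normalized anomalies; the series, its length, and the period are synthetic stand-ins.

```python
import numpy as np

months = np.arange(18)                     # Pre-onset through Post-decay (synthetic)
period = months.size                       # one full cycle over the event
rng = np.random.default_rng(5)
anom = 0.8 * np.sin(2 * np.pi * months / period) + 0.3 * rng.normal(size=months.size)

# First-order harmonic: a*sin + b*cos (+ mean), linear in its coefficients
H = np.column_stack([np.sin(2 * np.pi * months / period),
                     np.cos(2 * np.pi * months / period),
                     np.ones(months.size)])
harm_coef, *_ = np.linalg.lstsq(H, anom, rcond=None)
harm_fit = H @ harm_coef

# Linear model: slope + intercept
lin_coef = np.polyfit(months, anom, 1)
lin_fit = np.polyval(lin_coef, months)

def var_explained(fit):
    return 1.0 - np.var(anom - fit) / np.var(anom)

print(f"harmonic R^2: {var_explained(harm_fit):.2f}")
print(f"linear   R^2: {var_explained(lin_fit):.2f}")
```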

  19. Meshless Method with Operator Splitting Technique for Transient Nonlinear Bioheat Transfer in Two-Dimensional Skin Tissues

    PubMed Central

    Zhang, Ze-Wei; Wang, Hui; Qin, Qing-Hua

    2015-01-01

    A meshless numerical scheme combining the operator splitting method (OSM), the radial basis function (RBF) interpolation, and the method of fundamental solutions (MFS) is developed for solving transient nonlinear bioheat problems in two-dimensional (2D) skin tissues. In the numerical scheme, the nonlinearity caused by linear and exponential relationships of temperature-dependent blood perfusion rate (TDBPR) is taken into consideration. In the analysis, the OSM is used first to separate the Laplacian operator and the nonlinear source term, and then the second-order time-stepping schemes are employed for approximating two splitting operators to convert the original governing equation into a linear nonhomogeneous Helmholtz-type governing equation (NHGE) at each time step. Subsequently, the RBF interpolation and the MFS involving the fundamental solution of the Laplace equation are respectively employed to obtain approximated particular and homogeneous solutions of the nonhomogeneous Helmholtz-type governing equation. Finally, the full fields consisting of the particular and homogeneous solutions are enforced to fit the NHGE at interpolation points and the boundary conditions at boundary collocations for determining unknowns at each time step. The proposed method is verified by comparison of other methods. Furthermore, the sensitivity of the coefficients in the cases of a linear and an exponential relationship of TDBPR is investigated to reveal their bioheat effect on the skin tissue. PMID:25603180

  20. Meshless method with operator splitting technique for transient nonlinear bioheat transfer in two-dimensional skin tissues.

    PubMed

    Zhang, Ze-Wei; Wang, Hui; Qin, Qing-Hua

    2015-01-16

    A meshless numerical scheme combining the operator splitting method (OSM), the radial basis function (RBF) interpolation, and the method of fundamental solutions (MFS) is developed for solving transient nonlinear bioheat problems in two-dimensional (2D) skin tissues. In the numerical scheme, the nonlinearity caused by linear and exponential relationships of temperature-dependent blood perfusion rate (TDBPR) is taken into consideration. In the analysis, the OSM is used first to separate the Laplacian operator and the nonlinear source term, and then the second-order time-stepping schemes are employed for approximating two splitting operators to convert the original governing equation into a linear nonhomogeneous Helmholtz-type governing equation (NHGE) at each time step. Subsequently, the RBF interpolation and the MFS involving the fundamental solution of the Laplace equation are respectively employed to obtain approximated particular and homogeneous solutions of the nonhomogeneous Helmholtz-type governing equation. Finally, the full fields consisting of the particular and homogeneous solutions are enforced to fit the NHGE at interpolation points and the boundary conditions at boundary collocations for determining unknowns at each time step. The proposed method is verified by comparison of other methods. Furthermore, the sensitivity of the coefficients in the cases of a linear and an exponential relationship of TDBPR is investigated to reveal their bioheat effect on the skin tissue.

  1. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
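
    An illustration of the gradient-free Powell search highlighted above, applied to a toy two-compartment signal model rather than an actual dMRI microstructure model (fitting NODDI or CHARMED involves far more structure); all parameter names and values are invented for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

b = np.linspace(0, 3000, 30)                     # b-values (s/mm^2)
rng = np.random.default_rng(6)

def signal(params, b):
    f, d1, d2 = params                           # volume fraction, two diffusivities
    return f * np.exp(-b * d1) + (1 - f) * np.exp(-b * d2)

true = [0.6, 2.0e-3, 0.3e-3]
data = signal(true, b) + 0.01 * rng.normal(size=b.size)

def sse(params):
    return np.sum((signal(params, b) - data) ** 2)

res = minimize(sse, x0=[0.5, 1.5e-3, 0.5e-3], method="Powell")
print("estimated [f, d1, d2]:", res.x)
```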

  2. First-order symmetry-adapted perturbation theory for multiplet splittings.

    PubMed

    Patkowski, Konrad; Żuchowski, Piotr S; Smith, Daniel G A

    2018-04-28

    We present a symmetry-adapted perturbation theory (SAPT) for the interaction of two high-spin open-shell molecules (described by their restricted open-shell Hartree-Fock determinants) resulting in low-spin states of the complex. The previously available SAPT formalisms, except for some system-specific studies for few-electron complexes, were restricted to the high-spin state of the interacting system. Thus, the new approach provides, for the first time, a SAPT-based estimate of the splittings between different spin states of the complex. We have derived and implemented the lowest-order SAPT term responsible for these splittings, that is, the first-order exchange energy. We show that within the so-called S² approximation commonly used in SAPT (neglecting effects that vanish as fourth or higher powers of intermolecular overlap integrals), the first-order exchange energies for all multiplets are linear combinations of two matrix elements: a diagonal exchange term that determines the spin-averaged effect and a spin-flip term responsible for the splittings between the states. The numerical factors in this linear combination are determined solely by the Clebsch-Gordan coefficients: accordingly, the S² approximation implies a Heisenberg Hamiltonian picture with a single coupling strength parameter determining all the splittings. The new approach is cast into both molecular-orbital and atomic-orbital expressions: the latter enable an efficient density-fitted implementation. We test the newly developed formalism on several open-shell complexes ranging from diatomic systems (Li⋯H, Mn⋯Mn, …) to the phenalenyl dimer.

  3. First-order symmetry-adapted perturbation theory for multiplet splittings

    NASA Astrophysics Data System (ADS)

    Patkowski, Konrad; Żuchowski, Piotr S.; Smith, Daniel G. A.

    2018-04-01

    We present a symmetry-adapted perturbation theory (SAPT) for the interaction of two high-spin open-shell molecules (described by their restricted open-shell Hartree-Fock determinants) resulting in low-spin states of the complex. The previously available SAPT formalisms, except for some system-specific studies for few-electron complexes, were restricted to the high-spin state of the interacting system. Thus, the new approach provides, for the first time, a SAPT-based estimate of the splittings between different spin states of the complex. We have derived and implemented the lowest-order SAPT term responsible for these splittings, that is, the first-order exchange energy. We show that within the so-called S2 approximation commonly used in SAPT (neglecting effects that vanish as fourth or higher powers of intermolecular overlap integrals), the first-order exchange energies for all multiplets are linear combinations of two matrix elements: a diagonal exchange term that determines the spin-averaged effect and a spin-flip term responsible for the splittings between the states. The numerical factors in this linear combination are determined solely by the Clebsch-Gordan coefficients: accordingly, the S2 approximation implies a Heisenberg Hamiltonian picture with a single coupling strength parameter determining all the splittings. The new approach is cast into both molecular-orbital and atomic-orbital expressions: the latter enable an efficient density-fitted implementation. We test the newly developed formalism on several open-shell complexes ranging from diatomic systems (Li⋯H, Mn⋯Mn, …) to the phenalenyl dimer.

  4. Combined effects of water temperature and copper ion concentration on catalase activity in Crassostrea ariakensis

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Yang, Hongshuai; Liu, Jiahui; Li, Yanhong; Liu, Zhigang

    2015-07-01

    A central composite experimental design and response surface method were used to investigate the combined effects of water temperature (18-34°C) and copper ion concentration (0.1-1.5 mg/L) on the catalase (CAT) activity in the digestive gland of Crassostrea ariakensis. The results showed that the linear effects of temperature were significant (P < 0.01), the quadratic effects of temperature were significant (P < 0.05), the linear effects of copper ion concentration were not significant (P > 0.05), and the quadratic effects of copper ion concentration were significant (P < 0.05). Additionally, the synergistic effects of temperature and copper ion concentration were not significant (P > 0.05), and the effect of temperature was greater than that of copper ion concentration. A model equation of CAT enzyme activity in the digestive gland of C. ariakensis toward the two factors of interest was established, with R², Adj. R² and Pred. R² values as high as 0.9437, 0.8873 and 0.8385, respectively. These findings suggested that the goodness of fit to experimental data and predictive capability of the model were satisfactory, and could be practically applied for prediction under the conditions of the study. Overall, the results suggest that the simultaneous variation of temperature and copper ion concentration alters the activity of the antioxidant enzyme CAT by modulating active oxygen species metabolism, which may be utilized as a biomarker to detect the effects of copper pollution.

  5. Using Web Search Query Data to Monitor Dengue Epidemics: A New Model for Neglected Tropical Disease Surveillance

    PubMed Central

    Chan, Emily H.; Sahai, Vikram; Conrad, Corrie; Brownstein, John S.

    2011-01-01

    Background A variety of obstacles including bureaucracy and lack of resources have interfered with timely detection and reporting of dengue cases in many endemic countries. Surveillance efforts have turned to modern data sources, such as Internet search queries, which have been shown to be effective for monitoring influenza-like illnesses. However, few have evaluated the utility of web search query data for other diseases, especially those of high morbidity and mortality or where a vaccine may not exist. In this study, we aimed to assess whether web search queries are a viable data source for the early detection and monitoring of dengue epidemics. Methodology/Principal Findings Bolivia, Brazil, India, Indonesia and Singapore were chosen for analysis based on available data and adequate search volume. For each country, a univariate linear model was then built by fitting a time series of the fraction of Google search query volume for specific dengue-related queries from that country against a time series of official dengue case counts for a time-frame within 2003–2010. The specific combination of queries used was chosen to maximize model fit. Spurious spikes in the data were also removed prior to model fitting. The final models, fit using a training subset of the data, were cross-validated against both the overall dataset and a holdout subset of the data. All models were found to fit the data quite well, with validation correlations ranging from 0.82 to 0.99. Conclusions/Significance Web search query data were found to be capable of tracking dengue activity in Bolivia, Brazil, India, Indonesia and Singapore. Whereas traditional dengue data from official sources are often not available until after some substantial delay, web search query data are available in near real-time. These data represent a valuable complement to traditional dengue surveillance. PMID:21647308
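
    A minimal Python sketch of the modelling step described above: a univariate linear model from a search-query fraction to official case counts, fitted on a training window and evaluated on a holdout window. All series here are synthetic; the study's query selection and validation scheme are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(7)
weeks = 156
query_frac = np.abs(rng.normal(1.0, 0.3, weeks)) * (1 + np.sin(np.arange(weeks) / 8))
cases = 400 * query_frac + rng.normal(0, 60, weeks)      # synthetic "official" counts

train, hold = slice(0, 120), slice(120, weeks)
slope, intercept = np.polyfit(query_frac[train], cases[train], 1)
pred = slope * query_frac[hold] + intercept

corr = np.corrcoef(pred, cases[hold])[0, 1]
print(f"holdout correlation: {corr:.2f}")
```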

  6. Aesthetic properties and message customization: navigating the dark side of web recruitment.

    PubMed

    Dineen, Brian R; Ling, Juan; Ash, Steven R; DelVecchio, Devon

    2007-03-01

    The authors examined recruitment message viewing time, information recall, and attraction in a Web-based context. In particular, they extended theory related to the cognitive processing of recruitment messages and found that the provision of customized information about likely fit related to increased viewing time and recall when good aesthetics were also present. A 3-way interaction among moderate-to low-fitting individuals further indicated that objective fit was most strongly related to attraction when messages included both good aesthetics and customized information. In particular, given this combination, the poorest fitting individuals exhibited lower attraction levels, whereas more moderately fitting individuals exhibited invariant attraction levels across combinations of aesthetics and customized information. The results suggest that, given good aesthetics, customized information exerts effects mostly by causing poorly fitting individuals to be less attracted, which further suggests a means of averting the "dark side" of Web recruitment that occurs when organizations receive too many applications from poorly fitting applicants. (c) 2007 APA, all rights reserved.

  7. Extraction of Generalized Parton Distributions from combined Deeply Virtual Compton Scattering and Timelike Compton scattering fits

    NASA Astrophysics Data System (ADS)

    Boer, Marie

    2017-09-01

    Generalized Parton Distributions (GPDs) contain the correlation between the partons' longitudinal momentum and their transverse distribution. They are accessed through hard exclusive processes, such as Deeply Virtual Compton Scattering (DVCS). DVCS has already been measured in several experiments and several models allow for extracting GPDs from these measurements. Timelike Compton Scattering (TCS) is, at leading order, the time-reversal equivalent process to DVCS and accesses GPDs at the same kinematics. Comparing GPDs extracted from DVCS and TCS is a unique way of proving GPD universality. Combining fits from the two processes will also allow the GPDs to be better constrained. We will present our method for extracting GPDs from DVCS and TCS pseudo-data. We will compare fit results from the two processes in similar conditions and present what can be expected in terms of constraints on GPDs from combined fits.

  8. Estimating sunspot number

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.; Reichmann, E. J.; Teuber, D. L.

    1984-01-01

    An empirical method is developed to predict certain parameters of future solar activity cycles. Sunspot cycle statistics are examined, and curve fitting and linear regression analysis techniques are utilized.

  9. On Fitting Generalized Linear Mixed-effects Models for Binary Responses using Different Statistical Packages

    PubMed Central

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.

    2011-01-01

    Summary The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252

  10. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aab, A.; Abreu, P.; Andringa, S.

    2017-04-01

    We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above 5 · 10^18 eV, i.e. the region of the all-particle spectrum above the so-called 'ankle' feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results.

  11. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Aab, A.; Abreu, P.; Aglietta, M.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Barreira Luz, R. J.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; D'Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; De Mauro, G.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Di Giulio, C.; di Matteo, A.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; D'Olivo, J. C.; Dorosti, Q.; dos Anjos, R. C.; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fick, B.; Figueira, J. M.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaior, R.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Golup, G.; Gómez Berisso, M.; Gómez Vitale, P. F.; González, N.; Gorgi, A.; Gorham, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Johnsen, J. A.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Katkov, I.; Keilhauer, B.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Kukec Mezek, G.; Kunka, N.; Kuotb Awad, A.; LaHurd, D.; Lauscher, M.; Legumina, R.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopes, L.; López, R.; López Casado, A.; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Martínez Bravo, O.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, H.; Núñez, L. 
A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlín, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rogozin, D.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarmento, R.; Sarmiento, C. A.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröoder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Sorokin, J.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Strafella, F.; Suarez, F.; Suarez Durán, M.; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Tapia, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Tomankova, L.; Tomé, B.; Torralba Elipe, G.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Vergara Quispe, I. D.; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Winchen, T.; Wirtz, M.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yelos, D.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.; Zong, Z.

    2017-04-01

    We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above 5 · 10^18 eV, i.e. the region of the all-particle spectrum above the so-called "ankle" feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results.

  12. A differential equation for the asymptotic fitness distribution in the Bak-Sneppen model with five species.

    PubMed

    Schlemm, Eckhard

    2015-09-01

    The Bak-Sneppen model is an abstract representation of a biological system that evolves according to the Darwinian principles of random mutation and selection. The species in the system are characterized by a numerical fitness value between zero and one. We show that in the case of five species the steady-state fitness distribution can be obtained as a solution to a linear differential equation of order five with hypergeometric coefficients. Similar representations for the asymptotic fitness distribution in larger systems may help pave the way towards a resolution of the question of whether or not, in the limit of infinitely many species, the fitness is asymptotically uniformly distributed on the interval [fc, 1] with fc ≳ 2/3. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Waveform Tomography of the South Atlantic Region

    NASA Astrophysics Data System (ADS)

    Celli, N. L.; Lebedev, S.; Schaeffer, A. J.; Gaina, C.

    2016-12-01

    The rapid growth in broadband seismic data, along with developments in waveform tomography techniques, allow us to greatly improve the data sampling in the southern hemisphere and resolve the upper-mantle structure beneath the South Atlantic region at a new level of detail. We have gathered a very large waveform dataset, including all publicly available data from permanent and temporary networks. Our S-velocity tomographic model is constrained by vertical-component waveform fits, computed using the Automated Multimode Inversion of surface, S and multiple S waves. Each seismogram fit provides a set of linear equations describing 1D average velocity perturbations within approximate sensitivity volumes, with respect to a 3D reference model. All the equations are then combined into a large linear system and inverted jointly for a model of shear- and compressional-wave speeds and azimuthal anisotropy within the lithosphere and underlying mantle. The isotropic-average shear speeds are proxies for temperature and composition at depth, while azimuthal anisotropy provides evidence on the past and present deformation in the lithosphere and asthenosphere beneath the region. We resolve the complex boundaries of the mantle roots of South America's and Africa's cratons and the deep low-velocity anomalies beneath volcanic areas in South America. Pronounced lithospheric high seismic velocity anomalies beneath the Argentine Basin suggest that its anomalously deep seafloor, previously attributed to dynamic topography, is mainly due to anomalously cold, thick lithosphere. Major hotspots show low-velocity anomalies extending substantially deeper than those beneath the mid-ocean ridge. The Vema Hotspot shows a major, hot asthenospheric anomaly beneath thick, cold oceanic lithosphere. The mantle lithosphere beneath the Walvis Ridge—a hotspot track—shows normal cooling. The volcanic Cameroon Line, in contrast, is characterized by thin lithosphere beneath the locations of recent volcanism.

  14. First off-time treatment prostate-specific antigen kinetics predicts survival in intermittent androgen deprivation for prostate cancer.

    PubMed

    Sanchez-Salas, Rafael; Olivier, Fabien; Prapotnich, Dominique; Dancausa, José; Fhima, Mehdi; David, Stéphane; Secin, Fernando P; Ingels, Alexandre; Barret, Eric; Galiano, Marc; Rozet, François; Cathelineau, Xavier

    2016-01-01

    Prostate-specific antigen (PSA) doubling time relies on an exponential kinetic pattern. This pattern has never been validated in the setting of intermittent androgen deprivation (IAD). The objective was to analyze the prognostic significance for PCa of recurrent patterns in PSA kinetics in patients undergoing IAD. A retrospective study was conducted on 377 patients treated with IAD. The on-treatment period (ONTP) consisted of gonadotropin-releasing hormone agonist injections combined with an oral androgen receptor antagonist. The off-treatment period (OFTP) began when PSA was lower than 4 ng/ml. The ONTP resumed when PSA was higher than 20 ng/ml. PSA values of each OFTP were fitted with three basic patterns: exponential (PSA(t) = λ·e^(αt)), linear (PSA(t) = a·t), and power law (PSA(t) = a·t^c). Univariate and multivariate Cox regression models analyzed predictive factors for oncologic outcomes. Only 45% of the analyzed OFTPs were exponential. Linear and power law PSA kinetics represented 7.5% and 7.7%, respectively. The remaining fraction of analyzed OFTPs (40%) exhibited complex kinetics. Exponential PSA kinetics during the first OFTP was significantly associated with worse oncologic outcome. The estimated 10-year cancer-specific survival (CSS) was 46% for exponential versus 80% for nonexponential PSA kinetics patterns. The corresponding 10-year probability of castration-resistant prostate cancer (CRPC) was 69% and 31% for the two patterns, respectively. Limitations include retrospective design and mixed indications for IAD. PSA kinetics fitted an exponential pattern in approximately half of the OFTPs. An exponential PSA kinetic during the first OFTP was associated with a shorter time to CRPC and worse CSS. © 2015 Wiley Periodicals, Inc.
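
    A small Python sketch, using made-up PSA values, of classifying one off-treatment period by whichever of the three basic patterns fits best. The exponential and power-law forms are fitted on log-transformed values for simplicity; the study's actual fitting and classification criteria may differ.

```python
import numpy as np

t = np.array([3.0, 6.0, 9.0, 12.0, 15.0])        # months since PSA nadir (hypothetical)
psa = np.array([0.8, 1.6, 3.1, 6.3, 12.4])       # ng/ml, hypothetical series

# Exponential  PSA(t) = lam * exp(alpha * t):  linear fit of log(PSA) vs t
alpha, log_lam = np.polyfit(t, np.log(psa), 1)
sse_exp = np.sum((np.exp(log_lam + alpha * t) - psa) ** 2)

# Linear       PSA(t) = a * t:  least-squares slope through the origin
a = np.sum(t * psa) / np.sum(t * t)
sse_lin = np.sum((a * t - psa) ** 2)

# Power law    PSA(t) = a * t**c:  linear fit of log(PSA) vs log(t)
c, log_a = np.polyfit(np.log(t), np.log(psa), 1)
sse_pow = np.sum((np.exp(log_a) * t**c - psa) ** 2)

best = min([("exponential", sse_exp), ("linear", sse_lin), ("power law", sse_pow)],
           key=lambda kv: kv[1])
print("best-fitting pattern:", best[0], "SSE =", round(best[1], 3))
```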

  15. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction

    PubMed Central

    Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose

    2017-01-01

    Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear kernel Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415

  16. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction.

    PubMed

    Bandeira E Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose

    2017-06-07

    Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear kernel Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. Copyright © 2017 Bandeira e Sousa et al.
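
    A hedged Python sketch of the two kernels compared above, computed from an individuals-by-markers genotype matrix: a centred linear (GBLUP-type, VanRaden-style) relationship kernel and a Gaussian kernel. The bandwidth choice (median squared distance) is a common convention assumed here, and the GBLUP/kernel-regression fitting itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)
n_ind, n_mark = 50, 500
M = rng.integers(0, 3, size=(n_ind, n_mark)).astype(float)   # 0/1/2 genotypes

# Linear kernel: centred cross-product, scaled (VanRaden-style GBLUP kernel)
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p
GB = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

# Gaussian kernel: exp(-d_ij^2 / q), with q the median squared marker distance
D2 = np.sum((M[:, None, :] - M[None, :, :]) ** 2, axis=2)
q = np.median(D2[np.triu_indices(n_ind, k=1)])
GK = np.exp(-D2 / q)

print("GB diagonal mean:", round(GB.diagonal().mean(), 2),
      " GK off-diagonal mean:", round(GK[np.triu_indices(n_ind, 1)].mean(), 2))
```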

  17. Linear algebraic methods applied to intensity modulated radiation therapy.

    PubMed

    Crooks, S M; Xing, L

    2001-10-01

    Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.
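
    A toy Python illustration of choosing beam weights with linear algebra, in the spirit of the abstract above: the dose is a linear map of the weights, d = A w, and the weights minimize the squared deviation from a prescription subject to w ≥ 0. The 4-voxel, 3-beam influence matrix and dose values are invented for demonstration.

```python
import numpy as np
from scipy.optimize import nnls

# Rows: voxels (3 target, 1 organ at risk); columns: beams
A = np.array([[1.0, 0.4, 0.2],
              [0.8, 0.9, 0.3],
              [0.3, 1.0, 0.7],
              [0.1, 0.2, 0.9]])
prescription = np.array([60.0, 60.0, 60.0, 10.0])   # Gy, illustrative

weights, residual = nnls(A, prescription)
dose = A @ weights
print("beam weights:", np.round(weights, 1))
print("delivered dose:", np.round(dose, 1))
```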

  18. A program for identification of linear systems

    NASA Technical Reports Server (NTRS)

    Buell, J.; Kalaba, R.; Ruspini, E.; Yakush, A.

    1971-01-01

    A program has been written for the identification of parameters in certain linear systems. These systems appear in biomedical problems, particularly in compartmental models of pharmacokinetics. The method presented here assumes that some of the state variables are regularly modified by jump conditions. This simulates administration of drugs following some prescribed drug regime. Parameters are identified by a least-square fit of the linear differential system to a set of experimental observations. The method is especially suited when the interval of observation of the system is very long.

  19. Evolution of complex dynamics

    NASA Astrophysics Data System (ADS)

    Wilds, Roy; Kauffman, Stuart A.; Glass, Leon

    2008-09-01

    We study the evolution of complex dynamics in a model of a genetic regulatory network. The fitness is associated with the topological entropy in a class of piecewise linear equations, and the mutations are associated with changes in the logical structure of the network. We compare hill climbing evolution, in which only mutations that increase the fitness are allowed, with neutral evolution, in which mutations that leave the fitness unchanged are allowed. The simple structure of the fitness landscape enables us to estimate analytically the rates of hill climbing and neutral evolution. In this model, allowing neutral mutations accelerates the rate of evolutionary advancement for low mutation frequencies. These results are applicable to evolution in natural and technological systems.

  20. Practical Session: Simple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Two exercises are proposed to illustrate simple linear regression. The first one is based on Galton's famous data set on heredity. We use the lm R command and get coefficient estimates, the standard error of the error, R², residuals… In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
