Sample records for exponential regression analysis

  1. Method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1972-01-01

    Two computer programs for conducting nonlinear exponential regression analysis, developed for two general types of exponential model, are described. A least-squares procedure is used in which the nonlinear problem is linearized by expansion in a Taylor series. The programs are written in FORTRAN 5 for the Univac 1108 computer.

  2. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and then applied to the nominal estimates to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
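
    A minimal sketch of the described technique in Python (NumPy assumed; the single-exponential model y = a*exp(b*t) is an illustrative choice): a linear curve fit to log-transformed data supplies the initial nominal estimates, and least-squares corrections from the Taylor-series linearization are iterated until a convergence criterion is met.

    ```python
    import numpy as np

    def fit_exponential(t, y, tol=1e-8, max_iter=50):
        """Fit y = a * exp(b * t) by iterated Taylor-series linearization."""
        # Linear curve fit to log(y) supplies the initial nominal estimates.
        b, log_a = np.polyfit(t, np.log(y), 1)
        a = np.exp(log_a)
        for _ in range(max_iter):
            y_hat = a * np.exp(b * t)
            residual = y - y_hat
            # Jacobian of the model with respect to (a, b) at the nominal estimate.
            J = np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])
            # Least-squares correction (the "correction matrix" step).
            delta, *_ = np.linalg.lstsq(J, residual, rcond=None)
            a, b = a + delta[0], b + delta[1]
            if np.max(np.abs(delta)) < tol:  # predetermined convergence criterion
                break
        return a, b

    # Synthetic decay-type data for illustration.
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 5.0, 50)
    y = 3.0 * np.exp(-0.8 * t) * (1.0 + 0.01 * rng.standard_normal(t.size))
    print(fit_exponential(t, y))  # close to (3.0, -0.8)
    ```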

  3. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    PubMed

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed significant inherent overdispersion (p-value < 0.001), but the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We found no major differences between the methods. However, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
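
    A sketch of the dispersion check and quasi-likelihood correction in Python (statsmodels assumed; the data, variable names and coefficients below are synthetic illustrations, not the authors' data or score test):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Illustrative piecewise-exponential setup (synthetic, not the authors' data):
    # deaths and person-time per follow-up interval, log person-time as offset.
    rng = np.random.default_rng(2)
    n = 500
    df = pd.DataFrame({
        "interval": rng.integers(0, 5, n),
        "age_group": rng.integers(0, 4, n),
        "person_time": rng.uniform(0.5, 5.0, n),
    })
    rate = np.exp(-2.0 + 0.3 * df["interval"] + 0.2 * df["age_group"])
    # Gamma-mixed Poisson counts: the variance exceeds the mean (overdispersion).
    df["deaths"] = rng.poisson(rate * df["person_time"] * rng.gamma(2.0, 0.5, n))

    X = sm.add_constant(df[["interval", "age_group"]])
    offset = np.log(df["person_time"])
    poisson = sm.GLM(df["deaths"], X, family=sm.families.Poisson(), offset=offset).fit()

    # Dispersion diagnostic: Pearson chi-square / residual df well above 1
    # suggests overdispersion.
    print(poisson.pearson_chi2 / poisson.df_resid)

    # Quasi-likelihood correction: identical point estimates, scaled standard errors.
    quasi = sm.GLM(df["deaths"], X, family=sm.families.Poisson(), offset=offset).fit(scale="X2")
    print(quasi.bse / poisson.bse)  # standard-error inflation factor
    ```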

  4. Analysis of volumetric response of pituitary adenomas receiving adjuvant CyberKnife stereotactic radiosurgery with the application of an exponential fitting model.

    PubMed

    Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan

    2017-01-01

    Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimating the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as progression for a >15% volume increase, regression for a >15% decrease, and stabilization for changes within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients receiving CK SRS, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.
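
    A sketch of fitting an exponential decay to one patient's follow-up volumes (SciPy assumed; the plateau form and the numbers below are illustrative assumptions, not the study's exact model or data):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def volume_model(t, v_inf, v0, k):
        """Exponential decay toward a plateau volume v_inf (assumed form)."""
        return v_inf + (v0 - v_inf) * np.exp(-k * t)

    # Hypothetical follow-up volumes (cm^3) at months after SRS.
    t = np.array([0.0, 4.0, 10.0, 20.0, 36.0])
    v = np.array([2.10, 1.85, 1.40, 1.10, 0.95])

    params, _ = curve_fit(volume_model, t, v, p0=(v[-1], v[0], 0.1))
    v_inf, v0, k = params
    print(f"plateau={v_inf:.2f} cm^3, initial={v0:.2f} cm^3, rate={k:.3f}/month")
    ```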

  5. A fuzzy adaptive network approach to parameter estimation in cases where independent variables come from an exponential distribution

    NASA Astrophysics Data System (ADS)

    Dalkilic, Turkan Erbay; Apaydin, Aysen

    2009-11-01

    In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When a regression model must be estimated for fuzzy inputs derived from different distributions, the model is termed a 'switching regression model'; here l_i indicates the class number of each independent variable and p the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks are used to construct a model formed by gathering the models obtained for the individual classes. Some methods suggest the class numbers of the independent variables heuristically; alternatively, a suggested validity criterion for fuzzy clustering is used here to define the optimal class number of the independent variables. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values, after first obtaining an optimal membership function suitable for the exponential distribution.

  6. Analysis of volumetric response of pituitary adenomas receiving adjuvant CyberKnife stereotactic radiosurgery with the application of an exponential fitting model

    PubMed Central

    Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan

    2017-01-01

    Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimating the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as progression for a >15% volume increase, regression for a >15% decrease, and stabilization for changes within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18–87 months) follow-up period (mean volume change of −43.3%). Volume regression (mean decrease of −50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of −3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients receiving CK SRS, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled. PMID:28121913

  7. A study on industrial accident rate forecasting and program development of estimated zero accident time in Korea.

    PubMed

    Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won

    2011-01-01

    To begin a zero accident campaign for industry, the first step is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical changes of the business environment after the beginning of the zero accident campaign through quantitative time series analysis methods. These methods include the sum of squared errors (SSE), the regression analysis method (RAM), the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program is developed to estimate the accident rate, the zero accident time and the achievement probability of an efficient industrial environment. In this paper, the MFC (Microsoft Foundation Class) framework of Visual Studio 2008 was used to develop the zero accident program. The results of this paper will provide important information for industrial accident prevention and should help stimulate the zero accident campaign within all industrial environments.
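
    Minimal implementations of the two smoothing methods named above (a sketch; the accident-rate series and the smoothing constants α and β are illustrative):

    ```python
    def ses(x, alpha=0.3):
        """Single exponential smoothing (ESM): level update only."""
        s = x[0]
        for v in x[1:]:
            s = alpha * v + (1 - alpha) * s
        return s  # one-step-ahead forecast

    def holt(x, alpha=0.3, beta=0.1, horizon=1):
        """Double exponential smoothing (DESM, Holt's linear trend)."""
        level, trend = x[0], x[1] - x[0]
        for v in x[1:]:
            prev = level
            level = alpha * v + (1 - alpha) * (level + trend)
            trend = beta * (level - prev) + (1 - beta) * trend
        return level + horizon * trend

    # Hypothetical annual accident rates per 1,000 workers.
    rates = [11.2, 10.6, 10.1, 9.4, 8.8, 8.1]
    print(ses(rates), holt(rates))
    ```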

  8. Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi

    2017-08-01

    The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), the crude oil price and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and direction accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and the Holt-Winters exponential smoothing method.
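
    A small helper computing the four accuracy criteria (a sketch; DA is taken here as the share of steps where forecast and actual move in the same direction, one common definition):

    ```python
    import numpy as np

    def forecast_errors(actual, forecast):
        """RMSE, MAE, MAPE (%), and direction accuracy (DA, %)."""
        a, f = np.asarray(actual, float), np.asarray(forecast, float)
        rmse = np.sqrt(np.mean((a - f) ** 2))
        mae = np.mean(np.abs(a - f))
        mape = 100 * np.mean(np.abs((a - f) / a))
        # DA: fraction of periods where forecast and actual move the same way.
        da = 100 * np.mean(np.sign(np.diff(a)) == np.sign(np.diff(f)))
        return rmse, mae, mape, da

    print(forecast_errors([650, 640, 655, 670], [648, 645, 650, 668]))
    ```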

  9. Year-round measurements of CH4 exchange in a forested drained peatland using automated chambers

    NASA Astrophysics Data System (ADS)

    Korkiakoski, Mika; Koskinen, Markku; Penttilä, Timo; Arffman, Pentti; Ojanen, Paavo; Minkkinen, Kari; Laurila, Tuomas; Lohila, Annalea

    2016-04-01

    Pristine peatlands are usually carbon accumulating ecosystems and sources of methane (CH4). Draining peatlands for forestry increases the thickness of the oxic layer, thus enhancing CH4 oxidation, which leads to decreased CH4 emissions. Closed chambers are commonly used for estimating the greenhouse gas exchange between the soil and the atmosphere. However, the closed chamber technique alters the gas concentration gradient, making the concentration development against time non-linear. Selecting the correct fitting method is important, as it can be the largest source of uncertainty in the flux calculation. We measured CH4 exchange rates and their diurnal and seasonal variations in a nutrient-rich drained peatland located in southern Finland. The original fen was drained for forestry in the 1970s and the tree stand is now a mixture of Scots pine, Norway spruce and downy birch. Our system consisted of six transparent polycarbonate chambers and stainless steel frames, positioned on different types of field and moss layer. During winter, the frame was raised above the snowpack with extension collars and the height of the snowpack inside the chamber was measured regularly. The chambers were closed hourly and the sample gas was drawn into a cavity ring-down spectrometer and analysed for CH4, CO2 and H2O concentrations with 5-second time resolution. The concentration change over time at the beginning of a closure was determined with linear and exponential fits. The results show that linear regression systematically underestimated the CH4 flux by 20-50% when compared to exponential regression. On the other hand, exponential regression did not work reliably with small fluxes (< 3.5 μg CH4 m-2 h-1): using exponential regression in such cases typically resulted in anomalously large fluxes and high deviation. We therefore recommend first calculating the flux with linear regression and, if the flux is high enough, recalculating it using exponential regression and using this value in later analysis. The forest floor at the site (including the ground vegetation) acted as a CH4 sink most of the time. CH4 emission peaks were occasionally observed, particularly in spring during snow melt and during rainfall events in summer. Diurnal variation was observed mainly in summer. The net CH4 exchange for the two-year measurement period in the six chambers varied from -31 to -155 mg CH4 m-2 yr-1, the average being -67 mg CH4 m-2 yr-1. However, this does not include the ditches, which typically act as a significant CH4 source.
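
    The recommended two-step procedure can be sketched as follows (SciPy assumed; the conversion factor, the threshold handling and the saturating-exponential form are illustrative assumptions, not the authors' exact calculation):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def chamber_flux(t, c, threshold=3.5, factor=1.0):
        """Two-step flux estimate: linear fit first, exponential refit if large.

        t: seconds since chamber closure; c: CH4 concentration readings.
        `factor` converts the initial slope dc/dt into a flux (chamber height,
        molar volume, etc.); the threshold and the saturating-exponential form
        c(t) = cs - (cs - c0) * exp(-k * t) are illustrative assumptions.
        """
        slope = np.polyfit(t, c, 1)[0]
        flux = factor * slope
        if abs(flux) < threshold:
            return flux  # small flux: keep the more stable linear estimate
        model = lambda tt, cs, c0, k: cs - (cs - c0) * np.exp(-k * tt)
        (cs, c0, k), _ = curve_fit(model, t, c, p0=(c[-1], c[0], 0.01))
        return factor * k * (cs - c0)  # initial slope of the exponential fit

    # Synthetic closure: concentration saturating towards 2.5 ppm.
    t = np.arange(0.0, 300.0, 5.0)
    c = 2.5 - 0.6 * np.exp(-0.004 * t)
    print(chamber_flux(t, c, threshold=0.001))
    ```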

  10. A Regression Framework for Effect Size Assessments in Longitudinal Modeling of Group Differences

    PubMed Central

    Feingold, Alan

    2013-01-01

    The use of growth modeling analysis (GMA), particularly multilevel analysis and latent growth modeling, to test the significance of intervention effects has increased exponentially in prevention science, clinical psychology, and psychiatry over the past 15 years. Model-based effect sizes for differences in means between two independent groups in GMA can be expressed in the same metric (Cohen's d) commonly used in classical analysis and meta-analysis. This article first reviews conceptual issues regarding the calculation of d for findings from GMA and then introduces an integrative framework for effect size assessments that subsumes GMA. The new approach uses the structure of the linear regression model, from which effect sizes for findings from diverse cross-sectional and longitudinal analyses can be calculated with familiar statistics, such as the regression coefficient, the standard deviation of the dependent measure, and study duration. PMID:23956615
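
    A sketch of the regression-based effect size described (notation assumed, not necessarily the author's exact formula): with unstandardized GMA coefficient b for the group difference in growth per unit time, study duration D, and standard deviation SD of the dependent measure,

    ```latex
    d = \frac{b \times D}{\mathit{SD}}.
    ```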

  11. Pattern analysis of total item score and item response of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative sample of US adults

    PubMed Central

    Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Ono, Yutaka; Furukawa, Toshiaki A.

    2017-01-01

    Background Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except at the lower end of the distribution. Furthermore, we previously confirmed that the exponential pattern is present in the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of these findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: "none of the time," "a little of the time," "some of the time," "most of the time," and "all of the time." The patterns of the total score distribution and item responses were analyzed using graphical analysis and an exponential regression model. Results The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from "a little of the time" to "all of the time" on log-normal scales, while the "none of the time" response was not related to this exponential pattern. Discussion The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales. PMID:28289560
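
    A minimal sketch of the exponential-pattern check (synthetic counts, not MIDUS data): on a log scale, an exponential total-score distribution appears as a straight line, and its slope estimates the rate parameter.

    ```python
    import numpy as np

    # Hypothetical K6 total-score histogram (synthetic counts, not MIDUS data).
    rng = np.random.default_rng(3)
    scores = np.arange(1, 25)
    counts = 4000.0 * np.exp(-0.35 * scores) * rng.uniform(0.9, 1.1, scores.size)

    # On a log scale an exponential distribution is a straight line;
    # the negated slope estimates the rate parameter.
    slope, intercept = np.polyfit(scores, np.log(counts), 1)
    print(f"estimated rate parameter: {-slope:.2f}")  # close to 0.35
    ```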

  12. CO2 flux determination by closed-chamber methods can be seriously biased by inappropriate application of linear regression

    NASA Astrophysics Data System (ADS)

    Kutzbach, L.; Schneider, J.; Sachs, T.; Giebels, M.; Nykänen, H.; Shurpali, N. J.; Martikainen, P. J.; Alm, J.; Wilmking, M.

    2007-11-01

    Closed (non-steady state) chambers are widely used for quantifying carbon dioxide (CO2) fluxes between soils or low-stature canopies and the atmosphere. It is well recognised that covering a soil or vegetation by a closed chamber inherently disturbs the natural CO2 fluxes by altering the concentration gradients between the soil, the vegetation and the overlying air. Thus, the driving factors of CO2 fluxes are not constant during the closed chamber experiment, and no linear increase or decrease of CO2 concentration over time within the chamber headspace can be expected. Nevertheless, linear regression has been applied for calculating CO2 fluxes in many recent, partly influential, studies. This approach has been justified by keeping the closure time short and assuming the concentration change over time to be in the linear range. Here, we test if the application of linear regression is really appropriate for estimating CO2 fluxes using closed chambers over short closure times and if the application of nonlinear regression is necessary. We developed a nonlinear exponential regression model from diffusion and photosynthesis theory. This exponential model was tested with four different datasets of CO2 flux measurements (total number: 1764) conducted at three peatland sites in Finland and a tundra site in Siberia. Thorough analyses of residuals demonstrated that linear regression was frequently not appropriate for the determination of CO2 fluxes by closed-chamber methods, even if closure times were kept short. The developed exponential model was well suited for nonlinear regression of the concentration evolution c(t) in the chamber headspace and estimation of the initial CO2 fluxes at closure time for the majority of experiments. However, a rather large percentage of the exponential regression functions showed curvatures not consistent with the theoretical model, which is considered to be caused by violations of the underlying model assumptions. In particular, the effects of turbulence and pressure disturbances caused by the chamber deployment are suspected to have produced unexplainable curvatures. CO2 flux estimates by linear regression can be as low as 40% of the flux estimates of exponential regression for closure times of only two minutes. The degree of underestimation increased with increasing CO2 flux strength and was dependent on soil and vegetation conditions, which can disturb not only the quantitative but also the qualitative evaluation of CO2 flux dynamics. The underestimation effect by linear regression was observed to be different for CO2 uptake and release situations, which can lead to stronger bias in the daily, seasonal and annual CO2 balances than in the individual fluxes. To avoid serious bias of CO2 flux estimates based on closed chamber experiments, we suggest further tests using published datasets and recommend the use of nonlinear regression models for future closed chamber studies.
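
    The exponential concentration model can be written, in a commonly used form (notation assumed here; the paper derives its own parameterization from diffusion and photosynthesis theory), as:

    ```latex
    % Headspace concentration approaching an asymptote c_s after closure:
    c(t) = c_s - (c_s - c_0)\, e^{-\kappa t}
    % Initial flux from the slope at closure time t = 0
    % (V: chamber volume, A: chamber base area):
    F_0 = \frac{V}{A} \left. \frac{\mathrm{d}c}{\mathrm{d}t} \right|_{t=0}
        = \frac{V}{A}\, \kappa\, (c_s - c_0)
    ```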

  13. Computer simulation of Cerebral Arteriovenous Malformation-validation analysis of hemodynamics parameters.

    PubMed

    Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath

    2017-01-01

    The purpose of this work is to provide validation methods for evaluating the hemodynamic assessment of Cerebral Arteriovenous Malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models for complex vessel structures. The validation of the hemodynamic assessment is based on invasive clinical measurements and cross-validation techniques with the Philips proprietary validated software packages Qflow and 2D Perfusion. The modeling results are validated for 30 CAVM patients at 150 vessel locations. Mean flow, diameter, and pressure were compared between the modeling results and the clinical/cross-validation measurements using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analyses of the relationships between vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow were performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations in cerebral regions. The model was also cross-validated with the Philips proprietary validated software packages Qflow and 2D Perfusion. Our results show that the modeling results and the clinical results match closely, with only small deviations. In this article, we have validated our modeling results against clinical measurements. A new approach to cross-validation is proposed that demonstrates the accuracy of our results against a validated product in a clinical environment.

  14. Robust Variable Selection with Exponential Squared Loss.

    PubMed

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-04-01

    Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness in a way that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are √n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate, even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset, which are commonly used examples for regression diagnostics of influential points. Our analysis reveals discrepancies between our robust method and the other penalized regression methods, underscoring the importance of developing and applying robust penalized regression methods.
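
    The exponential squared loss itself is simple to state; a small sketch (the tuning constant γ controls how quickly outlying residuals lose influence):

    ```python
    import numpy as np

    def exp_squared_loss(r, gamma=1.0):
        """Exponential squared loss: bounded, so large residuals lose influence."""
        return 1.0 - np.exp(-r**2 / gamma)

    def irls_weights(r, gamma=1.0):
        """Weights for an IRLS-style reweighting (psi(r)/r up to a constant)."""
        return np.exp(-r**2 / gamma)

    r = np.linspace(-5.0, 5.0, 11)
    print(exp_squared_loss(r))  # saturates near 1 for outlying residuals
    ```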

  15. Robust Variable Selection with Exponential Squared Loss

    PubMed Central

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-01-01

    Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness in a way that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are √n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate, even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset, which are commonly used examples for regression diagnostics of influential points. Our analysis reveals discrepancies between our robust method and the other penalized regression methods, underscoring the importance of developing and applying robust penalized regression methods. PMID:23913996

  16. Piecewise exponential survival times and analysis of case-cohort data.

    PubMed

    Li, Yan; Gail, Mitchell H; Preston, Dale L; Graubard, Barry I; Lubin, Jay H

    2012-06-15

    Case-cohort designs select a random sample of a cohort to be used as controls, with cases arising from follow-up of the cohort. Analyses of case-cohort studies with time-varying exposures that use Cox partial likelihood methods can be computationally intensive. We propose a piecewise-exponential approach in which Poisson regression model parameters are estimated from a pseudolikelihood and the corresponding variances are derived by applying Taylor linearization methods that are used in survey research. The proposed approach is evaluated using Monte Carlo simulations. An illustration is provided using data from the Alpha-Tocopherol, Beta-Carotene Cancer Prevention Study of male smokers in Finland, in which a case-cohort study of serum glucose level and pancreatic cancer was analyzed.

  17. Real-time soil sensing based on fiber optics and spectroscopy

    NASA Astrophysics Data System (ADS)

    Li, Minzan

    2005-08-01

    Using NIR spectroscopic techniques, correlation and regression analyses for soil parameter estimation were conducted with raw soil samples collected in a cornfield and a forage field. The soil parameters analyzed were soil moisture, soil organic matter, nitrate nitrogen, soil electrical conductivity and pH. Results showed that all soil parameters could be evaluated from NIR spectral reflectance. For soil moisture, a linear regression model was applicable at low moisture contents below 30% db, while an exponential model could be used over a wide range of moisture contents up to 100% db. Nitrate nitrogen estimation required a multi-spectral exponential model, and electrical conductivity could be evaluated by a single spectral regression. Based on these results, a real-time soil sensor system based on fiber optics and spectroscopy was developed. The sensor system was composed of a subsoiler with four optical fiber probes, a spectrometer, and a control unit. Two optical fiber probes were used for illumination and the other two for collecting soil reflectance from visible to NIR wavebands at depths of around 30 cm. The spectrometer was used to obtain the spectra of the reflected light. The control unit consisted of a data logging device, a personal computer, and a pulse generator. Experiments showed that clear photo-spectral reflectance was obtained from the underground soil. The soil reflectance was equal to that obtained by a desktop spectrophotometer in laboratory tests. Using the spectral reflectance, soil parameters such as soil moisture, pH, EC and SOM were evaluated.

  18. Methane exchange at the peatland forest floor - automatic chamber system exposes the dynamics of small fluxes

    NASA Astrophysics Data System (ADS)

    Korkiakoski, Mika; Tuovinen, Juha-Pekka; Aurela, Mika; Koskinen, Markku; Minkkinen, Kari; Ojanen, Paavo; Penttilä, Timo; Rainne, Juuso; Laurila, Tuomas; Lohila, Annalea

    2017-04-01

    We measured methane (CH4) exchange rates with automatic chambers at the forest floor of a nutrient-rich drained peatland in 2011-2013. The fen, located in southern Finland, was drained for forestry in 1969 and the tree stand is now a mixture of Scots pine, Norway spruce, and pubescent birch. Our measurement system consisted of six transparent chambers and stainless steel frames, positioned on a number of different field and moss layer compositions. Gas concentrations were measured with an online cavity ring-down spectroscopy gas analyzer. Fluxes were calculated with both linear and exponential regression. The use of linear regression resulted in systematically smaller CH4 fluxes, by 10-45%, as compared to exponential regression. However, the use of exponential regression with small fluxes (< 2.5 µg CH4 m-2 h-1) typically resulted in anomalously large absolute fluxes and high hour-to-hour deviations. Therefore, we recommend that fluxes are initially calculated with linear regression to determine the threshold for low fluxes and that higher fluxes are then recalculated using exponential regression. The exponential flux was clearly affected by the length of the fitting period when this period was < 190 s, but stabilized with longer periods. Thus, we also recommend the use of a fitting period of several minutes to stabilize the results and decrease the flux detection limit. There were clear seasonal dynamics in the CH4 flux: the forest floor acted as a CH4 sink particularly from early summer until the end of the year, while in late winter the flux was very small and fluctuated around zero. However, the magnitude of the fluxes was relatively small throughout the year, ranging mainly from -130 to +100 µg CH4 m-2 h-1. CH4 emission peaks were observed occasionally, mostly in summer during heavy rainfall events. Diurnal variation, showing a lower CH4 uptake rate during the daytime, was observed in all of the chambers, mainly in summer and late spring, particularly in dry conditions. It was attributed more to changes in wind speed than to air or soil temperature, which suggests that physical rather than biological phenomena are responsible for the observed variation. The annual net CH4 exchange varied from -104 ± 30 to -505 ± 39 mg CH4 m-2 yr-1 among the six chambers, with an average of -219 mg CH4 m-2 yr-1 over the 2-year measurement period.

  19. Mathematical modeling of drying of pretreated and untreated pumpkin.

    PubMed

    Tunde-Akintunde, T Y; Ogunlakin, G O

    2013-08-01

    In this study, the drying characteristics of pretreated and untreated pumpkin were examined in a hot-air dryer at air temperatures in the range of 40-80 °C and a constant air velocity of 1.5 m/s. Drying was observed to occur in the falling-rate period, so liquid diffusion is the main mechanism of moisture movement from the internal regions to the product surface. The experimental drying data for the pumpkin fruits were used to fit the Exponential, General exponential, Logarithmic, Page, Midilli-Kucuk and Parabolic models, and the statistical validity of the models tested was determined by non-linear regression analysis. The Parabolic model had the highest R2 and the lowest χ2 and RMSE values. This indicates that the Parabolic model is the most appropriate for describing the dehydration behavior of pumpkin.
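
    Standard forms of the thin-layer models named above (MR is the dimensionless moisture ratio, t the drying time; a, b, c, k, n are fitted constants; these are the usual textbook forms, assumed rather than quoted from the paper):

    ```latex
    \begin{aligned}
    \text{Exponential:} \quad & MR = e^{-kt} &
    \text{General exponential:} \quad & MR = a\, e^{-kt} \\
    \text{Logarithmic:} \quad & MR = a\, e^{-kt} + c &
    \text{Page:} \quad & MR = e^{-k t^{n}} \\
    \text{Midilli--Kucuk:} \quad & MR = a\, e^{-k t^{n}} + bt &
    \text{Parabolic:} \quad & MR = a + bt + ct^{2}
    \end{aligned}
    ```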

  20. An Optimization of Inventory Demand Forecasting in University Healthcare Centre

    NASA Astrophysics Data System (ADS)

    Bon, A. T.; Ng, T. K.

    2017-01-01

    The healthcare industry has become an important field nowadays, as it concerns people's health. Forecasting demand for health services is an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted in a university health centre to collect historical demand data for Panadol 650 mg over 68 months, from January 2009 until August 2014. The aim of the research is to optimize overall inventory demand through forecasting techniques. Quantitative (time series) forecasting models were used in the case study to forecast future data as a function of past data. The data pattern must be identified before the forecasting techniques are applied; here the data exhibit a trend pattern. Ten forecasting techniques were then applied using the Risk Simulator software: single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winters additive, seasonal additive, Holt-Winters multiplicative, seasonal multiplicative and the autoregressive integrated moving average (ARIMA). Finally, the best forecasting technique is identified as the one with the smallest forecasting error. According to the forecast accuracy measurement, the best forecasting technique is regression analysis.

  21. CO2 flux determination by closed-chamber methods can be seriously biased by inappropriate application of linear regression

    NASA Astrophysics Data System (ADS)

    Kutzbach, L.; Schneider, J.; Sachs, T.; Giebels, M.; Nykänen, H.; Shurpali, N. J.; Martikainen, P. J.; Alm, J.; Wilmking, M.

    2007-07-01

    Closed (non-steady state) chambers are widely used for quantifying carbon dioxide (CO2) fluxes between soils or low-stature canopies and the atmosphere. It is well recognised that covering a soil or vegetation by a closed chamber inherently disturbs the natural CO2 fluxes by altering the concentration gradients between the soil, the vegetation and the overlying air. Thus, the driving factors of CO2 fluxes are not constant during the closed chamber experiment, and no linear increase or decrease of CO2 concentration over time within the chamber headspace can be expected. Nevertheless, linear regression has been applied for calculating CO2 fluxes in many recent, partly influential, studies. This approach was justified by keeping the closure time short and assuming the concentration change over time to be in the linear range. Here, we test if the application of linear regression is really appropriate for estimating CO2 fluxes using closed chambers over short closure times and if the application of nonlinear regression is necessary. We developed a nonlinear exponential regression model from diffusion and photosynthesis theory. This exponential model was tested with four different datasets of CO2 flux measurements (total number: 1764) conducted at three peatland sites in Finland and a tundra site in Siberia. The flux measurements were performed using transparent chambers on vegetated surfaces and opaque chambers on bare peat surfaces. Thorough analyses of residuals demonstrated that linear regression was frequently not appropriate for the determination of CO2 fluxes by closed-chamber methods, even if closure times were kept short. The developed exponential model was well suited for nonlinear regression of the concentration evolution c(t) in the chamber headspace and estimation of the initial CO2 fluxes at closure time for the majority of experiments. CO2 flux estimates by linear regression can be as low as 40% of the flux estimates of exponential regression for closure times of only two minutes, and even lower for longer closure times. The degree of underestimation increased with increasing CO2 flux strength and was dependent on soil and vegetation conditions, which can disturb not only the quantitative but also the qualitative evaluation of CO2 flux dynamics. The underestimation effect by linear regression was observed to be different for CO2 uptake and release situations, which can lead to stronger bias in the daily, seasonal and annual CO2 balances than in the individual fluxes. To avoid serious bias of CO2 flux estimates based on closed chamber experiments, we suggest further tests using published datasets and recommend the use of nonlinear regression models for future closed chamber studies.

  22. Using instant messaging to enhance the interpersonal relationships of Taiwanese adolescents: evidence from quantile regression analysis.

    PubMed

    Lee, Yueh-Chiang; Sun, Ya Chung

    2009-01-01

    Even though internet use by adolescents has grown exponentially, little is known about the correlation between their interaction via Instant Messaging (IM) and the evolution of their interpersonal relationships in real life. In the present study, 369 junior high school students in Taiwan responded to questions regarding their IM usage and dispositional measures of real-life interpersonal relationships. Descriptive statistics, factor analysis, and quantile regression methods were used to analyze the data. Results indicate that (1) IM helps define adolescents' self-identity (forming and maintaining individual friendships) and social identity (belonging to a peer group), and (2) the development of interpersonal relationships is affected by IM use, as adolescents appear to use IM to improve their interpersonal relationships in real life.

  23. Forecasting daily patient volumes in the emergency department.

    PubMed

    Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L

    2008-02-01

    Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
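
    A sketch of the benchmark calendar-variable regression (pandas and statsmodels assumed; the arrival counts and effect sizes below are synthetic, not the study's data):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical daily ED arrival counts indexed by date.
    idx = pd.date_range("2005-01-01", "2007-03-31", freq="D")
    rng = np.random.default_rng(4)
    y = pd.Series(170 + 15 * (idx.dayofweek == 0) - 10 * (idx.dayofweek == 5)
                  + rng.normal(0, 8, idx.size), index=idx)

    # Calendar variables: day-of-week and month dummies (one level dropped each).
    X = pd.get_dummies(pd.DataFrame({"dow": idx.dayofweek.astype(str),
                                     "month": idx.month.astype(str)}),
                       drop_first=True).astype(float)
    X = sm.add_constant(X)
    fit = sm.OLS(y.to_numpy(), X).fit()
    print(fit.params.head())  # benchmark multiple linear regression on calendar terms
    ```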

  24. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  25. [Hazard function and life table: an introduction to failure time analysis].

    PubMed

    Matsushita, K; Inaba, H

    1987-04-01

    Failure time analysis has become popular in demographic studies. It can be viewed as a part of regression analysis with limited dependent variables, as well as a special case of event history analysis and multistate demography. The ideas of the hazard function and failure time analysis, however, have not been properly introduced to, nor commonly discussed by, demographers in Japan. The concept of the hazard function is briefly described in comparison with life tables, where the force of mortality is interchangeable with the hazard rate. The basic ideas of failure time analysis are summarized for the cases of the exponential distribution, the normal distribution, and proportional hazards models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.
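
    For reference, the standard definitions the abstract refers to, with T the failure time, f the density and S the survival function; the exponential distribution is the special case with constant hazard:

    ```latex
    h(t) = \lim_{\Delta t \to 0}
           \frac{\Pr(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}
         = \frac{f(t)}{S(t)},
    \qquad
    S(t) = \exp\!\left(-\int_0^t h(u)\, \mathrm{d}u\right).
    % Exponential case: h(t) = \lambda, i.e. the force of mortality is constant.
    ```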

  26. Statistical assessment of bi-exponential diffusion weighted imaging signal characteristics induced by intravoxel incoherent motion in malignant breast tumors

    PubMed Central

    Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.

    2016-01-01

    Background The purpose of this study is to statistically assess whether a bi-exponential intravoxel incoherent motion (IVIM) model characterizes the diffusion weighted imaging (DWI) signal of malignant breast tumors better than a mono-exponential Gaussian diffusion model. Methods 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-squares mono-exponential fitting and segmented least-squares bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. The F-test and the Akaike Information Criterion (AIC) were used to statistically assess the preference for the mono-exponential or bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analyses. Results For the ROI-averaged analysis, 15 tumors were significantly better fitted by the bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponentially and bi-exponentially preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay, while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analyses. Conclusions Although the presence of an IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining breast cancer DWI signal characteristics in practice. PMID:27709078
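
    The two competing signal models are standard (b is the diffusion weighting and S_0 the unweighted signal); AIC then trades goodness of fit against the number of parameters k:

    ```latex
    \frac{S(b)}{S_0} = e^{-b\,\mathrm{ADC}}
    \quad \text{(mono-exponential)},
    \qquad
    \frac{S(b)}{S_0} = f\, e^{-b D^{*}} + (1 - f)\, e^{-b D}
    \quad \text{(bi-exponential IVIM)},
    \qquad
    \mathrm{AIC} = 2k - 2 \ln \hat{L}.
    ```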

  27. Estimating chlorophyll content of Spartina alterniflora at leaf level using hyper-spectral data

    NASA Astrophysics Data System (ADS)

    Wang, Jiapeng; Shi, Runhe; Liu, Pudong; Zhang, Chao; Chen, Maosi

    2017-09-01

    Spartina alterniflora, one of the most successful invasive species in the world, was first introduced to China in 1979 to accelerate sedimentation and land formation via so-called "ecological engineering", and it is now widely distributed in Chinese coastal saltmarshes. A key question is how to retrieve chlorophyll content as an indicator of growth status, which has important implications for potential invasiveness. In this work, an estimation model for the chlorophyll content of S. alterniflora was developed based on hyper-spectral data in the Dongtan Wetland, Yangtze Estuary, China. The spectral reflectance of S. alterniflora leaves and their corresponding chlorophyll contents were measured, and correlation analyses and regression models (linear, logarithmic, quadratic, power and exponential) were established. The spectral reflectance was transformed and feature parameters (i.e., "san bian" (the three edge parameters), "lv feng" (the green peak) and "hong gu" (the red valley)) were extracted to retrieve the chlorophyll content of S. alterniflora. The results showed that these parameters had large correlation coefficients with chlorophyll content. On the basis of the correlation coefficients, mathematical models were established; the power and exponential models based on SDb had the smallest RMSE and the largest R2, performing well in the inversion of the chlorophyll content of S. alterniflora.

  28. Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro; Abgrall, Remi

    2014-11-01

    Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate structure between PDD and analysis of variance, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse-of-dimensionality problem, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD with the PDD coefficients computed by regression. During this adaptive procedure, the model representation by PDD contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.

  29. Regression of altitude-produced cardiac hypertrophy.

    NASA Technical Reports Server (NTRS)

    Sizemore, D. A.; Mcintyre, T. W.; Van Liere, E. J.; Wilson, M. F.

    1973-01-01

    The rate of regression of cardiac hypertrophy with time has been determined in adult male albino rats. The hypertrophy was induced by intermittent exposure to simulated high altitude. The percentage hypertrophy was much greater (46%) in the right ventricle than in the left (16%). The regression could be adequately fitted to a single exponential function with a half-time of 6.73 plus or minus 0.71 days (90% CI). There was no significant difference in the rates of regression for the two ventricles.
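
    A sketch of the single-exponential regression and the half-time computation (the relaxation form and notation are assumed): with ventricular weight W(t) relaxing from W_0 to W_∞,

    ```latex
    W(t) = W_{\infty} + (W_0 - W_{\infty})\, e^{-kt},
    \qquad
    t_{1/2} = \frac{\ln 2}{k},
    % so the reported half-time of 6.73 days corresponds to
    % k = \ln 2 / 6.73 \approx 0.103\ \text{day}^{-1}.
    ```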

  30. Gas propagation in a liquid helium cooled vacuum tube following a sudden vacuum loss

    NASA Astrophysics Data System (ADS)

    Dhuley, Ram C.

    This dissertation describes the propagation of near-atmospheric nitrogen gas that rushes into a liquid helium cooled vacuum tube after the tube suddenly loses vacuum. The loss-of-vacuum scenario resembles accidental venting of atmospheric air into the beam-line of a superconducting radio frequency particle accelerator and is investigated to understand how, in the presence of condensation, the in-flowing air will propagate in such a geometry. In a series of controlled experiments, room temperature nitrogen gas (a substitute for air) at a variety of mass flow rates was vented into a high vacuum tube immersed in a bath of liquid helium. Pressure probes and thermometers installed along the tube measured the tube pressure and the tube wall temperature rise due to gas flooding and condensation, respectively. At high mass in-flow rates, a gas front propagated down the vacuum tube with a continuously decreasing speed. Regression analysis of the measured front arrival times indicates that the speed decreases nearly exponentially with the travel length. At low enough mass in-flow rates, no front propagated in the vacuum tube. Instead, the in-flowing gas steadily condensed over a short section of the tube near its entrance and the front appeared to 'freeze out'. An analytical expression is derived for the gas front propagation speed in a vacuum tube in the presence of condensation. The analytical model qualitatively explains the front deceleration and the flow freeze-out. The model is then simplified and supplemented with condensation heat/mass transfer data, again finding that the front decelerates exponentially while moving away from the tube entrance. Within the experimental and procedural uncertainty, the exponential decay length-scales obtained from the front arrival time regression and from the simplified model agree.

  31. Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.

    PubMed

    Thiébaut, Anne C M; Bénichou, Jacques

    2004-12-30

    Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data.

  32. Combining Relevance Vector Machines and exponential regression for bearing residual life estimation

    NASA Astrophysics Data System (ADS)

    Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico

    2012-08-01

    In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques. We resort to (i) Relevance Vector Machines (RVMs) for selecting a small number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimates. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of an adequate representation of the uncertainty associated with the estimates and a quantification of the credibility of the results by the Prognostic Horizon (PH) metric.
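
    One simple reading of the exponential-regression step (a sketch with assumed notation, not necessarily the authors' exact formulation): if the degradation indicator is fitted as d(t) = a e^{bt} and failure is declared at a threshold d_f, the residual life at the current time t* is

    ```latex
    \hat{d}(t) = a\, e^{b t},
    \qquad
    \widehat{\mathrm{RUL}}(t^{*}) = \frac{1}{b} \ln\!\frac{d_{\mathrm{f}}}{a} - t^{*},
    % updated continuously as new vibration data refine the estimates of (a, b).
    ```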

  33. Use and interpretation of logistic regression in habitat-selection studies

    USGS Publications Warehouse

    Keating, Kim A.; Cherry, Steve

    2004-01-01

     Logistic regression is an important tool for wildlife habitat-selection studies, but the method frequently has been misapplied due to an inadequate understanding of the logistic model, its interpretation, and the influence of sampling design. To promote better use of this method, we review its application and interpretation under 3 sampling designs: random, case-control, and use-availability. Logistic regression is appropriate for habitat use-nonuse studies employing random sampling and can be used to directly model the conditional probability of use in such cases. Logistic regression also is appropriate for studies employing case-control sampling designs, but careful attention is required to interpret results correctly. Unless bias can be estimated or probability of use is small for all habitats, results of case-control studies should be interpreted as odds ratios, rather than probability of use or relative probability of use. When data are gathered under a use-availability design, logistic regression can be used to estimate approximate odds ratios if probability of use is small, at least on average. More generally, however, logistic regression is inappropriate for modeling habitat selection in use-availability studies. In particular, using logistic regression to fit the exponential model of Manly et al. (2002:100) does not guarantee maximum-likelihood estimates, valid probabilities, or valid likelihoods. We show that the resource selection function (RSF) commonly used for the exponential model is proportional to a logistic discriminant function. Thus, it may be used to rank habitats with respect to probability of use and to identify important habitat characteristics or their surrogates, but it is not guaranteed to be proportional to probability of use. Other problems associated with the exponential model also are discussed. We describe an alternative model based on Lancaster and Imbens (1996) that offers a method for estimating conditional probability of use in use-availability studies. Although promising, this model fails to converge to a unique solution in some important situations. Further work is needed to obtain a robust method that is broadly applicable to use-availability studies.
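
    For reference, the exponential resource selection function and the logistic model it is contrasted with (standard forms):

    ```latex
    w(x) = \exp(\beta_1 x_1 + \cdots + \beta_p x_p)
    \quad \text{(exponential RSF)},
    \qquad
    \pi(x) = \frac{\exp(\beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p)}
                  {1 + \exp(\beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p)}
    \quad \text{(logistic model)}.
    % w(x) is proportional to the numerator of \pi(x); it can rank habitats
    % but need not be proportional to the probability of use.
    ```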

  14. Transient modeling in simulation of hospital operations for emergency response.

    PubMed

    Paul, Jomon Aliyas; George, Santhosh K; Yi, Pengfei; Lin, Li

    2006-01-01

    Rapid estimates of hospital capacity after an event that may cause a disaster can assist disaster-relief efforts. Given the dynamics of hospitals following such an event, it is necessary to model the behavior of the system accurately. A transient modeling approach using simulation and exponential functions is presented, along with its applications in an earthquake situation. The parameters of the exponential model are regressed using outputs from designed simulation experiments. The developed model is capable of representing transient patient waiting times during a disaster. Most importantly, the modeling approach allows real-time capacity estimation of hospitals of various sizes and capabilities. Further, this research analyses the effects of priority-based routing of patients within the hospital on patient waiting times under various patient mixes. The model routes patients based on the severity of their injuries and queues patients requiring critical care according to their remaining survivability time. The model also accounts for the impact of prehospital transport time on patient waiting time.

  15. Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.

    2013-01-01

    Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
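
    The classical equivalence the abstract re-derives can be checked numerically in a few lines. Below is a minimal sketch, assuming statsmodels and synthetic data with a single binary covariate, so the closed-form exponential survival MLE is available for comparison.

```python
# Poisson regression with a log(time) offset reproduces the exponential
# survival maximum-likelihood estimate of the log hazard ratio.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
x = rng.integers(0, 2, n).astype(float)
rate = 0.1 * np.exp(0.7 * x)                 # true exponential hazard
t = rng.exponential(1 / rate)
c = rng.exponential(15.0, n)                 # independent censoring times
time, event = np.minimum(t, c), (t <= c).astype(float)

X = sm.add_constant(x)
fit = sm.GLM(event, X, family=sm.families.Poisson(),
             offset=np.log(time)).fit()

# Closed-form exponential MLE: log hazard ratio = log(d1/T1) - log(d0/T0),
# with d = events and T = total time at risk in each group.
d1, T1 = event[x == 1].sum(), time[x == 1].sum()
d0, T0 = event[x == 0].sum(), time[x == 0].sum()
print(fit.params[1], np.log(d1 / T1) - np.log(d0 / T0))   # identical
```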

  16. Comparison of bi-exponential and mono-exponential models of diffusion-weighted imaging for detecting active sacroiliitis in ankylosing spondylitis.

    PubMed

    Sun, Haitao; Liu, Kai; Liu, Hao; Ji, Zongfei; Yan, Yan; Jiang, Lindi; Zhou, Jianjun

    2018-04-01

    Background: There has been a growing need for a sensitive and effective imaging method for differentiating the activity of ankylosing spondylitis (AS). Purpose: To compare the performance of intravoxel incoherent motion (IVIM)-derived parameters and the apparent diffusion coefficient (ADC) for distinguishing AS activity. Material and Methods: One hundred patients with AS were divided into active (n = 51) and non-active (n = 49) groups, and 21 healthy volunteers were included as controls. The ADC, diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (f) were calculated for all groups. Kruskal-Wallis tests and receiver operating characteristic (ROC) curve analysis were performed for all parameters. Results: Reproducibility was good for ADC and D and relatively poor for D* and f. ADC, D, and f were significantly higher in the active group than in the non-active and control groups (all P < 0.0001). D* was slightly but significantly lower in the active group than in the non-active and control groups (P = 0.0064 and 0.0215, respectively). There was no significant difference in any parameter between the non-active group and the control group (all P > 0.050). In the ROC analysis, ADC had the largest AUC for distinguishing between the active and non-active groups (0.988) and between the active and control groups (0.990). Multivariate logistic regression models showed no diagnostic improvement. Conclusion: ADC provided better diagnostic performance than IVIM-derived parameters in differentiating AS activity. Therefore, a straightforward and effective mono-exponential model of diffusion-weighted imaging may be sufficient for differentiating AS activity in the clinic.
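
    The two models compared in the study can be sketched directly. The snippet below, assuming scipy and illustrative b-values and parameters, fits the mono-exponential ADC model and the bi-exponential IVIM model S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D) to a synthetic signal.

```python
# Mono-exponential ADC fit versus bi-exponential IVIM fit of a DWI signal.
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 50, 100, 200, 400, 600, 800.0])     # b-values in s/mm^2

def ivim(b, f, Dstar, D):
    return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)

rng = np.random.default_rng(3)
s = ivim(b, 0.15, 0.02, 0.0012) + 0.005 * rng.standard_normal(b.size)

# Mono-exponential ADC from a log-linear fit over all b-values.
adc = -np.polyfit(b, np.log(np.clip(s, 1e-6, None)), 1)[0]

# Bi-exponential IVIM fit with bounds keeping the parameters physical.
(f, Dstar, D), _ = curve_fit(ivim, b, s, p0=(0.1, 0.01, 0.001),
                             bounds=([0, 1e-4, 1e-5], [0.5, 0.1, 0.01]))
print(f"ADC={adc:.4f}, f={f:.2f}, D*={Dstar:.4f}, D={D:.5f}")
```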

  17. Forecasting Container Throughput at the Doraleh Port in Djibouti through Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Mohamed Ismael, Hawa; Vandyck, George Kobina

    The Doraleh Container Terminal (DCT) located in Djibouti has been noted as the most technologically advanced container terminal on the African continent. DCT's strategic location at the crossroads of the main shipping lanes connecting Asia, Africa and Europe puts it in a unique position to provide important shipping services to vessels plying that route. This paper aims to forecast container throughput through the Doraleh Container Port in Djibouti by time series analysis. A selection of univariate forecasting models was used, namely the Triple Exponential Smoothing Model, the Grey Model and the Linear Regression Model. Using these three models and their combination, forecasts of container throughput through the Doraleh port were produced. The forecasting results of the three models and of the combination forecast were then compared using the commonly applied evaluation criteria of Mean Absolute Deviation (MAD) and Mean Absolute Percentage Error (MAPE). The study found that the Linear Regression Model was the best method for forecasting container throughput, since its forecast error was the smallest. Based on the regression model, a ten (10) year forecast of container throughput at DCT was made.
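
    Of the three candidate models, triple exponential smoothing is the most involved to set up. Below is a minimal sketch assuming statsmodels and synthetic monthly throughput data; the Grey and regression models are omitted.

```python
# Holt-Winters (triple exponential smoothing) forecast of container throughput.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(4)
months = pd.date_range("2008-01", periods=96, freq="MS")
trend = np.linspace(20_000, 60_000, months.size)            # rising throughput
season = 3000 * np.sin(2 * np.pi * months.month / 12)       # seasonal swing
teu = trend + season + rng.normal(0, 1500, months.size)     # synthetic TEU counts

series = pd.Series(teu, index=months)
model = ExponentialSmoothing(series, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
forecast = model.forecast(12)                               # one year ahead

mape = np.mean(np.abs((series - model.fittedvalues) / series)) * 100
print(f"in-sample MAPE = {mape:.1f}%")
```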

  18. MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-05-01

    MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. This model can also be used for analysis of clustered data, where the mixed-effects model assumes data within clusters are dependent. The degree of dependency is estimated jointly with estimates of the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.

  19. The matrix exponential in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon

    1987-01-01

    The primary usefulness of the presented theory is its ability to represent the effects of high-frequency linear response accurately, without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the exponential matrix solution is in the time domain. Truncating the series for the matrix exponential makes the solution inaccurate after a certain time; up to that time, however, the solution is extremely accurate, including all high-frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free-vibration response of multi-degree-of-freedom models of cantilever beams.
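
    A minimal sketch of this time-marching idea, assuming scipy and a small two-degree-of-freedom undamped system rather than the paper's cantilever beams: the state is advanced in fixed increments via x(t+Δt) = expm(AΔt)·x(t), which retains high-frequency content within each step.

```python
# Time marching of a linear structural model with the matrix exponential.
import numpy as np
from scipy.linalg import expm

M = np.diag([1.0, 1.0])                       # mass matrix
K = np.array([[2.0, -1.0], [-1.0, 1.0]])      # stiffness matrix
Z = np.zeros((2, 2))
A = np.block([[Z, np.eye(2)], [-np.linalg.solve(M, K), Z]])  # first-order form

dt = 0.05
phi = expm(A * dt)                            # state transition over one step
x = np.array([1.0, 0.0, 0.0, 0.0])            # initial displacement on DOF 1

history = [x.copy()]
for _ in range(200):                          # free-vibration response
    x = phi @ x
    history.append(x.copy())
print(np.array(history)[:5, 0])               # displacement of DOF 1
```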

  20. Modeling the pressure inactivation of Escherichia coli and Salmonella typhimurium in sapote mamey ( Pouteria sapota (Jacq.) H.E. Moore & Stearn) pulp.

    PubMed

    Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto

    2018-03-01

    High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and a Bigelow-type model and an empirical model for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as a function of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, with d = 5 (t_5) as the criterion of a 5-log10 reduction (5D); the desired reductions in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min for the one- or two-step nonlinear procedure, respectively.

  1. Statistical power for detecting trends with applications to seabird monitoring

    USGS Publications Warehouse

    Hatch, Shyla A.

    2003-01-01

    Power analysis is helpful in defining goals for ecological monitoring and evaluating the performance of ongoing efforts. I examined detection standards proposed for population monitoring of seabirds using two programs (MONITOR and TRENDS) specially designed for power analysis of trend data. Neither program models within- and among-years components of variance explicitly and independently, thus an error term that incorporates both components is an essential input. Residual variation in seabird counts consisted of day-to-day variation within years and unexplained variation among years in approximately equal parts. The appropriate measure of error for power analysis is the standard error of estimation (S.E.est) from a regression of annual means against year. Replicate counts within years are helpful in minimizing S.E.est but should not be treated as independent samples for estimating power to detect trends. Other issues include a choice of assumptions about variance structure and selection of an exponential or linear model of population change. Seabird count data are characterized by strong correlations between S.D. and mean, thus a constant CV model is appropriate for power calculations. Time series were fit about equally well with exponential or linear models, but log transformation ensures equal variances over time, a basic assumption of regression analysis. Using sample data from seabird monitoring in Alaska, I computed the number of years required (with annual censusing) to detect trends of -1.4% per year (50% decline in 50 years) and -2.7% per year (50% decline in 25 years). At α=0.05 and a desired power of 0.9, estimated study intervals ranged from 11 to 69 years depending on species, trend, software, and study design. Power to detect a negative trend of 6.7% per year (50% decline in 10 years) is suggested as an alternative standard for seabird monitoring that achieves a reasonable match between statistical and biological significance.
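
    The same kind of power calculation can be done by direct simulation. Below is a minimal sketch in the spirit of the study, assuming an exponential decline, a constant-CV (lognormal) error model, and a log-linear regression trend test; the parameter values are illustrative, not the study's.

```python
# Power to detect an exponential population trend under a constant-CV model:
# power = fraction of simulated series with a significant negative log-linear
# slope at alpha = 0.05.
import numpy as np
from scipy import stats

def power(years, annual_trend=-0.014, cv=0.20, n_sim=2000, alpha=0.05, seed=5):
    rng = np.random.default_rng(seed)
    t = np.arange(years)
    mu = np.log(1000.0) + np.log(1 + annual_trend) * t   # log of expected count
    sigma = np.sqrt(np.log(1 + cv**2))                   # lognormal sd for given CV
    hits = 0
    for _ in range(n_sim):
        y = mu + rng.normal(0, sigma, years)             # simulated log counts
        res = stats.linregress(t, y)
        hits += res.pvalue < alpha and res.slope < 0
    return hits / n_sim

for yrs in (10, 20, 30):
    print(yrs, "years:", power(yrs))
```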

  2. Occupational injuries in Italy: risk factors and long term trend (1951-98)

    PubMed Central

    Fabiano, B; Curro, F; Pastorino, R

    2001-01-01

    OBJECTIVES—Trends in the rates of total injuries and fatal accidents in the different sectors of Italian industries were explored during the period 1951-98. Causes and dynamics of injury were also studied for setting priorities for improving safety standards.
METHODS—Data on occupational injuries from the National Organisation for Labour Injury Insurance were combined with data from the State Statistics Institute to highlight the interaction between the injury frequency index trend and the production cycle—that is, the evolution of industrial production throughout the years. Multiple regression with log transformed rates was adopted to model the trends of occupational fatalities for each industrial group.
RESULTS—The ratios between the linked indices of injury frequency and industrial production showed a good correlation over the whole period. A general decline in injuries was found across all sectors, with values ranging from 79.86% in the energy group to 23.32% in the textile group. In analysing fatalities, the trend seemed to be more clearly decreasing than the trend of total injuries, including temporary and permanent disabilities; the fatalities showed an exponential decrease according to multiple regression, with an annual decline equal to 4.42%.
CONCLUSIONS—The overall probability of industrial fatal accidents in Italy tended to decrease exponentially by year. The most effective actions in preventing injuries were directed towards fatal accidents. By analysing the rates of fatal accident in the different sectors, appropriate targets and priorities for increased strategies to prevent injuries can be suggested. The analysis of the dynamics and the material causes of injuries showed that still more consideration should be given to human and organisational factors.


Keywords: labour injuries; severity; regression model. PMID:11303083

  3. Methods for trend analysis: Examples with problem/failure data

    NASA Technical Reports Server (NTRS)

    Church, Curtis K.

    1989-01-01

    Statistics play an important role in quality control and reliability. Consequently, the NASA standard on trend analysis techniques recommends a variety of statistical methodologies that can be applied to time series data. The major goal of the working handbook, using data from the MSFC Problem Assessment System, is to illustrate some of the techniques in the NASA standard and some different techniques, and to identify patterns in the data. The techniques used for trend estimation are regression (exponential, power, reciprocal, straight line) and Kendall's rank correlation coefficient. The important details of a statistical strategy for estimating a trend component are covered in the examples. However, careful analysis and interpretation are necessary because of small samples and frequent zero problem reports in a given time period. Further investigations to deal with these issues are being conducted.
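
    Two of the named techniques are easy to sketch on synthetic monthly problem-report counts: Kendall's rank correlation as a distribution-free trend test, and an exponential trend fitted by regression on log counts. The data here are illustrative; as the abstract notes, real counts may include zero-report periods, which is why the log fit below must mask zeros.

```python
# Kendall's tau trend test plus an exponential trend fit on count data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
t = np.arange(36)                                     # months
counts = rng.poisson(12 * np.exp(-0.03 * t))          # declining problem reports

tau, p = stats.kendalltau(t, counts)
print(f"Kendall tau = {tau:.2f}, p = {p:.3f}")

mask = counts > 0                                     # log requires positive counts
slope, intercept = np.polyfit(t[mask], np.log(counts[mask]), 1)
print(f"fitted exponential trend: {np.exp(intercept):.1f} * exp({slope:.3f} t)")
```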

  4. Exponential stability of impulsive stochastic genetic regulatory networks with time-varying delays and reaction-diffusion

    DOE PAGES

    Cao, Boqiang; Zhang, Qimin; Ye, Ming

    2016-11-29

    We present a mean-square exponential stability analysis for impulsive stochastic genetic regulatory networks (GRNs) with time-varying delays and reaction-diffusion driven by fractional Brownian motion (fBm). By constructing a Lyapunov functional and using linear matrix inequality for stochastic analysis we derive sufficient conditions to guarantee the exponential stability of the stochastic model of impulsive GRNs in the mean-square sense. Meanwhile, the corresponding results are obtained for the GRNs with constant time delays and standard Brownian motion. Finally, an example is presented to illustrate our results of the mean-square exponential stability analysis.

  5. Comparison of Survival Models for Analyzing Prognostic Factors in Gastric Cancer Patients

    PubMed

    Habibi, Danial; Rafiei, Mohammad; Chehrei, Ali; Shayan, Zahra; Tafaqodi, Soheil

    2018-03-27

    Objective: There are a number of models for determining risk factors for survival of patients with gastric cancer. This study was conducted to select the model showing the best fit to the available data. Methods: Cox regression and parametric models (exponential, Weibull, Gompertz, log normal, log logistic and generalized gamma) were utilized in unadjusted and adjusted forms to detect factors influencing mortality of patients. Comparisons were made with the Akaike Information Criterion (AIC) using Stata 13 and R 3.1.3 software. Results: The results of this study indicated that all parametric models outperform the Cox regression model. The log normal, log logistic and generalized gamma models provided the best performance in terms of AIC values (179.2, 179.4 and 181.1, respectively). On unadjusted analysis, the results of the Cox regression and parametric models indicated stage, grade, largest diameter of metastatic nest, largest diameter of LM, number of involved lymph nodes and the largest ratio of metastatic nests to lymph nodes to be variables influencing the survival of patients with gastric cancer. On adjusted analysis, according to the best model (log normal), grade was found to be the significant variable. Conclusion: The results suggested that all parametric models outperform the Cox model. The log normal model provides the best fit and is a good substitute for Cox regression. Creative Commons Attribution License.
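
    The model-comparison step translates directly into code. Below is a minimal sketch assuming the Python lifelines package and synthetic censored survival times; the study's covariate-adjusted analyses (and its Gompertz and generalized gamma fits) are not reproduced. Lower AIC indicates better fit.

```python
# Compare parametric survival models by AIC on right-censored data.
import numpy as np
from lifelines import (ExponentialFitter, WeibullFitter,
                       LogNormalFitter, LogLogisticFitter)

rng = np.random.default_rng(7)
n = 300
t = rng.lognormal(mean=3.0, sigma=0.8, size=n)        # true model: log normal
c = rng.uniform(10, 120, n)                           # censoring times
T, E = np.minimum(t, c), (t <= c).astype(int)

for fitter in (ExponentialFitter(), WeibullFitter(),
               LogNormalFitter(), LogLogisticFitter()):
    fitter.fit(T, E)
    print(f"{fitter.__class__.__name__:20s} AIC = {fitter.AIC_:.1f}")
```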

  6. On the Prony series representation of stretched exponential relaxation

    NASA Astrophysics Data System (ADS)

    Mauro, John C.; Mauro, Yihong Z.

    2018-09-01

    Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first-derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
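
    Because the Prony time constants can be fixed in advance, the fit reduces to a nonnegative linear least-squares problem. The sketch below, assuming scipy and illustrative choices of β and term count, approximates exp(-(t/τ)^β) by a nonnegative sum of simple exponentials on logarithmically spaced time constants.

```python
# Prony-series approximation of a stretched exponential via NNLS.
import numpy as np
from scipy.optimize import nnls

beta, tau = 3 / 7, 1.0                       # a critical stretching exponent
t = np.logspace(-3, 2, 400)
target = np.exp(-(t / tau) ** beta)

taus = np.logspace(-4, 3, 12)                # fixed Prony time constants
A = np.exp(-t[:, None] / taus[None, :])      # design matrix of simple exponentials
w, resid = nnls(A, target)                   # nonnegative Prony weights

approx = A @ w
print(f"terms kept: {(w > 1e-6).sum()}, max abs error: "
      f"{np.abs(approx - target).max():.2e}")
```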

  7. On the hardness of high carbon ferrous martensite

    NASA Astrophysics Data System (ADS)

    Mola, J.; Ren, M.

    2018-06-01

    Due to the presence of retained austenite in martensitic steels, especially steels with high carbon concentrations, it is difficult to estimate the hardness of martensite independent of the hardness of the coexisting austenite. In the present work, the hardness of ferrous martensite with carbon concentrations in the range 0.23-1.46 mass-% was estimated by the regression analysis of hardnesses for hardened martensitic-austenitic steels containing various martensite fractions. For a given carbon concentration, the hardness of martensitic-austenitic steels was found to increase exponentially with an increase in the fraction of the martensitic constituent. The hardness of the martensitic constituent was subsequently estimated by the exponential extrapolation of the hardness of phase mixtures to 100 vol.% martensite. For martensite containing 1.46 mass-% carbon, the hardness was estimated to be 1791 HV. This estimate of martensite hardness is significantly higher than the experimental hardness of 822 HV for a phase mixture of 68 vol.% martensite and 32 vol.% austenite. The hardness obtained by exponential extrapolation is also much higher than the hardness of 1104 HV based on the rule of mixtures. The underestimated hardness of high carbon martensite in the presence of austenite is due to the non-linear dependence of hardness on the martensite fraction. The latter is also a common observation in composite materials with a soft matrix and hard reinforcing particles.

  8. Hyperopic photorefractive keratectomy and central islands

    NASA Astrophysics Data System (ADS)

    Gobbi, Pier Giorgio; Carones, Francesco; Morico, Alessandro; Vigo, Luca; Brancato, Rosario

    1998-06-01

    We have evaluated the refractive evolution in patients treated with hyperopic PRK to assess the extent of the initial overcorrection and the time constant of regression. To this end, the time history of the refractive error (i.e., the difference between achieved and intended refractive correction) has been fitted by means of an exponential statistical model, giving information that characterizes the surgical procedure with a direct clinical meaning. Both hyperopic and myopic PRK procedures have been analyzed by this method. The analysis of the fitted model parameters shows that hyperopic PRK patients exhibit a definitely higher initial overcorrection than myopic ones, and a much longer regression time constant. A common mechanism is proposed to be responsible for the refractive outcomes in hyperopic treatments and in myopic patients exhibiting significant central islands. The interpretation is in terms of superhydration of the central cornea, and is based on a simple physical model evaluating the amount of centripetal compression in the apical cornea.

  9. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
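
    The core idea of MRR is easy to sketch: fit the parametric model, smooth its residuals nonparametrically, and add back a portion of that residual fit. Below is a minimal sketch assuming a linear parametric calibration, a Gaussian-kernel local-constant smoother, and a fixed mixing parameter lam; MRR as developed by Mays, Birch, and Starnes chooses the mixing portion from the data, which is omitted here.

```python
# Model Robust Regression sketch: parametric fit + lam * smoothed residuals.
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0, 1, 80)
y = 2 * x + 0.3 * np.sin(6 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

coef = np.polyfit(x, y, 1)                    # parametric (linear) calibration fit
parametric = np.polyval(coef, x)
resid = y - parametric

def kernel_smooth(x0, x, r, h=0.05):          # local-constant smoother of residuals
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * r) / np.sum(w)

nonparam = np.array([kernel_smooth(x0, x, resid) for x0 in x])
lam = 0.5                                     # portion of the residual fit added back
mrr = parametric + lam * nonparam
print(f"RMSE parametric: {np.sqrt(np.mean(resid**2)):.4f}, "
      f"MRR: {np.sqrt(np.mean((y - mrr)**2)):.4f}")
```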

  10. Application and evaluation of forecasting methods for municipal solid waste generation in an Eastern-European city.

    PubMed

    Rimaityte, Ingrida; Ruzgas, Tomas; Denafas, Gintaras; Racys, Viktoras; Martuzevicius, Dainius

    2012-01-01

    Forecasting the generation of municipal solid waste (MSW) in developing countries is often a challenging task due to the lack of data and the difficulty of selecting a suitable forecasting method. This article aimed to select and evaluate several methods for MSW forecasting in a medium-sized Eastern European city (Kaunas, Lithuania) with a rapidly developing economy, with respect to affluence-related and seasonal impacts. MSW generation was forecast with respect to the economic activity of the city (regression modelling) and using time series analysis. The modelling based on socio-economic indicators (regression implemented in the LCA-IWM model) showed particular sensitivity (deviation from actual data in the range of 2.2 to 20.6%) to external factors, such as the synergetic effects of affluence parameters or changes in the MSW collection system. For the time series analysis, the combination of autoregressive integrated moving average (ARIMA) and seasonal exponential smoothing (SES) techniques was found to be the most accurate (mean absolute percentage error equal to 6.5). The time series analysis method was very valuable for forecasting the weekly variation of waste generation data (r² > 0.87), but the forecast yearly increase should be verified against the data obtained by regression modelling. The methods and findings of this study may assist experts, decision-makers and scientists performing forecasts of MSW generation, especially in developing countries.

  11. Modeling the germination kinetics of clostridium botulinum 56A spores as affected by temperature, pH, and sodium chloride.

    PubMed

    Chea, F P; Chen, Y; Montville, T J; Schaffner, D W

    2000-08-01

    The germination kinetics of proteolytic Clostridium botulinum 56A spores were modeled as a function of temperature (15, 22, 30 °C), pH (5.5, 6.0, 6.5), and sodium chloride (0.5, 2.0, 4.0%). Germination in brain heart infusion (BHI) broth was followed with phase-contrast microscopy. The data collected were used to develop the mathematical models. The germination kinetics, expressed as the cumulative fraction of germinated spores over time at each environmental condition, were best described by an exponential distribution. Quadratic polynomial models were developed by regression analysis to describe the exponential parameter (time to 63% germination) (r² = 0.982) and the germination extent (r² = 0.867) as a function of temperature, pH, and sodium chloride. Validation experiments in BHI broth (pH: 5.75, 6.25; NaCl: 1.0, 3.0%; temperature: 18, 26 °C) confirmed that the model's predictions were within an acceptable range compared to the experimental results and were fail-safe in most cases.

  12. Systematic strategies for the third industrial accident prevention plan in Korea.

    PubMed

    Kang, Young-sig; Yang, Sung-hwan; Kim, Tae-gu; Kim, Day-sung

    2012-01-01

    To minimize industrial accidents, it is critical to evaluate a firm's priorities for prevention factors and strategies, since such evaluation provides decisive information for preventing industrial accidents and maintaining safety management. Therefore, this paper proposes evaluating priorities through statistical testing of prevention factors with a cause analysis in a cause-and-effect model. A priority matrix criterion is proposed for applying the ranking and for the objectivity of the questionnaire results. This paper used the regression method (RA), exponential smoothing method (ESM), double exponential smoothing method (DESM), autoregressive integrated moving average (ARIMA) model and the proposed analytical function method (PAFM) to analyze trends in accident data and obtain accurate predictions. This paper standardized the questionnaire results of workers and managers in manufacturing and construction companies with fewer than 300 employees, located in the central Korean metropolitan areas where fatal accidents have occurred. Finally, a strategy was provided for constructing safety management for the third industrial accident prevention plan, together with a forecasting method for occupational accident rates and fatality rates per 10,000 people.

  13. Non-Poisson Processes: Regression to Equilibrium Versus Equilibrium Correlation Functions

    DTIC Science & Technology

    2004-07-07

    Physica A 347 (2005) 268–288. PACS: 05.40.-a; 89.75.-k; 02.50.Ey. Keywords: stochastic processes; non-Poisson processes; Liouville and Liouville-like equations; correlation function. Only a fragment of the abstract survives extraction: "… which is not legitimate with renewal non-Poisson processes, is a correct property if the deviation from the exponential relaxation is obtained by time …"

  14. Bayesian Analysis of High Dimensional Classification

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Subhadeep; Liang, Faming

    2009-12-01

    Modern data mining and bioinformatics have presented an important playground for statistical learning techniques, where the number of input variables may be much larger than the sample size of the training data. In supervised learning, logistic regression or probit regression can be used to model a binary output and form perceptron classification rules based on Bayesian inference. In these settings, there is considerable interest in searching for sparse models in the high-dimensional regression/classification setup. We first discuss two common challenges for analyzing high-dimensional data. The first is the curse of dimensionality: the complexity of many existing algorithms scales exponentially with the dimensionality of the space, so the algorithms soon become computationally intractable and therefore inapplicable in many real applications. The second is multicollinearity among the predictors, which severely slows down the algorithms. To make Bayesian analysis operational in high dimensions, we propose a novel hierarchical stochastic approximation Monte Carlo (HSAMC) algorithm, which overcomes the curse of dimensionality and the multicollinearity of predictors in high dimensions, and also possesses a self-adjusting mechanism to avoid local minima separated by high energy barriers. Models and methods are illustrated by a simulation inspired by the field of genomics. Numerical results indicate that HSAMC can work as a general model selection sampler in a high-dimensional complex model space.

  15. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the numbers of exact analyses which involve computationally expensive finite element analysis.
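
    A simplified illustration of the two-point idea follows: an exponential y = a·e^(bx) is passed through two sampled responses and used to predict nearby points. This is a generic stand-in under stated assumptions, not the authors' exact one- and two-point formulations, which also exploit design-sensitivity (gradient) information.

```python
# Two-point exponential approximation of a structural response.
import numpy as np

def two_point_exponential(x1, y1, x2, y2):
    b = np.log(y2 / y1) / (x2 - x1)          # growth rate from the two samples
    a = y1 * np.exp(-b * x1)
    return lambda x: a * np.exp(b * x)

# Response sampled at two design points (synthetic, positive-valued).
f = lambda x: 50.0 / x                        # e.g. a stress-like response
approx = two_point_exponential(1.0, f(1.0), 1.2, f(1.2))

for x in (1.1, 1.3, 1.5):
    print(x, f(x), approx(x))                 # accuracy degrades away from the points
```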

  16. Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 1; Analysis

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.

    2002-01-01

    Extensive slow-crack-growth (SCG) analysis was made using a primary exponential crack-velocity formulation under three widely used load configurations: constant stress rate, constant stress, and cyclic stress. Although the use of the exponential formulation in determining SCG parameters of a material requires somewhat inconvenient numerical procedures, the resulting solutions presented gave almost the same degree of simplicity in both data analysis and experiments as did the power-law formulation. However, the fact that the inert strength of a material should be known in advance to determine the corresponding SCG parameters was a major drawback of the exponential formulation as compared with the power-law formulation.

  17. Stepwise Distributed Open Innovation Contests for Software Development: Acceleration of Genome-Wide Association Analysis

    PubMed Central

    Hill, Andrew; Loh, Po-Ru; Bharadwaj, Ragu B.; Pons, Pascal; Shang, Jingbo; Guinan, Eva; Lakhani, Karim; Kilty, Iain

    2017-01-01

    Background: The association of differing genotypes with disease-related phenotypic traits offers great potential to both help identify new therapeutic targets and support stratification of patients who would gain the greatest benefit from specific drug classes. Development of low-cost genotyping and sequencing has made collecting large-scale genotyping data routine in population and therapeutic intervention studies. In addition, a range of new technologies is being used to capture numerous new and complex phenotypic descriptors. As a result, genotype and phenotype datasets have grown exponentially. Genome-wide association studies associate genotypes and phenotypes using methods such as logistic regression. As existing tools for association analysis limit the efficiency by which value can be extracted from increasing volumes of data, there is a pressing need for new software tools that can accelerate association analyses on large genotype-phenotype datasets. Results: Using open innovation (OI) and contest-based crowdsourcing, the logistic regression analysis in a leading, community-standard genetics software package (PLINK 1.07) was substantially accelerated. OI allowed us to do this in <6 months by providing rapid access to highly skilled programmers with specialized, difficult-to-find skill sets. Through a crowd-based contest a combination of computational, numeric, and algorithmic approaches was identified that accelerated the logistic regression in PLINK 1.07 by 18- to 45-fold. Combining contest-derived logistic regression code with coarse-grained parallelization, multithreading, and associated changes to data initialization code further developed through distributed innovation, we achieved an end-to-end speedup of 591-fold for a data set size of 6678 subjects by 645 863 variants, compared to PLINK 1.07's logistic regression. This represents a reduction in run time from 4.8 hours to 29 seconds. Accelerated logistic regression code developed in this project has been incorporated into the PLINK2 project. Conclusions: Using iterative competition-based OI, we have developed a new, faster implementation of logistic regression for genome-wide association studies analysis. We present lessons learned and recommendations on running a successful OI process for bioinformatics. PMID:28327993

  18. Stepwise Distributed Open Innovation Contests for Software Development: Acceleration of Genome-Wide Association Analysis.

    PubMed

    Hill, Andrew; Loh, Po-Ru; Bharadwaj, Ragu B; Pons, Pascal; Shang, Jingbo; Guinan, Eva; Lakhani, Karim; Kilty, Iain; Jelinsky, Scott A

    2017-05-01

    The association of differing genotypes with disease-related phenotypic traits offers great potential to both help identify new therapeutic targets and support stratification of patients who would gain the greatest benefit from specific drug classes. Development of low-cost genotyping and sequencing has made collecting large-scale genotyping data routine in population and therapeutic intervention studies. In addition, a range of new technologies is being used to capture numerous new and complex phenotypic descriptors. As a result, genotype and phenotype datasets have grown exponentially. Genome-wide association studies associate genotypes and phenotypes using methods such as logistic regression. As existing tools for association analysis limit the efficiency by which value can be extracted from increasing volumes of data, there is a pressing need for new software tools that can accelerate association analyses on large genotype-phenotype datasets. Using open innovation (OI) and contest-based crowdsourcing, the logistic regression analysis in a leading, community-standard genetics software package (PLINK 1.07) was substantially accelerated. OI allowed us to do this in <6 months by providing rapid access to highly skilled programmers with specialized, difficult-to-find skill sets. Through a crowd-based contest a combination of computational, numeric, and algorithmic approaches was identified that accelerated the logistic regression in PLINK 1.07 by 18- to 45-fold. Combining contest-derived logistic regression code with coarse-grained parallelization, multithreading, and associated changes to data initialization code further developed through distributed innovation, we achieved an end-to-end speedup of 591-fold for a data set size of 6678 subjects by 645 863 variants, compared to PLINK 1.07's logistic regression. This represents a reduction in run time from 4.8 hours to 29 seconds. Accelerated logistic regression code developed in this project has been incorporated into the PLINK2 project. Using iterative competition-based OI, we have developed a new, faster implementation of logistic regression for genome-wide association studies analysis. We present lessons learned and recommendations on running a successful OI process for bioinformatics. © The Author 2017. Published by Oxford University Press.

  19. A FORTRAN program for multivariate survival analysis on the personal computer.

    PubMed

    Mulder, P G

    1988-01-01

    In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained by the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.

  20. Penalized nonparametric scalar-on-function regression via principal coordinates

    PubMed Central

    Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu

    2016-01-01

    A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
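
    The pipeline described here can be sketched compactly: classical multidimensional scaling of a distance matrix among the functional predictors yields principal coordinates, and a ridge regression is fitted on the leading ones. The sketch below assumes scikit-learn for the ridge step, uses Euclidean distance as a stand-in for dynamic time warping, and fixes the penalty rather than tuning it as the published implementation does.

```python
# Principal coordinate ridge regression: classical MDS + ridge on the
# leading coordinates.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(9)
n, p = 100, 50
curves = np.cumsum(rng.standard_normal((n, p)), axis=1)    # functional predictors
y = curves[:, -1] + 0.5 * rng.standard_normal(n)           # scalar response

# Classical MDS (principal coordinates) from squared distances.
D2 = np.sum((curves[:, None, :] - curves[None, :, :]) ** 2, axis=-1)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J                                       # double centering
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:10]                         # 10 leading coordinates
coords = vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

model = Ridge(alpha=1.0).fit(coords, y)
print("in-sample R^2:", model.score(coords, y))
```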

  1. Crime prediction modeling

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.

  2. First off-time treatment prostate-specific antigen kinetics predicts survival in intermittent androgen deprivation for prostate cancer.

    PubMed

    Sanchez-Salas, Rafael; Olivier, Fabien; Prapotnich, Dominique; Dancausa, José; Fhima, Mehdi; David, Stéphane; Secin, Fernando P; Ingels, Alexandre; Barret, Eric; Galiano, Marc; Rozet, François; Cathelineau, Xavier

    2016-01-01

    Prostate-specific antigen (PSA) doubling time relies on an exponential kinetic pattern. This pattern has never been validated in the setting of intermittent androgen deprivation (IAD). The objective is to analyze the prognostic significance for PCa of recurrent patterns in PSA kinetics in patients undergoing IAD. A retrospective study was conducted on 377 patients treated with IAD. The on-treatment period (ONTP) consisted of gonadotropin-releasing hormone agonist injections combined with an oral androgen receptor antagonist. The off-treatment period (OFTP) began when PSA was lower than 4 ng/ml; the ONTP resumed when PSA was higher than 20 ng/ml. PSA values of each OFTP were fitted with three basic patterns: exponential (PSA(t) = λ·e^(αt)), linear (PSA(t) = a·t), and power law (PSA(t) = a·t^c). Univariate and multivariate Cox regression models analyzed predictive factors for oncologic outcomes. Only 45% of the analyzed OFTPs were exponential. Linear and power-law PSA kinetics represented 7.5% and 7.7%, respectively. The remaining fraction of analyzed OFTPs (40%) exhibited complex kinetics. Exponential PSA kinetics during the first OFTP was significantly associated with worse oncologic outcome. The estimated 10-year cancer-specific survival (CSS) was 46% for exponential versus 80% for non-exponential PSA kinetic patterns. The corresponding 10-year probability of castration-resistant prostate cancer (CRPC) was 69% and 31% for the two patterns, respectively. Limitations include the retrospective design and mixed indications for IAD. PSA kinetics fitted an exponential pattern in approximately half of the OFTPs. An exponential PSA kinetic during the first OFTP was associated with a shorter time to CRPC and worse CSS. © 2015 Wiley Periodicals, Inc.

  3. LOGISTIC NETWORK REGRESSION FOR SCALABLE ANALYSIS OF NETWORKS WITH JOINT EDGE/VERTEX DYNAMICS

    PubMed Central

    Almquist, Zack W.; Butts, Carter T.

    2015-01-01

    Change in group size and composition has long been an important area of research in the social sciences. Similarly, interest in interaction dynamics has a long history in sociology and social psychology. However, the effects of endogenous group change on interaction dynamics are a surprisingly understudied area. One way to explore these relationships is through social network models. Network dynamics may be viewed as a process of change in the edge structure of a network, in the vertex set on which edges are defined, or in both simultaneously. Although early studies of such processes were primarily descriptive, recent work on this topic has increasingly turned to formal statistical models. Although showing great promise, many of these modern dynamic models are computationally intensive and scale very poorly in the size of the network under study and/or the number of time points considered. Likewise, currently used models focus on edge dynamics, with little support for endogenously changing vertex sets. Here, the authors show how an existing approach based on logistic network regression can be extended to serve as a highly scalable framework for modeling large networks with dynamic vertex sets. The authors place this approach within a general dynamic exponential family (exponential-family random graph modeling) context, clarifying the assumptions underlying the framework (and providing a clear path for extensions), and they show how model assessment methods for cross-sectional networks can be extended to the dynamic case. Finally, the authors illustrate this approach on a classic data set involving interactions among windsurfers on a California beach. PMID:26120218

  4. LOGISTIC NETWORK REGRESSION FOR SCALABLE ANALYSIS OF NETWORKS WITH JOINT EDGE/VERTEX DYNAMICS.

    PubMed

    Almquist, Zack W; Butts, Carter T

    2014-08-01

    Change in group size and composition has long been an important area of research in the social sciences. Similarly, interest in interaction dynamics has a long history in sociology and social psychology. However, the effects of endogenous group change on interaction dynamics are a surprisingly understudied area. One way to explore these relationships is through social network models. Network dynamics may be viewed as a process of change in the edge structure of a network, in the vertex set on which edges are defined, or in both simultaneously. Although early studies of such processes were primarily descriptive, recent work on this topic has increasingly turned to formal statistical models. Although showing great promise, many of these modern dynamic models are computationally intensive and scale very poorly in the size of the network under study and/or the number of time points considered. Likewise, currently used models focus on edge dynamics, with little support for endogenously changing vertex sets. Here, the authors show how an existing approach based on logistic network regression can be extended to serve as a highly scalable framework for modeling large networks with dynamic vertex sets. The authors place this approach within a general dynamic exponential family (exponential-family random graph modeling) context, clarifying the assumptions underlying the framework (and providing a clear path for extensions), and they show how model assessment methods for cross-sectional networks can be extended to the dynamic case. Finally, the authors illustrate this approach on a classic data set involving interactions among windsurfers on a California beach.

  5. Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters

    PubMed Central

    Landowne, David; Yuan, Bin; Magleby, Karl L.

    2013-01-01

    Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
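
    The strategy of the paper translates naturally into code: start with many logarithmically spaced time constants so that none are missed, estimate the areas by maximum likelihood with the time constants held fixed, then drop components with negligible area. The sketch below, assuming synthetic dwell times, uses EM updates for a mixture of exponentials as the maximum-likelihood step; the paper's merging of closely spaced adjacent components is omitted.

```python
# Exponential sum-fitting of raw dwell times without starting parameters:
# dense fixed grid of time constants, ML estimation of the areas via EM,
# then pruning of negligible components.
import numpy as np

rng = np.random.default_rng(10)
dwells = np.concatenate([rng.exponential(1.0, 3000),     # two true components
                         rng.exponential(25.0, 1000)])

taus = np.logspace(-2, 4, 25)                            # dense initial grid
w = np.full(taus.size, 1 / taus.size)                    # equal starting areas

for _ in range(500):                                     # EM updates for the areas
    dens = w * np.exp(-dwells[:, None] / taus) / taus    # component densities
    resp = dens / dens.sum(axis=1, keepdims=True)        # responsibilities
    w = resp.mean(axis=0)

keep = w > 1e-3                                          # prune negligible areas
for tau, area in zip(taus[keep], w[keep]):
    print(f"tau = {tau:8.2f}  area = {area:.3f}")
```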

  6. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. To address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e., surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: (1) truncated dimensionality for the ANOVA component functions, (2) an active-dimension technique, especially for second- and higher-order parameter interactions, and (3) a stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains only a few terms at any stage, so the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  7. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many cases and is available in a simple statistical form; its characteristic is a constant hazard rate. The exponential distribution is the special case of the Weibull distribution with shape parameter equal to one. In this paper our effort is to introduce the basic notions that constitute an exponential competing-risks model in reliability analysis using a Bayesian analysis approach, and to present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The paper describes the likelihood function, followed by a description of the posterior function and the estimation of the point, interval, hazard function, and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
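
    For a single exponential failure cause the Bayesian machinery is fully conjugate, which makes a compact sketch possible. The snippet below assumes the Jeffreys non-informative prior p(λ) ∝ 1/λ, under which the posterior of the rate given n failures with total time on test T is Gamma(n, T); the competing-risks structure of the paper is not reproduced.

```python
# Bayesian point/interval estimation and reliability for an exponential
# failure model under a non-informative (Jeffreys) prior.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
times = rng.exponential(scale=50.0, size=40)       # observed failure times
n, T = times.size, times.sum()

posterior = stats.gamma(a=n, scale=1 / T)          # rate | data ~ Gamma(n, T)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"posterior mean rate = {posterior.mean():.4f}, "
      f"95% interval = ({lo:.4f}, {hi:.4f})")

# Posterior predictive reliability at mission time t0: E[exp(-rate * t0)].
t0 = 30.0
samples = posterior.rvs(100_000, random_state=rng)
print(f"reliability at t={t0}: {np.exp(-samples * t0).mean():.3f}")
```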

  8. Statistical Optimality in Multipartite Ranking and Ordinal Regression.

    PubMed

    Uematsu, Kazuki; Lee, Yoonkyung

    2015-05-01

    Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions including exponential loss, the optimal ranking function can be represented as a ratio of weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods such as proportional odds model in statistics with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with simulation study and real data analysis.

  9. Deterioration of abstract reasoning ability in mild cognitive impairment and Alzheimer's disease: correlation with regional grey matter volume loss revealed by diffeomorphic anatomical registration through exponentiated lie algebra analysis.

    PubMed

    Yoshiura, Takashi; Hiwatashi, Akio; Yamashita, Koji; Ohyagi, Yasumasa; Monji, Akira; Takayama, Yukihisa; Kamano, Norihiro; Kawashima, Toshiro; Kira, Jun-Ichi; Honda, Hiroshi

    2011-02-01

    To determine which brain regions are relevant to deterioration in abstract reasoning as measured by Raven's Colored Progressive Matrices (CPM) in the context of dementia. MR images of 37 consecutive patients including 19 with Alzheimer's disease (AD) and 18 with amnestic mild cognitive impairment (aMCI) were retrospectively analyzed. All patients were administered the CPM. Regional grey matter (GM) volume was evaluated according to the regimens of voxel-based morphometry, during which a non-linear registration algorithm called Diffeomorphic Anatomical Registration Through Exponentiated Lie algebra was employed. Multiple regression analyses were used to map the regions where GM volumes were correlated with CPM scores. The strongest correlation with CPM scores was seen in the left middle frontal gyrus while a region with the largest volume was identified in the left superior temporal gyrus. Significant correlations were seen in 14 additional regions in the bilateral cerebral hemispheres and right cerebellum. Deterioration of abstract reasoning ability in AD and aMCI measured by CPM is related to GM loss in multiple regions, which is in close agreement with the results of previous activation studies.

  10. A Novel Method for Age Estimation in Solar-Type Stars Through GALEX FUV Magnitudes

    NASA Astrophysics Data System (ADS)

    Ho, Kelly; Subramonian, Arjun; Smith, Graeme; Shouru Shieh

    2018-01-01

    Utilizing an inverse association known to exist between Galaxy Evolution Explorer (GALEX) far ultraviolet (FUV) magnitudes and the chromospheric activity of F, G, and K dwarfs, we explored a method of age estimation in solar-type stars through GALEX FUV magnitudes. Sample solar-type star data were collected from refereed publications and filtered by B-V and absolute visual magnitude to ensure similarities in temperature and luminosity to the Sun. We determined FUV-B and calculated a residual index Q for all the stars, using the temperature-induced upper bound on FUV-B as the fiducial. Plotting current age estimates for the stars against Q, we discovered a strong and significant association between the variables. By applying a log-linear transformation to the data to produce a strong correlation between Q and loge Age, we confirmed the association between Q and age to be exponential. Thus, least-squares regression was used to generate an exponential model relating Q to age in solar-type stars, which can be used by astronomers. The Q-method of stellar age estimation is simple and more efficient than existing spectroscopic methods and has applications to galactic archaeology and stellar chemical composition analysis.

  11. Practical application of cure mixture model for long-term censored survivor data from a withdrawal clinical trial of patients with major depressive disorder.

    PubMed

    Arano, Ichiro; Sugimoto, Tomoyuki; Hamasaki, Toshimitsu; Ohno, Yuko

    2010-04-23

    Survival analysis methods such as the Kaplan-Meier method, log-rank test, and Cox proportional hazards regression (Cox regression) are commonly used to analyze data from randomized withdrawal studies in patients with major depressive disorder. Unfortunately, such common methods may be inappropriate when long-term censored relapse-free times appear in the data, because these methods assume that if complete follow-up were possible for all individuals, each would eventually experience the event of interest. In this paper, to analyse data including such long-term censored relapse-free times, we discuss a semi-parametric cure regression (Cox cure regression), which combines a logistic formulation for the probability of occurrence of an event with a Cox proportional hazards specification for the time of occurrence of the event. In specifying the treatment's effect on disease-free survival, we consider the fraction of long-term survivors and the risks associated with a relapse of the disease. In addition, we develop a tree-based method for time-to-event data to identify groups of patients with differing prognoses (cure survival CART). Although analysis methods typically adapt the log-rank statistic for recursive partitioning procedures, the method applied here uses a likelihood ratio (LR) test statistic from a fit of a cure survival regression assuming exponential and Weibull distributions for the latency time of relapse. The method is illustrated using data from a sertraline randomized withdrawal study in patients with major depressive disorder. We conclude that Cox cure regression reveals who may be cured and how the treatment and other factors affect the cure incidence and the relapse time of uncured patients, and that the cure survival CART output provides easily understandable and interpretable information, useful both in identifying groups of patients with differing prognoses and in utilizing Cox cure regression models, leading to meaningful interpretations.

  12. Viability estimation of pepper seeds using time-resolved photothermal signal characterization

    NASA Astrophysics Data System (ADS)

    Kim, Ghiseok; Kim, Geon-Hee; Lohumi, Santosh; Kang, Jum-Soon; Cho, Byoung-Kwan

    2014-11-01

    We used an infrared thermal signal measurement system together with photothermal signal and image reconstruction techniques to estimate the viability of pepper seeds. Photothermal signals from healthy and aged seeds were measured after seven aging periods (24, 48, 72, 96, 120, 144, and 168 h) using an infrared camera and analyzed by a regression method. The photothermal signals were regressed using a two-term exponential decay curve with two amplitudes and two time variables (lifetimes) as regression coefficients. The regression coefficients of the fitted curve showed significant differences between the seed groups, depending on the aging times. In addition, the viability of a single seed was estimated by imaging its regression coefficients, reconstructed from the measured photothermal signals. The time-resolved photothermal characteristics, along with the regression coefficient images, can be used to discriminate aged or dead pepper seeds from healthy seeds.
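
    The two-term exponential regression the abstract refers to can be reproduced with a standard nonlinear least-squares fit. A minimal sketch on a synthetic decay; the amplitudes and lifetimes below are invented, not measured seed data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_term_decay(t, a1, tau1, a2, tau2):
        # S(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2)
        return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

    t = np.linspace(0, 10, 200)                      # time axis (arbitrary units)
    rng = np.random.default_rng(1)
    signal = two_term_decay(t, 3.0, 0.8, 1.0, 4.0) + 0.02 * rng.standard_normal(t.size)

    popt, _ = curve_fit(two_term_decay, t, signal, p0=[1.0, 1.0, 1.0, 5.0])
    a1, tau1, a2, tau2 = popt
    print(f"A1={a1:.2f}, tau1={tau1:.2f}, A2={a2:.2f}, tau2={tau2:.2f}")
    ```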

  13. Factors affecting the dissipation of pharmaceuticals in freshwater sediments.

    PubMed

    Al-Khazrajy, Omar S A; Bergström, Ed; Boxall, Alistair B A

    2018-03-01

    Degradation is one of the key processes governing the impact of pharmaceuticals in the aquatic environment. Most studies on the degradation of pharmaceuticals have focused on soil and sludge, with fewer exploring persistence in aquatic sediments. We investigated the dissipation of six pharmaceuticals from different therapeutic classes in a range of sediment types. Dissipation of each pharmaceutical was found to follow first-order exponential decay. Half-lives in the sediments ranged from 9.5 d (atenolol) to 78.8 d (amitriptyline). Under sterile conditions, the persistence of pharmaceuticals was considerably longer. Stepwise multiple linear regression analysis was performed to explore the relationships between half-lives of the pharmaceuticals, sediment physicochemical properties, and sorption coefficients for the compounds. Sediment clay, silt, and organic carbon content and microbial activity were the predominant factors related to the degradation rates of diltiazem, cimetidine, and ranitidine. Regression analysis failed to highlight a key property which may be responsible for observed differences in the degradation of the other pharmaceuticals. The present results suggest that the degradation rate of pharmaceuticals in sediments is determined by different factors and processes and does not exclusively depend on a single sediment parameter. Environ Toxicol Chem 2018;37:829-838. © 2017 SETAC.
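
    First-order dissipation of the kind reported above is commonly fitted on the log scale, with the half-life following as ln(2)/k. A short sketch with invented concentration data:

    ```python
    import numpy as np

    t_days = np.array([0, 2, 5, 10, 20, 40])                 # sampling days
    conc = np.array([100.0, 92.0, 83.0, 66.0, 45.0, 20.0])   # hypothetical residues

    # ln C(t) = ln C0 - k*t, so the slope of a linear fit gives -k.
    neg_k, ln_c0 = np.polyfit(t_days, np.log(conc), 1)
    k = -neg_k
    print(f"k = {k:.4f} d^-1, half-life = {np.log(2) / k:.1f} d")
    ```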

  14. Correlating the stretched-exponential and super-Arrhenius behaviors in the structural relaxation of glass-forming liquids.

    PubMed

    Wang, Lianwen; Li, Jiangong; Fecht, Hans-Jörg

    2011-04-20

    Following the report of a single-exponential activation behavior behind the super-Arrhenius structural relaxation of glass-forming liquids in our preceding paper, we find that the non-exponentiality in the structural relaxation of glass-forming liquids is straightforwardly determined by the relaxation time and can be calculated from the measured relaxation data. Comparisons between the calculated and measured non-exponentialities for typical glass-forming liquids, from fragile to intermediate, convincingly support the present analysis. Hence the origin of the non-exponentiality and its correlation with liquid fragility become clearer.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kunkun, E-mail: ktg@illinois.edu; Congedo, Pietro M.

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
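
    The retention step (level 3 above) can be illustrated with a toy forward stepwise regression that greedily keeps the candidate basis terms most correlated with the current residual. The sketch shows only that idea on a made-up three-variable model; it omits the PDD/ANOVA machinery, the adaptive truncation, and the active-dimension technique of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(-1, 1, (200, 3))   # three input random variables
    y = 1.0 + 2.0 * x[:, 0] + 0.5 * x[:, 1] * x[:, 2] + 0.01 * rng.standard_normal(200)

    # Candidate basis: constant, linear terms, and pairwise interactions.
    basis = [np.ones(200)] + [x[:, i] for i in range(3)] + \
            [x[:, i] * x[:, j] for i in range(3) for j in range(i + 1, 3)]
    names = ["1", "x0", "x1", "x2", "x0*x1", "x0*x2", "x1*x2"]

    selected, resid = [], y.copy()
    for _ in range(3):  # retain the three most influential terms
        scores = [abs(b @ resid) / np.linalg.norm(b) for b in basis]
        selected.append(int(np.argmax(scores)))
        A = np.column_stack([basis[j] for j in selected])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef                 # refit and update the residual
    print([names[j] for j in selected], coef.round(3))
    ```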

  16. Growth and mortality of larval Myctophum affine (Myctophidae, Teleostei).

    PubMed

    Namiki, C; Katsuragawa, M; Zani-Teixeira, M L

    2015-04-01

    The growth and mortality rates of Myctophum affine larvae were analysed based on samples collected during the austral summer and winter of 2002 from south-eastern Brazilian waters. The larvae ranged in size from 2.75 to 14.00 mm standard length (L_S). Daily increment counts from 82 sagittal otoliths showed that the age of M. affine ranged from 2 to 28 days. Three models were applied to estimate the growth rate: linear regression, an exponential model and the Laird-Gompertz model. The exponential model best fitted the data, and L_0 values from the exponential and Laird-Gompertz models were close to the smallest larva reported in the literature (c. 2.5 mm L_S). The average growth rate (0.33 mm day^-1) was intermediate among lanternfishes. The mortality rate (12%) during the larval period was below average compared with other marine fish species but similar to some epipelagic fishes that occur in the area. © 2015 The Fisheries Society of the British Isles.

  17. Kozeny-Carman permeability relationship with disintegration process predicted from early dissolution profiles of immediate release tablets.

    PubMed

    Kumari, Parveen; Rathi, Pooja; Kumar, Virender; Lal, Jatin; Kaur, Harmeet; Singh, Jasbir

    2017-07-01

    This study was oriented toward the disintegration profiling of diclofenac sodium (DS) immediate-release (IR) tablets and the development of its relationship with medium permeability k_perm based on the Kozeny-Carman equation. Batches (L1-L9) of DS IR tablets with different porosities and specific surface areas were prepared at different compression forces and evaluated for porosity, in vitro dissolution and particle-size analysis of the disintegrated mass. The k_perm was calculated from porosities and specific surface areas, and disintegration profiles were predicted from the dissolution profiles of the IR tablets by the stripping/residual method. The disintegration profiles were subjected to exponential regression to find the respective disintegration equations and rate constants k_d. Batches L1 and L2 showed the fastest disintegration rates, as evident from their bi-exponential equations, while the rest of the batches, L3-L9, exhibited first-order or mono-exponential disintegration kinetics. The 95% confidence interval (CI_95%) revealed significant differences between the k_d values of different batches except L4 and L6. Similar results were also observed for the dissolution profiles of the IR tablets by the similarity (f_2) test. The final relationship between k_d and k_perm was found to be hyperbolic, signifying the initial effect of k_perm on the disintegration rate. The results showed that disintegration profiling is possible because a relationship exists between k_d and k_perm. The latter, being relatable to porosity and specific surface area, can be determined by nondestructive tests.
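
    For reference, the Kozeny-Carman relation used to obtain k_perm from porosity and specific surface area can be written as below. The Kozeny constant c = 5 is a common textbook choice; whether it applies to these tablets is an assumption here, as are the example inputs.

    ```python
    def kozeny_carman(porosity: float, specific_surface: float, c: float = 5.0) -> float:
        """Permeability k_perm = eps^3 / (c * S^2 * (1 - eps)^2)."""
        return porosity ** 3 / (c * specific_surface ** 2 * (1.0 - porosity) ** 2)

    # Illustrative inputs only (porosity fraction, specific surface area).
    print(kozeny_carman(0.15, 0.05))
    ```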

  18. Squared exponential covariance function for prediction of hydrocarbon in seabed logging application

    NASA Astrophysics Data System (ADS)

    Mukhtar, Siti Mariam; Daud, Hanita; Dass, Sarat Chandra

    2016-11-01

    Seabed Logging technology (SBL) has progressively emerged as one of the most demanded technologies in the Exploration and Production (E&P) industry. Hydrocarbon prediction in deep water areas is a crucial task for a driller in any oil and gas company, as drilling cost is very expensive. Simulation data generated by Computer Software Technology (CST) are used to predict the presence of hydrocarbon, where the models replicate a real SBL environment. These models indicate that hydrocarbon-filled reservoirs are more resistive than the surrounding water-filled sediments. As hydrocarbon depth increases, it becomes more challenging to differentiate data with and without hydrocarbon. MATLAB is used for data extraction and for the curve fitting process using Gaussian processes (GP). GP methods can be classified into regression and classification problems; this work focuses only on the Gaussian process regression (GPR) problem. The most popular choice of covariance function for GPR is the squared exponential (SE), as it provides stability and probabilistic prediction on large amounts of data. Hence, SE is used to predict the presence or absence of hydrocarbon in the reservoir from the generated data.
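
    A minimal GPR sketch with the squared exponential covariance k(x, x') = var * exp(-(x - x')^2 / (2 * length^2)), the kernel named above. The training curve is a synthetic stand-in for the extracted SBL responses, and the hyperparameters are fixed rather than optimized.

    ```python
    import numpy as np

    def se_kernel(a, b, var=1.0, length=1.0):
        # k(x, x') = var * exp(-(x - x')^2 / (2 * length^2))
        d = a[:, None] - b[None, :]
        return var * np.exp(-0.5 * (d / length) ** 2)

    rng = np.random.default_rng(3)
    x_train = np.linspace(0, 5, 20)
    y_train = np.sin(x_train) + 0.05 * rng.standard_normal(20)
    x_test = np.linspace(0, 5, 100)

    K = se_kernel(x_train, x_train) + 0.05 ** 2 * np.eye(20)   # kernel + noise term
    K_s = se_kernel(x_test, x_train)
    mean = K_s @ np.linalg.solve(K, y_train)                   # predictive mean
    cov = se_kernel(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
    std = np.sqrt(np.clip(np.diag(cov), 0, None))              # predictive uncertainty
    print(mean[:3], std[:3])
    ```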

  19. A secure distributed logistic regression protocol for the detection of rare adverse drug events

    PubMed Central

    El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat

    2013-01-01

    Background There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. Objective To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. Methods We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. Results The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. Conclusion The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through generalized estimating equations, and to accommodate other link functions by extending it to generalized linear models. PMID:22871397

  20. A secure distributed logistic regression protocol for the detection of rare adverse drug events.

    PubMed

    El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat

    2013-05-01

    There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through generalized estimating equations, and to accommodate other link functions by extending it to generalized linear models.
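
    To make the distributed setting concrete, the sketch below has each site share only its aggregate logistic gradient with an analysis center, which is the general horizontally partitioned idea. The paper's actual contribution, the secure multi-party computation layer that protects even these aggregates, is not shown, and the data are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def make_site(n=100):
        # Simulated site data; the latent signal uses coefficients [1, -1, 0.5].
        X = rng.standard_normal((n, 3))
        y = (X @ np.array([1.0, -1.0, 0.5]) + rng.standard_normal(n) > 0).astype(float)
        return X, y

    sites = [make_site() for _ in range(3)]
    n_total = sum(len(y) for _, y in sites)

    beta = np.zeros(3)
    for _ in range(500):                   # gradient descent at the analysis center
        grad = np.zeros(3)
        for X, y in sites:                 # each site contributes only X.T @ (p - y)
            p = 1 / (1 + np.exp(-X @ beta))
            grad += X.T @ (p - y)
        beta -= 0.05 * grad / n_total
    print(beta)                            # matches a pooled-data logistic fit
    ```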

  1. Correlation of porous and functional properties of food materials by NMR relaxometry and multivariate analysis.

    PubMed

    Haiduc, Adrian Marius; van Duynhoven, John

    2005-02-01

    The porous properties of food materials are known to determine important macroscopic parameters such as water-holding capacity and texture. In conventional approaches, understanding is built from a long process of establishing macrostructure-property relations in a rational manner. Only recently were multivariate approaches introduced for the same purpose. The model systems used here are oil-in-water emulsions, stabilised by protein, which form complex structures consisting of fat droplets dispersed in a porous protein phase. NMR time-domain decay curves were recorded for emulsions with varied levels of fat, protein and water. Hardness, dry matter content and water drainage were determined by classical means and analysed for correlation with the NMR data using multivariate techniques. Partial least squares can calibrate and predict these properties directly from the continuous NMR exponential decays, yielding regression coefficients higher than 82%. However, the calibration coefficients themselves belong to the continuous exponential domain and do little to explain the connection between NMR data and emulsion properties. Transformation of the NMR decays into a discrete domain with non-negative least squares permits the use of multilinear regression (MLR) on the resulting amplitudes as predictors and hardness or water drainage as responses. The MLR coefficients show that hardness is highly correlated with the components that have T2 distributions of about 20 and 200 ms, whereas water drainage is correlated with components that have T2 distributions around 400 and 1800 ms. These T2 distributions very likely correspond to water populations present in pores with different sizes and/or wall mobility. The results for the emulsions studied demonstrate that NMR time-domain decays can be employed to predict properties and to provide insight into the underlying microstructural features.
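
    The discrete-domain transformation mentioned above amounts to solving a non-negative least-squares problem over a grid of candidate T2 values. A sketch on a simulated two-component decay; the grid, acquisition times and noise level are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(5)
    t = np.arange(1, 65) * 10.0                               # acquisition times (ms)
    decay = 0.6 * np.exp(-t / 20) + 0.4 * np.exp(-t / 200)    # two relaxing pools
    decay += 0.005 * rng.standard_normal(t.size)

    T2_grid = np.logspace(0, 3.5, 60)                         # candidate T2 values (ms)
    K = np.exp(-t[:, None] / T2_grid[None, :])                # exponential kernel matrix
    amps, _ = nnls(K, decay)                                  # non-negative amplitudes

    for T2, a in zip(T2_grid[amps > 0.02], amps[amps > 0.02]):
        print(f"T2 ~ {T2:7.1f} ms, amplitude {a:.2f}")
    ```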

  2. Follow-up of the air pollution and the human male-to-female ratio analysis in São Paulo, Brazil: a time series study

    PubMed Central

    Miraglia, Simone Georges El Khouri; Veras, Mariana Matera; Amato-Lourenço, Luis Fernando; Rodrigues-Silva, Fernando; Saldiva, Paulo Hilário Nascimento

    2013-01-01

    Objectives In order to assess whether ambient air pollution in urban areas could be related to alterations in the male/female ratio, this study aims to evaluate changes in ambient particulate matter (PM10) concentrations after the implementation of pollution control programmes in São Paulo city, together with the secondary sex ratio (SSR). Design and methods A time series study was conducted. São Paulo's districts were stratified according to PM10 concentration levels, which were used as a marker of overall air pollution. The male ratio was chosen to represent the secondary sex ratio (SSR = total male births/total births). The SSR data from each area were analysed according to the time variation and PM10 concentration areas using descriptive statistics. The strength of the association between the annual average PM10 concentration and the SSR was assessed through exponential regression, with a statistical significance level of p<0.05. Results The exponential regression showed a negative and significant association between PM10 and SSR. The SSR varied from 51.4% to 50.7% in São Paulo in the analysed period (2000–2007). Considering the average PM10 concentration in São Paulo city of 44.72 μg/m3 in the study period, the SSR decline reached almost 4.37%, equivalent to 30 934 fewer male births. Conclusions Ambient levels of PM10 are negatively associated with changes in the SSR. Therefore, we can speculate that higher levels of particulate pollution could be related to increased rates of female births. PMID:23892420

  3. Line transect estimation of population size: the exponential case with grouped data

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1979-01-01

    Gates, Marshall, and Olson (1968) investigated the line transect method of estimating grouse population densities in the case where sighting probabilities are exponential. This work is followed by a simulation study in Gates (1969). A general overview of line transect analysis is presented by Burnham and Anderson (1976). These articles all deal with the ungrouped data case. In the present article, an analysis of line transect data is formulated under the Gates framework of exponential sighting probabilities and in the context of grouped data.

  4. Continuous-variable quantum Gaussian process regression and quantum singular value decomposition of nonsparse low-rank matrices

    NASA Astrophysics Data System (ADS)

    Das, Siddhartha; Siopsis, George; Weedbrook, Christian

    2018-02-01

    With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers, under certain assumptions regarding the distribution of data and the availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the time to compute. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method for nonsparse low-rank matrices, which forms an important subroutine in our Gaussian process regression algorithm.

  5. Reliability and sensitivity analysis of a system with multiple unreliable service stations and standby switching failures

    NASA Astrophysics Data System (ADS)

    Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung

    2007-07-01

    This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability R_Y(t) and the mean time to system failure (MTTF) are derived. Sensitivity and relative sensitivity analyses of the system reliability and the mean time to failure with respect to system parameters are also investigated.

  6. Relationships between Heavy Metal Concentrations in Roadside Topsoil and Distance to Road Edge Based on Field Observations in the Qinghai-Tibet Plateau, China

    PubMed Central

    Yan, Xuedong; Gao, Dan; Zhang, Fan; Zeng, Chen; Xiang, Wang; Zhang, Man

    2013-01-01

    This study investigated the spatial distribution of copper (Cu), zinc (Zn), cadmium (Cd), lead (Pb), chromium (Cr), cobalt (Co), nickel (Ni) and arsenic (As) in roadside topsoil in the Qinghai-Tibet Plateau and evaluated the potential environmental risks of these roadside heavy metals due to traffic emissions. A total of 120 topsoil samples were collected along five road segments in the Qinghai-Tibet Plateau. The nonlinear regression method was used to formulate the relationship between the metal concentrations in roadside soils and roadside distance. The Hakanson potential ecological risk index method was applied to assess the degrees of heavy metal contamination. The regression results showed that both the heavy metal concentrations and their ecological risk indices decreased exponentially with increasing roadside distance. The large R² values of the regression models indicate that the exponential regression method can suitably describe the relationship between heavy metal accumulation and roadside distance. For the entire study region, there was a moderate level of potential ecological risk within a 10 m roadside distance. However, Cd was the only prominent heavy metal which posed a potential hazard to the local soil ecosystem. Overall, the rank of risk contribution to the local environment among the eight heavy metals was Cd > As > Ni > Pb > Cu > Co > Zn > Cr. Considering that Cd is a more hazardous heavy metal than the other elements for public health, the local government should pay special attention to this traffic-related environmental issue. PMID:23439515

  7. Multi-exponential analysis of magnitude MR images using a quantitative multispectral edge-preserving filter.

    PubMed

    Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre

    2003-03-01

    A solution for discrete multi-exponential analysis of T_2 relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a non-linear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.

  8. A Stochastic Super-Exponential Growth Model for Population Dynamics

    NASA Astrophysics Data System (ADS)

    Avila, P.; Rekker, A.

    2010-11-01

    A super-exponential growth model with environmental noise has been studied analytically. A super-exponential growth rate is a property of dynamical systems exhibiting endogenous nonlinear positive feedback, i.e., of self-reinforcing systems. Environmental noise acts on the growth rate multiplicatively and is assumed to be Gaussian white noise in the Stratonovich interpretation. An analysis of the stochastic super-exponential growth model, with derivations of exact analytical formulae for the conditional probability density and the mean value of the population abundance, is presented. Interpretations and various applications of the results are discussed.

  9. Water quality trend analysis for the Karoon River in Iran.

    PubMed

    Naddafi, K; Honari, H; Ahmadi, M

    2007-11-01

    The Karoon River basin, with a basin area of 67,000 km², is located in the southern part of Iran. Discharge and water quality variables were monitored monthly at the Gatvand (1967-2005) and Khorramshahr (1969-2005) stations of the Karoon River. In this paper, the time series of monthly values of the water quality parameters and the discharge were analyzed using statistical methods to detect trends and to identify the best-fitting models. The Kolmogorov-Smirnov test was used to select the theoretical distribution which best fitted the data. Simple regression was used to examine the concentration-time relationships. The concentration-time relationships showed better correlation at the Khorramshahr station than at the Gatvand station. The exponential model better describes the concentration-time relationships at the Khorramshahr station, whereas at the Gatvand station the logarithmic model fits better. The correlation coefficients are positive for all of the variables at the Khorramshahr station; at the Gatvand station all of the variables are positive except magnesium (Mg2+), bicarbonates (HCO3-) and temporary hardness, which show a decreasing relationship. Overall, the logarithmic and exponential models better describe the concentration-time relationships for the two stations.

  10. Mean Excess Function as a method of identifying sub-exponential tails: Application to extreme daily rainfall

    NASA Astrophysics Data System (ADS)

    Nerantzaki, Sofia; Papalexiou, Simon Michael

    2017-04-01

    Precisely identifying the distribution tail of a geophysical variable is difficult, or even impossible. First, the tail is the part of the distribution for which the least empirical information is available; second, a universally accepted definition of tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution, as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be generally categorized into two major families, i.e., sub-exponential (heavy-tailed) and hyper-exponential (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool for assessing which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. The validation of the method using Monte Carlo techniques on four theoretical distributions covering the major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records from all over the world with sample sizes over 100 years were examined, revealing that heavy-tailed distributions can describe rainfall extremes more accurately.
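
    The MEF itself is simple to compute: for each threshold u, average the exceedances (x - u) of the sample values above u, then regress the mean excess on u. A slope near zero is consistent with an exponential tail, while an increasing trend points to a sub-exponential (heavy) tail. The sketch below uses a simulated Pareto sample, not the rainfall records.

    ```python
    import numpy as np

    x = np.random.default_rng(6).pareto(2.0, 5000) + 1.0   # heavy-tailed sample
    thresholds = np.quantile(x, np.linspace(0.50, 0.95, 20))
    mean_excess = [np.mean(x[x > u] - u) for u in thresholds]

    slope, intercept = np.polyfit(thresholds, mean_excess, 1)
    print(f"MEF slope = {slope:.3f}  (near 0: exponential tail; > 0: heavy tail)")
    ```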

  11. Dynamic Network Logistic Regression: A Logistic Choice Analysis of Inter- and Intra-Group Blog Citation Dynamics in the 2004 US Presidential Election

    PubMed Central

    2013-01-01

    Methods for analysis of network dynamics have seen great progress in the past decade. This article shows how Dynamic Network Logistic Regression techniques (a special case of the Temporal Exponential Random Graph Models) can be used to implement decision theoretic models for network dynamics in a panel data context. We also provide practical heuristics for model building and assessment. We illustrate the power of these techniques by applying them to a dynamic blog network sampled during the 2004 US presidential election cycle. This is a particularly interesting case because it marks the debut of Internet-based media such as blogs and social networking web sites as institutionally recognized features of the American political landscape. Using a longitudinal sample of all Democratic National Convention/Republican National Convention–designated blog citation networks, we are able to test the influence of various strategic, institutional, and balance-theoretic mechanisms as well as exogenous factors such as seasonality and political events on the propensity of blogs to cite one another over time. Using a combination of deviance-based model selection criteria and simulation-based model adequacy tests, we identify the combination of processes that best characterizes the choice behavior of the contending blogs. PMID:24143060

  12. A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications

    PubMed Central

    Austin, Peter C.

    2017-01-01

    Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata). PMID:29307954

  13. A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications.

    PubMed

    Austin, Peter C

    2017-08-01

    Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log-log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata).
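
    The piecewise exponential approach described above can be made concrete by expanding each subject into person-interval records and fitting a Poisson GLM with a log-exposure offset. A minimal single-level sketch with simulated data (no random effects), assuming statsmodels and pandas are available:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(9)
    n = 500
    x = rng.integers(0, 2, n)                         # binary covariate
    time = rng.exponential(1 / np.exp(-1.0 + 0.7 * x))
    event = (time < 3.0).astype(int)                  # administrative censoring at t=3
    time = np.minimum(time, 3.0)

    # Expand each subject into person-interval records.
    cuts = [0.0, 0.5, 1.0, 2.0, 3.0]
    rows = []
    for ti, ei, xi in zip(time, event, x):
        for lo, hi in zip(cuts[:-1], cuts[1:]):
            if ti <= lo:
                break
            rows.append({"interval": f"[{lo},{hi})", "x": xi,
                         "d": int(bool(ei) and ti <= hi),   # event in this interval?
                         "exposure": min(ti, hi) - lo})     # time at risk in interval
    df = pd.DataFrame(rows)

    # Constant hazard per interval = interval dummies plus a log(exposure) offset.
    fit = smf.glm("d ~ C(interval) + x", data=df,
                  offset=np.log(df["exposure"]),
                  family=sm.families.Poisson()).fit()
    print(fit.params["x"])   # log-hazard ratio; the simulated truth is 0.7
    ```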

  14. The dependence of PGA and PGV on distance and magnitude inferred from Northern California ShakeMap data

    USGS Publications Warehouse

    Boatwright, J.; Bundock, H.; Luetgert, J.; Seekins, L.; Gee, L.; Lombard, P.

    2003-01-01

    We analyze peak ground velocity (PGV) and peak ground acceleration (PGA) data from 95 moderate (3.5 ≤ M < 5.5) and a smaller number of large (M ≥ 5.5) earthquakes recorded in Northern California. At distances greater than 100 km, the peak motions attenuate more rapidly than a simple power law (that is, r^-γ) can fit. Instead, we use an attenuation function that combines a fixed power law (r^-0.7) with a fitted exponential dependence on distance, which is estimated as exp(-0.0063r) and exp(-0.0073r) for PGV and PGA, respectively, for moderate earthquakes. We regress log(PGV) and log(PGA) as functions of distance and magnitude. We assume that the scaling of log(PGV) and log(PGA) with magnitude can differ for moderate and large earthquakes, but must be continuous. Because the frequencies that carry PGV and PGA can vary with earthquake size for large earthquakes, the regression for large earthquakes incorporates a magnitude dependence in the exponential attenuation function. We fix the scaling break between moderate and large earthquakes at M 5.5; log(PGV) and log(PGA) scale as 1.06M and 1.00M, respectively, for moderate earthquakes and 0.58M and 0.31M for large earthquakes.

  15. Fresh and Dry Mass Estimates of Hermetia illucens (Linnaeus, 1758) (Diptera: Stratiomyidae) Larvae Associated with Swine Decomposition in Urban Area of Central Amazonia.

    PubMed

    Barros, L M; Martins, R T; Ferreira-Keppler, R L; Gutjahr, A L N

    2017-08-04

    Information on biomass is essential for calculating growth rates and may be employed in assessing the medicolegal and economic importance of Hermetia illucens (Linnaeus, 1758). Although biomass is essential to understanding many ecological processes, it is not easily measured. Biomass may be determined by direct weighing or indirectly through regression models of fresh/dry mass versus body dimensions. In this study, we evaluated the association between morphometry and fresh/dry mass of immature H. illucens using linear, exponential, and power regression models. We measured the width and length of the cephalic capsule, overall body length, and width of the largest abdominal segment of 280 larvae. Overall body length and width of the largest abdominal segment were the best predictors of biomass. Exponential models best fitted body dimensions and biomass (both fresh and dry), followed by power and linear models. In all models, fresh and dry biomass were strongly correlated (>75%). Values estimated by the models did not differ from observed ones, and prediction power varied from 27 to 79%. Accordingly, the correspondence between biomass and body dimensions should facilitate and motivate the development of applied studies involving H. illucens in the Amazon region.

  16. Regression model analysis of the decreasing trend of cesium-137 concentration in the atmosphere since the Fukushima accident.

    PubMed

    Kitayama, Kyo; Ohse, Kenji; Shima, Nagayoshi; Kawatsu, Kencho; Tsukada, Hirofumi

    2016-11-01

    The decreasing trend of the atmospheric ¹³⁷Cs concentration in two cities in Fukushima prefecture was analyzed by a regression model to clarify the relation between the parameter of the decrease in the model and the trend, and to compare the trend with that after the Chernobyl accident. The ¹³⁷Cs particle concentration measurements were conducted at an urban Fukushima site and a rural Date site from September 2012 to June 2015. The ¹³⁷Cs particle concentrations were separated into two groups: particles of more than 1.1 μm aerodynamic diameter (coarse particles) and particles with aerodynamic diameter lower than 1.1 μm (fine particles). The averages of the measured concentrations were 0.1 mBq m⁻³ at the Fukushima and Date sites. The measured concentrations were applied in the regression model, which decomposed them into two components: trend and seasonal variation. The trend concentration included parameters for the constant and the exponential decrease. The parameter for the constant was slightly different between the Fukushima and Date sites. The parameter for the exponential decrease was similar for all the cases, and much higher than the value of the physical radioactive decay, except for the concentration in the fine particles at the Date site. The annual decreasing rates of the ¹³⁷Cs concentration evaluated by the trend concentration ranged from 44 to 53% y⁻¹, with an average and standard deviation of 49 ± 8% y⁻¹ for all the cases in 2013. In the other years, the decreasing rates also varied slightly for all cases. These results indicated that the decreasing trend of the ¹³⁷Cs concentration was nearly unchanged across location and ground contamination level in the three years after the accident. The ¹³⁷Cs activity per aerosol particle mass also decreased with the same trend as the ¹³⁷Cs concentration in the atmosphere. The results indicated that the decreasing trend of the atmospheric ¹³⁷Cs concentration was related to the reduction of the ¹³⁷Cs concentration in resuspended particles. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Mid-Infrared Lifetime Imaging for Viability Evaluation of Lettuce Seeds Based on Time-Dependent Thermal Decay Characterization

    PubMed Central

    Kim, Ghiseok; Kim, Geon Hee; Ahn, Chi-Kook; Yoo, Yoonkyu; Cho, Byoung-Kwan

    2013-01-01

    An infrared lifetime thermal imaging technique for the measurement of lettuce seed viability was evaluated. Thermal emission signals from mid-infrared images of healthy seeds and seeds aged for 24, 48, and 72 h were obtained and reconstructed using regression analysis. The emission signals were fitted with a two-term exponential model that had two amplitudes and two time variables as lifetime parameters. The lifetime thermal decay parameters were significantly different for seeds with different aging times. Single-seed viability was visualized using thermal lifetime images constructed from the calculated lifetime parameter values. The time-dependent thermal signal decay characteristics, along with the decay amplitude and delay time images, can be used to distinguish aged lettuce seeds from normal seeds. PMID:23529120

  18. Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.

    PubMed

    O'Connor, William; Runquist, Elizabeth A

    2008-07-01

    Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to bypass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, the cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
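
    The window-selection idea can be sketched as follows: slide a fixed-width window along the log-fluorescence curve, keep the window whose linear fit has the least squared error, and read the amplification efficiency off the slope. This is a simplified toy, not the Q-Anal algorithm itself; the simulated trace and the window width are assumptions.

    ```python
    import numpy as np

    cycles = np.arange(1, 41, dtype=float)
    f = 1e-3 * 1.9 ** cycles          # exponential phase with efficiency 0.9
    f = f / (1 + f / 50.0)            # crude plateau to mimic late cycles
    logf = np.log10(f)

    best_err, best_slope = np.inf, 0.0
    for start in range(len(cycles) - 6):              # 6-cycle windows
        w = slice(start, start + 6)
        slope, icept = np.polyfit(cycles[w], logf[w], 1)
        err = np.sum((logf[w] - (slope * cycles[w] + icept)) ** 2)
        if err < best_err:                            # most linear window wins
            best_err, best_slope = err, slope
    print(f"estimated efficiency ~ {10 ** best_slope - 1:.2f}")
    ```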

  19. Confronting quasi-exponential inflation with WMAP seven

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pal, Barun Kumar; Pal, Supratik; Basu, B., E-mail: barunp1985@rediffmail.com, E-mail: pal@th.physik.uni-bonn.de, E-mail: banasri@isical.ac.in

    2012-04-01

    We confront quasi-exponential models of inflation with the WMAP seven-year dataset using the Hamilton-Jacobi formalism. With a phenomenological Hubble parameter representing quasi-exponential inflation, we develop the formalism and subject the analysis to confrontation with WMAP seven using the publicly available code CAMB. The observable parameters are found to fare extremely well with WMAP seven. We also obtain a ratio of tensor to scalar amplitudes which may be detectable by PLANCK.

  20. Robust Image Regression Based on the Extended Matrix Variate Power Exponential Distribution of Dependent Noise.

    PubMed

    Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu

    2017-09-01

    Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This distribution has heavy-tailed regions and can be used to describe a matrix pattern of l×m dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with L_q regularization. The alternating direction method of multipliers is applied to solve this model. To obtain a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.

  1. The effect of pore size and porosity on mechanical properties and biological response of porous titanium scaffolds.

    PubMed

    Torres-Sanchez, C; Al Mushref, F R A; Norrito, M; Yendall, K; Liu, Y; Conway, P P

    2017-08-01

    The effect of pore size and porosity on elastic modulus, strength, cell attachment and cell proliferation was studied for Ti porous scaffolds manufactured via powder metallurgy and sintering. Porous scaffolds were prepared in two ranges of porosities so that their mechanical properties could mimic those of cortical and trabecular bone, respectively. Space-holder engineered pore size distributions were carefully determined to study the impact that small changes in pore size may have on mechanical and biological behaviour. The Young's moduli and compressive strengths were correlated with the relative porosity. Linear, power and exponential regressions were studied to confirm the predictability in the characterisation of the manufactured scaffolds and therefore establish them as a design tool for customisation of devices to suit patients' needs. The correlations were stronger for the linear and power law regressions and poor for the exponential regressions. The optimal pore microarchitecture (i.e. pore size and porosity) for scaffolds to be used in bone grafting for cortical bone was set to <212 μm with volumetric porosity values of 27-37%, and for trabecular tissues to 300-500 μm with volumetric porosity values of 54-58%. The pore size range 212-300 μm with volumetric porosity values of 38-56% was reported as the least favourable to cell proliferation in the longitudinal study of 12 days of incubation. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Bi-exponential T2 analysis of healthy and diseased Achilles tendons: an in vivo preliminary magnetic resonance study and correlation with clinical score.

    PubMed

    Juras, Vladimir; Apprich, Sebastian; Szomolanyi, Pavol; Bieri, Oliver; Deligianni, Xeni; Trattnig, Siegfried

    2013-10-01

    To compare mono- and bi-exponential T2 analysis in healthy and degenerated Achilles tendons using a recently introduced magnetic resonance variable-echo-time sequence (vTE) for T2 mapping. Ten volunteers and ten patients were included in the study. A variable-echo-time sequence was used with 20 echo times. Images were post-processed with both techniques [mono-exponential T2 (T2m); bi-exponential short T2 component (T2s) and long T2 component (T2l)]. The number of mono- and bi-exponentially decaying pixels in each region of interest was expressed as a ratio (B/M). Patients were clinically assessed with the Achilles Tendon Rupture Score (ATRS), and these values were correlated with the T2 values. The means for both T2m and T2s were statistically significantly different between patients and volunteers; however, for T2s, the P value was lower. In patients, the Pearson correlation coefficient between ATRS and T2s was -0.816 (P = 0.007). The proposed variable-echo-time sequence can be successfully used as an alternative to UTE sequences, with some added benefits such as a short imaging time, relatively high resolution, and minimised blurring, susceptibility and chemical shift artefacts. Bi-exponential T2 calculation is superior to mono-exponential in terms of statistical significance for the diagnosis of Achilles tendinopathy. • Magnetic resonance imaging offers new insight into healthy and diseased Achilles tendons • Bi-exponential T2 calculation in Achilles tendons is more beneficial than mono-exponential • A short T2 component correlates strongly with clinical score • Variable echo time sequences can be successfully used instead of ultrashort echo time sequences.

  3. Breast lesion characterization using whole-lesion histogram analysis with stretched-exponential diffusion model.

    PubMed

    Liu, Chunling; Wang, Kun; Li, Xiaodan; Zhang, Jine; Ding, Jie; Spuhler, Karl; Duong, Timothy; Liang, Changhong; Huang, Chuan

    2018-06-01

    Diffusion-weighted imaging (DWI) has been studied in breast imaging and can provide more information about diffusion, perfusion and other physiological properties of interest than standard pulse sequences. The stretched-exponential model has previously been shown to be more reliable than conventional DWI techniques, but different diagnostic sensitivities were found from study to study. This work investigated the characteristics of whole-lesion histogram parameters derived from the stretched-exponential diffusion model for benign and malignant breast lesions, compared them with the conventional apparent diffusion coefficient (ADC), and further determined which histogram metrics can best be used to differentiate malignant from benign lesions. This was a prospective study; seventy females were included. Multi-b-value DWI was performed on a 1.5T scanner. Histogram parameters of whole lesions for the distributed diffusion coefficient (DDC), heterogeneity index (α), and ADC were calculated by two radiologists and compared among benign lesions, ductal carcinoma in situ (DCIS), and invasive carcinoma confirmed by pathology. Nonparametric tests were performed for comparisons among invasive carcinoma, DCIS, and benign lesions. Comparisons of receiver operating characteristic (ROC) curves were performed to show the ability to discriminate malignant from benign lesions. The majority of histogram parameters (mean/min/max, skewness/kurtosis, 10th-90th percentile values) from DDC, α, and ADC were significantly different among invasive carcinoma, DCIS, and benign lesions. DDC_10% (area under the curve [AUC] = 0.931), ADC_10% (AUC = 0.893), and α_mean (AUC = 0.787) were found to be the best metrics in differentiating benign from malignant tumors among all histogram parameters derived from DDC, ADC, and α, respectively. The combination of DDC_10% and α_mean, using logistic regression, yielded the highest sensitivity (90.2%) and specificity (95.5%). DDC_10% and α_mean derived from the stretched-exponential model provide more information and better diagnostic performance in differentiating malignancy from benign lesions than ADC parameters derived from a monoexponential model. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:1701-1710. © 2017 International Society for Magnetic Resonance in Medicine.
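
    For orientation, the stretched-exponential signal model referred to above is S(b) = S0 * exp(-(b * DDC)^α); it can be fitted voxel-wise by nonlinear least squares. The b-values and parameters below are generic illustrations, not the study's protocol.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def stretched(b, s0, ddc, alpha):
        # S(b) = S0 * exp(-(b * DDC)**alpha)
        return s0 * np.exp(-(b * ddc) ** alpha)

    b = np.array([0, 50, 100, 200, 400, 600, 800, 1000.0])   # b-values (s/mm^2)
    rng = np.random.default_rng(7)
    sig = stretched(b, 1.0, 1.2e-3, 0.8) + 0.01 * rng.standard_normal(b.size)

    popt, _ = curve_fit(stretched, b, sig, p0=[1.0, 1e-3, 0.9],
                        bounds=([0.0, 1e-5, 0.1], [2.0, 1e-2, 1.0]))
    print(f"DDC = {popt[1]:.2e} mm^2/s, alpha = {popt[2]:.2f}")
    ```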

  4. Regression Models For Multivariate Count Data

    PubMed Central

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2016-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data. PMID:28348500

  5. Robust and efficient estimation with weighted composite quantile regression

    NASA Astrophysics Data System (ADS)

    Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng

    2016-09-01

    In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.

  6. Regression Models For Multivariate Count Data.

    PubMed

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2017-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.

  7. Effect of water-based recovery on blood lactate removal after high-intensity exercise.

    PubMed

    Lucertini, Francesco; Gervasi, Marco; D'Amen, Giancarlo; Sisti, Davide; Rocchi, Marco Bruno Luigi; Stocchi, Vilberto; Benelli, Piero

    2017-01-01

    This study assessed the effectiveness of water immersion to the shoulders in enhancing blood lactate removal during active and passive recovery after short-duration high-intensity exercise. Seventeen cyclists underwent active water- and land-based recoveries and passive water- and land-based recoveries. The recovery conditions lasted 31 minutes each and started after the identification of each cyclist's blood lactate accumulation peak, induced by a 30-second all-out sprint on a cycle ergometer. Active recoveries were performed on a cycle ergometer at 70% of the oxygen consumption corresponding to the lactate threshold (the control for the intensity was oxygen consumption), while passive recoveries were performed with subjects at rest and seated on the cycle ergometer. Blood lactate concentration was measured 8 times during each recovery condition, and lactate clearance was modeled with a negative exponential function using non-linear regression. Actual active recovery intensity was compared to the target intensity (one-sample t-test) and passive recovery intensities were compared between environments (paired-sample t-tests). Non-linear regression parameters (coefficients of the exponential decay of lactate; predicted resting lactates; predicted delta decreases in lactate) were compared between environments (linear mixed model analyses for repeated measures) separately for the active and passive recovery modes. Active recovery intensities did not differ significantly from the target oxygen consumption, whereas passive recovery resulted in a slightly lower oxygen consumption when performed while immersed in water rather than on land. The exponential decay of blood lactate was not significantly different between water- and land-based recoveries in either active or passive recovery conditions. In conclusion, water immersion at 29°C would not appear to be an effective practice for improving post-exercise lactate removal in either the active or passive recovery modes.

  8. Linear prediction and single-channel recording.

    PubMed

    Carter, A A; Oswald, R E

    1995-08-01

    The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
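
    A stripped-down version of the linear-prediction step: for uniformly sampled data that are a sum of p decaying exponentials, each sample is a fixed linear combination of the p previous samples, and the roots of the prediction polynomial yield the time constants. This toy uses noise-free data and a known p; the cited approach adds singular value decomposition to estimate the model order and stabilize the fit, which is omitted here.

    ```python
    import numpy as np

    dt = 0.01
    t = np.arange(0.0, 2.0, dt)
    y = 0.7 * np.exp(-t / 0.05) + 0.3 * np.exp(-t / 0.4)   # two-component dwell density

    p = 2                                                  # assumed number of exponentials
    A = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    c = np.linalg.lstsq(A, y[p:], rcond=None)[0]           # linear prediction coefficients
    roots = np.roots([1.0, *(-c[::-1])])                   # z^2 - c1*z - c0 = 0
    taus = -dt / np.log(roots.real)                        # z_k = exp(-dt / tau_k)
    print(np.sort(taus))                                   # recovers ~[0.05, 0.4]
    ```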

  9. A General Exponential Framework for Dimensionality Reduction.

    PubMed

    Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan

    2014-02-01

    As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low-dimensional representations from high-dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the neighborhood size; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small-distance pairs. To address these issues, we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In this framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and is thus more robust. The positive definite property of the matrix exponential deals with the SSS problem, and the decay behavior of exponential embedding is more effective in emphasizing small-distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI datasets, and the Georgia Tech face database show that the proposed framework can well address the issues mentioned above.
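
    A rough sketch of the central operation, taking the matrix exponential of a pairwise similarity matrix before eigendecomposition. The toy data and the Gaussian similarity are placeholders, not the paper's algorithms:

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(0)
        X = rng.normal(size=(30, 5))                        # toy high-dimensional data
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / d2.mean())                         # pairwise similarity matrix

        E = expm(W)                                         # symmetric positive definite
        vals, vecs = np.linalg.eigh(E)
        embedding = vecs[:, -2:]                            # 2-D coordinates (sketch only)
        print(embedding.shape)                              # (30, 2)

    The positive definiteness of expm(W) is what lets the framework avoid the singular scatter matrices that cause the SSS problem in the original Laplacian formulations.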

  10. Graphical analysis for gel morphology II. New mathematical approach for stretched exponential function with β>1

    NASA Astrophysics Data System (ADS)

    Hashimoto, Chihiro; Panizza, Pascal; Rouch, Jacques; Ushiki, Hideharu

    2005-10-01

    A new analytical concept is applied to the kinetics of the shrinking process of poly(N-isopropylacrylamide) (PNIPA) gels. When PNIPA gels are put into hot water above the critical temperature, two-step shrinking is observed and the secondary shrinking of the gels is fitted well by a stretched exponential function. The exponent β characterizing the stretched exponential is always higher than one, although few analytical concepts exist for the stretched exponential function with β>1. As a new interpretation of this function, we propose a superposition of step (Heaviside) functions, from which a new distribution function of characteristic times is deduced.
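
    A stretched exponential (here compressed, β > 1) fit of a shrinking curve can be sketched as below; the data are synthetic and the parameter values are assumptions for illustration:

        import numpy as np
        from scipy.optimize import curve_fit

        def stretched_exp(t, a0, tau, beta):
            # a(t) = a0 * exp(-(t/tau)**beta); beta > 1 gives a sigmoidal decay
            return a0 * np.exp(-(t / tau) ** beta)

        t = np.linspace(0.1, 60.0, 80)
        y = stretched_exp(t, 1.0, 20.0, 1.6)
        y += 0.01 * np.random.default_rng(1).normal(size=t.size)   # small noise

        (a0, tau, beta), _ = curve_fit(stretched_exp, t, y, p0=(1.0, 10.0, 1.0))
        print(f"beta ~ {beta:.2f} (> 1, as observed for the secondary shrinking)")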

  11. New results on global exponential dissipativity analysis of memristive inertial neural networks with distributed time-varying delays.

    PubMed

    Zhang, Guodong; Zeng, Zhigang; Hu, Junhao

    2018-01-01

    This paper is concerned with the global exponential dissipativity of memristive inertial neural networks with discrete and distributed time-varying delays. By constructing appropriate Lyapunov-Krasovskii functionals, some new sufficient conditions ensuring global exponential dissipativity of memristive inertial neural networks are derived. Moreover, the globally exponentially attractive sets and positive invariant sets are also presented. In addition, the new results complement and extend earlier publications on conventional and memristive neural network dynamical systems. Finally, numerical simulations are given to illustrate the effectiveness of the obtained results.

  12. Gaussian process regression for geometry optimization

    NASA Astrophysics Data System (ADS)

    Denzel, Alexander; Kästner, Johannes

    2018-03-01

    We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a twice-differentiable form of the Matérn kernel and the squared exponential kernel; the Matérn kernel performs much better. We give a detailed description of the optimization procedures, including overshooting the GPR step in order to favor interpolation over extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
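
    The kernel comparison can be reproduced in spirit with scikit-learn; the one-dimensional toy surface below stands in for a potential energy surface and is an assumption of this sketch, not the paper's benchmark set:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern, RBF

        X = np.linspace(-2.0, 2.0, 12)[:, None]
        y = (X ** 4 - X ** 2 + 0.3 * X).ravel()        # toy "energy surface"
        grid = np.linspace(-2.0, 2.0, 201)[:, None]

        for kernel in (Matern(nu=2.5), RBF()):         # nu=2.5: twice-differentiable Matérn
            gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
            x_min = grid[np.argmin(gpr.predict(grid)), 0]
            print(type(kernel).__name__, "predicted minimum near x =", round(x_min, 3))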

  13. The Exponential Expansion of Simulation in Research

    DTIC Science & Technology

    2012-12-01

    exponential growth of computing power. Although other analytic approaches also benefit from this trend, keyword searches of several scholarly search engines reveal that the reliance on simulation is increasing more rapidly. A descriptive analysis paints a compelling picture: simulation is frequently

  14. The Analysis of Fluorescence Decay by a Method of Moments

    PubMed Central

    Isenberg, Irvin; Dyson, Robert D.

    1969-01-01

    The fluorescence decay of the excited state of most biopolymers, and biopolymer conjugates and complexes, is not, in general, a simple exponential. The method of moments is used to establish a means of analyzing such multi-exponential decays. The method is tested by the use of computer simulated data, assuming that the limiting error is determined by noise generated by a pseudorandom number generator. Multi-exponential systems with relatively closely spaced decay constants may be successfully analyzed. The analyses show the requirements, in terms of precision, that data must meet. The results may be used both as an aid in the design of equipment and in the analysis of data subsequently obtained. PMID:5353139
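
    The moment idea is easiest to see for a single exponential A*exp(-t/tau), where the zeroth and first moments give tau = m1/m0 directly; multi-exponential analysis extends this to higher moments. A hedged numerical illustration with assumed A and tau:

        import numpy as np

        t = np.linspace(0.0, 30.0, 3001)
        f = 5.0 * np.exp(-t / 4.0)               # noiseless decay: A = 5, tau = 4
        dt = t[1] - t[0]

        m0 = (f * dt).sum()                       # zeroth moment ~ A * tau
        m1 = (t * f * dt).sum()                   # first moment  ~ A * tau**2
        tau = m1 / m0
        A = m0 / tau
        print(f"tau ~ {tau:.2f}, A ~ {A:.2f}")    # ~4 and ~5, up to finite-window bias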

  15. Measurement of cellular copper levels in Bacillus megaterium during exponential growth and sporulation.

    PubMed

    Krueger, W B; Kolodziej, B J

    1976-01-01

    Both atomic absorption spectrophotometry (AAS) and neutron activation analysis have been utilized to determine cellular Cu levels in Bacillus megaterium ATCC 19213. Both methods were selected for their sensitivity in detecting nanogram quantities of Cu. Data from both methods demonstrated identical patterns of Cu uptake during exponential growth and sporulation. Late exponential phase cells contained less Cu than postexponential t2 cells, while t5 cells contained amounts equivalent to exponential cells. The t11 cells containing phase-bright forespores had a higher Cu content than those of earlier time periods, and the free spores had the highest Cu content. Analysis of the culture medium by AAS corroborated these data by showing concomitant Cu uptake during exponential growth and into the t2 postexponential phase of sporulation. From t2 to t4, Cu egressed from the cells, followed by a secondary uptake during the maturation of phase-dark forespores into phase-bright forespores (t6-t9).

  16. Relationship between aging and T1 relaxation time in deep gray matter: A voxel-based analysis.

    PubMed

    Okubo, Gosuke; Okada, Tomohisa; Yamamoto, Akira; Fushimi, Yasutaka; Okada, Tsutomu; Murata, Katsutoshi; Togashi, Kaori

    2017-09-01

    To investigate age-related changes in T1 relaxation time in deep gray matter structures in healthy volunteers using magnetization-prepared 2 rapid acquisition gradient echoes (MP2RAGE). In all, 70 healthy volunteers (aged 20-76, mean age 42.6 years) were scanned at 3T magnetic resonance imaging (MRI). An MP2RAGE sequence was employed to quantify T1 relaxation times. After spatial normalization of the T1 maps with diffeomorphic anatomical registration using the exponentiated Lie algebra (DARTEL) algorithm, voxel-based regression analysis was conducted. In addition, linear and quadratic regression analyses of regions of interest (ROIs) were also performed. With aging, voxel-based analysis (VBA) revealed significant T1 value decreases in the ventral-inferior putamen, nucleus accumbens, and amygdala, whereas T1 values significantly increased in the thalamus and white matter (P < 0.05 at cluster level, false discovery rate). ROI analysis revealed that T1 values in the nucleus accumbens decreased linearly with aging (P = 0.0016), supporting the VBA result. T1 values in the thalamus (P < 0.0001), substantia nigra (P = 0.0003), and globus pallidus (P < 0.0001) were best fit by quadratic curves, with the minimum T1 values observed between 30 and 50 years of age. Age-related changes in T1 relaxation time vary by location in deep gray matter.

  17. An investigation on the relationship among marbling features, physiological age and Warner-Bratzler Shear force of steer longissimus dorsi muscle.

    PubMed

    Luo, Lingying; Guo, Dandan; Zhou, Guanghong; Chen, Kunjie

    2018-04-01

    Researchers have paid much attention to the relationships between tenderness and marbling or physiological age, but marbling has mainly been evaluated qualitatively with scores or grades and has rarely been related to physiological age. In the present study, the marbling features of the longissimus dorsi muscle between the 12th and 13th ribs of 18-, 36-, 54- and 72-month-old Simmental steers were quantitatively described by area and perimeter using computer vision techniques. The relationships between Warner-Bratzler shear force (WBSF), physiological age and the marbling features were examined by regression analysis. The results revealed that WBSF correlated positively with physiological age, but negatively with marbling area and perimeter. Regression analysis showed that the relationship between shear force and steer age was best described by quadratic (R2 = 0.996) and exponential (R2 = 0.957) curves. Marbling was observed to increase with steer age, and the marbling features were linearly correlated with age, with R2 = 0.927 for marbling area and R2 = 0.935 for marbling perimeter. In the future, industry may estimate beef tenderness and physiological age from the marbling features (area and perimeter), which can be determined through an online image acquisition system and image processing.
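
    The reported quadratic and exponential regressions of shear force on age can be sketched as follows; only the four age groups come from the abstract, while the WBSF values are invented for illustration:

        import numpy as np
        from scipy.optimize import curve_fit

        age = np.array([18.0, 36.0, 54.0, 72.0])         # months (from the study design)
        wbsf = np.array([4.1, 5.0, 5.7, 6.2])            # illustrative shear force values

        quad = np.polyfit(age, wbsf, 2)                  # quadratic model coefficients
        (a, b), _ = curve_fit(lambda x, a, b: a * np.exp(b * x), age, wbsf, p0=(3.0, 0.01))
        print("quadratic coefficients:", np.round(quad, 4))
        print(f"exponential model: {a:.2f} * exp({b:.4f} * age)")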

  18. Safety evaluation model of urban cross-river tunnel based on driving simulation.

    PubMed

    Ma, Yingqi; Lu, Linjun; Lu, Jian John

    2017-09-01

    Currently, Shanghai urban cross-river tunnels have three principal characteristics: increased traffic, a high accident rate and rapidly developing construction. Because of their complex geographic and hydrological characteristics, the alignment conditions in urban cross-river tunnels are more complicated than in highway tunnels, so a safety evaluation of urban cross-river tunnels is necessary to guide follow-up construction and changes in operational management. A driving risk index (DRI) for urban cross-river tunnels is proposed in this study. An index system was also constructed, combining eight factors derived from the output of a driving simulator and covering three aspects of risk: car-following risk, lateral-accident risk and driver workload. Analytic hierarchy process methods, expert marking and normalization processing were applied to construct a mathematical model for the DRI. The driving simulator was used to simulate 12 Shanghai urban cross-river tunnels, and a relationship between the DRI for the tunnels and the corresponding accident rate (AR) was obtained via regression analysis. The regression results showed that the relationship between the DRI and the AR mapped to an exponential function with a high degree of fit. In the absence of detailed accident data, a safety evaluation model based on factors derived from a driving simulation can effectively assess the driving risk in urban cross-river tunnels that have been constructed or are still in design.

  19. The Exponential Expansion of Simulation: How Simulation has Grown as a Research Tool

    DTIC Science & Technology

    2012-09-01

    exponential growth of computing power. Although other analytic approaches also benefit from this trend, keyword searches of several scholarly search engines reveal that the reliance on simulation is increasing more rapidly. A descriptive analysis paints a compelling picture: simulation is frequently

  20. On the Time-Dependent Analysis of Gamow Decay

    ERIC Educational Resources Information Center

    Durr, Detlef; Grummt, Robert; Kolb, Martin

    2011-01-01

    Gamow's explanation of the exponential decay law uses complex "eigenvalues" and exponentially growing "eigenfunctions". This raises the question, how Gamow's description fits into the quantum mechanical description of nature, which is based on real eigenvalues and square integrable wavefunctions. Observing that the time evolution of any…

  1. Modeling of magnitude distributions by the generalized truncated exponential distribution

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, the resulting distribution is not a TED. Conversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. We overcome it by generalizing the above-mentioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement of the GTED over the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
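
    For reference, the TED that the GTED generalizes has a simple closed-form density; a short sketch follows (the GTED then mixes such densities over a distribution of cutoff points, which is not reproduced here, and the parameter values are assumptions):

        import numpy as np

        def ted_pdf(m, beta, m_min, m_max):
            # truncated exponential (Gutenberg-Richter) magnitude density on [m_min, m_max]
            norm = 1.0 - np.exp(-beta * (m_max - m_min))
            pdf = beta * np.exp(-beta * (m - m_min)) / norm
            return np.where((m >= m_min) & (m <= m_max), pdf, 0.0)

        m = np.linspace(4.0, 8.5, 10)
        print(ted_pdf(m, beta=2.0, m_min=4.0, m_max=8.0))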

  2. Historical remarks on exponential product and quantum analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Masuo

    2015-03-10

    The exponential product formula [1, 2] was substantially introduced into physics by the present author [2]. Its systematic applications to quantum Monte Carlo methods [3] were first performed [4, 5] in 1977. Many interesting applications [6] of the quantum-classical correspondence (namely the S-T transformation) have been reported. Systematic higher-order decomposition formulae were also discovered by the present author [7-11], using the recursion scheme [7, 9]. Physically speaking, these exponential product formulae play a conceptual role of separation of procedures [3, 14]. Mathematical aspects of these formulae have been integrated in quantum analysis [15], in which non-commutative differential calculus is formulated and a more general quantum Taylor expansion formula is given. This yields many useful operator expansion formulae such as the Feynman expansion formula and the resolvent expansion. Irreversibility and entropy production are also studied using quantum analysis [15].

  3. Distinguishing response conflict and task conflict in the Stroop task: evidence from ex-Gaussian distribution analysis.

    PubMed

    Steinhauser, Marco; Hübner, Ronald

    2009-10-01

    It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were conducted in which manual versions of a standard Stroop task (Experiment 1) and a separated Stroop task (Experiment 2) were performed under task-switching conditions. Effects of response congruency and stimulus bivalency were used to measure response conflict and task conflict, respectively. Ex-Gaussian analysis revealed that response conflict was mainly observed in the Gaussian component, whereas task conflict was stronger in the exponential component. Moreover, task conflict in the exponential component was selectively enhanced under task-switching conditions. The results suggest that ex-Gaussian analysis can be used as a tool to isolate different conflict types in the Stroop task.
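
    An ex-Gaussian decomposition of response times of the kind used here is available in scipy as the exponentially modified normal distribution; the synthetic response times and ground-truth parameters below are assumptions:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        mu, sigma, tau = 450.0, 60.0, 120.0                  # ms, ground truth
        rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

        K, loc, scale = stats.exponnorm.fit(rt)              # shape K = tau / sigma
        print(f"mu ~ {loc:.0f}, sigma ~ {scale:.0f}, tau ~ {K * scale:.0f} (ms)")

    In the study's terms, response conflict would show up in the Gaussian parameters (loc, scale) and task conflict in the exponential component (K * scale).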

  4. Proportional exponentiated link transformed hazards (ELTH) models for discrete time survival data with application

    PubMed Central

    Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook

    2015-01-01

    Discrete survival data are routinely encountered in many fields of study including behavior science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in detail to further demonstrate the proposed methodology. PMID:25772374

  5. Exponential model for option prices: Application to the Brazilian market

    NASA Astrophysics Data System (ADS)

    Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.

    2016-03-01

    In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.

  6. Implementations of geographically weighted lasso in spatial data with multicollinearity (Case study: Poverty modeling of Java Island)

    NASA Astrophysics Data System (ADS)

    Setiyorini, Anis; Suprijadi, Jadi; Handoko, Budhi

    2017-03-01

    Geographically Weighted Regression (GWR) is a regression model that takes into account the spatial heterogeneity effect. In applications of GWR, inference on regression coefficients is often of interest, as is estimation and prediction of the response variable. Empirical research has demonstrated that local correlation between explanatory variables can lead to estimated regression coefficients in GWR that are strongly correlated, a condition named multicollinearity. This results in large standard errors on the estimated regression coefficients and is hence problematic for inference on relationships between variables. Geographically Weighted Lasso (GWL) is a method capable of dealing with spatial heterogeneity and local multicollinearity in spatial data sets. GWL is a further development of the GWR method, which adds a LASSO (Least Absolute Shrinkage and Selection Operator) constraint to the parameter estimation. In this study, GWL is applied using a fixed exponential kernel weights matrix to establish a poverty model of Java Island, Indonesia. The results of applying GWL to the poverty datasets show that this method stabilizes regression coefficients in the presence of multicollinearity and produces lower prediction and estimation error of the response variable than GWR does.
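
    The fixed exponential kernel used for the spatial weights can be sketched in a few lines; the coordinates and bandwidth below are placeholders, not the study's data:

        import numpy as np

        def exponential_kernel_weights(coords, i, bandwidth):
            # fixed exponential kernel: w_ij = exp(-d_ij / h)
            d = np.linalg.norm(coords - coords[i], axis=1)
            return np.exp(-d / bandwidth)

        coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
        print(exponential_kernel_weights(coords, i=0, bandwidth=1.5))

    Each location i then gets its own locally weighted (and, in GWL, LASSO-penalized) regression using these weights.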

  7. Research on the exponential growth effect on network topology: Theoretical and empirical analysis

    NASA Astrophysics Data System (ADS)

    Li, Shouwei; You, Zongjun

    An integrated circuit (IC) industry network has been built in the Yangtze River Delta with the constant expansion of the IC industry. The IC industry network grows exponentially with the establishment of new companies and of contacts with old firms. Based on preferential attachment and exponential growth, the paper presents analytical results in which the vertex degree of the scale-free network follows a power-law distribution p(k) ~ k^(-γ) with γ = 2β + 1, where the parameter β satisfies 0.5 ≤ β ≤ 1. At the same time, we find that the preferential attachment takes place in a dynamic local world and that the size of the dynamic local world is in direct proportion to the size of the whole network. The paper also gives analytical results for non-preferential attachment and exponential growth on random networks. Computer simulations of the model illustrate these analytical results. Through investigations of the enterprises, the paper first presents the distribution of the IC industry and the composition of the industrial chain and service chain. Then, the correlative networks of the industrial chain and service chain are presented and analyzed, together with a correlative analysis of the whole IC industry. Based on the theory of complex networks, an analysis and comparison of the industrial chain network and service chain network in the Yangtze River Delta are provided.

  8. Scalar-fluid interacting dark energy: Cosmological dynamics beyond the exponential potential

    NASA Astrophysics Data System (ADS)

    Dutta, Jibitesh; Khyllep, Wompherdeiki; Tamanini, Nicola

    2017-01-01

    We extend the dynamical systems analysis of scalar-fluid interacting dark energy models performed in C. G. Boehmer et al., Phys. Rev. D 91, 123002 (2015), 10.1103/PhysRevD.91.123002 by considering scalar field potentials beyond the exponential type. The properties and stability of critical points are examined using a combination of linear analysis, computational methods and advanced mathematical techniques, such as center manifold theory. We show that the interesting results obtained with an exponential potential can generally be recovered also for more complicated scalar field potentials. In particular, employing power law and hyperbolic potentials as examples, we find late time accelerated attractors, transitions from dark matter to dark energy domination with specific distinguishing features, and accelerated scaling solutions capable of solving the cosmic coincidence problem.

  9. The true quantum face of the "exponential" decay: Unstable systems in rest and in motion

    NASA Astrophysics Data System (ADS)

    Urbanowski, K.

    2017-12-01

    Results of theoretical studies and numerical calculations presented in the literature suggest that the survival probability P0(t) has the exponential form starting from times much smaller than the lifetime τ up to times t ⪢ τ, and that P0(t) exhibits inverse power-law behavior at the late-time region for times longer than the so-called crossover time T ⪢ τ (the crossover time T is the time when the late-time deviations of P0(t) from the exponential form begin to dominate). More detailed analysis of the problem shows that in fact the survival probability P0(t) cannot take the pure exponential form at any time interval, including times smaller than or of the order of the lifetime τ, and that it has an oscillating form. We also study the survival probability of moving relativistic unstable particles with definite momentum. These studies show that late-time deviations of the survival probability of these particles from the exponential-like form of the decay law, that is, the transition-time region between the exponential-like and non-exponential forms of the survival probability, should occur much earlier than follows from the classical standard considerations.

  10. Differential blood flow responses to CO2 in human internal and external carotid and vertebral arteries

    PubMed Central

    Sato, Kohei; Sadamoto, Tomoko; Hirasawa, Ai; Oue, Anna; Subudhi, Andrew W; Miyazawa, Taiki; Ogoh, Shigehiko

    2012-01-01

    Arterial CO2 serves as a mediator of cerebral blood flow (CBF), and its relative influence on the regulation of CBF is defined as cerebral CO2 reactivity. Our previous studies have demonstrated that there are differences in CBF responses to physiological stimuli (i.e. dynamic exercise and orthostatic stress) between arteries in humans. These findings suggest that dynamic CBF regulation and cerebral CO2 reactivity may be different in the anterior and posterior cerebral circulation. The aim of this study was to identify cerebral CO2 reactivity by measuring blood flow and examine potential differences in CO2 reactivity between the internal carotid artery (ICA), external carotid artery (ECA) and vertebral artery (VA). In 10 healthy young subjects, we evaluated the ICA, ECA, and VA blood flow responses by duplex ultrasonography (Vivid-e, GE Healthcare), and mean blood flow velocity in the middle cerebral artery (MCA) and basilar artery (BA) by transcranial Doppler (Vivid-7, GE Healthcare) during two levels of hypercapnia (3% and 6% CO2), normocapnia and hypocapnia to estimate CO2 reactivity. To characterize cerebrovascular reactivity to CO2, we used both exponential and linear regression analysis between CBF and the estimated partial pressure of arterial CO2, calculated from the end-tidal partial pressure of CO2. CO2 reactivity in VA was significantly lower than in ICA (coefficient of exponential regression 0.021 ± 0.008 vs. 0.030 ± 0.008; slope of linear regression 2.11 ± 0.84 vs. 3.18 ± 1.09% mmHg−1: VA vs. ICA, P < 0.01). Lower CO2 reactivity in the posterior cerebral circulation was persistent in the distal intracranial arteries (exponent 0.023 ± 0.006 vs. 0.037 ± 0.009; linear 2.29 ± 0.56 vs. 3.31 ± 0.87% mmHg−1: BA vs. MCA). In contrast, CO2 reactivity in ECA was markedly lower than in the intra-cerebral circulation (exponent 0.006 ± 0.007; linear 0.63 ± 0.64% mmHg−1, P < 0.01). These findings indicate that the vertebro-basilar circulation has lower CO2 reactivity than the internal carotid circulation, and that CO2 reactivity of the external carotid circulation is markedly diminished compared to that of the cerebral circulation, which may explain different CBF responses to physiological stress. PMID:22526884

  11. Elimination kinetics of metals after an accidental exposure to welding fumes.

    PubMed

    Schaller, Karl H; Csanady, György; Filser, Johannes; Jüngert, Barbara; Drexler, Hans

    2007-07-01

    We had the opportunity to study the kinetics of metals in blood and urine samples of a flame-sprayer after an accidental high workplace exposure to welding fumes. We measured the nickel, aluminium and chromium concentrations in blood and urine specimens over 1 year after the exposure and, on this basis, evaluated the corresponding half-lives. Blood and urine sampling was carried out five times after the accidental exposure over a period of 1 year. The metals were analysed with reliable methods by graphite furnace atomic absorption spectrometry with Zeeman compensation. Either a mono-exponential or a bi-exponential function was fitted to the concentration-time courses of the selected metals using weighted least squares non-linear regression analysis. The amount excreted in urine was calculated by integrating the urinary decay curve and multiplying by the daily creatinine excretion. The first examination was carried out 15 days after exposure. The mean aluminium concentration was 8.2 microg/l in plasma and 58.4 microg/g creatinine in urine. The mean nickel concentration in blood was 59.6 microg/l and the excretion in urine 700 microg/g creatinine. The mean chromium level was 1.4 microg/l in blood and 7.4 microg/g creatinine in urine. For the three elements, the metal concentrations in blood and urine exceeded the reference values at least in the initial phase; for nickel, the German biological threshold limit values (EKA) were exceeded. Aluminium showed a mono-exponential decay, whereas the elimination of chromium and nickel was biphasic in the biological fluids of the accidentally exposed welder. The half-lives were as follows: for aluminium, 140 days (urine) and 160 days (plasma); for chromium, 40 and 730 days (urine); for nickel, 25 and 610 days (urine) as well as 30 and 240 days (blood). The renal clearance of aluminium and nickel was estimated at about 2 l/h for the last monitoring day.
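
    The mono- and bi-exponential elimination fits reported above map to half-lives via t1/2 = ln 2 / k. A hedged sketch for the biphasic case, with synthetic blood levels generated from the model itself (the rate constants are chosen only to roughly echo the reported nickel half-lives):

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(t, a, ka, b, kb):
            # bi-exponential elimination: fast phase (a, ka) plus slow phase (b, kb)
            return a * np.exp(-ka * t) + b * np.exp(-kb * t)

        t = np.array([15.0, 40.0, 90.0, 180.0, 365.0])      # days after exposure
        conc = biexp(t, 40.0, 0.023, 25.0, 0.0029)           # synthetic blood levels

        (a, ka, b, kb), _ = curve_fit(biexp, t, conc, p0=(30, 0.05, 20, 0.001), maxfev=10000)
        print(f"half-lives ~ {np.log(2)/ka:.0f} and {np.log(2)/kb:.0f} days")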

  12. Evapotranspiration Measurement and Crop Coefficient Estimation over a Spring Wheat Farmland Ecosystem in the Loess Plateau

    PubMed Central

    Yang, Fulin; Zhang, Qiang; Wang, Runyuan; Zhou, Jing

    2014-01-01

    Evapotranspiration (ET) is an important component of the surface energy balance and hydrological cycle. In this study, the eddy covariance technique was used to measure ET of the semi-arid farmland ecosystem in the Loess Plateau during the 2010 growing season (April to September). The characteristics and environmental regulations of ET and the crop coefficient (Kc) were investigated. The results showed that the diurnal variation of latent heat flux (LE) followed a single-peak shape for each month, with the largest peak value of LE occurring in August (151.4 W m−2). The daily ET rate of the semi-arid farmland in the Loess Plateau also showed clear seasonal variation, with a maximum daily ET rate of 4.69 mm day−1. Cumulative ET during the 2010 growing season was 252.4 mm, lower than precipitation. Radiation was the main driver of farmland ET in the Loess Plateau, explaining 88% of the variance in daily ET (p<0.001). The farmland Kc values showed obvious seasonal fluctuation, with an average of 0.46. The correlation analysis between daily Kc and its major environmental factors indicated that wind speed (Ws), relative humidity (RH), soil water content (SWC), and atmospheric vapor pressure deficit (VPD) were the major environmental regulators of daily Kc. The regression analysis results showed that Kc decreased exponentially with increasing Ws, increased exponentially with increasing RH and SWC, and decreased linearly with increasing VPD. An empirical Kc model for the semi-arid farmland in the Loess Plateau, driven by Ws, RH, SWC and VPD, was developed, showing good consistency between the simulated and the measured Kc values. PMID:24941017

  13. Evapotranspiration measurement and crop coefficient estimation over a spring wheat Farmland ecosystem in the Loess Plateau.

    PubMed

    Yang, Fulin; Zhang, Qiang; Wang, Runyuan; Zhou, Jing

    2014-01-01

    Evapotranspiration (ET) is an important component of the surface energy balance and hydrological cycle. In this study, the eddy covariance technique was used to measure ET of the semi-arid farmland ecosystem in the Loess Plateau during the 2010 growing season (April to September). The characteristics and environmental regulations of ET and the crop coefficient (Kc) were investigated. The results showed that the diurnal variation of latent heat flux (LE) followed a single-peak shape for each month, with the largest peak value of LE occurring in August (151.4 W m(-2)). The daily ET rate of the semi-arid farmland in the Loess Plateau also showed clear seasonal variation, with a maximum daily ET rate of 4.69 mm day(-1). Cumulative ET during the 2010 growing season was 252.4 mm, lower than precipitation. Radiation was the main driver of farmland ET in the Loess Plateau, explaining 88% of the variance in daily ET (p<0.001). The farmland Kc values showed obvious seasonal fluctuation, with an average of 0.46. The correlation analysis between daily Kc and its major environmental factors indicated that wind speed (Ws), relative humidity (RH), soil water content (SWC), and atmospheric vapor pressure deficit (VPD) were the major environmental regulators of daily Kc. The regression analysis results showed that Kc decreased exponentially with increasing Ws, increased exponentially with increasing RH and SWC, and decreased linearly with increasing VPD. An empirical Kc model for the semi-arid farmland in the Loess Plateau, driven by Ws, RH, SWC and VPD, was developed, showing good consistency between the simulated and the measured Kc values.

  14. Multi-decadal analysis of root-zone soil moisture applying the exponential filter across CONUS

    USDA-ARS?s Scientific Manuscript database

    This study applied the exponential filter to produce an estimate of root-zone soil moisture (RZSM). Four types of microwave-based surface satellite soil moisture were used. The core remotely sensed data for this study came from NASA's long-lasting AMSR-E mission. Additionally three other products we...

  15. A Parametric Model for Barred Equilibrium Beach Profiles

    DTIC Science & Technology

    2014-05-10

    to shallow water. Bodge (1992) and Komar and McDougal (1994) suggested an exponential form as a preferred solution that exhibited finite slope at the... applications. J. Coast. Res. 7, 53–84. Komar, P.D., McDougal, W.G., 1994. The analysis of beach profiles and nearshore processes using the exponential beach

  16. Liver fibrosis: stretched exponential model outperforms mono-exponential and bi-exponential models of diffusion-weighted MRI.

    PubMed

    Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin

    2018-07-01

    To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a bi-exponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of the DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), Dt (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and Dp showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than the other parameters. However, Dp showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than Dp from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.

  17. A modified exponential behavioral economic demand model to better describe consumption data.

    PubMed

    Koffarnus, Mikhail N; Franck, Christopher T; Stein, Jeffrey S; Bickel, Warren K

    2015-12-01

    Behavioral economic demand analyses that quantify the relationship between the consumption of a commodity and its price have proven useful in studying the reinforcing efficacy of many commodities, including drugs of abuse. An exponential equation proposed by Hursh and Silberberg (2008) has proven useful in quantifying the dissociable components of demand intensity and demand elasticity, but is limited as an analysis technique by the inability to correctly analyze consumption values of zero. We examined an exponentiated version of this equation that retains all the beneficial features of the original Hursh and Silberberg equation, but can accommodate consumption values of zero and improves its fit to the data. In Experiment 1, we compared the modified equation with the unmodified equation under different treatments of zero values in cigarette consumption data collected online from 272 participants. We found that the unmodified equation produces different results depending on how zeros are treated, while the exponentiated version incorporates zeros into the analysis, accounts for more variance, and is better able to estimate actual unconstrained consumption as reported by participants. In Experiment 2, we simulated 1,000 datasets with demand parameters known a priori and compared the equation fits. Results indicated that the exponentiated equation was better able to replicate the true values from which the test data were simulated. We conclude that an exponentiated version of the Hursh and Silberberg equation provides better fits to the data, is able to fit all consumption values including zero, and more accurately produces true parameter values.
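
    A sketch of fitting an exponentiated demand curve of the kind described above. The parameterization below, Q = Q0 * 10^(k * (exp(-alpha * Q0 * C) - 1)) with a fixed range constant k, is one common form of the exponentiated equation and should be read as an assumption here, as should the price and consumption values:

        import numpy as np
        from scipy.optimize import curve_fit

        K = 2.0  # range-scaling constant, fixed by assumption in this sketch

        def exponentiated_demand(c, q0, alpha):
            # consumption is modeled directly, so Q = 0 is a valid data point
            return q0 * 10.0 ** (K * (np.exp(-alpha * q0 * c) - 1.0))

        price = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
        consumption = np.array([20.0, 18.0, 15.0, 12.0, 8.0, 3.0, 1.0, 0.0])  # zeros allowed

        (q0, alpha), _ = curve_fit(exponentiated_demand, price, consumption, p0=(20.0, 0.005))
        print(f"demand intensity Q0 ~ {q0:.1f}, elasticity alpha ~ {alpha:.4f}")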

  18. Exploring the dynamics of fluorescence staining of bacteria with cyanine dyes for the development of kinetic assays

    NASA Astrophysics Data System (ADS)

    Thomas, Marlon Sheldon

    Bacterial infections continue to be one of the major health risks in the United States. The common occurrence of such infections is one of the major contributors to the high cost of health care and significant patient mortality. The work presented in this thesis describes spectroscopic studies that will contribute to the development of a fluorescent assay that may allow the rapid identification of bacterial species. Herein, the optical interactions between six bacterial species and a series of thiacyanine dyes are investigated. The interactions between the dyes and the bacterial species are hypothesized to be species-specific. For this thesis, two Gram-negative strains, Escherichia coli (E. coli) TOP10 and Enterobacter aerogenes; two Gram-positive bacterial strains, Bacillus sphaericus and Bacillus subtilis; and two Bacillus endospores, B. globigii and B. thuringiensis, were used to test the proposed hypothesis. A series of three thiacyanine dyes, 3,3'-diethylthiacyanine iodide (THIA), 3,3'-diethylthiacarbocyanine iodide (THC) and thiazole orange (THO), were used as fluorescent probes. The basis of our spectroscopic study was to explore the bacterium-induced interactions of the bacterial cells with the individual thiacyanine dyes or with a mixture of the three dyes. Steady-state absorption spectroscopy revealed that the different bacterial species altered the absorption properties of the dyes. Mixed-dye solutions gave unique absorption patterns for each bacterium tested, with competitive binding observed between the bacteria and the spectrophotometric probes (thiacyanine dyes). Emission spectroscopy recorded changes in the emission spectra of THIA following the introduction of bacterial cells. Experimental results revealed that the emission enhancement of the dyes resulted from increases in the emission quantum yield of the thiacyanine dyes upon binding to bacterial cellular components. The recorded emission enhancement data were fitted to an exponential (mono-exponential or bi-exponential) function, and time constants were extracted by regression on the experimental data. The addition of TWEEN surfactants decreased the rate at which the dyes interacted with the bacterial cells, which typically resulted in larger time constants derived from the exponential fit. ANOVA of the time constants confirmed that their values clustered in a narrow range and were independent of dye concentration and only weakly dependent on cell density.

  19. Transient photoresponse in amorphous In-Ga-Zn-O thin films under stretched exponential analysis

    NASA Astrophysics Data System (ADS)

    Luo, Jiajun; Adler, Alexander U.; Mason, Thomas O.; Bruce Buchholz, D.; Chang, R. P. H.; Grayson, M.

    2013-04-01

    We investigated transient photoresponse and Hall effect in amorphous In-Ga-Zn-O thin films and observed a stretched exponential response which allows characterization of the activation energy spectrum with only three fit parameters. Measurements of as-grown films and 350 K annealed films were conducted at room temperature by recording conductivity, carrier density, and mobility over day-long time scales, both under illumination and in the dark. Hall measurements verify approximately constant mobility, even as the photoinduced carrier density changes by orders of magnitude. The transient photoconductivity data fit well to a stretched exponential during both illumination and dark relaxation, but with slower response in the dark. The inverse Laplace transforms of these stretched exponentials yield the density of activation energies responsible for transient photoconductivity. An empirical equation is introduced, which determines the linewidth of the activation energy band from the stretched exponential parameter β. Dry annealing at 350 K is observed to slow the transient photoresponse.

  20. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n) where p can be increasing exponentially with n. Finally, we show the n^(1/2-d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter setup, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation, that hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish the model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.

  1. How bootstrap can help in forecasting time series with more than one seasonal pattern

    NASA Astrophysics Data System (ADS)

    Cordeiro, Clara; Neves, M. Manuela

    2012-09-01

    The search for the future is an appealing challenge in time series analysis. The diversity of forecasting methodologies is inevitable and still expanding. Exponential smoothing methods are the launch platform for modelling and forecasting in time series analysis. Recently this methodology has been combined with bootstrapping, revealing good performance. The Boot.EXPOS algorithm, which combines exponential smoothing and bootstrap methodologies, has shown promising results for forecasting time series with one seasonal pattern. For the case of more than one seasonal pattern, the double seasonal Holt-Winters and exponential smoothing methods were developed. A new challenge was to combine these seasonal methods with the bootstrap and carry over a resampling scheme similar to the one used in the Boot.EXPOS procedure. The performance of this partnership is illustrated for some well-known data sets available in software.
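
    The exponential smoothing building block of such a procedure is available in statsmodels; a sketch with one synthetic seasonal pattern follows (the bootstrap wrapper of Boot.EXPOS is not reproduced here, and the series is an assumption):

        import numpy as np
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        rng = np.random.default_rng(3)
        t = np.arange(120)
        y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, 120)

        fit = ExponentialSmoothing(y, trend="add", seasonal="add",
                                   seasonal_periods=12).fit()
        print(fit.forecast(12))   # 12-step-ahead forecasts

    A Boot.EXPOS-style procedure would then resample the smoothing residuals, rebuild bootstrap series, and aggregate the resulting forecasts.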

  2. Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.

    PubMed

    Jin, Ick Hoon; Yuan, Ying; Liang, Faming

    2013-10-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as a MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.

  3. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
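
    Fitting a mixture of exponentials of the kind generated by this model is a standard EM exercise; a compact hedged sketch with two components and synthetic survival-type data:

        import numpy as np

        rng = np.random.default_rng(4)
        x = np.concatenate([rng.exponential(2.0, 1500),     # short-lived component
                            rng.exponential(15.0, 500)])    # long-lived component

        w = np.array([0.5, 0.5])         # mixture weights (initial guess)
        lam = np.array([1.0, 0.1])       # rates (initial guess)
        for _ in range(200):             # EM iterations
            resp = w * lam * np.exp(-np.outer(x, lam))      # unnormalized responsibilities
            resp /= resp.sum(axis=1, keepdims=True)         # E-step
            w = resp.mean(axis=0)                           # M-step: weights
            lam = resp.sum(axis=0) / (resp * x[:, None]).sum(axis=0)   # M-step: rates
        print("weights ~", np.round(w, 2), "means ~", np.round(1 / lam, 1))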

  4. Essays on the statistical mechanics of the labor market and implications for the distribution of earned income

    NASA Astrophysics Data System (ADS)

    Schneider, Markus P. A.

    This dissertation contributes to two areas in economics: the understanding of the distribution of earned income and Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system and the implication for labor market outcomes is considered critically. The robustness of the empirical results that lead to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and answer the graphical analyses by physicists. The results indicate that neither the income distribution of all respondents nor of the subpopulation used by physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution. Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely the never married and women. The estimated parameter for never-married men's incomes is significantly different from the parameter estimated for never-married women, implying that either the combined distribution is not exponential or that the individual distributions are not exponential. However, it substantiates the existence of a persistent gender income gap among the never-married. References: Reich, M., D. M. Gordon, and R. C. Edwards (1973). A Theory of Labor Market Segmentation. Quarterly Journal of Economics 63, 359-365. Yakovenko, V. M. (2009). Econophysics, Statistical Mechanics Approach to. In R. A. Meyers (Ed.), Encyclopedia of Complexity and System Science. Springer.

  5. Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 2; Constant Stress Rate Experiments

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.

    2002-01-01

    The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress rate and preload testing at ambient and elevated temperatures. The data fit to the relation of strength versus the log of the stress rate was very reasonable for most of the materials. Also, the preloading technique was determined equally applicable to the case of slow-crack-growth (SCG) parameter n greater than 30 for both the power-law and exponential formulations. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.

  6. A nanostructured surface increases friction exponentially at the solid-gas interface.

    PubMed

    Phani, Arindam; Putkaradze, Vakhtang; Hawk, John E; Prashanthi, Kovur; Thundat, Thomas

    2016-09-06

    According to Stokes' law, a moving solid surface experiences viscous drag that is linearly related to its velocity and the viscosity of the medium. The viscous interactions result in dissipation that is known to scale as the square root of the kinematic viscosity times the density of the gas. We observed that when an oscillating surface is modified with nanostructures, the experimentally measured dissipation shows an exponential dependence on kinematic viscosity. The surface nanostructures alter solid-gas interplay greatly, amplifying the dissipation response exponentially for even minute variations in viscosity. Nanostructured resonator thus allows discrimination of otherwise narrow range of gaseous viscosity making dissipation an ideal parameter for analysis of a gaseous media. We attribute the observed exponential enhancement to the stochastic nature of interactions of many coupled nanostructures with the gas media.

  7. A nanostructured surface increases friction exponentially at the solid-gas interface

    NASA Astrophysics Data System (ADS)

    Phani, Arindam; Putkaradze, Vakhtang; Hawk, John E.; Prashanthi, Kovur; Thundat, Thomas

    2016-09-01

    According to Stokes’ law, a moving solid surface experiences viscous drag that is linearly related to its velocity and the viscosity of the medium. The viscous interactions result in dissipation that is known to scale as the square root of the kinematic viscosity times the density of the gas. We observed that when an oscillating surface is modified with nanostructures, the experimentally measured dissipation shows an exponential dependence on kinematic viscosity. The surface nanostructures alter solid-gas interplay greatly, amplifying the dissipation response exponentially for even minute variations in viscosity. Nanostructured resonator thus allows discrimination of otherwise narrow range of gaseous viscosity making dissipation an ideal parameter for analysis of a gaseous media. We attribute the observed exponential enhancement to the stochastic nature of interactions of many coupled nanostructures with the gas media.

  8. Assessing the effects of subject motion on T2 relaxation under spin tagging (TRUST) cerebral oxygenation measurements using volume navigators.

    PubMed

    Stout, Jeffrey N; Tisdall, M Dylan; McDaniel, Patrick; Gagoski, Borjan; Bolar, Divya S; Grant, Patricia Ellen; Adalsteinsson, Elfar

    2017-12-01

    Subject motion may cause errors in estimates of blood T2 when using the T2-relaxation under spin tagging (TRUST) technique on noncompliant subjects like neonates. By incorporating 3D volume navigators (vNavs) into the TRUST pulse sequence, independent measurements of motion during scanning permit evaluation of these errors. The effects of integrated vNavs on TRUST-based T2 estimates were evaluated using simulations and in vivo subject data. Two subjects were scanned with the TRUST+vNav sequence during prescribed movements. Mean motion scores were derived from vNavs and TRUST images, along with a metric of exponential fit quality. Regression analysis was performed between T2 estimates and mean motion scores. Also, motion scores were determined from independent neonatal scans. vNavs negligibly affected venous blood T2 estimates and detected subject motion better than fit quality metrics. Regression analysis showed that T2 is biased upward by 4.1 ms per 1 mm of mean motion score. During neonatal scans, mean motion scores of 0.6 to 2.0 mm were detected. Motion during TRUST causes an overestimate of T2, which suggests a cautious approach when comparing TRUST-based cerebral oxygenation measurements of noncompliant subjects.

  9. Effects of intracerebroventricular administration of beta-amyloid on the dynamics of learning in purebred and mongrel rats.

    PubMed

    Stepanov, I I; Kuznetsova, N N; Klement'ev, B I; Sapronov, N S

    2007-07-01

    The effects of intracerebroventricular administration of the beta-amyloid peptide fragment Abeta(25-35) on the dynamics of the acquisition of a conditioned reflex in a Y maze were studied in Wistar and mongrel rats. The dynamics of the decrease in the number of errors were assessed using an exponential mathematical model describing the transfer function of a first-order system in response to stepped inputs, using non-linear regression analysis. This mathematical model provided a good approximation to the learning dynamics in the inbred and mongrel rats. In Wistar rats, beta-amyloid impaired learning, with reduced memory between the first and second training sessions, but without complete blockade of learning. As a result, the learning dynamics were no longer approximated by the mathematical model. At the same time, comparison of the number of errors in each training session between the control group of Wistar rats and the group given beta-amyloid showed no significant differences (Student's t test). This result demonstrates the advantage of regression analysis based on a mathematical model over the traditionally used statistical methods. In mongrel rats, the effect of beta-amyloid was limited to a slowing of the process of learning as compared with control mongrel rats, with retention of the approximation by the mathematical model. It is suggested that mongrel animals have some kind of innate, genetically determined protective mechanism against the harmful effects of beta-amyloid.
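
    The first-order step-response model referred to above amounts to an exponential decay of the error count toward an asymptote across training sessions. A minimal sketch of fitting such a curve by non-linear regression, with hypothetical session data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order_step(session, y0, y_inf, tau):
        """First-order step response: errors decay exponentially toward an asymptote."""
        return y_inf + (y0 - y_inf) * np.exp(-(session - 1) / tau)

    # Hypothetical mean error counts over 8 training sessions
    sessions = np.arange(1, 9, dtype=float)
    errors = np.array([14.0, 9.5, 7.2, 5.1, 4.4, 3.9, 3.6, 3.5])

    (y0, y_inf, tau), _ = curve_fit(first_order_step, sessions, errors,
                                    p0=(errors[0], errors[-1], 2.0))
    print(f"initial = {y0:.1f} errors, asymptote = {y_inf:.1f}, tau = {tau:.2f} sessions")
    ```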

  10. Performance and state-space analyses of systems using Petri nets

    NASA Technical Reports Server (NTRS)

    Watson, James Francis, III

    1992-01-01

    The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PNs), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PNs is the tendency for the state space to grow rapidly (exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PNs is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PNs is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PNs remains applicable. Comparison to results from entropy theory shows the transition performance is close to the theoretical optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.

  11. Roosting habitat use and selection by northern spotted owls during natal dispersal

    USGS Publications Warehouse

    Sovern, Stan G.; Forsman, Eric D.; Dugger, Catherine M.; Taylor, Margaret

    2015-01-01

    We studied habitat selection by northern spotted owls (Strix occidentalis caurina) during natal dispersal in Washington State, USA, at both the roost site and landscape scales. We used logistic regression to obtain parameters for an exponential resource selection function based on vegetation attributes in roost and random plots in 76 forest stands that were used for roosting. We used a similar analysis to evaluate selection of landscape habitat attributes based on 301 radio-telemetry relocations and random points within our study area. We found no evidence of within-stand selection for any of the variables examined, but 78% of roosts were in stands with at least some large (>50 cm dbh) trees. At the landscape scale, owls selected for stands with high canopy cover (>70%). Dispersing owls selected vegetation types that were more similar to habitat selected by adult owls than habitat that would result from following guidelines previously proposed to maintain dispersal habitat. Our analysis indicates that juvenile owls select stands for roosting that have greater canopy cover than is recommended in current agency guidelines.
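
    An exponential resource selection function is typically obtained by fitting a logistic regression to used versus available plots and exponentiating the linear predictor. A hedged sketch of that workflow with synthetic covariates (canopy cover and large-tree counts here are illustrative stand-ins, not the study's variables):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical used (roost) vs available (random) plots with two stand-in
    # covariates: canopy cover (%) and number of large (>50 cm dbh) trees.
    rng = np.random.default_rng(1)
    n = 200
    x_used = np.column_stack([rng.normal(75, 8, n), rng.poisson(6, n)])
    x_avail = np.column_stack([rng.normal(55, 15, n), rng.poisson(3, n)])
    X = np.vstack([x_used, x_avail])
    y = np.r_[np.ones(n), np.zeros(n)]  # 1 = used, 0 = available

    beta = LogisticRegression(max_iter=1000).fit(X, y).coef_.ravel()

    def rsf(x):
        """Exponential resource selection function: w(x) = exp(beta . x)."""
        return np.exp(x @ beta)

    # Relative selection of a high-canopy plot over a low-canopy plot
    print(rsf(np.array([80.0, 6.0])) / rsf(np.array([50.0, 1.0])))
    ```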

  12. Analysis of Dibenzothiophene Desulfurization in a Recombinant Pseudomonas putida Strain▿

    PubMed Central

    Calzada, Javier; Zamarro, María T.; Alcón, Almudena; Santos, Victoria E.; Díaz, Eduardo; García, José L.; Garcia-Ochoa, Felix

    2009-01-01

    Biodesulfurization was monitored in a recombinant Pseudomonas putida CECT5279 strain. DszB desulfinase activity reached a sharp maximum at the early exponential phase, but it rapidly decreased at later growth phases. A model two-step resting-cell process, sequentially combining P. putida cells from the late and early exponential growth phases, was designed to significantly increase biodesulfurization. PMID:19047400

  13. Discrete Deterministic and Stochastic Petri Nets

    NASA Technical Reports Server (NTRS)

    Zijal, Robert; Ciardo, Gianfranco

    1996-01-01

    Petri nets augmented with timing specifications gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solutions can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented, comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
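
    The geometric approximation of exponentially distributed firing times mentioned above can be illustrated directly: discretizing time with step dt turns a rate-lambda exponential firing time into a geometric number of steps with per-step probability p = 1 - exp(-lambda*dt). A small sketch with hypothetical parameters:

    ```python
    import numpy as np

    # Geometric approximation of an exponentially distributed firing time, as used
    # when discretizing stochastic Petri nets (rate and step size are hypothetical).
    lam, dt = 2.0, 0.01          # firing rate (1/s) and discretization step (s)
    p = 1.0 - np.exp(-lam * dt)  # per-step firing probability

    rng = np.random.default_rng(0)
    times = rng.geometric(p, size=100_000) * dt  # steps until firing, scaled to time
    print(f"mean firing time: {times.mean():.4f} s (exponential mean = {1/lam:.4f} s)")
    ```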

  14. fRMSDPred: Predicting Local RMSD Between Structural Fragments Using Sequence Information

    DTIC Science & Technology

    2007-04-04

    machine learning approaches for estimating the RMSD value of a pair of protein fragments. These estimated fragment-level RMSD values can be used to construct the alignment, assess the quality of an alignment, and identify high-quality alignment segments. We present algorithms to solve this fragment-level RMSD prediction problem using a supervised learning framework based on support vector regression and classification that incorporates protein profiles, predicted secondary structure, effective information encoding schemes, and novel second-order pairwise exponential kernel

  15. A Regression Design Approach to Optimal and Robust Spacing Selection.

    DTIC Science & Technology

    1981-07-01

    Hassanein (1968, 1969a, 1969b, 1971, 1972, 1977), Kulldorf (1963), Kulldorf and Vannman (1973), Rhodin (1976), Sarhan and Greenberg (1958, 1962) and ... If d0 and Q0^(-1)d0 are in the reproducing kernel Hilbert space (RKHS) generated by R, the techniques developed by Parzen (1961a, 1961b) may be ... Greenberg, B.G. (1958). Estimation problems in the exponential distribution using order statistics. Proceedings of the Statistical Techniques in Missile ...

  16. The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models

    DTIC Science & Technology

    1988-07-27

    auto-regressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe ... 'Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model', The American Statistician, v. 40, pp. 129-135, 1986. 8. Box, G. E. P. and ... 1950. 40. McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983. 41. McKenzie, E., General Exponential Smoothing and the ...

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Sanjeev, E-mail: sanjeevsharma145@gmail.com; Kumar, Rajendra, E-mail: khundrakpam-ss@yahoo.com; Singh, Kh. S., E-mail: khundrakpam-ss@yahoo.com

    A simple design of a broadband one-dimensional dielectric/semiconductor multilayer structure having the refractive index profile of an exponentially graded material has been proposed. The theoretical analysis shows that the proposed structure works as a perfect mirror within a certain wavelength range (1550 nm). In order to calculate the reflection properties, a transfer matrix method (TMM) has been used. This property shows that binary graded photonic crystal structures have a widened omnidirectional reflector (ODR) bandgap. Hence an exponentially graded photonic crystal structure can be used as a broadband optical reflector, and the range of reflection can be tuned to any wavelength region by varying the refractive index profile of the exponentially graded photonic crystal structure.
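
    A transfer matrix method for a multilayer stack multiplies the 2x2 characteristic matrices of the layers and reads the reflectance off the resulting matrix elements. A minimal normal-incidence sketch with a hypothetical exponentially graded index profile (the layer count, indices, grading rate, and substrate index are illustrative, not the paper's design):

    ```python
    import numpy as np

    def layer_matrix(n, d, lam):
        """Characteristic 2x2 matrix of one homogeneous layer (normal incidence)."""
        delta = 2 * np.pi * n * d / lam
        return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                         [1j * n * np.sin(delta), np.cos(delta)]])

    def reflectance(ns, ds, lam, n_in=1.0, n_sub=1.52):
        """Stack reflectance via the transfer matrix method (TMM)."""
        m = np.eye(2, dtype=complex)
        for n, d in zip(ns, ds):
            m = m @ layer_matrix(n, d, lam)
        b = m[0, 0] + m[0, 1] * n_sub
        c = m[1, 0] + m[1, 1] * n_sub
        r = (n_in * b - c) / (n_in * b + c)
        return abs(r) ** 2

    # Hypothetical exponentially graded index profile, quarter-wave at 1550 nm
    j = np.arange(40)
    ns = 1.5 * np.exp(0.02 * j)
    ds = 1550e-9 / (4 * ns)
    print(f"R at 1550 nm: {reflectance(ns, ds, 1550e-9):.4f}")
    ```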

  18. New exponential synchronization criteria for time-varying delayed neural networks with discontinuous activations.

    PubMed

    Cai, Zuowei; Huang, Lihong; Zhang, Lingling

    2015-05-01

    This paper investigates the problem of exponential synchronization of time-varying delayed neural networks with discontinuous neuron activations. Under the extended Filippov differential inclusion framework, by designing a discontinuous state-feedback controller and using some analytic techniques, new testable algebraic criteria are obtained to realize two different kinds of global exponential synchronization of the drive-response system. Moreover, we give the estimated rate of exponential synchronization, which depends on the delays and system parameters. The obtained results extend some previous works on synchronization of delayed neural networks, not only with continuous activations but also with discontinuous activations. Finally, numerical examples are provided to show the correctness of our analysis via computer simulations. Our method and theoretical results are of significance for the design of synchronized neural network circuits involving discontinuous factors and time-varying delays. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Determination of relationship between sensory viscosity rating and instrumental flow behaviour of soluble dietary fibers.

    PubMed

    Arora, Simran Kaur; Patel, A A; Kumar, Naveen; Chauhan, O P

    2016-04-01

    The shear-thinning low-, medium- and high-viscosity fiber preparations (0.15-1.05 % psyllium husk, 0.07-0.6 % guar gum, 0.15-1.20 % gum tragacanth, 0.1-0.8 % gum karaya, 0.15-1.05 % high-viscosity carboxymethyl cellulose (CMC) and 0.1-0.7 % xanthan gum) showed that the consistency coefficient (k) was a function of concentration, the relationship being exponential (R2, 0.87-0.96; P < 0.01). The flow behaviour index (n) (except for gum karaya and CMC) was exponentially related to concentration (R2, 0.61-0.98). The relationship between k and the sensory viscosity rating (SVR) was essentially linear in nearly all cases. The SVR could be predicted from the consistency coefficient using the regression equations developed. Also, the relationship of k with fiber concentration would make it possible to identify the concentration of a particular gum required to achieve a desired consistency in terms of SVR.
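
    An exponential concentration dependence of the consistency coefficient, k = a*exp(b*c), is linear in log space, so it can be estimated by ordinary least squares on ln k. A sketch with hypothetical concentration-k pairs:

    ```python
    import numpy as np

    # Hypothetical consistency coefficients k (Pa.s^n) at several gum concentrations (%)
    conc = np.array([0.10, 0.20, 0.30, 0.45, 0.60])
    k = np.array([0.05, 0.12, 0.30, 0.95, 2.40])

    # k = a * exp(b * c) is linear in log space: ln k = ln a + b * c
    b, ln_a = np.polyfit(conc, np.log(k), 1)
    fit = ln_a + b * conc
    r2 = 1 - np.sum((np.log(k) - fit) ** 2) / np.sum((np.log(k) - np.log(k).mean()) ** 2)
    print(f"k = {np.exp(ln_a):.3f} * exp({b:.2f} * c), R2 = {r2:.3f}")
    ```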

  20. Quantifying the yellow signal driver behavior based on naturalistic data from digital enforcement cameras.

    PubMed

    Bar-Gera, H; Musicant, O; Schechtman, E; Ze'evi, T

    2016-11-01

    The yellow signal driver behavior, reflecting the dilemma zone behavior, is analyzed using naturalistic data from digital enforcement cameras. The key variable in the analysis is the entrance time after the yellow onset, and its distribution. This distribution can assist in determining two critical outcomes: the safety outcome related to red-light-running angle accidents, and the efficiency outcome. The connection to other approaches for evaluating the yellow signal driver behavior is also discussed. The dataset was obtained from 37 digital enforcement cameras at non-urban signalized intersections in Israel, over a period of nearly two years. The data contain more than 200 million vehicle entrances, of which 2.3% (~5 million vehicles) entered the intersection during the yellow phase. In all non-urban signalized intersections in Israel the green phase ends with 3 s of flashing green, followed by 3 s of yellow. On most non-urban signalized roads in Israel the posted speed limit is 90 km/h. Our analysis focuses on crossings during the yellow phase and the first 1.5 s of the red phase. The analysis method consists of two stages. In the first stage we tested whether the frequency of crossings is constant at the beginning of the yellow phase. We found that the pattern was stable (i.e., the frequencies were constant) at 18 intersections, nearly stable at 13 intersections and unstable at 6 intersections. In addition to the 6 intersections with unstable patterns, two other outlying intersections were excluded from subsequent analysis. Logistic regression models were fitted for each of the remaining 29 intersections. We examined both standard (exponential) logistic regression and four-parameter logistic regression. The results show a clear advantage for the former. The estimated parameters show that the time at which the frequency of crossing reduces to half ranges from 1.7 to 2.3 s after yellow onset. The duration of the reduction of the relative frequency from 0.9 to 0.1 ranged from 1.9 to 2.9 s. Copyright © 2015 Elsevier Ltd. All rights reserved.
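
    The standard (exponential) logistic regression referred to above models the relative crossing frequency as a logistic function of entrance time after yellow onset; the half-crossing time and the 0.9-to-0.1 span follow from the fitted parameters. A sketch with synthetic frequencies (all parameter values are hypothetical):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def crossing_freq(t, t50, s):
        """Relative crossing frequency vs time after yellow onset (logistic form)."""
        return 1.0 / (1.0 + np.exp((t - t50) / s))

    # Hypothetical relative crossing frequencies in 0.25 s bins after yellow onset
    t = np.arange(0.0, 4.5, 0.25)
    freq = crossing_freq(t, 2.0, 0.35) + np.random.default_rng(2).normal(0, 0.02, t.size)

    (t50, s), _ = curve_fit(crossing_freq, t, freq, p0=(2.0, 0.5))
    print(f"crossing frequency halves {t50:.2f} s after yellow onset")
    print(f"0.9-to-0.1 drop spans {2 * np.log(9) * s:.2f} s")
    ```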

  1. Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time

    NASA Astrophysics Data System (ADS)

    Himeoka, Yusuke; Kaneko, Kunihiko

    2017-04-01

    The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, for which quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase that consist of autocatalytic chemical components, including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation follows the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the distribution of lag times among cells is skewed, with a long tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.

  2. A Spectral Lyapunov Function for Exponentially Stable LTV Systems

    NASA Technical Reports Server (NTRS)

    Zhu, J. Jim; Liu, Yong; Hang, Rui

    2010-01-01

    This paper presents the formulation of a Lyapunov function for an exponentially stable linear time-varying (LTV) system using a well-defined PD-spectrum and the associated PD-eigenvectors. It provides a bridge between the first and second methods of Lyapunov for stability assessment, and will find significant applications in the analysis and control law design for LTV systems and linearizable nonlinear time-varying systems.

  3. Weblog patterns and human dynamics with decreasing interest

    NASA Astrophysics Data System (ADS)

    Guo, J.-L.; Fan, C.; Guo, Z.-H.

    2011-06-01

    In order to describe the phenomenon that people's interest in an activity is high at the beginning and then gradually decreases until reaching a balance, a model describing the attenuation of interest is proposed to reflect the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the interval distribution of arrival times is a mixed distribution with exponential and power-law features, i.e., a power law with an exponential cutoff. After that, we collect blogs on ScienceNet.cn and carry out an empirical study of the interarrival time distribution. The empirical results agree well with the theoretical analysis, obeying a special power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model by providing evidence for a new class of phenomena in human dynamics. It can be concluded that besides power-law distributions, there are other distributions in human dynamics. These findings demonstrate the variety of human behavior dynamics.
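
    A power law with an exponential cutoff, p(t) proportional to t^(k-1) * exp(-t/theta), is exactly the Gamma family mentioned above, so the exponent and cutoff can be recovered by maximum likelihood. A short sketch on synthetic interarrival times (parameters purely illustrative):

    ```python
    import numpy as np
    from scipy import stats

    # Power law with exponential cutoff: p(t) ~ t**(k-1) * exp(-t/theta),
    # i.e. a Gamma distribution (parameters below are purely illustrative).
    samples = stats.gamma.rvs(0.4, scale=50.0, size=50_000, random_state=0)

    # Recover the power-law exponent and the cutoff scale by maximum likelihood
    k_hat, _, theta_hat = stats.gamma.fit(samples, floc=0)
    print(f"power-law part ~ t^({k_hat - 1:.2f}), exponential cutoff ~ {theta_hat:.1f}")
    ```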

  4. Brain natriuretic peptide (BNP) may play a major role in risk stratification based on cerebral oxygen saturation by near-infrared spectroscopy in patients undergoing major cardiovascular surgery

    PubMed Central

    Hayashida, Masakazu; Matsushita, Satoshi; Yamamoto, Makiko; Nakamura, Atsushi; Amano, Atsushi

    2017-01-01

    Purpose: A previous study reported that low baseline cerebral oxygen saturation (ScO2) (≤50%) measured with near-infrared spectroscopy was predictive of poor clinical outcomes after cardiac surgery. However, such findings have not been reconfirmed by others. We conducted the current study to evaluate whether the previous findings would be reproducible, and to explore mechanisms underlying the ScO2-based outcome prediction. Methods: We retrospectively investigated 573 consecutive patients, aged 20 to 91 (mean ± standard deviation, 67.1 ± 12.8) years, who underwent major cardiovascular surgery. Preanesthetic baseline ScO2, lowest intraoperative ScO2, various clinical variables, and hospital mortality were examined. Results: Bivariate regression analyses revealed that baseline ScO2 correlated significantly with plasma brain natriuretic peptide concentration (BNP), hemoglobin concentration (Hgb), estimated glomerular filtration rate (eGFR), and left ventricular ejection fraction (LVEF) (p < 0.0001 for each). Baseline ScO2 correlated with BNP in an exponential manner, and BNP was the most significant factor influencing ScO2. Logistic regression analyses revealed that baseline and lowest intraoperative ScO2 values, but not relative ScO2 decrements, were significantly associated with hospital mortality (p < 0.05), independent of the EuroSCORE (p < 0.01). Receiver operating characteristic curve analysis of ScO2 values and hospital mortality revealed an area under the curve (AUC) of 0.715 (p < 0.01) and a cutoff value of ≤50.5% for the baseline ScO2, and an AUC of 0.718 (p < 0.05) and a cutoff value of ≤35% for the lowest intraoperative ScO2. Low baseline ScO2 (≤50%) was associated with increases in intubation time, intensive care unit stay, hospital stay, and hospital mortality. Conclusion: Baseline ScO2 was reflective of the severity of systemic comorbidities and was predictive of clinical outcomes after major cardiovascular surgery. ScO2 correlated most significantly with BNP in an exponential manner, suggesting that BNP plays a major role in ScO2-based outcome prediction. PMID:28704502
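
    An exponential relation between ScO2 and BNP of the general form ScO2 = c + a*exp(-b*BNP) can be fitted by non-linear least squares. A hedged sketch with simulated values (the functional form and all numbers are illustrative assumptions, not the study's fitted model):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sco2_model(bnp, a, b, c):
        """Assumed form: ScO2 falls exponentially toward a floor as BNP rises."""
        return c + a * np.exp(-b * bnp)

    # Simulated (BNP pg/mL, baseline ScO2 %) pairs -- illustrative only
    rng = np.random.default_rng(3)
    bnp = rng.uniform(10, 2000, 120)
    sco2 = sco2_model(bnp, 25.0, 0.002, 45.0) + rng.normal(0, 2.0, bnp.size)

    (a, b, c), _ = curve_fit(sco2_model, bnp, sco2, p0=(20.0, 0.001, 50.0))
    print(f"ScO2 ~= {c:.1f} + {a:.1f} * exp(-{b:.4f} * BNP)")
    ```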

  5. Quantifying mineral abundances of complex mixtures by coupling spectral deconvolution of SWIR spectra (2.1-2.4 μm) and regression tree analysis

    USGS Publications Warehouse

    Mulder, V.L.; Plotze, Michael; de Bruin, Sytze; Schaepman, Michael E.; Mavris, C.; Kokaly, Raymond F.; Egli, Markus

    2013-01-01

    This paper presents a methodology for assessing mineral abundances of mixtures having more than two constituents using absorption features in the 2.1-2.4 μm wavelength region. In the first step, the absorption behaviour of mineral mixtures is parameterised by exponential Gaussian optimisation. Next, mineral abundances are predicted by regression tree analysis using these parameters as inputs. The approach is demonstrated on a range of prepared samples with known abundances of kaolinite, dioctahedral mica, smectite, calcite and quartz and on a set of field samples from Morocco. The latter contained varying quantities of other minerals, some of which did not have diagnostic absorption features in the 2.1-2.4 μm region. Cross validation showed that the prepared samples of kaolinite, dioctahedral mica, smectite and calcite were predicted with a root mean square error (RMSE) less than 9 wt.%. For the field samples, the RMSE was less than 8 wt.% for calcite, dioctahedral mica and kaolinite abundances. Smectite could not be well predicted, which was attributed to spectral variation of the cations within the dioctahedral layered smectites. Substitution of part of the quartz by chlorite at the prediction phase hardly affected the accuracy of the predicted mineral content; this suggests that the method is robust in handling the omission of minerals during the training phase. The degree of expression of absorption components was different between the field sample and the laboratory mixtures. This demonstrates that the method should be calibrated and trained on local samples. Our method allows the simultaneous quantification of more than two minerals within a complex mixture and thereby enhances the perspectives of spectral analysis for mineral abundances.

  6. Stability in Cohen Grossberg-type bidirectional associative memory neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Cao, Jinde; Song, Qiankun

    2006-07-01

    In this paper, the exponential stability problem is investigated for a class of Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays. By using the analysis method, inequality technique and the properties of an M-matrix, several novel sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point are derived. Moreover, the exponential convergence rate is estimated. The obtained results are less restrictive than those given in the earlier literature, and the boundedness and differentiability of the activation functions and differentiability of the time-varying delays are removed. Two examples with their simulations are given to show the effectiveness of the obtained results.

  7. Rotating flow of a nanofluid due to an exponentially stretching surface with suction

    NASA Astrophysics Data System (ADS)

    Salleh, Siti Nur Alwani; Bachok, Norfifah; Arifin, Norihan Md

    2017-08-01

    The rotating nanofluid flow past an exponentially stretching surface in the presence of suction is analyzed in this work. Three different types of nanoparticles, namely copper, titania and alumina, are considered. The system of ordinary differential equations is computed numerically using a shooting method in Maple software after being transformed from the partial differential equations. This transformation uses similarity transformations in exponential form. The physical effect of the rotation, suction and nanoparticle volume fraction parameters on the rotating flow and heat transfer phenomena is investigated and described in detail through graphs. The dual solutions are found to appear when the governing parameters reach a certain range.

  8. On stable exponential cosmological solutions with non-static volume factor in the Einstein-Gauss-Bonnet model

    NASA Astrophysics Data System (ADS)

    Ivashchuk, V. D.; Ernazarov, K. K.

    2017-01-01

    An (n + 1)-dimensional gravitational model with a cosmological constant and a Gauss-Bonnet term is studied. The ansatz with diagonal cosmological metrics is adopted, and solutions with exponential dependence of the scale factors, a_i ~ exp(v_i t), i = 1, ..., n, are considered. The stability analysis of the solutions with non-static volume factor is presented. We show that the solutions with v_1 = v_2 = v_3 = H > 0 and small enough variation of the effective gravitational constant G are stable if a certain restriction on (v_i) is obeyed. New examples of stable exponential solutions with zero variation of G in dimensions D = 1 + m + 2 with m > 2 are presented.

  9. Nonlinear stability of the 1D Boltzmann equation in a periodic box

    NASA Astrophysics Data System (ADS)

    Wu, Kung-Chien

    2018-05-01

    We study the nonlinear stability of the Boltzmann equation in the 1D periodic box, with box size proportional to the Knudsen number. The convergence rate is polynomial in the small-time region and exponential in the large-time region. Moreover, the exponential rate depends on the size of the domain (the Knudsen number). This problem is highly nonlinear, and hence more careful analysis is needed to control the nonlinear term.

  10. Non-Gaussian analysis of diffusion weighted imaging in head and neck at 3T: a pilot study in patients with nasopharyngeal carcinoma.

    PubMed

    Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S P; Bhatia, Kunwar S; Wang, Yi-Xiang J; Ahuja, Anil T; King, Ann D

    2014-01-01

    To technically investigate the non-Gaussian diffusion of head and neck diffusion weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and the statistical model, in patients with nasopharyngeal carcinoma (NPC). After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm^2. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on the primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Diffusion in NPC exhibited non-Gaussian behavior at the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and histogram distribution. Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed by using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential to be used as a complementary tool for NPC characterization.
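
    One of the non-Gaussian models compared above, the stretched-exponential model (SEM), describes the normalized signal as S/S0 = exp(-(b*DDC)^alpha). A minimal fitting sketch over an extended b-value range (the signal values are synthetic):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def stretched_exp(b, ddc, alpha):
        """Stretched-exponential DWI model: S/S0 = exp(-(b * DDC)**alpha)."""
        return np.exp(-(b * ddc) ** alpha)

    # Synthetic normalized signal over an extended b-value range (s/mm^2)
    b = np.array([0, 100, 300, 500, 800, 1000, 1200, 1500], dtype=float)
    sig = stretched_exp(b, 1.0e-3, 0.8) + np.random.default_rng(4).normal(0, 0.005, b.size)

    (ddc, alpha), _ = curve_fit(stretched_exp, b, sig, p0=(1e-3, 0.9),
                                bounds=([1e-5, 0.1], [1e-2, 1.0]))
    print(f"DDC = {ddc:.2e} mm^2/s, alpha = {alpha:.2f}")
    ```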

  11. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However in shales, substantial hydrogen content is associated with solid and fluid signals and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.
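
    The core idea of a simultaneous Gaussian-exponential decomposition can be sketched with a two-component model, one Gaussian and one exponential decay, fitted jointly; the actual SGE inversion solves for full distributions of relaxation times, which this toy fit does not attempt:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sge(t, a_g, t2g, a_e, t2e):
        """Two-component decay: one Gaussian and one exponential term."""
        return a_g * np.exp(-(t / t2g) ** 2) + a_e * np.exp(-t / t2e)

    # Synthetic decay with a fast Gaussian (solid-like) and a slower exponential
    # (fluid-like) component; time axis and parameters are hypothetical.
    t = np.linspace(0.0, 2.0, 400)  # ms
    rng = np.random.default_rng(5)
    data = sge(t, 0.6, 0.05, 0.4, 0.8) + rng.normal(0, 0.01, t.size)

    p, _ = curve_fit(sge, t, data, p0=(0.5, 0.1, 0.5, 1.0))
    print(f"Gaussian T2 = {p[1]:.3f} ms, exponential T2 = {p[3]:.3f} ms")
    ```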

  12. Soil Particle Size Analysis by Laser Diffractometry: Result Comparison with Pipette Method

    NASA Astrophysics Data System (ADS)

    Šinkovičová, Miroslava; Igaz, Dušan; Kondrlová, Elena; Jarošová, Miriam

    2017-10-01

    Soil texture as a basic soil physical property provides basic information on the soil grain size distribution as well as the representation of grain size fractions. Currently, several methods of particle dimension measurement are available that are based on different physical principles. The pipette method, based on the different sedimentation velocities of particles with different diameters, is considered one of the standard methods for determining the distribution of individual grain size fractions. Following technical advancement, optical methods such as laser diffraction can nowadays also be used for determining the grain size distribution in soil. According to the literature review of domestic as well as international sources related to this topic, it is obvious that the results obtained by laser diffractometry do not correspond with the results obtained by the pipette method. The main aim of this paper was to analyse 132 samples of medium-fine soil, taken from the Nitra River catchment in Slovakia from depths of 15-20 cm and 40-45 cm, respectively, using the laser analysers ANALYSETTE 22 MicroTec plus (Fritsch GmbH) and Mastersizer 2000 (Malvern Instruments Ltd). The results obtained by laser diffractometry were compared with the pipette method, and regression relationships using linear, exponential, power and polynomial trends were derived. The regressions with the three highest regression coefficients (R2) were further investigated. The fit with the highest tightness was observed for the polynomial regression. In view of the results obtained, we recommend estimating the representation of the clay fraction (<0.01 mm) with the polynomial regression, which achieved the highest confidence values R2: 0.72 (ANALYSETTE 22 MicroTec plus) and 0.95 (Mastersizer 2000) at the depth of 15-20 cm, and 0.90 (ANALYSETTE 22 MicroTec plus) and 0.96 (Mastersizer 2000) at the depth of 40-45 cm. Since the percentage representation of clayey particles (2nd fraction according to the methodology of the Complex Soil Survey done in Slovakia) in soil is the determinant for soil type specification, we recommend using the derived relationships in soil science when soil texture analysis is done by laser diffractometry. The advantages of the laser diffraction method comprise the short analysis time, the use of a small sample amount, applicability to various grain size fraction and soil type classification systems, and a wide range of determined fractions. Therefore, it is necessary to focus on this issue further to address the needs of soil science research and attempt to replace the standard pipette method with the more progressive laser diffraction method.
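
    Deriving and comparing the linear and polynomial conversion regressions is a short least-squares exercise; the paired values below are hypothetical stand-ins for pipette and laser clay-fraction measurements, not the study's data:

    ```python
    import numpy as np

    # Hypothetical paired clay-fraction (<0.01 mm) measurements (%):
    # laser diffraction vs the reference pipette method.
    laser = np.array([4.2, 6.1, 7.8, 9.5, 11.0, 13.2, 15.1, 17.4, 19.0, 21.3])
    pipette = np.array([9.8, 12.5, 14.9, 17.2, 19.6, 23.0, 25.8, 29.5, 31.7, 35.2])

    def r2(y, yhat):
        return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    lin = np.polyval(np.polyfit(laser, pipette, 1), laser)
    poly = np.polyval(np.polyfit(laser, pipette, 2), laser)
    print(f"linear R2 = {r2(pipette, lin):.3f}, polynomial R2 = {r2(pipette, poly):.3f}")
    ```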

  13. The generalized truncated exponential distribution as a model for earthquake magnitudes

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-04-01

    The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for the modelling of magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented in the current contribution.

  14. Probing Gamma-ray Emission of Geminga and Vela with Non-stationary Models

    NASA Astrophysics Data System (ADS)

    Chai, Yating; Cheng, Kwong-Sang; Takata, Jumpei

    2016-06-01

    It is generally believed that the high-energy emissions from isolated pulsars are emitted by relativistic electrons/positrons accelerated in outer magnetospheric accelerators (outer gaps) via a curvature radiation mechanism, which has a simple exponential cut-off spectrum. However, many gamma-ray pulsars detected by the Fermi LAT (Large Area Telescope) cannot be fitted by a simple exponential cut-off spectrum; instead a sub-exponential cut-off is more appropriate. It is proposed that realistic outer gaps are non-stationary, and that the observed spectrum is a superposition of different stationary states that are controlled by the currents injected from the inner and outer boundaries. The Vela and Geminga pulsars have the largest fluxes among all targets observed, which allows us to carry out very detailed phase-resolved spectral analysis. We have divided the Vela and Geminga pulsars into 19 (the off pulse of Vela was not included) and 33 phase bins, respectively. We find that most phase-resolved spectra still cannot be fitted by a simple exponential spectrum: in fact, a sub-exponential spectrum is necessary. We conclude that non-stationary states exist even down to the very fine phase bins.

  15. The temporal analysis of yeast exponential phase using shotgun proteomics as a fermentation monitoring technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Eric L.; Orsat, Valerie; Shah, Manesh B

    2012-01-01

    Systems biology and bioprocess technology can be better understood using shotgun proteomics as a monitoring system during fermentation. We demonstrated a shotgun proteomic method to monitor the temporal yeast proteome in the early, middle and late exponential phases. Our study identified a total of 1389 proteins combining all 2D-LC-MS/MS runs. The temporal Saccharomyces cerevisiae proteome was enriched with proteolysis, radical detoxification, translation, one-carbon metabolism, glycolysis and the TCA cycle. Heat shock proteins and proteins associated with the oxidative stress response were found throughout the exponential phase. The most abundant proteins observed were translation elongation factors, ribosomal proteins, chaperones and glycolytic enzymes. The high abundance of the H-protein of the glycine decarboxylase complex (Gcv3p) indicated the availability of glycine in the environment. We observed differentially expressed proteins, and the proteins induced at mid-exponential phase were involved in ribosome biogenesis, mitochondrial DNA binding/replication and transcriptional activation. Induction of tryptophan synthase (Trp5p) indicated the abundance of tryptophan during the fermentation. As fermentation progressed toward the late exponential phase, a decrease in cell proliferation was implied from the repression of ribosomal proteins, transcription coactivators, methionine aminopeptidase and translation-associated proteins.

  16. Multivariate generalized hidden Markov regression models with random covariates: Physical exercise in an elderly population.

    PubMed

    Punzo, Antonio; Ingrassia, Salvatore; Maruotti, Antonello

    2018-04-22

    A time-varying latent variable model is proposed to jointly analyze multivariate mixed-support longitudinal data. The proposal can be viewed as an extension of hidden Markov regression models with fixed covariates (HMRMFCs), which is the state of the art for modelling longitudinal data, with a special focus on the underlying clustering structure. HMRMFCs are inadequate for applications in which a clustering structure can be identified in the distribution of the covariates, as the clustering is independent from the covariates distribution. Here, hidden Markov regression models with random covariates are introduced by explicitly specifying state-specific distributions for the covariates, with the aim of improving the recovery of the clusters in the data with respect to a fixed-covariates paradigm. The class of hidden Markov regression models with random covariates is defined focusing on the exponential family, in a generalized linear model framework. Model identifiability conditions are sketched, an expectation-maximization algorithm is outlined for parameter estimation, and various implementation and operational issues are discussed. Properties of the estimators of the regression coefficients, as well as of the hidden path parameters, are evaluated through simulation experiments and compared with those of HMRMFCs. The method is applied to physical activity data. Copyright © 2018 John Wiley & Sons, Ltd.

  17. TEMPERATURE-DEPENDENT VISCOELASTIC PROPERTIES OF THE HUMAN SUPRASPINATUS TENDON

    PubMed Central

    Huang, Chun-Yuh; Wang, Vincent M.; Flatow, Evan L.; Mow, Van C.

    2009-01-01

    Temperature effects on the viscoelastic properties of the human supraspinatus tendon were investigated using static stress-relaxation experiments and Quasi-Linear Viscoelastic (QLV) theory. Twelve supraspinatus tendons were randomly assigned to one of two test groups for tensile testing using the following sequence of temperatures: (1) 37°C, 27°C, and 17°C (Group I, n=6), or (2) 42°C, 32°C, and 22°C (Group II, n=6). QLV parameter C was found to increase at elevated temperatures, suggesting greater viscous mechanical behavior at higher temperatures. Elastic parameters A and B showed no significant difference among the six temperatures studied, implying that the viscoelastic stress response of the supraspinatus tendon is not sensitive to temperature over shorter testing durations. Using regression analysis, an exponential relationship between parameter C and test temperature was implemented into QLV theory to model temperature-dependent viscoelastic behavior. This modified approach facilitates the theoretical determination of the viscoelastic behavior of tendons at arbitrary temperatures. PMID:19159888

  18. A quantitative description of normal AV nodal conduction curve in man.

    PubMed

    Teague, S; Collins, S; Wu, D; Denes, P; Rosen, K; Arzbaecher, R

    1976-01-01

    The AV nodal conduction curve generated by the atrial extrastimulus technique has been described only qualitatively in man, making clinical comparison of known normal curves with those of suspected AV nodal dysfunction difficult. Also, the effects of physiological and pharmacological interventions have not been quantifiable. In 50 patients with normal AV conduction, as defined by a normal AH interval (less than 130 ms), normal AV nodal effective and functional refractory periods (less than 380 and less than 500 ms), and the absence of demonstrable dual AV nodal pathways, we found that conduction curves (at sinus rhythm or the longest paced cycle length) can be described by an exponential equation of the form delta = A e^(-Bx). In this equation, delta is the increase in AV nodal conduction time of an extrastimulus compared to that of a regular beat and x is the extrastimulus interval. The natural logarithm of this equation is linear in the semilogarithmic plane, thus permitting the constants A and B to be easily determined by a least-squares regression analysis with a hand calculator.
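
    Because delta = A e^(-Bx) becomes ln(delta) = ln(A) - Bx after taking logarithms, A and B follow from an ordinary least-squares line fit, exactly as the authors describe. A sketch with hypothetical conduction data:

    ```python
    import numpy as np

    # Hypothetical pairs: extrastimulus interval x (ms) and conduction-time
    # increment delta (ms) from an atrial extrastimulus protocol.
    x = np.array([300, 340, 380, 420, 460, 500, 540, 580], dtype=float)
    delta = np.array([180, 120, 85, 55, 38, 25, 17, 11], dtype=float)

    # delta = A * exp(-B * x)  =>  ln(delta) = ln(A) - B * x
    slope, intercept = np.polyfit(x, np.log(delta), 1)
    A, B = np.exp(intercept), -slope
    print(f"delta ~= {A:.0f} * exp(-{B:.5f} * x)")
    ```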

  19. Risk of Falls in Parkinson's Disease: A Cross-Sectional Study of 160 Patients

    PubMed Central

    Contreras, Ana; Grandas, Francisco

    2012-01-01

    Falls are a major source of disability in Parkinson's disease. Risk factors for falling in Parkinson's disease remain unclear. To determine the relevant risk factors for falling in Parkinson's disease, we screened 160 consecutive patients with Parkinson's disease for falls and assessed 40 variables. A comparison between fallers and nonfallers was performed using statistical univariate analyses, followed by bivariate and multivariate logistic regression, receiver operating characteristic analysis, and Kaplan-Meier curves. Overall, 38.8% of patients had experienced falls since the onset of Parkinson's disease (recurrent in 67%). The Tinetti Balance score and Hoehn and Yahr staging were the best independent variables associated with falls. The Tinetti Balance test predicted falls with 71% sensitivity and 79% specificity, and Hoehn and Yahr staging with 77% sensitivity and 71% specificity. The risk of falls increased exponentially with age, especially from 70 years onward. Patients aged >70 years at the onset of Parkinson's disease experienced falls significantly earlier than younger patients. PMID:22292126

  20. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322

  1. MRI quantification of diffusion and perfusion in bone marrow by intravoxel incoherent motion (IVIM) and non-negative least square (NNLS) analysis.

    PubMed

    Marchand, A J; Hitti, E; Monge, F; Saint-Jalmes, H; Guillin, R; Duvauferrier, R; Gambarota, G

    2014-11-01

    To assess the feasibility of measuring diffusion and perfusion fraction in vertebral bone marrow using the intravoxel incoherent motion (IVIM) approach and to compare two fitting methods, i.e., the non-negative least squares (NNLS) algorithm and the more commonly used Levenberg-Marquardt (LM) non-linear least squares algorithm, for the analysis of IVIM data. MRI experiments were performed on fifteen healthy volunteers, with a diffusion-weighted echo-planar imaging (EPI) sequence at five different b-values (0, 50, 100, 200, 600 s/mm^2), in combination with an STIR module to suppress the lipid signal. Diffusion signal decays in the first lumbar vertebra (L1) were fitted to a bi-exponential function using the LM algorithm and further analyzed with the NNLS algorithm to calculate the values of the apparent diffusion coefficient (ADC), pseudo-diffusion coefficient (D*) and perfusion fraction. The NNLS analysis revealed two diffusion components only in seven out of fifteen volunteers, with ADC = 0.60 ± 0.09 (10^-3 mm^2/s), D* = 28 ± 9 (10^-3 mm^2/s) and perfusion fraction = 14% ± 6%. The values obtained by the LM bi-exponential fit were: ADC = 0.45 ± 0.27 (10^-3 mm^2/s), D* = 63 ± 145 (10^-3 mm^2/s) and perfusion fraction = 27% ± 17%. Furthermore, the LM algorithm yielded values of perfusion fraction in cases where the decay was not bi-exponential, as assessed by NNLS analysis. The IVIM approach allows for measuring diffusion and perfusion fraction in vertebral bone marrow; its reliability can be improved by using the NNLS, which identifies the diffusion decays that display a bi-exponential behavior. Copyright © 2014 Elsevier Inc. All rights reserved.
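
    The LM-style bi-exponential IVIM fit referred to above can be reproduced with bounded non-linear least squares at the study's five b-values; all signal values, initial guesses, and bounds below are synthetic assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def ivim(b, f, d_star, d):
        """IVIM bi-exponential: S/S0 = f*exp(-b*D*) + (1 - f)*exp(-b*D)."""
        return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

    b = np.array([0, 50, 100, 200, 600], dtype=float)  # s/mm^2, as in the study
    rng = np.random.default_rng(6)
    sig = ivim(b, 0.14, 28e-3, 0.60e-3) + rng.normal(0, 0.005, b.size)

    (f, d_star, d), _ = curve_fit(ivim, b, sig, p0=(0.1, 10e-3, 1e-3),
                                  bounds=([0, 1e-3, 1e-4], [0.5, 1.0, 3e-3]))
    print(f"f = {f:.2f}, D* = {d_star * 1e3:.1f}e-3 mm^2/s, D = {d * 1e3:.2f}e-3 mm^2/s")
    ```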

  2. The combined effect of age and basal follicle-stimulating hormone on the cost of a live birth at assisted reproductive technology.

    PubMed

    Henne, Melinda B; Stegmann, Barbara J; Neithardt, Adrienne B; Catherino, William H; Armstrong, Alicia Y; Kao, Tzu-Cheg; Segars, James H

    2008-01-01

    To predict the cost of a delivery following assisted reproductive technologies (ART). Cost analysis based on retrospective chart analysis. University-based ART program. Women aged ≥26 and

  3. The multiple complex exponential model and its application to EEG analysis

    NASA Astrophysics Data System (ADS)

    Chen, Dao-Mu; Petzold, J.

    The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition of the nonlinear character of the brain's dynamic behavior.
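
    A multiple complex exponential model represents the signal as a sum of damped complex exponentials, whose poles can be estimated from a linear-prediction step as in Prony's method. This is a generic sketch of that idea, not the paper's nonharmonic Fourier expansion algorithm; the test signal is synthetic:

    ```python
    import numpy as np

    def prony_poles(x, p):
        """Linear-prediction (Prony) estimate of the poles of p complex exponentials."""
        n = len(x)
        A = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
        a = np.linalg.lstsq(A, -x[p:n], rcond=None)[0]  # prediction coefficients
        return np.roots(np.r_[1.0, a])                  # poles z_k = exp(s_k * dt)

    # Synthetic two-rhythm signal loosely resembling EEG (10 Hz and 22 Hz, damped)
    dt = 0.005
    t = np.arange(200) * dt
    x = np.exp(-2 * t) * np.cos(2 * np.pi * 10 * t) \
        + 0.5 * np.exp(-1 * t) * np.cos(2 * np.pi * 22 * t)

    z = prony_poles(x, 4)
    print("frequencies (Hz):", np.round(np.sort(np.abs(np.angle(z))) / (2 * np.pi * dt), 1))
    print("damping rates (1/s):", np.round(-np.log(np.abs(z)) / dt, 1))
    ```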

  4. Functional interaction-based nonlinear models with application to multiplatform genomics data.

    PubMed

    Davenport, Clemontina A; Maity, Arnab; Baladandayuthapani, Veerabhadran

    2018-05-07

    Functional regression allows for a scalar response to be dependent on a functional predictor; however, not much work has been done when a scalar exposure that interacts with the functional covariate is introduced. In this paper, we present 2 functional regression models that account for this interaction and propose 2 novel estimation procedures for the parameters in these models. These estimation methods allow for a noisy and/or sparsely observed functional covariate and are easily extended to generalized exponential family responses. We compute standard errors of our estimators, which allows for further statistical inference and hypothesis testing. We compare the performance of the proposed estimators to each other and to one found in the literature via simulation and demonstrate our methods using a real data example. Copyright © 2018 John Wiley & Sons, Ltd.

  5. The effect of machine learning regression algorithms and sample size on individualized behavioral prediction with functional connectivity features.

    PubMed

    Cui, Zaixu; Gong, Gaolang

    2018-06-02

    Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranged from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
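
    The algorithm-by-sample-size comparison can be sketched with scikit-learn using synthetic high-dimensional features as a stand-in for the rsFC data (RVR is omitted here because it is not part of scikit-learn's standard estimators; the feature counts and sample sizes are illustrative):

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet
    from sklearn.svm import LinearSVR
    from sklearn.model_selection import cross_val_score

    # Synthetic high-dimensional features as a stand-in for connectivity data
    X, y = make_regression(n_samples=700, n_features=1000, n_informative=50,
                           noise=10.0, random_state=0)

    models = {"OLS": LinearRegression(), "LASSO": Lasso(alpha=1.0),
              "ridge": Ridge(alpha=1.0), "elastic-net": ElasticNet(alpha=1.0),
              "LSVR": LinearSVR(max_iter=10_000)}

    for n in (50, 200, 700):  # increasing sample sizes
        for name, model in models.items():
            score = cross_val_score(model, X[:n], y[:n], cv=5, scoring="r2").mean()
            print(f"n={n:4d}  {name:11s}  mean CV R2 = {score:.3f}")
    ```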

  6. VO2 Off Transient Kinetics in Extreme Intensity Swimming.

    PubMed

    Sousa, Ana; Figueiredo, Pedro; Keskinen, Kari L; Rodríguez, Ferran A; Machado, Leandro; Vilas-Boas, João P; Fernandes, Ricardo J

    2011-01-01

    Inconsistencies about dynamic asymmetry between the on- and off-transient responses in oxygen uptake are found in the literature. Therefore, the purpose of this study was to characterize the oxygen uptake off-transient kinetics during a maximal 200-m front crawl effort, and to examine the degree to which the on/off regularity of the oxygen uptake kinetics response was preserved. Eight high-level male swimmers performed a 200-m front crawl at maximal speed during which oxygen uptake was directly measured through breath-by-breath oximetry (averaged every 5 s). This apparatus was connected to the swimmer by a low hydrodynamic resistance respiratory snorkel and valve system. The on- and off-transient phases were symmetrical in shape (mirror image) once they were adequately fitted by single-exponential regression models, and no slow component for the oxygen uptake response developed. Mean (± SD) peak oxygen uptake was 69.0 (± 6.3) mL·kg^-1·min^-1, significantly correlated with the time constant of the off-transient period (r = 0.76, p < 0.05) but not with any of the other oxygen off-transient kinetic parameters studied. A direct relationship between the time constant of the off-transient period and the mean swimming speed of the 200-m (r = 0.77, p < 0.05), and with the amplitude of the fast component of the effort period (r = 0.72, p < 0.05), was observed. The mean amplitude and time constant of the off-transient period were significantly greater than the respective on-transient values. In conclusion, although an asymmetry between the on- and off-kinetic parameters was verified, both the 200-m effort and the respective recovery period were better characterized by a single-exponential regression model. Key points: The VO2 slow component was not observed in the recovery period of swimming extreme efforts; the on- and off-transient periods were better fitted by a single exponential function, and so the effort and recovery periods of swimming extreme efforts are symmetrical; the rate of VO2 decline during the recovery period may be due not only to the magnitude of the oxygen debt but also to the VO2peak obtained during the effort period.

  7. Evidence of the Exponential Decay Emission in the Swift Gamma-ray Bursts

    NASA Technical Reports Server (NTRS)

    Sakamoto, T.; Sato, G.; Hill, J.E.; Krimm, H.A.; Yamazaki, R.; Takami, K.; Swindell, S.; Osborne, J.P.

    2007-01-01

    We present a systematic study of the steep decay emission of gamma-ray bursts (GRBs) observed by the Swift X-Ray Telescope (XRT). In contrast to the analysis in the recent literature, instead of extrapolating the Burst Alert Telescope (BAT) data down into the XRT energy range, we extrapolated the XRT data up to the BAT energy range, 15-25 keV, to produce the BAT and XRT composite light curve. Based on our composite light curve fitting, we have confirmed the existence of an exponential decay component which smoothly connects the BAT prompt data to the XRT steep decay for several GRBs. We also find that the XRT steep decay for some of the bursts can be well fitted by a combination of a power-law with an exponential decay model. We argue that this exponential component may be the emission from an external shock and a sign of the deceleration of the outflow during the prompt phase.

  8. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
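
    Fitting in Legendre space means projecting both the data and the model onto a low-order Legendre basis and matching coefficients instead of raw samples. The sketch below illustrates that representation for a single noisy exponential; it is an illustration of the idea under assumed parameters, not the authors' algorithm:

    ```python
    import numpy as np
    from numpy.polynomial import legendre as leg
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(7)
    t = np.linspace(-1, 1, 1000)              # time axis rescaled to [-1, 1]
    data = 2.0 * np.exp(-3.0 * (t + 1)) + rng.normal(0, 0.05, t.size)

    deg = 8
    coef_data = leg.legfit(t, data, deg)      # the data in Legendre space

    def model_coefs(_, a, k):
        """Legendre coefficients of the model a * exp(-k * (t + 1))."""
        return leg.legfit(t, a * np.exp(-k * (t + 1)), deg)

    p, _ = curve_fit(model_coefs, np.arange(deg + 1), coef_data, p0=(1.0, 1.0))
    print(f"amplitude = {p[0]:.2f}, rate = {p[1]:.2f}")
    ```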

  9. On the non-exponentiality of the dielectric Debye-like relaxation of monoalcohols

    NASA Astrophysics Data System (ADS)

    Arrese-Igor, S.; Alegría, A.; Colmenero, J.

    2017-03-01

    We have investigated the Debye-like relaxation in a series of monoalcohols (MAs) by broadband dielectric spectroscopy and thermally stimulated depolarization current techniques in order to get further insight into the time dispersion of this intriguing relaxation. Results indicate that the Debye-like relaxation of MAs is not always of exponential type and conforms well to a dispersion of Cole-Davidson type. Apart from the already reported non-exponentiality of the Debye-like relaxation in 2-hexyl-1-decanol and 2-butyl-1-octanol, a detailed analysis of the dielectric permittivity of 5-methyl-3-heptanol shows that this MA also presents some extent of dispersion in its Debye-like relaxation, which strongly depends on the temperature. The results suggest that the non-exponential character of the Debye-like relaxation might be a general characteristic of Debye-like relaxations that are not so intense relative to the α relaxation. Finally, we briefly discuss the T-dependence and possible origin of the observed dispersion.

  10. Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 3; Constant Stress and Cyclic Stress Experiments

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.

    2002-01-01

    The life prediction analysis previously developed from an exponential crack-velocity formulation was examined using a variety of experimental data on advanced structural ceramics tested under constant stress and cyclic stress loading at ambient and elevated temperatures. The fit of the data to the relation between time to failure and applied stress (or maximum applied stress in cyclic loading) was very reasonable for most of the materials studied. It was also found that life prediction for cyclic stress loading from constant stress loading data in the exponential formulation was in good agreement with the experimental data, with a degree of accuracy similar to that of the power-law formulation. The major limitation of the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important slow-crack-growth (SCG) parameter n, a significant drawback compared with the conventional power-law crack-velocity formulation.

  11. Improved result on stability analysis of discrete stochastic neural networks with time delay

    NASA Astrophysics Data System (ADS)

    Wu, Zhengguang; Su, Hongye; Chu, Jian; Zhou, Wuneng

    2009-04-01

    This Letter investigates the problem of exponential stability for discrete stochastic time-delay neural networks. By defining a novel Lyapunov functional, an improved delay-dependent exponential stability criterion is established in terms of the linear matrix inequality (LMI) approach. Meanwhile, the computational complexity of the newly established stability condition is reduced because fewer variables are involved. A numerical example is given to illustrate the effectiveness and the benefits of the proposed method.

  12. Matrix elements of N-particle explicitly correlated Gaussian basis functions with complex exponential parameters

    NASA Astrophysics Data System (ADS)

    Bubin, Sergiy; Adamowicz, Ludwik

    2006-06-01

    In this work we present analytical expressions for Hamiltonian matrix elements with spherically symmetric, explicitly correlated Gaussian basis functions with complex exponential parameters for an arbitrary number of particles. The expressions are derived using the formalism of matrix differential calculus. In addition, we present expressions for the energy gradient that includes derivatives of the Hamiltonian integrals with respect to the exponential parameters. The gradient is used in the variational optimization of the parameters. All the expressions are presented in the matrix form suitable for both numerical implementation and theoretical analysis. The energy and gradient formulas have been programmed and used to calculate ground and excited states of the He atom using an approach that does not involve the Born-Oppenheimer approximation.

  13. Matrix elements of N-particle explicitly correlated Gaussian basis functions with complex exponential parameters.

    PubMed

    Bubin, Sergiy; Adamowicz, Ludwik

    2006-06-14

    In this work we present analytical expressions for Hamiltonian matrix elements with spherically symmetric, explicitly correlated Gaussian basis functions with complex exponential parameters for an arbitrary number of particles. The expressions are derived using the formalism of matrix differential calculus. In addition, we present expressions for the energy gradient that includes derivatives of the Hamiltonian integrals with respect to the exponential parameters. The gradient is used in the variational optimization of the parameters. All the expressions are presented in the matrix form suitable for both numerical implementation and theoretical analysis. The energy and gradient formulas have been programmed and used to calculate ground and excited states of the He atom using an approach that does not involve the Born-Oppenheimer approximation.

  14. Global exponential stability and lag synchronization for delayed memristive fuzzy Cohen-Grossberg BAM neural networks with impulses.

    PubMed

    Yang, Wengui; Yu, Wenwu; Cao, Jinde; Alsaadi, Fuad E; Hayat, Tasawar

    2018-02-01

    This paper investigates the stability and lag synchronization of memristor-based fuzzy Cohen-Grossberg bidirectional associative memory (BAM) neural networks with mixed delays (asynchronous time delays and continuously distributed delays) and impulses. By applying the inequality analysis technique, homeomorphism theory and some suitable Lyapunov-Krasovskii functionals, some new sufficient conditions for the uniqueness and global exponential stability of the equilibrium point are established. Furthermore, we obtain several sufficient criteria concerning global exponential lag synchronization for the proposed system based on the framework of Filippov solutions, differential inclusion theory and control theory. In addition, some examples with numerical simulations are given to illustrate the feasibility and validity of the obtained results.

  15. Higher Crash and Near-Crash Rates in Teenaged Drivers With Lower Cortisol Response

    PubMed Central

    Ouimet, Marie Claude; Brown, Thomas G.; Guo, Feng; Klauer, Sheila G.; Simons-Morton, Bruce G.; Fang, Youjia; Lee, Suzanne E.; Gianoulakis, Christina; Dingus, Thomas A.

    2014-01-01

    IMPORTANCE Road traffic crashes are one of the leading causes of injury and death among teenagers worldwide. Better understanding of the individual pathways to driving risk may lead to better-targeted intervention in this vulnerable group. OBJECTIVE To examine the relationship between cortisol, a neurobiological marker of stress regulation linked to risky behavior, and driving risk. DESIGN, SETTING, AND PARTICIPANTS The Naturalistic Teenage Driving Study was designed to continuously monitor the driving behavior of teenagers by instrumenting vehicles with kinematic sensors, cameras, and a global positioning system. During 2006–2008, a community sample of 42 newly licensed 16-year-old volunteer participants in the United States was recruited and their driving behavior monitored. It was hypothesized that, in teenagers, higher cortisol response to stress is associated with (1) lower crash and near-crash (CNC) rates during their first 18 months of licensure and (2) a faster reduction in CNC rates over time. MAIN OUTCOMES AND MEASURES Participants’ cortisol response during a stress-inducing task was assessed at baseline, followed by measurement of their involvement in CNCs and driving exposure during their first 18 months of licensure. Mixed-effect Poisson longitudinal regression models were used to examine the association between baseline cortisol response and CNC rates during the follow-up period. RESULTS Participants with a higher baseline cortisol response had lower CNC rates during the follow-up period (exponential of the regression coefficient, 0.93; 95% CI, 0.88–0.98) and a faster decrease in CNC rates over time (exponential of the regression coefficient, 0.98; 95% CI, 0.96–0.99). CONCLUSIONS AND RELEVANCE Cortisol is a neurobiological marker associated with teenaged-driving risk. As in other problem-behavior fields, identification of an objective marker of teenaged-driving risk promises the development of more personalized intervention approaches. PMID:24710522
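
    The exponentiated-coefficient reading of such models can be reproduced with a plain (non-mixed) Poisson regression sketch using statsmodels; the data below are simulated, and the mileage offset and effect sizes are invented for illustration only.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 42
      cortisol = rng.normal(0.0, 1.0, n)        # standardized cortisol response
      miles = rng.uniform(500.0, 5000.0, n)     # driving exposure
      lam = np.exp(0.5 - 0.07 * cortisol) * miles / 1000.0
      events = rng.poisson(lam)                 # simulated CNC counts

      X = sm.add_constant(cortisol)
      fit = sm.GLM(events, X, family=sm.families.Poisson(),
                   offset=np.log(miles / 1000.0)).fit()
      # exp(coefficient) is the multiplicative change in the CNC rate per
      # 1 SD increase in cortisol response, as reported in the abstract.
      print(np.exp(fit.params))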

  16. Data preprocessing method for liquid chromatography-mass spectrometry based metabolomics.

    PubMed

    Wei, Xiaoli; Shi, Xue; Kim, Seongho; Zhang, Li; Patrick, Jeffrey S; Binkley, Joe; McClain, Craig; Zhang, Xiang

    2012-09-18

    A set of data preprocessing algorithms for peak detection and peak list alignment are reported for analysis of liquid chromatography-mass spectrometry (LC-MS)-based metabolomics data. For spectrum deconvolution, peak picking is achieved at the selected ion chromatogram (XIC) level. To estimate and remove the noise in XICs, each XIC is first segmented into several peak groups based on the continuity of scan number, and the noise level is estimated by all the XIC signals, except the regions potentially with presence of metabolite ion peaks. After removing noise, the peaks of molecular ions are detected using both the first and the second derivatives, followed by an efficient exponentially modified Gaussian-based peak deconvolution method for peak fitting. A two-stage alignment algorithm is also developed, where the retention times of all peaks are first transferred into the z-score domain and the peaks are aligned based on the measure of their mixture scores after retention time correction using a partial linear regression. Analysis of a set of spike-in LC-MS data from three groups of samples containing 16 metabolite standards mixed with metabolite extract from mouse livers demonstrates that the developed data preprocessing method performs better than two of the existing popular data analysis packages, MZmine2.6 and XCMS2, for peak picking, peak list alignment, and quantification.
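
    A hedged sketch of fitting a single exponentially modified Gaussian (EMG) peak using the standard closed form with location mu, width sigma, and exponential rate lambda; the synthetic peak and starting values are assumptions, and the paper's full deconvolution handles overlapping peaks and noise estimation that this omits.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erfc

      def emg(t, area, mu, sigma, lam):
          # Exponentially modified Gaussian chromatographic peak shape.
          z = (mu + lam * sigma**2 - t) / (np.sqrt(2.0) * sigma)
          return area * (lam / 2.0) * np.exp(
              (lam / 2.0) * (2.0 * mu + lam * sigma**2 - 2.0 * t)) * erfc(z)

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 60.0, 300)            # retention time, s
      y = emg(t, 1000.0, 20.0, 1.5, 0.3)
      y_obs = y + rng.normal(0.0, 2.0, t.size)

      popt, _ = curve_fit(emg, t, y_obs, p0=[800.0, 18.0, 2.0, 0.2])
      print("area, mu, sigma, lambda =", np.round(popt, 2))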

  17. A Data Pre-processing Method for Liquid Chromatography Mass Spectrometry-based Metabolomics

    PubMed Central

    Wei, Xiaoli; Shi, Xue; Kim, Seongho; Zhang, Li; Patrick, Jeffrey S.; Binkley, Joe; McClain, Craig; Zhang, Xiang

    2012-01-01

    A set of data pre-processing algorithms for peak detection and peak list alignment are reported for analysis of LC-MS based metabolomics data. For spectrum deconvolution, peak picking is achieved at the selected ion chromatogram (XIC) level. To estimate and remove the noise in XICs, each XIC is first segmented into several peak groups based on the continuity of scan number, and the noise level is estimated by all the XIC signals, except the regions potentially with presence of metabolite ion peaks. After removing noise, the peaks of molecular ions are detected using both the first and the second derivatives, followed by an efficient exponentially modified Gaussian-based peak deconvolution method for peak fitting. A two-stage alignment algorithm is also developed, where the retention times of all peaks are first transferred into the z-score domain and the peaks are aligned based on the measure of their mixture scores after retention time correction using a partial linear regression. Analysis of a set of spike-in LC-MS data from three groups of samples containing 16 metabolite standards mixed with metabolite extract from mouse livers demonstrates that the developed data pre-processing method performs better than two of the existing popular data analysis packages, MZmine2.6 and XCMS2, for peak picking, peak list alignment and quantification. PMID:22931487

  18. Examining spectral properties of Landsat 8 OLI for predicting above-ground carbon of Labanan Forest, Berau

    NASA Astrophysics Data System (ADS)

    Suhardiman, A.; Tampubolon, B. A.; Sumaryono, M.

    2018-04-01

    Many studies have revealed significant correlations between satellite image properties and forest attributes such as stand volume, biomass or carbon stock. However, further study is still relevant given advances in remote sensing technology as well as improvements in methods of data analysis. In this study, the properties of three vegetation indices derived from Landsat 8 OLI were tested against above-ground carbon stock data from 50 circular sample plots (30-meter radius) from a ground survey in the PT. Inhutani I forest concession in Labanan, Berau, East Kalimantan. Correlation analysis using the Pearson method exhibited promising results, with coefficients of correlation (r-values) higher than 0.5. Further regression analysis was carried out to develop mathematical models describing the relationship between sample plot data and vegetation index images, using various model forms. Power and exponential models gave good results for all vegetation indices. To choose the most adequate mathematical model for predicting Above-ground Carbon (AGC), the Bayesian Information Criterion (BIC) was applied. The lowest BIC value (i.e. -376.41), obtained for the Transformed Vegetation Index (TVI), indicates that the power formula AGC = 9.608*TVI^21.54 is the best predictor of AGC in the study area.
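
    A sketch of the model-comparison step on made-up plot data: fit power and exponential forms by nonlinear least squares and compare them with the Gaussian-error BIC, n*ln(RSS/n) + k*ln(n); variable names and values are illustrative, not the study's.

      import numpy as np
      from scipy.optimize import curve_fit

      def power_model(v, a, b):          # AGC = a * VI**b
          return a * v**b

      def exp_model(v, a, b):            # AGC = a * exp(b * VI)
          return a * np.exp(b * v)

      def bic(y, yhat, k):
          # BIC under Gaussian errors; k counts the fitted parameters.
          n = y.size
          rss = np.sum((y - yhat)**2)
          return n * np.log(rss / n) + k * np.log(n)

      rng = np.random.default_rng(4)
      vi = rng.uniform(0.5, 0.9, 50)     # vegetation index at 50 sample plots
      agc = 9.6 * vi**2.5 * (1.0 + 0.1 * rng.standard_normal(50))

      for name, f in [("power", power_model), ("exponential", exp_model)]:
          popt, _ = curve_fit(f, vi, agc, p0=[1.0, 1.0], maxfev=10000)
          print(name, "BIC =", round(bic(agc, f(vi, *popt), k=2), 1))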

  19. [Responses of Picea likiangensis radial growth to climate change in the Small Zhongdian area of Yunnan Province, Southwest China].

    PubMed

    Zhao, Zhi-Jiang; Tan, Liu-Yi; Kang, Dong-Wei; Liu, Qi-Jing; Li, Jun-Qing

    2012-03-01

    Picea likiangensis (Franch.) Pritz. primary forest is one of the dominant forest types in the Small Zhongdian area in Shangri-La County of Yunnan Province. In this paper, the responses of P. likiangensis tree-ring width to climate change were analyzed by dendrochronological methods, and the chronology was built using relatively conservative detrending with negative exponential curves or linear regression. Correlation analysis and response function analysis were applied to explore the relationships between the residual chronology series (RES) and climatic factors at different time scales, and pointer year analysis was used to explain the formation of narrow and wide rings. In the study area, the radial growth of P. likiangensis showed a distinct divergence from the increasing air temperature from 1990 to 2008. The temperature and precipitation in the previous year's growing season were the main factors limiting the present year's radial growth; in particular, the temperature in the previous July had a negative effect on radial growth, while sufficient precipitation in the previous July promoted it. Differences in the previous year's temperature and precipitation variation were the main reasons for the formation of narrow and wide rings. P. likiangensis radial growth was not sensitive to variation in PDSI.

  20. Conservation of water for washing beef heads at harvest.

    PubMed

    DeOtte, R E; Spivey, K S; Galloway, H O; Lawrence, T E

    2010-03-01

    The objective of this research was to develop methods to conserve the water necessary to cleanse beef heads prior to USDA-FSIS inspection. This was accomplished by establishing a baseline for the minimum amount of water necessary to adequately wash a head and by applying image analysis to provide an objective measure of head cleaning. Twenty-one beef heads were manually washed during the harvest process. An average of 18.75 L (2.49 SD) and a maximum of 23.88 L were required to cleanse the heads to USDA-FSIS standards. Digital images were captured before and after manual washing and then evaluated for percentage red saturation using commercially available image analysis software. A decaying exponential curve extracted from these data indicated that as wash water increased beyond 20 L the impact on red saturation decreased. At 4 sigma from the mean of 18.75 L, red saturation is 16.0 percent, at which logistic regression analysis indicates 99.994 percent of heads would be accepted for inspection, or less than 1 head in 15,000 would be rejected. Reducing to 3 sigma would increase red saturation to 27.6 percent, for which 99.730 percent of heads likely would be accepted (less than 1 in 370 would be rejected).

  1. Characterization of x-ray framing cameras for the National Ignition Facility using single photon pulse height analysis.

    PubMed

    Holder, J P; Benedetti, L R; Bradley, D K

    2016-11-01

    Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard-DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.

  2. A Fourier method for the analysis of exponential decay curves.

    PubMed

    Provencher, S W

    1976-01-01

    A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.

  3. Role of exponential apparent diffusion coefficient in characterizing breast lesions by 3.0 Tesla diffusion-weighted magnetic resonance imaging

    PubMed Central

    Kothari, Shweta; Singh, Archana; Das, Utpalendu; Sarkar, Diptendra K; Datta, Chhanda; Hazra, Avijit

    2017-01-01

    Objective: To evaluate the role of exponential apparent diffusion coefficient (ADC) as a tool for differentiating benign and malignant breast lesions. Patients and Methods: This prospective observational study included 88 breast lesions in 77 patients (between 18 and 85 years of age) who underwent 3T breast magnetic resonance imaging (MRI) including diffusion-weighted imaging (DWI) using b-values of 0 and 800 s/mm^2 before biopsy. Mean exponential ADC and ADC of benign and malignant lesions obtained from DWI were compared. Receiver operating characteristics (ROC) curve analysis was undertaken to identify any cut-off for exponential ADC and ADC to predict malignancy. P value of <0.05 was considered statistically significant. Histopathology was taken as the gold standard. Results: According to histopathology, 65 lesions were malignant and 23 were benign. The mean ADC and exponential ADC values of malignant lesions were 0.9526 ± 0.203 × 10^-3 mm^2/s and 0.4774 ± 0.071, respectively, and for benign lesions were 1.48 ± 0.4903 × 10^-3 mm^2/s and 0.317 ± 0.1152, respectively. For both the parameters, differences were highly significant (P < 0.001). Cut-off value of ≤0.0011 mm^2/s (P < 0.0001) for ADC provided 92.3% sensitivity and 73.9% specificity, whereas with an exponential ADC cut-off value of >0.4 (P < 0.0001) for malignant lesions, 93.9% sensitivity and 82.6% specificity was obtained. The performance of ADC and exponential ADC in distinguishing benign and malignant breast lesions based on respective cut-offs was comparable (P = 0.109). Conclusion: Exponential ADC can be used as a quantitative adjunct tool for characterizing breast lesions with comparable sensitivity and specificity as that of ADC. PMID:28744085

  4. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.

    PubMed

    Wang, Lan; Kim, Yongdai; Li, Runze

    2013-10-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.

  5. Space-Time Point Pattern Analysis of Flavescence Dorée Epidemic in a Grapevine Field: Disease Progression and Recovery

    PubMed Central

    Maggi, Federico; Bosco, Domenico; Galetto, Luciana; Palmano, Sabrina; Marzachì, Cristina

    2017-01-01

    Analyses of space-time statistical features of a flavescence dorée (FD) epidemic in Vitis vinifera plants are presented. FD spread was surveyed from 2011 to 2015 in a vineyard of 17,500 m2 surface area in the Piemonte region, Italy; counts and positions of symptomatic plants were used to test the hypothesis of epidemic Complete Spatial Randomness and isotropy in the space-time static (year-by-year) point pattern measure. Space-time dynamic (year-to-year) point pattern analyses were applied to newly infected and recovered plants to highlight statistics of FD progression and regression over time. Results highlighted point patterns ranging from disperse (at small scales) to aggregated (at large scales) over the years, suggesting that the FD epidemic is characterized by multiscale properties that may depend on infection incidence, vector population, and flight behavior. Dynamic analyses showed moderate preferential progression and regression along rows. Nearly uniform distributions of direction and negative exponential distributions of distance of newly symptomatic and recovered plants relative to existing symptomatic plants highlighted features of vector mobility similar to Brownian motion. These findings indicate that space-time epidemic modeling should include the environmental setting (e.g., vineyard geometry and topography) to capture anisotropy as well as statistical features of vector flight behavior, plant recovery and susceptibility, and plant mortality. PMID:28111581

  6. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION

    PubMed Central

    Wang, Lan; Kim, Yongdai; Li, Runze

    2014-01-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis. PMID:24948843

  7. Proteomic analysis of growth phase-dependent expression of Legionella pneumophila proteins which involves regulation of bacterial virulence traits.

    PubMed

    Hayashi, Tsuyoshi; Nakamichi, Masahiro; Naitou, Hirotaka; Ohashi, Norio; Imai, Yasuyuki; Miyake, Masaki

    2010-07-22

    Legionella pneumophila, which is a causative pathogen of Legionnaires' disease, expresses its virulence traits in response to growth conditions. In particular, it is known to become virulent in the post-exponential phase in in vitro culture. In this study, we performed a proteomic analysis of differences in expression between the exponential phase and post-exponential phase to identify candidates associated with L. pneumophila virulence, using 2-Dimensional Fluorescence Difference Gel Electrophoresis (2D-DIGE) combined with Matrix-Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry (MALDI-TOF-MS). Of 68 identified proteins that significantly differed in expression between the two growth phases, 64 were up-regulated in the post-exponential phase. The up-regulated proteins included enzymes related to glycolysis, ketone body biogenesis and poly-3-hydroxybutyrate (PHB) biogenesis, suggesting that L. pneumophila may utilize sugars and lipids as energy sources when amino acids become scarce. Proteins related to motility (flagella components and twitching motility-associated proteins) were also up-regulated, suggesting that they enhance the infectivity of the bacteria in host cells under certain conditions. Furthermore, 9 up-regulated proteins of unknown function were found. Two of them were identified as novel bacterial factors associated with hemolysis of sheep red blood cells (SRBCs). Another 2 were found to be translocated into macrophages via the Icm/Dot type IV secretion apparatus as effector candidates in a reporter assay with Bordetella pertussis adenylate cyclase. This study will be helpful for virulence analysis of L. pneumophila from the viewpoint of growth phase-dependent physiological or metabolic modulation.

  8. Exponential Decay Nonlinear Regression Analysis of Patient Survival Curves: Preliminary Assessment in Non-Small Cell Lung Cancer

    PubMed Central

    Stewart, David J.; Behrens, Carmen; Roth, Jack; Wistuba, Ignacio I.

    2010-01-01

    Background For processes that follow first order kinetics, exponential decay nonlinear regression analysis (EDNRA) may delineate curve characteristics and suggest processes affecting curve shape. We conducted a preliminary feasibility assessment of EDNRA of patient survival curves. Methods EDNRA was performed on Kaplan-Meier overall survival (OS) and time-to-relapse (TTR) curves for 323 patients with resected NSCLC and on OS and progression-free survival (PFS) curves from selected publications. Results and Conclusions In our resected patients, TTR curves were triphasic with a “cured” fraction of 60.7% (half-life [t1/2] >100,000 months), a rapidly-relapsing group (7.4%, t1/2=5.9 months) and a slowly-relapsing group (31.9%, t1/2=23.6 months). OS was uniphasic (t1/2=74.3 months), suggesting an impact of co-morbidities; hence, tumor molecular characteristics would more likely predict TTR than OS. Of 172 published curves analyzed, 72 (42%) were uniphasic, 92 (53%) were biphasic, 8 (5%) were triphasic. With first-line chemotherapy in advanced NSCLC, 87.5% of curves from 2-3 drug regimens were uniphasic vs only 20% of those with best supportive care or 1 drug (p<0.001). 54% of curves from 2-3 drug regimens had convex rapid-decay phases vs 0% with fewer agents (p<0.001). Curve convexities suggest that discontinuing chemotherapy after 3-6 cycles “synchronizes” patient progression and death. With postoperative adjuvant chemotherapy, the PFS rapid-decay phase accounted for a smaller proportion of the population than in controls (p=0.02) with no significant difference in rapid-decay t1/2, suggesting adjuvant chemotherapy may move a subpopulation of patients with sensitive tumors from the relapsing group to the cured group, with minimal impact on time to relapse for a larger group of patients with resistant tumors. In untreated patients, the proportion of patients in the rapid-decay phase increased (p=0.04) while rapid-decay t1/2 decreased (p=0.0004) with increasing stage, suggesting that higher stage may be associated with tumor cells that both grow more rapidly and have a higher probability of surviving metastatic processes than in early stage tumors. This preliminary assessment of EDNRA suggests that it may be worth exploring this approach further using more sophisticated, statistically rigorous nonlinear modelling approaches. Using such approaches to supplement standard survival analyses could suggest or support specific testable hypotheses. PMID:20627364
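
    As a hedged illustration of the curve-decomposition idea (not the authors' EDNRA procedure), the sketch below fits a biphasic exponential to invented Kaplan-Meier-style survival points; a "cured" fraction shows up as a component whose fitted half-life becomes very large.

      import numpy as np
      from scipy.optimize import curve_fit

      def biphasic(t, f1, h1, h2):
          # Two-compartment exponential survival with half-lives h1 and h2.
          k = np.log(2.0)
          return f1 * np.exp(-k * t / h1) + (1.0 - f1) * np.exp(-k * t / h2)

      # Illustrative survival points (months, surviving fraction), invented.
      t = np.array([0, 6, 12, 18, 24, 36, 48, 60, 84], float)
      s = np.array([1.0, 0.86, 0.74, 0.66, 0.61, 0.56, 0.54, 0.53, 0.52])

      popt, _ = curve_fit(biphasic, t, s, p0=[0.4, 6.0, 200.0],
                          bounds=([0, 0.1, 0.1], [1, 1000, 1e6]))
      print("rapid fraction %.2f, half-lives %.1f / %.1f months" % tuple(popt))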

  9. Forecasting Inflow and Outflow of Money Currency in East Java Using a Hybrid Exponential Smoothing and Calendar Variation Model

    NASA Astrophysics Data System (ADS)

    Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut

    2018-03-01

    The availability of currency at Bank Indonesia can be examined through the inflow and outflow of currency. The objective of this research is to forecast the inflow and outflow of currency in each Representative Office (RO) of BI in East Java by using a hybrid of exponential smoothing, based on the state space approach, and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first examines the hybrid model using simulated data containing patterns of trend, seasonality and calendar variation. The second applies the hybrid model to forecasting the inflow and outflow of currency in each RO of BI in East Java. The first study indicates that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values about 10 times the standard deviation of the error. The second indicates that the hybrid model can capture the patterns of trend, seasonality and calendar variation, yielding RMSE values approaching the standard deviation of the error. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of currency in Surabaya, Malang and Jember, and the outflow of currency in Surabaya and Kediri. The time series regression model performs better for the other three variables: the outflow of currency in Malang and Jember, and the inflow of currency in Kediri.
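
    One plausible reading of the two-stage hybrid, as a sketch: a time-series regression on a calendar-event dummy removes the calendar-variation effect, and state-space exponential smoothing then handles trend and seasonality. The monthly series and the event dummy here are simulated stand-ins, not Bank Indonesia data.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      rng = np.random.default_rng(5)
      idx = pd.date_range("2010-01", periods=96, freq="MS")
      event = (rng.random(96) < 0.08).astype(float)   # stand-in calendar dummy
      y = (10 + 0.05 * np.arange(96)
           + 2 * np.sin(2 * np.pi * np.arange(96) / 12)
           + 5 * event + rng.normal(0, 0.5, 96))

      # Stage 1: regression removes the calendar-variation effect.
      X = sm.add_constant(event)
      ols = sm.OLS(y, X).fit()
      adjusted = y - ols.params[1] * event

      # Stage 2: exponential smoothing captures trend and seasonality.
      ets = ExponentialSmoothing(pd.Series(adjusted, index=idx),
                                 trend="add", seasonal="add",
                                 seasonal_periods=12).fit()
      forecast = ets.forecast(12)   # add the calendar effect back for event months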

  10. Frequency Selection for Multi-frequency Acoustic Measurement of Suspended Sediment

    NASA Astrophysics Data System (ADS)

    Chen, X.; HO, H.; Fu, X.

    2017-12-01

    Multi-frequency acoustic measurement of suspended sediment has found successful applications in marine and fluvial environments. Challenges remain, however, in improving its effectiveness and efficiency when applied to the high concentrations and wide size distributions found in rivers. We performed a multi-frequency acoustic scattering experiment in a cylindrical tank with a suspension of natural sands. The sands range from 50 to 600 μm in diameter with a lognormal size distribution. The bulk concentration of suspended sediment varied from 1.0 to 12.0 g/L. We found that the commonly used linear relationship between the intensity of acoustic backscatter and suspended sediment concentration holds only at sufficiently low concentrations, for instance below 3.0 g/L. It fails at a critical value of concentration that depends on measurement frequency and the distance between the transducer and the target point. Instead, an exponential relationship was found to work satisfactorily throughout the entire range of concentration. The coefficient and exponent of the exponential function changed, however, with the measuring frequency and distance. Considering the increased complexity of inverting the concentration values when an exponential relationship prevails, we further analyzed the relationship between measurement error and measuring frequency. It was also found that the inversion error may be effectively controlled within 5% if the frequency is properly set. Compared with concentration, grain size was found to heavily affect the selection of optimum frequency. A regression relationship for optimum frequency versus grain size was developed based on the experimental results.

  11. Non-Gaussian Analysis of Diffusion Weighted Imaging in Head and Neck at 3T: A Pilot Study in Patients with Nasopharyngeal Carcinoma

    PubMed Central

    Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S. P.; Bhatia, Kunwar S.; Wang, Yi-Xiang J.; Ahuja, Anil T.; King, Ann D.

    2014-01-01

    Purpose To technically investigate the non-Gaussian diffusion of head and neck diffusion weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and statistical model in the patients with nasopharyngeal carcinoma (NPC). Materials and Methods After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm2. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Results Diffusion in NPC exhibited non-Gaussian behavior at the extended b-value range. Non-Gaussian models achieved significantly better fitting of DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from mono-exponential ADC both in magnitude and histogram distribution. Conclusion Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed by using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potentials to be used as a complementary tool for NPC characterization. PMID:24466318
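
    A sketch of one of the non-Gaussian models named above, the stretched-exponential form S(b) = S0 * exp(-(b*DDC)^alpha), fitted to simulated signal over the same extended b-value range; the S0, DDC, and alpha values are illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def stretched_exp(b, s0, ddc, alpha):
          # Stretched-exponential DWI model; alpha = 1 recovers mono-exponential.
          return s0 * np.exp(-(b * ddc) ** alpha)

      b = np.array([0, 100, 200, 400, 600, 800, 1000, 1200, 1500], float)  # s/mm^2
      rng = np.random.default_rng(6)
      signal = stretched_exp(b, 1000.0, 1.1e-3, 0.75)
      signal_obs = signal + rng.normal(0.0, 5.0, b.size)

      popt, _ = curve_fit(stretched_exp, b, signal_obs,
                          p0=[900.0, 1e-3, 0.9],
                          bounds=([0, 1e-5, 0.1], [2000, 1e-2, 1.0]))
      print("S0 = %.0f, DDC = %.2e mm^2/s, alpha = %.2f" % tuple(popt))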

  12. Rapid Global Fitting of Large Fluorescence Lifetime Imaging Microscopy Datasets

    PubMed Central

    Warren, Sean C.; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J.; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda

    2013-01-01

    Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment. PMID:23940626

  13. A gamma variate model that includes stretched exponential is a better fit for gastric emptying data from mice

    PubMed Central

    Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.

    2015-01-01

    Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
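
    A sketch of the model comparison on simulated excretion-rate data: a three-parameter gamma variate a*t^b*exp(-t/c) versus a four-parameter extension with a stretched-exponential tail, ranked by the Gaussian-error AIC. The functional forms follow the names in the abstract; the data and parameter values are invented.

      import numpy as np
      from scipy.optimize import curve_fit

      def gamma_variate(t, a, b, c):
          return a * t**b * np.exp(-t / c)

      def gamma_stretched(t, a, b, c, d):
          # Gamma variate with a stretched-exponential tail; d = 1 recovers it.
          return a * t**b * np.exp(-(t / c) ** d)

      def aic(y, yhat, k):
          n = y.size
          return n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * k

      t = np.linspace(1.0, 240.0, 60)            # minutes after gavage
      rng = np.random.default_rng(7)
      y = gamma_stretched(t, 0.05, 1.8, 40.0, 0.8) + rng.normal(0, 0.3, t.size)

      p3, _ = curve_fit(gamma_variate, t, y, p0=[0.1, 1.0, 50.0], maxfev=20000)
      p4, _ = curve_fit(gamma_stretched, t, y, p0=[0.1, 1.0, 50.0, 1.0],
                        maxfev=20000)
      print("AIC gamma variate:", round(aic(y, gamma_variate(t, *p3), 3), 1))
      print("AIC stretched:    ", round(aic(y, gamma_stretched(t, *p4), 4), 1))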

  14. Efficiency Analysis of Waveform Shape for Electrical Excitation of Nerve Fibers

    PubMed Central

    Wongsarnpigoon, Amorn; Woock, John P.; Grill, Warren M.

    2011-01-01

    Stimulation efficiency is an important consideration in the stimulation parameters of implantable neural stimulators. The objective of this study was to analyze the effects of waveform shape and duration on the charge, power, and energy efficiency of neural stimulation. Using a population model of mammalian axons and in vivo experiments on cat sciatic nerve, we analyzed the stimulation efficiency of four waveform shapes: square, rising exponential, decaying exponential, and rising ramp. No waveform was simultaneously energy-, charge-, and power-optimal, and differences in efficiency among waveform shapes varied with pulse width (PW). For short PWs (≤ 0.1 ms), square waveforms were no less energy-efficient than exponential waveforms, and the most charge-efficient shape was the ramp. For long PWs (≥ 0.5 ms), the square was the least energy-efficient and charge-efficient shape, but across most PWs, the square was the most power-efficient shape. Rising exponentials provided no practical gains in efficiency over the other shapes, and our results refute previous claims that the rising exponential is the energy-optimal shape. An improved understanding of how stimulation parameters affect stimulation efficiency will help improve the design and programming of implantable stimulators to minimize tissue damage and extend battery life. PMID:20388602

  15. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.

  16. Modeling of thermal degradation kinetics of the C-glucosyl xanthone mangiferin in an aqueous model solution as a function of pH and temperature and protective effect of honeybush extract matrix.

    PubMed

    Beelders, Theresa; de Beer, Dalene; Kidd, Martin; Joubert, Elizabeth

    2018-01-01

    Mangiferin, a C-glucosyl xanthone abundant in mango and honeybush, is increasingly targeted for its bioactive properties and its potential to enhance the functional properties of food. The thermal degradation kinetics of mangiferin at pH 3, 4, 5, 6 and 7 were each modeled at five temperatures ranging between 60 and 140°C. First-order reaction models were fitted to the data using non-linear regression to determine the reaction rate constant at each pH-temperature combination. The reaction rate constant increased with increasing temperature and pH. Comparison of the reaction rate constants at 100°C revealed an exponential relationship between the reaction rate constant and pH. The data for each pH were also modeled with the Arrhenius equation, using non-linear and linear regression, to determine the activation energy and pre-exponential factor. Activation energies decreased slightly with increasing pH. Finally, a multi-linear model taking into account both temperature and pH was developed for mangiferin degradation. Sterilization (121°C for 4 min) of honeybush extracts dissolved at pH 4, 5 and 7 did not cause noticeable degradation of mangiferin, although the multi-linear model predicted 34% degradation at pH 7. The extract matrix is postulated to exert a protective effect, as changes in potential precursor content could not fully explain the stability of mangiferin.
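
    A short sketch of the Arrhenius step described above: with first-order rate constants k(T) estimated at each temperature, the activation energy follows from the linearized form ln k = ln A - Ea/(R*T). The rate constants below are made up, not the paper's estimates.

      import numpy as np

      # Illustrative first-order rate constants for mangiferin loss at one pH.
      T = np.array([60.0, 80.0, 100.0, 120.0, 140.0]) + 273.15   # K
      k = np.array([2.1e-5, 1.5e-4, 8.9e-4, 4.4e-3, 1.9e-2])    # 1/s

      # Linearized Arrhenius plot: slope = -Ea/R, intercept = ln A.
      slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
      R = 8.314                                                 # J/(mol K)
      Ea = -slope * R
      print("Ea = %.0f kJ/mol, A = %.2e 1/s" % (Ea / 1000.0, np.exp(intercept)))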

  17. The association of adherence to osteoporosis therapies with fracture, all-cause medical costs, and all-cause hospitalizations: a retrospective claims analysis of female health plan enrollees with osteoporosis.

    PubMed

    Halpern, Rachel; Becker, Laura; Iqbal, Sheikh Usman; Kazis, Lewis E; Macarios, David; Badamgarav, Enkhjargal

    2011-01-01

    Osteoporosis affects approximately 10 million people in the United States and is associated with increased fracture risk and fracture-related costs. Poor adherence to osteoporosis medications is associated with higher general burden of illness compared with optimal adherence. To examine the associations of adherence to osteoporosis therapies with (a) occurrence of closed fracture, (b) all-cause medical costs, and (c) all-cause hospitalizations. This retrospective analysis of administrative claims data examined women with osteoporosis initiating therapy with alendronate, risedronate, ibandronate, or raloxifene from July 1, 2002, to March 10, 2006. Data were from a large, geographically diverse U.S. health plan that covered about 12.6 million females during the identification period. Commercially insured and Medicare Advantage plan enrollees were observed for 1 year before (baseline period) and 540 days after therapy initiation (follow-up period). Outcomes included closed fractures, all-cause medical costs, and all-cause hospitalizations; all outcomes were measured starting 180 days after therapy initiation through follow-up. All subjects had at least 2 pharmacy claims for any of the targeted osteoporosis medications. Adherence was measured with a medication possession ratio (MPR) and accounted for all osteoporosis treatment. High adherence was MPR of at least 0.80; low adherence was MPR less than 0.50. Covariates included baseline fracture, "early" fracture (in the first 180 days of follow-up), baseline corticosteroid or thyroid hormone use, health status indicators, and demographic characteristics. Outcome fractures were modeled with Cox survival regression with time-dependent cumulative MPR. All-cause medical costs and all-cause hospitalizations were modeled, respectively, with generalized linear model regression (gamma distribution, log link) and negative binomial regression. The sample comprised 21,655 patients--16,295 (75.2%) commercial and 5,360 (24.8%) Medicare Advantage. During the entire follow-up period, 5,406 (33.2%) and 2,253 (42.0%) of commercial and Medicare Advantage patients, respectively, had low adherence. Adherence tended to decrease over the follow-up period. The Cox regression showed that commercial plan patients with low versus high adherence had 37% higher risk of fracture (hazard ratio = 1.37, 95% CI = 1.12-1.68). Adherence was not significantly associated with fracture in the Medicare Advantage cohort. Commercial and Medicare Advantage patients with low versus high adherence had 12% (exponentiated coefficient = 1.12, 95% CI = 1.02-1.24) and 18% (exponentiated coefficient = 1.18, 95% CI = 1.04-1.35) higher all-cause medical costs during months 7 through 18 of follow-up. Commercial and Medicare Advantage patients with low versus high adherence had 59% (incidence rate ratio [IRR] = 1.59, 95% CI = 1.38-1.83) and 34% (IRR = 1.34, 95% CI = 1.13-1.58) more all-cause hospitalizations during months 7 through 18 of follow-up, respectively. Low adherence to osteoporosis pharmacotherapy was associated with higher risk of fracture for commercially insured but not Medicare Advantage patients and with higher all-cause medical costs and more all-cause hospitalizations in both groups. These results are consistent with the literature and highlight the importance of promoting better adherence among patients with osteoporosis.
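
    As a sketch of the two outcome models named above, the following fits a gamma-distribution, log-link GLM (costs) and a negative binomial model (hospitalization counts) to simulated claims-like data with statsmodels; the adherence variable and effect sizes are invented, chosen only to echo the reported exponentiated coefficients.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(9)
      n = 5000
      low_adherence = rng.binomial(1, 0.35, n).astype(float)
      X = sm.add_constant(low_adherence)

      # Simulated all-cause costs: right-skewed, so Gamma GLM with log link.
      costs = rng.gamma(shape=2.0, scale=np.exp(8.0 + 0.12 * low_adherence) / 2.0)
      gamma_fit = sm.GLM(costs, X, family=sm.families.Gamma(
          link=sm.families.links.Log())).fit()
      # exp(coefficient) = cost ratio for low vs high adherence (cf. 1.12 above).
      print("cost ratio:", np.exp(gamma_fit.params[1]))

      # Simulated hospitalization counts: overdispersed, so negative binomial.
      mu = np.exp(-1.0 + 0.45 * low_adherence)
      hosp = rng.negative_binomial(n=2, p=2.0 / (2.0 + mu))
      nb_fit = sm.GLM(hosp, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
      # exp(coefficient) = incidence rate ratio (cf. the reported IRRs).
      print("incidence rate ratio:", np.exp(nb_fit.params[1]))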

  18. Regression relation for pure quantum states and its implications for efficient computing.

    PubMed

    Elsayed, Tarek A; Fine, Boris V

    2013-02-15

    We obtain a modified version of the Onsager regression relation for the expectation values of quantum-mechanical operators in pure quantum states of isolated many-body quantum systems. We use the insights gained from this relation to show that high-temperature time correlation functions in many-body quantum systems can be controllably computed without complete diagonalization of the Hamiltonians, using instead the direct integration of the Schrödinger equation for randomly sampled pure states. This method is also applicable to quantum quenches and other situations describable by time-dependent many-body Hamiltonians. The method implies exponential reduction of the computer memory requirement in comparison with the complete diagonalization. We illustrate the method by numerically computing infinite-temperature correlation functions for translationally invariant Heisenberg chains of up to 29 spins 1/2. Thereby, we also test the spin diffusion hypothesis and find it in a satisfactory agreement with the numerical results. Both the derivation of the modified regression relation and the justification of the computational method are based on the notion of quantum typicality.

  19. R-Function Relationships for Application in the Fractional Calculus

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Hartley, Tom T.

    2000-01-01

    The F-function, and its generalization the R-function, are of fundamental importance in the fractional calculus. It has been shown that the solution of the fundamental linear fractional differential equation may be expressed in terms of these functions. These functions serve as generalizations of the exponential function in the solution of fractional differential equations. Because of this central role in the fractional calculus, this paper explores various intrarelationships of the R-function, which will be useful in further analysis. Relationships of the R-function to the common exponential function, e(t), and its fractional derivatives are shown. From the relationships developed, some important approximations are observed. Further, the inverse relationships of the exponential function, e(t), in terms of the R-function are developed. Also, some approximations for the R-function are developed.

  20. R-function relationships for application in the fractional calculus.

    PubMed

    Lorenzo, Carl F; Hartley, Tom T

    2008-01-01

    The F-function, and its generalization the R-function, are of fundamental importance in the fractional calculus. It has been shown that the solution of the fundamental linear fractional differential equation may be expressed in terms of these functions. These functions serve as generalizations of the exponential function in the solution of fractional differential equations. Because of this central role in the fractional calculus, this paper explores various intrarelationships of the R-function, which will be useful in further analysis. Relationships of the R-function to the common exponential function, e^t, and its fractional derivatives are shown. From the relationships developed, some important approximations are observed. Further, the inverse relationships of the exponential function, e^t, in terms of the R-function are developed. Also, some approximations for the R-function are developed.

  1. Power Law Versus Exponential Form of Slow Crack Growth of Advanced Structural Ceramics: Dynamic Fatigue

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.

    2002-01-01

    The life prediction analysis based on an exponential crack velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress-rate ("dynamic fatigue") and preload testing at ambient and elevated temperatures. The fit of the data to the strength versus ln (stress rate) relation was found to be very reasonable for most of the materials. It was also found that the preloading technique was equally applicable for the case of slow crack growth (SCG) parameter n > 30. The major limitation of the exponential crack velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback compared to the conventional power-law crack velocity formulation.

  2. Are infant mortality rate declines exponential? The general pattern of 20th century infant mortality rate decline

    PubMed Central

    Bishai, David; Opuni, Marjorie

    2009-01-01

    Background Time trends in infant mortality for the 20th century show a curvilinear pattern that most demographers have assumed to be approximately exponential. Virtually all cross-country comparisons and time series analyses of infant mortality have studied the logarithm of infant mortality to account for the curvilinear time trend. However, there is no evidence that the log transform is the best fit for infant mortality time trends. Methods We use maximum likelihood methods to determine the best transformation to fit time trends in infant mortality reduction in the 20th century and to assess the importance of the proper transformation in identifying the relationship between infant mortality and gross domestic product (GDP) per capita. We apply the Box-Cox transform to infant mortality rate (IMR) time series from 18 countries to identify the best fitting value of λ for each country and for the pooled sample. For each country, we test the value of λ against the null that λ = 0 (logarithmic model) and against the null that λ = 1 (linear model). We then demonstrate the importance of selecting the proper transformation by comparing regressions of ln(IMR) on same-year GDP per capita against Box-Cox transformed models. Results Based on chi-squared test statistics, infant mortality decline is best described as an exponential decline only for the United States. For the remaining 17 countries we study, IMR decline is neither best modelled as a logarithmic nor as a linear process. Imposing a logarithmic transform on IMR can lead to bias in fitting the relationship between IMR and GDP per capita. Conclusion The assumption that IMR declines are exponential is enshrined in the Preston curve and in nearly all cross-country as well as time series analyses of IMR data since Preston's 1975 paper, but this assumption is seldom correct. Statistical analyses of IMR trends should assess the robustness of findings to transformations other than the log transform. PMID:19698144
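
    A sketch of the transformation test on a stylized IMR series using the marginal Box-Cox likelihood in scipy (the paper fits full time-trend models, which this omits): the likelihood-ratio statistic against λ = 0 (logarithmic) and λ = 1 (linear) is compared to the chi-squared cutoff with one degree of freedom.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      years = np.arange(1900, 2000)
      # Stylized IMR series declining over the century (per 1,000 live births).
      imr = 150.0 * np.exp(-0.03 * (years - 1900)) + rng.normal(0.0, 1.0, years.size)
      imr = np.clip(imr, 1.0, None)          # Box-Cox requires positive data

      _, lam_hat = stats.boxcox(imr)         # maximum likelihood estimate of lambda
      ll_hat = stats.boxcox_llf(lam_hat, imr)
      # LR > 3.84 rejects the constrained transform at the 5% level (chi2, 1 df).
      for lam0, label in [(0.0, "log model"), (1.0, "linear model")]:
          lr = 2.0 * (ll_hat - stats.boxcox_llf(lam0, imr))
          print("%s (lambda = %.0f): LR = %.1f" % (label, lam0, lr))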

  3. Non-cladding optical fiber is available for detecting blood or liquids.

    PubMed

    Takeuchi, Akihiro; Miwa, Tomohiro; Shirataka, Masuo; Sawada, Minoru; Imaizumi, Haruo; Sugibuchi, Hiroyuki; Ikeda, Noriaki

    2010-10-01

    Serious accidents during hemodialysis, such as an undetected large blood loss, are often caused by venous needle dislodgement. A special plastic optical fiber with a low refractive index was developed for monitoring leakage in oil pipelines and other industrial settings. To apply the optical fiber as a bleeding sensor, we studied the optical effects of soaking the fiber in liquids and blood in light-loss experimental settings. The non-cladding optical fiber used was a fluoropolymer PFA fiber, JUNFLON™, 1 mm in diameter and 2 m in length. Light intensity was studied with a basic circuit with a light-emitting source (880 nm) and a photodiode set at the two ends of the fiber under various conditions: bending the fiber, soaking in various mediums, or fixing the fiber with surgical tape. The soaking mediums were reverse osmosis (RO) water, physiological saline, glucose, porcine plasma, and porcine blood. The light intensities followed a decaying exponential function of the soaked length. The light intensity did not decrease when the fiber was bent from 20 cm down to 1 cm in diameter. In all mediums, the light intensity decreased exponentially as the soaked length increased. The means of five estimated exponential decay constants were 0.050±0.006 (standard deviation) in RO water, 0.485±0.016 in physiological saline, 0.404±0.022 in 5% glucose, 0.503±0.038 in blood (Hct 40%), and 0.573±0.067 in plasma. The light intensity decreased from 5 V to about 1.5 V for soaked lengths above 5 cm in all mediums except RO water and fixing with surgical tape. We confirmed that light intensity decreased significantly and exponentially with the increased length of the soaked fiber. This phenomenon could be applied clinically as a bleeding sensor.

  4. Global exponential stability of BAM neural networks with time-varying delays: The discrete-time case

    NASA Astrophysics Data System (ADS)

    Raja, R.; Marshal Anthoni, S.

    2011-02-01

    This paper deals with the problem of stability analysis for a class of discrete-time bidirectional associative memory (BAM) neural networks with time-varying delays. By employing the Lyapunov functional and linear matrix inequality (LMI) approach, new sufficient conditions are proposed for the global exponential stability of discrete-time BAM neural networks. The proposed LMI-based results can be easily checked by the LMI control toolbox. Moreover, an example is also provided to demonstrate the effectiveness of the proposed method.

  5. Spectral Study of Measles Epidemics: The Dependence of Spectral Gradient on the Population Size of the Community

    NASA Astrophysics Data System (ADS)

    Sumi, Ayako; Olsen, Lars Folke; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi

    2003-02-01

    We have carried out spectral analysis of measles notifications in several communities in Denmark, the UK, and the USA. The results confirm that each power spectral density (PSD) shows exponential characteristics, which are universally observed in PSDs of time series generated by nonlinear dynamical systems. The exponential gradient increases with the population size. For almost all communities, many spectral lines observed in each PSD can be fully assigned to linear combinations of several fundamental periods, suggesting that the measles data are substantially noise-free. The optimum least squares fitting curve calculated using these fundamental periods essentially reproduces the underlying variation of the measles data, and an extension of the curve can be used to predict measles epidemics. For the communities with large population sizes, some PSD patterns obtained from segment time series analysis show a close resemblance to the PSD patterns at the initial stages of a period-doubling bifurcation process for the so-called susceptible/exposed/infectious/recovered (SEIR) model with seasonal forcing. The meaning of the relationship between the exponential gradient and the population size is discussed.
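
    A hedged sketch of estimating the exponential spectral gradient: an exponential PSD is a straight line in semi-log coordinates, so the gradient can be read off a linear fit of log-PSD against frequency. The series below is synthetic, not measles data.

```python
import numpy as np
from scipy.signal import periodogram

# Hypothetical weekly case counts; the study uses measles notifications.
rng = np.random.default_rng(1)
t = np.arange(1024)
x = 50 + 30 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 5, t.size)

f, psd = periodogram(x, fs=1.0)
keep = (f > 0) & (psd > 0)

# An exponential PSD, S(f) ~ exp(-lam * f), is linear in semi-log coordinates,
# so the spectral gradient is the slope of log(PSD) versus frequency.
slope, _ = np.polyfit(f[keep], np.log(psd[keep]), 1)
print(f"exponential spectral gradient: {-slope:.2f}")
```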

  6. Exponential Family Functional data analysis via a low-rank model.

    PubMed

    Li, Gen; Huang, Jianhua Z; Shen, Haipeng

    2018-05-08

    In many applications, non-Gaussian data such as binary or count data are observed over a continuous domain, and there exists a smooth underlying structure describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution and that the matrix of canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross-validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.

  7. Ultrascale Visualization of Climate Data

    NASA Technical Reports Server (NTRS)

    Williams, Dean N.; Bremer, Timo; Doutriaux, Charles; Patchett, John; Williams, Sean; Shipman, Galen; Miller, Ross; Pugmire, David R.; Smith, Brian; Steed, Chad

    2013-01-01

    Fueled by exponential increases in the computational and storage capabilities of high-performance computing platforms, climate simulations are evolving toward higher numerical fidelity, complexity, volume, and dimensionality. These technological breakthroughs are coming at a time of exponential growth in climate data, with estimates of hundreds of exabytes by 2020. To meet the challenges and exploit the opportunities that such explosive growth affords, a consortium of four national laboratories, two universities, a government agency, and two private companies formed to explore the next wave in climate science. Working in close collaboration with domain experts, the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) project aims to provide high-level solutions to a variety of climate data analysis and visualization problems.

  8. Gross-Pitaevski map as a chaotic dynamical system.

    PubMed

    Guarneri, Italo

    2017-03-01

    The Gross-Pitaevski map is a discrete time, split-operator version of the Gross-Pitaevski dynamics in the circle, for which exponential instability has been recently reported. Here it is studied as a classical dynamical system in its own right. A systematic analysis of Lyapunov exponents exposes strongly chaotic behavior. Exponential growth of energy is then shown to be a direct consequence of rotational invariance and for stationary solutions the full spectrum of Lyapunov exponents is analytically computed. The present analysis includes the "resonant" case, when the free rotation period is commensurate to 2π, and the map has countably many constants of the motion. Except for lowest-order resonances, this case exhibits an integrable-chaotic transition.

  9. Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space

    PubMed Central

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters. PMID:24603904
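
    A toy sketch of the idea (not the authors' implementation): represent both the noisy record and the exponential model by their low-order Legendre coefficients and fit in that coefficient space. The paper's speed advantage comes from analytic Legendre coefficients of the exponential; here legfit is recomputed at each step for clarity.

```python
import numpy as np
from numpy.polynomial import legendre as leg
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
t = np.linspace(-1, 1, 500)          # time axis mapped onto [-1, 1]
y = 2.0 * np.exp(-3.0 * (t + 1)) + rng.normal(0, 0.05, t.size)

DEG = 12                             # low-dimensional Legendre space
c_data = leg.legfit(t, y, DEG)       # Legendre coefficients of the noisy data

def coef_residual(p):
    # Residuals between model and data in the Legendre coefficient space.
    a, k = p
    return leg.legfit(t, a * np.exp(-k * (t + 1)), DEG) - c_data

sol = least_squares(coef_residual, x0=(1.0, 1.0))
print("amplitude, rate:", sol.x)     # ~ (2.0, 3.0)
```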

  10. A statistical study of decaying kink oscillations detected using SDO/AIA

    NASA Astrophysics Data System (ADS)

    Goddard, C. R.; Nisticò, G.; Nakariakov, V. M.; Zimovets, I. V.

    2016-01-01

    Context. Despite intensive studies of kink oscillations of coronal loops in the last decade, a large-scale statistically significant investigation of the oscillation parameters has not been made using data from the Solar Dynamics Observatory (SDO). Aims: We carry out a statistical study of kink oscillations using extreme ultraviolet imaging data from a previously compiled catalogue. Methods: We analysed 58 kink oscillation events observed by the Atmospheric Imaging Assembly (AIA) on board SDO during its first four years of operation (2010-2014). Parameters of the oscillations, including the initial apparent amplitude, period, length of the oscillating loop, and damping are studied for 120 individual loop oscillations. Results: Analysis of the initial loop displacement and oscillation amplitude leads to the conclusion that the initial loop displacement prescribes the initial amplitude of oscillation in general. The period is found to scale with the loop length, and a linear fit of the data cloud gives a kink speed of Ck = (1330 ± 50) km s⁻¹. The main body of the data corresponds to kink speeds in the range Ck = (800-3300) km s⁻¹. Measurements of 52 exponential damping times were made, and it was noted that at least 21 of the damping profiles may be better approximated by a combination of non-exponential and exponential profiles rather than a purely exponential damping envelope. There are nine additional cases where the profile appears to be purely non-exponential and no damping time was measured. A scaling of the exponential damping time with the period is found, following the previously established linear scaling between these two parameters.

  11. Rimonabant reduces the essential value of food in the genetically obese Zucker rat: an exponential demand analysis.

    PubMed

    Rasmussen, Erin B; Reilly, William; Buckley, Jessica; Boomhower, Steven R

    2012-02-01

    Research on free-food intake suggests that cannabinoids are implicated in the regulation of feeding. Few studies, however, have characterized how environmental factors that affect food procurement interact with cannabinoid drugs that reduce food intake. Demand analysis provides a framework for understanding how cannabinoid blockers, such as rimonabant, interact with effort in reducing demand for food. The present study examined the effects of rimonabant on demand for sucrose in obese Zucker rats when the effort to obtain food varied, and characterized the data using the exponential ("essential value") model of demand. Twenty-nine male (15 lean, 14 obese) Zucker rats lever-pressed under eight fixed ratio (FR) schedules of sucrose reinforcement, in which the number of lever-presses required for access to a single sucrose pellet varied between 1 and 300. After behavior stabilized under each FR schedule, acute doses of rimonabant (1-10 mg/kg) were administered prior to some sessions. The number of food reinforcers and responses in each condition was averaged, and the exponential and linear demand equations were fit to the data; these demand equations quantify the value of a reinforcer by its sensitivity to price (FR) increases. Under vehicle conditions, obese Zucker rats consumed more sucrose pellets than leans at smaller fixed ratios; however, they were equally sensitive to price increases under both models of demand. Rimonabant dose-dependently reduced reinforcers and responses for lean and obese rats across all FR schedules. Data from the exponential analysis suggest that rimonabant dose-dependently increased elasticity, i.e., reduced the essential value of sucrose, a finding that is consistent with graphical depictions of normalized demand curves. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Multiplicative Forests for Continuous-Time Processes

    PubMed Central

    Weiss, Jeremy C.; Natarajan, Sriraam; Page, David

    2013-01-01

    Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability. PMID:25284967

  13. Multiplicative Forests for Continuous-Time Processes.

    PubMed

    Weiss, Jeremy C; Natarajan, Sriraam; Page, David

    2012-01-01

    Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability.

  14. Third molar development by measurements of open apices in an Italian sample of living subjects.

    PubMed

    De Luca, Stefano; Pacifici, Andrea; Pacifici, Luciano; Polimeni, Antonella; Fischetto, Sara Giulia; Velandia Palacio, Luz Andrea; Vanin, Stefano; Cameriere, Roberto

    2016-02-01

    The aim of this study is to analyse the age-predicting performance of the third molar index (I3M) in dental age estimation. A multiple regression analysis was developed with chronological age as the independent variable. In order to investigate the relationship between the I3M and chronological age, the standard deviation and relative error were examined. Digitalized orthopantomographs (OPTs) of 975 healthy Italian subjects (531 female and 444 male), aged between 9 and 22 years, were studied. Third molar development was determined according to Cameriere et al. (2008). Analysis of covariance (ANCOVA) was applied to study the interaction between I3M and gender. The relationship between age and the third molar index (I3M) was tested with Pearson's correlation coefficient. The I3M, age, and gender of the subjects were used as predictive variables for age estimation. The small F-value for gender (F = 0.042, p = 0.837) reveals that this factor does not affect the growth of the third molar. Adjusted R2 (AdjR2) was used as the parameter to define the best-fitting function. All the regression models (linear, exponential, and polynomial) showed a similar AdjR2. The polynomial (2nd order) fit explains about 78% of the total variance and does not add any relevant clinical information to the age estimation process from the third molar. The standard deviation and relative error increase with age. The I3M has its minimum in the youngest group of studied individuals and its maximum in the oldest ones, indicating that its precision and reliability decrease with age. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  15. Numerical Modeling of Earthquake-Induced Landslide Using an Improved Discontinuous Deformation Analysis Considering Dynamic Friction Degradation of Joints

    NASA Astrophysics Data System (ADS)

    Huang, Da; Song, Yixiang; Cen, Duofeng; Fu, Guoyang

    2016-12-01

    Discontinuous deformation analysis (DDA) is an efficient technique that has been extensively applied in the dynamic simulation of discontinuous rock mass. In the original DDA (ODDA), the Mohr-Coulomb failure criterion is employed as the judgment principle of failure between contact blocks, and the friction coefficient is assumed to be constant throughout the calculation. However, numerous shear tests have confirmed that the dynamic friction of rock joints degrades, so the friction coefficient should be reduced gradually during the numerical simulation of an earthquake-induced rockslide. In this paper, based on the experimental results of cyclic shear tests on limestone joints, exponential regression formulas are fitted for dynamic friction degradation as a function of the relative velocity, the amplitude of cyclic shear displacement, and the number of its cycles between blocks in edge-to-edge contact. An improved DDA (IDDA) is then developed by implementing these fitted regression formulas, together with a modified technique for removing joint cohesion in which the cohesion is removed once the 'sliding' or 'open' state between blocks first appears, into the ODDA. The IDDA is first validated against theoretical solutions for the kinematic behavior of a block sliding on an inclined plane under dynamic loading. The program is then applied to model the Donghekou landslide triggered by the 2008 Wenchuan earthquake in China. The simulation results demonstrate that the dynamic friction degradation of joints has a great influence on the runout and velocity of the sliding mass, and that the friction coefficient has a greater impact than the joint cohesion on the kinematic behavior of the sliding mass.

  16. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting

    PubMed Central

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2015-01-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982

  17. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting.

    PubMed

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2016-06-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
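
    A hedged sketch of the exponential-tilting special case named above: weights on the control group proportional to exp(xᵀλ), with λ solved so that weighted control moments match the treated moments. The data are synthetic, and the paper's three-way balance is reduced to the usual two-group balance for brevity.

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(9)
n = 500
X = rng.normal(size=(n, 2))
p_treat = 1 / (1 + np.exp(-X[:, 0]))            # confounded assignment
treated = rng.uniform(size=n) < p_treat
Xt, Xc = X[treated], X[~treated]
target = Xt.mean(axis=0)                        # treated moments to match

def moment_gap(lam):
    # Exponential-tilting weights on the controls: w_i ~ exp(x_i' lam).
    w = np.exp(Xc @ lam)
    w /= w.sum()
    return Xc.T @ w - target

lam = root(moment_gap, x0=np.zeros(2)).x
w = np.exp(Xc @ lam)
w /= w.sum()
print("weighted control means:", Xc.T @ w)      # matches the treated means
print("treated means:", target)
```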

  18. Conformal Regression for Quantitative Structure-Activity Relationship Modeling-Quantifying Prediction Uncertainty.

    PubMed

    Svensson, Fredrik; Aniceto, Natalia; Norinder, Ulf; Cortes-Ciriano, Isidro; Spjuth, Ola; Carlsson, Lars; Bender, Andreas

    2018-05-29

    Making predictions with an associated confidence is highly desirable as it facilitates decision making and resource prioritization. Conformal regression is a machine learning framework that allows the user to define the required confidence and delivers predictions that are guaranteed to be correct to the selected extent. In this study, we apply conformal regression to model molecular properties and bioactivity values and investigate different ways to scale the resultant prediction intervals to create as efficient (i.e., narrow) regressors as possible. Different algorithms to estimate the prediction uncertainty were used to normalize the prediction ranges, and the different approaches were evaluated on 29 publicly available data sets. Our results show that the most efficient conformal regressors are obtained when using the natural exponential of the ensemble standard deviation from the underlying random forest to scale the prediction intervals, but other approaches were almost as efficient. This approach afforded an average prediction range of 1.65 pIC50 units at the 80% confidence level when applied to bioactivity modeling. The choice of nonconformity function has a pronounced impact on the average prediction range with a difference of close to one log unit in bioactivity between the tightest and widest prediction range. Overall, conformal regression is a robust approach to generate bioactivity predictions with associated confidence.
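
    A minimal split-conformal sketch in the spirit of the study: nonconformity scores are normalized by the random-forest ensemble spread (here the raw per-tree standard deviation rather than its natural exponential, for brevity), and the calibration quantile sets the interval half-width. Data and settings are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=600, n_features=10, noise=10.0, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.5,
                                            random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

def ensemble_std(model, X):
    # Spread of per-tree predictions, used to normalize nonconformity scores.
    preds = np.stack([tree.predict(X) for tree in model.estimators_])
    return preds.std(axis=0) + 1e-8

# Normalized nonconformity scores on the held-out calibration set.
scores = np.abs(y_cal - rf.predict(X_cal)) / ensemble_std(rf, X_cal)
q = np.quantile(scores, 0.80)  # 80% confidence; finite-sample correction omitted

half_width = q * ensemble_std(rf, X_te)
covered = np.abs(y_te - rf.predict(X_te)) <= half_width
print(f"empirical coverage: {covered.mean():.2f}")
```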

  19. Intra-Individual Response Variability Assessed by Ex-Gaussian Analysis may be a New Endophenotype for Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Henríquez-Henríquez, Marcela Patricia; Billeke, Pablo; Henríquez, Hugo; Zamorano, Francisco Javier; Rothhammer, Francisco; Aboitiz, Francisco

    2014-01-01

    Intra-individual variability of response times (RTisv) is considered a potential endophenotype for attention-deficit/hyperactivity disorder (ADHD). Traditional methods for estimating RTisv lose information about the distribution of response times (RTs) along the task, with eventual effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components of the RT distribution, with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs meets the criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated whether the normal and/or exponential components of RTs (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing (TD) children without a family history of ADHD) and (b) represent a phenotypic correlate of previously described genetic risk variants. This pilot study included 55 subjects (20 ADHD-discordant sibling pairs and 15 TD children), all aged between 8 and 13 years. Participants performed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. To test whether intra-individual variability may represent a correlate of previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were genotyped in all sibling pairs following standard protocols. Groups were compared by fitting independent general linear models for the exponential and normal components of the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional "DRD4 genotype" × "clinical status" interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, normal and exponential RTisv components are suitable as ADHD endophenotypes.
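
    The ex-Gaussian fit itself is compact; a hedged sketch with simulated response times (scipy's exponnorm is the ex-Gaussian, parameterized by the shape K = τ/σ):

```python
import numpy as np
from scipy.stats import exponnorm

# Simulated response times (s): a normal stage plus an exponential tail.
rng = np.random.default_rng(3)
mu, sigma, tau = 0.40, 0.05, 0.15
rts = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

# scipy parameterizes the ex-Gaussian as exponnorm(K, loc, scale)
# with K = tau / sigma, loc = mu, and scale = sigma.
K, loc, scale = exponnorm.fit(rts)
print(f"mu = {loc:.3f}, sigma = {scale:.3f}, tau = {K * scale:.3f}")
```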

  20. Optimality of cycle time and inventory decisions in a two echelon inventory system with exponential price dependent demand under credit period

    NASA Astrophysics Data System (ADS)

    Krugon, Seelam; Nagaraju, Dega

    2017-05-01

    This work proposes a two-echelon inventory system in a supply chain, in which the manufacturer offers a credit period to the retailer under exponential price-dependent demand, i.e., demand is expressed as an exponential function of the retailer's unit selling price. A mathematical model is formulated to establish the optimality of the cycle time, the retailer replenishment quantity, the number of shipments, and the total relevant cost of the supply chain. The main objective of the paper is to incorporate a trade credit offered by the manufacturer to the retailer under exponential price-dependent demand, where the retailer prefers to delay payments to the manufacturer. In the first stage, the retailer and manufacturer cost expressions are written as functions of ordering cost, carrying cost, and transportation cost. In the second stage, the manufacturer and retailer expressions are combined. A MATLAB program is written to derive the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain, from which managerial insights can be drawn. The findings show that the total cost of the supply chain decreases as the credit period increases under exponential price-dependent demand. To analyse the influence of the model parameters, a parametric analysis is also performed with the help of a numerical example.

  1. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
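
    As a hedged illustration of one of the simpler methods above, simple exponential smoothing after externally applied classical seasonal decomposition (here reduced to subtracting monthly means, computed over the full record for brevity; the data are synthetic):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

# Synthetic 40-year monthly temperature record with an annual cycle.
rng = np.random.default_rng(4)
idx = pd.date_range("1978-01-01", periods=480, freq="MS")
series = pd.Series(10 + 8 * np.sin(2 * np.pi * np.arange(480) / 12)
                   + rng.normal(0, 1.5, 480), index=idx)

# External "classical" deseasonalization via monthly means.
monthly_means = series.groupby(series.index.month).transform("mean")
resid = series - monthly_means

# Fit on all but the last 48 months, then forecast the held-out period.
fit = SimpleExpSmoothing(resid.iloc[:-48]).fit()
forecast = fit.forecast(48) + monthly_means.iloc[-48:].values
print(forecast.head())
```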

  2. Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users

    PubMed Central

    Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.

    2016-01-01

    Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model, but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values impact demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347

  3. Comparing exponential and exponentiated models of drug demand in cocaine users.

    PubMed

    Strickland, Justin C; Lile, Joshua A; Rush, Craig R; Stoops, William W

    2016-12-01

    Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use) whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency and demonstrating construct validity and generalizability. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
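
    A hedged sketch of the two model forms as they are usually written in the demand literature (the span constant k and the data below are illustrative): the exponential model fits log10 consumption, so zeros are undefined, whereas the exponentiated model fits consumption directly.

```python
import numpy as np
from scipy.optimize import curve_fit

K = 2.5  # span constant (log10 units), often fixed across datasets

def exponential_demand(c, log_q0, alpha):
    # Exponential form: log10(Q) = log10(Q0) + k*(exp(-alpha*Q0*c) - 1).
    # It models log consumption, so zero-consumption prices are undefined
    # and must be dropped or replaced -- the issue described above.
    q0 = 10 ** log_q0
    return log_q0 + K * (np.exp(-alpha * q0 * c) - 1)

def exponentiated_demand(c, q0, alpha):
    # Exponentiated form: Q = Q0 * 10**(k*(exp(-alpha*Q0*c) - 1)).
    # It models consumption itself, so zeros enter the fit unchanged.
    return q0 * 10 ** (K * (np.exp(-alpha * q0 * c) - 1))

prices = np.array([0.0, 0.5, 1, 2, 5, 10, 20, 50], dtype=float)
consumption = np.array([20, 18, 15, 11, 5, 2, 1, 0], dtype=float)  # note the zero

params, _ = curve_fit(exponentiated_demand, prices, consumption,
                      p0=(20.0, 0.01), maxfev=10000)
print("Q0 = %.1f, alpha = %.4f" % tuple(params))
```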

  4. Determination of riverbank erosion probability using Locally Weighted Logistic Regression

    NASA Astrophysics Data System (ADS)

    Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos

    2015-04-01

    Riverbank erosion is a natural geomorphologic process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of vulnerable locations. An alternative to the usual hydrodynamic models for predicting vulnerable locations is to quantify the probability of erosion occurrence. This can be achieved by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion. Thus, riverbank erosion can be modelled by a regression on independent variables that are considered to affect the erosion process. The impact of such variables may vary spatially; therefore, a non-stationary regression model is preferred over a stationary equivalent, and Locally Weighted Regression (LWR) is proposed as a suitable choice. This method can be extended to predict the binary presence or absence of erosion from a set of local independent variables by means of the logistic regression model; the combination is referred to as Locally Weighted Logistic Regression (LWLR). Logistic regression predicts the outcome of a categorical dependent variable (e.g. a binary response such as erosion presence or absence) from one or more predictor variables by converting the dependent variable to probability scores, while LWR assigns spatially varying weights to the observations so that the model parameters can reflect spatial heterogeneity. The erosion occurrence probability can be calculated in conjunction with the model deviance of the independent variables tested. The most straightforward measure of goodness of fit is the G statistic, a simple and effective way to evaluate the efficiency of the logistic regression model and the reliability of each independent variable. The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. Two datasets of riverbank slope, river cross-section width, and indications of erosion were available for the analysis (12 and 8 locations). Two different types of spatial dependence functions, exponential and tricubic, were examined to determine the local spatial dependence of the independent variables at the measurement locations. The results show a significant improvement when the tricubic function is applied, as the erosion probability is accurately predicted at all eight validation locations. Results for the model deviance show that cross-section width is more important than bank slope in the estimation of erosion probability along the Koiliaris riverbanks. The proposed statistical model is a useful tool that quantifies the erosion probability along riverbanks and can be used to assist in managing erosion and flooding events. Acknowledgements This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers).
The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
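
    A hedged sketch of the LWLR idea: observations are weighted by a tricubic function of distance to the prediction location and passed to a weighted logistic fit. Variable names, the bandwidth, and the toy data are assumptions, not the study's values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def tricube(d, bandwidth):
    # Tricubic spatial dependence function; zero beyond the bandwidth.
    u = np.clip(np.abs(d) / bandwidth, 0.0, 1.0)
    return (1 - u ** 3) ** 3

def lwlr_fit(X, y, coords, target_coord, bandwidth=500.0):
    # Logistic regression with observations weighted toward one location.
    d = np.linalg.norm(coords - target_coord, axis=1)
    return LogisticRegression().fit(X, y, sample_weight=tricube(d, bandwidth))

# Toy data: bank slope (deg), cross-section width (m), coordinates, erosion flag.
rng = np.random.default_rng(5)
coords = rng.uniform(0, 1000, size=(12, 2))
X = np.column_stack([rng.uniform(10, 45, 12), rng.uniform(2, 20, 12)])
y = (X[:, 1] > np.median(X[:, 1])).astype(int)  # toy erosion indicator

local_model = lwlr_fit(X, y, coords, coords[0])
print("erosion probability at location 0:", local_model.predict_proba(X[:1])[0, 1])
```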

  5. Analysis of crackling noise using the maximum-likelihood method: Power-law mixing and exponential damping.

    PubMed

    Salje, Ekhard K H; Planes, Antoni; Vives, Eduard

    2017-10-01

    Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
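
    A hedged sketch of the analysis: the standard continuous power-law maximum-likelihood estimator applied with a varying lower cutoff to a synthetic mixture of two power laws, showing the nonuniversal, cutoff-dependent exponent the abstract describes.

```python
import numpy as np

def powerlaw_mle(x, xmin):
    # Continuous power-law exponent by maximum likelihood (Hill estimator):
    # alpha_hat = 1 + n / sum(log(x_i / xmin)) over the tail x_i >= xmin.
    tail = x[x >= xmin]
    return 1 + tail.size / np.log(tail / xmin).sum()

# Superpose two power laws to mimic competing crackling-noise mechanisms.
rng = np.random.default_rng(6)
u = rng.uniform(size=20000)
x = np.concatenate([u[:10000] ** (-1 / 0.5),   # exponent 1.5
                    u[10000:] ** (-1 / 1.5)])  # exponent 2.5

# The effective exponent drifts with the lower cutoff -- the signature of
# power-law mixing rather than a single universal exponent.
for xmin in (1, 3, 10, 30, 100):
    print(f"xmin = {xmin:>3}: alpha_hat = {powerlaw_mle(x, xmin):.2f}")
```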

  6. Anomalous NMR Relaxation in Cartilage Matrix Components and Native Cartilage: Fractional-Order Models

    PubMed Central

    Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-01-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095

  7. Anomalous NMR relaxation in cartilage matrix components and native cartilage: Fractional-order models

    NASA Astrophysics Data System (ADS)

    Magin, Richard L.; Li, Weiguo; Pilar Velasco, M.; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-06-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena ( T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter ( α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for micro-structural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues.
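
    A minimal sketch of fitting the stretched-exponential member of this model family (the Mittag-Leffler form needs a special-function library); echo times and amplitudes below are synthetic, not the cartilage data:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, s0, t2, alpha):
    # Stretched-exponential decay; alpha = 1 recovers mono-exponential T2.
    return s0 * np.exp(-(t / t2) ** alpha)

# Hypothetical echo amplitudes for a cartilage-like sample.
rng = np.random.default_rng(7)
te = np.linspace(1, 200, 40)  # echo times (ms)
signal = stretched_exp(te, 1.0, 60.0, 0.8) + rng.normal(0, 0.005, te.size)

(s0, t2, alpha), _ = curve_fit(stretched_exp, te, signal, p0=(1.0, 50.0, 1.0))
print(f"T2 = {t2:.1f} ms, alpha = {alpha:.2f}")  # alpha < 1: anomalous relaxation
```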

  8. Efficacy of DL-methionine hydroxy analogue-free acid in comparison to DL-methionine in growing male white Pekin ducks.

    PubMed

    Kluge, H; Gessner, D K; Herzog, E; Eder, K

    2016-03-01

    The present study was performed to assess the bioefficacy of DL-methionine hydroxy analogue-free acid (MHA) in comparison to DL-methionine (DLM) as sources of methionine for growing male white Pekin ducks in the first 3 wk of life. For this aim, 580 1-day-old male ducks were allocated into 12 treatment groups and received a basal diet that contained 0.29% methionine, 0.34% cysteine, and 0.63% total sulphur-containing amino acids, or the same diet supplemented with either DLM or MHA in amounts supplying 0.05, 0.10, 0.15, 0.20, or 0.25% of methionine equivalents. Ducks fed the control diet without methionine supplement had the lowest final body weights, daily body weight gains, and feed intake among all groups. Supplementation of methionine improved final body weights and daily body weight gains in a dose-dependent manner. There was, however, no significant effect of the methionine source on any of the performance responses. Evaluation of the daily body weight gain data with an exponential regression model revealed a nearly identical efficacy (slope of the curves) of both compounds for growth (DLM = 100%, MHA = 101%). According to the exponential regression model, 95% of the maximum daily body weight gain was reached at supplementary methionine levels of 0.080% and 0.079% for DLM and MHA, respectively. Overall, the present study indicates that MHA and DLM have a similar efficacy as sources of methionine for growing ducks. It moreover shows that a dietary methionine concentration of 0.37% is required to reach 95% of the maximum daily body weight gain in ducks during the first 3 wk of life. © 2015 Poultry Science Association Inc.
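
    A hedged sketch of the exponential dose-response regression used for such bioefficacy comparisons; the asymptotic model and the 95%-of-maximum calculation follow the abstract, but all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_response(x, basal, gain, rate):
    # Asymptotic exponential dose-response: basal + gain * (1 - exp(-rate * x)).
    return basal + gain * (1 - np.exp(-rate * x))

# Supplementation levels (% methionine equivalents) and daily gains (g/d);
# numbers are illustrative, not the study's measurements.
dose = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])
adg = np.array([38.0, 47.5, 53.0, 56.0, 57.5, 58.2])

(basal, gain, rate), _ = curve_fit(exp_response, dose, adg, p0=(38.0, 20.0, 15.0))

# Supplementation level at which 95% of the asymptotic response is reached.
x95 = -np.log(0.05) / rate
print(f"95% of maximum response at {x95:.3f}% supplemental methionine")
```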

  9. Tracking the dispersion of Scaphoideus titanus Ball (Hemiptera: Cicadellidae) from wild to cultivated grapevine: use of a novel mark-capture technique.

    PubMed

    Lessio, F; Tota, F; Alma, A

    2014-08-01

    The dispersion of Scaphoideus titanus Ball adults from wild to cultivated grapevines was studied using a novel mark-capture technique. The crowns of wild grapevines located at distances from vineyards ranging from 5 to 330 m were sprayed with a water solution of either cow milk (marker: casein) or chicken egg whites (marker: albumin), and insects captured on yellow sticky traps placed in the grape canopy were analyzed via an indirect ELISA for marker identification. Data were subjected to exponential regression as a function of distance from the wild grapevine, and to spatial interpolation (Inverse Distance Weighted and Kernel interpolation with barriers) using ArcGIS Desktop 10.1 software. The influence of rainfall and of time elapsed after marking on marker effectiveness, and possible differences in dispersal between males and females, were studied with regression analyses. Of a total of 5417 insects analyzed, 43% were positive for the egg marker, whereas 18% of the 536 tested for the milk marker were positive. No influence of rainfall or elapsed time was observed for the egg marker, whereas the milk marker was affected by time. Males and females showed no difference in dispersal. The number of marked adults decreased exponentially with distance from the wild grapevine, and up to 80% of them were captured within 30 m; however, there was evidence of long-range dispersal up to 330 m. The interpolation maps showed a clear clustering of marked S. titanus close to the treated wild grapevines, and the pathways to the vineyards did not always follow straight lines but mainly ecological corridors. S. titanus adults are therefore capable of dispersing from wild to cultivated grapevines, and this may affect pest management strategies.

  10. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    NASA Astrophysics Data System (ADS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under Malaysian meteorological conditions. Drying of the seaweed sample in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for moisture-time curves. The idea of this method is to approximate the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression then permits the instantaneous drying rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics were fitted with six published exponential thin-layer drying models, evaluated using the coefficient of determination (R2) and the root mean square error (RMSE). The results showed that the Two-Term model best described the drying behavior. Moreover, smoothing the drying rate with the CS proved an effective method for obtaining good estimates of the moisture-time curves, as well as of missing moisture content data, for seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  11. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.

    2014-06-19

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under Malaysian meteorological conditions. Drying of the seaweed sample in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for moisture-time curves. The idea of this method is to approximate the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression then permits the instantaneous drying rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics were fitted with six published exponential thin-layer drying models, evaluated using the coefficient of determination (R2) and the root mean square error (RMSE). The results showed that the Two-Term model best described the drying behavior. Moreover, smoothing the drying rate with the CS proved an effective method for obtaining good estimates of the moisture-time curves, as well as of missing moisture content data, for seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
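
    A minimal sketch of the smoothing-and-differentiation step with scipy (illustrative numbers; the smoothing factor s is an assumption):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical moisture content (% wet basis) over drying time (hours).
t = np.array([0, 4, 8, 12, 16, 20, 24, 28, 32], dtype=float)
mc = np.array([93.4, 78.0, 62.0, 47.0, 34.0, 24.0, 16.5, 11.5, 8.2])

# Smoothing cubic spline for the moisture-time curve (s sets the smoothing).
spline = UnivariateSpline(t, mc, k=3, s=1.0)

# Analytic differentiation of the spline yields the instantaneous drying rate,
# and the spline itself fills in missing moisture readings.
rate = spline.derivative()
print("moisture at t = 10 h:", float(spline(10.0)), "%")
print("drying rate at t = 10 h:", float(rate(10.0)), "%/h")
```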

  12. Analysis of Different Hyperspectral Variables for Diagnosing Leaf Nitrogen Accumulation in Wheat.

    PubMed

    Tan, Changwei; Du, Ying; Zhou, Jian; Wang, Dunliang; Luo, Ming; Zhang, Yongjian; Guo, Wenshan

    2018-01-01

    Hyperspectral remote sensing is a rapid non-destructive method for diagnosing nitrogen status in wheat crops. In this study, a quantitative correlation was established among the following parameters: leaf nitrogen accumulation (LNA), raw hyperspectral reflectance, first-order differential hyperspectra, and hyperspectral characteristics of wheat. An integrated linear regression of LNA was obtained with raw hyperspectral reflectance (measurement wavelength = 790.4 nm), and an exponential regression of LNA was obtained with first-order differential hyperspectra (measurement wavelength = 831.7 nm). Coefficients of determination (R²) were 0.813 and 0.847; root mean squared errors (RMSE) were 2.02 g·m⁻² and 1.72 g·m⁻²; and relative errors (RE) were 25.97% and 20.85%, respectively. Both techniques were considered optimal for diagnosing wheat LNA. Nevertheless, the better one was the new normalized variable (SDr - SDb)/(SDr + SDb), based on vegetation indices, with R² = 0.935, RMSE = 0.98, and RE = 11.25%. In addition, (SDr - SDb)/(SDr + SDb) was reliable when applied to a different cultivar or even to wheat grown elsewhere, indicating a superior fit and better performance. For diagnosing LNA in wheat, the newly normalized variable (SDr - SDb)/(SDr + SDb) was more effective than the previously reported raw hyperspectral reflectance, first-order differential hyperspectra, and red-edge parameters.

  13. A method to directly measure maximum volume of fish stomachs or digestive tracts

    USGS Publications Warehouse

    Burley, C.C.; Vigg, S.

    1989-01-01

    A new method for measuring the maximum stomach or digestive tract volume of fish incorporates air injection at constant pressure with water displacement to measure directly the internal volume of a stomach or analogous structure. The method was tested with coho salmon, Oncorhynchus kisutch (Walbaum), which has a true stomach, and northern squawfish, Ptychocheilus oregonensis (Richardson), which has a modified foregut as a functional analogue. Both species were collected during July-October 1987 from the Columbia River, U.S.A. Relationships between fish weight (= volume) and maximum volume of the digestive organ were best fitted for coho salmon by an allometric model and for northern squawfish by an exponential model. Least squares regression analysis of individual measurements showed less variability in the volume of coho salmon stomachs (R2 = 0.85) than in the total digestive tracts (R2 = 0.55) and foreguts (R2 = 0.61) of northern squawfish, relative to fish size. Compared to previous methods, the new technique has the advantage of accurately measuring the internal volume of a wide range of digestive organ shapes and sizes.

  14. The potential of non-invasive pre- and post-mortem carcass measurements to predict the contribution of carcass components to slaughter yield of guinea pigs.

    PubMed

    Barba, Lida; Sánchez-Macías, Davinia; Barba, Iván; Rodríguez, Nibaldo

    2018-06-01

    Guinea pig meat consumption is increasing exponentially worldwide. Evaluating the contribution of carcass components to overall carcass quality can allow the value added to foods of animal origin to be estimated and can make research in guinea pigs more practicable. The aim of this study was to propose a methodology for modelling the contribution of different carcass components to the overall carcass quality of guinea pigs using non-invasive pre- and post-mortem carcass measurements. Predictors were selected through correlation analysis and statistical significance, whereas the prediction models were based on multiple linear regression. The predictions were more accurate when the contribution of carcass components was expressed in grams than when it was expressed as a percentage of the carcass. The proposed prediction models can be useful to the guinea pig meat industry and to research institutions, as they rely on non-invasive, time- and cost-efficient carcass measurement techniques. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Ionic-to-electronic conductivity of glasses in the P2O5-V2O5-ZnO-Li2O system

    NASA Astrophysics Data System (ADS)

    Langar, A.; Sdiri, N.; Elhouichet, H.; Ferid, M.

    2016-12-01

    Glasses with composition 15V2O5-5ZnO-(80-x)P2O5-xLi2O (x = 5, 10, 15 mol%) were prepared by conventional melt quenching. Conduction and relaxation mechanisms in these glasses were studied using impedance spectroscopy over a frequency range from 10 Hz to 10 MHz and a temperature range from 513 K to 566 K. The amorphous nature of the synthesized product was corroborated by X-ray diffraction (disappearance of nacrite peaks). The DC conductivity follows the Arrhenius law, and the activation energy determined by regression analysis varies with the Li2O content. The frequency-dependent AC conductivity was analyzed with Jonscher's universal power law, varying as ω^n, with the temperature-dependent power parameter supported by the Correlated Barrier Hopping (CBH) model. For x = 15 mol%, values of n ≤ 0.5 confirm the dominance of ionic conductivity. The analysis of the modulus formalism with a distribution of relaxation times was carried out using the Kohlrausch-Williams-Watts (KWW) stretched exponential function; the stretching exponent β is temperature dependent. The analysis of the temperature variation of the M″ peak indicates that the relaxation process is thermally activated. The modulus study reveals a temperature-dependent non-Debye-type relaxation phenomenon.

  16. Dysfunction Screening in Experimental Arteriovenous Grafts for Hemodialysis Using Fractional-Order Extractor and Color Relation Analysis.

    PubMed

    Wu, Ming-Jui; Chen, Wei-Ling; Kan, Chung-Dann; Yu, Fan-Ming; Wang, Su-Chin; Lin, Hsiu-Hui; Lin, Chia-Hung

    2015-12-01

    In physical examinations, hemodialysis access stenosis leading to dysfunction occurs at the venous anastomosis site or the outflow vein. Information from the inflow stenosis, such as increases in blood pressure, pressure drop, and flow resistance, allows dysfunction screening from the stage of early clots and thrombosis to the progression of outflow stenosis. This study therefore proposes a dysfunction screening model for experimental arteriovenous grafts (AVGs) using a fractional-order extractor (FOE) and color relation analysis (CRA). A Sprott system was designed using an FOE to quantify the differences in transverse vibration pressures between the inflow and outflow sites of an AVG. Experimental analysis revealed that the degree of stenosis (DOS) correlated with an increase in fractional-order dynamic errors (FODEs). Exponential regression was used to fit a non-linear curve quantifying the relationship between the FODEs and the DOS (R2 = 0.8064). Specific ranges were used to grade the degree of stenosis: DOS <50%, 50-80%, and >80%. A CRA-based screening method was derived from the hue angle-saturation-value color model, which describes perceptual color relationships for the DOS. It offers a flexible inference scheme with color visualization to represent the different degrees of stenosis, achieving an average accuracy >90%, superior to traditional methods. This in vitro experimental study demonstrated that the proposed model can be used for dysfunction screening in stenotic AVGs.

  17. Relationship between the Decomposition Process of Coarse Woody Debris and Fungal Community Structure as Detected by High-Throughput Sequencing in a Deciduous Broad-Leaved Forest in Japan

    PubMed Central

    Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko

    2015-01-01

    We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process. PMID:26110605

  18. Upper arm circumference development in Chinese children and adolescents: a pooled analysis.

    PubMed

    Tong, Fang; Fu, Tong

    2015-05-30

    Upper arm development in children differs among ethnic groups. There have been few reports on upper arm circumference (UAC) at different stages of development in children and adolescents in China. The purpose of this study was to provide a reference for growth with a weighted assessment of the overall level of development. Using a pooled analysis, an authoritative journal database search and reports of UAC, we created a new database on developmental measures in children. In conducting a weighted analysis, we compared reference values for 0~60 months of development against the World Health Organization (WHO) statistics, considering gender and nationality, and used Z values as interval values for the second sampling to obtain an exponentially smoothed curve for analyzing the mean, standard deviation, and sites of attachment. Ten articles were included in the pooled analysis, and these articles included participants from different areas of China. The point of intersection with the WHO curve was 3.5 years, with higher values at earlier ages and lower values at older ages. The boys' curve was steeper after puberty. The curves from the individual studies merged into a compatible line. The Z values after exponential smoothing showed that the curves were similar to those for body weight and followed a right-skewed normal distribution. The integrated index of UAC in Chinese children and adolescents indicated slight variations across regions. Exponential curve smoothing was suitable for assessment at different developmental stages.
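
    A minimal sketch of simple exponential smoothing, the kind of curve smoothing the pooled analysis applied to the Z values; the smoothing constant and the Z series below are illustrative assumptions.

        import numpy as np

        def exponential_smooth(x, alpha=0.3):
            # s[t] = alpha * x[t] + (1 - alpha) * s[t-1], with s[0] = x[0]
            s = np.empty_like(x, dtype=float)
            s[0] = x[0]
            for t in range(1, len(x)):
                s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
            return s

        z_values = np.array([-0.4, 0.1, 0.3, 0.9, 0.6, 1.2, 1.0])  # hypothetical Z scores
        print(exponential_smooth(z_values))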

  19. Is a matrix exponential specification suitable for the modeling of spatial correlation structures?

    PubMed Central

    Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha

    2018-01-01

    This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
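
    The core of the MESS specification can be illustrated with a small simulation: exp(alpha*W)*y = X*beta + eps, so that for known alpha the model is linear in beta. This sketch uses a random row-standardized weight matrix and ordinary least squares rather than the paper's Bayesian estimator; all values are illustrative.

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(0)
        n = 50
        # Hypothetical row-standardized spatial weight matrix
        W = rng.random((n, n))
        np.fill_diagonal(W, 0.0)
        W /= W.sum(axis=1, keepdims=True)

        alpha, beta = -0.5, np.array([1.0, 2.0])
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        eps = rng.normal(scale=0.1, size=n)

        # Simulate y from the MESS model: y = expm(-alpha W) (X beta + eps)
        y = expm(-alpha * W) @ (X @ beta + eps)

        # Given alpha, the transformed model expm(alpha W) y = X beta + eps
        # is linear, so beta is recoverable by ordinary least squares.
        y_t = expm(alpha * W) @ y
        beta_hat, *_ = np.linalg.lstsq(X, y_t, rcond=None)
        print(beta_hat)  # close to [1.0, 2.0]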

  20. Flow of 3D Eyring-Powell fluid by utilizing Cattaneo-Christov heat flux model and chemical processes over an exponentially stretching surface

    NASA Astrophysics Data System (ADS)

    Hayat, Tanzila; Nadeem, S.

    2018-03-01

    This paper examines the three-dimensional Eyring-Powell fluid flow over an exponentially stretching surface with heterogeneous-homogeneous chemical reactions. A new model of heat flux suggested by Cattaneo and Christov is employed to study the properties of relaxation time. From the present analysis we observe that there is an inverse relationship between temperature and thermal relaxation time. The temperature in the Cattaneo-Christov heat flux model is lower than in the classical Fourier model. In this paper the three-dimensional Cattaneo-Christov heat flux model over an exponentially stretching surface is calculated for the first time in the literature. For negative values of the temperature exponent, the temperature profile first rises to its maximum value and then gradually declines to zero, showing the occurrence of the Sparrow-Gregg hill (SGH) phenomenon. Also, for higher values of the strength-of-reaction parameters, the concentration profile decreases.

  1. CMB constraints on β-exponential inflationary models

    NASA Astrophysics Data System (ADS)

    Santos, M. A.; Benetti, M.; Alcaniz, J. S.; Brito, F. A.; Silva, R.

    2018-03-01

    We analyze a class of generalized inflationary models proposed in ref. [1], known as β-exponential inflation. We show that this kind of potential can arise in the context of brane cosmology, where the field describing the size of the extra dimension is interpreted as the inflaton. We discuss the observational viability of this class of models in light of the latest Cosmic Microwave Background (CMB) data from the Planck Collaboration through a Bayesian analysis, and impose tight constraints on the model parameters. We find that the CMB data alone weakly prefer the minimal standard model (ΛCDM) over β-exponential inflation. However, when current local measurements of the Hubble parameter, H0, are considered, the β-inflation model is moderately preferred over the ΛCDM cosmology, making the study of this class of inflationary models interesting in the context of the current H0 tension.

  2. Investigation of the double exponential in the current-voltage characteristics of silicon solar cells

    NASA Technical Reports Server (NTRS)

    Wolf, M.; Noel, G. T.; Stirn, R. J.

    1976-01-01

    A theoretical analysis is presented of certain peculiarities of the current-voltage characteristics of silicon solar cells, involving high values of the empirical constant A in the diode equation for a p-n junction. An attempt was made in a lab experiment to demonstrate that the saturation current which is associated with the exponential term exp(qV/A2kT) of the I-V characteristic, with A2 roughly equal to 2, originates in the space charge region and that it can be increased, as observed on ATS-1 cells, by the introduction of additional defects through low energy proton irradiation. It was shown that the proton irradiation introduces defects into the space charge region which give rise to a recombination current from this region, although the I-V characteristic is, in this case, dominated by an exponential term which has A = 1.
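
    The double exponential characteristic under discussion is commonly written as I = I01(exp(qV/kT) - 1) + I02(exp(qV/(A2*kT)) - 1) - IL. The sketch below evaluates this two-diode form; all parameter values are illustrative placeholders rather than measured ATS-1 cell values.

        import numpy as np

        q = 1.602e-19      # elementary charge (C)
        k = 1.381e-23      # Boltzmann constant (J/K)
        T = 300.0          # temperature (K)
        Vt = k * T / q     # thermal voltage, ~25.9 mV

        def two_diode_current(V, I01=1e-12, I02=1e-8, A2=2.0, IL=0.035):
            # Dark diode terms with A = 1 and A = A2 (~2), minus photocurrent IL (A)
            return (I01 * (np.exp(V / Vt) - 1.0)
                    + I02 * (np.exp(V / (A2 * Vt)) - 1.0)
                    - IL)

        V = np.linspace(0.0, 0.6, 7)
        print(two_diode_current(V))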

  3. Effects of clustered transmission on epidemic growth Comment on "Mathematical models to characterize early epidemic growth: A review" by Gerardo Chowell et al.

    NASA Astrophysics Data System (ADS)

    Merler, Stefano

    2016-09-01

    Characterizing the early growth profile of an epidemic outbreak is key for predicting the likely trajectory of the number of cases and for designing adequate control measures. Epidemic profiles characterized by exponential growth have been widely observed in the past and a grounding theoretical framework for the analysis of infectious disease dynamics was provided by the pioneering work of Kermack and McKendrick [1]. In particular, exponential growth stems from the assumption that pathogens spread in homogeneous mixing populations; that is, individuals of the population mix uniformly and randomly with each other. However, this assumption was readily recognized as highly questionable [2], and sub-exponential profiles of epidemic growth have been observed in a number of epidemic outbreaks, including HIV/AIDS, foot-and-mouth disease, measles and, more recently, Ebola [3,4].
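
    One standard way to contrast exponential with sub-exponential early growth, in the spirit of the review being commented on, is the generalized growth model dC/dt = r*C(t)^p, where p = 1 recovers exponential growth and p < 1 gives sub-exponential growth. The sketch below integrates both cases with illustrative parameters.

        import numpy as np
        from scipy.integrate import solve_ivp

        def growth(t, C, r, p):
            # Generalized growth model: dC/dt = r * C^p
            return r * C ** p

        t_span, C0 = (0.0, 30.0), [5.0]
        t_eval = np.linspace(*t_span, 31)

        exp_sol = solve_ivp(growth, t_span, C0, args=(0.2, 1.0), t_eval=t_eval)
        sub_sol = solve_ivp(growth, t_span, C0, args=(0.2, 0.7), t_eval=t_eval)

        # Sub-exponential growth (p = 0.7) accumulates far fewer cumulative cases
        print(exp_sol.y[0, -1], sub_sol.y[0, -1])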

  4. Analysis of the Chinese air route network as a complex network

    NASA Astrophysics Data System (ADS)

    Cai, Kai-Quan; Zhang, Jun; Du, Wen-Bo; Cao, Xian-Bin

    2012-02-01

    The air route network, which supports all the flight activities of civil aviation, is the most fundamental infrastructure of the air traffic management system. In this paper, we study the Chinese air route network (CARN) within the framework of complex networks. We find that CARN is a geographical network possessing an exponential degree distribution, low clustering coefficient, large shortest path length, and an exponential spatial distance distribution that is clearly different from that of the Chinese airport network (CAN). Moreover, by investigating flight data from 2002 to 2010, we demonstrate that the topological structure of CARN is homogeneous, although the distribution of flight flow on CARN is rather heterogeneous. In addition, the traffic on CARN keeps growing in an exponential form, and the growth rate in west China is remarkably larger than that in east China. Our work will be helpful to better understand Chinese air traffic systems.

  5. Multistability of second-order competitive neural networks with nondecreasing saturated activation functions.

    PubMed

    Nie, Xiaobing; Cao, Jinde

    2011-11-01

    In this paper, second-order interactions are introduced into competitive neural networks (NNs) and the multistability is discussed for second-order competitive NNs (SOCNNs) with nondecreasing saturated activation functions. Firstly, based on decomposition of the state space, the Cauchy convergence principle, and inequality techniques, some sufficient conditions ensuring the local exponential stability of 2^N equilibrium points are derived. Secondly, some conditions are obtained for ascertaining equilibrium points to be locally exponentially stable and to be located in any designated region. Thirdly, the theory is extended to more general saturated activation functions with 2r corner points, and a sufficient criterion is given under which the SOCNNs can have (r+1)^N locally exponentially stable equilibrium points. Even in the absence of second-order interactions, the obtained results are less restrictive than those in some recent works. Finally, three examples with their simulations are presented to verify the theoretical analysis.

  6. Shotgun proteomic monitoring of Clostridium acetobutylicum during stationary phase of butanol fermentation using xylose and comparison with the exponential phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sivagnanam, Kumaran; Raghavan, Vijaya G. S.; Shah, Manesh B

    2012-01-01

    Economically viable production of solvents through acetone butanol ethanol (ABE) fermentation requires a detailed understanding of Clostridium acetobutylicum. This study focuses on the proteomic profiling of C. acetobutylicum ATCC 824 in the stationary phase of ABE fermentation using xylose and compares it with the exponential growth phase using a shotgun proteomics approach. Comparative proteomic analysis covered 22.9% of the C. acetobutylicum genome, and 18.6% was found to be common to both exponential and stationary phases. The proteomic profile of C. acetobutylicum changed during the ABE fermentation such that 17 proteins were significantly differentially expressed between the two phases. Specifically, the expression of five proteins, namely CAC2873, CAP0164, CAP0165, CAC3298, and CAC1742, involved in the solvent production pathway was found to be significantly lower in the stationary phase compared to the exponential growth phase. Similarly, the expression of fucose isomerase (CAC2610), xylulose kinase (CAC2612), and a putative uncharacterized protein (CAC2611) involved in the xylose utilization pathway was also significantly lower in the stationary phase. These findings provide an insight into the metabolic behavior of C. acetobutylicum between different phases of ABE fermentation using xylose.

  7. Optimal Pulse Configuration Design for Heart Stimulation. A Theoretical, Numerical and Experimental Study.

    NASA Astrophysics Data System (ADS)

    Hardy, Neil; Dvir, Hila; Fenton, Flavio

    Existing pacemakers consider the rectangular pulse to be the optimal form of stimulation current. However, other waveforms for use in pacemakers could save energy while still stimulating the heart. We aim to find the optimal waveform for pacemaker use, and to offer a theoretical explanation for its advantage. Since the pacemaker battery is a charge source, here we probe the stimulation current waveforms with respect to the total charge delivery. In this talk we present theoretical analysis and numerical simulations of myocyte ion-channel currents acting as an additional source of charge that adds to the external stimulating charge for stimulation purposes. We therefore find that as the action potential emerges, the external stimulating current can be reduced exponentially. We then performed experimental studies in rabbit and cat hearts and showed that exponential truncated pulses with less total charge can indeed still induce activation in the heart. From the experiments, we present curves showing the savings in charge as a function of the exponential waveform, and we calculated that the longevity of the pacemaker battery would be ten times higher for the exponential current compared to the rectangular waveforms. Thanks to the Petit Undergraduate Research Scholars Program and NSF# 1413037.

  8. Statistical modeling of storm-level Kp occurrences

    USGS Publications Warehouse

    Remick, K.J.; Love, J.J.

    2006-01-01

    We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events occur statistically with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding such Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days respectively.
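
    The wait-time test can be sketched as follows: compute inter-event times, fit an exponential by its maximum-likelihood scale (the sample mean), and check the fit. The synthetic event times stand in for the Kp ≥ 5 exceedance times; the 7.12-day scale is borrowed from the abstract only to make the simulation concrete.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Synthetic storm occurrence times (days), generated to be Poisson-like
        event_times = np.cumsum(rng.exponential(scale=7.12, size=500))

        waits = np.diff(event_times)
        mean_wait = waits.mean()  # MLE of the exponential scale parameter

        # Kolmogorov-Smirnov test of the fitted exponential distribution
        D, p_value = stats.kstest(waits, 'expon', args=(0, mean_wait))
        print(f"mean wait = {mean_wait:.2f} days, KS p-value = {p_value:.3f}")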

  9. Coupled-cluster Green's function: Analysis of properties originating in the exponential parametrization of the ground-state wave function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    In this paper we derive basic properties of the Green's function matrix elements stemming from the exponential coupled cluster (CC) parametrization of the ground-state wave function. We demonstrate that all intermediates used to express the retarded (or equivalently, ionized) part of the Green's function in the ω-representation can be expressed through connected diagrams only. Similar properties are also shared by the first order ω-derivatives of the retarded part of the CC Green's function. This property can be extended to any order of ω-derivatives of the Green's function. Through the Dyson equation of the CC Green's function, the derivatives of the corresponding CC self-energy can be evaluated analytically. In analogy to the CC Green's function, the corresponding CC self-energy is expressed in terms of connected diagrams only. Moreover, the ionized part of the CC Green's function satisfies a non-homogeneous linear system of ordinary differential equations, whose solution may be represented in exponential form. Our analysis can be easily generalized to the advanced part of the CC Green's function.

  10. In vivo growth of 60 non-screening detected lung cancers: a computed tomography study.

    PubMed

    Mets, Onno M; Chung, Kaman; Zanen, Pieter; Scholten, Ernst T; Veldhuis, Wouter B; van Ginneken, Bram; Prokop, Mathias; Schaefer-Prokop, Cornelia M; de Jong, Pim A

    2018-04-01

    Current pulmonary nodule management guidelines are based on nodule volume doubling time, which assumes exponential growth behaviour. However, this is a theory that has never been validated in vivo in the routine-care target population. This study evaluates growth patterns of untreated solid and subsolid lung cancers of various histologies in a non-screening setting. Growth behaviour of pathology-proven lung cancers from two academic centres that were imaged at least three times before diagnosis (n=60) was analysed using dedicated software. Random-intercept random-slope mixed-models analysis was applied to test which growth pattern most accurately described lung cancer growth. Individual growth curves were plotted per pathology subgroup and nodule type. We confirmed that growth in both subsolid and solid lung cancers is best explained by an exponential model. However, subsolid lesions generally progress slower than solid ones. Baseline lesion volume was not related to growth, indicating that smaller lesions do not grow slower compared to larger ones. By showing that lung cancer conforms to exponential growth we provide the first experimental basis in the routine-care setting for the assumption made in volume doubling time analysis. Copyright ©ERS 2018.
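
    The volume doubling time (VDT) assumed by nodule guidelines follows directly from the exponential growth model V(t) = V0 * 2^(t/VDT). A minimal calculation, with hypothetical volumes and scan interval:

        import math

        def doubling_time(v1_mm3, v2_mm3, days):
            # VDT = days * ln(2) / ln(V2 / V1) under exponential growth
            return days * math.log(2) / math.log(v2_mm3 / v1_mm3)

        # Hypothetical: nodule grows from 250 to 400 mm^3 over 90 days
        print(f"VDT = {doubling_time(250.0, 400.0, 90):.0f} days")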

  11. Redshift data and statistical inference

    NASA Technical Reports Server (NTRS)

    Newman, William I.; Haynes, Martha P.; Terzian, Yervant

    1994-01-01

    Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals or bins than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimate of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.

  12. High-Resolution Free-Energy Landscape Analysis of α-Helical Protein Folding: HP35 and Its Double Mutant

    PubMed Central

    2013-01-01

    The free-energy landscape can provide a quantitative description of folding dynamics, if determined as a function of an optimally chosen reaction coordinate. Here, we construct the optimal coordinate and the associated free-energy profile for the all-helical protein HP35 and its norleucine (Nle/Nle) double mutant, based on realistic equilibrium folding simulations [Piana et al. Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 17845]. From the obtained profiles, we directly determine such basic properties of folding dynamics as the configurations of the minima and transition states (TS), the formation of secondary structure and the hydrophobic core during the folding process, the value of the pre-exponential factor and its relation to the transition path times, and the relation between the autocorrelation times in the TS and minima. We also present an investigation of the accuracy of the pre-exponential factor estimation based on the transition-path times. Four different estimations of the pre-exponential factor for both proteins give k0^-1 values of approximately a few tens of nanoseconds. Our analysis gives detailed information about folding of the proteins and can serve as a rigorous common language for extensive comparison between experiment and simulation. PMID:24348206

  13. High-Resolution Free-Energy Landscape Analysis of α-Helical Protein Folding: HP35 and Its Double Mutant.

    PubMed

    Banushkina, Polina V; Krivov, Sergei V

    2013-12-10

    The free-energy landscape can provide a quantitative description of folding dynamics, if determined as a function of an optimally chosen reaction coordinate. Here, we construct the optimal coordinate and the associated free-energy profile for the all-helical protein HP35 and its norleucine (Nle/Nle) double mutant, based on realistic equilibrium folding simulations [Piana et al. Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 17845]. From the obtained profiles, we directly determine such basic properties of folding dynamics as the configurations of the minima and transition states (TS), the formation of secondary structure and the hydrophobic core during the folding process, the value of the pre-exponential factor and its relation to the transition path times, and the relation between the autocorrelation times in the TS and minima. We also present an investigation of the accuracy of the pre-exponential factor estimation based on the transition-path times. Four different estimations of the pre-exponential factor for both proteins give k0^-1 values of approximately a few tens of nanoseconds. Our analysis gives detailed information about folding of the proteins and can serve as a rigorous common language for extensive comparison between experiment and simulation.

  14. DICOM structured report to track patient's radiation dose to organs from abdominal CT exam

    NASA Astrophysics Data System (ADS)

    Morioka, Craig; Turner, Adam; McNitt-Gray, Michael; Zankl, Maria; Meng, Frank; El-Saden, Suzie

    2011-03-01

    The dramatic increase of diagnostic imaging capabilities over the past decade has contributed to increased radiation exposure to patient populations. Several factors have contributed to the increase in imaging procedures: wider availability of imaging modalities, increase in technical capabilities, rise in demand by patients and clinicians, favorable reimbursement, and lack of guidelines to control utilization. The primary focus of this research is to provide in-depth information about the radiation doses that patients receive as a result of CT exams, with the initial investigation involving abdominal CT exams. Current dose measurement methods (i.e., the volume Computed Tomography Dose Index, CTDIvol) do not provide direct information about a patient's organ dose. We have developed a method to determine CTDIvol-normalized organ doses using a set of organ-specific exponential regression equations. These exponential equations, along with the measured CTDIvol, are used to calculate organ dose estimates from abdominal CT scans for eight different patient models. For each patient, organ dose and CTDIvol were estimated for an abdominal CT scan. We then modified the DICOM Radiation Dose Structured Report (RDSR) to store the pertinent patient information on radiation dose to their abdominal organs.
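
    A hedged sketch of the described approach: the CTDIvol-normalized organ dose coefficient is modeled as an exponential function of patient size, so that organ dose = CTDIvol * a * exp(-b * d). The coefficients and the diameter-based parameterization below are assumptions for illustration, not the study's fitted regression equations.

        import math

        def organ_dose(ctdi_vol_mgy, patient_diameter_cm, a=3.0, b=0.04):
            # Estimate organ dose (mGy) from measured CTDIvol and patient size;
            # a and b are hypothetical organ-specific regression coefficients.
            return ctdi_vol_mgy * a * math.exp(-b * patient_diameter_cm)

        print(f"{organ_dose(12.0, 28.0):.1f} mGy")  # hypothetical abdominal exam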

  15. Snowmelt runoff modeling in simulation and forecasting modes with the Martinec-Rango model

    NASA Technical Reports Server (NTRS)

    Shafer, B.; Jones, E. B.; Frick, D. M. (Principal Investigator)

    1982-01-01

    The Martinec-Rango snowmelt runoff model was applied to two watersheds in the Rio Grande basin, Colorado: the South Fork Rio Grande, a drainage encompassing 216 sq mi without reservoirs or diversions, and the Rio Grande above Del Norte, a drainage encompassing 1,320 sq mi without major reservoirs. The model was successfully applied to both watersheds when run in a simulation mode for the period 1973-79. This period included both high and low runoff seasons. Central to the adaptation of the model to run in a forecast mode was the need to develop a technique to forecast the shape of the snow cover depletion curves between satellite data points. Four separate approaches were investigated: simple linear estimation, multiple regression, parabolic exponential, and type curve. Only the parabolic exponential and type curve methods were run on the South Fork and Rio Grande watersheds for the 1980 runoff season using satellite snow cover updates when available. Although reasonable forecasts were obtained in certain situations, neither method seemed ready for truly operational forecasts, possibly due to a large amount of estimated climatic data for one or two primary base stations during the 1980 season.

  16. A case study demonstration of the soil temperature extrema recovery rates after precipitation cooling at 10-cm soil depth

    NASA Technical Reports Server (NTRS)

    Welker, Jean Edward

    1991-01-01

    Since the invention of maximum and minimum thermometers in the 18th century, diurnal air temperature extrema have been recorded worldwide. At some stations, these extrema were also collected at various soil depths, and the behavior of these temperatures at a 10-cm depth at the Tifton Experimental Station in Georgia is presented. After a precipitation cooling event, the diurnal temperature maxima drop to a minimum value and then start a recovery to higher values (similar to thermal inertia). This recovery represents a measure of response to heating as a function of soil moisture and soil properties. Eight different curves were fitted to a wide variety of data sets for different stations and years. Both power and exponential curve fits were consistently found to be statistically accurate least-squares representations of the raw recovery data. The predictive procedures used here were multivariate regression analyses, which are applicable to soils at a variety of depths besides the 10-cm depth presented.

  17. Revisiting Gaussian Process Regression Modeling for Localization in Wireless Sensor Networks

    PubMed Central

    Richter, Philipp; Toledano-Ayala, Manuel

    2015-01-01

    Signal strength-based positioning in wireless sensor networks is a key technology for seamless, ubiquitous localization, especially in areas where Global Navigation Satellite System (GNSS) signals propagate poorly. To enable wireless local area network (WLAN) location fingerprinting in larger areas while maintaining accuracy, methods to reduce the effort of radio map creation must be consolidated and automated. Gaussian process regression has been applied to overcome this issue with auspicious results, but the fit of the model was never thoroughly assessed. Instead, most studies trained a readily available model, relying on the zero mean and squared exponential covariance function, without further scrutiny. This paper studies Gaussian process regression model selection for WLAN fingerprinting in indoor and outdoor environments. We train several models for indoor, outdoor, and combined areas; we evaluate them quantitatively and compare them by means of adequate model measures, hence assessing the fit of these models directly. To illuminate the quality of the model fit, the residuals of the proposed model are investigated as well. Comparative experiments on the positioning performance verify and conclude the model selection. In this way, we show that the standard model is not the most appropriate, discuss alternatives and present our best candidate. PMID:26370996
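
    For reference, the "readily available model" the paper questions, a zero-mean Gaussian process with squared exponential covariance, can be written in a few lines. The hyperparameters, noise level, and signal-strength data below are illustrative assumptions.

        import numpy as np

        def sq_exp_kernel(a, b, length=5.0, sigma_f=4.0):
            # k(x, x') = sigma_f^2 * exp(-|x - x'|^2 / (2 * length^2))
            d2 = (a[:, None] - b[None, :]) ** 2
            return sigma_f ** 2 * np.exp(-0.5 * d2 / length ** 2)

        # Hypothetical training data: positions (m) and signal strength (dBm)
        x_train = np.array([0.0, 2.0, 5.0, 9.0, 12.0])
        y_train = np.array([-40.0, -44.0, -51.0, -58.0, -62.0])
        x_test = np.linspace(0.0, 12.0, 7)

        sigma_n = 1.0  # assumed measurement noise standard deviation
        K = sq_exp_kernel(x_train, x_train) + sigma_n ** 2 * np.eye(len(x_train))
        K_s = sq_exp_kernel(x_test, x_train)

        # Posterior mean of the zero-mean GP: K_s K^-1 y
        mean = K_s @ np.linalg.solve(K, y_train)
        print(mean)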

  18. Multi-time series RNA-seq analysis of Enterobacter lignolyticus SCF1 during growth in lignin-amended medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orellana, Roberto; Chaput, Gina; Markillie, Lye Meng

    The production of lignocellulosic-derived biofuels is a highly promising source of alternative energy, but it has been constrained by the lack of a microbial platform capable of efficiently degrading this recalcitrant material and coping with by-products that can be toxic to cells. Species that naturally grow in environments where carbon is mainly available as lignin are promising for finding new ways of removing the lignin that protects cellulose for improved conversion of lignin to fuel precursors. Enterobacter lignolyticus SCF1 is a facultative anaerobic Gammaproteobacterium isolated from tropical rain forest soil collected in El Yunque forest, Puerto Rico, under anoxic growth conditions with lignin as sole carbon source. Whole transcriptome analysis of E. lignolyticus SCF1 during lignin degradation was conducted on cells grown in the presence (0.1%, w/w) and the absence of lignin, where samples were taken at three different times during growth: beginning of exponential phase, mid-exponential phase, and beginning of stationary phase. Lignin-amended cultures achieved twice the cell biomass of unamended cultures over three days, and in this time degraded 60% of the lignin. Transcripts in early exponential phase reflected this accelerated growth. A complement of laccases, aryl-alcohol dehydrogenases, and peroxidases were most up-regulated in lignin-amended conditions in mid-exponential and early stationary phases compared to unamended growth. The association of hydrogen production by way of the formate hydrogenlyase complex with lignin degradation suggests a possible value added to lignin degradation in the future.

  19. Multi-time series RNA-seq analysis of Enterobacter lignolyticus SCF1 during growth in lignin-amended medium.

    PubMed

    Orellana, Roberto; Chaput, Gina; Markillie, Lye Meng; Mitchell, Hugh; Gaffrey, Matt; Orr, Galya; DeAngelis, Kristen M

    2017-01-01

    The production of lignocellulosic-derived biofuels is a highly promising source of alternative energy, but it has been constrained by the lack of a microbial platform capable of efficiently degrading this recalcitrant material and coping with by-products that can be toxic to cells. Species that naturally grow in environments where carbon is mainly available as lignin are promising for finding new ways of removing the lignin that protects cellulose for improved conversion of lignin to fuel precursors. Enterobacter lignolyticus SCF1 is a facultative anaerobic Gammaproteobacterium isolated from tropical rain forest soil collected in El Yunque forest, Puerto Rico, under anoxic growth conditions with lignin as sole carbon source. Whole transcriptome analysis of E. lignolyticus SCF1 during lignin degradation was conducted on cells grown in the presence (0.1%, w/w) and the absence of lignin, where samples were taken at three different times during growth: beginning of exponential phase, mid-exponential phase, and beginning of stationary phase. Lignin-amended cultures achieved twice the cell biomass of unamended cultures over three days, and in this time degraded 60% of the lignin. Transcripts in early exponential phase reflected this accelerated growth. A complement of laccases, aryl-alcohol dehydrogenases, and peroxidases were most up-regulated in lignin-amended conditions in mid-exponential and early stationary phases compared to unamended growth. The association of hydrogen production by way of the formate hydrogenlyase complex with lignin degradation suggests a possible value added to lignin degradation in the future.

  20. Multi-time series RNA-seq analysis of Enterobacter lignolyticus SCF1 during growth in lignin-amended medium

    PubMed Central

    Chaput, Gina; Markillie, Lye Meng; Mitchell, Hugh; Gaffrey, Matt; Orr, Galya; DeAngelis, Kristen M.

    2017-01-01

    The production of lignocellulosic-derived biofuels is a highly promising source of alternative energy, but it has been constrained by the lack of a microbial platform capable of efficiently degrading this recalcitrant material and coping with by-products that can be toxic to cells. Species that naturally grow in environments where carbon is mainly available as lignin are promising for finding new ways of removing the lignin that protects cellulose for improved conversion of lignin to fuel precursors. Enterobacter lignolyticus SCF1 is a facultative anaerobic Gammaproteobacterium isolated from tropical rain forest soil collected in El Yunque forest, Puerto Rico, under anoxic growth conditions with lignin as sole carbon source. Whole transcriptome analysis of E. lignolyticus SCF1 during lignin degradation was conducted on cells grown in the presence (0.1%, w/w) and the absence of lignin, where samples were taken at three different times during growth: beginning of exponential phase, mid-exponential phase, and beginning of stationary phase. Lignin-amended cultures achieved twice the cell biomass of unamended cultures over three days, and in this time degraded 60% of the lignin. Transcripts in early exponential phase reflected this accelerated growth. A complement of laccases, aryl-alcohol dehydrogenases, and peroxidases were most up-regulated in lignin-amended conditions in mid-exponential and early stationary phases compared to unamended growth. The association of hydrogen production by way of the formate hydrogenlyase complex with lignin degradation suggests a possible value added to lignin degradation in the future. PMID:29049419

  1. Multi-time series RNA-seq analysis of Enterobacter lignolyticus SCF1 during growth in lignin-amended medium

    DOE PAGES

    Orellana, Roberto; Chaput, Gina; Markillie, Lye Meng; ...

    2017-10-19

    The production of lignocellulosic-derived biofuels is a highly promising source of alternative energy, but it has been constrained by the lack of a microbial platform capable of efficiently degrading this recalcitrant material and coping with by-products that can be toxic to cells. Species that naturally grow in environments where carbon is mainly available as lignin are promising for finding new ways of removing the lignin that protects cellulose for improved conversion of lignin to fuel precursors. Enterobacter lignolyticus SCF1 is a facultative anaerobic Gammaproteobacterium isolated from tropical rain forest soil collected in El Yunque forest, Puerto Rico, under anoxic growth conditions with lignin as sole carbon source. Whole transcriptome analysis of E. lignolyticus SCF1 during lignin degradation was conducted on cells grown in the presence (0.1%, w/w) and the absence of lignin, where samples were taken at three different times during growth: beginning of exponential phase, mid-exponential phase, and beginning of stationary phase. Lignin-amended cultures achieved twice the cell biomass of unamended cultures over three days, and in this time degraded 60% of the lignin. Transcripts in early exponential phase reflected this accelerated growth. A complement of laccases, aryl-alcohol dehydrogenases, and peroxidases were most up-regulated in lignin-amended conditions in mid-exponential and early stationary phases compared to unamended growth. The association of hydrogen production by way of the formate hydrogenlyase complex with lignin degradation suggests a possible value added to lignin degradation in the future.

  2. Obstructive sleep apnea alters sleep stage transition dynamics.

    PubMed

    Bianchi, Matt T; Cash, Sydney S; Mietus, Joseph; Peng, Chung-Kang; Thomas, Robert

    2010-06-28

    Enhanced characterization of sleep architecture, compared with routine polysomnographic metrics such as stage percentages and sleep efficiency, may improve the predictive phenotyping of fragmented sleep. One approach involves using stage transition analysis to characterize sleep continuity. We analyzed hypnograms from Sleep Heart Health Study (SHHS) participants using the following stage designations: wake after sleep onset (WASO), non-rapid eye movement (NREM) sleep, and REM sleep. We show that individual patient hypnograms contain an insufficient number of bouts to adequately describe the transition kinetics, necessitating pooling of data. We compared a control group of individuals free of medications, obstructive sleep apnea (OSA), medical co-morbidities, or sleepiness (n = 374) with groups with mild (n = 496) or severe (n = 338) OSA. WASO, REM sleep, and NREM sleep bout durations exhibited multi-exponential temporal dynamics. The presence of OSA accelerated the "decay" rate of NREM and REM sleep bouts, resulting in instability manifesting as shorter bouts and an increased number of stage transitions. For WASO bouts, previously attributed to a power law process, a multi-exponential decay described the data well. Simulations demonstrated that a multi-exponential process can mimic a power law distribution. OSA alters sleep architecture dynamics by decreasing the temporal stability of NREM and REM sleep bouts. Multi-exponential fitting is superior to routine mono-exponential fitting, and may thus provide improved predictive metrics of sleep continuity. However, because a single night of sleep contains insufficient transitions to characterize these dynamics, extended monitoring of sleep, probably at home, would be necessary for individualized clinical application.

  3. Inverse Transformation: Unleashing Spatially Heterogeneous Dynamics with an Alternative Approach to XPCS Data Analysis.

    PubMed

    Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables probing dynamics in a broad array of materials with XPCS, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
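
    The conventional Kohlrausch fit that the paper's inverse-transform approach replaces has the form g(t) = exp(-(t/tau)^beta), with beta < 1 stretched and beta > 1 compressed. A minimal sketch with synthetic correlation data (all values illustrative):

        import numpy as np
        from scipy.optimize import curve_fit

        def kohlrausch(t, tau, beta):
            # Stretched/compressed exponential decay: exp(-(t / tau)**beta)
            return np.exp(-(t / tau) ** beta)

        t = np.logspace(-2, 2, 40)  # lag times (s), synthetic
        g = kohlrausch(t, 1.5, 0.7) + 0.01 * np.random.default_rng(2).normal(size=t.size)

        (tau, beta), _ = curve_fit(kohlrausch, t, g, p0=(1.0, 1.0),
                                   bounds=([1e-3, 0.1], [100.0, 3.0]))
        # A single (tau, beta) pair implicitly assumes homogeneous dynamics
        print(f"tau = {tau:.2f} s, beta = {beta:.2f}")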

  4. Inverse Transformation: Unleashing Spatially Heterogeneous Dynamics with an Alternative Approach to XPCS Data Analysis

    PubMed Central

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-01-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables probing dynamics in a broad array of materials with XPCS, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS. PMID:29875506

  5. Surrogacy of progression-free survival (PFS) for overall survival (OS) in esophageal cancer trials with preoperative therapy: Literature-based meta-analysis.

    PubMed

    Kataoka, K; Nakamura, K; Mizusawa, J; Kato, K; Eba, J; Katayama, H; Shibata, T; Fukuda, H

    2017-10-01

    There have been no reports evaluating progression-free survival (PFS) as a surrogate endpoint in resectable esophageal cancer. This study was conducted to evaluate the trial-level correlations between PFS and overall survival (OS) in resectable esophageal cancer with preoperative therapy and to explore the potential benefit of PFS as a surrogate endpoint for OS. A systematic literature search of randomized trials with preoperative chemotherapy or preoperative chemoradiotherapy for esophageal cancer reported from January 1990 to September 2014 was conducted using PubMed and the Cochrane Library. Weighted linear regression, using the sample size of each trial as a weight, was used to estimate the coefficient of determination (R^2) between PFS and OS. The primary analysis included trials in which the HR for both PFS and OS was reported. The sensitivity analysis included trials in which either the HR or the median survival time of PFS and OS was reported. In the sensitivity analysis, the HR was estimated from the median survival times of PFS and OS, assuming an exponential distribution. Of 614 articles, 10 trials were selected for the primary analysis and 15 for the sensitivity analysis. The primary analysis did not show a correlation between treatment effects on PFS and OS (R^2 = 0.283, 95% CI [0.00-0.90]). The sensitivity analysis did not show an association between PFS and OS (R^2 = 0.084, 95% CI [0.00-0.70]). Although the number of randomized controlled trials evaluating preoperative therapy for esophageal cancer is limited at the moment, PFS is not suitable as a surrogate primary endpoint for OS. Copyright © 2017 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
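
    The sensitivity-analysis step of estimating a hazard ratio from reported medians relies on the exponential assumption: the hazard is lambda = ln(2)/median, so the HR reduces to a ratio of medians. A minimal sketch with placeholder values:

        import math

        def hr_from_medians(median_treated, median_control):
            # Under exponential survival, HR = lambda_t / lambda_c
            #   = (ln2 / m_t) / (ln2 / m_c) = m_c / m_t
            return median_control / median_treated

        # Hypothetical medians in months; not values from the meta-analysis
        print(f"HR ~= {hr_from_medians(14.0, 10.0):.2f}")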

  6. Radiofrequency ablation: importance of background tissue electrical conductivity--an agar phantom and computer modeling study.

    PubMed

    Solazzo, Stephanie A; Liu, Zhengjun; Lobo, S Melvyn; Ahmed, Muneeb; Hines-Peralta, Andrew U; Lenkinski, Robert E; Goldberg, S Nahum

    2005-08-01

    To determine whether radiofrequency (RF)-induced heating can be correlated with background electrical conductivity in a controlled experimental phantom environment mimicking different background tissue electrical conductivities, and to determine the potential electrical and physical basis for such a correlation by using computer modeling. The effect of background tissue electrical conductivity on RF-induced heating was studied in a controlled system of 80 two-compartment agar phantoms (with inner wells of 0.3%, 1.0%, or 36.0% NaCl) with background conductivity that varied from 0.6% to 5.0% NaCl. Mathematical modeling of the relationship between electrical conductivity and temperatures 2 cm from the electrode (T2cm) was performed. Next, computer simulation of RF heating by using two-dimensional finite-element analysis (ETherm) was performed with parameters selected to approximate the agar phantoms. Resultant heating, in terms of both the T2cm and the distance of defined thermal isotherms from the electrode surface, was calculated and compared with the phantom data. Additionally, electrical and thermal profiles were determined by using the computer modeling data and correlated by using linear regression analysis. For each inner compartment NaCl concentration, a negative exponential relationship was established between increased background NaCl concentration and the T2cm (R^2 = 0.64-0.78). Similar negative exponential relationships (R^2 > 0.97) were observed for the computer modeling. Correlation values (R^2) between the computer and experimental data were 0.9, 0.9, and 0.55 for the 0.3%, 1.0%, and 36.0% inner NaCl concentrations, respectively. Plotting of the electrical field generated around the RF electrode identified the potential for a dramatic local change in electrical field distribution (i.e., a second electrical peak ["E-peak"]) occurring at the interface between the two compartments of varied electrical background conductivity. Linear correlations between the E-peak and heating at the T2cm (R^2 = 0.98-1.00) and the 50 degrees C isotherm (R^2 = 0.99-1.00) were established. These results demonstrate the strong relationship between background tissue conductivity and RF heating and further explain electrical phenomena that occur in a two-compartment system.

  7. Global exponential stability analysis on impulsive BAM neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Li, Yao-Tang; Yang, Chang-Bo

    2006-12-01

    Using the M-matrix and topological degree tools, sufficient conditions are obtained for the existence, uniqueness and global exponential stability of the equilibrium point of bidirectional associative memory (BAM) neural networks with distributed delays and subjected to impulsive state displacements at fixed instants of time, by constructing a suitable Lyapunov functional. The results remove the usual assumptions of boundedness, monotonicity, and differentiability of the activation functions. It is shown that in some cases, the stability criteria can be easily checked. Finally, an illustrative example is given to show the effectiveness of the presented criteria.

  8. Observational constraints on tachyonic chameleon dark energy model

    NASA Astrophysics Data System (ADS)

    Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.

    2018-03-01

    It has been recently shown that the tachyonic chameleon model of dark energy, in which a tachyon scalar field is non-minimally coupled to matter, admits a stable scaling attractor solution that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernovae (SN Ia) and Baryon Acoustic Oscillations to place constraints on the model parameters. In our analysis we consider general exponential and non-exponential forms for the non-minimal coupling function and tachyonic potential and show that the scenario is compatible with observations.

  9. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.

  10. A spatial scan statistic for survival data based on Weibull distribution.

    PubMed

    Bhatt, Vijaya; Tiwari, Neeraj

    2014-05-20

    The spatial scan statistic has been developed as a geographical cluster detection analysis tool for different types of data sets such as Bernoulli, Poisson, ordinal, normal and exponential. We propose a scan statistic for survival data based on the Weibull distribution. It may also be used for other survival distributions, such as the exponential, gamma, and log-normal. The proposed method is applied to the survival data of tuberculosis patients for the years 2004-2005 in Nainital district of Uttarakhand, India. Simulation studies reveal that the proposed method performs well for different survival distribution functions. Copyright © 2013 John Wiley & Sons, Ltd.

  11. Double slip effects of Magnetohydrodynamic (MHD) boundary layer flow over an exponentially stretching sheet with radiation, heat source and chemical reaction

    NASA Astrophysics Data System (ADS)

    Shaharuz Zaman, Azmanira; Aziz, Ahmad Sukri Abd; Ali, Zaileha Md

    2017-09-01

    The double slip effects on magnetohydrodynamic boundary layer flow over an exponentially stretching sheet with suction/blowing, radiation, chemical reaction and a heat source are presented in this analysis. By using the similarity transformation, the governing partial differential equations of momentum, energy and concentration are transformed into non-linear ordinary differential equations. These equations are solved using the Runge-Kutta-Fehlberg method with a shooting technique in the MAPLE software environment. The effects of the various parameters on the velocity, temperature and concentration profiles are graphically presented and discussed.

  12. Markov Analysis of Sleep Dynamics

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.

    2009-05-01

    A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
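
    The transition-matrix estimate at the heart of this approach can be sketched by counting stage-to-stage transitions in an epoch-coded hypnogram and row-normalizing; the short stage sequence below is hypothetical.

        import numpy as np

        # Hypothetical epoch-coded hypnogram: W = wake, N = NREM, R = REM
        stages = ["W", "N", "N", "N", "R", "R", "W", "N", "N", "R", "N", "N"]
        labels = sorted(set(stages))
        index = {s: i for i, s in enumerate(labels)}

        counts = np.zeros((len(labels), len(labels)))
        for a, b in zip(stages[:-1], stages[1:]):
            counts[index[a], index[b]] += 1

        # Row-normalize counts into transition probabilities; under a Markov
        # process, state dwell times follow (modified) exponential distributions
        P = counts / counts.sum(axis=1, keepdims=True)
        print(labels)
        print(P.round(2))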

  13. Cross-Conjugated Nanoarchitectures

    DTIC Science & Technology

    2013-08-23

    compounds were further evaluated by Lippert-Mataga analysis of the fluorescence solvatochromism and measurement of quantum yields and fluorescence lifetimes. [Fragmentary table of photophysical parameters for A(mP)2A, D(Th)2D, and A(Th)2A in cyclohexane and toluene, calculated from Lippert-Mataga plots; double exponential lifetime fits include τ1 = 21.5 ns (73%) with τ2 = 3.7 ns (27%), and τ1 = 0.85 ns.]

  14. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    PubMed Central

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity and coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. The multiple linear regression technique is used with a variable selection procedure for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data, by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results in reasonable agreement with observed values. PMID:23226984

  15. A Landsat study of water quality in Lake Okeechobee

    NASA Technical Reports Server (NTRS)

    Gervin, J. C.; Marshall, M. L.

    1976-01-01

    This paper uses multiple regression techniques to investigate the relationship between Landsat radiance values and water quality measurements. For a period of over one year, the Central and Southern Florida Flood Control District sampled the water of Lake Okeechobee for chlorophyll, carotenoids, turbidity, and various nutrients at the time of Landsat overpasses. Using an overlay map of the sampling stations, Landsat radiance values were measured from computer compatible tapes using a GE image 100 and averaging over a 22-acre area at each station. These radiance values in four bands were used to form a number of functions (powers, logarithms, exponentials, and ratios), which were then compared with the ground measurements using multiple linear regression techniques. Several dates were used to provide generality and to study possible seasonal variations. Individual correlations were presented for the various water quality parameters and best fit equations were examined for chlorophyll and turbidity. The results and their relationship to past hydrological research were discussed.

  16. C-Peptide Decline in Type 1 Diabetes Has Two Phases: An Initial Exponential Fall and a Subsequent Stable Phase.

    PubMed

    Shields, Beverley M; McDonald, Timothy J; Oram, Richard; Hill, Anita; Hudson, Michelle; Leete, Pia; Pearson, Ewan R; Richardson, Sarah J; Morgan, Noel G; Hattersley, Andrew T

    2018-06-07

    The decline in C-peptide in the 5 years after diagnosis of type 1 diabetes has been well studied, but little is known about the longer-term trajectory. We aimed to examine the association between log-transformed C-peptide levels and the duration of diabetes up to 40 years after diagnosis. We assessed the pattern of association between urinary C-peptide/creatinine ratio (UCPCR) and duration of diabetes in cross-sectional data from 1,549 individuals with type 1 diabetes using nonlinear regression approaches. Findings were replicated in longitudinal follow-up data for both UCPCR (n = 161 individuals, 326 observations) and plasma C-peptide (n = 93 individuals, 473 observations). We identified two clear phases of C-peptide decline: an initial exponential fall over 7 years (47% decrease/year [95% CI -51%, -43%]) followed by a stable period thereafter (+0.07%/year [-1.3, +1.5]). The two phases had similar durations and slopes in patients above and below the median age at diagnosis (10.8 years), although levels were lower in the younger patients irrespective of duration. Patterns were consistent in both longitudinal UCPCR (n = 162; ≤7 years duration: -48%/year [-55%, -38%]; >7 years duration: -0.1% [-4.1%, +3.9%]) and plasma C-peptide (n = 93; >7 years duration only: -2.6% [-6.7%, +1.5%]). These data support two clear phases of C-peptide decline: an initial exponential fall over a 7-year period, followed by prolonged stabilization in which C-peptide levels no longer decline. Understanding the pathophysiological and immunological differences between these two phases will give crucial insights into understanding β-cell survival. © 2018 by the American Diabetes Association.
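
    The two-phase pattern can be captured with a segmented regression on log C-peptide: a linear fall (exponential on the raw scale) up to a breakpoint, flat thereafter. The sketch below fits this form to synthetic data; the breakpoint parameterization and starting values are assumptions, not the study's exact model.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_phase(duration, log_c0, slope, breakpoint):
            # log10(C-peptide): linear decline up to `breakpoint` years, flat after
            return log_c0 + slope * np.minimum(duration, breakpoint)

        rng = np.random.default_rng(3)
        duration = rng.uniform(0, 40, 200)               # years since diagnosis
        true_log = two_phase(duration, 0.0, -0.28, 7.0)  # ~47%/year initial fall
        log_ucpcr = true_log + rng.normal(scale=0.1, size=200)

        params, _ = curve_fit(two_phase, duration, log_ucpcr, p0=(0.0, -0.2, 5.0))
        log_c0, slope, brk = params
        print(f"phase-1 annual change: {100 * (10 ** slope - 1):.0f}%, "
              f"breakpoint: {brk:.1f} years")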

  17. Diagnostic delay in psychogenic seizures and the association with anti-seizure medication trials.

    PubMed

    Kerr, Wesley T; Janio, Emily A; Le, Justine M; Hori, Jessica M; Patel, Akash B; Gallardo, Norma L; Bauirjan, Janar; Chau, Andrea M; D'Ambrosio, Shannon R; Cho, Andrew Y; Engel, Jerome; Cohen, Mark S; Stern, John M

    2016-08-01

    The average delay from first seizure to diagnosis of psychogenic non-epileptic seizures (PNES) is over 7 years. The reason for this delay is not well understood. We hypothesized that a perceived decrease in seizure frequency after starting an anti-seizure medication (ASM) may contribute to longer delays, but the frequency of such a response has not been well established. Time from onset to diagnosis, medication history and associated seizure frequency were acquired from the medical records of 297 consecutive patients with PNES diagnosed using video-electroencephalographic monitoring. Exponential regression was used to model the effect of medication trials and response on diagnostic delay. Mean diagnostic delay was 8.4 years (min 1 day, max 52 years). The robust average diagnostic delay was 2.8 years (95% CI: 2.2-3.5 years), computed from the exponential model as 10 raised to the mean of log10(delay). Each ASM trial increased the robust average delay exponentially by at least one third of a year (Wald t=3.6, p=0.004). Response to ASM trials did not significantly change diagnostic delay (Wald t=-0.9, p=0.38). Although a response to ASMs was observed commonly in these patients with PNES, the presence of a response was not associated with a longer time until definitive diagnosis. Instead, the number of ASMs tried was associated with a longer delay until diagnosis, suggesting that ASM trials were continued despite lack of response. These data support the guideline that patients with seizures should be referred to epilepsy care centers after failure of two medication trials. Copyright © 2016 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.

  18. Bioaccumulation of trace metals in octocorals depends on age and tissue compartmentalization

    PubMed Central

    Hwang, Jiang-Shiou; Huang, Ke Li; Huang, Mu-Yeh; Liu, Xue-Jun; Khim, Jong Seong; Wong, Chong Kim

    2018-01-01

    Trace metal dynamics have not been studied with respect to growth increments in octocorals. It is particularly unknown whether ontogenetic compartmentalization of trace metal accumulation is species-specific. We studied here for the first time the intracolonial distribution and concentrations of 18 trace metals in the octocorals Subergorgia suberosa, Echinogorgia complexa and E. reticulata retrieved from the northern coast of Taiwan. Levels of trace metals were considerably elevated in corals collected at these particular coral habitats as a result of diverse anthropogenic inputs. There was a significant difference in the concentration of metals among the octocorals for all metals except Sn. Both species of Echinogorgia contained significantly higher concentrations of Cu, Zn and Al than Subergorgia suberosa. For the first time, we fitted exponential growth curves describing an age-specific relationship for the octocoral trace metal concentrations of Cu, Zn, Cd, Cr and Pb, treating distance from the grip point as a proxy for younger age and fitting the log-transformed concentrations as linear regressions. The larger colony (C7) had a lower accumulation rate constant than the smaller one (C6) for Cu, Zn, Cd, Cr and Pb, while the other trace metals showed the opposite trend. The Cu concentration declined exponentially with distance from the grip point, whereas the concentrations of Zn, Cd, Cr and Pb increased exponentially. In S. suberosa and E. reticulata, Zn occurred primarily in coenosarc tissues, and Zn concentrations increased with distance from the grip point in both skeletal and coenosarc tissues. Metals present at high concentrations (e.g. Ca, Zn and Fe) generally tended to accumulate in the outer coenosarc tissues, while metals at low concentrations (e.g. V) tended to accumulate in the soft tissues of the inner skeleton. PMID:29684058
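
    A minimal sketch of such a fit, log-linearizing C(d) = C0*exp(k*d) so the accumulation rate constant k comes from an ordinary linear regression; the distances and concentrations below are invented placeholders:

    ```python
    import numpy as np

    d = np.array([2.0, 6.0, 10.0, 14.0, 18.0])      # cm from the grip point
    zn = np.array([12.0, 15.5, 21.0, 26.0, 34.0])   # hypothetical Zn, ug/g

    k, ln_c0 = np.polyfit(d, np.log(zn), 1)         # slope of ln C vs distance
    print(f"rate constant k = {k:.3f} per cm, C0 = {np.exp(ln_c0):.1f} ug/g")
    ```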

  19. Proteomics Analysis of Lactobacillus casei Zhang, a New Probiotic Bacterium Isolated from Traditional Home-made Koumiss in Inner Mongolia of China*

    PubMed Central

    Wu, Rina; Wang, Weiwei; Yu, Dongliang; Zhang, Wenyi; Li, Yan; Sun, Zhihong; Wu, Junrui; Meng, He; Zhang, Heping

    2009-01-01

    Lactobacillus casei Zhang, isolated from traditional home-made koumiss in Inner Mongolia of China, was identified as a new probiotic bacterium through probiotic selection tests. We carried out a proteomics study to identify and characterize proteins expressed by L. casei Zhang in the exponential and stationary phases. Cytosolic proteins of the strain cultivated in de Man, Rogosa, and Sharpe broth were resolved by two-dimensional gel electrophoresis using pH 4–7 linear gradients. The number of protein spots quantified from the gels was 487 ± 21 (exponential phase) and 494 ± 13 (stationary phase), of which a total of 131 spots, selected for significant growth phase-related differences or high expression intensity, were identified by MALDI-TOF/MS and/or MALDI-TOF/TOF. Together with cluster of orthologous groups (COG), codon adaptation index (CAI), and GRAVY value analyses, the study provides a first insight into the profile of protein expression as a reference map of L. casei. Forty-seven spots showed statistically significant differences between the exponential and stationary phases. Thirty-three of these spots increased at least 2.5-fold in the stationary phase in comparison with the exponential phase, including 19 protein spots (e.g. Hsp20, DnaK, GroEL, LuxS, pyruvate kinase, and GalU) whose intensity shifted up more than 3.0-fold. Transcriptional profiling by real-time quantitative PCR was conducted to confirm several important differentially expressed proteins. The analysis suggests that the differentially expressed proteins were mainly categorized as stress response proteins and key components of central and intermediary metabolism, indicating that these proteins might play an important role in adaptation to the surroundings, especially the accumulation of lactic acid in the course of growth, and in physiological processes of the bacterial cell. PMID:19508964

  20. The exponential behavior and stabilizability of the stochastic magnetohydrodynamic equations

    NASA Astrophysics Data System (ADS)

    Wang, Huaqiao

    2018-06-01

    This paper studies the two-dimensional stochastic magnetohydrodynamic equations, which are used to describe turbulent flows in magnetohydrodynamics. The exponential behavior and the exponential mean square stability of the weak solutions are proved by application of the energy method. Furthermore, we establish pathwise exponential stability by using the exponential mean square stability. When the stochastic perturbations satisfy certain additional hypotheses, we can also obtain pathwise exponential stability results without using the mean square stability.
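
    For reference, the two stability notions contrasted here can be stated as follows; this is a standard formulation, with the solution notation u(t) and the norm chosen for illustration rather than taken from the paper:

    ```latex
    % Exponential mean-square stability of the weak solution u(t):
    \exists\, C, \gamma > 0:\quad
    \mathbb{E}\,\lVert u(t) \rVert^{2} \le C\, e^{-\gamma t}\, \lVert u_{0} \rVert^{2},
    \qquad t \ge 0.
    % Pathwise (almost-sure) exponential stability:
    \limsup_{t \to \infty} \frac{1}{t} \log \lVert u(t) \rVert < 0 \quad \text{a.s.}
    ```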

  1. Determining the optimal isoleucine:lysine ratio for ten- to twenty-two-kilogram and twenty-four- to thirty-nine-kilogram pigs fed diets containing nonexcess levels of leucine.

    PubMed

    Htoo, J K; Zhu, C L; Huber, L; de Lange, C F M; Quant, A D; Kerr, B J; Cromwell, G L; Lindemann, M D

    2014-08-01

    Three 21-d experiments were conducted to determine the optimum standardized ileal digestible (SID) Ile:Lys ratio in 10- to 22-kg and 24- to 39-kg pigs. In Exp. 1, 144 Yorkshire pigs (initial BW = 10.2 kg) were assigned to 6 diets with 6 pens per treatment. Diets 1 to 5 were formulated to contain 5 graded SID Ile:Lys ratios (44, 51, 57, 63, and 70%), 1.18% SID Leu, and 0.90% SID Lys (second limiting). Diet 6 (diet 5 with added Lys) was formulated (1.06% SID Lys) as a positive control. Pigs fed diet 6 had higher (P < 0.05) ADG and G:F and lower (P < 0.02) plasma urea N (PUN) than pigs fed diet 5, indicating that Lys was limiting in diets 1 to 5. Final BW, ADG, and ADFI increased (linear and quadratic, P < 0.05) while G:F and PUN at d 21 were not affected (P > 0.10) by dietary Ile:Lys. Overall, ADG and ADFI were highest for pigs fed diet 2 (51% SID Ile:Lys). In Exp. 2, 216 Yorkshire pigs (initial BW = 9.6 kg) were assigned to 9 diets with 6 pens per treatment. Diets 1 to 4 contained 0.40, 0.47, 0.54, and 0.61% SID Ile, respectively, and 1.21% SID Lys; diets 5 to 8 contained 0.72, 0.84, 0.96, and 1.08% SID Lys, respectively, and 0.68% SID Ile. Diet 9 was high in both Ile and Lys (0.68% SID Ile and 1.21% SID Lys). All diets contained 1.21% SID Leu. The ADG and G:F increased (linear and quadratic, P < 0.05) as SID Ile:Lys increased (diets 1 to 4 and 9). The ADG and G:F increased (linear, P < 0.05) as SID Lys increased (diets 5 to 9). The PUN at d 21 decreased (linear, P < 0.05) with increasing dietary Ile:Lys. The SID Ile:Lys to optimize ADG was 46% by either curvilinear plateau or exponential regression. For G:F, the optimal SID Ile:Lys was 47 and 51% by curvilinear plateau and exponential regressions, respectively. In Exp. 3, 80 pigs (PIC 327 × C23; initial BW = 24.0 kg) were allotted to 5 treatments with 4 pigs per pen. Diets 1 to 5 were formulated to contain 5 graded SID Ile:Lys ratios (39, 46, 53, 61, and 68%), 1.17% SID Leu, and 0.91% SID Lys (second limiting). Final BW and ADG increased (linear and quadratic, P < 0.05) and ADFI increased (linear, P = 0.047) as SID Ile:Lys increased. Using ADG and G:F, the optimum SID Ile:Lys was 54 and 53%, respectively, by curvilinear plateau and exponential regression. The PUN was minimized at 53 and 59% SID Ile:Lys by curvilinear plateau and broken-line regression, respectively. Overall, the average optimum SID Ile:Lys was approximately 51% for 10- to 22-kg pigs and 54% for 24- to 39-kg pigs fed diets containing nonexcess levels of Leu.
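
    A hedged sketch of the exponential (asymptotic) regression used to read off an optimum ratio, here taken as the ratio reaching 95% of the plateau; the response data, starting values and the 95% convention are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    MIN_RATIO = 44.0  # lowest ratio fed, used as the curve origin (assumption)

    def expo_plateau(x, ymax, b, c):
        # response rises toward the plateau ymax as the ratio increases
        return ymax - b * np.exp(-c * (x - MIN_RATIO))

    ratio = np.array([44, 51, 57, 63, 70], float)      # SID Ile:Lys, %
    adg = np.array([465, 509, 513, 514, 514], float)   # hypothetical ADG, g/d

    (ymax, b, c), _ = curve_fit(expo_plateau, ratio, adg, p0=(510, 50, 0.3))
    optimum = MIN_RATIO + np.log(20.0) / c             # 95% of the response
    print(f"optimum SID Ile:Lys ~ {optimum:.0f}%")
    ```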

  2. Determination of osteoporosis risk factors using a multiple logistic regression model in postmenopausal Turkish women.

    PubMed

    Akkus, Zeki; Camdeviren, Handan; Celik, Fatma; Gur, Ali; Nas, Kemal

    2005-09-01

    To determine the risk factors of osteoporosis using a multiple binary logistic regression method and to assess the risk variables for osteoporosis, which is a major and growing health problem in many countries. We present a case-control study consisting of 126 postmenopausal healthy women as the control group and 225 postmenopausal osteoporotic women as the case group. The study was carried out in the Department of Physical Medicine and Rehabilitation, Dicle University, Diyarbakir, Turkey between 1999 and 2002. The data from the 351 participants were collected using a standard questionnaire that contains 43 variables. A multiple logistic regression model was then used to evaluate the data and to find the best regression model. We classified 80.1% (281/351) of the participants correctly using the regression model. Furthermore, the specificity of the model was 67% (84/126) in the control group, while the sensitivity was 88% (197/225) in the case group. Using the Kolmogorov-Smirnov test, we found the distribution of standardized residual values for the final model to be exponential (p=0.193). The receiver operating characteristic curve was successful in predicting patients at risk for osteoporosis. This study suggests that low levels of dietary calcium intake, physical activity, and education, and a longer duration of menopause are independent predictors of the risk of low bone density in our population. Adequate dietary calcium intake in combination with maintaining daily physical activity, increasing educational level, and decreasing birth rate and duration of breast-feeding may contribute to healthy bones and play a role in the practical prevention of osteoporosis in Southeast Anatolia. In addition, the findings of the present study indicate that for a condition such as osteoporosis, which may be influenced by many variables, a multivariate statistical method such as multiple logistic regression is better than univariate statistical evaluation.
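
    A minimal sketch of the modeling step: fit a multiple binary logistic regression and read sensitivity and specificity off the confusion matrix. The predictors and coefficients are invented stand-ins, not the study's variables:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(2)
    n = 351
    X = np.column_stack([
        rng.normal(0, 1, n),      # dietary calcium intake (standardized)
        rng.integers(0, 2, n),    # regular physical activity (0/1)
        rng.normal(0, 1, n),      # duration of menopause (standardized)
    ])
    logit = -1.0 - 0.9 * X[:, 0] - 0.8 * X[:, 1] + 0.7 * X[:, 2]
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = case

    model = LogisticRegression().fit(X, y)
    tn, fp, fn, tp = confusion_matrix(y, model.predict(X)).ravel()
    print(f"sensitivity {tp / (tp + fn):.2f}, specificity {tn / (tn + fp):.2f}")
    ```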

  3. Mathematical Modeling of Extinction of Inhomogeneous Populations

    PubMed Central

    Karev, G.P.; Kareva, I.

    2016-01-01

    Mathematical models of population extinction have a variety of applications in such areas as ecology, paleontology and conservation biology. Here we propose and investigate two types of sub-exponential models of population extinction. Unlike in the more traditional exponential models, the life duration of populations in sub-exponential models is finite. In the first model, the population is assumed to be composed of clones that are independent from each other. In the second model, we assume that the size of the population as a whole decreases according to the sub-exponential equation. We then investigate the “unobserved heterogeneity”, i.e. the underlying inhomogeneous population model, and calculate the distribution of frequencies of clones for both models. We show that the dynamics of frequencies in the first model is governed by the principle of minimum of Tsallis information loss. In the second model, the notion of “internal population time” is proposed; with respect to the internal time, the dynamics of frequencies is governed by the principle of minimum of Shannon information loss. The results of this analysis show that the principle of minimum of information loss is the underlying law for the evolution of a broad class of models of population extinction. Finally, we propose a possible application of this modeling framework to mechanisms underlying time perception. PMID:27090117

  4. The size distribution of Pacific Seamounts

    NASA Astrophysics Data System (ADS)

    Smith, Deborah K.; Jordan, Thomas H.

    1987-11-01

    An analysis of wide-beam, Sea Beam and map-count data in the eastern and southern Pacific confirms the hypothesis that the average number of "ordinary" seamounts with summit heights h ≥ H can be approximated by the exponential frequency-size distribution: v(H) = v0 e^(-βH). The exponential model, characterized by the single scale parameter β^-1, is found to be superior to a power-law (self-similar) model. The exponential model provides a good first-order description of the summit-height distribution over a very broad spectrum of seamount sizes, from small cones (h < 300 m) to tall composite volcanoes (h > 3500 m). The distribution parameters obtained from 157,000 km of wide-beam profiles in the eastern and southern Pacific Ocean are v0 = (5.4 ± 0.65) × 10^-9 m^-2 and β = (3.5 ± 0.21) × 10^-3 m^-1, yielding an average of 5400 ± 650 seamounts per million square kilometers, of which 170 ± 17 are greater than one kilometer in height. The exponential distribution provides a reference for investigating the populations of not-so-ordinary seamounts, such as those on hotspot swells and near fracture zones, and seamounts in other ocean basins. If we assume that volcano height is determined by a hydraulic head proportional to the source depth of the magma column, then our observations imply an approximately exponential distribution of source depths. For reasonable values of magma and crustal densities, a volcano with the characteristic height β^-1 = 285 m has an apparent source depth on the order of the crustal thickness.
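
    As a quick plausibility check, the quoted parameters reproduce the stated counts (one million km^2 = 1e12 m^2); the small discrepancy at 1 km reflects rounding of the fitted β:

    ```python
    import numpy as np

    v0 = 5.4e-9      # seamounts per m^2
    beta = 3.5e-3    # per m of summit height
    area = 1e12      # one million km^2, in m^2

    print(area * v0)                            # ~5400 seamounts of any height
    print(area * v0 * np.exp(-beta * 1000.0))   # ~163 with h >= 1 km (~170 quoted)
    ```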

  5. Post-test probability for neonatal hyperbilirubinemia based on umbilical cord blood bilirubin, direct antiglobulin test, and ABO compatibility results.

    PubMed

    Peeters, Bart; Geerts, Inge; Van Mullem, Mia; Micalessi, Isabel; Saegeman, Veroniek; Moerman, Jan

    2016-05-01

    Many hospitals opt for early postnatal discharge of newborns with a potential risk of readmission for neonatal hyperbilirubinemia. Assays/algorithms with the possibility to improve prediction of significant neonatal hyperbilirubinemia are needed to optimize screening protocols and the safe discharge of neonates. This study investigated the predictive value of umbilical cord blood (UCB) testing for significant hyperbilirubinemia. Neonatal UCB bilirubin, UCB direct antiglobulin test (DAT), and blood group were determined, as well as the maternal blood group and the red blood cell antibody status. Moreover, in newborns with clinically apparent jaundice after visual assessment, plasma total bilirubin (TB) was measured. Clinical factors positively associated with UCB bilirubin were ABO incompatibility, positive DAT, presence of maternal red cell antibodies, alarming visual assessment and significant hyperbilirubinemia in the first 6 days of life. UCB bilirubin performed well clinically, with an area under the receiver-operating characteristic curve (AUC) of 0.82 (95% CI 0.80-0.84). The combined UCB bilirubin, DAT, and blood group analysis outperformed these parameters considered separately in detecting significant hyperbilirubinemia and correlated exponentially with hyperbilirubinemia post-test probability. Post-test probabilities for neonatal hyperbilirubinemia can be calculated using exponential functions defined by UCB bilirubin, DAT, and ABO compatibility results. What is Known: • The diagnostic value of the triad of umbilical cord blood bilirubin measurement, direct antiglobulin testing and blood group analysis for neonatal hyperbilirubinemia remains unclear in the literature. • Currently no guideline recommends screening for hyperbilirubinemia using umbilical cord blood. What is New: • Post-test probability for hyperbilirubinemia correlated exponentially with umbilical cord blood bilirubin in different risk groups defined by direct antiglobulin test and ABO blood group compatibility results. • Exponential functions can be used to calculate hyperbilirubinemia probability.

  6. Growth and differentiation of human lens epithelial cells in vitro on matrix

    NASA Technical Reports Server (NTRS)

    Blakely, E. A.; Bjornstad, K. A.; Chang, P. Y.; McNamara, M. P.; Chang, E.; Aragon, G.; Lin, S. P.; Lui, G.; Polansky, J. R.

    2000-01-01

    PURPOSE: To characterize the growth and maturation of nonimmortalized human lens epithelial (HLE) cells grown in vitro. METHODS: HLE cells, established from 18-week prenatal lenses, were maintained on bovine corneal endothelial (BCE) extracellular matrix (ECM) in medium supplemented with basic fibroblast growth factor (FGF-2). The identity, growth, and differentiation of the cultures were characterized by karyotyping, cell morphology and growth kinetics studies, reverse transcription-polymerase chain reaction (RT-PCR), immunofluorescence, and Western blot analysis. RESULTS: HLE cells had a male, human diploid (2N = 46) karyotype. The population-doubling time of exponentially growing cells was 24 hours. After 15 days in culture, cell morphology changed, and lentoid formation was evident. RT-PCR indicated expression of alphaA- and betaB2-crystallin, fibroblast growth factor receptor 1 (FGFR1), and major intrinsic protein (MIP26) in exponential growth. Western analyses of protein extracts showed positive expression of three immunologically distinct classes of crystallin proteins (alphaA-, alphaB-, and betaB2-crystallin) with time in culture. By Western blot analysis, expression of p57(KIP2), a known marker of terminally differentiated fiber cells, was detectable in exponential cultures, and levels increased after confluence. MIP26 and gamma-crystallin protein expression was detected in confluent cultures by using immunofluorescence, but not in exponentially growing cells. CONCLUSIONS: HLE cells can be maintained for up to 4 months on ECM derived from BCE cells in medium containing FGF-2. With time in culture, the cells demonstrate morphologic characteristics of, and express protein markers for, lens fiber cell differentiation. This in vitro model will be useful for investigations of radiation-induced cataractogenesis and other studies of lens toxicity.

  7. A colloquium on the influence of versatile class of saturable nonlinear responses in the instability induced supercontinuum generation

    NASA Astrophysics Data System (ADS)

    Nithyanandan, K.; Vasantha Jayakantha Raja, R.; Porsezian, K.; Uthayakumar, T.

    2013-08-01

    We investigate modulational instability induced supercontinuum generation (MI-SCG) under versatile saturable nonlinear (SNL) responses. We identify and discuss the salient features of saturable nonlinear responses of various functional forms, such as exponential, conventional and coupled types, on modulational instability (MI) and the subsequent supercontinuum (SC) process. First, we analyze the impact of SNL on the MI spectrum and find, both analytically and numerically, that the MI gain and bandwidth are maximal for exponential nonlinearity in comparison with the other types of SNLs. We also report the unique behavior of the SNL system in the MI dynamics. Following the MI analysis, the subsequent section deals with the supercontinuum generation (SCG) process by virtue of MI. We examine exclusively the impact of each form of SNL on the SC spectrum and predict numerically that the exponential case attains phase matching earlier and thus achieves a broad spectrum at a relatively shorter propagation distance than the other cases of SNL. Thus direct evidence of SCG from MI is presented and the impact of SNL on MI-SCG is highlighted. To analyze the quality of the output continuum spectrum, we performed a coherence analysis of MI-SCG in the presence of SNL.

  8. Proteome analysis to assess physiological changes in Escherichia coli grown under glucose-limited fed-batch conditions.

    PubMed

    Raman, Babu; Nandakumar, M P; Muthuvijayan, Vignesh; Marten, Mark R

    2005-11-05

    Proteome analysis was used to compare global protein expression changes in Escherichia coli fermentation between exponential and glucose-limited fed-batch phase. Two-dimensional gel electrophoresis and MALDI-TOF mass spectrometry were used to separate and identify 49 proteins showing >2-fold difference in expression. Proteins upregulated during exponential phase include ribonucleotide biosynthesis enzymes and ribosomal recycling factor. Proteins upregulated during fed-batch phase include those involved in high-affinity glucose uptake, transport and degradation of alternate carbon sources and TCA cycle, suggesting an enhanced role of the cycle under glucose- and energy-limited conditions. We report the upregulation of several putative proteins (ytfQ, ygiS, ynaF, yggX, yfeX), not identified in any previous study under carbon-limited conditions. Copyright (c) 2005 Wiley Periodicals, Inc.

  9. Impact of inhomogeneity on SH-type wave propagation in an initially stressed composite structure

    NASA Astrophysics Data System (ADS)

    Saha, S.; Chattopadhyay, A.; Singh, A. K.

    2018-02-01

    The present analysis examines the influence of distinct forms of inhomogeneity on the phase velocity of an SH-type wave propagating through a composite structure comprised of double superficial layers lying over a half-space. Propagation of the SH-type wave in the said structure has been examined for four distinct cases of inhomogeneity, viz. when the inhomogeneity in the double superficial layers is due to exponential variation in density only (Case I); when it is due to exponential variation in rigidity only (Case II); when it is due to exponential variation in rigidity, density and initial stress (Case III); and when it is due to linear variation in rigidity, density and initial stress (Case IV). Closed-form expressions of the dispersion relation have been obtained for all four aforementioned cases through extensive application of Debye asymptotic analysis. The deduced dispersion relations for all the cases are found to be in good agreement with the classical Love-wave equation. Numerical computation has been carried out to graphically demonstrate the effect of the inhomogeneity parameters, the initial stress parameters, and the width ratio associated with the double superficial layers on the dispersion curve for each of the four aforesaid cases. A meticulous examination of the distinct cases of inhomogeneity and initial stress in the context of the considered problem has been carried out in a detailed, comparative approach.

  10. Automated time series forecasting for biosurveillance.

    PubMed

    Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit

    2007-09-30

    For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
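
    A minimal sketch of method (3) on a synthetic daily series with a day-of-week effect, scored by the MedAPE criterion described above; the series construction, horizon and model settings are illustrative assumptions:

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(3)
    days = pd.date_range("2006-01-01", periods=400, freq="D")
    dow = np.tile([1.0, 1.1, 1.1, 1.05, 1.0, 0.7, 0.6], 58)[:400]  # weekly effect
    series = pd.Series(
        rng.poisson(50 * dow * (1 + 0.001 * np.arange(400))).astype(float),
        index=days,
    )

    train, test = series[:-28], series[-28:]
    fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                               seasonal_periods=7).fit()
    forecast = fit.forecast(28)
    medape = np.median(np.abs((test.values - forecast.values) / test.values)) * 100
    print(f"MedAPE over a 28-day horizon: {medape:.1f}%")
    ```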

  11. Theoretical analysis of exponential transversal method of lines for the diffusion equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salazar, A.; Raydan, M.; Campo, A.

    1996-12-31

    Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method exhibits a very reduced truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.

  12. Induction of a global stress response during the first step of Escherichia coli plate growth.

    PubMed

    Cuny, Caroline; Lesbats, Maïalène; Dukan, Sam

    2007-02-01

    We have investigated the first events that occur when exponentially grown cells are transferred from a liquid medium (Luria-Bertani [LB]) to a solid medium (LB agar [LBA]). We observed an initial lag phase of 180 min for the wild-type strain MG1655 without any apparent growth. This lack of growth was independent of the bacterial physiological state (either the stationary or the exponential phase), the solid medium composition, or the number of cells on the plate, but it was dependent on the bacterial genotype. Using lacZ-reporter fusions and two-dimensional electrophoresis analysis, we observed that when cells from exponential-phase cultures were plated on LBA, several global regulons, such as the heat shock regulons (RpoH, RpoE, CpxAR) and oxidative-stress regulons (SoxRS, OxyR, Fur), were immediately induced. Our results indicate that in order to grow on plates, bacteria must not only adapt to new conditions but also perceive a real stress.

  13. Fracture analysis of a central crack in a long cylindrical superconductor with exponential model

    NASA Astrophysics Data System (ADS)

    Zhao, Yu Feng; Xu, Chi

    2018-05-01

    The fracture behavior of a long cylindrical superconductor is investigated by modeling a central crack that is induced by the electromagnetic force. Based on the exponential model, the stress intensity factors (SIFs) as functions of the dimensionless parameter p and the normalized crack length a/R are numerically simulated for the zero-field-cooling (ZFC) and field-cooling (FC) processes using the finite element method (FEM) and assuming a persistent current flow. As the applied field Ba decreases, the dependence of the SIFs on p and a/R in the ZFC process is exactly opposite to that observed in the FC process. Numerical results indicate that the exponential model exhibits SIF trends that differ from the results obtained using the Bean and Kim models. This implies that the crack length and the trapped field have significant effects on the fracture behavior of bulk superconductors. The obtained results are useful for understanding the critical-state model of high-temperature superconductors in crack problems.

  14. Determination of bulk and interface density of states in metal oxide semiconductor thin-film transistors by using capacitance-voltage characteristics

    NASA Astrophysics Data System (ADS)

    Wei, Xixiong; Deng, Wanling; Fang, Jielin; Ma, Xiaoyu; Huang, Junkai

    2017-10-01

    A straightforward physics-based extraction technique for the interface and bulk density of states in metal oxide semiconductor thin-film transistors (TFTs) is proposed, using the capacitance-voltage (C-V) characteristics. The interface trap density distribution with energy is extracted from the analysis of the capacitance-voltage characteristics. Using the obtained interface state distribution, the bulk trap density is then determined. With this method, the interface trap density is found to comprise a deep-state density that is approximately constant near the mid-gap and a tail-state density that increases exponentially with energy, while the bulk trap density is a superposition of exponential deep states and exponential tail states. The validity of the extraction is verified by comparisons with the measured current-voltage (I-V) characteristics and with simulation results from a technology computer-aided design (TCAD) model. The extraction method requires no numerical iteration and is simple, fast and accurate. Therefore, it is very useful for TFT device characterization.
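
    A sketch of the kind of density-of-states parameterization described; the symbols (prefactors N_d, N_t and characteristic energies E_d, E_t) are illustrative, not the paper's notation:

    ```latex
    % Density of states below the conduction band edge E_C as a superposition
    % of exponential deep states and exponential tail states:
    g(E) = N_{d}\exp\!\left(\frac{E - E_{C}}{E_{d}}\right)
         + N_{t}\exp\!\left(\frac{E - E_{C}}{E_{t}}\right),
    \qquad E \le E_{C},
    % with E_d \gg E_t, so the deep term is nearly flat around mid-gap while
    % the tail term rises steeply toward E_C.
    ```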

  15. Modulation of lens cell adhesion molecules by particle beams

    NASA Technical Reports Server (NTRS)

    McNamara, M. P.; Bjornstad, K. A.; Chang, P. Y.; Chou, W.; Lockett, S. J.; Blakely, E. A.

    2001-01-01

    Cell adhesion molecules (CAMs) are proteins which anchor cells to each other and to the extracellular matrix (ECM), but whose functions also include signal transduction, differentiation, and apoptosis. We are testing a hypothesis that particle radiations modulate CAM expression and this contributes to radiation-induced lens opacification. We observed dose-dependent changes in the expression of beta 1-integrin and ICAM-1 in exponentially-growing and confluent cells of a differentiating human lens epithelial cell model after exposure to particle beams. Human lens epithelial (HLE) cells, less than 10 passages after their initial culture from fetal tissue, were grown on bovine corneal endothelial cell-derived ECM in medium containing 15% fetal bovine serum and supplemented with 5 ng/ml basic fibroblast growth factor (FGF-2). Multiple cell populations at three different stages of differentiation were prepared for experiment: cells in exponential growth, and cells at 5 and 10 days post-confluence. The differentiation status of cells was characterized morphologically by digital image analysis, and biochemically by Western blotting using lens epithelial and fiber cell-specific markers. Cultures were irradiated with single doses (4, 8 or 12 Gy) of 55 MeV protons and, along with unirradiated control samples, were fixed using -20 degrees C methanol at 6 hours after exposure. Replicate experiments and similar experiments with helium ions are in progress. The intracellular localization of beta 1-integrin and ICAM-1 was detected by immunofluorescence using monoclonal antibodies specific for each CAM. Cells known to express each CAM were also processed as positive controls. Both exponentially-growing and confluent, differentiating cells demonstrated a dramatic proton-dose-dependent modulation (upregulation for exponential cells, downregulation for confluent cells) and a change in the intracellular distribution of the beta 1-integrin, compared to unirradiated controls. In contrast, there was a dose-dependent increase in ICAM-1 immunofluorescence in confluent, but not exponentially-growing cells. These results suggest that proton irradiation downregulates beta 1-integrin and upregulates ICAM-1, potentially contributing to cell death or to aberrant differentiation via modulation of anchorage and/or signal transduction functions. Quantification of the expression levels of the CAMs by Western analysis is in progress.

  16. Facial convective heat exchange coefficients in cold and windy environments estimated from human experiments

    NASA Astrophysics Data System (ADS)

    Ben Shabat, Yael; Shitzer, Avraham

    2012-07-01

    Facial heat exchange convection coefficients were estimated from experimental data in cold and windy ambient conditions applicable to wind chill calculations. The measured facial temperature datasets made available to this study originated from 3 separate studies involving 18 male and 6 female subjects. Most of these data were for a -10°C ambient environment and wind speeds in the range of 0.2 to 6 m s^-1. Additional single experiments were for -5°C, 0°C and 10°C environments and wind speeds in the same range. Convection coefficients were estimated for all these conditions by means of a numerical facial heat exchange model, applying properties of biological tissues and a typical facial diameter of 0.18 m. Estimation was performed by adjusting the guessed convection coefficients in the computed facial temperatures, while comparing them to measured data, to obtain a satisfactory fit (r^2 > 0.98 in most cases). In one of the studies, heat flux meters were additionally used. Convection coefficients derived from these meters closely approached the estimated values for only the male subjects. They differed significantly, by about 50%, when compared to the estimated female subjects' data. Regression analysis was performed for just the -10°C ambient temperature and the range of experimental wind speeds, due to the limited availability of data for other ambient temperatures. The regressed equation was assumed in the form of the equation underlying the "new" wind chill chart. Regressed convection coefficients, which closely duplicated the measured data, were consistently higher than those calculated by this equation, except in one single case. The estimated and currently used convection coefficients are shown to diverge exponentially from each other as wind speed increases. This finding casts considerable doubt on the validity of the convection coefficients that are used in the computation of the "new" wind chill chart and their applicability to humans in cold and windy environments.

  17. Facial convective heat exchange coefficients in cold and windy environments estimated from human experiments.

    PubMed

    Ben Shabat, Yael; Shitzer, Avraham

    2012-07-01

    Facial heat exchange convection coefficients were estimated from experimental data in cold and windy ambient conditions applicable to wind chill calculations. The measured facial temperature datasets made available to this study originated from 3 separate studies involving 18 male and 6 female subjects. Most of these data were for a -10°C ambient environment and wind speeds in the range of 0.2 to 6 m s^-1. Additional single experiments were for -5°C, 0°C and 10°C environments and wind speeds in the same range. Convection coefficients were estimated for all these conditions by means of a numerical facial heat exchange model, applying properties of biological tissues and a typical facial diameter of 0.18 m. Estimation was performed by adjusting the guessed convection coefficients in the computed facial temperatures, while comparing them to measured data, to obtain a satisfactory fit (r^2 > 0.98 in most cases). In one of the studies, heat flux meters were additionally used. Convection coefficients derived from these meters closely approached the estimated values for only the male subjects. They differed significantly, by about 50%, when compared to the estimated female subjects' data. Regression analysis was performed for just the -10°C ambient temperature and the range of experimental wind speeds, due to the limited availability of data for other ambient temperatures. The regressed equation was assumed in the form of the equation underlying the "new" wind chill chart. Regressed convection coefficients, which closely duplicated the measured data, were consistently higher than those calculated by this equation, except in one single case. The estimated and currently used convection coefficients are shown to diverge exponentially from each other as wind speed increases. This finding casts considerable doubt on the validity of the convection coefficients that are used in the computation of the "new" wind chill chart and their applicability to humans in cold and windy environments.

  18. Geomorphic effectiveness of long profile shape and role of inherent geological controls, Ganga River Basin, India

    NASA Astrophysics Data System (ADS)

    Sonam, Sonam; Jain, Vikrant

    2017-04-01

    River long profile is one of the fundamental geomorphic parameters, providing a platform to study the interaction of geological and geomorphic processes at different time scales. Long profile shape is governed by geological processes at the 10^5-10^6 year time scale, and it controls modern-day (10^0-10^1 year time scale) fluvial processes through the spatial variability of channel slope. Identification of an appropriate model for the river long profile may provide a tool to analyse the quantitative relationship between basin geology, profile shape and its geomorphic effectiveness. A systematic analysis of long profiles has been carried out for the Himalayan tributaries of the Ganga River basin. Long profile shape and the stream power distribution pattern are derived using SRTM DEM data (90 m spatial resolution). Peak discharge data from 34 stations are used for hydrological analysis. Lithological variability and major thrusts are marked along the river long profile. The best fit of the long profile is analysed for power, logarithmic and exponential functions. A second-order exponential function provides the best representation of long profiles: Z = K1*exp(-β1*L) + K2*exp(-β2*L), where Z is the elevation of the channel long profile, L is the downstream length, and K and β are coefficients of the exponential function. K1 and K2 are the proportions of elevation change of the long profile represented by the fast (β1) and slow (β2) decay coefficients of the river long profile. Different values of the coefficients express the variability in long profile shapes and are related to the litho-tectonic variability of the study area. The channel slope is estimated by taking the derivative of the exponential function, and the stream power distribution pattern along the long profile is estimated by combining the discharge and the long-profile slope. A sensitivity analysis of the stream power distribution with respect to the decay coefficients of the second-order exponential equation is evaluated over a range of coefficient values. Our analysis suggests that the amplitude of the stream power peak depends on K1, the proportion of elevation change under the fast decay exponent, while the location of the stream power peak depends on the long profile decay coefficient β1. Different long profile shapes, owing to litho-tectonic variability across the Himalayas, are responsible for the spatial variability of the stream power distribution pattern. Most of the stream power peaks lie in the Higher Himalaya. In general, eastern rivers have higher stream power in the hinterland area and low stream power in the alluvial plains. This is responsible for (1) higher erosion rates and sediment supply in the hinterland of eastern rivers, (2) the incised and stable nature of channels in the western alluvial plains and (3) the aggrading, dynamic channels in the eastern alluvial plains. Our study shows that the spatial variability of litho-units defines the coefficients of the long profile function, which in turn control the position and magnitude of the stream power maxima and hence the geomorphic variability in a fluvial system.
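
    The slope and stream power steps follow directly from the fitted profile; in the paper's symbols, with the usual definition of total stream power (ρ water density, g gravity, Q discharge):

    ```latex
    S(L) = -\frac{dZ}{dL}
         = K_{1}\beta_{1} e^{-\beta_{1} L} + K_{2}\beta_{2} e^{-\beta_{2} L},
    \qquad
    \Omega(L) = \rho\, g\, Q(L)\, S(L).
    ```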

  19. Recognizing Physisorption and Chemisorption in Carbon Nanotubes Gas Sensors by Double Exponential Fitting of the Response.

    PubMed

    Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio

    2016-05-19

    Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO2 substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time has been used to detect H2 and CO2 at various concentrations by supplying a constant current to the system. The analysis of both the adsorption and desorption curves has revealed two different exponential behaviours for each curve. The study of the characteristic times obtained from fitting the data has allowed us to separately identify chemisorption and physisorption processes on the CNTs.
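
    A hedged sketch of the double-exponential fit on a synthetic adsorption transient; the amplitudes, time constants and noise level are invented, and the fast/slow labels simply mirror the physisorption/chemisorption reading above:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_exp(t, a1, tau1, a2, tau2, v0):
        # two saturating exponentials on top of a baseline v0
        return v0 + a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))

    t = np.linspace(0, 300, 301)                       # time, s
    v = two_exp(t, 2.0, 8.0, 1.0, 90.0, 0.5)           # synthetic response, a.u.
    v += np.random.default_rng(4).normal(0, 0.02, t.size)

    p, _ = curve_fit(two_exp, t, v, p0=(1, 5, 1, 50, 0))
    print(f"tau_fast = {p[1]:.1f} s, tau_slow = {p[3]:.1f} s")
    ```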

  20. New class of control laws for robotic manipulators. I - Nonadaptive case. II - Adaptive case

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Bayard, David S.

    1988-01-01

    A new class of exponentially stabilizing control laws for joint level control of robot arms is discussed. Closed-loop exponential stability has been demonstrated for both the set point and tracking control problems by a slight modification of the energy Lyapunov function and the use of a lemma which handles third-order terms in the Lyapunov function derivatives. In the second part, these control laws are adapted in a simple fashion to achieve asymptotically stable adaptive control. The analysis addresses the nonlinear dynamics directly without approximation, linearization, or ad hoc assumptions, and uses a parameterization based on physical (time-invariant) quantities.

  1. Detecting electroporation by assessing the time constants in the exponential response of human skin to voltage controlled impulse electrical stimulation.

    PubMed

    Bîrlea, Sinziana I; Corley, Gavin J; Bîrlea, Nicolae M; Breen, Paul P; Quondamatteo, Fabio; OLaighin, Gearóid

    2009-01-01

    We propose a new method for extracting the electrical properties of human skin based on time-constant analysis of its exponential response to impulse stimulation. This analysis also yielded an incidental finding: stratum corneum electroporation can be detected with the same method. We observed that a one-time-constant model is appropriate for describing the electrical properties of human skin at low-amplitude applied voltages (<30 V), while a two-time-constant model best describes skin electrical properties at higher-amplitude applied voltages (>30 V). Higher voltage amplitudes (>30 V) have been shown to create pores in the skin's stratum corneum, which offer a new, lower-resistance pathway for the passage of current through the skin. Our data show that pores formed in the stratum corneum can be detected in vivo, because a second time constant describes the current flow through them.

  2. Operational modal analysis applied to the concert harp

    NASA Astrophysics Data System (ADS)

    Chomette, B.; Le Carrou, J.-L.

    2015-05-01

    Operational modal analysis (OMA) methods are useful for extracting the modal parameters of operating systems. These methods seem particularly interesting for investigating the modal basis of string instruments during operation, avoiding certain disadvantages of conventional methods. However, the excitation in the case of string instruments is not optimal for OMA, due to the presence of damped harmonic components and low noise in the disturbance signal. Therefore, the present study investigates the least-squares complex exponential (LSCE) method and a modified least-squares complex exponential method in the case of a string instrument, to identify the modal parameters of the instrument while it is played. The efficiency of the approach is experimentally demonstrated on a concert harp excited by some of its strings, and the two methods are compared to a conventional modal analysis. The results show that OMA allows modes that are particularly present in the instrument's response to be identified with good estimation, especially with the modified LSCE method when they are close to the excitation frequencies.

  3. Effect of algae and water on water color shift

    NASA Astrophysics Data System (ADS)

    Yang, Shengguang; Xia, Daying; Yang, Xiaolong; Zhao, Jun

    1991-03-01

    This study showed that the combined effect of absorption by planktonic algae and water on water color shift can be simulated approximately by the exponential function: log(E^100cm_W + E^100cm_Xchl) = 0.002λ - 2.5, where E^100cm_W and E^100cm_Xchl are, respectively, the extinction coefficients over a 100 cm path of seawater and of chlorophyll-a (at concentration X mg/m^3), and λ (nm) is the wavelength. This empirical regression equation is very useful for forecasting the relation between water color and biomass in water not affected by terrigenous material. The main factor affecting water color shift in the ocean should be the absorption of blue light by planktonic algae.

  4. The Trend Odds Model for Ordinal Data

    PubMed Central

    Capuano, Ana W.; Dawson, Jeffrey D.

    2013-01-01

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc Nlmixed, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a Swine Influenza example where the proportional odds assumption appears to be violated. PMID:23225520

  5. The trend odds model for ordinal data.

    PubMed

    Capuano, Ana W; Dawson, Jeffrey D

    2013-06-15

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values. We consider a trend odds version of this constrained model, wherein the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc NLMIXED and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical data set is used to illustrate the interpretation of the trend odds model, and we apply this model to a swine influenza example wherein the proportional odds assumption appears to be violated. Copyright © 2012 John Wiley & Sons, Ltd.
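
    One common way to write the constrained relationship described in the two records above, following the Peterson-Harrell idea; the cut-point scores s_j are an assumption here (e.g. s_j = j):

    ```latex
    % Proportional odds augmented with a monotone trend across cut-points j:
    \operatorname{logit} P(Y \le j \mid x) = \alpha_{j} - (\beta + \gamma\, s_{j})\, x,
    % gamma = 0 recovers proportional odds; gamma != 0 lets the odds parameter
    % increase or decrease monotonically across the ordinal cut-points.
    ```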

  6. Continuous Toxicological Dose-Response Relationships Are Pretty Homogeneous (Society for Risk Analysis Annual Meeting)

    EPA Science Inventory

    Dose-response relationships for a wide range of in vivo and in vitro continuous datasets are well-described by a four-parameter exponential or Hill model, based on a recent analysis of multiple historical dose-response datasets, mostly with more than five dose groups (Slob and Se...
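
    The four-parameter families referred to are, as commonly parameterized (the names a, b, c, g are illustrative), the exponential and Hill models:

    ```latex
    % Exponential (Slob-type) model:
    f(d) = a\left[c - (c - 1)\, e^{-(d/b)^{g}}\right],
    % Hill model:
    f(d) = a\left[1 + (c - 1)\,\frac{d^{g}}{b^{g} + d^{g}}\right],
    % a: background response, c: maximum fold change,
    % b: potency parameter, g: shape parameter.
    ```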

  7. Inverse transformation: unleashing spatially heterogeneous dynamics with an alternative approach to XPCS data analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables XPCS to probe the dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. This paper proposes an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. In conclusion, using XPCS data measured from colloidal gels, it is demonstrated that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.

  8. Inverse transformation: unleashing spatially heterogeneous dynamics with an alternative approach to XPCS data analysis

    DOE PAGES

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan; ...

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables XPCS to probe the dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. This paper proposes an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. In conclusion, using XPCS data measured from colloidal gels, it is demonstrated that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
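
    A sketch of the inverse-transform idea under simple assumptions: expand the measured decay on a log-spaced grid of exponential time scales and solve a Tikhonov-regularized nonnegative least-squares problem, in the spirit of CONTIN; the grid, penalty and data are invented:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    t = np.logspace(-3, 2, 120)                             # delay times, s
    g = 0.6 * np.exp(-t / 0.05) + 0.4 * np.exp(-t / 5.0)    # bimodal decay
    g += np.random.default_rng(5).normal(0, 0.005, t.size)

    tau = np.logspace(-4, 3, 80)            # candidate time scales
    K = np.exp(-np.outer(t, 1.0 / tau))     # exponential (Laplace) kernel
    lam = 0.1                               # regularization strength
    A = np.vstack([K, lam * np.eye(tau.size)])
    b = np.concatenate([g, np.zeros(tau.size)])
    w, _ = nnls(A, b)                       # nonnegative time-scale distribution
    print("recovered time scales near:", tau[w > 0.5 * w.max()])
    ```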

  9. Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.

    PubMed

    Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante

    2014-10-01

    In this paper, the well-known stagewise additive modeling using a multiclass exponential loss function (SAMME) boosting algorithm is extended to address problems where there exists a natural order in the targets, using a cost-sensitive approach. The proposed ensemble model uses an extreme learning machine (ELM) model as a base classifier (with the Gaussian kernel and the additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to the state-of-the-art boosting algorithms, in particular those using ELM as the base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The adoption of the weighted least squares formulation of the problem is presented as an unbiased alternative to the existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns according to the order of the targets further enables the classifier to tackle ordinal regression problems. The proposed method has been validated in an experimental study comparing it with existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.
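
    For context, the SAMME stage weight that this extension builds on takes, in its usual form (err_m the weighted error of the m-th base learner, K the number of classes):

    ```latex
    \alpha_{m} = \log\frac{1 - \mathrm{err}_{m}}{\mathrm{err}_{m}} + \log(K - 1),
    % positive (the learner is retained) whenever err_m < (K - 1)/K, which
    % reduces to the familiar AdaBoost condition err_m < 1/2 when K = 2.
    ```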

  10. Interactive effects of temperature, pH, and water activity on the growth kinetics of Shiga toxin-producing Escherichia coli O104:H4 3.

    PubMed

    Juneja, Vijay K; Mukhopadhyay, Sudarsan; Ukuku, Dike; Hwang, Cheng-An; Wu, Vivian C H; Thippareddi, Harshavardhan

    2014-05-01

    The risk of non-O157 Shiga toxin-producing Escherichia coli strains has become a growing public health concern. Several studies characterized the behavior of E. coli O157:H7; however, no reports on the influence of multiple factors on E. coli O104:H4 are available. This study examined the effects and interactions of temperature (7 to 46°C), pH (4.5 to 8.5), and water activity (aw; 0.95 to 0.99) on the growth kinetics of E. coli O104:H4 and developed predictive models to estimate its growth potential in foods. Growth kinetics studies for each of the 23 variable combinations from a central composite design were performed. Growth data were used to obtain the lag phase duration (LPD), exponential growth rate, generation time, and maximum population density (MPD). These growth parameters, as a function of temperature, pH, and aw as controlling factors, were analyzed to generate second-order response surface models. The results indicate that the observed MPD was dependent on the pH, aw, and temperature of the growth medium. Increasing temperature resulted in a concomitant decrease in LPD. Regression analysis suggests that temperature, pH, and aw significantly affect the LPD, exponential growth rate, generation time, and MPD of E. coli O104:H4. A comparison between the observed values and those of E. coli O157:H7 predictions obtained by using the U.S. Department of Agriculture Pathogen Modeling Program indicated that E. coli O104:H4 grows faster than E. coli O157:H7. The developed models were validated with alfalfa and broccoli sprouts. These models will provide risk assessors and food safety managers a rapid means of estimating the likelihood that the pathogen, if present, would grow in response to the interaction of the three variables assessed.
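
    A minimal sketch of fitting a second-order (quadratic) response surface to growth-rate observations from a three-factor design; the predictors, coefficients and data are invented placeholders:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(6)
    X = np.column_stack([
        rng.uniform(7, 46, 23),        # temperature, deg C
        rng.uniform(4.5, 8.5, 23),     # pH
        rng.uniform(0.95, 0.99, 23),   # water activity
    ])
    mu = (0.02 * X[:, 0] - 0.0003 * X[:, 0]**2 + 0.1 * X[:, 1]
          + 20 * (X[:, 2] - 0.95) + rng.normal(0, 0.02, 23))  # growth rate, 1/h

    quad = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(quad.fit_transform(X), mu)
    print(f"R^2 = {model.score(quad.transform(X), mu):.2f}")
    ```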

  11. Increased rates of authorship in radiology publications: a bibliometric analysis of 142,576 articles published worldwide by radiologists between 1991 and 2012.

    PubMed

    Chow, Daniel S; Ha, Richard; Filippi, Christopher G

    2015-01-01

    OBJECTIVE: There is evidence in academic medicine that the number of authors per paper has increased over time. The goal of this study was to quantitatively analyze authorship trends in the field of radiology over 20 years. A search of the National Library of Medicine MEDLINE database was conducted to identify articles published by radiology departments between 1991 and 2012. Country of origin, article study design, and journal impact factor were recorded. The increase in the number of authors per paper was assessed by linear and nonlinear regression. Pearson correlation was used to assess the relation between journal impact factor and number of authors. A total of 142,576 articles and 699,257 authors were identified during the study period. The mean number of authors per paper displayed linear growth from 3.9 to 5.7 (p < 0.0001). The proportion of single-author papers declined from 11% in 1991 to 4.4% in 2012. The number of clinical trials increased in a linear pattern, review articles in an exponential pattern, and case reports in a logistic pattern (p < 0.0001 for each). Countries with the highest number of authors per paper were Japan, Italy, and Germany. The number of articles funded by the U.S. National Institutes of Health (NIH) displayed exponential growth, and the number of non-NIH-funded articles displayed linear growth (p < 0.0001 for each). A negligible relation was observed between journal impact factor and number of authors (Pearson r = 0.1066). Radiology has had a steady increase in the mean number of authors per paper since the early 1990s that has varied by study design. The increase is probably multifactorial and includes components of author inflation and the increasing complexity of research. The findings support the need for a reemphasis of authorship criteria to preserve authorship value and accountability.

  12. Publication trends of shared decision making in 15 high impact medical journals: a full-text review with bibliometric analysis

    PubMed Central

    2014-01-01

    Background: Shared Decision Making (SDM) is increasingly advocated as a model for medical decision making. However, there is still low use of SDM in clinical practice. High impact factor journals might represent an efficient way for its dissemination. We aimed to identify and characterize publication trends of SDM in 15 high impact medical journals. Methods: We selected the 15 general and internal medicine journals with the highest impact factor publishing original articles, letters and editorials. We retrieved publications from 1996 to 2011 through the full-text search function on each journal website and abstracted bibliometric data. We included publications of any type containing the phrase “shared decision making” or five other variants in their abstract or full text. These were referred to as SDM publications. A polynomial Poisson regression model with logarithmic link function was used to assess the evolution across the period of the number of SDM publications according to publication characteristics. Results: We identified 1285 SDM publications out of 229,179 publications in 15 journals from 1996 to 2011. The absolute number of SDM publications by journal ranged from 2 to 273 over 16 years. SDM publications increased both in absolute and relative numbers per year, from 46 (0.32% relative to all publications from the 15 journals) in 1996 to 165 (1.17%) in 2011. This growth was exponential (P < 0.01). We found fewer research publications (465, 36.2% of all SDM publications) than non-research publications, which included non-systematic reviews, letters, and editorials. The increase of research publications across time was linear. Full-text search retrieved ten times more SDM publications than a similar PubMed search (1285 vs. 119, respectively). Conclusion: This full-text review showed that SDM publications increased exponentially in major medical journals from 1996 to 2011. This growth might reflect an increased dissemination of the SDM concept to the medical community. PMID:25106844
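
    A minimal sketch of a polynomial Poisson regression with log link of the kind described above, using invented yearly counts; a dominant positive linear term in the log-rate corresponds to exponential growth of the expected count.

    ```python
    # Sketch: Poisson GLM with log link and a quadratic time trend for
    # yearly publication counts. The counts below are hypothetical.
    import numpy as np
    import statsmodels.api as sm

    years = np.arange(1996, 2012)
    counts = np.array([46, 50, 55, 58, 66, 70, 78, 85, 90, 100,
                       108, 118, 130, 140, 152, 165])  # invented

    t = years - years.min()
    X = sm.add_constant(np.column_stack([t, t**2]))  # polynomial in time
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(fit.params)  # exp(linear coef) = multiplicative growth per year
    ```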

  13. Translucency of dental ceramics with different thicknesses.

    PubMed

    Wang, Fu; Takahashi, Hidekazu; Iwasaki, Naohiko

    2013-07-01

    The increased use of esthetic restorations requires an improved understanding of the translucent characteristics of ceramic materials. Ceramic translucency has been considered to be dependent on composition and thickness, but less information is available about the translucent characteristics of these materials, especially at different thicknesses. The purpose of this study was to investigate the relationship between translucency and the thickness of different dental ceramics. Six disk-shaped specimens of 8 glass ceramics (IPS e.max Press HO, MO, LT, HT, IPS e.max CAD LT, MO, AvanteZ Dentin, and Trans) and 5 specimens of 5 zirconia ceramics (Cercon Base, Zenotec Zr Bridge, Lava Standard, Lava Standard FS3, and Lava Plus High Translucency) were prepared following the manufacturers' instructions and ground to predetermined thicknesses with a grinding machine. A spectrophotometer was used to measure the translucency parameter (TP) of the glass ceramics at thicknesses from 2.0 to 0.6 mm and of the zirconia ceramics at thicknesses from 1.0 to 0.4 mm. The relationship between the thickness and TP of each material was evaluated using a regression analysis (α=.05). The TP values of the glass ceramics ranged from 2.2 to 25.3 and those of the zirconia ceramics from 5.5 to 15.1. There was an increase in the TP with a decrease in thickness, but the amount of change was material dependent. An exponential relationship with statistical significance (P<.05) between the TP and thickness was found for both glass ceramics and zirconia ceramics. The translucency of dental ceramics was significantly influenced by both material and thickness. The translucency of all materials increased exponentially as the thickness decreased. All of the zirconia ceramics evaluated in the present study showed some degree of translucency, which was less sensitive to thickness compared to that of the glass ceramics. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
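
    An exponential TP-thickness relationship of the form TP = a·exp(−b·thickness) can be fitted directly with nonlinear least squares; the sketch below uses invented thickness/TP pairs for illustration.

    ```python
    # Sketch: fitting TP = a * exp(-b * thickness) with scipy.
    import numpy as np
    from scipy.optimize import curve_fit

    thickness = np.array([0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0])  # mm
    tp = np.array([25.3, 20.1, 16.4, 13.0, 10.5, 8.3, 6.6, 5.2])    # invented

    def exp_model(t, a, b):
        return a * np.exp(-b * t)

    (a, b), cov = curve_fit(exp_model, thickness, tp, p0=(30.0, 1.0))
    print(f"TP ~ {a:.1f} * exp(-{b:.2f} * thickness)")
    ```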

  14. Modeling time-series data from microbial communities.

    PubMed

    Ridenhour, Benjamin J; Brooker, Sarah L; Williams, Janet E; Van Leuven, James T; Miller, Aaron W; Dearing, M Denise; Remien, Christopher H

    2017-11-01

    As sequencing technologies have advanced, the amount of information regarding the composition of bacterial communities from various environments (for example, skin or soil) has grown exponentially. To date, most work has focused on cataloging taxa present in samples and determining whether the distribution of taxa shifts with exogenous covariates. However, important questions regarding how taxa interact with each other and their environment remain open thus preventing in-depth ecological understanding of microbiomes. Time-series data from 16S rDNA amplicon sequencing are becoming more common within microbial ecology, but methods to infer ecological interactions from these longitudinal data are limited. We address this gap by presenting a method of analysis using Poisson regression fit with an elastic-net penalty that (1) takes advantage of the fact that the data are time series; (2) constrains estimates to allow for the possibility of many more interactions than data; and (3) is scalable enough to handle data consisting of thousands of taxa. We test the method on gut microbiome data from white-throated woodrats (Neotoma albigula) that were fed varying amounts of the plant secondary compound oxalate over a period of 22 days to estimate interactions between OTUs and their environment.
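
    A schematic of the penalized-regression idea: regress one taxon's counts at time t on all taxa at time t−1 with a Poisson model under an elastic-net penalty. The statsmodels implementation and all data below are assumptions for illustration; the authors' own implementation is not reproduced here.

    ```python
    # Sketch: elastic-net-penalized Poisson regression on simulated
    # time-series OTU counts; nonzero coefficients flag candidate interactions.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    T, p = 22, 50                       # 22 days, 50 OTUs (illustrative)
    abund = rng.poisson(5, size=(T, p)).astype(float)

    y = abund[1:, 0]                    # focal OTU at times 2..T
    X = sm.add_constant(abund[:-1, :])  # all OTUs at the previous time step

    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit_regularized(
        method="elastic_net", alpha=0.1, L1_wt=0.5)  # mix of L1 and L2
    nonzero = np.flatnonzero(fit.params[1:])
    print("candidate interactions with OTUs:", nonzero)
    ```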

  15. Growth models of Rhizophora mangle L. seedlings in tropical southwestern Atlantic

    NASA Astrophysics Data System (ADS)

    Lima, Karen Otoni de Oliveira; Tognella, Mônica Maria Pereira; Cunha, Simone Rabelo; Andrade, Humber Agrelli de

    2018-07-01

    The present study selected and compared regression models that best describe the growth curves of Rhizophora mangle seedlings based on the height (cm) and time (days) variables. The Linear, Exponential, Power Law, Monomolecular, Logistic, and Gompertz models were fitted with non-linear formulations by minimizing the sum of squared residuals. The Akaike Information Criterion was used to select the best model for each seedling. After this selection, the determination coefficient, which evaluates how well a model describes height variation as a function of time, was inspected. Differing from classic population ecology studies, the Monomolecular, Three-parameter Logistic, and Gompertz models showed the best performance in describing growth, suggesting they are the most adequate options for long-term studies. The different growth curves reflect the complexity of stem growth at the seedling stage for R. mangle. The analysis of the joint distribution of the parameters initial height, growth rate, and asymptotic size allowed the study of the species' ecological attributes and of its intraspecific variability in each model. Our results provide a basis for interpreting the dynamics of seedling growth during establishment in a mature forest, as well as its regeneration processes.
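
    The fit-then-rank workflow can be sketched as follows: fit each candidate curve to one seedling's height series by least squares and compare AIC values. The model forms are the standard ones; heights, starting values, and the Gaussian-error AIC are illustrative assumptions.

    ```python
    # Sketch: fitting monomolecular, logistic, and Gompertz growth curves
    # and ranking them by AIC (least-squares / Gaussian-error version).
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([0, 30, 60, 90, 120, 150, 180, 210], float)   # days
    h = np.array([8, 14, 19, 23, 26, 28, 29, 30], float)       # height, cm

    models = {
        "monomolecular": lambda t, A, k, c: A * (1 - c * np.exp(-k * t)),
        "logistic":      lambda t, A, k, tm: A / (1 + np.exp(-k * (t - tm))),
        "gompertz":      lambda t, A, k, tm: A * np.exp(-np.exp(-k * (t - tm))),
    }
    p0 = {"monomolecular": (35, 0.01, 0.8),
          "logistic": (30, 0.03, 60),
          "gompertz": (30, 0.02, 40)}

    for name, f in models.items():
        params, _ = curve_fit(f, t, h, p0=p0[name], maxfev=10000)
        rss = np.sum((h - f(t, *params)) ** 2)
        n, k = len(t), len(params) + 1      # +1 for the error variance
        aic = n * np.log(rss / n) + 2 * k   # AIC up to an additive constant
        print(f"{name:14s} AIC = {aic:6.1f}")
    ```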

  16. Characteristics of pulsed runoff-erosion events under typical rainstorms in a small watershed on the Loess Plateau of China.

    PubMed

    Wu, Lei; Jiang, Jun; Li, Gou-Xia; Ma, Xiao-Yi

    2018-02-27

    The pulsed events of rainstorm erosion on the Loess Plateau are well known, but little information is available concerning the characteristics of superficial soil erosion processes caused by heavy rainstorms at the watershed scale. This study statistically evaluated the characteristics of pulsed runoff-erosion events based on 17 rainstorms observed from 1997 to 2010 in a small loess watershed on the Loess Plateau of China. Results show that: 1) Rainfall is the fundamental driving force of soil erosion on hillslopes, but the correlations of rainfall-runoff and rainfall-sediment in different rainstorms are often scattered due to infiltration-excess runoff and soil conservation measures. 2) Relationships between runoff and sediment for each rainstorm event can be fitted by linear, power, logarithmic and exponential functions. Cluster analysis is helpful in classifying runoff-erosion events and formulating soil conservation strategies for rainstorm erosion. 3) Response characteristics of sediment yield differ across levels of pulsed runoff-erosion events. Affected by rainfall intensity and duration, large changes may occur in the interactions between flow and sediment for different flood events. The results provide new insights into runoff-erosion processes and will assist soil conservation planning in the loess hilly region.

  17. Estimation of liver T₂ in transfusion-related iron overload in patients with weighted least squares T₂ IDEAL.

    PubMed

    Vasanawala, Shreyas S; Yu, Huanzhou; Shimakawa, Ann; Jeng, Michael; Brittain, Jean H

    2012-01-01

    MR imaging of hepatic iron overload can be achieved by estimating T₂ values using multiple-echo sequences. The purpose of this work is to develop and clinically evaluate a weighted least squares algorithm based on the T₂ Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation (IDEAL) technique for volumetric estimation of hepatic T₂ in the setting of iron overload. The weighted least squares T₂ IDEAL technique improves T₂ estimation by automatically decreasing the impact of later, noise-dominated echoes. The technique was evaluated in 37 patients with iron overload. Each patient underwent (i) a standard 2D multiple-echo gradient echo sequence for T₂ assessment with nonlinear exponential fitting, and (ii) a 3D T₂ IDEAL technique, with and without a weighted least squares fit. Regression and Bland-Altman analysis demonstrated strong correlation between conventional 2D and T₂ IDEAL estimation. In cases of severe iron overload, T₂ IDEAL without weighted least squares reconstruction resulted in a relative overestimation of T₂ compared with weighted least squares. Copyright © 2011 Wiley-Liss, Inc.
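
    The core idea of down-weighting late, noise-dominated echoes can be illustrated with a weighted mono-exponential fit. The signal-proportional weighting below is a common heuristic and an assumption here, not the exact weighting of the T₂ IDEAL algorithm; all data are invented.

    ```python
    # Sketch: weighted vs. unweighted least-squares fit of S(TE) = S0*exp(-TE/T2),
    # giving later (low-signal, noisy) echoes less influence.
    import numpy as np
    from scipy.optimize import curve_fit

    te = np.array([1.0, 2.2, 3.4, 4.6, 5.8, 7.0])        # echo times, ms
    s = np.array([820, 410, 215, 120, 70, 55], float)    # hypothetical signals

    def decay(te, s0, t2):
        return s0 * np.exp(-te / t2)

    sigma = 1.0 / np.clip(s, 1e-6, None)  # higher signal -> more weight
    (p_w, _) = curve_fit(decay, te, s, p0=(800, 2.0), sigma=sigma)
    (p_u, _) = curve_fit(decay, te, s, p0=(800, 2.0))
    print(f"weighted T2 = {p_w[1]:.2f} ms, unweighted T2 = {p_u[1]:.2f} ms")
    ```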

  18. Localized normalization for improved calibration curves of manganese and zinc in laser-induced plasma spectroscopy

    NASA Astrophysics Data System (ADS)

    Sabri, Nursalwanie Mohd; Haider, Zuhaib; Tufail, Kashif; Imran, Muhammad; Ali, Jalil

    2017-03-01

    Laser-induced plasma spectroscopy is performed to determine the elemental compositions of manganese and zinc in a potassium bromide (KBr) matrix. This work utilized a Q-switched Nd:YAG laser installed in the LIBS2500plus system at the fundamental wavelength. The pelletized samples were ablated in air with a maximum laser energy of 650 mJ for gate delays ranging from 0 to 18 µs. The spectra of the samples were obtained for five different compositions containing the preferred spectral lines. The intensity of each spectral line reached its maximum at a gate delay of 0.83 µs and subsequently decayed exponentially with increasing gate delay. Maximum signal-to-background ratios of Mn and Zn were found at gate delays of 7.92 and 7.50 µs, respectively. Initial calibration curves show poor fits, whereas the locally normalized intensities of both spectral lines yield an enhancement, being more linearly regressed. This study gives a better understanding of plasma emission and spectral analysis. At the request of all authors of the paper, and with the agreement of the Proceedings Editor, an updated version of this article was published on 24 May 2017.

  19. The γ parameter of the stretched-exponential model is influenced by internal gradients: validation in phantoms.

    PubMed

    Palombo, Marco; Gabrielli, Andrea; De Santis, Silvia; Capuani, Silvia

    2012-03-01

    In this paper, we investigate the image contrast that characterizes anomalous and non-gaussian diffusion images obtained using the stretched exponential model. This model is based on the introduction of the γ stretched parameter, which quantifies the deviation from mono-exponential decay of the diffusion signal as a function of the b-value. To date, the biophysical substrate underpinning the contrast observed in γ maps, in other words, the biophysical interpretation of the γ parameter (or the fractional order derivative in space, the β parameter), is still not fully understood, although it has already been applied to investigate both animal models and the human brain. Due to the ability of γ maps to reflect additional microstructural information which cannot be obtained using diffusion procedures based on gaussian diffusion, some authors propose this parameter as a measure of diffusion heterogeneity or water compartmentalization in biological tissues. Based on our recent work, we suggest here that the coupling between internal and diffusion gradients provides pseudo-superdiffusion effects which are quantified by the stretching exponential parameter γ. This means that the image contrast of Mγ maps reflects local magnetic susceptibility differences (Δχ(m)), thus highlighting, better than T(2)(∗) contrast, the interfaces between compartments characterized by Δχ(m). Thanks to this characteristic, Mγ imaging may represent an interesting tool for developing contrast-enhanced MRI for molecular imaging. The spectroscopic and imaging experiments (performed in controlled micro-bead dispersions) reported here strongly suggest that internal gradients, and as a consequence Δχ(m), are an important factor in fully understanding the source of contrast in anomalous diffusion methods based on a stretched exponential model analysis of diffusion data obtained at varying gradient strengths g. Copyright © 2012 Elsevier Inc. All rights reserved.
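
    The stretched-exponential diffusion model is commonly written S(b) = S0·exp(−(b·DDC)^γ), with γ = 1 recovering mono-exponential (Gaussian) decay; a minimal fitting sketch on synthetic signal-vs-b data follows.

    ```python
    # Sketch: fitting the stretched-exponential model to diffusion signal decay.
    # gamma < 1 quantifies deviation from mono-exponential behaviour.
    import numpy as np
    from scipy.optimize import curve_fit

    b = np.array([0, 200, 500, 1000, 1500, 2000, 3000], float)  # s/mm^2
    ddc_true, gamma_true = 1.0e-3, 0.7
    s = (np.exp(-(b * ddc_true) ** gamma_true)
         + np.random.default_rng(2).normal(0, 0.01, b.size))

    def stretched(b, s0, ddc, gamma):
        return s0 * np.exp(-(b * ddc) ** gamma)

    (s0, ddc, gamma), _ = curve_fit(stretched, b, s, p0=(1.0, 1e-3, 0.9),
                                    bounds=([0, 1e-5, 0.1], [2, 1e-2, 1.0]))
    print(f"DDC = {ddc:.2e} mm^2/s, gamma = {gamma:.2f}")
    ```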

  20. Reliability analysis using an exponential power model with bathtub-shaped failure rate function: a Bayes study.

    PubMed

    Shehla, Romana; Khan, Athar Ali

    2016-01-01

    Models with a bathtub-shaped hazard function have been widely accepted in the fields of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this paper, the exponential power model, capable of assuming an increasing as well as a bathtub shape, is studied. This article makes a Bayesian study of the model and simultaneously shows how posterior simulations based on Markov chain Monte Carlo algorithms can be straightforward and routine in R. The study is carried out for complete as well as censored data, under the assumption of weakly informative priors for the parameters. In addition, inference interest focuses on the posterior distribution of non-linear functions of the parameters. The model has also been extended to include continuous explanatory variables, and the R code is well illustrated. Two real data sets are considered for illustrative purposes.

  1. Effect of the state of internal boundaries on granite fracture nature under quasi-static compression

    NASA Astrophysics Data System (ADS)

    Damaskinskaya, E. E.; Panteleev, I. A.; Kadomtsev, A. G.; Naimark, O. B.

    2017-05-01

    Based on an analysis of the spatial distribution of hypocenters of acoustic emission signal sources and of the energy distributions of acoustic emission signals, the effect of the liquid phase and of a weak electric field on the spatiotemporal nature of granite sample fracture is studied. Experiments on uniaxial compression of granite samples of natural moisture showed that the damage accumulation process is two-stage: dispersed accumulation of damage is followed by localized accumulation in the region of the forming macrofracture nucleus. In the energy distributions of acoustic emission signals, this transition is accompanied by a change in the distribution shape from exponential to power-law. Water saturation of the granite qualitatively changes the nature of damage accumulation: the process remains delocalized up to macrofracture, with an exponential energy distribution of acoustic emission signals. Exposure to a weak electric field results in a selective change in the nature of damage accumulation in the sample volume.

  2. Trends in Lung Cancer Incidence in Delhi, India 1988-2012: Age-Period-Cohort and Joinpoint Analyses

    PubMed

    Malhotra, Rajeev Kumar; Manoharan, Nalliah; Nair, Omana; Deo, Suryanarayana; Rath, Goura Kishor

    2018-06-25

    Introduction: Lung cancer (LC) has been one of the most commonly diagnosed cancers worldwide, both in terms of new cases and mortality. Exponential growth of economic and industrial activities in recent decades in the Delhi urban area may have increased the incidence of LC. The primary objective of this study was to evaluate the time trend according to gender. Method: LC incidence data over 25 years were obtained from the population-based urban Delhi cancer registry. Joinpoint regression analysis was applied to evaluate the time trend of age-standardized incidence rates. The age-period-cohort (APC) model was employed using a Poisson distribution with a log link function and the intrinsic estimator (IE) method. Results: During the 25 years, 13,489 male and 3,259 female LC cases were registered, accounting for 9.78% of male and 2.53% of female total cancer cases. Joinpoint regression analysis revealed that LC incidence in males continued to increase during the entire period, with a sharp acceleration observed starting from 2009. In females, the LC incidence rate remained at a plateau during 1988-2002 and increased thereafter. The cumulative risks for 1988-2012 were 1.79% in males and 0.45% in females. The full APC (IE) model showed the best fit for an age-period-cohort effect on LC incidence, with a significant increase with age peaking at 70-74 years in males and 65-69 years in females. A rising period effect was observed after adjusting for age and cohort effects in both genders, and a declining cohort effect was identified after controlling for age and period effects. Conclusion: The incidence of LC in urban Delhi showed an increasing trend from 1988 to 2012. Preventive measures such as environmental conservation, tobacco control, physical activity awareness and medical security should be implemented more vigorously over the long term in this population.

  3. Proposal for a standardised identification of the mono-exponential terminal phase for orally administered drugs.

    PubMed

    Scheerans, Christian; Derendorf, Hartmut; Kloft, Charlotte

    2008-04-01

    The area under the plasma concentration-time curve from time zero to infinity (AUC(0-inf)) is generally considered to be the most appropriate measure of total drug exposure for bioavailability/bioequivalence studies of orally administered drugs. However, the lack of a standardised method for identifying the mono-exponential terminal phase of the concentration-time curve causes variability in the estimated AUC(0-inf). The present investigation introduces a simple method, called the two times t(max) method (TTT method), to reliably identify the mono-exponential terminal phase in the case of oral administration. The new method was tested by Monte Carlo simulation in Excel and compared with the adjusted r squared algorithm (ARS algorithm) frequently used in pharmacokinetic software programs. Statistical diagnostics of three different scenarios, each with 10,000 hypothetical patients, showed that the new method provided unbiased average AUC(0-inf) estimates for orally administered drugs with a monophasic concentration-time curve post maximum concentration. In addition, the TTT method generally provided more precise estimates of AUC(0-inf) than the ARS algorithm. It was concluded that the TTT method is a reasonable tool to be used as a standardised method in pharmacokinetic analyses, especially bioequivalence studies, to reliably identify the mono-exponential terminal phase for orally administered drugs showing a monophasic concentration-time profile.
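
    The TTT rule translates directly into a few lines of code: keep only samples after two times t(max), estimate the terminal rate constant by log-linear regression, and extrapolate the AUC. The concentration data below are invented; the workflow is the standard non-compartmental one.

    ```python
    # Sketch: terminal-phase identification via the TTT rule and AUC(0-inf).
    import numpy as np

    t = np.array([0.5, 1, 2, 3, 4, 6, 8, 12, 24], float)           # h
    c = np.array([4.1, 7.9, 9.5, 8.2, 6.5, 4.0, 2.4, 0.9, 0.06])   # mg/L

    t_max = t[np.argmax(c)]
    term = t >= 2 * t_max                    # TTT rule: points after 2*t_max
    slope, intercept = np.polyfit(t[term], np.log(c[term]), 1)
    lam_z = -slope                           # terminal rate constant, 1/h

    # trapezoidal AUC over the sampled interval, then extrapolate to infinity
    auc_0t = float(np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2))
    auc_inf = auc_0t + c[-1] / lam_z
    print(f"lambda_z = {lam_z:.3f} 1/h, AUC(0-inf) = {auc_inf:.1f} mg*h/L")
    ```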

  4. Photoacoustic signal attenuation analysis for the assessment of thin layers thickness in paintings

    NASA Astrophysics Data System (ADS)

    Tserevelakis, George J.; Dal Fovo, Alice; Melessanaki, Krystalia; Fontana, Raffaella; Zacharakis, Giannis

    2018-03-01

    This study introduces a novel method for the thickness estimation of thin paint layers in works of art, based on photoacoustic signal attenuation analysis (PAcSAA). Ad hoc designed samples with acrylic paint layers (Primary Red Magenta, Cadmium Yellow, Ultramarine Blue) of various thicknesses on glass substrates were realized for the specific application. After characterization by Optical Coherence Tomography imaging, samples were irradiated at the back side using low energy nanosecond laser pulses of 532 nm wavelength. Photoacoustic waves undergo a frequency-dependent exponential attenuation through the paint layer, before being detected by a broadband ultrasonic transducer. Frequency analysis of the recorded time-domain signals allows for the estimation of the average transmitted frequency function, which shows an exponential decay with the layer thickness. Ultrasonic attenuation models were obtained for each pigment and used to fit the data acquired on an inhomogeneous painted mock-up simulating a real canvas painting. Thickness evaluation through PAcSAA resulted in excellent agreement with cross-section analysis with a conventional brightfield microscope. The results of the current study demonstrate the potential of the proposed PAcSAA method for the non-destructive stratigraphic analysis of painted artworks.

  5. ISC-GEM: Global Instrumental Earthquake Catalogue (1900-2009), III. Re-computed MS and mb, proxy MW, final magnitude composition and completeness assessment

    NASA Astrophysics Data System (ADS)

    Di Giacomo, Domenico; Bondár, István; Storchak, Dmitry A.; Engdahl, E. Robert; Bormann, Peter; Harris, James

    2015-02-01

    This paper outlines the re-computation and compilation of the magnitudes now contained in the final ISC-GEM Reference Global Instrumental Earthquake Catalogue (1900-2009). The catalogue is available via the ISC website (http://www.isc.ac.uk/iscgem/). The available re-computed MS and mb provided an ideal basis for deriving new conversion relationships to moment magnitude MW. Therefore, rather than using previously published regression models, we derived new empirical relationships using both generalized orthogonal linear and exponential non-linear models to obtain MW proxies from MS and mb. The new models were tested against true values of MW, and the newly derived exponential models were then preferred to the linear ones in computing MW proxies. For the final magnitude composition of the ISC-GEM catalogue, we preferred directly measured MW values as published by the Global CMT project for the period 1976-2009 (plus intermediate-depth earthquakes between 1962 and 1975). In addition, over 1000 publications have been examined to obtain direct seismic moment M0 and, therefore, also MW estimates for 967 large earthquakes during 1900-1978 (Lee and Engdahl, 2015) by various alternative methods to the current GCMT procedure. In all other instances we computed MW proxy values by converting our re-computed MS and mb values into MW, using the newly derived non-linear regression models. The final magnitude composition is an improvement in terms of magnitude homogeneity compared to previous catalogues. The magnitude completeness is not homogeneous over the 110 years covered by the ISC-GEM catalogue. Therefore, seismicity rate estimates may be strongly affected without a careful time window selection. In particular, the ISC-GEM catalogue appears to be complete down to MW 5.6 starting from 1964, whereas for the early instrumental period the completeness varies from ∼7.5 to 6.2. Further time and resources would be necessary to homogenize the magnitude of completeness over the entire catalogue length.

  6. Environmental factors controlling spatial variation in sediment yield in a central Andean mountain area

    NASA Astrophysics Data System (ADS)

    Molina, Armando; Govers, Gerard; Poesen, Jean; Van Hemelryck, Hendrik; De Bièvre, Bert; Vanacker, Veerle

    2008-06-01

    A large spatial variability in sediment yield was observed from small streams in the Ecuadorian Andes. The objective of this study was to analyze the environmental factors controlling these variations in sediment yield in the Paute basin, Ecuador. Sediment yield data were calculated based on sediment volumes accumulated behind checkdams for 37 small catchments. Mean annual specific sediment yield (SSY) shows a large spatial variability and ranges between 26 and 15,100 Mg km⁻² year⁻¹. Mean vegetation cover (C, fraction) in the catchment, i.e. the plant cover at or near the surface, exerts a first-order control on sediment yield. The fractional vegetation cover alone explains 57% of the observed variance in ln(SSY). The negative exponential relation (SSY = a × e^(−bC)) found between vegetation cover and sediment yield at the catchment scale (10³-10⁹ m²) is very similar to the equations derived from splash, interrill and rill erosion experiments at the plot scale (1-10³ m²). This affirms the general character of an exponential decrease of sediment yield with increasing vegetation cover over a wide range of spatial scales, provided the distribution of cover can be considered to be essentially random. Lithology also significantly affects the sediment yield, and explains an additional 23% of the observed variance in ln(SSY). Based on these two catchment parameters, a multiple regression model was built. This empirical regression model already explains more than 75% of the total variance in the mean annual sediment yield. These results highlight the large potential of revegetation programs for controlling sediment yield. They show that a slight increase in the overall fractional vegetation cover of degraded land is likely to have a large effect on sediment production and delivery. Moreover, they point to the importance of detailed surface vegetation data for predicting and modeling sediment production rates.
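
    The negative exponential relation SSY = a·exp(−b·C) can be fitted directly, or equivalently as a linear regression of ln(SSY) on C, which is why the study reports explained variance in ln(SSY). Catchment values below are invented for illustration.

    ```python
    # Sketch: fitting SSY = a * exp(-b * C) between fractional vegetation
    # cover C and specific sediment yield SSY.
    import numpy as np
    from scipy.optimize import curve_fit

    C = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75, 0.90])        # cover
    SSY = np.array([14000, 6200, 2100, 800, 300, 110, 40], float)   # Mg/km2/yr

    (a, b), _ = curve_fit(lambda C, a, b: a * np.exp(-b * C), C, SSY,
                          p0=(15000, 6))
    print(f"SSY ~ {a:.0f} * exp(-{b:.2f} * C)")
    ```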

  7. On the Singularity Structure of WKB Solution of the Boosted Whittaker Equation: its Relevance to Resurgent Functions with Essential Singularities

    NASA Astrophysics Data System (ADS)

    Kamimoto, Shingo; Kawai, Takahiro; Koike, Tatsuya

    2016-12-01

    Inspired by the symbol calculus of linear differential operators of infinite order applied to the Borel transformed WKB solutions of simple-pole type equation [Kamimoto et al. (RIMS Kôkyûroku Bessatsu B 52:127-146, 2014)], which is summarized in Section 1, we introduce in Section 2 the space of simple resurgent functions depending on a parameter with an infra-exponential type growth order, and then we define the assigning operator A which acts on the space and produces resurgent functions with essential singularities. In Section 3, we apply the operator A to the Borel transforms of the Voros coefficient and its exponentiation for the Whittaker equation with a large parameter so that we may find the Borel transforms of the Voros coefficient and its exponentiation for the boosted Whittaker equation with a large parameter. In Section 4, we use these results to find the explicit form of the alien derivatives of the Borel transformed WKB solutions of the boosted Whittaker equation with a large parameter. The results in this paper manifest the importance of resurgent functions with essential singularities in developing the exact WKB analysis, the WKB analysis based on the resurgent function theory. It is also worth emphasizing that the concrete form of essential singularities we encounter is expressed by the linear differential operators of infinite order.

  8. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    PubMed

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

    Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model, rather than a simple persistent random walk, correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.

  9. Single Session Web-Based Counselling: A Thematic Analysis of Content from the Perspective of the Client

    ERIC Educational Resources Information Center

    Rodda, S. N.; Lubman, D. I.; Cheetham, A.; Dowling, N. A.; Jackson, A. C.

    2015-01-01

    Despite the exponential growth of non-appointment-based web counselling, there is limited information on what happens in a single session intervention. This exploratory study, involving a thematic analysis of 85 counselling transcripts of people seeking help for problem gambling, aimed to describe the presentation and content of online…

  10. Theory, computation, and application of exponential splines

    NASA Technical Reports Server (NTRS)

    Mccartin, B. J.

    1981-01-01

    A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape-preserving approximant is developed. A sequence of selected curve-fitting examples is presented that clearly demonstrates the advantages of exponential splines over cubic splines.

  11. An Exponential Growth Learning Trajectory: Students' Emerging Understanding of Exponential Growth through Covariation

    ERIC Educational Resources Information Center

    Ellis, Amy B.; Ozgur, Zekiye; Kulow, Torrey; Dogan, Muhammed F.; Amidon, Joel

    2016-01-01

    This article presents an Exponential Growth Learning Trajectory (EGLT), a trajectory identifying and characterizing middle grade students' initial and developing understanding of exponential growth as a result of an instructional emphasis on covariation. The EGLT explicates students' thinking and learning over time in relation to a set of tasks…

  12. Exponentially varying viscosity of magnetohydrodynamic mixed convection Eyring-Powell nanofluid flow over an inclined surface

    NASA Astrophysics Data System (ADS)

    Khan, Imad; Fatima, Sumreen; Malik, M. Y.; Salahuddin, T.

    2018-03-01

    This paper presents a theoretical study of the steady incompressible two-dimensional MHD boundary layer flow of an Eyring-Powell nanofluid over an inclined surface. The fluid is considered to be electrically conducting, and its viscosity is assumed to vary exponentially. The governing partial differential equations (PDEs) are reduced to ordinary differential equations (ODEs) by applying a similarity approach. The resulting ordinary differential equations are solved successfully using the homotopy analysis method. The impact of pertinent parameters on the velocity, concentration and temperature profiles is examined through graphs and tables. The skin friction coefficient and the Sherwood and Nusselt numbers are also illustrated in tabular and graphical form.

  13. Non-exponential resistive switching in Ag2S memristors: a key to nanometer-scale non-volatile memory devices.

    PubMed

    Gubicza, Agnes; Csontos, Miklós; Halbritter, András; Mihály, György

    2015-03-14

    The dynamics of resistive switchings in nanometer-scale metallic junctions formed between an inert metallic tip and an Ag film covered by a thin Ag2S layer are investigated. Our thorough experimental analysis and numerical simulations revealed that the resistance change upon a switching bias voltage pulse exhibits a strongly non-exponential behaviour yielding markedly different response times at different bias levels. Our results demonstrate the merits of Ag2S nanojunctions as nanometer-scale non-volatile memory cells with stable switching ratios, high endurance as well as fast response to write/erase, and an outstanding stability against read operations at technologically optimal bias and current levels.

  14. Multiple types of synchronization analysis for discontinuous Cohen-Grossberg neural networks with time-varying delays.

    PubMed

    Li, Jiarong; Jiang, Haijun; Hu, Cheng; Yu, Zhiyong

    2018-03-01

    This paper is devoted to the exponential synchronization, finite-time synchronization, and fixed-time synchronization of Cohen-Grossberg neural networks (CGNNs) with discontinuous activations and time-varying delays. A discontinuous feedback controller and a novel adaptive feedback controller are designed to realize global exponential synchronization, finite-time synchronization and fixed-time synchronization by adjusting the values of the parameter ω in the controller. Furthermore, the settling time for fixed-time synchronization derived in this paper is less conservative and more accurate. Finally, some numerical examples are provided to show the effectiveness and flexibility of the results derived in this paper. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, X; Duan, J; Popple, R

    2014-06-01

    Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: the Iodine-125 seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for searching coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of the Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
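
    A minimal PSO of the kind described can be sketched in a few dozen lines. The published radial dose data are not reproduced here: the target values below are synthetic stand-ins, and the swarm hyperparameters are typical textbook choices, not the abstract's settings.

    ```python
    # Sketch: particle swarm optimization of bi-exponential coefficients
    #   g(r) ~ a1*exp(-b1*r) + a2*exp(-b2*r)
    # minimizing the maximum relative deviation from "observed" values.
    import numpy as np

    rng = np.random.default_rng(3)
    r = np.array([0.5, 1, 2, 3, 4, 5, 6, 7], float)            # cm
    g_obs = 1.2 * np.exp(-0.15 * r) - 0.2 * np.exp(-0.6 * r)   # synthetic data

    def loss(p):
        a1, b1, a2, b2 = p
        g = a1 * np.exp(-b1 * r) + a2 * np.exp(-b2 * r)
        return np.max(np.abs((g - g_obs) / g_obs))

    n, dim, w, c1, c2 = 40, 4, 0.72, 1.49, 1.49   # swarm size and constants
    lo, hi = np.array([-2, 0, -2, 0]), np.array([2, 2, 2, 2])
    x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
    pbest = x.copy(); pval = np.array([loss(p) for p in x])
    gbest = pbest[pval.argmin()].copy()

    for gen in range(1500):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([loss(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()

    print("best coefficients:", gbest, "max rel. deviation:", pval.min())
    ```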

  16. Large deviation probabilities for correlated Gaussian stochastic processes and daily temperature anomalies

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Kantz, Holger

    2016-04-01

    As we have one and only one Earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which there exists an analytical expression for the rate function, correlated processes such as auto-regressive (short memory) and auto-regressive fractionally integrated moving average (long memory) processes do not have an analytical LDP. We study the LDP for these processes in order to see how correlation affects this probability in comparison to iid data. Although short-range correlations lead to a simple correction of sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).

  17. Multi-decadal analysis of root-zone soil moisture applying the exponential filter across CONUS

    NASA Astrophysics Data System (ADS)

    Tobin, Kenneth J.; Torres, Roberto; Crow, Wade T.; Bennett, Marvin E.

    2017-09-01

    This study applied the exponential filter to produce an estimate of root-zone soil moisture (RZSM). Four types of microwave-based surface satellite soil moisture were used. The core remotely sensed data for this study came from NASA's long-lasting AMSR-E mission. Additionally, three other products were obtained from the European Space Agency Climate Change Initiative (CCI). These datasets were blended based on all available satellite observations (CCI-active, CCI-passive, and CCI-combined). All of these products were at 0.25° resolution and daily. We applied the filter to produce a soil water index (SWI) that others have successfully used to estimate RZSM. The only unknown in this approach was the characteristic time of soil moisture variation (T). We examined five different eras (1997-2002; 2002-2005; 2005-2008; 2008-2011; 2011-2014) that represented periods with different satellite data sensors. SWI values were compared with in situ soil moisture data from the International Soil Moisture Network at depths ranging from 20 to 25 cm. Selected networks included the US Department of Energy Atmospheric Radiation Measurement (ARM) program (25 cm), the Soil Climate Analysis Network (SCAN; 20.32 cm), SNOwpack TELemetry (SNOTEL; 20.32 cm), and the US Climate Reference Network (USCRN; 20 cm). We selected in situ stations that had reasonable completeness. These datasets were used to filter out periods with freezing temperatures and rainfall using data from the Parameter elevation Regression on Independent Slopes Model (PRISM). Additionally, we only examined sites where surface and root-zone soil moisture had a reasonably high lagged r value (r > 0.5). The unknown T value was constrained based on two approaches: optimization of root mean square error (RMSE) and calculation based on the normalized difference vegetation index (NDVI) value. Both approaches yielded comparable results, although, as to be expected, the optimization approach generally outperformed the NDVI-based estimates. The best results were noted at stations that had an absolute bias within 10%. SWI estimates were more impacted by the in situ network than by the surface satellite product used to drive the exponential filter. The average Nash-Sutcliffe coefficients (NS) for ARM ranged from -0.1 to 0.3 and were similar to the results obtained from the USCRN network (0.2-0.3). NS values from the SCAN and SNOTEL networks were slightly higher (0.1-0.5). These results indicate that this approach has some skill in providing an estimate of RZSM. In terms of RMSE (in volumetric soil moisture), ARM values actually outperformed those from other networks (0.02-0.04). SCAN and USCRN average RMSE values ranged from 0.04 to 0.06, and SNOTEL average RMSE values were higher (0.05-0.07). These values were close to 0.04, which is the baseline accuracy designated for many satellite soil moisture missions.
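
    The exponential filter is often implemented in the widely used recursive form with a single gain that depends on the characteristic time T (in days); the sketch below assumes that standard formulation and synthetic inputs, and handles irregular observation times.

    ```python
    # Sketch: recursive exponential filter turning surface soil moisture (SSM)
    # into a soil water index (SWI) with characteristic time T.
    import numpy as np

    def exp_filter(t, ssm, T=20.0):
        """t: observation times in days (may be irregular); ssm: SSM series."""
        swi = np.empty_like(ssm)
        swi[0], K = ssm[0], 1.0
        for n in range(1, len(ssm)):
            K = K / (K + np.exp(-(t[n] - t[n - 1]) / T))  # recursive gain
            swi[n] = swi[n - 1] + K * (ssm[n] - swi[n - 1])
        return swi

    t = np.arange(0, 120, 1.0)  # daily observations over 120 days
    ssm = (0.25 + 0.05 * np.sin(t / 10)
           + np.random.default_rng(4).normal(0, 0.02, t.size))
    print(exp_filter(t, ssm, T=20.0)[:5])
    ```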

  18. Check the Lambert-Beer-Bouguer law: a simple trick to boost the confidence of students toward both exponential laws and the discrete approach to experimental physics

    NASA Astrophysics Data System (ADS)

    Di Capua, R.; Offi, F.; Fontana, F.

    2014-07-01

    Exponential decay is a prototypical functional behaviour for many physical phenomena, and therefore it deserves great attention in physics courses at the academic level. The absorption of electromagnetic radiation propagating in a dissipative medium provides an example of the decay of light intensity, as stated by the Lambert-Beer-Bouguer law. We devised a very simple experiment to check this law. The experimental setup, its realization, and the data analysis of the experiment are deliberately simple: our main goal was to create an experiment that is accessible to all students, including those in their first year of academic courses and those with poorly equipped laboratories. As illustrated in this paper, our proposal allowed us to develop a deep discussion of some general mathematical and numerical features of exponential decay. Furthermore, the special setup of the absorbing medium (sliced into slabs of finite thickness) and the experimental outcomes allow students to understand the transition from the discrete to the continuum approach in experimental physics.
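
    For reference, the discrete slab picture passes to the continuous Lambert-Beer-Bouguer law in the limit of many thin slabs; a short derivation sketch (μ denotes the attenuation coefficient):

    ```latex
    % Each slab of thickness \Delta x absorbs a fixed fraction \mu\,\Delta x:
    % I_{n+1} = I_n (1 - \mu \Delta x), so after N slabs of total thickness
    % x = N \Delta x,
    \[
      I_N = I_0\,(1-\mu\Delta x)^N
          = I_0\left(1-\frac{\mu x}{N}\right)^{N}
          \;\xrightarrow[N\to\infty]{}\; I_0\,e^{-\mu x},
    \]
    % which is the Lambert-Beer-Bouguer exponential attenuation law.
    ```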

  19. Analysis of two production inventory systems with buffer, retrials and different production rates

    NASA Astrophysics Data System (ADS)

    Jose, K. P.; Nair, Salini S.

    2017-09-01

    This paper compares two (s, S) production inventory systems with retrials of unsatisfied customers. The time for producing and adding each item to the inventory is exponentially distributed with rate β; however, a higher production rate αβ (α > 1) is used at the beginning of production. The higher production rate reduces customer loss when the inventory level approaches zero. Demand from customers follows a Poisson process, and service times are exponentially distributed. Upon arrival, customers enter a buffer of finite capacity. An arriving customer who finds the buffer full moves to an orbit, from which retrials occur with exponentially distributed inter-retrial times. The two models differ in the capacity of the buffer. The aim is to find the minimum total cost by varying different parameters and to compare the efficiency of the models. The optimum value of α corresponding to the minimum total cost is of particular interest. The matrix analytic method is used to find an algorithmic solution to the problem. We also provide several numerical and graphical illustrations.

  20. Voter model with non-Poissonian interevent intervals

    NASA Astrophysics Data System (ADS)

    Takaguchi, Taro; Masuda, Naoki

    2011-09-01

    Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.

  1. Use of Continuous Exponential Families to Link Forms via Anchor Tests. Research Report. ETS RR-11-11

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Yan, Duanli

    2011-01-01

    Continuous exponential families are applied to linking test forms via an internal anchor. This application combines work on continuous exponential families for single-group designs and work on continuous exponential families for equivalent-group designs. Results are compared to those for kernel and equipercentile equating in the case of chained…

  2. A Fourier analysis for a fast simulation algorithm. [for switching converters

    NASA Technical Reports Server (NTRS)

    King, Roger J.

    1988-01-01

    This paper presents a derivation of compact expressions for the Fourier series analysis of the steady-state solution of a typical switching converter. The modeling procedure for the simulation and the steady-state solution is described, and some desirable traits for its matrix exponential subroutine are discussed. The Fourier analysis algorithm was tested on a phase-controlled parallel-loaded resonant converter, providing an experimental confirmation.

  3. Interrupted infusion of echocardiographic contrast as a basis for accurate measurement of myocardial perfusion: ex vivo validation and analysis procedures.

    PubMed

    Toledo, Eran; Collins, Keith A; Williams, Ursula; Lammertin, Georgeanne; Bolotin, Gil; Raman, Jai; Lang, Roberto M; Mor-Avi, Victor

    2005-12-01

    Echocardiographic quantification of myocardial perfusion is based on analysis of contrast replenishment after destructive high-energy ultrasound impulses (flash-echo). This technique is limited by nonuniform microbubble destruction and the dependency on exponential fitting of a small number of noisy time points. We hypothesized that brief interruptions of contrast infusion (ICI) would result in uniform contrast clearance followed by slow replenishment and, thus, would allow analysis from multiple data points without exponential fitting. Electrocardiographically triggered images were acquired in 14 isolated rabbit hearts (Langendorff) at 3 levels of coronary flow (baseline, 50%, and 15%) during contrast infusion (Definity) with flash-echo and with a 20-second infusion interruption. Myocardial videointensity was measured over time from flash-echo sequences, from which the characteristic constant β was calculated using an exponential fit. Peak contrast inflow rate was calculated from ICI data using analysis of local time derivatives. Computer simulations were used to investigate the effects of noise on the accuracy of peak contrast inflow rate and β calculations. ICI resulted in uniform contrast clearance and baseline replenishment times of 15 to 25 cardiac cycles. The calculated peak contrast inflow rate followed the changes in coronary flow in all hearts at both levels of reduced flow (P < .05) and had a low intermeasurement variability of 7 +/- 6%. With flash-echo, contrast clearance was less uniform and baseline replenishment times were only 4 to 6 cardiac cycles. β decreased significantly only at 15% flow, and had an intermeasurement variability of 42 +/- 33%. Computer simulations showed that measurement errors in both perfusion indices increased with noise, but β had larger errors at higher rates of contrast inflow. ICI provides the basis for accurate and reproducible quantification of myocardial perfusion using fast and robust numeric analysis, and may constitute an alternative to currently used techniques.
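
    Flash-echo replenishment data are commonly fitted with the model y(t) = A(1 − e^(−βt)); the sketch below assumes that standard model with invented time points, and illustrates why β estimated from few noisy points can be unstable, which is the limitation the ICI approach addresses.

    ```python
    # Sketch: estimating the replenishment constant beta from flash-echo-style
    # videointensity using y(t) = A * (1 - exp(-beta * t)).
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([1, 2, 3, 4, 5, 6], float)        # cardiac cycles
    y = np.array([14, 25, 31, 36, 38, 39], float)  # hypothetical videointensity

    def replenish(t, A, beta):
        return A * (1 - np.exp(-beta * t))

    (A, beta), _ = curve_fit(replenish, t, y, p0=(40, 0.5))
    print(f"A = {A:.1f}, beta = {beta:.2f} per cycle")  # A*beta ~ flow proxy
    ```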

  4. Exponential operations and aggregation operators of interval neutrosophic sets and their decision making methods.

    PubMed

    Ye, Jun

    2016-01-01

    An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set, and then the characteristics of INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper firstly introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), which are basic elements in INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selecting problem of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.

  5. Kinetic and Mechanistic Examination of Acid–Base Bifunctional Aminosilica Catalysts in Aldol and Nitroaldol Condensations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collier, Virginia E.; Ellebracht, Nathan C.; Lindy, George I.

    The kinetic and mechanistic understanding of cooperatively catalyzed aldol and nitroaldol condensations is probed using a series of mesoporous silicas functionalized with aminosilanes to provide bifunctional acid–base character. Mechanistically, a Hammett analysis is performed to determine the effects of electron-donating and electron-withdrawing groups of para-substituted benzaldehyde derivatives on the catalytic activity of each condensation reaction. This information is also used to discuss the validity of previously proposed catalytic mechanisms and to propose a revised mechanism with plausible reaction intermediates. For both reactions, electron-withdrawing groups increase the observed rates of reaction, though resonance effects play an important, yet subtle, role in the nitroaldol condensation, in which a p-methoxy electron-donating group is also able to stabilize the proposed carbocation intermediate. Additionally, activation energies and pre-exponential factors are calculated via the Arrhenius analysis of two catalysts with similar amine loadings: one catalyst had silanols available for cooperative interactions (acid–base catalysis), while the other was treated with a silanol-capping reagent to prevent such cooperativity (base-only catalysis). The values obtained for activation energies and pre-exponential factors in each reaction are discussed in the context of the proposed mechanisms and the importance of cooperative interactions in each reaction. The catalytic activity decreases for all reactions when the silanols are capped with trimethylsilyl groups, and higher temperatures are required to make accurate rate measurements, emphasizing the vital role the weakly acidic silanols play in the catalytic cycles. The results indicate that loss of acid sites is more detrimental to the catalytic activity of the aldol condensation than the nitroaldol condensation, as evidenced by the significant decrease in the pre-exponential factor for the aldol condensation when silanols are unavailable for cooperative interactions. Cooperative catalysis is evidenced by significant changes in the pre-exponential factor, rather than the activation energy for the aldol condensation.

  7. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    NASA Astrophysics Data System (ADS)

    Baidillah, Marlin R.; Takei, Masahiro

    2017-06-01

    A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions has been developed. The exponential model normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived using an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system conditions. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of measured capacitance for low- and high-contrast dielectric distributions.

  8. Convergence and stability of the exponential Euler method for semi-linear stochastic delay differential equations.

    PubMed

    Zhang, Ling

    2017-01-01

    The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximate solution converges to the analytic solution of SLSDDEs with strong order [Formula: see text]. On the one hand, the classical stability theorem for SLSDDEs is given by Lyapunov functions; in this paper, however, the exponential stability in mean square of the exact solution to SLSDDEs is studied using the definition of the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size; in this article we propose an explicit method and prove, by the properties of the logarithmic norm, that the exponential Euler method for SLSDDEs shares the same stability for any step size.
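
    As a concrete illustration of the scheme being analyzed, here is a minimal sketch of one common variant of the exponential Euler step for a scalar semi-linear SDDE dX(t) = [aX(t) + f(X(t), X(t-tau))]dt + g(X(t), X(t-tau))dW(t); the drift and diffusion functions and all parameter values are illustrative assumptions, and the paper's exact formulation may place the exponential factor differently.

        # Exponential Euler sketch for a scalar semi-linear stochastic delay
        # differential equation (one common variant of the scheme).
        import numpy as np

        a, tau, T, dt = -2.0, 1.0, 10.0, 0.01
        m = int(round(tau / dt))              # delay measured in steps
        n = int(round(T / dt))
        rng = np.random.default_rng(1)

        f = lambda x, xd: 0.5 * np.sin(xd)    # nonlinear drift (example)
        g = lambda x, xd: 0.2 * xd            # diffusion (example)

        X = np.ones(n + m + 1)                # constant history on [-tau, 0]
        E = np.exp(a * dt)                    # exponential integrating factor
        for k in range(m, n + m):
            dW = rng.normal(0.0, np.sqrt(dt))
            X[k + 1] = E * (X[k] + dt * f(X[k], X[k - m])
                            + g(X[k], X[k - m]) * dW)
        print("X(T) ~", X[-1])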

  9. Joint Improvised Explosive Device Defeat Organization

    DTIC Science & Technology

    2009-01-01

    searches increased exponentially. Palantir. Developed to provide C-IED network analysts with a collaborative link analysis tool, Palantir is used for...share data between teams and between other link analysis applications. Palantir outputs portray linked nodal networks, histogram data, and timeline...views. During FY 2008, the Palantir system was accessed by over 160 people investigating IED networks. Analyses by these people supported over

  10. Distinguishing Response Conflict and Task Conflict in the Stroop Task: Evidence from Ex-Gaussian Distribution Analysis

    ERIC Educational Resources Information Center

    Steinhauser, Marco; Hubner, Ronald

    2009-01-01

    It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were…

  11. A regression model for calculating the second dimension retention index in comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry.

    PubMed

    Wang, Bing; Shen, Hao; Fang, Aiqin; Huang, De-Shuang; Jiang, Changjun; Zhang, Jun; Chen, Peng

    2016-06-17

    Comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC/TOF-MS) has become a key analytical technology in high-throughput analysis. The retention index has proven helpful for compound identification in one-dimensional gas chromatography, and the same is true in two-dimensional gas chromatography. In this work, a novel regression model was proposed for calculating the second dimension retention index of target components, with n-alkanes used as reference compounds. The model depicts the relationship among adjusted second dimension retention time, second dimension column temperature and carbon number of n-alkanes by an exponential nonlinear function with only five parameters. Three different criteria were introduced to find the optimal parameter values. The performance of this model was evaluated using experimental data for n-alkanes (C7-C31) at 24 temperatures, covering the entire 0-6 s adjusted retention time range. The experimental results show that the mean relative error between predicted adjusted retention times and the experimental data for n-alkanes was only 2%. Furthermore, the proposed model demonstrates good extrapolation capability for predicting adjusted retention times of target compounds that lie outside the range of the reference compounds in the second dimension adjusted retention time space: the deviation was less than 9 retention index units (iu) when up to 5 alkanes were added. The performance of the model has also been demonstrated by analyzing a mixture of compounds in temperature-programmed experiments. Copyright © 2016 Elsevier B.V. All rights reserved.
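
    The paper's five-parameter function is not reproduced in the abstract, so the sketch below fits a hypothetical exponential surface relating adjusted second-dimension retention time to carbon number and column temperature; the model form, parameter names and calibration values are assumptions for illustration only.

        # Hypothetical five-parameter exponential surface for adjusted
        # second-dimension retention time t2 as a function of carbon number c
        # and column temperature T. Not the paper's actual model.
        import numpy as np
        from scipy.optimize import curve_fit

        def t2_model(X, p0, p1, p2, p3, p4):
            c, T = X
            return np.exp(p0 + p1 * c + p2 * T + p3 * c * T) + p4

        # Placeholder calibration grid (not the paper's measurements).
        c = np.tile(np.arange(7, 32), 3).astype(float)    # C7-C31
        T = np.repeat([60.0, 100.0, 140.0], 25)           # three temperatures
        t2 = np.exp(-1.0 + 0.12 * c - 0.015 * T + 1e-4 * c * T) + 0.05

        p, _ = curve_fit(t2_model, (c, T), t2, p0=(-1, 0.1, -0.01, 0.0, 0.0))
        pred = t2_model((c, T), *p)
        print("mean relative error:", np.mean(np.abs(pred - t2) / t2))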

  12. Appendectomy in patients with human immunodeficiency virus: Not as bad as we once thought.

    PubMed

    Smith, Michael C; Chung, Paul J; Constable, Yohannes C; Boylan, Matthew R; Alfonso, Antonio E; Sugiyama, Gainosuke

    2017-04-01

    The number of patients living with human immunodeficiency virus and acquired immunodeficiency syndrome is growing due to advances in antiretroviral therapy. Existing literature on appendectomy within this patient population has been limited by small sample sizes. Therefore, we used a large, multiyear, nationwide database to study this topic comprehensively. Using the Nationwide Inpatient Sample, we identified 338,805 patients between 2005 and 2012 who underwent laparoscopic or open appendectomy for acute appendicitis. Interval appendectomies were excluded. We used multivariable adjusted regression models to test differences between patients with human immunodeficiency virus without acquired immunodeficiency syndrome and a reference group, as well as human immunodeficiency virus with acquired immunodeficiency syndrome and a reference group, with regard to duration of stay, hospital charges, in-hospital complications, and in-hospital mortality. Models were adjusted for patient age, sex, race, insurance, socioeconomic status, Elixhauser comorbidity score, and appendix perforation. There were 1,291 (0.38%) patients with human immunodeficiency virus, among which 497 (0.15%) patients had acquired immunodeficiency syndrome. In regression analysis, human immunodeficiency virus alone was not associated with adverse outcomes, while acquired immunodeficiency syndrome alone was associated with longer duration of stay (incidence rate ratio 1.40 [1.37-1.57 95% confidence interval], P < .0001), increased total charges (exponentiated coefficient 1.16 [1.10-1.23 95% confidence interval], P < .0001), and increased risk of postoperative infection (odds ratio 2.12 [1.44-3.13 95% confidence interval], P = .0002). Patients with acquired immunodeficiency syndrome who undergo appendectomy for acute appendicitis are subject to longer and more expensive hospital admissions and have greater rates of postoperative infections while patients with human immunodeficiency virus alone are not at risk for adverse outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Modelling seasonal variations in presentations at a paediatric emergency department.

    PubMed

    Takase, Miyuki; Carlin, John

    2012-09-01

    Overcrowding is a phenomenon commonly observed at emergency departments (EDs) in many hospitals, and negatively impacts patients, healthcare professionals and organisations. Health care organisations are expected to act proactively to cope with high patient volumes by understanding and predicting the patterns of ED presentations. The aim of this study was, therefore, to identify the patterns of patient flow at a paediatric ED in order to assist the management of EDs. Data on ED presentations were collected from the Royal Children's Hospital in Melbourne, Australia, for the time-frame July 2003 to June 2008. A linear regression analysis with trigonometric functions was used to identify the pattern of patient flow at the ED. The results showed that the logarithm of the daily average number of ED presentations was increasing exponentially (as explained by 0.004t + 0.00005t², with t representing time; p < 0.001). The model also indicated a yearly oscillation in the frequency of ED presentations, with lower frequencies observed in summer and higher frequencies during winter (as explained by −0.046 sin(2πt/12) − 0.083 cos(2πt/12); p < 0.001). In addition, the amplitude of the oscillations was increasing over time (as explained by −0.002t·sin(2πt/12) − 0.001t·cos(2πt/12); p < 0.05). The identified regression model explained a total of 96% of the variance in the pattern of ED presentations. This model can be used to understand the trend of the current patient flow as well as to predict future flow at the ED. Such an understanding will assist health care managers to prepare resources and environments more effectively to cope with overcrowding.
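
    The reported coefficients are complete enough to evaluate the fitted curve directly. The sketch below does so for the logarithm of daily average presentations; note that any intercept term is omitted because it is not quoted in the abstract, so the output is a relative level only.

        # Evaluate the reported regression for log(daily average ED
        # presentations): quadratic trend plus yearly trigonometric terms.
        import numpy as np

        def log_presentations(t):
            # t = time in months since the start of the series
            w = 2.0 * np.pi * t / 12.0
            return (0.004 * t + 0.00005 * t**2
                    - 0.046 * np.sin(w) - 0.083 * np.cos(w)
                    - 0.002 * t * np.sin(w) - 0.001 * t * np.cos(w))

        t = np.arange(0, 60)                       # five years, monthly
        relative = np.exp(log_presentations(t))    # relative level (no intercept)
        print(relative[:12].round(3))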

  14. Recovery of Vestibulo-Ocular Reflex Symmetry After an Acute Unilateral Peripheral Vestibular Deficit: Time Course and Correlation With Canal Paresis.

    PubMed

    Allum, John H J; Cleworth, T; Honegger, Flurin

    2016-07-01

    We investigated how response asymmetries and deficit-side response amplitudes for head accelerations used clinically to test the vestibulo-ocular reflex (VOR) are correlated with caloric canal paresis (CP) values. Thirty patients were examined at onset of an acute unilateral peripheral vestibular deficit (aUPVD) and 3, 6, and 13 weeks later with three different VOR tests: caloric, rotating chair (ROT), and video head impulse tests (vHIT). Response changes over time were fitted with an exponential decay model and compared using linear regression analysis. Recovery times (to within 10% of steady state) were similar for vHIT asymmetry and CP (>10 weeks) but shorter for ROT asymmetry (<4 weeks). Regressions with CP were similar (vHIT asymmetry, R = 0.68; ROT, R = 0.62). Responses to the deficit side were also equally well correlated with CP values (R = 0.71). Specificity for vHIT and 20 degrees/s ROT deficit-side responses was 100% in comparison to CP values; sensitivity was 74% for vHIT and 75% for ROT. A decrease in normal-side responses occurred for ROT but not for vHIT at 3 weeks. Normal-side responses were weakly correlated with CP for ROT (R = 0.49) but not for vHIT (R = 0.17). These results indicate that vHIT deficit-side VOR gains are slightly better correlated with CP values than ROT, probably because of the similar recovery time courses of vHIT and caloric responses and the lack of normal-side vHIT changes. However, specificity and sensitivity are the same for the vHIT and ROT tests.
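
    A minimal sketch of the kind of exponential recovery fit described above, assuming the usual form y(t) = y_inf + (y0 − y_inf)·exp(−t/tau) and the abstract's "within 10% of steady state" recovery criterion; the asymmetry values are placeholders, not the study's measurements.

        # Fit an exponential decay model to asymmetry measured at the four
        # examination times, then compute the time to reach within 10% of
        # the steady-state value.
        import numpy as np
        from scipy.optimize import curve_fit

        def recovery(t, y_inf, y0, tau):
            return y_inf + (y0 - y_inf) * np.exp(-t / tau)

        weeks = np.array([0.0, 3.0, 6.0, 13.0])      # examination times
        asym = np.array([55.0, 30.0, 18.0, 10.0])    # placeholder asymmetry (%)

        (y_inf, y0, tau), _ = curve_fit(recovery, weeks, asym, p0=(10, 55, 4))
        t10 = tau * np.log((y0 - y_inf) / (0.10 * abs(y0 - y_inf)))
        print(f"steady state {y_inf:.1f}%, within 10% after {t10:.1f} weeks")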

  15. Equivalences between nonuniform exponential dichotomy and admissibility

    NASA Astrophysics Data System (ADS)

    Zhou, Linfeng; Lu, Kening; Zhang, Weinian

    2017-01-01

    The relationship between exponential dichotomies and the admissibility of function classes is a significant problem for hyperbolic dynamical systems. It was proved that a nonuniform exponential dichotomy implies several admissible pairs of function classes, and conversely some admissible pairs were found to imply a nonuniform exponential dichotomy. In this paper we find an appropriate admissible pair of classes of Lyapunov bounded functions which is equivalent to the existence of a nonuniform exponential dichotomy on the half-lines R± separately, on both half-lines R± simultaneously, and on the whole line R. Additionally, maximal admissibility is proved in the case of both half-lines R± simultaneously.

  16. On the origin of non-exponential fluorescence decays in enzyme-ligand complex

    NASA Astrophysics Data System (ADS)

    Wlodarczyk, Jakub; Kierdaszuk, Borys

    2004-05-01

    Complex fluorescence decays have usually been analyzed with the aid of a multi-exponential model, but the interpretation of the individual exponential terms has not been adequately characterized. In such cases the intensity decays have also been analyzed in terms of a continuous lifetime distribution, as a consequence of interactions of the fluorophore with its environment, conformational heterogeneity, or their dynamical nature. We show that non-exponential fluorescence decay of enzyme-ligand complexes may result from time-dependent energy transport. The latter, in our opinion, may be accounted for by electron transport from the protein tyrosines to their neighboring residues. We introduce a time-dependent hopping rate of the form v(t) ~ (a + bt)^(−1). This in turn leads to a luminescence decay function of the form I(t) = I₀ exp(−t/τ₁)(1 + t/(γτ₂))^(−γ). Such a decay function provides good fits to highly complex fluorescence decays. The power-like tail implies a time hierarchy in the energy migration process due to the hierarchical energy-level structure. Moreover, such a power-like term is a manifestation of the so-called Tsallis nonextensive statistics and is suitable for describing systems with long-range interactions, memory effects, and fluctuations of the characteristic fluorescence lifetime. The proposed decay function was applied in the analysis of fluorescence decays of a tyrosine protein, i.e. the enzyme purine nucleoside phosphorylase from E. coli in a complex with formycin A (an inhibitor) and orthophosphate (a co-substrate).

  17. Spatial and Temporal Characteristics of Insulator Contaminations Revealed by Daily Observations of Equivalent Salt Deposit Density

    PubMed Central

    Ruan, Ling; Han, Ge; Zhu, Zhongmin; Zhang, Miao; Gong, Wei

    2015-01-01

    The accurate estimation of deposits adhering to insulators is of great significance for preventing pollution flashovers, which cause huge costs worldwide. Researchers have developed sensors using different technologies to monitor insulator contamination on a fine time scale. However, there has been a lack of analysis of these data to reveal the spatial and temporal characteristics of insulator contamination, and as a result the scheduling of periodical maintenance of power facilities is highly dependent on personal experience. Owing to the deployment of novel sensors, daily Equivalent Salt Deposit Density (ESDD) observations spanning over two years were collected and analyzed for the first time. Results from 16 sites distributed in four regions of Hubei demonstrated that spatial heterogeneity can be seen at both fine and coarse geographical scales, suggesting that current polluted-area maps are necessary but not sufficient to guide the maintenance of power facilities. Both local emissions and the regional air pollution condition exert evident influences on deposit accumulation. A relationship between ESDD and PM10 was revealed by regression analysis, showing that air pollution influences pollution accumulation on insulators. Moreover, the seasonality of ESDD was discovered for the first time by means of time series analysis, which could help engineers select appropriate times to clean the contamination. In addition, the trend component shows that ESDD increases in a negative exponential fashion with accumulation time (ESDD = a − b × exp(−time)) at long time scales in real environments. PMID:25643058
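
    The trend form quoted above, ESDD(t) = a − b·exp(−t), can be fitted directly by nonlinear least squares. A minimal sketch follows; the data below are placeholders, not the Hubei observations, and the units in the comment are the conventional ones for ESDD.

        # Fit the saturating accumulation trend ESDD(t) = a - b * exp(-t).
        import numpy as np
        from scipy.optimize import curve_fit

        def esdd(t, a, b):
            return a - b * np.exp(-t)

        t = np.linspace(0.0, 6.0, 30)                 # accumulation time
        y = 0.12 - 0.10 * np.exp(-t)                  # placeholder series
        y += np.random.default_rng(2).normal(0, 0.002, t.size)

        (a, b), _ = curve_fit(esdd, t, y, p0=(0.1, 0.1))
        print(f"saturation level a = {a:.3f} mg/cm^2, amplitude b = {b:.3f}")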

  18. Analysis of risk factors in severity of rural truck crashes.

    DOT National Transportation Integrated Search

    2016-04-01

    Trucks are a vital part of the logistics system in North Dakota. Recent energy developments have : generated exponential growth in the demand for truck services. With increased density of trucks in the : traffic mix, it is reasonable to expect some i...

  19. Principal component regression analysis with SPSS.

    PubMed

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used in multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. An example is used to describe how to perform principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance caused by multicollinearity, and carrying it out with SPSS makes the statistical analysis simpler, faster and more accurate.
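
    The paper works through SPSS 10.0; as a cross-check of the same technique, principal component regression can be sketched in Python: standardize the predictors, regress on the leading principal components, and map the coefficients back to the original variables. The data here are synthetic with deliberately collinear columns.

        # Principal component regression sketch (synthetic collinear data).
        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        X = rng.normal(size=(100, 4))
        X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=100)  # multicollinearity
        y = 2 * X[:, 0] - X[:, 1] + rng.normal(size=100)

        scaler = StandardScaler().fit(X)
        Xs = scaler.transform(X)
        pca = PCA(n_components=3).fit(Xs)       # drop the weakest component
        scores = pca.transform(Xs)
        reg = LinearRegression().fit(scores, y)

        beta_std = pca.components_.T @ reg.coef_  # back to standardized X
        print("standardized coefficients:", beta_std.round(3))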

  20. Bayesian regression analyses of radiation modality effects on pericardial and pleural effusion and survival in esophageal cancer.

    PubMed

    He, Liru; Chapple, Andrew; Liao, Zhongxing; Komaki, Ritsuko; Thall, Peter F; Lin, Steven H

    2016-10-01

    To evaluate radiation modality effects on pericardial effusion (PCE), pleural effusion (PE) and survival in esophageal cancer (EC) patients. We analyzed data from 470 EC patients treated with definitive concurrent chemoradiotherapy (CRT). Bayesian semi-competing risks (SCR) regression models were fit to assess effects of radiation modality and prognostic covariates on the risks of PCE and PE, and death either with or without these preceding events. Bayesian piecewise exponential regression models were fit for overall survival, the time to PCE or death, and the time to PE or death. All models included propensity score as a covariate to correct for potential selection bias. Median times to onset of PCE and PE after RT were 7.1 and 6.1months for IMRT, and 6.5 and 5.4months for 3DCRT, respectively. Compared to 3DCRT, the IMRT group had significantly lower risks of PE, PCE, and death. The respective probabilities of a patient being alive without either PCE or PE at 3-years and 5-years were 0.29 and 0.21 for IMRT compared to 0.13 and 0.08 for 3DCRT. In the SCR regression analyses, IMRT was associated with significantly lower risks of PCE (HR=0.26) and PE (HR=0.49), and greater overall survival (probability of beneficial effect (pbe)>0.99), after controlling for known clinical prognostic factors. IMRT reduces the incidence and postpones the onset of PCE and PE, and increases survival probability, compared to 3DCRT. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Deep learning for biomarker regression: application to osteoporosis and emphysema on chest CT scans

    NASA Astrophysics Data System (ADS)

    González, Germán; Washko, George R.; San José Estépar, Raúl

    2018-03-01

    Introduction: Biomarker computation using deep learning often relies on a two-step process, where the deep learning algorithm segments the region of interest and then the biomarker is measured. We propose an alternative paradigm, where the biomarker is estimated directly using a regression network. We showcase this image-to-biomarker paradigm using two biomarkers: the estimation of bone mineral density (BMD) and the estimation of lung percentage of emphysema from CT scans. Materials and methods: We use a large database of 9,925 CT scans to train, validate and test the network, for which reference standard BMD and percentage emphysema have already been computed. First, the 3D dataset is reduced to a set of canonical 2D slices where the organ of interest is visible (either spine for BMD or lungs for emphysema). This data reduction is performed using an automatic object detector. Second, the regression neural network is composed of three convolutional layers, followed by a fully connected and an output layer. The network is optimized using a momentum optimizer with an exponential decay rate, using the root mean squared error as the cost function. Results: The Pearson correlation coefficients obtained against the reference standards are r = 0.940 (p < 0.00001) and r = 0.976 (p < 0.00001) for BMD and percentage emphysema respectively. Conclusions: The deep-learning regression architecture can learn biomarkers from images directly, without indicating the structures of interest. This approach simplifies the development of biomarker extraction algorithms. The proposed data reduction based on object detectors conveys enough information to compute the biomarkers of interest.

  2. Detecting trends in raptor counts: power and type I error rates of various statistical tests

    USGS Publications Warehouse

    Hatfield, J.S.; Gould, W.R.; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.

    1996-01-01

    We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.

  3. Detecting biodiversity hotspots by species-area relationships: a case study of Mediterranean beetles.

    PubMed

    Fattorini, Simone

    2006-08-01

    Any method of identifying hotspots should take into account the effect of area on species richness. I examined the importance of the species-area relationship in determining tenebrionid (Coleoptera: Tenebrionidae) hotspots on the Aegean Islands (Greece). Thirty-two islands and 170 taxa (species and subspecies) were included in this study. I tested several species-area relationship models with linear and nonlinear regressions, including power, exponential, negative exponential, logistic, Gompertz, Weibull, Lomolino, and He-Legendre functions. Islands with positive residuals were identified as hotspots. I also analyzed the values of the C parameter of the power function and the simple species-area ratios. Species richness was significantly correlated with island area for all models. The power function model was the most convenient one. Most functions, however, identified certain islands as hotspots. The importance of endemics in insular biotas should be evaluated carefully because they are of high conservation concern. The simple use of the species-area relationship can be problematic when areas with no endemics are included. Therefore, the importance of endemics should be evaluated using different methods, such as percentages, to take into account different levels of endemism and different kinds of "endemics" (e.g., endemic to single islands vs. endemic to the archipelago). Because the species-area relationship is a key pattern in ecology, my findings can be applied at broader scales.
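
    A minimal sketch of the hotspot criterion described above: fit the power species-area model S = c·A^z by nonlinear least squares and flag islands with positive residuals. The areas and richness values below are placeholders, not the Aegean data.

        # Power species-area fit and positive-residual hotspot flagging.
        import numpy as np
        from scipy.optimize import curve_fit

        def power_sar(A, c, z):
            return c * A**z

        A = np.array([2.0, 5.0, 12.0, 30.0, 75.0, 150.0, 400.0])  # km^2
        S = np.array([5, 9, 11, 22, 28, 45, 60], dtype=float)     # taxa

        (c, z), _ = curve_fit(power_sar, A, S, p0=(3.0, 0.3))
        residuals = S - power_sar(A, c, z)
        print("z =", round(z, 3), "| hotspots (positive residuals):",
              np.where(residuals > 0)[0].tolist())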

  4. Optical and luminescence properties of Dy3+ ions in phosphate based glasses

    NASA Astrophysics Data System (ADS)

    Rasool, Sk. Nayab; Rama Moorthy, L.; Jayasankar, C. K.

    2013-08-01

    Phosphate glasses with compositions of 44P2O5 + 17K2O + 9Al2O3 + (30 − x)CaF2 + xDy2O3 (x = 0.05, 0.1, 0.5, 1.0, 2.0, 3.0 and 4.0 mol%) were prepared and characterized by X-ray diffraction (XRD), differential thermal analysis (DTA), Fourier transform infrared (FTIR) spectroscopy, optical absorption, emission and decay measurements. The observed absorption bands were analyzed using the free-ion Hamiltonian (HFI) model. A Judd-Ofelt (JO) analysis was performed and the intensity parameters (Ωλ, λ = 2, 4, 6) were evaluated in order to predict the radiative properties of the excited states. From the emission spectra, the effective band widths (Δλeff), stimulated emission cross-sections (σ(λp)), yellow-to-blue (Y/B) intensity ratios and chromaticity color coordinates (x, y) were determined. The fluorescence decays from the 4F9/2 level of Dy3+ ions were measured by monitoring the intense 4F9/2 → 6H15/2 transition (486 nm). The experimental lifetimes (τexp) are found to decrease with increasing Dy3+ ion concentration due to the quenching process. The decay curves are perfectly single exponential at lower concentrations and gradually change to non-exponential at higher concentrations. The non-exponential decay curves are well fitted by the Inokuti-Hirayama (IH) model for S = 6, which indicates that the energy transfer between donor and acceptor is of dipole-dipole type. The systematic analysis revealed that the energy transfer mechanism strongly depends on the Dy3+ ion concentration and the host glass composition.

  5. An exact formulation of the time-ordered exponential using path-sums

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giscard, P.-L., E-mail: p.giscard1@physics.ox.ac.uk; Lui, K.; Thwaite, S. J.

    2015-05-15

    We present the path-sum formulation for the time-ordered exponential of a time-dependent matrix. The path-sum formulation gives the time-ordered exponential as a branched continued fraction of finite depth and breadth. The terms of the path-sum have an elementary interpretation as self-avoiding walks and self-avoiding polygons on a graph. Our result is based on a representation of the time-ordered exponential as the inverse of an operator, the mapping of this inverse to sums of walks on a graph, and the algebraic structure of sets of walks. We give examples demonstrating our approach. We establish a super-exponential decay bound for the magnitude of the entries of the time-ordered exponential of sparse matrices. We give explicit results for matrices with commonly encountered sparse structures.

  6. Strain energy release rates of composite interlaminar end-notch and mixed-mode fracture: A sublaminate/ply level analysis and a computer code

    NASA Technical Reports Server (NTRS)

    Valisetty, R. R.; Chamis, C. C.

    1987-01-01

    A computer code is presented for the sublaminate/ply level analysis of composite structures. This code is useful for obtaining stresses in regions affected by delaminations, transverse cracks, and discontinuities related to inherent fabrication anomalies, geometric configurations, and loading conditions. Particular attention is focussed on those layers or groups of layers (sublaminates) which are immediately affected by the inherent flaws. These layers are analyzed as homogeneous bodies in equilibrium and in isolation from the rest of the laminate. The theoretical model used to analyze the individual layers allows the relevant stresses and displacements near discontinuities to be represented in the form of pure exponential-decay-type functions which are selected to eliminate the exponential-precision-related difficulties in sublaminate/ply level analysis. Thus, sublaminate analysis can be conducted without any restriction on the maximum number of layers, delaminations, transverse cracks, or other types of discontinuities. In conjunction with the strain energy release rate (SERR) concept and composite micromechanics, this computational procedure is used to model select cases of end-notch and mixed-mode fracture specimens. The computed stresses are in good agreement with those from a three-dimensional finite element analysis. Also, SERRs compare well with limited available experimental data.

  7. Computational analysis of plane and parabolic flow of MHD Carreau fluid with buoyancy and exponential heat source effects

    NASA Astrophysics Data System (ADS)

    Krishna, P. Mohan; Sandeep, N.; Sharma, Ram Prakash

    2017-05-01

    This paper studies two-dimensional magnetohydrodynamic Carreau fluid flow over plane and parabolic regions in the presence of buoyancy and exponential heat source effects. Soret and Dufour effects are used to examine the heat and mass transfer process. The system of ODEs is obtained by utilizing similarity transformations. An RK-based shooting procedure is employed to generate the numerical solutions. The impact of various parameters of interest on the flow, concentration and thermal fields is characterized graphically. Tabular results are presented to discuss the wall friction and the reduced Nusselt and Sherwood numbers. It is seen that the flow, thermal and concentration boundary layers of the plane and parabolic flows of Carreau fluid are non-uniform.

  8. Global exponential stability of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays.

    PubMed

    Huang, Haiying; Du, Qiaosheng; Kang, Xibing

    2013-11-01

    In this paper, a class of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays is investigated. The jumping parameters are modeled as a continuous-time finite-state Markov chain. First, the existence of an equilibrium point for the addressed neural networks is studied. By utilizing Lyapunov stability theory, stochastic analysis theory and the linear matrix inequality (LMI) technique, new delay-dependent stability criteria are presented in terms of linear matrix inequalities to guarantee that the neural networks are globally exponentially stable in the mean square. Numerical simulations are carried out to illustrate the main results. © 2013 ISA. Published by ISA. All rights reserved.

  9. Nonlinear analogue of the May−Wigner instability transition

    PubMed Central

    Fyodorov, Yan V.; Khoruzhenko, Boris A.

    2016-01-01

    We study a system of N≫1 degrees of freedom coupled via a smooth homogeneous Gaussian vector field with both gradient and divergence-free components. In the absence of coupling, the system is exponentially relaxing to an equilibrium with rate μ. We show that, while increasing the ratio of the coupling strength to the relaxation rate, the system experiences an abrupt transition from a topologically trivial phase portrait with a single equilibrium into a topologically nontrivial regime characterized by an exponential number of equilibria, the vast majority of which are expected to be unstable. It is suggested that this picture provides a global view on the nature of the May−Wigner instability transition originally discovered by local linear stability analysis. PMID:27274077

  10. How exponential are FREDs?

    NASA Astrophysics Data System (ADS)

    Schaefer, Bradley E.; Dyson, Samuel E.

    1996-08-01

    A common gamma-ray burst light-curve shape is the "FRED" or "fast-rise exponential-decay." But how exponential is the tail? Are the tails merely decaying with some smoothly decreasing decline rate, or is the functional form an exponential to within the uncertainties? If the shape really is an exponential, then it would be reasonable to assign some physically significant time scale to the burst. That is, there would have to be some specific mechanism that produces the characteristic decay profile. So if an exponential is found, then we will know that the decay light-curve profile is governed by one mechanism (at least for simple FREDs) instead of by complex or multiple mechanisms. As such, a specific number amenable to theory can be derived for each FRED. We report on the fitting of exponentials (and two other shapes) to the tails of ten bright BATSE bursts. The BATSE trigger numbers are 105, 257, 451, 907, 1406, 1578, 1883, 1885, 1989, and 2193. Our technique was to perform a least squares fit to the tail from some time after the peak until the light curve approaches background. We find that most FREDs are not exponentials, although a few come close. But since the other candidate shapes come close just as often, we conclude that the FREDs are misnamed.
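
    A minimal sketch of this kind of tail fitting: least-squares fit of an exponential to the decay portion of a light curve, compared with a power-law alternative via residual sums of squares. The light curve below is synthetic, not a BATSE burst, and the competing shapes are examples rather than the paper's exact candidates.

        # Fit exponential and power-law shapes to a burst tail and compare.
        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(1.0, 30.0, 100)          # time since peak (s)
        counts = (500.0 * np.exp(-t / 6.0)
                  + np.random.default_rng(4).normal(0, 5, t.size))

        exp_model = lambda t, A, tau: A * np.exp(-t / tau)
        pow_model = lambda t, A, p: A * t**(-p)

        pe, _ = curve_fit(exp_model, t, counts, p0=(500, 5))
        pp, _ = curve_fit(pow_model, t, counts, p0=(500, 1))
        sse = lambda m, p: np.sum((counts - m(t, *p))**2)
        print("SSE exponential:", round(sse(exp_model, pe), 1),
              "| SSE power law:", round(sse(pow_model, pp), 1))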

  11. PREdator: a python based GUI for data analysis, evaluation and fitting

    PubMed Central

    2014-01-01

    The analysis of a series of experimental data is an essential procedure in virtually every field of research. The information contained in the data is extracted by fitting the experimental data to a mathematical model. The type of the mathematical model (linear, exponential, logarithmic, etc.) reflects the physical laws that underlie the experimental data. Here, we aim to provide a readily accessible, user-friendly python script for data analysis, evaluation and fitting. PREdator is presented using the example of NMR paramagnetic relaxation enhancement analysis.

  12. [Application of exponential smoothing method in prediction and warning of epidemic mumps].

    PubMed

    Shi, Yun-ping; Ma, Jia-qi

    2010-06-01

    To analyze daily data on epidemic mumps in a province from 2004 to 2008 and to set up an exponential smoothing model for prediction. Epidemic mumps in 2008 was predicted and warned against by calculating 7-day moving summations of the daily reported mumps cases during 2005-2008 to remove the weekend effect, and by applying exponential smoothing to the data from 2005 to 2007. The performance of Holt-Winters exponential smoothing was good: the warning sensitivity was 76.92%, the specificity 83.33%, and the timeliness rate 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
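
    A minimal sketch of such a warning pipeline, assuming daily case counts in a pandas Series: form the 7-day moving summation to remove the weekend effect, fit Holt-Winters exponential smoothing, and flag observations above a forecast band. The data, seasonal period and threshold are illustrative assumptions, not the study's settings.

        # Holt-Winters warning sketch on synthetic daily counts.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        rng = np.random.default_rng(5)
        days = pd.date_range("2005-01-01", periods=3 * 365, freq="D")
        cases = pd.Series(rng.poisson(20, len(days)), index=days)

        weekly = cases.rolling(7).sum().dropna()   # 7-day moving summation
        fit = ExponentialSmoothing(weekly, trend="add",
                                   seasonal="add",
                                   seasonal_periods=365).fit()
        forecast = fit.forecast(30)
        alarm = forecast + 2 * np.std(fit.resid)   # simple warning band
        print(alarm.head())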

  13. Model application niche analysis: Assessing the transferability and generalizability of ecological models

    EPA Science Inventory

    The use of models by ecologist and environmental managers, to inform environmental management and decision-making, has grown exponentially in the past 50 years. Due to logistical, economical and theoretical benefits, model users are frequently transferring preexisting models to n...

  14. Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.

  15. Simultaneous estimation of multiple phases in digital holographic interferometry using state space analysis

    NASA Astrophysics Data System (ADS)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-05-01

    A new approach is proposed for multiple phase estimation from a multicomponent exponential phase signal recorded in multi-beam digital holographic interferometry. It is capable of providing multidimensional measurements simultaneously from a single recording of the exponential phase signal encoding multiple phases. Each phase within a small window around each pixel is approximated by a first-order polynomial function of the spatial coordinates. The problem of accurately estimating the polynomial coefficients, and in turn the unwrapped phases, is formulated as a state space analysis wherein the coefficients and signal amplitudes are set as the elements of a state vector. The state estimation is performed using the extended Kalman filter. An amplitude discrimination criterion is utilized in order to unambiguously estimate the coefficients associated with the individual signal components. The performance of the proposed method is stable over a wide range of ratios of signal amplitudes. The pixelwise phase estimation approach allows the method to handle fringe patterns that may contain invalid regions.

  16. Investigation of stickiness influence in the anomalous transport and diffusion for a non-dissipative Fermi-Ulam model

    NASA Astrophysics Data System (ADS)

    Livorati, André L. P.; Palmero, Matheus S.; Díaz-I, Gabriel; Dettmann, Carl P.; Caldas, Iberê L.; Leonel, Edson D.

    2018-02-01

    We study the dynamics of an ensemble of non-interacting particles constrained by two infinitely heavy walls, one of which is moving periodically in time while the other is fixed. The system presents mixed dynamics, where the accessible region in which the particle can diffuse chaotically is bordered by an invariant spanning curve. Statistical analysis of the root mean square velocity, considering both high- and low-velocity ensembles, shows that the dynamics reaches the same steady-state plateau at long times. A transport investigation of the dynamics via escape basins reveals that, depending on the initial velocity ensemble, the decay of the survival probability exhibits different shapes and bumps, in a mix of exponential, power law and stretched exponential decays. From an analysis of step-size averages, we found that the stable manifolds act as preferential paths for faster escape and are responsible for the bumps and different shapes of the survival probability.

  17. Regression Analysis by Example. 5th Edition

    ERIC Educational Resources Information Center

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  18. A model of canopy photosynthesis incorporating protein distribution through the canopy and its acclimation to light, temperature and CO2

    PubMed Central

    Johnson, Ian R.; Thornley, John H. M.; Frantz, Jonathan M.; Bugbee, Bruce

    2010-01-01

    Background and Aims: The distribution of photosynthetic enzymes, or nitrogen, through the canopy affects canopy photosynthesis, as well as plant quality and nitrogen demand. Most canopy photosynthesis models assume an exponential distribution of nitrogen, or protein, through the canopy, although this is rarely consistent with experimental observation. Previous optimization schemes to derive the nitrogen distribution through the canopy generally focus on the distribution of a fixed amount of total nitrogen, which fails to account for the variation in both the actual quantity of nitrogen in response to environmental conditions and the interaction of photosynthesis and respiration at similar levels of complexity. Model: A model of canopy photosynthesis is presented for C3 and C4 canopies that considers a balanced approach between photosynthesis and respiration as well as plant carbon partitioning. Protein distribution is related to irradiance in the canopy by a flexible equation for which the exponential distribution is a special case. The model is designed to be simple to parameterize for crop, pasture and ecosystem studies. The amount and distribution of protein that maximizes canopy net photosynthesis is calculated. Key Results: The optimum protein distribution is not exponential, but is quite linear near the top of the canopy, which is consistent with experimental observations. The overall concentration within the canopy is dependent on environmental conditions, including the distribution of direct and diffuse components of irradiance. Conclusions: The widely used exponential distribution of nitrogen or protein through the canopy is generally inappropriate. The model derives the optimum distribution with characteristics that are consistent with observation, so overcoming limitations of using the exponential distribution. Although canopies may not always operate at an optimum, optimization analysis provides valuable insight into plant acclimation to environmental conditions. Protein distribution has implications for the prediction of carbon assimilation, plant quality and nitrogen demand. PMID:20861273

  19. Increased intra-individual reaction time variability in attention-deficit/hyperactivity disorder across response inhibition tasks with different cognitive demands.

    PubMed

    Vaurio, Rebecca G; Simmonds, Daniel J; Mostofsky, Stewart H

    2009-10-01

    One of the most consistent findings in children with ADHD is increased moment-to-moment variability in reaction time (RT). The source of increased RT variability can be examined using ex-Gaussian analyses, which divide variability into normal and exponential components, and Fast Fourier transform (FFT) analysis, which allows for detailed examination of the frequency of responses in the exponential distribution. Prior studies of ADHD using these methods have produced variable results, potentially related to differences in task demand. The present study sought to examine the profile of RT variability in ADHD using two Go/No-go tasks with differing levels of cognitive demand. A total of 140 children (57 with ADHD and 83 typically developing controls), ages 8-13 years, completed both a "simple" Go/No-go task and a more "complex" Go/No-go task with increased working memory load. Repeated measures ANOVA of ex-Gaussian functions revealed that for both tasks children with ADHD demonstrated increased variability in both the normal/Gaussian (significantly elevated sigma) and the exponential (significantly elevated tau) components. In contrast, FFT analysis of the exponential component revealed a significant task x diagnosis interaction, such that infrequent slow responses in ADHD differed depending on task demand (i.e., for the simple task, increased power in the 0.027-0.074 Hz frequency band; for the complex task, decreased power in the 0.074-0.202 Hz band). The ex-Gaussian findings of increased variability in both the normal (sigma) and exponential (tau) components for the ADHD group suggest that both impaired response preparation and infrequent "lapses in attention" contribute to increased variability in ADHD. The FFT analyses reveal that the periodicity of intermittent lapses of attention in ADHD varies with task demand. The findings provide further support for intra-individual variability as a candidate intermediate endophenotype of ADHD.
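
    A minimal sketch of the ex-Gaussian decomposition described above, using SciPy's exponentially modified normal distribution (exponnorm, parameterized by K = tau/sigma) to recover mu, sigma and tau from a reaction-time sample; the RT values are simulated, not the study's data.

        # Recover ex-Gaussian parameters (mu, sigma, tau) from simulated RTs.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        mu, sigma, tau = 0.45, 0.05, 0.15           # seconds (illustrative)
        rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

        K, loc, scale = stats.exponnorm.fit(rt)     # K = tau / sigma
        print(f"mu ~ {loc:.3f}, sigma ~ {scale:.3f}, tau ~ {K * scale:.3f}")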

  20. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  1. Observation and analysis of self-amplified spontaneous emission at the APS low-energy undulator test line

    NASA Astrophysics Data System (ADS)

    Arnold, N. D.; Attig, J.; Banks, G.; Bechtold, R.; Beczek, K.; Benson, C.; Berg, S.; Berg, W.; Biedron, S. G.; Biggs, J. A.; Borland, M.; Boerste, K.; Bosek, M.; Brzowski, W. R.; Budz, J.; Carwardine, J. A.; Castro, P.; Chae, Y.-C.; Christensen, S.; Clark, C.; Conde, M.; Crosbie, E. A.; Decker, G. A.; Dejus, R. J.; DeLeon, H.; Den Hartog, P. K.; Deriy, B. N.; Dohan, D.; Dombrowski, P.; Donkers, D.; Doose, C. L.; Dortwegt, R. J.; Edwards, G. A.; Eidelman, Y.; Erdmann, M. J.; Error, J.; Ferry, R.; Flood, R.; Forrestal, J.; Freund, H.; Friedsam, H.; Gagliano, J.; Gai, W.; Galayda, J. N.; Gerig, R.; Gilmore, R. L.; Gluskin, E.; Goeppner, G. A.; Goetzen, J.; Gold, C.; Gorski, A. J.; Grelick, A. E.; Hahne, M. W.; Hanuska, S.; Harkay, K. C.; Harris, G.; Hillman, A. L.; Hogrefe, R.; Hoyt, J.; Huang, Z.; Jagger, J. M.; Jansma, W. G.; Jaski, M.; Jones, S. J.; Keane, R. T.; Kelly, A. L.; Keyser, C.; Kim, K.-J.; Kim, S. H.; Kirshenbaum, M.; Klick, J. H.; Knoerzer, K.; Koldenhoven, R. J.; Knott, M.; Labuda, S.; Laird, R.; Lang, J.; Lenkszus, F.; Lessner, E. S.; Lewellen, J. W.; Li, Y.; Lill, R. M.; Lumpkin, A. H.; Makarov, O. A.; Markovich, G. M.; McDowell, M.; McDowell, W. P.; McNamara, P. E.; Meier, T.; Meyer, D.; Michalek, W.; Milton, S. V.; Moe, H.; Moog, E. R.; Morrison, L.; Nassiri, A.; Noonan, J. R.; Otto, R.; Pace, J.; Pasky, S. J.; Penicka, J. M.; Pietryla, A. F.; Pile, G.; Pitts, C.; Power, J.; Powers, T.; Putnam, C. C.; Puttkammer, A. J.; Reigle, D.; Reigle, L.; Ronzhin, D.; Rotela, E. R.; Russell, E. F.; Sajaev, V.; Sarkar, S.; Scapino, J. C.; Schroeder, K.; Seglem, R. A.; Sereno, N. S.; Sharma, S. K.; Sidarous, J. F.; Singh, O.; Smith, T. L.; Soliday, R.; Sprau, G. A.; Stein, S. J.; Stejskal, B.; Svirtun, V.; Teng, L. C.; Theres, E.; Thompson, K.; Tieman, B. J.; Torres, J. A.; Trakhtenberg, E. M.; Travish, G.; Trento, G. F.; Vacca, J.; Vasserman, I. B.; Vinokurov, N. A.; Walters, D. R.; Wang, J.; Wang, X. J.; Warren, J.; Wesling, S.; Weyer, D. L.; Wiemerslage, G.; Wilhelmi, K.; Wright, R.; Wyncott, D.; Xu, S.; Yang, B.-X.; Yoder, W.; Zabel, R. B.

    2001-12-01

    Exponential growth of self-amplified spontaneous emission at 530 nm was first experimentally observed at the Advanced Photon Source low-energy undulator test line in December 1999. Since then, further detailed measurements and analysis of the results have been made. Here, we present the measurements and compare these with calculations based on measured electron beam properties and theoretical expectations.

  2. Social and organizational factors affecting implementation of evidence-informed practice in a public health department in Ontario: a network modelling approach

    PubMed Central

    2014-01-01

    Objective: The objective of this study is to develop a statistical model to assess factors associated with information seeking in a Canadian public health department. Methods: Managers and professional consultants of a public health department serving a large urban population named whom they turned to for help, whom they considered experts in evidence-informed practice, and whom they considered friends. Multilevel regression analysis and exponential random graph modeling were used to predict the formation of information-seeking and expertise-recognition connections from personal characteristics of the seeker and source, and the structural attributes of the social networks. Results: The respondents were more likely to recognize the members of the supervisory/administrative division as experts. The extent to which an individual implemented evidence-based practice (EBP) principles in daily practice was a significant predictor of both being an information source and being recognized as an expert by peers. Friendship was a significant predictor of both information-seeking and expertise-recognition connections. Conclusion: The analysis showed a communication network segregated by organizational divisions. Managers were identified frequently as information sources, even though this is not a part of their formal role. Self-perceived implementation of EBP in practice was a significant predictor of being an information source or an expert, implying a positive atmosphere towards implementation of evidence-informed decision making in this public health organization. Results also implied that the perception of accessibility and trust were significant predictors of expertise recognition. PMID:24565228

  3. Past and projected trends of body mass index and weight status in South Australia: 2003 to 2019.

    PubMed

    Hendrie, Gilly A; Ullah, Shahid; Scott, Jane A; Gray, John; Berry, Narelle; Booth, Sue; Carter, Patricia; Cobiac, Lynne; Coveney, John

    2015-12-01

    Functional data analysis (FDA) is a forecasting approach that, to date, has not been applied to obesity, and that may provide more accurate forecasting analysis to manage uncertainty in public health. This paper uses FDA to provide projections of Body Mass Index (BMI), overweight and obesity in an Australian population through to 2019. Data from the South Australian Monitoring and Surveillance System (January 2003 to December 2012, n=51,618 adults) were collected via telephone interview survey. FDA was conducted in four steps: 1) age-gender specific BMIs for each year were smoothed using a weighted regression; 2) the functional principal components decomposition was applied to estimate the basis functions; 3) an exponential smoothing state space model was used for forecasting the coefficient series; and 4) forecast coefficients were combined with the basis function. The forecast models suggest that between 2012 and 2019 average BMI will increase from 27.2 kg/m² to 28.0 kg/m² in males and 26.4 kg/m² to 27.6 kg/m² in females. The prevalence of obesity is forecast to increase by 6-7 percentage points by 2019 (to 28.7% in males and 29.2% in females). Projections identify age-gender groups at greatest risk of obesity over time. The novel approach will be useful to facilitate more accurate planning and policy development. © 2015 Public Health Association of Australia.

  4. Quark mixing and exponential form of the Cabibbo-Kobayashi-Maskawa matrix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhukovsky, K. V., E-mail: zhukovsk@phys.msu.ru; Dattoli, D., E-mail: dattoli@frascati.enea.i

    2008-10-15

    Various forms of representation of the mixing matrix are discussed. An exponential parametrization e^A of the Cabibbo-Kobayashi-Maskawa matrix is considered in the context of the unitarity requirement, this parametrization being the most general form of the mixing matrix. An explicit representation for the exponential mixing matrix in terms of the first and second powers of the matrix A exclusively is obtained. This representation makes it possible to calculate the exponential mixing matrix readily to any order of the expansion in the small parameter λ. The generation of new unitary parametric representations of the mixing matrix with the aid of the exponential matrix is demonstrated.
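
    A minimal numeric illustration of the exponential parametrization: for an anti-Hermitian generator A, expm(A) is unitary, as a mixing matrix must be. The generator below is a generic real antisymmetric example built from a small parameter, not the CKM generator from the paper.

        # Unitarity of e^A for an anti-Hermitian (here real antisymmetric) A.
        import numpy as np
        from scipy.linalg import expm

        lam = 0.22                               # small expansion parameter
        A = np.array([[0.0,      lam,     0.0],
                      [-lam,     0.0,     lam**2],
                      [0.0,     -lam**2,  0.0]])

        V = expm(A)
        print("unitarity check ||V V^T - I|| =",
              np.linalg.norm(V @ V.T - np.eye(3)))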

  5. Analysis of projectile motion: A comparative study using fractional operators with power law, exponential decay and Mittag-Leffler kernel

    NASA Astrophysics Data System (ADS)

    Gómez-Aguilar, J. F.; Escobar-Jiménez, R. F.; López-López, M. G.; Alvarado-Martínez, V. M.

    2018-03-01

    In this paper, two-dimensional projectile motion was studied; two cases were considered: in the first, there is no air resistance, and in the second, a resisting medium k is present. The study was carried out using fractional calculus. The solution was obtained by using fractional operators with power law, exponential decay and Mittag-Leffler kernels in the range γ ∈ (0,1]. These operators were considered in the Liouville-Caputo sense so as to use physical initial conditions with a known physical interpretation. The range and the maximum height of the projectile were obtained using these derivatives. With the aim of exploring the validity of the obtained results, we compared our results with experimental data given in the literature. A multi-objective particle swarm optimization approach was used to generate Pareto-optimal solutions for the parameters k and γ for different fixed values of the velocity v0 and angle θ. The results showed some relevant qualitative differences between the use of the power law, exponential decay and Mittag-Leffler law.

  6. Shape and Steepness of Toxicological Dose-Response Relationships of Continuous Endpoints

    EPA Science Inventory

    A re-analysis of a large number of historical dose-response data for continuous endpoints indicates that an exponential or a Hill model with four parameters both adequately describe toxicological dose-responses. The four parameters relate to the background response, the potency o...

  7. Using proteomics to study sexual reproduction in angiosperms

    USDA-ARS?s Scientific Manuscript database

    While a relative latecomer to the post-genomics era of functional biology, the application of mass spectrometry-based proteomic analysis has increased exponentially over the past 10 years. Some of this increase is the result of the transition of chemists, physicists, and mathematicians to the study of ...

  8. Turbulent particle transport in streams: can exponential settling be reconciled with fluid mechanics?

    PubMed

    McNair, James N; Newbold, J Denis

    2012-05-07

    Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.
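
    The ESM itself is a one-line formula: the proportion of particles still suspended after transport distance x is exp(−x/L), with L the mean transport distance. The sketch below evaluates it and contrasts it with a crude LEM-style random walk (vertical diffusion plus settling, advected downstream, absorbed at the bed); all simulation parameters are illustrative assumptions, not values from the paper.

        # ESM settling curve vs. a crude advection-diffusion (LEM-like) walk.
        import numpy as np

        L = 10.0                                  # mean transport distance (m)
        x = np.linspace(0.0, 50.0, 6)
        print("ESM fraction still suspended:", np.exp(-x / L).round(3))

        rng = np.random.default_rng(7)
        n, depth, u, w_s, D, dt = 5000, 1.0, 0.5, 0.02, 0.01, 0.01
        z = np.full(n, 0.5)                       # release height above bed (m)
        dist = np.zeros(n)
        alive = np.ones(n, dtype=bool)
        for _ in range(20000):
            step = rng.normal(0.0, np.sqrt(2.0 * D * dt), n) - w_s * dt
            z = np.where(alive, z + step, z)
            z = np.where(z > depth, 2.0 * depth - z, z)  # reflect at surface
            dist = np.where(alive, dist + u * dt, dist)
            alive &= z > 0.0                      # settled once it hits the bed
            if not alive.any():
                break
        print("median settling distance in the walk:",
              round(float(np.median(dist[~alive])), 2), "m")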

  9. A Data-Driven Approach to Reverse Engineering Customer Engagement Models: Towards Functional Constructs

    PubMed Central

    de Vries, Natalie Jane; Carlson, Jamie; Moscato, Pablo

    2014-01-01

    Online consumer behavior in general and online customer engagement with brands in particular, has become a major focus of research activity fuelled by the exponential increase of interactive functions of the internet and social media platforms and applications. Current research in this area is mostly hypothesis-driven and much debate about the concept of Customer Engagement and its related constructs remains existent in the literature. In this paper, we aim to propose a novel methodology for reverse engineering a consumer behavior model for online customer engagement, based on a computational and data-driven perspective. This methodology could be generalized and prove useful for future research in the fields of consumer behaviors using questionnaire data or studies investigating other types of human behaviors. The method we propose contains five main stages; symbolic regression analysis, graph building, community detection, evaluation of results and finally, investigation of directed cycles and common feedback loops. The ‘communities’ of questionnaire items that emerge from our community detection method form possible ‘functional constructs’ inferred from data rather than assumed from literature and theory. Our results show consistent partitioning of questionnaire items into such ‘functional constructs’ suggesting the method proposed here could be adopted as a new data-driven way of human behavior modeling. PMID:25036766

  10. Evaluation of regression and neural network models for solar forecasting over different short-term horizons

    DOE PAGES

    Inanlouganji, Alireza; Reddy, T. Agami; Katipamula, Srinivas

    2018-04-13

    Forecasting solar irradiation has acquired immense importance in view of the exponential increase in the number of solar photovoltaic (PV) system installations. In this article, results of analyses involving statistical and machine-learning techniques to predict solar irradiation over different forecasting horizons are reported. Yearlong typical meteorological year 3 (TMY3) datasets from three cities in the United States with different climatic conditions have been used in this analysis. A simple forecast approach that assumes consecutive days to be identical serves as a baseline model against which to compare forecasting alternatives. To account for seasonal variability and to capture short-term fluctuations, different variants of the lagged moving average (LMX) model with cloud cover as the input variable are evaluated. Finally, the proposed LMX model is evaluated against an artificial neural network (ANN) model. How the one-hour and 24-hour models can be used in conjunction to predict different short-term rolling horizons is discussed, and this joint application is illustrated for a four-hour rolling horizon forecast scheme. Lastly, the effect of using predicted cloud cover values, instead of measured ones, on the accuracy of the models is assessed. Results show that LMX models do not degrade in forecast accuracy if models are trained with the forecast cloud cover data.
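
    A hedged sketch of an LMX-style forecast follows; the paper's exact model variants are not reproduced, so the sketch simply regresses irradiance on one of its own lags and on lagged cloud-cover values by ordinary least squares, with synthetic stand-in data.

    ```python
    import numpy as np

    # Illustrative LMX-style regression: I_t ~ I_{t-1} + cloud_{t-1..t-n_lags}.
    def fit_lmx(irradiance, cloud, n_lags=3):
        """Return OLS coefficients for the lagged model above."""
        y = irradiance[n_lags:]
        cols = [irradiance[n_lags - 1:-1]]                 # one irradiance lag
        cols += [cloud[n_lags - k:-k] for k in range(1, n_lags + 1)]
        X = np.column_stack([np.ones_like(y)] + cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    # Synthetic demo data (stand-ins for TMY3 measurements)
    t = np.arange(24 * 60)
    cc = np.clip(0.5 + 0.3 * np.sin(t / 17.0), 0, 1)       # cloud cover fraction
    irr = np.maximum(0, np.sin(2 * np.pi * t / 24) * (1 - 0.7 * cc))
    beta = fit_lmx(irr, cc)
    print("intercept and lag coefficients:", np.round(beta, 3))
    ```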

  11. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies, and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
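
    A simplified two-stage L1-regularized sketch in the spirit of this framework is given below (Lasso in both stages via scikit-learn); the penalty levels and the synthetic data are illustrative assumptions, not the paper's estimator or tuning.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    # Toy two-stage regularized instrumental-variables fit.
    rng = np.random.default_rng(1)
    n, p, q = 200, 50, 80                      # samples, covariates, instruments
    Z = rng.standard_normal((n, q))            # instruments (e.g. genetic variants)
    Pi = np.zeros((q, p)); Pi[:5, :3] = 1.0    # sparse first-stage coefficients
    X = Z @ Pi + rng.standard_normal((n, p))   # endogenous covariates (expressions)
    beta = np.zeros(p); beta[:3] = 2.0         # sparse effects on the trait
    y = X @ beta + rng.standard_normal(n)

    # Stage 1: predict each covariate from the instruments with Lasso.
    X_hat = np.column_stack([
        Lasso(alpha=0.1).fit(Z, X[:, j]).predict(Z) for j in range(p)
    ])
    # Stage 2: Lasso of the outcome on the fitted covariates.
    fit2 = Lasso(alpha=0.1).fit(X_hat, y)
    print("selected covariates:", np.flatnonzero(fit2.coef_ != 0))
    ```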

  12. Speed-volume relationship and headway distribution analysis of motorcycle (case study: Teuku Nyak Arief Road)

    NASA Astrophysics Data System (ADS)

    Prahara, E.; Prasetya, R. A.

    2018-01-01

    Transportation modes in many developing countries are more varied than elsewhere. In Jakarta, Indonesia, for example, the motorcycle is the dominant vehicle on some roadways, with a total volume four times that of passenger cars, so the characteristics of motorcycle-dominated traffic differ from those of common traffic situations. The purpose of this study is to apply established concepts and theory to analyze motorcycle behaviour under motorcycle-dominated traffic conditions. The survey recorded traffic flow at the research location over specified time periods. The macroscopic characteristic analyzed in this research is the speed-flow relationship based on the motorcycle equivalent unit (MCU); the microscopic characteristic analyzed in detail is motorcycle time headway as a function of traffic flow. MCU values were computed for motorcycles (MC), light vehicles (LV) and heavy vehicles (HV) as 1.00, 6.13 and 10.71, respectively. The speed-volume relationship follows a linear regression model with an R2 of 0.58, indicating an intermediate correlation between the two variables. The motorcycle headway distribution is compatible with the negative exponential distribution, consistent with the theory proposed for small vehicles such as motorcycles.
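
    Checking the negative-exponential headway model reduces to fitting a one-parameter exponential to observed headways and testing the fit; the sketch below uses synthetic headways as a stand-in for the field recordings.

    ```python
    import numpy as np
    from scipy import stats

    # Fit an exponential to time headways and test the fit with a KS test.
    rng = np.random.default_rng(2)
    headways = rng.exponential(scale=2.4, size=500)    # seconds; synthetic stand-in

    loc, scale = stats.expon.fit(headways, floc=0.0)   # fix location at zero
    ks = stats.kstest(headways, "expon", args=(loc, scale))
    print(f"mean headway = {scale:.2f} s, KS p-value = {ks.pvalue:.3f}")
    ```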

  13. Evaluation of regression and neural network models for solar forecasting over different short-term horizons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inanlouganji, Alireza; Reddy, T. Agami; Katipamula, Srinivas

    Forecasting solar irradiation has acquired immense importance in view of the exponential increase in the number of solar photovoltaic (PV) system installations. In this article, results of analyses involving statistical and machine-learning techniques to predict solar irradiation over different forecasting horizons are reported. Yearlong typical meteorological year 3 (TMY3) datasets from three cities in the United States with different climatic conditions have been used in this analysis. A simple forecast approach that assumes consecutive days to be identical serves as a baseline model against which to compare forecasting alternatives. To account for seasonal variability and to capture short-term fluctuations, different variants of the lagged moving average (LMX) model with cloud cover as the input variable are evaluated. Finally, the proposed LMX model is evaluated against an artificial neural network (ANN) model. How the one-hour and 24-hour models can be used in conjunction to predict different short-term rolling horizons is discussed, and this joint application is illustrated for a four-hour rolling horizon forecast scheme. Lastly, the effect of using predicted cloud cover values, instead of measured ones, on the accuracy of the models is assessed. Results show that LMX models do not degrade in forecast accuracy if models are trained with the forecast cloud cover data.

  14. A data-driven approach to reverse engineering customer engagement models: towards functional constructs.

    PubMed

    de Vries, Natalie Jane; Carlson, Jamie; Moscato, Pablo

    2014-01-01

    Online consumer behavior in general, and online customer engagement with brands in particular, has become a major focus of research activity, fuelled by the exponential increase of interactive functions of the internet and social media platforms and applications. Current research in this area is mostly hypothesis-driven, and much debate about the concept of Customer Engagement and its related constructs persists in the literature. In this paper, we propose a novel methodology for reverse engineering a consumer behavior model for online customer engagement, based on a computational and data-driven perspective. This methodology could be generalized and prove useful for future research in the field of consumer behavior using questionnaire data, or in studies investigating other types of human behaviors. The method we propose contains five main stages: symbolic regression analysis, graph building, community detection, evaluation of results and, finally, investigation of directed cycles and common feedback loops. The 'communities' of questionnaire items that emerge from our community detection method form possible 'functional constructs' inferred from data rather than assumed from literature and theory. Our results show consistent partitioning of questionnaire items into such 'functional constructs', suggesting the method proposed here could be adopted as a new data-driven way of human behavior modeling.

  15. A multilevel model to estimate the within- and the between-center components of the exposure/disease association in the EPIC study.

    PubMed

    Sera, Francesco; Ferrari, Pietro

    2015-01-01

    In a multicenter study, the overall relationship between exposure and the risk of cancer can be broken down into a within-center component, which reflects the individual-level association, and a between-center relationship, which captures the association at the aggregate level. A piecewise exponential proportional hazards model with random effects was used to evaluate the association between dietary fiber intake and colorectal cancer (CRC) risk in the EPIC study. During an average follow-up of 11.0 years, 4,517 CRC events occurred among study participants recruited in 28 centers from ten European countries. Models were adjusted for relevant confounding factors. Heterogeneity among centers was modelled with random effects. Linear regression calibration was used to account for errors in dietary questionnaire (DQ) measurements. Risk ratio estimates for a 10 g/day increment in dietary fiber were equal to 0.90 (95%CI: 0.85, 0.96) and 0.85 (0.64, 1.14) at the individual and aggregate levels, respectively, while calibrated estimates were 0.85 (0.76, 0.94) and 0.87 (0.65, 1.15), respectively. In multicenter studies, compared with a straightforward ecological analysis, random effects models allow information at the individual and ecologic levels to be captured, while controlling for confounding at both levels.
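
    A minimal fixed-effects version of the piecewise exponential model can be fitted as a Poisson GLM on split person-time with a log person-time offset; the sketch below (synthetic data, no center random effects or calibration) illustrates only that core step.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Piecewise exponential survival via Poisson GLM on split person-time.
    rng = np.random.default_rng(3)
    n = 2000
    fiber = rng.normal(20, 6, n)                        # g/day, synthetic
    time = rng.exponential(scale=12 * np.exp(0.01 * (fiber - 20)), size=n)
    event = (time < 11.0).astype(int)
    time = np.minimum(time, 11.0)                       # administrative censoring

    cuts = [0, 2, 5, 8, 11]                             # follow-up intervals (years)
    rows = []
    for t, e, f in zip(time, event, fiber):
        for a, b in zip(cuts[:-1], cuts[1:]):
            if t <= a:
                break
            rows.append({"interval": f"{a}-{b}",
                         "pt": min(t, b) - a,           # person-time in interval
                         "d": int(e and t <= b),        # event in this interval
                         "fiber10": f / 10})
    df = pd.DataFrame(rows)
    X = pd.get_dummies(df[["interval", "fiber10"]], columns=["interval"],
                       drop_first=True, dtype=float)
    fit = sm.GLM(df["d"], sm.add_constant(X), family=sm.families.Poisson(),
                 offset=np.log(df["pt"])).fit()
    print("RR per 10 g/day:", np.exp(fit.params["fiber10"]).round(3))
    ```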

  16. Multifactor analysis and simulation of the surface runoff and soil infiltration at different slope gradients

    NASA Astrophysics Data System (ADS)

    Huang, J.; Kang, Q.; Yang, J. X.; Jin, P. W.

    2017-08-01

    The surface runoff and soil infiltration exert significant influence on soil erosion. The effects of slope gradient/length (SG/SL), individual rainfall amount/intensity (IRA/IRI), vegetation cover (VC) and antecedent soil moisture (ASM) on the runoff depth (RD) and soil infiltration (INF) were evaluated in a series of natural rainfall experiments in the South of China. RD correlated positively with IRA, IRI, and ASM and negatively with SG and VC. RD first decreased and then increased with SG and ASM, increased and then decreased with SL, grew linearly with IRA and IRI, and dropped exponentially with VC. Meanwhile, INF correlated positively with SL, IRA, IRI, and VC, and negatively with SG and ASM. INF first rose and then fell with SG, rose linearly with SL, IRA, and IRI, increased with VC following a logit function, and fell linearly with ASM. A VC level above 60% can effectively lower the surface runoff and significantly enhance soil infiltration. Two RD and INF prediction models, accounting for the above six factors, were constructed using the multiple nonlinear regression method. Verification of these models showed a high Nash-Sutcliffe coefficient and a low root-mean-square error, demonstrating the good predictive ability of both models.
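
    A multiple nonlinear regression of the kind described can be fitted with standard least squares; in the sketch below the functional form (linear in rainfall, exponential in vegetation cover) and the synthetic data follow the reported qualitative trends, not the authors' fitted equations, and the Nash-Sutcliffe coefficient is computed for verification.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Illustrative multifactor nonlinear runoff-depth model.
    def rd_model(X, a, b, c, d):
        ira, iri, vc = X                      # rainfall amount, intensity, cover
        return a + b * ira + c * iri * np.exp(-d * vc)

    rng = np.random.default_rng(4)
    ira = rng.uniform(10, 80, 120)
    iri = rng.uniform(5, 40, 120)
    vc = rng.uniform(0.1, 0.9, 120)
    rd = 1 + 0.15 * ira + 0.4 * iri * np.exp(-2.0 * vc) + rng.normal(0, 0.5, 120)

    popt, _ = curve_fit(rd_model, (ira, iri, vc), rd, p0=[1, 0.1, 0.3, 1.0])
    pred = rd_model((ira, iri, vc), *popt)
    nse = 1 - np.sum((rd - pred) ** 2) / np.sum((rd - rd.mean()) ** 2)
    print(f"Nash-Sutcliffe efficiency = {nse:.3f}")
    ```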

  17. Exponential parameter and tracking error convergence guarantees for adaptive controllers without persistency of excitation

    NASA Astrophysics Data System (ADS)

    Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric

    2014-08-01

    In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
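
    The idea of adapting on recorded plus instantaneous data can be sketched for a simple linear-in-parameters uncertainty; the discrete-time gradient update below is an illustrative toy, not the paper's controller, with excitation deliberately confined to a finite initial interval.

    ```python
    import numpy as np

    # Concurrent-learning-style update: instantaneous gradient plus a stored
    # stack of recorded regressors, so convergence needs only finite excitation.
    theta_star = np.array([1.0, -2.0, 0.5])            # ideal parameters
    phi = lambda t: np.array([np.sin(t), np.cos(t), 1.0])

    theta = np.zeros(3)
    gamma, dt = 2.0, 0.01
    stack_phi, stack_y = [], []
    for k in range(5000):
        t = k * dt
        p = phi(t if t < 3.0 else 3.0)    # excitation only over a finite interval
        y = theta_star @ p
        if t < 3.0 and k % 50 == 0:       # record rich data while it is available
            stack_phi.append(p); stack_y.append(y)
        err = theta @ p - y
        grad = err * p + sum((theta @ pj - yj) * pj
                             for pj, yj in zip(stack_phi, stack_y))
        theta -= gamma * dt * grad        # gradient step on combined error
    print("parameter error:", np.linalg.norm(theta - theta_star))
    ```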

  18. Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas

    PubMed Central

    Philibert, Aurore; Loyce, Chantal; Makowski, David

    2012-01-01

    Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty on this estimated value, by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable “applied N”, (ii) the function relating N2O emission to applied N (exponential or linear function), (iii) fixed or random background (i.e. in the absence of N application) N2O emission and (iv) fixed or random applied N effect. We calculated ranges of uncertainty on N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430
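
    The linear-versus-exponential comparison, and the consequence that the emission factor grows with applied N under the exponential form, can be illustrated with fixed-effects fits (the paper's random-effects variants would need a mixed model); the data below are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fixed-effects versions of the two emission models compared in the paper.
    lin = lambda N, a, b: a + b * N
    expo = lambda N, a, b: np.exp(a + b * N)

    rng = np.random.default_rng(6)
    N = rng.uniform(0, 300, 200)                                # applied N, kg N/ha
    em = np.exp(-0.3 + 0.006 * N) * rng.lognormal(0, 0.3, 200)  # N2O, synthetic

    p_lin, _ = curve_fit(lin, N, em)
    p_exp, _ = curve_fit(expo, N, em, p0=[0.0, 0.005])
    # With the exponential model the emission factor d(em)/dN is not constant:
    ef = p_exp[1] * np.exp(p_exp[0] + p_exp[1] * np.array([80.0, 160.0, 240.0]))
    print("emission factor at 80/160/240 kg N/ha:", np.round(ef, 4))
    ```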

  19. Electrostatic screening in classical Coulomb fluids: exponential or power-law decay or both? An investigation into the effect of dispersion interactions

    NASA Astrophysics Data System (ADS)

    Kjellander, Roland

    2006-04-01

    It is shown that the nature of the non-electrostatic part of the pair interaction potential in classical Coulomb fluids can have a profound influence on the screening behaviour. Two cases are compared: (i) when the non-electrostatic part equals an arbitrary finite-ranged interaction and (ii) when a dispersion r^-6 interaction potential is included. A formal analysis is done in exact statistical mechanics, including an investigation of the bridge function. It is found that the Coulombic r^-1 and the dispersion r^-6 potentials are coupled in a very intricate manner as regards the screening behaviour. The classical one-component plasma (OCP) is a particularly clear example due to its simplicity and is investigated in detail. When the dispersion r^-6 potential is turned on, the screened electrostatic potential from a particle goes from a monotonic exponential decay, exp(-κr)/r, to a power-law decay, r^-8, for large r. The pair distribution function acquires, at the same time, an r^-10 decay for large r instead of the exponential one. There still remain exponentially decaying contributions to both functions, but these contributions turn oscillatory when the r^-6 interaction is switched on. When the Coulomb interaction is turned off but the dispersion r^-6 pair potential is kept, the decay of the pair distribution function for large r goes over from the r^-10 to an r^-6 behaviour, which is the normal one for fluids of electroneutral particles with dispersion interactions. Differences and similarities compared to binary electrolytes are pointed out.

  20. Highly sensitive and selective microRNA detection based on DNA-bio-bar-code and enzyme-assisted strand cycle exponential signal amplification.

    PubMed

    Dong, Haifeng; Meng, Xiangdan; Dai, Wenhao; Cao, Yu; Lu, Huiting; Zhou, Shufeng; Zhang, Xueji

    2015-04-21

    Herein, a highly sensitive and selective microRNA (miRNA) detection strategy using DNA-bio-bar-code amplification (BCA) and an Nb·BbvCI nicking enzyme-assisted strand cycle for exponential signal amplification was designed. The DNA-BCA system contains a locked nucleic acid (LNA)-modified DNA probe to improve hybridization efficiency, while a signal-reporting molecular beacon (MB) with an endonuclease recognition site was designed for strand cycle amplification. In the presence of target miRNA, the oligonucleotide-functionalized magnetic nanoprobe (MNP-DNA) and gold nanoprobe (AuNP-DNA) carrying numerous reporter probes (RP) hybridize with the target miRNA to form a sandwich structure. After the sandwich structures were separated from the solution by a magnetic field, the RP were released at high temperature to recognize the MB and cleave the hairpin DNA, inducing the dissociation of RP. The dissociated RP then triggered the next strand cycle to produce exponential fluorescent signal amplification for miRNA detection. Under optimized conditions, the exponential signal amplification system shows a good linear range of 6 orders of magnitude (from 0.3 pM to 3 aM) with a limit of detection (LOD) down to 52.5 zM, while the sandwich structure renders the system highly selective. Meanwhile, the feasibility of the proposed strategy for cellular miRNA detection was confirmed by analyzing miRNA-21 in HeLa lysates. Given its high performance in miRNA analysis, the strategy has promising applications in biological detection and clinical diagnosis.

  1. Comparison of Apparent Diffusion Coefficient and Intravoxel Incoherent Motion for Differentiating among Glioblastoma, Metastasis, and Lymphoma Focusing on Diffusion-Related Parameter.

    PubMed

    Shim, Woo Hyun; Kim, Ho Sung; Choi, Choong-Gon; Kim, Sang Joon

    2015-01-01

    Brain tumor cellularity has been assessed by using the apparent diffusion coefficient (ADC). However, the ADC value may be influenced by both perfusion and true molecular diffusion, and the perfusion effect on ADC can limit its reliability in characterizing tumor cellularity, especially in hypervascular brain tumors. In contrast, the IVIM technique estimates parameter values for diffusion and perfusion effects separately. The purpose of our study was to compare ADC and IVIM for differentiating among glioblastoma, metastatic tumor, and primary CNS lymphoma (PCNSL), focusing on the diffusion-related parameter. We retrospectively reviewed the data of 128 patients with pathologically confirmed glioblastoma (n = 55), metastasis (n = 31), and PCNSL (n = 42) prior to any treatment. Two neuroradiologists independently calculated the maximum IVIM-f (fmax) and minimum IVIM-D (Dmin) by using 16 different b-values with a bi-exponential fitting of diffusion signal decay, the minimum ADC (ADCmin) by using b-values of 0 and 1000 with a mono-exponential fitting, and the maximum normalized cerebral blood volume (nCBVmax). The differences in fmax, Dmin, nCBVmax, and ADCmin among the three tumor pathologies were determined by one-way ANOVA with multiple comparisons. The fmax and Dmin values were correlated with the corresponding nCBV and ADC, respectively, using partial correlation analysis. Using a mono-exponential fitting of diffusion signal decay, the mean ADCmin was significantly lower in PCNSL than in glioblastoma and metastasis. However, using a bi-exponential fitting, the mean Dmin did not significantly differ among the three groups. The mean fmax was significantly higher in the glioblastomas (reader 1, 0.103; reader 2, 0.109) and the metastases (reader 1, 0.105; reader 2, 0.107) than in the primary CNS lymphomas (reader 1, 0.025; reader 2, 0.023) (P < .001 for each). The correlation between fmax and the corresponding nCBV was highest in the glioblastoma group, and the correlation between Dmin and the corresponding ADC was highest in the primary CNS lymphoma group. Unlike the ADC value derived from a mono-exponential fitting of the diffusion signal, the diffusion-related parameter derived from a bi-exponential fitting with separation of the perfusion effect does not differ among glioblastoma, metastasis, and PCNSL.
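
    The mono- versus bi-exponential contrast comes down to two signal models, S(b) = S0·exp(-b·ADC) versus the IVIM form S(b) = S0·[f·exp(-b·D*) + (1-f)·exp(-b·D)]; the sketch below fits both to synthetic decay data (the b-values and noise level are assumptions).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mono(b, S0, ADC):
        return S0 * np.exp(-b * ADC)

    def ivim(b, S0, f, Dstar, D):
        return S0 * (f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D))

    b = np.array([0, 10, 20, 40, 80, 110, 140, 170, 200, 300,
                  400, 500, 600, 700, 850, 1000], float)     # s/mm^2
    true = ivim(b, 1.0, 0.10, 0.020, 0.0008)                 # f=10%, D*=0.02, D=8e-4
    sig = true + np.random.default_rng(7).normal(0, 0.005, b.size)

    p_m, _ = curve_fit(mono, b[[0, -1]], sig[[0, -1]], p0=[1.0, 0.001])  # b=0,1000
    p_b, _ = curve_fit(ivim, b, sig, p0=[1, 0.1, 0.01, 0.001],
                       bounds=([0, 0, 0, 0], [2, 1, 0.1, 0.01]))
    print(f"ADC={p_m[1]:.4f}, IVIM f={p_b[1]:.3f}, D={p_b[3]:.5f} mm^2/s")
    ```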

  2. Development of A Tsunami Magnitude Scale Based on DART Buoy Data

    NASA Astrophysics Data System (ADS)

    Leiva, J.; Polet, J.

    2016-12-01

    The quantification of tsunami energy has evolved through time, with a number of magnitude and intensity scales employed in the past century. Most of these scales rely on coastal measurements, which may be affected by complexities due to near-shore bathymetric effects and coastal geometries. Moreover, these datasets are generated by tsunami inundation, and thus cannot serve as a means of assessing potential tsunami impact prior to coastal arrival. With the introduction of a network of ocean buoys provided through the Deep-ocean Assessment and Reporting of Tsunamis (DART) project, a dataset has become available that can be exploited to further our current understanding of tsunamis and the earthquakes that excite them. The DART network consists of 39 stations that have produced estimates of sea-surface height as a function of time since 2003 and are able to detect deep-ocean tsunami waves. Data collected at these buoys over the past decade reveal that at least nine major tsunami events, such as the 2011 Tohoku and 2013 Solomon Islands events, produced substantial wave amplitudes across a large distance range that can be used in a DART-based tsunami magnitude scale. We present preliminary results from the development of a tsunami magnitude scale that follows the methods used in the development of the local magnitude scale by Charles Richter. Analogous to the use of seismic ground motion amplitudes in the calculation of local magnitude, maximum ocean height displacements due to the passage of tsunami waves are related to distance from the source in a least-squares exponential regression analysis. The regression produces attenuation curves based on the DART data, a site correction term, attenuation parameters, and an amplification factor. Initially, single-event regressions are used to constrain the attenuation parameters. Additional iterations use the parameters of these event-based fits as a starting point to obtain a stable solution, and include the calculation of station corrections, in order to obtain a final amplification factor for each event, which is used to calculate its tsunami magnitude.
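
    A least-squares amplitude-distance regression of the kind described might look as follows; the attenuation form log10(A) = M - k·log10(r) - c and all constants are illustrative assumptions, and the magnitude/offset trade-off in a single-event fit is handled here by fixing c.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Toy single-event amplitude-distance regression for a magnitude scale.
    rng = np.random.default_rng(8)
    r = rng.uniform(500, 8000, 30)                     # source-buoy distance, km
    M_true, k_true, c = 8.2, 1.0, 2.0
    logA = M_true - k_true * np.log10(r) - c + rng.normal(0, 0.1, 30)

    def atten(r, M, k):
        # c is fixed by convention; it trades off with M in a one-event fit.
        return M - k * np.log10(r) - 2.0

    popt, _ = curve_fit(atten, r, logA, p0=[8.0, 1.0])
    print(f"estimated tsunami magnitude = {popt[0]:.2f}, attenuation k = {popt[1]:.2f}")
    ```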

  3. Exponential stability of stochastic complex networks with multi-weights based on graph theory

    NASA Astrophysics Data System (ADS)

    Zhang, Chunmei; Chen, Tianrui

    2018-04-01

    In this paper, a novel approach to exponential stability of stochastic complex networks with multi-weights is investigated by means of the graph-theoretical method. New sufficient conditions are provided to ascertain the moment exponential stability and almost surely exponential stability of stochastic complex networks with multiple weights. It is noted that our stability results are closely related with multi-weights and the intensity of stochastic disturbance. Numerical simulations are also presented to substantiate the theoretical results.

  4. Quantifying patterns of research interest evolution

    NASA Astrophysics Data System (ADS)

    Jia, Tao; Wang, Dashun; Szymanski, Boleslaw

    Changing and shifting research interest is an integral part of a scientific career. Despite extensive investigations of various factors that influence a scientist's choice of research topics, quantitative assessments of mechanisms that give rise to macroscopic patterns characterizing research interest evolution of individual scientists remain limited. Here we perform a large-scale analysis of extensive publication records, finding that research interest change follows a reproducible pattern characterized by an exponential distribution. We identify three fundamental features responsible for the observed exponential distribution, which arise from a subtle interplay between exploitation and exploration in research interest evolution. We develop a random walk based model, which adequately reproduces our empirical observations. Our study presents one of the first quantitative analyses of macroscopic patterns governing research interest change, documenting a high degree of regularity underlying scientific research and individual careers.

  5. Web-based application on employee performance assessment using exponential comparison method

    NASA Astrophysics Data System (ADS)

    Maryana, S.; Kurnia, E.; Ruyani, A.

    2017-02-01

    Employee performance assessment, also called a performance review, performance evaluation, or employee appraisal, is an effort to assess staff achievements with the aim of increasing the productivity of employees and companies. This application supports employee performance assessment using five criteria: Presence, Quality of Work, Quantity of Work, Discipline, and Teamwork. The system uses the Exponential Comparison Method and Eckenrode weighting. Calculation results are presented as graphs showing the assessment of each employee. The system was developed using Notepad++ and a MySQL database. Testing showed that the application corresponds with its design and runs properly; the tests conducted were structural testing, functional testing, validation, sensitivity analysis, and SUMI testing.
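
    The scoring step of the Exponential Comparison Method is commonly written as a sum over criteria of the rating raised to the criterion weight; the sketch below assumes that form with illustrative fixed weights (the paper derives its weights with the Eckenrode method, which is not reproduced here).

    ```python
    # Exponential Comparison Method scoring sketch (assumed standard form).
    criteria = ["presence", "quality", "quantity", "discipline", "teamwork"]
    weights = [5, 4, 4, 3, 2]                     # illustrative integer weights

    def mpe_score(ratings):
        """ratings: dict of criterion -> score on a 1-9 scale."""
        return sum(ratings[c] ** w for c, w in zip(criteria, weights))

    employees = {"A": dict(zip(criteria, [8, 7, 6, 8, 7])),
                 "B": dict(zip(criteria, [7, 8, 7, 7, 8]))}
    ranked = sorted(employees, key=lambda e: mpe_score(employees[e]), reverse=True)
    print({e: mpe_score(employees[e]) for e in employees}, "->", ranked)
    ```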

  6. Colloquium: Statistical mechanics of money, wealth, and income

    NASA Astrophysics Data System (ADS)

    Yakovenko, Victor M.; Rosser, J. Barkley, Jr.

    2009-10-01

    This Colloquium reviews statistical models for money, wealth, and income distributions developed in the econophysics literature since the late 1990s. By analogy with the Boltzmann-Gibbs distribution of energy in physics, it is shown that the probability distribution of money is exponential for certain classes of models with interacting economic agents. Alternative scenarios are also reviewed. Data analysis of the empirical distributions of wealth and income reveals a two-class distribution. The majority of the population belongs to the lower class, characterized by the exponential (“thermal”) distribution, whereas a small fraction of the population in the upper class is characterized by the power-law (“superthermal”) distribution. The lower part is very stable, stationary in time, whereas the upper part is highly dynamical and out of equilibrium.
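
    The emergence of the exponential money distribution can be reproduced with the classic pairwise-exchange toy model: total money is conserved, each "collision" randomly repartitions the pair's combined money, and the stationary distribution approaches exp(-m/T) with temperature T equal to the mean money.

    ```python
    import numpy as np

    # Random pairwise money exchange with conserved total money.
    rng = np.random.default_rng(9)
    n, steps = 5000, 200000
    money = np.full(n, 100.0)
    for _ in range(steps):
        i, j = rng.integers(0, n, 2)
        if i == j:
            continue
        pot = money[i] + money[j]
        share = rng.random() * pot          # repartition the pair's total randomly
        money[i], money[j] = share, pot - share

    # For an exponential distribution, the fraction below the mean is 1 - 1/e ≈ 0.63.
    T = money.mean()
    print(f"temperature = {T:.1f}, fraction below mean = {(money < T).mean():.2f}")
    ```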

  7. On Using Exponential Parameter Estimators with an Adaptive Controller

    NASA Technical Reports Server (NTRS)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forbes, G.B.; Drenick, E.J.

    An analysis of the change in total body nitrogen during fasting shows that it declines exponentially, a small fraction being lost rapidly (t1/2 of a few days), and the remainder being lost slowly (t1/2 of many months). The obese faster loses N, and weight, at a slower relative rate than the nonobese; and the ratio of N loss to weight loss during an extended fast is inversely related to body fat content, being about 20 g/kg in the nonobese and about 10 g/kg in those with body fat burdens of 50 kg or more. The loss of body N on a low-protein, calorie-adequate diet can also be described in exponential terms, and this function allows an estimate to be made of the N requirement.

  9. Laser induced fluorescence lifetime characterization of Bacillus endospore species using time correlated single photon counting analysis with the multi-exponential fit method

    NASA Astrophysics Data System (ADS)

    Smith, Clint; Edwards, Jarrod; Fisher, Andmorgan

    2010-04-01

    Rapid detection of biological material is critical for determining the presence or absence of bacterial endospores within various investigative programs. Even more critical, if select material tests positive for Bacillus endospores, tests should provide data at the species level. Optical detection systems for microbial endospore formers such as Bacillus sp. can be heavy and cumbersome, and may identify only at the genus level. Data from this study will aid the characterization needed by future detection systems for rapid follow-on analysis, providing more reliable signature collection for Bacillus sp. The literature has shown that fluorescence spectroscopy can statistically separate endospores from other vegetative genera, but cannot separate the species from one another. The results of this study show that endospore species separation is possible using laser-induced fluorescence with lifetime decay analysis for Bacillus endospores. Lifetime decays of B. subtilis, B. megaterium, B. coagulans, and B. anthracis Sterne strain were investigated. Using the multi-exponential fit method, the data showed three distinct lifetimes for each species within the ranges 0.2-1.3 ns, 2.5-7.0 ns, and 7.5-15.0 ns when excited at 307 nm. The four endospore species were individually separated using principal component analysis (95% CI).

  10. Some Surprising Errors in Numerical Differentiation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2012-01-01

    Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…

  11. Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Thayer, Dorothy T.

    2000-01-01

    Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…

  12. LIVESTOCK ACTIVITY AND CHIHUAHUAN DESERT ANNUAL-PLANT COMMUNITIES: BOUNDARY ANALYSIS OF DISTURBANCE GRADIENTS

    EPA Science Inventory

    The impact of domestic livestock on soil properties and perennial vegetation is greatest close to water points and generally decreases exponentially with distance from water. We hypothesized that the impact of livestock on annual-plant communities would be similar to that on per...

  13. Moving Average Models with Bivariate Exponential and Geometric Distributions.

    DTIC Science & Technology

    1985-03-01


  14. Gene Expression Browser: Large-Scale and Cross-Experiment Microarray Data Management, Search & Visualization

    USDA-ARS?s Scientific Manuscript database

    The amount of microarray gene expression data in public repositories has been increasing exponentially for the last couple of decades. High-throughput microarray data integration and analysis has become a critical step in exploring the large amount of expression data for biological discovery. Howeve...

  15. Time series trends of the safety effects of pavement resurfacing.

    PubMed

    Park, Juneyoung; Abdel-Aty, Mohamed; Wang, Jung-Han

    2017-04-01

    This study evaluated the safety performance of pavement resurfacing projects on urban arterials in Florida using observational before-after approaches. The safety effects of pavement resurfacing were quantified as crash modification factors (CMFs) and estimated for different ranges of heavy vehicle traffic volume and time changes at different severity levels. In order to evaluate the variation of CMFs over time, crash modification functions (CMFunctions) were developed using nonlinear regression and time series models. The results showed that pavement resurfacing projects decrease crash frequency and are more safety effective at reducing severe crashes in general. Moreover, the results of the general relationship between the safety effects and time changes indicated that the CMFs increase over time after the resurfacing treatment. It was also found that pavement resurfacing projects for urban roadways with a higher heavy vehicle volume rate are more safety effective than those for roadways with a lower heavy vehicle volume rate. Based on the exploration and comparison of the developed CMFunctions, the seasonal autoregressive integrated moving average (SARIMA) and exponential functional forms of the nonlinear regression models can be utilized to identify the trend of CMFs over time. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA.

    PubMed

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-06-01

    We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models, including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and that a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.

  17. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    USGS Publications Warehouse

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
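
    The exponential result for velocities and travel times follows from the standard maximum-entropy calculation for a nonnegative variable with a fixed mean, sketched below.

    ```latex
    % Maximum entropy with a mean constraint (nonnegative support):
    %   maximize  H[p] = -\int_0^\infty p(x)\ln p(x)\,dx
    %   subject to \int_0^\infty p(x)\,dx = 1, \quad \int_0^\infty x\,p(x)\,dx = \mu.
    % Stationarity of the Lagrangian gives \ln p(x) = -1 - \lambda_0 - \lambda_1 x,
    % so p(x) \propto e^{-\lambda_1 x}; imposing both constraints yields
    \[
      p(x) = \frac{1}{\mu}\, e^{-x/\mu}, \qquad x \ge 0,
    \]
    % i.e. the exponential distribution. (Fixing the mean and variance instead
    % gives a Gaussian; fixing the mean absolute value gives the Laplace form
    % quoted for accelerations.)
    ```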

  18. Hazard function analysis for flood planning under nonstationarity

    NASA Astrophysics Data System (ADS)

    Read, Laura K.; Vogel, Richard M.

    2016-05-01

    The field of hazard function analysis (HFA) involves a probabilistic assessment of the "time to failure" or "return period," T, of an event of interest. HFA is used in epidemiology, manufacturing, medicine, actuarial statistics, reliability engineering, economics, and elsewhere. For a stationary process, the probability distribution function (pdf) of the return period always follows an exponential distribution; the same is not true for nonstationary processes. When the process of interest, X, exhibits nonstationary behavior, HFA can provide a complementary approach to risk analysis with analytical tools particularly useful for hydrological applications. After a general introduction to HFA, we describe a new mathematical linkage between the magnitude of the flood event, X, and its return period, T, for nonstationary processes. We derive the probabilistic properties of T for a nonstationary one-parameter exponential model of X, and then use both Monte Carlo simulation and HFA to generalize the behavior of T when X arises from a nonstationary two-parameter lognormal distribution. For this case, our findings suggest that a two-parameter Weibull distribution provides a reasonable approximation for the pdf of T. We document how HFA can provide an alternative approach to characterize the probabilistic properties of both nonstationary flood series and the resulting pdf of T.
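
    A Monte Carlo sketch of the nonstationary case follows: annual maxima are drawn from a lognormal whose log-mean drifts upward (an assumed trend), the waiting time to the first exceedance of a design level is recorded, and a Weibull is fitted to the resulting return periods.

    ```python
    import numpy as np
    from scipy import stats

    # Return-period distribution under a nonstationary lognormal flood model.
    rng = np.random.default_rng(10)
    years = np.arange(200)
    mu = 3.0 + 0.004 * years                      # assumed upward trend in log-space
    design = np.exp(3.0 + 1.28)                   # ~90th percentile of year-0 floods

    T = []
    for _ in range(5000):
        floods = rng.lognormal(mean=mu, sigma=1.0)
        exceed = np.flatnonzero(floods > design)
        if exceed.size:
            T.append(exceed[0] + 1)               # first exceedance year
    shape, loc, scale = stats.weibull_min.fit(T, floc=0)
    print(f"Weibull shape = {shape:.2f}, scale = {scale:.1f} years")
    ```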

  19. The Secular Evolution Of Disc Galaxies And The Origin Of Exponential And Double Exponential Surface Density Profiles

    NASA Astrophysics Data System (ADS)

    Elmegreen, Bruce G.

    2016-10-01

    Exponential radial profiles are ubiquitous in spiral and dwarf irregular galaxies, but the origin of this structural form is not understood. This talk will review the observations of exponential and double exponential disks, considering both the light and the mass profiles, and the contributions from stars and gas. Several theories for this structure will also be reviewed, including primordial collapse, bar and spiral torques, clump torques, galaxy interactions, disk viscosity and other internal processes of angular momentum exchange, and stellar scattering off of clumpy structure. The only process currently known that can account for this structure in the most theoretically difficult case is stellar scattering off disk clumps. Stellar orbit models suggest that such scattering can produce exponentials even in isolated dwarf irregulars that have no bars or spirals, little shear or viscosity, and profiles that extend too far out for the classical Mestel case of primordial collapse with specific angular momentum conservation.

  20. Psychophysics of time perception and intertemporal choice models

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.

    2008-03-01

    Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting, general hyperbolic discounting (exponential discounting with logarithmic time perception of the Weber-Fechner law, a q-exponential discount model based on Tsallis's statistics), simple hyperbolic discounting, and Stevens' power law-exponential discounting (exponential discounting with Stevens' power time perception). In order to examine the fitness of the models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processing underlying temporal discounting and time perception are discussed.
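
    The model comparison can be reproduced in outline by fitting each discount function to indifference points and ranking by AICc; the functional forms below are the standard ones named in the abstract, while the delays, data, and starting values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Candidate discount functions V(D) for subjective value at delay D.
    expo   = lambda D, k: np.exp(-k * D)
    hyper  = lambda D, k: 1.0 / (1.0 + k * D)
    genhyp = lambda D, k, s: 1.0 / (1.0 + k * D) ** s   # Weber-Fechner form
    power  = lambda D, k, s: np.exp(-k * D ** s)        # Stevens' power-law form

    D = np.array([1, 7, 30, 90, 180, 365, 730], float)  # delays (days), assumed
    V = genhyp(D, 0.02, 1.3) + np.random.default_rng(11).normal(0, 0.02, 7)

    def aicc(model, p0):
        popt, _ = curve_fit(model, D, V, p0=p0, maxfev=10000)
        rss = np.sum((V - model(D, *popt)) ** 2)
        n, k = D.size, len(popt)
        return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

    for name, m, p0 in [("exponential", expo, [0.01]), ("hyperbolic", hyper, [0.01]),
                        ("general hyperbola", genhyp, [0.01, 1.0]),
                        ("power-exponential", power, [0.01, 1.0])]:
        print(f"{name:18s} AICc = {aicc(m, p0):.1f}")
    ```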

  1. The use of segmented regression in analysing interrupted time series studies: an example in pre-hospital ambulance care.

    PubMed

    Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M

    2014-06-19

    An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the change in intercept and/or slope from pre- to post-intervention is estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
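
    The core segmented-regression specification adds a step term and a post-intervention slope-change term to the time trend; a minimal sketch with synthetic monthly data follows.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Segmented regression for an interrupted time series:
    # y ~ time + step(post) + post * (time since interruption).
    rng = np.random.default_rng(12)
    t = np.arange(48)                       # months
    post = (t >= 24).astype(float)          # intervention at month 24
    y = 50 + 0.2 * t + 5 * post + 0.4 * post * (t - 24) + rng.normal(0, 2, 48)

    X = sm.add_constant(np.column_stack([t, post, post * (t - 24)]))
    fit = sm.OLS(y, X).fit()
    print(fit.params.round(2))   # [baseline, pre-slope, level change, slope change]
    ```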

  2. Standardized Regression Coefficients as Indices of Effect Sizes in Meta-Analysis

    ERIC Educational Resources Information Center

    Kim, Rae Seon

    2011-01-01

    When conducting a meta-analysis, it is common to find many collected studies that report regression analyses, because multiple regression analysis is widely used in many fields. Meta-analysis uses effect sizes drawn from individual studies as a means of synthesizing a collection of results. However, indices of effect size from regression analyses…

  3. A Simulation of the ECSS Help Desk with the Erlang a Model

    DTIC Science & Technology

    2011-03-01

    ... a popular distribution is the exponential distribution, as shown in Figure 3 (Bourke, 2001). ...

  4. Determination of temperature and residual laser energy on film fiber-optic thermal converter for diode laser surgery.

    PubMed

    Liu, Weichao; Kong, Yaqun; Shi, Xiafei; Dong, Xiaoxi; Wang, Hong; Zhao, Jizhi; Li, Yingxin

    2017-12-01

    The diode laser is utilized for soft tissue incision in oral surgery based on the photothermic effect. A trade-off between ablation efficiency and thermal damage has always existed in diode laser surgery, because radiation in the near-infrared region is only weakly absorbed by biological tissues. Fiber-optic thermal converters (FOTCs) have been used to improve the efficiency of diode laser surgery. The purpose of this study was to determine the photothermic effect from the temperature and residual laser energy on film FOTCs. The film FOTC was made by impacting the distal end of an optical fiber on paper, so that the external surface of the converter is covered by a film containing amorphous carbon. The 810 nm diode laser was operated at rated powers of 1.0 W, 1.5 W, 2.0 W, 3.0 W, 4.0 W, 5.0 W, 6.0 W, 7.0 W, and 8.0 W in continuous wave (CW) and pulsed modes. The temperature of the distal end of the optical fiber was recorded, and the power of the residual laser energy from the film FOTC was measured synchronously. The temperature, residual power, and output power were analyzed with linear or exponential regression models and Pearson correlation analysis. The residual power shows good linearity versus output power in CW and pulsed modes (R2 = 0.963, P < 0.01 for both). The temperature on film FOTCs increases exponentially with output power (adjusted R2 = 0.959 in CW mode and 0.934 in pulsed mode). The temperature rose to about 210 °C and eventually reached a stable state. Film FOTCs concentrated approximately 50% of the laser energy at the fiber tip in both CW and pulsed modes, while limiting the ability of the laser light to interact directly with target tissue. Film FOTCs can convert part of the laser energy to heat at the distal end of the optical fiber, offering the possibility of improving efficiency while reducing thermal damage to deep tissue.

  5. Multi-time series RNA-seq analysis of Enterobacter lignolyticus SCF1 during growth in lignin-amended medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orellana, Roberto; Chaput, Gina; Markillie, Lye Meng

    The production of lignocellulosic-derived biofuels is a highly promising source of alternative energy, but it has been constrained by the lack of a microbial platform capable of efficiently degrading this recalcitrant material and coping with by-products that can be toxic to cells. Species that naturally grow in environments where carbon is mainly available as lignin are promising candidates for finding new ways of removing the lignin that protects cellulose, for improved conversion of lignin to fuel precursors. Enterobacter lignolyticus SCF1 is a facultative anaerobic Gammaproteobacterium isolated from tropical rain forest soil collected in El Yunque forest, Puerto Rico, under anoxic growth conditions with lignin as the sole carbon source. Whole-transcriptome analysis of E. lignolyticus SCF1 during lignin degradation was conducted on cells grown in the presence (0.1%, w/w) and absence of lignin, with samples taken at three different times during growth: the beginning of exponential phase, mid-exponential phase, and the beginning of stationary phase. Lignin-amended cultures achieved twice the cell biomass of unamended cultures over three days, and in this time degraded 60% of the lignin. Transcripts in early exponential phase reflected this accelerated growth. A complement of laccases, aryl-alcohol dehydrogenases, and peroxidases were most up-regulated in lignin-amended conditions in mid-exponential and early stationary phases compared to unamended growth. The association of hydrogen production, by way of the formate hydrogenlyase complex, with lignin degradation suggests a possible added value of lignin degradation in the future.

  6. Mathematical modelling of the growth of human fetus anatomical structures.

    PubMed

    Dudek, Krzysztof; Kędzia, Wojciech; Kędzia, Emilia; Kędzia, Alicja; Derkowski, Wojciech

    2017-09-01

    The goal of this study was to present a procedure that would enable mathematical analysis of the increase in linear sizes of human anatomical structures, estimate mathematical model parameters, and evaluate their adequacy. The section material consisted of 67 foetuses (rectus abdominis muscle) and 75 foetuses (biceps femoris muscle). The following methods were incorporated into the study: preparation and anthropologic methods, digital image acquisition, measurements with the ImageJ computer system, and statistical analysis. We used an anthropologic method based on age determination using the crown-rump length, CRL (V-TUB), by Scammon and Calkins. The choice of mathematical function should be based on the real course of the curve presenting the growth of an anatomical structure's linear size y in subsequent weeks t of pregnancy. Size changes can be described with a segmental-linear model or a one-function model with accuracy adequate for clinical purposes. The size-age interdependence is described by many functions; however, the following are most often considered: linear, polynomial, spline, logarithmic, power, exponential, power-exponential, log-logistic I and II, Gompertz I and II, and von Bertalanffy. With the use of the procedures described above, mathematical model parameters were assessed for the V-PL (total body length) and CRL increases, the rectus abdominis total length h and its segments hI, hII, hIII, hIV, as well as the biceps femoris length and width of the long head (LHL and LHW) and of the short head (SHL and SHW). The best fits to the measurement results were observed for the exponential and Gompertz models.

  7. Asymmetrical flow field-flow fractionation coupled with multiple detections: A complementary approach in the characterization of egg yolk plasma.

    PubMed

    Dou, Haiyang; Li, Yueqiu; Choi, Jaeyeong; Huo, Shuying; Ding, Liang; Shen, Shigang; Lee, Seungho

    2016-09-23

    The capability of asymmetrical flow field-flow fractionation (AF4) coupled with UV/VIS, multiangle light scattering (MALS) and quasi-elastic light scattering (QELS) (AF4-UV-MALS-QELS) for the separation and characterization of egg yolk plasma was evaluated. The accuracy of the hydrodynamic radius (Rh) obtained from QELS and from AF4 theory (using both the simplified and full expressions of the AF4 retention equation) is discussed. The conformation of low density lipoprotein (LDL) and its aggregates in egg yolk plasma is discussed based on the ratio of the radius of gyration (Rg) to Rh, together with results from bio-transmission electron microscopy (Bio-TEM). The results indicate that the full retention equation is more appropriate than the simplified version for Rh determination at high cross flow rates. The Rh from online QELS is reliable only within a specific range of sample concentrations. The effect of a programmed cross flow rate (linear and exponential decay) on the analysis of egg yolk plasma was also investigated. It was found that the use of an exponentially decaying cross flow rate not only reduces the AF4 analysis time for egg yolk plasma, but also provides better resolution than the use of either a constant or a linearly decaying cross flow rate. A combination of exponentially decaying cross flow AF4-UV-MALS-QELS and the use of the full retention equation proved to be a useful method for the separation and characterization of egg yolk plasma. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Asymmetric exponential amplification reaction on a toehold/biotin featured template: an ultrasensitive and specific strategy for isothermal microRNAs analysis

    PubMed Central

    Chen, Jun; Zhou, Xueqing; Ma, Yingjun; Lin, Xiulian; Dai, Zong; Zou, Xiaoyong

    2016-01-01

    The sensitive and specific analysis of microRNAs (miRNAs) without using a thermal cycler instrument is significant and would greatly facilitate biological research and disease diagnostics. Although exponential amplification reaction (EXPAR) is the most attractive strategy for the isothermal analysis of miRNAs, its intrinsic limitations of detection efficiency and inevitable non-specific amplification critically restrict its use in analytical sensitivity and specificity. Here, we present a novel asymmetric EXPAR based on a new biotin/toehold featured template. A biotin tag was used to reduce the melting temperature of the primer/template duplex at the 5′ terminus of the template, and a toehold exchange structure acted as a filter to suppress the non-specific trigger of EXPAR. The asymmetric EXPAR exhibited great improvements in amplification efficiency and specificity as well as a dramatic extension of dynamic range. The limit of detection for the let-7a analysis was decreased to 6.02 copies (0.01 zmol), and the dynamic range was extended to 10 orders of magnitude. The strategy enabled the sensitive and accurate analysis of let-7a miRNA in human cancer tissues with clearly better precision than both standard EXPAR and RT-qPCR. Asymmetric EXPAR is expected to have an important impact on the development of simple and rapid molecular diagnostic applications for short oligonucleotides. PMID:27257058

  9. W-transform for exponential stability of second order delay differential equations without damping terms.

    PubMed

    Domoshnitsky, Alexander; Maghakyan, Abraham; Berezansky, Leonid

    2017-01-01

    In this paper, a method for studying the stability of the equation [Formula: see text], which does not explicitly include the first derivative, is proposed. We demonstrate that although the corresponding ordinary differential equation [Formula: see text] is not exponentially stable, the delay equation can be exponentially stable.

  10. A Test of the Exponential Distribution for Stand Structure Definition in Uneven-aged Loblolly-Shortleaf Pine Stands

    Treesearch

    Paul A. Murphy; Robert M. Farrar

    1981-01-01

    In this study, 588 before-cut and 381 after-cut diameter distributions of uneven-aged loblolly-shortleaf pine stands were fitted to two different forms of the exponential probability density function. The left-truncated and doubly truncated forms of the exponential were used.

  11. Preparation of an exponentially rising optical pulse for efficient excitation of single atoms in free space.

    PubMed

    Dao, Hoang Lan; Aljunid, Syed Abdullah; Maslennikov, Gleb; Kurtsiefer, Christian

    2012-08-01

    We report on a simple method to prepare optical pulses with an exponentially rising envelope on the time scale of a few ns. The scheme is based on the exponential transfer function of a fast transistor, which generates an exponentially rising envelope that is transferred first onto a radio frequency carrier, and then onto a coherent cw laser beam with an electro-optical phase modulator. The temporally shaped sideband is then extracted with an optical resonator and can be used to efficiently excite a single (87)Rb atom.

  12. Compact exponential product formulas and operator functional derivative

    NASA Astrophysics Data System (ADS)

    Suzuki, Masuo

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians.

  13. Lambert-Beer law in ocean waters: optical properties of water and of dissolved/suspended material, optical energy budgets.

    PubMed

    Stavn, R H

    1988-01-15

    The role of the Lambert-Beer law in ocean optics is critically examined. The Lambert-Beer law and the three-parameter model of the submarine light field are used to construct an optical energy budget for any hydrosol. It is further applied to the analytical exponential decay coefficient of the light field and used to estimate the optical properties and effects of the dissolved/suspended component in upper ocean layers. The concepts of the empirical exponential decay coefficient (diffuse attenuation coefficient) of the light field and a constant exponential decay coefficient for molecular water are analyzed quantitatively. A constant exponential decay coefficient for water is rejected. The analytical exponential decay coefficient is used to analyze optical gradients in ocean waters.

  14. A review of the matrix-exponential formalism in radiative transfer

    NASA Astrophysics Data System (ADS)

    Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian

    2017-07-01

    This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
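
    As a minimal numerical illustration of the Taylor-series route mentioned above (not the authors' radiative-transfer code; the layer matrix is a toy example), one can compare truncated Taylor sums with SciPy's Pade-based expm:

        import numpy as np
        from scipy.linalg import expm

        A = np.array([[-1.0, 0.3], [0.2, -0.8]])   # toy layer matrix (assumed values)

        def expm_taylor(A, n_terms=20):
            """Truncated Taylor series exp(A) ~ sum_k A^k / k!."""
            out = np.eye(A.shape[0])
            term = np.eye(A.shape[0])
            for k in range(1, n_terms):
                term = term @ A / k
                out += term
            return out

        err = np.abs(expm_taylor(A, 10) - expm(A)).max()
        print(f"max |Taylor(10 terms) - Pade| = {err:.2e}")  # small for optically thin layers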

  15. Filtering of Discrete-Time Switched Neural Networks Ensuring Exponential Dissipative and $l_{2}$-$l_{\infty}$ Performances.

    PubMed

    Choi, Hyun Duck; Ahn, Choon Ki; Karimi, Hamid Reza; Lim, Myo Taeg

    2017-10-01

    This paper studies delay-dependent exponential dissipative and l2-l∞ filtering problems for discrete-time switched neural networks (DSNNs) including time-delayed states. By introducing a novel discrete-time inequality, which is a discrete-time version of the continuous-time Wirtinger-type inequality, we establish new sets of linear matrix inequality (LMI) criteria such that discrete-time filtering error systems are exponentially stable with guaranteed performances in the exponential dissipative and l2-l∞ senses. The design of the desired exponential dissipative and l2-l∞ filters for DSNNs can be achieved by solving the proposed sets of LMI conditions. Via numerical simulation results, we show the validity of the desired discrete-time filter design approach.

  16. Does the Australian desert ant Melophorus bagoti approximate a Lévy search by an intrinsic bi-modal walk?

    PubMed

    Reynolds, Andy M; Schultheiss, Patrick; Cheng, Ken

    2014-01-07

    We suggest that the Australian desert ant Melophorus bagoti approximates a Lévy search pattern by using an intrinsic bi-exponential walk and does so when a Lévy search pattern is advantageous. When attempting to locate its nest, M. bagoti adopts a stereotypical search pattern. These searches begin at the location where the ant expects to find the nest, and comprise loops that start and end at this location, and are directed in different azimuthal directions. Loop lengths are exponentially distributed when searches are in visually familiar surroundings and are well described by a mixture of two exponentials when searches are in unfamiliar landscapes. The latter approximates a power-law distribution, the hallmark of a Lévy search. With the aid of a simple analytically tractable theory, we show that an exponential loop-length distribution is advantageous when the distance to the nest can be estimated with some certainty and that a bi-exponential distribution is advantageous when there is considerable uncertainty regarding the nest location. The best bi-exponential search patterns are shown to be those that come closest to approximating advantageous Lévy looping searches. The bi-exponential search patterns of M. bagoti are found to approximate advantageous Lévy search patterns. Copyright © 2013. Published by Elsevier Ltd.

  17. On the q-type distributions

    NASA Astrophysics Data System (ADS)

    Nadarajah, Saralees; Kotz, Samuel

    2007-04-01

    Various q-type distributions have appeared in the physics literature in recent years; see, e.g., L.C. Malacarne, R.S. Mendes, E.K. Lenzi, q-exponential distribution in urban agglomeration, Phys. Rev. E 65 (2002) 017106; S.M.D. Queiros, On a possible dynamical scenario leading to a generalised Gamma distribution, arXiv:physics/0411111; U.M.S. Costa, V.N. Freire, L.C. Malacarne, R.S. Mendes, S. Picoli Jr., E.A. de Vasconcelos, E.F. da Silva Jr., An improved description of the dielectric breakdown in oxides based on a generalized Weibull distribution, Physica A 361 (2006) 215; S. Picoli Jr., R.S. Mendes, L.C. Malacarne, q-exponential, Weibull, and q-Weibull distributions: an empirical analysis, Physica A 324 (2003) 678-688; and A.M.C. de Souza, C. Tsallis, Student's t- and r-distributions: unified derivation from an entropic variational principle, Physica A 236 (1997) 52-57. It is pointed out in the paper that many of these are the same as, or particular cases of, distributions long known in the statistics literature. Several of these statistical distributions are discussed and references provided. We feel that this paper could be of assistance for modeling problems of the type considered in the works cited above.

  18. DWI-associated entire-tumor histogram analysis for the differentiation of low-grade prostate cancer from intermediate-high-grade prostate cancer.

    PubMed

    Wu, Chen-Jiang; Wang, Qing; Li, Hai; Wang, Xiao-Ning; Liu, Xi-Sheng; Shi, Hai-Bin; Zhang, Yu-Dong

    2015-10-01

    To investigate the diagnostic efficiency of DWI using entire-tumor histogram analysis in differentiating low-grade (LG) prostate cancer (PCa) from intermediate-high-grade (HG) PCa, in comparison with conventional ROI-based measurement. DW images (b = 0-1400 s/mm(2)) from 126 pathology-confirmed PCa lesions (diameter >0.5 cm) in 110 patients were retrospectively collected and processed with a mono-exponential model. Tumor apparent diffusion coefficients (ADCs) were measured using histogram-based and ROI-based approaches, respectively. The ability of the ADCs from the two methods to differentiate LG-PCa (Gleason score, GS ≤ 6) from HG-PCa (GS > 6) was determined by ROC regression and compared by McNemar's test. There were 49 LG tumors and 77 HG tumors at pathologic examination. Histogram-based ADCs (mean, median, 10th and 90th percentiles) and ROI-based mean ADCs showed significant negative relationships with the ordinal GS of PCa (ρ = -0.225 to -0.406, p < 0.05). All of the above imaging indices differed significantly between LG-PCa and HG-PCa (all p values <0.01). The histogram 10th-percentile ADC had the highest Az (0.738), Youden index (0.415), and positive likelihood ratio (LR+, 2.45) for stratifying tumor GS compared with the mean, median, and 90th-percentile ADCs and the ROI-based ADCs. Histogram mean, median, and 10th-percentile ADCs showed higher specificity (65.3%-74.1% vs. 44.9%, p < 0.01) but lower sensitivity (57.1%-71.3% vs. 84.4%, p < 0.05) than ROI-based ADCs in differentiating LG-PCa from HG-PCa. DWI-associated histogram analysis had higher specificity, Az, Youden index, and LR+ for differentiating PCa Gleason grade than the ROI-based approach.
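
    The entire-tumor histogram metrics used here are straightforward to compute; a sketch with a hypothetical ADC array and mask (not the study's pipeline):

        import numpy as np

        rng = np.random.default_rng(0)
        adc_map = rng.normal(1.0e-3, 2.0e-4, size=(64, 64))   # fake ADC values, mm^2/s
        tumor_mask = np.zeros((64, 64), dtype=bool)
        tumor_mask[20:40, 25:45] = True                        # fake tumor ROI

        vals = adc_map[tumor_mask]                             # entire-tumor voxel sample
        metrics = {
            "mean": vals.mean(),
            "median": np.median(vals),
            "p10": np.percentile(vals, 10),   # the 10th percentile performed best above
            "p90": np.percentile(vals, 90),
        }
        print({k: f"{v:.2e}" for k, v in metrics.items()})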

  19. Increased intra-individual reaction time variability in cocaine-dependent subjects: role of cocaine-related cues.

    PubMed

    Liu, Shijing; Lane, Scott D; Schmitz, Joy M; Green, Charles E; Cunningham, Kathryn A; Moeller, F Gerard

    2012-02-01

    Neuroimaging data suggest that impaired performance on response inhibition and information processing tests in cocaine-dependent subjects is related to prefrontal and frontal cortical dysfunction and that dysfunction in these brain areas may underlie some aspects of cocaine addiction. In subjects with attention-deficit hyperactivity disorder and other psychiatric disorders, the Intra-Individual Reaction Time Variability (IIRTV) has been associated with frontal cortical dysfunction. In the present study, we evaluated IIRTV parameters in cocaine-dependent subjects vs. controls using a cocaine Stroop task. Fifty control and 123 cocaine-dependent subjects compiled from three studies completed a cocaine Stroop task. Standard deviation (SD) and coefficient of variation (CV) for reaction times (RT) were calculated for both trials with neutral and trials with cocaine-related words. The parameters mu, sigma, and tau were calculated using an ex-Gaussian analysis employed to characterize variability in RTs. The ex-Gaussian analysis divides the RTs into normal (mu, sigma) and exponential (tau) components. Using robust regression analysis, cocaine-dependent subjects showed greater SD, CV and Tau on trials with cocaine-related words compared to controls (p<0.05). However, in trials with neutral words, there was no evidence of group differences in any IIRTV parameters (p>0.05). The Wilcoxon matched-pairs signed-rank test showed that for cocaine-dependent subjects, both SD and tau were larger in trials with cocaine-related words than in trials with neutral words (p<0.05). The observation that only cocaine-related words increased IIRTV in cocaine-dependent subjects suggests that cocaine-related stimuli might disrupt information processing subserved by prefrontal and frontal cortical circuits. Copyright © 2011 Elsevier Ltd. All rights reserved.
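
    SciPy's exponnorm is an ex-Gaussian and can recover mu, sigma and tau from reaction times; a minimal sketch on synthetic data (parameter values assumed, not the study's):

        import numpy as np
        from scipy import stats

        # Simulate RTs (ms): normal component (mu, sigma) plus exponential tail (tau).
        mu, sigma, tau = 500.0, 50.0, 150.0
        rng = np.random.default_rng(42)
        rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

        # exponnorm is parameterised by K = tau / sigma, loc = mu, scale = sigma.
        K, loc, scale = stats.exponnorm.fit(rts)
        print(f"mu={loc:.0f}  sigma={scale:.0f}  tau={K * scale:.0f}")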

  20. Using Dominance Analysis to Determine Predictor Importance in Logistic Regression

    ERIC Educational Resources Information Center

    Azen, Razia; Traxel, Nicole

    2009-01-01

    This article proposes an extension of dominance analysis that allows researchers to determine the relative importance of predictors in logistic regression models. Criteria for choosing logistic regression R[superscript 2] analogues were determined and measures were selected that can be used to perform dominance analysis in logistic regression. A…
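
    The core computation behind dominance analysis is fitting the model on every subset of predictors and comparing an R² analogue; a sketch using McFadden's pseudo-R² from statsmodels (toy data; the article's full procedure additionally averages incremental contributions across subset sizes):

        from itertools import combinations

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 500
        X = rng.normal(size=(n, 3))                         # three candidate predictors
        y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))).astype(int)

        def pseudo_r2(cols):
            """McFadden pseudo-R^2 of a logit model on the given predictor columns."""
            Xs = sm.add_constant(X[:, list(cols)])
            return sm.Logit(y, Xs).fit(disp=0).prsquared

        # Fit all non-empty subsets; dominance is then judged from incremental gains.
        for subset in (s for r in (1, 2, 3) for s in combinations(range(3), r)):
            print(subset, round(pseudo_r2(subset), 3))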

  1. A method of examining the structure and topological properties of public-transport networks

    NASA Astrophysics Data System (ADS)

    Dimitrov, Stavri Dimitri; Ceder, Avishai (Avi)

    2016-06-01

    This work presents a new method of examining the structure of public-transport networks (PTNs) and analyzes their topological properties through a combination of computer programming, statistical data and large-network analyses. In order to automate the extraction, processing and exporting of data, a software program was developed to extract the needed data from the General Transit Feed Specification, thus overcoming difficulties in accessing and collecting data. The proposed method was applied to a real-life PTN in Auckland, New Zealand, with the purpose of examining whether it showed characteristics of scale-free networks and exhibited features of "small-world" networks. As a result, new regression equations were derived analytically describing observed, strong, non-linear relationships among the probabilities of randomly chosen stops in the PTN being serviced by a given number of routes. The established dependence is best fitted by an exponential rather than a power-law function, showing that the PTN examined is neither random nor scale-free, but a mixture of the two. This finding explains the presence of hubs that are not typical of exponential networks and simultaneously not highly connected to the other nodes as is the case with scale-free networks. On the other hand, the observed values of the topological properties of the network show that although it is highly clustered, owing to its representation as a directed graph, it differs slightly from "small-world" networks, which are characterized by strong clustering and a short average path length.

  2. Investigation of polycyclic aromatic hydrocarbon content in fly ash and bottom ash of biomass incineration plants in relation to the operating temperature and unburned carbon content.

    PubMed

    Košnář, Zdeněk; Mercl, Filip; Perná, Ivana; Tlustoš, Pavel

    2016-09-01

    The use of biomass fuels in incineration power plants is increasing worldwide. The ashes produced may pose a serious threat to the environment due to the presence of polycyclic aromatic hydrocarbons (PAHs), some of which are potent carcinogens, mutagens and teratogens. The objective of this study was to investigate the content of total and individual PAHs in fly and bottom ash derived from incineration of phytomass and dendromass, because data on PAH content in biomass ashes are limited. Various operating temperatures of incineration were examined, and the relationship between total PAH content and unburned carbon in ashes was also considered. The analysis of PAHs was carried out on fly and bottom ash samples collected from various biomass incineration plants. PAH determination was performed using gas chromatography coupled with mass spectrometry. Correlations among the low, medium and high molecular weight PAHs in the ashes were computed, and the relationship between PAH content and unburned carbon, determined as loss on ignition (L.O.I.), was examined using regression analysis. The PAH content in biomass ashes varied from 41.1±1.8 to 53,800.9±13,818.4 ng/g dw. This variation may be explained by differences in boiler operating conditions and biomass fuel composition. The correlation coefficients for PAHs in ash ranged from 0.8025 to 0.9790, and the coefficients of determination of the regression models varied from 0.908 to 0.980. The PAH content in ash varied widely with fuel type, and the effect of operating temperature on PAH content was evident. Fly ashes contained higher amounts of PAHs than bottom ashes, and the low molecular weight PAHs prevailed in the tested ashes. An exponential relationship between PAH content and L.O.I. was observed for fly ashes, and a linear relationship for bottom ashes. Copyright © 2016 Elsevier B.V. All rights reserved.
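
    An exponential relationship like the reported PAH-vs-L.O.I. curve is commonly fitted by linear regression on log-transformed concentrations, y = a*exp(b*x); a sketch with invented data (the study's values are not reproduced here):

        import numpy as np

        # Invented (L.O.I. %, total PAH ng/g) pairs, for illustration only.
        loi = np.array([1.0, 2.5, 4.0, 6.0, 9.0, 12.0])
        pah = np.array([60.0, 150.0, 420.0, 1400.0, 9800.0, 52000.0])

        # Fit ln(PAH) = ln(a) + b * LOI, i.e. PAH = a * exp(b * LOI).
        b, ln_a = np.polyfit(loi, np.log(pah), 1)
        a = np.exp(ln_a)
        pred = a * np.exp(b * loi)
        r2 = 1 - np.sum((np.log(pah) - np.log(pred)) ** 2) / np.sum(
            (np.log(pah) - np.log(pah).mean()) ** 2)
        print(f"PAH ~ {a:.1f} * exp({b:.2f} * LOI),  R^2(log) = {r2:.3f}")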

  3. Mission Command in the Age of Network-Enabled Operations: Social Network Analysis of Information Sharing and Situation Awareness

    DTIC Science & Technology

    2016-06-22

    this assumption in a large-scale, 2-week military training exercise. We conducted a social network analysis of email communications among the multi...exponential random graph models challenge the aforementioned assumption, as increased email output was associated with lower individual situation... email links were more commonly formed among members of the command staff with both similar functions and levels of situation awareness, than between

  4. Matrix exponential-based closures for the turbulent subgrid-scale stress tensor.

    PubMed

    Li, Yi; Chevillard, Laurent; Eyink, Gregory; Meneveau, Charles

    2009-01-01

    Two approaches for closing the turbulence subgrid-scale stress tensor in terms of matrix exponentials are introduced and compared. The first approach is based on a formal solution of the stress transport equation in which the production terms can be integrated exactly in terms of matrix exponentials. This formal solution of the subgrid-scale stress transport equation is shown to be useful to explore special cases, such as the response to constant velocity gradient, but neglecting pressure-strain correlations and diffusion effects. The second approach is based on an Eulerian-Lagrangian change of variables, combined with the assumption of isotropy for the conditionally averaged Lagrangian velocity gradient tensor and with the recent fluid deformation approximation. It is shown that both approaches lead to the same basic closure in which the stress tensor is expressed as the matrix exponential of the resolved velocity gradient tensor multiplied by its transpose. Short-time expansions of the matrix exponentials are shown to provide an eddy-viscosity term and particular quadratic terms, and thus allow a reinterpretation of traditional eddy-viscosity and nonlinear stress closures. The basic feasibility of the matrix-exponential closure is illustrated by implementing it successfully in large eddy simulation of forced isotropic turbulence. The matrix-exponential closure employs the drastic approximation of entirely omitting the pressure-strain correlation and other nonlinear scrambling terms. But unlike eddy-viscosity closures, the matrix exponential approach provides a simple and local closure that can be derived directly from the stress transport equation with the production term, and using physically motivated assumptions about Lagrangian decorrelation and upstream isotropy.

  5. Teaching the Verhulst Model: A Teaching Experiment in Covariational Reasoning and Exponential Growth

    ERIC Educational Resources Information Center

    Castillo-Garsow, Carlos

    2010-01-01

    Both Thompson and the duo of Confrey and Smith describe how students might be taught to build "ways of thinking" about exponential behavior by coordinating the covariation of two changing quantities, however, these authors build exponential behavior from different meanings of covariation. Confrey and Smith advocate beginning with discrete additive…

  6. Dual exponential polynomials and linear differential equations

    NASA Astrophysics Data System (ADS)

    Wen, Zhi-Tao; Gundersen, Gary G.; Heittokangas, Janne

    2018-01-01

    We study linear differential equations with exponential polynomial coefficients, where exactly one coefficient is of order greater than all the others. The main result shows that a nontrivial exponential polynomial solution of such an equation has a certain dual relationship with the maximum order coefficient. Several examples illustrate our results and exhibit possibilities that can occur.

  7. On the Matrix Exponential Function

    ERIC Educational Resources Information Center

    Hou, Shui-Hung; Hou, Edwin; Pang, Wan-Kai

    2006-01-01

    A novel and simple formula for computing the matrix exponential function is presented. Specifically, it can be used to derive explicit formulas for the matrix exponential of a general matrix A satisfying p(A) = 0 for a polynomial p(s). It is ready for use in a classroom and suitable for both hand as well as symbolic computation.

  8. Review of "Going Exponential: Growing the Charter School Sector's Best"

    ERIC Educational Resources Information Center

    Garcia, David

    2011-01-01

    This Progressive Policy Institute report argues that charter schools should be expanded rapidly and exponentially. Citing exponential growth organizations, such as Starbucks and Apple, as well as the rapid growth of molds, viruses and cancers, the report advocates for similar growth models for charter schools. However, there is no explanation of…

  9. Reduced Heme Levels Underlie the Exponential Growth Defect of the Shewanella oneidensis hfq Mutant

    PubMed Central

    Mezoian, Taylor; Hunt, Taylor M.; Keane, Meaghan L.; Leonard, Jessica N.; Scola, Shelby E.; Beer, Emma N.; Perdue, Sarah; Pellock, Brett J.

    2014-01-01

    The RNA chaperone Hfq fulfills important roles in small regulatory RNA (sRNA) function in many bacteria. Loss of Hfq in the dissimilatory metal reducing bacterium Shewanella oneidensis strain MR-1 results in slow exponential phase growth and a reduced terminal cell density at stationary phase. We have found that the exponential phase growth defect of the hfq mutant in LB is the result of reduced heme levels. Both heme levels and exponential phase growth of the hfq mutant can be completely restored by supplementing LB medium with 5-aminolevulinic acid (5-ALA), the first committed intermediate synthesized during heme synthesis. Increasing expression of gtrA, which encodes the enzyme that catalyzes the first step in heme biosynthesis, also restores heme levels and exponential phase growth of the hfq mutant. Taken together, our data indicate that reduced heme levels are responsible for the exponential growth defect of the S. oneidensis hfq mutant in LB medium and suggest that the S. oneidensis hfq mutant is deficient in heme production at the 5-ALA synthesis step. PMID:25356668

  10. Universality in stochastic exponential growth.

    PubMed

    Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R

    2014-07-11

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
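
    A two-species version of the stochastic Hinshelwood cycle is easy to simulate with Gillespie's algorithm; in this sketch (rate constants and initial counts assumed, not from the paper), each species catalyses production of the other, and the total population grows exponentially at rate sqrt(k1*k2):

        import numpy as np

        def gillespie_shc(k1=1.0, k2=1.0, x0=(10, 10), t_end=5.0, seed=0):
            """Two-species Hinshelwood cycle: X1 makes X2 at rate k1*X1, and vice versa."""
            rng = np.random.default_rng(seed)
            t, (x1, x2) = 0.0, x0
            times, sizes = [0.0], [x1 + x2]
            while t < t_end:
                r1, r2 = k1 * x1, k2 * x2       # propensities of the two birth reactions
                total = r1 + r2
                t += rng.exponential(1.0 / total)
                if rng.random() < r1 / total:
                    x2 += 1                     # X1 catalyses the birth of an X2
                else:
                    x1 += 1                     # X2 catalyses the birth of an X1
                times.append(t)
                sizes.append(x1 + x2)
            return np.array(times), np.array(sizes)

        t, n = gillespie_shc()
        rate = np.polyfit(t, np.log(n), 1)[0]
        print(f"empirical growth rate ~ {rate:.2f} (theory: sqrt(k1*k2) = 1.00)")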

  11. Stretched-to-compressed-exponential crossover observed in the electrical degradation kinetics of some spinel-metallic screen-printed structures

    NASA Astrophysics Data System (ADS)

    Balitska, V.; Shpotyuk, O.; Brunner, M.; Hadzaman, I.

    2018-02-01

    Thermally-induced (170 °C) degradation-relaxation kinetics is examined in screen-printed structures composed of spinel Cu0.1Ni0.1Co1.6Mn1.2O4 ceramics with conductive Ag or Ag-Pd layered electrodes. Structural inhomogeneities due to Ag and Ag-Pd diffusants in the spinel phase environment play a decisive role in the non-exponential kinetics of the negative relative resistance drift. When Ag migration into the spinel is inhibited by Pd addition through Ag-Pd alloying, the kinetics attains stretched-exponential behavior with an exponent of ∼0.58, typical of one-stage diffusion in structurally-dispersive media. Under deep Ag penetration into the spinel ceramics, as for thick films with Ag-layered electrodes, the degradation kinetics changes drastically, attaining features of a two-step diffusion process governed by a compressed-exponential dependence with a power index of ∼1.68. The crossover from stretched- to compressed-exponential kinetics in spinel-metallic structures is mapped onto the free-energy landscape of a non-barrier multi-well system under strong perturbation from equilibrium, showing a transition with a characteristic downhill scenario resulting in faster-than-exponential decay.
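
    Telling stretched (beta < 1) from compressed (beta > 1) exponential kinetics comes down to fitting f(t) = exp(-(t/tau)^beta); a sketch on synthetic relaxation data (values assumed, not the paper's measurements):

        import numpy as np
        from scipy.optimize import curve_fit

        def kww(t, tau, beta):
            """Kohlrausch-Williams-Watts relaxation function."""
            return np.exp(-(t / tau) ** beta)

        t = np.linspace(0.1, 50.0, 200)
        rng = np.random.default_rng(3)
        data = kww(t, tau=10.0, beta=1.68) + rng.normal(0, 0.01, t.size)  # compressed

        (tau_hat, beta_hat), _ = curve_fit(kww, t, data, p0=(5.0, 1.0))
        kind = "compressed" if beta_hat > 1 else "stretched"
        print(f"tau = {tau_hat:.1f}, beta = {beta_hat:.2f} -> {kind} exponential")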

  12. Universality in Stochastic Exponential Growth

    NASA Astrophysics Data System (ADS)

    Iyer-Biswas, Srividya; Crooks, Gavin E.; Scherer, Norbert F.; Dinner, Aaron R.

    2014-07-01

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.

  13. Software and Critical Technology Protection Against Side-Channel Analysis Through Dynamic Hardware Obfuscation

    DTIC Science & Technology

    2011-03-01

    [Front-matter fragments from the report: figure captions ("resampling a second time", "Plot of RSA bitgroup exponentiation with DAILMOM after a…") and acronym-list entries (DVFS: Dynamic Voltage and Frequency Switching; MDPL: Masked Dual-Rail…).] …algorithms to prevent whole-sale discovery of PINs and other simple methods to prevent employee tampering [5]. In time, cryptographic systems have…

  14. Theory and analysis of statistical discriminant techniques as applied to remote sensing data

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1973-01-01

    Classification of remote earth resources sensing data according to normed exponential density statistics is reported. The use of density models appropriate for several physical situations provides an exact solution for the probabilities of classifications associated with the Bayes discriminant procedure even when the covariance matrices are unequal.

  15. A Numbers Game: Two Case Studies in Teaching Data Journalism

    ERIC Educational Resources Information Center

    Treadwell, Greg; Ross, Tara; Lee, Allan; Lowenstein, Jeff Kelly

    2016-01-01

    Influenced by the practices of social scientists, data journalists seek to create stories that frame social reality through quantitative data analysis. While the use of statistics by journalists is not new, exponential growth in available data and a desire for source material unmediated by political and public-relations framings have seen data…

  16. Oscillatory singular integrals and harmonic analysis on nilpotent groups

    PubMed Central

    Ricci, F.; Stein, E. M.

    1986-01-01

    Several related classes of operators on nilpotent Lie groups are considered. These operators involve the following features: (i) oscillatory factors that are exponentials of imaginary polynomials, (ii) convolutions with singular kernels supported on lower-dimensional submanifolds, (iii) validity in the general context not requiring the existence of dilations that are automorphisms. PMID:16593640

  17. Higher Education Faculty Utilization of Online Technological Tools: A Multilevel Analysis

    ERIC Educational Resources Information Center

    Jackson, Brianne L.

    2017-01-01

    As online learning and the use of online technological tools in higher education continues to grow exponentially, higher education faculty are expected to incorporate these tools into their instruction. However, many faculty members are reluctant to embrace such tools, for a variety of professional and personal reasons. This study employs survey…

  18. Content Analysis of Language-Promoting Teaching Strategies Used in Infant-Directed Media

    ERIC Educational Resources Information Center

    Vaala, Sarah E.; Linebarger, Deborah L.; Fenstermacher, Susan K.; Tedone, Ashley; Brey, Elizabeth; Barr, Rachel; Moses, Annie; Shwery, Clay E.; Calvert, Sandra L.

    2010-01-01

    The number of videos produced specifically for infants and toddlers has grown exponentially in the last decade. Many of these products make educational claims regarding young children's language development. This study explores infant media producer claims regarding language development, and the extent to which these claims reflect different…

  19. Facilities Management in Higher Education: Doing More with Less.

    ERIC Educational Resources Information Center

    Casey, John M.

    This analysis looked at higher education facilities management that, despite exponential growth in responsibilities since the 1960s, has seen reduced resources for operations and maintenance. By extrapolating 1988 data from the National Center for Education Statistics, the review estimated that there are now 3.4 billion square feet of higher…

  20. The Prediction of Teacher Turnover Employing Time Series Analysis.

    ERIC Educational Resources Information Center

    Costa, Crist H.

    The purpose of this study was to combine knowledge of teacher demographic data with time-series forecasting methods to predict teacher turnover. Moving averages and exponential smoothing were used to forecast discrete time series. The study used data collected from the 22 largest school districts in Iowa, designated as FACT schools. Predictions…
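
    Simple exponential smoothing of the kind used in such turnover forecasts is a few lines; a sketch with made-up counts (alpha is the smoothing constant; the districts' actual data are not reproduced here):

        import numpy as np

        def exp_smooth(series, alpha=0.3):
            """Simple exponential smoothing; returns the one-step-ahead forecast."""
            level = series[0]
            for y in series[1:]:
                level = alpha * y + (1 - alpha) * level
            return level

        turnover = np.array([112, 118, 109, 131, 125, 140, 133])  # hypothetical yearly counts
        print(f"next-year forecast: {exp_smooth(turnover):.0f} teachers")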

  1. Informed Conjecturing of Solutions for Differential Equations in a Modeling Context

    ERIC Educational Resources Information Center

    Winkel, Brian

    2015-01-01

    We examine two differential equations. (i) first-order exponential growth or decay; and (ii) second order, linear, constant coefficient differential equations, and show the advantage of learning differential equations in a modeling context for informed conjectures of their solution. We follow with a discussion of the complete analysis afforded by…

  2. Model-based analysis of multi-shell diffusion MR data for tractography: How to get over fitting problems

    PubMed Central

    Jbabdi, Saad; Sotiropoulos, Stamatios N; Savio, Alexander M; Graña, Manuel; Behrens, Timothy EJ

    2012-01-01

    In this article, we highlight an issue that arises when using multiple b-values in a model-based analysis of diffusion MR data for tractography. The non-mono-exponential decay, commonly observed in experimental data, is shown to induce over-fitting in the distribution of fibre orientations when not considered in the model. Extra fibre orientations perpendicular to the main orientation arise to compensate for the slower apparent signal decay at higher b-values. We propose a simple extension to the ball and stick model based on a continuous Gamma distribution of diffusivities, which significantly improves the fitting and reduces the over-fitting. Using in-vivo experimental data, we show that this model outperforms a simpler, noise floor model, especially at the interfaces between brain tissues, suggesting that partial volume effects are a major cause of the observed non-mono-exponential decay. This model may be helpful for future data acquisition strategies that may attempt to combine multiple shells to improve estimates of fibre orientations in white matter and near the cortex. PMID:22334356
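
    The key analytic fact behind the proposed extension is that a Gamma distribution of diffusivities turns the mono-exponential decay exp(-b*D) into the power form (1 + b*theta)^(-k); a sketch verifying this numerically (shape and scale values assumed):

        import numpy as np
        from scipy import integrate, stats

        k, theta = 2.0, 0.5e-3          # Gamma shape/scale for diffusivity (assumed), mm^2/s
        b = np.array([0.0, 500.0, 1000.0, 2000.0, 3000.0])  # b-values, s/mm^2

        # Closed form: E[exp(-b D)] for D ~ Gamma(k, theta) is (1 + b*theta)^(-k).
        closed = (1.0 + b * theta) ** (-k)

        # Numerical check: integrate exp(-b D) against the Gamma density.
        numeric = [integrate.quad(lambda d, bb=bb: np.exp(-bb * d) * stats.gamma.pdf(
            d, a=k, scale=theta), 0, 0.05)[0] for bb in b]

        print(np.allclose(closed, numeric, rtol=1e-5))   # True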

  3. Profiles of lead in urban dust and the effect of the distance to multi-industry in an old heavy industry city in China.

    PubMed

    Yu, Yang; Li, Yingxia; Li, Ben; Shen, Zhenyao; Stenstrom, Michael K

    2017-03-01

    Lead (Pb) concentration in urban dust is often higher than background concentrations and can result in a wide range of health risks to local communities. To understand Pb distribution in urban dust and how multi-industrial activity affects Pb concentration, 21 sampling sites within the heavy industry city of Jilin, China, were analyzed for Pb concentration. Pb concentrations of all 21 urban dust samples from the Jilin City Center were higher than the background concentration for soil in Jilin Province. The analyses show that distance to industry is an important parameter determining health risks associated with Pb in urban dust. The Pb concentration showed an exponential decrease, with increasing distance from industry. Both maximum likelihood estimation and Bayesian analysis were used to estimate the exponential relationship between Pb concentration and distance to multi-industry areas. We found that Bayesian analysis was a better method with less uncertainty for estimating Pb dust concentrations based on their distance to multi-industry, and this approach is recommended for further study. Copyright © 2016. Published by Elsevier Inc.

  4. Optimal Disturbances in Boundary Layers Subject to Streamwise Pressure Gradient

    NASA Technical Reports Server (NTRS)

    Tumin, Anatoli; Ashpis, David E.

    2003-01-01

    Laminar-turbulent transition in shear flows is still an enigma in the area of fluid mechanics. The conventional explanation of the phenomenon is based on the instability of the shear flow with respect to infinitesimal disturbances. The conventional hydrodynamic stability theory deals with the analysis of normal modes that might be unstable. The latter circumstance is accompanied by an exponential growth of the disturbances that might lead to laminar-turbulent transition. Nevertheless, in many cases, the transition scenario bypasses the exponential growth stage associated with the normal modes. This type of transition is called bypass transition. An understanding of the phenomenon has eluded us to this day. One possibility is that bypass transition is associated with so-called algebraic (non-modal) growth of disturbances in shear flows. In the present work, an analysis of the optimal disturbances/streamwise vortices associated with the transient growth mechanism is performed for boundary layers in the presence of a streamwise pressure gradient. The theory will provide the optimal spacing of the control elements in the spanwise direction and their placement in the streamwise direction.

  5. Dynamical analysis for a scalar-tensor model with kinetic and nonminimal couplings

    NASA Astrophysics Data System (ADS)

    Granda, L. N.; Jimenez, D. F.

    We study the autonomous system for a scalar-tensor model of dark energy with nonminimal coupling to curvature and nonminimal kinetic coupling to the Einstein tensor. The critical points describe important stable asymptotic scenarios including quintessence, phantom and de Sitter attractor solutions. Two functional forms for the coupling functions and the scalar potential were considered: power-law and exponential functions of the scalar field. For power-law couplings, the restrictions on stable quintessence and phantom solutions lead to asymptotic freedom regime for the gravitational interaction. For the exponential functions, the stable quintessence, phantom or de Sitter solutions allow asymptotic behaviors where the effective Newtonian coupling can reach either the asymptotic freedom regime or constant value. The phantom solutions could be realized without appealing to ghost degrees of freedom. Transient inflationary and radiation dominated phases can also be described.

  6. Rainbow net analysis of VAXcluster system availability

    NASA Technical Reports Server (NTRS)

    Johnson, Allen M., Jr.; Schoenfelder, Michael A.

    1991-01-01

    A system modeling technique, Rainbow Nets, is used to evaluate the availability and mean-time-to-interrupt of the VAXcluster. These results are compared to the exact analytic results showing that reasonable accuracy is achieved through simulation. The complexity of the Rainbow Net does not increase as the number of processors increases, but remains constant, unlike a Markov model which expands exponentially. The constancy is achieved by using tokens with identity attributes (items) that can have additional attributes associated with them (features) which can exist in multiple states. The time to perform the simulation increases, but this is a polynomial increase rather than exponential. There is no restriction on distributions used for transition firing times, allowing real situations to be modeled more accurately by choosing the distribution which best fits the system performance and eliminating the need for simplifying assumptions.

  7. Thermodynamics and kinetics of the sulfation of porous calcium silicate

    NASA Technical Reports Server (NTRS)

    Miller, R. A.; Kohl, F. J.

    1981-01-01

    The sulfation of plasma sprayed calcium silicate in flowing SO2/air mixtures at 900 and 1000 C was investigated thermogravimetrically. Reaction products were analyzed using electron microprobe and X-ray diffraction analysis techniques, and results were compared with thermodynamic predictions. The percentage, by volume, of SO2 in air was varied between 0.036 and 10 percent. At 10 percent SO2 the weight gain curve displays a concave downward shoulder early in the sulfation process. An analytical model was developed which treats the initial process as one which decays exponentially with increasing time and the subsequent process as one which decays exponentially with increasing weight gain. At lower SO2 levels the initial rate is controlled by the reactant flow rate. At 1100 C and 0.036 percent SO2 there is no reaction, in agreement with thermodynamic predictions.

  8. Empirical analysis of individual popularity and activity on an online music service system

    NASA Astrophysics Data System (ADS)

    Hu, Hai-Bo; Han, Ding-Yi

    2008-10-01

    Quantitative understanding of human behaviors supplies basic comprehension of the dynamics of many socio-economic systems. Based on the log data of an online music service system, we investigate the statistical characteristics of individual activity and popularity, and find that the distributions of both follow a stretched exponential form, which interpolates between the exponential and power-law distributions. We also study the human dynamics on the online system and find that the distribution of interevent times between two consecutive listenings shows a fat-tail feature. Moreover, as user activity decreases, the fat tail becomes increasingly irregular, indicating different behavior patterns for users with diverse activity levels. These results may shed some light on the in-depth understanding of collective behaviors in socio-economic systems.

  9. The dynamics of photoinduced defect creation in amorphous chalcogenides: The origin of the stretched exponential function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitas, R. J.; Shimakawa, K.

    The article discusses the dynamics of photoinduced defect creations (PDC) in amorphous chalcogenides, which is described by the stretched exponential function (SEF), while the well known photodarkening (PD) and photoinduced volume expansion (PVE) are governed only by the exponential function. It is shown that the exponential distribution of the thermal activation barrier produces the SEF in PDC, suggesting that thermal energy, as well as photon energy, is incorporated in PDC mechanisms. The differences in dynamics among three major photoinduced effects (PD, PVE, and PDC) in amorphous chalcogenides are now well understood.

  10. Compact exponential product formulas and operator functional derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, M.

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians. © 1997 American Institute of Physics.

  11. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed to be exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
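
    For the filtered-Poisson model with exponentially distributed amplitudes, the stationary PDF is a Gamma distribution, whose characteristic function is (1 - i*u*s)^(-g). The sketch below (synthetic data, assumed parameters; not the authors' estimator in full) recovers (g, s) by matching the empirical characteristic function, in the spirit of the method described:

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(5)
        g_true, s_true = 2.5, 1.2                      # shape (intermittency) and scale
        x = rng.gamma(g_true, s_true, size=20000)      # stand-in for SOL signal samples

        u = np.linspace(0.1, 2.0, 40)                  # frequencies at which to match the ECF
        ecf = np.array([np.mean(np.exp(1j * uu * x)) for uu in u])

        def resid(p):
            g, s = p
            model = (1.0 - 1j * u * s) ** (-g)         # Gamma characteristic function
            return np.concatenate([(model - ecf).real, (model - ecf).imag])

        fit = least_squares(resid, x0=[1.0, 1.0])
        print(f"g = {fit.x[0]:.2f}, s = {fit.x[1]:.2f}")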

  12. Estimating piecewise exponential frailty model with changing prior for baseline hazard function

    NASA Astrophysics Data System (ADS)

    Thamrin, Sri Astuti; Lawi, Armin

    2016-02-01

    Piecewise exponential models provide a very flexible framework for modelling univariate survival data and can be used to estimate the effects of different covariates on survival. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of parametric baseline hazard. The hazard function for each individual may depend on a set of risk factors or explanatory variables, but not all such variables are known or measurable, and these unobserved variables are important to consider. This unknown and unobservable risk factor in the hazard function is often termed the individual's heterogeneity or frailty. This paper analyses the effects of unobserved population heterogeneity on patients' survival times. The issue of model choice through variable selection is also considered, and a sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov Chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results show that sex and frailty are substantially associated with survival in this study and that the models are quite sensitive to the choice between the two priors.
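
    The backbone of any piecewise exponential model is the interval-specific hazard, whose maximum-likelihood estimate is events divided by person-time at risk within each interval; a sketch (synthetic survival times; the interval cuts are assumed, not from the paper):

        import numpy as np

        rng = np.random.default_rng(11)
        time = rng.exponential(6.0, 300)                # synthetic survival times (months)
        event = rng.random(300) < 0.8                   # True = event observed, False = censored
        cuts = np.array([0.0, 3.0, 6.0, 12.0, np.inf])  # interval boundaries (assumed)

        for lo, hi in zip(cuts[:-1], cuts[1:]):
            # Person-time contributed to [lo, hi) and events occurring inside it.
            exposure = np.clip(np.minimum(time, hi) - lo, 0, None).sum()
            d = np.sum(event & (time >= lo) & (time < hi))
            print(f"[{lo:5.1f}, {hi:5.1f}): lambda_hat = {d / exposure:.4f} per month")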

  13. The Comparison Study of Quadratic Infinite Beam Program on Optimization Instensity Modulated Radiation Therapy Treatment Planning (IMRTP) between Threshold and Exponential Scatter Method with CERR® In The Case of Lung Cancer

    NASA Astrophysics Data System (ADS)

    Hardiyanti, Y.; Haekal, M.; Waris, A.; Haryanto, F.

    2016-08-01

    This research compares quadratic optimization programs for Intensity Modulated Radiation Therapy Treatment Planning (IMRTP) using the Computational Environment for Radiotherapy Research (CERR) software. The treatment plans used 9 and 13 beams, with 6 MV energy and a Source-Skin Distance (SSD) of 100 cm from the target volume. Dose calculation used the Quadratic Infinite Beam (QIB) method from CERR. CERR was used in the comparison between the Gauss primary threshold method and the Gauss primary exponential scatter method. In the lung cancer case, threshold values of 0.01 and 0.004 were used. The output dose distributions were analyzed as dose-volume histograms (DVH) from CERR. With the exponential method and 9 beams, the maximum dose distributions were obtained on the Planning Target Volume (PTV), Clinical Target Volume (CTV), Gross Tumor Volume (GTV), liver, and skin; with the threshold method and 13 beams, the maximum dose distributions were obtained on the PTV, GTV, heart, and skin.

  14. Analysis of non-destructive current simulators of flux compression generators.

    PubMed

    O'Connor, K A; Curry, R D

    2014-06-01

    Development and evaluation of power conditioning systems and high power microwave components often used with flux compression generators (FCGs) require repeated testing and characterization. In an effort to minimize the cost and time required for testing with explosive generators, non-destructive simulators of an FCG's output current have been developed. Flux compression generators and simulators of FCGs are unique pulsed power sources in that the current waveform exhibits a quasi-exponentially increasing rate of current rise. Accurately reproducing the quasi-exponential current waveform of an FCG can be important in designing electroexplosive opening switches and other power conditioning components that are dependent on the integral of current action and the rate of energy dissipation. Three versions of FCG simulators have been developed that include an inductive network with decreasing impedance in time. A primary difference between these simulators is the voltage source driving them. It is shown that a capacitor-inductor-capacitor network driving a constant or decreasing inductive load can produce the desired high-order derivatives of the load current to replicate a quasi-exponential waveform. The operation of the FCG simulators is reviewed and described mathematically for the first time to aid in the design of new simulators. Experimental and calculated results of two recent simulators are reported with recommendations for future designs.

  15. Validation of Reference Genes for Real-Time Quantitative PCR (qPCR) Analysis of Avibacterium paragallinarum.

    PubMed

    Wen, Shuxiang; Chen, Xiaoling; Xu, Fuzhou; Sun, Huiling

    2016-01-01

    Real-time quantitative reverse transcription PCR (qRT-PCR) offers a robust method for measuring gene expression levels. Selection of reliable reference gene(s) for a gene expression study helps reduce variation arising from differing amounts of RNA and cDNA and from the efficiency of the reverse transcriptase or polymerase enzymes. Until now, reference genes identified for other members of the family Pasteurellaceae had not been validated for Avibacterium paragallinarum. The aim of this study was to validate nine reference genes of serovars A, B, and C strains of A. paragallinarum in different growth phases by qRT-PCR. Three of the most widely used statistical algorithms, geNorm, NormFinder and the ΔCT method, were used to evaluate the expression stability of the reference genes. Overall rankings showed that in the exponential and stationary phases of serovar A, the most stable reference genes were gyrA and atpD, respectively; in the exponential and stationary phases of serovar B, atpD and recN, respectively; and in the exponential and stationary phases of serovar C, rpoB and recN, respectively. This study provides recommendations for stable endogenous control genes for use in further studies involving measurement of gene expression levels.

  16. Spacecraft Solar Particle Event (SPE) Shielding: Shielding Effectiveness as a Function of SPE model as Determined with the FLUKA Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Koontz, Steve; Atwell, William; Reddell, Brandon; Rojdev, Kristina

    2010-01-01

    Analysis of both satellite and surface neutron monitor data demonstrates that the widely utilized Exponential model of solar particle event (SPE) proton kinetic energy spectra can seriously underestimate SPE proton flux, especially at the highest kinetic energies. The more recently developed Band model produces better agreement with neutron monitor data for ground level events (GLEs) and is believed to be considerably more accurate at high kinetic energies. Here, we report the results of modeling and simulation studies in which the radiation transport code FLUKA (FLUktuierende KAskade) is used to determine the changes in total ionizing dose (TID) and single-event environments (SEE) behind aluminum, polyethylene, carbon, and titanium shielding masses when the assumed form (i.e., Band or Exponential) of the solar particle event (SPE) kinetic energy spectra is changed. The FLUKA simulations are fully three-dimensional, with an isotropic particle flux incident on a concentric spherical shell shielding mass and detector structure. Effects are reported for both energetic primary protons penetrating the shield mass and secondary particle showers caused by energetic primary protons colliding with shielding mass nuclei. Our results, in agreement with previous studies, show that use of the Exponential form of the event

  17. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
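
    The mimicry is easy to reproduce: draw bout durations from a two-component exponential mixture, then fit a power law by maximum likelihood and check the Kolmogorov-Smirnov distance; a sketch (mixture parameters assumed, not from the study):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        # Bi-exponential bout durations: a short component plus a long, rarer one.
        x = np.where(rng.random(5000) < 0.7,
                     rng.exponential(1.0, 5000), rng.exponential(15.0, 5000))

        xmin = 1.0
        tail = x[x >= xmin]
        alpha = 1.0 + tail.size / np.log(tail / xmin).sum()   # power-law MLE exponent

        # KS distance between the tail and the fitted Pareto (power law).
        D, _ = stats.kstest(tail, stats.pareto(alpha - 1.0, scale=xmin).cdf)
        print(f"alpha = {alpha:.2f}, KS distance = {D:.3f}")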

  18. Macromolecular Rate Theory (MMRT) Provides a Thermodynamics Rationale to Underpin the Convergent Temperature Response in Plant Leaf Respiration

    NASA Astrophysics Data System (ADS)

    Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.

    2017-12-01

    Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R) - the flux of plant respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model whereby the exponent in the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What are the principles that underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. 2016 using MMRT to show this equivalence and thus provide an explanation, based on thermodynamics, for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT when compared to the exponential/polynomial model, and the superiority of both of these models over the Arrhenius function. Three meaningful parameters emerge from MMRT analysis: the temperature at which the rate of respiration is maximum (the so-called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf) and the overall curvature of the log(rate) versus temperature plot (the so-called change in heat capacity for the system, ΔCP‡). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition-state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R are 67.0±1.2 °C and 41.4±0.7 °C across global sites. The average curvature (average ΔCP‡) is -1.2±0.1 kJ mol-1 K-1. MMRT extends the classic transition state theory to enzyme-catalysed reactions and scales up to more complex processes including micro-organism growth rates and ecosystem processes.
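
    For reference, the MMRT rate law behind this analysis (as published by Arcus and colleagues; the notation below is ours) extends transition state theory by letting the activation enthalpy and entropy vary with temperature through ΔCP‡:

        \ln k = \ln\frac{k_B T}{h}
                - \frac{\Delta H^{\ddagger}_{T_0} + \Delta C_P^{\ddagger}(T - T_0)}{R T}
                + \frac{\Delta S^{\ddagger}_{T_0} + \Delta C_P^{\ddagger}\ln(T/T_0)}{R}

    A negative ΔCP‡ makes ln(k) versus T curved, which is what produces both the optimum temperature Topt (where d ln k/dT = 0) and the inflection temperature Tinf.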

  19. Decline of Monarch Butterflies Overwintering in Mexico- Is the Migratory Phenomenon at Risk?

    NASA Technical Reports Server (NTRS)

    Brower, Lincoln; Taylor, Orley R.; Williams, Ernest H.; Slayback, Daniel; Zubieta, Raul R.; Ramirez, M. Isabel

    2012-01-01

    1. During the 2009-2010 overwintering season, and following a 15-year downward trend, the total area in Mexico occupied by the eastern North American population of overwintering monarch butterflies reached an all-time low. Despite an increase, it remained low in 2010-2011. 2. Although the data set is small, the decline in abundance is statistically significant using both linear and exponential regression models. 3. Three factors appear to have contributed to reduced monarch abundance: degradation of the forest in the overwintering areas; the loss of breeding habitat in the United States due to the expansion of GM herbicide-resistant crops, with consequent loss of milkweed host plants, as well as continued land development; and severe weather. 4. This decline calls into question the long-term survival of the monarchs' migratory phenomenon.

  20. Dissipation kinetics of bifenazate in tea under tropical conditions.

    PubMed

    Satheshkumar, Annamalai; Senthurpandian, Velu Kalaipandian; Shanmugaselvan, Veilumuthu Anandham

    2014-02-15

    Field experiments were conducted during April and May of 2011 in Valparai, Coonoor and Gudalur (Tamil Nadu, India) to determine the residues of bifenazate in black tea. From this study, residue levels of bifenazate at different harvest intervals, persistence, dissipation pattern during processing, rate constant and half-life values were calculated. Residues of bifenazate dissipated exponentially after spraying and at Gudalur trial, on the 16th day after application residues were below the maximum residue level of 0.02 mg/kg set by the European Union. However, no residues were detected in the tea brew. Regression lines drawn for bifenazate showed that it followed first order dissipation kinetics. Half-life values varied from 1.03 to 1.36 days for bifenazate and a pre-harvest interval of 16 days is suggested. Copyright © 2013 Elsevier Ltd. All rights reserved.
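
    First-order dissipation makes the reported half-lives directly interconvertible with rate constants (k = ln 2 / t1/2); a sketch computing k and the time to decay below the EU MRL from an assumed day-0 deposit (the paper's actual initial residues are not reproduced here):

        import numpy as np

        t_half = np.array([1.03, 1.36])          # reported half-life range, days
        k = np.log(2) / t_half                   # first-order rate constants, per day
        print("k =", np.round(k, 2), "per day")

        c0, mrl = 2.0, 0.02                      # assumed initial residue and EU MRL, mg/kg
        days = np.log(c0 / mrl) / k              # C(t) = c0 * exp(-k t) reaches the MRL at t
        print("days to reach MRL:", np.round(days, 1))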
