Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model
NASA Astrophysics Data System (ADS)
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-05-01
Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area, which is subject to great uncertainty when the disease is rare or the geographical area is small. Bayesian models, or statistical smoothing based on the log-normal model, are therefore introduced to address this weakness of the SMR. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method, and that it can overcome the SMR problem when no bladder cancer case is observed in an area.
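As a side illustration (not the authors' WinBUGS code), the sketch below computes the classical SMR for a handful of hypothetical areas together with a crude empirical-Bayes-style log-normal shrinkage estimate, showing how areas with zero observed cases, where the SMR breaks down, still receive a finite smoothed relative risk. All counts and tuning choices are made up.

```python
import numpy as np

# Hypothetical observed (O) and expected (E) counts per area.
O = np.array([0, 2, 5, 1, 12, 3])
E = np.array([1.4, 2.1, 4.0, 0.8, 9.5, 2.7])

# Classical SMR: zero (and highly unstable) when O is 0 or E is small.
smr = O / E

# Crude log-normal shrinkage: pull each area's log relative risk towards the
# overall mean, a rough stand-in for the full Bayesian model fitted in WinBUGS.
log_rr = np.log(np.where(O > 0, O, np.nan) / E)
mu, tau2 = np.nanmean(log_rr), np.nanvar(log_rr)
sigma2 = 1.0 / np.maximum(O, 1)              # rough sampling variance of log(SMR)
w = tau2 / (tau2 + sigma2)                   # shrinkage weight per area
smoothed = np.exp(w * np.nan_to_num(log_rr, nan=mu) + (1 - w) * mu)

print("SMR:     ", np.round(smr, 2))
print("smoothed:", np.round(smoothed, 2))    # areas with O = 0 get a finite estimate
```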
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2018-05-01
In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution, thereby avoiding possible deviations in solutions caused by unrealistic parameter assumptions. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
On the generation of log-Lévy distributions and extreme randomness
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2011-10-01
The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution; they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of a deterministic underlying setting, and the latter in the case of a stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot's extreme randomness.
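A minimal simulation of the multiplicative-process argument (a generic illustration, not the authors' growth model): the product of many independent positive factors has an approximately normal logarithm by the CLT, and hence is approximately log-normal. The factor distribution below is deliberately non-log-normal so the result is not trivial.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Each trajectory is the product of n independent positive growth factors
# drawn from a (deliberately non-log-normal) uniform distribution.
n, trajectories = 200, 10_000
factors = rng.uniform(0.9, 1.1, size=(trajectories, n))
X = factors.prod(axis=1)

# By the CLT applied to the log-factors, log(X) is approximately normal,
# i.e. X is approximately log-normal.
logX = np.log(X)
print(stats.kstest(logX, "norm", args=(logX.mean(), logX.std())))
```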
Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V
2016-08-12
Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
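A sketch of this kind of simulation under simplifying assumptions: a one-compartment oral-absorption model with log-normal between-subject variability in the rate constants and multiplicative measurement error. The paper's two-stage models, parameter ranges, and error structures may differ, and the dose, volume, and rate constants below are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = np.linspace(0.25, 24, 30)                 # sampling times (h)
n_subjects, dose, vol = 5000, 100.0, 10.0     # assumed dose (mg) and volume (L)

# Stage 1: subject-level PK parameters with log-normal between-subject variability.
ka = rng.lognormal(np.log(1.0), 0.3, n_subjects)   # absorption rate constant (1/h)
ke = rng.lognormal(np.log(0.2), 0.3, n_subjects)   # elimination rate constant (1/h)

auc = np.empty(n_subjects)
cmax = np.empty(n_subjects)
for i in range(n_subjects):
    conc = dose / vol * ka[i] / (ka[i] - ke[i]) * (np.exp(-ke[i] * t) - np.exp(-ka[i] * t))
    conc = conc * rng.lognormal(0.0, 0.1, t.size)  # stage 2: multiplicative error
    auc[i] = np.sum((conc[1:] + conc[:-1]) / 2 * np.diff(t))   # trapezoidal AUC
    cmax[i] = conc.max()

# Departures from normality of the log-transformed endpoints.
for name, x in [("log(AUC)", np.log(auc)), ("log(Cmax)", np.log(cmax))]:
    print(f"{name}: skewness={stats.skew(x):.3f}, excess kurtosis={stats.kurtosis(x):.3f}")
```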
Comparison of Survival Models for Analyzing Prognostic Factors in Gastric Cancer Patients
Habibi, Danial; Rafiei, Mohammad; Chehrei, Ali; Shayan, Zahra; Tafaqodi, Soheil
2018-03-27
Objective: There are a number of models for determining risk factors for survival of patients with gastric cancer. This study was conducted to select the model showing the best fit with available data. Methods: Cox regression and parametric models (Exponential, Weibull, Gompertz, Log normal, Log logistic and Generalized Gamma) were utilized in unadjusted and adjusted forms to detect factors influencing mortality of patients. Comparisons were made with the Akaike Information Criterion (AIC) using STATA 13 and R 3.1.3 software. Results: The results of this study indicated that all parametric models outperform the Cox regression model. The Log normal, Log logistic and Generalized Gamma provided the best performance in terms of AIC values (179.2, 179.4 and 181.1, respectively). On unadjusted analysis, the results of the Cox regression and parametric models indicated stage, grade, largest diameter of metastatic nest, largest diameter of LM, number of involved lymph nodes and the largest ratio of metastatic nests to lymph nodes, to be variables influencing the survival of patients with gastric cancer. On adjusted analysis, according to the best model (log normal), grade was found to be the significant variable. Conclusion: The results suggested that all parametric models outperform the Cox model. The log normal model provides the best fit and is a good substitute for Cox regression.
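A minimal sketch of the model-selection step, comparing AIC values for several candidate parametric survival distributions fitted with SciPy to uncensored, simulated survival times. The study's actual models are regression models with covariates and censoring fitted in STATA/R, so this only illustrates how the AIC comparison works.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
times = rng.lognormal(mean=3.0, sigma=0.8, size=120)   # illustrative survival times (months)

candidates = {
    "exponential": stats.expon,
    "Weibull": stats.weibull_min,
    "log normal": stats.lognorm,
    "log logistic": stats.fisk,
    "generalized gamma": stats.gengamma,
}

for name, dist in candidates.items():
    params = dist.fit(times, floc=0)                   # fix the location parameter at 0
    loglik = dist.logpdf(times, *params).sum()
    k = len(params) - 1                                # loc was fixed, not estimated
    aic = 2 * k - 2 * loglik                           # lower AIC = better trade-off
    print(f"{name:18s} AIC = {aic:8.1f}")
```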
Log-Normal Turbulence Dissipation in Global Ocean Models
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor
2018-03-01
Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality—robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
A log-sinh transformation for data normalization and variance stabilization
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.
2012-05-01
When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
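A sketch of the transformation and its inverse, assuming the commonly cited log-sinh form z = (1/b)·log(sinh(a + b·y)); the parameter values below are arbitrary, whereas in practice a and b are estimated jointly with the error model.

```python
import numpy as np

def log_sinh(y, a, b):
    """Log-sinh transform z = (1/b) * log(sinh(a + b*y)) (assumed functional form)."""
    return np.log(np.sinh(a + b * y)) / b

def inv_log_sinh(z, a, b):
    """Inverse transform y = (arcsinh(exp(b*z)) - a) / b."""
    return (np.arcsinh(np.exp(b * z)) - a) / b

# Illustration with arbitrary parameter values.
a, b = 0.1, 0.02
y = np.linspace(0.5, 200.0, 5)            # e.g. streamflow predictions
z = log_sinh(y, a, b)
print(np.allclose(inv_log_sinh(z, a, b), y))   # round-trip check -> True
```

The round-trip check matters in practice because predictions made in the transformed (approximately normal, constant-variance) space must be back-transformed to the original scale.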
Predicting clicks of PubMed articles.
Mao, Yuqing; Lu, Zhiyong
2013-01-01
Predicting the popularity or access usage of an article has the potential to improve the quality of PubMed searches. We can model the click trend of each article as its access changes over time by mining the PubMed query logs, which contain the previous access history for all articles. In this article, we examine the access patterns produced by PubMed users in two years (July 2009 to July 2011). We explore the time series of accesses for each article in the query logs, model the trends with regression approaches, and subsequently use the models for prediction. We show that the click trends of PubMed articles are best fitted with a log-normal regression model. This model allows the number of accesses an article receives and the time since it first becomes available in PubMed to be related via quadratic and logistic functions, with the model parameters to be estimated via maximum likelihood. Our experiments predicting the number of accesses for an article based on its past usage demonstrate that the mean absolute error and mean absolute percentage error of our model are 4.0% and 8.1% lower than the power-law regression model, respectively. The log-normal distribution is also shown to perform significantly better than a previous prediction method based on a human memory theory in cognitive science. This work warrants further investigation on the utility of such a log-normal regression approach towards improving information access in PubMed.
NASA Astrophysics Data System (ADS)
Matsubara, Yoshitsugu; Musashi, Yasuo
2017-12-01
The purpose of this study is to explain fluctuations in email size. We have previously investigated the long-term correlations between email send requests and data flow in the system log of the primary staff email server at a university campus, finding that email size frequency follows a power-law distribution with two inflection points, and that the power-law property weakens the correlation of the data flow. However, the mechanism underlying this fluctuation is not completely understood. We collected new log data from both staff and students over six academic years and analyzed the frequency distribution thereof, focusing on the type of content contained in the emails. Furthermore, we obtained permission to collect "Content-Type" log data from the email headers. We therefore collected the staff log data from May 1, 2015 to July 31, 2015, creating two subdistributions. In this paper, we propose a model to explain these subdistributions, which follow log-normal-like distributions. In the log-normal-like model, email senders, consciously or unconsciously, regulate the size of new email sentences according to a normal distribution. The fitting of the model is acceptable for these subdistributions, and the model demonstrates power-law properties for large email sizes. An analysis of the length of new email sentences would be required for further discussion of our model; however, to protect user privacy at the participating organization, we left this analysis for future work. This study provides new knowledge on the properties of email sizes, and our model is expected to contribute to the decision on whether to establish upper size limits in the design of email services.
Distribution of transvascular pathway sizes through the pulmonary microvascular barrier.
McNamee, J E
1987-01-01
Mathematical models of solute and water exchange in the lung have been helpful in understanding factors governing the volume flow rate and composition of pulmonary lymph. As experimental data and models become more encompassing, parameter identification becomes more difficult. Pore sizes in these models should approach and eventually become equivalent to actual physiological pathway sizes as more complex and accurate models are tried. However, pore sizes and numbers vary from model to model as new pathway sizes are added. This apparent inconsistency of pore sizes can be explained if it is assumed that the pulmonary blood-lymph barrier is widely heteroporous, for example, being composed of a continuous distribution of pathway sizes. The sieving characteristics of the pulmonary barrier are reproduced by a log normal distribution of pathway sizes (log mean = -0.20, log s.d. = 1.05). A log normal distribution of pathways in the microvascular barrier is shown to follow from a rather general assumption about the nature of the pulmonary endothelial junction.
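A small sketch of what the reported log-normal pathway-size distribution implies, sampling pathway sizes with the stated parameters (log mean = -0.20, log s.d. = 1.05). Treating these as natural-log parameters is an assumption here; base-10 parameters would need to be rescaled by ln(10).

```python
import numpy as np

rng = np.random.default_rng(3)

# Reported parameters of the log-normal pathway-size distribution
# (assumed here to be on the natural-log scale).
log_mean, log_sd = -0.20, 1.05

sizes = rng.lognormal(mean=log_mean, sigma=log_sd, size=100_000)
print("median size (analytical):", np.exp(log_mean))
print("mean size (analytical):  ", np.exp(log_mean + log_sd**2 / 2))
print("mean size (sampled):     ", sizes.mean())
```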
A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data
Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence
2013-01-01
Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
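A minimal sketch of the Poisson log-normal idea the method builds on: correlated Gaussian latent variables on the log scale drive Poisson counts, which produces inter-sample variance larger than the mean. The hierarchical structure and the Lasso penalty on the precision matrix used for the actual network inference are not reproduced; the mean vector and covariance matrix below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 4, 500                                   # genes, samples

# Latent Gaussian layer: off-diagonal entries of Sigma encode the "network".
mu = np.array([2.0, 1.5, 1.0, 2.5])
Sigma = np.array([[0.30, 0.15, 0.00, 0.00],
                  [0.15, 0.30, 0.00, 0.00],
                  [0.00, 0.00, 0.30, 0.10],
                  [0.00, 0.00, 0.10, 0.30]])
Z = rng.multivariate_normal(mu, Sigma, size=n)

# Observed layer: counts are Poisson given the latent log-means.
counts = rng.poisson(np.exp(Z))

# Overdispersion: the variance exceeds the mean for each gene.
print("means:    ", counts.mean(axis=0).round(1))
print("variances:", counts.var(axis=0).round(1))
```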
WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarpelli, M; Eickhoff, J; Perlman, S
Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given the optimal parameter was close to zero (which corresponds to log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
Neti, Prasad V.S.V.; Howell, Roger W.
2008-01-01
Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports a detailed statistical analysis of these data. Methods: The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson – log normal (P – LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P – LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316
Improvement of Reynolds-Stress and Triple-Product Lag Models
NASA Technical Reports Server (NTRS)
Olsen, Michael E.; Lillard, Randolph P.
2017-01-01
The Reynolds-stress and triple-product Lag models were created with a normal stress distribution defined by a 4:3:2 distribution of streamwise, spanwise and wall-normal stresses, and a ratio of r_w = 0.3k in the log layer region of high Reynolds number flat plate flow, which implies R11+ = 4/((9/2)×0.3) ≈ 2.96. More recent measurements show a more complex picture of the log layer region at high Reynolds numbers. The first cut at improving these models, along with the direction for future refinements, is described. Comparison with recent high Reynolds number data shows areas where further work is needed, but also shows that inclusion of the modeled turbulent transport terms improves the prediction where they influence the solution. Additional work is needed to make the model better match experiment, but there is significant improvement in many of the details of the log layer behavior.
Stochastic modelling of non-stationary financial assets
NASA Astrophysics Data System (ADS)
Estevens, Joana; Rocha, Paulo; Boto, João P.; Lind, Pedro G.
2017-11-01
We model non-stationary volume-price distributions with a log-normal distribution and collect the time series of its two parameters. The time series of the two parameters are shown to be stationary and Markov-like and consequently can be modelled with Langevin equations, which are derived directly from their series of values. Having the evolution equations of the log-normal parameters, we reconstruct the statistics of the first moments of volume-price distributions which fit well the empirical data. Finally, the proposed framework is general enough to study other non-stationary stochastic variables in other research fields, namely, biology, medicine, and geology.
NASA Astrophysics Data System (ADS)
Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto
2013-08-01
In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.
NASA Astrophysics Data System (ADS)
Mohsin, Muhammad; Mu, Yongtong; Memon, Aamir Mahmood; Kalhoro, Muhammad Talib; Shah, Syed Baber Hussain
2017-07-01
Pakistani marine waters are under an open access regime. Due to poor management and policy implications, blind fishing continues, which may result in ecological as well as economic losses. Thus, it is of utmost importance to estimate fishery resources before harvesting. In this study, catch and effort data, 1996-2009, of the Kiddi shrimp Parapenaeopsis stylifera fishery from Pakistani marine waters were analyzed using specialized fishery software in order to know the fishery stock status of this commercially important shrimp. Maximum, minimum and average capture production of P. stylifera were observed as 15 912 metric tons (mt) (1997), 9 438 mt (2009) and 11 667 mt/a. Two stock assessment tools, CEDA (catch and effort data analysis) and ASPIC (a stock production model incorporating covariates), were used to compute the MSY (maximum sustainable yield) of this organism. In CEDA, three surplus production models, Fox, Schaefer and Pella-Tomlinson, along with three error assumptions, log, log normal and gamma, were used. For initial proportion (IP) 0.8, the Fox model computed MSY as 6 858 mt (CV=0.204, R²=0.709) and 7 384 mt (CV=0.149, R²=0.72) for the log and log normal error assumptions, respectively. Here, the gamma error assumption produced minimization failure. The Schaefer and Pella-Tomlinson models gave the same estimated MSY for the log, log normal and gamma error assumptions, i.e. 7 083 mt, 8 209 mt and 7 242 mt, respectively. The Schaefer results showed the highest goodness-of-fit R² value (0.712). ASPIC computed the MSY, CV, R², FMSY and BMSY parameters for the Fox model as 7 219 mt, 0.142, 0.872, 0.111 and 65 280, while for the Logistic model the computed values were 7 720 mt, 0.148, 0.868, 0.107 and 72 110, respectively. The results obtained show that P. stylifera has been overexploited. Immediate steps are needed to conserve this fishery resource for the future, and research on other species of commercial importance is urgently needed.
Bellin, Alberto; Tonina, Daniele
2007-10-30
Available models of solute transport in heterogeneous formations do not provide a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that under the hypothesis of statistical stationarity leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and for the first time shows the superiority of the Beta model over both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
NASA Astrophysics Data System (ADS)
Irawan, R.; Yong, B.; Kristiani, F.
2017-02-01
Bandung, one of the cities in Indonesia, is vulnerable to dengue disease in both its early stage (Dengue Fever) and severe stage (Dengue Haemorrhagic Fever and Dengue Shock Syndrome). In 2013, there were 5,749 patients in Bandung, and 2,032 of the patients were hospitalized in Santo Borromeus Hospital. In this paper, two models, the Poisson-gamma and the log-normal model, are used with Bayesian inference to estimate the relative risk. The calculation is done by the Markov Chain Monte Carlo method, i.e. simulation using the Gibbs Sampling algorithm in the WinBUGS 1.4.3 software. The analysis of dengue disease in 30 sub-districts of Bandung in 2013, based on Santo Borromeus Hospital’s data, shows that the Coblong and Bandung Wetan sub-districts had the highest relative risk under both models for the early stage, the severe stage, and all stages. Meanwhile, the Cinambo sub-district had the lowest relative risk under both models for the severe stage and all stages, and the Bojongloa Kaler sub-district had the lowest relative risk under both models for the early stage. In the model comparison using the DIC (Deviance Information Criterion) method, the log-normal model is the better model for the early stage and the severe stage, but for all stages the Poisson-gamma model is the better model and fits the data better.
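As a sanity check on the Poisson-gamma half of the comparison, that model has a closed-form relative-risk posterior by Poisson-gamma conjugacy, so one can verify roughly what Gibbs sampling in WinBUGS should return; the counts and the Gamma(a, b) prior below are made up, and the log-normal model (which genuinely requires MCMC) is not reproduced.

```python
import numpy as np

# Hypothetical observed and expected dengue counts for a few sub-districts.
O = np.array([34, 5, 0, 120])
E = np.array([28.0, 7.5, 2.1, 95.0])

# Gamma(a, b) prior on the relative risk theta_i; with O_i ~ Poisson(theta_i * E_i),
# conjugacy gives the posterior Gamma(a + O_i, b + E_i).
a, b = 1.0, 1.0
post_mean = (a + O) / (b + E)
post_sd = np.sqrt(a + O) / (b + E)

print("SMR:           ", np.round(O / E, 2))
print("posterior mean:", np.round(post_mean, 2))   # shrunk towards a/b, finite even when O = 0
print("posterior sd:  ", np.round(post_sd, 2))
```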
Applying the log-normal distribution to target detection
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
1992-09-01
Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log normal distribution appeared reasonable because nearly all visual psychological data is plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values; an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
Measuring Resistance to Change at the Within-Session Level
Tonneau, François; Ríos, Américo; Cabrera, Felipe
2006-01-01
Resistance to change is often studied by measuring response rate in various components of a multiple schedule. Response rate in each component is normalized (that is, divided by its baseline level) and then log-transformed. Differential resistance to change is demonstrated if the normalized, log-transformed response rate in one component decreases more slowly than in another component. A problem with normalization, however, is that it can produce artifactual results if the relation between baseline level and disruption is not multiplicative. One way to address this issue is to fit specific models of disruption to untransformed response rates and evaluate whether or not a multiplicative model accounts for the data. Here we present such a test of resistance to change, using within-session response patterns in rats as a data base for fitting models of disruption. By analyzing response rate at a within-session level, we were able to confirm a central prediction of the resistance-to-change framework while discarding normalization artifacts as a plausible explanation of our results. PMID:16903495
Investigation into the performance of different models for predicting stutter.
Bright, Jo-Anne; Curran, James M; Buckleton, John S
2013-07-01
In this paper we have examined five possible models for the behaviour of the stutter ratio, SR. These were two log-normal models, two gamma models, and a two-component normal mixture model. The two-component normal mixture model was chosen with different behaviours of variance: at each locus SR was described with two distributions, both with the same mean. The distributions have different variances: one for the majority of the observations and a second for the less well-behaved ones. We apply each model to a set of known single source Identifiler™, NGM SElect™ and PowerPlex® 21 DNA profiles to show the applicability of our findings to different data sets. SR determined from the single source profiles were compared to the calculated SR after application of the models. The model performance was tested by calculating the log-likelihoods and comparing the difference in Akaike information criterion (AIC). The two-component normal mixture model systematically outperformed all others, despite the increase in the number of parameters. This model, as well as performing well statistically, has intuitive appeal for forensic biologists and could be implemented in an expert system with a continuous method for DNA interpretation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
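A sketch of fitting a two-component normal mixture with a shared mean and two variances by direct maximum likelihood, which is the structure described for the best-performing model; the stutter-ratio values here are simulated, and the paper fits such models per locus and compares AIC across all five candidates.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
# Simulated stutter ratios: most observations tight around the mean, a few noisier.
sr = np.concatenate([rng.normal(0.08, 0.01, 450), rng.normal(0.08, 0.03, 50)])

def neg_log_lik(params, x):
    mu, log_s1, log_s2, logit_w = params
    s1, s2 = np.exp(log_s1), np.exp(log_s2)        # two variances, one shared mean
    w = 1.0 / (1.0 + np.exp(-logit_w))             # mixing weight of the tight component
    dens = w * stats.norm.pdf(x, mu, s1) + (1 - w) * stats.norm.pdf(x, mu, s2)
    return -np.sum(np.log(dens))

res = optimize.minimize(neg_log_lik, x0=[0.08, np.log(0.01), np.log(0.05), 2.0],
                        args=(sr,), method="Nelder-Mead")
mu, s1, s2 = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
w = 1.0 / (1.0 + np.exp(-res.x[3]))
aic = 2 * 4 + 2 * res.fun                          # 4 free parameters
print(f"mu={mu:.4f}  sd1={s1:.4f}  sd2={s2:.4f}  w={w:.3f}  AIC={aic:.1f}")
```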
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
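A sketch of the data "explosion" step the abstract refers to: each subject's follow-up time is split at the chosen piece boundaries, after which a Poisson model with an offset of log(exposure) and a normal random intercept per cluster can be fitted to the expanded records. The %PCFrailty SAS macro automates this; the column names and cut points below are illustrative.

```python
import numpy as np
import pandas as pd

def explode(df, cuts):
    """Split each (time, event) record at the interval boundaries in `cuts`."""
    rows = []
    edges = np.concatenate([[0.0], np.asarray(cuts, float)])
    for _, r in df.iterrows():
        for lo, hi in zip(edges[:-1], edges[1:]):
            if r.time <= lo:
                break                                  # follow-up ended before this piece
            exposure = min(r.time, hi) - lo            # time at risk within the piece
            died_here = int(r.event == 1 and r.time <= hi)
            rows.append({"id": r.id, "cluster": r.cluster, "piece": f"({lo},{hi}]",
                         "exposure": exposure, "event": died_here})
    return pd.DataFrame(rows)

surv = pd.DataFrame({"id": [1, 2, 3], "cluster": [1, 1, 2],
                     "time": [2.5, 7.0, 4.2], "event": [1, 0, 1]})
pieces = explode(surv, cuts=[2, 4, 6, 8])
print(pieces)
# `pieces` can now be analysed as event ~ piece + covariates + (1 | cluster)
# with a Poisson likelihood and offset = log(exposure).
```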
Evaluation of waste mushroom logs as a potential biomass resource for the production of bioethanol.
Lee, Jae-Won; Koo, Bon-Wook; Choi, Joon-Weon; Choi, Don-Ha; Choi, In-Gyu
2008-05-01
In order to investigate the possibility of using waste mushroom logs as a biomass resource for alternative energy production, the chemical and physical characteristics of normal wood and waste mushroom logs were examined. Size reduction of normal wood (145 kWh/tonne) required significantly higher energy consumption than that of waste mushroom logs (70 kWh/tonne). The crystallinity value of waste mushroom logs was dramatically lower (33%) than that of normal wood (49%) after cultivation with Lentinus edodes as spawn. Lignin, an enzymatic hydrolysis inhibitor in sugar production, decreased from 21.07% to 18.78% after inoculation with L. edodes. Total sugar yields obtained by enzyme and acid hydrolysis were higher in waste mushroom logs than in normal wood. After 24 h of fermentation, 12 g/L ethanol was produced from waste mushroom logs, while normal wood produced 8 g/L ethanol. These results indicate that waste mushroom logs are an economically suitable lignocellulosic material for the production of fermentable sugars related to bioethanol production.
Determining prescription durations based on the parametric waiting time distribution.
Støvring, Henrik; Pottegård, Anton; Hallas, Jesper
2016-12-01
The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.
Application of Poisson random effect models for highway network screening.
Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer
2014-02-01
In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data became popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of the crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of different approaches. These tests include the previously developed site consistence test, method consistence test, total rank difference test, and the modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in the fitting of crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.
Performance of statistical models to predict mental health and substance abuse cost.
Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K
2006-10-26
Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model among the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best. This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
SU-E-T-664: Radiobiological Modeling of Prophylactic Cranial Irradiation in Mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D; Debeb, B; Woodward, W
Purpose: Prophylactic cranial irradiation (PCI) is a clinical technique used to reduce the incidence of brain metastasis and improve overall survival in select patients with ALL and SCLC, and we have shown the potential of PCI in select breast cancer patients through a mouse model (manuscript in preparation). We developed a computational model using our experimental results to demonstrate the advantage of treating brain micro-metastases early. Methods: MATLAB was used to develop the computational model of brain metastasis and PCI in mice. The number of metastases per mouse and the volume of metastases from four- and eight-week endpoints were fit to normal and log-normal distributions, respectively. Model input parameters were optimized so that model output would match the experimental number of metastases per mouse. A limiting dilution assay was performed to validate the model. The effect of radiation at different time points was computationally evaluated through the endpoints of incidence, number of metastases, and tumor burden. Results: The correlation between experimental number of metastases per mouse and the Gaussian fit was 87% and 66% at the two endpoints. The experimental volumes and the log-normal fit had correlations of 99% and 97%. In the optimized model, the correlation between number of metastases per mouse and the Gaussian fit was 96% and 98%. The log-normal volume fit and the model agree 100%. The model was validated by a limiting dilution assay, where the correlation was 100%. The model demonstrates that cells are very sensitive to radiation at early time points, and delaying treatment introduces a threshold dose at which point the incidence and number of metastases decline. Conclusion: We have developed a computational model of brain metastasis and PCI in mice that is highly correlated to our experimental data. The model shows that early treatment of subclinical disease is highly advantageous.
NASA Astrophysics Data System (ADS)
Faruk, Alfensi
2018-03-01
Survival analysis is a branch of statistics focused on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular approach for analyzing the effects of several covariates on the survival time. However, the assumption of constant hazards in the PH model is not always satisfied by the data. The violation of the PH assumption leads to misinterpretation of the estimation results and decreases the power of the related statistical tests. On the other hand, the accelerated failure time (AFT) models do not assume constant hazards in the survival data as the PH model does. The AFT models, moreover, can be used as an alternative to the PH model if the constant hazards assumption is violated. The objective of this research was to compare the performance of the PH model and the AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. In this work, the discussion was limited to three AFT models, based on the Weibull, exponential, and log-normal distributions. The analysis using a graphical approach and a statistical test showed that non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate model among the considered models. Results of the best fitted model (log-normal AFT model) showed that covariates such as women’s educational level, husband’s educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among the factors affecting the FBI in Indonesia.
Crépet, Amélie; Albert, Isabelle; Dervin, Catherine; Carlin, Frédéric
2007-01-01
A normal distribution and a mixture model of two normal distributions in a Bayesian approach using prevalence and concentration data were used to establish the distribution of contamination of the food-borne pathogenic bacteria Listeria monocytogenes in unprocessed and minimally processed fresh vegetables. A total of 165 prevalence studies, including 15 studies with concentration data, were taken from the scientific literature and from technical reports and used for statistical analysis. The predicted mean of the normal distribution of the logarithms of viable L. monocytogenes per gram of fresh vegetables was −2.63 log viable L. monocytogenes organisms/g, and its standard deviation was 1.48 log viable L. monocytogenes organisms/g. These values were determined by considering one contaminated sample in prevalence studies in which samples are in fact negative. This deliberate overestimation is necessary to complete calculations. With the mixture model, the predicted mean of the distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −3.38 log viable L. monocytogenes organisms/g and its standard deviation was 1.46 log viable L. monocytogenes organisms/g. The probabilities of fresh unprocessed and minimally processed vegetables being contaminated with concentrations higher than 1, 2, and 3 log viable L. monocytogenes organisms/g were 1.44, 0.63, and 0.17%, respectively. Introducing a sensitivity rate of 80 or 95% in the mixture model had a small effect on the estimation of the contamination. In contrast, introducing a low sensitivity rate (40%) resulted in marked differences, especially for high percentiles. There was a significantly lower estimation of contamination in the papers and reports of 2000 to 2005 than in those of 1988 to 1999 and a lower estimation of contamination of leafy salads than that of sprouts and other vegetables. The interest of the mixture model for the estimation of microbial contamination is discussed. PMID:17098926
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
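A sketch checking the equal-variance binormal expression against simulation: with common standard deviation sigma and mean difference delta between those with and without the condition, the c-statistic is Phi(delta/(sigma*sqrt(2))), which can equivalently be written as Phi(beta*sigma/sqrt(2)) with beta = delta/sigma^2 the implied log-odds ratio. This is the standard binormal AUC result; the paper's exact parameterization may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
sigma, delta = 1.5, 0.9                      # common SD and mean difference between groups
beta = delta / sigma**2                      # implied log-odds ratio per unit of x

# Closed-form binormal c-statistic (equal variances).
c_formula = stats.norm.cdf(delta / (sigma * np.sqrt(2)))
c_via_beta = stats.norm.cdf(beta * sigma / np.sqrt(2))   # same value, via beta * sigma

# Empirical c-statistic: probability that a randomly chosen case outranks a control.
x0 = rng.normal(0.0, sigma, 2000)            # those without the condition
x1 = rng.normal(delta, sigma, 2000)          # those with the condition
c_empirical = (x1[:, None] > x0[None, :]).mean()

print(f"formula: {c_formula:.4f}  via beta: {c_via_beta:.4f}  empirical: {c_empirical:.4f}")
```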
NASA Astrophysics Data System (ADS)
Alimi, Isiaka; Shahpari, Ali; Ribeiro, Vítor; Sousa, Artur; Monteiro, Paulo; Teixeira, António
2017-05-01
In this paper, we present experimental results on the channel characterization of a single input single output (SISO) free-space optical (FSO) communication link, based on channel measurements. The histograms of the FSO channel samples and the log-normal distribution fittings are presented along with the measured scintillation index. Furthermore, we extend our studies to diversity schemes and propose a closed-form expression for determining the ergodic channel capacity of multiple input multiple output (MIMO) FSO communication systems over atmospheric turbulence fading channels. The proposed empirical model is based on the SISO FSO channel characterization. Also, the scintillation effects on the system performance are analyzed and results for different turbulence conditions are presented. Moreover, we observed that the histograms of the FSO channel samples that we collected from a 1548.51 nm link have good fits with log-normal distributions, and the proposed model for MIMO FSO channel capacity is in conformity with the simulation results in terms of normalized mean-square error (NMSE).
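A minimal sketch of the SISO characterization step: compute the scintillation index of received-irradiance samples and fit a log-normal distribution to them. The samples below are simulated stand-ins, not the measured 1548.51 nm link data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated irradiance samples standing in for measured channel samples.
I = rng.lognormal(mean=0.0, sigma=0.25, size=20_000)

# Scintillation index: normalized variance of the irradiance.
si = I.var() / I.mean() ** 2
print(f"scintillation index = {si:.3f}")

# Log-normal fit of the samples (location fixed at zero).
shape, loc, scale = stats.lognorm.fit(I, floc=0)
print(f"fitted sigma = {shape:.3f}, fitted mu = {np.log(scale):.3f}")
```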
Best Statistical Distribution of flood variables for Johor River in Malaysia
NASA Astrophysics Data System (ADS)
Salarpour Goodarzi, M.; Yusop, Z.; Yusof, F.
2012-12-01
A complex flood event is always characterized by a few characteristics such as flood peak, flood volume, and flood duration, which might be mutually correlated. This study explored the statistical distribution of peakflow, flood duration and flood volume at the Rantau Panjang gauging station on the Johor River in Malaysia. Hourly data were recorded for 45 years. The data were analysed based on the water year (July - June). Five distributions, namely Log Normal, Generalized Pareto, Log Pearson, Normal and Generalized Extreme Value (GEV), were used to model the distribution of all three variables. The Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests were used to evaluate the best fit. Goodness-of-fit tests at the 5% level of significance indicate that all the models can be used to model the distribution of peakflow, flood duration and flood volume. However, the Generalized Pareto distribution is found to be the most suitable model when tested with the Anderson-Darling test, while the Kolmogorov-Smirnov test suggests that GEV is the best for peakflow. The results of this research can be used to improve flood frequency analysis. (Figure: Comparison between the Generalized Extreme Value, Generalized Pareto and Log Pearson distributions in the cumulative distribution function of peakflow.)
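A sketch of the distribution-fitting and goodness-of-fit step with SciPy, using simulated annual peakflows as stand-ins for the 45 water-year maxima; the study also applied the Anderson-Darling test, and the Log Pearson case (Pearson type III on log-transformed flows) is shown separately.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
peakflow = rng.lognormal(mean=5.5, sigma=0.5, size=45)   # stand-in annual peaks (m^3/s)

candidates = {
    "Normal": stats.norm,
    "Log Normal": stats.lognorm,
    "Generalized Pareto": stats.genpareto,
    "GEV": stats.genextreme,
}

for name, dist in candidates.items():
    params = dist.fit(peakflow)                          # maximum likelihood fit
    ks = stats.kstest(peakflow, dist.cdf, args=params)   # Kolmogorov-Smirnov statistic
    print(f"{name:20s} KS = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")

# Log Pearson III: fit a Pearson type III distribution to log-transformed flows.
lp3 = stats.pearson3.fit(np.log(peakflow))
print("Log Pearson III parameters:", np.round(lp3, 3))
```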
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least square regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. The study compares the advantages and disadvantages of different methods to estimate regression based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F 20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model and a generalized linear model with a log-link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R² and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSE were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R² of about 0.31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model are normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. As a result of the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM showed the weakest model fit again. None of the differences between the RMSE resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSE were not significant.
Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSE of the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a sample size inadequate to detect important differences among the estimators employed. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimating standard errors and confidence intervals with nonparametric methods that are robust against deviations from normality and homoscedasticity of the residuals is a suitable alternative to transforming the skewed dependent cost variable. Further studies with more adequate case numbers are needed to confirm the results.
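A minimal sketch of the three modelling strategies compared above, assuming hypothetical cost data and a single retransformation correction (Duan's smearing factor) in place of the three bias corrections used in the study; it uses statsmodels rather than the authors' software.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical skewed annual-cost data with two predictors (symptoms, needs).
rng = np.random.default_rng(0)
n = 254
X = pd.DataFrame({"symptoms": rng.normal(2, 1, n), "needs": rng.poisson(3, n)})
costs = np.exp(7 + 0.3 * X["symptoms"] + 0.2 * X["needs"] + rng.normal(0, 0.8, n))
X = sm.add_constant(X)

# 1) Linear OLS with heteroscedasticity-robust (White) standard errors.
ols = sm.OLS(costs, X).fit(cov_type="HC0")

# 2) OLS on log(costs); retransform predictions with Duan's smearing factor.
log_ols = sm.OLS(np.log(costs), X).fit()
smearing = np.mean(np.exp(log_ols.resid))          # non-parametric retransformation correction
pred_log_ols = np.exp(log_ols.fittedvalues) * smearing

# 3) GLM with gamma family and log link (no retransformation needed).
glm = sm.GLM(costs, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()

def rmse(pred):
    return np.sqrt(np.mean((costs - pred) ** 2))

print("RMSE OLS      :", rmse(ols.fittedvalues))
print("RMSE log-OLS  :", rmse(pred_log_ols))
print("RMSE GLM gamma:", rmse(glm.fittedvalues))
```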
Fingerprinting breakthrough curves in soils
NASA Astrophysics Data System (ADS)
Koestel, J. K.
2017-12-01
Conservative solute transport through soil is predominantly modeled using a few standard solute transport models, such as the convection-dispersion equation or the mobile-immobile model. The adequacy of these models is seldom investigated in detail, as doing so would require knowledge of the 3-D spatio-temporal evolution of the solute plume that is normally not available. Instead, shape-measures of breakthrough curves (BTCs), such as the apparent dispersivity and the relative 5%-arrival time, may be used to fingerprint breakthrough curves as well as forward solutions of solute transport models. In this fashion the similarity of features from measured and modeled BTC data becomes quantifiable. In this study I present a new set of shape-measures that characterize the late-time tailing of BTCs on log-log axes. I use the new shape-measures alongside more established ones to map the features of BTCs obtained from forward models of the convection-dispersion equation, log-normal and Gamma transfer functions, the mobile-immobile model, and the continuous time random walk model with respect to their input parameters. In a second step, I compare the corresponding shape-measures for 206 measured BTCs extracted from the peer-reviewed literature. Preliminary results show that power-law tailing is very common in BTCs from soil samples and that BTC features that are exclusive to a mobile-immobile type solute transport process are found only very rarely.
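The sketch below shows one plausible way to compute such shape-measures from a sampled BTC; the definitions used here (5%-arrival time normalized by the median arrival time, and a late-time slope on log-log axes) are assumptions for illustration and may differ from those used in the study.

```python
import numpy as np

def btc_shape_measures(t, c):
    """Simple shape-measures for a breakthrough curve c(t) sampled at times t."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    mass = np.trapz(c, t)                                          # zeroth temporal moment
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (c[1:] + c[:-1]) * np.diff(t))))
    t05 = np.interp(0.05 * mass, cum, t)                           # 5%-arrival time
    t50 = np.interp(0.50 * mass, cum, t)                           # median arrival time
    tail = (t > t[-1] / 10) & (c > 0)                              # last decade of the record
    slope = np.polyfit(np.log(t[tail]), np.log(c[tail]), 1)[0]     # log-log tail slope
    return {"relative_5pct_arrival": t05 / t50, "loglog_tail_slope": slope}

# Hypothetical BTC: an advection-dispersion-like pulse with a power-law tail added.
t = np.linspace(0.1, 50, 2000)
c = np.exp(-((t - 5.0) ** 2) / 4.0) + 0.02 * t ** -1.5
print(btc_shape_measures(t, c))
```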
Chen, Bo-Ching; Lai, Hung-Yu; Juang, Kai-Wei
2012-06-01
To better understand the ability of switchgrass (Panicum virgatum L.), a perennial grass often relegated to marginal agricultural areas with minimal inputs, to remove cadmium, chromium, and zinc by phytoextraction from contaminated sites, the relationship between plant metal content and biomass yield is expressed in different models to predict the amount of metals switchgrass can extract. These models are reliable in assessing the use of switchgrass for phytoremediation of heavy-metal-contaminated sites. In the present study, linear and exponential decay models are more suitable for presenting the relationship between plant cadmium and dry weight. The maximum extractions of cadmium using switchgrass, as predicted by the linear and exponential decay models, approached 40 and 34 μg pot(-1), respectively. The log normal model was superior in predicting the relationship between plant chromium and dry weight. The predicted maximum extraction of chromium by switchgrass was about 56 μg pot(-1). In addition, the exponential decay and log normal models were better than the linear model in predicting the relationship between plant zinc and dry weight. The maximum extractions of zinc by switchgrass, as predicted by the exponential decay and log normal models, were about 358 and 254 μg pot(-1), respectively. To meet the maximum removal of Cd, Cr, and Zn, one can adopt the optimal timing of harvest as plant Cd, Cr, and Zn approach 450 and 526 mg kg(-1), 266 mg kg(-1), and 3022 and 5000 mg kg(-1), respectively. Due to the well-known agronomic characteristics of cultivation and the high biomass production of switchgrass, it is practicable to use switchgrass for the phytoextraction of heavy metals in situ. Copyright © 2012 Elsevier Inc. All rights reserved.
Analyzing repeated measures semi-continuous data, with application to an alcohol dependence study.
Liu, Lei; Strawderman, Robert L; Johnson, Bankole A; O'Quigley, John M
2016-02-01
Two-part random effects models (Olsen and Schafer,(1) Tooze et al.(2)) have been applied to repeated measures of semi-continuous data, characterized by a mixture of a substantial proportion of zero values and a skewed distribution of positive values. In the original formulation of this model, the natural logarithm of the positive values is assumed to follow a normal distribution with a constant variance parameter. In this article, we review and consider three extensions of this model, allowing the positive values to follow (a) a generalized gamma distribution, (b) a log-skew-normal distribution, and (c) a normal distribution after the Box-Cox transformation. We allow for the possibility of heteroscedasticity. Maximum likelihood estimation is shown to be conveniently implemented in SAS Proc NLMIXED. The performance of the methods is compared through applications to daily drinking records in a secondary data analysis from a randomized controlled trial of topiramate for alcohol dependence treatment. We find that all three models provide a significantly better fit than the log-normal model, and there exists strong evidence for heteroscedasticity. We also compare the three models by the likelihood ratio tests for non-nested hypotheses (Vuong(3)). The results suggest that the generalized gamma distribution provides the best fit, though no statistically significant differences are found in pairwise model comparisons. © The Author(s) 2012.
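A simplified two-part sketch, without the random effects and fitted in Python rather than SAS Proc NLMIXED: a logistic model for the zero/non-zero part and competing parametric fits (log-normal versus generalized gamma) for the positive amounts. The data and covariate are hypothetical.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Hypothetical daily drinking records: many zeros plus skewed positive amounts.
rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)                              # a single covariate
drinks = np.where(rng.random(n) < 0.4, 0.0,
                  np.exp(1.0 + 0.5 * x + rng.normal(0, 0.7, n)))

X = sm.add_constant(x)

# Part 1: probability of a non-zero day (logistic regression).
part1 = sm.Logit((drinks > 0).astype(int), X).fit(disp=False)

# Part 2: distribution of the positive amounts (compare two candidate families).
pos = drinks[drinks > 0]
lognorm_ll = np.sum(stats.lognorm.logpdf(pos, *stats.lognorm.fit(pos, floc=0)))
gengamma_ll = np.sum(stats.gengamma.logpdf(pos, *stats.gengamma.fit(pos, floc=0)))

print(part1.params)
print("log-likelihood, log-normal        :", round(lognorm_ll, 1))
print("log-likelihood, generalized gamma :", round(gengamma_ll, 1))
```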
NASA Astrophysics Data System (ADS)
Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua
2018-06-01
The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of the un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a single log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels degrades system performance. Moreover, receiver diversity is more effective at resisting the channel fading caused by spatial correlation.
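A small sketch of the moment-matching step usually attributed to Fenton and Wilkinson, on which this kind of analysis relies: the first two moments of a sum of correlated log-normal variables are matched to a single log-normal. The channel parameters below are hypothetical.

```python
import numpy as np

def fenton_wilkinson(mu, sigma, rho):
    """Approximate S = sum_i exp(N_i), with N ~ Normal(mu, Sigma) and common
    pairwise correlation rho, by a single log-normal exp(N_S)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    n = len(mu)
    R = np.full((n, n), float(rho))
    np.fill_diagonal(R, 1.0)
    cov = R * np.outer(sigma, sigma)                # covariance of the Gaussian exponents
    m1 = np.sum(np.exp(mu + 0.5 * sigma ** 2))      # E[S]
    m2 = 0.0                                        # E[S^2] = sum_ij E[X_i X_j]
    for i in range(n):
        for j in range(n):
            m2 += np.exp(mu[i] + mu[j] + 0.5 * (sigma[i] ** 2 + sigma[j] ** 2) + cov[i, j])
    sigma_s2 = np.log(m2 / m1 ** 2)                 # matched log-variance
    mu_s = np.log(m1) - 0.5 * sigma_s2              # matched log-mean
    return mu_s, np.sqrt(sigma_s2)

# Example: 4 sub-channels with equal fading depth and pairwise correlation 0.3.
mu_s, sig_s = fenton_wilkinson(mu=[0, 0, 0, 0], sigma=[0.5] * 4, rho=0.3)
print(mu_s, sig_s)
```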
Haeckel, Rainer; Wosniok, Werner
2010-10-01
The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of its type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ(2) distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CV(e)), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CV(e) (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CV(e). In contrast, a relatively large CV(e) (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
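A quick numerical illustration of the point about CV(e), assuming two hypothetical coefficients of variation: a log-normal with the same mean and a small CV is nearly symmetric, while a large CV gives pronounced skewness.

```python
import numpy as np
from scipy import stats

for cv in (0.02, 0.50):                   # e.g. sodium-like vs enzyme-like biological variation
    sigma = np.sqrt(np.log(1 + cv ** 2))  # log-scale SD that yields this coefficient of variation
    ln = stats.lognorm(s=sigma, scale=np.exp(-sigma ** 2 / 2))   # mean fixed at 1
    skew = float(ln.stats(moments="s"))
    print(f"CV = {cv:<4}  skewness of log-normal = {skew:.3f}  (Gaussian skewness = 0)")
```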
Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya
2002-04-01
In a previous article, we showed that boron and lithium in human urine follow a log-normal distribution. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, whatever the underlying distribution of each variable. Although elemental excretion depends on many variables, the one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount of a given element present; that is, the same percentage of the existing element is eliminated per unit time, and during the elimination time-course the element concentration is a deterministic negative power function of time. Sampling is of a stochastic nature, so the set of times in the elimination phase at which samples were obtained is expected to follow a Normal distribution. The time variable appears as an exponent of the power function, so the concentration histogram is that of an exponential transformation of Normally distributed time. This is why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that enters the pharmacokinetic equation.
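A toy simulation of the argument, with a hypothetical rate constant and sampling-time distribution: first-order elimination evaluated at normally distributed sampling times yields concentrations whose logarithms pass a normality test while the raw concentrations do not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
k, c0 = 0.12, 10.0                               # hypothetical rate constant and initial level
t = rng.normal(loc=24.0, scale=6.0, size=5000)   # sampling times ~ Normal (elimination phase)
conc = c0 * np.exp(-k * t)                       # deterministic first-order elimination

# log(conc) = log(c0) - k*t is linear in the Normal time variable, hence Normal,
# so the concentrations themselves are log-normally distributed.
print("Shapiro-Wilk p, raw concentrations :", stats.shapiro(conc[:500]).pvalue)
print("Shapiro-Wilk p, log concentrations :", stats.shapiro(np.log(conc[:500])).pvalue)
```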
Including operational data in QMRA model: development and impact of model inputs.
Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle
2009-03-01
A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations > detection limit (DL), and a uniform distribution for concentrations < DL. The selection of process performance distributions for modelling the performance of treatment (filtration and ozonation) influences the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
Statistical analysis of variability properties of the Kepler blazar W2R 1926+42
NASA Astrophysics Data System (ADS)
Li, Yutong; Hu, Shaoming; Wiita, Paul J.; Gupta, Alok C.
2018-04-01
We analyzed Kepler light curves of the blazar W2R 1926+42 that provided nearly continuous coverage from quarter 11 through quarter 17 (589 days between 2011 and 2013) and examined some of their flux variability properties. We investigated the possibility that the light curve is dominated by a large number of individual flares and adopted exponential rise and decay models to investigate the symmetry properties of flares. We found that the variations of W2R 1926+42 are predominantly asymmetric, with weak tendencies toward positive asymmetry (rapid rise and slow decay). The durations (D) and the amplitudes (F0) of flares can be fit with log-normal distributions. The energy (E) of each flare is also estimated for the first time. There are positive correlations between logD and logE with a slope of 1.36, and between logF0 and logE with a slope of 1.12. Lomb-Scargle periodograms are used to estimate the power spectral density (PSD) shape. It is well described by a power law with an index ranging between -1.1 and -1.5. The sizes of the emission regions, R, are estimated to be in the range of 1.1 × 10^15 cm to 6.6 × 10^16 cm. The flare asymmetry is difficult to explain by a light travel time effect but may be caused by differences between the timescales for acceleration and dissipation of high-energy particles in the relativistic jet. A jet-in-jet model could also produce the observed log-normal distributions.
Parametric modelling of cost data in medical studies.
Nixon, R M; Thompson, S G
2004-04-30
The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BC(a) bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
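A sketch of this kind of comparison with SciPy, on synthetic skewed costs: candidate distributions are fitted by maximum likelihood (the log-logistic via the `fisk` distribution), and a simple percentile bootstrap, rather than the BC(a) bootstrap used in the paper, gives a confidence interval for the population mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
costs = stats.lognorm.rvs(s=1.1, scale=800, size=120, random_state=rng)  # hypothetical skewed costs

fits = {
    "normal":       (stats.norm,    stats.norm.fit(costs)),
    "gamma":        (stats.gamma,   stats.gamma.fit(costs, floc=0)),
    "log-normal":   (stats.lognorm, stats.lognorm.fit(costs, floc=0)),
    "log-logistic": (stats.fisk,    stats.fisk.fit(costs, floc=0)),
}
for name, (dist, p) in fits.items():
    ll = np.sum(dist.logpdf(costs, *p))
    print(f"{name:12s} log-likelihood = {ll:9.1f}  implied mean = {dist.mean(*p):8.1f}")

# Percentile bootstrap CI for the population mean under the log-normal fit.
boot = []
for _ in range(2000):
    resample = rng.choice(costs, size=costs.size, replace=True)
    s, loc, scale = stats.lognorm.fit(resample, floc=0)
    boot.append(stats.lognorm.mean(s, loc, scale))
print("95% CI for mean cost:", np.percentile(boot, [2.5, 97.5]))
```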
Fatigue shifts and scatters heart rate variability in elite endurance athletes.
Schmitt, Laurent; Regnard, Jacques; Desmarets, Maxime; Mauny, Fréderic; Mourot, Laurent; Fouillot, Jean-Pierre; Coulmy, Nicolas; Millet, Grégoire
2013-01-01
This longitudinal study aimed at comparing heart rate variability (HRV) in elite athletes identified either in 'fatigue' or in 'no-fatigue' state in 'real life' conditions. 57 elite Nordic-skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). A fatigue state was quoted with a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in low (LF) and high frequency (HF) ranges expressed in ms(2) and normalized units (nu)] and the status without and with fatigue. The variables not distributed normally were transformed by taking their common logarithm (log10). 172 trials were identified as in a 'fatigue' and 891 as in 'no-fatigue' state. All supine HR and HRV parameters (Beta±SE) were significantly different (P<0.0001) between 'fatigue' and 'no-fatigue': HRSU (+6.27±0.61 bpm), logTPSU (-0.36±0.04), logLFSU (-0.27±0.04), logHFSU (-0.46±0.05), logLF/HFSU (+0.19±0.03), HFSU(nu) (-9.55±1.33). Differences were also significant (P<0.0001) in standing: HRST (+8.83±0.89), logTPST (-0.28±0.03), logLFST (-0.29±0.03), logHFST (-0.32±0.04). Also, intra-individual variance of HRV parameters was larger (P<0.05) in the 'fatigue' state (logTPSU: 0.26 vs. 0.07, logLFSU: 0.28 vs. 0.11, logHFSU: 0.32 vs. 0.08, logTPST: 0.13 vs. 0.07, logLFST: 0.16 vs. 0.07, logHFST: 0.25 vs. 0.14). HRV was significantly lower in 'fatigue' vs. 'no-fatigue' but accompanied with larger intra-individual variance of HRV parameters in 'fatigue'. The broader intra-individual variance of HRV parameters might encompass different changes from no-fatigue state, possibly reflecting different fatigue-induced alterations of HRV pattern.
Bengtsson, Henrik; Hössjer, Ola
2006-03-01
Low-level processing and normalization of microarray data are most important steps in microarray analysis, which have profound impact on downstream analysis. Multiple methods have been suggested to date, but it is not clear which is the best. It is therefore important to further study the different normalization methods in detail and the nature of microarray data in general. A methodological study of affine models for gene expression data is carried out. Focus is on two-channel comparative studies, but the findings generalize also to single- and multi-channel data. The discussion applies to spotted as well as in-situ synthesized microarray data. Existing normalization methods such as curve-fit ("lowess") normalization, parallel and perpendicular translation normalization, and quantile normalization, but also dye-swap normalization are revisited in the light of the affine model and their strengths and weaknesses are investigated in this context. As a direct result from this study, we propose a robust non-parametric multi-dimensional affine normalization method, which can be applied to any number of microarrays with any number of channels either individually or all at once. A high-quality cDNA microarray data set with spike-in controls is used to demonstrate the power of the affine model and the proposed normalization method. We find that an affine model can explain non-linear intensity-dependent systematic effects in observed log-ratios. Affine normalization removes such artifacts for non-differentially expressed genes and assures that symmetry between negative and positive log-ratios is obtained, which is fundamental when identifying differentially expressed genes. In addition, affine normalization makes the empirical distributions in different channels more equal, which is the purpose of quantile normalization, and may also explain why dye-swap normalization works or fails. All methods are made available in the aroma package, which is a platform-independent package for R.
Gradually truncated log-normal in USA publicly traded firm size distribution
NASA Astrophysics Data System (ADS)
Gupta, Hari M.; Campanha, José R.; de Aguiar, Daniela R.; Queiroz, Gabriel A.; Raheja, Charu G.
2007-03-01
We study the statistical distribution of firm size for USA and Brazilian publicly traded firms through the Zipf plot technique. Sale size is used to measure firm size. The Brazilian firm size distribution is given by a log-normal distribution without any adjustable parameter. However, we also need to consider different parameters of log-normal distribution for the largest firms in the distribution, which are mostly foreign firms. The log-normal distribution has to be gradually truncated after a certain critical value for USA firms. Therefore, the original hypothesis of proportional effect proposed by Gibrat is valid with some modification for very large firms. We also consider the possible mechanisms behind this distribution.
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-01-30
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
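A minimal sketch of the transformation search described above, on synthetic SUVmax values: the Box-Cox parameter is scanned over a grid and the value maximizing the Shapiro-Wilk p-value is retained, with the log transformation (λ = 0) shown for comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
suv_max = stats.lognorm.rvs(s=0.9, scale=5.0, size=57, random_state=rng)  # hypothetical tumor SUVmax

lambdas = np.linspace(-2, 2, 201)
pvals = [stats.shapiro(stats.boxcox(suv_max, lmbda=lam)).pvalue for lam in lambdas]
best = lambdas[int(np.argmax(pvals))]

print(f"optimal Box-Cox lambda = {best:.2f}, Shapiro-Wilk p = {max(pvals):.3f}")
print(f"log transform (lambda = 0) p = {pvals[int(np.argmin(np.abs(lambdas)))]:.3f}")
```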
NASA Astrophysics Data System (ADS)
El-Khadragy, A. A.; Shazly, T. F.; AlAlfy, I. M.; Ramadan, M.; El-Sawy, M. Z.
2018-06-01
An exploration method has been developed that uses surface and aerial gamma-ray spectral measurements for petroleum prospecting in stratigraphic and structural traps. The Gulf of Suez is an important region for studying hydrocarbon potential in Egypt. The thorium normalization technique was applied to the sandstone reservoirs in the region to delineate zones of hydrocarbon potential using the three spectrometric radioactive gamma-ray logs (eU, eTh, and K% logs). The method was applied to the recorded gamma-ray spectrometric logs for the Rudeis and Kareem Formations in the Ras Ghara Oil Field, Gulf of Suez, Egypt. The conventional well logs (gamma-ray, resistivity, neutron, density, and sonic logs) were analyzed to determine the net pay zones in the study area. The agreement ratios between the thorium normalization technique and the results of the well log analyses are high, so the thorium normalization technique can be used as a guide for hydrocarbon accumulation in the studied reservoir rocks.
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
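A quick numerical check of the claim, using heavily skewed positive summands (chosen here as log-normal, which is only one possible choice): for a moderate number of summands the fitted log-normal is much closer to the distribution of the sum than the fitted Gaussian.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_summands, n_samples = 30, 20000
# Positive, strongly skewed summands; any similarly skewed positive choice would do.
sums = stats.lognorm.rvs(s=1.5, size=(n_samples, n_summands), random_state=rng).sum(axis=1)

# Compare distances to a fitted Gaussian and to a fitted log-normal.
ks_norm = stats.kstest(sums, "norm", args=stats.norm.fit(sums)).statistic
ks_lognorm = stats.kstest(sums, "lognorm", args=stats.lognorm.fit(sums, floc=0)).statistic
print(f"KS distance to fitted normal    : {ks_norm:.3f}")
print(f"KS distance to fitted log-normal: {ks_lognorm:.3f}")
```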
2012-01-01
Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal, or uniform distribution in the combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
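For the equal-variance binormal case, the dependence described above can be written explicitly; the following is a reconstruction consistent with the abstract's description rather than a quotation of the paper's formula.

```latex
% Sketch under the binormal, equal-variance assumption:
%   X | D=1 ~ N(mu_1, sigma^2),  X | D=0 ~ N(mu_0, sigma^2).
% The induced logistic model for D given X has log-odds ratio
% beta = (mu_1 - mu_0)/sigma^2, and the c-statistic is c = P(X_1 > X_0)
% for independent draws X_1, X_0 from the two components:
\[
  c \;=\; \Phi\!\left(\frac{\mu_1-\mu_0}{\sigma\sqrt{2}}\right)
    \;=\; \Phi\!\left(\frac{\beta\,\sigma}{\sqrt{2}}\right),
\]
% i.e. the c-statistic depends on the product of the log-odds ratio beta and the
% within-group standard deviation sigma, as stated in the abstract.
```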
powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks
NASA Astrophysics Data System (ADS)
Murray, Steven G.
2018-05-01
powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.
Generating log-normal mock catalog of galaxies in redshift space
NASA Astrophysics Data System (ADS)
Agrawal, Aniket; Makiya, Ryu; Chiang, Chi-Ting; Jeong, Donghui; Saito, Shun; Komatsu, Eiichiro
2017-10-01
We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
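The core steps are easy to sketch in a few lines of NumPy; the power-spectrum normalization and the galaxy number density below are schematic placeholders, not the conventions of the released code, and redshift-space velocities are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
n, boxlen = 64, 200.0                      # grid cells per side, box length (arbitrary units)
kf = 2 * np.pi / boxlen
kx = np.fft.fftfreq(n, d=boxlen / n) * 2 * np.pi
kx, ky, kz = np.meshgrid(kx, kx, kx, indexing="ij")
kk = np.sqrt(kx**2 + ky**2 + kz**2)
kk[0, 0, 0] = kf                           # avoid division by zero at k = 0

pk = 1e3 * kk**-1.5                        # toy power spectrum (not a physical model)

# Gaussian random field with the target power spectrum (schematic normalization).
noise = rng.normal(size=(n, n, n))
delta_g = np.fft.ifftn(np.fft.fftn(noise) * np.sqrt(pk)).real
delta_g *= 0.5 / delta_g.std()             # rescale to a chosen log-field standard deviation

# Log-normal density contrast with mean ~0, then Poisson-sample galaxies per cell.
delta_ln = np.exp(delta_g - delta_g.var() / 2) - 1
nbar = 1e-3 * (boxlen / n) ** 3            # hypothetical mean galaxies per cell
galaxies = rng.poisson(nbar * (1 + delta_ln))
print("total galaxies:", galaxies.sum(), " max per cell:", galaxies.max())
```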
Nguyen, Hoang Anh; Denis, Olivier; Vergison, Anne; Theunis, Anne; Tulkens, Paul M; Struelens, Marc J; Van Bambeke, Françoise
2009-04-01
Small-colony variant (SCV) strains of Staphylococcus aureus show reduced antibiotic susceptibility and intracellular persistence, potentially explaining therapeutic failures. The activities of oxacillin, fusidic acid, clindamycin, gentamicin, rifampin, vancomycin, linezolid, quinupristin-dalfopristin, daptomycin, tigecycline, moxifloxacin, telavancin, and oritavancin have been examined in THP-1 macrophages infected by a stable thymidine-dependent SCV strain in comparison with normal-phenotype and revertant isogenic strains isolated from the same cystic fibrosis patient. The SCV strain grew slowly extracellularly and intracellularly (1- and 0.2-log CFU increase in 24 h, respectively). In confocal and electron microscopy, SCV and the normal-phenotype bacteria remain confined in acid vacuoles. All antibiotics tested, except tigecycline, caused a net reduction in bacterial counts that was both time and concentration dependent. At an extracellular concentration corresponding to the maximum concentration in human serum (total drug), oritavancin caused a 2-log CFU reduction at 24 h; rifampin, moxifloxacin, and quinupristin-dalfopristin caused a similar reduction at 72 h; and all other antibiotics had only a static effect at 24 h and a 1-log CFU reduction at 72 h. In concentration dependence experiments, response to oritavancin was bimodal (two successive plateaus of -0.4 and -3.1 log CFU); tigecycline, moxifloxacin, and rifampin showed maximal effects of -1.1 to -1.7 log CFU; and the other antibiotics produced results of -0.6 log CFU or less. Addition of thymidine restored intracellular growth of the SCV strain but did not modify the activity of antibiotics (except quinupristin-dalfopristin). All drugs (except tigecycline and oritavancin) showed higher intracellular activity against normal or revertant phenotypes than against SCV strains. The data may help rationalizing the design of further studies with intracellular SCV strains.
Ordinal probability effect measures for group comparisons in multinomial cumulative link models.
Agresti, Alan; Kateri, Maria
2017-03-01
We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
Stochastic Modeling Approach to the Incubation Time of Prionic Diseases
NASA Astrophysics Data System (ADS)
Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.
2003-05-01
Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant to a stochastic variable log normally distributed. The incubation time distribution is then also shown to be log normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.
Multiple imputation for handling missing outcome data when estimating the relative risk.
Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B
2017-09-06
Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. However fully conditional specification is not without its shortcomings, and so further research is needed to identify optimal approaches for relative risk estimation within the multiple imputation framework.
Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza
2016-01-01
Time to donating blood plays a major role in a regular donor becoming a continuous one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 first-time donors at the Shahrekord Blood Transfusion Center, in the capital city of Chaharmahal and Bakhtiari Province, Iran, were selected by systematic sampling and were followed up for five years. Among these, a subset of 424 donors who had at least two successful blood donations was chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, place of residence, and job were recorded as independent variables. Data analysis was based on a log-normal hazard model with gamma correlated frailty. In this model, the frailties are the sum of two independent components, each assumed to follow a gamma distribution. The analysis was carried out via a Bayesian approach using a Markov Chain Monte Carlo algorithm in OpenBUGS. Convergence was checked via the Gelman-Rubin criterion using the BOA program in R. Age, job, and education had significant effects on the chance of donating blood (P<0.05). The chances of blood donation were higher for older donors, clerical staff, workers, those with free jobs, students, and more educated donors, and correspondingly the time intervals between their blood donations were shorter. Given the significant effect of some variables in the log-normal correlated frailty model, it is necessary to plan educational and cultural programs to encourage people with longer inter-donation intervals to donate more frequently.
Normal reference values for bladder wall thickness on CT in a healthy population.
Fananapazir, Ghaneh; Kitich, Aleksandar; Lamba, Ramit; Stewart, Susan L; Corwin, Michael T
2018-02-01
To determine normal bladder wall thickness on CT in patients without bladder disease. Four hundred and nineteen patients presenting for trauma with normal CTs of the abdomen and pelvis were included in our retrospective study. Bladder wall thickness was assessed, and bladder volume was measured using both the ellipsoid formula and an automated technique. Patient age, gender, and body mass index were recorded. Linear regression models were created to account for bladder volume, age, gender, and body mass index, and the multiple correlation coefficient with bladder wall thickness was computed. Bladder volume and bladder wall thickness were log-transformed to achieve approximate normality and homogeneity of variance. Variables that did not contribute substantively to the model were excluded, a parsimonious model was created, and the multiple correlation coefficient was calculated. Expected bladder wall thickness was estimated for different bladder volumes, and 1.96 standard deviations above the expected value provided the upper limit of normal on the log scale. Age, gender, and bladder volume were associated with bladder wall thickness (p = 0.049, 0.024, and < 0.001, respectively). The linear regression model had an R2 of 0.52. Age and gender were negligible in their contribution to the model, and a parsimonious model using only volume was created for both the ellipsoid and automated volumes (R2 = 0.52 and 0.51, respectively). Bladder wall thickness correlates with bladder volume. The study provides reference bladder wall thicknesses on CT utilizing both the ellipsoid formula and automated bladder volumes.
Integrating models that depend on variable data
NASA Astrophysics Data System (ADS)
Banks, A. T.; Hill, M. C.
2016-12-01
Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MatLab. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
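A small numerical illustration of the weighting issue discussed above, with hypothetical dependent-variable values: weights derived from a constant coefficient of variation fall off as 1/y^2 and therefore concentrate almost all weight on the smallest observations, whereas a log transformation treats relative errors evenly.

```python
import numpy as np

# Hypothetical dependent-variable values spanning four orders of magnitude.
y = np.array([0.02, 0.3, 1.5, 20.0, 200.0])

w_const_cv = 1.0 / (0.2 * y) ** 2     # error-based weights from a constant 20% coefficient of variation
w_const_sd = np.ones_like(y)          # constant-standard-deviation weights (all observations equal)

print("relative weight, smallest vs largest value (constant CV):",
      w_const_cv[0] / w_const_cv[-1])
# A log transformation instead equalizes relative (multiplicative) errors:
print("spread of log10(y):", np.log10(y).max() - np.log10(y).min(), "decades")
```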
Connock, Martin; Hyde, Chris; Moore, David
2011-10-01
The UK National Institute for Health and Clinical Excellence (NICE) has used its Single Technology Appraisal (STA) programme to assess several drugs for cancer. Typically, the evidence submitted by the manufacturer comes from one short-term randomized controlled trial (RCT) demonstrating improvement in overall survival and/or in delay of disease progression, and these are the pre-eminent drivers of cost effectiveness. We draw attention to key issues encountered in assessing the quality and rigour of the manufacturers' modelling of overall survival and disease progression. Our examples are two recent STAs: sorafenib (Nexavar®) for advanced hepatocellular carcinoma, and azacitidine (Vidaza®) for higher-risk myelodysplastic syndromes (MDS). The choice of parametric model had a large effect on the predicted treatment-dependent survival gain. Logarithmic models (log-Normal and log-logistic) delivered double the survival advantage that was derived from Weibull models. Both submissions selected the logarithmic fits for their base-case economic analyses and justified selection solely on Akaike Information Criterion (AIC) scores. AIC scores in the azacitidine submission failed to match the choice of the log-logistic over Weibull or exponential models, and the modelled survival in the intervention arm lacked face validity. AIC scores for sorafenib models favoured log-Normal fits; however, since there is no statistical method for comparing AIC scores, and differences may be trivial, it is generally advised that the plausibility of competing models should be tested against external data and explored in diagnostic plots. Function fitting to observed data should not be a mechanical process validated by a single crude indicator (AIC). Projective models should show clear plausibility for the patients concerned and should be consistent with other published information. Multiple rather than single parametric functions should be explored and tested with diagnostic plots. When trials have survival curves with long tails exhibiting few events then the robustness of extrapolations using information in such tails should be tested.
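The sensitivity described above is easy to reproduce on synthetic censored survival data: the sketch below fits Weibull, log-Normal, and log-logistic models by maximum likelihood, compares AIC values, and reports the extrapolated mean survival implied by each fit. The data-generating assumptions are hypothetical and unrelated to the NICE submissions.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(9)
n = 300
true_t = stats.weibull_min.rvs(1.3, scale=12.0, size=n, random_state=rng)  # hypothetical survival times
cens_t = rng.uniform(0, 18, size=n)                                        # administrative censoring
time = np.minimum(true_t, cens_t)
event = (true_t <= cens_t).astype(float)

def neg_loglik(params, dist):
    shape, scale = np.exp(params)                     # keep parameters positive
    ll = event * dist.logpdf(time, shape, scale=scale) \
       + (1 - event) * dist.logsf(time, shape, scale=scale)   # censored contributions
    return -np.sum(ll)

for name, dist in [("Weibull", stats.weibull_min),
                   ("log-Normal", stats.lognorm),
                   ("log-logistic", stats.fisk)]:
    res = optimize.minimize(neg_loglik, x0=[0.0, np.log(time.mean())], args=(dist,))
    aic = 2 * res.fun + 2 * 2                         # 2 fitted parameters per model
    shape, scale = np.exp(res.x)
    mean_ext = dist.mean(shape, scale=scale)          # extrapolated mean survival
    print(f"{name:12s} AIC = {aic:7.1f}   model mean survival = {mean_ext:6.1f}")
```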
Explorations in statistics: the log transformation.
Curran-Everett, Douglas
2018-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud
NASA Astrophysics Data System (ADS)
Leahy, D. A.
2017-03-01
Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg and a 1σ dispersion of a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurtubise, R.J.; Hussain, A.; Silver, H.F.
1981-11-01
The normal-phase liquid chromatographic models of Scott, Snyder, and Soczewinski were considered for a μ-Bondapak NH2 stationary phase. n-Heptane:2-propanol and n-heptane:ethyl acetate mobile phases of different compositions were used. Linear relationships were obtained from graphs of log K' vs. log mole fraction of the strong solvent for both n-heptane:2-propanol and n-heptane:ethyl acetate mobile phases. A linear relationship was obtained between the reciprocal of corrected retention volume and % wt/v of 2-propanol but not between the reciprocal of corrected retention volume and % wt/v of ethyl acetate. The slopes and intercept terms from the Snyder and Soczewinski models were found to approximately describe interactions with μ-Bondapak NH2. Capacity factors can be predicted for the compounds by using the equations obtained from mobile phase composition variation experiments.
A log-normal distribution model for the molecular weight of aquatic fulvic acids
Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.
2000-01-01
The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
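A short sketch of the moment relations behind that calculation, assuming a number-based log-normal distribution of molecular weight; the example averages are hypothetical, and the paper's exact conventions may differ.

```python
import numpy as np

def lognormal_mw_params(Mn, Mw):
    """Mean and SD of log10(M) for a number-based log-normal molecular-weight
    distribution, given the number- and weight-average molecular weights.
    Uses Mn = exp(mu + sigma^2/2) and Mw = exp(mu + 3*sigma^2/2)."""
    sigma2 = np.log(Mw / Mn)                # variance of ln(M)
    mu = np.log(Mn) - 0.5 * sigma2          # mean of ln(M)
    return mu / np.log(10), np.sqrt(sigma2) / np.log(10)

# Hypothetical fulvic-acid averages (g/mol); values chosen only for illustration.
mean10, sd10 = lognormal_mw_params(Mn=700.0, Mw=1100.0)
print(f"mean of log10(MW) = {mean10:.2f}, sd of log10(MW) = {sd10:.2f}")
```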
NASA Astrophysics Data System (ADS)
Berthet, Gwenaël; Renard, Jean-Baptiste; Brogniez, Colette; Robert, Claude; Chartier, Michel; Pirre, Michel
2002-12-01
Aerosol extinction coefficients have been derived in the 375-700-nm spectral domain from measurements in the stratosphere since 1992, at night, at mid- and high latitudes from 15 to 40 km, by two balloonborne spectrometers, Absorption par les Minoritaires Ozone et NOx (AMON) and Spectroscopie d'Absorption Lunaire pour l'Observation des Minoritaires Ozone et NOx (SALOMON). Log-normal size distributions associated with the Mie-computed extinction spectra that best fit the measurements permit calculation of integrated properties of the distributions. Although measured extinction spectra that correspond to background aerosols can be reproduced by the Mie scattering model by use of monomodal log-normal size distributions, each flight reveals some large discrepancies between measurement and theory at several altitudes. The agreement between measured and Mie-calculated extinction spectra is significantly improved by use of bimodal log-normal distributions. Nevertheless, neither monomodal nor bimodal distributions permit correct reproduction of some of the measured extinction shapes, especially for the 26 February 1997 AMON flight, which exhibited spectral behavior attributed to particles from a polar stratospheric cloud event.
USDA-ARS?s Scientific Manuscript database
Using linear regression models, we studied the main and two-way interaction effects of the predictor variables gender, age, BMI, and 64 folate/vitamin B-12/homocysteine/lipid/cholesterol-related single nucleotide polymorphisms (SNP) on log-transformed plasma homocysteine normalized by red blood cell...
NASA Astrophysics Data System (ADS)
Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo
2014-01-01
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the scalar PDFs to be predicted in agreement with numerical and experimental results. This model also indicates that the scalar PDFs become asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
NASA Astrophysics Data System (ADS)
Biteau, J.; Giebels, B.
2012-12-01
Very high energy gamma-ray variability of blazar emission remains of puzzling origin. Fast flux variations down to the minute time scale, as observed with H.E.S.S. during flares of the blazar PKS 2155-304, suggest that the variability originates from the jet, where Doppler boosting can be invoked to relax causal constraints on the size of the emission region. The observation of log-normality in the flux distributions should rule out additive processes, such as those resulting from uncorrelated multiple-zone emission models, and favour an origin of the variability from multiplicative processes not unlike those observed in a broad class of accreting systems. We show, using a simple kinematic model, that Doppler boosting of randomly oriented emitting regions generates flux distributions following a Pareto law, that the linear flux-r.m.s. relation found for a single zone holds for a large number of emitting regions, and that the skewed distribution of the total flux is close to a log-normal, despite arising from an additive process.
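A minimal Monte Carlo sketch of this kinematic argument, assuming a bulk Lorentz factor of 10 and a boosting index of 4 (both illustrative choices, not the paper's fitted values): single-zone fluxes scale as the Doppler factor raised to the boosting index, and the total flux is their sum over many randomly oriented zones.

```python
import numpy as np

rng = np.random.default_rng(0)
Gamma, p = 10.0, 4.0            # assumed bulk Lorentz factor and boosting index
beta = np.sqrt(1.0 - 1.0 / Gamma**2)

def boosted_fluxes(n_zones, n_samples):
    # isotropic orientations: cos(theta) uniform on [-1, 1]
    cos_t = rng.uniform(-1.0, 1.0, size=(n_samples, n_zones))
    delta = 1.0 / (Gamma * (1.0 - beta * cos_t))   # Doppler factor per zone
    return (delta**p).sum(axis=1)                  # additive total flux

single = boosted_fluxes(1, 200_000)     # heavy, Pareto-like tail
summed = boosted_fluxes(100, 20_000)    # skewed, close to log-normal

for name, f in [("single zone", single), ("100 zones", summed)]:
    logf = np.log(f)
    print(name, "skewness of log-flux:",
          ((logf - logf.mean())**3).mean() / logf.std()**3)
```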
Persiani, Anna Maria; Maggi, Oriana
2013-01-01
Experimental fires, of both low and high intensity, were lit during summer 2000 and the following 2 y in the Castel Volturno Nature Reserve, southern Italy. Soil samples were collected Jul 2000-Jul 2002 to analyze the soil fungal community dynamics. Species abundance distribution patterns (geometric, logarithmic, log normal, broken-stick) were compared. We plotted datasets with information both on species richness and abundance for total, xerotolerant and heat-stimulated soil microfungi. The xerotolerant fungi conformed to a broken-stick model for both the low- and high intensity fires at 7 and 84 d after the fire; their distribution subsequently followed logarithmic models in the 2 y following the fire. The distribution of the heat-stimulated fungi changed from broken-stick to logarithmic models and eventually to a log-normal model during the post-fire recovery. Xerotolerant and, to a far greater extent, heat-stimulated soil fungi acquire an important functional role following soil water stress and/or fire disturbance; these disturbances let them occupy unsaturated habitats and become increasingly abundant over time.
Sá, Rui Carlos; Henderson, A Cortney; Simonson, Tatum; Arai, Tatsuya J; Wagner, Harrieth; Theilmann, Rebecca J; Wagner, Peter D; Prisk, G Kim; Hopkins, Susan R
2017-07-01
We have developed a novel functional proton magnetic resonance imaging (MRI) technique to measure regional ventilation-perfusion (V̇ A /Q̇) ratio in the lung. We conducted a comparison study of this technique in healthy subjects ( n = 7, age = 42 ± 16 yr, Forced expiratory volume in 1 s = 94% predicted), by comparing data measured using MRI to that obtained from the multiple inert gas elimination technique (MIGET). Regional ventilation measured in a sagittal lung slice using Specific Ventilation Imaging was combined with proton density measured using a fast gradient-echo sequence to calculate regional alveolar ventilation, registered with perfusion images acquired using arterial spin labeling, and divided on a voxel-by-voxel basis to obtain regional V̇ A /Q̇ ratio. LogSDV̇ and LogSDQ̇, measures of heterogeneity derived from the standard deviation (log scale) of the ventilation and perfusion vs. V̇ A /Q̇ ratio histograms respectively, were calculated. On a separate day, subjects underwent study with MIGET and LogSDV̇ and LogSDQ̇ were calculated from MIGET data using the 50-compartment model. MIGET LogSDV̇ and LogSDQ̇ were normal in all subjects. LogSDQ̇ was highly correlated between MRI and MIGET (R = 0.89, P = 0.007); the intercept was not significantly different from zero (-0.062, P = 0.65) and the slope did not significantly differ from identity (1.29, P = 0.34). MIGET and MRI measures of LogSDV̇ were well correlated (R = 0.83, P = 0.02); the intercept differed from zero (0.20, P = 0.04) and the slope deviated from the line of identity (0.52, P = 0.01). We conclude that in normal subjects, there is a reasonable agreement between MIGET measures of heterogeneity and those from proton MRI measured in a single slice of lung. NEW & NOTEWORTHY We report a comparison of a new proton MRI technique to measure regional V̇ A /Q̇ ratio against the multiple inert gas elimination technique (MIGET). The study reports good relationships between measures of heterogeneity derived from MIGET and those derived from MRI. Although currently limited to a single slice acquisition, these data suggest that single sagittal slice measures of V̇ A /Q̇ ratio provide an adequate means to assess heterogeneity in the normal lung. Copyright © 2017 the American Physiological Society.
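LogSDQ̇ and LogSDV̇ are conventionally the perfusion- and ventilation-weighted standard deviations of ln(V̇A/Q̇). A minimal sketch of that second-moment calculation is given below; the compartmental V̇A/Q̇ ratios and perfusion fractions are invented for illustration, and shunt or dead-space compartments would normally be excluded before the calculation.

```python
import numpy as np

def log_sd(weights, va_q):
    """Weighted SD of ln(VA/Q): with perfusion weights this is LogSDQ,
    with ventilation weights it is LogSDV (usual MIGET-style convention)."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    x = np.log(np.asarray(va_q, float))
    mean = np.sum(w * x)
    return np.sqrt(np.sum(w * (x - mean)**2))

# hypothetical compartments: VA/Q ratios and their perfusion fractions
va_q = [0.1, 0.3, 1.0, 3.0, 10.0]
q    = [0.05, 0.20, 0.55, 0.15, 0.05]
print("LogSDQ ~", round(log_sd(q, va_q), 2))
```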
Andrić, Filip; Šegan, Sandra; Dramićanin, Aleksandra; Majstorović, Helena; Milojković-Opsenica, Dušanka
2016-08-05
Soil-water partition coefficient normalized to the organic carbon content (KOC) is one of the crucial properties influencing the fate of organic compounds in the environment. Chromatographic methods are a well-established alternative to the direct sorption techniques used for KOC determination. The present work proposes reversed-phase thin-layer chromatography (RP-TLC) as a simpler yet equally accurate alternative to the officially recommended HPLC technique. Several TLC systems were studied, including octadecyl-(RP18) and cyano-(CN) modified silica layers in combination with methanol-water and acetonitrile-water mixtures as mobile phases. In total 50 compounds of different molecular shape, size, and ability to establish specific interactions were selected (phenols, benzodiazepines, triazine herbicides, and polyaromatic hydrocarbons). A calibration set of 29 compounds with known logKOC values determined by sorption experiments was used to build simple univariate calibrations, Principal Component Regression (PCR) and Partial Least Squares (PLS) models between logKOC and TLC retention parameters. The models exhibit good statistical performance, indicating that CN-layers contribute better to logKOC modeling than RP18-silica. The most promising TLC methods, the officially recommended HPLC method, and four in silico estimation approaches have been compared by the non-parametric Sum of Ranking Differences approach (SRD). The best estimations of logKOC values were achieved by simple univariate calibration of TLC retention data involving CN-silica layers and a moderate content of methanol (40-50% v/v). They ranked far better than the officially recommended HPLC method, which was ranked in the middle. The worst estimates were obtained from in silico computations based on the octanol-water partition coefficient. A Linear Solvation Energy Relationship study revealed that the increased polarity of CN-layers over RP18 in combination with methanol-water mixtures is the key to better modeling of logKOC, through significant diminishing of the dipolar and proton-accepting influence of the mobile phase as well as enhancing the excess molar refractivity of the chromatographic systems. Copyright © 2016 Elsevier B.V. All rights reserved.
Sileshi, G
2006-10-01
Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
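A hedged sketch of this kind of model comparison with statsmodels: simulated overdispersed counts are fitted with Poisson and negative binomial regressions and ranked by AIC and BIC (zero-inflated variants live in statsmodels.discrete.count_model and can be compared the same way). The data-generating parameters are assumptions, not the paper's insect datasets.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import Poisson, NegativeBinomial

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
X = sm.add_constant(x)

# hypothetical overdispersed, zero-heavy counts (negative binomial draw)
mu = np.exp(0.3 + 0.8 * x)
y = rng.negative_binomial(n=1.0, p=1.0 / (1.0 + mu))

for name, model in [("Poisson", Poisson(y, X)),
                    ("NegBin",  NegativeBinomial(y, X))]:
    res = model.fit(disp=0)
    print(f"{name:8s} AIC={res.aic:8.1f}  BIC={res.bic:8.1f}")
```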
A Box-Cox normal model for response times.
Klein Entink, R H; van der Linden, W J; Fox, J-P
2009-11-01
The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows for comparing model fit between models with different transformation parameters. Showing an enhanced description of the shape of the response time distributions, its application in an educational measurement context is discussed at length.
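The transformation itself is easy to reproduce outside the paper's Bayesian response-time framework. A minimal sketch with scipy estimates the Box-Cox lambda for a skewed sample and compares normality before and after; the simulated response times are an assumption for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rt = rng.lognormal(mean=0.0, sigma=0.6, size=500) + 0.8   # hypothetical response times (s)

# lambda = 0 corresponds to the log-transform; estimating lambda lets the
# data choose a transformation closer to normality
transformed, lam = stats.boxcox(rt)
print("estimated Box-Cox lambda:", round(lam, 2))
print("normality p-value (log):    ", stats.shapiro(np.log(rt)).pvalue)
print("normality p-value (Box-Cox):", stats.shapiro(transformed).pvalue)
```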
Electronic neutron sources for compensated porosity well logging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, A. X.; Antolak, A. J.; Leung, K. -N.
2012-08-01
The viability of replacing Americium–Beryllium (Am–Be) radiological neutron sources in compensated porosity nuclear well logging tools with D–T or D–D accelerator-driven neutron sources is explored. The analysis consisted of developing a model for a typical well-logging borehole configuration and computing the helium-3 detector response to varying formation porosities using three different neutron sources (Am–Be, D–D, and D–T). The results indicate that, when normalized to the same source intensity, the use of a D–D neutron source has greater sensitivity for measuring the formation porosity than either an Am–Be or D–T source. The results of the study provide operational requirements that enable compensated porosity well logging with a compact, low power D–D neutron generator, which the current state-of-the-art indicates is technically achievable.
NASA Astrophysics Data System (ADS)
Farahi, Arya; Evrard, August E.; McCarthy, Ian; Barnes, David J.; Kay, Scott T.
2018-05-01
Using tens of thousands of halos realized in the BAHAMAS and MACSIS simulations produced with a consistent astrophysics treatment that includes AGN feedback, we validate a multi-property statistical model for the stellar and hot gas mass behavior in halos hosting groups and clusters of galaxies. The large sample size allows us to extract fine-scale mass-property relations (MPRs) by performing local linear regression (LLR) on individual halo stellar mass (Mstar) and hot gas mass (Mgas) as a function of total halo mass (Mhalo). We find that: 1) both the local slope and variance of the MPRs run with mass (primarily) and redshift (secondarily); 2) the conditional likelihood, p(Mstar, Mgas| Mhalo, z) is accurately described by a multivariate, log-normal distribution, and; 3) the covariance of Mstar and Mgas at fixed Mhalo is generally negative, reflecting a partially closed baryon box model for high mass halos. We validate the analytical population model of Evrard et al. (2014), finding sub-percent accuracy in the log-mean halo mass selected at fixed property, ⟨ln Mhalo|Mgas⟩ or ⟨ln Mhalo|Mstar⟩, when scale-dependent MPR parameters are employed. This work highlights the potential importance of allowing for running in the slope and scatter of MPRs when modeling cluster counts for cosmological studies. We tabulate LLR fit parameters as a function of halo mass at z = 0, 0.5 and 1 for two popular mass conventions.
NASA Astrophysics Data System (ADS)
Cranganu, Constantin
2007-10-01
Many sedimentary basins throughout the world exhibit areas with abnormal pore-fluid pressures (higher or lower than normal or hydrostatic pressure). Predicting pore pressure and other parameters (depth, extension, magnitude, etc.) in such areas is a challenging task. The compressional acoustic (sonic) log (DT) is often used as a predictor because it responds to changes in porosity or compaction produced by abnormal pore-fluid pressures. Unfortunately, the sonic log is not commonly recorded in most oil and/or gas wells. We propose using an artificial neural network to synthesize sonic logs by identifying the mathematical dependency between DT and the commonly available logs, such as normalized gamma ray (GR) and deep resistivity logs (REID). The artificial neural network process can be divided into three steps: (1) supervised training of the neural network; (2) confirmation and validation of the model by blind-testing the results in wells that contain both the predictor (GR, REID) and the target values (DT) used in the supervised training; and (3) applying the predictive model to all wells containing the required predictor data and verifying the accuracy of the synthetic DT data by comparing the back-predicted synthetic predictor curves (GRNN, REIDNN) to the recorded predictor curves used in training (GR, REID). Artificial neural networks offer significant advantages over traditional deterministic methods. They do not require a precise mathematical model equation that describes the dependency between the predictor values and the target values and, unlike linear regression techniques, neural network methods do not overpredict mean values and thereby preserve original data variability. One of their most important advantages is that their predictions can be validated and confirmed through back-prediction of the input data. This procedure was applied to predict the presence of overpressured zones in the Anadarko Basin, Oklahoma. The results are promising and encouraging.
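A minimal sketch of the supervised-training and blind-testing steps using a small scikit-learn network, standing in for the paper's neural network software; the relationship used to simulate GR, REID, and DT is invented purely so the example runs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# hypothetical well-log arrays: GR (API), deep resistivity (ohm*m), sonic DT (us/ft)
rng = np.random.default_rng(3)
GR = rng.uniform(20, 150, 2000)
REID = 10 ** rng.uniform(0, 2, 2000)
DT = 140 - 0.3 * GR - 15 * np.log10(REID) + rng.normal(0, 3, 2000)  # assumed relation

X = np.column_stack([GR, np.log10(REID)])
X_tr, X_te, y_tr, y_te = train_test_split(X, DT, test_size=0.3, random_state=0)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
net.fit(X_tr, y_tr)                                          # supervised training step
print("blind-test R^2:", round(net.score(X_te, y_te), 3))    # validation step
```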
Evaluation of portfolio credit risk based on survival analysis for progressive censored data
NASA Astrophysics Data System (ADS)
Jaber, Jamil J.; Ismail, Noriszura; Ramli, Siti Norafidah Mohd
2017-04-01
In credit risk management, the Basel committee provides a choice of three approaches to the financial institutions for calculating the required capital: the standardized approach, the Internal Ratings-Based (IRB) approach, and the Advanced IRB approach. The IRB approach is usually preferred compared to the standardized approach due to its higher accuracy and lower capital charges. This paper uses several parametric models (exponential, log-normal, gamma, Weibull, log-logistic, Gompertz) to evaluate the credit risk of the corporate portfolio in the Jordanian banks based on a monthly sample collected from January 2010 to December 2015. The best model is selected using several goodness-of-fit criteria (MSE, AIC, BIC). The results indicate that the Gompertz distribution is the best-fitting parametric model for the data.
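A hedged sketch of the model-selection step: each candidate distribution is fitted by maximum likelihood with scipy and ranked by AIC and BIC. The simulated times-to-default are an assumption, and censoring, which a full survival analysis of a loan portfolio would include, is ignored here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
t = rng.gamma(shape=2.0, scale=6.0, size=300)   # hypothetical times-to-default (months)

candidates = {
    "exponential":  stats.expon,
    "log-normal":   stats.lognorm,
    "gamma":        stats.gamma,
    "Weibull":      stats.weibull_min,
    "log-logistic": stats.fisk,       # scipy's name for the log-logistic
    "Gompertz":     stats.gompertz,
}

for name, dist in candidates.items():
    params = dist.fit(t, floc=0)                 # fix the location at zero
    ll = dist.logpdf(t, *params).sum()
    k = len(params) - 1                          # loc was fixed, not estimated
    aic = 2 * k - 2 * ll
    bic = k * np.log(len(t)) - 2 * ll
    print(f"{name:12s} AIC={aic:8.1f}  BIC={bic:8.1f}")
```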
Contributions of Optical and Non-Optical Blur to Variation in Visual Acuity
McAnany, J. Jason; Shahidi, Mahnaz; Applegate, Raymond A.; Zelkha, Ruth; Alexander, Kenneth R.
2011-01-01
Purpose To determine the relative contributions of optical and non-optical sources of intrinsic blur to variations in visual acuity (VA) among normally sighted subjects. Methods Best-corrected VA of sixteen normally sighted subjects was measured using briefly presented (59 ms) tumbling E optotypes that were either unblurred or blurred through convolution with Gaussian functions of different widths. A standard model of intrinsic blur was used to estimate each subject’s equivalent intrinsic blur (σint) and VA for the unblurred tumbling E (MAR0). For 14 subjects, a radially averaged optical point spread function due to higher-order aberrations was derived by Shack-Hartmann aberrometry and fit with a Gaussian function. The standard deviation of the best-fit Gaussian function defined optical blur (σopt). An index of non-optical blur (η) was defined as: 1-σopt/σint. A control experiment was conducted on 5 subjects to evaluate the effect of stimulus duration on MAR0 and σint. Results Log MAR0 for the briefly presented E was correlated significantly with log σint (r = 0.95, p < 0.01), consistent with previous work. However, log MAR0 was not correlated significantly with log σopt (r = 0.46, p = 0.11). For subjects with log MAR0 equivalent to approximately 20/20 or better, log MAR0 was independent of log η, whereas for subjects with larger log MAR0 values, log MAR0 was proportional to log η. The control experiment showed a statistically significant effect of stimulus duration on log MAR0 (p < 0.01) but a non-significant effect on σint (p = 0.13). Conclusions The relative contributions of optical and non-optical blur to VA varied among the subjects, and were related to the subject’s VA. Evaluating optical and non-optical blur may be useful for predicting changes in VA following procedures that improve the optics of the eye in patients with both optical and non-optical sources of VA loss. PMID:21460756
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.; Marino, J. T., Jr.
1974-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed and semi-empirical relationships to predict the optimum detection threshold derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.
1975-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed and semi-empirical relationships to predict the optimum detection threshold derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.
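A minimal numerical sketch of the threshold optimization described in these two reports, under assumed signal, background, and log-amplitude-variance values: the bit-error probability is the Poisson miss probability averaged over log-normal intensity samples for the "on" bit, plus the Poisson false-alarm probability for the background-only "off" bit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sigma_chi2 = 0.1            # assumed log-amplitude variance
Ks, Kb = 50.0, 5.0          # assumed mean signal / background photocounts

# intensity I = exp(2*chi), normalised so that E[I] = 1
sigma_l = 2.0 * np.sqrt(sigma_chi2)
I = np.exp(rng.normal(-sigma_l**2 / 2.0, sigma_l, size=20_000))

def ber(threshold):
    # bit "1": Poisson(Ks*I + Kb) averaged over scintillation; bit "0": Poisson(Kb)
    miss = stats.poisson.cdf(threshold - 1, Ks * I + Kb).mean()
    false_alarm = stats.poisson.sf(threshold - 1, Kb)
    return 0.5 * (miss + false_alarm)

thresholds = np.arange(5, 40)
errors = [ber(T) for T in thresholds]
best = thresholds[int(np.argmin(errors))]
print("optimum threshold ~", best, " BER ~", min(errors))
```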
NASA Astrophysics Data System (ADS)
Gross, Lutz; Tyson, Stephen
2015-04-01
Fracture density and orientation are key parameters controlling the productivity of coal seam gas reservoirs. Seismic anisotropy can help to identify and quantify fracture characteristics. In particular, wide-offset land seismic recordings with dense azimuthal coverage offer the opportunity to recover anisotropy parameters. In many coal seam gas reservoirs (e.g. the Walloon Subgroup in the Surat Basin, Queensland, Australia (Esterle et al. 2013)) the thickness of coal beds and interbeds (e.g. mudstone) is well below the seismic wavelength (0.3-1 m versus 5-15 m). In these situations, the observed seismic anisotropy parameters represent effective elastic properties of the composite medium formed of fractured, anisotropic coal and isotropic interbeds. As a consequence, observed seismic anisotropy cannot be linked directly to fracture characteristics but requires a more careful interpretation. In the paper we will discuss techniques to estimate effective seismic anisotropy parameters from well log data, with the objective of improving the interpretation for the case of thin, layered coal beds. In the first step we use sonic log data to reconstruct the elasticity parameters as a function of depth (at the resolution of the sonic log). It is assumed that within a sample fractures are sparse, of the same size and orientation, penny-shaped and equally spaced. Following the classical fracture model, this can be modeled as an elastic horizontally transversely isotropic (HTI) medium (Schoenberg & Sayers 1995). Under the additional assumption of dry fractures, normal and tangential fracture weaknesses are estimated from the slow and fast shear wave velocities of the sonic log. In the second step we apply Backus-style upscaling to construct effective anisotropy parameters on an appropriate length scale. In order to honor the HTI anisotropy present in each layer we have developed a new extension of the classical Backus averaging for layered isotropic media (Backus 1962). Our new method assumes layered HTI media with constant anisotropy orientation as recovered in the first step. It leads to an effective horizontal orthorhombic elastic model. From this model, Thomsen-style anisotropy parameters are calculated to derive azimuth-dependent normal moveout (NMO) velocities (see Grechka & Tsvankin 1998). In our presentation we will show results of our approach from sonic well logs in the Surat Basin to investigate the potential of reconstructing S-wave velocity anisotropy and fracture density from azimuth-dependent NMO velocity profiles.
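For orientation, the classical Backus (1962) average of isotropic layers, which the authors extend to HTI layers, can be sketched as below; the coal and mudstone layer properties are assumed round numbers, not measured values.

```python
import numpy as np

def backus_isotropic(vp, vs, rho, h):
    """Classical Backus (1962) average of a stack of isotropic layers
    (thicknesses h) -> effective VTI stiffnesses A, C, F, L, M."""
    vp, vs, rho, h = map(np.asarray, (vp, vs, rho, h))
    mu = rho * vs**2
    lam = rho * vp**2 - 2.0 * mu
    w = h / h.sum()                         # thickness-weighted averaging
    avg = lambda x: np.sum(w * x)
    C = 1.0 / avg(1.0 / (lam + 2 * mu))
    F = C * avg(lam / (lam + 2 * mu))
    A = avg(4 * mu * (lam + mu) / (lam + 2 * mu)) + C * avg(lam / (lam + 2 * mu))**2
    L = 1.0 / avg(1.0 / mu)
    M = avg(mu)
    return A, C, F, L, M

# hypothetical coal / mudstone interbedding (velocities m/s, densities kg/m3, thickness m)
A, C, F, L, M = backus_isotropic(vp=[2400, 3600], vs=[1100, 2000],
                                 rho=[1400, 2500], h=[0.5, 3.0])
print("Thomsen epsilon ~", (A - C) / (2 * C), " gamma ~", (M - L) / (2 * L))
```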
Weibull mixture regression for marginal inference in zero-heavy continuous outcomes.
Gebregziabher, Mulugeta; Voronca, Delia; Teklehaimanot, Abeba; Santa Ana, Elizabeth J
2017-06-01
Continuous outcomes with a preponderance of zero values are ubiquitous in data that arise from biomedical studies, for example studies of addictive disorders. This is known to lead to violation of standard assumptions in parametric inference and enhances the risk of misleading conclusions unless managed properly. Two-part models are commonly used to deal with this problem. However, standard two-part models have limitations with respect to obtaining parameter estimates that have a marginal interpretation of covariate effects, which is important in many biomedical applications. Recently, marginalized two-part models have been proposed, but their development is limited to the log-normal and log-skew-normal distributions. Thus, in this paper, we propose a finite mixture approach, with Weibull mixture regression as a special case, to deal with the problem. We use an extensive simulation study to assess the performance of the proposed model in finite samples and to make comparisons with other families of models via statistical information and mean squared error criteria. We demonstrate its application on real data from a randomized controlled trial of addictive disorders. Our results show that a two-component Weibull mixture model is preferred for modeling zero-heavy continuous data when the non-zero part is simulated from a Weibull or similar distribution such as the gamma or truncated Gaussian.
Phenomenology of wall-bounded Newtonian turbulence.
L'vov, Victor S; Pomyalov, Anna; Procaccia, Itamar; Zilitinkevich, Sergej S
2006-01-01
We construct a simple analytic model for wall-bounded turbulence, containing only four adjustable parameters. Two of these parameters are responsible for the viscous dissipation of the components of the Reynolds stress tensor. The other two parameters control the nonlinear relaxation of these objects. The model offers an analytic description of the profiles of the mean velocity and the correlation functions of velocity fluctuations in the entire boundary region, from the viscous sublayer, through the buffer layer, and further into the log-law turbulent region. In particular, the model predicts a very simple distribution of the turbulent kinetic energy in the log-law region between the velocity components: the streamwise component contains a half of the total energy whereas the wall-normal and cross-stream components contain a quarter each. In addition, the model predicts a very simple relation between the von Kármán slope k and the turbulent velocity in the log-law region v+ (in wall units): v+=6k. These predictions are in excellent agreement with direct numerical simulation data and with recent laboratory experiments.
NASA Astrophysics Data System (ADS)
Annunziata, Mario Alberto; Petri, Alberto; Pontuale, Giorgio; Zaccaria, Andrea
2016-10-01
We have considered the statistical distributions of the volumes of 1131 products exported by 148 countries. We have found that the form of these distributions is not unique but heavily depends on the level of development of the nation, as expressed by macroeconomic indicators like GDP, GDP per capita, total export and a recently introduced measure for countries' economic complexity called fitness. We have identified three major classes: a) an incomplete log-normal shape, truncated on the left side, for the less developed countries, b) a complete log-normal, with a wider range of volumes, for nations characterized by intermediate economy, and c) a strongly asymmetric shape for countries with a high degree of development. Finally, the log-normality hypothesis has been checked for the distributions of all the 148 countries through different tests, Kolmogorov-Smirnov and Cramér-Von Mises, confirming that it cannot be rejected only for the countries of intermediate economy.
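A minimal sketch of such a log-normality check: test the normality of log-volumes with the Kolmogorov-Smirnov and Cramér-von Mises statistics in scipy. The sample here is simulated, and because the parameters are estimated from the same data the p-values are only approximate; the authors' exact testing procedure may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
volumes = rng.lognormal(mean=2.0, sigma=1.0, size=400)   # hypothetical export volumes

logv = np.log(volumes)
mu, sigma = logv.mean(), logv.std(ddof=1)

# test normality of log-volumes (equivalent to log-normality of the volumes)
ks = stats.kstest(logv, "norm", args=(mu, sigma))
cvm = stats.cramervonmises(logv, "norm", args=(mu, sigma))
print("KS  p-value:", round(ks.pvalue, 3))
print("CvM p-value:", round(cvm.pvalue, 3))
```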
Money-center structures in dynamic banking systems
NASA Astrophysics Data System (ADS)
Li, Shouwei; Zhang, Minghui
2016-10-01
In this paper, we propose a dynamic model for banking systems based on the description of balance sheets. It generates some features identified through empirical analysis. Through simulation analysis of the model, we find that banking systems have the feature of money-center structures, that bank asset distributions are power-law distributions, and that contract size distributions are log-normal distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel
2014-01-15
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of scalar to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels which correspond to low stretching factors.
NASA Astrophysics Data System (ADS)
Abaza, Mohamed; Mesleh, Raed; Mansour, Ali; Aggoune, el-Hadi
2015-01-01
The performance analysis of a multi-hop decode and forward relaying free-space optical (FSO) communication system is presented in this paper. The considered FSO system uses intensity modulation and direct detection as means of transmission and reception. Atmospheric turbulence impacts are modeled as a log-normal channel, and different weather attenuation effects and geometric losses are taken into account. It is shown that multi-hop is an efficient technique to mitigate such effects in FSO communication systems. A comparison with direct link and multiple-input single-output (MISO) systems considering correlation effects at the transmitter is provided. Results show that MISO multi-hop FSO systems are superior to their counterparts over links exhibiting high attenuation. Monte Carlo simulation results are provided to validate the bit error rate (BER) analyses and conclusions.
A Maximum Likelihood Ensemble Data Assimilation Method Tailored to the Inner Radiation Belt
NASA Astrophysics Data System (ADS)
Guild, T. B.; O'Brien, T. P., III; Mazur, J. E.
2014-12-01
The Earth's radiation belts are composed of energetic protons and electrons whose fluxes span many orders of magnitude, whose distributions are log-normal, and where data-model differences can be large and also log-normal. This physical system thus challenges standard data assimilation methods relying on underlying assumptions of Gaussian distributions of measurements and data-model differences, where innovations to the model are small. We have therefore developed a data assimilation method tailored to these properties of the inner radiation belt, analogous to the ensemble Kalman filter but for the unique cases of non-Gaussian model and measurement errors, and non-linear model and measurement distributions. We apply this method to the inner radiation belt proton populations, using the SIZM inner belt model [Selesnick et al., 2007] and SAMPEX/PET and HEO proton observations to select the most likely ensemble members contributing to the state of the inner belt. We will describe the algorithm, the method of generating ensemble members, our choice of minimizing the difference between instrument counts not phase space densities, and demonstrate the method with our reanalysis of the inner radiation belt throughout solar cycle 23. We will report on progress to continue our assimilation into solar cycle 24 using the Van Allen Probes/RPS observations.
Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows
NASA Technical Reports Server (NTRS)
McKenzie, D.; Savage, S.
2011-01-01
The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with a power-law distribution nor a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.
Log-Linear Models for Gene Association
Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.
2009-01-01
We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032
Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S
2017-05-30
We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of the simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of its flexible shape of hazard functions, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.
Estimating sales and sales market share from sales rank data for consumer appliances
NASA Astrophysics Data System (ADS)
Touzani, Samir; Van Buskirk, Robert
2016-06-01
Our motivation in this work is to find an adequate probability distribution to fit sales volumes of different appliances. This distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution and specifically the truncated version are well suited for this purpose. We demonstrate that using sales proxies derived from a calibrated truncated log-normal distribution function can be used to produce realistic estimates of market average product prices, and product attributes. We show that the market averages calculated with the sales proxies derived from the calibrated, truncated log-normal distribution provide better market average estimates than sales proxies estimated with simpler distribution functions.
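One simple way to turn a rank into a sales proxy under a truncated log-normal assumption is quantile (inverse-CDF) mapping, sketched below. The distribution parameters and truncation bounds are placeholders that would in practice come from calibration against known sales data, and this is not necessarily the authors' exact estimator.

```python
import numpy as np
from scipy import stats

def sales_from_rank(rank, n_products, mu, sigma, vmin, vmax):
    """Sales proxy for a given rank under a log-normal distribution of sales
    volumes truncated to [vmin, vmax]: rank 1 maps to the highest quantile
    of the truncated distribution."""
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    Fa, Fb = dist.cdf(vmin), dist.cdf(vmax)
    p = 1.0 - (rank - 0.5) / n_products        # plotting-position quantile
    return dist.ppf(Fa + p * (Fb - Fa))        # inverse CDF of the truncated law

# hypothetical calibration: ~5000 models, log-volume mean/SD, truncation bounds
ranks = np.array([1, 10, 100, 1000])
print(sales_from_rank(ranks, n_products=5000, mu=5.0, sigma=1.2, vmin=10, vmax=1e6))
```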
Investigations Regarding Anesthesia during Hypovolemic Conditions.
1982-09-25
For each level of hemoglobin, the equation was "normalized" to a pH of 7.400 for a BE of zero and a PCO2 of 40.0 torr (Orr et al.) ... the shifted BE values. Curve nomogram: using the equations resulting from the above curve-fitting procedure, we calculated the relationship between pH and log PCO2 as a linear model for a given BE (i.e., pH = m_i log PCO2 + b_i). pH_ind is obtained by solving dX/d(pH_ind) = 0, where X = (pH_l - pH_ind)^2.
Assessment of the hygienic performances of hamburger patty production processes.
Gill, C O; Rahn, K; Sloan, K; McMullen, L M
1997-05-20
The hygienic conditions of the hamburger patties collected from three patty manufacturing plants and six retail outlets were examined. At each manufacturing plant a sample from newly formed, chilled patties and one from frozen patties were collected from each of 25 batches of patties selected at random. At three, two or one retail outlet, respectively, 25 samples from frozen, chilled or both frozen and chilled patties were collected at random. Each sample consisted of 30 g of meat obtained from five or six patties. Total aerobic, coliform and Escherichia coli counts per gram were enumerated for each sample. The mean log (x) and standard deviation (s) were calculated for the log10 values for each set of 25 counts, on the assumption that the distribution of counts approximated the log normal. A value for the log10 of the arithmetic mean (log A) was calculated for each set from the values of x and s. A chi2 statistic was calculated for each set as a test of the assumption of the log normal distribution. The chi2 statistic was calculable for 32 of the 39 sets. Four of the sets gave chi2 values indicative of gross deviation from log normality. On inspection of those sets, distributions obviously differing from the log normal were apparent in two. Log A values for total, coliform and E. coli counts for chilled patties from manufacturing plants ranged from 4.4 to 5.1, 1.7 to 2.3 and 0.9 to 1.5, respectively. Log A values for frozen patties from manufacturing plants were between < 0.1 and 0.5 log10 units less than the equivalent values for chilled patties. Log A values for total, coliform and E. coli counts for frozen patties on retail sale ranged from 3.8 to 8.5, < 0.5 to 3.6 and < 0 to 1.9, respectively. The equivalent ranges for chilled patties on retail sale were 4.8 to 8.5, 1.8 to 3.7 and 1.4 to 2.7, respectively. The findings indicate that the general hygienic condition of hamburger patties could be improved by their being manufactured from only manufacturing beef of superior hygienic quality, and by the better management of chilled patties at retail outlets.
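Under the log-normal assumption used here, log A follows from x and s as log A = x + (ln 10 / 2) s^2. A minimal sketch with simulated log10 counts (the values are illustrative, not the study's data):

```python
import numpy as np

def log_arithmetic_mean(xbar, s):
    """log10 of the arithmetic mean of counts whose log10 values are
    N(xbar, s^2): log A = xbar + (ln 10 / 2) * s^2."""
    return xbar + 0.5 * np.log(10.0) * s**2

# hypothetical set of 25 log10 total counts from chilled patties
rng = np.random.default_rng(7)
logs = rng.normal(4.5, 0.6, size=25)
xbar, s = logs.mean(), logs.std(ddof=1)
print("x =", round(xbar, 2), " s =", round(s, 2),
      " log A =", round(log_arithmetic_mean(xbar, s), 2))
```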
Limpert, Eckhard; Stahel, Werner A.
2011-01-01
Background The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Methodology/Principal Findings Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the “95% range check”, their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, x/, times-divide, and notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* x/ s*, which is advantageous and recommended. Conclusions/Significance The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life. PMID:21779325
Limpert, Eckhard; Stahel, Werner A
2011-01-01
The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, x/, times-divide, and notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean, mean*, and the multiplicative standard deviation s* in the form mean* x/ s*, which is advantageous and recommended. The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
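A minimal sketch of the recommended multiplicative summary: the geometric mean and the multiplicative standard deviation s*, with the mean* x/ s* interval computed on a simulated skewed sample.

```python
import numpy as np

def multiplicative_summary(x):
    """Geometric mean and multiplicative standard deviation s*;
    the interval [gm / s*, gm * s*] covers ~68% of a log-normal sample,
    analogous to mean +/- SD for a normal sample."""
    logx = np.log(np.asarray(x, float))
    gm = np.exp(logx.mean())
    s_star = np.exp(logx.std(ddof=1))
    return gm, s_star, (gm / s_star, gm * s_star)

rng = np.random.default_rng(8)
sample = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # hypothetical skewed data
gm, s_star, (lo, hi) = multiplicative_summary(sample)
print(f"gm = {gm:.2f}  s* = {s_star:.2f}  68% range ~ [{lo:.2f}, {hi:.2f}]")
```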
Measuring Resistance to Change at the Within-Session Level
ERIC Educational Resources Information Center
Tonneau, Francois; Rios, Americo; Cabrera, Felipe
2006-01-01
Resistance to change is often studied by measuring response rate in various components of a multiple schedule. Response rate in each component is normalized (that is, divided by its baseline level) and then log-transformed. Differential resistance to change is demonstrated if the normalized, log-transformed response rate in one component decreases…
NASA Astrophysics Data System (ADS)
Yamada, Yuhei; Yamazaki, Yoshihiro
2018-04-01
This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
Log-Normal Distribution of Cosmic Voids in Simulations and Mocks
NASA Astrophysics Data System (ADS)
Russell, E.; Pycke, J.-R.
2017-01-01
Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
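scipy's lognorm is already the three-parameter (shifted) log-normal family, so fitting void radii reduces to a single call; the simulated radii below are placeholders, not the Cosmic Void Catalog.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# hypothetical void effective radii (Mpc/h): shifted log-normal sample
radii = stats.lognorm.rvs(s=0.45, loc=3.0, scale=8.0, size=1000, random_state=rng)

# scipy's lognorm is the three-parameter family:
# shape s (log-scale SD), loc (shift), scale = exp(mean of log)
shape, loc, scale = stats.lognorm.fit(radii)
print("shape (sigma) =", round(shape, 2),
      " loc =", round(loc, 2), " scale =", round(scale, 2))
print("sample skewness:", round(stats.skew(radii), 2))
```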
Empirical analysis on the runners' velocity distribution in city marathons
NASA Astrophysics Data System (ADS)
Lin, Zhenquan; Meng, Fan
2018-01-01
In recent decades, much research has been performed on human temporal activity and mobility patterns, while few investigations have examined the features of the velocity distributions of human mobility patterns. In this paper, we investigated empirically the velocity distributions of finishers in the New York City marathon, the American Chicago marathon, the Berlin marathon and the London marathon. By statistical analyses of the datasets of finish time records, we captured some statistical features of human behaviors in marathons: (1) The velocity distributions of all finishers and of partial finishers in the fastest age group both follow a log-normal distribution; (2) In the New York City marathon, the velocity distribution of all male runners in eight 5-kilometer internal timing courses undergoes two transitions: from a log-normal distribution at the initial stage (several initial courses) to a Gaussian distribution at the middle stage (several middle courses), and back to a log-normal distribution at the last stage (several last courses); (3) The intensity of the competition, which is described by the root-mean-square value of the rank changes of all runners, weakens from the initial stage to the middle stage, corresponding to the transition of the velocity distribution from log-normal to Gaussian, and when the competition gets stronger in the last course of the middle stage, a transition from the Gaussian distribution back to a log-normal one occurs at the last stage. This study may enrich the research on human mobility patterns and attract attention to the velocity features of human mobility.
Far-infrared properties of cluster galaxies
NASA Technical Reports Server (NTRS)
Bicay, M. D.; Giovanelli, R.
1987-01-01
Far-infrared properties are derived for a sample of over 200 galaxies in seven clusters: A262, Cancer, A1367, A1656 (Coma), A2147, A2151 (Hercules), and Pegasus. The IR-selected sample consists almost entirely of IR normal galaxies, with Log of L(FIR) = 9.79 solar luminosities, Log of L(FIR)/L(B) = 0.79, and Log of S(100 microns)/S(60 microns) = 0.42. None of the sample galaxies has Log of L(FIR) greater than 11.0 solar luminosities, and only one has a FIR-to-blue luminosity ratio greater than 10. No significant differences are found in the FIR properties of HI-deficient and HI-normal cluster galaxies.
Size distribution of submarine landslides along the U.S. Atlantic margin
Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.
2009-01-01
Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89 km2 and 2410 km2 and volumes between 0.002 km3 and 179 km3. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few 10s of meters in each event, with only rare, deep excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86 km3 may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1 km3), rather than large landslides or very small ones. Alternatively, the log-normal distribution may reflect an inverse power law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ± 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ± 0.3 for subaerial landslides in unconsolidated material. Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin. © 2008 Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arutyunyan, R.V.; Bol'shov, L.A.; Vasil'ev, S.K.
1994-06-01
The objective of this study was to clarify a number of issues related to the spatial distribution of contaminants from the Chernobyl accident. The effects of local statistics were addressed by collecting and analyzing (for Cesium 137) soil samples from a number of regions, and it was found that sample activity differed by a factor of 3-5. The effect of local non-uniformity was estimated by modeling the distribution of the average activity of a set of five samples for each of the regions, with the spread in the activities for a ±2 range being equal to 25%. The statistical characteristics of the distribution of contamination were then analyzed and found to follow a log-normal distribution with the standard deviation being a function of test area. All data for the Bryanskaya Oblast area were analyzed statistically and were adequately described by a log-normal function.
Probability distribution functions for unit hydrographs with optimization using genetic algorithm
NASA Astrophysics Data System (ADS)
Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh
2017-05-01
A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
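A hedged sketch of the pdf-to-UH idea with a gamma (Nash-type) shape fitted by nonlinear least squares; the hydrograph ordinates are invented for illustration, and the genetic-algorithm variant is not shown.

```python
import numpy as np
from scipy import stats, optimize

# hypothetical unit hydrograph ordinates: time (h) and discharge per unit depth
t = np.arange(1, 25, dtype=float)
q_obs = np.array([1, 4, 9, 14, 16, 15, 13, 11, 9, 7.5, 6, 5, 4, 3.2, 2.6, 2.1,
                  1.7, 1.3, 1.0, 0.8, 0.6, 0.45, 0.35, 0.25])

def gamma_uh(t, shape, scale, volume):
    # UH as a scaled two-parameter gamma pdf (Nash-type shape)
    return volume * stats.gamma.pdf(t, a=shape, scale=scale)

(shape, scale, volume), _ = optimize.curve_fit(gamma_uh, t, q_obs, p0=(3.0, 2.0, 100.0))
print("fitted shape:", round(shape, 2), " scale:", round(scale, 2),
      " peak time ~", round((shape - 1) * scale, 2), "h")
```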
Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A
2006-10-15
Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
Predicting through-focus visual acuity with the eye's natural aberrations.
Kingston, Amanda C; Cox, Ian G
2013-10-01
To develop a predictive optical modeling process that utilizes individual computer eye models along with a novel through-focus image quality metric. Individual eye models were implemented in optical design software (Zemax, Bellevue, WA) based on evaluation of ocular aberrations, pupil diameter, visual acuity, and accommodative response of 90 subjects (180 eyes; 24-63 years of age). Monocular high-contrast minimum angle of resolution (logMAR) acuity was assessed at 6 m, 2 m, 1 m, 67 cm, 50 cm, 40 cm, 33 cm, 28 cm, and 25 cm. While the subject fixated on the lowest readable line of acuity, total ocular aberrations and pupil diameter were measured three times each using the Complete Ophthalmic Analysis System (COAS HD VR) at each distance. A subset of 64 mature presbyopic eyes was used to predict the clinical logMAR acuity performance of five novel multifocal contact lens designs. To validate predictability of the design process, designs were manufactured and tested clinically on a population of 24 mature presbyopes (having at least +1.50 D spectacle add at 40 cm). Seven object distances were used in the validation study (6 m, 2 m, 1 m, 67 cm, 50 cm, 40 cm, and 25 cm) to measure monocular high-contrast logMAR acuity. Baseline clinical through-focus logMAR was shown to correlate highly (R² = 0.85) with predicted logMAR from individual eye models. At all object distances, each of the five multifocal lenses showed less than one line difference, on average, between predicted and clinical normalized logMAR acuity. Correlation showed R² between 0.90 and 0.97 for all multifocal designs. Computer-based models that account for patient's aberrations, pupil diameter changes, and accommodative amplitude can be used to predict the performance of contact lens designs. With this high correlation (R² ≥ 0.90) and high level of predictability, more design options can be explored in the computer to optimize performance before a lens is manufactured and tested clinically.
A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.
Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua
2017-07-01
Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.
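The data-generating process behind the model (not the authors' penalized-likelihood/ADMM estimator) is easy to simulate: latent log-expressions are drawn from a multivariate normal whose precision matrix encodes the graph, then Poisson counts are drawn conditional on them, which yields the overdispersion typical of RNA-seq. The precision matrix and baseline abundances below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)
p, n = 5, 500

# hypothetical sparse precision matrix (conditional dependence graph) for the
# latent log-expression levels; off-diagonal entries encode edges
Omega = np.eye(p)
Omega[0, 1] = Omega[1, 0] = 0.4
Omega[2, 3] = Omega[3, 2] = -0.3
Sigma = np.linalg.inv(Omega)

# Poisson log-normal data-generating process: counts | z ~ Poisson(exp(mu + z))
mu = np.log(np.array([50, 40, 30, 20, 10], float))      # baseline abundances
z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
counts = rng.poisson(np.exp(mu + z))

# marginal variance exceeds the mean -> overdispersion relative to plain Poisson
print("mean:", counts.mean(axis=0).round(1))
print("var: ", counts.var(axis=0).round(1))
```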
Chen, G; Shi, L; Cai, L; Lin, W; Huang, H; Liang, J; Li, L; Lin, L; Tang, K; Chen, L; Lu, J; Bi, Y; Wang, W; Ning, G; Wen, J
2017-02-01
Insulin resistance and β-cell function differ between young and elderly individuals with diabetes, but are not well elaborated in nondiabetic persons. The aims of this study were to compare insulin resistance and β-cell function between young and old adults from normal glucose tolerance (NGT) to prediabetes [which was subdivided into isolated impaired fasting glucose (i-IFG), isolated impaired glucose tolerance (i-IGT), and a combination of both (IFG/IGT)], and to compare the prevalence of diabetes mellitus (DM) in the above prediabetes subgroups between different age groups after 3 years. A total of 1 374 subjects aged below 40 or above 60 years old with NGT or prediabetes were finally included in this study. Insulin resistance and β-cell function from the homeostasis model assessment (HOMA) and the interactive, 24-variable homeostatic model of assessment (iHOMA2) were compared between different age groups. The rate of transition to diabetes between different age groups in all prediabetes subgroups was also compared. Compared with the old groups, the young i-IFG and IFG/IGT groups exhibited higher log HOMA-IR and log HOMA2-S, whereas the young i-IGT group had comparable log HOMA-IR and log HOMA2-S when compared with the old i-IFG and IFG/IGT groups. All three prediabetes subgroups had similar log HOMA-B and log HOMA2-B between the different age groups. In addition, the prevalence of diabetes in young i-IFG was statistically higher than that in old i-IFG after 3 years. Age is negatively related to log HOMA2-B in both age groups. Considering an age-related deterioration of β-cell function, young i-IFG, young i-IGT, and young IFG/IGT all suffered a greater impairment in insulin secretion than the old groups. Young i-IFG and IFG/IGT have more severe insulin resistance than the old groups. In addition, young i-IFG was characterized by a higher incidence of DM than old i-IFG. These disparities highlight that prevention efforts to slow progression from prediabetes to type 2 diabetes should place additional focus on young individuals with prediabetes, especially young i-IFG. © Georg Thieme Verlag KG Stuttgart · New York.
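For reference, the classic HOMA1 approximations are simple closed forms (the iHOMA2 indices come from the separate iHOMA2 computer model rather than a formula like this); a sketch with hypothetical fasting values:

```python
def homa_indices(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """Classic HOMA1 approximations (Matthews et al.):
    HOMA-IR = G * I / 22.5, HOMA-B = 20 * I / (G - 3.5), with G in mmol/L
    and I in uU/mL; HOMA-B is undefined for G <= 3.5."""
    g, i = fasting_glucose_mmol_l, fasting_insulin_uU_ml
    homa_ir = g * i / 22.5
    homa_b = 20.0 * i / (g - 3.5)
    return homa_ir, homa_b

# hypothetical i-IFG subject: fasting glucose 6.3 mmol/L, fasting insulin 12 uU/mL
ir, b = homa_indices(6.3, 12.0)
print(f"HOMA-IR = {ir:.2f}, HOMA-B = {b:.1f}")
```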
Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka
2016-01-01
Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution for each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
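A rough sketch of the curve fitting described above, with made-up boundary-curve frequencies: an exponential fit is a straight line in log frequency versus total score, and its coefficient of determination can then be compared against linear or quadratic fits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical boundary-curve frequencies across total depressive symptom scores
total_score = np.arange(1, 31)
freq = 4000.0 * np.exp(-0.22 * total_score) * np.exp(rng.normal(0.0, 0.05, 30))

# Exponential fit = linear regression on the log-transformed frequencies
slope, intercept = np.polyfit(total_score, np.log(freq), 1)
fitted = np.exp(intercept + slope * total_score)

# Coefficient of determination on the original scale
ss_res = np.sum((freq - fitted) ** 2)
ss_tot = np.sum((freq - freq.mean()) ** 2)
print("exponential fit r^2:", 1.0 - ss_res / ss_tot)
```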
Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka
2016-01-01
Background Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution for each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346
LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu
2017-01-20
Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
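For readers wanting to reproduce this kind of fit, a sketch of a three-parameter (shifted) log-normal fit with SciPy; the simulated void radii and parameter values are placeholders, not the Cosmic Void Catalog data.

```python
import numpy as np
from scipy import stats

# Placeholder "void radii": a shifted log-normal sample (shape, shift, scale arbitrary)
radii = stats.lognorm.rvs(s=0.5, loc=2.0, scale=8.0, size=2000, random_state=42)

# Three-parameter maximum-likelihood fit: shape (sigma), location (shift), scale (exp(mu))
shape, loc, scale = stats.lognorm.fit(radii)
print(f"sigma = {shape:.3f}, shift = {loc:.3f}, scale = {scale:.3f}")

# Variance and skewness of the fitted distribution, the shape quantities the
# study relates to substructure (maximum tree depth)
print("variance:", stats.lognorm.var(shape, loc, scale))
print("skewness:", float(stats.lognorm.stats(shape, loc, scale, moments="s")))
```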
Frequency distribution of lithium in leaves of Lycium andersonii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romney, E.M.; Wallace, A.; Kinnear, J.
1977-01-01
Lycium andersonii A. Gray is an accumulator of Li. Assays were made of 200 samples of it collected from six different locations within the Northern Mojave Desert. Mean concentrations of Li varied from location to location and tended not to follow a log_e-normal distribution, and to follow a normal distribution only poorly. There was some negative skewness to the log_e distribution which did exist. The results imply that the variation in accumulation of Li depends upon native supply of Li. Possibly the Li supply and the ability of L. andersonii plants to accumulate it are both log_e-normally distributed. The mean leaf concentration of Li in all locations was 29 μg/g, but the maximum was 166 μg/g.
Reliability Analysis of the Gradual Degradation of Semiconductor Devices.
1983-07-20
under the heading of linear models or linear statistical models.3,4 We have not used this material in this report. Assuming catastrophic failure when ... assuming a catastrophic model. In this treatment we first modify our system loss formula and then proceed to the actual analysis. II. ANALYSIS OF ... Failure times T1, T2, ..., Tn are easily analyzed by simple linear regression, since we have assumed a log-normal/Arrhenius activation model.
1993-06-01
A. OBJECTIVES ... B. HISTORY ... utilization, and any additional manpower requirements at the "selected" AIMDs. B. HISTORY: Until late 1991 both NADEP JAX and NADEP North Island (NORIS) ... ALL TRIANGULAR OR ALL LOG NORMAL DISTRIBUTIONS FOR SERVICE TIMES AT AIMD CECIL FIELD: maintenance/supply differences between triangular and log-normal service-time distributions.
Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.
2015-01-01
Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565
Fechner's law: where does the log transform come from?
Laming, Donald
2010-01-01
This paper looks at Fechner's law in the light of 150 years of subsequent study. In combination with the normal, equal variance, signal-detection model, Fechner's law provides a numerically accurate account of discriminations between two separate stimuli, essentially because the logarithmic transform delivers a model for Weber's law. But it cannot be taken to be a measure of internal sensation because an equally accurate account is provided by a χ² model in which stimuli are scaled by their physical magnitude. The logarithmic transform of Fechner's law arises because, for the number of degrees of freedom typically required in the χ² model, the logarithm of a χ² variable is, to a good approximation, normal. This argument is set within a general theory of sensory discrimination.
Erosion associated with cable and tractor logging in northwestern California
R. M. Rice; P. A. Datzman
1981-01-01
Abstract - Erosion and site conditions were measured at 102 logged plots in northwestern California. Erosion averaged 26.8 m³/ha. A log-normal distribution was a better fit to the data. The antilog of the mean of the logarithms of erosion was 3.2 m³/ha. The Coast District Erosion Hazard Rating was a poor predictor of erosion related to logging. In a new equation...
Oestereich, Lisa; Rieger, Toni; Lüdtke, Anja; Ruibal, Paula; Wurr, Stephanie; Pallasch, Elisa; Bockholt, Sabrina; Krasemann, Susanne; Muñoz-Fontela, César; Günther, Stephan
2016-03-15
We studied the therapeutic potential of favipiravir (T-705) for Lassa fever, both alone and in combination with ribavirin. Favipiravir suppressed Lassa virus replication in cell culture by 5 log10 units. In a novel lethal mouse model, it lowered the viremia level and the virus load in organs and normalized levels of cell-damage markers. Treatment with 300 mg/kg per day, commenced 4 days after infection, when the viremia level had reached 4 log10 virus particles/mL, rescued 100% of Lassa virus-infected mice. We found a synergistic interaction between favipiravir and ribavirin in vitro and an increased survival rate and extended survival time when combining suboptimal doses in vivo. © The Author 2015. Published by Oxford University Press for the Infectious Diseases Society of America.
NASA Astrophysics Data System (ADS)
Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.
2014-09-01
Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to estimate the magnitude and frequency of extreme rainfall events for predicting future flood magnitudes. This study analyses the application of rainfall partial duration series (PDS) in the fast-growing city of Madinah, located in the western part of Saudi Arabia. Different statistical distributions were applied (i.e., Normal, Log-Normal, Extreme Value Type I, Generalized Extreme Value, Pearson Type III, Log-Pearson Type III) and their parameters were estimated using the L-moments method. Several model selection criteria were also applied, e.g., the Akaike Information Criterion (AIC), the corrected Akaike Information Criterion (AICc), the Bayesian Information Criterion (BIC) and the Anderson-Darling Criterion (ADC). The analysis indicated that the Generalized Extreme Value distribution best fits the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
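A minimal sketch of this kind of distribution comparison, using maximum-likelihood fits and AIC rather than the L-moments estimators named above; the simulated rainfall values stand in for the Madinah partial duration series.

```python
import numpy as np
from scipy import stats

# Placeholder extreme-rainfall sample (mm/day); a real analysis would use the PDS
rain = stats.genextreme.rvs(c=-0.1, loc=40.0, scale=15.0, size=60, random_state=7)

candidates = {
    "Normal": stats.norm,
    "Log-Normal": stats.lognorm,
    "Gumbel (EV I)": stats.gumbel_r,
    "GEV": stats.genextreme,
    "Pearson III": stats.pearson3,
}

for name, dist in candidates.items():
    params = dist.fit(rain)                       # maximum-likelihood parameter estimates
    loglik = np.sum(dist.logpdf(rain, *params))
    aic = 2 * len(params) - 2 * loglik            # lower AIC indicates a better trade-off
    print(f"{name:13s} AIC = {aic:8.2f}")
```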
An experimental loop design for the detection of constitutional chromosomal aberrations by array CGH
2009-01-01
Background Comparative genomic hybridization microarrays for the detection of constitutional chromosomal aberrations is the application of microarray technology coming fastest into routine clinical application. Through genotype-phenotype association, it is also an important technique towards the discovery of disease causing genes and genomewide functional annotation in human. When using a two-channel microarray of genomic DNA probes for array CGH, the basic setup consists in hybridizing a patient against a normal reference sample. Two major disadvantages of this setup are (1) the use of half of the resources to measure a (little informative) reference sample and (2) the possibility that deviating signals are caused by benign copy number variation in the "normal" reference instead of a patient aberration. Instead, we apply an experimental loop design that compares three patients in three hybridizations. Results We develop and compare two statistical methods (linear models of log ratios and mixed models of absolute measurements). In an analysis of 27 patients seen at our genetics center, we observed that the linear models of the log ratios are advantageous over the mixed models of the absolute intensities. Conclusion The loop design and the performance of the statistical analysis contribute to the quick adoption of array CGH as a routine diagnostic tool. They lower the detection limit of mosaicisms and improve the assignment of copy number variation for genetic association studies. PMID:19925645
Allemeersch, Joke; Van Vooren, Steven; Hannes, Femke; De Moor, Bart; Vermeesch, Joris Robert; Moreau, Yves
2009-11-19
Comparative genomic hybridization microarrays for the detection of constitutional chromosomal aberrations is the application of microarray technology coming fastest into routine clinical application. Through genotype-phenotype association, it is also an important technique towards the discovery of disease causing genes and genomewide functional annotation in human. When using a two-channel microarray of genomic DNA probes for array CGH, the basic setup consists in hybridizing a patient against a normal reference sample. Two major disadvantages of this setup are (1) the use of half of the resources to measure a (little informative) reference sample and (2) the possibility that deviating signals are caused by benign copy number variation in the "normal" reference instead of a patient aberration. Instead, we apply an experimental loop design that compares three patients in three hybridizations. We develop and compare two statistical methods (linear models of log ratios and mixed models of absolute measurements). In an analysis of 27 patients seen at our genetics center, we observed that the linear models of the log ratios are advantageous over the mixed models of the absolute intensities. The loop design and the performance of the statistical analysis contribute to the quick adoption of array CGH as a routine diagnostic tool. They lower the detection limit of mosaicisms and improve the assignment of copy number variation for genetic association studies.
Jung, Chan Sik; Koh, Sang-Hyun; Nam, Youngwoo; Ahn, Jeong Joon; Lee, Cha Young; Choi, Won I L
2015-08-01
Monochamus saltuarius Gebler is a vector that transmits the pine wood nematode, Bursaphelenchus xylophilus, to Korean white pine, Pinus koraiensis, in Korea. To reduce the damage caused by this nematode in pine forests, timely control measures are needed to suppress the cerambycid beetle population. This study sought to construct a forecasting model to predict beetle emergence based on spring temperature. Logs of Korean white pine were infested with M. saltuarius in 2009, and the infested logs were overwintered. In February 2010, infested logs were then moved into incubators held at constant temperature conditions of 16, 20, 23, 25, 27, 30 or 34°C until all adults had emerged. The developmental rate of the beetles was estimated by linear and nonlinear equations, and a forecasting model for emergence of the beetle was constructed by pooling data based on the normalized developmental rate. The lower threshold temperature for development was 8.3°C. The forecasting model predicted the emergence pattern of M. saltuarius collected from four areas in the northern Republic of Korea relatively well. The median emergence dates predicted by the model were 2.2-5.9 d earlier than the observed median dates. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
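A sketch of the degree-day bookkeeping that such an emergence forecast rests on, using the reported lower threshold of 8.3°C; the daily temperature series and the required thermal constant are made-up values, not the fitted model.

```python
import numpy as np

LOWER_THRESHOLD_C = 8.3      # reported lower developmental threshold
THERMAL_CONSTANT = 350.0     # hypothetical degree-days needed for adult emergence

rng = np.random.default_rng(3)
# Hypothetical spring daily mean temperatures (°C), counted from 1 March
daily_temp = 5.0 + 0.15 * np.arange(120) + rng.normal(0.0, 2.0, 120)

# Accumulate degree-days above the threshold
degree_days = np.cumsum(np.clip(daily_temp - LOWER_THRESHOLD_C, 0.0, None))

# Predicted median emergence: first day the accumulated heat reaches the constant
emergence_day = int(np.argmax(degree_days >= THERMAL_CONSTANT))
print("predicted emergence:", emergence_day, "days after 1 March")
```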
Alexanderian, Alen; Zhu, Liang; Salloum, Maher; Ma, Ronghui; Yu, Meilin
2017-09-01
In this study, statistical models are developed for modeling uncertain heterogeneous permeability and porosity in tumors, and the resulting uncertainties in pressure and velocity fields during an intratumoral injection are quantified using a nonintrusive spectral uncertainty quantification (UQ) method. Specifically, the uncertain permeability is modeled as a log-Gaussian random field, represented using a truncated Karhunen-Loève (KL) expansion, and the uncertain porosity is modeled as a log-normal random variable. The efficacy of the developed statistical models is validated by simulating the concentration fields with permeability and porosity of different uncertainty levels. The irregularity in the concentration field bears reasonable visual agreement with that in MicroCT images from experiments. The pressure and velocity fields are represented using polynomial chaos (PC) expansions to enable efficient computation of their statistical properties. The coefficients in the PC expansion are computed using a nonintrusive spectral projection method with the Smolyak sparse quadrature. The developed UQ approach is then used to quantify the uncertainties in the random pressure and velocity fields. A global sensitivity analysis is also performed to assess the contribution of individual KL modes of the log-permeability field to the total variance of the pressure field. It is demonstrated that the developed UQ approach can effectively quantify the flow uncertainties induced by uncertain material properties of the tumor.
Kargarian-Marvasti, Sadegh; Rimaz, Shahnaz; Abolghasemi, Jamileh; Heydari, Iraj
2017-01-01
The Cox proportional hazards model is the most common method for analyzing the effects of several variables on survival time. However, under certain circumstances, parametric models give more precise estimates for survival data than Cox. The purpose of this study was to investigate the comparative performance of Cox and parametric models in a survival analysis of factors affecting the event time of neuropathy in patients with type 2 diabetes. This study included 371 patients with type 2 diabetes without neuropathy who were registered at the Fereydunshahr diabetes clinic. Subjects were followed up for the development of neuropathy between 2006 and March 2016. To investigate the factors influencing the event time of neuropathy, significant variables in the univariate model (P < 0.20) were entered into the multivariate Cox and parametric models (P < 0.05). In addition, the Akaike information criterion (AIC) and the area under ROC curves were used to evaluate the relative goodness of fit of each model and the efficiency of each procedure, respectively. Statistical computing was performed using R software version 3.2.3 (UNIX platforms, Windows and MacOS). Using Kaplan-Meier estimation, the survival time to neuropathy was computed as 76.6 ± 5 months after the initial diagnosis of diabetes. After multivariate analysis with the Cox and parametric models, ethnicity, high-density lipoprotein and family history of diabetes were identified as predictors of the event time of neuropathy (P < 0.05). According to AIC, the log-normal model, having the lowest value, was the best-fitting model among the Cox and parametric models. According to the comparison of survival receiver operating characteristic curves, the log-normal model was considered the most efficient and best-fitting model.
Power laws in citation distributions: evidence from Scopus.
Brzezinski, Michal
Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is a considerable empirical controversy on which statistical model fits the citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002 drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative models. The pure power-law model seems to be the best model only for the most highly cited papers in "Physics and Astronomy". Overall, our results seem to support theories implying that the most highly cited scientific papers follow the Yule, power-law with exponential cut-off or log-normal distribution. Our findings suggest also that power laws in citation distributions, when present, account only for a very small fraction of the published papers (less than 1 % for most of science fields) and that the power-law scaling parameter (exponent) is substantially higher (from around 3.2 to around 4.7) than found in the older literature.
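As a sketch of this kind of likelihood-ratio comparison, the snippet below uses the third-party powerlaw package (an assumption here, not necessarily the author's toolchain) on synthetic heavy-tailed counts standing in for citation data.

```python
import numpy as np
import powerlaw  # third-party package (Alstott et al.); assumed installed

rng = np.random.default_rng(11)
citations = rng.zipf(3.0, size=5000)          # placeholder heavy-tailed "citation counts"

fit = powerlaw.Fit(citations, discrete=True)  # estimates x_min and the scaling exponent
print("alpha =", fit.power_law.alpha, "x_min =", fit.power_law.xmin)

# Log-likelihood ratio test: R > 0 favours the power law, p gives its significance
R, p = fit.distribution_compare("power_law", "lognormal", normalized_ratio=True)
print("power law vs log-normal: R =", R, "p =", p)
```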
NASA Astrophysics Data System (ADS)
Yamazaki, Dai G.; Ichiki, Kiyotomo; Takahashi, Keitaro
2011-12-01
We study the effect of primordial magnetic fields (PMFs) on the anisotropies of the cosmic microwave background (CMB). We assume the spectrum of PMFs is described by log-normal distribution which has a characteristic scale, rather than power-law spectrum. This scale is expected to reflect the generation mechanisms and our analysis is complementary to previous studies with power-law spectrum. We calculate power spectra of energy density and Lorentz force of the log-normal PMFs, and then calculate CMB temperature and polarization angular power spectra from scalar, vector, and tensor modes of perturbations generated from such PMFs. By comparing these spectra with WMAP7, QUaD, CBI, Boomerang, and ACBAR data sets, we find that the current CMB data set places the strongest constraint at k ≃ 10^-2.5 Mpc^-1 with the upper limit B ≲ 3 nG.
NASA Astrophysics Data System (ADS)
Duarte Queirós, Sílvio M.
2012-07-01
We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q a generalisation of the log-Normal distribution is yielded. Namely, the distribution increases the tail for small (when q<1) or large (when q>1) values of the variable upon analysis. The usual log-Normal distribution is retrieved when q=1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
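For orientation, a sketch of the ordinary (q = 1) Kapteyn multiplicative process that the paper generalises with the q-product: repeated multiplication of positive random factors drives the product toward a log-normal law. The factor distribution and step count are arbitrary, and the deformed-algebra case is not implemented here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n_steps, n_samples = 200, 10000
# Positive i.i.d. multiplicative factors (the distribution choice is arbitrary)
factors = rng.uniform(0.8, 1.25, size=(n_samples, n_steps))
product = factors.prod(axis=1)

# By the CLT applied to the summed logs, log(product) is approximately normal
logx = np.log(product)
print("normality test p-value on log(product):", stats.normaltest(logx).pvalue)
print("skewness of the product itself:", stats.skew(product))
```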
Krishnamoorthy, K; Oral, Evrim
2017-12-01
Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki
2014-05-01
A single cell can form a colony, and ionizing irradiation has long been known to reduce such a cellular clonogenic potential. Analysis of abortive colonies unable to continue to grow should provide important information on the reproductive cell death (RCD) following irradiation. Our previous analysis with a branching process model showed that the RCD in normal human fibroblasts can persist over 16 generations following irradiation with low linear energy transfer (LET) γ-rays. Here we further set out to evaluate the RCD persistency in abortive colonies arising from normal human fibroblasts exposed to high-LET carbon ions (18.3 MeV/u, 108 keV/µm). We found that the abortive colony size distribution determined by biological experiments follows a linear relationship on the log-log plot, and that the Monte Carlo simulation using the RCD probability estimated from such a linear relationship well simulates the experimentally determined surviving fraction and the relative biological effectiveness (RBE). We identified the short-term phase and long-term phase for the persistent RCD following carbon-ion irradiation, which were similar to those previously identified following γ-irradiation. Taken together, our results suggest that subsequent secondary or tertiary colony formation would be invaluable for understanding the long-lasting RCD. All together, our framework for analysis with a branching process model and a colony formation assay is applicable to determination of cellular responses to low- and high-LET radiation, and suggests that the long-lasting RCD is a pivotal determinant of the surviving fraction and the RBE.
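A toy branching-process simulation in the spirit of the analysis above: each cell either undergoes reproductive cell death or divides at every generation, and colonies that fail to reach a size threshold are scored as abortive. The RCD probability, generation count, and 50-cell threshold are made-up values, not the fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def grow_colony(p_rcd=0.15, generations=10):
    """Return the final cell count of one colony under a per-generation RCD probability."""
    cells = 1
    for _ in range(generations):
        # each cell independently dies (RCD) or divides into two
        survivors = rng.binomial(cells, 1.0 - p_rcd)
        cells = 2 * survivors
        if cells == 0:
            break
    return cells

sizes = np.array([grow_colony() for _ in range(5000)])
threshold = 50                        # colonies smaller than this count as abortive
surviving_fraction = np.mean(sizes >= threshold)
print("surviving fraction:", surviving_fraction)
```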
Grain coarsening in two-dimensional phase-field models with an orientation field
NASA Astrophysics Data System (ADS)
Korbuly, Bálint; Pusztai, Tamás; Henry, Hervé; Plapp, Mathis; Apel, Markus; Gránásy, László
2017-05-01
In the literature, contradictory results have been published regarding the form of the limiting (long-time) grain size distribution (LGSD) that characterizes the late stage grain coarsening in two-dimensional and quasi-two-dimensional polycrystalline systems. While experiments and the phase-field crystal (PFC) model (a simple dynamical density functional theory) indicate a log-normal distribution, other works including theoretical studies based on conventional phase-field simulations that rely on coarse grained fields, like the multi-phase-field (MPF) and orientation field (OF) models, yield significantly different distributions. In a recent work, we have shown that the coarse grained phase-field models (whether MPF or OF) yield very similar limiting size distributions that seem to differ from the theoretical predictions. Herein, we revisit this problem, and demonstrate in the case of OF models [R. Kobayashi, J. A. Warren, and W. C. Carter, Physica D 140, 141 (2000), 10.1016/S0167-2789(00)00023-3; H. Henry, J. Mellenthin, and M. Plapp, Phys. Rev. B 86, 054117 (2012), 10.1103/PhysRevB.86.054117] that an insufficient resolution of the small angle grain boundaries leads to a log-normal distribution close to those seen in the experiments and the molecular scale PFC simulations. Our paper indicates, furthermore, that the LGSD is critically sensitive to the details of the evaluation process, and raises the possibility that the differences among the LGSD results from different sources may originate from differences in the detection of small angle grain boundaries.
NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Weltzien, Ingunn H.
2016-09-01
Snow is an important and complicated element in hydrological modelling. The traditional catchment hydrological model with its many free calibration parameters, also in snow sub-models, is not a well-suited tool for predicting conditions for which it has not been calibrated. Such conditions include prediction in ungauged basins and assessing hydrological effects of climate change. In this study, a new model for the spatial distribution of snow water equivalent (SWE), parameterized solely from observed spatial variability of precipitation, is compared with the current snow distribution model used in the operational flood forecasting models in Norway. The former model uses a dynamic gamma distribution and is called Snow Distribution_Gamma (SD_G), whereas the latter model has a fixed, calibrated coefficient of variation, which parameterizes a log-normal model for snow distribution and is called Snow Distribution_Log-Normal (SD_LN). The two models are implemented in the parameter-parsimonious rainfall-runoff model Distance Distribution Dynamics (DDD), and their capability for predicting runoff, SWE and snow-covered area (SCA) is tested and compared for 71 Norwegian catchments. The calibration period is 1985-2000 and the validation period is 2000-2014. Results show that SD_G better simulates SCA when compared with MODIS satellite-derived snow cover. In addition, SWE is simulated more realistically in that seasonal snow melts out, and the build-up of "snow towers" with spurious positive trends in SWE, typical of SD_LN, is prevented. The precision of runoff simulations using SD_G is slightly inferior, with a reduction in the Nash-Sutcliffe and Kling-Gupta efficiency criteria of 0.01, but it is shown that the high precision in runoff prediction using SD_LN is accompanied by erroneous simulations of SWE.
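For context on the SD_LN side, a sketch of how a fixed coefficient of variation pins down a log-normal SWE distribution once the mean SWE is known; the mean and CV values here are arbitrary.

```python
import numpy as np
from scipy import stats

mean_swe = 250.0   # hypothetical catchment mean SWE (mm)
cv = 0.6           # hypothetical calibrated coefficient of variation

# Log-normal parameters implied by the mean and CV
sigma2 = np.log(1.0 + cv**2)
mu = np.log(mean_swe) - 0.5 * sigma2
swe_dist = stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

print("check mean:", swe_dist.mean())            # should recover ~250 mm
print("90th percentile SWE:", swe_dist.ppf(0.9))
```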
Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein
2018-09-15
A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was evaluated under four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case, while in the other three scenarios one-step and two-step procedures are used to make the model predictions more precise. These scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization, and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11,013 models (10,785 linear, 4 nonlinear, and 224 hybrid). The uncertainty of the linear, nonlinear and hybrid models is examined by the Monte Carlo technique. The best preprocessing technique is the Johnson normality transform followed by seasonal standardization (R² = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03; UII = 0.05). The results of the uncertainty analysis indicated the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of an adaptive neuro-fuzzy inference system with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid methods outperformed the ANFIS-FFA method. Copyright © 2018 Elsevier Ltd. All rights reserved.
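A sketch of the two-step preprocessing idea (normality transform, then stationarization) on a synthetic monthly series; the Box-Cox transform stands in for the Johnson transform named above, and all series values are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
months = np.arange(240)
# Synthetic monthly rainfall: seasonal cycle plus positively skewed noise
rain = 80.0 + 40.0 * np.sin(2 * np.pi * months / 12) + rng.gamma(2.0, 10.0, months.size)

# Step 1: normality transform (Box-Cox requires strictly positive data)
transformed, lam = stats.boxcox(rain)

# Step 2: seasonal standardization, i.e. remove each calendar month's mean and std
x = transformed.reshape(-1, 12)                  # rows = years, columns = months
standardized = (x - x.mean(axis=0)) / x.std(axis=0)
preprocessed = standardized.ravel()

print("Box-Cox lambda:", lam)
print("preprocessed mean/std:", preprocessed.mean(), preprocessed.std())
```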
Leão, William L.; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor’s 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model. PMID:29333210
Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.
Saxena, Aditi R; Seely, Ellen W; Rich-Edwards, Janet W; Wilkins-Haug, Louise E; Karumanchi, S Ananth; McElrath, Thomas F
2013-04-04
First trimester Pregnancy Associated Plasma Protein A (PAPP-A) levels, routinely measured for aneuploidy screening, may predict development of preeclampsia. This study tests the hypothesis that first trimester PAPP-A levels correlate with soluble fms-like tyrosine kinase-1 (sFlt-1) levels, an angiogenic marker associated with preeclampsia, throughout pregnancy. sFlt-1 levels were measured longitudinally in 427 women with singleton pregnancies in all three trimesters. First trimester PAPP-A and PAPP-A Multiples of Median (MOM) were measured. Student's t and Wilcoxon tests compared preeclamptic and normal pregnancies. A linear mixed model assessed the relationship between log PAPP-A and serial log sFlt-1 levels. PAPP-A and PAPP-A MOM levels were significantly lower in preeclamptic (n = 19) versus normal pregnancies (p = 0.02). Although mean third trimester sFlt-1 levels were significantly higher in preeclampsia (p = 0.002), first trimester sFlt-1 levels were lower in women who developed preeclampsia, compared with normal pregnancies (p = 0.03). PAPP-A levels correlated significantly with serial sFlt-1 levels. Importantly, low first trimester PAPP-A MOM predicted decreased odds of normal pregnancy (OR 0.2, p = 0.002). Low first trimester PAPP-A levels suggest increased future risk of preeclampsia and correlate with serial sFlt-1 levels throughout pregnancy. Furthermore, low first trimester PAPP-A status significantly predicted decreased odds of normal pregnancy.
Serebrianyĭ, A M; Akleev, A V; Aleshchenko, A V; Antoshchina, M M; Kudriashova, O V; Riabchenko, N I; Semenova, L P; Pelevina, I I
2011-01-01
Using the micronucleus (MN) assay with cytokinesis block by cytochalasin B, the mean frequency of blood lymphocytes with MN was determined in 76 Moscow inhabitants, 35 people from Obninsk and 122 from the Chelyabinsk region. In contrast to the distribution of individuals by the spontaneous frequency of cells with aberrations, which was shown to be binomial (Kusnetzov et al., 1980), the distribution of individuals by the spontaneous frequency of cells with MN in all three cohorts can be regarded as log-normal (χ² test). The distribution of individuals in the pooled cohorts (Moscow and Obninsk inhabitants) and in the combined cohort of all those examined must, with high reliability, be regarded as log-normal (0.70 and 0.86, respectively), but cannot be regarded as Poisson, binomial or normal. Taking into account that a log-normal distribution of children by the spontaneous frequency of lymphocytes with MN was also observed in a survey of 473 children from different kindergartens in Moscow, we conclude that log-normality is a regularity inherent in this type of damage to the lymphocyte genome. In contrast, the distribution of individuals by the frequency of lymphocytes with MN induced by in vitro irradiation must in most cases be regarded as normal. This distributional character indicates that the appearance of damage (genomic instability) in a single lymphocyte of an individual increases the probability of damage appearing in other lymphocytes. We propose that damaged stem cells, the lymphocyte progenitors, exchange information with undamaged cells, a process of the bystander-effect type. It can also be supposed that damage is transmitted to daughter cells at the time of stem-cell division.
Yang, Fen; Wang, Meng; Wang, Zunyao
2013-09-01
This work studies the sorption behaviors of phthalic acid esters (PAEs) on three soils by batch equilibration experiments and quantitative structure-property relationship (QSPR) methodology. Firstly, the effects of soil type, dissolved organic matter and pH on the sorption of four PAEs (DMP, DEP, DAP, DBP) are investigated. The results indicate that the soil organic carbon content has a crucial influence on the sorption process. In addition, a negative correlation between pH values and the sorption capacities was found for these four PAEs. However, the effect of DOM on PAE sorption appears more complicated: the sorption of the four PAEs was promoted by low concentrations of DOM, whereas at high concentrations the influence of DOM on sorption was less clear-cut. Then the organic carbon content-normalized sorption coefficient (logKoc) values of 17 PAEs on three soils were measured, and the mean values ranged from 1.50 to 7.57. The logKoc values showed good correlation with the corresponding logKow values. Finally, two QSPR models were developed with 13 theoretical parameters to obtain reliable logKoc predictions. The leave-one-out cross validation (CV-LOO) indicated that the internal predictive power of the two models was satisfactory. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Rotational periods and other parameters of magnetars
NASA Astrophysics Data System (ADS)
Malov, I. F.
2006-05-01
The rotational periods P, period derivatives dP/dt, and magnetic fields B in the region where the emission of anomalous X-ray pulsars (AXPs) and soft gamma-ray repeaters (SGRs) is generated are calculated using a model that associates the emission of these objects with the existence of drift waves at the periphery of the magnetosphere of a neutron star. The values obtained for these parameters are P = 11-737 ms, dP/dt = 3.7 × 10^-16 to 5.5 × 10^-12, and log B (G) = 2.63-6.25. We find a dependence between the X-ray luminosity of AXPs and SGRs, L_x, and the rate at which they lose rotational energy, dE/dt, which is similar to the L_x(dE/dt) dependence for radio pulsars with detected X-ray emission. Within the errors, AXPs/SGRs and radio pulsars with short periods (P < 0.1 s) display the same slopes for their log(dP/dt)-log P relations and for the dependence of the efficiency of their transformation of rotational energy into radiation on their periods. A dipole model is used to calculate the surface magnetic fields of the neutron stars in AXPs and SGRs, which turn out to be, on average, comparable to the surface fields of normal radio pulsars.
Normality of raw data in general linear models: The most widespread myth in statistics
Kery, Marc; Hatfield, Jeff S.
2003-01-01
In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
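A small simulation of the point being made: in a one-way layout with well-separated group means, the raw response fails a normality test while the residuals pass. Group means, error SD, and sample sizes are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Three groups with very different means but identical normal errors
means = np.array([0.0, 8.0, 16.0])
group = np.repeat([0, 1, 2], 200)
y = means[group] + rng.normal(0.0, 1.0, group.size)

# Shapiro-Wilk on the raw response (trimodal, so the p-value is tiny) ...
print("raw response p-value:", stats.shapiro(y).pvalue)

# ... versus on the residuals from the group-mean (ANOVA) model
residuals = y - np.array([y[group == g].mean() for g in range(3)])[group]
print("residuals p-value:   ", stats.shapiro(residuals).pvalue)
```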
Tsao Wu, Maya; Armitage, M Diane; Trujillo, Claire; Trujillo, Anna; Arnold, Laura E; Tsao Wu, Lauren; Arnold, Robert W
2017-12-04
We needed to validate and calibrate our portable acuity screening tools so amblyopia could be detected quickly and effectively at school entry. Spiral-bound flip cards and a downloadable PDF surround HOTV acuity test box with critical lines were combined with a matching card. Amblyopic patients performed critical-line and then threshold acuity testing, which was compared with patched E-ETDRS acuity. Five normal subjects wore Bangerter foil goggles to simulate blur for comparative validation. The 31 treated amblyopic eyes showed logMAR HOTV = 0.97 × (logMAR E-ETDRS) - 0.04, r² = 0.88. All but two (6%) differed by less than two lines. The five blurred subjects showed logMAR HOTV = 1.09 × (logMAR E-ETDRS) + 0.15, r² = 0.63. The critical-line test box was 98% efficient at screening within one line of 20/40. These tools reliably detected acuity in treated amblyopic patients and Bangerter-blurred normal subjects. These free and affordable tools provide sensitive screening for amblyopia in children from public, private and home schools. Changing the "pass" criterion to 4 out of 5 would improve sensitivity with somewhat slower testing for all students.
Modeling near wall effects in second moment closures by elliptic relaxation
NASA Technical Reports Server (NTRS)
Laurence, D.; Durbin, P.
1994-01-01
The elliptic relaxation model of Durbin (1993) for modeling near-wall turbulence using second moment closures (SMC) is compared to DNS data for a channel flow at Re_t = 395. The agreement for second order statistics and even the terms in their balance equation is quite satisfactory, confirming that very little viscous effects (via Kolmogoroff scales) need to be added to the high Reynolds versions of SMC for near-wall-turbulence. The essential near-wall feature is thus the kinematic blocking effect that a solid wall exerts on the turbulence through the fluctuating pressure, which is best modeled by an elliptic operator. Above the transition layer, the effect of the original elliptic operator decays rapidly, and it is suggested that the log-layer is better reproduced by adding a non-homogeneous reduction of the return to isotropy, the gradient of the turbulent length scale being used as a measure of the inhomogeneity of the log-layer. The elliptic operator was quite easily applied to the non-linear Craft & Launder pressure-strain model yielding an improved distinction between the spanwise and wall normal stresses, although at higher Reynolds number (Re) and away from the wall, the streamwise component is severely underpredicted, as well as the transition in the mean velocity from the log to the wake profiles. In this area a significant change of behavior was observed in the DNS pressure-strain term, entirely ignored in the models.
Modeling near wall effects in second moment closures by elliptic relaxation
NASA Astrophysics Data System (ADS)
Laurence, D.; Durbin, P.
1994-12-01
The elliptic relaxation model of Durbin (1993) for modeling near-wall turbulence using second moment closures (SMC) is compared to DNS data for a channel flow at Re_t = 395. The agreement for second order statistics and even the terms in their balance equation is quite satisfactory, confirming that very little viscous effects (via Kolmogoroff scales) need to be added to the high Reynolds versions of SMC for near-wall-turbulence. The essential near-wall feature is thus the kinematic blocking effect that a solid wall exerts on the turbulence through the fluctuating pressure, which is best modeled by an elliptic operator. Above the transition layer, the effect of the original elliptic operator decays rapidly, and it is suggested that the log-layer is better reproduced by adding a non-homogeneous reduction of the return to isotropy, the gradient of the turbulent length scale being used as a measure of the inhomogeneity of the log-layer. The elliptic operator was quite easily applied to the non-linear Craft & Launder pressure-strain model yielding an improved distinction between the spanwise and wall normal stresses, although at higher Reynolds number (Re) and away from the wall, the streamwise component is severely underpredicted, as well as the transition in the mean velocity from the log to the wake profiles. In this area a significant change of behavior was observed in the DNS pressure-strain term, entirely ignored in the models.
Posterior propriety for hierarchical models with log-likelihoods that have norm bounds
Michalak, Sarah E.; Morris, Carl N.
2015-07-17
Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).
Investigating uplift in the South-Western Barents Sea using sonic and density well log measurements
NASA Astrophysics Data System (ADS)
Yang, Y.; Ellis, M.
2014-12-01
Sediments in the Barents Sea have undergone large amounts of uplift due to Plio-Pleistocene deglaciation as well as Palaeocene-Eocene Atlantic rifting. Uplift affects the reservoir quality, seal capacity and fluid migration. Therefore, it is important to gain reliable uplift estimates in order to evaluate the petroleum prospectivity properly. To this end, a number of quantification methods have been proposed, such as Apatite Fission Track Analysis (AFTA), and integration of seismic surveys with well log data. AFTA usually provides accurate uplift estimates, but data are limited owing to its high cost. While seismic surveys can provide good uplift estimates when well data are available for calibration, the uncertainty can be large in areas with little to no well data. We estimated South-Western Barents Sea uplift based on well data from the Norwegian Petroleum Directorate. Primary assumptions include time-irreversible shale compaction trends and a universal normal compaction trend for a specified formation. Sonic and density logs from two Cenozoic shale formation intervals, Kolmule and Kolje, were used for the study. For each formation, we studied logs of all released wells, and established exponential normal compaction trends based on a single well. That well was then deemed the reference well, and relative uplift can be calculated at other well locations based on the offset from the normal compaction trend. We found that the amount of uplift increases along the SW to NE direction, with a maximum difference of 1,447 m from the Kolje FM estimate, and 699 m from the Kolmule FM estimate. The average standard deviation of the estimated uplift is 130 m for the Kolje FM, and 160 m for the Kolmule FM using the density log. While results from density logs and sonic logs have good agreement in general, the density log provides slightly better results in terms of higher consistency and lower standard deviation. Our results agree qualitatively with published papers, with some differences in the actual amounts of uplift. The results are considered more accurate owing to the higher resolution of the log-scale data used.
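A sketch of the offset-from-trend idea under the stated assumptions (irreversible compaction, one exponential normal compaction trend per formation). The trend form and every numerical value below are illustrative, not the calibrated Barents Sea trends.

```python
import numpy as np
from scipy.optimize import curve_fit

# Exponential normal-compaction trend for sonic transit time vs. burial depth
def trend(z, dt0, dtm, b):
    return dtm + (dt0 - dtm) * np.exp(-b * z)

# Hypothetical reference well (assumed at maximum burial): depth (m), sonic (us/ft)
z_ref = np.linspace(200, 2500, 40)
dt_ref = trend(z_ref, 180.0, 60.0, 8e-4) + np.random.default_rng(6).normal(0, 2, z_ref.size)
params, _ = curve_fit(trend, z_ref, dt_ref, p0=[180.0, 60.0, 1e-3])

# Uplifted well: a shale with sonic 95 us/ft found at 900 m present-day depth
dt_obs, z_obs = 95.0, 900.0
dt0, dtm, b = params
# Invert the trend to get the maximum burial depth this compaction state implies
z_max_burial = -np.log((dt_obs - dtm) / (dt0 - dtm)) / b
print("estimated net uplift (m):", z_max_burial - z_obs)
```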
Application of survival analysis methodology to the quantitative analysis of LC-MS proteomics data.
Tekwe, Carmen D; Carroll, Raymond J; Dabney, Alan R
2012-08-01
Protein abundance in quantitative proteomics is often based on observed spectral features derived from liquid chromatography mass spectrometry (LC-MS) or LC-MS/MS experiments. Peak intensities are largely non-normal in distribution. Furthermore, LC-MS-based proteomics data frequently have large proportions of missing peak intensities due to censoring mechanisms on low-abundance spectral features. Recognizing that the observed peak intensities detected with the LC-MS method are all positive, skewed and often left-censored, we propose using survival methodology to carry out differential expression analysis of proteins. Various standard statistical techniques including non-parametric tests such as the Kolmogorov-Smirnov and Wilcoxon-Mann-Whitney rank sum tests, and the parametric survival model and accelerated failure time-model with log-normal, log-logistic and Weibull distributions were used to detect any differentially expressed proteins. The statistical operating characteristics of each method are explored using both real and simulated datasets. Survival methods generally have greater statistical power than standard differential expression methods when the proportion of missing protein level data is 5% or more. In particular, the AFT models we consider consistently achieve greater statistical power than standard testing procedures, with the discrepancy widening with increasing missingness in the proportions. The testing procedures discussed in this article can all be performed using readily available software such as R. The R codes are provided as supplemental materials. ctekwe@stat.tamu.edu.
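A sketch of the censored-likelihood idea behind those survival models: a log-normal fit in which intensities below a detection limit contribute the log-CDF rather than the log-density. The detection limit and simulated intensities are placeholders, and this is a single-group fit, not the full differential-expression procedure.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(9)
true_mu, true_sigma, limit = 2.0, 0.8, 4.0

# Simulated peak intensities; anything below the detection limit is left-censored
intensity = rng.lognormal(true_mu, true_sigma, 300)
observed = intensity[intensity >= limit]
n_censored = np.sum(intensity < limit)

def neg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                        # keep sigma positive
    z_obs = (np.log(observed) - mu) / sigma
    ll_obs = np.sum(stats.norm.logpdf(z_obs) - np.log(sigma * observed))
    z_lim = (np.log(limit) - mu) / sigma
    ll_cens = n_censored * stats.norm.logcdf(z_lim)  # P(intensity < limit)
    return -(ll_obs + ll_cens)

res = optimize.minimize(neg_loglik, x0=[np.log(observed).mean(), 0.0])
print("estimated mu, sigma:", res.x[0], np.exp(res.x[1]))
```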
Scoring in genetically modified organism proficiency tests based on log-transformed results.
Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P
2006-01-01
The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
On the use of log-transformation vs. nonlinear regression for analyzing biological power laws.
Xiao, Xiao; White, Ethan P; Hooten, Mevin B; Durham, Susan L
2011-10-01
Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain.
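A condensed sketch of that Monte Carlo comparison for a power law y = a·x^b, fitting by linear regression on log-transformed data versus nonlinear least squares under the two error structures discussed; sample size and parameter values are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(12)
a, b = 5.0, 0.75
x = np.linspace(1, 100, 200)

def power(x, a, b):
    return a * x**b

def fit_both(y):
    # LR: straight line on log-log axes; NLR: nonlinear least squares on the raw scale
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    (a_nlr, b_nlr), _ = curve_fit(power, x, y, p0=[1.0, 1.0])
    return (np.exp(intercept), slope), (a_nlr, b_nlr)

# Multiplicative, heteroscedastic log-normal error: LR assumptions hold
y_mult = power(x, a, b) * rng.lognormal(0.0, 0.3, x.size)
# Additive, homoscedastic normal error: NLR assumptions hold
y_add = power(x, a, b) + rng.normal(0.0, 1.5, x.size)

print("multiplicative error (LR, NLR):", fit_both(y_mult))
print("additive error (LR, NLR):      ", fit_both(y_add))
```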
NASA Astrophysics Data System (ADS)
Panhwar, Sher Khan; Liu, Qun; Khan, Fozia; Siddiqui, Pirzada J. A.
2012-03-01
Using surplus production model packages of ASPIC (a stock-production model incorporating covariates) and CEDA (Catch effort data analysis), we analyzed the catch and effort data of the Sillago sihama fishery in Pakistan. ASPIC estimates the parameters of MSY (maximum sustainable yield), F_msy (fishing mortality at MSY), q (catchability coefficient), K (carrying capacity or unexploited biomass) and B1/K (ratio of initial biomass to carrying capacity). The estimated non-bootstrapped value of MSY based on the logistic model was 598 t and that based on the Fox model was 415 t, which showed that the Fox model estimation was more conservative than that with the logistic model. The R² with the logistic model (0.702) is larger than that with the Fox model (0.541), which indicates a better fit. The coefficient of variation (cv) of the estimated MSY was about 0.3, except for a larger value of 88.87 and a smaller value of 0.173. In contrast to the ASPIC results, the R² with the Fox model (0.651-0.692) was larger than that with the Schaefer model (0.435-0.567), indicating a better fit. The key parameters of CEDA are: MSY, K, q, and r (intrinsic growth rate), and the three error assumptions in using the models are normal, log normal and gamma. Parameter estimates from the Schaefer and Pella-Tomlinson models were similar. The MSY estimations from the above two models were 398 t, 549 t and 398 t for normal, log-normal and gamma error distributions, respectively. The MSY estimates from the Fox model were 381 t, 366 t and 366 t for the above three error assumptions, respectively. The Fox model estimates were smaller than those for the Schaefer and the Pella-Tomlinson models. In the light of the MSY estimations of 415 t from ASPIC for the Fox model and 381 t from CEDA for the Fox model, MSY for S. sihama is about 400 t. As the catch in 2003 was 401 t, we would suggest the fishery should be kept at the current level. Production models used here depend on the assumption that CPUE (catch per unit effort) data used in the study can reliably quantify temporal variability in population abundance, hence the modeling results would be wrong if such an assumption is not met. Because the reliability of this CPUE data in indexing fish population abundance is unknown, we should be cautious with the interpretation and use of the derived population and management parameters.
Christensen, Gary E; Song, Joo Hyun; Lu, Wei; El Naqa, Issam; Low, Daniel A
2007-06-01
Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log-Jacobian value and the air flow rate correlated well (R2 = 0.858 on average for the entire lung). The correlation for the fifth individual was not as good (R2 = 0.377 on average for the entire lung) and can be explained by the small variation in tidal volume for this individual. The correlation of the average log-Jacobian value and the air flow rate for images near the diaphragm correlated well in all five individuals (R2 = 0.943 on average). These preliminary results indicate a strong correlation between the expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry. Predicting the location, motion, and compression/expansion of the tumor and normal tissue using image registration and spirometry could have many important benefits for radiotherapy treatment. These benefits include reducing radiation dose to normal tissue, maximizing dose to the tumor, improving patient care, reducing treatment cost, and increasing patient throughput.
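Not the authors' registration pipeline; a small numpy sketch, under the stated small-deformation assumption of a transform x -> x + u(x), of how a voxel-wise log-Jacobian expansion/compression map can be computed from a displacement field by finite differences.

```python
# Minimal sketch (not the authors' registration code): log-Jacobian map of a
# 3-D deformation x -> x + u(x), using finite differences on a voxel grid.
import numpy as np

def log_jacobian(u, spacing=(1.0, 1.0, 1.0)):
    """u: displacement field of shape (3, nz, ny, nx); returns log|J| per voxel."""
    grads = [np.gradient(u[i], *spacing) for i in range(3)]   # du_i/dx_j
    J = np.zeros(u.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = (i == j) + grads[i][j]             # identity + displacement gradient
    det = np.linalg.det(J)
    return np.log(np.clip(det, 1e-12, None))                  # expansion > 0, compression < 0

# Toy example: a uniform 5% dilation should give a positive, constant log-Jacobian.
shape = (20, 20, 20)
zz, yy, xx = np.meshgrid(*[np.arange(s, dtype=float) for s in shape], indexing="ij")
u = 0.05 * np.stack([zz, yy, xx])            # u(x) = 0.05*x  ->  J = 1.05*I
lj = log_jacobian(u)
print(round(lj.mean(), 4), round(3 * np.log(1.05), 4))   # both ~0.1464
```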
Effective Cyber Situation Awareness (CSA) Assessment and Training
2013-11-01
…activity/scenario. y. Save Wireshark Captures. z. Save SNORT logs. aa. Save MySQL databases. 4. After the completion of the scenario, the reversion… …line or from custom Java code. • Cisco ASA Parser: Builds normalized vendor-neutral firewall rule specifications from Cisco ASA and PIX firewall… …The Service tool lets analysts build Cauldron models from either the command line or from custom Java code. Functionally, it corresponds to the…
Analysis of Activity Patterns and Performance in Polio Survivors
2006-10-01
…variable were inspected for asymmetry and long-tailedness and normality. When appropriate, transformations (e.g. log function) were made. Data were… …thighs and a combined pelvis-HAT segment was used for our analyses. The ankles were modeled as universal joints, the knees as revolutes, and the… …segment, lumped pelvis + HAT, universal ankle, revolute knee, spherical hip; pin at CP, entire stance; stance sagittal knee and frontal hip…
Song, Rui; Kosorok, Michael R.; Cai, Jianwen
2009-01-01
Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics for recurrent events data with arbitrary numbers of events under independent censoring, together with the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates; they reduce to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived from the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event context. When the effect size is small and the baseline covariates do not contain significant information about event times, the formula reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for a single event or for independent event times within a subject. We carry out simulations to study the control of type I error and to compare power between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study. PMID:18162107
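The covariate-adjusted formula itself is not reproduced here; the sketch below implements the classical Schoenfeld (1983) event-count formula that, per the abstract, the proposed formula reduces to when covariates carry little information and there is a single event. The hazard ratio, alpha, and power values are illustrative.

```python
# Hedged sketch: the classical Schoenfeld (1983) event-count formula, the
# special case to which the covariate-adjusted sample size formula reduces.
import math
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, allocation=0.5):
    """Events required for a two-sided log-rank test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p = allocation                                   # fraction randomized to one arm
    return (z_a + z_b) ** 2 / (p * (1 - p) * math.log(hazard_ratio) ** 2)

# Example: hazard ratio 0.7, 1:1 allocation -> roughly 247 events
print(round(schoenfeld_events(0.7)))
```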
NASA Astrophysics Data System (ADS)
Wu, H. Y.; Chan, C. H.
2016-12-01
IODP has been conducting scientific drilling in the Nankai Trough region of southwest Japan since 2006. Over this decade, extensive logging data and core samples have been collected in this area for determining the stress evolution during the interseismic period following the 1944 Tonankai earthquake. A key assumption for the Nankai seismogenic zone is that stress accumulation on the plate boundary should produce a thrust-fault stress regime (SHmax > Shmin > Sv). In this research, a slip-deficit model is used to determine the regional-scale stress field, with the drilled IODP well sites serving as fine-scale control points. Based on the multiple IODP expeditions near the Nankai Trough (sites C0002A, F, and P) at different depths, the three-dimensional stress estimation can be constrained with logging from the lateral boreholes. Although the recent drilling did not reach the subduction plate boundary, the model provides useful results given reliable boundary conditions. The model simulates the stress orientation and magnitude constrained by the slip-deficit model, regional seismicity, and borehole logging. Our results indicate that the stress state remains in a normal-faulting regime throughout the study area, even near the Nankai Trough. Although the stress magnitudes increase with depth, the smaller horizontal principal stress (Shmin) is hardly greater than the vertical stress (overburden weight) at the reachable depths (>10 km). This result implies that a pore-pressure anomaly would occur during slip and that the stress state would vary between different stages when an event occurs.
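For readers unfamiliar with the stress-regime notation, the toy sketch below classifies a faulting regime from the relative magnitudes of Sv, SHmax, and Shmin following Anderson's scheme; the magnitudes shown are hypothetical, not values from the study.

```python
# Illustrative sketch: Andersonian classification of the stress regime from the
# relative magnitudes of the vertical stress Sv and the two horizontal stresses.
def stress_regime(sv, shmax, shmin):
    if sv >= shmax >= shmin:
        return "normal faulting (Sv > SHmax > Shmin)"
    if shmax >= sv >= shmin:
        return "strike-slip (SHmax > Sv > Shmin)"
    if shmax >= shmin >= sv:
        return "thrust faulting (SHmax > Shmin > Sv)"
    return "undefined"

# Hypothetical magnitudes in MPa (not values from the study):
print(stress_regime(sv=60.0, shmax=55.0, shmin=40.0))   # -> normal faulting
```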
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, John M., E-mail: jrobertson@beaumont.ed; Soehn, Matthias; Yan Di
Purpose: Understanding the dose-volume relationship of small bowel irradiation and severe acute diarrhea may help reduce the incidence of this side effect during adjuvant treatment for rectal cancer. Methods and Materials: Consecutive patients treated curatively for rectal cancer were reviewed, and the maximum grade of acute diarrhea was determined. The small bowel was outlined on the treatment planning CT scan, and a dose-volume histogram was calculated for the initial pelvic treatment (45 Gy). Logistic regression models were fitted for varying cutoff-dose levels from 5 to 45 Gy in 5-Gy increments. The model with the highest LogLikelihood was used to develop a cutoff-dose normal tissue complication probability (NTCP) model. Results: There were a total of 152 patients (48% preoperative, 47% postoperative, 5% other), predominantly treated prone (95%) with a three-field technique (94%) and a protracted venous infusion of 5-fluorouracil (78%). Acute Grade 3 diarrhea occurred in 21%. The largest LogLikelihood was found for the cutoff-dose logistic regression model with 15 Gy as the cutoff-dose, although the models for 20 Gy and 25 Gy had similar significance. According to this model, highly significant correlations (p <0.001) between small bowel volumes receiving at least 15 Gy and toxicity exist in the considered patient population. Similar findings applied to both the preoperatively (p = 0.001) and postoperatively irradiated groups (p = 0.001). Conclusion: The incidence of Grade 3 diarrhea was significantly correlated with the volume of small bowel receiving at least 15 Gy using a cutoff-dose NTCP model.
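A sketch of the cutoff-dose scan described in the Methods, on synthetic data: for each candidate cutoff dose, the grade ≥3 toxicity indicator is regressed on the small-bowel volume receiving at least that dose, and the cutoff with the largest maximized log-likelihood is retained. The DVH summaries and dose-response coefficients below are invented for illustration.

```python
# Sketch of the cutoff-dose selection idea (synthetic data, not the study's):
# for each cutoff dose D, fit logistic(toxicity ~ V_D) and keep the D with the
# highest maximized log-likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n, cutoffs = 150, np.arange(5, 50, 5)                  # 5..45 Gy
# Hypothetical DVH summaries: V_D = small-bowel volume (cc) receiving >= D Gy.
V = {d: rng.gamma(2.0, 60.0 / (1 + d / 15), n) for d in cutoffs}
tox = (rng.random(n) < expit(-2.5 + 0.02 * V[15])).astype(float)   # toxicity driven by V15 here

def max_loglik(x, y):
    X = np.column_stack([np.ones_like(x), x])
    def nll(b):
        p = np.clip(expit(X @ b), 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return -minimize(nll, np.zeros(2), method="BFGS").fun

ll = {d: max_loglik(V[d], tox) for d in cutoffs}
best = max(ll, key=ll.get)
print("best cutoff dose:", best, "Gy; log-likelihood:", round(ll[best], 2))
```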
Statistical distributions of ultra-low dose CT sinograms and their fundamental limits
NASA Astrophysics Data System (ADS)
Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Low-dose CT imaging is typically constrained to remain diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically principled image reconstruction methods that make full use of the raw data information. Since most statistically based iterative reconstruction methods rest on the assumption that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption in the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) diagnostic CT, where post-log data are well modeled by a normal distribution; (2) low-dose CT, where the normal distribution remains a reasonable approximation and statistically principled (post-log) methods that assume a normal distribution have an advantage; and (3) a ULD regime that is photon-starved and for which the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) at 120 kVp and 0.5 mAs is the maximum pi value for which a definitive maximum-likelihood estimate could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
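A synthetic sketch (not the paper's calibrated simulation) of why post-log data become problematic in the ULD regime: Poisson transmission counts are generated for a line integral of 4.8 (the value quoted above), the negative log is taken after a crude non-positivity correction, and the moments of the resulting post-log data are compared across dose levels. The incident photon counts are assumed values.

```python
# Sketch (synthetic, not the paper's calibrated simulation): behaviour of
# post-log CT data  y = -log(N / N0)  as the incident photon count N0 drops.
import numpy as np

rng = np.random.default_rng(3)
pi_true = 4.8                           # line integral (attenuation * path length)
for N0 in (1e5, 1e3, 50):               # diagnostic -> low-dose -> ultra-low-dose
    N = rng.poisson(N0 * np.exp(-pi_true), size=200_000)
    frac_nonpos = np.mean(N <= 0)       # counts that cannot be log-transformed directly
    N_corr = np.maximum(N, 1)           # a crude non-positivity "correction"
    y = -np.log(N_corr / N0)
    skew = ((y - y.mean())**3).mean() / y.std()**3
    print(f"N0={N0:>8.0f}  zero-count fraction={frac_nonpos:.3f}  "
          f"mean={y.mean():.3f}  std={y.std():.3f}  skew={skew:+.2f}")
```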
TESTING THE PROPAGATING FLUCTUATIONS MODEL WITH A LONG, GLOBAL ACCRETION DISK SIMULATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogg, J Drew; Reynolds, Christopher S.
2016-07-20
The broadband variability of many accreting systems displays characteristic structures: log-normal flux distributions, root-mean square (rms)-flux relations, and long inter-band lags. These characteristics are usually interpreted as inward propagating fluctuations of the mass accretion rate in an accretion disk driven by stochasticity of the angular momentum transport mechanism. We present the first analysis of propagating fluctuations in a long-duration, high-resolution, global three-dimensional magnetohydrodynamic (MHD) simulation of a geometrically thin (h/r ≈ 0.1) accretion disk around a black hole. While the dynamical-timescale turbulent fluctuations in the Maxwell stresses are too rapid to drive radially coherent fluctuations in the accretion rate, we find that the low-frequency quasi-periodic dynamo action introduces low-frequency fluctuations in the Maxwell stresses, which then drive the propagating fluctuations. Examining both the mass accretion rate and emission proxies, we recover log-normality, linear rms-flux relations, and radial coherence that would produce inter-band lags. Hence, we successfully relate and connect the phenomenology of propagating fluctuations to modern MHD accretion disk theory.
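A toy numerical check, unrelated to the MHD simulation itself, that an exponentiated red-noise signal reproduces the two phenomenological signatures named above: an (approximately) log-normal flux distribution and a positive, roughly linear rms-flux relation. The AR(1) parameters are arbitrary.

```python
# Toy check (not the simulation analysis): an exponentiated red-noise light
# curve shows a log-normal flux distribution and a positive rms-flux relation.
import numpy as np

rng = np.random.default_rng(7)
n, alpha = 2**15, 0.995
g = np.empty(n); g[0] = 0.0
for t in range(1, n):                   # AR(1) proxy for slowly propagating fluctuations
    g[t] = alpha * g[t - 1] + rng.normal(0.0, 0.02)
flux = np.exp(g)                        # multiplicative combination -> log-normal flux

logf = np.log(flux)
skew = ((logf - logf.mean())**3).mean() / logf.std()**3
print("skewness of log-flux ~", round(float(skew), 3), "(near 0 => log-normal flux)")

# rms-flux relation: per-segment rms versus per-segment mean flux
segs = flux[: n - n % 256].reshape(-1, 256)
mean_f, rms_f = segs.mean(axis=1), segs.std(axis=1)
slope, intercept = np.polyfit(mean_f, rms_f, 1)
print("rms-flux slope:", round(float(slope), 3), "(positive, roughly linear)")
```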
Simulations of large acoustic scintillations in the straits of Florida.
Tang, Xin; Tappert, F D; Creamer, Dennis B
2006-12-01
Using a full-wave acoustic model, Monte Carlo numerical studies of intensity fluctuations in a realistic shallow water environment that simulates the Straits of Florida, including internal wave fluctuations and bottom roughness, have been performed. Results show that the sound intensity at distant receivers scintillates dramatically. The acoustic scintillation index SI increases rapidly with propagation range and is significantly greater than unity at ranges beyond about 10 km. This result supports a theoretical prediction by one of the authors. Statistical analyses show that the distribution of intensity of the random wave field saturates to the expected Rayleigh distribution with SI = 1 at short range due to multipath interference effects, and then SI continues to increase to large values. This effect, which is denoted supersaturation, is universal at long ranges in waveguides having lossy boundaries (where there is differential mode attenuation). The intensity distribution approaches a log-normal distribution to an excellent approximation; it may not be a universal distribution and comparison is also made to a K distribution. The long tails of the log-normal distribution cause "acoustic intermittency" in which very high, but rare, intensities occur.
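A small sketch of the scintillation-index definition SI = <I^2>/<I>^2 - 1 used above, evaluated for exponentially distributed intensities (the Rayleigh-saturated case, SI = 1) and for log-normal intensities, whose heavy tail pushes SI above unity.

```python
# Sketch of the scintillation index SI = <I^2>/<I>^2 - 1 for two intensity
# distributions: Rayleigh-saturated (exponential, SI = 1) and log-normal.
import numpy as np

rng = np.random.default_rng(11)

def si(I):
    return np.mean(I**2) / np.mean(I)**2 - 1.0

I_exp = rng.exponential(1.0, 1_000_000)                  # saturation regime
I_logn = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)
print("exponential SI ~", round(si(I_exp), 3))           # ~1
print("log-normal  SI ~", round(si(I_logn), 3))          # exp(sigma^2) - 1 ~ 1.72
```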
Flow-covariate prediction of stream pesticide concentrations.
Mosquin, Paul L; Aldworth, Jeremy; Chen, Wenlin
2018-01-01
Potential peak functions (e.g., maximum rolling averages over a given duration) of annual pesticide concentrations in the aquatic environment are important exposure parameters (or target quantities) for ecological risk assessments. These target quantities require accurate concentration estimates on nonsampled days in a monitoring program. We examined stream flow as a covariate via universal kriging to improve predictions of maximum m-day (m = 1, 7, 14, 30, 60) rolling averages and the 95th percentiles of atrazine concentration in streams where data were collected every 7 or 14 d. The universal kriging predictions were evaluated against the target quantities calculated directly from the daily (or near daily) measured atrazine concentration at 32 sites (89 site-yr) as part of the Atrazine Ecological Monitoring Program in the US corn belt region (2008-2013) and 4 sites (62 site-yr) in Ohio by the National Center for Water Quality Research (1993-2008). Because stream flow data are strongly skewed to the right, 3 transformations of the flow covariate were considered: log transformation, short-term flow anomaly, and normalized Box-Cox transformation. The normalized Box-Cox transformation resulted in predictions of the target quantities that were comparable to those obtained from log-linear interpolation (i.e., linear interpolation on the log scale) for 7-d sampling. However, the predictions appeared to be negatively affected by variability in regression coefficient estimates across different sample realizations of the concentration time series. Therefore, revised models incorporating seasonal covariates and partially or fully constrained regression parameters were investigated, and they were found to provide much improved predictions in comparison with those from log-linear interpolation for all rolling average measures. Environ Toxicol Chem 2018;37:260-273. © 2017 SETAC.
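A brief sketch, assuming scipy, of the "normalized Box-Cox" flow covariate: Box-Cox transform the right-skewed flows (lambda chosen by maximum likelihood) and standardize the result. The synthetic flows below stand in for a gauged streamflow record.

```python
# Sketch of a normalized Box-Cox flow covariate (synthetic daily flows):
# Box-Cox transform to reduce right skew, then standardize to zero mean, unit SD.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
flow = rng.lognormal(mean=2.0, sigma=1.0, size=365)     # strongly right-skewed flows

bc, lam = stats.boxcox(flow)                            # lambda chosen by MLE
flow_norm = (bc - bc.mean()) / bc.std()
print("lambda:", round(lam, 3),
      "| skew before:", round(stats.skew(flow), 2),
      "| after:", round(stats.skew(flow_norm), 2))
```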
Comparison of parametric and bootstrap method in bioequivalence test.
Ahn, Byung-Jin; Yim, Dong-Seok
2009-10-01
The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation on bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver. 3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log(AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one type of nonparametric CI of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log(AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
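A simplified sketch of the comparison, using synthetic paired log(AUC) differences rather than the SAS BE model: a parametric 90% CI from the t-distribution versus a bootstrap percentile 90% CI, both back-transformed and checked against the 80-125% criterion.

```python
# Simplified sketch (synthetic paired data, not the SAS BE model): parametric vs
# bootstrap 90% CIs of the test/reference geometric mean ratio for AUC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 24
diff_log = rng.normal(loc=np.log(0.97), scale=0.18, size=n)   # log(AUC_T) - log(AUC_R)

# Parametric 90% CI (assumes a normal formulation effect on the log scale)
m, se = diff_log.mean(), diff_log.std(ddof=1) / np.sqrt(n)
t = stats.t.ppf(0.95, df=n - 1)
ci_param = np.exp([m - t * se, m + t * se])

# Nonparametric bootstrap percentile 90% CI
boot = np.array([rng.choice(diff_log, n, replace=True).mean() for _ in range(2000)])
ci_boot = np.exp(np.percentile(boot, [5, 95]))

ok = lambda ci: 0.80 <= ci[0] and ci[1] <= 1.25               # 80-125% rule
print("parametric:", ci_param.round(3), "pass" if ok(ci_param) else "fail")
print("bootstrap :", ci_boot.round(3), "pass" if ok(ci_boot) else "fail")
```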
PHAGE FORMATION IN STAPHYLOCOCCUS MUSCAE CULTURES
Price, Winston H.
1949-01-01
1. The total nucleic acid synthesized by normal and by infected S. muscae suspensions is approximately the same. This is true for either lag phase cells or log phase cells. 2. The amount of nucleic acid synthesized per cell in normal cultures increases during the lag period and remains fairly constant during log growth. 3. The amount of nucleic acid synthesized per cell by infected cells increases during the whole course of the infection. 4. Infected cells synthesize less RNA and more DNA than normal cells. The ratio of RNA/DNA is larger in lag phase cells than in log phase cells. 5. Normal cells release neither ribonucleic acid nor desoxyribonucleic acid into the medium. 6. Infected cells release both ribonucleic acid and desoxyribonucleic acid into the medium. The time and extent of release depend upon the physiological state of the cells. 7. Infected lag phase cells may or may not show an increased RNA content. They release RNA, but not DNA, into the medium well before observable cellular lysis and before any virus is liberated. At virus liberation, the cell RNA content falls to a value below that initially present, while DNA, which increased during infection falls to approximately the original value. 8. Infected log cells show a continuous loss of cell RNA and a loss of DNA a short time after infection. At the time of virus liberation the cell RNA value is well below that initially present and the cells begin to lyse. PMID:18139006
Chauzeix, Jasmine; Laforêt, Marie-Pierre; Deveza, Mélanie; Crowther, Liam; Marcellaud, Elodie; Derouault, Paco; Lia, Anne-Sophie; Boyer, François; Bargues, Nicolas; Olombel, Guillaume; Jaccard, Arnaud; Feuillard, Jean; Gachard, Nathalie; Rizzo, David
2018-05-09
More than 35 years after the Binet classification, there is still a need for simple prognostic markers in chronic lymphocytic leukemia (CLL). Here, we studied the treatment-free survival (TFS) impact of normal serum protein electrophoresis (SPE) at diagnosis. One hundred twelve patients with CLL were analyzed. The main prognostic factors (Binet stage; lymphocytosis; IGHV mutation status; TP53, SF3B1, NOTCH1, and BIRC3 mutations; and cytogenetic abnormalities) were studied. The frequencies of IGHV mutation status, cytogenetic abnormalities, and TP53, SF3B1, NOTCH1, and BIRC3 mutations were not significantly different between normal and abnormal SPE. Normal SPE was associated with Binet stage A, nonprogressive disease for 6 months, lymphocytosis below 30 G/L, and the absence of the IGHV3-21 gene rearrangement which is associated with poor prognosis. The TFS of patients with normal SPE was significantly longer than that of patients with abnormal SPE (log-rank test: P = 0.0015, with 51% untreated patients at 5.6 years and a perfect plateau afterward vs. a median TFS at 2.64 years for abnormal SPE with no plateau). Multivariate analysis using two different Cox models and bootstrapping showed that normal SPE was an independent good prognostic marker for either Binet stage, lymphocytosis, or IGHV mutation status. TFS was further increased when both normal SPE and mutated IGHV were present (log-rank test: P = 0.008, median not reached, plateau at 5.6 years and 66% untreated patients). A comparison with other prognostic markers suggested that normal SPE could reflect slowly advancing CLL disease. Altogether, our results show that a combination of normal SPE and mutated IGHV genes defines a subgroup of patients with CLL who evolve very slowly and who might never need treatment. © 2018 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.
Chapter 9:Red maple lumber resources for glued-laminated timber beams
John J. Janowiak; Harvey B. Manbeck; Roland Hernandez; Russell C. Moody
2005-01-01
This chapter evaluates the performance of red maple glulam beams made from two distinctly different lumber resources: 1. logs sawn using practices normally used for hardwood appearance lumber recovery; and 2. lower-grade, smaller-dimension lumber primarily obtained from residual log cants.
NASA Technical Reports Server (NTRS)
Podwysocki, M. H.
1976-01-01
A study was made of the field size distributions for LACIE test sites 5029, 5033, and 5039, People's Republic of China. Field lengths and widths were measured from LANDSAT imagery, and field area was statistically modeled. Field size parameters have log-normal or Poisson frequency distributions. These were normalized to the Gaussian distribution and theoretical population curves were made. When compared to fields in other areas of the same country measured in the previous study, field lengths and widths in the three LACIE test sites were 2 to 3 times smaller and areas were smaller by an order of magnitude.
Tiwari, Anjani K; Ojha, Himanshu; Kaul, Ankur; Dutta, Anupama; Srivastava, Pooja; Shukla, Gauri; Srivastava, Rakesh; Mishra, Anil K
2009-07-01
Nuclear magnetic resonance imaging is a very useful tool in modern medical diagnostics, especially when gadolinium(III)-based contrast agents are administered to the patient with the aim of increasing the image contrast between normal and diseased tissues. Using soft modelling techniques such as quantitative structure-activity relationship (QSAR)/quantitative structure-property relationship (QSPR) analysis after a suitable description of their molecular structure, we have studied a series of phosphonic acids for designing new MRI contrast agents. QSPR studies with multiple linear regression analysis were applied to find correlations between different calculated molecular descriptors of the phosphonic acid-based chelating agents and their stability constants. The final QSPR models for the phosphonic acid series were: Model 1, log K(ML) = 5.00243(±0.7102) − 0.0263(±0.540) MR, with n = 12, |r| = 0.942, s = 0.183, F = 99.165; and Model 2, log K(ML) = 5.06280(±0.3418) − 0.0252(±0.198) MR, with n = 12, |r| = 0.956, s = 0.186, F = 99.256.
Baseline MNREAD Measures for Normally Sighted Subjects From Childhood to Old Age
Calabrèse, Aurélie; Cheong, Allen M. Y.; Cheung, Sing-Hang; He, Yingchen; Kwon, MiYoung; Mansfield, J. Stephen; Subramanian, Ahalya; Yu, Deyue; Legge, Gordon E.
2016-01-01
Purpose The continuous-text reading-acuity test MNREAD is designed to measure the reading performance of people with normal and low vision. This test is used to estimate maximum reading speed (MRS), critical print size (CPS), reading acuity (RA), and the reading accessibility index (ACC). Here we report the age dependence of these measures for normally sighted individuals, providing baseline data for MNREAD testing. Methods We analyzed MNREAD data from 645 normally sighted participants ranging in age from 8 to 81 years. The data were collected in several studies conducted by different testers and at different sites in our research program, enabling evaluation of robustness of the test. Results Maximum reading speed and reading accessibility index showed a trilinear dependence on age: first increasing from 8 to 16 years (MRS: 140–200 words per minute [wpm]; ACC: 0.7–1.0); then stabilizing in the range of 16 to 40 years (MRS: 200 ± 25 wpm; ACC: 1.0 ± 0.14); and decreasing to 175 wpm and 0.88 by 81 years. Critical print size was constant from 8 to 23 years (0.08 logMAR), increased slowly until 68 years (0.21 logMAR), and then more rapidly until 81 years (0.34 logMAR). logMAR reading acuity improved from −0.1 at 8 years to −0.18 at 16 years, then gradually worsened to −0.05 at 81 years. Conclusions We found a weak dependence of the MNREAD parameters on age in normal vision. In broad terms, MNREAD performance exhibits differences between three age groups: children 8 to 16 years, young adults 16 to 40 years, and middle-aged to older adults >40 years. PMID:27442222
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
Moran, John L; Solomon, Patricia J
2012-05-16
For the analysis of length-of-stay (LOS) data, which is characteristically right-skewed, a number of statistical estimators have been proposed as alternatives to the traditional ordinary least squares (OLS) regression with log dependent variable. Using a cohort of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 2008-2009, 12 different methods were used for estimation of intensive care (ICU) length of stay. These encompassed risk-adjusted regression analysis of firstly: log LOS using OLS, linear mixed model [LMM], treatment effects, skew-normal and skew-t models; and secondly: unmodified (raw) LOS via OLS, generalised linear models [GLMs] with log-link and 4 different distributions [Poisson, gamma, negative binomial and inverse-Gaussian], extended estimating equations [EEE] and a finite mixture model including a gamma distribution. A fixed covariate list and ICU-site clustering with robust variance were utilised for model fitting with split-sample determination (80%) and validation (20%) data sets, and model simulation was undertaken to establish over-fitting (Copas test). Indices of model specification using Bayesian information criterion [BIC: lower values preferred] and residual analysis as well as predictive performance (R2, concordance correlation coefficient (CCC), mean absolute error [MAE]) were established for each estimator. The data-set consisted of 111663 patients from 131 ICUs; with mean(SD) age 60.6(18.8) years, 43.0% were female, 40.7% were mechanically ventilated and ICU mortality was 7.8%. ICU length-of-stay was 3.4(5.1) (median 1.8, range (0.17-60)) days and demonstrated marked kurtosis and right skew (29.4 and 4.4 respectively). BIC showed considerable spread, from a maximum of 509801 (OLS-raw scale) to a minimum of 210286 (LMM). R2 ranged from 0.22 (LMM) to 0.17 and the CCC from 0.334 (LMM) to 0.149, with MAE 2.2-2.4. Superior residual behaviour was established for the log-scale estimators. There was a general tendency for over-prediction (negative residuals) and for over-fitting, the exception being the GLM negative binomial estimator. The mean-variance function was best approximated by a quadratic function, consistent with log-scale estimation; the link function was estimated (EEE) as 0.152(0.019, 0.285), consistent with a fractional-root function. For ICU length of stay, log-scale estimation, in particular the LMM, appeared to be the most consistently performing estimator(s). Neither the GLM variants nor the skew-regression estimators dominated.
Guillermo A. Mendoza; Roger J. Meimban; Philip A. Araman; William G. Luppold
1991-01-01
A log inventory model and a real-time hardwood process simulation model were developed and combined into an integrated production planning and control system for hardwood sawmills. The log inventory model was designed to monitor and periodically update the status of the logs in the log yard. The process simulation model was designed to estimate various sawmill...
Mesh size selectivity of the gillnet in East China Sea
NASA Astrophysics Data System (ADS)
Li, L. Z.; Tang, J. H.; Xiong, Y.; Huang, H. L.; Wu, L.; Shi, J. J.; Gao, Y. S.; Wu, F. Q.
2017-07-01
A production test using several gillnets with various mesh sizes was carried out to determine the selectivity of gillnets in the East China Sea. The results showed that the composition of the catch species was jointly affected by panel height and mesh size. There were more bycatch species in the 10-m nets than in the 6-m nets. For target species, the effect of panel height on juvenile fish was ambiguous, but the number of juvenile fish declined quickly with increasing mesh size. According to model deviance (D) and Akaike's information criterion, the bi-normal model provided the best fit for small yellow croaker (Larimichthys polyactis), with relative retentions at the two modes of 0.2 and 1, respectively. For Chelidonichthys spinosus, the log-normal was the best model; the selectivity curve was clearly right-skewed and coincided well with the original data. The contact population of small yellow croaker showed a bi-normal distribution, with body lengths ranging from 95 to 215 mm. The contact population of C. spinosus showed a normal distribution, with body lengths ranging from 95 to 205 mm. These results can provide references for coastal fishery management.
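For illustration of the curve shapes named above (not the fitted parameters), the sketch below defines bi-normal and log-normal relative-retention curves and normalizes each to a maximum of 1; all parameter values are hypothetical.

```python
# Shape sketch (hypothetical parameters, not the fitted ones): bi-normal and
# log-normal relative-retention curves of the kind fitted to the gillnet data.
import numpy as np

def bi_normal(L, k1, s1, k2, s2, c):
    r = np.exp(-(L - k1)**2 / (2 * s1**2)) + c * np.exp(-(L - k2)**2 / (2 * s2**2))
    return r / r.max()

def log_normal_sel(L, mu, sigma):
    r = np.exp(-(np.log(L) - mu)**2 / (2 * sigma**2)) / L    # right-skewed in L
    return r / r.max()

L = np.linspace(80, 240, 161)                                # body length, mm
r_bn = bi_normal(L, k1=120, s1=12, k2=170, s2=20, c=0.5)
r_ln = log_normal_sel(L, mu=np.log(150), sigma=0.15)
print("bi-normal peak near", L[r_bn.argmax()], "mm; log-normal peak near", L[r_ln.argmax()], "mm")
```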
Johnston, J L; Leong, M S; Checkland, E G; Zuberbuhler, P C; Conger, P R; Quinney, H A
1988-12-01
Body density and skinfold thickness at four sites were measured in 140 normal boys, 168 normal girls, and 6 boys and 7 girls with cystic fibrosis, all aged 8-14 y. Prediction equations for the normal boys and girls for the estimation of body-fat content from skinfold measurements were derived from linear regression of body density vs the log of the sum of the skinfold thickness. The relationship between body density and the log of the sum of the skinfold measurements differed from normal for the boys and girls with cystic fibrosis because of their high body density even though their large residual volume was corrected for. However the sum of skinfold measurements in the children with cystic fibrosis did not differ from normal. Thus body fat percent of these children with cystic fibrosis was underestimated when calculated from body density and invalid when calculated from skinfold thickness.
Stochastic differential equation (SDE) model of opening gold share price of bursa saham malaysia
NASA Astrophysics Data System (ADS)
Hussin, F. N.; Rahman, H. A.; Bahar, A.
2017-09-01
The Black-Scholes option pricing model is one of the most recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods have been used for the geometric Brownian motion (GBM) model: the historical method and the discrete method. The historical method is a statistical approach that uses the independence and normality of logarithmic returns, giving the simplest parameter estimates. The discrete method, in contrast, uses the transition density of the log-normal diffusion process and derives the estimates by maximum likelihood. These two methods are used to estimate parameters from samples of Malaysian gold share price data, namely the FTSE Bursa Malaysia Emas and FTSE Bursa Malaysia Emas Shariah indices. Modelling gold share prices is important, since fluctuations in gold affect the worldwide economy, including Malaysia. It is found that the discrete method gives better parameter estimates than the historical method, as it yields the smallest root mean square error (RMSE).
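A sketch of the historical estimation step on synthetic prices, assuming only numpy: GBM volatility and drift are recovered from the sample moments of daily log returns, and a one-step-ahead RMSE is computed. Parameter values are invented; the discrete (transition-density MLE) method is not shown.

```python
# Sketch of the "historical" GBM parameter estimation (synthetic prices, not
# Bursa Malaysia data): drift and volatility from daily log returns.
import numpy as np

rng = np.random.default_rng(2)
dt, mu_true, sigma_true, S0 = 1 / 252, 0.08, 0.25, 100.0
z = rng.normal(size=1000)
S = S0 * np.exp(np.cumsum((mu_true - 0.5 * sigma_true**2) * dt + sigma_true * np.sqrt(dt) * z))

r = np.diff(np.log(S))                            # daily log returns
sigma_hat = r.std(ddof=1) / np.sqrt(dt)
mu_hat = r.mean() / dt + 0.5 * sigma_hat**2
print("sigma_hat:", round(sigma_hat, 3), "mu_hat:", round(mu_hat, 3))

# One-step-ahead expected price and RMSE against the realized path
S_pred = S[:-1] * np.exp(mu_hat * dt)
rmse = np.sqrt(np.mean((S[1:] - S_pred)**2))
print("one-step RMSE:", round(rmse, 3))
```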
Modeling and validating the grabbing forces of hydraulic log grapples used in forest operations
Jingxin Wang; Chris B. LeDoux; Lihai Wang
2003-01-01
The grabbing forces of log grapples were modeled and analyzed mathematically under operating conditions when grabbing logs from compact log piles and from bunch-like log piles. The grabbing forces are closely related to the structural parameters of the grapple, the weight of the grapple, and the weight of the log grabbed. An operational model grapple was designed and...
A Language-Independent Approach to Automatic Text Difficulty Assessment for Second-Language Learners
2013-08-01
…best-suited for regression. Our baseline uses z-normalized shallow length features and TF-LOG weighted vectors on bag-of-words for Arabic, Dari… …length features and TF-LOG weighted vectors on bag-of-words for Arabic, Dari, English and Pashto. We compare Support Vector Machines and the Margin… …football, whereas they are much less common in documents about opera). We used TF-LOG weighted word frequencies on bag-of-words for each document…
Mataragas, M; Alessandria, V; Rantsiou, K; Cocolin, L
2015-08-01
In the present work, a demonstration is made on how the risk from the presence of Listeria monocytogenes in fermented sausages can be managed using the concept of Food Safety Objective (FSO) aided by stochastic modeling (Bayesian analysis and Monte Carlo simulation) and meta-analysis. For this purpose, the ICMSF equation was used, which combines the initial level (H0) of the hazard and its subsequent reduction (ΣR) and/or increase (ΣI) along the production chain. Each element of the equation was described by a distribution to investigate the effect not only of the level of the hazard, but also the effect of the accompanying variability. The distribution of each element was determined by Bayesian modeling (H0) and meta-analysis (ΣR and ΣI). The output was a normal distribution N(-5.36, 2.56) (log cfu/g) from which the percentage of the non-conforming products, i.e. the fraction above the FSO of 2 log cfu/g, was estimated at 0.202%. Different control measures were examined such as lowering initial L. monocytogenes level and inclusion of an additional killing step along the process resulting in reduction of the non-conforming products from 0.195% to 0.003% based on the mean and/or square-root change of the normal distribution, and 0.001%, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.
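A short sketch of the FSO check described above: the reported output distribution N(-5.36, 2.56) log cfu/g gives the non-conforming fraction directly, and an equivalent Monte Carlo over the ICMSF elements H0 − ΣR + ΣI is shown with hypothetical component distributions chosen only to reproduce a similar output.

```python
# Sketch of the FSO check: with the reported output distribution
# N(mean=-5.36, sd=2.56) log cfu/g, the fraction of product exceeding the
# FSO of 2 log cfu/g follows analytically or by Monte Carlo.
import numpy as np
from scipy import stats

fso, mean, sd = 2.0, -5.36, 2.56
print("analytical:", round(stats.norm.sf(fso, mean, sd) * 100, 3), "% non-conforming")

# Equivalent Monte Carlo over H0 - sum(R) + sum(I) <= FSO, with hypothetical
# component distributions (not the fitted ones) chosen to mimic the same output:
rng = np.random.default_rng(4)
H0 = rng.normal(-1.0, 1.5, 1_000_000)     # initial contamination (assumed)
R = rng.normal(5.5, 1.8, 1_000_000)       # total reduction during processing (assumed)
I = rng.normal(1.14, 1.0, 1_000_000)      # total increase, e.g. growth (assumed)
final = H0 - R + I
print("Monte Carlo:", round(np.mean(final > fso) * 100, 3), "% non-conforming")
```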
Ventilation-perfusion distribution in normal subjects.
Beck, Kenneth C; Johnson, Bruce D; Olson, Thomas P; Wilson, Theodore A
2012-09-01
Functional values of LogSD of the ventilation distribution (σ(V)) have been reported previously, but functional values of LogSD of the perfusion distribution (σ(q)) and the coefficient of correlation between ventilation and perfusion (ρ) have not been measured in humans. Here, we report values for σ(V), σ(q), and ρ obtained from wash-in data for three gases, helium and two soluble gases, acetylene and dimethyl ether. Normal subjects inspired gas containing the test gases, and the concentrations of the gases at end-expiration during the first 10 breaths were measured with the subjects at rest and at increasing levels of exercise. The regional distribution of ventilation and perfusion was described by a bivariate log-normal distribution with parameters σ(V), σ(q), and ρ, and these parameters were evaluated by matching the values of expired gas concentrations calculated for this distribution to the measured values. Values of cardiac output and LogSD ventilation/perfusion (Va/Q) were obtained. At rest, σ(q) is high (1.08 ± 0.12). With the onset of ventilation, σ(q) decreases to 0.85 ± 0.09 but remains higher than σ(V) (0.43 ± 0.09) at all exercise levels. Rho increases to 0.87 ± 0.07, and the value of LogSD Va/Q for light and moderate exercise is primarily the result of the difference between the magnitudes of σ(q) and σ(V). With known values for the parameters, the bivariate distribution describes the comprehensive distribution of ventilation and perfusion that underlies the distribution of the Va/Q ratio.
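A small sketch of how LogSD Va/Q follows from the bivariate log-normal description: for log-normal V and Q with correlation rho, log(V/Q) is normal with variance sigma_V^2 + sigma_Q^2 − 2·rho·sigma_V·sigma_Q. The parameter values are those quoted above for exercise; the means are set to zero since only dispersion matters here.

```python
# Sketch: LogSD of Va/Q implied by a bivariate log-normal description of
# regional ventilation (V) and perfusion (Q), using the reported parameters.
import numpy as np

def logsd_vaq(sigma_v, sigma_q, rho):
    # log(V/Q) = log V - log Q is normal with this standard deviation
    return np.sqrt(sigma_v**2 + sigma_q**2 - 2 * rho * sigma_v * sigma_q)

print("from parameters:", round(logsd_vaq(0.43, 0.85, 0.87), 2))

# Monte Carlo check by sampling the bivariate log-normal distribution
rng = np.random.default_rng(6)
cov = [[0.43**2, 0.87 * 0.43 * 0.85], [0.87 * 0.43 * 0.85, 0.85**2]]
logv, logq = rng.multivariate_normal([0.0, 0.0], cov, size=500_000).T
print("sampled        :", round(np.std(logv - logq), 2))
```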
Raghu, S; Sriraam, N; Kumar, G Pradeep
2017-02-01
The electroencephalogram (EEG) is a fundamental tool for assessing neural activity in the brain. In the cognitive neuroscience domain, EEG-based assessment is attractive because it is non-invasive and provides fine temporal resolution. For studying the neurodynamic behavior of epileptic seizures in particular, EEG recordings reflect the neuronal activity of the brain and thus provide the clinical diagnostic information required by the neurologist. The present study uses wavelet-packet-based log and norm entropies with a recurrent Elman neural network (REN) for the automated detection of epileptic seizures. Three conditions (normal, pre-ictal, and epileptic EEG recordings) were considered. An adaptive Wiener filter was first applied to remove 50 Hz power-line noise from the raw EEG recordings, which were then segmented into 1 s patterns to ensure stationarity of the signal. A wavelet packet decomposition using the Haar wavelet with five levels was applied, and two entropies, log and norm, were estimated and fed to the REN classifier for binary classification. The non-parametric Wilcoxon test was applied to assess the variation in the features across conditions. The effect of log energy entropy (without wavelets) was also studied. The simulation results showed that wavelet packet log entropy with the REN classifier yielded classification accuracies of 99.70% for normal vs. pre-ictal, 99.70% for normal vs. epileptic, and 99.85% for pre-ictal vs. epileptic.
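A feature-extraction sketch assuming PyWavelets is available; the entropy definitions follow common usage (log-energy entropy as the sum of log squared coefficients, norm entropy as the sum of |c|^p) and may differ in detail from the paper's implementation. The input is a synthetic 1 s segment, not EEG data.

```python
# Feature-extraction sketch (assumes PyWavelets; synthetic signal, and entropy
# definitions follow common usage, which may differ from the paper's).
import numpy as np
import pywt

def wavelet_packet_entropies(x, wavelet="haar", level=5, p=1.1, eps=1e-12):
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    coeffs = np.concatenate([node.data for node in wp.get_level(level, order="freq")])
    log_energy = np.sum(np.log(coeffs**2 + eps))      # log-energy entropy
    norm_entropy = np.sum(np.abs(coeffs)**p)          # norm entropy, 1 < p < 2
    return log_energy, norm_entropy

fs = 256                                              # 1-s segment at 256 Hz
t = np.arange(fs) / fs
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(8).normal(size=fs)
print(wavelet_packet_entropies(eeg_like))
```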
Bun, Shogyoku; Ikejima, Chiaki; Kida, Jiro; Yoshimura, Atsuko; Lebowitz, Adam Jon; Kakuma, Tatsuyuki; Asada, Takashi
2015-01-01
A number of studies have examined the effect of a single supplement against Alzheimer's disease (AD) with conflicting results. Taking into account the complex and multifactorial nature of AD pathogenesis, multiple supplements may be more effective. Physical activity is another prospect against AD. An open-label intervention study was conducted to explore a potential protective effect of multiple supplements and physical activity. Their interaction was also examined. Participants were community-dwelling volunteers aged 65 or older as of May 2001 in a rural area of Japan. Among 918 cognitively normal participants included in the analyses, 171 took capsules daily for three years that contained n-3 polyunsaturated fatty acid, Ginkgo biloba leaf dry extracts, and lycopene. Two hundred and forty one participants joined the two-year exercise intervention that included a community center-based and a home-based exercise program. One-hundred and forty eight participated in both interventions. A standardized neuropsychological battery was administered at baseline in 2001, the first follow-up in 2004-2005, and the second in 2008-2009. The primary outcome was AD diagnosis at follow-ups. A complementary log-log model was used for survival analysis. A total of 76 participants were diagnosed with AD during follow-up periods. Higher adherence to supplementation intervention was associated with lower AD incidence in both unadjusted and adjusted models. Exercise intervention was also associated with lower AD incidence in the unadjusted model, but not in the adjusted model. We hypothesized that the combination of supplements acted in a complementary and synergistic fashion to bring significant effects against AD occurrence.
Fang, Rui; Wey, Andrew; Bobbili, Naveen K; Leke, Rose F G; Taylor, Diane Wallace; Chen, John J
2017-07-17
Antibodies play an important role in immunity to malaria. Recent studies show that antibodies to multiple antigens, as well as, the overall breadth of the response are associated with protection from malaria. Yet, the variability and reliability of antibody measurements against a combination of malarial antigens using multiplex assays have not been well characterized. A normalization procedure for reducing between-plate variation using replicates of pooled positive and negative controls was investigated. Sixty test samples (30 from malaria-positive and 30 malaria-negative individuals), together with five pooled positive-controls and two pooled negative-controls, were screened for antibody levels to 9 malarial antigens, including merozoite antigens (AMA1, EBA175, MSP1, MSP2, MSP3, MSP11, Pf41), sporozoite CSP, and pregnancy-associated VAR2CSA. The antibody levels were measured in triplicate on each of 3 plates, and the experiments were replicated on two different days by the same technician. The performance of the proposed normalization procedure was evaluated with the pooled controls for the test samples on both the linear and natural-log scales. Compared with data on the linear scale, the natural-log transformed data were less skewed and reduced the mean-variance relationship. The proposed normalization procedure using pooled controls on the natural-log scale significantly reduced between-plate variation. For malaria-related research that measure antibodies to multiple antigens with multiplex assays, the natural-log transformation is recommended for data analysis and use of the normalization procedure with multiple pooled controls can improve the precision of antibody measurements.
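A minimal sketch of the normalization idea with a hypothetical data layout: readouts are natural-log transformed, and each plate's offset, estimated from the replicated pooled positive controls relative to their grand mean, is subtracted.

```python
# Minimal sketch (hypothetical data layout, not the study's pipeline): log-
# transform antibody readouts, then remove per-plate offsets estimated from
# replicated pooled positive controls.
import numpy as np

rng = np.random.default_rng(12)
n_plates, n_samples, n_controls = 3, 20, 5
true = rng.lognormal(7.0, 1.0, n_samples)                 # sample MFI-like values
plate_effect = np.array([1.0, 1.4, 0.7])                  # multiplicative plate bias

samples = np.log(true[None, :] * plate_effect[:, None] * rng.lognormal(0, 0.1, (n_plates, n_samples)))
controls = np.log(1500.0 * plate_effect[:, None] * rng.lognormal(0, 0.1, (n_plates, n_controls)))

offset = controls.mean(axis=1, keepdims=True) - controls.mean()   # per-plate offset on log scale
samples_norm = samples - offset
print("between-plate SD before:", round(samples.mean(axis=1).std(), 3),
      "after:", round(samples_norm.mean(axis=1).std(), 3))
```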
Davis, Joe M
2011-10-28
General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
Assessment of visual disability using visual evoked potentials.
Jeon, Jihoon; Oh, Seiyul; Kyung, Sungeun
2012-08-06
The purpose of this study is to validate the use of visual evoked potential (VEP) to objectively quantify visual acuity in normal and amblyopic patients, and determine if it is possible to predict visual acuity in disability assessment to register visual pathway lesions. A retrospective chart review was conducted of patients diagnosed with normal vision, unilateral amblyopia, optic neuritis, and visual disability who visited the university medical center for registration from March 2007 to October 2009. The study included 20 normal subjects (20 right eyes: 10 females, 10 males, ages 9-42 years), 18 unilateral amblyopic patients (18 amblyopic eyes, ages 19-36 years), 19 optic neuritis patients (19 eyes: ages 9-71 years), and 10 patients with visual disability having visual pathway lesions. Amplitude and latencies were analyzed and correlations with visual acuity (logMAR) were derived from 20 normal and 18 amblyopic subjects. Correlation of VEP amplitude and visual acuity (logMAR) of 19 optic neuritis patients confirmed relationships between visual acuity and amplitude. We calculated the objective visual acuity (logMAR) of 16 eyes from 10 patients to diagnose the presence or absence of visual disability using relations derived from 20 normal and 18 amblyopic eyes. Linear regression analyses between amplitude of pattern visual evoked potentials and visual acuity (logMAR) of 38 eyes from normal (right eyes) and amblyopic (amblyopic eyes) subjects were significant [y = -0.072x + 1.22, x: VEP amplitude, y: visual acuity (logMAR)]. There were no significant differences between visual acuity prediction values, which substituted amplitude values of 19 eyes with optic neuritis into function. We calculated the objective visual acuity of 16 eyes of 10 patients to diagnose the presence or absence of visual disability using relations of y = -0.072x + 1.22 (-0.072). This resulted in a prediction reference of visual acuity associated with malingering vs. real disability in a range >5.77 μV. The results could be useful, especially in cases of no obvious pale disc with trauma. Visual acuity quantification using absolute value of amplitude in pattern visual evoked potentials was useful in confirming subjective visual acuity for cutoff values >5.77 μV in disability evaluation to discriminate the malingering from real disability.
Gregoire, C.; Joesten, P.K.; Lane, J.W.
2006-01-01
Ground penetrating radar is an efficient geophysical method for the detection and location of fractures and fracture zones in electrically resistive rocks. In this study, the use of down-hole (borehole) radar reflection logs to monitor the injection of steam in fractured rocks was tested as part of a field-scale, steam-enhanced remediation pilot study conducted at a fractured limestone quarry contaminated with chlorinated hydrocarbons at the former Loring Air Force Base, Limestone, Maine, USA. In support of the pilot study, borehole radar reflection logs were collected three times (before, during, and near the end of steam injection) using broadband 100 MHz electric dipole antennas. Numerical modelling was performed to predict the effect of heating on radar-frequency electromagnetic (EM) wave velocity, attenuation, and fracture reflectivity. The modelling results indicate that EM wave velocity and attenuation change substantially if heating increases the electrical conductivity of the limestone matrix. Furthermore, the net effect of heat-induced variations in fracture-fluid dielectric properties on average medium velocity is insignificant because the expected total fracture porosity is low. In contrast, changes in fracture fluid electrical conductivity can have a significant effect on EM wave attenuation and fracture reflectivity. Total replacement of water by steam in a fracture decreases fracture reflectivity by a factor of 10 and induces a change in reflected wave polarity. Based on the numerical modelling results, a reflection amplitude analysis method was developed to delineate fractures where steam has displaced water. Radar reflection logs collected during the three acquisition periods were analysed in the frequency domain to determine if steam had replaced water in the fractures (after normalizing the logs to compensate for differences in antenna performance between logging runs). Analysis of the radar reflection logs from a borehole where the temperature increased substantially during the steam injection experiment shows an increase in attenuation and a decrease in reflectivity in the vicinity of the borehole. Results of applying the reflection amplitude analysis method developed for this study indicate that steam did not totally replace the water in most of the fractures. The observed decreases in reflectivity were consistent with an increase in fracture-water temperature, rather than the presence of steam. A limiting assumption of the reflection amplitude analysis method is the requirement for complete displacement of water in a fracture by steam. © 2006 Elsevier B.V. All rights reserved.
A method for the quantification of biased signalling at constitutively active receptors.
Hall, David A; Giraldo, Jesús
2018-06-01
Biased agonism, the ability of an agonist to differentially activate one of several signal transduction pathways when acting at a given receptor, is an increasingly recognized phenomenon at many receptors. The Black and Leff operational model lacks a way to describe constitutive receptor activity and hence inverse agonism. Thus, it is impossible to analyse the biased signalling of inverse agonists using this model. In this theoretical work, we develop and illustrate methods for the analysis of biased inverse agonism. Methods were derived for quantifying biased signalling in systems that demonstrate constitutive activity using the modified operational model proposed by Slack and Hall. The methods were illustrated using Monte Carlo simulations. The Monte Carlo simulations demonstrated that, with an appropriate experimental design, the model parameters are 'identifiable'. The method is consistent with methods based on the measurement of intrinsic relative activity (RAi) (ΔΔlogR or ΔΔlog(τ/Ka)) proposed by Ehlert and Kenakin and their co-workers but has some advantages. In particular, it allows the quantification of ligand bias independently of 'system bias', removing the requirement to normalize to a standard ligand. In systems with constitutive activity, the Slack and Hall model provides methods for quantifying the absolute bias of agonists and inverse agonists. This provides an alternative to methods based on RAi and is complementary to the ΔΔlog(τ/Ka) method of Kenakin et al. in systems where use of that method is inappropriate due to the presence of constitutive activity. © 2018 The British Pharmacological Society.
Cross-platform normalization of microarray and RNA-seq data for machine learning applications
Thompson, Jeffrey A.; Tan, Jie
2016-01-01
Large, publicly available gene expression datasets are often analyzed with the aid of machine learning algorithms. Although RNA-seq is increasingly the technology of choice, a wealth of expression data already exists in the form of microarray data. If machine learning models built from legacy data can be applied to RNA-seq data, larger, more diverse training datasets can be created and validation can be performed on newly generated data. We developed Training Distribution Matching (TDM), which transforms RNA-seq data for use with models constructed from legacy platforms. We evaluated TDM, as well as quantile normalization, nonparanormal transformation, and a simple log2 transformation, on both simulated and biological datasets of gene expression. Our evaluation included both supervised and unsupervised machine learning approaches. We found that TDM exhibited consistently strong performance across settings and that quantile normalization also performed well in many circumstances. We also provide a TDM package for the R programming language. PMID:26844019
Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete
2015-01-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst≥850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst≥880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
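A minimal sketch, not the authors' code, of the general workflow described: a maximum-likelihood log-normal fit to storm-time maxima followed by a bootstrap confidence interval on the rate of Carrington-level events; the −Dst sample here is synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dst_maxima = rng.lognormal(mean=4.8, sigma=0.55, size=300)   # synthetic -Dst maxima (nT)
years_of_data = 56.0                                          # e.g. 1957-2012

def events_per_century(sample, threshold=850.0, years=years_of_data):
    shape, loc, scale = stats.lognorm.fit(sample, floc=0)     # MLE with location fixed at 0
    p_exceed = stats.lognorm.sf(threshold, shape, loc, scale) # P(-Dst >= threshold)
    return p_exceed * len(sample) / years * 100.0             # expected events per 100 yr

boot = [events_per_century(rng.choice(dst_maxima, size=dst_maxima.size, replace=True))
        for _ in range(2000)]
print("rate per century:", events_per_century(dst_maxima))
print("bootstrap 95% CI:", np.percentile(boot, [2.5, 97.5]))
```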
Seligman, D A; Pullinger, A G
2000-01-01
Confusion about the relationship of occlusion to temporomandibular disorders (TMD) persists. This study attempted to identify occlusal and attrition factors plus age that would characterize asymptomatic normal female subjects. A total of 124 female patients with intracapsular TMD were compared with 47 asymptomatic female controls for associations to 9 occlusal factors, 3 attrition severity measures, and age using classification tree, multiple stepwise logistic regression, and univariate analyses. Models were tested for accuracy (sensitivity and specificity) and total contribution to the variance. The classification tree model had 4 terminal nodes that used only anterior attrition and age. "Normals" were mainly characterized by low attrition levels, whereas patients had higher attrition and tended to be younger. The tree model was only moderately useful (sensitivity 63%, specificity 94%) in predicting normals. The logistic regression model incorporated unilateral posterior crossbite and mediotrusive attrition severity in addition to the 2 factors in the tree, but was slightly less accurate than the tree (sensitivity 51%, specificity 90%). When only occlusal factors were considered in the analysis, normals were additionally characterized by a lack of anterior open bite, smaller overjet, and smaller RCP-ICP slides. The log likelihood accounted for was similar for both the tree (pseudo R2 = 29.38%; mean deviance = 0.95) and the multiple logistic regression (Cox-Snell R2 = 30.3%, mean deviance = 0.84) models. The occlusal and attrition factors studied were only moderately useful in differentiating normals from TMD patients.
Petrophysical evaluation of subterranean formations
Klein, James D; Schoderbek, David A; Mailloux, Jason M
2013-05-28
Methods and systems are provided for evaluating petrophysical properties of subterranean formations and comprehensively evaluating hydrate presence through a combination of computer-implemented log modeling and analysis. Certain embodiments include the steps of running a number of logging tools in a wellbore to obtain a variety of wellbore data and logs, and evaluating and modeling the log data to ascertain various petrophysical properties. Examples of suitable logging techniques that may be used in combination with the present invention include, but are not limited to, sonic logs, electrical resistivity logs, gamma ray logs, neutron porosity logs, density logs, NMR logs, or any combination or subset thereof.
Load-Based Lower Neck Injury Criteria for Females from Rear Impact from Cadaver Experiments.
Yoganandan, Narayan; Pintar, Frank A; Banerjee, Anjishnu
2017-05-01
The objectives of this study were to derive lower neck injury metrics/criteria and injury risk curves for the force, moment, and interaction criterion in rear impacts for females. Biomechanical data were obtained from previous intact and isolated post mortem human subjects and head-neck complexes subjected to posteroanterior accelerative loading. Censored data were used in the survival analysis model. The primary shear force, sagittal bending moment, and interaction (lower neck injury criterion, LNic) metrics were significant predictors of injury. The most optimal distribution was selected (Weibull, log-normal, or log-logistic) using the Akaike information criterion according to the latest ISO recommendations for deriving risk curves. The Kolmogorov-Smirnov test was used to quantify robustness of the assumed parametric model. The intercepts for the interaction index were extracted from the primary risk curves. Normalized confidence interval sizes (NCIS) were reported at discrete probability levels, along with the risk curves and 95% confidence intervals. The mean force of 214 N, moment of 54 Nm, and 0.89 LNic were associated with a five percent probability of injury. The NCIS for these metrics were 0.90, 0.95, and 0.85. These preliminary results can be used as a first step in the definition of lower neck injury criteria for women under posteroanterior accelerative loading in crashworthiness evaluations.
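A minimal sketch of the distribution-selection step described, using the third-party lifelines library rather than the authors' tools: Weibull, log-normal and log-logistic models are fitted to censored data and compared by AIC; the force values and injury indicators below are hypothetical.

```python
import numpy as np
from lifelines import WeibullFitter, LogNormalFitter, LogLogisticFitter

# Hypothetical lower-neck shear forces (N); 1 = injury observed, 0 = right-censored
force_N = np.array([180, 210, 250, 275, 300, 320, 360, 400, 430, 470], dtype=float)
injured = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 1])

fitters = {"Weibull": WeibullFitter(), "log-normal": LogNormalFitter(),
           "log-logistic": LogLogisticFitter()}
for name, f in fitters.items():
    f.fit(force_N, event_observed=injured)   # "durations" reinterpreted as force
    print(name, "AIC =", round(f.AIC_, 1))

best = min(fitters.values(), key=lambda f: f.AIC_)
# Injury risk at a given force under the best-fitting model, e.g. 214 N
print(1.0 - best.survival_function_at_times(214.0))
```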
Modelling of PM10 concentration for industrialized area in Malaysia: A case study in Shah Alam
NASA Astrophysics Data System (ADS)
N, Norazian Mohamed; Abdullah, M. M. A.; Tan, Cheng-yau; Ramli, N. A.; Yahaya, A. S.; Fitri, N. F. M. Y.
In Malaysia, the predominant air pollutants are suspended particulate matter (SPM) and nitrogen dioxide (NO2). This research focuses on PM10, as it may harm human health as well as the environment. Six distributions, namely Weibull, log-normal, gamma, Rayleigh, Gumbel and Frechet, were chosen to model the PM10 observations at the chosen industrial area, i.e. Shah Alam. Hourly average data for the one-year periods 2006 and 2007 were used for this research. For parameter estimation, the method of maximum likelihood estimation (MLE) was selected. Four performance indicators, namely mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R2) and prediction accuracy (PA), were applied to determine the goodness of fit of the distributions. The best distribution fitting the PM10 observations in Shah Alam was found to be the log-normal distribution. The probabilities of exceedance were calculated and the return period for the coming year was predicted from the cumulative distribution function (cdf) of the best-fit distribution. For the 2006 data, Shah Alam was predicted to exceed 150 μg/m3 for 5.9 days in 2007, with a return period of one occurrence per 62 days. For 2007, the studied area was not predicted to exceed the MAAQG of 150 μg/m3.
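A minimal sketch, assuming hourly PM10 concentrations in a NumPy array, of the fitting-and-exceedance workflow described: candidate distributions are fitted by MLE, compared against the empirical CDF, and the best fit is used to estimate the exceedance probability and return period for the 150 μg/m3 guideline; the data and the selection metric are illustrative, not the study's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pm10 = rng.lognormal(mean=3.6, sigma=0.5, size=8760)   # synthetic hourly PM10 (ug/m3)

candidates = {"log-normal": stats.lognorm, "Weibull": stats.weibull_min,
              "gamma": stats.gamma, "Gumbel": stats.gumbel_r}
x = np.sort(pm10)
ecdf = np.arange(1, x.size + 1) / (x.size + 1)

best_name, best_dist, best_params, best_rmse = None, None, None, np.inf
for name, dist in candidates.items():
    params = dist.fit(pm10)                            # maximum-likelihood estimates
    rmse = np.sqrt(np.mean((dist.cdf(x, *params) - ecdf) ** 2))
    if rmse < best_rmse:
        best_name, best_dist, best_params, best_rmse = name, dist, params, rmse

p_hourly = best_dist.sf(150.0, *best_params)           # P(PM10 > 150 ug/m3) in any hour
print(best_name, "| exceedance days/yr ~", round(p_hourly * 8760 / 24, 1),
      "| return period ~", round(1.0 / (p_hourly * 24), 1), "days")
```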
Empirical analysis and modeling of manual turnpike tollbooths in China
NASA Astrophysics Data System (ADS)
Zhang, Hao
2017-03-01
To deal with the low level of service satisfaction at tollbooths of many turnpikes in China, we conduct an empirical study and use a queueing model to investigate performance measures. In this paper, we collect archived data from six tollbooths of a turnpike in China. An empirical analysis of the vehicles' time-dependent arrival process and the collectors' time-dependent service times is conducted. It shows that the vehicle arrival process follows a non-homogeneous Poisson process while the collector service time follows a log-normal distribution. Further, we model the process of collecting tolls at tollbooths with a MAP/PH/1/FCFS queue for mathematical tractability and present some numerical examples.
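A minimal sketch, on synthetic data, of checking the two empirical findings reported: service times consistent with a log-normal distribution, and arrival counts whose hourly mean and variance match a (non-homogeneous) Poisson process.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
service_s = rng.lognormal(mean=2.3, sigma=0.4, size=2000)   # synthetic service times (s)

shape, loc, scale = stats.lognorm.fit(service_s, floc=0)
ks = stats.kstest(service_s, "lognorm", args=(shape, loc, scale))
print(f"log-normal fit: mu={np.log(scale):.2f}, sigma={shape:.2f}, KS p={ks.pvalue:.2f}")

# Time-dependent arrival rate: per-hour counts should have mean ~ variance (Poisson)
hourly_counts = rng.poisson(lam=np.repeat([40, 80, 120, 90], 24))   # synthetic counts
for rate, block in zip([40, 80, 120, 90], np.split(hourly_counts, 4)):
    print(f"nominal rate {rate}/h -> mean {block.mean():.1f}, var {block.var():.1f}")
```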
voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.
Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K
2014-02-03
New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.
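A schematic sketch of the core voom idea, not the limma implementation: compute log-CPM values, estimate the mean-variance trend of the log-counts by lowess, and convert predicted standard deviations into per-observation precision weights; the counts are synthetic and, for simplicity, the trend is evaluated at gene means rather than at model-fitted values.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
counts = rng.negative_binomial(n=10, p=0.1, size=(2000, 6))     # genes x samples
lib_size = counts.sum(axis=0)

log_cpm = np.log2((counts + 0.5) / (lib_size + 1.0) * 1e6)      # voom-style log-CPM
mean_log = log_cpm.mean(axis=1)
sqrt_sd = np.sqrt(log_cpm.std(axis=1, ddof=1))                  # sqrt(sd) per gene

# Lowess trend of sqrt(sd) against mean log-CPM, returned in the original gene order
trend = lowess(sqrt_sd, mean_log, frac=0.5, return_sorted=False)

# Interpolate the trend at each observation (here simply its gene mean) and turn the
# predicted sqrt(sd) into an inverse-variance precision weight
order = np.argsort(mean_log)
pred = np.interp(log_cpm.ravel(), mean_log[order], trend[order]).reshape(log_cpm.shape)
weights = pred ** -4.0
print(weights.shape)   # per-observation weights for a weighted linear model (limma-style)
```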
On the use of log-transformation vs. nonlinear regression for analyzing biological power laws
Xiao, X.; White, E.P.; Hooten, M.B.; Durham, S.L.
2011-01-01
Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain. © 2011 by the Ecological Society of America.
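A minimal sketch of the comparison described: simulate y = a·x^b with multiplicative log-normal or additive normal error, then estimate the exponent b by linear regression on log-transformed data (LR) and by nonlinear regression (NLR); all parameter values are illustrative.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(4)
a_true, b_true = 2.0, 0.75
x = rng.uniform(1.0, 100.0, size=200)

def estimate_exponent(y):
    slope, *_ = stats.linregress(np.log(x), np.log(y))                 # LR on log-log data
    (_, b_nlr), _ = optimize.curve_fit(lambda x, a, b: a * x**b, x, y,
                                       p0=(1.0, 1.0))                  # NLR on raw data
    return slope, b_nlr

y_mult = a_true * x**b_true * rng.lognormal(0.0, 0.3, size=x.size)     # multiplicative error
y_add = np.clip(a_true * x**b_true + rng.normal(0.0, 3.0, size=x.size),
                1e-6, None)                                            # additive error

print("multiplicative error -> LR b, NLR b:", estimate_exponent(y_mult))
print("additive error       -> LR b, NLR b:", estimate_exponent(y_add))
```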
Miller, Robert; Plessow, Franziska
2013-06-01
Endocrine time series often lack normality and homoscedasticity, most likely due to the non-linear dynamics of their natural determinants and the immanent characteristics of the biochemical analysis tools, respectively. As a consequence, data transformation (e.g., log-transformation) is frequently applied to enable general linear model-based analyses. However, to date, data transformation techniques vary substantially across studies and the question of which is the optimum power transformation remains to be addressed. The present report aims to provide a common solution for the analysis of endocrine time series by systematically comparing different power transformations with regard to their impact on data normality and homoscedasticity. For this, a variety of power transformations of the Box-Cox family were applied to salivary cortisol data of 309 healthy participants sampled in temporal proximity to a psychosocial stressor (the Trier Social Stress Test). Whereas our analyses show that un- as well as log-transformed data are inferior in terms of meeting normality and homoscedasticity, they also provide optimum transformations for both cross-sectional cortisol samples reflecting the distributional concentration equilibrium and longitudinal cortisol time series comprising systematically altered hormone distributions that result from simultaneously elicited pulsatile change and continuous elimination processes. Considering these dynamics of endocrine oscillations, data transformation prior to testing GLMs seems mandatory to minimize biased results. Copyright © 2012 Elsevier Ltd. All rights reserved.
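A minimal sketch, assuming a vector of cortisol concentrations, of choosing a Box-Cox exponent by maximum likelihood and comparing the normality of raw, log- and Box-Cox-transformed values; the data are synthetic and the procedure is generic rather than the authors'.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
cortisol = rng.lognormal(mean=1.5, sigma=0.6, size=309)      # synthetic nmol/l values

transformed, lam = stats.boxcox(cortisol)                    # MLE estimate of lambda
print(f"optimal Box-Cox lambda = {lam:.2f}")
for label, values in [("raw", cortisol), ("log", np.log(cortisol)),
                      ("box-cox", transformed)]:
    print(label, "Shapiro-Wilk p =", round(stats.shapiro(values).pvalue, 3))
```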
[Evaluation of estimation of prevalence ratio using Bayesian log-binomial regression model].
Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L
2017-03-10
To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking relative to caregivers' recognition of risk signs of diarrhea in their infants by using a Bayesian log-binomial regression model in OpenBUGS software. The results showed that caregivers' recognition of an infant's risk signs of diarrhea was associated with a significant 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point and interval estimates of the PR and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child age in months, based on model 2) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. The results showed that all three Bayesian log-binomial regression models converged and the estimated PRs were 1.130 (95% CI: 1.005-1.265), 1.128 (95% CI: 1.001-1.264) and 1.132 (95% CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95% CI: 1.055-1.206) and 1.126 (95% CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95% CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those of the conventional log-binomial regression models, but they showed good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with fewer convergence problems and has advantages in application compared with the conventional log-binomial regression model.
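For comparison, a minimal sketch of the conventional (non-Bayesian) log-binomial model in Python: a GLM with binomial family and log link, whose exponentiated coefficients are prevalence ratios. The Bayesian version in the paper was fitted in OpenBUGS; the variable names and data here are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500
df = pd.DataFrame({"recognized_signs": rng.integers(0, 2, n),   # hypothetical covariates
                   "edu_years": rng.integers(0, 13, n)})
p = np.minimum(0.95, 0.45 * np.exp(0.12 * df["recognized_signs"]))
df["care_seeking"] = rng.binomial(1, p)

X = sm.add_constant(df[["recognized_signs", "edu_years"]])
# Log-binomial model: binomial family with a log link; exp(coef) is a prevalence ratio.
# These fits can fail to converge, which is what the Bayesian/COPY approaches address.
result = sm.GLM(df["care_seeking"], X,
                family=sm.families.Binomial(link=sm.families.links.Log())).fit()
print(np.exp(result.params))       # prevalence ratios
print(np.exp(result.conf_int()))   # 95% CIs on the PR scale
```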
Predicting the Rate of River Bank Erosion Caused by Large Wood Log
NASA Astrophysics Data System (ADS)
Zhang, N.; Rutherfurd, I.; Ghisalberti, M.
2016-12-01
When a single tree falls into a river channel, flow is deflected and accelerated between the tree roots and the bank face, increasing shear stress and scouring the bank. The scallop-shaped erosion increases the diversity of the channel morphology, but also causes concern for adjacent landholders. Concern about increased bank erosion is one of the main reasons that large wood is still removed from channels in SE Australia. Further, the hydraulic effect of many logs in the channel can reduce overall bank erosion rates. Although both phenomena have been described before, this research develops a hydraulic model that estimates their magnitude, and tests and calibrates this model with flume and field measurements, using logs of various configurations and sizes. Specifically, the model estimates the change in excess shear stress on the bank associated with the log. The model addresses the effects of log angle, distance from the bank, log size and flow condition by solving mass continuity and energy conservation between the approaching-flow and contracted-flow cross sections. Then, we evaluate our model against flume experiments performed with semi-realistic log models representing logs of different sizes and decay stages, by comparing the measured and simulated velocity increase in the gap between the log and the bank. The log angle, distance from bank, and flow condition are systematically varied for each log model during the experiment. Finally, the calibrated model is compared with field data collected in anabranching channels of the Murray River in SE Australia, where there are abundant instream logs and regulated, consistently high flows for irrigation. Preliminary results suggest that a log can significantly increase the shear stress on the bank, especially when it is positioned perpendicular to the flow. The shear stress increases with log angle in a rising curve (the log angle is the angle between the log trunk and the flow direction; 0° means the log is parallel to the flow with the canopy pointing downstream). However, the shear stress changes little as the log is moved closer to the bank.
Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi
2014-05-01
The maximum coefficient of friction that can be supported at the shoe and floor interface without a slip is usually called the available coefficient of friction (ACOF) for human locomotion. The probability of a slip could be estimated using a statistical model by comparing the ACOF with the required coefficient of friction (RCOF), assuming that both coefficients have stochastic distributions. An investigation of the stochastic distributions of the ACOF of five different floor surfaces under dry, water and glycerol conditions is presented in this paper. One hundred friction measurements were performed on each floor surface under each surface condition. The Kolmogorov-Smirnov goodness-of-fit test was used to determine if the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF distributions had a slightly better match with the normal and log-normal distributions than with the Weibull in only three out of 15 cases with a statistical significance. The results are far more complex than what had heretofore been published and different scenarios could emerge. Since the ACOF is compared with the RCOF for the estimate of slip probability, the distribution of the ACOF in seven cases could be considered a constant for this purpose when the ACOF is much lower or higher than the RCOF. A few cases could be represented by a normal distribution for practical reasons based on their skewness and kurtosis values without a statistical significance. No representation could be found in three cases out of 15. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
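A minimal sketch of the goodness-of-fit screening described: fit normal, log-normal and Weibull distributions to a set of friction measurements and apply the Kolmogorov-Smirnov test with the fitted parameters (p-values are approximate when parameters are estimated from the same data); the ACOF values are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
acof = rng.normal(loc=0.45, scale=0.06, size=100)     # synthetic ACOF measurements

for name, dist in [("norm", stats.norm), ("lognorm", stats.lognorm),
                   ("weibull_min", stats.weibull_min)]:
    params = dist.fit(acof)
    stat, p = stats.kstest(acof, name, args=params)
    print(f"{name:12s} KS statistic = {stat:.3f}, p = {p:.3f}")

print("skewness:", round(stats.skew(acof), 2), "kurtosis:", round(stats.kurtosis(acof), 2))
```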
NASA Astrophysics Data System (ADS)
Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio
2018-03-01
We present a computational model that describes the diffusion of a hard spheres colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both, the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as log-normal. This study could, therefore, facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics in the filtrate have been defined.
Box-Cox transformation of firm size data in statistical analysis
NASA Astrophysics Data System (ADS)
Chen, Ting Ting; Takaishi, Tetsuya
2014-03-01
Firm size data usually do not show the normality that is often assumed in statistical analysis such as regression analysis. In this study we focus on two measures of firm size: the number of employees and sales. Those data deviate considerably from a normal distribution. To improve the normality of those data we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best show the kurtosis of a normal distribution. It is found that the two firm size measures transformed by the Box-Cox transformation show strong linearity. This indicates that the number of employees and sales have similar properties as firm size indicators. The Box-Cox parameters obtained for the firm size data are found to be very close to zero. In this case the Box-Cox transformation is approximately a log-transformation. This suggests that the firm size data we used are approximately log-normally distributed.
Probing star formation relations of mergers and normal galaxies across the CO ladder
NASA Astrophysics Data System (ADS)
Greve, Thomas R.
We examine integrated luminosity relations between the IR continuum and the CO rotational ladder observed for local (ultra-)luminous infrared galaxies ((U)LIRGs, LIR ≥ 10^11 L⊙) and normal star forming galaxies in the context of radiation pressure regulated star formation proposed by Andrews & Thompson (2011). This can account for the normalization and linear slopes of the luminosity relations (log LIR = α log L'CO + β) of both low- and high-J CO lines observed for normal galaxies. Super-linear slopes occur for galaxy samples with significantly different dense gas fractions. Local (U)LIRGs are observed to have sub-linear high-J (Jup > 6) slopes or, equivalently, increasing LCO(high-J)/LIR with LIR. In the extreme ISM conditions of local (U)LIRGs, the high-J CO lines no longer trace individual hot spots of star formation (which gave rise to the linear slopes for normal galaxies) but a more widespread warm and dense gas phase mechanically heated by powerful supernovae-driven turbulence and shocks.
Chakraborty, Somsubhra; Weindorf, David C; Li, Bin; Ali, Md Nasim; Majumdar, K; Ray, D P
2014-07-01
This pilot study compared penalized spline regression (PSR) and random forest (RF) regression using visible and near-infrared diffuse reflectance spectroscopy (VisNIR DRS) derived spectra of 164 petroleum contaminated soils after two different spectral pretreatments [first derivative (FD) and standard normal variate (SNV) followed by detrending] for rapid quantification of soil petroleum contamination. Additionally, a new analytical approach was proposed for the recovery of the pure spectral and concentration profiles of n-hexane present in the unresolved mixture of petroleum contaminated soils using multivariate curve resolution alternating least squares (MCR-ALS). The PSR model using FD spectra (r(2) = 0.87, RMSE = 0.580 log10 mg kg(-1), and residual prediction deviation = 2.78) outperformed all other models tested. Quantitative results obtained by MCR-ALS for n-hexane in presence of interferences (r(2) = 0.65 and RMSE 0.261 log10 mg kg(-1)) were comparable to those obtained using FD (PSR) model. Furthermore, MCR ALS was able to recover pure spectra of n-hexane. Copyright © 2014 Elsevier Ltd. All rights reserved.
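A minimal sketch, not the study's workflow, of one half of the comparison: Savitzky-Golay first-derivative and SNV pretreatments followed by random forest regression of log10 petroleum concentration; the penalized spline model that performed best in the paper is not shown, and the spectra and concentrations are synthetic.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
spectra = rng.normal(size=(164, 300)).cumsum(axis=1)     # synthetic VisNIR spectra
log_conc = rng.uniform(1.0, 4.0, size=164)               # synthetic log10 mg/kg values

fd = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)  # first derivative
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

for name, X in [("first derivative", fd), ("SNV", snv)]:
    r2 = cross_val_score(RandomForestRegressor(n_estimators=300, random_state=0),
                         X, log_conc, cv=5, scoring="r2")
    print(name, "cross-validated R2 =", r2.mean().round(2))
```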
Limits on Log Cross-Product Ratios for Item Response Models. Research Report. ETS RR-06-10
ERIC Educational Resources Information Center
Haberman, Shelby J.; Holland, Paul W.; Sinharay, Sandip
2006-01-01
Bounds are established for log cross-product ratios (log odds ratios) involving pairs of items for item response models. First, expressions for bounds on log cross-product ratios are provided for unidimensional item response models in general. Then, explicit bounds are obtained for the Rasch model and the two-parameter logistic (2PL) model.…
Neti, Prasad V.S.V.; Howell, Roger W.
2010-01-01
Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049–1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these earlier data. Methods The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086
Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C
2017-01-01
Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal; gamma; Weibull 2P and Burr. The best fits were selected by the Akaike information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility to describe the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodum urundeuva, better fits were obtained with the log-normal function.
Huang, Cheng-Yen; Hsieh, Ming-Ching; Zhou, Qinwei
2017-04-01
Monoclonal antibodies have become the fastest growing protein therapeutics in recent years. The stability and heterogeneity pertaining to its physical and chemical structures remain a big challenge. Tryptophan fluorescence has been proven to be a versatile tool to monitor protein tertiary structure. By modeling the tryptophan fluorescence emission envelope with log-normal distribution curves, the quantitative measure can be exercised for the routine characterization of monoclonal antibody overall tertiary structure. Furthermore, the log-normal deconvolution results can be presented as a two-dimensional plot with tryptophan emission bandwidth vs. emission maximum to enhance the resolution when comparing samples or as a function of applied perturbations. We demonstrate this by studying four different monoclonal antibodies, which show the distinction on emission bandwidth-maximum plot despite their similarity in overall amino acid sequences and tertiary structures. This strategy is also used to demonstrate the tertiary structure comparability between different lots manufactured for one of the monoclonal antibodies (mAb2). In addition, in the unfolding transition studies of mAb2 as a function of guanidine hydrochloride concentration, the evolution of the tertiary structure can be clearly traced in the emission bandwidth-maximum plot.
210Po Log-normal distribution in human urines: Survey from Central Italy people
Sisti, D.; Rocchi, M. B. L.; Meli, M. A.; Desideri, D.
2009-01-01
The death in London of the former secret service agent Alexander Livtinenko on 23 November 2006 generally attracted the attention of the public to the rather unknown radionuclide 210Po. This paper presents the results of a monitoring programme of 210Po background levels in the urines of noncontaminated people living in Central Italy (near the Republic of S. Marino). The relationship between age, sex, years of smoking, number of cigarettes per day, and 210Po concentration was also studied. The results indicated that the urinary 210Po concentration follows a surprisingly perfect Log-normal distribution. Log 210Po concentrations were positively correlated to age (p < 0.0001), number of daily smoked cigarettes (p = 0.006), and years of smoking (p = 0.021), and associated to sex (p = 0.019). Consequently, this study provides upper reference limits for each sub-group identified by significantly predictive variables. PMID:19750019
An integrated 3D log processing optimization system for small sawmills in central Appalachia
Wenshu Lin; Jingxin Wang
2013-01-01
An integrated 3D log processing optimization system was developed to perform 3D log generation, opening face determination, headrig log sawing simulation, flitch edging and trimming simulation, cant resawing, and lumber grading. A circular cross-section model, together with 3D modeling techniques, was used to reconstruct 3D virtual logs. Internal log defects (knots)...
NASA Astrophysics Data System (ADS)
Fukami, Christine S.; Sullivan, Amy P.; Ryan Fulgham, S.; Murschell, Trey; Borch, Thomas; Smith, James N.; Farmer, Delphine K.
2016-07-01
Particle-into-Liquid Samplers (PILS) have become a standard aerosol collection technique, and are widely used in both ground and aircraft measurements in conjunction with off-line ion chromatography (IC) measurements. Accurate and precise background samples are essential to account for gas-phase components not efficiently removed and any interference in the instrument lines, collection vials or off-line analysis procedures. For aircraft sampling with PILS, backgrounds are typically taken with in-line filters to remove particles prior to sample collection once or twice per flight, with more numerous backgrounds taken on the ground. Here, we use data collected during the Front Range Air Pollution and Photochemistry Éxperiment (FRAPPÉ) to demonstrate not only that multiple background filter samples are essential to attain a representative background, but also that the chemical background signals do not follow the Gaussian statistics typically assumed. Instead, the background signals for all chemical components analyzed from 137 background samples (taken from ∼78 total sampling hours over 18 flights) follow a log-normal distribution, meaning that the typical approaches of averaging background samples and/or assuming a Gaussian distribution cause an overestimation of the background, and thus an underestimation of sample concentrations. Our approach of deriving backgrounds from the peak of the log-normal distribution results in detection limits of 0.25, 0.32, 3.9, 0.17, 0.75 and 0.57 μg m-3 for sub-micron aerosol nitrate (NO3-), nitrite (NO2-), ammonium (NH4+), sulfate (SO42-), potassium (K+) and calcium (Ca2+), respectively. The difference in backgrounds calculated from assuming a Gaussian distribution versus a log-normal distribution was most extreme for NH4+, resulting in a background that was 1.58× that determined from fitting a log-normal distribution.
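A minimal sketch of the background treatment described: fit a log-normal distribution to the filter-blank signals and take its peak (mode) rather than the arithmetic mean as the background; the blank values are synthetic and the paper's exact detection-limit convention is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
blanks = rng.lognormal(mean=-1.2, sigma=0.7, size=137)   # synthetic backgrounds (ug m-3)

shape, loc, scale = stats.lognorm.fit(blanks, floc=0)
mu, sigma = np.log(scale), shape
mode = np.exp(mu - sigma**2)                             # peak of the log-normal pdf
print(f"arithmetic mean background: {blanks.mean():.3f}")
print(f"log-normal mode background: {mode:.3f}")         # lower than the mean
```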
Gómez-Novo, Miriam; Boga, José A; Álvarez-Argüelles, Marta E; Rojo-Alba, Susana; Fernández, Ana; Menéndez, María J; de Oña, María; Melón, Santiago
2018-05-01
Human respiratory syncytial virus (HRSV) is a common cause of respiratory infections. The main objective was to analyze the ability of HRSV viral load, normalized by cell number, to predict respiratory symptoms. A prospective, descriptive, and analytical study was performed. From 7307 respiratory samples processed between December 2014 and April 2016, 1019 HRSV-positive samples were included in this study. Lower respiratory tract infection was present in 729 patients (71.54%). Normalized HRSV load was calculated from quantification of the HRSV genome and the human β-globin gene and expressed as log10 copies/1000 cells. HRSV mean loads were 4.09 ± 2.08 and 4.82 ± 2.09 log10 copies/1000 cells in the 549 pharyngeal and 470 nasopharyngeal samples, respectively (P < 0.001). The mean viral load was 4.81 ± 1.98 log10 copies/1000 cells for patients under the age of 4 years (P < 0.001). The mean viral loads were 4.51 ± 2.04 log10 copies/1000 cells in patients with lower respiratory tract infection and 4.22 ± 2.28 log10 copies/1000 cells in those with upper respiratory tract infection or febrile syndrome (P < 0.05). A possible cut-off value to predict LRTI evolution was tentatively established. Normalization of the viral load by the cell number in the samples is essential to ensure an optimal virological molecular diagnosis, avoiding that the quality of the samples affects the results. A high viral load can be a useful marker to predict disease progression. © 2018 Wiley Periodicals, Inc.
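A minimal sketch of the normalization described, under the assumption that the β-globin assay counts two copies per diploid cell; the copy numbers are hypothetical.

```python
import numpy as np

hrsv_copies = np.array([2.4e6, 8.1e4, 5.5e7])     # HRSV genome copies per reaction
globin_copies = np.array([9.0e4, 2.2e5, 6.1e4])   # human beta-globin copies per reaction

cells = globin_copies / 2.0                       # assumed two globin copies per cell
load = np.log10(hrsv_copies / cells * 1000.0)     # log10 HRSV copies per 1000 cells
print(load.round(2))
```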
NASA Astrophysics Data System (ADS)
Nieber, J. L.; Li, W.
2017-12-01
The instantaneous groundwater discharge (Qgw) from a watershed is related to the volume of drainable water stored (Sgw) within the watershed aquifer(s). The relation is hysteretic and the magnitude of the hysteresis is completely scale-dependent. In the research reported here we apply a previously calibrated (USGS) GSFLOW model to the simulation of surface and subsurface runoff for the Sagehen Creek watershed. This 29.3 km2 watershed is located in the eastern range of the Sierra Nevada Mountains, and most of the precipitation falls in the form of snow. The GSFLOW model is composed of a surface water and shallow subsurface flow hydrology model, PRMS, and a groundwater flow component based on MODFLOW. PRMS is a semi-distributed watershed model, very similar in character to the well-known SWAT model. The PRMS model is coupled with the MODFLOW model in that deep percolation generated within the PRMS model feeds into the MODFLOW model. The simulated baseflow recessions, plotted as -dQ/dt vs Q, show a strong dependence on watershed topography and plot concave downward. These plots show a somewhat weaker dependence on the hydrologic fluxes of evapotranspiration and recharge, with the concave downward shape maintained but somewhat modified by these fluxes. As expected, the Qgw vs Sgw relation is markedly hysteretic. The cause of this hysteresis is related to the magnitude of water stored, and also the spatial distribution of water stored in the watershed, with the antecedent storage in upland areas controlling the recession flow in late time, while the valley area dominates the recession flow in early time. Both the minimum streamflow (Qmin; the flow at the transition between early-time and late-time uninterrupted recession) and the intercept (intercept of the regression line fit to the recession data on a log-log scale) show a strong relationship with antecedent streamflows. The minimum streamflow, Qmin, is found to be a valid normalizing parameter for producing a unique normalized -dQ/dt vs. Q relation from data manifesting the effects of hysteresis. It is proposed that this normalized relation can be used to improve the performance of low-dimension dynamic models of watershed hydrology that would otherwise not account for hysteresis in the Qgw vs Sgw relation.
Excess adiposity, inflammation, and iron-deficiency in female adolescents.
Tussing-Humphreys, Lisa M; Liang, Huifang; Nemeth, Elizabeta; Freels, Sally; Braunschweig, Carol A
2009-02-01
Iron deficiency is more prevalent in overweight children and adolescents but the mechanisms that underlie this condition remain unclear. The purpose of this cross-sectional study was to assess the relationship between iron status and excess adiposity, inflammation, menarche, diet, physical activity, and poverty status in female adolescents included in the National Health and Nutrition Examination Survey 2003-2004 dataset. Descriptive and simple comparative statistics (t test, chi-square) were used to assess differences between normal-weight (5th ≤ body mass index [BMI] percentile < 85th) and heavier-weight girls (≥ 85th percentile for BMI) for demographic, biochemical, dietary, and physical activity variables. In addition, logistic regression analyses predicting iron deficiency and linear regression predicting serum iron levels were performed. Heavier-weight girls had an increased prevalence of iron deficiency compared to those with normal weight. Dietary iron, age of and time since first menarche, poverty status, and physical activity were similar between the two groups and were not independent predictors of iron deficiency or log serum iron levels. Logistic modeling predicting iron deficiency revealed that having a BMI ≥ 85th percentile, and each 1 mg/dL increase in C-reactive protein, more than doubled the odds ratio for iron deficiency. The best-fit linear model to predict serum iron levels included both serum transferrin receptor and C-reactive protein following log-transformation for normalization of these variables. Findings indicate that heavier-weight female adolescents are at greater risk for iron deficiency and that inflammation stemming from excess adipose tissue contributes to this phenomenon. Food and nutrition professionals should consider elevated BMI as an additional risk factor for iron deficiency in female adolescents.
Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D
2017-01-01
If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
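A minimal sketch, not the authors' code, of the recommended transformations combined with a simple DerSimonian-Laird random-effects pooling: C-statistics are meta-analysed on the logit scale and E/O on the log scale, then back-transformed; the per-study estimates and standard errors are hypothetical.

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate and between-study variance (DerSimonian-Laird)."""
    w = 1.0 / variances
    fixed = np.sum(w * estimates) / np.sum(w)
    q = np.sum(w * (estimates - fixed) ** 2)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (variances + tau2)
    return np.sum(w_star * estimates) / np.sum(w_star), tau2

# Hypothetical per-study C-statistics and E/O ratios with their standard errors
c_stat, c_se = np.array([0.78, 0.81, 0.74, 0.80]), np.array([0.02, 0.03, 0.02, 0.02])
eo, eo_se = np.array([1.10, 0.92, 1.25, 1.05]), np.array([0.08, 0.07, 0.10, 0.09])

logit_c = np.log(c_stat / (1 - c_stat))
var_logit_c = (c_se / (c_stat * (1 - c_stat))) ** 2          # delta-method variance
pooled_logit, _ = dersimonian_laird(logit_c, var_logit_c)
pooled_log_eo, _ = dersimonian_laird(np.log(eo), (eo_se / eo) ** 2)
print("pooled C:", 1 / (1 + np.exp(-pooled_logit)), "pooled E/O:", np.exp(pooled_log_eo))
```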
Sun, Lili; Zhou, Liping; Yu, Yu; Lan, Yukun; Li, Zhiliang
2007-01-01
Polychlorinated diphenyl ethers (PCDEs) have received increasing concern as a group of ubiquitous potential persistent organic pollutants (POPs). By using the molecular electronegativity distance vector (MEDV-4), multiple linear regression (MLR) models are developed for the sub-cooled liquid vapor pressures (PL), n-octanol/water partition coefficients (KOW) and sub-cooled liquid water solubilities (SW,L) of 209 PCDEs and diphenyl ether. The correlation coefficients (R) and the leave-one-out cross-validation (LOO) correlation coefficients (RCV) of all the 6-descriptor models for log PL, log KOW and log SW,L are more than 0.98. By using stepwise multiple regression (SMR), the descriptors are selected and the resulting models are a 5-descriptor model for log PL, a 4-descriptor model for log KOW, and a 6-descriptor model for log SW,L, respectively. All these models exhibit excellent estimation capabilities for the internal sample set and good predictive capabilities for the external sample set. The consistency between observed and estimated/predicted values is best for log PL (R = 0.996, RCV = 0.996), followed by log KOW (R = 0.992, RCV = 0.992) and log SW,L (R = 0.983, RCV = 0.980). By using MEDV-4 descriptors, the QSPR models can be used for prediction, and the model predictions can hence extend the current database of experimental values.
Recovery of forest structure and spectral properties after selective logging in lowland Bolivia.
Broadbent, Eben N; Zarin, Daniel J; Asner, Gregory P; Peña-Claros, Marielos; Cooper, Amanda; Littell, Ramon
2006-06-01
Effective monitoring of selective logging from remotely sensed data requires an understanding of the spatial and temporal thresholds that constrain the utility of those data, as well as the structural and ecological characteristics of forest disturbances that are responsible for those constraints. Here we assess those thresholds and characteristics within the context of selective logging in the Bolivian Amazon. Our study combined field measurements of the spatial and temporal dynamics of felling gaps and skid trails ranging from <1 to 19 months following reduced-impact logging in a forest in lowland Bolivia with remote-sensing measurements from simultaneous monthly ASTER satellite overpasses. A probabilistic spectral mixture model (AutoMCU) was used to derive per-pixel fractional cover estimates of photosynthetic vegetation (PV), non-photosynthetic vegetation (NPV), and soil. Results were compared with the normalized difference vegetation index (NDVI). The forest studied had considerably lower basal area and harvest volumes than logged sites in the Brazilian Amazon where similar remote-sensing analyses have been performed. Nonetheless, individual felling-gap area was positively correlated with canopy openness, percentage liana coverage, rates of vegetation regrowth, and height of remnant NPV. Both liana growth and NPV occurred primarily in the crown zone of the felling gap, whereas exposed soil was limited to the trunk zone of the gap. In felling gaps >400 m2, NDVI, and the PV and NPV fractions, were distinguishable from unlogged forest values for up to six months after logging; felling gaps <400 m2 were distinguishable for up to three months after harvest, but we were entirely unable to distinguish skid trails from our analysis of the spectral data.
Paillet, Frederick L.; Hodges, Richard E.; Corland, Barbara S.
2002-01-01
This report presents and describes geophysical logs for six boreholes in Lariat Gulch, a topographic gulch at the former U.S. Air Force site PJKS in Jefferson County near Denver, Colorado. Geophysical logs include gamma, normal resistivity, fluid-column temperature and resistivity, caliper, televiewer, and heat-pulse flowmeter. These logs were run in two boreholes penetrating only the Fountain Formation of Pennsylvanian and Permian age (logged to depths of about 65 and 570 feet) and in four boreholes (logged to depths of about 342 to 742 feet) penetrating mostly the Fountain Formation and terminating in Precambrian crystalline rock, which underlies the Fountain Formation. Data from the logs were used to identify fractures and bedding planes and to locate the contact between the two formations. The logs indicated few fractures in the boreholes and gave no indication of higher transmissivity in the contact zone between the two formations. Transmissivities for all fractures in each borehole were estimated to be less than 2 feet squared per day.
At what scale should microarray data be analyzed?
Huang, Shuguang; Yeo, Adeline A; Gelbert, Lawrence; Lin, Xi; Nisenbaum, Laura; Bemis, Kerry G
2004-01-01
The hybridization intensities derived from microarray experiments, for example Affymetrix's MAS5 signals, are very often transformed in one way or another before statistical models are fitted. The motivation for performing transformation is usually to satisfy the model assumptions such as normality and homogeneity in variance. Generally speaking, two types of strategies are often applied to microarray data depending on the analysis need: correlation analysis where all the gene intensities on the array are considered simultaneously, and gene-by-gene ANOVA where each gene is analyzed individually. We investigate the distributional properties of the Affymetrix GeneChip signal data under the two scenarios, focusing on the impact of analyzing the data at an inappropriate scale. The Box-Cox type of transformation is first investigated for the strategy of pooling genes. The commonly used log-transformation is particularly applied for comparison purposes. For the scenario where analysis is on a gene-by-gene basis, the model assumptions such as normality are explored. The impact of using a wrong scale is illustrated by log-transformation and quartic-root transformation. When all the genes on the array are considered together, the dependent relationship between the expression and its variation level can be satisfactorily removed by Box-Cox transformation. When genes are analyzed individually, the distributional properties of the intensities are shown to be gene dependent. Derivation and simulation show that some loss of power is incurred when a wrong scale is used, but due to the robustness of the t-test, the loss is acceptable when the fold-change is not very large.
White noise analysis of Phycomyces light growth response system. I. Normal intensity range.
Lipson, E D
1975-01-01
The Wiener-Lee-Schetzen method for the identification of a nonlinear system through white Gaussian noise stimulation was applied to the transient light growth response of the sporangiophore of Phycomyces. In order to cover a moderate dynamic range of light intensity I, the input variable was defined to be log I. The experiments were performed in the normal range of light intensity, centered about I0 = 10^-6 W/cm2. The kernels of the Wiener functionals were computed up to second order. Within the range of a few decades the system is reasonably linear with log I. The main nonlinear feature of the second-order kernel corresponds to the property of rectification. Power spectral analysis reveals that the slow dynamics of the system are of at least fifth order. The system can be represented approximately by a linear transfer function, including a first-order high-pass (adaptation) filter with a 4 min time constant and an underdamped fourth-order low-pass filter. Accordingly a linear electronic circuit was constructed to simulate the small scale response characteristics. In terms of the adaptation model of Delbrück and Reichardt (1956, in Cellular Mechanisms in Differentiation and Growth, Princeton University Press), kernels were deduced for the dynamic dependence of the growth velocity (output) on the "subjective intensity", a presumed internal variable. Finally the linear electronic simulator above was generalized to accommodate the large scale nonlinearity of the adaptation model and to serve as a tool for deeper testing of the model. PMID:1203444
Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock
NASA Technical Reports Server (NTRS)
Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.
2001-01-01
Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, the probability distribution P(bar)(log E) of the wave field E is observed to be a power law, with the bar denoting this averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain a power-law spatially-averaged distribution P(bar)(log E) when the observed power-law variations of the mean and standard deviation of log E with position are combined with the log-normal statistics predicted by SGT at each location.
A study of electric field components in shallow water and water half-space models in seabed logging
NASA Astrophysics Data System (ADS)
Rostami, Amir; Soleimani, Hassan; Yahya, Noorhana; Nyamasvisva, Tadiwa Elisha; Rauf, Muhammad
2016-11-01
Seabed logging (SBL) is an electromagnetic (EM) method for detecting hydrocarbon (HC) reserves beneath the seafloor, developed from the marine controlled-source electromagnetic (CSEM) method. CSEM produces a resistivity log of geological layers by transmitting ultra-low-frequency EM waves. In SBL, a network of receivers placed on the seafloor detects EM waves reflected and refracted by layers of different resistivity. The contrast in electrical resistivity between layers affects the amplitude and phase of the EM response. The central concern in SBL is to detect the wave guided by a highly resistive layer beneath the seafloor, which may be an HC reservoir. This guided wave produces a marked difference in the received signal relative to the case in which no HC reservoir exists. However, at large offsets, especially in shallow water, the dominant contribution to the received EM signal is the airwave refracted at the sea surface owing to the extremely high resistivity of the atmosphere, and this airwave can dramatically obscure the guided wave. Our objective in this work is to compare the HC delineation capability of the tangential and normal components of the electric field in a shallow water area, using finite element simulation. We report that, in a shallow water environment, the minor contribution of the airwave to the normal component of the electric field (Ey), in contrast to its major contribution to the tangential component (Ex), gives Ey a considerably better contrast for delineating deeply buried reservoirs (more than 3000 m), while Ex is unable to distinguish between media with and without HC under the same conditions.
Mixed effects modelling for glass category estimation from glass refractive indices.
Lucy, David; Zadora, Grzegorz
2011-10-10
A total of 520 glass fragments were taken from 105 glass items. Each item was either a container, a window, or glass from an automobile. Each of these three classes of use is defined as a glass category. Refractive indices were measured both before and after a programme of re-annealing. Because the refractive index of each fragment could not in itself be observed before and after re-annealing, a model-based approach was used to estimate the change in refractive index for each glass category. It was found that less complex estimation methods would be equivalent to the full model, and these were subsequently used. The change in refractive index was then used to calculate a measure of the evidential value for each item belonging to each glass category. The distributions of refractive index change were considered for each glass category, and it was found that, possibly due to small samples, members of the normal family would not adequately model the refractive index changes within two of the use types considered here. Two alternative approaches to modelling the change in refractive index were used: one employed the more established kernel density estimates, the other a newer approach called log-concave estimation. Either method, when applied to the change in refractive index, was found to give good estimates of glass category; however, on all performance metrics kernel density estimates were found to be slightly better than log-concave estimates, although the estimates from log-concave estimation possessed properties with some qualitative appeal not encapsulated in the selected measures of performance. These results and the implications of these two methods of estimating probability densities for glass refractive indices are discussed. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
Scaling laws and properties of compositional data
NASA Astrophysics Data System (ADS)
Buccianti, Antonella; Albanese, Stefano; Lima, AnnaMaria; Minolfi, Giulia; De Vivo, Benedetto
2016-04-01
Many random processes occur in geochemistry. Accurate predictions of the manner in which elements or chemical species interact with each other are needed to construct models able to treat the presence of random components. The geochemical variables actually observed are the consequence of several events, some of which may be poorly defined or imperfectly understood. Variables tend to change with time/space but, despite their complexity, may share specific common traits, and it is possible to model them stochastically. Description of the frequency distribution of geochemical abundances has been an important target of research, attracting attention for at least 100 years, starting with CLARKE (1889) and continuing with GOLDSCHMIDT (1933) and WEDEPOHL (1955). However, it was AHRENS (1954a,b) who focussed on the effect of skewed distributions, for example the log-normal distribution, regarded by him as a fundamental law of geochemistry. Although modeling of frequency distributions with probabilistic models (for example Gaussian, log-normal, Pareto) has been well discussed in several fields of application, little attention has been devoted to the features of compositional data. When the compositional nature of the data is taken into account, the most typical distribution models for compositions are the Dirichlet and the additive logistic normal (or normal on the simplex) (AITCHISON et al. 2003; MATEU-FIGUERAS et al. 2005; MATEU-FIGUERAS and PAWLOWSKY-GLAHN 2008; MATEU-FIGUERAS et al. 2013). As an alternative, because compositional data have to be transformed from the simplex to real space, coordinates obtained by the ilr transformation or by application of the concept of balance can be analyzed by classical methods (EGOZCUE et al. 2003). In this contribution an approach coherent with the properties of compositional information is proposed and used to investigate the shape of the frequency distribution of compositional data. The purpose is to understand data-generation processes from the perspective of compositional theory. The approach is based on the use of the isometric log-ratio transformation, characterized by theoretical and practical advantages but requiring a more complex geochemical interpretation compared with the investigation of single variables. The proposed methodology directs attention to modeling the frequency distributions of more complex indices, linking all the terms of the composition to better represent the dynamics of geochemical processes. An example of its application is presented and discussed by considering the topsoil geochemistry of the Campania Region (southern Italy). The investigated multi-element data archive contains, among others, Al, As, B, Ba, Ca, Co, Cr, Cu, Fe, K, La, Mg, Mn, Mo, Na, Ni, P, Pb, Sr, Th, Ti, V and Zn (mg/kg) contents determined in 3535 new topsoils, as well as information on coordinates, geology and land cover (BUCCIANTI et al., 2015). AHRENS, L., 1954a. Geochim. Cosm. Acta 6, 121-131. AHRENS, L., 1954b. Geochim. Cosm. Acta 5, 49-73. AITCHISON, J., et al., 2003. Math Geol 35(6), 667-680. BUCCIANTI et al., 2015. Jour. Geoch. Explor. 159, 302-316. CLARKE, F., 1889. Phil. Society of Washington Bull. 11, 131-142. EGOZCUE, J.J., et al., 2003. Math Geol 35(3), 279-300. MATEU-FIGUERAS, G., et al., 2005. Stoch. Environ. Res. Risk Ass. 19(3), 205-214.
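A minimal sketch of the isometric log-ratio (ilr) transformation referred to above, written in plain NumPy using pivot coordinates from a sequential binary partition; the example composition is hypothetical and the choice of partition (and hence of the orthonormal basis) is an assumption.

```python
# ilr (pivot) coordinates of a composition, ready for classical statistics.
import numpy as np

def ilr(x):
    """Map a D-part composition to D-1 real ilr (pivot) coordinates."""
    x = np.asarray(x, dtype=float)
    D = x.size
    z = np.empty(D - 1)
    for i in range(1, D):
        gm = np.exp(np.mean(np.log(x[:i])))           # geometric mean of parts 1..i
        z[i - 1] = np.sqrt(i / (i + 1.0)) * np.log(gm / x[i])
    return z

# Hypothetical topsoil composition (mg/kg) for three elements plus a residual part;
# closure to a constant sum is irrelevant because the ilr is scale invariant.
parts = np.array([35000.0, 12.0, 450.0, 964538.0])
print(ilr(parts))
```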
An "ASYMPTOTIC FRACTAL" Approach to the Morphology of Malignant Cell Nuclei
NASA Astrophysics Data System (ADS)
Landini, Gabriel; Rippin, John W.
To quantitatively investigate nuclear membrane irregularity, 672 nuclei from 10 cases of oral cancer (squamous cell carcinoma) and normal cells from oral mucosa were studied in transmission electron micrographs. The nuclei were photographed at ×1400 magnification and transferred to computer memory (1 pixel = 35 nm). The perimeter of the profiles was analysed using the "yardstick method" of fractal dimension estimation, and the log-log plot of ruler size vs. boundary length demonstrated a significant effect of resolution on length measurement. However, this effect seems to disappear at higher resolutions. As this observation is compatible with the concept of an asymptotic fractal, we estimated the parameters c, L and Bm from the asymptotic fractal formula Br = Bm [1 + (r/L)^c]^(-1), where Br is the boundary length measured with a ruler of size r, Bm is the maximum boundary length for r → 0, L is a constant, and c is the asymptotic fractal dimension minus the topological dimension (D - Dt) for r → ∞. Analyses of variance showed c to be significantly higher in the normal than in the malignant cases (P < 0.001), but log(L) and Bm to be significantly higher in the malignant cases (P < 0.001). A multivariate linear discriminant analysis on c, log(L) and Bm re-classified 76.6% of the cells correctly (84.8% of the normal and 67.5% of the tumor cells). These results show that asymptotic fractal analysis applied to nuclear profiles has great potential for shape quantification in the diagnosis of oral cancer.
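The parameter estimation step can be illustrated with a nonlinear least-squares fit of the asymptotic-fractal relation Br = Bm [1 + (r/L)^c]^(-1) to yardstick measurements; the sketch below uses scipy.optimize.curve_fit on simulated boundary-length data, so the ruler sizes and parameter values are assumptions rather than the study's measurements.

```python
# Fit the asymptotic-fractal relation to simulated yardstick measurements.
import numpy as np
from scipy.optimize import curve_fit

def asymptotic_fractal(r, Bm, L, c):
    return Bm / (1.0 + (r / L) ** c)

rng = np.random.default_rng(2)
r = np.geomspace(0.035, 5.0, 20)                      # ruler sizes (micrometres), hypothetical
true = dict(Bm=60.0, L=0.8, c=0.25)                   # hypothetical parameters
B = asymptotic_fractal(r, **true) * rng.normal(1.0, 0.02, r.size)

popt, pcov = curve_fit(asymptotic_fractal, r, B, p0=[50.0, 1.0, 0.3])
Bm_hat, L_hat, c_hat = popt
print(f"Bm={Bm_hat:.3g} um, L={L_hat:.3g} um, c={c_hat:.3f}")
```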
El-Osta, Hazem; Jani, Pushan; Mansour, Ali; Rascoe, Philip; Jafri, Syed
2018-04-23
An accurate assessment of mediastinal lymph node status is essential in the staging and treatment planning of potentially resectable non-small cell lung cancer (NSCLC). We performed this meta-analysis to evaluate the role of endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) in detecting occult mediastinal disease in NSCLC with no radiologic mediastinal involvement. The PubMed, Embase, and Cochrane libraries were searched for studies describing the role of EBUS-TBNA in lung cancer patients with a radiologically negative mediastinum. The individual and pooled sensitivity, prevalence, negative predictive value (NPV), and diagnostic odds ratio (DOR) were calculated using the random effects model. Metaregression analysis, heterogeneity, and publication bias were also assessed. A total of 13 studies that met the inclusion criteria were included in the meta-analysis. The pooled effect sizes of the different diagnostic parameters were estimated as follows: prevalence, 12.8% (95% confidence interval [CI], 10.4%-15.7%); sensitivity, 49.5% (95% CI, 36.4%-62.6%); NPV, 93.0% (95% CI, 90.3%-95.0%); and log DOR, 5.069 (95% CI, 4.212-5.925). Significant heterogeneity was noticeable for sensitivity, disease prevalence, and NPV, but not for log DOR. Publication bias was detected for sensitivity, NPV, and log DOR but not for prevalence. Bivariate meta-regression analysis showed no significant association between the pooled parameters and the type of anesthesia, the imaging utilized to define a negative mediastinum, rapid on-site test usage, or the presence of bias by the QUADAS-2 tool. Interestingly, we observed greater sensitivity, NPV, and log DOR for studies published prior to 2010 and for prospective multicenter studies. Among NSCLC patients with a radiologically normal mediastinum, the prevalence of mediastinal disease is 12.8% and the sensitivity of EBUS-TBNA is 49.5%. Despite the low sensitivity, the resulting NPV of 93.0% for EBUS-TBNA suggests that mediastinal metastasis is uncommon in such patients.
A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence
NASA Astrophysics Data System (ADS)
Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.
2018-04-01
The weak atmospheric turbulence condition in an optical wireless communication (OWC) system is captured by the log-normal distribution. The analytical evaluation of the average bit error rate (BER) of an OWC system under weak turbulence is intractable, as it involves the statistical averaging of the Gaussian Q-function over the log-normal distribution. In this paper, a simple closed-form approximation for the BER of an OWC system under weak turbulence is given. The computation of BER for various modulation schemes is carried out using the proposed expression. The results obtained using the proposed expression compare favorably with those obtained using the Gauss-Hermite quadrature approximation and Monte Carlo simulations.
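The baselines mentioned above can be sketched as follows: the average BER is taken as E[Q(snr·I)] with a unit-mean log-normal fading coefficient I, evaluated by Gauss-Hermite quadrature and by Monte Carlo. This generic BER form and the parameter values are assumptions for illustration, not the paper's exact expression or its closed-form approximation.

```python
# Average a Gaussian Q-function over log-normal fading by quadrature and Monte Carlo.
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def ber_gauss_hermite(snr, sigma_x, n=40):
    """E[Q(snr * I)], I = exp(X), X ~ N(-sigma_x^2/2, sigma_x^2) so that E[I] = 1."""
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    mu = -0.5 * sigma_x ** 2
    fading = np.exp(mu + np.sqrt(2.0) * sigma_x * nodes)
    return np.sum(weights * qfunc(snr * fading)) / np.sqrt(np.pi)

def ber_monte_carlo(snr, sigma_x, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    fading = np.exp(rng.normal(-0.5 * sigma_x ** 2, sigma_x, size=n))
    return qfunc(snr * fading).mean()

print(ber_gauss_hermite(snr=3.0, sigma_x=0.3))
print(ber_monte_carlo(snr=3.0, sigma_x=0.3))
```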
Log-Normality and Multifractal Analysis of Flame Surface Statistics
NASA Astrophysics Data System (ADS)
Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.
2013-11-01
The turbulent flame surface is typically highly wrinkled and folded at a multitude of scales controlled by various flame properties. It is useful if the information contained in this complex geometry can be projected onto a simpler regular geometry for the use of spectral, wavelet or multifractal analyses. Here we investigate the local flame surface statistics of a turbulent flame expanding under constant pressure. First, the statistics of the local length ratio are experimentally obtained from high-speed Mie scattering images. For a spherically expanding flame, the length ratio on the measurement plane, at predefined equiangular sectors, is defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming an isotropic distribution of such flame segments, we convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at corresponding area-ratio pdfs. Both pdfs are found to be nearly log-normally distributed and show self-similar behavior with increasing radius. The near log-normality and rather intermittent behavior of the flame-length ratio suggest similarity with dissipation-rate quantities, which motivates multifractal analysis. Currently at Indian Institute of Science, India.
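A small illustration of the near-log-normality check, on simulated length ratios rather than the Mie-scattering data: fit scipy's lognorm with the location fixed at zero and test the normality of the logarithms.

```python
# Check near-log-normality of a set of length ratios (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
length_ratio = rng.lognormal(mean=0.15, sigma=0.25, size=500)   # hypothetical flame data

shape, loc, scale = stats.lognorm.fit(length_ratio, floc=0)     # shape = sigma, scale = exp(mu)
print(f"sigma = {shape:.3f}, mu = {np.log(scale):.3f}")

# If the data are log-normal, log(data) should pass a normality test.
stat, p = stats.normaltest(np.log(length_ratio))
print(f"normality test on log(length ratio): p = {p:.3f}")
```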
[Quantitative study of diesel/CNG buses exhaust particulate size distribution in a road tunnel].
Zhu, Chun; Zhang, Xu
2010-10-01
Vehicle emission is one of the main sources of fine/ultra-fine particles in many cities. This study first presents the daily mean particle size distributions of a mixed diesel/CNG bus traffic flow, based on 4 days of consecutive real-world measurements in an Australian road tunnel. Emission factors (EFs) for the particle size distributions of diesel buses and CNG buses are obtained by multiple linear regression (MLR) methods; the particle distributions of diesel buses and CNG buses are observed as a single accumulation mode and a single nuclei mode, respectively. The particle size distributions of the mixed traffic flow are decomposed into two log-normal fitting curves for each 30 min interval mean scan; the degrees of fit between the combined fitting curves and the corresponding in-situ scans, for a total of 90 fitted scans, range from 0.972 to 0.998. Finally, the particle size distributions of diesel buses and CNG buses are quantified by statistical whisker-box charts. For the log-normal particle size distribution of diesel buses, accumulation mode diameters are 74.5-86.5 nm and geometric standard deviations are 1.88-2.05. For the log-normal particle size distribution of CNG buses, nuclei-mode diameters are 19.9-22.9 nm and geometric standard deviations are 1.27-1.30.
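The decomposition into two log-normal modes can be sketched with scipy.optimize.curve_fit, as below; the synthetic scan and starting values are only loosely based on the ranges reported above and are not the tunnel measurements.

```python
# Decompose a number-size distribution into two log-normal modes.
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(dp, N, dpg, sg):
    """dN/dlogDp of one log-normal mode (N: number conc., dpg: median diameter, sg: GSD)."""
    return (N / (np.sqrt(2 * np.pi) * np.log10(sg))
            * np.exp(-((np.log10(dp) - np.log10(dpg)) ** 2) / (2 * np.log10(sg) ** 2)))

def two_modes(dp, N1, dpg1, sg1, N2, dpg2, sg2):
    return lognormal_mode(dp, N1, dpg1, sg1) + lognormal_mode(dp, N2, dpg2, sg2)

dp = np.geomspace(10, 400, 60)                                   # diameters (nm)
rng = np.random.default_rng(4)
scan = two_modes(dp, 8e3, 21, 1.29, 4e3, 80, 1.95) * rng.normal(1, 0.05, dp.size)

p0 = [5e3, 20, 1.3, 5e3, 75, 1.9]
popt, _ = curve_fit(two_modes, dp, scan, p0=p0, maxfev=20000)
print("nuclei mode (N, dpg, GSD):      ", popt[:3])
print("accumulation mode (N, dpg, GSD):", popt[3:])
```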
Strum, David P; May, Jerrold H; Sampson, Allan R; Vargas, Luis G; Spangler, William E
2003-01-01
Variability inherent in the duration of surgical procedures complicates surgical scheduling. Modeling the duration and variability of surgeries might improve time estimates. Accurate time estimates are important operationally to improve utilization, reduce costs, and identify surgeries that might be considered outliers. Surgeries with multiple procedures are difficult to model because they are difficult to segment into homogeneous groups and because they are performed less frequently than single-procedure surgeries. The authors studied, retrospectively, 10,740 surgeries each with exactly two CPT (Current Procedural Terminology) codes and 46,322 surgical cases with only one CPT code from a large teaching hospital to determine whether the distribution of dual-procedure surgery times more closely fits a lognormal or a normal model. The authors tested model goodness of fit to their data using Shapiro-Wilk tests, studied factors affecting the variability of time estimates, and examined the impact of coding permutations (ordered combinations) on modeling. The Shapiro-Wilk tests indicated that the lognormal model is statistically superior to the normal model for modeling dual-procedure surgeries. Permutations of component codes did not appear to differ significantly with respect to total procedure time and surgical time. To improve individual models for infrequent dual-procedure surgeries, permutations may be reduced and estimates may be based on the longest component procedure and type of anesthesia. The authors recommend use of the lognormal model for estimating surgical times for surgeries with two component procedures. Their results help legitimize the use of log transforms to normalize surgical procedure times prior to hypothesis testing using linear statistical models. Multiple-procedure surgeries may be modeled using the longest (statistically most important) component procedure and type of anesthesia.
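A minimal sketch of the lognormal-versus-normal comparison, using Shapiro-Wilk tests on simulated surgery durations (one hypothetical procedure group, not the authors' 10,740-case data set):

```python
# Shapiro-Wilk tests on raw and log-transformed simulated surgery durations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
durations = rng.lognormal(mean=np.log(120), sigma=0.45, size=200)   # minutes, hypothetical

w_raw, p_raw = stats.shapiro(durations)          # tests the normal model
w_log, p_log = stats.shapiro(np.log(durations))  # tests the lognormal model
print(f"normal model:    W = {w_raw:.3f}, p = {p_raw:.2e}")
print(f"lognormal model: W = {w_log:.3f}, p = {p_log:.2e}")
```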
NASA Technical Reports Server (NTRS)
Herskovits, E. H.; Itoh, R.; Melhem, E. R.
2001-01-01
OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images of a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.
Determining inert content in coal dust/rock dust mixture
Sapko, Michael J.; Ward, Jr., Jack A.
1989-01-01
A method and apparatus for determining the inert content of a coal dust and rock dust mixture uses a transparent window pressed against the mixture. An infrared light beam is directed through the window such that a portion of the infrared light beam is reflected from the mixture. The concentration of the reflected light is detected and a signal indicative of the reflected light is generated. A normalized value for the generated signal is determined according to the relationship φ = (log i_c − log i_c0) / (log i_c100 − log i_c0), where i_c0 is the measured signal at 0% rock dust, i_c100 is the measured signal at 100% rock dust, and i_c is the measured signal of the mixture. This normalized value is then correlated to a predetermined relationship of φ to rock dust percentage to determine the rock dust content of the mixture. The rock dust content is displayed where the percentage is between 30 and 100%, and an indication of out-of-range is displayed where the rock dust percentage is less than 30%. Preferably, the rock dust percentage (RD%) is calculated from the predetermined relationship RD% = 100 + 30 log φ. Where the dust mixture initially includes moisture, the dust mixture is dried before measuring by use of 8 to 12 mesh molecular sieves, which are shaken with the dust mixture and subsequently screened from the dust mixture.
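A small sketch of the normalization and calibration described above, assuming base-10 logarithms and using arbitrary example signal readings:

```python
# Compute the normalized signal phi and the rock dust percentage RD%.
import math

def rock_dust_percent(i_c, i_c0, i_c100):
    """Return (phi, RD%) from the mixture signal and the 0% / 100% calibration signals.
    Base-10 logarithms are assumed here."""
    phi = (math.log10(i_c) - math.log10(i_c0)) / (math.log10(i_c100) - math.log10(i_c0))
    rd = 100.0 + 30.0 * math.log10(phi)
    return phi, rd

phi, rd = rock_dust_percent(i_c=0.55, i_c0=0.10, i_c100=0.90)   # arbitrary example signals
if rd < 30.0:
    print("out of range (< 30% rock dust)")
else:
    print(f"phi = {phi:.3f}, rock dust = {rd:.1f}%")
```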
Severity of Illness Scores May Misclassify Critically Ill Obese Patients.
Deliberato, Rodrigo Octávio; Ko, Stephanie; Komorowski, Matthieu; Armengol de La Hoz, M A; Frushicheva, Maria P; Raffa, Jesse D; Johnson, Alistair E W; Celi, Leo Anthony; Stone, David J
2018-03-01
Severity of illness scores rest on the assumption that patients have normal physiologic values at baseline and that patients with similar severity of illness scores have the same degree of deviation from their usual state. Prior studies have reported differences in baseline physiology, including laboratory markers, between obese and normal weight individuals, but these differences have not been analyzed in the ICU. We compared deviation from baseline of pertinent ICU laboratory test results between obese and normal weight patients, adjusted for the severity of illness. Retrospective cohort study in a large ICU database. Tertiary teaching hospital. Obese and normal weight patients who had laboratory results documented between 3 days and 1 year prior to hospital admission. None. Seven hundred sixty-nine normal weight patients were compared with 1,258 obese patients. After adjusting for the severity of illness score, age, comorbidity index, baseline laboratory result, and ICU type, the following deviations were found to be statistically significant: WBC 0.80 (95% CI, 0.27-1.33) × 10/L; p = 0.003; log (blood urea nitrogen) 0.01 (95% CI, 0.00-0.02); p = 0.014; log (creatinine) 0.03 (95% CI, 0.02-0.05), p < 0.001; with all deviations higher in obese patients. A logistic regression analysis suggested that after adjusting for age and severity of illness at least one of these deviations had a statistically significant effect on hospital mortality (p = 0.009). Among patients with the same severity of illness score, we detected clinically small but significant deviations in WBC, creatinine, and blood urea nitrogen from baseline in obese compared with normal weight patients. These small deviations are likely to be increasingly important as bigger data are analyzed in increasingly precise ways. Recognition of the extent to which all critically ill patients may deviate from their own baseline may improve the objectivity, precision, and generalizability of ICU mortality prediction and severity adjustment models.
Consequence of reputation in the Sznajd consensus model
NASA Astrophysics Data System (ADS)
Crokidakis, Nuno; Forgerini, Fabricio L.
2010-07-01
In this work we study a modified version of the Sznajd sociophysics model. In particular, we introduce reputation, a mechanism that limits the agents' capacity of persuasion. The reputation is introduced as a time-dependent score, and its introduction avoids dictatorship (all spins parallel) for a wide range of parameters. The relaxation time follows a log-normal-like distribution. In addition, we show that the usual phase transition also occurs, as in the standard model, and that it depends on the initial concentration of individuals following an opinion, occurring at an initial density of up spins greater than 1/2. The transition point is determined by means of a finite-size scaling analysis.
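To make the mechanism concrete, the toy simulation below implements one plausible reading of a Sznajd-type update with a reputation constraint (an agreeing pair persuades its outer neighbours only if its mean reputation exceeds theirs, and successful persuasion raises the pair's reputation). The precise comparison and update rules of the paper may differ, and all parameter values are assumptions.

```python
# Toy 1D Sznajd-like dynamics with a reputation constraint (illustrative only).
import numpy as np

rng = np.random.default_rng(6)
N, steps = 1000, 200_000
spins = rng.choice([-1, 1], size=N, p=[0.4, 0.6])    # initial density of up spins = 0.6
reputation = rng.integers(1, 6, size=N).astype(float)

for _ in range(steps):
    i = rng.integers(N)
    j = (i + 1) % N
    if spins[i] != spins[j]:
        continue                                      # the pair must agree to persuade
    pair_rep = 0.5 * (reputation[i] + reputation[j])
    for k in ((i - 1) % N, (j + 1) % N):              # the two outer neighbours
        if reputation[k] < pair_rep and spins[k] != spins[i]:
            spins[k] = spins[i]                       # persuasion succeeds
            reputation[i] += 1                        # reputation grows with success
            reputation[j] += 1

print(f"final magnetization: {spins.mean():+.3f} (dictatorship would be +/-1)")
```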
Pei Li; Jing He; A. Lynn Abbott; Daniel L. Schmoldt
1996-01-01
This paper analyses computed tomography (CT) images of hardwood logs, with the goal of locating internal defects. The ability to detect and identify defects automatically is a critical component of efficiency improvements for future sawmills and veneer mills. This paper describes an approach in which 1) histogram equalization is used during preprocessing to normalize...
Monocular oral reading after treatment of dense congenital unilateral cataract
Birch, Eileen E.; Cheng, Christina; Christina, V; Stager, David R.
2010-01-01
Background Good long-term visual acuity outcomes for children with dense congenital unilateral cataracts have been reported following early surgery and good compliance with postoperative amblyopia therapy. However, treated eyes rarely achieve normal visual acuity and there has been no formal evaluation of the utility of the treated eye for reading. Methods Eighteen children previously treated for dense congenital unilateral cataract were tested monocularly with the Gray Oral Reading Test, 4th edition (GORT-4) at 7 to 13 years of age using two passages for each eye, one at grade level and one at +1 above grade level. In addition, right eyes of 55 normal children age 7 to 13 served as a control group. The GORT-4 assesses reading rate, accuracy, fluency, and comprehension. Results Visual acuity of treated eyes ranged from 0.1 to 2.0 logMAR and of fellow eyes from −0.1 to 0.2 logMAR. Treated eyes scored significantly lower than fellow and normal control eyes on all scales at grade level and at +1 above grade level. Monocular reading rate, accuracy, fluency, and comprehension were correlated with visual acuity of treated eyes (rs = −0.575 to −0.875, p < 0.005). Treated eyes with 0.1-0.3 logMAR visual acuity did not differ from fellow or normal control eyes in rate, accuracy, fluency, or comprehension when reading at grade level or at +1 above grade level. Fellow eyes did not differ from normal controls on any reading scale. Conclusions Excellent visual acuity outcomes following treatment of dense congenital unilateral cataracts are associated with normal reading ability of the treated eye in school-age children. PMID:20603057
Diaper area skin microflora of normal children and children with atopic dermatitis.
Keswick, B H; Seymour, J L; Milligan, M C
1987-01-01
In vitro studies established that neither cloth nor disposable diapers demonstrably contributed to the growth of Escherichia coli, Proteus vulgaris, Staphylococcus aureus, or Candida albicans when urine was present as a growth medium. In a clinical study of 166 children, the microbial skin flora of children with atopic dermatitis was compared with the flora of children with normal skin to determine the influence of diaper type. No biologically significant differences were detected between groups wearing disposable or cloth diapers in terms of frequency of isolation or log mean recovery of selected skin flora. Repeated isolation of S. aureus correlated with atopic dermatitis. The log mean recovery of S. aureus was higher in the atopic groups. The effects of each diaper type on skin microflora were equivalent in the normal and atopic populations. PMID:3546360
Stick-slip behavior in a continuum-granular experiment.
Geller, Drew A; Ecke, Robert E; Dahmen, Karin A; Backhaus, Scott
2015-12-01
We report moment distribution results from a laboratory experiment, similar in character to an isolated strike-slip earthquake fault, consisting of sheared elastic plates separated by a narrow gap filled with a two-dimensional granular medium. Local measurement of strain displacements of the plates at 203 spatial points located adjacent to the gap allows direct determination of the event moments and their spatial and temporal distributions. We show that events consist of spatially coherent, larger motions and spatially extended (noncoherent), smaller events. The noncoherent events have a probability distribution of event moment consistent with an M^(-3/2) power-law scaling with Poisson-distributed recurrence times. Coherent events have a log-normal moment distribution and mean temporal recurrence. As the applied normal pressure increases, there are more coherent events and their log-normal distribution broadens and shifts to larger average moment.
Psychometric functions for pure-tone frequency discrimination.
Dai, Huanping; Micheyl, Christophe
2011-07-01
The form of the psychometric function (PF) for auditory frequency discrimination is of theoretical interest and practical importance. In this study, PFs for pure-tone frequency discrimination were measured for several standard frequencies (200-8000 Hz) and levels [35-85 dB sound pressure level (SPL)] in normal-hearing listeners. The proportion-correct data were fitted using a cumulative-Gaussian function of the sensitivity index, d', computed as a power transformation of the frequency difference, Δf. The exponent of the power function corresponded to the slope of the PF on log(d')-log(Δf) coordinates. The influence of attentional lapses on PF-slope estimates was investigated. When attentional lapses were not taken into account, the estimated PF slopes on log(d')-log(Δf) coordinates were found to be significantly lower than 1, suggesting a nonlinear relationship between d' and Δf. However, when lapse rate was included as a free parameter in the fits, PF slopes were found not to differ significantly from 1, consistent with a linear relationship between d' and Δf. This was the case across the wide ranges of frequencies and levels tested in this study. Therefore, spectral and temporal models of frequency discrimination must account for a linear relationship between d' and Δf across a wide range of frequencies and levels. © 2011 Acoustical Society of America
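The slope estimate can be sketched as follows: convert proportion correct to d' (assuming an unbiased two-interval task, so d' = √2·z(Pc)), then regress log d' on log Δf. The data below are simulated, and the task model and the clipping of extreme proportions are assumptions.

```python
# Estimate the PF slope on log(d')-log(delta-f) coordinates from simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
delta_f = np.array([1.0, 2.0, 3.0, 4.0, 6.0])             # Hz, hypothetical steps
true_dprime = delta_f / 4.0                               # linear d'-vs-delta-f relation
pc_true = stats.norm.cdf(true_dprime / np.sqrt(2.0))      # 2-interval task assumption
n_trials = 200
pc_obs = rng.binomial(n_trials, pc_true) / n_trials

dprime_obs = np.sqrt(2.0) * stats.norm.ppf(np.clip(pc_obs, 0.51, 0.99))
slope, intercept, r, p, se = stats.linregress(np.log10(delta_f), np.log10(dprime_obs))
print(f"PF slope on log(d')-log(delta-f) axes: {slope:.2f} +/- {se:.2f}")
```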
Application of Fracture Distribution Prediction Model in Xihu Depression of East China Sea
NASA Astrophysics Data System (ADS)
Yan, Weifeng; Duan, Feifei; Zhang, Le; Li, Ming
2018-02-01
Logging measurements respond differently to changes in formation characteristics, and the presence of fractures produces outliers in the responses. For this reason, the development of fractures in a formation can be characterized by fine analysis of the logging curves. Well logs such as resistivity, sonic transit time, density, neutron porosity and gamma ray, which are classified as conventional well logs, are relatively sensitive to formation fractures. Because the traditional fracture prediction model, which uses a simple weighted average of different logging data to calculate a comprehensive fracture index, is susceptible to subjective factors and can show large deviations, a statistical method is introduced here. Combining the responses of conventional logging data to the development of formation fractures, a prediction model based on membership functions is established; in essence, it analyses the logging data with fuzzy mathematics. Fracture predictions for a well in the NX block of the Xihu depression obtained with the two models are compared with imaging-log results, which show that the membership-function model is more accurate than the traditional model. Furthermore, the prediction results are highly consistent with the imaging logs and better reflect the development of fractures, providing a reference for engineering practice.
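A hedged sketch of a membership-function (fuzzy) fracture index is given below: each conventional log is mapped to a [0, 1] membership value through a smooth function of its standardized deviation from a background trend, and the per-log values are combined into one index per depth sample. The sigmoid membership shape, the geometric-mean combination rule and the synthetic log values are illustrative assumptions, not the paper's exact model.

```python
# Illustrative fuzzy fracture index from conventional log responses.
import numpy as np

def membership(deviation, midpoint=1.0, steepness=4.0):
    """Sigmoid membership: ~0 for small deviations, ~1 for strongly anomalous ones."""
    return 1.0 / (1.0 + np.exp(-steepness * (np.abs(deviation) - midpoint)))

def fracture_index(log_values, background_mean, background_std):
    """Combine per-log memberships (geometric mean) into one index per depth sample."""
    devs = (log_values - background_mean) / background_std     # standardized deviations
    memberships = membership(devs)                             # one column per log curve
    return np.exp(np.mean(np.log(memberships + 1e-12), axis=1))

# Columns: resistivity, sonic, density, neutron, gamma ray (synthetic, 5 depth samples).
logs = np.array([
    [20.0, 65.0, 2.55, 0.12, 80.0],
    [18.0, 68.0, 2.52, 0.13, 82.0],
    [ 6.0, 90.0, 2.30, 0.22, 85.0],   # fracture-like anomaly
    [21.0, 64.0, 2.56, 0.11, 79.0],
    [19.0, 66.0, 2.54, 0.12, 81.0],
])
print(fracture_index(logs, logs.mean(axis=0), logs.std(axis=0)))
```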
A common mode of origin of power laws in models of market and earthquake
NASA Astrophysics Data System (ADS)
Bhattacharyya, Pratip; Chatterjee, Arnab; Chakrabarti, Bikas K.
2007-07-01
We show that there is a common mode of origin for the power laws observed in two different models: (i) the Pareto law for the distribution of money among the agents with random-saving propensities in an ideal gas-like market model and (ii) the Gutenberg-Richter law for the distribution of overlaps in a fractal-overlap model for earthquakes. We find that the power laws appear as the asymptotic forms of ever-widening log-normal distributions for the agents’ money and the overlap magnitude, respectively. The identification of the generic origin of the power laws helps in better understanding and in developing generalized views of phenomena in such diverse areas as economics and geophysics.
Evaluation of bacterial run and tumble motility parameters through trajectory analysis
NASA Astrophysics Data System (ADS)
Liang, Xiaomeng; Lu, Nanxi; Chang, Lin-Ching; Nguyen, Thanh H.; Massoudieh, Arash
2018-04-01
In this paper, a method for extracting the behavior parameters of bacterial migration based on the run-and-tumble conceptual model is described. The methodology is applied to microscopic images of the motile movement of flagellated Azotobacter vinelandii. The bacterial cells are considered to change direction during both runs and tumbles, as is evident from the movement trajectories. An unsupervised cluster analysis was performed to fractionate each bacterial trajectory into run and tumble segments, and the distributions of parameters for each mode were then extracted by fitting the mathematical distributions that best represent the data. A Gaussian copula was used to model the autocorrelation in swimming velocity. For both run and tumble modes, a Gamma distribution was found to fit the marginal velocity best, and a Logistic distribution was found to represent the deviation angle better than the other distributions considered. For the transition rate distribution, a log-logistic distribution and a log-normal distribution, respectively, were found to perform better than the traditionally assumed exponential distribution. A model was then developed to mimic the motility behavior of bacteria in the presence of flow. The model was applied to evaluate its ability to describe observed patterns of bacterial deposition on surfaces in a micro-model experiment with an approach velocity of 200 μm/s. It was found that the model can qualitatively reproduce the attachment results of the micro-model setting.
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
Log-Multiplicative Association Models as Item Response Models
ERIC Educational Resources Information Center
Anderson, Carolyn J.; Yu, Hsiu-Ting
2007-01-01
Log-multiplicative association (LMA) models, which are special cases of log-linear models, have interpretations in terms of latent continuous variables. Two theoretical derivations of LMA models based on item response theory (IRT) arguments are presented. First, we show that Anderson and colleagues (Anderson & Vermunt, 2000; Anderson & Bockenholt,…
Role of Demographic Dynamics and Conflict in the Population-Area Relationship for Human Languages
Manrubia, Susanna C.; Axelsen, Jacob B.; Zanette, Damián H.
2012-01-01
Many patterns displayed by the distribution of human linguistic groups are similar to the ecological organization described for biological species. It remains a challenge to identify simple and meaningful processes that describe these patterns. The population size distribution of human linguistic groups, for example, is well fitted by a log-normal distribution that may arise from stochastic demographic processes. As we show in this contribution, the distribution of the area size of the home ranges of those groups also agrees with a log-normal function. Further, size and area are significantly correlated: the number of speakers and the area spanned by linguistic groups follow an allometric relation, with an exponent varying across different world regions. The empirical evidence presented leads to the hypothesis that the distributions of population size and area, and their mutual dependence, rely on demographic dynamics and on the result of conflicts over territory due to group growth. To substantiate this point, we introduce a two-variable stochastic multiplicative model whose analytical solution recovers the empirical observations. Applied to different world regions, the model reveals that the retreat in home range is sublinear with respect to the decrease in population size, and that the population-area exponent grows with the typical strength of conflicts. While the shape of the population size and area distributions, and their allometric relation, seem unavoidable outcomes of demography and inter-group contact, the precise value of the exponent could give insight into the cultural organization of those human groups in the last thousand years. PMID:22815726
NASA Astrophysics Data System (ADS)
Moschandreas, D. J.; Kim, Y.; Karuchit, S.; Ari, H.; Lebowitz, M. D.; O'Rourke, M. K.; Gordon, S.; Robertson, G.
One of the objectives of the National Human Exposure Assessment Survey (NHEXAS) is to estimate exposures to several pollutants in multiple media and determine their distributions for the population of Arizona. This paper presents modeling methods used to estimate exposure distributions of chlorpyrifos and diazinon in the residential microenvironment using the database generated in Arizona (NHEXAS-AZ). A four-stage probability sampling design was used for sample selection. Exposures to pesticides were estimated using the indirect method of exposure calculation by combining measured concentrations of the two pesticides in multiple media with questionnaire information such as time subjects spent indoors, dietary and non-dietary items they consumed, and areas they touched. Most distributions of in-residence exposure to chlorpyrifos and diazinon were log-normal or nearly log-normal. Exposures to chlorpyrifos and diazinon vary by pesticide and route as well as by various demographic characteristics of the subjects. Comparisons of exposure to pesticides were investigated among subgroups of demographic categories, including gender, age, minority status, education, family income, household dwelling type, year the dwelling was built, pesticide use, and carpeted areas within dwellings. Residents with large carpeted areas within their dwellings have higher exposures to both pesticides for all routes than those in less carpet-covered areas. Depending on the route, several other determinants of exposure to pesticides were identified, but a clear pattern could not be established regarding the exposure differences between several subpopulation groups.
The soft X-ray diffuse background observed with the HEAO 1 low-energy detectors
NASA Technical Reports Server (NTRS)
Garmire, G. P.; Nousek, J. A.; Apparao, K. M. V.; Burrows, D. N.; Fink, R. L.; Kraft, R. P.
1992-01-01
Results of a study of the diffuse soft-X-ray background as observed by the low-energy detectors of the A-2 experiment aboard the HEAO 1 satellite are reported. The observed sky intensities are presented as maps of the diffuse X-ray background sky in several energy bands covering the energy range 0.15-2.8 keV. It is found that the soft X-ray diffuse background (SXDB) between 1.5 and 2.8 keV, assuming a power law form with photon number index 1.4, has a normalization constant of 10.5 +/- 1.0 photons/sq cm s sr keV. Below 1.5 keV the spectrum of the SXDB exceeds the extrapolation of this power law. The low-energy excess for the NEP can be fitted with emission from a two-temperature equilibrium plasma model with the temperatures given by log T1 = 6.16 and log T2 = 6.33. It is found that this model is able to account for the spectrum below 1 keV, but fails to yield the observed Galactic latitude variation.
Rapid pupil-based assessment of glaucomatous damage.
Chen, Yanjun; Wyatt, Harry J; Swanson, William H; Dul, Mitchell W
2008-06-01
To investigate the ability of a technique employing pupillometry and functionally-shaped stimuli to assess loss of visual function due to glaucomatous optic neuropathy. Pairs of large stimuli, mirror images about the horizontal meridian, were displayed alternately in the upper and lower visual field. Pupil diameter was recorded and analyzed in terms of the "contrast balance" (relative sensitivity to the upper and lower stimuli), and the pupil constriction amplitude to upper and lower stimuli separately. A group of 40 patients with glaucoma was tested twice in a first session, and twice more in a second session, 1 to 3 weeks later. A group of 40 normal subjects was tested with the same protocol. Results for the normal subjects indicated functional symmetry in upper/lower retina, on average. Contrast balance results for the patients with glaucoma differed from normal: half the normal subjects had contrast balance within 0.06 log unit of equality and 80% had contrast balance within 0.1 log unit. Half the patients had contrast balances more than 0.1 log unit from equality. Patient contrast balances were moderately correlated with predictions from perimetric data (r = 0.37, p < 0.00001). Contrast balances correctly classified visual field damage in 28 patients (70%), and response amplitudes correctly classified 24 patients (60%). When contrast balance and response amplitude were combined, receiver operating characteristic area for discriminating glaucoma from normal was 0.83. Pupillary evaluation of retinal asymmetry provides a rapid method for detecting and classifying visual field defects. In this patient population, classification agreed with perimetry in 70% of eyes.
Rapid measurement and prediction of bacterial contamination in milk using an oxygen electrode.
Numthuam, Sonthaya; Suzuki, Hiroaki; Fukuda, Junji; Phunsiri, Suthiluk; Rungchang, Saowaluk; Satake, Takaaki
2009-03-01
An oxygen electrode was used to measure oxygen consumption to determine bacterial contamination in milk. Dissolved oxygen (DO) measured at 10-35 degrees C for 2 hours provided a reasonable prediction efficiency (r ≥ 0.90) of the amount of bacteria between 1.9 and 7.3 log (CFU/mL). A temperature-dependent predictive model was developed that has the same prediction accuracy as the normal predictive model. The analysis performed with and without stirring provided the same prediction efficiency, with a correlation coefficient of 0.90. The measurement of DO is a simple and rapid method for the determination of bacteria in milk.
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
ERIC Educational Resources Information Center
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
Paillet, Frederick; Hite, Laura; Carlson, Matthew
1999-01-01
Time domain surface electromagnetic soundings, borehole induction logs, and other borehole logging techniques are used to construct a realistic model for the shallow subsurface hydraulic properties of unconsolidated sediments in south Florida. Induction logs are used to calibrate surface induction soundings in units of pore water salinity by correlating water sample specific electrical conductivity with the electrical conductivity of the formation over the sampled interval for a two‐layered aquifer model. Geophysical logs are also used to show that a constant conductivity layer model is appropriate for the south Florida study. Several physically independent log measurements are used to quantify the dependence of formation electrical conductivity on such parameters as salinity, permeability, and clay mineral fraction. The combined interpretation of electromagnetic soundings and induction logs was verified by logging three validation boreholes, confirming quantitative estimates of formation conductivity and thickness in the upper model layer, and qualitative estimates of conductivity in the lower model layer.
Ultrasound image filtering using the multiplicative model
NASA Astrophysics Data System (ADS)
Navarrete, Hugo; Frery, Alejandro C.; Sanchez, Fermin; Anto, Joan
2002-04-01
Ultrasound images, as a special case of coherent images, are normally corrupted with multiplicative noise, i.e., speckle noise. Speckle noise reduction is a difficult task due to its multiplicative nature, but good statistical models of speckle formation are useful for designing adaptive speckle reduction filters. In this article a new statistical model, emerging from the Multiplicative Model framework, is presented and compared to previous models (Rayleigh, Rice and K laws). It is shown that the proposed model gives the best performance when modeling the statistics of ultrasound images. Finally, the parameters of the model can be used to quantify the extent of speckle formation; this quantification is applied to adaptive speckle reduction filter design. The effectiveness of the filter is demonstrated on typical in-vivo log-compressed B-scan images obtained by a clinical ultrasound system.
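The multiplicative corruption and the log-compression applied by clinical scanners can be sketched as below; the gamma-distributed speckle (fully developed speckle with a few effective looks) and the piecewise-constant phantom are illustrative assumptions, not the article's proposed model.

```python
# Simulate multiplicative speckle corruption and B-scan style log compression.
import numpy as np

rng = np.random.default_rng(8)

# Piecewise-constant "tissue" phantom (echogenicity map).
clean = np.ones((128, 128))
clean[32:96, 32:96] = 3.0

looks = 4                                            # effective number of looks
speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)
noisy = clean * speckle                              # multiplicative corruption

log_compressed = 20.0 * np.log10(noisy + 1e-6)       # dynamic-range compression
print("speckle mean ~ 1:", speckle.mean().round(3))
print("std (clean vs noisy):", clean.std().round(3), noisy.std().round(3))
```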
Estimating tree grades for Southern Appalachian natural forest stands
Jeffrey P. Prestemon
1998-01-01
Log prices can vary significantly by grade: grade 1 logs are often several times the price per unit of grade 3 logs. Because tree grading rules derive from log grading rules, a model that predicts tree grades based on tree and stand-level variables might be useful for predicting stand values. The model could then assist in the modeling of timber supply and in economic...
Bhullar, Manreet Singh; Patras, Ankit; Kilanzo-Nthenge, Agnes; Pokharel, Bharat; Yannam, Sudheer Kumar; Rakariyatham, Kanyasiri; Pan, Che; Xiao, Hang; Sasges, Michael
2018-01-01
A continuous-flow UV reactor operating at a 254 nm wavelength was used to investigate the inactivation of microorganisms, including bacteriophage, in coconut water, a highly opaque liquid food. UV-C inactivation kinetics of two surrogate viruses (MS2, T1UV) and three bacteria (E. coli ATCC 25922, Salmonella Typhimurium ATCC 13311, Listeria monocytogenes ATCC 19115) in buffer and coconut water were investigated (D10 values ranging from 2.82 to 4.54 mJ·cm⁻²). A series of known UV-C doses were delivered to the samples. Inactivation levels of all organisms were linearly proportional to UV-C dose (r² > 0.97). At the highest dose of 30 mJ·cm⁻², the three pathogenic organisms were inactivated by >5 log10 (p < 0.05). The results clearly demonstrated that UV-C irradiation effectively inactivated bacteriophage and pathogenic microbes in coconut water. The inactivation kinetics of the microorganisms were best described by a log-linear model with a low root mean square error (RMSE) and high coefficient of determination (r² > 0.97). Models for predicting log reduction as a function of UV-C irradiation dose were found to be significant (p < 0.05) with low RMSE and high r². The irradiated coconut water showed no cytotoxic effects on normal human intestinal cells and normal mouse liver cells. Overall, these results indicated that UV-C treatment did not generate cytotoxic compounds in the coconut water. This study clearly demonstrated that high levels of inactivation of pathogens can be achieved in coconut water, and suggested a potential method for UV-C treatment of other liquid foods. This research provides scientific evidence of the potential benefits of UV-C irradiation in inactivating bacterial and viral surrogates at commercially relevant doses of 0-120 mJ·cm⁻². The irradiated coconut water showed no cytotoxic effects on normal intestinal cells and healthy mouse liver cells. UV-C irradiation is an attractive food preservation technology and offers opportunities for horticultural and food processing industries to meet the growing demand from consumers for healthier and safer food products. This study provides technical support for the commercialization of UV-C treatment of beverages. Copyright © 2017 Elsevier Ltd. All rights reserved.
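A minimal sketch of the log-linear inactivation fit: regress log10 survival on UV-C dose and report the decimal reduction dose D10 = −1/slope. The survival values below are simulated, not the study's measurements.

```python
# Log-linear inactivation fit: D10 from the slope of log10 survival vs dose.
import numpy as np
from scipy import stats

dose = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])          # mJ/cm^2
rng = np.random.default_rng(9)
true_d10 = 4.0
log_survival = -dose / true_d10 + rng.normal(0.0, 0.1, dose.size)  # log10(N/N0)

slope, intercept, r, p, se = stats.linregress(dose, log_survival)
print(f"D10 = {-1.0 / slope:.2f} mJ/cm^2, r^2 = {r**2:.3f}")
```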
Estimation of Renyi exponents in random cascades
Troutman, Brent M.; Vecchia, Aldo V.
1999-01-01
We consider statistical estimation of the Rényi exponent τ(h), which characterizes the scaling behaviour of a singular measure μ defined on a subset of R^d. The Rényi exponent is defined to be lim_{δ→0} [log M_δ(h)]/(−log δ), assuming that this limit exists, where M_δ(h) = Σ_i μ^h(Δ_i) and, for δ > 0, {Δ_i} are the cubes of a δ-coordinate mesh that intersect the support of μ. In particular, we demonstrate asymptotic normality of the least-squares estimator of τ(h) when the measure μ is generated by a particular class of multiplicative random cascades, a result which allows construction of interval estimates and application of hypothesis tests for this scaling exponent. Simulation results illustrating this asymptotic normality are presented. © 1999 ISI/BS.
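The least-squares estimator can be illustrated on a simple deterministic binomial cascade (weights 0.3/0.7), which stands in for the random cascades studied in the paper: build the measure, form the partition function M_δ(h) over dyadic cells of size δ = 2^-k, and regress log M_δ(h) on −log δ.

```python
# Least-squares estimate of the Renyi exponent tau(h) on a binomial cascade.
import numpy as np
from scipy import stats

def binomial_cascade(levels, w=0.3):
    mu = np.array([1.0])
    for _ in range(levels):
        mu = np.concatenate([w * mu, (1.0 - w) * mu])   # split every cell's mass
    return mu                                           # masses of 2**levels fine cells

h = 2.0
levels = 14
mu = binomial_cascade(levels)

ks = np.arange(4, levels + 1)                           # coarse-grain to delta = 2**-k
logM, neglogd = [], []
for k in ks:
    cells = mu.reshape(2 ** k, -1).sum(axis=1)          # aggregate fine cells into 2**k boxes
    logM.append(np.log(np.sum(cells ** h)))
    neglogd.append(k * np.log(2.0))

slope, intercept, r, p, se = stats.linregress(neglogd, logM)
print(f"estimated tau({h}) = {slope:.4f}")
print(f"exact value        = {np.log2(0.3**h + 0.7**h):.4f}")
```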
Medium Access Control for Opportunistic Concurrent Transmissions under Shadowing Channels
Son, In Keun; Mao, Shiwen; Hur, Seung Min
2009-01-01
We study the problem of how to alleviate the exposed terminal effect in multi-hop wireless networks in the presence of log-normal shadowing channels. Assuming node location information, we propose an extension of the IEEE 802.11 MAC protocol that schedules concurrent transmissions in the presence of log-normal shadowing, thus mitigating the exposed terminal problem and improving network throughput and delay performance. We observe considerable improvements in throughput and delay achieved over the IEEE 802.11 MAC under various network topologies and channel conditions in ns-2 simulations, which justify the importance of considering channel randomness in MAC protocol design for multi-hop wireless networks. PMID:22408556
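The channel assumption underlying the protocol can be sketched as a log-distance path-loss model plus a log-normal shadowing term, used here to estimate how often a link at a given distance clears a receive threshold; all parameter values (path-loss exponent, shadowing standard deviation, threshold) are illustrative assumptions.

```python
# Log-distance path loss with log-normal shadowing and a link-success estimate.
import numpy as np

rng = np.random.default_rng(10)

def rx_power_dbm(tx_dbm, d_m, exponent=3.0, sigma_db=6.0, pl0_db=40.0, d0=1.0, n=1):
    """Received power (dBm) with log-normal shadowing of sigma_db decibels."""
    path_loss = pl0_db + 10.0 * exponent * np.log10(d_m / d0)
    shadowing = rng.normal(0.0, sigma_db, size=n)
    return tx_dbm - path_loss + shadowing

threshold_dbm = -85.0
samples = rx_power_dbm(tx_dbm=15.0, d_m=120.0, n=100_000)
print("link success probability:", np.mean(samples > threshold_dbm).round(3))
```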
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
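A sketch of the two model-based estimators on simulated data is shown below, using statsmodels: a Poisson GLM with a sandwich ("HC0") covariance and a log-binomial GLM. The link-class spelling (families.links.Log) follows current statsmodels releases and may be lowercase log in older versions.

```python
# Robust Poisson vs log-binomial estimation of a risk ratio on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 2000
x = rng.binomial(1, 0.5, size=n)                      # binary exposure
p = 0.10 * np.exp(np.log(1.8) * x)                    # true risk ratio = 1.8
y = rng.binomial(1, p)
X = sm.add_constant(x)

robust_poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
log_binomial = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log())).fit()

print("robust Poisson RR:", np.exp(robust_poisson.params[1]).round(3))
print("log-binomial RR:  ", np.exp(log_binomial.params[1]).round(3))
```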
Methane Leaks from Natural Gas Systems Follow Extreme Distributions.
Brandt, Adam R; Heath, Garvin A; Cooley, Daniel
2016-11-15
Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ∼15 000 measurements from 18 prior studies, we show that all available natural gas leakage data sets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used log-normal model distributions, we show that log-normal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of data sets to increase sample size is not recommended due to apparent deviation between sampled populations. Understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
Growth models and the expected distribution of fluctuating asymmetry
Graham, John H.; Shimizu, Kunio; Emlen, John M.; Freeman, D. Carl; Merkel, John
2003-01-01
Multiplicative error accounts for much of the size-scaling and leptokurtosis in fluctuating asymmetry. It arises when growth involves the addition of tissue to that which is already present. Such errors are lognormally distributed. The distribution of the difference between two lognormal variates is leptokurtic. If those two variates are correlated, then the asymmetry variance will scale with size. Inert tissues typically exhibit additive error and have a gamma distribution. Although their asymmetry variance does not exhibit size-scaling, the distribution of the difference between two gamma variates is nevertheless leptokurtic. Measurement error is also additive, but has a normal distribution. Thus, the measurement of fluctuating asymmetry may involve the mixing of additive and multiplicative error. When errors are multiplicative, we recommend computing log E(l) − log E(r), the difference between the logarithms of the expected values of left and right sides, even when size-scaling is not obvious. If l and r are lognormally distributed, and measurement error is nil, the resulting distribution will be normal, and multiplicative error will not confound size-related changes in asymmetry. When errors are additive, such a transformation to remove size-scaling is unnecessary. Nevertheless, the distribution of l − r may still be leptokurtic.
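A quick numerical illustration of this point, with simulated correlated lognormal sides: the difference l − r is leptokurtic while the difference of the logarithms is not. The trait mean, error magnitude and correlation are arbitrary choices.

```python
# Kurtosis of the raw difference vs the log difference for correlated lognormal sides.
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
n = 100_000
rho, sigma = 0.9, 0.3
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
logs = rng.multivariate_normal([np.log(10.0), np.log(10.0)], cov, size=n)
l, r = np.exp(logs[:, 0]), np.exp(logs[:, 1])

print("excess kurtosis of l - r:        ", stats.kurtosis(l - r).round(3))
print("excess kurtosis of log l - log r:", stats.kurtosis(np.log(l) - np.log(r)).round(3))
```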
NASA Astrophysics Data System (ADS)
Doury, Maxime; Dizeux, Alexandre; de Cesare, Alain; Lucidarme, Olivier; Pellot-Barakat, Claire; Bridal, S. Lori; Frouin, Frédérique
2017-02-01
Dynamic contrast-enhanced ultrasound has been proposed for monitoring tumor therapy, as a complement to volume measurements. To assess the variability of perfusion parameters under ideal conditions, four consecutive test-retest studies were acquired in a mouse tumor model, using controlled injections. The impact of mathematical modeling on parameter variability was then investigated. Coefficients of variation (CV) of parameters based on tissue blood volume (BV) and tissue blood flow (BF) were estimated inside 32 sub-regions of the tumors, comparing the log-normal (LN) model with a one-compartment model fed by an arterial input function (AIF) and improved by the introduction of a time delay parameter. Relative perfusion parameters were also estimated by normalization of the LN parameters and normalization of the one-compartment parameters estimated with the AIF, using a reference tissue (RT) region. A direct estimation (rRTd) of relative parameters, based on the one-compartment model without using the AIF, was also obtained by using the kinetics inside the RT region. Results of the test-retest studies show that absolute regional parameters have high CV, whatever the approach, with median values of about 30% for BV and 40% for BF. The positive impact of normalization was established, showing a coherent estimation of relative parameters with reduced CV (about 20% for BV and 30% for BF using the rRTd approach). These values were significantly lower (p < 0.05) than the CV of the absolute parameters. The rRTd approach provided the smallest CV and should be preferred for estimating relative perfusion parameters.
Wavefront-Guided Scleral Lens Correction in Keratoconus
Marsack, Jason D.; Ravikumar, Ayeswarya; Nguyen, Chi; Ticak, Anita; Koenig, Darren E.; Elswick, James D.; Applegate, Raymond A.
2014-01-01
Purpose To examine the performance of state-of-the-art wavefront-guided scleral contact lenses (wfgSCLs) on a sample of keratoconic eyes, with emphasis on performance quantified with visual quality metrics; and to provide a detailed discussion of the process used to design, manufacture and evaluate wfgSCLs. Methods Fourteen eyes of 7 subjects with keratoconus were enrolled and a wfgSCL was designed for each eye. High-contrast visual acuity and visual quality metrics were used to assess the on-eye performance of the lenses. Results The wfgSCL provided statistically lower levels of both lower-order RMS (p < 0.001) and higher-order RMS (p < 0.02) than an intermediate spherical equivalent scleral contact lens. The wfgSCL provided lower levels of lower-order RMS than a normal group of well-corrected observers (p << 0.001). However, the wfgSCL does not provide less higher-order RMS than the normal group (p = 0.41). Of the 14 eyes studied, 10 successfully reached the exit criteria, achieving residual higher-order root mean square wavefront error (HORMS) less than or within 1 SD of the levels experienced by normal, age-matched subjects. In addition, measures of visual image quality (logVSX, logNS and logLIB) for the 10 eyes were well distributed within the range of values seen in normal eyes. However, visual performance as measured by high contrast acuity did not reach normal, age-matched levels, which is in agreement with prior results associated with the acute application of wavefront correction to KC eyes. Conclusions Wavefront-guided scleral contact lenses are capable of optically compensating for the deleterious effects of higher-order aberration concomitant with the disease, and can provide visual image quality equivalent to that seen in normal eyes. Longer duration studies are needed to assess whether the visual system of the highly aberrated eye wearing a wfgSCL is capable of producing visual performance levels typical of the normal population. PMID:24830371
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Zerara, Mohamed; Brickmann, Jürgen; Kretschmer, Robert; Exner, Thomas E
2009-02-01
Quantitative information on solvation and transfer free energies is often needed for the understanding of many physicochemical processes, e.g. molecular recognition phenomena, transport and diffusion processes through biological membranes, and the tertiary structure of proteins. Recently, a concept for the localization and quantification of hydrophobicity has been introduced (Jäger et al. J Chem Inf Comput Sci 43:237-247, 2003). This model is based on the assumption that the overall hydrophobicity can be obtained as a superposition of fragment contributions. To date, all predictive models for logP have been parameterized for the n-octanol/water system (logP(oct)), while very few models, with poor predictive ability, are available for other solvents. In this work, we propose a parameterization of an empirical model for the n-octanol/water, alkane/water (logP(alk)) and cyclohexane/water (logP(cyc)) systems. Comparison of both logP(alk) and logP(cyc) with the logarithms of brain/blood ratios (logBB) for a set of structurally diverse compounds revealed a high correlation, showing their superiority over the logP(oct) measure in this context.
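The fragment-superposition assumption amounts to a plain sum of group contributions. The sketch below is only schematic, with made-up fragment values and counts rather than the parameterization reported in the paper.

    # Hypothetical fragment contributions to logP for a given solvent system (illustrative values only).
    fragment_logp = {"CH3": 0.55, "CH2": 0.49, "OH": -1.12, "C6H5": 1.90}

    # Hypothetical fragment counts for a small molecule.
    molecule = {"CH3": 1, "CH2": 3, "OH": 1, "C6H5": 1}

    log_p = sum(count * fragment_logp[frag] for frag, count in molecule.items())
    print("estimated logP = %.2f" % log_p)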
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining consistent maximum likelihood estimates of normal distributions, corresponding to a local maximum of the log-likelihood function. In addition, Newton's method, a method of scoring, and modifications of these procedures are discussed.
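A minimal sketch of the kind of iteration discussed, assuming synthetic data and a single normal distribution rather than the 1975 procedure itself: Newton's method applied to the log-likelihood, with the mean and variance updated from the analytic gradient and Hessian.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(loc=3.0, scale=2.0, size=500)   # synthetic data, an assumption for illustration
    n = x.size

    # Start from rough values below the likelihood maximum (Newton's method needs a sensible start).
    mu, v = np.median(x), 1.0
    for _ in range(100):
        r = x - mu
        S = np.sum(r ** 2)
        grad = np.array([r.sum() / v,
                         -n / (2 * v) + S / (2 * v ** 2)])
        hess = np.array([[-n / v,            -r.sum() / v ** 2],
                         [-r.sum() / v ** 2,  n / (2 * v ** 2) - S / v ** 3]])
        step = np.linalg.solve(hess, grad)
        mu -= step[0]
        v = max(v - step[1], 1e-8)   # guard against stepping to a non-positive variance
        if np.max(np.abs(step)) < 1e-10:
            break

    print("Newton MLE : mu = %.3f, sigma = %.3f" % (mu, np.sqrt(v)))
    print("closed form: mu = %.3f, sigma = %.3f" % (x.mean(), x.std()))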
Binary data corruption due to a Brownian agent
NASA Astrophysics Data System (ADS)
Newman, T. J.; Triampo, Wannapong
1999-05-01
We introduce a model of binary data corruption induced by a Brownian agent (active random walker) on a d-dimensional lattice. A continuum formulation allows the exact calculation of several quantities related to the density of corrupted bits ρ, for example, the mean of ρ and the density-density correlation function. Excellent agreement is found with the results from numerical simulations. We also calculate the probability distribution of ρ in d=1, which is found to be log normal, indicating that the system is governed by extreme fluctuations.
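A one-dimensional toy version of such a model is easy to simulate; the lattice size, step count and corruption rule below are assumptions chosen for illustration, not the parameters of the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    L, steps = 1000, 50_000            # lattice size and number of walker steps (assumed values)
    bits = np.zeros(L, dtype=np.int8)  # 0 = intact, 1 = corrupted
    pos = L // 2                       # Brownian agent starts in the middle

    for _ in range(steps):
        pos = (pos + rng.choice((-1, 1))) % L   # unbiased random walk with periodic boundaries
        bits[pos] = 1                           # the agent corrupts every bit it visits

    rho = bits.mean()
    print("density of corrupted bits rho = %.3f" % rho)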
NASA Astrophysics Data System (ADS)
Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.
2015-12-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
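The maximum-likelihood part of such an analysis can be sketched as follows, using synthetic stand-in data rather than the 1957-2012 -Dst record; the occurrence-rate calculation and the 850 nT threshold follow the description above, but the printed numbers are illustrative only.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Synthetic stand-in for yearly -Dst storm-time maxima (nT); the real study uses 1957-2012 data.
    dst_max = rng.lognormal(mean=np.log(120.0), sigma=0.55, size=56)

    # Maximum-likelihood log-normal fit (location fixed at zero).
    shape, loc, scale = stats.lognorm.fit(dst_max, floc=0)

    # Probability that a given year's maximum exceeds a Carrington-class -Dst of 850 nT,
    # expressed as an expected number of occurrences per century.
    p_exceed = stats.lognorm.sf(850.0, shape, loc=loc, scale=scale)
    print("expected Carrington-class storms per century: %.2f" % (100 * p_exceed))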
Effects of reduced-impact logging on fish assemblages in central Amazonia.
Dias, Murilo S; Magnusson, William E; Zuanon, Jansen
2010-02-01
In Amazonia reduced-impact logging, which is meant to reduce environmental disturbance by controlling stem-fall directions and minimizing construction of access roads, has been applied to large areas containing thousands of streams. We investigated the effects of reduced-impact logging on environmental variables and the composition of fish in forest streams in a commercial logging concession in central Amazonia, Amazonas State, Brazil. To evaluate short-term effects, we sampled 11 streams before and after logging in one harvest area. We evaluated medium-term effects by comparing streams in 11 harvest areas logged 1-8 years before the study with control streams in adjacent areas. Each sampling unit was a 50-m stream section. The tetras Pyrrhulina brevis and Hemigrammus cf. pretoensis had higher abundances in plots logged ≥3 years before compared with plots logged <3 years before. The South American darter (Microcharacidium eleotrioides) was less abundant in logged plots than in control plots. In the short term, the overall fish composition did not differ two months before and immediately after reduced-impact logging. Temperature and pH varied before and after logging, but those differences were compatible with normal seasonal variation. In the medium term, temperature and cover of logs were lower in logged plots. Differences in ordination scores on the basis of relative fish abundance between streams in control and logged areas changed with time since logging, mainly because some common species increased in abundance after logging. There was no evidence of species loss from the logging concession, but differences in log cover and ordination scores derived from relative abundance of fish species persisted even after 8 years. For Amazonian streams, reduced-impact logging appears to be a viable alternative to clear-cut practices, which severely affect aquatic communities. Nevertheless, detailed studies are necessary to evaluate subtle long-term effects.
Fermi/GBM Observations of SGRJ0501 + 4516 Bursts
NASA Technical Reports Server (NTRS)
Lin, Lin; Kouveliotou, Chryssa; Baring, Matthew G.; van der Horst, Alexander J.; Guiriec, Sylvain; Woods, Peter M.; Goegues, Ersin; Kaneko, Yuki; Scargle, Jeffrey; Granot, Jonathan;
2011-01-01
We present our temporal and spectral analyses of 29 bursts from SGRJ0501+4516, detected with the Gamma-ray Burst Monitor onboard the Fermi Gamma-ray Space Telescope during the 13 days of the source activation in 2008 (August 22 to September 3). We find that the T(sub 90) durations of the bursts can be fit with a log-normal distribution with a mean value of approx. 123 ms. We also estimate for the first time event durations of Soft Gamma Repeater (SGR) bursts in photon space (i.e., using their deconvolved spectra) and find that these are very similar to the T(sub 90)s estimated in count space (following a log-normal distribution with a mean value of approx. 124 ms). We fit the time-integrated spectra for each burst and the time-resolved spectra of the five brightest bursts with several models. We find that a single power law with an exponential cutoff model fits all 29 bursts well, while 18 of the events can also be fit with two black body functions. We expand on the physical interpretation of these two models and we compare their parameters and discuss their evolution. We show that the time-integrated and time-resolved spectra reveal that E(sub peak) decreases with energy flux (and fluence) to a minimum of approx. 30 keV at F = 8.7 x 10(exp -6) erg/sq cm/s, increasing steadily afterwards. Two more sources exhibit a similar trend: SGRs J1550 - 5418 and 1806 - 20. The isotropic luminosity, L(sub iso), corresponding to these flux values is roughly similar for all sources (0.4 - 1.5 x 10(exp 40) erg/s).
Shin, Jung-Hyun; Eom, Tae-Hoon; Kim, Young-Hoon; Chung, Seung-Yun; Lee, In-Goo; Kim, Jung-Min
2017-07-01
Valproate (VPA) is an antiepileptic drug (AED) used for initial monotherapy in treating childhood absence epilepsy (CAE). EEG might be an alternative approach to explore the effects of AEDs on the central nervous system. We performed a comparative analysis of background EEG activity during VPA treatment by using standardized, low-resolution, brain electromagnetic tomography (sLORETA) to explore the effect of VPA in patients with CAE. In 17 children with CAE, non-parametric statistical analyses using sLORETA were performed to compare the current density distribution of four frequency bands (delta, theta, alpha, and beta) between the untreated and treated condition. Maximum differences in current density were found in the left inferior frontal gyrus for the delta frequency band (log-F-ratio = -1.390, P > 0.05), the left medial frontal gyrus for the theta frequency band (log-F-ratio = -0.940, P > 0.05), the left inferior frontal gyrus for the alpha frequency band (log-F-ratio = -0.590, P > 0.05), and the left anterior cingulate for the beta frequency band (log-F-ratio = -1.318, P > 0.05). However, none of these differences were significant (threshold log-F-ratio = ±1.888, P < 0.01; threshold log-F-ratio = ±1.722, P < 0.05). Because EEG background is accepted as normal in CAE, VPA would not be expected to significantly change abnormal thalamocortical oscillations on a normal EEG background. Therefore, our results agree with currently accepted concepts but are not consistent with findings in some previous studies.
Bowker, Matthew A.; Maestre, Fernando T.
2012-01-01
Dryland vegetation is inherently patchy. This patchiness goes on to impact ecology, hydrology, and biogeochemistry. Recently, researchers have proposed that dryland vegetation patch sizes follow a power law which is due to local plant facilitation. It is unknown what patch size distribution prevails when competition predominates over facilitation, or if such a pattern could be used to detect competition. We investigated this question in an alternative vegetation type, mosses and lichens of biological soil crusts, which exhibit a smaller scale patch-interpatch configuration. This micro-vegetation is characterized by competition for space. We proposed that multiplicative effects of genetics, environment and competition should result in a log-normal patch size distribution. When testing the prevalence of log-normal versus power law patch size distributions, we found that the log-normal was the better distribution in 53% of cases and a reasonable fit in 83%. In contrast, the power law was better in 39% of cases, and in 8% of instances both distributions fit equally well. We further hypothesized that the log-normal distribution parameters would be predictably influenced by competition strength. There was qualitative agreement between one of the distribution's parameters (μ) and a novel intransitive (lacking a 'best' competitor) competition index, suggesting that as intransitivity increases, patch sizes decrease. The correlation of μ with other competition indicators based on spatial segregation of species (the C-score) depended on aridity. In less arid sites, μ was negatively correlated with the C-score (suggesting smaller patches under stronger competition), while positive correlations (suggesting larger patches under stronger competition) were observed at more arid sites. We propose that this is due to an increasing prevalence of competition transitivity as aridity increases. These findings broaden the emerging theory surrounding dryland patch size distributions and, with refinement, may help us infer cryptic ecological processes from easily observed spatial patterns in the field.
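A minimal version of the log-normal versus power-law comparison might look like the sketch below, which fits both distributions by maximum likelihood to synthetic patch sizes and compares them by AIC; it is a stand-in, not the authors' fitting procedure.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    # Synthetic patch areas (cm^2); a stand-in for digitized moss/lichen patch sizes.
    patches = rng.lognormal(mean=1.0, sigma=0.8, size=300)

    def aic(logpdf_sum, k):
        return 2 * k - 2 * logpdf_sum

    # Log-normal fit (location fixed at zero).
    s, loc, scale = stats.lognorm.fit(patches, floc=0)
    aic_ln = aic(np.sum(stats.lognorm.logpdf(patches, s, loc, scale)), k=2)

    # Power-law (Pareto) fit above the smallest observed patch.
    b, ploc, pscale = stats.pareto.fit(patches, floc=0, fscale=patches.min())
    aic_pl = aic(np.sum(stats.pareto.logpdf(patches, b, ploc, pscale)), k=1)

    print("AIC log-normal: %.1f   AIC power law: %.1f" % (aic_ln, aic_pl))
    print("better fit:", "log-normal" if aic_ln < aic_pl else "power law")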
Possible Statistics of Two Coupled Random Fields: Application to Passive Scalar
NASA Technical Reports Server (NTRS)
Dubrulle, B.; He, Guo-Wei; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
We use the relativity postulate of scale invariance to derive the similarity transformations between two coupled scale-invariant random fields at different scales. We find the equations leading to the scaling exponents. This formulation is applied to the case of passive scalars advected i) by a random Gaussian velocity field; and ii) by a turbulent velocity field. In the Gaussian case, we show that the passive scalar increments follow a log-Lévy distribution generalizing Kraichnan's solution and, in an appropriate limit, a log-normal distribution. In the turbulent case, we show that when the velocity increments follow a log-Poisson statistics, the passive scalar increments follow a statistics close to log-Poisson. This result explains the experimental observations of Ruiz et al. about the temperature increments.
Variability of the Degassing Flux of 4He and its Impact on 4He-Dating of Groundwaters
NASA Astrophysics Data System (ADS)
Torgersen, T.
2009-12-01
4He dating of groundwater is often confounded by an external flux of 4He as the result of crustal degassing. Estimates of this external flux have been made, but what is the impact on estimates of the 4He groundwater age? The existing measures of the 4He flux across the Earth’s solid surface have been evaluated collectively. The time-and-area weighted arithmetic mean (standard deviation) of n=33 4He degassing fluxes is 3.32(±0.45) x 10^10 4He atoms m^-2 s^-1. The log normal mean of 271 measures of the flux into Precambrian shield lakes of Canada is 4.57 x 10^10 atoms 4He m^-2 s^-1 with a variance of */3.9x. The log normal mean of measurements (n=33) of the crustal flux is 3.63 x 10^10 4He m^-2 s^-1 with a best estimate one sigma log normal error of */36x based on an assumption of symmetric error bars. (For comparison, the log normal mean heat flow is 62.2 mW m^-2 with a log normal variance of */1.8x; the best estimate mean is 65±1.6 mW m^-2, Polach et al., 1993). The variance of the continental flux is shown to increase with decreasing time scales (*/ ~10^6x at 0.5 yr) and decreasing space scales (*/ ~10^6x at 1 km), suggesting that the mechanisms of crustal helium transport and degassing contain a high degree of spatial and temporal variability. This best estimate of the mean and variance in the flux of 4He from continents remains approximately equivalent to the radiogenic production rate of 4He in the whole crust. The small degree of variance in the Canadian lake data (n=271), Precambrian terrain, suggests that it may represent a best approximation of “steady state” crustal degassing. Large-scale vertical mass transport in continental crust is estimated as scaled values to be of the order 10^-5 cm^2 s^-1 for helium (over 2 Byr and 40 km vertically) vs. 10^-2 cm^2 s^-1 for heat. The mass transport rate requires not only release of 4He from the solid phase via fracturing or comminution but also an enhanced rate of mass transport facilitated by some degree of fluid advection (as has been suggested by metamorphic geology), and further implies a separation of heat and mass during transport.
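The distinction between the arithmetic mean and the "log normal mean" quoted above (a geometric mean with a multiplicative one-sigma factor) can be illustrated with a short sketch on synthetic fluxes; all numbers below are assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    # Synthetic 4He degassing fluxes (atoms m^-2 s^-1); a stand-in for the compiled measurements.
    flux = rng.lognormal(mean=np.log(3.6e10), sigma=np.log(3.0), size=33)

    arith_mean = flux.mean()
    geo_mean = np.exp(np.mean(np.log(flux)))          # "log normal mean"
    mult_sd = np.exp(np.std(np.log(flux), ddof=1))    # multiplicative (x/÷) one-sigma factor

    print("arithmetic mean: %.2e" % arith_mean)
    print("log normal mean: %.2e  (x/÷ %.1f)" % (geo_mean, mult_sd))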
Schlottmann, Jamie L.; Funkhouser, Ron A.
1991-01-01
Chemical analyses of water from eight test holes and geophysical logs for nine test holes drilled in the Central Oklahoma aquifer are presented. The test holes were drilled to investigate local occurrences of potentially toxic, naturally occurring trace substances in ground water. These trace substances include arsenic, chromium, selenium, residual alpha-particle activities, and uranium. Eight of the nine test holes were drilled near wells known to contain large concentrations of one or more of the naturally occurring trace substances. One test hole was drilled in an area known to have only small concentrations of any of the naturally occurring trace substances. Water samples were collected from one to eight individual sandstone layers within each test hole. A total of 28 water samples, including four duplicate samples, were collected. The temperature, pH, specific conductance, alkalinity, and dissolved-oxygen concentrations were measured at the sample site. Laboratory determinations included major ions, nutrients, dissolved organic carbon, and trace elements (aluminum, arsenic, barium, beryllium, boron, cadmium, chromium, hexavalent chromium, cobalt, copper, iron, lead, lithium, manganese, mercury, molybdenum, nickel, selenium, silver, strontium, vanadium and zinc). Radionuclide activities and stable isotope δ values also were determined, including: gross-alpha-particle activity, gross-beta-particle activity, radium-226, radium-228, radon-222, uranium-234, uranium-235, uranium-238, total uranium, carbon-13/carbon-12, deuterium/hydrogen-1, oxygen-18/oxygen-16, and sulfur-34/sulfur-32. Additional analyses of arsenic and selenium species are presented for selected samples as well as analyses of density and iodine for two samples, tritium for three samples, and carbon-14 for one sample. Geophysical logs for most test holes include caliper, neutron, gamma-gamma, natural-gamma logs, spontaneous potential, long- and short-normal resistivity, and single-point resistance. Logs for test-hole NOTS 7 do not include long- and short-normal resistivity, spontaneous-potential, or single-point resistivity. Logs for test-hole NOTS 7A include only caliper and natural-gamma logs.
A plastic flow model for the Acquara - Vadoncello landslide in Senerchia, Southern Italy
Savage, W.; Wasowski, J.
2006-01-01
A previously developed model for stress and velocity fields in two-dimensional Coulomb plastic materials under self-weight and pore pressure predicts that long, shallow landslides develop slip surfaces that manifest themselves as normal faults and normal fault scarps at the surface in areas of extending flow and as thrust faults and thrust fault scarps at the surface in areas of compressive flow. We have applied this model to describe the geometry of slip surfaces and ground stresses developed during the 1995 reactivation of the Acquara - Vadoncello landslide in Senerchia, southern Italy. This landslide is a long and shallow slide in which regions of compressive and extending flow are clearly identified. Slip surfaces in the main scarp region of the landslide have been reconstructed using surface surveys and subsurface borehole logging and inclinometer observations made during retrogression of the main scarp. Two of the four inferred main scarp slip surfaces are best constrained by field data. Slip surfaces in the toe region are reconstructed in the same way and three of the five inferred slip surfaces are similarly constrained. The location of the basal shear surface of the landslide is inferred from borehole logging and borehole inclinometry. Extensive data on material properties, landslide geometries, and pore pressures collected for the Acquara - Vadoncello landslide give values for cohesion, friction angle, and unit weight, plus average basal shear-surface slopes, and pore-pressures required for modelling slip surfaces and stress fields. Results obtained from the landslide-flow model and the field data show that predicted slip surface shapes are consistent with inferred slip surface shapes in both the extending flow main scarp region and in the compressive flow toe region of the Acquara - Vadoncello landslide. Also, predicted stress distributions are found to explain deformation features seen in the toe and main scarp regions of the landslide. © 2005 Elsevier B.V. All rights reserved.
Kennedy, Paula L; Woodbury, Allan D
2002-01-01
In ground water flow and transport modeling, the heterogeneous nature of porous media has a considerable effect on the resulting flow and solute transport. Some method of generating the heterogeneous field from a limited dataset of uncertain measurements is required. Bayesian updating is one method that interpolates from an uncertain dataset using the statistics of the underlying probability distribution function. In this paper, Bayesian updating was used to determine the heterogeneous natural log transmissivity field for a carbonate and a sandstone aquifer in southern Manitoba. It was determined that the natural log of transmissivity (in m^2/s) followed a normal distribution for both aquifers, with means of -7.2 and -8.0 for the carbonate and sandstone aquifers, respectively. The variograms were calculated using an estimator developed by Li and Lake (1994). Fractal nature was not evident in the variogram from either aquifer. The Bayesian updating heterogeneous field provided good results even in cases where little data were available. A large transmissivity zone in the sandstone aquifer was created by the Bayesian procedure, which is not a reflection of any deterministic consideration, but is a natural outcome of updating a prior probability distribution function with observations. The statistical model returns a result that is very reasonable; that is, homogeneous in regions where little or no information is available to alter an initial state. No long-range correlation trends or fractal behavior of the log-transmissivity field was observed in either aquifer over a distance of about 300 km.
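A minimal conjugate-normal sketch of Bayesian updating of ln T at a single location is shown below, combining a prior with one uncertain observation; the numbers are assumptions and this is far simpler than the field-wide updating used in the paper.

    import numpy as np

    # Prior for ln T (T in m^2/s) at an unsampled location, e.g. from aquifer-wide statistics.
    prior_mean, prior_var = -7.2, 1.5 ** 2

    # An uncertain nearby observation of ln T and its measurement variance (assumed values).
    obs, obs_var = -6.4, 0.8 ** 2

    # Conjugate normal update: precision-weighted combination of prior and observation.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)

    print("posterior ln T: mean = %.2f, sd = %.2f" % (post_mean, np.sqrt(post_var)))
    print("posterior median T = %.2e m^2/s" % np.exp(post_mean))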
Provinciali, Mauro; Cirioni, Oscar; Orlando, Fiorenza; Pierpaoli, Elisa; Barucca, Alessandra; Silvestri, Carmela; Ghiselli, Roberto; Scalise, Alessandro; Brescini, Lucia; Guerrieri, Mario; Giacometti, Andrea
2011-12-01
A relevant bacterial load in cutaneous wounds significantly interferes with the normal process of healing. Vitamin E (VE) is a known immunomodulator and immune enhancer. Here, it was shown that administration of VE before infection was effective at increasing the antimicrobial activity of daptomycin (DAP) or tigecycline (TIG) in a mouse model of wound infection caused by meticillin-resistant Staphylococcus aureus (MRSA). A wound was established through the panniculus carnosus of mice and inoculated with MRSA. Mice were assigned to six groups: a VE pre-treated group with no antibiotics given after MRSA challenge; two VE pre-treated groups with DAP or TIG given after MRSA challenge; two groups treated with DAP or TIG only after MRSA challenge; and a control group that did not receive any treatment. Mice receiving each antibiotic alone showed a 3 log decrease in the number of c.f.u. recovered compared with the control group, mice treated with VE plus TIG had a 4 log decrease, whilst mice treated with VE plus DAP had the largest decrease in c.f.u. recovered (5 logs). The increased antimicrobial effect seen from treatment with VE plus antibiotics was associated with increased levels of natural killer cell cytotoxicity, with a more pronounced increase in leukocyte populations in mice treated with VE plus DAP. These data suggest that treatment with VE prior to infection and subsequent antibiotic treatment act in synergy. © 2011 SGM
Incidence of Russian log export tax: A vertical log-lumber model
Ying Lin; Daowei Zhang
2017-01-01
In 2007, Russia imposed an ad valorem tax on its log exports that lasted until 2012. In this paper, we use a Muth-type equilibrium displacement model to investigate the market and welfare impacts of this tax, utilizing a vertical linkage between log and lumber markets and considering factor substitution. Our theoretical analysis indicates...
Predicting durations of online collective actions based on Peaks' heights
NASA Astrophysics Data System (ADS)
Lu, Peng; Nie, Shizhao; Wang, Zheng; Jing, Ziwei; Yang, Jianwu; Qi, Zhongxiang; Pujia, Wangmo
2018-02-01
Capturing the whole process of collective actions, the peak model contains four stages: Prepare, Outbreak, Peak, and Vanish. Based on the peak model, one key quantity is investigated further in this paper: the ratio between peak heights and spans (durations). Although spans and peak heights are highly diversified, the ratio between them appears to be quite stable. If the regularity of this ratio is known, the duration of a collective action and when it ends can be predicted from the peak's height. In this work, we combined mathematical simulations and empirical big data of 148 cases to explore the regularity of the ratio's distribution. The simulation results indicate that the ratio has a regular distribution, which is not the normal distribution. The big data were collected from 148 online collective actions, with the whole process of participation recorded. The outcomes of the empirical big data indicate that the ratio is closer to being log-normally distributed. This rule holds true for both the total set of cases and subgroups of the 148 online collective actions. The Q-Q plot is applied to check the normality of the ratio's logarithm, and the ratio's logarithm does follow the normal distribution.
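The Q-Q check of log-normality described above can be sketched as follows on synthetic peak heights and spans; the distributions and the sample size of 148 are assumptions standing in for the real cases.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    # Synthetic peak heights and spans (durations) for 148 hypothetical online collective actions.
    peaks = rng.lognormal(mean=8.0, sigma=0.9, size=148)
    spans = peaks / rng.lognormal(mean=2.0, sigma=0.4, size=148)

    ratio = peaks / spans
    log_ratio = np.log(ratio)

    # Normality of log(ratio) implies log-normality of the ratio itself.
    (osm, osr), (slope, intercept, r) = stats.probplot(log_ratio, dist="norm")
    stat, p = stats.shapiro(log_ratio)
    print("Q-Q correlation r = %.3f, Shapiro-Wilk p = %.3f" % (r, p))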
Agnostic stacking of intergalactic doublet absorption: measuring the Ne VIII population
NASA Astrophysics Data System (ADS)
Frank, Stephan; Pieri, Matthew M.; Mathur, Smita; Danforth, Charles W.; Shull, J. Michael
2018-05-01
We present a blind search for doublet intergalactic metal absorption with a method dubbed `agnostic stacking'. Using a forward-modelling framework, we combine this with direct detections in the literature to measure the overall metal population. We apply this novel approach to the search for Ne VIII absorption in a set of 26 high-quality COS spectra. We probe to an unprecedented low limit of log N > 12.3 at 0.47 ≤ z ≤ 1.34 over a path-length Δz = 7.36. This method selects apparent absorption without requiring knowledge of its source. Stacking this mixed population dilutes doublet features in composite spectra in a deterministic manner, allowing us to measure the proportion corresponding to Ne VIII absorption. We stack potential Ne VIII absorption in two regimes: absorption too weak to be significant in direct line studies (12.3 < log N < 13.7), and strong absorbers (log N > 13.7). We do not detect Ne VIII absorption in either regime. Combining our measurements with direct detections, we find that the Ne VIII population is reproduced with a power-law column density distribution function with slope β = -1.86^{+0.18}_{-0.26} and normalization log f_{13.7} = -13.99^{+0.20}_{-0.23}, leading to an incidence rate of strong Ne VIII absorbers dn/dz = 1.38^{+0.97}_{-0.82}. We infer a cosmic mass density for Ne VIII gas with 12.3 < log N < 15.0 of Ω_{Ne VIII} = 2.2^{+1.6}_{-1.2} × 10^{-8}, a value significantly lower than that predicted by recent simulations. We translate this density into an estimate of the baryon density Ω_b ≈ 1.8 × 10^{-3}, constituting 4 per cent of the total baryonic mass.
Nasal bone length in human fetuses by X-ray.
Moura, Felipe Nobre; Fernandes, Pablo Lourenco; de Oliveira Silva-Junior, Geraldo; Gomes de Souza, Margareth Maria; Mandarim-de-Lacerda, Carlos Alberto
2008-07-01
To construct a normal range for the prenatal nasal bone length (NBL) in Brazilians irrespective of knowledge of the ethnic genetic background. We studied 35 human fetuses (20 males, 15 females) ranging from 14 to 22 weeks of gestation. Gestational age (GA), crown-rump length (CRL), foot length (FL) and body mass (BM) were measured. The X-ray of the head lateral view was made with the specimens placed directly on the film and the NBL was measured. The NBL was correlated with the GA, the CRL, the FL, and the BM using log-transformed data and the allometric model log y = log a + b log x. Correlations of the NBL growth with GA, CRL, FL, and BM were positive and significant (P<0.05), but NBL vs. BM showed the smallest R, indicating that this correlation is of little practical use. No sexual dimorphism in the NBL growth in the second trimester fetuses was observed. The NBL grew with positive allometry relative to GA, CRL and BM, but it was slightly negatively allometric relative to the FL in both genders. That the NBL was allometrically positive against GA, CRL and BM means that the bone grew with growth rates higher than those indices in the period analyzed, but not against FL. NBL could be considered an auxiliary measurement in the assessment of the 2nd trimester fetal development because of its strong correlation with GA, CRL and FL, even when nothing is known about the ethnicity of the population.
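Fitting the allometric model log y = log a + b log x reduces to a straight-line fit on log-transformed data, as in the sketch below; the synthetic gestational ages and nasal bone lengths are assumptions, not the study data.

    import numpy as np

    rng = np.random.default_rng(7)
    # Synthetic fetal data: gestational age (weeks) and nasal bone length (mm); illustrative only.
    ga = rng.uniform(14, 22, size=35)
    nbl = 0.15 * ga ** 1.3 * np.exp(rng.normal(0, 0.05, size=35))

    # Allometric model log y = log a + b log x, fitted by least squares on log-transformed data.
    b, log_a = np.polyfit(np.log10(ga), np.log10(nbl), deg=1)
    print("allometric exponent b = %.2f, a = %.3f" % (b, 10 ** log_a))
    print("positive allometry" if b > 1 else "negative allometry")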
Improving Models for Coseismic And Postseismic Deformation from the 2002 Denali, Alaska Earthquake
NASA Astrophysics Data System (ADS)
Harper, H.; Freymueller, J. T.
2016-12-01
Given the multi-decadal temporal scale of postseismic deformation, predictions of previous models for postseismic deformation resulting from the 2002 Denali Fault earthquake (M 7.9) do not agree with longer-term observations. In revising the past postseismic models with what is now over a decade of data, the first step is revisiting coseismic displacements and slip distribution of the earthquake. Advances in processing allow us to better constrain coseismic displacement estimates, which affect slip distribution predictions in modeling. Additionally, an updated slip model structure from a homogeneous model to a layered model rectifies previous inconsistencies between coseismic and postseismic models. Previous studies have shown that two primary processes contribute to postseismic deformation: afterslip, which decays with a short time constant; and viscoelastic relaxation, which decays with a longer time constant. We fit continuous postseismic GPS time series with three different relaxation models: 1) logarithmic decay + exponential decay, 2) log + exp + exp, and 3) log + log + exp. A grid search is used to minimize the total model WRSS (weighted residual sum of squares), and we find optimal relaxation times of: 1) 0.125 years (log) and 21.67 years (exp); 2) 0.14 years (log), 0.68 years (exp), and 28.33 years (exp); 3) 0.055 years (log), 14.44 years (log), and 22.22 years (exp). While there is not a one-to-one correspondence between a particular decay constant and a mechanism, the optimization of these constants allows us to model the future time series and constrain the contribution of different postseismic processes.
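A sketch of fitting relaxation model 1 (logarithmic plus exponential decay) to a postseismic time series is shown below, using scipy's curve_fit on synthetic displacements rather than the grid search and GPS data of the study; all values are assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    def postseismic(t, a, tau_log, b, tau_exp):
        """Model 1 of the abstract: logarithmic plus exponential relaxation (t in years)."""
        return a * np.log(1.0 + t / tau_log) + b * (1.0 - np.exp(-t / tau_exp))

    # Synthetic post-earthquake displacement time series (mm); values and noise are assumptions.
    t = np.linspace(0.01, 14.0, 500)
    rng = np.random.default_rng(8)
    obs = postseismic(t, 25.0, 0.125, 60.0, 21.7) + rng.normal(0, 1.0, t.size)

    p0 = (10.0, 0.1, 10.0, 10.0)   # rough initial guesses for the amplitudes and relaxation times
    popt, pcov = curve_fit(postseismic, t, obs, p0=p0, maxfev=10000)
    print("fitted relaxation times: %.3f yr (log), %.1f yr (exp)" % (popt[1], popt[3]))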
Daily magnesium intake and serum magnesium concentration among Japanese people.
Akizawa, Yoriko; Koizumi, Sadayuki; Itokawa, Yoshinori; Ojima, Toshiyuki; Nakamura, Yosikazu; Tamura, Tarou; Kusaka, Yukinori
2008-01-01
It remains unclear which vitamins and minerals are deficient in the daily diet of a normal adult. To address this question, we conducted a population survey focusing on the relationship between dietary magnesium intake and serum magnesium level. The subjects were 62 individuals from Fukui Prefecture who participated in the 1998 National Nutrition Survey. The survey investigated the physical status, nutritional status, and dietary data of the subjects. Holidays and special occasions were avoided, and a day when people are most likely to be on an ordinary diet was selected as the survey date. The mean (±standard deviation) daily magnesium intake was 322 (±132), 323 (±163), and 322 (±147) mg/day for men, women, and the entire group, respectively. The mean (±standard deviation) serum magnesium concentration was 20.69 (±2.83), 20.69 (±2.88), and 20.69 (±2.83) ppm for men, women, and the entire group, respectively. The distribution of serum magnesium concentration was normal. Dietary magnesium intake showed a log-normal distribution, which was then transformed by logarithmic conversion for examining the regression coefficients. The slope of the regression line between the serum magnesium concentration (Y ppm) and daily magnesium intake (X mg) was determined using the formula Y = 4.93 log10(X) + 8.49. The coefficient of correlation (r) was 0.29. A regression line (Y = 14.65X + 19.31) was observed between the daily intake of magnesium (Y mg) and serum magnesium concentration (X ppm). The coefficient of correlation was 0.28. The daily magnesium intake correlated with serum magnesium concentration, and a linear regression model between them was proposed.
Two-component mixture model: Application to palm oil and exchange rate
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-12-01
Palm oil is a seed crop that is widely used in food and non-food products such as cookies, vegetable oil, cosmetics, and household products. Palm oil is grown mainly in Malaysia and Indonesia. However, demand for palm oil has been growing rapidly over the years, outstripping supply. This phenomenon has encouraged illegal logging of trees and destruction of natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, this paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
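The abstract fits the mixture by maximum likelihood via Newton-Raphson; as a stand-in, the sketch below uses the simpler EM iteration for a two-component normal mixture on synthetic data, which converges to the same kind of estimates.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(9)
    # Synthetic stand-in for, e.g., log returns of the palm oil price (two regimes).
    x = np.concatenate([rng.normal(-0.01, 0.01, 300), rng.normal(0.02, 0.03, 200)])

    # Initial guesses for the weight, means and standard deviations of the two components.
    w, mu1, mu2, s1, s2 = 0.5, x.min() / 2, x.max() / 2, x.std(), x.std()

    for _ in range(200):                       # EM iterations
        # E-step: responsibility of component 1 for each observation.
        p1 = w * norm.pdf(x, mu1, s1)
        p2 = (1 - w) * norm.pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: update weight, means and standard deviations.
        w = r.mean()
        mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
        s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
        s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))

    print("component 1: weight %.2f, mean %+.3f, sd %.3f" % (w, mu1, s1))
    print("component 2: weight %.2f, mean %+.3f, sd %.3f" % (1 - w, mu2, s2))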
Bénet, Thomas; Voirin, Nicolas; Nicolle, Marie-Christine; Picot, Stephane; Michallet, Mauricette; Vanhems, Philippe
2013-02-01
The duration of the incubation of invasive aspergillosis (IA) remains unknown. The objective of this investigation was to estimate the time interval between aplasia onset and that of IA symptoms in acute myeloid leukemia (AML) patients. A single-centre prospective survey (2004-2009) included all patients with AML and probable/proven IA. Parametric survival models were fitted to the distribution of the time intervals between aplasia onset and IA. Overall, 53 patients had IA after aplasia, with the median observed time interval between the two being 15 days. Based on log-normal distribution, the median estimated IA incubation period was 14.6 days (95% CI; 12.8-16.5 days).
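A log-normal fit of this kind can be sketched with scipy on synthetic intervals (the real data are not reproduced here); the median of a log-normal equals its scale parameter exp(mu), and a percentile bootstrap gives a rough confidence interval.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    # Synthetic aplasia-to-IA intervals in days for 53 hypothetical patients (illustrative only).
    intervals = rng.lognormal(mean=np.log(14.6), sigma=0.5, size=53)

    shape, loc, scale = stats.lognorm.fit(intervals, floc=0)
    median = scale                      # for a log-normal with loc = 0, the median is exp(mu) = scale

    # Simple percentile bootstrap for a 95% CI on the median (exp of the mean log interval).
    boot = [np.exp(np.mean(np.log(rng.choice(intervals, intervals.size)))) for _ in range(2000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print("median incubation: %.1f days (95%% CI %.1f-%.1f)" % (median, lo, hi))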
Green lumber grade yields from factory grade logs of three oak species
Daniel A. Yaussy
1986-01-01
Multivariate regression models were developed to predict green board foot yields for the seven common factory lumber grades processed from white, black, and chestnut oak factory grade logs. These models use the standard log measurements of grade, scaling diameter, log length, and proportion of scaling defect. Any combination of lumber grades (such as 1 Common and...
Development of a 3D log sawing optimization system for small sawmills in central Appalachia, US
Wenshu Lin; Jingxin Wang; Edward Thomas
2011-01-01
A 3D log sawing optimization system was developed to perform log generation, opening face determination, sawing simulation, and lumber grading using 3D modeling techniques. Heuristic and dynamic programming algorithms were used to determine opening face and grade sawing optimization. Positions and shapes of internal log defects were predicted using a model developed by...
Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo
2011-04-01
The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction of complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and then each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated by plural normalized face images that have different eye distances. Finally, to combine scores from multiple complementary classifiers, a log likelihood ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; the FRGC is a large available data set. Experimental results on the FRGC version 2.0 data sets have shown that the proposed method achieves an average 81.49% verification rate on 2-D face images under various environmental variations such as illumination changes, expression changes, and time lapses.
Single-trial log transformation is optimal in frequency analysis of resting EEG alpha.
Smulders, Fren T Y; Ten Oever, Sanne; Donkers, Franc C L; Quaedflieg, Conny W E M; van de Ven, Vincent
2018-02-01
The appropriate definition and scaling of the magnitude of electroencephalogram (EEG) oscillations is an underdeveloped area. The aim of this study was to optimize the analysis of resting EEG alpha magnitude, focusing on alpha peak frequency and nonlinear transformation of alpha power. A family of nonlinear transforms, Box-Cox transforms, were applied to find the transform that (a) maximized a non-disputed effect: the increase in alpha magnitude when the eyes are closed (Berger effect), and (b) made the distribution of alpha magnitude closest to normal across epochs within each participant, or across participants. The transformations were performed either at the single epoch level or at the epoch-average level. Alpha peak frequency showed large individual differences, yet good correspondence between various ways to estimate it in 2 min of eyes-closed and 2 min of eyes-open resting EEG data. Both alpha magnitude and the Berger effect were larger for individual alpha than for a generic (8-12 Hz) alpha band. The log-transform on single epochs (a) maximized the t-value of the contrast between the eyes-open and eyes-closed conditions when tested within each participant, and (b) rendered near-normally distributed alpha power across epochs and participants, thereby making further transformation of epoch averages superfluous. The results suggest that the log-normal distribution is a fundamental property of variations in alpha power across time in the order of seconds. Moreover, effects on alpha power appear to be multiplicative rather than additive. These findings support the use of the log-transform on single epochs to achieve appropriate scaling of alpha magnitude. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
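The Box-Cox family mentioned above includes the log as the lambda = 0 case. The sketch below scans lambda for the value that maximizes a Berger-effect t statistic on synthetic single-epoch alpha power; the data, the independent-samples test and the lambda grid are all simplifying assumptions, not the study's analysis.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    # Synthetic single-epoch alpha power (arbitrary units) for one participant; values are assumptions.
    eyes_open = rng.lognormal(mean=1.0, sigma=0.6, size=60)
    eyes_closed = rng.lognormal(mean=1.6, sigma=0.6, size=60)

    def boxcox(x, lam):
        """Box-Cox transform; lam = 0 corresponds to the natural log."""
        return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

    lambdas = np.linspace(-1.0, 1.0, 21)
    t_values = [stats.ttest_ind(boxcox(eyes_closed, l), boxcox(eyes_open, l)).statistic
                for l in lambdas]
    best = lambdas[int(np.argmax(t_values))]
    print("lambda maximizing the Berger-effect t-value: %.1f (0 = log transform)" % best)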
Hickman, Stephen; Barton, Colleen; Zoback, Mark; Morin, Roger; Sass, John; Benoit, Richard; ,
1997-01-01
As part of a study relating fractured rock hydrology to in-situ stress and recent deformation within the Dixie Valley Geothermal Field, borehole televiewer logging and hydraulic fracturing stress measurements were conducted in a 2.7-km-deep geothermal production well (73B-7) drilled into the Stillwater fault zone. Borehole televiewer logs from well 73B-7 show numerous drilling-induced tensile fractures, indicating that the direction of the minimum horizontal principal stress, Shmin, is S57°E. As the Stillwater fault at this location dips S50°E at approximately 3??, it is nearly at the optimal orientation for normal faulting in the current stress field. Analysis of the hydraulic fracturing data shows that the magnitude of Shmin is 24.1 and 25.9 MPa at 1.7 and 2.5 km, respectively. In addition, analysis of a hydraulic fracturing test from a shallow well 1.5 km northeast of 73B-7 indicates that the magnitude of Shmin is 5.6 MPa at 0.4 km depth. Coulomb failure analysis shows that the magnitude of Shmin in these wells is close to that predicted for incipient normal faulting on the Stillwater and subparallel faults, using coefficients of friction of 0.6-1.0 and estimates of the in-situ fluid pressure and overburden stress. Spinner flowmeter and temperature logs were also acquired in well 73B-7 and were used to identify hydraulically conductive fractures. Comparison of these stress and hydrologic data with fracture orientations from the televiewer log indicates that hydraulically conductive fractures within and adjacent to the Stillwater fault zone are critically stressed, potentially active normal faults in the current west-northwest extensional stress regime at Dixie Valley.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagerloef, Jakob H.; Kindblom, Jon; Bernhardt, Peter
Purpose: Formation of new blood vessels (angiogenesis) in response to hypoxia is a fundamental event in the process of tumor growth and metastatic dissemination. However, abnormalities in tumor neovasculature often induce increased interstitial pressure (IP) and further reduce oxygenation (pO2) of tumor cells. In radiotherapy, well-oxygenated tumors favor treatment. Antiangiogenic drugs may lower IP in the tumor, improving perfusion, pO2 and drug uptake, by reducing the number of malfunctioning vessels in the tissue. This study aims to create a model for quantifying the effects of altered pO2-distribution due to antiangiogenic treatment in combination with radionuclide therapy. Methods: Based on experimental data, describing the effects of antiangiogenic agents on oxygenation of Glioblastoma Multiforme (GBM), a single cell based 3D model, including 10^10 tumor cells, was developed, showing how radionuclide therapy response improves as tumor oxygenation approaches normal tissue levels. The nuclides studied were 90Y, 131I, 177Lu, and 211At. The absorbed dose levels required for a tumor control probability (TCP) of 0.990 are compared for three different log-normal pO2-distributions: μ1 = 2.483, σ1 = 0.711; μ2 = 2.946, σ2 = 0.689; μ3 = 3.689, and σ3 = 0.330. The normal tissue absorbed doses will, in turn, depend on this. These distributions were chosen to represent the expected oxygen levels in an untreated hypoxic tumor, a hypoxic tumor treated with an anti-VEGF agent, and in normal, fully-oxygenated tissue, respectively. The former two are fitted to experimental data. The geometric oxygen distributions are simulated using two different patterns: one Monte Carlo based and one radially increasing, while keeping the log-normal volumetric distributions intact. Oxygen and activity are distributed according to the same pattern. Results: As tumor pO2 approaches normal tissue levels, the therapeutic effect is improved so that the normal tissue absorbed doses can be decreased by more than 95%, while retaining TCP, in the most favorable scenario and by up to about 80% with oxygen levels previously achieved in vivo, when the least favourable oxygenation case is used as starting point. The major difference occurs in poorly oxygenated cells. This is also where the pO2-dependence of the oxygen enhancement ratio is maximal. Conclusions: Improved tumor oxygenation together with increased radionuclide uptake show great potential for optimising treatment strategies, leaving room for successive treatments, or lowering absorbed dose to normal tissues, due to increased tumor response. Further studies of the concomitant use of antiangiogenic drugs and radionuclide therapy therefore appear merited.
Acoustic sorting models for improved log segregation
Xiping Wang; Steve Verrill; Eini Lowell; Robert J. Ross; Vicki L. Herian
2013-01-01
In this study, we examined three individual log measures (acoustic velocity, log diameter, and log vertical position in a tree) for their ability to predict average modulus of elasticity (MOE) and grade yield of structural lumber obtained from Douglas-fir (Pseudotsuga menziesii [Mirb. Franco]) logs. We found that log acoustic velocity only had a...
Assessment of variations in thermal cycle life data of thermal barrier coated rods
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Mcdonald, G.
1981-01-01
An analysis of thermal cycle life data for 22 thermal barrier coated (TBC) specimens was conducted. The ZrO2-8Y2O3/NiCrAlY plasma spray coated Rene 41 rods were tested in a Mach 0.3 Jet A/air burner flame. All specimens were subjected to the same coating and subsequent test procedures in an effort to control three parametric groups: material properties, geometry and heat flux. Statistically, the data sample space had a mean of 1330 cycles with a standard deviation of 520 cycles. The data were described by normal or log-normal distributions, but other models could also apply; the sample size must be increased to clearly delineate a statistical failure model. The statistical methods were also applied to adhesive/cohesive strength data for 20 TBC discs of the same composition, with similar results. The sample space had a mean of 9 MPa with a standard deviation of 4.2 MPa.
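Comparing normal and log-normal descriptions of such life data can be sketched as below, fitting both with scipy and comparing log-likelihoods and Kolmogorov-Smirnov p-values on synthetic cycle counts matched only roughly to the reported mean and standard deviation.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(12)
    # Synthetic cycle-to-failure data with roughly the reported mean (1330) and SD (520); n = 22.
    cycles = rng.normal(1330.0, 520.0, size=22).clip(min=100.0)

    # Fit and compare a normal and a log-normal model by log-likelihood and a KS test.
    mu, sd = stats.norm.fit(cycles)
    s, loc, scale = stats.lognorm.fit(cycles, floc=0)

    ll_norm = np.sum(stats.norm.logpdf(cycles, mu, sd))
    ll_lognorm = np.sum(stats.lognorm.logpdf(cycles, s, loc, scale))
    ks_norm = stats.kstest(cycles, "norm", args=(mu, sd)).pvalue
    ks_lognorm = stats.kstest(cycles, "lognorm", args=(s, loc, scale)).pvalue

    print("normal    : logL = %.1f, KS p = %.2f" % (ll_norm, ks_norm))
    print("log-normal: logL = %.1f, KS p = %.2f" % (ll_lognorm, ks_lognorm))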
NASA Astrophysics Data System (ADS)
Yegireddi, Satyanarayana; Uday Bhaskar, G.
2009-01-01
Different parameters obtained through well-logging geophysical sensors such as SP, resistivity, gamma-gamma, neutron, natural gamma and acoustic logs help in the identification of strata and in the estimation of the physical, electrical and acoustical properties of the subsurface lithology. Strong and conspicuous changes in some of the log parameters associated with a particular stratigraphic formation are a function of its composition and physical properties and help in classification. However, some substrata show moderate values in the respective log parameters, making it difficult to identify or assess the type of strata from the standard variability ranges of the log parameters and visual inspection alone. The complexity increases further with more sensors involved. An attempt is made to identify the type of stratigraphy from borehole geophysical log data using a combined approach of neural networks and fuzzy logic, known as an Adaptive Neuro-Fuzzy Inference System. A model is built based on a few data sets (geophysical logs) of known stratigraphy in the coal areas of Kothagudem, Godavari basin, and the network model is then used to infer the lithology of a borehole, not used in the simulation, from its geophysical logs. The results are very encouraging and the model is able to decipher even thin coal seams and other strata from borehole geophysical logs. The model can be further modified to assess the physical properties of the strata, if the corresponding ground truth is made available for simulation.
Mechanism-based model for tumor drug resistance.
Kuczek, T; Chan, T C
1992-01-01
The development of tumor resistance to cytotoxic agents has important implications in the treatment of cancer. If supported by experimental data, mathematical models of resistance can provide useful information on the underlying mechanisms and aid in the design of therapeutic regimens. We report on the development of a model of tumor-growth kinetics based on the assumption that the rates of cell growth in a tumor are normally distributed. We further assumed that the growth rate of each cell is proportional to its rate of total pyrimidine synthesis (de novo plus salvage). Using an ovarian carcinoma cell line (2008) and resistant variants selected for chronic exposure to a pyrimidine antimetabolite, N-phosphonacetyl-L-aspartate (PALA), we derived a simple and specific analytical form describing the growth curves generated in 72 h growth assays. The model assumes that the rate of de novo pyrimidine synthesis, denoted α, is shifted down by an amount proportional to the log10 PALA concentration and that cells whose rate of pyrimidine synthesis falls below a critical level, denoted α0, can no longer grow. This is described by the equation: Probability(growth) = Probability(α0 < α − constant × log10[PALA]). This model predicts that when growth curves are plotted on probit paper, they will produce straight lines. This prediction is in agreement with the data we obtained for the 2008 cells. Another prediction of this model is that the same probit plots for the resistant variants should shift to the right in a parallel fashion. Probit plots of the dose-response data obtained for each resistant 2008 line following chronic exposure to PALA again confirmed this prediction. Correlation of the rightward shift of dose responses to uridine transport (r = 0.99) also suggests that salvage metabolism plays a key role in tumor-cell resistance to PALA. Furthermore, the slope of the regression lines enables the detection of synergy such as that observed between dipyridamole and PALA. Although the rate-normal model was used to study the rate of salvage metabolism in PALA resistance in the present study, it may be widely applicable to modeling of other resistance mechanisms such as gene amplification of target enzymes.
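The probit relation Probability(growth) = Φ(a − c·log10[PALA]) can be fitted directly by nonlinear least squares, as in the sketch below; the dose-response numbers are invented for illustration and the parameter names a and c are assumptions, not the authors' notation.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def probit_growth(log10_conc, a, c):
        """Probability of growth as a normal CDF that declines with log10 drug concentration."""
        return norm.cdf(a - c * log10_conc)

    # Synthetic dose-response data: log10 PALA concentration vs. fraction of control growth.
    log10_conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
    growth = np.array([0.98, 0.95, 0.82, 0.55, 0.25, 0.08, 0.02])

    popt, _ = curve_fit(probit_growth, log10_conc, growth, p0=(2.0, 1.0))
    a, c = popt
    print("fitted probit: a = %.2f, slope c = %.2f" % (a, c))
    print("IC50 at log10[PALA] = %.2f (where the probit argument is zero)" % (a / c))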
NASA Astrophysics Data System (ADS)
Bartiko, Daniel; Chaffe, Pedro; Bonumá, Nadia
2017-04-01
Floods may be strongly affected by climate, land-use, land-cover and water infrastructure changes. However, it is common to model this process as stationary. This approach has been questioned, especially when it involves estimating the frequency and magnitude of extreme events for designing and maintaining hydraulic structures, such as those responsible for flood control and dam safety. Brazil is the third largest producer of hydroelectricity in the world and many of the country's dams are located in the Southern Region. It therefore seems appropriate to investigate the presence of non-stationarity in the inflows to these plants. In our study, we used historical flood data from the Brazilian National Grid Operator (ONS) to explore trends in annual maxima of river flow for the 38 main rivers flowing to Southern Brazilian reservoirs (records range from 43 to 84 years). In the analysis, we assumed a two-parameter log-normal distribution, and a linear regression model was applied in order to allow the mean to vary with time. We computed recurrence reduction factors to characterize changes in the return period of a 100-year flood initially estimated by a stationary log-normal model. To evaluate whether or not a particular site exhibits a positive trend, we only considered data series whose linear regression slope coefficients were significant (p < 0.05). The significance level was calculated using the one-sided Student's t-test. The trend model residuals were analyzed using the Anderson-Darling normality test, the Durbin-Watson test for independence and the Breusch-Pagan test for heteroscedasticity. Our results showed that 22 of the 38 data series analyzed have a significant positive trend. The trends were mainly in three large basins: Iguazu, Uruguay and Paranapanema, which have undergone changes in land use and flow regularization in recent years. For the series with a positive trend, the calculated return period of a 100-year flood estimated by the stationary model varied from 50 to 77 years when considering a planning horizon of ten years. We conclude that attention should be given to future projects developed in this area, including the incorporation of non-stationarity analysis, investigation of the causes of such changes, and the incorporation of new data to increase the reliability of the estimates.
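A minimal version of the non-stationary log-normal analysis is sketched below on synthetic annual maxima: a linear trend in the mean of log Q is tested, and the exceedance probability of the stationary 100-year flood is re-evaluated under the fitted trend; all numbers are assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(13)
    years = np.arange(1950, 2015)
    # Synthetic annual maximum flows (m^3/s) with a mild upward trend in the log mean.
    log_q = 6.0 + 0.004 * (years - years[0]) + rng.normal(0, 0.35, years.size)
    q = np.exp(log_q)

    # Linear trend in the mean of log Q, with a one-sided test on the slope.
    res = stats.linregress(years, np.log(q))
    p_one_sided = res.pvalue / 2 if res.slope > 0 else 1 - res.pvalue / 2
    print("slope = %.4f per year, one-sided p = %.4f" % (res.slope, p_one_sided))

    # Stationary 100-year flood vs. its exceedance under the fitted non-stationary model,
    # evaluated ten years beyond the end of the record.
    sigma = np.std(np.log(q) - (res.intercept + res.slope * years), ddof=2)
    q100_stat = np.exp(np.mean(np.log(q)) + stats.norm.ppf(0.99) * np.std(np.log(q), ddof=1))
    mu_future = res.intercept + res.slope * (years[-1] + 10)
    p_exceed = stats.norm.sf(np.log(q100_stat), loc=mu_future, scale=sigma)
    print("stationary 100-yr flood has a return period of ~%.0f yr in 10 years" % (1 / p_exceed))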
Celsie, Alena; Parnis, J Mark; Mackay, Donald
2016-03-01
The effects of temperature, pH, and salinity change on naphthenic acids (NAs) present in oil-sands process wastewater were modeled for 55 representative NAs. COSMO-RS was used to estimate octanol-water (KOW) and octanol-air (KOA) partition ratios and Henry's law constants (H). Validation with experimental carboxylic acid data yielded log KOW and log H RMS errors of 0.45 and 0.55, respectively. Calculations of log KOW (or log D, for pH-dependence), log KOA and log H (or log HD, for pH-dependence) were made for model NAs between -20 °C and 40 °C, pH between 0 and 14, and salinity between 0 and 3 g NaCl L(-1). A temperature increase of 60 °C resulted in a 3-5 log unit increase in H and a similar magnitude decrease in KOA. A pH increase above the NA pKa resulted in a dramatic decrease in both log D and log HD. A salinity increase over the 0-3 g NaCl L(-1) range resulted in a 0.3 log unit increase on average for KOW and H values. Log KOW values of the sodium salt and anion of the conjugate base were also estimated to examine their potential contribution to the overall partitioning of NAs. Sodium salts and anions of naphthenic acids are predicted to have log KOW values that are on average 4 and 6 log units lower, respectively, than those of the corresponding neutral NA. Partitioning properties are profoundly influenced by the prevailing pH relative to the substance's pKa at the relevant temperature. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kellogg, James A.; Atria, Peter V.; Sanders, Jeffrey C.; Eyster, M. Elaine
2001-01-01
Normal assay variation associated with bDNA tests for human immunodeficiency virus type 1 (HIV-1) RNA performed at two laboratories with different levels of test experience was investigated. Two 5-ml aliquots of blood in EDTA tubes were collected from each patient for whom the HIV-1 bDNA test was ordered. Blood was stored for no more than 4 h at room temperature prior to plasma separation. Plasma was stored at −70°C until transported to the Central Pennsylvania Alliance Laboratory (CPAL; York, Pa.) and to the Hershey Medical Center (Hershey, Pa.) on dry ice. Samples were stored at ≤−70°C at both laboratories prior to testing. Pools of negative (donor), low-HIV-1-RNA-positive, and high-HIV-1-RNA-positive plasma samples were also repeatedly tested at CPAL to determine both intra- and interrun variation. From 11 August 1999 until 14 September 2000, 448 patient specimens were analyzed in parallel at CPAL and Hershey. From 206 samples with results of ≥1,000 copies/ml at CPAL, 148 (72%) of the results varied by ≤0.20 log10 when tested at Hershey and none varied by >0.50 log10. However, of 242 specimens with results of <1,000 copies/ml at CPAL, 11 (5%) of the results varied by >0.50 log10 when tested at Hershey. Of 38 aliquots of HIV-1 RNA pool negative samples included in 13 CPAL bDNA runs, 37 (97%) gave results of <50 copies/ml and 1 (3%) gave a result of 114 copies/ml. Low-positive HIV-1 RNA pool intrarun variation ranged from 0.06 to 0.26 log10 while the maximum interrun variation was 0.52 log10. High-positive HIV-1 RNA pool intrarun variation ranged from 0.04 to 0.32 log10, while the maximum interrun variation was 0.55 log10. In our patient population, a change in bDNA HIV-1 RNA results of ≤0.50 log10 over time most likely represents normal laboratory test variation. However, a change of >0.50 log10, especially if the results are >1,000 copies/ml, is likely to be significant. PMID:11329458
Separate-channel analysis of two-channel microarrays: recovering inter-spot information.
Smyth, Gordon K; Altman, Naomi S
2013-05-26
Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intraspot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
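To make the M-value/A-value transformation concrete, here is a minimal Python sketch (not the authors' limma code); the intensity arrays are hypothetical, and background correction and normalisation are omitted.

```python
import numpy as np

# Hypothetical two-channel spot intensities (one entry per spot)
red = np.array([1200.0, 850.0, 15000.0, 430.0])
green = np.array([1100.0, 900.0, 9000.0, 500.0])

log_r = np.log2(red)
log_g = np.log2(green)

M = log_r - log_g            # within-spot log-ratio
A = 0.5 * (log_r + log_g)    # within-spot average log-expression

# A log-ratio analysis models M alone; a separate-channel analysis
# models (M, A) jointly, linked by an intra-spot correlation.
print(M, A)
```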
Double stars with wide separations in the AGK3 - II. The wide binaries and the multiple systems*
NASA Astrophysics Data System (ADS)
Halbwachs, J.-L.; Mayor, M.; Udry, S.
2017-02-01
A large observation programme was carried out to measure the radial velocities of the components of a selection of common proper motion (CPM) stars to select the physical binaries. 80 wide binaries (WBs) were detected, and 39 optical pairs were identified. By adding CPM stars with separations close enough to be almost certain that they are physical, a bias-controlled sample of 116 WBs was obtained, and used to derive the distribution of separations from 100 to 30 000 au. The distribution obtained does not match the log-constant distribution, but agrees with the log-normal distribution. The spectroscopic binaries detected among the WB components were used to derive statistical information about the multiple systems. The close binaries in WBs seem to be like those detected in other field stars. As for the WBs, they seem to obey the log-normal distribution of periods. The number of quadruple systems agrees with the no correlation hypothesis; this indicates that an environment conducive to the formation of WBs does not favour the formation of subsystems with periods shorter than 10 yr.
Davatzes, Nicholas C.; Hickman, Stephen H.
2009-01-01
A suite of geophysical logs has been acquired for structural, fluid flow and stress analysis of well 27-15 in the Desert Peak Geothermal Field, Nevada, in preparation for stimulation and development of an Enhanced Geothermal System (EGS). Advanced Logic Technologies Borehole Televiewer (BHTV) and Schlumberger Formation MicroScanner (FMS) image logs reveal extensive drilling-induced tensile fractures, showing that the current minimum compressive horizontal stress, Shmin, in the vicinity of well 27-15 is oriented along an azimuth of 114±17°. This orientation is consistent with the dip direction of recently active normal faults mapped at the surface and with extensive sets of fractures and some formation boundaries seen in the BHTV and FMS logs. Temperature and spinner flowmeter surveys reveal several minor flowing fractures that are well oriented for normal slip, although overall permeability in the well is quite low. These results indicate that well 27-15 is a viable candidate for EGS stimulation and complement research by other investigators, including cuttings analysis, a reflection seismic survey, pressure transient and tracer testing, and micro-seismic monitoring.
NASA Astrophysics Data System (ADS)
Nobert, Joel; Mugo, Margaret; Gadain, Hussein
Reliable estimation of flood magnitudes corresponding to required return periods, vital for structural design purposes, is hampered by the lack of hydrological data in the study area of Lake Victoria Basin in Kenya. Use of regional information, derived from data at gauged sites and regionalized for use at any location within a homogeneous region, would improve the reliability of design flood estimation. Therefore, the regional index flood method has been applied. Based on data from 14 gauged sites, the basin was delineated into two homogeneous regions using elevation variation (90-m DEM), the spatial annual rainfall pattern and Principal Component Analysis of seasonal rainfall patterns (from 94 rainfall stations). At-site annual maximum series were modelled using the three-parameter Log-normal (LN3), Log Logistic (LLG), Generalized Extreme Value (GEV) and Log Pearson Type 3 (LP3) distributions. The parameters of the distributions were estimated using the method of probability weighted moments. Goodness-of-fit tests were applied and the GEV was identified as the most appropriate model for each site. Based on the GEV model, flood quantiles were estimated and regional frequency curves were derived from the averaged at-site growth curves. Using the least squares regression method, relationships were developed between the index flood, defined as the Mean Annual Flood (MAF), and catchment characteristics. The relationships indicated that area, mean annual rainfall and altitude were the three significant variables that most strongly influence the index flood. Thereafter, flood magnitudes in ungauged catchments within a homogeneous region were estimated from the derived equations for the index flood and the quantiles from the regional curves. These estimates will improve flood risk estimation and support water management and engineering decisions and actions.
Measuring colour rivalry suppression in amblyopia.
Hofeldt, T S; Hofeldt, A J
1999-11-01
To determine whether colour rivalry suppression is an index of the visual impairment in amblyopia and whether the stereopsis and fusion evaluator (SAFE) instrument is a reliable indicator of the difference in visual input from the two eyes. To test the accuracy of the SAFE instrument for measuring the visual input from the two eyes, colour rivalry suppression was measured in six normal subjects. A test neutral density filter (NDF) was placed before one eye to induce a temporary relative afferent defect and the subject selected the NDF before the fellow eye to neutralise the test NDF. In a non-paediatric private practice, 24 consecutive patients diagnosed with unilateral amblyopia were tested with the SAFE. Of the 24 amblyopes, 14 qualified for the study because they were able to fuse images and had no comorbid disease. The relation between depth of colour rivalry suppression, stereoacuity, and interocular difference in logMAR acuity was analysed. In normal subjects, the SAFE instrument reversed temporary defects of 0.3 to 1.8 log units to within 0.6 log units. In amblyopes, the NDF needed to reverse colour rivalry suppression was positively related to interocular difference in logMAR acuity (beta=1.21, p<0.0001), and negatively related to stereoacuity (beta=-0.16, p=0.019). The interocular difference in logMAR acuity was negatively related to stereoacuity (beta=-0.13, p=0.009). Colour rivalry suppression as measured with the SAFE was found to agree closely with the degree of visual acuity impairment in non-paediatric patients with amblyopia.
Effect of stimulus configuration on crowding in strabismic amblyopia.
Norgett, Yvonne; Siderov, John
2017-11-01
Foveal vision in strabismic amblyopia can show increased levels of crowding, akin to typical peripheral vision. Target-flanker similarity and visual-acuity test configuration may cause the magnitude of crowding to vary in strabismic amblyopia. We used custom-designed visual acuity tests to investigate crowding in observers with strabismic amblyopia. LogMAR was measured monocularly in both eyes of 11 adults with strabismic or mixed strabismic/anisometropic amblyopia using custom-designed letter tests. The tests used single-letter and linear formats with either bar or letter flankers to introduce crowding. Tests were presented monocularly on a high-resolution display at a test distance of 4 m, using standardized instructions. For each condition, five letters of each size were shown; testing continued until three letters of a given size were named incorrectly. Uncrowded logMAR was subtracted from logMAR in each of the crowded tests to highlight the crowding effect. Repeated-measures ANOVA showed that letter flankers and linear presentation individually resulted in poorer performance in the amblyopic eyes (respectively, mean normalized logMAR = 0.29, SE = 0.07, mean normalized logMAR = 0.27, SE = 0.07; p < 0.05) and together had an additive effect (mean = 0.42, SE = 0.09, p < 0.001). There was no difference across the tests in the fellow eyes (p > 0.05). Both linear presentation and letter rather than bar flankers increase crowding in the amblyopic eyes of people with strabismic amblyopia. These results suggest the influence of more than one mechanism contributing to crowding in linear visual-acuity charts with letter flankers.
Log-amplitude statistics for Beck-Cohen superstatistics
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Konno, Hidetoshi
2013-05-01
As a possible generalization of Beck-Cohen superstatistical processes, we study non-Gaussian processes with temporal heterogeneity of local variance. To characterize the variance heterogeneity, we define log-amplitude cumulants and log-amplitude autocovariance and derive closed-form expressions of the log-amplitude cumulants for χ2, inverse χ2, and log-normal superstatistical distributions. Furthermore, we show that χ2 and inverse χ2 superstatistics with degree 2 are closely related to an extreme value distribution, called the Gumbel distribution. In these cases, the corresponding superstatistical distributions result in the q-Gaussian distribution with q=5/3 and the bilateral exponential distribution, respectively. Thus, our finding provides a hypothesis that the asymptotic appearance of these two special distributions may be explained by a link with the asymptotic limit distributions involving extreme values. In addition, as an application of our approach, we demonstrate that non-Gaussian fluctuations observed in a stock index futures market can be well approximated by the χ2 superstatistical distribution with degree 2.
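A rough illustration of how empirical log-amplitude cumulants could be computed from a series with log-normally distributed local variance; this is a simplified sketch, not the authors' closed-form derivation, and the parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy superstatistical series: Gaussian noise modulated by a log-normal local std
local_std = rng.lognormal(mean=0.0, sigma=0.3, size=10000)
x = rng.normal(size=10000) * local_std

log_amp = np.log(np.abs(x))
c1 = log_amp.mean()                 # first log-amplitude cumulant (mean of log|x|)
c2 = log_amp.var()                  # second cumulant (variance of log|x|)
c3 = np.mean((log_amp - c1) ** 3)   # third cumulant (third central moment)
print(c1, c2, c3)
```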
Pretreatment with intravenous lipid emulsion reduces mortality from cocaine toxicity in a rat model.
Carreiro, Stephanie; Blum, Jared; Hack, Jason B
2014-07-01
We compare the effects of intravenous lipid emulsion and normal saline solution pretreatment on mortality and hemodynamic changes in a rat model of cocaine toxicity. We hypothesize that intravenous lipid emulsion will decrease mortality and hemodynamic changes caused by cocaine administration compared with saline solution. Twenty male Sprague-Dawley rats were sedated and randomized to receive intravenous lipid emulsion or normal saline solution, followed by a 10 mg/kg bolus of intravenous cocaine. Continuous monitoring included intra-arterial blood pressure, pulse rate and ECG tracing. Endpoints included a sustained undetectable mean arterial pressure (MAP) or return to baseline MAP for 5 minutes. The log-rank test was used to compare mortality. A mixed-effect repeated-measures ANOVA was used to estimate the effects of group (intravenous lipid emulsion versus saline solution), time, and survival on change in MAP, pulse rate, or pulse pressure. In the normal saline solution group, 7 of 10 animals died compared with 2 of 10 in the intravenous lipid emulsion group. The difference between the survival rate of 80% (95% confidence interval 55% to 100%) for the intravenous lipid emulsion rats and that of 30% (95% confidence interval 0.2% to 58%) for the normal saline solution group was statistically significant (P=.045). Intravenous lipid emulsion pretreatment decreased cocaine-induced cardiovascular collapse and blunted hypotensive effects compared with normal saline solution in this rat model of acute lethal cocaine intoxication. Intravenous lipid emulsion should be investigated further as a potential adjunct in the treatment of severe cocaine toxicity. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
Workload Characterization and Performance Implications of Large-Scale Blog Servers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeon, Myeongjae; Kim, Youngjae; Hwang, Jeaho
With the ever-increasing popularity of social network services (SNSs), an understanding of the characteristics of these services and their effects on the behavior of their host servers is critical. However, there has been a lack of research on the workload characterization of servers running SNS applications such as blog services. To fill this void, we empirically characterized real-world web server logs collected from one of the largest South Korean blog hosting sites for 12 consecutive days. The logs consist of more than 96 million HTTP requests and 4.7 TB of network traffic. Our analysis reveals the following: (i) the transfer size of non-multimedia files and blog articles can be modeled using a truncated Pareto distribution and a log-normal distribution, respectively; (ii) user access for blog articles does not show temporal locality, but is strongly biased towards those posted with image or audio files. We additionally discuss the potential performance improvement through clustering of small files on a blog page into contiguous disk blocks, which benefits from the observed file access patterns. Trace-driven simulations show that, on average, the suggested approach achieves 60.6% better system throughput and reduces the processing time for file access by 30.8% compared to the best performance of the Ext4 file system.
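For readers who want to reproduce a log-normal fit to transfer sizes, a minimal sketch using SciPy; the synthetic sizes below stand in for the (unavailable) server logs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical article transfer sizes in bytes (stand-in for real server logs)
sizes = rng.lognormal(mean=9.0, sigma=1.2, size=5000)

# Fit a log-normal with the location fixed at zero, as is usual for sizes
shape, loc, scale = stats.lognorm.fit(sizes, floc=0)
mu_hat, sigma_hat = np.log(scale), shape
print(f"estimated log-mean {mu_hat:.2f}, log-std {sigma_hat:.2f}")
```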
Suárez-Ortegón, M F; Arbeláez, A; Mosquera, M; Méndez, F; Aguilar-de Plata, C
2012-08-01
Ferritin levels have been associated with metabolic syndrome and insulin resistance. The aim of the present study was to evaluate the prediction of ferritin levels by variables related to cardiometabolic disease risk in a multivariate analysis. For this aim, 123 healthy women (72 premenopausal and 51 postmenopausal) were recruited. Data were collected through anthropometric measurements, questionnaires for personal/familial antecedents and dietary intake (24-h recall), and biochemical determinations (ferritin, C-reactive protein (CRP), glucose, insulin, and lipid profile) in blood serum samples. Multiple linear regression analysis was used, and variables with a non-normal distribution were log-transformed for this analysis. In premenopausal women, a model to explain log-ferritin levels was found with log-CRP levels, familial history of heart attack, and waist circumference as independent predictors. Ferritin behaves like other cardiovascular markers in terms of prediction of its levels by documented predictors of cardiometabolic disease and related disorders. This is the first report of a relationship between familial history of heart attack and ferritin levels. Further research is required to evaluate the mechanisms explaining the relationship of central body fat and familial history of heart attack with body iron stores.
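A hedged sketch of the kind of multiple linear regression on log-transformed variables described above, using statsmodels; the data frame and all of its column names and values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
# Hypothetical data mirroring the analysis described above
df = pd.DataFrame({
    "log_ferritin": rng.normal(3.5, 0.6, 100),
    "log_crp": rng.normal(0.0, 0.8, 100),
    "waist_cm": rng.normal(85.0, 10.0, 100),
    "mi_family_history": rng.integers(0, 2, 100),
})

# Ordinary least squares with log-ferritin as the response
X = sm.add_constant(df[["log_crp", "waist_cm", "mi_family_history"]])
model = sm.OLS(df["log_ferritin"], X).fit()
print(model.params)
```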
Gidáli, J; Szamosvölgyi, S; Fehér, I; Kovács, P
1990-01-01
The effect of hyperthermia in vitro on the survival and leukaemogenic effectiveness of WEHI 3-B cells and on the survival and transplantation efficiency of bone marrow cells was compared in a murine model system. Normal murine clonogenic haemopoietic cells (day 9 CFU-S and CFU-GM) proved to be significantly less sensitive to 42.5 degrees C hyperthermia (Do values: 54.3 and 41.1 min, respectively) than leukaemic clonogenic cells (CFU-L) derived from suspension culture or from bone marrow of leukaemic mice (Do: 17.8 min). Exposure for 120 min to 42.5 degrees C reduced the surviving fraction of CFU-L to 0.002 and that of CFU-S to 0.2. If comparable graft sizes were transplanted from normal or heat exposed bone marrow, 60-day survival of supralethally irradiated mice was similar. Surviving WEHI 3-B cells were capable of inducing leukaemia in vivo. The two log difference in the surviving fraction of CFU-L and CFU-S after 120 min exposure to 42.5 degrees C suggests that hyperthermia ex vivo may be a suitable purging method for autologous bone marrow transplantation.
NASA Astrophysics Data System (ADS)
Cao, Xiangyu; Fyodorov, Yan V.; Le Doussal, Pierre
2018-02-01
We address systematically an apparent nonphysical behavior of the free-energy moment generating function for several instances of the logarithmically correlated models: the fractional Brownian motion with Hurst index H = 0 (fBm0) (and its bridge version), a one-dimensional model appearing in decaying Burgers turbulence with log-correlated initial conditions and, finally, the two-dimensional log-correlated random-energy model (logREM) introduced in Cao et al. [Phys. Rev. Lett. 118, 090601 (2017), 10.1103/PhysRevLett.118.090601] based on the two-dimensional Gaussian free field with background charges and directly related to the Liouville field theory. All these models share anomalously large fluctuations of the associated free energy, with a variance proportional to the log of the system size. We argue that a seemingly nonphysical vanishing of the moment generating function for some values of parameters is related to the termination point transition (i.e., prefreezing). We study the associated universal log corrections in the frozen phase, both for logREMs and for the standard REM, filling a gap in the literature. For the above-mentioned integrable instances of logREMs, we predict the nontrivial free-energy cumulants describing non-Gaussian fluctuations on top of the Gaussian with extensive variance. Some of the predictions are tested numerically.
NASA Astrophysics Data System (ADS)
Longo, M.; Keller, M.; Scaranello, M. A., Sr.; dos-Santos, M. N.; Xu, Y.; Huang, M.; Morton, D. C.
2017-12-01
Logging and understory fires are major drivers of tropical forest degradation, reducing carbon stocks and changing forest structure, composition, and dynamics. In contrast to deforested areas, sites that are disturbed by logging and fires retain some, albeit severely altered, forest structure and function. In this study we simulated selective logging using the Ecosystem Demography Model (ED-2) to investigate the impact of a broad range of logging techniques, harvest intensities, and recurrence cycles on the long-term dynamics of Amazon forests, including the magnitude and duration of changes in forest flammability following timber extraction. Model results were evaluated using eddy covariance towers at logged sites in the Tapajos National Forest in Brazil and data on long-term dynamics reported in the literature. ED-2 reproduces both the fast (< 5 yr) recovery of water and energy fluxes seen at the flux towers and the typical, field-observed, decadal time scales for biomass recovery when no additional logging occurs. Preliminary results using the original ED-2 fire model show that the canopy cover loss in forests under high-intensity, conventional logging causes sufficient drying to support more intense fires. These results indicate that under intense degradation, forests may shift to novel disturbance regimes, severely reducing carbon stocks and inducing long-term changes in forest structure and composition through recurrent fires.
Universal Distribution of Litter Decay Rates
NASA Astrophysics Data System (ADS)
Forney, D. C.; Rothman, D. H.
2008-12-01
Degradation of litter is the result of many physical, chemical and biological processes. The high variability of these processes likely accounts for the progressive slowdown of decay with litter age. This age dependence is commonly thought to result from the superposition of processes with different decay rates k. Here we assume an underlying continuous yet unknown distribution p(k) of decay rates [1]. To seek its form, we analyze the mass-time history of 70 LIDET [2] litter data sets obtained under widely varying conditions. We construct a regularized inversion procedure to find the best-fitting distribution p(k) with the fewest degrees of freedom. We find that the resulting p(k) is universally consistent with a lognormal distribution, i.e. a Gaussian distribution of log k, characterized by a dataset-dependent mean and variance of log k. This result is supported by the recurring observation that microbial populations on leaves are log-normally distributed [3]. Simple biological processes cause the frequent appearance of the log-normal distribution in ecology [4]. Environmental factors such as soil nitrate, soil aggregate size, soil hydraulic conductivity, total soil nitrogen, soil denitrification and soil respiration have all been observed to be log-normally distributed [5]. Litter degradation rates depend on many coupled, multiplicative factors, which provides a fundamental basis for the lognormal distribution. Using this insight, we systematically estimated the mean and variance of log k for 512 data sets from the LIDET study. We find the mean strongly correlates with temperature and precipitation, while the variance appears to be uncorrelated with the main environmental factors and is thus likely more correlated with chemical composition and/or ecology. The results indicate the possibility that the distribution of rates reflects, at least in part, the distribution of microbial niches. [1] B. P. Boudreau, B. R. Ruddick, American Journal of Science, 291, 507 (1991). [2] M. Harmon, Forest Science Data Bank: TD023 [Database]. LTER Intersite Fine Litter Decomposition Experiment (LIDET): Long-Term Ecological Research (2007). [3] G. A. Beattie, S. E. Lindow, Phytopathology 89, 353 (1999). [4] R. A. May, Ecology and Evolution of Communities, A Pattern of Species Abundance and Diversity, 81 (1975). [5] T. B. Parkin, J. A. Robinson, Advances in Soil Science 20, Analysis of Lognormal Data, 194 (1992).
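The superposition-of-exponentials picture with a lognormal p(k) can be evaluated numerically as below; this is an illustrative Monte Carlo sketch with made-up values for the mean and variance of log k, not the LIDET inversion itself.

```python
import numpy as np

def mass_remaining(t, mu_logk, sigma_logk, n_draws=100000, seed=0):
    """Monte Carlo estimate of M(t)/M(0) = E[exp(-k*t)] with log-normal k."""
    rng = np.random.default_rng(seed)
    k = rng.lognormal(mean=mu_logk, sigma=sigma_logk, size=n_draws)
    return np.exp(-k * t[:, None]).mean(axis=1)

t = np.array([0.0, 1.0, 2.0, 5.0, 10.0])   # litter age in years (hypothetical)
print(mass_remaining(t, mu_logk=-1.0, sigma_logk=1.0))
```

The broadening of p(k) is what produces the apparent slowdown of decay with litter age: fast-decaying components disappear early, leaving progressively slower ones behind.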
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
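As a toy illustration of the process being estimated, a minimal sketch that generates one realization of a discrete multiplicative log-normal cascade; the binary branching scheme and the intermittency value are simplifying assumptions, not the GMM estimator described in the paper.

```python
import numpy as np

def lognormal_cascade(n_steps, lam2, seed=0):
    """One realization of a discrete multiplicative log-normal cascade.

    Each of the 2**n_steps cells accumulates independent Gaussian log-weights
    along its branch; lam2 plays the role of the intermittency parameter.
    """
    rng = np.random.default_rng(seed)
    log_w = np.zeros(1)
    for _ in range(n_steps):
        log_w = np.repeat(log_w, 2)                            # split every cell in two
        log_w += rng.normal(0.0, np.sqrt(lam2), log_w.size)    # multiply by a log-normal weight
    return np.exp(log_w)

series = lognormal_cascade(n_steps=10, lam2=0.05)
print(series.size, series.var())
```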
Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena
2010-01-31
In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of a drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact the safety of the therapeutic by inducing a range of reactions from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug-naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma-based cut points were found to be more accurate (closer to their targeted false-positive rates) than normal or log-normal methods and more precise (smaller standard errors of cut point estimators) than the nonparametric percentile method. Under a gamma regime, normal-theory-based methods for estimating cut points targeting a 5% false-positive rate were found in computer simulation experiments to have, on average, false-positive rates ranging from 6.2 to 8.3% (or positive biases between +1.2 and +3.3%), with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false-positive rates, with negative biases as large as -2.3% and absolute bias decreasing with the shape parameter. These results are consistent with the well-known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase. Inflated false-positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent test (confirmatory assay). On the other hand, deflated false-positive rates in screening immunogenicity assays will not meet the minimum 5% false-positive target proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
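A minimal sketch of the gamma-based cut-point idea: fit a 3-parameter gamma to (synthetic) drug-naïve responses and take the 95th percentile as the screening cut point targeting a 5% false-positive rate. The data and parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical drug-naive screening responses (unimodal, positively skewed)
responses = rng.gamma(shape=3.0, scale=0.15, size=200) + 0.05

# Three-parameter gamma fit (shape, location, scale) and 95th-percentile cut point
shape, loc, scale = stats.gamma.fit(responses)
cut_point = stats.gamma.ppf(0.95, shape, loc=loc, scale=scale)
print(f"screening cut point: {cut_point:.3f}")
```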
NASA Astrophysics Data System (ADS)
Posacki, Silvia; Cappellari, Michele; Treu, Tommaso; Pellegrini, Silvia; Ciotti, Luca
2015-01-01
We present an investigation about the shape of the initial mass function (IMF) of early-type galaxies (ETGs), based on a joint lensing and dynamical analysis, and on stellar population synthesis models, for a sample of 55 lens ETGs identified by the Sloan Lens Advanced Camera for Surveys (SLACS). We construct axisymmetric dynamical models based on the Jeans equations which allow for orbital anisotropy and include a dark matter halo. The models reproduce in detail the observed Hubble Space Telescope photometry and are constrained by the total projected mass within the Einstein radius and the stellar velocity dispersion (σ) within the Sloan Digital Sky Survey fibres. Comparing the dynamically derived stellar mass-to-light ratios (M*/L)dyn, obtained for an assumed halo slope ρh ∝ r⁻¹, to the stellar population ones (M*/L)Salp, derived from full-spectrum fitting and assuming a Salpeter IMF, we infer the mass normalization of the IMF. Our results confirm the previous analysis by the SLACS team that the mass normalization of the IMF of high-σ galaxies is consistent on average with a Salpeter slope. Our study allows for a fully consistent study of the trend between IMF and σ for both the SLACS and ATLAS3D samples, which explore quite different σ ranges. The two samples are highly complementary, the first being essentially σ selected, and the latter volume-limited and nearly mass selected. We find that the two samples merge smoothly into a single trend of the form log α = (0.38 ± 0.04) × log(σe/200 km s⁻¹) + (−0.06 ± 0.01), where α = (M*/L)dyn/(M*/L)Salp and σe is the luminosity-averaged σ within one effective radius Re. This is consistent with a systematic variation of the IMF normalization from Kroupa to Salpeter in the interval σe ≈ 90-270 km s⁻¹.
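Using only the quoted best-fit coefficients (uncertainties dropped), the IMF mismatch parameter α implied by the trend can be evaluated as in this minimal sketch.

```python
import numpy as np

def imf_mismatch(sigma_e_kms):
    """Evaluate log(alpha) = 0.38 * log10(sigma_e / 200 km/s) - 0.06 (best-fit values only)."""
    log_alpha = 0.38 * np.log10(sigma_e_kms / 200.0) - 0.06
    return 10.0 ** log_alpha

# alpha ~ 1 corresponds to a Salpeter-like normalization
for sigma_e in (90.0, 200.0, 270.0):
    print(sigma_e, round(imf_mismatch(sigma_e), 2))
```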
NASA Astrophysics Data System (ADS)
Suzuki, K.; Takayama, T.; Fujii, T.; Yamamoto, K.
2014-12-01
Many geologists have discussed slope instability caused by gas-hydrate dissociation, which could generate movable fluid in the pore space of sediments. However, the physical property changes caused by gas-hydrate dissociation are not so simple. Moreover, gas production from a gas-hydrate reservoir by the depressurization method is a completely different phenomenon from dissociation processes in nature, because it does not generate excess pore pressure even though gas and water coexist. Hence, in all cases, the physical properties of gas-hydrate-bearing sediments and of their cover sediments are quite important for considering these phenomena and for simulating the processes at work during gas-hydrate dissociation. The Daini-Atsumi knoll, the first offshore gas-production test site from gas hydrate, is partially covered by slumps. Fortunately, one of them was penetrated by both a Logging-While-Drilling (LWD) hole and a pressure-coring hole. As a result of LWD data analyses and core analyses, we have characterized the density structure of the sediments from the seafloor to the Bottom Simulating Reflector (BSR). The results are as follows. (1) The semi-confined slump shows relatively high density, which would be explained by over-consolidation resulting from layer-parallel compression caused by slumping. (2) The bottom sequence of the slump has relatively high-density zones, which would be explained by shear-induced compaction along the slide plane. (3) Density below the slump tends to increase with depth; it is reasonable that sediments below the slump deposit have been compacting under normal consolidation. (4) Several kinds of log data for estimating the physical properties of the gas-hydrate reservoir sediments have been obtained; these will be useful for geological model construction from the seafloor to the BSR. We can use these results to build geological models not only for slope instability during slumping, but also for slope stability during the depressurized period of gas production from gas hydrate. Acknowledgement: This study was supported by funding from the Research Consortium for Methane Hydrate Resources in Japan (MH21 Research Consortium) planned by the Ministry of Economy, Trade and Industry (METI).
ERIC Educational Resources Information Center
Hao, Jiangang; Smith, Lawrence; Mislevy, Robert; von Davier, Alina; Bauer, Malcolm
2016-01-01
Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for…
Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei
2012-12-01
Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
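A minimal sketch of the log-Euclidean idea referred to above: map symmetric positive definite covariance matrices to their matrix logarithms and measure distance in that vector space. This is a generic formulation, not the authors' full subspace-learning or block-division algorithm.

```python
import numpy as np

def spd_log(c):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(c)
    return (v * np.log(w)) @ v.T

def log_euclidean_distance(c1, c2):
    """Log-Euclidean distance: Frobenius norm of the difference of matrix logs."""
    return np.linalg.norm(spd_log(c1) - spd_log(c2), ord="fro")

# Hypothetical 2x2 feature covariance matrices from two image regions
a = np.array([[2.0, 0.3], [0.3, 1.0]])
b = np.array([[1.5, 0.1], [0.1, 0.8]])
print(log_euclidean_distance(a, b))
```

Once mapped into the log domain, the matrices live in an ordinary vector space, which is what makes incremental subspace learning straightforward.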
Estimating the footprint of pollution on coral reefs with models of species turnover.
Brown, Christopher J; Hamilton, Richard J
2018-01-15
Ecological communities typically change along gradients of human impact, although it is difficult to estimate the footprint of impacts for diffuse threats such as pollution. We developed a joint model (i.e., one that includes multiple species and their interactions with each other and environmental covariates) of benthic habitats on lagoonal coral reefs and used it to infer change in benthic composition along a gradient of distance from logging operations. The model estimated both changes in abundances of benthic groups and their compositional turnover, a type of beta diversity. We used the model to predict the footprint of turbidity impacts from past and recent logging. Benthic communities far from logging were dominated by branching corals, whereas communities close to logging had higher cover of dead coral, massive corals, and soft sediment. Recent impacts were predicted to be small relative to the extensive impacts of past logging because recent logging has occurred far from lagoonal reefs. Our model can be used more generally to estimate the footprint of human impacts on ecosystems and evaluate the benefits of conservation actions for ecosystems. © 2018 Society for Conservation Biology.
Models for Gas Hydrate-Bearing Sediments Inferred from Hydraulic Permeability and Elastic Velocities
Lee, Myung W.
2008-01-01
Elastic velocities and hydraulic permeability of gas hydrate-bearing sediments strongly depend on how gas hydrate accumulates in pore spaces and various gas hydrate accumulation models are proposed to predict physical property changes due to gas hydrate concentrations. Elastic velocities and permeability predicted from a cementation model differ noticeably from those from a pore-filling model. A nuclear magnetic resonance (NMR) log provides in-situ water-filled porosity and hydraulic permeability of gas hydrate-bearing sediments. To test the two competing models, the NMR log along with conventional logs such as velocity and resistivity logs acquired at the Mallik 5L-38 well, Mackenzie Delta, Canada, were analyzed. When the clay content is less than about 12 percent, the NMR porosity is 'accurate' and the gas hydrate concentrations from the NMR log are comparable to those estimated from an electrical resistivity log. The variation of elastic velocities and relative permeability with respect to the gas hydrate concentration indicates that the dominant effect of gas hydrate in the pore space is the pore-filling characteristic.
NASA Astrophysics Data System (ADS)
Abreu-Vicente, J.; Kainulainen, J.; Stutz, A.; Henning, Th.; Beuther, H.
2015-09-01
We present the first study of the relationship between the column density distribution of molecular clouds within nearby Galactic spiral arms and their evolutionary status as measured from their stellar content. We analyze a sample of 195 molecular clouds located at distances below 5.5 kpc, identified from the ATLASGAL 870 μm data. We define three evolutionary classes within this sample: starless clumps, star-forming clouds with associated young stellar objects, and clouds associated with H ii regions. We find that the N(H2) probability density functions (N-PDFs) of these three classes of objects are clearly different: the N-PDFs of starless clumps are narrowest and close to log-normal in shape, while star-forming clouds and H ii regions exhibit a power-law shape over a wide range of column densities and log-normal-like components only at low column densities. We use the N-PDFs to estimate the evolutionary time-scales of the three classes of objects based on a simple analytic model from literature. Finally, we show that the integral of the N-PDFs, the dense gas mass fraction, depends on the total mass of the regions as measured by ATLASGAL: more massive clouds contain greater relative amounts of dense gas across all evolutionary classes. Appendices are available in electronic form at http://www.aanda.org
Hallifax, D; Houston, J B
2009-03-01
Mechanistic prediction of unbound drug clearance from human hepatic microsomes and hepatocytes correlates with in vivo clearance but is both systematically low (10 - 20 % of in vivo clearance) and highly variable, based on detailed assessments of published studies. Metabolic capacity (Vmax) of commercially available human hepatic microsomes and cryopreserved hepatocytes is log-normally distributed within wide (30 - 150-fold) ranges; Km is also log-normally distributed and effectively independent of Vmax, implying considerable variability in intrinsic clearance. Despite wide overlap, average capacity is 2 - 20-fold (dependent on P450 enzyme) greater in microsomes than hepatocytes, when both are normalised (scaled to whole liver). The in vitro ranges contrast with relatively narrow ranges of clearance among clinical studies. The high in vitro variation probably reflects unresolved phenotypical variability among liver donors and practicalities in processing of human liver into in vitro systems. A significant contribution from the latter is supported by evidence of low reproducibility (several fold) of activity in cryopreserved hepatocytes and microsomes prepared from the same cells, between separate occasions of thawing of cells from the same liver. The large uncertainty which exists in human hepatic in vitro systems appears to dominate the overall uncertainty of in vitro-in vivo extrapolation, including uncertainties within scaling, modelling and drug dependent effects. As such, any notion of quantitative prediction of clearance appears severely challenged.
Tang, G.; Yuan, F.; Bisht, G.; ...
2015-12-17
We explore coupling to a configurable subsurface reactive transport code as a flexible and extensible approach to biogeochemistry in land surface models; our goal is to facilitate testing of alternative models and incorporation of new understanding. A reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant uptake is used as an example. We implement the reactions in the open-source PFLOTRAN code, coupled with the Community Land Model (CLM), and test at Arctic, temperate, and tropical sites. To make the reaction network designed for use in explicit time stepping in CLM compatible with the implicit time stepping used in PFLOTRAN, the Monod substrate rate-limiting function with a residual concentration is used to represent the limitation of nitrogen availability on plant uptake and immobilization. To achieve accurate, efficient, and robust numerical solutions, care needs to be taken to use scaling, clipping, or log transformation to avoid negative concentrations during the Newton iterations. With a tight relative update tolerance to avoid false convergence, an accurate solution can be achieved with about 50% more computing time than CLM in point-mode site simulations using either the scaling or clipping methods. The log transformation method takes 60-100% more computing time than CLM. The computing time increases slightly for clipping and scaling; it increases substantially for log transformation as the half saturation decreases from 10⁻³ to 10⁻⁹ mol m⁻³, which normally results in decreasing nitrogen concentrations. The frequent occurrence of very low concentrations (e.g., below nanomolar) can increase the computing time for clipping or scaling by about 20%; computing time can be doubled for log transformation. Caution needs to be taken in choosing the appropriate scaling factor because a small value caused by a negative update to a small concentration may diminish the update and result in false convergence even with a very tight relative update tolerance. As some biogeochemical processes (e.g., methane and nitrous oxide production and consumption) involve very low half saturation and threshold concentrations, this work provides insights for addressing nonphysical negativity issues and facilitates the representation of a mechanistic biogeochemical description in Earth system models to reduce climate prediction uncertainty.
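A minimal sketch of a Monod rate-limiting term with a residual concentration, in the spirit of the regularization described above; the function name and parameter values are hypothetical, and the actual CLM-PFLOTRAN implementation differs.

```python
def monod_rate(conc, v_max, k_half, residual=1.0e-15):
    """Monod substrate limitation with a residual concentration.

    The residual keeps the effective substrate concentration non-negative during
    implicit (Newton) iterations, mimicking the regularization described above.
    """
    c_eff = max(conc - residual, 0.0)
    return v_max * c_eff / (k_half + c_eff)

# Hypothetical nitrogen uptake rate (mol m-3 s-1) at a very low concentration
print(monod_rate(conc=1.0e-6, v_max=1.0e-8, k_half=1.0e-3))
```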
Mota Navarro, Roberto; Larralde, Hernán
2017-01-01
We present an agent-based model of a single-asset financial market that is capable of replicating most of the non-trivial statistical properties observed in real financial markets, generically referred to as stylized facts. In our model agents employ strategies inspired by those used in real markets, and a realistic trade mechanism based on a double-auction order book. We study the role of the distinct types of trader on the return statistics: specifically, correlation properties (or lack thereof), volatility clustering, heavy tails, and the degree to which the distribution can be described by a log-normal. Further, by introducing the practice of "profit taking", our model is also capable of replicating the stylized fact related to an asymmetry in the distribution of losses and gains. PMID:28245251
Fleming, W.J.; Grue, C.E.
1981-01-01
The responses of brain and plasma cholinesterase (ChE) activities were examined in mallard ducks, bobwhite quail, barn owls, starlings, and common grackles given oral doses of dicrotophos, an organophosphorus insecticide. Up to an eightfold difference in response of brain ChE activity to dicrotophos was found among these species. Brain ChE activity recovered to within 2 SD of normal within 26 days after being depressed 55 to 64%. Recovery of brain ChE activity was similar among species and followed the model Y = a + b (log10X).
Multivariate regression model for predicting yields of grade lumber from yellow birch sawlogs
Andrew F. Howard; Daniel A. Yaussy
1986-01-01
A multivariate regression model was developed to predict green board-foot yields for the common grades of factory lumber processed from yellow birch factory-grade logs. The model incorporates the standard log measurements of scaling diameter, length, proportion of scalable defects, and the assigned USDA Forest Service log grade. Differences in yields between band and...
Computer analysis of digital well logs
Scott, James H.
1984-01-01
A comprehensive system of computer programs has been developed by the U.S. Geological Survey for analyzing digital well logs. The programs are operational on a minicomputer in a research well-logging truck, making it possible to analyze and replot the logs while at the field site. The minicomputer also serves as a controller of digitizers, counters, and recorders during acquisition of well logs. The analytical programs are coordinated with the data acquisition programs in a flexible system that allows the operator to make changes quickly and easily in program variables such as calibration coefficients, measurement units, and plotting scales. The programs are designed to analyze the following well-logging measurements: natural gamma-ray, neutron-neutron, dual-detector density with caliper, magnetic susceptibility, single-point resistance, self potential, resistivity (normal and Wenner configurations), induced polarization, temperature, sonic delta-t, and sonic amplitude. The computer programs are designed to make basic corrections for depth displacements, tool response characteristics, hole diameter, and borehole fluid effects (when applicable). Corrected well-log measurements are output to magnetic tape or plotter with measurement units transformed to petrophysical and chemical units of interest, such as grade of uranium mineralization in percent eU3O8, neutron porosity index in percent, and sonic velocity in kilometers per second.
A Bayesian Hybrid Adaptive Randomisation Design for Clinical Trials with Survival Outcomes.
Moatti, M; Chevret, S; Zohar, S; Rosenberger, W F
2016-01-01
Response-adaptive randomisation designs have been proposed to improve the efficiency of phase III randomised clinical trials and improve the outcomes of the clinical trial population. In the setting of failure time outcomes, Zhang and Rosenberger (2007) developed a response-adaptive randomisation approach that targets an optimal allocation, based on a fixed sample size. The aim of this research is to propose a response-adaptive randomisation procedure for survival trials with an interim monitoring plan, based on the following optimal criterion: for fixed variance of the estimated log hazard ratio, what allocation minimizes the expected hazard of failure? We demonstrate the utility of the design by redesigning a clinical trial on multiple myeloma. To handle continuous monitoring of data, we propose a Bayesian response-adaptive randomisation procedure, where the log hazard ratio is the effect measure of interest. Combining the prior with the normal likelihood, the mean posterior estimate of the log hazard ratio allows derivation of the optimal target allocation. We perform a simulation study to assess and compare the performance of this proposed Bayesian hybrid adaptive design to those of fixed, sequential or adaptive - either frequentist or fully Bayesian - designs. Non-informative normal priors for the log hazard ratio were used, as well as mixtures of enthusiastic and skeptical priors. Stopping rules based on the posterior distribution of the log hazard ratio were computed. The method is then illustrated by redesigning a phase III randomised clinical trial of chemotherapy in patients with multiple myeloma, with a mixture of normal priors elicited from experts. As expected, there was a reduction in the proportion of observed deaths in the adaptive vs. non-adaptive designs; this reduction was maximized using a Bayes mixture prior, with no clear-cut improvement from using a fully Bayesian procedure. The use of stopping rules allows a slight decrease in the observed proportion of deaths under the alternative hypothesis compared with the adaptive designs without stopping rules. Such Bayesian hybrid adaptive survival trials may be promising alternatives to traditional designs, reducing the duration of survival trials and optimizing the ethical concerns for patients enrolled in the trial.
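A minimal sketch of the conjugate normal update for the log hazard ratio that underlies the Bayesian allocation rule described above; the prior and interim estimates are hypothetical, and deriving the optimal allocation from the posterior mean is not shown.

```python
import math

def posterior_log_hr(prior_mean, prior_var, obs_log_hr, obs_var):
    """Conjugate normal update for the log hazard ratio.

    Combines a normal prior with a normal likelihood summarized by the
    estimated log hazard ratio and its variance; returns posterior mean/variance.
    """
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs_log_hr / obs_var)
    return post_mean, post_var

# Hypothetical interim data: estimated log HR of -0.3 with variance 0.04,
# under a vague (near non-informative) prior
mean, var = posterior_log_hr(0.0, 100.0, -0.3, 0.04)
print(mean, math.sqrt(var))
```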
NASA Astrophysics Data System (ADS)
Abanov, Ar.; Chubukov, Andrey V.; Schmalian, J.
2003-03-01
We present the full analysis of the normal state properties of the spin-fermion model near the antiferromagnetic instability in two dimensions. The model describes low-energy fermions interacting with their own collective spin fluctuations, which soften at the antiferromagnetic transition. We argue that in 2D, the system has two typical energies - an effective spin-fermion interaction ḡ and an energy ω_sf below which the system behaves as a Fermi liquid. The ratio of the two determines the dimensionless coupling constant for spin-fermion interaction, λ² ∝ ḡ/ω_sf. We show that λ scales with the spin correlation length and diverges at criticality. This divergence implies that the conventional perturbative expansion breaks down. We develop a novel approach to the problem - the expansion in either the inverse number of hot spots in the Brillouin zone, or the inverse number of fermionic flavours - which allows us to explicitly account for all terms which diverge as powers of λ, and treat the remaining, O(log λ) terms in the RG formalism. We apply this technique to study the properties of the spin-fermion model in various frequency and temperature regimes. We present the results for the fermionic spectral function, spin susceptibility, optical conductivity and other observables. We compare our results in detail with the normal state data for the cuprates, and argue that the spin-fermion model is capable of explaining the anomalous normal state properties of the high Tc materials. We also show that the conventional φ⁴ theory of the quantum-critical behaviour is inapplicable in 2D due to the singularity of the φ⁴ vertex.
NASA Astrophysics Data System (ADS)
Asfahani, Jamal
2017-08-01
An alternative approach using nuclear neutron-porosity and electrical resistivity well logging with long (64 inch) and short (16 inch) normal techniques is proposed to estimate the porosity and the hydraulic conductivity (K) of the basaltic aquifers in Southern Syria. This method is applied to the available logs of the Kodana well in Southern Syria. The K value obtained by applying this technique appears reasonable and is comparable with the hydraulic conductivity value of 3.09 m/day obtained by the pumping test carried out at the Kodana well. The proposed alternative well-logging methodology seems promising and could be applied in basaltic environments for the estimation of the hydraulic conductivity parameter. However, more detailed research is still required to establish the performance of this proposed technique in basaltic environments.
Comparison of various techniques for calibration of AIS data
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Yamaguchi, Y.; Lyon, R. J. P.
1986-01-01
The Airborne Imaging Spectrometer (AIS) samples a spectral region which is strongly influenced by decreasing solar irradiance at longer wavelengths and strong atmospheric absorptions. Four techniques - the Log Residual, the Least Upper Bound Residual, the Flat Field Correction and calibration using field reflectance measurements - were investigated as means for removing these two features. Of the four techniques, field reflectance calibration proved to be superior in terms of noise and normalization. Of the other three techniques, the Log Residual was superior when applied to areas which did not contain one dominant cover type. In heavily vegetated areas, the Log Residual proved to be ineffective. After removing anomalously bright data values, the Least Upper Bound Residual proved to be almost as effective as the Log Residual in sparsely vegetated areas and much more effective in heavily vegetated areas. Of all the techniques, the Flat Field Correction was the noisiest.
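A minimal sketch of one common formulation of the Log Residual normalization (subtracting band-wise and pixel-wise means of the log data); the original AIS processing details may differ, and the data cube here is random.

```python
import numpy as np

def log_residual(dn):
    """Log Residual normalization of a (pixels x bands) slice of an imaging-spectrometer cube.

    Subtracts the band-wise and pixel-wise means of the log values, removing
    multiplicative illumination/topographic and solar-irradiance terms
    (a standard formulation; not necessarily the exact AIS implementation).
    """
    log_dn = np.log(dn.astype(float))
    band_mean = log_dn.mean(axis=0, keepdims=True)   # per-band mean over pixels
    pixel_mean = log_dn.mean(axis=1, keepdims=True)  # per-pixel mean over bands
    return log_dn - band_mean - pixel_mean + log_dn.mean()

dn = np.random.default_rng(3).integers(50, 4000, size=(100, 32))
print(log_residual(dn).shape)
```

The implicit assumption that scene-average reflectance is spectrally flat is what makes the method break down over heavily vegetated areas dominated by one cover type.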
Viscosity and transient electric birefringence study of clay colloidal aggregation.
Bakk, Audun; Fossum, Jon O; da Silva, Geraldo J; Adland, Hans M; Mikkelsen, Arne; Elgsaeter, Arnljot
2002-02-01
We study a synthetic clay suspension of laponite at different particle and NaCl concentrations by measuring stationary shear viscosity and transient electrically induced birefringence (TEB). On one hand, the viscosity data are consistent with the particles being spheres associated with a large amount of bound water. On the other hand, the viscosity data are also consistent with the particles being asymmetric, consistent with single laponite platelets associated with very few monolayers of water. We analyze the TEB data by employing two different models of the aggregate size (effective hydrodynamic radius) distribution: (1) a bidisperse model and (2) a log-normal distributed model. Both models fit the experimental TEB data fairly well, and in much the same manner, and they indicate that the suspension consists of polydisperse particles. The models also appear to confirm that the aggregates increase in size with increasing ionic strength. The smallest particles at low salt concentrations seem to be monomers and oligomers.
NASA Astrophysics Data System (ADS)
Wang, Yu; Liu, Qun
2013-01-01
Surplus-production models are widely used in fish stock assessment and fisheries management due to their simplicity and lower data demands than age-structured models such as Virtual Population Analysis. The CEDA (catch-effort data analysis) and ASPIC (a surplus-production model incorporating covariates) computer packages are data-fitting or parameter estimation tools that have been developed to analyze catch-and-effort data using non-equilibrium surplus production models. We applied CEDA and ASPIC to the hairtail ( Trichiurus japonicus) fishery in the East China Sea. Both packages produced robust results and yielded similar estimates. In CEDA, the Schaefer surplus production model with log-normal error assumption produced results close to those of ASPIC. CEDA is sensitive to the choice of initial proportion, while ASPIC is not. However, CEDA produced higher R 2 values than ASPIC.
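For reference, the Schaefer surplus-production dynamics fitted by both packages can be written as a one-line recursion; the sketch below uses hypothetical parameters and catches, not the hairtail data.

```python
def schaefer_biomass(b0, r, k, catches):
    """Project biomass under the Schaefer surplus-production model:
    B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]."""
    biomass = [b0]
    for c in catches:
        b = biomass[-1]
        biomass.append(max(b + r * b * (1.0 - b / k) - c, 0.0))
    return biomass

# Hypothetical starting biomass, intrinsic growth rate, carrying capacity,
# and a short catch series (arbitrary units)
print(schaefer_biomass(b0=800.0, r=0.5, k=1000.0, catches=[120.0, 150.0, 180.0, 160.0]))
```

Fitting packages such as CEDA and ASPIC estimate r, K and catchability by matching these projected biomasses to observed catch-and-effort data under an assumed error structure (e.g., log-normal).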
When is hardwood cable logging economical?
Chris B. LeDoux
1985-01-01
Using cable logging to harvest eastern hardwood logs on steep terrain can result in low production rates and high costs per unit of wood produced. Logging managers can improve productivity and profitability by knowing how the interaction of site-specific variables and cable logging equipment affect costs and revenues. Data from selected field studies and forest model...
Mathematical model of a smoldering log.
Fernando de Souza Costa; David Sandberg
2004-01-01
A mathematical model is developed describing the natural smoldering of logs. The steady, one-dimensional propagation of infinitesimally thin fronts of drying, pyrolysis, and char oxidation in a horizontal semi-infinite log is considered. Expressions for the burn rates, distribution profiles of temperature, and positions of the drying, pyrolysis, and smoldering fronts...
Long-term effects of retinopathy of prematurity (ROP) on rod and rod-driven function.
Harris, Maureen E; Moskowitz, Anne; Fulton, Anne B; Hansen, Ronald M
2011-02-01
The purpose of this study was to determine whether recovery of scotopic sensitivity occurs in human ROP, as it does in rat models of ROP. Following a cross-sectional design, scotopic electroretinographic (ERG) responses to full-field stimuli were recorded from 85 subjects with a history of preterm birth. In 39 of these subjects, dark adapted visual threshold was also measured. Subjects were tested post-term as infants (median age 2.5 months) or at older ages (median age 10.5 years) and stratified by severity of ROP: severe, mild, or none. Rod photoreceptor sensitivity, S_ROD, was derived from the a-wave, and post-receptor sensitivity, log σ, was calculated from the b-wave stimulus-response function. Dark adapted visual threshold was measured using a forced-choice preferential looking procedure. For S_ROD, the deficit from normal for age varied significantly with ROP severity but not with age group. For log σ, in mild ROP, the deficit was smaller in older subjects than in infants, while in severe ROP, the deficit was quite large in both age groups. In subjects who never had ROP, S_ROD and log σ in both age groups were similar to those in term-born controls. Deficits in dark adapted threshold and log σ were correlated in mild but not in severe ROP. The data are evidence that sensitivity of the post-receptor retina improves in those with a history of mild ROP. We speculate that beneficial reorganization of the post-receptor neural circuitry occurs in mild but not in severe ROP.
Blakely, William F; Bolduc, David L; Debad, Jeff; Sigal, George; Port, Matthias; Abend, Michael; Valente, Marco; Drouet, Michel; Hérodin, Francis
2018-07-01
Use of plasma proteomic and hematological biomarkers represents a promising approach to provide useful diagnostic information for assessment of the severity of hematopoietic acute radiation syndrome. Eighteen baboons were evaluated in a radiation model that underwent total-body and partial-body irradiations at doses of Co gamma rays from 2.5 to 15 Gy at dose rates of 6.25 cGy min(-1) and 32 cGy min(-1). Hematopoietic acute radiation syndrome severity levels determined by an analysis of blood count changes measured up to 60 d after irradiation were used to gauge overall hematopoietic acute radiation syndrome severity classifications. A panel of protein biomarkers was measured on plasma samples collected at 0 to 28 d after exposure using electrochemiluminescence-detection technology. The database was split into two distinct groups (i.e., "calibration," n = 11; "validation," n = 7). The calibration database was used in an initial stepwise regression multivariate model-fitting approach followed by down selection of biomarkers for identification of subpanels of hematopoietic acute radiation syndrome-responsive biomarkers for three time windows (i.e., 0-2 d, 2-7 d, 7-28 d). Model 1 (0-2 d) includes log C-reactive protein (p < 0.0001), log interleukin-13 (p < 0.0054), and procalcitonin (p < 0.0316) biomarkers; model 2 (2-7 d) includes log CD27 (p < 0.0001), log FMS-related tyrosine kinase 3 ligand (p < 0.0001), log serum amyloid A (p < 0.0007), and log interleukin-6 (p < 0.0002); and model 3 (7-28 d) includes log CD27 (p < 0.0012), log serum amyloid A (p < 0.0002), log erythropoietin (p < 0.0001), and log CD177 (p < 0.0001). The predicted risk of radiation injury categorization values, representing the hematopoietic acute radiation syndrome severity outcome for the three models, produced least squares multiple regression fit confidences of R = 0.73, 0.82, and 0.75, respectively. The resultant algorithms support the proof of concept that plasma proteomic biomarkers can supplement clinical signs and symptoms to assess hematopoietic acute radiation syndrome risk severity.
Prediction of Nonalcoholic Fatty Liver Disease Via a Novel Panel of Serum Adipokines
Jamali, Raika; Arj, Abbas; Razavizade, Mohsen; Aarabi, Mohammad Hossein
2016-01-01
Considering limitations of liver biopsy for diagnosis of nonalcoholic fatty liver disease (NAFLD), panels of serum biomarkers have been proposed. The aims of this study were to establish models based on serum adipokines for discriminating NAFLD from healthy individuals and nonalcoholic steatohepatitis (NASH) from simple steatosis. This case-control study was conducted in patients with persistently elevated serum aminotransferase levels and fatty liver on ultrasound. Individuals with evidence of alcohol consumption, hepatotoxic medication, viral hepatitis, and known liver disease were excluded. Liver biopsy was performed in the remaining patients to distinguish NAFLD/NASH. Histologic findings were interpreted using the "nonalcoholic fatty liver activity score." The control group consisted of healthy volunteers with normal physical examination, liver function tests, and liver ultrasound. Binary logistic regression analysis was applied to ascertain the effects of independent variables on the likelihood that participants have NAFLD/NASH. Decreased serum adiponectin and elevated serum visfatin, IL-6, and TNF-α were associated with an increased likelihood of exhibiting NAFLD. The NAFLD discriminant score was developed as follows: [(−0.298 × adiponectin) + (0.022 × TNF-α) + (1.021 × Log visfatin) + (0.709 × Log IL-6) + 1.154]. With the NAFLD discriminant score, 86.4% of the original grouped cases were correctly classified. A discriminant score threshold value of −0.29 yielded a sensitivity and specificity of 91% and 83%, respectively, for discriminating NAFLD from healthy controls. Decreased serum adiponectin and elevated serum visfatin, IL-8, and TNF-α were correlated with an increased probability of NASH. The NASH discriminant score was proposed as follows: [(−0.091 × adiponectin) + (0.044 × TNF-α) + (1.017 × Log visfatin) + (0.028 × Log IL-8) − 1.787]. In the NASH model, 84% of the original cases were correctly classified. A discriminant score threshold value of −0.22 yielded a sensitivity and specificity of 90% and 66%, respectively, for separating NASH from simple steatosis. New discriminant scores were introduced for differentiating NAFLD/NASH patients with high accuracy. If verified by future studies, application of the suggested models for screening of NAFLD/NASH seems reasonable. PMID:26844476
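As an illustration of how the two discriminant scores above could be applied, the following sketch evaluates them in Python. The log base (assumed base-10 here), the measurement units of each adipokine, and the direction in which the cut-offs are compared are assumptions not fully specified in the abstract.

```python
import math

def nafld_score(adiponectin, tnf_a, visfatin, il6):
    """NAFLD discriminant score using the abstract's coefficients.

    Log base (assumed base-10) and measurement units are assumptions.
    """
    return (-0.298 * adiponectin + 0.022 * tnf_a
            + 1.021 * math.log10(visfatin) + 0.709 * math.log10(il6) + 1.154)

def nash_score(adiponectin, tnf_a, visfatin, il8):
    """NASH discriminant score; same caveats on log base and units."""
    return (-0.091 * adiponectin + 0.044 * tnf_a
            + 1.017 * math.log10(visfatin) + 0.028 * math.log10(il8) - 1.787)

# Reported cut-offs were -0.29 (NAFLD vs. healthy) and -0.22 (NASH vs. simple
# steatosis); whether the disease group lies above or below the cut-off is an
# assumption here, taken as "above".
score = nafld_score(adiponectin=5.0, tnf_a=12.0, visfatin=4.0, il6=3.0)
print(score, score > -0.29)
```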
Taslimitehrani, Vahid; Dong, Guozhu; Pereira, Naveen L; Panahiazar, Maryam; Pathak, Jyotishman
2016-04-01
Computerized survival prediction in healthcare, which identifies the risk of disease mortality, helps healthcare providers manage their patients effectively by providing appropriate treatment options. In this study, we propose to apply a classification algorithm, Contrast Pattern Aided Logistic Regression (CPXR(Log)) with a probabilistic loss function, to develop and validate prognostic risk models to predict 1-, 2-, and 5-year survival in heart failure (HF) using data from electronic health records (EHRs) at Mayo Clinic. CPXR(Log) constructs a pattern-aided logistic regression model defined by several patterns and corresponding local logistic regression models. One of the models generated by CPXR(Log) achieved an AUC and accuracy of 0.94 and 0.91, respectively, and significantly outperformed prognostic models reported in prior studies. Data extracted from EHRs allowed incorporation of patient co-morbidities into our models, which helped improve the performance of the CPXR(Log) models (15.9% AUC improvement), although it did not improve the accuracy of the models built by other classifiers. We also propose a probabilistic loss function to determine the large-error and small-error instances. The new loss function used in the algorithm outperforms the functions used in previous studies by a 1% improvement in AUC. This study revealed that using EHR data to build prediction models can be very challenging with existing classification methods due to the high dimensionality and complexity of EHR data. The risk models developed by CPXR(Log) also reveal that HF is a highly heterogeneous disease, i.e., different subgroups of HF patients require different types of considerations in their diagnosis and treatment. Our risk models provided two valuable insights for application of predictive modeling techniques in biomedicine: logistic risk models often make systematic prediction errors, and it is prudent to use subgroup-based prediction models such as those given by CPXR(Log) when investigating heterogeneous diseases. Copyright © 2016 Elsevier Inc. All rights reserved.
Mixture EMOS model for calibrating ensemble forecasts of wind speed.
Baran, S; Lerch, S
2016-03-01
Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and improved accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics Published by John Wiley & Sons Ltd.
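A minimal sketch of the kind of predictive density described above, i.e., a weighted mixture of a zero-truncated normal and a log-normal component. The link between the ensemble statistics and the distribution parameters, and the scoring-rule estimation over a rolling training window, are omitted; the parameter values below are placeholders.

```python
import numpy as np
from scipy import stats

def mixture_emos_pdf(x, w, mu_tn, sigma_tn, mu_ln, sigma_ln):
    """Predictive density as a weighted mixture of a normal truncated at zero
    (wind speed >= 0) and a log-normal component. Illustrative parameterization
    only; in EMOS these parameters would be functions of the ensemble."""
    a = (0.0 - mu_tn) / sigma_tn                     # lower truncation point in standard units
    tn = stats.truncnorm(a, np.inf, loc=mu_tn, scale=sigma_tn)
    ln = stats.lognorm(s=sigma_ln, scale=np.exp(mu_ln))
    return w * tn.pdf(x) + (1.0 - w) * ln.pdf(x)

x = np.linspace(0.01, 25.0, 500)
pdf = mixture_emos_pdf(x, w=0.6, mu_tn=6.0, sigma_tn=2.5, mu_ln=1.7, sigma_ln=0.4)
print(np.trapz(pdf, x))   # should integrate to approximately 1
```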
A hybrid probabilistic/spectral model of scalar mixing
NASA Astrophysics Data System (ADS)
Vaithianathan, T.; Collins, Lance
2002-11-01
In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure to the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent ``transfer'' while scalar exchanges between particles represent ``mixing.'' The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (or EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts the scalar gradient distribution (which is available in this representation) approaches log-normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.
Ultraviolet-C Light for Treatment of Candida albicans Burn Infection in Mice
Dai, Tianhong; Kharkwal, Gitika B; Zhao, Jie; St. Denis, Tyler G; Wu, Qiuhe; Xia, Yumin; Huang, Liyi; Sharma, Sulbha K; d’Enfert, Christophe; Hamblin, Michael R
2011-01-01
Burn patients are at high risk of invasive fungal infections, which are a leading cause of morbidity, mortality, and related expense, exacerbated by the emergence of drug-resistant fungal strains. In this study, we investigated the use of UVC light (254 nm) for the treatment of Candida albicans infection in mouse third-degree burns. In-vitro studies demonstrated that UVC could selectively kill the pathogenic yeast C. albicans compared to a normal keratinocyte cell line in a light-exposure-dependent manner. A mouse model of chronic C. albicans infection in non-lethal third-degree burns was developed. The C. albicans strain was stably transformed with a version of the Gaussia princeps luciferase gene that allowed real-time bioluminescence imaging of the progression of C. albicans infection. UVC treatment with a single exposure carried out on day 0 (30 minutes post-infection) gave an average 2.16-log10-unit (99.2%) loss of fungal luminescence when 2.92 J/cm2 UVC had been delivered, while UVC 24 hours post-infection gave a 1.94-log10-unit (95.8%) reduction of fungal luminescence after 6.48 J/cm2. Statistical analysis demonstrated that UVC treatment carried out on both day 0 and day 1 significantly reduced the fungal bioburden of infected burns. UVC was found to be superior to a topical antifungal drug, nystatin cream. UVC was tested on normal mouse skin and no gross damage was observed 24 hours after 6.48 J/cm2. DNA lesions (cyclobutane pyrimidine dimers) were observed by immunofluorescence in normal mouse skin immediately after a 6.48 J/cm2 UVC exposure, but the lesions were extensively repaired at 24 hours after UVC exposure. PMID:21208209
De Palma, Rodney; Ivarsson, John; Feldt, Kari; Saleh, Nawzad; Ruck, Andreas; Linder, Rikard; Settergren, Magnus
Increased mortality has been observed in those with cardiovascular diseases who are of normal body mass index (BMI) compared to the overweight and the obese. A similar association has been demonstrated in patients undergoing transcatheter aortic valve implantation (TAVI). However, it remains unclear whether low or normal BMI itself is unfavourable or whether this is merely a reflection of cardiac cachexia due to severe aortic stenosis. The hypothesis for the study was that weight change prior to TAVI may be associated with increased mortality following the procedure. This was a single-centre retrospective analysis using the SWEDEHEART registry, national mortality statistics and a local hospital database. Body mass index was used as the anthropomorphic measurement, and patients were grouped by WHO categories and by weight-change trajectory before and at TAVI. Kaplan-Meier survival curves were constructed and a Cox proportional hazards model was used to evaluate predictors of outcome. Consecutive data on 493 patients with three-year follow-up between 2008-2015 were evaluated. Overweight and obese body mass index categories (BMI>25) were associated with lower mortality compared to normal and underweight patients (BMI<25) (log rank p=0.02), hazard ratio of 0.68 (0.50-0.93). A weight-loss trajectory was associated with increased mortality compared to stable weight (log rank p=0.01), hazard ratio 1.64, p=0.025. The pre-procedural weight trajectory of patients undergoing TAVI is an important predictor of clinical outcome after TAVI. Patients with stable weight trajectories had lower mortality than those with decreasing weight. Copyright © 2017 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
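A hedged sketch of the type of survival analysis described above (a log-rank comparison of BMI groups and a Cox proportional hazards model), assuming the Python lifelines package is acceptable for illustration; the data frame and its column names are hypothetical placeholders, not the SWEDEHEART variables.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort: follow-up time (years, capped at 3), death indicator,
# BMI >= 25 flag, and a weight-loss-trajectory flag.
rng = np.random.default_rng(0)
n = 493
df = pd.DataFrame({
    "time": rng.exponential(2.0, n).clip(max=3.0),
    "death": rng.integers(0, 2, n),
    "bmi_ge_25": rng.integers(0, 2, n),
    "weight_loss": rng.integers(0, 2, n),
})

# Log-rank test comparing BMI >= 25 vs. BMI < 25 groups, as in the abstract
high, low = df[df.bmi_ge_25 == 1], df[df.bmi_ge_25 == 0]
res = logrank_test(high.time, low.time,
                   event_observed_A=high.death, event_observed_B=low.death)
print("log-rank p =", res.p_value)

# Cox proportional hazards model for predictors of mortality
cph = CoxPHFitter().fit(df, duration_col="time", event_col="death")
print(cph.summary[["exp(coef)", "p"]])   # hazard ratios and p-values
```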
Daily Magnesium Intake and Serum Magnesium Concentration among Japanese People
Akizawa, Yoriko; Koizumi, Sadayuki; Itokawa, Yoshinori; Ojima, Toshiyuki; Nakamura, Yosikazu; Tamura, Tarou; Kusaka, Yukinori
2008-01-01
Background The vitamins and minerals that are deficient in the daily diet of a normal adult remain unknown. To answer this question, we conducted a population survey focusing on the relationship between dietary magnesium intake and serum magnesium level. Methods The subjects were 62 individuals from Fukui Prefecture who participated in the 1998 National Nutrition Survey. The survey investigated the physical status, nutritional status, and dietary data of the subjects. Holidays and special occasions were avoided, and a day when people are most likely to be on an ordinary diet was selected as the survey date. Results The mean (±standard deviation) daily magnesium intake was 322 (±132), 323 (±163), and 322 (±147) mg/day for men, women, and the entire group, respectively. The mean (±standard deviation) serum magnesium concentration was 20.69 (±2.83), 20.69 (±2.88), and 20.69 (±2.83) ppm for men, women, and the entire group, respectively. The distribution of serum magnesium concentration was normal. Dietary magnesium intake showed a log-normal distribution, which was then transformed by logarithmic conversion for examining the regression coefficients. The slope of the regression line between the serum magnesium concentration (Y ppm) and daily magnesium intake (X mg) was determined using the formula Y = 4.93 (log10X) + 8.49. The coefficient of correlation (r) was 0.29. A regression line (Y = 14.65X + 19.31) was observed between the daily intake of magnesium (Y mg) and serum magnesium concentration (X ppm). The coefficient of correlation was 0.28. Conclusion The daily magnesium intake correlated with serum magnesium concentration, and a linear regression model between them was proposed. PMID:18635902
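A short sketch of the log-transformed regression described above: serum magnesium is regressed on log10 of daily intake, mirroring the model Y = a·log10(X) + b. The intake and serum values below are hypothetical illustrations, not the survey data.

```python
import numpy as np
from scipy import stats

# Hypothetical intake (mg/day) and serum Mg (ppm) values for illustration only.
intake = np.array([180, 220, 260, 300, 320, 360, 410, 480, 550, 640], dtype=float)
serum = np.array([19.6, 20.0, 20.3, 20.6, 20.7, 20.9, 21.1, 21.4, 21.7, 22.0])

# Because intake is roughly log-normal, regress serum Mg on log10(intake).
slope, intercept, r, p, se = stats.linregress(np.log10(intake), serum)
print(f"Y = {slope:.2f}*log10(X) + {intercept:.2f}, r = {r:.2f}")
```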
Singh, Minerva; Evans, Damian; Coomes, David A.; Friess, Daniel A.; Suy Tan, Boun; Samean Nin, Chan
2016-01-01
This research examines the role of canopy cover in influencing above ground biomass (AGB) dynamics of an open canopied forest and evaluates the efficacy of individual-based and plot-scale height metrics in predicting AGB variation in the tropical forests of Angkor Thom, Cambodia. The AGB was modeled by including canopy cover from aerial imagery alongside with the two different canopy vertical height metrics derived from LiDAR; the plot average of maximum tree height (Max_CH) of individual trees, and the top of the canopy height (TCH). Two different statistical approaches, log-log ordinary least squares (OLS) and support vector regression (SVR), were used to model AGB variation in the study area. Ten different AGB models were developed using different combinations of airborne predictor variables. It was discovered that the inclusion of canopy cover estimates considerably improved the performance of AGB models for our study area. The most robust model was log-log OLS model comprising of canopy cover only (r = 0.87; RMSE = 42.8 Mg/ha). Other models that approximated field AGB closely included both Max_CH and canopy cover (r = 0.86, RMSE = 44.2 Mg/ha for SVR; and, r = 0.84, RMSE = 47.7 Mg/ha for log-log OLS). Hence, canopy cover should be included when modeling the AGB of open-canopied tropical forests. PMID:27176218
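A minimal sketch of the log-log OLS approach named above, fitting ln(AGB) against ln(canopy cover); the plot-level values are hypothetical, and the back-transformation bias correction often applied to log-log biomass models is noted but omitted.

```python
import numpy as np

# Hypothetical plot-level data: canopy cover (%) and field AGB (Mg/ha).
cover = np.array([35, 42, 55, 60, 68, 72, 80, 85, 90, 95], dtype=float)
agb = np.array([60, 75, 110, 130, 160, 175, 210, 230, 260, 290], dtype=float)

# log-log OLS: ln(AGB) = b0 + b1 * ln(cover). A back-transformation bias
# correction (e.g., Baskerville-type) would normally be applied; omitted here.
b1, b0 = np.polyfit(np.log(cover), np.log(agb), 1)
pred = np.exp(b0 + b1 * np.log(cover))
rmse = np.sqrt(np.mean((agb - pred) ** 2))
print(f"ln(AGB) = {b0:.2f} + {b1:.2f} ln(cover),  RMSE = {rmse:.1f} Mg/ha")
```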
Gu, Jiaojiao; Jing, Lulu; Ma, Xiaotao; Zhang, Zhaofeng; Guo, Qianying; Li, Yong
2015-12-01
The present study aimed to explore the metabolic response to oat bran consumption in dyslipidemic rats by a high-throughput metabolomics approach. Four groups of Sprague-Dawley rats were used: N group (normal chow diet), M group (dyslipidemia induced by 4-week high-fat feeding, then normal chow diet), and OL and OH groups (dyslipidemia induced, then normal chow diet supplemented with 10.8% or 43.4% naked oat bran). The intervention lasted for 12 weeks. Gas chromatography quadrupole time-of-flight mass spectrometry was used to identify serum metabolite profiles. Results confirmed the effects of oat bran on improving lipidemic variables and showed distinct metabolomic profiles associated with the diet intervention. A number of endogenous molecules were changed by the high-fat diet and normalized following supplementation with naked oat bran. Elevated levels of serum unsaturated fatty acids including arachidonic acid (log2 fold change=0.70, P=.02, OH vs. M group), palmitoleic acid (log2 fold change=1.24, P=.02, OH vs. M group) and oleic acid (log2 fold change=0.66, P=.04, OH vs. M group) were detected after oat bran consumption. Furthermore, consumption of oat bran was also characterized by higher levels of methionine and S-adenosylmethionine. Pathway exploration found that most of the discriminant metabolites were involved in fatty acid biosynthesis, biosynthesis and metabolism of amino acids, microbial metabolism in diverse environments, and biosynthesis of plant secondary metabolites. These results point to potential biomarkers and the underlying benefit of naked oat bran in the context of diet-induced dyslipidemia and offer some insights for exploring the underlying mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
The letter contrast sensitivity test: clinical evaluation of a new design.
Haymes, Sharon A; Roberts, Kenneth F; Cruess, Alan F; Nicolela, Marcelo T; LeBlanc, Raymond P; Ramsey, Michael S; Chauhan, Balwantray C; Artes, Paul H
2006-06-01
To compare the reliability, validity, and responsiveness of the Mars Letter Contrast Sensitivity (CS) Test to the Pelli-Robson CS Chart. One eye of 47 normal control subjects, 27 patients with open-angle glaucoma, and 17 with age-related macular degeneration (AMD) was tested twice with the Mars test and twice with the Pelli-Robson test, in random order on separate days. In addition, 17 patients undergoing cataract surgery were tested, once before and once after surgery. The mean Mars CS was 1.62 log CS (0.06 SD) for normal subjects aged 22 to 77 years, with significantly lower values in patients with glaucoma or AMD (P<0.001). Mars test-retest 95% limits of agreement (LOA) were +/-0.13, +/-0.19, and +/-0.24 log CS for normal, glaucoma, and AMD, respectively. In comparison, Pelli-Robson test-retest 95% LOA were +/-0.18, +/-0.19, and +/-0.33 log CS. The Spearman correlation between the Mars and Pelli-Robson tests was 0.83 (P<0.001). However, systematic differences were observed, particularly at the upper-normal end of the range, where Mars CS was lower than Pelli-Robson CS. After cataract surgery, Mars and Pelli-Robson effect size statistics were 0.92 and 0.88, respectively. The results indicate the Mars test has test-retest reliability equal to or better than the Pelli-Robson test and comparable responsiveness. The strong correlation between the tests provides evidence the Mars test is valid. However, systematic differences indicate normative values are likely to be different for each test. The Mars Letter CS Test is a useful and practical alternative to the Pelli-Robson CS Chart.
Chen, Juan; Chen, Hao; Zhang, Xing-wen; Lei, Kun; Kenny, Jonathan E
2015-11-01
A fluorescence quenching model using a copper(II) (Cu(2+)) ion-selective electrode (Cu-ISE) is developed. It uses parallel factor analysis (PARAFAC) to model fluorescence excitation-emission matrices (EEMs) of humic acid (HA) samples titrated with Cu(2+) to resolve the fluorescence response of fluorescent components to the Cu(2+) titration. Meanwhile, the Cu-ISE is employed to monitor the free Cu(2+) concentration ([Cu]) at each titration step. The fluorescence response of each component is fit individually to a nonlinear function of [Cu] to find the Cu(2+) conditional stability constant for that component. This approach differs from other fluorescence quenching models, including the most up-to-date multi-response model, which makes a problematic assumption about Cu(2+) speciation, i.e., that the total Cu(2+) present in samples is the sum of [Cu] and Cu(2+) bound by fluorescent components, without taking into consideration the contribution of non-fluorescent organic ligands and inorganic ligands to the speciation of Cu(2+). This paper employs the new approach to investigate Cu(2+) binding by Pahokee peat HA (PPHA) at pH values of 6.0, 7.0, and 8.0, buffered by phosphate or without buffer. Two fluorescent components (C1 and C2) were identified by PARAFAC. For the new quenching model, the conditional stability constants (logK1 and logK2) of the two components all increased with increasing pH. In buffered solutions, the new quenching model reported logK1 = 7.11, 7.89, 8.04 for C1 and logK2 = 7.04, 7.64, 8.11 for C2 at pH 6.0, 7.0, and 8.0, respectively, nearly two log units higher than the results of the multi-response model. Without buffer, logK1 and logK2 decreased but were still high (>7) at pH 8.0 (logK1 = 7.54, logK2 = 7.95), and all the values were at least 0.5 log unit higher than those (4.83 ~ 5.55) of the multi-response model. These observations indicate that the new quenching model is intrinsically more sensitive than the multi-response model in revealing strong fluorescent binding sites of PPHA under different experimental conditions. The new model was validated by testing it with a mixture of two fluorescing Cu(2+)-chelating organic compounds, l-tryptophan and salicylic acid, mixed with one non-fluorescent binding compound, oxalic acid, titrated with Cu(2+) at pH 5.0.
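The sketch below illustrates the general idea of fitting one PARAFAC component's fluorescence response to a nonlinear function of the ISE-measured free Cu(2+). A simple 1:1 binding isotherm is assumed here because the paper's exact fitting function is not reproduced in the abstract, and the titration values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def quench_1to1(free_cu, f0, fml, logK):
    """Assumed 1:1 binding isotherm: fluorescence shifts from f0 (free ligand)
    to fml (Cu-bound ligand) as a function of free Cu2+. This functional form
    is an assumption standing in for the paper's fitting function."""
    K = 10.0 ** logK
    frac_bound = K * free_cu / (1.0 + K * free_cu)
    return f0 * (1.0 - frac_bound) + fml * frac_bound

# Hypothetical titration: free [Cu2+] from the Cu-ISE (mol/L) and one
# PARAFAC component's fluorescence score at each step.
free_cu = np.array([1e-9, 3e-9, 1e-8, 3e-8, 1e-7, 3e-7, 1e-6])
score = np.array([100.0, 96.0, 88.0, 74.0, 55.0, 42.0, 36.0])

popt, _ = curve_fit(quench_1to1, free_cu, score, p0=[100.0, 30.0, 7.0])
print(f"fitted logK = {popt[2]:.2f}")
```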
A Multi-temporal Analysis of Logging Impacts on Tropical Forest Structure Using Airborne Lidar Data
NASA Astrophysics Data System (ADS)
Keller, M. M.; Pinagé, E. R.; Duffy, P.; Longo, M.; dos-Santos, M. N.; Leitold, V.; Morton, D. C.
2017-12-01
The long-term impacts of selective logging on carbon cycling and ecosystem function in tropical forests are still uncertain. Despite improvements in selective logging detection using satellite data, quantifying changes in forest structure from logging and recovery following logging is difficult using orbital data. We analyzed the dynamics of forest structure comparing logged and unlogged forests in the Eastern Brazilian Amazon (Paragominas Municipality, Pará State) using small-footprint discrete-return airborne lidar data acquired in 2012 and 2014. Logging operations were conducted at the 1200 ha study site from 2006 through 2013 using reduced impact logging techniques, management practices that minimize canopy and ground damage compared to more common conventional logging. Nevertheless, logging still reduced aboveground biomass by 10% to 20% in logged areas compared to intact forests. We aggregated lidar point-cloud data at spatial scales ranging from 50 m to 250 m and developed a binomial classification model based on the height distribution of lidar returns in 2012 and validated the model against the 2014 lidar acquisition. We accurately classified intact and logged forest classes compared with field data. Classification performance improved as spatial resolution increased (AUC = 0.974 at 250 m). We analyzed the differences in canopy gaps, understory damage (based on a relative density model), and biomass (estimated from total canopy height) between intact and logged classes. As expected, logging greatly increased both canopy gap formation and understory damage. However, while the area identified as canopy gap persisted for at least 8 years (from the oldest logging treatments in 2006 to the most recent lidar acquisition in 2014), the effects of ground damage were mostly erased by vigorous understory regrowth after about 5 years. The rate of new gap formation was 6 to 7 times greater in recently logged forests compared to undisturbed forests. New gaps opened at a rate 1.8 times greater than background even 8 years after logging, demonstrating the occurrence of delayed tree mortality. Our study showed that even low-intensity anthropogenic disturbances can cause persistent changes in tropical forest structure and dynamics.
Rainford, James L; Hofreiter, Michael; Mayhew, Peter J
2016-01-08
Skewed body size distributions and the high relative richness of small-bodied taxa are a fundamental property of a wide range of animal clades. The evolutionary processes responsible for generating these distributions are well described in vertebrate model systems but have yet to be explored in detail for other major terrestrial clades. In this study, we explore the macro-evolutionary patterns of body size variation across families of Hexapoda (insects and their close relatives), using recent advances in phylogenetic understanding, with an aim to investigate the link between size and diversity within this ancient and highly diverse lineage. The maximum, minimum and mean-log body lengths of hexapod families are all approximately log-normally distributed, consistent with previous studies at lower taxonomic levels, and contrasting with skewed distributions typical of vertebrate groups. After taking phylogeny and within-tip variation into account, we find no evidence for a negative relationship between diversification rate and body size, suggesting decoupling of the forces controlling these two traits. Likelihood-based modeling of the log-mean body size identifies distinct processes operating within Holometabola and Diptera compared with other hexapod groups, consistent with accelerating rates of size evolution within these clades, while as a whole, hexapod body size evolution is found to be dominated by neutral processes including significant phylogenetic conservatism. Based on our findings we suggest that the use of models derived from well-studied but atypical clades, such as vertebrates may lead to misleading conclusions when applied to other major terrestrial lineages. Our results indicate that within hexapods, and within the limits of current systematic and phylogenetic knowledge, insect diversification is generally unfettered by size-biased macro-evolutionary processes, and that these processes over large timescales tend to converge on apparently neutral evolutionary processes. We also identify limitations on available data within the clade and modeling approaches for the resolution of trees of higher taxa, the resolution of which may collectively enhance our understanding of this key component of terrestrial ecosystems.
A simulation-based approach for evaluating logging residue handling systems.
B. Bruce Bare; Benjamin A. Jayne; Brian F. Anholt
1976-01-01
Describes a computer simulation model for evaluating logging residue handling systems. The flow of resources is traced through a prespecified combination of operations including yarding, chipping, sorting, loading, transporting, and unloading. The model was used to evaluate the feasibility of converting logging residues to chips that could be used, for example, to...
Robust Spatial Autoregressive Modeling for Hardwood Log Inspection
Dongping Zhu; A.A. Beex
1994-01-01
We explore the application of a stochastic texture modeling method toward a machine vision system for log inspection in the forest products industry. This machine vision system uses computerized tomography (CT) imaging to locate and identify internal defects in hardwood logs. The application of CT to such industrial vision problems requires efficient and robust image...
Speech Enhancement Using Gaussian Scale Mixture Models
Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.
2011-01-01
This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided higher signal-to-noise ratio (SNR) and those reconstructed from the estimated log-spectra produced lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
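A generative sketch of the Gaussian scale mixture relationship described above: a log-spectrum is drawn from a GMM, and the frequency coefficient is then a zero-mean Gaussian whose variance is the exponential of that log-spectrum. The mixture parameters are placeholders, and the EM training and Bayesian inference steps are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder GMM over log-spectra: mixture weights, means, and std devs.
weights = np.array([0.5, 0.3, 0.2])
means = np.array([-2.0, 0.0, 2.0])
stds = np.array([0.8, 0.6, 0.7])

def sample_gsmm(n):
    """Draw a log-spectrum from the GMM, then a zero-mean Gaussian frequency
    coefficient whose variance is exp(log-spectrum)."""
    comp = rng.choice(len(weights), size=n, p=weights)
    log_spec = rng.normal(means[comp], stds[comp])
    coeff = rng.normal(0.0, np.sqrt(np.exp(log_spec)))
    return log_spec, coeff

log_spec, coeff = sample_gsmm(10000)
kurtosis = np.mean((coeff - coeff.mean()) ** 4) / np.var(coeff) ** 2
print("empirical kurtosis:", kurtosis)   # > 3, i.e., heavier-tailed than a single Gaussian
```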
Population Synthesis of Radio and γ-ray Normal, Isolated Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2013-04-01
We present preliminary results of a population statistics study of normal pulsars (NP) from the Galactic disk using Markov Chain Monte Carlo techniques optimized according to two different methods. The first method compares the detected and simulated cumulative distributions of a series of pulsar characteristics, varying the model parameters to maximize the overall agreement. The advantage of this method is that the distributions do not have to be binned. The other method varies the model parameters to maximize the log of the maximum likelihood obtained from the comparisons of four two-dimensional distributions of radio and γ-ray pulsar characteristics. The advantage of this method is that it provides a confidence region of the model parameter space. The computer code simulates neutron stars at birth using Monte Carlo procedures and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and γ-ray emission characteristics, implementing an empirical γ-ray luminosity model. A comparison group of radio NPs detected in ten radio surveys is used to normalize the simulation, adjusting the model radio luminosity to match a birth rate. We include the Fermi pulsars in the forthcoming second pulsar catalog. We present preliminary results comparing the simulated and detected distributions of radio and γ-ray NPs along with a confidence region in the parameter space of the assumed models. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), the Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
Price vs. Performance: The Value of Next Generation Fighter Aircraft
2007-03-01
forms. Both the semi-log and log-log forms were plagued with heteroskedasticity (according to the Breusch-Pagan/Cook-Weisberg test). The RDT&E models...from 1949-present were used to construct two models – one based on procurement costs and one based on research, design, test, and evaluation (RDT&E...fighter aircraft hedonic models include several different categories of variables. Aircraft procurement costs and research, design, test, and
Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.
2015-01-01
In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially in the presence of outliers and thick tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student's-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student's-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871
Uncertainty evaluation with increasing borehole drilling in subsurface hydrogeological explorations
NASA Astrophysics Data System (ADS)
Amano, K.; Ohyama, T.; Kumamoto, S.; Shimo, M.
2016-12-01
Deciding how many boreholes to drill has been a difficult problem for field investigators in subsurface hydrogeological explorations. The problem becomes larger in heterogeneous formations or rock masses, so quantitative criteria are needed for evaluating uncertainties during borehole investigations. To test how uncertainty is reduced as boreholes are added, we prepared a simple hydrogeological model and carried out virtual hydraulic tests with it. The model consists of 125,000 elements in a 2-kilometer cube, whose hydraulic conductivities are generated randomly from a log-normal distribution. Uncertainty was calculated from the difference in head distributions between the original model and the successive intermediate models built up one virtual hydraulic test at a time. The results show that the level and the variance of uncertainty are strongly correlated with the average and variance of the hydraulic conductivities. Similar trends can also be seen in actual field data obtained from deep borehole investigations in Horonobe Town, northern Hokkaido, Japan. A new approach using fractional bias (FB) and normalized mean square error (NMSE) for evaluating uncertainty characteristics is introduced, and its possible use as an indicator for decision making in field investigations (i.e., whether to stop or continue borehole drilling) is discussed.
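A small sketch of two pieces of the setup described above: a log-normally distributed hydraulic-conductivity field of 125,000 elements, and the fractional bias and normalized mean square error metrics written with their commonly used definitions (the abstract does not spell them out). The grid, correlation structure, and head values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

# Log-normal hydraulic conductivity field: a simple uncorrelated 50 x 50 x 50
# cube (125,000 elements); the study's grid and correlation structure differ.
log10_mean, log10_sd = -7.0, 1.0          # assumed statistics in log10(m/s)
K = 10.0 ** rng.normal(log10_mean, log10_sd, size=(50, 50, 50))

def fb(obs, pred):
    """Fractional bias, as commonly defined."""
    return 2.0 * (np.mean(obs) - np.mean(pred)) / (np.mean(obs) + np.mean(pred))

def nmse(obs, pred):
    """Normalized mean square error, as commonly defined."""
    return np.mean((obs - pred) ** 2) / (np.mean(obs) * np.mean(pred))

# Compare heads of the "original" model with an intermediate model
# (mimicked here by adding noise to the reference heads).
head_true = rng.normal(100.0, 5.0, size=1000)
head_model = head_true + rng.normal(0.0, 2.0, size=1000)
print(f"FB = {fb(head_true, head_model):.3f}, NMSE = {nmse(head_true, head_model):.4f}")
```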
NASA Astrophysics Data System (ADS)
Mert, Bayram Ali; Dag, Ahmet
2017-12-01
In this study, a practical and educational geostatistical program (JeoStat) was first developed, and then an example analysis of porosity parameter distribution using oilfield data is presented. With this program, two- or three-dimensional variogram analysis can be performed using normal, log-normal or indicator-transformed data. In these analyses, JeoStat offers seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect and Paddington Mix) to the user. These theoretical models can be easily and quickly fitted to the experimental variograms using a mouse. JeoStat uses the ordinary kriging interpolation technique for computation of point or block estimates, and cross-validation testing for validation of the fitted theoretical model. All the results obtained in the analysis, as well as all the graphics such as histograms, variograms and kriging estimation maps, can be saved to the hard drive, including digitised graphics and maps, and the numerical values of any point in a map can be monitored using the mouse and text boxes. The program is available to students, researchers, consultants and corporations of any size free of charge. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
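As a small companion to the variogram-fitting step described above, the sketch below fits a spherical model (one of the seven model forms offered by JeoStat) to a hypothetical experimental variogram of porosity. It is not JeoStat code, only an illustration of the model form.

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, a):
    """Spherical variogram model: rises to nugget + sill at the range a,
    and is flat beyond the range."""
    h = np.asarray(h, dtype=float)
    inside = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, inside, nugget + sill)

# Hypothetical experimental variogram of porosity: lag distance (m), semivariance.
lags = np.array([50, 100, 150, 200, 250, 300, 400, 500], dtype=float)
gamma = np.array([0.8, 1.4, 1.9, 2.3, 2.5, 2.6, 2.7, 2.7])

popt, _ = curve_fit(spherical, lags, gamma, p0=[0.5, 2.0, 300.0],
                    bounds=([0.0, 0.0, 1.0], [np.inf, np.inf, np.inf]))
print("nugget=%.2f sill=%.2f range=%.0f" % tuple(popt))
```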
NASA Astrophysics Data System (ADS)
Cao, Xiangyu; Le Doussal, Pierre; Rosso, Alberto; Santachiara, Raoul
2018-04-01
We study transitions in log-correlated random energy models (logREMs) that are related to the violation of a Seiberg bound in Liouville field theory (LFT): the binding transition and the termination point transition (a.k.a., pre-freezing). By means of LFT-logREM mapping, replica symmetry breaking and traveling-wave equation techniques, we unify both transitions in a two-parameter diagram, which describes the free-energy large deviations of logREMs with a deterministic background log potential, or equivalently, the joint moments of the free energy and Gibbs measure in logREMs without background potential. Under the LFT-logREM mapping, the transitions correspond to the competition of discrete and continuous terms in a four-point correlation function. Our results provide a statistical interpretation of a peculiar nonlocality of the operator product expansion in LFT. The results are rederived by a traveling-wave equation calculation, which shows that the features of LFT responsible for the transitions are reproduced in a simple model of diffusion with absorption. We examine also the problem by a replica symmetry breaking analysis. It complements the previous methods and reveals a rich large deviation structure of the free energy of logREMs with a deterministic background log potential. Many results are verified in the integrable circular logREM, by a replica-Coulomb gas integral approach. The related problem of common length (overlap) distribution is also considered. We provide a traveling-wave equation derivation of the LFT predictions announced in a precedent work.
Parameter estimation and forecasting for multiplicative log-normal cascades
NASA Astrophysics Data System (ADS)
Leövey, Andrés E.; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono's procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
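A minimal simulation sketch of a dyadic multiplicative log-normal cascade of the kind studied above. The mean-one weight normalization and the identification of sigma**2 with the intermittency parameter follow one common convention and are assumptions; the GMM estimation step itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

def lognormal_cascade(n_steps, sigma):
    """Dyadic multiplicative cascade with log-normal weights.

    Each of the 2**n_steps cells accumulates the product of one weight per
    cascade level; weights are exp(sigma*Z - sigma**2/2) so their mean is 1.
    sigma**2 plays the role of the intermittency parameter (convention varies).
    """
    measure = np.ones(1)
    for _ in range(n_steps):
        w = np.exp(sigma * rng.standard_normal(2 * measure.size) - 0.5 * sigma ** 2)
        measure = np.repeat(measure, 2) * w
    return measure

x = lognormal_cascade(n_steps=12, sigma=0.35)     # 4096-point intermittent series
print("mean ~ 1:", x.mean(), " burstiness (max/mean):", x.max() / x.mean())
```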
Estimating consumer familiarity with health terminology: a context-based approach.
Zeng-Treitler, Qing; Goryachev, Sergey; Tse, Tony; Keselman, Alla; Boxwala, Aziz
2008-01-01
Effective health communication is often hindered by a "vocabulary gap" between language familiar to consumers and jargon used in medical practice and research. To present health information to consumers in a comprehensible fashion, we need to develop a mechanism to quantify health terms as being more likely or less likely to be understood by typical members of the lay public. Prior research has used approaches including syllable count, easy word list, and frequency count, all of which have significant limitations. In this article, we present a new method that predicts consumer familiarity using contextual information. The method was applied to a large query log data set and validated using results from two previously conducted consumer surveys. We measured the correlation between the survey result and the context-based prediction, syllable count, frequency count, and log normalized frequency count. The correlation coefficient between the context-based prediction and the survey result was 0.773 (p < 0.001), which was higher than the correlation coefficients between the survey result and the syllable count, frequency count, and log normalized frequency count (p ≤ 0.012). The context-based approach provides a good alternative to the existing term familiarity assessment methods.
TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS
Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.
2017-01-01
Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971
Predicting financial market crashes using ghost singularities.
Smug, Damian; Ashwin, Peter; Sornette, Didier
2018-01-01
We analyse the behaviour of a non-linear model of coupled stock and bond prices exhibiting periodically collapsing bubbles. By using the formalism of dynamical system theory, we explain what drives the bubbles and how foreshocks or aftershocks are generated. A dynamical phase space representation of that system coupled with standard multiplicative noise rationalises the log-periodic power law singularity pattern documented in many historical financial bubbles. The notion of 'ghosts of finite-time singularities' is introduced and used to estimate the end of an evolving bubble, using finite-time singularities of an approximate normal form near the bifurcation point. We test the forecasting skill of this method on different stochastic price realisations and compare with Monte Carlo simulations of the full system. Remarkably, the approximate normal form is significantly more precise and less biased. Moreover, the method of ghosts of singularities is less sensitive to the noise realisation, thus providing more robust forecasts.
Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi
2017-10-01
A log file-based method cannot detect dosimetric changes due to linac component miscalibration because log files are insensitive to miscalibration. Herein, the clinical impacts of dosimetric changes on a log file-based method were determined. Five head-and-neck and five prostate plans were applied. Miscalibration-simulated log files were generated by inducing a linac component miscalibration into the log file. Miscalibration magnitudes for leaf, gantry, and collimator at the general tolerance level were ±0.5 mm, ±1°, and ±1°, respectively, and at a tighter tolerance level achievable on a current linac were ±0.3 mm, ±0.5°, and ±0.5°, respectively. Re-calculations were performed on patient anatomy using log file data. Changes in tumor control probability/normal tissue complication probability from the treatment planning system dose to the re-calculated dose at the general tolerance level were 1.8% on the planning target volume (PTV) and 2.4% on organs at risk (OARs) in both plans. These changes at the tighter tolerance level were improved to 1.0% on the PTV and to 1.5% on OARs, with a statistically significant difference. We determined the clinical impacts of dosimetric changes on a log file-based method using a general tolerance level and a tighter tolerance level for linac miscalibration and found that a tighter tolerance level significantly improved the accuracy of the log file-based method. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Log file-based patient dose calculations of double-arc VMAT for head-and-neck radiotherapy.
Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Majima, Kazuhiro; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi
2018-04-01
The log file-based method cannot display dosimetric changes due to linac component miscalibration because log files are insensitive to such miscalibration. The purpose of this study was to quantify dosimetric changes in log file-based patient dose calculations for double-arc volumetric-modulated arc therapy (VMAT) in head-and-neck cases. Fifteen head-and-neck cases participated in this study. For each case, treatment planning system (TPS) doses were produced by double-arc and single-arc VMAT. Miscalibration-simulated log files were generated by inducing a leaf miscalibration of ±0.5 mm into the log files that were acquired during VMAT irradiation. Subsequently, patient doses were estimated using the miscalibration-simulated log files. For double-arc VMAT, regarding the planning target volume (PTV), the change from the TPS dose to the miscalibration-simulated log file dose in D mean was 0.9 Gy and that for tumor control probability was 1.4%. As for organs at risk (OARs), the change in D mean was <0.7 Gy and in normal tissue complication probability was <1.8%. A comparison between double-arc and single-arc VMAT for the PTV showed statistically significant differences in the changes evaluated by D mean and radiobiological metrics (P < 0.01), even though the magnitude of these differences was small. Similarly, for OARs, the magnitude of these changes was found to be small. For the PTV and OARs, the log file-based estimate of patient dose for double-arc VMAT has accuracy comparable to that obtained for single-arc VMAT. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Kay, Robert T.; Mills, Patrick C.; Dunning, Charles P.; Yeskis, Douglas J.; Ursic, James R.; Vendl, Mark
2004-01-01
The effectiveness of 28 methods used to characterize the fractured Galena-Platteville aquifer at eight sites in northern Illinois and Wisconsin is evaluated. Analysis of government databases, previous investigations, topographic maps, aerial photographs, and outcrops was essential to understanding the hydrogeology in the area to be investigated. The effectiveness of surface-geophysical methods depended on site geology. Lithologic logging provided essential information for site characterization. Cores were used for stratigraphy and geotechnical analysis. Natural-gamma logging helped identify the effect of lithology on the location of secondary- permeability features. Caliper logging identified large secondary-permeability features. Neutron logs identified trends in matrix porosity. Acoustic-televiewer logs identified numerous secondary-permeability features and their orientation. Borehole-camera logs also identified a number of secondary-permeability features. Borehole ground-penetrating radar identified lithologic and secondary-permeability features. However, the accuracy and completeness of this method is uncertain. Single-point-resistance, density, and normal resistivity logs were of limited use. Water-level and water-quality data identified flow directions and indicated the horizontal and vertical distribution of aquifer permeability and the depth of the permeable features. Temperature, spontaneous potential, and fluid-resistivity logging identified few secondary-permeability features at some sites and several features at others. Flowmeter logging was the most effective geophysical method for characterizing secondary-permeability features. Aquifer tests provided insight into the permeability distribution, identified hydraulically interconnected features, the presence of heterogeneity and anisotropy, and determined effective porosity. Aquifer heterogeneity prevented calculation of accurate hydraulic properties from some tests. Different methods, such as flowmeter logging and slug testing, occasionally produced different interpretations. Aquifer characterization improved with an increase in the number of data points, the period of data collection, and the number of methods used.
Characterization of the spatial variability of channel morphology
Moody, J.A.; Troutman, B.M.
2002-01-01
The spatial variability of two fundamental morphological variables is investigated for rivers having a wide range of discharge (five orders of magnitude). The variables, water-surface width and average depth, were measured at 58 to 888 equally spaced cross-sections in channel links (river reaches between major tributaries). These measurements provide data to characterize the two-dimensional structure of a channel link which is the fundamental unit of a channel network. The morphological variables have nearly log-normal probability distributions. A general relation was determined which relates the means of the log-transformed variables to the logarithm of discharge similar to previously published downstream hydraulic geometry relations. The spatial variability of the variables is described by two properties: (1) the coefficient of variation which was nearly constant (0.13-0.42) over a wide range of discharge; and (2) the integral length scale in the downstream direction which was approximately equal to one to two mean channel widths. The joint probability distribution of the morphological variables in the downstream direction was modelled as a first-order, bivariate autoregressive process. This model accounted for up to 76 per cent of the total variance. The two-dimensional morphological variables can be scaled such that the channel width-depth process is independent of discharge. The scaling properties will be valuable to modellers of both basin and channel dynamics. Published in 2002 John Wiley and Sons, Ltd.
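A hedged sketch of the first-order bivariate autoregressive description above, simulating log width and log depth along the downstream direction; the persistence matrix, innovation covariance, and mean values are illustrative placeholders, not the fitted parameters of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def bivariate_ar1(n, phi, cov_noise, mean):
    """First-order bivariate autoregression for (log width, log depth) along
    the downstream direction: z[i] = mean + phi @ (z[i-1] - mean) + noise."""
    z = np.empty((n, 2))
    z[0] = mean
    L = np.linalg.cholesky(cov_noise)          # correlated innovations
    for i in range(1, n):
        z[i] = mean + phi @ (z[i - 1] - mean) + L @ rng.standard_normal(2)
    return z

# Illustrative parameters (not fitted to the paper's cross-section data):
phi = np.array([[0.6, 0.1], [0.1, 0.5]])            # downstream persistence
cov_noise = np.array([[0.02, 0.01], [0.01, 0.02]])  # innovation covariance
mean = np.array([np.log(40.0), np.log(1.2)])        # mean log width (m), log depth (m)

logs = bivariate_ar1(500, phi, cov_noise, mean)
width = np.exp(logs[:, 0])
print("CV of width:", width.std() / width.mean())   # compare with the reported 0.13-0.42 range
```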
Model-based variance-stabilizing transformation for Illumina microarray data.
Lin, Simon M; Du, Pan; Huber, Wolfgang; Kibbe, Warren A
2008-02-01
Variance stabilization is a step in the preprocessing of microarray data that can greatly benefit the performance of subsequent statistical modeling and inference. Due to the often limited number of technical replicates for Affymetrix and cDNA arrays, achieving variance stabilization can be difficult. Although the Illumina microarray platform provides a larger number of technical replicates on each array (usually over 30 randomly distributed beads per probe), these replicates have not been leveraged in the current log2 data transformation process. We devised a variance-stabilizing transformation (VST) method that takes advantage of the technical replicates available on an Illumina microarray. We have compared VST with log2 and Variance-stabilizing normalization (VSN) by using the Kruglyak bead-level data (2006) and Barnes titration data (2005). The results of the Kruglyak data suggest that VST stabilizes variances of bead-replicates within an array. The results of the Barnes data show that VST can improve the detection of differentially expressed genes and reduce false-positive identifications. We conclude that although both VST and VSN are built upon the same model of measurement noise, VST stabilizes the variance better and more efficiently for the Illumina platform by leveraging the availability of a larger number of within-array replicates. The algorithms and Supplementary Data are included in the lumi package of Bioconductor, available at: www.bioconductor.org.
Klein, M; Birch, D G
2009-12-01
To determine whether the Diagnosys full-field stimulus threshold (D-FST) is a valid, sensitive and repeatable psychophysical method of measuring and following visual function in low-vision subjects. Fifty-three affected eyes of 42 subjects with severe retinal degenerative diseases (RDDs) were tested with achromatic stimuli on the D-FST. Included were subjects who were either unable to perform a static perimetric field or had non-detectable or sub-microvolt electroretinograms (ERGs). A subset of 21 eyes of 17 subjects was tested on both the D-FST and the FST2, a previous established full-field threshold test. Seven eyes of 7 normal control subjects were tested on both the D-FST and the FST2. Results for the two methods were compared with the Bland-Altman test. On the D-FST, a threshold could successfully be determined for 13 of 14 eyes with light perception (LP) only (median 0.9 +/- 1.4 log cd/m2), and all eyes determined to be counting fingers (CF; median 0.3 +/- 1.8 log cd/m2). The median full-field threshold for the normal controls was -4.3 +/- 0.6 log cd/m2 on the D-FST and -4.8 +/- 0.9 log cd/m2 on the FST2. The D-FST offers a commercially available method with a robust psychophysical algorithm and is a useful tool for following visual function in low vision subjects.
NASA Technical Reports Server (NTRS)
Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.
1995-01-01
We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a(sub lm) are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.
Best opening face system for sweepy, eccentric logs : a user’s guide
David W. Lewis
1985-01-01
Log breakdown simulation models have gained rapid acceptance within the sawmill industry in the last 15 years. Although they have many advantages over traditional decision making tools, the existing models do not calculate yield correctly when used to simulate the breakdown of eccentric, sweepy logs in North American sawmills producing softwood dimension lumber. In an...
2007-01-01
Only fragments of this model-comparison section are recoverable. The models are logistic regressions for the log-odds of correct task performance, log(pHits / (1 - pHits)). Model 1: log(pHits / (1 - pHits)) = α + β1 × MetricScore (6.6). Model 3: log(pHits / (1 - pHits)) = -1.15 - 0.418 × I[MT=2] - 0.527 × I[MT=3] + 1.78 × METEOR + 1.28 × METEOR × I[MT=2] + 1.86 × METEOR × I[MT=3] (6.7). The fitted model is reported to improve the log-odds of correct task performance by 2.79 over the intercept-only model.
NASA Astrophysics Data System (ADS)
Vásquez Lavín, F. A.; Hernandez, J. I.; Ponce, R. D.; Orrego, S. A.
2017-07-01
During recent decades, water demand estimation has gained considerable attention from scholars. From an econometric perspective, the most commonly used functional forms include log-log and linear specifications. Despite the advances in this field and their relevance for policymaking, little attention has been paid to the functional forms used in these estimations, and most authors have not justified their selection. A discrete-continuous choice model of residential water demand is estimated using six functional forms (log-log, full-log, log-quadratic, semilog, linear, and Stone-Geary), and the expected consumption and price elasticity are evaluated. From a policy perspective, our results highlight the relevance of functional form selection for both the expected consumption and the price elasticity.
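Under a log-log specification the price elasticity is simply the slope coefficient, whereas under a linear specification it must be evaluated at a point such as the sample means. The sketch below illustrates this difference on synthetic data; it does not reproduce the paper's discrete-continuous estimator, and all numbers are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic household data: price p and consumption q with constant elasticity -0.4
p = rng.uniform(0.5, 3.0, 1000)
q = np.exp(1.2 - 0.4 * np.log(p) + 0.1 * rng.standard_normal(1000))

# Log-log specification: the elasticity is the slope itself
X_ll = np.column_stack([np.ones_like(p), np.log(p)])
b_ll, *_ = np.linalg.lstsq(X_ll, np.log(q), rcond=None)
print("log-log elasticity:", b_ll[1])

# Linear specification: elasticity must be evaluated at a point, e.g. the means
X_lin = np.column_stack([np.ones_like(p), p])
b_lin, *_ = np.linalg.lstsq(X_lin, q, rcond=None)
print("linear elasticity at the means:", b_lin[1] * p.mean() / q.mean())
```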
Tobin, Jade; Walach, Jan; de Beer, Dalene; Williams, Paul J; Filzmoser, Peter; Walczak, Beata
2017-11-24
Chromatographic data must be properly preprocessed before exploration and/or supervised modeling. To make chromatographic signals comparable, it is crucial to remove the scaling effect caused by differences in overall sample concentrations. One efficient method of signal scaling is Probabilistic Quotient Normalization (PQN) [1]. However, it can be applied only to data for which the majority of features do not vary systematically among the studied classes of signals. When studying the influence of the traditional "fermentation" (oxidation) process on the concentration of 56 individual peaks detected in rooibos plant material, this assumption is not fulfilled. In this case, the only possible solution is the analysis of pairwise log-ratios, which are not influenced by the scaling constant. To identify significant features, i.e., peaks differentiating the studied classes of samples (green and fermented rooibos plant material), we apply the robust pairwise log-ratio (rPLR) approach of Walach et al. [2]. It allows fast computation and identification of the significant features in terms of the original variables (peaks), which is problematic when working with the unfolded pairwise log-ratios. As demonstrated, it can be applied to designed data sets and, in the case of contaminated data, still allows proper conclusions. Copyright © 2017 Elsevier B.V. All rights reserved.
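A minimal numpy sketch of the two ideas mentioned above, PQN scaling and forming pairwise log-ratios between peaks, is given below. The robust rPLR test statistics of Walach et al. are not reproduced; the data are synthetic and the choice of median spectrum as the PQN reference is an assumption for the example.

```python
import numpy as np

def pqn(X, reference=None):
    """Probabilistic Quotient Normalization.

    X: samples x peaks matrix of positive intensities.
    Each sample is divided by the median of its quotients against a
    reference spectrum (here the median spectrum across samples).
    """
    if reference is None:
        reference = np.median(X, axis=0)
    quotients = X / reference
    factors = np.median(quotients, axis=1, keepdims=True)
    return X / factors

def pairwise_log_ratios(X):
    """All log(x_i / x_j) for peak pairs i < j; sample scaling factors cancel."""
    logX = np.log(X)
    i, j = np.triu_indices(X.shape[1], k=1)
    return logX[:, i] - logX[:, j]

X = np.random.default_rng(2).lognormal(mean=2.0, size=(6, 5))  # 6 samples, 5 peaks
print(pqn(X).shape, pairwise_log_ratios(X).shape)
```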
NASA Astrophysics Data System (ADS)
Yu, H.; Gu, H.
2017-12-01
A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and uses a trace-by-trace multivariate regression analysis on seismic-derived petrophysical properties to calibrate model parameters, in order to make accurate predictions with higher resolution in both the vertical and lateral directions. With the prestack time migration velocity as the initial velocity model, an AVO inversion was first applied to the prestack dataset to obtain higher-frequency, high-resolution seismic velocities to be used as the velocity input for seismic pressure prediction, together with the density dataset used to calculate an accurate overburden pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov Chain Monte Carlo simulation; both structural variability and the similarity of seismic waveforms are used to incorporate well log data and characterize the variability of the property to be obtained. In this research, porosity and shale volume are first interpreted on well logs and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and the coefficients of the multivariate prediction model are then determined in a trace-by-trace multivariate regression analysis on the petrophysical data. The coefficients are used to convert the velocity, porosity and shale volume datasets to effective stress and then to calculate formation pressure with the OBP. Application of the proposed methodology to a research area in the East China Sea shows that the method can bridge the gap between seismic and well log pressure prediction and gives predicted pressure values close to pressure measurements from well testing.
NASA Astrophysics Data System (ADS)
Usselman, Robert J.; Russek, Stephen E.; Klem, Michael T.; Allen, Mark A.; Douglas, Trevor; Young, Mark; Idzerda, Yves U.; Singel, David J.
2012-10-01
Electron magnetic resonance (EMR) spectroscopy was used to determine the magnetic properties of maghemite (γ-Fe2O3) nanoparticles formed within size-constraining Listeria innocua (LDps)-(DNA-binding protein from starved cells) protein cages that have an inner diameter of 5 nm. Variable-temperature X-band EMR spectra exhibited broad asymmetric resonances with a superimposed narrow peak at a gyromagnetic factor of g ≈ 2. The resonance structure, which depends on both superparamagnetic fluctuations and inhomogeneous broadening, changes dramatically as a function of temperature, and the overall linewidth becomes narrower with increasing temperature. Here, we compare two different models to simulate the temperature-dependent lineshape trends. The temperature dependence for both models is derived from a Langevin behavior of the linewidth resulting from "anisotropy melting." The first uses either a truncated log-normal distribution of particle sizes or a bi-modal distribution, and then a Landau-Lifshitz lineshape to describe the nanoparticle resonances. The essential feature of this model is that small particles have narrow linewidths and account for the g ≈ 2 feature with a constant resonance field, whereas larger particles have broad linewidths and undergo a shift in resonance field. The second model assumes uniform particles with a diameter around 4 nm and a random distribution of uniaxial anisotropy axes. This model uses a more precise calculation of the linewidth due to superparamagnetic fluctuations and a random distribution of anisotropies. Sharp features in the spectrum near g ≈ 2 are qualitatively predicted at high temperatures. Both models can account for many features of the observed spectra, although each has deficiencies. The first model leads to a nonphysical increase in magnetic moment as the temperature is increased if a log-normal distribution of particle sizes is used; introducing a bi-modal distribution of particle sizes resolves this unphysical increase in moment with temperature. The second model predicts low-temperature spectra that differ significantly from the observed spectra. The anisotropy energy density K1, determined by fitting the temperature-dependent linewidths, was ~50 kJ/m3, which is considerably larger than that of bulk maghemite. The work presented here indicates that the magnetic properties of these size-constrained nanoparticles, and more generally metal oxide nanoparticles with diameters d < 5 nm, are complex and that currently existing models are not sufficient for determining their magnetic resonance signatures.
Multicriteria evaluation of simulated logging scenarios in a tropical rain forest.
Huth, Andreas; Drechsler, Martin; Köhler, Peter
2004-07-01
Forest growth models are useful tools for investigating the long-term impacts of logging. In this paper, the results of the rain forest growth model FORMIND were assessed by a multicriteria decision analysis. The main processes covered by FORMIND include tree growth, mortality, regeneration and competition. Tree growth is calculated based on a carbon balance approach. Trees compete for light and space; dying large trees fall down and create gaps in the forest. Sixty-four different logging scenarios for an initially undisturbed forest stand at Deramakot (Malaysia) were simulated. The scenarios differ regarding the logging cycle, logging method, cutting limit and logging intensity. We characterise the impacts with four criteria describing the yield, canopy opening and changes in species composition. Multicriteria decision analysis was used for the first time to evaluate the scenarios and identify the efficient ones. Our results plainly show that reduced-impact logging scenarios are more 'efficient' than the others, since in these scenarios forest damage is minimised without significantly reducing yield. Nevertheless, there is a trade-off between yield and achieving a desired ecological state of logged forest; the ecological state of the logged forests can only be improved by reducing yields and lengthening the logging cycles. Our study also demonstrates that high cutting limits or low logging intensities cannot compensate for the high level of damage caused by conventional logging techniques.
From the track to the ocean: Using flow control to improve marine bio-logging tags for cetaceans
Fiore, Giovani; Anderson, Erik; Garborg, C. Spencer; Murray, Mark; Johnson, Mark; Moore, Michael J.; Howle, Laurens
2017-01-01
Bio-logging tags are an important tool for the study of cetaceans, but superficial tags inevitably increase hydrodynamic loading. Substantial forces can be generated by tags on fast-swimming animals, potentially affecting behavior and energetics or promoting early tag removal. Streamlined forms have been used to reduce loading, but these designs can accelerate flow over the top of the tag. This non-axisymmetric flow results in large lift forces (normal to the animal) that become the dominant force component at high speeds. In order to reduce lift and minimize total hydrodynamic loading, this work presents a new tag design (Model A) that incorporates a hydrodynamic body, a channel to reduce fluid speed differences above and below the housing, and a wing to redirect flow to counter lift. Additionally, three derivatives of the Model A design were used to examine the contribution of individual flow control features to overall performance. Hydrodynamic loadings of the four models were compared using computational fluid dynamics (CFD). The Model A design eliminated all lift force and generated up to ~30 N of downward force in simulated 6 m/s aligned flow. The simulations were validated using particle image velocimetry (PIV) to experimentally characterize the flow around the tag design. The results of these experiments confirm the trends predicted by the simulations and demonstrate the potential benefit of flow control elements for the reduction of tag-induced forces on the animal. PMID:28196148
Trophic magnification of PCBs and Its relationship to the octanol-water partition coefficient.
Walters, David M; Mills, Marc A; Cade, Brian S; Burkard, Lawrence P
2011-05-01
We investigated polychlorinated biphenyl (PCB) bioaccumulation relative to octanol-water partition coefficient (K(OW)) and organism trophic position (TP) at the Lake Hartwell Superfund site (South Carolina). We measured PCBs (127 congeners) and stable isotopes (δ¹⁵N) in sediment, organic matter, phytoplankton, zooplankton, macroinvertebrates, and fish. TP, as calculated from δ¹⁵N, was significantly, positively related to PCB concentrations, and food web trophic magnification factors (TMFs) ranged from 1.5-6.6 among congeners. TMFs of individual congeners increased strongly with log K(OW), as did the predictive power (r²) of individual TP-PCB regression models used to calculate TMFs. We developed log K(OW)-TMF models for eight food webs with vastly different environments (freshwater, marine, arctic, temperate) and species composition (cold- vs warmblooded consumers). The effect of K(OW) on congener TMFs varied strongly across food webs (model slopes 0.0-15.0) because the range of TMFs among studies was also highly variable. We standardized TMFs within studies to mean = 0, standard deviation (SD) = 1 to normalize for scale differences and found a remarkably consistent K(OW) effect on TMFs (no difference in model slopes among food webs). Our findings underscore the importance of hydrophobicity (as characterized by K(OW)) in regulating bioaccumulation of recalcitrant compounds in aquatic systems, and demonstrate that relationships between chemical K(OW) and bioaccumulation from field studies are more generalized than previously recognized.
NASA Astrophysics Data System (ADS)
Yang, Xiang I. A.; Park, George Ilhwan; Moin, Parviz
2017-10-01
Log-layer mismatch (LLM) refers to a chronic problem found in wall-modeled large-eddy simulation (WMLES) or detached-eddy simulation, where the modeled wall-shear stress deviates from the true one by approximately 15%. Many efforts have been made to resolve this mismatch. The often-used fixes, which are generally ad hoc, include modifying subgrid-scale stress models, adding a stochastic forcing, and moving the LES-wall-model matching location away from the wall. An analysis motivated by the integral wall-model formalism suggests that log-layer mismatch is resolved by the built-in physics-based temporal filtering. In this work we investigate in detail the effects of local filtering on log-layer mismatch. We show that both local temporal filtering and local wall-parallel filtering resolve log-layer mismatch without moving the LES-wall-model matching location away from the wall. Additionally, we look into the momentum balance in the near-wall region to provide an alternative explanation of how LLM occurs, one that does not necessarily rely on the numerical-error argument. While filtering resolves log-layer mismatch, the quality of the wall-shear stress fluctuations predicted by WMLES does not improve with our remedy. The wall-shear stress fluctuations are highly underpredicted due to the implied use of LES filtering. However, good agreement can be found when the WMLES data are compared to the direct numerical simulation data filtered at the corresponding WMLES resolutions.
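The kind of local temporal filtering discussed above can be sketched as a first-order (exponential) filter applied to the LES velocity sampled at the wall-model matching location; the filtered signal, rather than the instantaneous one, is then fed to the wall model. The filter time scale below is a placeholder, and this is not the authors' implementation.

```python
import numpy as np

def exp_time_filter(u_samples, dt, T_filter):
    """First-order exponential temporal filter of the wall-model input velocity.

    u_samples: time series of LES velocity at the matching location.
    T_filter:  filter time scale (placeholder; in practice it would be tied
               to a local flow time scale).
    """
    alpha = dt / (T_filter + dt)
    u_filt = np.empty_like(u_samples)
    u_filt[0] = u_samples[0]
    for n in range(1, len(u_samples)):
        u_filt[n] = (1 - alpha) * u_filt[n - 1] + alpha * u_samples[n]
    return u_filt

# Example: smooth a noisy synthetic velocity signal before passing it to an
# equilibrium (log-law) wall model.
dt = 1e-3
t = np.arange(0, 1, dt)
u = 10 + np.random.default_rng(3).standard_normal(t.size)
print(exp_time_filter(u, dt, T_filter=0.05)[:5])
```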
Are CO Observations of Interstellar Clouds Tracing the H2?
NASA Astrophysics Data System (ADS)
Federrath, Christoph; Glover, S. C. O.; Klessen, R. S.; Mac Low, M.
2010-01-01
Interstellar clouds are commonly observed through the emission of rotational transitions from carbon monoxide (CO). However, the abundance ratio of CO to molecular hydrogen (H2), which is the most abundant molecule in molecular clouds, is only about 10⁻⁴. This raises the important question of whether the observed CO emission is actually tracing the bulk of the gas in these clouds, and whether it can be used to derive quantities like the total mass of the cloud, the gas density distribution function, the fractal dimension, and the velocity dispersion-size relation. To evaluate the usability and accuracy of CO as a tracer for H2 gas, we generate synthetic observations of hydrodynamical models that include a detailed chemical network to follow the formation and photo-dissociation of H2 and CO. These three-dimensional models of turbulent interstellar cloud formation self-consistently follow the coupled thermal, dynamical and chemical evolution of 32 species, with a particular focus on H2 and CO (Glover et al. 2009). We find that CO primarily traces the dense gas in the clouds, albeit with significant scatter due to turbulent mixing and self-shielding of H2 and CO. The H2 probability distribution function (PDF) is well described by a log-normal distribution. In contrast, the CO column density PDF has a strongly non-Gaussian low-density wing, not at all consistent with a log-normal distribution. Centroid velocity statistics show that CO is more intermittent than H2, leading to an overestimate of the velocity scaling exponent in the velocity dispersion-size relation. With our systematic comparison of H2 and CO data from the numerical models, we hope to provide a statistical formula to correct for the bias of CO observations. CF acknowledges financial support from a Kade Fellowship of the American Museum of Natural History.
Hou, Fang; Huang, Chang-Bing; Lesmes, Luis; Feng, Li-Xia; Tao, Liming; Zhou, Yi-Feng; Lu, Zhong-Lin
2010-01-01
Purpose. The qCSF method is a novel procedure for rapid measurement of spatial contrast sensitivity functions (CSFs). It combines Bayesian adaptive inference with a trial-to-trial information gain strategy, to directly estimate four parameters defining the observer's CSF. In the present study, the suitability of the qCSF method for clinical application was examined. Methods. The qCSF method was applied to rapidly assess spatial CSFs in 10 normal and 8 amblyopic participants. The qCSF was evaluated for accuracy, precision, test–retest reliability, suitability of CSF model assumptions, and accuracy of amblyopia screening. Results. qCSF estimates obtained with as few as 50 trials matched those obtained with 300 Ψ trials. The precision of qCSF estimates obtained with 120 and 130 trials, in normal subjects and amblyopes, matched the precision of 300 Ψ trials. For both groups and both methods, test–retest sensitivity estimates were well matched (all R > 0.94). The qCSF model assumptions were valid for 8 of 10 normal participants and all amblyopic participants. Measures of the area under log CSF (AULCSF) and the cutoff spatial frequency (cutSF) were lower in the amblyopia group; these differences were captured within 50 qCSF trials. Amblyopia was detected at an approximately 80% correct rate in 50 trials, when a logistic regression model was used with AULCSF and cutSF as predictors. Conclusions. The qCSF method is sufficiently rapid, accurate, and precise in measuring CSFs in normal and amblyopic persons. It has great potential for clinical practice. PMID:20484592
Nualkaekul, Sawaminee; Salmeron, Ivan; Charalampopoulos, Dimitris
2011-12-01
The survival of Bifidobacterium longum NCIMB 8809 was studied during refrigerated storage for 6 weeks in model solutions, based on which a mathematical model was constructed describing cell survival as a function of pH, citric acid, protein and dietary fibre. A Central Composite Design (CCD) was developed studying the influence of four factors at three levels, i.e., pH (3.2-4), citric acid (2-15 g/l), protein (0-10 g/l), and dietary fibre (0-8 g/l). In total, 31 experimental runs were carried out. Analysis of variance (ANOVA) of the regression model demonstrated that the model fitted the data well. From the regression coefficients it was deduced that all four factors had a statistically significant (P<0.05) negative effect on the log decrease [log10 N(week 0) - log10 N(week 6)], with pH and citric acid being the most influential. Cell survival during storage was also investigated in various types of juices, including orange, grapefruit, blackcurrant, pineapple, pomegranate and strawberry. The highest cell survival (less than a 0.4 log decrease) after 6 weeks of storage was observed in orange and pineapple, both of which had a pH of about 3.8. Although the pH of grapefruit and blackcurrant was similar (pH ~3.2), the log decrease of the former was ~0.5 log, whereas that of the latter was ~0.7 log. One reason for this could be the fact that grapefruit contained a high amount of citric acid (15.3 g/l). The log decrease in pomegranate and strawberry juices was extremely high (~8 logs). The mathematical model was able to predict adequately the cell survival in orange, grapefruit, blackcurrant, and pineapple juices. However, the model failed to predict the cell survival in pomegranate and strawberry, most likely due to the very high levels of phenolic compounds in these two juices. Copyright © 2011 Elsevier Ltd. All rights reserved.
Chronic Kidney Disease Is Associated With White Matter Hyperintensity Volume
Khatri, Minesh; Wright, Clinton B.; Nickolas, Thomas L.; Yoshita, Mitsuhiro; Paik, Myunghee C.; Kranwinkel, Grace; Sacco, Ralph L.; DeCarli, Charles
2010-01-01
Background and Purpose White matter hyperintensities have been associated with increased risk of stroke, cognitive decline, and dementia. Chronic kidney disease is a risk factor for vascular disease and has been associated with inflammation and endothelial dysfunction, which have been implicated in the pathogenesis of white matter hyperintensities. Few studies have explored the relationship between chronic kidney disease and white matter hyperintensities. Methods The Northern Manhattan Study is a prospective, community-based cohort of which a subset of stroke-free participants underwent MRIs. MRIs were analyzed quantitatively for white matter hyperintensities volume, which was log-transformed to yield a normal distribution (log-white matter hyperintensity volume). Kidney function was modeled using serum creatinine, the Cockcroft-Gault formula for creatinine clearance, and the Modification of Diet in Renal Disease formula for estimated glomerular filtration rate. Creatinine clearance and estimated glomerular filtration rate were trichotomized to 15 to 60 mL/min, 60 to 90 mL/min, and >90 mL/min (reference). Linear regression was used to measure the association between kidney function and log-white matter hyperintensity volume adjusting for age, gender, race–ethnicity, education, cardiac disease, diabetes, homocysteine, and hypertension. Results Baseline data were available on 615 subjects (mean age 70 years, 60% women, 18% whites, 21% blacks, 62% Hispanics). In multivariate analysis, creatinine clearance 15 to 60 mL/min was associated with increased log-white matter hyperintensity volume (β 0.322; 95% CI, 0.095 to 0.550) as was estimated glomerular filtration rate 15 to 60 mL/min (β 0.322; 95% CI, 0.080 to 0.564). Serum creatinine, per 1-mg/dL increase, was also positively associated with log-white matter hyperintensity volume (β 1.479; 95% CI, 1.067 to 2.050). Conclusions The association between moderate–severe chronic kidney disease and white matter hyperintensity volume highlights the growing importance of kidney disease as a possible determinant of cerebrovascular disease and/or as a marker of microangiopathy. PMID:17962588
Foss, A.; Cree, I.; Dolin, P.; Hungerford, J.
1999-01-01
BACKGROUND/AIM—There has been no consistent pattern reported on how mortality for uveal melanoma varies with age. This information can be useful to model the complexity of the disease. The authors have examined ocular cancer trends, as an indirect measure for uveal melanoma mortality, to see how rates vary with age and to compare the results with their other studies on predicting metastatic disease. METHODS—Age-specific mortality was examined for England and Wales, the USA, and Canada. A log-log model was fitted to the data. The slopes of the log-log plots were used as a measure of disease complexity and compared with the results of previous work on predicting metastatic disease. RESULTS—The log-log model provided a good fit for the US and Canadian data, but the observed rates deviated for England and Wales among people over the age of 65 years. The log-log model for mortality data suggests that the underlying process depends upon four rate-limiting steps, while a similar model for the incidence data suggests between three and four rate-limiting steps. Further analysis of previous data on predicting metastatic disease on the basis of tumour size and blood vessel density would indicate a single rate-limiting step between developing the primary tumour and developing metastatic disease. CONCLUSIONS—There is significant underreporting or underdiagnosis of ocular melanoma for England and Wales in those over the age of 65 years. In those under the age of 65, a model is presented for ocular melanoma oncogenesis requiring three rate-limiting steps to develop the primary tumour and a fourth rate-limiting step to develop metastatic disease. The three steps in the generation of the primary tumour involve two key processes—namely, growth and angiogenesis within the primary tumour. The step from development of the primary to development of metastatic disease is likely to involve a single rate-limiting process. PMID:10216060
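In multistage (Armitage-Doll type) interpretations of this kind, the number of rate-limiting steps is read off the slope of a log-log regression of age-specific rates on age, roughly slope plus one. The sketch below fits such a slope to synthetic rates; the ages and rate values are invented for illustration and are not the study's data.

```python
import numpy as np

# Synthetic age-specific mortality rates following rate ~ age**(k-1) with k = 4 steps
age = np.array([40, 45, 50, 55, 60, 65], dtype=float)
noise = np.exp(0.05 * np.random.default_rng(4).standard_normal(age.size))
rate = 1e-9 * age**3 * noise

# Fit log(rate) = a + b*log(age); b + 1 estimates the number of rate-limiting steps
b, a = np.polyfit(np.log(age), np.log(rate), 1)
print("log-log slope:", round(b, 2), "-> estimated rate-limiting steps:", round(b + 1, 2))
```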
Digital simulation of a communication link for Pioneer Saturn Uranus atmospheric entry probe, part 1
NASA Technical Reports Server (NTRS)
Hinrichs, C. A.
1975-01-01
A digital simulation study is presented for a candidate modulator/demodulator design in an atmospheric scintillation environment with Doppler, Doppler rate, and signal attenuation typical of the conditions of an outer planet atmospheric probe. The simulation results indicate that the mean channel error rates with and without scintillation are similar to theoretical characterizations of the link. The simulation gives information for calculating other channel statistics and generates a quantized symbol stream on magnetic tape from which error correction decoding is analyzed. Results from the magnetic tape data analyses are also included. The receiver and bit synchronizer are modeled in the simulation at the level of hardware component parameters rather than at the loop equation level, and individual hardware parameters are identified. The atmospheric scintillation amplitude and phase are modeled independently. Normal and log-normal amplitude processes are studied. In each case the scintillations are low-pass filtered. The receiver performance is given for a range of signal-to-noise ratios with and without the effects of scintillation. The performance is reviewed for critical receiver parameter variations.
A Brief Hydrodynamic Investigation of a 1/24-Scale Model of the DR-77 Seaplane
NASA Technical Reports Server (NTRS)
Fisher, Lloyd J.; Hoffman, Edward L.
1953-01-01
A limited investigation of a 1/24-scale dynamically similar model of the Navy Bureau of Aeronautics DR-77 design was conducted in Langley tank no. 2 to determine the calm-water take-off and the rough-water landing characteristics of the design, with particular regard to the take-off resistance and the landing accelerations. During the take-off tests, resistance, trim, and rise were measured and photographs were taken to study spray. During the landing tests, motion-picture records and normal-acceleration records were obtained. A ratio of gross load to maximum resistance of 3.2 was obtained with a 30 deg. dead-rise hydro-ski installation. The maximum normal accelerations obtained with a 30 deg. dead-rise hydro-ski installation were of the order of 8g to 10g in waves 8 feet high (full scale). A yawing instability that occurred just prior to hydro-ski emergence was improved by adding an afterbody extension, but adding the extension reduced the ratio of gross load to maximum resistance to 2.9.
The log-periodic-AR(1)-GARCH(1,1) model for financial crashes
NASA Astrophysics Data System (ADS)
Gazola, L.; Fernandes, C.; Pizzinga, A.; Riera, R.
2008-02-01
This paper intends to meet recent claims for the attainment of more rigorous statistical methodology within the econophysics literature. To this end, we consider an econometric approach to investigate the outcomes of the log-periodic model of price movements, which has been largely used to forecast financial crashes. In order to accomplish reliable statistical inference for the unknown parameters, we incorporate an autoregressive dynamic and a conditional heteroskedasticity structure in the error term of the original model, yielding the log-periodic-AR(1)-GARCH(1,1) model. Both the original and the extended models are fitted to financial indices of the U.S. market, namely the S&P500 and NASDAQ. Our analysis reveals two main points: (i) the log-periodic-AR(1)-GARCH(1,1) model has residuals with better statistical properties and (ii) the estimation of the parameter concerning the time of the financial crash has been improved.
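The deterministic log-periodic part of such price models is commonly written in the Sornette-type form A + B(t_c - t)^m [1 + C cos(ω ln(t_c - t) + φ)]. The sketch below fits that form to synthetic data with scipy's curve_fit; it omits the AR(1)-GARCH(1,1) error structure of the paper, the parameter values are invented, and, as in practice, the starting guess matters.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_periodic(t, A, B, tc, m, C, omega, phi):
    """Deterministic log-periodic trajectory (Sornette-type form)."""
    dt = np.clip(tc - t, 1e-6, None)          # keep (tc - t) positive
    return A + B * dt**m * (1 + C * np.cos(omega * np.log(dt) + phi))

t = np.linspace(0, 900, 900)
true = (8.0, -0.15, 1000.0, 0.5, 0.1, 6.0, 0.0)     # illustrative parameters
y = log_periodic(t, *true) + 0.01 * np.random.default_rng(5).standard_normal(t.size)

p0 = (8.0, -0.1, 1020.0, 0.5, 0.1, 6.0, 0.0)        # starting guess near the truth
popt, _ = curve_fit(log_periodic, t, y, p0=p0, maxfev=20000)
print("estimated critical time t_c:", popt[2])
```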
Spectral Density of Laser Beam Scintillation in Wind Turbulence. Part 1; Theory
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1997-01-01
The temporal spectral density of the log-amplitude scintillation of a laser beam wave due to a spatially dependent vector-valued crosswind (deterministic as well as random) is evaluated. The path weighting functions for normalized spectral moments are derived, and offer a potential new technique for estimating the wind velocity profile. The Tatarskii-Klyatskin stochastic propagation equation for the Markov turbulence model is used with the solution approximated by the Rytov method. The Taylor 'frozen-in' hypothesis is assumed for the dependence of the refractive index on the wind velocity, and the Kolmogorov spectral density is used for the refractive index field.
Robust efficient estimation of heart rate pulse from video.
Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde
2014-04-01
We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the pulse heart rate. Various experiments with different cameras, different illumination conditions, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed with normal illumination show the algorithm is comparable with pulse oximeter devices in both accuracy and sensitivity.
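A sketch of the basic idea follows: take the frame-to-frame quotient of spatially averaged skin pixel values in log space (which cancels slowly varying illumination and reflectance factors), then read the pulse rate off the spectral peak in the plausible heart-rate band. The authors' full skin model, channel choice, and tracking are not reproduced; the band limits and synthetic signal are assumptions for the example.

```python
import numpy as np

def pulse_rate_from_frames(skin_means, fps):
    """Estimate heart rate (bpm) from per-frame mean skin pixel intensity (> 0)."""
    log_quotient = np.diff(np.log(skin_means))      # pixel quotient in log space
    sig = log_quotient - log_quotient.mean()
    spectrum = np.abs(np.fft.rfft(sig))**2
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)            # assumed heart-rate band, Hz
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

fps = 30.0
t = np.arange(0, 20, 1 / fps)
frames = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t)    # synthetic 72 bpm pulsation
print(pulse_rate_from_frames(frames, fps))
```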
The Tail Exponent for Stock Returns in Bursa Malaysia for 2003-2008
NASA Astrophysics Data System (ADS)
Rusli, N. H.; Gopir, G.; Usang, M. D.
2010-07-01
Econophysics applies mathematical tools originally developed for physical models to the study of financial systems. In this study, we analyse the time-series behaviour of several blue chip and penny stock companies in the Main Market of Bursa Malaysia. The basic quantity used is the relative price change, or stock price return, sampled daily from the beginning of 2003 until the end of 2008 (1555 recorded trading days). The aim of this paper is to investigate the tail exponent of the return distributions of blue chip and penny stocks over this six-year period. Using a standard regression method, we find that the distribution exhibits double scaling on the log-log plot of the cumulative probability of the normalized returns. We therefore calculate α separately for small-scale and large-scale returns. The probability density function of the absolute stock price returns shows power-law behaviour, P(z) ~ z^(-α), with values of α lying both inside and outside the Lévy stable regime, including values α > 2. All results are discussed in detail.
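The standard regression method referred to above estimates the tail exponent as the negative slope of the log-log plot of the complementary cumulative distribution of normalized returns. The sketch below applies it to synthetic heavy-tailed returns; the choice of the largest 10% of observations as the tail region is an assumption for the example.

```python
import numpy as np

rng = np.random.default_rng(6)
returns = rng.standard_t(df=3, size=5000)           # heavy-tailed synthetic returns
z = (returns - returns.mean()) / returns.std()      # normalized returns

tail = np.sort(np.abs(z))[-500:]                    # largest 10% as the tail region
ccdf = np.arange(len(tail), 0, -1) / len(z)         # empirical P(|z| > x) at those points

slope, _ = np.polyfit(np.log(tail), np.log(ccdf), 1)
print("tail exponent alpha ~", -slope)              # ~3 expected for Student-t(3) returns
```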
2013-01-01
Background The objectives of this study were to assess the patterns of treatment seeking behaviour for children under five with malaria, and to examine the statistical relationship between out-of-pocket expenditure (OOP) on malaria treatment for under-fives and the source of treatment, place of residence, education and wealth characteristics of Uganda households. OOP expenditure on health care is now a development concern due to its negative effect on households' ability to finance consumption of other basic needs. Methods The 2009 Uganda Malaria Indicator Survey was the source of data on treatment seeking behaviour for under-five children with malaria, and on patterns and levels of OOP expenditure for malaria treatment. Binomial logit and Log-lin regression models were estimated. In the logit model the dependent variable was a dummy (1=incurred some OOP, 0=none incurred) and the independent variables were wealth quintiles, rural versus urban, place of treatment, education level, sub-region, and normal duty disruption. The dependent variable in the Log-lin model was the natural logarithm of OOP and the independent variables were the same as above. Results Five key findings emerge from the descriptive analysis. First, malaria is quite prevalent at 44.7% among children below the age of five. Second, a significant proportion seeks treatment (81.8%). Third, private providers are the preferred option for the treatment of malaria in under-fives. Fourth, the majority (about 70.9%) pay for either consultation, medicines, transport or hospitalization, but the biggest share of those who pay do so for medicines (54.0%). Fifth, hospitalization is the most expensive at an average expenditure of US$7.6 per child, even though only 2.9% of those that seek treatment are hospitalized. The binomial logit model slope coefficients for the richest wealth quintile, private facility as first source of treatment, the sub-regions Central 2, East Central, Mid-Eastern and Mid-Western, and normal duties disrupted were positive and statistically significant at the 99% level of confidence. On the other hand, the Log-lin model slope coefficients for the traditional healer, sought treatment from one source, primary educational level, North East, Mid Northern and West Nile variables had a negative sign and were statistically significant at the 95% level of confidence. Conclusion The fact that OOP expenditure is still prevalent and private providers are the preferred choice means that increasing public provision may not be the sole answer. Plans to improve malaria treatment should explicitly incorporate efforts to protect households from high OOP expenditures. This calls for provision of subsidies to enable the private sector to reduce prices, regulation of prices of malaria medicines, and reduction/removal of import duties on such medicines. PMID:23721217
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, S; Ho, M; Chen, C
Purpose: The use of log files to perform patient-specific quality assurance for both protons and IMRT has been established. Here, we extend that approach to a proprietary log file format and compare our results to measurements in phantom. Our goal was to generate a system that would permit gross errors to be found within 3 fractions until direct measurements. This approach could eventually replace direct measurements. Methods: Spot scanning protons pass through multi-wire ionization chambers which provide information about the charge, location, and size of each delivered spot. We have generated a program that calculates the dose in phantom from these log files and compares the measurements with the plan. The program has 3 different spot shape models: single Gaussian, double Gaussian and the ASTROID model. The program was benchmarked across different treatment sites for 23 patients and 74 fields. Results: The doses calculated from the log files were compared to those generated by the treatment planning system (RayStation). While the dual Gaussian model often gave better agreement, overall, the ASTROID model gave the most consistent results. Using a 5%-3 mm gamma criterion with a 90% passing threshold and excluding doses below 20% of prescription, all patient samples passed. However, the degree of agreement of the log file approach was slightly worse than that of the chamber array measurement approach. Operationally, this implies that if the beam passes the log file model, it should pass direct measurement. Conclusion: We have established and benchmarked a model for log file QA in an IBA Proteus Plus system. The choice of optimal spot model for a given class of patients may be affected by factors such as site, field size, and range shifter, and will be investigated further.
Friesen, Melissa C; Demers, Paul A; Spinelli, John J; Lorenzi, Maria F; Le, Nhu D
2007-04-01
The association between coal tar-derived substances, a complex mixture of polycyclic aromatic hydrocarbons, and cancer is well established. However, the specific aetiological agents are unknown. The aims were to compare the dose-response relationships for two common measures of coal tar-derived substances, benzene-soluble material (BSM) and benzo(a)pyrene (BaP), and to evaluate which of these is more strongly related to the health outcomes. The study population consisted of 6423 men with ≥3 years of work experience at an aluminium smelter (1954-97). Three health outcomes identified from national mortality and cancer databases were evaluated: incidence of bladder cancer (n = 90), incidence of lung cancer (n = 147) and mortality due to acute myocardial infarction (AMI, n = 184). The shape, magnitude and precision of the dose-response relationships and cumulative exposure levels for BSM and BaP were evaluated. Two model structures were assessed, in which ln(relative risk) increased with cumulative exposure (log-linear model) or with log-transformed cumulative exposure (log-log model). The BaP and BSM cumulative exposure metrics were highly correlated (r = 0.94). The increase in model precision using BaP over BSM was 14% for bladder cancer and 5% for lung cancer; no difference was observed for AMI. The log-linear BaP model provided the best fit for bladder cancer. The log-log dose-response models, where risk of disease plateaus at high exposure levels, were the best-fitting models for lung cancer and AMI. BaP and BSM were both strongly associated with bladder and lung cancer and modestly associated with AMI. Similar conclusions regarding the associations could be made regardless of the exposure metric.
Duan, Zhi; Hansen, Terese Holst; Hansen, Tina Beck; Dalgaard, Paw; Knøchel, Susanne
2016-08-02
With low temperature long time (LTLT) cooking it can take hours for meat to reach a final core temperature above 53°C, and germination followed by growth of Clostridium perfringens is a concern. Available and new growth data in meats, including 154 lag times (tlag), 224 maximum specific growth rates (μmax) and 25 maximum population densities (Nmax), were used to develop a model to predict growth of C. perfringens during the coming-up time of LTLT cooking. New data were generated in 26 challenge tests with chicken (pH 6.8) and pork (pH 5.6) at two different slowly increasing temperature (SIT) profiles (10°C to 53°C) followed by 53°C, for up to 30 h in total. Three inoculum types were studied, including vegetative cells, non-heated spores and heat-activated (75°C, 20 min) spores of C. perfringens strain 790-94. Concentrations of vegetative cells in chicken increased 2 to 3 log CFU/g during the SIT profiles. Similar results were found for non-heated and heated spores in chicken, whereas in pork C. perfringens 790-94 increased less than 1 log CFU/g. At 53°C, C. perfringens 790-94 was log-linearly inactivated. Observed and predicted concentrations of C. perfringens at the time when 53°C was reached (log(N53)) were used to evaluate the new growth model and three available predictive models previously published for C. perfringens growth during cooling rather than during SIT profiles. Model performance was evaluated using the mean deviation (MD), mean absolute deviation (MAD) and the acceptable simulation zone (ASZ) approach with a zone of ±0.5 log CFU/g. The new model showed the best performance, with MD = 0.27 log CFU/g, MAD = 0.66 log CFU/g and ASZ = 67%. The two growth models that performed best were used together with a log-linear inactivation model and D53-values from the present study to simulate the behaviour of C. perfringens under the fast and slow SIT profiles investigated in the present study. Observed and predicted concentrations were compared using a new fail-safe acceptable zone (FSAZ) method. FSAZ was defined as the predicted concentration of C. perfringens plus 0.5 log CFU/g. If at least 85% of the observed log-counts were below the FSAZ, the model was considered fail-safe. The two models showed similar performance, but neither performed satisfactorily for all conditions. It is recommended to use the models without a lag phase until more precise lag time models become available. Copyright © 2016 Elsevier B.V. All rights reserved.
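The log-linear inactivation used at 53°C has the familiar form log10 N(t) = log10 N0 - t/D. The sketch below evaluates it for an illustrative D53-value, which is an assumption for the example and not the study's fitted value.

```python
import numpy as np

def log_linear_inactivation(N0, t_minutes, D_value):
    """Log-linear thermal inactivation: log10 N(t) = log10 N0 - t/D."""
    return 10 ** (np.log10(N0) - t_minutes / D_value)

D53 = 30.0                        # illustrative D-value at 53 C: minutes per 1-log reduction
t = np.array([0, 30, 60, 120])    # minutes held at 53 C
print(log_linear_inactivation(1e5, t, D53))   # surviving concentration, CFU/g
```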
Vilar, Santiago; Chakrabarti, Mayukh; Costanzi, Stefano
2010-01-01
The distribution of compounds between blood and brain is a very important consideration for new candidate drug molecules. In this paper, we describe the derivation of two linear discriminant analysis (LDA) models for the prediction of passive blood-brain partitioning, expressed in terms of log BB values. The models are based on computationally derived physicochemical descriptors, namely the octanol/water partition coefficient (log P), the topological polar surface area (TPSA) and the total number of acidic and basic atoms, and were obtained using a homogeneous training set of 307 compounds, for all of which the published experimental log BB data had been determined in vivo. In particular, since molecules with log BB > 0.3 cross the blood-brain barrier (BBB) readily while molecules with log BB < −1 are poorly distributed to the brain, on the basis of these thresholds we derived two distinct models, both of which show a percentage of good classification of about 80%. Notably, the predictive power of our models was confirmed by the analysis of a large external dataset of compounds with reported activity on the central nervous system (CNS) or lack thereof. The calculation of straightforward physicochemical descriptors is the only requirement for the prediction of the log BB of novel compounds through our models, which can be conveniently applied in conjunction with drug design and virtual screenings. PMID:20427217
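A sketch of a two-class linear discriminant analysis on the descriptor set named above (log P, TPSA, counts of acidic and basic atoms) is given below using scikit-learn. The feature values are a synthetic stand-in, not the 307-compound training set, and the class separation is invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)

# Columns: logP, TPSA, n_acidic, n_basic (synthetic stand-in descriptors)
X_pos = np.column_stack([rng.normal(3.0, 1.0, 100), rng.normal(50, 20, 100),
                         rng.integers(0, 2, 100), rng.integers(0, 3, 100)])
X_neg = np.column_stack([rng.normal(1.0, 1.0, 100), rng.normal(110, 30, 100),
                         rng.integers(0, 4, 100), rng.integers(0, 3, 100)])
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 100 + [0] * 100)   # 1 = log BB > 0.3 (BBB permeant), 0 = log BB < -1

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
```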
Fuzzy inference system for identification of geological stratigraphy off Prydz Bay, East Antarctica
NASA Astrophysics Data System (ADS)
Singh, Upendra K.
2011-12-01
The analysis of well logging data plays a key role in the exploration and development of hydrocarbon reservoirs. Various well log parameters, such as porosity, gamma ray, density, transit time and resistivity, help in classifying strata and estimating the physical, electrical and acoustical properties of the subsurface lithology. Strong and conspicuous changes in some of the log parameters associated with a particular stratigraphic formation are a function of its composition and physical properties, and these help in classification. However, some substrata show moderate values in the respective log parameters, making it difficult to identify the kind of strata from the standard variability ranges of the log parameters and visual inspection alone. The complexity increases further as more sensors are involved. An attempt is made to identify stratigraphy from well logs over the Prydz Bay basin, East Antarctica, using a fuzzy inference system. A model is built from a few data sets of known stratigraphy, and the trained model is then used to infer the lithology of a borehole from geophysical logs not used in the simulation. Initially the fuzzy-based algorithm is trained, validated and tested on well log data, and it finally identifies the formation lithology of a hydrocarbon reservoir system in the study area. The effectiveness of this technique is demonstrated by the analysis of the results against actual lithologs and coring data of ODP Leg 188. The fuzzy results show that the training performance equals 82.95%, while the prediction ability is 87.69%. The fuzzy results are very encouraging, and the model is able to decipher even thin seams and other strata from geophysical logs. The result identifies a significant sand formation in the depth range 316.0-341.0 m, where core recovery is incomplete.
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J². © 2015 John Wiley & Sons Ltd/London School of Economics.
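A plain-numpy sketch of the Hosmer-Lemeshow grouping idea mentioned above follows: sort observations by fitted probability, form roughly equal-sized groups, and compare observed and expected counts with a chi-square statistic on g - 2 degrees of freedom. The TG and J² statistics require the model's score components and are not reproduced; the data here are synthetic and the choice of ten groups is conventional rather than mandated.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, n_groups=10):
    """Hosmer-Lemeshow-type GOF statistic from outcomes y and fitted probabilities p_hat."""
    order = np.argsort(p_hat)
    groups = np.array_split(order, n_groups)
    stat = 0.0
    for g in groups:
        obs = y[g].sum()                 # observed events in the group
        exp = p_hat[g].sum()             # expected events in the group
        n = len(g)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n))
    p_value = chi2.sf(stat, df=n_groups - 2)
    return stat, p_value

rng = np.random.default_rng(8)
x = rng.normal(size=2000)
p_true = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))   # correctly specified probabilities
y = rng.binomial(1, p_true)
print(hosmer_lemeshow(y, p_true))             # should not reject under correct specification
```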
Speed, spatial, and temporal tuning of rod and cone vision in mouse.
Umino, Yumiko; Solessio, Eduardo; Barlow, Robert B
2008-01-02
Rods and cones subserve mouse vision over a 100 million-fold range of light intensity (-6 to 2 log cd m⁻²). Rod pathways tune vision to the temporal frequency of stimuli (peak, 0.75 Hz) and cone pathways to their speed (peak, approximately 12 degrees/s). Both pathways tune vision to the spatial components of stimuli (0.064-0.128 cycles/degree). The specific photoreceptor contributions were determined by two-alternative, forced-choice measures of contrast thresholds for optomotor responses of C57BL/6J mice with normal vision, Gnat2(cpfl3) mice without functional cones, and Gnat1-/- mice without functional rods. Gnat2(cpfl3) mice (threshold, -6.0 log cd m⁻²) cannot see rotating gratings above -2.0 log cd m⁻² (photopic vision), and Gnat1-/- mice (threshold, -4.0 log cd m⁻²) are blind below -4.0 log cd m⁻² (scotopic vision). Both genotypes can see in the transitional mesopic range (-4.0 to -2.0 log cd m⁻²). Mouse rod and cone sensitivities are similar to those of human. This parametric study characterizes the functional properties of the mouse visual system, revealing the rod and cone contributions to contrast sensitivity and to the temporal processing of visual stimuli.
Daniel A. Yaussy
1989-01-01
Multivariate regression models were developed to predict green board-foot yields (1 board ft = 2.360 dm³) for the standard factory lumber grades processed from black cherry (Prunus serotina Ehrh.) and red maple (Acer rubrum L.) factory grade logs sawed at band and circular sawmills. The models use log...
Models of Compensation (MODCOMP): Policy Analyses and Unemployment Effects
2008-08-01
Only fragments of the regression output are recoverable here. Three binary logit models were estimated on the same sample (N = 11954, with N0 = 6895 zeros and N1 = 5059 ones) against an intercept-only log-likelihood LogL0 = -8144.33, and compared using the Estrella pseudo R-squared, Estrella = 1 - (L/L0)^(-2*L0/n). The three reported fits have LogL = -7024.00 (Estrella = 0.18262), LogL = -6808.94 (Estrella = 0.21653), and LogL = -6823.04 (Estrella = 0.21432).
Statistical distribution of building lot frontage: application for Tokyo downtown districts
NASA Astrophysics Data System (ADS)
Usui, Hiroyuki
2018-03-01
The frontage of a building lot is the determinant factor of the residential environment. The statistical distribution of building lot frontages shows how the perimeters of urban blocks are shared by building lots for a given density of buildings and roads. For practitioners in urban planning, this is indispensable for identifying potential districts that comprise a high percentage of building lots with narrow frontages after subdivision, and for reconsidering the appropriate criteria for the density of buildings and roads as residential environment indices. In the literature, however, the statistical distribution of building lot frontages in relation to the density of buildings and roads has not been fully researched. In this paper, based on an empirical study of the downtown districts of Tokyo, it is found that (1) a log-normal distribution fits the observed distribution of building lot frontages better than a gamma distribution, which is the model of the size distribution of Poisson Voronoi cells on closed curves; (2) the distribution of building lot frontages follows a log-normal distribution whose parameters are the gross building density, road density, average road width, the coefficient of variation of building lot frontage, and the ratio of the number of building lot frontages to the number of buildings; and (3) the values of the coefficient of variation of building lot frontages and of the ratio of the number of building lot frontages to the number of buildings are approximately 0.60 and 1.19, respectively.
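The distributional comparison described above, log-normal versus gamma, can be sketched with scipy by fitting both families to frontage data and comparing log-likelihoods (or AIC). The frontage values below are synthetic, generated under assumed log-normal parameters rather than taken from the Tokyo data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
frontages = rng.lognormal(mean=np.log(5.0), sigma=0.6, size=1000)   # synthetic frontages, metres

fits = {
    "lognorm": stats.lognorm(*stats.lognorm.fit(frontages, floc=0)),
    "gamma": stats.gamma(*stats.gamma.fit(frontages, floc=0)),
}
for name, dist in fits.items():
    loglik = np.sum(dist.logpdf(frontages))
    aic = 2 * 2 - 2 * loglik      # two free parameters each (location fixed at 0)
    print(name, "log-likelihood:", round(loglik, 1), "AIC:", round(aic, 1))
```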
NASA Astrophysics Data System (ADS)
Tian, K.; Gosvami, N. N.; Goldsby, D. L.; Carpick, R. W.
2015-12-01
Rate and state friction (RSF) laws are empirical relationships that describe the frictional behavior of rocks and other materials in experiments, and reproduce a variety of observed natural behavior when employed in earthquake models. A pervasive observation from rock friction experiments is the linear increase of static friction with the log of contact time, or 'ageing'. Ageing is usually attributed to an increase in real area of contact associated with asperity creep. However, recent atomic force microscopy (AFM) experiments demonstrate that ageing of nanoscale silica-silica contacts is due to progressive formation of interfacial chemical bonds in the absence of plastic deformation, in a manner consistent with the multi-contact ageing behavior of rocks [Li et al., 2011]. To further investigate chemical bonding-induced ageing, we explored the influence of normal load (and thus contact normal stress) and contact time on ageing. Experiments that mimic slide-hold-slide rock friction experiments were conducted in the AFM for contact loads and hold times ranging from 23 to 393 nN and 0.1 to 100 s, respectively, all in humid air (~50% RH) at room temperature. Experiments were conducted by sequentially sliding the AFM tip on the sample at a velocity V of 0.5 μm/s, setting V to zero and holding the tip stationary for a given time, and finally resuming sliding at 0.5 μm/s to yield a peak value of friction followed by a drop to the sliding friction value. Chemical bonding-induced ageing, as measured by the peak friction minus the sliding friction, increases approximately linearly with the product of normal load and the log of the hold time. Theoretical studies of the roles of reaction energy barriers in nanoscale ageing indicate that frictional ageing depends on the total number of reaction sites and the hold time [Liu & Szlufarska, 2012]. We combine chemical kinetics analyses with contact mechanics models to explain our results, and develop a new approach for curve fitting ageing vs. load data which shows that the friction drop data points all fall on a master curve. The analysis yields physically reasonable values for the activation energy and activation volume of the chemical bonding process. Our study provides a basis to hypothesize that the kinetic processes in chemical bonding-induced ageing do not depend strongly on normal load.
Evaluating phenanthrene sorption on various wood chars
James, G.; Sabatini, D.A.; Chiou, C.T.; Rutherford, D.; Scott, A.C.; Karapanagioti, H.K.
2005-01-01
A certain amount of wood char or soot in a soil or sediment sample may cause the sorption of organic compounds to deviate significantly from the linear partitioning commonly observed with soil organic matter (SOM). Laboratory-produced and field wood chars were obtained and analyzed for their sorption isotherms of a model solute (phenanthrene) from water solution. The uptake capacities and nonlinear sorption effects with the laboratory wood chars are similar to those with the field wood chars. For phenanthrene aqueous concentrations of 1 μg L-1, the organic carbon-normalized sorption coefficients (log Koc) ranged from 5.0 to 6.4 for field chars and 5.4-7.3 for laboratory wood chars, which is consistent with literature values (5.6-7.1). Data with artificial chars suggest that the variation in sorption potential can be attributed to heating temperature and starting material, and that both the quantity and heterogeneity of surface area impact the sorption capacity. These results thus help to corroborate and explain the range of log Koc values reported in previous research for aquifer materials containing wood chars. © 2004 Elsevier Ltd. All rights reserved.
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
Grain size distribution in sheared polycrystals
NASA Astrophysics Data System (ADS)
Sarkar, Tanmoy; Biswas, Santidan; Chaudhuri, Pinaki; Sain, Anirban
2017-12-01
Plastic deformation in solids induced by external stresses is of both fundamental and practical interest. Using both phase field crystal modeling and molecular dynamics simulations, we study the shear response of monocomponent polycrystalline solids. We subject mesoscale polycrystalline samples to constant strain rates in a planar Couette flow geometry to study their plastic flow, in particular the grain deformation dynamics. As opposed to equilibrium solids, where grain dynamics is mainly driven by thermal diffusion, external stress/strain induces a much higher level of grain deformation activity in the form of grain rotation, coalescence, and breakage, mediated by dislocations. Despite this, the grain size distribution of this driven system shows only a weak power-law correction to its equilibrium log-normal behavior. We interpret the grain reorganization dynamics using a stochastic model.
Log polar image sensor in CMOS technology
NASA Astrophysics Data System (ADS)
Scheffer, Danny; Dierickx, Bart; Pardo, Fernando; Vlummens, Jan; Meynants, Guy; Hermans, Lou
1996-08-01
We report on the design, design issues, fabrication and performance of a log-polar CMOS image sensor. The sensor is developed for use in a videophone system for deaf and hearing-impaired people who cannot communicate through a 'normal' telephone. The system allows 15 detailed images per second to be transmitted over existing telephone lines, a frame rate sufficient for conversations by means of sign language or lip reading. The pixel array of the sensor consists of 76 concentric circles with (up to) 128 pixels per circle, 8013 pixels in total. The pixel pitch is 14 micrometers in the interior, increasing to 250 micrometers at the border. The 8013-pixel image is mapped (log-polar transformation) into an X-Y addressable 76 by 128 array.
Comparison of star formation rates from Hα and infrared luminosity as seen by Herschel
NASA Astrophysics Data System (ADS)
Domínguez Sánchez, H.; Mignoli, M.; Pozzi, F.; Calura, F.; Cimatti, A.; Gruppioni, C.; Cepa, J.; Sánchez Portal, M.; Zamorani, G.; Berta, S.; Elbaz, D.; Le Floc'h, E.; Granato, G. L.; Lutz, D.; Maiolino, R.; Matteucci, F.; Nair, P.; Nordon, R.; Pozzetti, L.; Silva, L.; Silverman, J.; Wuyts, S.; Carollo, C. M.; Contini, T.; Kneib, J.-P.; Le Fèvre, O.; Lilly, S. J.; Mainieri, V.; Renzini, A.; Scodeggio, M.; Bardelli, S.; Bolzonella, M.; Bongiorno, A.; Caputi, K.; Coppa, G.; Cucciati, O.; de la Torre, S.; de Ravel, L.; Franzetti, P.; Garilli, B.; Iovino, A.; Kampczyk, P.; Knobel, C.; Kovač, K.; Lamareille, F.; Le Borgne, J.-F.; Le Brun, V.; Maier, C.; Magnelli, B.; Pelló, R.; Peng, Y.; Perez-Montero, E.; Ricciardelli, E.; Riguccini, L.; Tanaka, M.; Tasca, L. A. M.; Tresse, L.; Vergani, D.; Zucca, E.
2012-10-01
We empirically test the relation between the SFR(LIR) derived from the infrared luminosity, LIR, and the SFR(Hα) derived from the Hα emission line luminosity using simple conversion relations. We use a sample of 474 galaxies at z = 0.06-0.46 with both Hα detection [from the 20k redshift Cosmological Evolution (zCOSMOS) survey] and new far-IR Herschel data (100 and 160 μm). We derive SFR(Hα) from the extinction-corrected Hα emission line luminosity. We find a very clear trend between E(B - V) and LIR that allows us to estimate extinction values for each galaxy even if the Hβ emission line measurement is not reliable. We calculate LIR by integrating, from 8 up to 1000 μm, the spectral energy distribution (SED) that best fits our data. We compare the SFR(Hα) with the SFR(LIR). We find a very good agreement between the two star formation rate (SFR) estimates, with a slope of m = 1.01 ± 0.03 in the log SFR(LIR) versus log SFR(Hα) diagram, a normalization constant of a = -0.08 ± 0.03 and a dispersion of σ = 0.28 dex. We study the effect of some intrinsic properties of the galaxies on the SFR(LIR)-SFR(Hα) relation, such as the redshift, the mass, the specific star formation rate (SSFR) and the metallicity. The metallicity is the parameter that affects the SFR comparison the most. The mean ratio of the two SFR estimators log[SFR(LIR)/SFR(Hα)] varies by ˜0.6 dex from metal-poor to metal-rich galaxies [8.1 < log (O/H) + 12 < 9.2]. This effect is consistent with the prediction of a theoretical model for the dust evolution in spiral galaxies. Considering different morphological types, we find a very good agreement between the two SFR indicators for the Sa, Sb and Sc morphologically classified galaxies, both in slope and in normalization. For the Sd, irregular sample (Sd/Irr), the formal best-fitting slope becomes much steeper (m = 1.62 ± 0.43), but it is still consistent with 1 at the 1.5σ level because of the reduced statistics of this sub-sample. Herschel is a European Space Agency (ESA) space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
Narrow log-periodic modulations in non-Markovian random walks
NASA Astrophysics Data System (ADS)
Diniz, R. M. B.; Cressoni, J. C.; da Silva, M. A. A.; Mariz, A. M.; de Araújo, J. M.
2017-12-01
What are the necessary ingredients for log-periodicity to appear in the dynamics of a random walk model? Can they be subtle enough to be overlooked? Previous studies suggest that long-range damaged memory and negative feedback together are necessary conditions for the emergence of log-periodic oscillations. The role of negative feedback would then be crucial, forcing the system to change direction. In this paper we show that small-amplitude log-periodic oscillations can emerge when the system is driven by positive feedback. Due to their very small amplitude, these oscillations can easily be mistaken for numerical finite-size effects. The models we use consist of discrete-time random walks with strong memory correlations where the decision process is taken from memory profiles based either on a binomial distribution or on a delta distribution. Anomalous superdiffusive behavior and log-periodic modulations are shown to arise in the large time limit for convenient choices of the model parameters.
Electronic Warfare M-on-N Digital Simulation Logging Requirements and HDF5: A Preliminary Analysis
2017-04-12
The goal of this report is to investigate logging of EW simulations not at the level of implementation in a database management ... differences of the logging stream and relational models. A hierarchical navigation query style appears very natural for our application. ...
Distribution of runup heights of the December 26, 2004 tsunami in the Indian Ocean
NASA Astrophysics Data System (ADS)
Choi, Byung Ho; Hong, Sung Jin; Pelinovsky, Efim
2006-07-01
A massive earthquake of magnitude 9.3 that occurred on December 26, 2004 off northern Sumatra generated huge tsunami waves that affected many coastal countries in the Indian Ocean. A number of field surveys were performed after this tsunami event; in particular, several surveys on the south/east coast of India, the Andaman and Nicobar Islands, Sri Lanka, Sumatra, Malaysia, and Thailand were organized by the Korean Society of Coastal and Ocean Engineers from January to August 2005. The spatial distribution of the tsunami runup is used to analyze the distribution function of the wave heights on different coasts. Theoretical interpretation of this distribution, associated with random coastal bathymetry and coastline geometry, leads to log-normal functions. The observed data are also in very good agreement with a log-normal distribution, confirming the important role of the variable ocean bathymetry in the formation of the irregular wave-height distribution along the coasts.
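The log-normal comparison described above can be reproduced with standard tools. The sketch below (a minimal illustration, not the authors' analysis) fits a two-parameter log-normal to synthetic runup heights standing in for the survey data and checks the fit with a Kolmogorov-Smirnov test.

    import numpy as np
    from scipy import stats

    # Synthetic runup heights (m) standing in for the survey measurements.
    rng = np.random.default_rng(0)
    runup = rng.lognormal(mean=1.0, sigma=0.6, size=200)

    # Two-parameter log-normal fit (location fixed at zero, as for wave heights).
    shape, loc, scale = stats.lognorm.fit(runup, floc=0)
    print(f"sigma = {shape:.3f}, median runup = {scale:.2f} m")

    # Goodness of fit of the fitted distribution against the sample.
    ks = stats.kstest(runup, "lognorm", args=(shape, loc, scale))
    print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")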
M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU
NASA Astrophysics Data System (ADS)
Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.
2018-04-01
Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
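As a rough illustration of the kind of fit described above, the following sketch runs a plain Metropolis random-walk sampler over the two parameters of a log-normal in ln(a). The "observed" semi-major axes are placeholders and the sampler is far simpler than the authors' MCMC setup.

    import numpy as np

    # Placeholder semi-major axes (AU) of detected giant planets.
    rng = np.random.default_rng(1)
    a_obs = rng.lognormal(mean=np.log(3.0), sigma=1.2, size=60)

    def log_like(mu, sigma):
        # Log-likelihood of a log-normal surface density in ln(a) (constants dropped).
        if sigma <= 0:
            return -np.inf
        x = np.log(a_obs)
        return np.sum(-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma))

    # Plain Metropolis random walk over (mu, sigma).
    theta = np.array([0.0, 1.0])
    ll = log_like(*theta)
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(scale=[0.1, 0.1])
        ll_prop = log_like(*prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain.append(theta.copy())

    chain = np.array(chain[5000:])            # discard burn-in
    mu_med, sig_med = np.median(chain, axis=0)
    print(f"peak at a ~ {np.exp(mu_med):.1f} AU, width ~ {sig_med:.2f} in ln(a)")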
An estimate of field size distributions for selected sites in the major grain producing countries
NASA Technical Reports Server (NTRS)
Podwysocki, M. H.
1977-01-01
The field size distributions for the major grain-producing countries of the world were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, the People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was computer analyzed for its frequency distribution. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching either a Poisson or a log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution whose moments are readily interpretable and useful for estimating the total population of fields. Resultant predictors of the field size estimates are discussed.
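The log transformation step can be illustrated as follows; the field areas are hypothetical stand-ins for the LANDSAT measurements, and the moment interpretations assume an ideal log-normal.

    import numpy as np
    from scipy.stats import skew

    # Hypothetical field areas (ha) standing in for a LANDSAT sub-sample.
    rng = np.random.default_rng(2)
    areas = rng.lognormal(mean=np.log(20.0), sigma=0.9, size=500)

    log_areas = np.log(areas)                 # log transformation -> approximately Gaussian
    mu, sigma = log_areas.mean(), log_areas.std(ddof=1)

    print(f"skewness before / after transform: {skew(areas):.2f} / {skew(log_areas):.2f}")
    print(f"geometric mean (= median of a log-normal): {np.exp(mu):.1f} ha")
    print(f"implied arithmetic mean: {np.exp(mu + 0.5 * sigma**2):.1f} ha")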
Flame surface statistics of constant-pressure turbulent expanding premixed flames
NASA Astrophysics Data System (ADS)
Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.
2014-04-01
In this paper we investigate the local flame surface statistics of constant-pressure turbulent expanding flames. First, the statistics of the local length ratio are experimentally determined from high-speed planar Mie scattering images of spherically expanding flames, with the length ratio on the measurement plane, at predefined equiangular sectors, defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming isotropic distribution of such flame segments, we then convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at the corresponding area-ratio pdfs. It is found that both the length-ratio and area-ratio pdfs are nearly log-normally distributed and show self-similar behavior with increasing radius. The near log-normality and rather intermittent behavior of the flame length ratio suggest similarity with dissipation-rate quantities, which motivates multifractal analysis.
The missing impact craters on Venus
NASA Technical Reports Server (NTRS)
Speidel, D. H.
1993-01-01
The size-frequency pattern of the 842 impact craters on Venus measured to date can be well described (across four standard deviation units) as a single log-normal distribution with a mean crater diameter of 14.5 km. This result was predicted in 1991 on examination of the initial Magellan analysis. If this observed distribution is close to the real distribution, the 'missing' 90 percent of the small craters and the 'anomalous' lack of surface splotches may thus be neither missing nor anomalous. I think that the missing craters and missing splotches can be satisfactorily explained by accepting that the observed distribution approximates the real one, and that it is not the craters that are missing but the impactors. What you see is what you got. The implication that Venus-crossing impactors would have the same type of log-normal distribution is consistent with the recently described distributions for terrestrial craters and Earth-crossing asteroids.
Empirical study of the tails of mutual fund size
NASA Astrophysics Data System (ADS)
Schwarzkopf, Yonathan; Farmer, J. Doyne
2010-06-01
The mutual fund industry manages about a quarter of the assets in the U.S. stock market and thus plays an important role in the U.S. economy. The question of how much control is concentrated in the hands of the largest players is best quantitatively discussed in terms of the tail behavior of the mutual fund size distribution. We study the distribution empirically and show that the tail is much better described by a log-normal than a power law, indicating less concentration than, for example, personal income. The results are highly statistically significant and are consistent across fifteen years. This contradicts a recent theory concerning the origin of the power law tails of the trading volume distribution. Based on the analysis in a companion paper, the log-normality is to be expected, and indicates that the distribution of mutual funds remains perpetually out of equilibrium.
Competing regression models for longitudinal data.
Alencar, Airlane P; Singer, Julio M; Rocha, Francisco Marcelo M
2012-03-01
The choice of an appropriate family of linear models for the analysis of longitudinal data is often a matter of concern for practitioners. To attenuate such difficulties, we discuss some issues that emerge when analyzing this type of data via a practical example involving pretest-posttest longitudinal data. In particular, we consider log-normal linear mixed models (LNLMM), generalized linear mixed models (GLMM), and models based on generalized estimating equations (GEE). We show how some special features of the data, like a nonconstant coefficient of variation, may be handled in the three approaches and evaluate their performance with respect to the magnitude of standard errors of interpretable and comparable parameters. We also show how different diagnostic tools may be employed to identify outliers and comment on available software. We conclude by noting that the results are similar, but that GEE-based models may be preferable when the goal is to compare the marginal expected responses. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng
2013-11-01
A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts on multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient; the functional form of this coefficient is found through four constraints, notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in the year 2003. This EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.
Radium-226 content of beverages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiefer, J.
Radium contents of commercially obtained beer, wine, milk and mineral waters were measured. All distributions were log-normal with the following geometric mean values: beer: 2.1 × 10^-2 Bq/L; wine: 3.4 × 10^-2 Bq/L; milk: 3 × 10^-3 Bq/L; normal mineral water: 4.3 × 10^-2 Bq/L; medical mineral water: 9.4 × 10^-2 Bq/L.
Non-Rayleigh Sea Clutter: Properties and Detection of Targets
1976-06-25
... should consult Guinard and Daley [7], which provides an overview of the theory and references all the important work. ... results for scattering from slightly rough surfaces and composite surfaces obtained by Rice [1], Wright [2,3], Valenzuela [4-6], Guinard and Daley [7], and ... for vertical polarization. In 1970, Trunk and George [10] considered the log-normal and contaminated-normal descriptions of sea clutter and calculated ...
Procedures for Geometric Data Reduction in Solid Log Modelling
Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt
1995-01-01
One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.
Li, Q; Li, W; Huang, Y; Chen, L
2016-11-01
The gamma-glutamyl transpeptidase-to-platelet ratio (GPR) is a new serum diagnostic model, reported to be more accurate than the aspartate transaminase-to-platelet ratio index (APRI) and the fibrosis index based on four factors (FIB-4) for the diagnosis of significant fibrosis and cirrhosis in chronic HBV infection (CHBVI) patients in West Africa. To evaluate the performance of the GPR model for the diagnosis of liver fibrosis and cirrhosis in HBeAg-positive CHBVI patients with high HBV DNA (≥5 log10 copies/mL) and normal or mildly elevated alanine transaminase (ALT) (≤2 times the upper limit of normal (ULN)) in China, a total of 1521 consecutive CHBVI patients who underwent liver biopsies and routine laboratory tests were retrospectively screened. Of these patients, 401 treatment-naïve HBeAg-positive patients with HBV DNA ≥5 log10 copies/mL and ALT ≤2 ULN were included. The METAVIR scoring system was adopted as the pathological diagnosis standard of liver fibrosis. Using liver histology as a gold standard, the performances of GPR, APRI, and FIB-4 for the diagnosis of liver fibrosis and cirrhosis were evaluated and compared by receiver operating characteristic (ROC) curves and the areas under the ROC curves (AUROCs). Of the 401 patients, 121 (30.2%), 49 (12.2%) and 17 (4.2%) were classified as having significant fibrosis (≥F2), severe fibrosis (≥F3) and cirrhosis (=F4), respectively. For the prediction of significant fibrosis, the performance of GPR (AUROC=0.66, 95% CI 0.60-0.72) was higher than that of APRI (AUROC=0.58, 95% CI 0.52-0.64, P=.002) and FIB-4 (AUROC=0.54, 95% CI 0.47-0.60, P<.001). For the prediction of severe fibrosis, the performance of GPR (AUROC=0.71, 95% CI 0.63-0.80) was also higher than that of APRI (AUROC=0.65, 95% CI 0.56-0.73, P=.003) and FIB-4 (AUROC=0.67, 95% CI 0.58-0.75, P=.001). For the prediction of cirrhosis, the performance of GPR (AUROC=0.73, 95% CI 0.56-0.88) was likewise higher than that of APRI (AUROC=0.69, 95% CI 0.54-0.83, P=.041) and FIB-4 (AUROC=0.69, 95% CI 0.55-0.82, P=.012). The GPR is a new serum model for the diagnosis of liver fibrosis and cirrhosis and shows clear advantages over APRI and FIB-4 in Chinese HBeAg-positive patients with HBV DNA ≥5 log10 copies/mL and ALT ≤2 ULN, warranting its widespread use for this specific population. © 2016 John Wiley & Sons Ltd.
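A minimal sketch of how such an AUROC comparison is computed is given below. The index formulas (GPR, APRI, FIB-4) are the commonly cited definitions rather than anything stated in the abstract, and the laboratory values are random placeholders, so the resulting AUROCs are uninformative; real patient data would be substituted.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Random placeholder laboratory values; real patient data would be substituted.
    rng = np.random.default_rng(3)
    n = 400
    ggt = rng.lognormal(3.5, 0.5, n)            # U/L
    ast = rng.lognormal(3.6, 0.4, n)            # U/L
    alt = rng.lognormal(3.6, 0.4, n)            # U/L
    plt_count = rng.normal(200, 40, n)          # platelets, 10^9/L
    age = rng.integers(20, 60, n)
    fibrosis = rng.integers(0, 2, n)            # 1 = significant fibrosis (METAVIR >= F2)

    # Commonly cited index definitions (assumed here, not quoted from the abstract).
    uln_ggt, uln_ast = 50.0, 40.0
    gpr = (ggt / uln_ggt) / plt_count * 100
    apri = (ast / uln_ast) / plt_count * 100
    fib4 = age * ast / (plt_count * np.sqrt(alt))

    for name, score in [("GPR", gpr), ("APRI", apri), ("FIB-4", fib4)]:
        print(name, "AUROC =", round(roc_auc_score(fibrosis, score), 3))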
Bucking logs to cable yarder capacity can decrease yarding costs and minimize wood wastage
Chris B. LeDoux
1986-01-01
Data from select time and motion studies and a forest model plot, used in a simulation model, show that logging managers planning felling, bucking, and limbing for a cable yarding operation must consider the effect of alternate bucking rules on wood wastage, yarding production rates and cost, the number of chokers to fly, and total logging costs. Results emphasize the...
C.B. LeDoux; J.E. Baumgras
1991-01-01
The impact of selected site and stand attributes on stand management is demonstrated using actual forest model plot data and a complete systems simulation model called MANAGE. The influence of terrain on the type of logging technology required to log a stand and the resulting impact on stand management is also illustrated. The results can be used by managers and...
Analytical approximations for effective relative permeability in the capillary limit
NASA Astrophysics Data System (ADS)
Rabinovich, Avinoam; Li, Boxiao; Durlofsky, Louis J.
2016-10-01
We present an analytical method for calculating the two-phase effective relative permeability, k_rj^eff, where j designates the phase (here CO2 and water), under steady-state and capillary-limit assumptions. These effective relative permeabilities may be applied in experimental settings and for upscaling in the context of numerical flow simulations, e.g., for CO2 storage. An exact solution for the effective absolute permeability, k_eff, in two-dimensional log-normally distributed isotropic permeability (k) fields is the geometric mean. We show that this does not hold for k_rj^eff, since log-normality is not maintained in the capillary-limit phase permeability field (K_j = k·k_rj) when capillary pressure, and thus the saturation field, is varied. Nevertheless, the geometric mean is still shown to be suitable for approximating k_rj^eff when the variance of ln k is low. For high-variance cases, we apply a correction to the geometric-average gas effective relative permeability using a Winsorized mean, which neglects large and small K_j values symmetrically. The analytical method is extended to anisotropically correlated log-normal permeability fields using power-law averaging. In these cases, the Winsorized-mean treatment is applied to the gas curves for cases described by negative power-law exponents (flow across incomplete layers). The accuracy of our analytical expressions for k_rj^eff is demonstrated through extensive numerical tests, using low-variance and high-variance permeability realizations with a range of correlation structures. We also present integral expressions for the geometric-mean and power-law average k_rj^eff for the systems considered, which enable derivation of closed-form series solutions for k_rj^eff without generating permeability realizations.
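The averaging operations mentioned above (geometric mean, power-law averaging, Winsorized mean) can be sketched as follows for a synthetic log-normal permeability sample; the 5% Winsorizing level and the field parameters are illustrative assumptions, not the authors' choices.

    import numpy as np
    from scipy.stats.mstats import winsorize

    # Synthetic log-normal permeability sample; the variance of ln(k) is the key control.
    rng = np.random.default_rng(4)
    k = np.exp(rng.normal(loc=0.0, scale=1.5, size=100_000))

    def power_law_average(x, p):
        # p -> 0 gives the geometric mean; p = 1 arithmetic; p = -1 harmonic.
        if abs(p) < 1e-12:
            return float(np.exp(np.mean(np.log(x))))
        return float(np.mean(x ** p) ** (1.0 / p))

    k_geo, k_arith, k_harm = (power_law_average(k, p) for p in (0.0, 1.0, -1.0))

    # Winsorized mean: clip the largest and smallest 5% symmetrically before averaging
    # (the 5% level is purely illustrative).
    k_win = float(np.mean(winsorize(k, limits=(0.05, 0.05))))

    print(f"geometric {k_geo:.2f}  arithmetic {k_arith:.2f}  "
          f"harmonic {k_harm:.2f}  winsorized {k_win:.2f}")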
Vinson, C C; Kanashiro, M; Sebbenn, A M; Williams, T C R; Harris, S A; Boshier, D H
2015-08-01
The impact of logging and subsequent recovery after logging is predicted to vary depending on specific life history traits of the logged species. The Eco-gene simulation model was used to evaluate the long-term impacts of selective logging over 300 years on two contrasting Brazilian Amazon tree species, Dipteryx odorata and Jacaranda copaia. D. odorata (Leguminosae), a slow growing climax tree, occurs at very low densities, whereas J. copaia (Bignoniaceae) is a fast growing pioneer tree that occurs at high densities. Microsatellite multilocus genotypes of the pre-logging populations were used as data inputs for the Eco-gene model and post-logging genetic data was used to verify the output from the simulations. Overall, under current Brazilian forest management regulations, there were neither short nor long-term impacts on J. copaia. By contrast, D. odorata cannot be sustainably logged under current regulations, a sustainable scenario was achieved by increasing the minimum cutting diameter at breast height from 50 to 100 cm over 30-year logging cycles. Genetic parameters were only slightly affected by selective logging, with reductions in the numbers of alleles and single genotypes. In the short term, the loss of alleles seen in J. copaia simulations was the same as in real data, whereas fewer alleles were lost in D. odorata simulations than in the field. The different impacts and periods of recovery for each species support the idea that ecological and genetic information are essential at species, ecological guild or reproductive group levels to help derive sustainable management scenarios for tropical forests.
Evaluation of estimation methods for organic carbon normalized sorption coefficients
Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.
1997-01-01
A critically evaluated set of 94 soil/water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors. The three types of descriptors used in predicting Koc were the octanol/water partition coefficient (Kow), molecular connectivity (mXt) and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third-order difference term from mXt. Molecular connectivity correlations seemed to be best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Koc. The correlation to predict Koc from Kow was developed for 72 chemicals: log Koc = 0.903 log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. The expression to determine the 95% confidence interval on the estimated Koc is provided, along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that Koc is not likely to be applicable for chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two 'commonly used' class-specific correlations to determine which method(s) are most suitable.
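A small sketch applying the reported correlation is shown below; the retardation-factor formula and the default bulk density and porosity are standard textbook assumptions added for illustration, not taken from the abstract, and the two example compounds are hypothetical.

    def estimate_log_koc(log_kow):
        """Reported correlation: log Koc = 0.903 * log Kow + 0.094
        (developed for log Kow between roughly 1.7 and 7.0)."""
        return 0.903 * log_kow + 0.094

    def retardation_factor(log_kow, f_oc, bulk_density=1.6, porosity=0.35):
        """R = 1 + (rho_b / n) * Kd with Kd = Koc * f_oc.
        The R formula and the default rho_b (kg/L) and n are standard
        textbook assumptions, not values from the abstract."""
        kd = 10 ** estimate_log_koc(log_kow) * f_oc      # L/kg
        return 1.0 + (bulk_density / porosity) * kd

    for name, log_kow in [("hypothetical compound A", 3.3), ("hypothetical compound B", 5.2)]:
        print(f"{name}: log Koc = {estimate_log_koc(log_kow):.2f}, "
              f"R (f_oc = 0.001) = {retardation_factor(log_kow, f_oc=0.001):.0f}")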
An empirical model for estimating annual consumption by freshwater fish populations
Liao, H.; Pierce, C.L.; Larscheid, J.G.
2005-01-01
Population consumption is an important process linking predator populations to their prey resources. Simple tools are needed to enable fisheries managers to estimate population consumption. We assembled 74 individual estimates of annual consumption by freshwater fish populations and their mean annual population size, 41 of which also included estimates of mean annual biomass. The data set included 14 freshwater fish species from 10 different bodies of water. From this data set we developed two simple linear regression models predicting annual population consumption. Log-transformed population size explained 94% of the variation in log-transformed annual population consumption. Log-transformed biomass explained 98% of the variation in log-transformed annual population consumption. We quantified the accuracy of our regressions and three alternative consumption models as the mean percent difference from observed (bioenergetics-derived) estimates in a test data set. Predictions from our population-size regression matched observed consumption estimates poorly (mean percent difference = 222%). Predictions from our biomass regression matched observed consumption reasonably well (mean percent difference = 24%). The biomass regression was superior to an alternative model, similar in complexity, and comparable to two alternative models that were more complex and difficult to apply. Our biomass regression model, log10(consumption) = 0.5442 + 0.9962·log10(biomass), will be a useful tool for fishery managers, enabling them to make reasonably accurate annual population consumption predictions from mean annual biomass estimates. © Copyright by the American Fisheries Society 2005.
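Applying the reported biomass regression is straightforward; in the sketch below the biomass units are assumed for illustration, since the abstract does not state them.

    import numpy as np

    def annual_consumption(biomass):
        """Reported regression: log10(C) = 0.5442 + 0.9962 * log10(B).
        Units follow whatever the biomass estimate uses (assumed kg here)."""
        return 10 ** (0.5442 + 0.9962 * np.log10(biomass))

    for b in (100.0, 1_000.0, 10_000.0):
        print(f"mean annual biomass {b:>8.0f} kg -> predicted consumption {annual_consumption(b):>9.0f} kg")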
Black Hole Variability in MHD: A Numerical Test of the Propagating Fluctuations Model
NASA Astrophysics Data System (ADS)
Hogg, J. Drew; Reynolds, Christopher S.
2017-08-01
The variability properties of accreting black hole systems offer a crucial probe of the accretion physics providing the angular momentum transport and enabling the mass accretion. A few of the most telling signatures are the characteristic log-normal flux distributions, linear RMS-flux relations, and frequency-dependent time lags between energy bands. These commonly observed properties are often interpreted as evidence of inward propagating mass accretion rate fluctuations where fluctuations in the accretion flow combine multiplicatively. We present recent results from a long, semi-global MHD simulation of a thin (h/r=0.1) accretion disk that naturally reproduces this phenomenology. This bolsters the theoretical underpinnings of the “propagating fluctuations” model and demonstrates the viability of this process manifesting in MHD turbulence driven by the magnetorotational instability. We find that a key ingredient to this model is the modulation of the effective α parameter by the magnetic dynamo.
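A toy version of the propagating-fluctuations picture can be written in a few lines: multiplying independent fluctuations generated at successive radii yields an approximately log-normal flux whose segment rms scales roughly linearly with the segment mean. The parameters below are illustrative, and the model is far simpler than the MHD simulation discussed above.

    import numpy as np

    # Toy propagating-fluctuations model: the inner accretion rate is a product of
    # independent fluctuations generated at successive radii, so its logarithm is a
    # sum and the flux tends toward a log-normal. All parameters are illustrative.
    rng = np.random.default_rng(5)
    n_radii, n_steps, amp = 30, 20000, 0.05

    mdot = np.ones(n_steps)
    for _ in range(n_radii):
        mdot *= 1.0 + amp * rng.standard_normal(n_steps)   # multiplicative combination

    # rms-flux relation: regress segment rms on segment mean.
    segs = mdot.reshape(-1, 100)
    slope, intercept = np.polyfit(segs.mean(axis=1), segs.std(axis=1), 1)
    skewness = float(((mdot - mdot.mean()) ** 3).mean() / mdot.std() ** 3)
    print(f"rms ~ {slope:.3f} * flux + {intercept:.3f}  (roughly linear)")
    print(f"flux skewness = {skewness:.2f}  (positive, log-normal-like)")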
Rupert, C.P.; Miller, C.T.
2008-01-01
We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady state groundwater flow model. We consider approaches for truncating the infinite dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use the acceleration of Monte Carlo to compare the quality of polynomial models obtained for all approaches and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method. PMID:18836519
Resistance distribution in the hopping percolation model.
Strelniker, Yakov M; Havlin, Shlomo; Berkovits, Richard; Frydman, Aviad
2005-07-01
We study the distribution function P(ρ) of the effective resistance ρ in two- and three-dimensional random resistor networks of linear size L in the hopping percolation model. In this model each bond has a conductivity taken from an exponential form σ ∝ exp(-κr), where κ is a measure of disorder and r is a random number, 0 ≤ r ≤ 1. We find that in both the usual strong-disorder regime L/κ^ν > 1 (not sensitive to removal of any single bond) and the extreme-disorder regime L/κ^ν < 1 (very sensitive to such a removal) the distribution depends only on L/κ^ν and can be well approximated by a log-normal function with dispersion bκ^ν/L, where b is a coefficient which depends on the type of lattice and ν is the correlation critical exponent.
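The model is easy to simulate directly on a small lattice. The sketch below assembles the network Laplacian for an L x L square lattice with bond conductances exp(-κr), solves for the node voltages under a unit voltage drop, and collects the resulting effective resistances; the lattice size, κ and the number of realizations are illustrative choices, not those of the study.

    import numpy as np

    def effective_resistance(L, kappa, rng):
        """Effective resistance between the left and right edges of an L x L lattice
        whose bond conductances are g = exp(-kappa * r), r ~ U(0, 1)."""
        n = L * L
        idx = lambda row, col: row * L + col
        G = np.zeros((n, n))                       # network Laplacian
        for row in range(L):
            for col in range(L):
                for dr, dc in ((0, 1), (1, 0)):    # right and down neighbours
                    r2, c2 = row + dr, col + dc
                    if r2 < L and c2 < L:
                        g = np.exp(-kappa * rng.uniform())
                        a, b = idx(row, col), idx(r2, c2)
                        G[a, a] += g; G[b, b] += g
                        G[a, b] -= g; G[b, a] -= g
        left = [idx(r, 0) for r in range(L)]
        right = [idx(r, L - 1) for r in range(L)]
        boundary = left + right
        free = [i for i in range(n) if i not in set(boundary)]
        v = np.zeros(n); v[left] = 1.0             # unit voltage drop across the sample
        # Interior node voltages: G_ff v_f = -G_fb v_b.
        rhs = -G[np.ix_(free, boundary)] @ v[boundary]
        v[free] = np.linalg.solve(G[np.ix_(free, free)], rhs)
        current = sum(G[i, :] @ v for i in left)   # total current injected at V = 1
        return 1.0 / current

    rng = np.random.default_rng(6)
    rho = np.array([effective_resistance(L=10, kappa=8.0, rng=rng) for _ in range(200)])
    log_rho = np.log(rho)
    print(f"mean(ln rho) = {log_rho.mean():.2f}, dispersion std(ln rho) = {log_rho.std():.2f}")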
Do wealth distributions follow power laws? Evidence from ‘rich lists’
NASA Astrophysics Data System (ADS)
Brzezinski, Michal
2014-07-01
We use data on the wealth of the richest persons taken from the 'rich lists' provided by business magazines like Forbes to verify if the upper tails of wealth distributions follow, as often claimed, a power-law behaviour. The data sets used cover the world's richest persons over 1996-2012, the richest Americans over 1988-2012, the richest Chinese over 2006-2012, and the richest Russians over 2004-2011. Using a recently introduced comprehensive empirical methodology for detecting power laws, which allows for testing the goodness of fit as well as for comparing the power-law model with rival distributions, we find that a power-law model is consistent with data only in 35% of the analysed data sets. Moreover, even if wealth data are consistent with the power-law model, they are usually also consistent with some rivals like the log-normal or stretched exponential distributions.
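A stripped-down version of the power-law versus log-normal comparison is sketched below: a Clauset-style maximum-likelihood power-law fit above a fixed x_min is compared, via log-likelihoods, with a log-normal truncated to the same tail. The data are synthetic placeholders, and fixing x_min is a simplification of the full methodology (which also estimates x_min and computes a Vuong-type p-value).

    import numpy as np
    from scipy import optimize, stats

    # Synthetic wealth tail (placeholders for the rich-list data); x_min fixed for brevity.
    rng = np.random.default_rng(7)
    x = rng.pareto(1.8, size=400) + 1.0
    x_min, n = 1.0, x.size

    # Power law above x_min: maximum-likelihood exponent and log-likelihood.
    alpha = 1.0 + n / np.sum(np.log(x / x_min))
    ll_pl = n * np.log((alpha - 1.0) / x_min) - alpha * np.sum(np.log(x / x_min))

    # Log-normal truncated to x >= x_min, fitted by maximum likelihood.
    def nll_lognormal(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        logpdf = stats.norm.logpdf(np.log(x), mu, sigma) - np.log(x)
        tail = stats.norm.sf((np.log(x_min) - mu) / sigma)   # mass above x_min
        return -(np.sum(logpdf) - n * np.log(tail))

    res = optimize.minimize(nll_lognormal, x0=[0.0, 1.0], method="Nelder-Mead")
    ll_ln = -res.fun

    # A positive ratio favours the power law, a negative one the log-normal.
    print(f"alpha = {alpha:.2f}, log-likelihood ratio (PL - LN) = {ll_pl - ll_ln:.1f}")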
Behavioral evaluation of visual function of rats using a visual discrimination apparatus.
Thomas, Biju B; Samant, Deedar M; Seiler, Magdalene J; Aramant, Robert B; Sheikholeslami, Sharzad; Zhang, Kevin; Chen, Zhenhai; Sadda, SriniVas R
2007-05-15
A visual discrimination apparatus was developed to evaluate the visual sensitivity of normal pigmented rats (n=13) and S334ter-line-3 retinal degenerate (RD) rats (n=15). The apparatus is a modified Y maze consisting of two chambers leading to the rats' home cage. Rats were trained to find a one-way exit door leading into their home cage, based on distinguishing between two different visual alternatives (either a dark background or black and white stripes at varying luminance levels) which were randomly displayed on the back of each chamber. Within 2 weeks of training, all rats were able to distinguish between these two visual patterns. The discrimination threshold of normal pigmented rats was a luminance level of -5.37 ± 0.05 log cd/m^2, whereas the threshold level of 100-day-old RD rats was -1.14 ± 0.09 log cd/m^2 with considerable variability in performance. When tested at a later age (about 150 days), the threshold level of RD rats was significantly increased (-0.82 ± 0.09 log cd/m^2, p<0.03, paired t-test). This apparatus could be useful to train rats at a very early age to distinguish between two different visual stimuli and may be effective for visual functional evaluations following therapeutic interventions.
Yu, S; Gao, S; Gan, Y; Zhang, Y; Ruan, X; Wang, Y; Yang, L; Shi, J
2016-04-01
Quantitative structure-property relationship modelling can be a valuable alternative method to replace or reduce experimental testing. In particular, some endpoints such as octanol-water (KOW) and organic carbon-water (KOC) partition coefficients of polychlorinated biphenyls (PCBs) are easier to predict and various models have been already developed. In this paper, two different methods, which are multiple linear regression based on the descriptors generated using Dragon software and hologram quantitative structure-activity relationships, were employed to predict suspended particulate matter (SPM) derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of 209 PCBs. The predictive ability of the derived models was validated using a test set. The performances of all these models were compared with EPI Suite™ software. The results indicated that the proposed models were robust and satisfactory, and could provide feasible and promising tools for the rapid assessment of the SPM derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of PCBs.
Statistical approaches for the determination of cut points in anti-drug antibody bioassays.
Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A
2015-03-01
Cut points in immunogenicity assays are used to classify future specimens as anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates, taking sources of future variability into account, such as runs, days, analysts, gender, drug spiking and the biological variability of un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may already contain ADA-positives or lead to signals that erroneously appear to be ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers and variance structure, and v) the choice between a normal and a log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly larger proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow the post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. Copyright © 2015 Elsevier B.V. All rights reserved.
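The impact of the normal versus log-normal choice on the cut point can be seen in a minimal sketch, which ignores the random-effects structure across runs, analysts and plates handled by the full models and the mixADA package:

    import numpy as np

    # Hypothetical normalized signals from drug-naive specimens (signal / plate NC mean).
    rng = np.random.default_rng(8)
    s = rng.lognormal(mean=0.0, sigma=0.25, size=120)

    # Parametric 95% screening cut points under the two distributional assumptions.
    cut_normal = s.mean() + 1.645 * s.std(ddof=1)
    log_s = np.log(s)
    cut_lognormal = np.exp(log_s.mean() + 1.645 * log_s.std(ddof=1))

    # Non-parametric 95th percentile for comparison.
    cut_empirical = np.quantile(s, 0.95)

    print(f"normal: {cut_normal:.3f}  log-normal: {cut_lognormal:.3f}  empirical: {cut_empirical:.3f}")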
Vucicevic, J; Popovic, M; Nikolic, K; Filipic, S; Obradovic, D; Agbaba, D
2017-03-01
For this study, 31 compounds, including 16 imidazoline/α-adrenergic receptor (IRs/α-ARs) ligands and 15 central nervous system (CNS) drugs, were characterized in terms of the retention factors (k) obtained using biopartitioning micellar and classical reversed-phase chromatography (log k_BMC and log k_wRP, respectively). Based on the retention factor (log k_wRP) and the slope of the linear curve (S), the isocratic parameter (φ0) was calculated. The obtained retention factors were correlated with experimental log BB values for the group of examined compounds. High correlations were obtained between the logarithm of the biopartitioning micellar chromatography (BMC) retention factor and effective permeability (r(log k_BMC/log BB) = 0.77), while for the RP-HPLC system the correlations were lower (r(log k_wRP/log BB) = 0.58; r(S/log BB) = -0.50; r(φ0/Pe) = 0.61). Based on the log k_BMC retention data and calculated molecular parameters of the examined compounds, quantitative structure-permeability relationship (QSPR) models were developed using partial least squares, stepwise multiple linear regression, support vector machine and artificial neural network methodologies. A high degree of structural diversity of the analysed IRs/α-ARs ligands and CNS drugs provides a wide applicability domain of the QSPR models for estimation of blood-brain barrier penetration of related compounds.
Power-law versus log-law in wall-bounded turbulence: A large-eddy simulation perspective
NASA Astrophysics Data System (ADS)
Cheng, W.; Samtaney, R.
2014-01-01
The debate whether the mean streamwise velocity in wall-bounded turbulent flows obeys a log-law or a power-law scaling originated over two decades ago, and continues to ferment in recent years. As experiments and direct numerical simulation cannot provide sufficient clues, in this study we present an insight into this debate from a large-eddy simulation (LES) viewpoint. The LES organically combines state-of-the-art models (the stretched-vortex model and inflow rescaling method) with a virtual-wall model derived under different scaling law assumptions (the log-law or the power-law by George and Castillo ["Zero-pressure-gradient turbulent boundary layer," Appl. Mech. Rev. 50, 689 (1997)]). Comparisons of LES results for Reθ ranging from 10^5 to 10^11 for zero-pressure-gradient turbulent boundary layer flows are carried out for the mean streamwise velocity, its gradient and its scaled gradient. Our results provide strong evidence that for both sets of modeling assumptions (log law or power law), the turbulence gravitates naturally towards the log-law scaling at extremely large Reynolds numbers.
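For concreteness, the two competing descriptions can be written down side by side. The sketch below uses the conventional log-law constants and the classical 1/7th-power approximation as an illustrative stand-in for the George-Castillo power law, whose Reynolds-number-dependent coefficients are not reproduced here.

    import numpy as np

    # Log law with conventional constants; the classical 1/7th-power approximation is
    # used as an illustrative stand-in for the George-Castillo power law.
    kappa, B = 0.41, 5.0
    C_pl, gamma = 8.74, 1.0 / 7.0                 # u+ ~ C * (y+)^gamma

    for y_plus in np.logspace(1.5, 4.0, 6):
        u_log = np.log(y_plus) / kappa + B
        u_pow = C_pl * y_plus ** gamma
        print(f"y+ = {y_plus:9.1f}   log-law u+ = {u_log:6.2f}   power-law u+ = {u_pow:6.2f}")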
Song, M; Ouyang, Z; Liu, Z L
2009-05-01
Composed of linear difference equations, a discrete dynamical system (DDS) model was designed to reconstruct transcriptional regulations in gene regulatory networks (GRNs) for ethanologenic yeast Saccharomyces cerevisiae in response to 5-hydroxymethylfurfural (HMF), a bioethanol conversion inhibitor. The modelling aims at identification of a system of linear difference equations to represent temporal interactions among significantly expressed genes. Power stability is imposed on a system model under the normal condition in the absence of the inhibitor. Non-uniform sampling, typical in a time-course experimental design, is addressed by a log-time domain interpolation. A statistically significant DDS model of the yeast GRN derived from time-course gene expression measurements by exposure to HMF, revealed several verified transcriptional regulation events. These events implicate Yap1 and Pdr3, transcription factors consistently known for their regulatory roles by other studies or postulated by independent sequence motif analysis, suggesting their involvement in yeast tolerance and detoxification of the inhibitor.
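Two of the modelling ingredients described above, power stability of a linear difference system and log-time-domain interpolation of non-uniform samples, can be sketched as follows; the matrix and time points are placeholders, not the fitted yeast network.

    import numpy as np

    # (i) Power stability of a linear difference system x_{t+1} = A x_t is equivalent to
    # the spectral radius of A being below one; the matrix is a placeholder.
    A = np.array([[0.6, 0.2, 0.0],
                  [-0.1, 0.5, 0.1],
                  [0.0, 0.3, 0.4]])
    rho = max(abs(np.linalg.eigvals(A)))
    print(f"spectral radius = {rho:.2f} -> power stable: {rho < 1}")

    # (ii) Non-uniformly sampled expression values interpolated in the log-time domain.
    t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])      # sampling times (h)
    expr = np.array([1.0, 1.4, 2.1, 2.6, 2.4, 1.8, 1.1])     # one gene, placeholder values
    t_grid = np.logspace(np.log10(t[0]), np.log10(t[-1]), 13)
    expr_grid = np.interp(np.log(t_grid), np.log(t), expr)
    print(np.round(expr_grid, 2))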
NASA Astrophysics Data System (ADS)
Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas
2018-02-01
Upscaling permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and best fit the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when using the multiple boundary method. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple-scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept, which makes it attractive for incorporation into existing flow-based upscaling procedures and helps to reduce the uncertainty of groundwater models.
NASA Technical Reports Server (NTRS)
Bigger, J. T. Jr; Steinman, R. C.; Rolnitzky, L. M.; Fleiss, J. L.; Albrecht, P.; Cohen, R. J.
1996-01-01
BACKGROUND. The purposes of the present study were (1) to establish normal values for the regression of log(power) on log(frequency) for RR-interval fluctuations in healthy middle-aged persons, (2) to determine the effects of myocardial infarction on the regression of log(power) on log(frequency), (3) to determine the effect of cardiac denervation on the regression of log(power) on log(frequency), and (4) to assess the ability of power law regression parameters to predict death after myocardial infarction. METHODS AND RESULTS. We studied three groups: (1) 715 patients with recent myocardial infarction; (2) 274 healthy persons age and sex matched to the infarct sample; and (3) 19 patients with heart transplants. Twenty-four-hour RR-interval power spectra were computed using fast Fourier transforms, and log(power) was regressed on log(frequency) between 10^-4 and 10^-2 Hz. There was a power law relation between log(power) and log(frequency); that is, the function described a descending straight line that had a slope of approximately -1 in healthy subjects. For the myocardial infarction group, the regression line for log(power) on log(frequency) was shifted downward and had a steeper negative slope (-1.15). The transplant (denervated) group showed a larger downward shift in the regression line and a much steeper negative slope (-2.08). The correlation between traditional power spectral bands and slope was weak, and that with log(power) at 10^-4 Hz was only moderate. Slope and log(power) at 10^-4 Hz were used to predict mortality and were compared with the predictive value of traditional power spectral bands. Slope and log(power) at 10^-4 Hz were excellent predictors of all-cause mortality or arrhythmic death. To optimize the prediction of death, we calculated a log(power) intercept that was uncorrelated with the slope of the power law regression line. We found that the combination of slope and zero-correlation log(power) was an outstanding predictor, with a relative risk of > 10, and was better than any combination of the traditional power spectral bands. The combination of slope and log(power) at 10^-4 Hz also was an excellent predictor of death after myocardial infarction. CONCLUSIONS. Myocardial infarction or denervation of the heart causes a steeper slope and decreased height of the power law regression relation between log(power) and log(frequency) of RR-interval fluctuations. Individually and, especially, combined, the power law regression parameters are excellent predictors of death of any cause or arrhythmic death, and predict these outcomes better than the traditional power spectral bands.
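The central computation, regressing log(power) on log(frequency) over 10^-4 to 10^-2 Hz, can be sketched as follows; a synthetic, evenly resampled tachogram with 1/f-type shaping stands in for the 24-hour RR-interval recordings.

    import numpy as np
    from scipy import signal

    # Synthetic, evenly resampled 24 h tachogram with 1/f-type shaping; real Holter-derived
    # RR series would be used in practice.
    rng = np.random.default_rng(9)
    fs, hours = 0.5, 24
    n = int(fs * 3600 * hours)
    white = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    shaping = np.where(freqs > 0, freqs ** -0.5, 0.0)   # amplitude ~ f^-1/2 -> power ~ 1/f
    rr = 0.8 + 0.05 * np.fft.irfft(white * shaping, n)

    # Welch spectrum, then regress log(power) on log(frequency) over 10^-4 to 10^-2 Hz.
    f, pxx = signal.welch(rr, fs=fs, nperseg=n // 8)
    band = (f >= 1e-4) & (f <= 1e-2)
    slope, intercept = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
    print(f"power-law slope = {slope:.2f}")
    print(f"log10(power) at 10^-4 Hz = {intercept + slope * (-4):.2f}")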
A methodological framework to assess the carbon balance of tropical managed forests.
Piponiot, Camille; Cabon, Antoine; Descroix, Laurent; Dourdain, Aurélie; Mazzei, Lucas; Ouliac, Benjamin; Rutishauser, Ervan; Sist, Plinio; Hérault, Bruno
2016-12-01
Managed forests are a major component of tropical landscapes. Production forests, as designated by national forest services, cover up to 400 million ha, i.e. half of the forested area in the humid tropics. Forest management thus plays a major role in the global carbon budget, but there is no unified method to estimate carbon fluxes from tropical managed forests. In this study we propose a new time- and spatially-explicit methodology to estimate the above-ground carbon budget of selective logging at regional scale. The yearly balance of a logging unit, i.e. the elementary management unit of a forest estate, is modelled by aggregating three sub-models encompassing (i) emissions from extracted wood, (ii) emissions from logging damage and deforested areas and (iii) carbon storage from post-logging recovery. Models are parametrised and uncertainties are propagated through an MCMC algorithm. As a case study, we used 38 years of National Forest Inventories in French Guiana, northeastern Amazonia, to estimate the above-ground carbon balance (i.e. the net carbon exchange with the atmosphere) of selectively logged forests. Over this period, the net carbon balance of selective logging in the French Guianan Permanent Forest Estate is estimated to lie between 0.12 and 1.33 Tg C, with a median value of 0.64 Tg C. Uncertainties over the model could be diminished by improving the accuracy of both the logging damage and the large woody necromass decay submodels. We propose an innovative carbon accounting framework relying upon basic logging statistics. This flexible tool allows the carbon budget of tropical managed forests to be estimated in a wide range of tropical regions.
Compacting biomass waste materials for use as fuel
NASA Astrophysics Data System (ADS)
Zhang, Ou
Every year, biomass waste materials are produced in large quantities. The combustibles in biomass waste materials make up over 70% of the total waste. How to utilize these waste materials is important to the nation and the world. The purpose of this study is to test optimum processes and conditions for compacting a number of biomass waste materials to form a densified solid fuel for use at coal-fired power plants or ordinary commercial furnaces. Successful use of such fuel as a substitute for or in cofiring with coal not only solves a solid waste disposal problem but also reduces the release of some gases from burning coal which cause health problems, acid rain and global warming. The unique punch-and-die process developed at the Capsule Pipeline Research Center, University of Missouri-Columbia, was used for compacting the solid wastes, including waste paper, plastics (both film and hard products), textiles, leaves, and wood. The compaction was performed to produce strong compacts (biomass logs) at room temperature without binder and without preheating. The compaction conditions important to the commercial production of densified biomass fuel logs, including compaction pressure, pressure holding time, back pressure, moisture content, particle size, binder effects, and mold conditions, were studied and optimized. The properties of the biomass logs were evaluated in terms of physical, mechanical, and combustion characteristics. It was found that the compaction pressure and the initial moisture content of the biomass material play critical roles in producing high-quality biomass logs. Under optimized compaction conditions, biomass waste materials can be compacted into high-quality logs with a density of 0.8 to 1.2 g/cm3. The logs made from the combustible wastes have a heating value in the range of 6,000 to 8,000 Btu/lb, which is only slightly (10 to 30%) less than that of subbituminous coal. To evaluate the feasibility of cofiring biomass logs with coal, burn tests were conducted in a stoker boiler. A separate burning test was also carried out by burning biomass logs alone in an outdoor hot-water furnace for heating a building. Based on a previous coal compaction study, the process of biomass compaction was studied numerically by use of a non-linear finite element code. A constitutive model with sufficient generality was adapted for biomass material to deal with pore contraction during compaction. A contact node algorithm was applied to implement the effect of mold wall friction into the finite element program. Numerical analyses were made to investigate the pressure distribution in a die normal to the axis of compaction, and to investigate the density distribution in a biomass log after compaction. The results of the analyses gave generally good agreement with theoretical analysis of coal log compaction, although assumptions had to be made about the variation in the elastic modulus of the material and Poisson's ratio during the compaction cycle.
RAYSAW: a log sawing simulator for 3D laser-scanned hardwood logs
R. Edward Thomas
2013-01-01
Laser scanning of hardwood logs provides detailed high-resolution imagery of log surfaces. Characteristics such as sweep, taper, and crook, as well as most surface defects, are visible to the eye in the scan data. In addition, models have been developed that predict interior knot size and position based on external defect information. Computerized processing of...
Density of large snags and logs in northern Arizona mixed-conifer and ponderosa pine forests
Joseph L. Ganey; Benjamin J. Bird; L. Scott Baggett; Jeffrey S. Jenness
2015-01-01
Large snags and logs provide important biological legacies and resources for native wildlife, yet data on populations of large snags and logs and factors influencing those populations are sparse. We monitored populations of large snags and logs in mixed-conifer and ponderosa pine (Pinus ponderosa) forests in northern Arizona from 1997 through 2012. We modeled density...
Lumber values from computerized simulation of hardwood log sawing
D.B. Richards; W.K. Adkins; H. Hallock; E.H. Bulgrin
1980-01-01
Computer simulation sawing programs were used to study the sawing of mathematical models of hardwood logs by the live sawing method and three 4-sided sawing methods. One of the 4-sided methods simulated "grade sawing" by sawing each successive board from the log face with the highest potential grade. Logs from 10 through 28 inches in diameter were sawn. In addition,...
A stable computation of log-derivatives from noisy drawdown data
NASA Astrophysics Data System (ADS)
Ramos, Gustavo; Carrera, Jesus; Gómez, Susana; Minutti, Carlos; Camacho, Rodolfo
2017-09-01
Pumping test interpretation is an art that involves dealing with noise coming from multiple sources and with conceptual model uncertainty. Interpretation is greatly helped by diagnostic plots, which include drawdown data and their derivative with respect to log-time, called the log-derivative. Log-derivatives are especially useful to complement geological understanding in helping to identify the underlying model of fluid flow, because they are sensitive to subtle variations in the response to pumping of aquifers and oil reservoirs. The main problem with their use lies in the calculation of the log-derivatives themselves, which may display fluctuations when data are noisy. To overcome this difficulty, we propose a variational regularization approach based on the minimization of a functional consisting of two terms: one ensuring that the computed log-derivatives honor the measurements and one that penalizes fluctuations. The minimization leads to a diffusion-like differential equation in the log-derivatives, with boundary conditions that are appropriate for well hydraulics (i.e., radial flow, wellbore storage, fractal behavior, etc.). We have solved this equation by finite differences. We tested the methodology on two synthetic examples, showing that a robust solution is obtained. We also report the resulting log-derivative for a real case.
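A simplified Tikhonov-style variant of this idea is sketched below: a raw log-derivative from finite differences is smoothed by penalizing its curvature, which reduces to a small linear solve. The synthetic drawdown, noise level and regularization weight are illustrative, and the formulation omits the authors' specific functional and well-hydraulics boundary conditions.

    import numpy as np

    # Noisy synthetic drawdown with a late-time slope of 0.5 per ln(t); illustrative only.
    rng = np.random.default_rng(10)
    t = np.logspace(-2, 2, 120)                                   # time
    s = 0.5 * np.log(t) + 2.0 + 0.03 * rng.standard_normal(t.size)

    x = np.log(t)
    d_raw = np.gradient(s, x)                  # raw log-derivative ds/d(ln t)

    # Smooth by minimizing ||d - d_raw||^2 + lam * ||second difference of d||^2,
    # which reduces to one linear solve. lam is a tuning parameter.
    n = d_raw.size
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]      # discrete curvature operator
    lam = 50.0
    d_smooth = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, d_raw)

    print(f"raw log-derivative scatter     : {d_raw.std():.3f}")
    print(f"smoothed log-derivative scatter: {d_smooth.std():.3f}  (true value is 0.5)")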
CAG repeat expansion in Huntington disease determines age at onset in a fully dominant fashion
Lee, J.-M.; Ramos, E.M.; Lee, J.-H.; Gillis, T.; Mysore, J.S.; Hayden, M.R.; Warby, S.C.; Morrison, P.; Nance, M.; Ross, C.A.; Margolis, R.L.; Squitieri, F.; Orobello, S.; Di Donato, S.; Gomez-Tortosa, E.; Ayuso, C.; Suchowersky, O.; Trent, R.J.A.; McCusker, E.; Novelletto, A.; Frontali, M.; Jones, R.; Ashizawa, T.; Frank, S.; Saint-Hilaire, M.H.; Hersch, S.M.; Rosas, H.D.; Lucente, D.; Harrison, M.B.; Zanko, A.; Abramson, R.K.; Marder, K.; Sequeiros, J.; Paulsen, J.S.; Landwehrmeyer, G.B.; Myers, R.H.; MacDonald, M.E.; Durr, Alexandra; Rosenblatt, Adam; Frati, Luigi; Perlman, Susan; Conneally, Patrick M.; Klimek, Mary Lou; Diggin, Melissa; Hadzi, Tiffany; Duckett, Ayana; Ahmed, Anwar; Allen, Paul; Ames, David; Anderson, Christine; Anderson, Karla; Anderson, Karen; Andrews, Thomasin; Ashburner, John; Axelson, Eric; Aylward, Elizabeth; Barker, Roger A.; Barth, Katrin; Barton, Stacey; Baynes, Kathleen; Bea, Alexandra; Beall, Erik; Beg, Mirza Faisal; Beglinger, Leigh J.; Biglan, Kevin; Bjork, Kristine; Blanchard, Steve; Bockholt, Jeremy; Bommu, Sudharshan Reddy; Brossman, Bradley; Burrows, Maggie; Calhoun, Vince; Carlozzi, Noelle; Chesire, Amy; Chiu, Edmond; Chua, Phyllis; Connell, R.J.; Connor, Carmela; Corey-Bloom, Jody; Craufurd, David; Cross, Stephen; Cysique, Lucette; Santos, Rachelle Dar; Davis, Jennifer; Decolongon, Joji; DiPietro, Anna; Doucette, Nicholas; Downing, Nancy; Dudler, Ann; Dunn, Steve; Ecker, Daniel; Epping, Eric A.; Erickson, Diane; Erwin, Cheryl; Evans, Ken; Factor, Stewart A.; Farias, Sarah; Fatas, Marta; Fiedorowicz, Jess; Fullam, Ruth; Furtado, Sarah; Garde, Monica Bascunana; Gehl, Carissa; Geschwind, Michael D.; Goh, Anita; Gooblar, Jon; Goodman, Anna; Griffith, Jane; Groves, Mark; Guttman, Mark; Hamilton, Joanne; Harrington, Deborah; Harris, Greg; Heaton, Robert K.; Helmer, Karl; Henneberry, Machelle; Hershey, Tamara; Herwig, Kelly; Howard, Elizabeth; Hunter, Christine; Jankovic, Joseph; Johnson, Hans; Johnson, Arik; Jones, Kathy; Juhl, Andrew; Kim, Eun Young; Kimble, Mycah; King, Pamela; Klimek, Mary Lou; Klöppel, Stefan; Koenig, Katherine; Komiti, Angela; Kumar, Rajeev; Langbehn, Douglas; Leavitt, Blair; Leserman, Anne; Lim, Kelvin; Lipe, Hillary; Lowe, Mark; Magnotta, Vincent A.; Mallonee, William M.; Mans, Nicole; Marietta, Jacquie; Marshall, Frederick; Martin, Wayne; Mason, Sarah; Matheson, Kirsty; Matson, Wayne; Mazzoni, Pietro; McDowell, William; Miedzybrodzka, Zosia; Miller, Michael; Mills, James; Miracle, Dawn; Montross, Kelsey; Moore, David; Mori, Sasumu; Moser, David J.; Moskowitz, Carol; Newman, Emily; Nopoulos, Peg; Novak, Marianne; O'Rourke, Justin; Oakes, David; Ondo, William; Orth, Michael; Panegyres, Peter; Pease, Karen; Perlman, Susan; Perlmutter, Joel; Peterson, Asa; Phillips, Michael; Pierson, Ron; Potkin, Steve; Preston, Joy; Quaid, Kimberly; Radtke, Dawn; Rae, Daniela; Rao, Stephen; Raymond, Lynn; Reading, Sarah; Ready, Rebecca; Reece, Christine; Reilmann, Ralf; Reynolds, Norm; Richardson, Kylie; Rickards, Hugh; Ro, Eunyoe; Robinson, Robert; Rodnitzky, Robert; Rogers, Ben; Rosenblatt, Adam; Rosser, Elisabeth; Rosser, Anne; Price, Kathy; Price, Kathy; Ryan, Pat; Salmon, David; Samii, Ali; Schumacher, Jamy; Schumacher, Jessica; Sendon, Jose Luis Lópenz; Shear, Paula; Sheinberg, Alanna; Shpritz, Barnett; Siedlecki, Karen; Simpson, Sheila A.; Singer, Adam; Smith, Jim; Smith, Megan; Smith, Glenn; Snyder, Pete; Song, Allen; Sran, Satwinder; Stephan, Klaas; Stober, Janice; Sü?muth, Sigurd; Suter, Greg; Tabrizi, Sarah; Tempkin, Terry; Testa, Claudia; 
Thompson, Sean; Thomsen, Teri; Thumma, Kelli; Toga, Arthur; Trautmann, Sonja; Tremont, Geoff; Turner, Jessica; Uc, Ergun; Vaccarino, Anthony; van Duijn, Eric; Van Walsem, Marleen; Vik, Stacie; Vonsattel, Jean Paul; Vuletich, Elizabeth; Warner, Tom; Wasserman, Paula; Wassink, Thomas; Waterman, Elijah; Weaver, Kurt; Weir, David; Welsh, Claire; Werling-Witkoske, Chris; Wesson, Melissa; Westervelt, Holly; Weydt, Patrick; Wheelock, Vicki; Williams, Kent; Williams, Janet; Wodarski, Mary; Wojcieszek, Joanne; Wood, Jessica; Wood-Siverio, Cathy; Wu, Shuhua; Yastrubetskaya, Olga; de Yebenes, Justo Garcia; Zhao, Yong Qiang; Zimbelman, Janice; Zschiegner, Roland; Aaserud, Olaf; Abbruzzese, Giovanni; Andrews, Thomasin; Andrich, Jurgin; Antczak, Jakub; Arran, Natalie; Artiga, Maria J. Saiz; Bachoud-Lévi, Anne-Catherine; Banaszkiewicz, Krysztof; di Poggio, Monica Bandettini; Bandmann, Oliver; Barbera, Miguel A.; Barker, Roger A.; Barrero, Francisco; Barth, Katrin; Bas, Jordi; Beister, Antoine; Bentivoglio, Anna Rita; Bertini, Elisabetta; Biunno, Ida; Bjørgo, Kathrine; Bjørnevoll, Inga; Bohlen, Stefan; Bonelli, Raphael M.; Bos, Reineke; Bourne, Colin; Bradbury, Alyson; Brockie, Peter; Brown, Felicity; Bruno, Stefania; Bryl, Anna; Buck, Andrea; Burg, Sabrina; Burgunder, Jean-Marc; Burns, Peter; Burrows, Liz; Busquets, Nuria; Busse, Monica; Calopa, Matilde; Carruesco, Gemma T.; Casado, Ana Gonzalez; Catena, Judit López; Chu, Carol; Ciesielska, Anna; Clapton, Jackie; Clayton, Carole; Clenaghan, Catherine; Coelho, Miguel; Connemann, Julia; Craufurd, David; Crooks, Jenny; Cubillo, Patricia Trigo; Cubo, Esther; Curtis, Adrienne; De Michele, Giuseppe; De Nicola, A.; de Souza, Jenny; de Weert, A. Marit; de Yébenes, Justo Garcia; Dekker, M.; Descals, A. Martínez; Di Maio, Luigi; Di Pietro, Anna; Dipple, Heather; Dose, Matthias; Dumas, Eve M.; Dunnett, Stephen; Ecker, Daniel; Elifani, F.; Ellison-Rose, Lynda; Elorza, Marina D.; Eschenbach, Carolin; Evans, Carole; Fairtlough, Helen; Fannemel, Madelein; Fasano, Alfonso; Fenollar, Maria; Ferrandes, Giovanna; Ferreira, Jaoquim J.; Fillingham, Kay; Finisterra, Ana Maria; Fisher, K.; Fletcher, Amy; Foster, Jillian; Foustanos, Isabella; Frech, Fernando A.; Fullam, Robert; Fullham, Ruth; Gago, Miguel; García, RocioGarcía-Ramos; García, Socorro S.; Garrett, Carolina; Gellera, Cinzia; Gill, Paul; Ginestroni, Andrea; Golding, Charlotte; Goodman, Anna; Gørvell, Per; Grant, Janet; Griguoli, A.; Gross, Diana; Guedes, Leonor; BascuñanaGuerra, Monica; Guerra, Maria Rosalia; Guerrero, Rosa; Guia, Dolores B.; Guidubaldi, Arianna; Hallam, Caroline; Hamer, Stephanie; Hammer, Kathrin; Handley, Olivia J.; Harding, Alison; Hasholt, Lis; Hedge, Reikha; Heiberg, Arvid; Heinicke, Walburgis; Held, Christine; Hernanz, Laura Casas; Herranhof, Briggitte; Herrera, Carmen Durán; Hidding, Ute; Hiivola, Heli; Hill, Susan; Hjermind, Lena. 
E.; Hobson, Emma; Hoffmann, Rainer; Holl, Anna Hödl; Howard, Liz; Hunt, Sarah; Huson, Susan; Ialongo, Tamara; Idiago, Jesus Miguel R.; Illmann, Torsten; Jachinska, Katarzyna; Jacopini, Gioia; Jakobsen, Oda; Jamieson, Stuart; Jamrozik, Zygmunt; Janik, Piotr; Johns, Nicola; Jones, Lesley; Jones, Una; Jurgens, Caroline K.; Kaelin, Alain; Kalbarczyk, Anna; Kershaw, Ann; Khalil, Hanan; Kieni, Janina; Klimberg, Aneta; Koivisto, Susana P.; Koppers, Kerstin; Kosinski, Christoph Michael; Krawczyk, Malgorzata; Kremer, Berry; Krysa, Wioletta; Kwiecinski, Hubert; Lahiri, Nayana; Lambeck, Johann; Lange, Herwig; Laver, Fiona; Leenders, K.L.; Levey, Jamie; Leythaeuser, Gabriele; Lezius, Franziska; Llesoy, Joan Roig; Löhle, Matthias; López, Cristobal Diez-Aja; Lorenza, Fortuna; Loria, Giovanna; Magnet, Markus; Mandich, Paola; Marchese, Roberta; Marcinkowski, Jerzy; Mariotti, Caterina; Mariscal, Natividad; Markova, Ivana; Marquard, Ralf; Martikainen, Kirsti; Martínez, Isabel Haro; Martínez-Descals, Asuncion; Martino, T.; Mason, Sarah; McKenzie, Sue; Mechi, Claudia; Mendes, Tiago; Mestre, Tiago; Middleton, Julia; Milkereit, Eva; Miller, Joanne; Miller, Julie; Minster, Sara; Möller, Jens Carsten; Monza, Daniela; Morales, Blas; Moreau, Laura V.; Moreno, Jose L. López-Sendón; Münchau, Alexander; Murch, Ann; Nielsen, Jørgen E.; Niess, Anke; Nørremølle, Anne; Novak, Marianne; O'Donovan, Kristy; Orth, Michael; Otti, Daniela; Owen, Michael; Padieu, Helene; Paganini, Marco; Painold, Annamaria; Päivärinta, Markku; Partington-Jones, Lucy; Paterski, Laurent; Paterson, Nicole; Patino, Dawn; Patton, Michael; Peinemann, Alexander; Peppa, Nadia; Perea, Maria Fuensanta Noguera; Peterson, Maria; Piacentini, Silvia; Piano, Carla; Càrdenas, Regina Pons i; Prehn, Christian; Price, Kathleen; Probst, Daniela; Quarrell, Oliver; Quiroga, Purificacion Pin; Raab, Tina; Rakowicz, Maryla; Raman, Ashok; Raymond, Lucy; Reilmann, Ralf; Reinante, Gema; Reisinger, Karin; Retterstol, Lars; Ribaï, Pascale; Riballo, Antonio V.; Ribas, Guillermo G.; Richter, Sven; Rickards, Hugh; Rinaldi, Carlo; Rissling, Ida; Ritchie, Stuart; Rivera, Susana Vázquez; Robert, Misericordia Floriach; Roca, Elvira; Romano, Silvia; Romoli, Anna Maria; Roos, Raymond A.C.; Røren, Niini; Rose, Sarah; Rosser, Elisabeth; Rosser, Anne; Rossi, Fabiana; Rothery, Jean; Rudzinska, Monika; Ruíz, Pedro J. García; Ruíz, Belan Garzon; Russo, Cinzia Valeria; Ryglewicz, Danuta; Saft, Carston; Salvatore, Elena; Sánchez, Vicenta; Sando, Sigrid Botne; Šašinková, Pavla; Sass, Christian; Scheibl, Monika; Schiefer, Johannes; Schlangen, Christiane; Schmidt, Simone; Schöggl, Helmut; Schrenk, Caroline; Schüpbach, Michael; Schuierer, Michele; Sebastián, Ana Rojo; Selimbegovic-Turkovic, Amina; Sempolowicz, Justyna; Silva, Mark; Sitek, Emilia; Slawek, Jaroslaw; Snowden, Julie; Soleti, Francesco; Soliveri, Paola; Sollom, Andrea; Soltan, Witold; Sorbi, Sandro; Sorensen, Sven Asger; Spadaro, Maria; Städtler, Michael; Stamm, Christiane; Steiner, Tanja; Stokholm, Jette; Stokke, Bodil; Stopford, Cheryl; Storch, Alexander; Straßburger, Katrin; Stubbe, Lars; Sulek, Anna; Szczudlik, Andrzej; Tabrizi, Sarah; Taylor, Rachel; Terol, Santiago Duran-Sindreu; Thomas, Gareth; Thompson, Jennifer; Thomson, Aileen; Tidswell, Katherine; Torres, Maria M. 
Antequera; Toscano, Jean; Townhill, Jenny; Trautmann, Sonja; Tucci, Tecla; Tuuha, Katri; Uhrova, Tereza; Valadas, Anabela; van Hout, Monique S.E.; van Oostrom, J.C.H.; van Vugt, Jeroen P.P.; vanm, Walsem Marleen R.; Vandenberghe, Wim; Verellen-Dumoulin, Christine; Vergara, Mar Ruiz; Verstappen, C.C.P.; Verstraelen, Nichola; Viladrich, Celia Mareca; Villanueva, Clara; Wahlström, Jan; Warner, Thomas; Wehus, Raghild; Weindl, Adolf; Werner, Cornelius J.; Westmoreland, Leann; Weydt, Patrick; Wiedemann, Alexandra; Wild, Edward; Wild, Sue; Witjes-Ané, Marie-Noelle; Witkowski, Grzegorz; Wójcik, Magdalena; Wolz, Martin; Wolz, Annett; Wright, Jan; Yardumian, Pam; Yates, Shona; Yudina, Elizaveta; Zaremba, Jacek; Zaugg, Sabine W.; Zdzienicka, Elzbieta; Zielonka, Daniel; Zielonka, Euginiusz; Zinzi, Paola; Zittel, Simone; Zucker, Birgrit; Adams, John; Agarwal, Pinky; Antonijevic, Irina; Beck, Christopher; Chiu, Edmond; Churchyard, Andrew; Colcher, Amy; Corey-Bloom, Jody; Dorsey, Ray; Drazinic, Carolyn; Dubinsky, Richard; Duff, Kevin; Factor, Stewart; Foroud, Tatiana; Furtado, Sarah; Giuliano, Joe; Greenamyre, Timothy; Higgins, Don; Jankovic, Joseph; Jennings, Dana; Kang, Un Jung; Kostyk, Sandra; Kumar, Rajeev; Leavitt, Blair; LeDoux, Mark; Mallonee, William; Marshall, Frederick; Mohlo, Eric; Morgan, John; Oakes, David; Panegyres, Peter; Panisset, Michel; Perlman, Susan; Perlmutter, Joel; Quaid, Kimberly; Raymond, Lynn; Revilla, Fredy; Robertson, Suzanne; Robottom, Bradley; Sanchez-Ramos, Juan; Scott, Burton; Shannon, Kathleen; Shoulson, Ira; Singer, Carlos; Tabbal, Samer; Testa, Claudia; van, Kammen Dan; Vetter, Louise; Walker, Francis; Warner, John; Weiner, illiam; Wheelock, Vicki; Yastrubetskaya, Olga; Barton, Stacey; Broyles, Janice; Clouse, Ronda; Coleman, Allison; Davis, Robert; Decolongon, Joji; DeLaRosa, Jeanene; Deuel, Lisa; Dietrich, Susan; Dubinsky, Hilary; Eaton, Ken; Erickson, Diane; Fitzpatrick, Mary Jane; Frucht, Steven; Gartner, Maureen; Goldstein, Jody; Griffith, Jane; Hickey, Charlyne; Hunt, Victoria; Jaglin, Jeana; Klimek, Mary Lou; Lindsay, Pat; Louis, Elan; Loy, Clemet; Lucarelli, Nancy; Malarick, Keith; Martin, Amanda; McInnis, Robert; Moskowitz, Carol; Muratori, Lisa; Nucifora, Frederick; O'Neill, Christine; Palao, Alicia; Peavy, Guerry; Quesada, Monica; Schmidt, Amy; Segro, Vicki; Sperin, Elaine; Suter, Greg; Tanev, Kalo; Tempkin, Teresa; Thiede, Curtis; Wasserman, Paula; Welsh, Claire; Wesson, Melissa; Zauber, Elizabeth
2012-01-01
Objective: Age at onset of diagnostic motor manifestations in Huntington disease (HD) is strongly correlated with an expanded CAG trinucleotide repeat. The length of the normal CAG repeat allele has been reported also to influence age at onset, in interaction with the expanded allele. Due to profound implications for disease mechanism and modification, we tested whether the normal allele, interaction between the expanded and normal alleles, or presence of a second expanded allele affects age at onset of HD motor signs. Methods: We modeled natural log-transformed age at onset as a function of CAG repeat lengths of expanded and normal alleles and their interaction by linear regression. Results: An apparently significant effect of interaction on age at motor onset among 4,068 subjects was dependent on a single outlier data point. A rigorous statistical analysis with a well-behaved dataset that conformed to the fundamental assumptions of linear regression (e.g., constant variance and normally distributed error) revealed significance only for the expanded CAG repeat, with no effect of the normal CAG repeat. Ten subjects with 2 expanded alleles showed an age at motor onset consistent with the length of the larger expanded allele. Conclusions: Normal allele CAG length, interaction between expanded and normal alleles, and presence of a second expanded allele do not influence age at onset of motor manifestations, indicating that the rate of HD pathogenesis leading to motor diagnosis is determined by a completely dominant action of the longest expanded allele and as yet unidentified genetic or environmental factors. Neurology® 2012;78:690–695 PMID:22323755
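A minimal sketch of the regression form described above, using synthetic data in which only the expanded allele carries an effect; the statsmodels formula interface is assumed, and the simulated coefficients are illustrative rather than the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the model form: natural-log age at motor onset regressed on expanded
# CAG, normal CAG, and their interaction. Data are synthetic; only the expanded
# allele carries a real effect, mimicking the reported conclusion.
rng = np.random.default_rng(1)
n = 500
cag_exp = rng.integers(40, 56, n)            # expanded allele repeat length
cag_norm = rng.integers(15, 26, n)           # normal allele repeat length
log_onset = 6.0 - 0.055 * cag_exp + rng.normal(0.0, 0.15, n)
df = pd.DataFrame({"log_onset": log_onset,
                   "cag_exp": cag_exp,
                   "cag_norm": cag_norm})

# Full model with main effects and interaction, as tested in the study
fit = smf.ols("log_onset ~ cag_exp * cag_norm", data=df).fit()
print(fit.summary().tables[1])               # only cag_exp should be clearly significant
```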
Lactate Clearance and Normalization and Prolonged Organ Dysfunction in Pediatric Sepsis.
Scott, Halden F; Brou, Lina; Deakyne, Sara J; Fairclough, Diane L; Kempe, Allison; Bajaj, Lalit
2016-03-01
To evaluate whether lactate clearance and normalization during emergency care of pediatric sepsis is associated with lower rates of persistent organ dysfunction. This was a prospective cohort study of 77 children <18 years of age in the emergency department with infection and acute organ dysfunction per consensus definitions. In consented patients, lactate was measured 2 and/or 4 hours after an initial lactate; persistent organ dysfunction was assessed through laboratory and physician evaluation at 48 hours. A decrease of ≥ 10% from initial to final level was considered lactate clearance; a final level < 2 mmol/L was considered lactate normalization. Relative risk (RR) with 95% CIs, adjusted in a log-binomial model, was used to evaluate associations between lactate clearance/normalization and organ dysfunction. Lactate normalized in 62 (81%) patients and cleared in 70 (91%). The primary outcome, persistent 48-hour organ dysfunction, was present in 32 (42%). Lactate normalization was associated with decreased risk of persistent organ dysfunction (RR 0.46, 0.29-0.73; adjusted RR 0.47, 0.29-0.78); lactate clearance was not (RR 0.70, 0.35-1.41; adjusted RR 0.75, 0.38-1.50). The association between lactate normalization and decreased risk of persistent organ dysfunction was retained in the subgroups with initial lactate ≥ 2 mmol/L and hypotension. In children with sepsis and organ dysfunction, lactate normalization within 4 hours was associated with decreased persistent organ dysfunction. Serial lactate level measurement may provide a useful prognostic tool during the first hours of resuscitation in pediatric sepsis. Copyright © 2016 Elsevier Inc. All rights reserved.
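The relative-risk estimates above come from a log-binomial model; the sketch below shows the generic form of such a fit in statsmodels (binomial family with a log link, exponentiated coefficients read as relative risks) on simulated data, with 'normalized' and 'hypotension' as hypothetical columns.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Log-binomial sketch: binomial GLM with a log link, so exp(coefficient) is a
# relative risk rather than an odds ratio. Data are simulated; convergence of
# log-binomial fits can be delicate with real data.
rng = np.random.default_rng(2)
n = 77
normalized = rng.binomial(1, 0.8, n)
hypotension = rng.binomial(1, 0.3, n)
p = 0.55 - 0.25 * normalized + 0.08 * hypotension
organ_dysfunction = rng.binomial(1, p)

X = sm.add_constant(pd.DataFrame({"normalized": normalized,
                                  "hypotension": hypotension}))
model = sm.GLM(organ_dysfunction, X,
               family=sm.families.Binomial(link=sm.families.links.Log()))
res = model.fit()

rr = np.exp(res.params["normalized"])
ci = np.exp(res.conf_int().loc["normalized"])
print(f"adjusted RR for lactate normalization: {rr:.2f} "
      f"(95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```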
Radner, Wolfgang; Radner, Stephan; Raunig, Valerian; Diendorfer, Gabriela
2014-03-01
To evaluate reading performance of patients with monofocal intraocular lenses (IOLs) (Acrysof SN60WF) with or without reading glasses under bright and dim light conditions. Austrian Academy of Ophthalmology, Vienna, Austria. Evaluation of a diagnostic test or technology. In pseudophakic patients, the spherical refractive error was limited to between +0.50 diopter (D) and -0.75 D with astigmatism of 0.75 D (mean spherical equivalent: right eye, -0.08 ± 0.43 [SD]; left eye, -0.15 ± 0.35). Near addition was +2.75 D. Reading performance was assessed binocularly with or without reading glasses at an illumination of 100 candelas (cd)/m(2) and 4 cd/m(2) using the Radner Reading Charts. In the 25 patients evaluated, binocularly, the mean corrected distance visual acuity was -0.07 ± 0.06 logMAR and the mean uncorrected distance visual acuity was 0.01 ± 0.11 logMAR. The mean reading acuity with reading glasses was 0.02 ± 0.10 logRAD at 100 cd/m(2) and 0.12 ± 0.14 logRAD at 4 cd/m(2). Without reading glasses, it was 0.44 ± 0.13 logRAD and 0.56 ± 0.16 logRAD, respectively (P < .05). Without reading glasses and at 100 cd/m(2), 40% of patients read 0.4 logRAD at more than 80 words per minute (wpm), 68% exceeded this limit at 0.5 logRAD, and 92% exceeded it at 0.6 logRAD. The mean reading speed at 0.5 logRAD was 134.76 ± 48.22 wpm; with reading glasses it was 167.65 ± 32.77 wpm (P < .05). A considerable percentage of patients with monofocal IOLs read newspaper print size without glasses under good light conditions. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Spencer, Monique E; Jain, Alka; Matteini, Amy; Beamer, Brock A; Wang, Nae-Yuh; Leng, Sean X; Punjabi, Naresh M; Walston, Jeremy D; Fedarko, Neal S
2010-08-01
Neopterin, a GTP metabolite expressed by macrophages, is a marker of immune activation. We hypothesize that levels of this serum marker alter with donor age, reflecting increased chronic immune activation in normal aging. In addition to age, we assessed gender, race, body mass index (BMI), and percentage of body fat (%fat) as potential covariates. Serum was obtained from 426 healthy participants whose age ranged from 18 to 87 years. Anthropometric measures included %fat and BMI. Neopterin concentrations were measured by competitive ELISA. The paired associations between neopterin and age, BMI, or %fat were analyzed by Spearman's correlation or by linear regression of log-transformed neopterin, whereas overall associations were modeled by multiple regression of log-transformed neopterin as a function of age, gender, race, BMI, %fat, and interaction terms. Across all participants, neopterin exhibited a positive association with age, BMI, and %fat. Multiple regression modeling of neopterin in women and men as a function of age, BMI, and race revealed that each covariate contributed significantly to neopterin values and that optimal modeling required an interaction term between race and BMI. The covariate %fat was highly correlated with BMI and could be substituted for BMI to yield similar regression coefficients. The association of age and gender with neopterin levels and their modification by race, BMI, or %fat reflect the biology underlying chronic immune activation and perhaps gender differences in disease incidence, morbidity, and mortality.
Karimi, Asrin; Delpisheh, Ali; Sayehmiri, Kourosh
2016-01-01
Breast cancer is the most common cancer and the second most common cause of cancer-induced mortality in Iranian women. There has been rapid development in hazard models and survival analysis in the last decade. The aim of this study was to evaluate the prognostic factors of overall survival (OS) in breast cancer patients using accelerated failure time (AFT) models. This was a retrospective analytic cohort study. A total of 313 women with a pathologically proven diagnosis of breast cancer who had been treated during a 7-year period (January 2006 to March 2014) in Sanandaj City, Kurdistan Province of Iran were recruited. Performance among the AFT models was assessed using goodness-of-fit methods. Discrimination among the exponential, Weibull, generalized gamma, log-logistic, and log-normal distributions was done using the Akaike information criterion and maximum likelihood. The 5-year OS was 75% (95% CI = 74.57-75.43). The main results in terms of survival were found for the different categories of the clinical stage covariate, tumor metastasis, and relapse of cancer. Survival times in breast cancer patients without tumor metastasis and without relapse were 4-fold and 2-fold longer, respectively, than in patients with metastasis or relapse. One of the most important undermining prognostic factors in breast cancer is metastasis; hence, knowledge of the mechanisms of metastasis is necessary to prevent its occurrence, to treat metastatic breast cancer, and ultimately to extend the lifetime of patients.
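A hedged sketch of the model-comparison step, assuming the lifelines package and hypothetical survival data; lifelines covers only three of the five AFT families named above (Weibull, log-normal, log-logistic), so this is a partial illustration of the AIC comparison rather than the authors' analysis.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter, LogNormalAFTFitter, LogLogisticAFTFitter

# Compare accelerated failure time (AFT) families by AIC on simulated,
# right-censored survival data; columns and covariates are hypothetical.
rng = np.random.default_rng(3)
n = 313
stage = rng.integers(1, 4, n)
metastasis = rng.binomial(1, 0.3, n)
true_time = rng.lognormal(mean=4.0 - 0.4 * metastasis - 0.2 * stage, sigma=0.6, size=n)
censor_time = rng.uniform(20, 120, n)
df = pd.DataFrame({
    "time": np.minimum(true_time, censor_time),       # observed follow-up (months)
    "event": (true_time <= censor_time).astype(int),  # 1 = death observed
    "stage": stage,
    "metastasis": metastasis,
})

fitters = {"Weibull": WeibullAFTFitter(),
           "Log-normal": LogNormalAFTFitter(),
           "Log-logistic": LogLogisticAFTFitter()}
for name, f in fitters.items():
    f.fit(df, duration_col="time", event_col="event")
    print(f"{name:12s} AIC = {f.AIC_:.1f}")            # lower AIC = preferred family
```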
The Relationship Between Fusion, Suppression, and Diplopia in Normal and Amblyopic Vision.
Spiegel, Daniel P; Baldwin, Alex S; Hess, Robert F
2016-10-01
Single vision occurs through a combination of fusion and suppression. When neither mechanism takes place, we experience diplopia. Under normal viewing conditions, the perceptual state depends on the spatial scale and interocular disparity. The purpose of this study was to examine the three perceptual states in human participants with normal and amblyopic vision. Participants viewed two dichoptically separated horizontal blurred edges with an opposite tilt (2.35°) and indicated their binocular percept: "one flat edge," "one tilted edge," or "two edges." The edges varied with scale (fine 4 min arc and coarse 32 min arc), disparity, and interocular contrast. We investigated how the binocular interactions vary in amblyopic (visual acuity [VA] > 0.2 logMAR, n = 4) and normal vision (VA ≤ 0 logMAR, n = 4) under interocular variations in stimulus contrast and luminance. In amblyopia, despite the established sensory dominance of the fellow eye, fusion prevails at the coarse scale and small disparities (75%). We also show that increasing the relative contrast to the amblyopic eye enhances the probability of fusion at the fine scale (from 18% to 38%), and leads to a reversal of the sensory dominance at coarse scale. In normal vision we found that interocular luminance imbalances disturbed binocular combination only at the fine scale in a way similar to that seen in amblyopia. Our results build upon the growing evidence that the amblyopic visual system is binocular and further show that the suppressive mechanisms rendering the amblyopic system functionally monocular are scale dependent.
Ryan, James; Curran, Catherine E.; Hennessy, Emer; Newell, John; Morris, John C.; Kerin, Michael J.; Dwyer, Roisin M.
2011-01-01
Introduction The presence, relevance and regulation of the Sodium Iodide Symporter (NIS) in human mammary tissue remains poorly understood. This study aimed to quantify relative expression of NIS and putative regulators in human breast tissue, with relationships observed further investigated in vitro. Methods Human breast tissue specimens (malignant n = 75, normal n = 15, fibroadenoma n = 10) were analysed by RQ-PCR targeting NIS, receptors for retinoic acid (RARα, RARβ), oestrogen (ERα), thyroid hormones (THRα, THRβ), and also phosphoinositide-3-kinase (PI3K). Breast cancer cells were treated with Retinoic acid (ATRA), Estradiol and Thyroxine individually and in combination followed by analysis of changes in NIS expression. Results The lowest levels of NIS were detected in normal tissue (Mean(SEM) 0.70(0.12) Log10 Relative Quantity (RQ)) with significantly higher levels observed in fibroadenoma (1.69(0.21) Log10RQ, p<0.005) and malignant breast tissue (1.18(0.07) Log10RQ, p<0.05). Significant positive correlations were observed between human NIS and ERα (r = 0.22, p<0.05) and RARα (r = 0.29, p<0.005), with the strongest relationship observed between NIS and RARβ (r = 0.38, p<0.0001). An inverse relationship between NIS and PI3K expression was also observed (r = −0.21, p<0.05). In vitro, ATRA, Estradiol and Thyroxine individually stimulated significant increases in NIS expression (range 6–16 fold), while ATRA and Thyroxine combined caused the greatest increase (range 16–26 fold). Conclusion Although NIS expression is significantly higher in malignant compared to normal breast tissue, the highest level was detected in fibroadenoma. The data presented supports a role for retinoic acid and estradiol in mammary NIS regulation in vivo, and also highlights potential thyroidal regulation of mammary NIS mediated by thyroid hormones. PMID:21283523
Fractal Dimension Analysis of Transient Visual Evoked Potentials: Optimisation and Applications.
Boon, Mei Ying; Henry, Bruce Ian; Chu, Byoung Sun; Basahi, Nour; Suttle, Catherine May; Luu, Chi; Leung, Harry; Hing, Stephen
2016-01-01
The visual evoked potential (VEP) provides a time series signal response to an external visual stimulus at the location of the visual cortex. The major VEP signal components, peak latency and amplitude, may be affected by disease processes. Additionally, the VEP contains fine detailed and non-periodic structure, of presently unclear relevance to normal function, which may be quantified using the fractal dimension. The purpose of this study is to provide a systematic investigation of the key parameters in the measurement of the fractal dimension of VEPs, to develop an optimal analysis protocol for application. VEP time series were mathematically transformed using delay time, τ, and embedding dimension, m, parameters. The fractal dimension of the transformed data was obtained from a scaling analysis based on straight line fits to the numbers of pairs of points with separation less than r versus log(r) in the transformed space. Optimal τ, m, and scaling analysis were obtained by comparing the consistency of results using different sampling frequencies. The optimised method was then piloted on samples of normal and abnormal VEPs. Consistent fractal dimension estimates were obtained using τ = 4 ms, designating the fractal dimension = D2 of the time series based on embedding dimension m = 7 (for 3606 Hz and 5000 Hz), m = 6 (for 1803 Hz) and m = 5 (for 1000Hz), and estimating D2 for each embedding dimension as the steepest slope of the linear scaling region in the plot of log(C(r)) vs log(r) provided the scaling region occurred within the middle third of the plot. Piloting revealed that fractal dimensions were higher from the sampled abnormal than normal achromatic VEPs in adults (p = 0.02). Variances of fractal dimension were higher from the abnormal than normal chromatic VEPs in children (p = 0.01). A useful analysis protocol to assess the fractal dimension of transformed VEPs has been developed.
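For readers unfamiliar with the D2 estimate used above, the following generic sketch shows delay embedding, the correlation sum C(r), and a straight-line fit of log C(r) versus log r over a crude "middle third" range of separations; the synthetic signal, lag, and embedding dimension are placeholders, not the study's VEP data or protocol.

```python
import numpy as np

# Generic correlation-dimension (D2) estimate: delay embedding with lag tau and
# dimension m, correlation sum C(r), and the slope of log C(r) vs log r.
def embed(x, m, tau):
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

def correlation_dimension(x, m=7, tau=4, n_r=20):
    X = embed(x, m, tau)
    # brute-force pairwise distances (fine for a short series)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    d = d[np.triu_indices_from(d, k=1)]
    r = np.logspace(np.log10(np.percentile(d, 1)), np.log10(d.max()), n_r)
    C = np.array([(d < ri).mean() for ri in r])
    keep = slice(n_r // 3, 2 * n_r // 3)          # crude "middle third" scaling region
    slope, _ = np.polyfit(np.log(r[keep]), np.log(C[keep]), 1)
    return slope

rng = np.random.default_rng(4)
t = np.arange(600)
signal = np.sin(0.07 * t) + 0.3 * rng.standard_normal(600)   # stand-in for a VEP epoch
print("estimated D2:", round(correlation_dimension(signal, m=7, tau=4), 2))
```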
Contrast Sensitivity Perimetry and Clinical Measures of Glaucomatous Damage
Swanson, William H.; Malinovsky, Victor E.; Dul, Mitchell W.; Malik, Rizwan; Torbit, Julie K.; Sutton, Bradley M.; Horner, Douglas G.
2014-01-01
ABSTRACT Purpose To compare conventional structural and functional measures of glaucomatous damage with a new functional measure—contrast sensitivity perimetry (CSP-2). Methods One eye each was tested for 51 patients with glaucoma and 62 age-similar control subjects using CSP-2, size III 24-2 conventional automated perimetry (CAP), 24-2 frequency-doubling perimetry (FDP), and retinal nerve fiber layer (RNFL) thickness. For superior temporal (ST) and inferior temporal (IT) optic disc sectors, defect depth was computed as amount below mean normal, in log units. Bland-Altman analysis was used to assess agreement on defect depth, using limits of agreement and three indices: intercept, slope, and mean difference. A criterion of p < 0.0014 for significance used Bonferroni correction. Results Contrast sensitivity perimetry-2 and FDP were in agreement for both sectors. Normal variability was lower for CSP-2 than for CAP and FDP (F > 1.69, p < 0.02), and Bland-Altman limits of agreement for patient data were consistent with variability of control subjects (mean difference, −0.01 log units; SD, 0.11 log units). Intercepts for IT indicated that CSP-2 and FDP were below mean normal when CAP was at mean normal (t > 4, p < 0.0005). Slopes indicated that, as sector damage became more severe, CAP defects for IT and ST deepened more rapidly than CSP-2 defects (t > 4.3, p < 0.0005) and RNFL defects for ST deepened more slowly than for CSP, FDP, and CAP. Mean differences indicated that FDP defects for ST and IT were on average deeper than RNFL defects, as were CSP-2 defects for ST (t > 4.9, p < 0.0001). Conclusions Contrast sensitivity perimetry-2 and FDP defects were deeper than CAP defects in optic disc sectors with mild damage and revealed greater residual function in sectors with severe damage. The discordance between different measures of glaucomatous damage can be accounted for by variability in people free of disease. PMID:25259758
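The Bland-Altman quantities reported above (mean difference, limits of agreement, and the intercept and slope of the difference against severity) reduce to a few lines of arithmetic; the sketch below uses simulated defect depths in log units, not the study data.

```python
import numpy as np

# Bland-Altman sketch: mean difference, 95% limits of agreement, and a check of
# whether the difference between two methods varies with severity.
rng = np.random.default_rng(5)
n = 51
csp2 = rng.normal(0.6, 0.4, n)                 # defect depth from method 1, log units
cap = csp2 + rng.normal(-0.05, 0.11, n)        # method 2: small offset plus noise

diff = csp2 - cap
mean_pair = 0.5 * (csp2 + cap)

mean_diff = diff.mean()
loa = (mean_diff - 1.96 * diff.std(ddof=1),
       mean_diff + 1.96 * diff.std(ddof=1))
slope, intercept = np.polyfit(mean_pair, diff, 1)

print(f"mean difference: {mean_diff:+.3f} log units")
print(f"limits of agreement: {loa[0]:+.3f} to {loa[1]:+.3f}")
print(f"difference vs. severity: intercept {intercept:+.3f}, slope {slope:+.3f}")
```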
Devos, Stefanie; Cox, Bianca; van Lier, Tom; Nawrot, Tim S; Putman, Koen
2016-09-01
We used log-linear and log-log exposure-response (E-R) functions to model the association between PM2.5 exposure and non-elective hospitalizations for pneumonia, and estimated the attributable hospital costs by using the effect estimates obtained from both functions. We used hospital discharge data on 3519 non-elective pneumonia admissions from UZ Brussels between 2007 and 2012 and we combined a case-crossover design with distributed lag models. The annual averted pneumonia hospitalization costs for a reduction in PM2.5 exposure from the mean (21.4 μg/m³) to the WHO guideline for annual mean PM2.5 (10 μg/m³) were estimated and extrapolated for Belgium. Non-elective hospitalizations for pneumonia were significantly associated with PM2.5 exposure in both models. Using a log-linear E-R function, the estimated risk reduction for pneumonia hospitalization associated with a decrease in mean PM2.5 exposure to 10 μg/m³ was 4.9%. The corresponding estimate for the log-log model was 10.7%. These estimates translate to an annual pneumonia hospital cost saving in Belgium of €15.5 million and almost €34 million for the log-linear and log-log E-R function, respectively. Although further research is required to assess the shape of the association between PM2.5 exposure and pneumonia hospitalizations, we demonstrated that estimates for health effects and associated costs heavily depend on the assumed E-R function. These results are important for policy making, as supra-linear E-R associations imply that significant health benefits may still be obtained from additional pollution control measures in areas where PM levels have already been reduced. Copyright © 2016 Elsevier Ltd. All rights reserved.
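The two E-R forms translate a concentration change into a relative risk in different ways; the sketch below reproduces the reported 4.9% and 10.7% reductions for a 21.4 to 10 μg/m³ decrease using back-calculated placeholder coefficients, purely to show the arithmetic of each functional form.

```python
import numpy as np

# Compare log-linear and log-log exposure-response arithmetic. The coefficients
# beta_lin and beta_log are placeholders back-calculated to reproduce the
# reported 4.9% and 10.7% reductions; they are not taken from the paper.
c0, c1 = 21.4, 10.0          # mean exposure and WHO guideline, ug/m3

def rr_log_linear(beta, c_from, c_to):
    # log(risk) linear in concentration: RR = exp(beta * (c_to - c_from))
    return np.exp(beta * (c_to - c_from))

def rr_log_log(beta, c_from, c_to):
    # log(risk) linear in log(concentration): RR = (c_to / c_from) ** beta
    return (c_to / c_from) ** beta

beta_lin = -np.log(1 - 0.049) / (c0 - c1)       # placeholder coefficient
beta_log = np.log(1 - 0.107) / np.log(c1 / c0)  # placeholder coefficient

print(f"log-linear: {100 * (1 - rr_log_linear(beta_lin, c0, c1)):.1f}% fewer admissions")
print(f"log-log:    {100 * (1 - rr_log_log(beta_log, c0, c1)):.1f}% fewer admissions")
```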
Security middleware infrastructure for DICOM images in health information systems.
Kallepalli, Vijay N V; Ehikioya, Sylvanus A; Camorlinga, Sergio; Rueda, Jose A
2003-12-01
In health care, it is mandatory to maintain the privacy and confidentiality of medical data. To achieve this, a fine-grained access control and an access log for accessing medical images are two important aspects that need to be considered in health care systems. Fine-grained access control provides access to medical data only to authorized persons based on priority, location, and content. A log captures each attempt to access medical data. This article describes an overall middleware infrastructure required for secure access to Digital Imaging and Communication in Medicine (DICOM) images, with an emphasis on access control and log maintenance. We introduce a hybrid access control model that combines the properties of two existing models. A trust relationship between hospitals is used to make the hybrid access control model scalable across hospitals. We also discuss events that have to be logged and where the log has to be maintained. A prototype of security middleware infrastructure is implemented.
Warfarin: history, tautomerism and activity
NASA Astrophysics Data System (ADS)
Porter, William R.
2010-06-01
The anticoagulant drug warfarin, normally administered as the racemate, can exist in solution in potentially as many as 40 topologically distinct tautomeric forms. Only 11 of these forms for each enantiomer can be distinguished by selected computational software commonly used to estimate octanol-water partition coefficients and/or ionization constants. The history of studies on warfarin tautomerism is reviewed, along with the implications of tautomerism to its biological properties (activity, protein binding and metabolism) and chemical properties (log P, log D, pKa). Experimental approaches to assessing warfarin tautomerism and computational results for different tautomeric forms are presented.
Paillet, Frederick L.; Crowder, R.E.
1996-01-01
Quantitative analysis of geophysical logs in ground-water studies often involves at least as broad a range of applications and variation in lithology as is typically encountered in petroleum exploration, making such logs difficult to calibrate and complicating inversion problem formulation. At the same time, data inversion and analysis depend on inversion model formulation and refinement, so that log interpretation cannot be deferred to a geophysical log specialist unless active involvement with interpretation can be maintained by such an expert over the lifetime of the project. We propose a generalized log-interpretation procedure designed to guide hydrogeologists in the interpretation of geophysical logs, and in the integration of log data into ground-water models that may be systematically refined and improved in an iterative way. The procedure is designed to maximize the effective use of three primary contributions from geophysical logs: (1) The continuous depth scale of the measurements along the well bore; (2) The in situ measurement of lithologic properties and the correlation with hydraulic properties of the formations over a finite sample volume; and (3) Multiple independent measurements that can potentially be inverted for multiple physical or hydraulic properties of interest. The approach is formulated in the context of geophysical inversion theory, and is designed to be interfaced with surface geophysical soundings and conventional hydraulic testing. The step-by-step procedures given in our generalized interpretation and inversion technique are based on both qualitative analysis designed to assist formulation of the interpretation model, and quantitative analysis used to assign numerical values to model parameters. The approach bases a decision as to whether quantitative inversion is statistically warranted by formulating an over-determined inversion. If no such inversion is consistent with the inversion model, quantitative inversion is judged not possible with the given data set. Additional statistical criteria such as the statistical significance of regressions are used to guide the subsequent calibration of geophysical data in terms of hydraulic variables in those situations where quantitative data inversion is considered appropriate.
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Bastani, Mehrdad; Donohue, Shane; Persson, Lena; Aspmo Pfaffhuber, Andreas; Reiser, Fabienne; Ren, Zhengyong
2013-05-01
In many coastal areas of North America and Scandinavia, post-glacial clay sediments have emerged above sea level due to iso-static uplift. These clays are often destabilised by fresh water leaching and transformed to so-called quick clays as at the investigated area at Smørgrav, Norway. Slight mechanical disturbances of these materials may trigger landslides. Since the leaching increases the electrical resistivity of quick clay as compared to normal marine clay, the application of electromagnetic (EM) methods is of particular interest in the study of quick clay structures. For the first time, single and joint inversions of direct-current resistivity (DCR), radiomagnetotelluric (RMT) and controlled-source audiomagnetotelluric (CSAMT) data were applied to delineate a zone of quick clay. The resulting 2-D models of electrical resistivity correlate excellently with previously published data from a ground conductivity metre and resistivity logs from two resistivity cone penetration tests (RCPT) into marine clay and quick clay. The RCPT log into the central part of the quick clay identifies the electrical resistivity of the quick clay structure to lie between 10 and 80 Ω m. In combination with the 2-D inversion models, it becomes possible to delineate the vertical and horizontal extent of the quick clay zone. As compared to the inversions of single data sets, the joint inversion model exhibits sharper resistivity contrasts and its resistivity values are more characteristic of the expected geology. In our preferred joint inversion model, there is a clear demarcation between dry soil, marine clay, quick clay and bedrock, which consists of alum shale and limestone.
NASA Astrophysics Data System (ADS)
Fletcher, Stephen; Kirkpatrick, Iain; Dring, Roderick; Puttock, Robert; Thring, Rob; Howroyd, Simon
2017-03-01
Supercapacitors are an emerging technology with applications in pulse power, motive power, and energy storage. However, their carbon electrodes show a variety of non-ideal behaviours that have so far eluded explanation. These include Voltage Decay after charging, Voltage Rebound after discharging, and Dispersed Kinetics at long times. In the present work, we establish that a vertical ladder network of RC components can reproduce all these puzzling phenomena. Both software and hardware realizations of the network are described. In general, porous carbon electrodes contain random distributions of resistance R and capacitance C, with a wider spread of log R values than log C values. To understand what this implies, a simplified model is developed in which log R is treated as a Gaussian random variable while log C is treated as a constant. From this model, a new family of equivalent circuits is developed in which the continuous distribution of log R values is replaced by a discrete set of log R values drawn from a geometric series. We call these Pascal Equivalent Circuits. Their behaviour is shown to resemble closely that of real supercapacitors. The results confirm that distributions of RC time constants dominate the behaviour of real supercapacitors.
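A minimal numerical sketch of the ladder picture described above: parallel R-C branches with R values drawn from a geometric series (a discrete stand-in for a spread of log R) and constant C, charged for a finite time and then open-circuited so the terminal voltage decays as charge redistributes; all component values and times are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Vertical ladder sketch: parallel R-C branches share one pair of terminals.
# R follows a geometric series (discrete stand-in for a Gaussian spread of log R),
# C is constant. After a finite charge at 2.7 V, the cell is open-circuited and the
# terminal voltage decays as charge flows into the slow branches.
n_branch = 8
R = 10.0 * (4.0 ** np.arange(n_branch))       # ohms, geometric series of R values
C = np.full(n_branch, 1.0)                    # farads, constant C per branch

def charge_phase(t, v, v_src):
    # source held at v_src drives every branch through its own resistance
    return (v_src - v) / (R * C)

def open_circuit(t, v):
    # open terminals: branch currents sum to zero, fixing the common node voltage
    v_node = np.sum(v / R) / np.sum(1.0 / R)
    return (v_node - v) / (R * C)

v0 = np.zeros(n_branch)
chg = solve_ivp(charge_phase, (0, 60), v0, args=(2.7,))
v_after_charge = chg.y[:, -1]                 # slow branches are still undercharged

dec = solve_ivp(open_circuit, (0, 3600), v_after_charge, dense_output=True)
for t in (1, 10, 100, 1000, 3600):
    v = dec.sol(t)
    v_node = np.sum(v / R) / np.sum(1.0 / R)
    print(f"t = {t:5d} s  terminal voltage = {v_node:.3f} V")   # voltage decay at open circuit
```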
NASA Astrophysics Data System (ADS)
Deng, Chengxiang; Pan, Heping; Luo, Miao
2017-12-01
The Chinese Continental Scientific Drilling (CCSD) main hole is located in the Sulu ultrahigh-pressure metamorphic (UHPM) belt, providing significant opportunities for studying metamorphic strata structure, kinetic processes and tectonic evolution. Lithology identification is the primary and crucial stage for the above geoscientific research. To lighten the burden on the log analyst and improve the efficiency of lithology interpretation, many algorithms have been developed to automate lithology prediction. Traditional statistical techniques, such as discriminant analysis and the K-nearest neighbors classifier, are limited in their ability to extract nonlinear features of metamorphic rocks from complex geophysical log data; artificial intelligence algorithms can handle nonlinear problems, but most struggle to tune parameters to a global rather than a local optimum when establishing a model, and also face challenges in balancing training accuracy against generalization ability. Optimization methods have been applied extensively in the inversion of reservoir parameters of sedimentary formations using well logs. However, it is difficult to obtain accurate solutions from the logging response equations of an optimization method when it is applied in metamorphic formations, because the nonstationary log signals overlap strongly. As the oxide contents of the different kinds of metamorphic rocks overlap relatively little, this study explores an approach, set in a metamorphic formation model, that uses the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization algorithm to identify lithology from oxide data. We first incorporate 11 geophysical logs and lab-collected geochemical data from 47 core samples to construct an oxide profile of the CCSD main hole using a backwards stepwise multiple regression method, which eliminates irrelevant input logs step by step for higher statistical significance and accuracy. Then we establish oxide response equations in accordance with the metamorphic formation model and employ the BFGS algorithm to minimize the objective function. Finally, we identify lithology according to the composition that accounts for the largest proportion. The results show that the lithology identified by this method is consistent with the core description. Moreover, the method demonstrates the benefits of using oxide content as an adhesive connecting logging data with lithology, makes the metamorphic formation model more understandable and accurate, and avoids selecting a complex formation model and building nonlinear logging response equations.
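The following sketch illustrates the general idea rather than the paper's response equations: given hypothetical end-member oxide compositions, it recovers component fractions from a measured oxide vector by BFGS minimization of a least-squares objective (a softmax reparameterization keeps the fractions positive and summing to one) and labels the lithology by the dominant component.

```python
import numpy as np
from scipy.optimize import minimize

# Recover component fractions from an oxide vector by BFGS minimization of a
# least-squares objective, then label the lithology by the dominant component.
# End-member oxide compositions (wt%) are hypothetical.
rock_types = ["eclogite", "gneiss", "amphibolite"]
oxides = ["SiO2", "Al2O3", "FeO", "MgO", "CaO"]
E = np.array([            # rows: rock types, columns: oxide contents (wt%)
    [48.0, 15.0, 12.0, 9.0, 10.0],
    [70.0, 14.0,  3.0, 1.5,  2.0],
    [50.0, 14.5, 11.0, 7.5,  9.5],
])
measured = np.array([62.0, 14.2, 6.0, 3.5, 5.0])   # oxide profile at one depth

def softmax(z):
    # keeps fractions positive and summing to one, so unconstrained BFGS applies
    e = np.exp(z - z.max())
    return e / e.sum()

def objective(z):
    frac = softmax(z)
    resid = E.T @ frac - measured
    return np.dot(resid, resid)

res = minimize(objective, x0=np.zeros(len(rock_types)), method="BFGS")
fractions = softmax(res.x)
for name, f in zip(rock_types, fractions):
    print(f"{name:12s} {f:.2f}")
print("identified lithology:", rock_types[int(np.argmax(fractions))])
```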
NASA Astrophysics Data System (ADS)
Fehr, A.; Pechnig, R.; Inwood, J.; Lofi, J.; Bosch, F. P.; Clauser, C.
2012-04-01
The IODP drilling expedition 313 New Jersey Shallow Shelf was proposed for obtaining deep sub-seafloor samples and downhole logging measurements in the crucial inner shelf region. The inner to central shelf off-shore New Jersey is an ideal location for studying the history of sea-level changes and its relationship to sequence stratigraphy and onshore/offshore groundwater flows. The region features rapid depositional rates, tectonic stability, and well-preserved, cosmopolitan age control fossils suitable for characterizing the sediments of this margin throughout the time interval of interest. Past sea-level rise and fall is documented in sedimentary layers deposited during Earth's history. In addition, the inner shelf is characterised by relatively fresh pore water intervals alternating vertically with saltier intervals (Mountain et al., 2010). Therefore, three boreholes were drilled in the so-called New Jersey/Mid-Atlantic transect during IODP Expedition 313 New Jersey Shallow Shelf. Numerous questions have arisen concerning the age and origin of the brackish waters recovered offshore at depth. Here we present an analysis of thermophysical properties to be used as input parameters in constructing numerical models for future groundwater flow simulations. Our study is based mainly on Nuclear Magnetic Resonance (NMR) measurements for inferring porosity and permeability, and thermal conductivity. We performed NMR measurements on samples from boreholes M0027A, M0028A and M0029A and thermal conductivity measurements on the whole round cores prior to the Onshore Party. These results are compared with data from alternative laboratory measurements and with petrophysical properties inferred from downhole logging data. We deduced petrophysical properties from downhole logging data and compared them with results obtained with laboratory measurements. In water saturated samples, the number of spins in the fluid is proportional to sample porosity. NMR porosities were calculated from the zero amplitudes of the transverse relaxation measurements by normalizing the CPMG (Carr, Purcell, Meiboom, Gill) amplitudes of the measured samples to the amplitudes measured on a pure water cylinder which is equivalent to a porosity of 100 %. The NMR porosities fit well with porosities determined by Multi Sensor Core Logger (MSCL) and porosity measured on discrete samples using a helium gas pycnometer. Using log interpretation procedures, the volume fraction of different rock types and their porosity can be derived. From the volume fraction of each rock type and its porosity, continuous profiles of thermal conductivity can be derived by using a suitable mixing law, e.g. such as the geometric mean. In combination with thermal conductivity measurements on cores, these continuous thermal conductivity profiles can be calibrated, validated and finally used to provide reliable input parameter for numerical models. The porosity values from NMR seem to correlate well with porosities deduced from other measurements. In order to compare NMR permeabilities, we need permeability determined by an alternative method. The thermal conductivity derived from logs correlates with the measurements performed on cores. In a next step, a numerical model will be set up and the measured thermophysical properties will be implemented in order to study transport processes in passive continental margins. This numerical model will be based on existing geological models deduced from seismic data and drillings.
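Two of the calculations described above are simple enough to sketch directly: NMR porosity as the CPMG zero-time amplitude normalized to a 100%-porosity water standard, and a continuous thermal-conductivity profile from the geometric-mean mixing law; all amplitudes, conductivities, and volume fractions below are illustrative assumptions.

```python
import numpy as np

# (1) NMR porosity from the CPMG zero-time amplitude, normalized to the amplitude
#     of a pure-water standard that represents 100% porosity.
amp_sample = np.array([18.5, 22.1, 30.4])     # CPMG zero-time amplitudes, samples
amp_water = 55.0                              # amplitude of the pure-water standard
porosity_nmr = amp_sample / amp_water         # pore-volume fraction
print("NMR porosities:", np.round(porosity_nmr, 3))

# (2) geometric-mean mixing law: k_bulk = k_water**phi * prod_i k_i**(V_i*(1-phi))
k_water = 0.6                                 # W/(m K)
k_matrix = {"quartz": 7.7, "clay": 2.0}       # hypothetical matrix conductivities
vol_frac = {"quartz": 0.7, "clay": 0.3}       # matrix volume fractions from log analysis

def geometric_mean_conductivity(phi):
    k = k_water ** phi
    for mineral, k_m in k_matrix.items():
        k *= k_m ** (vol_frac[mineral] * (1.0 - phi))
    return k

for phi in porosity_nmr:
    print(f"phi = {phi:.3f} -> k = {geometric_mean_conductivity(phi):.2f} W/(m K)")
```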
Performance of synchronous optical receivers using atmospheric compensation techniques.
Belmonte, Aniceto; Khan, Joseph
2008-09-01
We model the impact of atmospheric turbulence-induced phase and amplitude fluctuations on free-space optical links using synchronous detection. We derive exact expressions for the probability density function of the signal-to-noise ratio in the presence of turbulence. We consider the effects of log-normal amplitude fluctuations and Gaussian phase fluctuations, in addition to local oscillator shot noise, for both passive receivers and those employing active modal compensation of wave-front phase distortion. We compute error probabilities for M-ary phase-shift keying, and evaluate the impact of various parameters, including the ratio of receiver aperture diameter to the wave-front coherence diameter, and the number of modes compensated.
NASA Technical Reports Server (NTRS)
Ido, Haisam; Burns, Rich
2015-01-01
The NASA Goddard Space Science Mission Operations project (SSMO) is performing a technical cost-benefit analysis for centralizing and consolidating operations of a diverse set of missions into a unified and integrated technical infrastructure. The presentation will focus on the notion of normalizing spacecraft operations processes, workflows, and tools. It will also show the processes of creating a standardized open architecture, creating common security models and implementations, interfaces, services, automations, notifications, alerts, logging, publish, subscribe and middleware capabilities. The presentation will also discuss how to leverage traditional capabilities, along with virtualization, cloud computing services, control groups and containers, and possibly Big Data concepts.
Ding, Feng; Yang, Xianhai; Chen, Guosong; Liu, Jining; Shi, Lili; Chen, Jingwen
2017-10-01
The partition coefficients between bovine serum albumin (BSA) and water (K_BSA/w) for ionogenic organic chemicals (IOCs) differ greatly from those of neutral organic chemicals (NOCs). For NOCs, several excellent models have been developed to predict log K_BSA/w. However, the conventional descriptors were found to be inappropriate for modeling log K_BSA/w of IOCs. Thus, alternative approaches are urgently needed to develop predictive models for K_BSA/w of IOCs. In this study, molecular descriptors that characterize ionization effects (e.g., chemical-form-adjusted descriptors) were calculated and used to develop predictive models for log K_BSA/w of IOCs. The models developed had high goodness-of-fit, robustness, and predictive ability. The predictor variables selected to construct the models included the chemical-form-adjusted average of the negative potentials on the molecular surface (V_s-adj^-), the chemical-form-adjusted molecular dipole moment (dipolemoment_adj), and the logarithm of the n-octanol/water distribution coefficient (log D). As these molecular descriptors can be calculated directly from molecular structures, the developed model can easily be used to fill the log K_BSA/w data gap for other IOCs within the applicability domain. Furthermore, the chemical-form-adjusted descriptors calculated in this study could also be used to construct predictive models for other endpoints of IOCs. Copyright © 2017 Elsevier Inc. All rights reserved.
Geophysical evaluation of sandstone aquifers in the Reconcavo-Tucano Basin, Bahia -- Brazil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lima, O.A.L. de
1993-11-01
The upper clastic sediments in the Reconcavo-Tucano basin comprise a multilayer aquifer system of Jurassic age. Its groundwater is normally fresh down to depths of more than 1,000 m. Locally, however, there are zones producing high salinity or sulfur geothermal water. Analysis of electrical logs of more than 150 wells enabled the identification of the most typical sedimentary structures and the gross geometries for the sandstone units in selected areas of the basin. Based on this information, the thick sands are interpreted as coalescent point bars and the shales as flood plain deposits of a large fluvial environment. The resistivity logs and core laboratory data are combined to develop empirical equations relating aquifer porosity and permeability to log-derived parameters such as formation factor and cementation exponent. Temperature logs of 15 wells were useful to quantify the water leakage through semiconfining shales. The groundwater quality was inferred from spontaneous potential (SP) log deflections under control of chemical analysis of water samples. An empirical chart is developed that relates the SP-derived water resistivity to the true water resistivity within the formations. The patterns of salinity variation with depth inferred from SP logs were helpful in identifying subsurface flows along major fault zones, where extensive mixing of water is taking place. A total of 49 vertical Schlumberger resistivity soundings aid in defining aquifer structures and in extrapolating the log derived results. Transition zones between fresh and saline waters have also been detected based on a combination of logging and surface sounding data. Ionic filtering by water leakage across regional shales, local convection and mixing along major faults and hydrodynamic dispersion away from lateral permeability contrasts are the main mechanisms controlling the observed distributions of salinity and temperature within the basin.
NASA Astrophysics Data System (ADS)
Jiang, Quan; Zhong, Shan; Cui, Jie; Feng, Xia-Ting; Song, Leibo
2016-12-01
We investigated the statistical characteristics and probability distributions of the mechanical parameters of natural rock using triaxial compression tests. Twenty cores of Jinping marble were tested at each of five levels of confining stress (5, 10, 20, 30, and 40 MPa). From these full stress-strain data, we summarized the numerical characteristics and determined the probability distribution form of several important mechanical parameters, including deformational parameters, characteristic strength, characteristic strains, and failure angle. The statistical evidence relating to the mechanical parameters of rock provided new information about the marble's probabilistic distribution characteristics. The normal and log-normal distributions were appropriate for describing random strengths of rock; the coefficients of variation of the peak strengths had no relationship to the confining stress; the only acceptable random distribution for both Young's elastic modulus and Poisson's ratio was the log-normal function; and the cohesive strength had a different probability distribution pattern than the frictional angle. The triaxial tests and statistical analysis also provided experimental evidence for deciding the minimum reliable number of experimental samples and for picking appropriate parameter distributions to use in reliability calculations for rock engineering.
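A short sketch of the distribution check described above, assuming scipy and simulated peak strengths rather than the Jinping marble data: fit normal and log-normal models and compare them by log-likelihood and a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats

# Fit normal and log-normal distributions to a set of peak strengths and compare
# the fits. The 20 "strength" values are simulated, not the Jinping marble data.
rng = np.random.default_rng(6)
strength = rng.lognormal(mean=np.log(120.0), sigma=0.12, size=20)   # MPa

# normal fit
mu, sd = stats.norm.fit(strength)
ll_norm = stats.norm.logpdf(strength, mu, sd).sum()
ks_norm = stats.kstest(strength, "norm", args=(mu, sd)).pvalue

# log-normal fit (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(strength, floc=0)
ll_logn = stats.lognorm.logpdf(strength, shape, loc, scale).sum()
ks_logn = stats.kstest(strength, "lognorm", args=(shape, loc, scale)).pvalue

print(f"normal:     log-likelihood {ll_norm:7.2f}, KS p = {ks_norm:.2f}")
print(f"log-normal: log-likelihood {ll_logn:7.2f}, KS p = {ks_logn:.2f}")
```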
NASA Astrophysics Data System (ADS)
Rhea, James R.; Young, Thomas C.
1987-10-01
The proton binding characteristics of humic acids extracted from the sediments of Cranberry Pond, an acidic water body located in the Adirondack Mountain region of New York State, were explored by the application of a multiligand distribution model. The model characterizes a class of proton binding sites by mean log K values and the standard deviations of log K values about the mean. Mean log K values and their relative abundances were determined directly from experimental titration data. The model accurately predicts the binding of protons by the humic acids for pH values in the range 3.5 to 10.0.
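A generic numerical sketch of this kind of multiligand model (not the calibrated Cranberry Pond parameters): each site class has log K values normally distributed about a mean, and the protons bound at a given pH follow from integrating the protonation fraction over that distribution; site densities, mean log K values, and spreads are hypothetical.

```python
import numpy as np

# Gaussian multiligand proton-binding sketch: for each site class, integrate the
# Langmuir-type protonation fraction over a normal distribution of log K.
site_classes = [
    # (site density, mmol/g), (mean log K), (std of log K)
    (2.0, 4.5, 0.8),     # carboxylic-type sites (hypothetical)
    (1.0, 9.5, 1.0),     # phenolic-type sites (hypothetical)
]

def bound_protons(pH, n_grid=400):
    total = 0.0
    for density, mean_logK, sd_logK in site_classes:
        logK = np.linspace(mean_logK - 4 * sd_logK, mean_logK + 4 * sd_logK, n_grid)
        weight = np.exp(-0.5 * ((logK - mean_logK) / sd_logK) ** 2)
        weight /= np.trapz(weight, logK)                     # normalized Gaussian in log K
        frac_protonated = 1.0 / (1.0 + 10.0 ** (pH - logK))  # K[H+] / (1 + K[H+])
        total += density * np.trapz(weight * frac_protonated, logK)
    return total        # mmol protons bound per gram of humic acid

for pH in (3.5, 5.0, 7.0, 8.5, 10.0):
    print(f"pH {pH:4.1f}: {bound_protons(pH):.2f} mmol/g bound")
```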
Venous gas embolism after an open-water air dive and identical repetitive dive.
Schellart, N A M; Sterk, W
2012-01-01
Decompression tables indicate that a repetitive dive to the same depth as a first dive should be shortened to obtain the same probability of occurrence of decompression sickness (pDCS). Repetition protocols are based on small numbers, which is a reason for re-examination. Since venous gas embolism (VGE) and pDCS are related, one would expect a higher bubble grade (BG) of VGE after the repetitive dive without reducing bottom time. BGs were determined in 28 divers after a first and an identical repetitive air dive of 40 minutes to 20 meters of sea water. Doppler BG scores were transformed to log number of bubbles/cm2 (logB) to allow numerical analysis. With a previously published model (Model2), pDCS was calculated for the first dive and for both dives together. From pDCS, theoretical logBs were estimated with a pDCS-to-logB model constructed from literature data. However, pDCS for the second dive was calculated using conditional probability. This was achieved in Model2 and indirectly via tissue saturations. The combination of both models shows a significant increase of logB after the second dive, whereas the measurements showed an unexpectedly lower logB. These differences between measurements and model expectations are significant (p-values < 0.01). The reason for this discrepancy is uncertain. The most likely speculation would be that the divers, who were relatively old, did not perform physical activity for some days before the first dive. Our data suggest that, wisely, the first dive after a period of no exercise should be performed conservatively, particularly for older divers.
Morris C. Johnson; Jessica E. Halofsky; David L. Peterson
2013-01-01
We used a combination of field measurements and simulation modelling to quantify the effects of salvage logging, and of a combination of salvage logging and pile-and-burn surface fuel treatment (treatment combination), on fuel loadings, fire behaviour, fuel consumption and pollutant emissions at three points in time: post-windstorm (before salvage logging), post-...
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa; Gholami, Amin
2015-06-01
Free fluid porosity and rock permeability, arguably the most critical parameters of a hydrocarbon reservoir, can be obtained by processing nuclear magnetic resonance (NMR) logs. Unlike conventional well logs (CWLs), however, NMR logging is expensive and time-consuming. Therefore, the idea of synthesizing NMR logs from CWLs holds great appeal for reservoir engineers. For this purpose, three optimization strategies are followed. First, an artificial neural network (ANN) is optimized with a hybrid genetic algorithm-pattern search (GA-PS) technique; then fuzzy logic (FL) is optimized by the same GA-PS approach; and finally an alternating conditional expectation (ACE) model is constructed, using the concept of a committee machine, to combine the outputs of the optimized and non-optimized FL and ANN models. Results indicate that optimizing the traditional ANN and FL models with the GA-PS technique significantly enhances their performance. Furthermore, the ACE committee of the aforementioned models produces more accurate and reliable results than any single model performing alone.
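A minimal sketch of the committee-machine idea follows: two experts' predictions of a target log are combined through weights fitted on training data. The least-squares weighting, the hypothetical porosity values, and the function names are illustrative assumptions; the paper combines models through an ACE transformation optimized by GA-PS, which is not reproduced here.

```python
# Sketch: a simple committee machine that combines the NMR-log predictions of two
# models (e.g. an ANN and a fuzzy-logic model) by least-squares weighting.
# This illustrates the committee idea only, not the paper's ACE/GA-PS scheme.
import numpy as np

def committee_weights(preds, target):
    """Fit linear combination weights (plus bias) that best reproduce the target."""
    X = np.column_stack([preds, np.ones(len(target))])   # predictions + bias column
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return w

# Hypothetical predictions of free-fluid porosity from two experts on training wells.
target = np.array([0.12, 0.18, 0.22, 0.15, 0.30])
ann_pred = np.array([0.11, 0.20, 0.21, 0.17, 0.28])
fl_pred = np.array([0.13, 0.16, 0.24, 0.14, 0.31])

w = committee_weights(np.column_stack([ann_pred, fl_pred]), target)
combined = w[0] * ann_pred + w[1] * fl_pred + w[2]
print("weights:", np.round(w, 3))
print("combined RMSE:", np.sqrt(np.mean((combined - target) ** 2)))
```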
Estimating residual fault hitting rates by recapture sampling
NASA Technical Reports Server (NTRS)
Lee, Larry; Gupta, Rajan
1988-01-01
For the recapture debugging design introduced by Nayak (1988), the problem of estimating the hitting rates of the faults remaining in the system is considered. In the context of a conditional likelihood, moment estimators are derived and are shown to be asymptotically normal and fully efficient. Fixed-sample properties of the moment estimators are compared, through simulation, with those of the conditional maximum likelihood estimators. Properties of the conditional model are investigated, such as the asymptotic distribution of linear functions of the fault hitting frequencies and a representation of the full data vector in terms of a sequence of independent random vectors. It is assumed that the residual hitting rates follow a log-linear rate model and that the testing process is truncated when the gaps between detections of new errors exceed a fixed amount of time.
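The sketch below only simulates the kind of data a recapture debugging design generates under a log-linear rate model (first hit = detection, later hits = recaptures); it does not implement the moment or conditional maximum likelihood estimators studied in the paper, and all parameter values are hypothetical.

```python
# Sketch: simulating the recapture debugging design under a log-linear rate model.
# Faults hit the testing process as independent Poisson streams; a fault is "detected"
# at its first hit and subsequent hits are recaptures. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

n_faults = 50
alpha, beta = -3.0, -0.05               # log-linear rates: log(lambda_i) = alpha + beta * i
rates = np.exp(alpha + beta * np.arange(1, n_faults + 1))

T = 200.0                                # total testing time
hits = rng.poisson(rates * T)            # number of hits per fault in (0, T]

detected = hits > 0
recaptures = hits[detected] - 1          # hits after first detection

print("faults detected:", detected.sum(), "of", n_faults)
print("total recaptures:", recaptures.sum())
print("residual hitting rate (undetected faults):", rates[~detected].sum())
```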
Savage, W.Z.; Morin, R.H.
2002-01-01
We have applied a previously developed analytical stress model to interpret subsurface stress conditions inferred from acoustic televiewer logs obtained in two municipal water wells located in a valley in the southern Davis Mountains near Alpine, Texas. The appearance of stress-induced breakouts with orientations that shift by 90° at two different depths in one of the wells is explained by results from exact solutions for the effects of valleys on gravity and tectonically induced subsurface stresses. The theoretical results demonstrate that above a reference depth termed the hinge point, a location that is dependent on Poisson's ratio, valley shape, and the magnitude of the maximum horizontal tectonic stress normal to the long axis of the valley, horizontal stresses parallel to the valley axis are greater than those normal to it. At depths below this hinge point the situation reverses, and horizontal stresses normal to the valley axis are greater than those parallel to it. Application of the theoretical model at Alpine is accommodated by the fact that nearby earthquake focal mechanisms establish an extensional stress regime with the regional maximum horizontal principal stress aligned perpendicular to the valley axis. We conclude that the localized stress field associated with a valley setting can be highly variable and that breakouts need to be examined in this context when estimating the orientations and magnitudes of regional principal stresses.
NASA Technical Reports Server (NTRS)
Asner, Gregory P.; Keller, Michael M.; Silva, Jose Natalino; Zweede, Johan C.; Pereira, Rodrigo, Jr.
2002-01-01
Major uncertainties exist regarding the rate and intensity of logging in tropical forests worldwide; these uncertainties severely limit economic, ecological, and biogeochemical analyses of these regions. Recent sawmill surveys in the Amazon region of Brazil show that the area logged is nearly equal to the total area deforested annually, but conversion of survey data to forest area, forest structural damage, and biomass estimates requires multiple assumptions about logging practices. Remote sensing could provide an independent means to monitor logging activity and to estimate the biophysical consequences of this land use. Previous studies have demonstrated that the detection of logging in Amazon forests is difficult, and no studies have developed either the quantitative physical basis or the remote sensing approaches needed to estimate the effects of various logging regimes on forest structure. A major reason for these limitations has been a lack of sufficient, well-calibrated optical satellite data, which, in turn, has impeded the development and use of physically-based, quantitative approaches for detection and structural characterization of forest logging regimes. We propose to use data from the EO-1 Hyperion imaging spectrometer to greatly increase our ability to estimate the presence and structural attributes of selective logging in the Amazon Basin. Our approach is based on four "biogeophysical indicators" not yet derived simultaneously from any satellite sensor: 1) green canopy leaf area index; 2) degree of shadowing; 3) presence of exposed soil; and 4) non-photosynthetic vegetation material. Airborne, field and modeling studies have shown that the optical reflectance continuum (400-2500 nm) contains sufficient information to derive estimates of each of these indicators. Our ongoing studies in the eastern Amazon basin also suggest that these four indicators are sensitive to logging intensity. Satellite-based estimates of these indicators should provide a means to quantify both the presence and degree of structural disturbance caused by various logging regimes. Our quantitative assessment of Hyperion hyperspectral and ALI multi-spectral data for the detection and structural characterization of selective logging in Amazonia will benefit from data collected through an ongoing project run by the Tropical Forest Foundation, within which we have developed a study of the canopy and landscape biophysics of conventional and reduced-impact logging. We will add to our base of forest structural information in concert with an EO-1 overpass. Using a photon transport model inversion technique that accounts for non-linear mixing of the four biogeophysical indicators, we will estimate these parameters across a gradient of selective logging intensity provided by conventional and reduced-impact logging sites. We will also compare our physically-based approach to both conventional (e.g., NDVI) and novel (e.g., SWIR-channel) vegetation indices as well as to linear mixture modeling methods. We will cross-compare these approaches using the Hyperion and ALI imagers to determine the strengths and limitations of these two sensors for applications of forest biophysics. This effort will yield the first physically-based, quantitative analysis of the detection and intensity of selective logging in Amazonia, comparing hyperspectral and improved multi-spectral approaches as well as inverse modeling, linear mixture modeling, and vegetation index techniques.
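As a rough illustration of the linear mixture modeling mentioned above, the sketch below unmixes a single pixel spectrum into the four indicators (green vegetation, shade, exposed soil, non-photosynthetic vegetation) with non-negative least squares. The endmember spectra, band count, and mixing fractions are made up, and the photon transport model inversion proposed in the study is not reproduced.

```python
# Sketch: linear mixture analysis for the four indicators named above, solved per
# pixel with non-negative least squares. Endmember spectra and the pixel spectrum
# are hypothetical.
import numpy as np
from scipy.optimize import nnls

# Columns: green vegetation, shade, soil, NPV (made-up reflectances, 6 bands).
endmembers = np.array([
    [0.04, 0.01, 0.18, 0.12],
    [0.08, 0.01, 0.22, 0.18],
    [0.45, 0.02, 0.28, 0.25],
    [0.40, 0.02, 0.32, 0.30],
    [0.22, 0.01, 0.38, 0.42],
    [0.12, 0.01, 0.35, 0.38],
])

# A synthetic mixed pixel: 50% green vegetation, 20% shade, 20% soil, 10% NPV.
pixel = endmembers @ np.array([0.5, 0.2, 0.2, 0.1])

fractions, residual = nnls(endmembers, pixel)
fractions /= fractions.sum()            # normalize to sum-to-one fractions
print("fractions (GV, shade, soil, NPV):", np.round(fractions, 3))
```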
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagiwara, Teruhiko
1996-12-31
Induction log responses to layered, dipping, and anisotropic formations are examined analytically. The analytical model is especially helpful in understanding induction log responses to thinly laminated binary formations, such as sand/shale sequences, that exhibit macroscopically anisotropic resistivity. Two applications of the analytical model are discussed. In one application we examine special induction log shoulder-bed corrections for use when thin anisotropic beds are encountered. It is known that thinly laminated sand/shale sequences act as macroscopically anisotropic formations. Hydrocarbon-bearing formations also act as macroscopically anisotropic formations when they consist of alternating layers of different grain-size distributions. When such formations are thick, induction logs accurately read the macroscopic conductivity, from which the hydrocarbon saturation in the formations can be computed. When the laminated formations are not thick, proper shoulder-bed corrections (or thin-bed corrections) should be applied to obtain the true macroscopic formation conductivity and to estimate the hydrocarbon saturation more accurately. The analytical model is used to calculate the thin-bed effect and to evaluate the shoulder-bed corrections. We show that the formation resistivity, and hence the hydrocarbon saturation, are greatly overestimated when the anisotropy effect is not accounted for and conventional shoulder-bed corrections are applied to the log responses from such laminated formations.
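As background to the macroscopic anisotropy described above, the sketch below computes the horizontal and vertical conductivities of a thinly laminated sand/shale sequence from the standard parallel (arithmetic) and series (harmonic) averages. The volume fraction and layer conductivities are hypothetical, and this is not the paper's analytical induction-log model.

```python
# Sketch: macroscopic anisotropy of a thinly laminated sand/shale sequence.
# Current parallel to the laminae sees the arithmetic average of conductivities;
# current perpendicular to them sees the harmonic average. Values are hypothetical.
import numpy as np

f_sand = 0.6                 # volume fraction of sand laminae
sigma_sand = 0.05            # S/m (hydrocarbon-bearing sand, hypothetical)
sigma_shale = 1.0            # S/m (shale, hypothetical)

sigma_h = f_sand * sigma_sand + (1 - f_sand) * sigma_shale          # parallel to bedding
sigma_v = 1.0 / (f_sand / sigma_sand + (1 - f_sand) / sigma_shale)  # normal to bedding

print(f"horizontal conductivity = {sigma_h:.3f} S/m  (R_h = {1/sigma_h:.2f} ohm-m)")
print(f"vertical   conductivity = {sigma_v:.3f} S/m  (R_v = {1/sigma_v:.2f} ohm-m)")
print(f"anisotropy coefficient sqrt(R_v/R_h) = {np.sqrt(sigma_h / sigma_v):.2f}")
```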
NASA Astrophysics Data System (ADS)
Nomura, M.; Ohsuga, K.
2017-03-01
In order to reveal the origin of the ultrafast outflows (UFOs) that are frequently observed in active galactic nuclei (AGNs), we perform two-dimensional radiation hydrodynamics simulations of line-driven disc winds, which are accelerated by the radiation force due to spectral lines. The line-driven winds are successfully launched for the range M_BH = 10^6-10^9 M_⊙ and ε = 0.1-0.5, and the resulting mass outflow rate (\dot{M}_w), momentum flux (\dot{p}_w), and kinetic luminosity (\dot{E}_w) lie in the region containing 90 per cent of the posterior probability distribution in the \dot{M}_w-L_bol, \dot{p}_w-L_bol, and \dot{E}_w-L_bol planes shown in Gofford et al., where M_BH is the black hole mass, ε is the Eddington ratio, and L_bol is the bolometric luminosity. The best-fitting relations in Gofford et al., d log \dot{M}_w / d log L_bol ≈ 0.9, d log \dot{p}_w / d log L_bol ≈ 1.2, and d log \dot{E}_w / d log L_bol ≈ 1.5, are roughly consistent with our results, d log \dot{M}_w / d log L_bol ≈ 9/8, d log \dot{p}_w / d log L_bol ≈ 10/8, and d log \dot{E}_w / d log L_bol ≈ 11/8. In addition, our model predicts that no UFO features are detected for AGNs with ε ≲ 0.01, since the winds do not appear. Also, only AGNs with M_BH ≲ 10^8 M_⊙ exhibit UFOs when ε ∼ 0.025. These predictions nicely agree with the X-ray observations. These results support the line-driven disc wind as the origin of the UFOs.
Spontaneous occurrence of a potentially night blinding disorder in guinea pigs.
Racine, Julie; Behn, Darren; Simard, Eric; Lachapelle, Pierre
2003-07-01
Several hereditary retinal disorders such as retinitis pigmentosa and congenital stationary night blindness compromise, sometimes exclusively, the activity of the rod pathway. Unfortunately, there are few animal models of these disorders that could help us better understand the pathophysiological processes involved. The purpose of this report is to present a pedigree of guinea pigs in which, as a result of a consanguineous mating and subsequent selective breeding, we developed a new, naturally occurring animal model of a rod disorder. Analysis of retinal function with the electroretinogram (ERG) reveals that the threshold for rod-mediated ERGs is significantly increased, by more than 2 log units, compared to that of normal guinea pigs. Furthermore, in response to a suprathreshold stimulus, also delivered under scotopic conditions, which yields a mixed cone-rod response in normal guinea pigs, the ERG waveform in our mutant guinea pigs is almost identical (in amplitude and timing of the a- and b-waves) to that evoked under photopic conditions. This suggests either a structural (abnormal development or absence) or a functional deficiency of the rod photoreceptors. We believe that our pedigree may represent a new animal model of a night blinding disorder, and that this condition is inherited as an autosomal recessive trait in the guinea pig population.
On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Rincon, Rafael; Liao, Liang
2003-01-01
Although most parameterizations of the drop size distribution (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large-scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution, and the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as moments of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power-law relationship between rain rate, reflectivity factor, specific attenuation, or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall, in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° x 5° x 1-month space-time boxes. To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site, where as many as 4 disdrometers have been used to acquire data over a 2 km path. These data should help quantify the temporal form of the covariance matrix at this site.
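The first property cited above (that the logarithm of any DSD moment is a linear combination of the log-normal parameters) can be checked numerically: for a log-normal DSD with total number N_T and log-scale parameters mu and sigma, ln M_n = ln N_T + n*mu + n^2*sigma^2/2. The sketch below compares this expression against direct integration, using hypothetical parameter values.

```python
# Sketch: for a log-normal DSD, the log of any moment is linear in the parameters
# (ln N_T, mu, sigma^2): ln M_n = ln N_T + n*mu + n^2*sigma^2/2. The numerical
# check below uses hypothetical parameter values.
import numpy as np
from scipy import integrate

N_T, mu, sigma = 1000.0, np.log(1.2), 0.4   # total number, mean/std of ln(D); hypothetical

def lognormal_dsd(D):
    return N_T / (np.sqrt(2 * np.pi) * sigma * D) * np.exp(-(np.log(D) - mu) ** 2 / (2 * sigma ** 2))

for n in (3, 6):                            # e.g. n=3 ~ water content, n=6 ~ reflectivity factor
    numeric, _ = integrate.quad(lambda D: D ** n * lognormal_dsd(D), 1e-6, 50.0)
    analytic = np.log(N_T) + n * mu + 0.5 * n ** 2 * sigma ** 2
    print(f"n={n}: ln(moment) numeric = {np.log(numeric):.4f}, analytic = {analytic:.4f}")
```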
Simple display system of mechanical properties of cells and their dispersion.
Shimizu, Yuji; Kihara, Takanori; Haghparast, Seyed Mohammad Ali; Yuba, Shunsuke; Miyake, Jun
2012-01-01
The mechanical properties of cells are unique indicators of their states and functions. However, it is difficult to appreciate the magnitudes of these mechanical properties, owing to the small size of cells and the broad distribution of their mechanical properties. Here, we developed a simple virtual reality system for presenting the mechanical properties of cells and their dispersion using a haptic device and a PC. This system simulates atomic force microscopy (AFM) nanoindentation experiments on floating cells in virtual environments. An operator can virtually position the AFM spherical probe over a round cell with the haptic handle on the PC monitor and feel the force interaction. The Young's modulus of mesenchymal stem cells and HEK293 cells in the floating state was measured by AFM. The distribution of the Young's modulus of these cells was broad and followed a log-normal pattern. To represent the mechanical properties together with the cell-to-cell variance, we used a log-normally distributed random number determined by the mode and variance of the Young's modulus of these cells. A Young's modulus value was drawn for each touch event between the probe surface and the cell object, and the force generated by the haptic device was calculated using a Hertz model corresponding to the indentation depth and the drawn Young's modulus value. Using this system, we can feel the mechanical properties and their dispersion for each cell type in real time. This system will help us not only recognize the magnitudes of the mechanical properties of diverse cells but also share them with others.
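A minimal sketch of the force calculation described above: a Young's modulus is drawn from a log-normal distribution for each touch event and inserted into the Hertz contact model for a spherical probe. The modal stiffness, log-scale spread, probe radius, and Poisson's ratio below are hypothetical values chosen for illustration, not the measured values for the mesenchymal stem cells or HEK293 cells.

```python
# Sketch: drawing a log-normally distributed Young's modulus per touch event and
# computing the Hertz contact force for a spherical probe. All parameter values
# are hypothetical and chosen only to illustrate the calculation.
import numpy as np

rng = np.random.default_rng(42)

mode_E = 500.0        # Pa, modal Young's modulus of the cell population (hypothetical)
sigma_log = 0.8       # spread of ln(E) (hypothetical)
mu_log = np.log(mode_E) + sigma_log ** 2     # log-normal mode = exp(mu - sigma^2)

nu = 0.5              # Poisson's ratio of the cell (incompressibility assumption)
R = 2.5e-6            # m, radius of the spherical AFM probe

def hertz_force(E, delta):
    """Hertz contact force (N) of a sphere indenting a flat elastic half-space."""
    return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * np.sqrt(R) * delta ** 1.5

# One Young's modulus per touch event, then the force at 1 micrometre indentation.
for event in range(3):
    E = rng.lognormal(mean=mu_log, sigma=sigma_log)
    print(f"event {event}: E = {E:7.1f} Pa, F(1 um) = {hertz_force(E, 1e-6) * 1e12:.1f} pN")
```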