NASA Astrophysics Data System (ADS)
Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua
2018-06-01
The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over spatially correlated log-normal fading channels is analyzed in terms of uncoded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a single log-normal random variable. The analytical and simulation results confirm that increasing the correlation coefficients among sub-channels degrades system performance. Moreover, receiver diversity is more effective at resisting the channel fading caused by spatial correlation.
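Wilkinson's (Fenton-Wilkinson) approach can be sketched directly: match the first two moments of the sum of correlated log-normal variables to a single log-normal. A minimal Python sketch, assuming zero-mean Gaussian exponents and an illustrative 2x2 correlation matrix (the parameter values are not taken from the paper):

```python
import numpy as np

def fenton_wilkinson(mu, cov):
    """Log-normal (mu_S, sigma_S) matching the first two moments of
    S = sum_i exp(X_i), with X ~ N(mu, cov)."""
    mu = np.asarray(mu, float)
    cov = np.asarray(cov, float)
    var = np.diag(cov)
    m1 = np.sum(np.exp(mu + var / 2.0))          # E[S]
    m2 = 0.0                                     # E[S^2]
    for i in range(len(mu)):
        for j in range(len(mu)):
            m2 += np.exp(mu[i] + mu[j] + 0.5 * (var[i] + var[j]) + cov[i, j])
    sigma_s2 = np.log(m2 / m1**2)
    mu_s = np.log(m1) - sigma_s2 / 2.0
    return mu_s, np.sqrt(sigma_s2)

# Illustrative 2x2 case with correlation coefficient rho (assumed values)
rho, sigma = 0.5, 0.3
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
mu_s, sigma_s = fenton_wilkinson([0.0, 0.0], cov)

# Monte Carlo check: the matched log-normal reproduces the mean of the sum
x = np.random.multivariate_normal([0.0, 0.0], cov, size=200_000)
s = np.exp(x).sum(axis=1)
print(mu_s, sigma_s)
print(np.exp(mu_s + sigma_s**2 / 2), s.mean())
```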
A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence
NASA Astrophysics Data System (ADS)
Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.
2018-04-01
The weak atmospheric turbulence condition in optical wireless communication (OWC) is captured by the log-normal distribution. The analytical evaluation of the average bit error rate (BER) of an OWC system under weak turbulence is intractable because it involves the statistical averaging of the Gaussian Q-function over the log-normal distribution. In this paper, a simple closed-form approximation for the BER of an OWC system under weak turbulence is given. Computation of the BER for various modulation schemes is carried out using the proposed expression. The results obtained with the proposed expression compare favorably with those obtained using the Gauss-Hermite quadrature approximation and Monte Carlo simulations.
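The averaging that makes the exact BER intractable is easy to carry out numerically, which is what closed-form approximations are benchmarked against. A sketch, assuming a generic conditional BER of Q(sqrt(gamma_bar)*h) with a mean-one log-normal fade h; the SNR and log-amplitude standard deviation are illustrative values, not the paper's:

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def avg_ber_gh(gamma_bar, sigma_x, n=40):
    """E[Q(sqrt(gamma_bar)*h)] with ln h ~ N(-sigma_x^2/2, sigma_x^2), i.e. E[h]=1,
    evaluated by n-point Gauss-Hermite quadrature."""
    t, w = np.polynomial.hermite.hermgauss(n)
    h = np.exp(-0.5 * sigma_x**2 + np.sqrt(2.0) * sigma_x * t)
    return np.sum(w * qfunc(np.sqrt(gamma_bar) * h)) / np.sqrt(np.pi)

def avg_ber_mc(gamma_bar, sigma_x, n=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    h = np.exp(rng.normal(-0.5 * sigma_x**2, sigma_x, n))
    return qfunc(np.sqrt(gamma_bar) * h).mean()

# Quadrature and Monte Carlo should agree closely (assumed SNR and sigma_x)
print(avg_ber_gh(100.0, 0.3), avg_ber_mc(100.0, 0.3))
```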
Log-Normal Turbulence Dissipation in Global Ocean Models
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor
2018-03-01
Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality—robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
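A quick way to quantify "nearly log-normal" is to look at the skewness and excess kurtosis of log(epsilon), both of which vanish for an exact log-normal. A sketch on synthetic dissipation values; the parameters are assumed and not taken from the ocean simulations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic dissipation sample: log-normal body with a mild non-Gaussian perturbation
eps = np.exp(rng.normal(-20.0, 1.5, 100_000)) * rng.gamma(20.0, 1 / 20.0, 100_000)

log_eps = np.log(eps)
print(f"skewness of log(eps):        {stats.skew(log_eps):+.3f}")
print(f"excess kurtosis of log(eps): {stats.kurtosis(log_eps):+.3f}")
# Values near zero indicate approximate log-normality of eps itself.
```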
Box-Cox transformation of firm size data in statistical analysis
NASA Astrophysics Data System (ADS)
Chen, Ting Ting; Takaishi, Tetsuya
2014-03-01
Firm size data usually do not show the normality that is often assumed in statistical analyses such as regression analysis. In this study we focus on two firm size measures: the number of employees and sales. Both deviate considerably from a normal distribution. To improve their normality we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best reproduce the kurtosis of a normal distribution. The two firm size measures transformed by the Box-Cox transformation show strong linearity, indicating that the number of employees and sales have similar properties as firm size indicators. The Box-Cox parameters obtained for the firm size data are found to be very close to zero, in which case the Box-Cox transformation is approximately a log-transformation. This suggests that the firm size data we used are approximately log-normally distributed.
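The selection rule described, picking the Box-Cox parameter that best reproduces the kurtosis of a normal distribution, can be sketched with a simple grid search; the synthetic "firm size" sample and the search grid are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
firm_size = np.exp(rng.normal(4.0, 1.0, 50_000))   # synthetic, roughly log-normal data

lambdas = np.linspace(-0.5, 0.5, 201)
# Pick the lambda whose transformed data have excess kurtosis closest to 0 (normal value)
kurt = [abs(stats.kurtosis(stats.boxcox(firm_size, lmbda=lam))) for lam in lambdas]
best = lambdas[int(np.argmin(kurt))]

print("kurtosis-matching lambda:", best)            # near 0 => ~log-transform
print("MLE lambda (for comparison):", stats.boxcox(firm_size)[1])
```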
PHAGE FORMATION IN STAPHYLOCOCCUS MUSCAE CULTURES
Price, Winston H.
1949-01-01
1. The total nucleic acid synthesized by normal and by infected S. muscae suspensions is approximately the same. This is true for either lag phase cells or log phase cells. 2. The amount of nucleic acid synthesized per cell in normal cultures increases during the lag period and remains fairly constant during log growth. 3. The amount of nucleic acid synthesized per cell by infected cells increases during the whole course of the infection. 4. Infected cells synthesize less RNA and more DNA than normal cells. The ratio of RNA/DNA is larger in lag phase cells than in log phase cells. 5. Normal cells release neither ribonucleic acid nor desoxyribonucleic acid into the medium. 6. Infected cells release both ribonucleic acid and desoxyribonucleic acid into the medium. The time and extent of release depend upon the physiological state of the cells. 7. Infected lag phase cells may or may not show an increased RNA content. They release RNA, but not DNA, into the medium well before observable cellular lysis and before any virus is liberated. At virus liberation, the cell RNA content falls to a value below that initially present, while DNA, which increased during infection, falls to approximately the original value. 8. Infected log cells show a continuous loss of cell RNA and a loss of DNA a short time after infection. At the time of virus liberation the cell RNA value is well below that initially present and the cells begin to lyse. PMID:18139006
Krishnamoorthy, K; Oral, Evrim
2017-12-01
Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
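For orientation, a much simpler classical interval than the SLRT or MOVER procedures of the paper is the Cox-type approximate confidence interval for the mean of a single log-normal sample; the sketch below uses that interval only to illustrate the quantities involved (the data are synthetic):

```python
import numpy as np
from scipy import stats

def lognormal_mean_ci(x, level=0.95):
    """Cox-type approximate CI for E[X] when ln X is normal (a simple classical
    interval, not the SLRT/MOVER procedures of the paper)."""
    y = np.log(np.asarray(x, float))
    n, ybar, s2 = y.size, y.mean(), y.var(ddof=1)
    est = ybar + s2 / 2.0
    se = np.sqrt(s2 / n + s2**2 / (2.0 * (n - 1)))
    z = stats.norm.ppf(0.5 + level / 2.0)
    return np.exp(est), np.exp(est - z * se), np.exp(est + z * se)

x = np.exp(np.random.default_rng(3).normal(1.0, 0.8, 30))   # synthetic log-normal sample
print(lognormal_mean_ci(x))   # point estimate and 95% bounds for the log-normal mean
```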
Improvement of Reynolds-Stress and Triple-Product Lag Models
NASA Technical Reports Server (NTRS)
Olsen, Michael E.; Lillard, Randolph P.
2017-01-01
The Reynolds-stress and triple-product Lag models were created with a normal stress distribution defined by a 4:3:2 ratio of streamwise, spanwise and wall-normal stresses, and a ratio of r_w = 0.3k in the log-layer region of high Reynolds number flat plate flow, which implies R11+ = 4/[(9/2)(0.3)] ≈ 2.96. More recent measurements show a more complex picture of the log-layer region at high Reynolds numbers. The first cut at improving these models, along with the direction for future refinements, is described. Comparison with recent high Reynolds number data shows areas where further work is needed, but also shows that inclusion of the modeled turbulent transport terms improves the prediction where they influence the solution. Additional work is needed to make the model better match experiment, but there is significant improvement in many of the details of the log-layer behavior.
Assessment of the hygienic performances of hamburger patty production processes.
Gill, C O; Rahn, K; Sloan, K; McMullen, L M
1997-05-20
The hygienic conditions of hamburger patties collected from three patty manufacturing plants and six retail outlets were examined. At each manufacturing plant, a sample from newly formed, chilled patties and one from frozen patties were collected from each of 25 batches of patties selected at random. At three retail outlets, 25 samples of frozen patties were collected at random; at two outlets, 25 samples of chilled patties; and at one outlet, 25 samples of both frozen and chilled patties. Each sample consisted of 30 g of meat obtained from five or six patties. Total aerobic, coliform and Escherichia coli counts per gram were enumerated for each sample. The mean log (x) and standard deviation (s) were calculated for the log10 values of each set of 25 counts, on the assumption that the distribution of counts approximated the log-normal. A value for the log10 of the arithmetic mean (log A) was calculated for each set from the values of x and s. A chi-square statistic was calculated for each set as a test of the assumption of the log-normal distribution. The chi-square statistic was calculable for 32 of the 39 sets. Four of the sets gave chi-square values indicative of gross deviation from log-normality. On inspection of those sets, distributions obviously differing from the log-normal were apparent in two. Log A values for total, coliform and E. coli counts for chilled patties from manufacturing plants ranged from 4.4 to 5.1, 1.7 to 2.3 and 0.9 to 1.5, respectively. Log A values for frozen patties from manufacturing plants were between < 0.1 and 0.5 log10 units less than the equivalent values for chilled patties. Log A values for total, coliform and E. coli counts for frozen patties on retail sale ranged from 3.8 to 8.5, < 0.5 to 3.6 and < 0 to 1.9, respectively. The equivalent ranges for chilled patties on retail sale were 4.8 to 8.5, 1.8 to 3.7 and 1.4 to 2.7, respectively. The findings indicate that the general hygienic condition of hamburger patties could be improved by manufacturing them only from manufacturing beef of superior hygienic quality, and by better management of chilled patties at retail outlets.
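The log A calculation follows directly from the log-normal assumption: if the log10 counts are N(x, s^2), then log A = x + (ln 10 / 2) s^2. A small sketch with assumed values of x and s (not the study's data):

```python
import numpy as np

def log_arithmetic_mean(x_bar, s):
    """log10 of the arithmetic mean count, assuming log10(counts) ~ N(x_bar, s^2)."""
    return x_bar + (np.log(10.0) / 2.0) * s**2

# Illustrative values (not from the study): mean log10 count 3.5, SD 0.8
print(log_arithmetic_mean(3.5, 0.8))            # ~4.24

# Monte Carlo check against the direct arithmetic mean of log-normal counts
counts = 10 ** np.random.default_rng(0).normal(3.5, 0.8, 1_000_000)
print(np.log10(counts.mean()))
```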
Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).
Thatcher, R W; North, D; Biver, C
2005-01-01
This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two-tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2 second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross-validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution with 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and the range over the 2,394 gray matter pixels was 27.66% to 0.11%. At P < .01, parametric Z score cross-validation false positives averaged 0.26% and ranged from 6.65% to 0%. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives. The non-parametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives over the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved with the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
Analytical approximations for effective relative permeability in the capillary limit
NASA Astrophysics Data System (ADS)
Rabinovich, Avinoam; Li, Boxiao; Durlofsky, Louis J.
2016-10-01
We present an analytical method for calculating two-phase effective relative permeability, krjeff, where j designates phase (here CO2 and water), under steady state and capillary-limit assumptions. These effective relative permeabilities may be applied in experimental settings and for upscaling in the context of numerical flow simulations, e.g., for CO2 storage. An exact solution for effective absolute permeability, keff, in two-dimensional log-normally distributed isotropic permeability (k) fields is the geometric mean. We show that this does not hold for krjeff since log normality is not maintained in the capillary-limit phase permeability field (Kj=k·krj) when capillary pressure, and thus the saturation field, is varied. Nevertheless, the geometric mean is still shown to be suitable for approximating krjeff when the variance of lnk is low. For high-variance cases, we apply a correction to the geometric average gas effective relative permeability using a Winsorized mean, which neglects large and small Kj values symmetrically. The analytical method is extended to anisotropically correlated log-normal permeability fields using power law averaging. In these cases, the Winsorized mean treatment is applied to the gas curves for cases described by negative power law exponents (flow across incomplete layers). The accuracy of our analytical expressions for krjeff is demonstrated through extensive numerical tests, using low-variance and high-variance permeability realizations with a range of correlation structures. We also present integral expressions for geometric-mean and power law average krjeff for the systems considered, which enable derivation of closed-form series solutions for krjeff without generating permeability realizations.
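The averaging operations mentioned (geometric mean, power-law average, symmetric Winsorized mean) are simple to state in code. A sketch on a synthetic log-normal permeability field; the field size, the ln k variance, and the 5% trimming fraction are assumptions for illustration, not values from the paper:

```python
import numpy as np
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(0)
k = np.exp(rng.normal(0.0, 1.5, (128, 128)))     # log-normal permeability field (assumed variance)

# Geometric mean: the exact k_eff for a 2D isotropic log-normal permeability field
geometric_mean = np.exp(np.log(k).mean())

def power_law_average(field, p):
    """(<field^p>)^(1/p); p -> 0 recovers the geometric mean, p = 1 the arithmetic mean."""
    return np.mean(field**p) ** (1.0 / p)

# Winsorized mean: clip the smallest and largest 5% symmetrically before averaging
winsorized_mean = winsorize(k.ravel(), limits=(0.05, 0.05)).mean()

print(geometric_mean, power_law_average(k, -1.0), power_law_average(k, 1.0), winsorized_mean)
```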
Inference with minimal Gibbs free energy in information field theory.
Ensslin, Torsten A; Weig, Cornelius
2010-11-01
Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT and show that previous renormalization results emerge naturally. They can be understood as the Gaussian approximation to the full posterior probability which has maximal cross information with it. We derive optimized estimators for three applications to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and point spread function, as is needed for gamma-ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a Gaussian signal with unknown spectrum, and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally, we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.
Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka
2016-01-01
Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution for each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka
2016-01-01
Background Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution for each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346
Variability of the Degassing Flux of 4He as an Impact on 4He-Dating of Groundwaters
NASA Astrophysics Data System (ADS)
Torgersen, T.
2009-12-01
4He dating of groundwater is often confounded by an external flux of 4He resulting from crustal degassing. Estimates of this external flux have been made, but what is its impact on estimates of the 4He groundwater age? The existing measures of the 4He flux across the Earth's solid surface have been evaluated collectively. The time- and area-weighted arithmetic mean (standard deviation) of n=33 4He degassing fluxes is 3.32(±0.45) x 10^10 4He atoms m^-2 s^-1. The log-normal mean of 271 measures of the flux into Precambrian shield lakes of Canada is 4.57 x 10^10 4He atoms m^-2 s^-1 with a multiplicative variance of ×/÷3.9. The log-normal mean of measurements (n=33) of the crustal flux is 3.63 x 10^10 4He atoms m^-2 s^-1 with a best estimate one-sigma log-normal error of ×/÷36 based on an assumption of symmetric error bars. (For comparison, the log-normal mean heat flow is 62.2 mW m^-2 with a log-normal variance of ×/÷1.8; the best estimate mean is 65±1.6 mW m^-2, Pollack et al., 1993). The variance of the continental flux is shown to increase with decreasing time scales (×/÷ ~10^6 at 0.5 yr) and decreasing space scales (×/÷ ~10^6 at 1 km), suggesting that the mechanisms of crustal helium transport and degassing contain a high degree of spatial and temporal variability. This best estimate of the mean and variance in the flux of 4He from continents remains approximately equivalent to the radiogenic production rate of 4He in the whole crust. The small degree of variance in the Canadian lake data (n=271), in Precambrian terrain, suggests that it may represent a best approximation of “steady state” crustal degassing. Large-scale vertical mass transport in continental crust is estimated, as scaled values, to be of the order of 10^-5 cm^2 s^-1 for helium (over 2 Byr and 40 km vertically) vs. 10^-2 cm^2 s^-1 for heat. The mass transport rate requires not only release of 4He from the solid phase via fracturing or comminution but also an enhanced rate of mass transport facilitated by some degree of fluid advection (as has been suggested by metamorphic geology), and further implies a separation of heat and mass during transport.
A Space-Saving Approximation Algorithm for Grammar-Based Compression
NASA Astrophysics Data System (ADS)
Sakamoto, Hiroshi; Maruyama, Shirou; Kida, Takuya; Shimozono, Shinichi
A space-efficient approximation algorithm for the grammar-based compression problem, which asks for a given string to find a smallest context-free grammar deriving the string, is presented. For input length n and an optimum CFG size g, the algorithm consumes only O(g log g) space and O(n log* n) time to achieve an O((log* n) log n) approximation ratio to the optimum compression, where log* n is the maximum number of logarithms satisfying log log ... log n > 1. This ratio is thus regarded as almost O(log n), which is the currently best approximation ratio. While g depends on the string, it is known that g = Ω(log n) and g = O(n / log_k n) for strings over a k-letter alphabet [12].
Log-amplitude statistics for Beck-Cohen superstatistics
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Konno, Hidetoshi
2013-05-01
As a possible generalization of Beck-Cohen superstatistical processes, we study non-Gaussian processes with temporal heterogeneity of local variance. To characterize the variance heterogeneity, we define log-amplitude cumulants and log-amplitude autocovariance and derive closed-form expressions of the log-amplitude cumulants for χ2, inverse χ2, and log-normal superstatistical distributions. Furthermore, we show that χ2 and inverse χ2 superstatistics with degree 2 are closely related to an extreme value distribution, called the Gumbel distribution. In these cases, the corresponding superstatistical distributions result in the q-Gaussian distribution with q=5/3 and the bilateral exponential distribution, respectively. Thus, our finding provides a hypothesis that the asymptotic appearance of these two special distributions may be explained by a link with the asymptotic limit distributions involving extreme values. In addition, as an application of our approach, we demonstrate that non-Gaussian fluctuations observed in a stock index futures market can be well approximated by the χ2 superstatistical distribution with degree 2.
Evaluation and validity of a LORETA normative EEG database.
Thatcher, R W; North, D; Biver, C
2005-04-01
To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2 second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform, the mean and standard deviation of the *.lor files were computed for each of the 2,394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of a Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using the LORETA normative database. Log10 and Box-Cox transforms approximated a Gaussian distribution with 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage of Z-scores at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, right sensory motor hematoma and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) adequate approximation to a Gaussian distribution can be achieved with LORETA by using a log10 transform or a Box-Cox transform and parametric statistics, (2) a Z-score normative database is valid with adequate sensitivity when using LORETA, and (3) the Z-score LORETA normative database also consistently localized known pathologies to the expected Brodmann areas as a hypothesis test based on the surface EEG before computing LORETA.
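Once a transform has made the normative data approximately Gaussian, the Z-scoring itself is elementary: z = (log10(x) - mu_norm) / sigma_norm per pixel and frequency. A sketch with made-up normative statistics; under a correct Gaussian model roughly 4.6% of |z| values exceed 2 and 0.3% exceed 3, which is the benchmark the quoted percentages are compared against:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_freqs = 2394, 30

# Hypothetical normative statistics of log10(current density), per pixel and frequency
norm_mean = rng.normal(-1.0, 0.1, (n_pixels, n_freqs))
norm_sd = np.abs(rng.normal(0.2, 0.02, (n_pixels, n_freqs)))

# One subject's values, here drawn from the normative model itself (i.e., a "normal" subject)
subject = 10 ** (norm_mean + norm_sd * rng.standard_normal((n_pixels, n_freqs)))

z = (np.log10(subject) - norm_mean) / norm_sd
print(f"|z| > 2: {100 * np.mean(np.abs(z) > 2):.2f}%")   # ~4.6% expected under Gaussianity
print(f"|z| > 3: {100 * np.mean(np.abs(z) > 3):.2f}%")   # ~0.3% expected under Gaussianity
```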
Statistical distributions of ultra-low dose CT sinograms and their fundamental limits
NASA Astrophysics Data System (ADS)
Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Low-dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically principled image reconstruction methods that make full use of the raw data information. Since most statistical iterative reconstruction methods are based on the assumption that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption in the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) diagnostic CT, where post-log data are well modeled by a normal distribution; (2) low-dose CT, where the normal distribution remains a reasonable approximation and statistically principled (post-log) methods that assume a normal distribution have an advantage; and (3) an ULD regime that is photon-starved, where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) at 120 kVp and 0.5 mAs is the maximum pi value for which a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
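The breakdown of the post-log Gaussian model in the photon-starved regime can be illustrated by simulating Poisson transmission counts and comparing the spread and bias of the post-log data with the first-order (delta-method) prediction var ≈ 1/N̄. The incident photon counts and the line-integral value below are assumptions for illustration, not the paper's acquisition settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def postlog_stats(N0, p_true, n=200_000):
    """Post-log data -ln(N/N0) from transmission counts N ~ Poisson(N0*exp(-p_true))."""
    counts = rng.poisson(N0 * np.exp(-p_true), n)
    counts = np.maximum(counts, 1)              # crude non-positivity correction
    p_hat = -np.log(counts / N0)
    nbar = N0 * np.exp(-p_true)
    # empirical mean, empirical SD, and the Gaussian (delta-method) SD prediction
    return p_hat.mean(), p_hat.std(), np.sqrt(1.0 / nbar)

# Roughly: diagnostic -> low-dose -> ULD incident counts (illustrative values)
for N0 in (1e5, 1e3, 1e1):
    print(N0, postlog_stats(N0, p_true=4.8))   # bias and SD blow up when photon-starved
```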
The missing impact craters on Venus
NASA Technical Reports Server (NTRS)
Speidel, D. H.
1993-01-01
The size-frequency pattern of the 842 impact craters on Venus measured to date can be well described (across four standard deviation units) as a single log-normal distribution with a mean crater diameter of 14.5 km. This result was predicted in 1991 on examination of the initial Magellan analysis. If this observed distribution is close to the real distribution, the 'missing' 90 percent of the small craters and the 'anomalous' lack of surface splotches may thus be neither missing nor anomalous. I think that the missing craters and missing splotches can be satisfactorily explained by accepting that the observed distribution approximates the real one, and that it is not craters that are missing but the impactors. What you see is what you got. The implication that Venus-crossing impactors would have the same type of log-normal distribution is consistent with the recently described distributions for terrestrial craters and Earth-crossing asteroids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurtubise, R.J.; Hussain, A.; Silver, H.F.
1981-11-01
The normal-phase liquid chromatographic models of Scott, Snyder, and Soczewinski were considered for a µ-Bondapak NH2 stationary phase. n-Heptane:2-propanol and n-heptane:ethyl acetate mobile phases of different compositions were used. Linear relationships were obtained from graphs of log K' vs. log mole fraction of the strong solvent for both n-heptane:2-propanol and n-heptane:ethyl acetate mobile phases. A linear relationship was obtained between the reciprocal of corrected retention volume and % wt/v of 2-propanol, but not between the reciprocal of corrected retention volume and % wt/v of ethyl acetate. The slopes and intercept terms from the Snyder and Soczewinski models were found to approximately describe interactions with µ-Bondapak NH2. Capacity factors can be predicted for the compounds by using the equations obtained from mobile phase composition variation experiments.
Applying the log-normal distribution to target detection
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
1992-09-01
Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychological data are plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values, an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
Contributions of Optical and Non-Optical Blur to Variation in Visual Acuity
McAnany, J. Jason; Shahidi, Mahnaz; Applegate, Raymond A.; Zelkha, Ruth; Alexander, Kenneth R.
2011-01-01
Purpose To determine the relative contributions of optical and non-optical sources of intrinsic blur to variations in visual acuity (VA) among normally sighted subjects. Methods Best-corrected VA of sixteen normally sighted subjects was measured using briefly presented (59 ms) tumbling E optotypes that were either unblurred or blurred through convolution with Gaussian functions of different widths. A standard model of intrinsic blur was used to estimate each subject’s equivalent intrinsic blur (σint) and VA for the unblurred tumbling E (MAR0). For 14 subjects, a radially averaged optical point spread function due to higher-order aberrations was derived by Shack-Hartmann aberrometry and fit with a Gaussian function. The standard deviation of the best-fit Gaussian function defined optical blur (σopt). An index of non-optical blur (η) was defined as: 1-σopt/σint. A control experiment was conducted on 5 subjects to evaluate the effect of stimulus duration on MAR0 and σint. Results Log MAR0 for the briefly presented E was correlated significantly with log σint (r = 0.95, p < 0.01), consistent with previous work. However, log MAR0 was not correlated significantly with log σopt (r = 0.46, p = 0.11). For subjects with log MAR0 equivalent to approximately 20/20 or better, log MAR0 was independent of log η, whereas for subjects with larger log MAR0 values, log MAR0 was proportional to log η. The control experiment showed a statistically significant effect of stimulus duration on log MAR0 (p < 0.01) but a non-significant effect on σint (p = 0.13). Conclusions The relative contributions of optical and non-optical blur to VA varied among the subjects, and were related to the subject’s VA. Evaluating optical and non-optical blur may be useful for predicting changes in VA following procedures that improve the optics of the eye in patients with both optical and non-optical sources of VA loss. PMID:21460756
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets.
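A discrete multiplicative log-normal cascade is straightforward to simulate, which is useful for checking any estimator by Monte Carlo. A sketch with an assumed intermittency parameter and number of cascade steps (this is only the data-generating process, not the GMM estimator of the paper):

```python
import numpy as np

def lognormal_cascade(n_steps, lam2, rng):
    """Discrete multiplicative cascade: 2**n_steps weights, each the product of
    n_steps independent mean-one log-normal multipliers with log-variance lam2."""
    w = np.ones(1)
    for _ in range(n_steps):
        w = np.repeat(w, 2)                                            # refine each cell into two
        w *= np.exp(rng.normal(-lam2 / 2.0, np.sqrt(lam2), w.size))    # mean-one multipliers
    return w

rng = np.random.default_rng(0)
x = lognormal_cascade(n_steps=12, lam2=0.05, rng=rng)

# Intermittent bursts: the log-variance of the cascade grows like n_steps * lam2
print(np.var(np.log(x)), 12 * 0.05)
```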
Evaluation of waste mushroom logs as a potential biomass resource for the production of bioethanol.
Lee, Jae-Won; Koo, Bon-Wook; Choi, Joon-Weon; Choi, Don-Ha; Choi, In-Gyu
2008-05-01
In order to investigate the possibility of using waste mushroom logs as a biomass resource for alternative energy production, the chemical and physical characteristics of normal wood and waste mushroom logs were examined. Size reduction of normal wood (145 kW h/tonne) required significantly higher energy consumption than that of waste mushroom logs (70 kW h/tonne). The crystallinity value of waste mushroom logs was dramatically lower (33%) than that of normal wood (49%) after cultivation with Lentinus edodes as spawn. Lignin, an enzymatic hydrolysis inhibitor in sugar production, decreased from 21.07% to 18.78% after inoculation with L. edodes. Total sugar yields obtained by enzyme and acid hydrolysis were higher in waste mushroom logs than in normal wood. After 24 h of fermentation, 12 g/L ethanol was produced from waste mushroom logs, while normal wood produced 8 g/L ethanol. These results indicate that waste mushroom logs are an economically suitable lignocellulosic material for the production of fermentable sugars related to bioethanol production.
Polynomial probability distribution estimation using the method of moments
Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
Polynomial probability distribution estimation using the method of moments.
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
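A stripped-down version of the moment-based construction on a known support [a, b]: choose polynomial coefficients so that the first N raw moments of the polynomial density equal the target moments, which is a linear system. The sketch fits a degree-4 polynomial to sample moments of a log-normal truncated to [0, 5]; support handling and the algorithmic safeguards of the paper are omitted:

```python
import numpy as np

def polynomial_pdf(moments, a, b):
    """Coefficients c_j of p(x) = sum_j c_j x^j on [a, b] whose first
    len(moments) raw moments (including m_0 = 1) match `moments`."""
    n = len(moments)
    A = np.array([[(b**(k + j + 1) - a**(k + j + 1)) / (k + j + 1)
                   for j in range(n)] for k in range(n)])
    return np.linalg.solve(A, np.asarray(moments, float))

# Target: sample moments of a log-normal truncated to [0, 5] (illustrative data)
rng = np.random.default_rng(0)
x = np.exp(rng.normal(0.0, 0.5, 200_000))
x = x[x < 5.0]
moments = [np.mean(x**k) for k in range(5)]      # m_0..m_4, with m_0 = 1

c = polynomial_pdf(moments, 0.0, 5.0)            # ascending coefficients c_0..c_4
grid = np.linspace(0.0, 5.0, 6)
print(c)
print(np.polyval(c[::-1], grid))                 # polynomial PDF values on a grid
```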
Hickman, Stephen; Barton, Colleen; Zoback, Mark; Morin, Roger; Sass, John; Benoit, Richard; ,
1997-01-01
As part of a study relating fractured rock hydrology to in-situ stress and recent deformation within the Dixie Valley Geothermal Field, borehole televiewer logging and hydraulic fracturing stress measurements were conducted in a 2.7-km-deep geothermal production well (73B-7) drilled into the Stillwater fault zone. Borehole televiewer logs from well 73B-7 show numerous drilling-induced tensile fractures, indicating that the direction of the minimum horizontal principal stress, Shmin, is S57°E. As the Stillwater fault at this location dips S50°E at approximately 3??, it is nearly at the optimal orientation for normal faulting in the current stress field. Analysis of the hydraulic fracturing data shows that the magnitude of Shmin is 24.1 and 25.9 MPa at 1.7 and 2.5 km, respectively. In addition, analysis of a hydraulic fracturing test from a shallow well 1.5 km northeast of 73B-7 indicates that the magnitude of Shmin is 5.6 MPa at 0.4 km depth. Coulomb failure analysis shows that the magnitude of Shmin in these wells is close to that predicted for incipient normal faulting on the Stillwater and subparallel faults, using coefficients of friction of 0.6-1.0 and estimates of the in-situ fluid pressure and overburden stress. Spinner flowmeter and temperature logs were also acquired in well 73B-7 and were used to identify hydraulically conductive fractures. Comparison of these stress and hydrologic data with fracture orientations from the televiewer log indicates that hydraulically conductive fractures within and adjacent to the Stillwater fault zone are critically stressed, potentially active normal faults in the current west-northwest extensional stress regime at Dixie Valley.
Speed, spatial, and temporal tuning of rod and cone vision in mouse.
Umino, Yumiko; Solessio, Eduardo; Barlow, Robert B
2008-01-02
Rods and cones subserve mouse vision over a 100 million-fold range of light intensity (-6 to 2 log cd m(-2)). Rod pathways tune vision to the temporal frequency of stimuli (peak, 0.75 Hz) and cone pathways to their speed (peak, approximately 12 degrees/s). Both pathways tune vision to the spatial components of stimuli (0.064-0.128 cycles/degree). The specific photoreceptor contributions were determined by two-alternative, forced-choice measures of contrast thresholds for optomotor responses of C57BL/6J mice with normal vision, Gnat2(cpfl3) mice without functional cones, and Gnat1-/- mice without functional rods. Gnat2(cpfl3) mice (threshold, -6.0 log cd m(-2)) cannot see rotating gratings above -2.0 log cd m(-2) (photopic vision), and Gnat1-/- mice (threshold, -4.0 log cd m(-2)) are blind below -4.0 log cd m(-2) (scotopic vision). Both genotypes can see in the transitional mesopic range (-4.0 to -2.0 log cd m(-2)). Mouse rod and cone sensitivities are similar to those of human. This parametric study characterizes the functional properties of the mouse visual system, revealing the rod and cone contributions to contrast sensitivity and to the temporal processing of visual stimuli.
Doyi, Israel; Essumang, David Kofi; Dampare, Samuel; Glover, Eric Tetteh
Radiation is part of the natural environment: it is estimated that approximately 80 % of all human exposure comes from naturally occurring or background radiation. Certain extractive industries such as mining and oil logging have the potential to increase the risk of radiation exposure to the environment and humans by concentrating the quantities of naturally occurring radiation beyond normal background levels (Azeri-Chirag-Gunashli 2004).
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
Parameter estimation and forecasting for multiplicative log-normal cascades
NASA Astrophysics Data System (ADS)
Leövey, Andrés E.; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets.
Predicting financial market crashes using ghost singularities.
Smug, Damian; Ashwin, Peter; Sornette, Didier
2018-01-01
We analyse the behaviour of a non-linear model of coupled stock and bond prices exhibiting periodically collapsing bubbles. By using the formalism of dynamical system theory, we explain what drives the bubbles and how foreshocks or aftershocks are generated. A dynamical phase space representation of that system coupled with standard multiplicative noise rationalises the log-periodic power law singularity pattern documented in many historical financial bubbles. The notion of 'ghosts of finite-time singularities' is introduced and used to estimate the end of an evolving bubble, using finite-time singularities of an approximate normal form near the bifurcation point. We test the forecasting skill of this method on different stochastic price realisations and compare with Monte Carlo simulations of the full system. Remarkably, the approximate normal form is significantly more precise and less biased. Moreover, the method of ghosts of singularities is less sensitive to the noise realisation, thus providing more robust forecasts.
Predicting financial market crashes using ghost singularities
2018-01-01
We analyse the behaviour of a non-linear model of coupled stock and bond prices exhibiting periodically collapsing bubbles. By using the formalism of dynamical system theory, we explain what drives the bubbles and how foreshocks or aftershocks are generated. A dynamical phase space representation of that system coupled with standard multiplicative noise rationalises the log-periodic power law singularity pattern documented in many historical financial bubbles. The notion of ‘ghosts of finite-time singularities’ is introduced and used to estimate the end of an evolving bubble, using finite-time singularities of an approximate normal form near the bifurcation point. We test the forecasting skill of this method on different stochastic price realisations and compare with Monte Carlo simulations of the full system. Remarkably, the approximate normal form is significantly more precise and less biased. Moreover, the method of ghosts of singularities is less sensitive to the noise realisation, thus providing more robust forecasts. PMID:29596485
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
On the generation of log-Lévy distributions and extreme randomness
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2011-10-01
The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.
Application of ozonated dry ice (ALIGAL™ Blue Ice) for packaging and transport in the food industry.
Fratamico, Pina M; Juneja, Vijay; Annous, Bassam A; Rasanayagam, Vasuhi; Sundar, M; Braithwaite, David; Fisher, Steven
2012-05-01
Dry ice is used by meat and poultry processors for temperature reduction during processing and for temperature maintenance during transportation. ALIGAL™ Blue Ice (ABI), which combines the antimicrobial effect of ozone (O(3)) along with the high cooling capacity of dry ice, was investigated for its effect on bacterial reduction in air, in liquid, and on food and glass surfaces. Through proprietary means, O(3) was introduced to produce dry ice pellets to a concentration of 20 parts per million (ppm) by total weight. The ABI sublimation rate was similar to that of dry ice pellets under identical conditions, and ABI was able to hold the O(3) concentration throughout the normal shelf life of the product. Challenge studies were performed using different microorganisms, including E. coli, Campylobacter jejuni, Salmonella, and Listeria, that are critical to food safety. ABI showed significant (P < 0.05) microbial reduction during bioaerosol contamination (up to 5-log reduction of E. coli and Listeria), on chicken breast (approximately 1.3-log reduction of C. jejuni), on contact surfaces (approximately 3.9 log reduction of C. jejuni), and in liquid (2-log reduction of C. jejuni). Considering the stability of O(3), ease of use, and antimicrobial efficacy against foodborne pathogens, our results suggest that ABI is a better alternative, especially for meat and poultry processors, as compared to dry ice. Further, ABI can potentially serve as an additional processing hurdle to guard against pathogens during processing, transportation, distribution, and/or storage. © 2012 Institute of Food Technologists®
Collective purchase behavior toward retail price changes
NASA Astrophysics Data System (ADS)
Ueno, Hiromichi; Watanabe, Tsutomu; Takayasu, Hideki; Takayasu, Misako
2011-02-01
By analyzing a huge amount of point-of-sale data collected from Japanese supermarkets, we find power law relationships between price and sales numbers. The estimated values of the exponents of these power laws depend on the category of products; however, they are independent of the stores, thereby implying the existence of universal human purchase behavior. The fluctuations of sales numbers around these power laws are generally approximated by log-normal distributions, implying that there are hidden random parameters which might proportionally affect the purchase activity.
Proton Straggling in Thick Silicon Detectors
NASA Technical Reports Server (NTRS)
Selesnick, R. S.; Baker, D. N.; Kanekal, S. G.
2017-01-01
Straggling functions for protons in thick silicon radiation detectors are computed by Monte Carlo simulation. Mean energy loss is constrained by the silicon stopping power, providing higher straggling at low energy and probabilities for stopping within the detector volume. By matching the first four moments of simulated energy-loss distributions, straggling functions are approximated by a log-normal distribution that is accurate for Vavilov k ≥ 0.3. They are verified by comparison to experimental proton data from a charged particle telescope.
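A two-moment version of the idea (the paper matches four moments) fits a log-normal to an energy-loss sample via sigma^2 = ln(1 + v/m^2), mu = ln(m) - sigma^2/2. The "energy-loss" sample below is an assumed skewed distribution, not detector simulation output:

```python
import numpy as np
from scipy import stats

def lognormal_from_mean_var(m, v):
    """Log-normal (mu, sigma) with the given mean m and variance v."""
    sigma2 = np.log(1.0 + v / m**2)
    return np.log(m) - sigma2 / 2.0, np.sqrt(sigma2)

# Illustrative skewed 'energy-loss' sample in MeV (assumed, not simulation output)
rng = np.random.default_rng(0)
dE = 1.0 + rng.gamma(4.0, 0.15, 100_000)

mu, sigma = lognormal_from_mean_var(dE.mean(), dE.var())
q = [0.1, 0.5, 0.9]
print(np.quantile(dE, q))                                 # empirical quantiles
print(stats.lognorm.ppf(q, s=sigma, scale=np.exp(mu)))    # moment-matched log-normal
```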
Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model
NASA Astrophysics Data System (ADS)
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-05-01
Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area, which has the greatest uncertainty if the disease is rare or if the geographical area is small. Bayesian models, or statistical smoothing based on the log-normal model, have therefore been introduced to address this problem with the SMR. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates compared to the classical method. The log-normal model can overcome the SMR problem when there is no observed bladder cancer in an area.
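The SMR itself is just observed over expected counts; its instability when expected counts are small is what motivates the smoothed model. The sketch below computes SMRs for made-up district counts and applies a crude empirical shrinkage of log(SMR) toward the overall mean as a stand-in for a fitted log-normal relative-risk model (it is not the WinBUGS model of the paper):

```python
import numpy as np

# Hypothetical observed and expected bladder cancer counts for a few districts
observed = np.array([0, 2, 5, 12, 30, 3])
expected = np.array([1.2, 1.8, 4.0, 10.5, 28.0, 0.9])

smr = observed / expected                        # classical SMR (unstable for small counts)

# Crude log-normal-style shrinkage of log(SMR) toward the overall mean (illustrative only)
obs = np.maximum(observed, 0.5)                  # avoid log(0) for zero-count areas
log_rr = np.log(obs / expected)
var_i = 1.0 / obs                                # delta-method variance of log(SMR)
tau2 = max(np.var(log_rr) - var_i.mean(), 1e-3)  # rough between-area variance
weight = tau2 / (tau2 + var_i)
log_rr_shrunk = weight * log_rr + (1 - weight) * log_rr.mean()

print(np.round(smr, 2))
print(np.round(np.exp(log_rr_shrunk), 2))        # smoothed relative risks
```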
Haeckel, Rainer; Wosniok, Werner
2010-10-01
The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of the type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ(2) distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CV(e)), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CV(e) (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CV(e). In contrast, a relatively large CV(e) (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed in such cases. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
Simulations of large acoustic scintillations in the straits of Florida.
Tang, Xin; Tappert, F D; Creamer, Dennis B
2006-12-01
Using a full-wave acoustic model, Monte Carlo numerical studies of intensity fluctuations in a realistic shallow water environment that simulates the Straits of Florida, including internal wave fluctuations and bottom roughness, have been performed. Results show that the sound intensity at distant receivers scintillates dramatically. The acoustic scintillation index SI increases rapidly with propagation range and is significantly greater than unity at ranges beyond about 10 km. This result supports a theoretical prediction by one of the authors. Statistical analyses show that the distribution of intensity of the random wave field saturates to the expected Rayleigh distribution with SI= 1 at short range due to multipath interference effects, and then SI continues to increase to large values. This effect, which is denoted supersaturation, is universal at long ranges in waveguides having lossy boundaries (where there is differential mode attenuation). The intensity distribution approaches a log-normal distribution to an excellent approximation; it may not be a universal distribution and comparison is also made to a K distribution. The long tails of the log-normal distribution cause "acoustic intermittency" in which very high, but rare, intensities occur.
A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.
Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua
2017-07-01
Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters.
Miller, Cortney; Heringa, Spencer; Kim, Jinkyung; Jiang, Xiuping
2013-06-01
This study analyzed various organic fertilizers for indicator microorganisms, pathogens, and antibiotic-resistant Escherichia coli, and evaluated the growth potential of E. coli O157:H7 and Salmonella in fertilizers. A microbiological survey was conducted on 103 organic fertilizers from across the United States. Moisture content ranged from approximately 1% to 86.4%, and the average pH was 7.77. The total aerobic mesophiles ranged from approximately 3 to 9 log colony-forming units (CFU)/g. Enterobacteriaceae populations were in the range of <1 to approximately 7 log CFU/g, while coliform levels varied from <1 to approximately 6 log CFU/g. Thirty samples (29%) were positive for E. coli, with levels reaching approximately 6 log CFU/g. There were no confirmed positives for E. coli O157:H7, Salmonella, or Listeria monocytogenes. The majority of E. coli isolates (n=73), confirmed by glutamate decarboxylase (gad) PCR, were from group B1 (48%) and group A (32%). Resistance to 16 antibiotics was examined for 73 E. coli isolates, with 11 isolates having resistance to at least one antibiotic, 5 isolates to ≥ 2 antibiotics, and 2 isolates to ≥ 10 antibiotics. In the presence of high levels of background aerobic mesophiles, Salmonella and E. coli O157:H7 grew approximately 1 log CFU/g within 1 day of incubation in plant-based compost and fish emulsion-based compost, respectively. With low levels of background aerobic mesophiles, Salmonella grew approximately 2.6, 3.0, 3.0, and 3.2 log CFU/g in blood, bone, and feather meals and the mixed-source fertilizer, respectively, whereas E. coli O157:H7 grew approximately 4.6, 4.0, 4.0, and 4.8 log CFU/g, respectively. Our results revealed that the microbiological quality of organic fertilizers varies greatly, with some fertilizers containing antibiotic resistant E. coli and a few supporting the growth of foodborne pathogens after reintroduction into the fertilizer.
Gradually truncated log-normal in USA publicly traded firm size distribution
NASA Astrophysics Data System (ADS)
Gupta, Hari M.; Campanha, José R.; de Aguiar, Daniela R.; Queiroz, Gabriel A.; Raheja, Charu G.
2007-03-01
We study the statistical distribution of firm size for USA and Brazilian publicly traded firms through the Zipf plot technique. Sale size is used to measure firm size. The Brazilian firm size distribution is given by a log-normal distribution without any adjustable parameter. However, we also need to consider different parameters of log-normal distribution for the largest firms in the distribution, which are mostly foreign firms. The log-normal distribution has to be gradually truncated after a certain critical value for USA firms. Therefore, the original hypothesis of proportional effect proposed by Gibrat is valid with some modification for very large firms. We also consider the possible mechanisms behind this distribution.
NASA Astrophysics Data System (ADS)
El-Khadragy, A. A.; Shazly, T. F.; AlAlfy, I. M.; Ramadan, M.; El-Sawy, M. Z.
2018-06-01
An exploration method has been developed using surface and aerial gamma-ray spectral measurements for prospecting for petroleum in stratigraphic and structural traps. The Gulf of Suez is an important region for studying hydrocarbon potentiality in Egypt. The thorium normalization technique was applied to the sandstone reservoirs in the region to delineate zones of hydrocarbon potential using the three spectrometric radioactive gamma-ray logs (eU, eTh and K% logs). This method was applied to the recorded gamma-ray spectrometric logs for the Rudeis and Kareem Formations in the Ras Ghara Oil Field, Gulf of Suez, Egypt. The conventional well logs (gamma-ray, resistivity, neutron, density and sonic logs) were analyzed to determine the net pay zones in the study area. The agreement ratios between the thorium normalization technique and the results of the well log analyses are high, so the thorium normalization technique can be used as a guide for hydrocarbon accumulation in the studied reservoir rocks.
Dekkers, A L M; Slob, W
2012-10-01
In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications--on folate intake and fruit consumption--confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles. We solved this problem by modifying GQ. One method is based on the assumption that the daily intakes are log-normally distributed. Even if this condition is not fulfilled, the log-transformation performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution.
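As a rough illustration of the back-transformation step, the sketch below uses Gauss-Hermite quadrature to average the inverse Box-Cox transform over within-person variability; all parameter values are assumed for illustration, and the paper's actual procedure differs in its details (including the fix for low percentiles mentioned above).

```python
import numpy as np
from scipy.special import inv_boxcox
from scipy.stats import norm

# Assumed parameters on the Box-Cox-transformed scale (illustrative only):
lam = 0.25             # Box-Cox lambda
mu_b, sd_b = 3.0, 0.5  # between-person mean and sd of individual means
sd_w = 0.5             # within-person (day-to-day) sd

nodes, weights = np.polynomial.hermite.hermgauss(20)  # for integrals against exp(-x^2)

def usual_intake(m):
    """Long-run average intake of a person with transformed-scale mean m:
    E[ inv_boxcox(m + sd_w * Z) ], Z ~ N(0,1), by Gauss-Hermite quadrature."""
    z = m + sd_w * np.sqrt(2.0) * nodes
    return np.sum(weights * inv_boxcox(z, lam)) / np.sqrt(np.pi)

# Percentiles of the usual-intake distribution: push percentiles of the
# between-person distribution through the back-transformation.
for p in (0.05, 0.50, 0.95):
    m_p = norm.ppf(p, loc=mu_b, scale=sd_b)
    print(f"P{int(100 * p):02d} usual intake ~ {usual_intake(m_p):.2f}")
# Note: for very low percentiles the quadrature nodes can leave the valid
# Box-Cox range, which is the extreme case the modified GQ addresses.
```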
Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A
2006-10-15
Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
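A quick simulation along the lines of the paper's claim (my own illustration, not the authors' example): sum a moderate number of broadly distributed positive summands and compare how close the sum's distribution is to a fitted Gaussian versus a fitted log-normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 30 positive, strongly skewed summands per sample (log-normal summands here).
n_summands, n_samples = 30, 100_000
sums = rng.lognormal(mean=0.0, sigma=1.2, size=(n_samples, n_summands)).sum(axis=1)

# Distance of the sum's distribution from a fitted Gaussian and a fitted log-normal.
ks_gauss = stats.kstest(sums, "norm", args=(sums.mean(), sums.std())).statistic
shape, loc, scale = stats.lognorm.fit(sums, floc=0.0)
ks_lognorm = stats.kstest(sums, "lognorm", args=(shape, loc, scale)).statistic

print(f"KS distance to fitted Gaussian   : {ks_gauss:.4f}")
print(f"KS distance to fitted log-normal : {ks_lognorm:.4f}")
# Even with 30 summands the log-normal stays much closer here, although the
# central limit theorem guarantees eventual convergence to a Gaussian.
```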
powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks
NASA Astrophysics Data System (ADS)
Murray, Steven G.
2018-05-01
powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.
Spencer, Amy V; Cox, Angela; Lin, Wei-Yu; Easton, Douglas F; Michailidou, Kyriaki; Walters, Kevin
2015-05-01
Bayes factors (BFs) are becoming increasingly important tools in genetic association studies, partly because they provide a natural framework for including prior information. The Wakefield BF (WBF) approximation is easy to calculate and assumes a normal prior on the log odds ratio (logOR) with a mean of zero. However, the prior variance (W) must be specified. Because of the potentially high sensitivity of the WBF to the choice of W, we propose several new BF approximations with logOR ∼N(0,W), but allow W to take a probability distribution rather than a fixed value. We provide several prior distributions for W which lead to BFs that can be calculated easily in freely available software packages. These priors allow a wide range of densities for W and provide considerable flexibility. We examine some properties of the priors and BFs and show how to determine the most appropriate prior based on elicited quantiles of the prior odds ratio (OR). We show by simulation that our novel BFs have superior true-positive rates at low false-positive rates compared to those from both P-value and WBF analyses across a range of sample sizes and ORs. We give an example of utilizing our BFs to fine-map the CASP8 region using genotype data on approximately 46,000 breast cancer case and 43,000 healthy control samples from the Collaborative Oncological Gene-environment Study (COGS) Consortium, and compare the single-nucleotide polymorphism ranks to those obtained using WBFs and P-values from univariate logistic regression.
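For orientation, here is a small sketch of the fixed-W Wakefield approximation that the paper generalizes; the formula is the standard asymptotic Bayes factor for H1 (logOR ~ N(0, W)) against H0, written from memory, and the numbers are purely illustrative. The paper's proposal amounts to averaging this quantity over a prior distribution on W rather than fixing it.

```python
import numpy as np

def wakefield_bf(beta_hat, se, W):
    """Approximate Bayes factor for H1: logOR ~ N(0, W) against H0: logOR = 0,
    given the estimated log odds ratio beta_hat and its standard error se."""
    V = se ** 2
    z2 = (beta_hat / se) ** 2
    return np.sqrt(V / (V + W)) * np.exp(0.5 * z2 * W / (V + W))

# Illustrative association, evaluated under two choices of the prior variance W,
# showing the sensitivity to W that motivates placing a prior on it.
for prior_sd in (0.04, 0.20):
    bf = wakefield_bf(beta_hat=0.10, se=0.03, W=prior_sd ** 2)
    print(f"prior sd of logOR = {prior_sd:.2f}  ->  BF(H1:H0) ~ {bf:.1f}")
```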
Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V
2016-08-12
Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
Rupert, C.P.; Miller, C.T.
2008-01-01
We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady state groundwater flow model. We consider approaches for truncating the infinite dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use the acceleration of Monte Carlo to compare the quality of polynomial models obtained for all approaches and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method.
Normal reference values for bladder wall thickness on CT in a healthy population.
Fananapazir, Ghaneh; Kitich, Aleksandar; Lamba, Ramit; Stewart, Susan L; Corwin, Michael T
2018-02-01
To determine normal bladder wall thickness on CT in patients without bladder disease. Four hundred and nineteen patients presenting for trauma with normal CTs of the abdomen and pelvis were included in our retrospective study. Bladder wall thickness was assessed, and bladder volume was measured using both the ellipsoid formula and an automated technique. Patient age, gender, and body mass index were recorded. Linear regression models were created to account for bladder volume, age, gender, and body mass index, and the multiple correlation coefficient with bladder wall thickness was computed. Bladder volume and bladder wall thickness were log-transformed to achieve approximate normality and homogeneity of variance. Variables that did not contribute substantively to the model were excluded, a parsimonious model was created, and the multiple correlation coefficient was calculated. Expected bladder wall thickness was estimated for different bladder volumes, and 1.96 standard deviations above the expected value provided the upper limit of normal on the log scale. Age, gender, and bladder volume were associated with bladder wall thickness (p = 0.049, 0.024, and < 0.001, respectively). The linear regression model had an R2 of 0.52. Age and gender were negligible in contribution to the model, and a parsimonious model using only volume was created for both the ellipsoid and automated volumes (R2 = 0.52 and 0.51, respectively). Bladder wall thickness correlates with bladder volume. The study provides reference bladder wall thicknesses on CT utilizing both the ellipsoid formula and automated bladder volumes.
Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation
Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain. This is in contrast to most current models in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback–Leibler (KL) divergence criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, the gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion.
Log-Linear Models for Gene Association
Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.
2009-01-01
We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions.
Ordinal probability effect measures for group comparisons in multinomial cumulative link models.
Agresti, Alan; Kateri, Maria
2017-03-01
We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example.
Neti, Prasad V.S.V.; Howell, Roger W.
2008-01-01
Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these data. Methods: The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson-log normal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution.
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2018-05-01
In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
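The key modeling step can be sketched very simply: a chance constraint whose random right-hand side is log-normal has a closed-form deterministic equivalent through the normal quantile on the log scale. The parameters below are hypothetical and the real model in the paper is an inexact (interval) multi-variable program, but the mechanics are the same.

```python
import numpy as np
from scipy.stats import norm

# Chance constraint: P( load(x) <= R ) >= alpha, with allowable runoff/load R
# log-normal: ln R ~ N(mu, sigma^2). Deterministic equivalent:
#   load(x) <= exp(mu + sigma * Phi^{-1}(1 - alpha)).
mu, sigma = np.log(120.0), 0.35   # hypothetical log-scale parameters of R
unit_load = 0.8                   # hypothetical pollutant load per unit of activity x

for alpha in (0.80, 0.90, 0.99):
    rhs = np.exp(mu + sigma * norm.ppf(1.0 - alpha))
    print(f"alpha={alpha:.2f}: load limit {rhs:6.1f}, max activity x <= {rhs / unit_load:6.1f}")
# Raising the required satisfaction level alpha tightens the constraint, which is
# the economy-versus-reliability trade-off explored by the interval solutions.
```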
Fechner's law: where does the log transform come from?
Laming, Donald
2010-01-01
This paper looks at Fechner's law in the light of 150 years of subsequent study. In combination with the normal, equal variance, signal-detection model, Fechner's law provides a numerically accurate account of discriminations between two separate stimuli, essentially because the logarithmic transform delivers a model for Weber's law. But it cannot be taken to be a measure of internal sensation because an equally accurate account is provided by a chi(2) model in which stimuli are scaled by their physical magnitude. The logarithmic transform of Fechner's law arises because, for the number of degrees of freedom typically required in the chi(2) model, the logarithm of a chi(2) variable is, to a good approximation, normal. This argument is set within a general theory of sensory discrimination.
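The claim that the logarithm of a chi-squared variable is close to normal for moderate degrees of freedom is easy to check numerically; the following sketch (illustrative, not from the paper) compares the skewness of chi2 and of log chi2 as the degrees of freedom grow.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

for df in (4, 16, 64):
    x = rng.chisquare(df, size=200_000)
    print(f"df={df:3d}  skew(chi2)={stats.skew(x):+.2f}  skew(log chi2)={stats.skew(np.log(x)):+.2f}")
# The log transform roughly halves the magnitude of the skewness at each df, and
# both shrink as df grows, so log(chi2) is reasonably close to normal for the
# larger df values the chi2 model typically requires.
```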
Veneer-log production and receipts in the Southeast, 1988
Cecil C. Hutchins
1990-01-01
In 1988, almost 1.4 billion board feet of veneer logs were harvested in the Southeast, and the region's veneer mills processed approximately 1.5 billion board feet of logs. Almost 78 percent of veneer-log production and 76 percent of veneer-log receipts were softwood. There were 79 veneer mills operating in 1988. Softwood plywood was the major product. Almost all...
WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarpelli, M; Eickhoff, J; Perlman, S
Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given the optimal parameter was close to zero (which corresponds to log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
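The transformation-selection step described here is straightforward to reproduce in outline; the sketch below scans one-parameter Box-Cox transformations of a simulated, positively skewed SUV-like sample (not the study data) and keeps the lambda that maximizes the Shapiro-Wilk statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
suv = rng.lognormal(mean=0.8, sigma=0.5, size=40)   # stand-in for one timepoint's gSUVmean values

# Scan one-parameter Box-Cox transformations; keep the lambda maximizing Shapiro-Wilk W.
lambdas = np.linspace(-1.0, 2.0, 61)
w_stats = [stats.shapiro(stats.boxcox(suv, lmbda=lam))[0] for lam in lambdas]
best_lam = lambdas[int(np.argmax(w_stats))]

print(f"optimal Box-Cox lambda ~ {best_lam:.2f} (near zero means the log transform is close to optimal)")
print(f"Shapiro-Wilk p-value, raw data : {stats.shapiro(suv)[1]:.3f}")
print(f"Shapiro-Wilk p-value, log data : {stats.shapiro(np.log(suv))[1]:.3f}")
```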
Venting test analysis using Jacob's approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, K.B.
1996-03-01
There are many sites contaminated by volatile organic compounds (VOCs) in the US and worldwide. Several technologies are available for remediation of these sites, including excavation, pump and treat, biological treatment, air sparging, steam injection, bioventing, and soil vapor extraction (SVE). SVE is also known as soil venting or vacuum extraction. Field venting tests were conducted in alluvial sands residing between the water table and a clay layer. Flow rate, barometric pressure, and well-pressure data were recorded using pressure transmitters and a personal computer. Data were logged as frequently as every second during periods of rapid change in pressure. Tests were conducted at various extraction rates. The data from several tests were analyzed concurrently by normalizing the well pressures with respect to extraction rate. The normalized pressures vary logarithmically with time and fall on one line, allowing a single match of the Jacob approximation to all tests. Though the Jacob approximation was originally developed for hydraulic pump test analysis, it is now commonly used for venting test analysis. Only recently, however, has it been used to analyze several transient tests simultaneously. For the field venting tests conducted in the alluvial sands, the air permeability and effective porosity determined from the concurrent analysis are 8.2 × 10⁻⁷ cm² and 20%, respectively.
Minimax rational approximation of the Fermi-Dirac distribution.
Moussa, Jonathan E
2016-10-28
Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ϵ⁻¹)) poles to achieve an error tolerance ϵ at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δocc, the occupied energy interval. This is particularly beneficial when Δ ≫ Δocc, such as in electronic structure calculations that use a large basis set.
Rodrigo Pereira Jr.; Johan Zweede; Gregory P. Asner; Michael Keller
2002-01-01
We investigated ground and canopy damage and recovery following conventional logging and reduced-impact logging (RIL) of moist tropical forest in the eastern Amazon of Brazil. Paired conventional and RIL blocks were selectively logged with a harvest intensity of approximately 23 m3 ha
Mesner, Larry D.; Valsakumar, Veena; Karnani, Neerja; Dutta, Anindya; Hamlin, Joyce L.; Bekiranov, Stefan
2011-01-01
We have used a novel bubble-trapping procedure to construct nearly pure and comprehensive human origin libraries from early S- and log-phase HeLa cells, and from log-phase GM06990, a karyotypically normal lymphoblastoid cell line. When hybridized to ENCODE tiling arrays, these libraries illuminated 15.3%, 16.4%, and 21.8% of the genome in the ENCODE regions, respectively. Approximately half of the origin fragments cluster into zones, and their signals are generally higher than those of isolated fragments. Interestingly, initiation events are distributed about equally between genic and intergenic template sequences. While only 13.2% and 14.0% of genes within the ENCODE regions are actually transcribed in HeLa and GM06990 cells, 54.5% and 25.6% of zonal origin fragments overlap transcribed genes, most with activating chromatin marks in their promoters. Our data suggest that cell synchronization activates a significant number of inchoate origins. In addition, HeLa and GM06990 cells activate remarkably different origin populations. Finally, there is only moderate concordance between the log-phase HeLa bubble map and published maps of small nascent strands for this cell line.
On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Rincon, Rafael; Liao, Liang
2003-01-01
Although most parameterizations of the drop size distributions (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° x 5° x 1 month space-time boxes. To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site where as many as 4 disdrometers have been used to acquire data over a 2 km path. These data should help quantify the temporal form of the covariance matrix at this site.
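The "linear combination" property invoked here follows from the closed-form moments of a log-normal distribution; a minimal numerical check (illustrative parameter values, not values fitted to data) is sketched below.

```python
import numpy as np
from scipy.integrate import trapezoid

# Log-normal drop size distribution:
#   N(D) = N_T / (D * sigma * sqrt(2*pi)) * exp(-(ln D - mu)^2 / (2 sigma^2)),
# whose n-th moment is M_n = N_T * exp(n*mu + n^2 * sigma^2 / 2), so that
#   ln M_n = ln N_T + n*mu + (n^2 / 2) * sigma^2,
# i.e. linear in the parameters (ln N_T, mu, sigma^2).
N_T, mu, sigma = 8000.0, np.log(1.2), 0.45      # illustrative DSD parameters

D = np.linspace(1e-3, 20.0, 400_000)            # drop diameter grid (mm)
N_D = N_T / (D * sigma * np.sqrt(2 * np.pi)) * np.exp(-(np.log(D) - mu) ** 2 / (2 * sigma ** 2))

for n in (3, 6):                                # roughly the water-content and reflectivity moments
    numeric = np.log(trapezoid(D ** n * N_D, D))
    analytic = np.log(N_T) + n * mu + 0.5 * n ** 2 * sigma ** 2
    print(f"n={n}: ln M_n numeric = {numeric:.4f}, linear-in-parameters form = {analytic:.4f}")
```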
Fuls, Janice L; Rodgers, Nancy D; Fischler, George E; Howard, Jeanne M; Patel, Monica; Weidner, Patrick L; Duran, Melani H
2008-06-01
Antimicrobial hand soaps provide a greater bacterial reduction than nonantimicrobial soaps. However, the link between greater bacterial reduction and a reduction of disease has not been definitively demonstrated. Confounding factors, such as compliance, soap volume, and wash time, may all influence the outcomes of studies. The aim of this work was to examine the effects of wash time and soap volume on the relative activities and the subsequent transfer of bacteria to inanimate objects for antimicrobial and nonantimicrobial soaps. Increasing the wash time from 15 to 30 seconds increased reduction of Shigella flexneri from 2.90 to 3.33 log(10) counts (P = 0.086) for the antimicrobial soap, while nonantimicrobial soap achieved reductions of 1.72 and 1.67 log(10) counts (P > 0.6). Increasing soap volume increased bacterial reductions for both the antimicrobial and the nonantimicrobial soaps. When the soap volume was normalized based on weight (approximately 3 g), nonantimicrobial soap reduced Serratia marcescens by 1.08 log(10) counts, compared to the 3.83-log(10) reduction caused by the antimicrobial soap (P < 0.001). The transfer of Escherichia coli to plastic balls following a 15-second hand wash with antimicrobial soap resulted in a bacterial recovery of 2.49 log(10) counts, compared to the 4.22-log(10) (P < 0.001) bacterial recovery on balls handled by hands washed with nonantimicrobial soap. This indicates that nonantimicrobial soap was less active and that the effectiveness of antimicrobial soaps can be improved with longer wash time and greater soap volume. The transfer of bacteria to objects was significantly reduced due to greater reduction in bacteria following the use of antimicrobial soap.
NASA Astrophysics Data System (ADS)
Annunziata, Mario Alberto; Petri, Alberto; Pontuale, Giorgio; Zaccaria, Andrea
2016-10-01
We have considered the statistical distributions of the volumes of 1131 products exported by 148 countries. We have found that the form of these distributions is not unique but heavily depends on the level of development of the nation, as expressed by macroeconomic indicators like GDP, GDP per capita, total export and a recently introduced measure for countries' economic complexity called fitness. We have identified three major classes: a) an incomplete log-normal shape, truncated on the left side, for the less developed countries, b) a complete log-normal, with a wider range of volumes, for nations characterized by intermediate economy, and c) a strongly asymmetric shape for countries with a high degree of development. Finally, the log-normality hypothesis has been checked for the distributions of all the 148 countries through different tests, Kolmogorov-Smirnov and Cramér-Von Mises, confirming that it cannot be rejected only for the countries of intermediate economy.
Zhou, Wen; Wang, Guifen; Li, Cai; Xu, Zhantang; Cao, Wenxi; Shen, Fang
2017-10-20
Phytoplankton cell size is an important property that affects diverse ecological and biogeochemical processes, and analysis of the absorption and scattering spectra of phytoplankton can provide important information about phytoplankton size. In this study, an inversion method for extracting quantitative phytoplankton cell size data from these spectra was developed. This inversion method requires two inputs: chlorophyll a specific absorption and scattering spectra of phytoplankton. The average equivalent-volume spherical diameter (ESDv) was calculated as the single size approximation for the log-normal particle size distribution (PSD) of the algal suspension. The performance of this method for retrieving cell size was assessed using the datasets from cultures of 12 phytoplankton species. The estimations of a(λ) and b(λ) for the phytoplankton population using ESDv had mean error values of 5.8%-6.9% and 7.0%-10.6%, respectively, compared to the a(λ) and b(λ) for the phytoplankton populations using the log-normal PSD. The estimated values of Ci were in good agreement with the measurements, with r2 = 0.88 and relative root mean square error (NRMSE) = 25.3%, and relatively good performance was also found for the retrieval of ESDv, with r2 = 0.78 and NRMSE = 23.9%.
NASA Astrophysics Data System (ADS)
Jelinek, Herbert F.; Pham, Phuong; Struzik, Zbigniew R.; Spence, Ian
2007-07-01
Diabetes mellitus (DM) is a serious and increasing health problem worldwide. Compared to non-diabetics, patients experience an increased risk of all cardiovascular diseases, including dysfunctional neural control of the heart. Poor diagnoses of cardiac autonomic neuropathy (CAN) may result in increased incidence of silent myocardial infarction and ischaemia, which can lead to sudden death. Traditionally the Ewing battery of tests is used to identify CAN. The purpose of this study is to examine the usefulness of heart rate variability (HRV) analyses of short-term ECG recordings as a method for detecting CAN. HRV may be able to identify asymptomatic individuals, which the Ewing battery is not able to do. Several HRV parameters are assessed, including time and frequency domain, as well as nonlinear parameters. Eighteen out of thirty-eight individuals with diabetes were positive for two or more of the Ewing battery of tests indicating CAN. Approximate Entropy (ApEn), log normalized total power (LnTP) and log normalized high frequency (LnHF) power demonstrate a significant difference at p < 0.05 between CAN+ and CAN-. This indicates that nonlinear scaling parameters are able to identify people with cardiac autonomic neuropathy in short ECG recordings. Our study paves the way to assess the utility of nonlinear parameters in identifying asymptomatic CAN.
R/S analysis of reaction time in Neuron Type Test for human activity in civil aviation
NASA Astrophysics Data System (ADS)
Zhang, Hong-Yan; Kang, Ming-Cui; Li, Jing-Qiang; Liu, Hai-Tao
2017-03-01
Human factors have become the most serious problem leading to accidents in civil aviation, which stimulates the design and analysis of the Neuron Type Test (NTT) system to explore the intrinsic properties and patterns behind the behaviors of professionals and students in civil aviation. In the experiment, normal practitioners' reaction time sequences, collected from the NTT, approximately exhibit a log-normal distribution. We apply the χ2 test to compute the goodness-of-fit after transforming the time sequences with the Box-Cox transformation, in order to cluster practitioners. The long-term correlation of each individual practitioner's time sequence is represented by the Hurst exponent via Rescaled Range Analysis, also known as Range/Standard deviation (R/S) Analysis. The different Hurst exponents suggest the existence of different collective behavior and different intrinsic patterns of human factors in civil aviation.
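For readers unfamiliar with R/S analysis, a bare-bones implementation is sketched below (my own simplified version, not the authors' code): the Hurst exponent is the slope of log(R/S) against log(window size). Uncorrelated noise should give an estimate near 0.5, while positively correlated series give larger values.

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Rough Hurst exponent by rescaled-range (R/S) analysis: slope of
    log(mean R/S) versus log(window length) over dyadic window sizes."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_levels = int(np.log2(n / min_window)) + 1
    sizes = np.unique((n / 2 ** np.arange(n_levels)).astype(int))
    log_size, log_rs = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):
            w = x[start:start + s]
            dev = np.cumsum(w - w.mean())
            if w.std() > 0:
                rs_vals.append((dev.max() - dev.min()) / w.std())
        if rs_vals:
            log_size.append(np.log(s))
            log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_size, log_rs, 1)[0]

rng = np.random.default_rng(3)
print("white noise       H ~", round(hurst_rs(rng.normal(size=4096)), 2))
print("8-point smoothed  H ~", round(hurst_rs(np.convolve(rng.normal(size=4200),
                                                          np.ones(8) / 8, mode="valid")), 2))
# R/S estimates carry a small-sample upward bias, so the white-noise value
# typically lands slightly above 0.5; the smoothed (correlated) series is higher still.
```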
Comparison of Survival Models for Analyzing Prognostic Factors in Gastric Cancer Patients
Habibi, Danial; Rafiei, Mohammad; Chehrei, Ali; Shayan, Zahra; Tafaqodi, Soheil
2018-03-27
Objective: There are a number of models for determining risk factors for survival of patients with gastric cancer. This study was conducted to select the model showing the best fit with available data. Methods: Cox regression and parametric models (Exponential, Weibull, Gompertz, Log normal, Log logistic and Generalized Gamma) were utilized in unadjusted and adjusted forms to detect factors influencing mortality of patients. Comparisons were made with the Akaike Information Criterion (AIC) by using STATA 13 and R 3.1.3 software. Results: The results of this study indicated that all parametric models outperform the Cox regression model. The Log normal, Log logistic and Generalized Gamma provided the best performance in terms of AIC values (179.2, 179.4 and 181.1, respectively). On unadjusted analysis, the results of the Cox regression and parametric models indicated stage, grade, largest diameter of metastatic nest, largest diameter of LM, number of involved lymph nodes and the largest ratio of metastatic nests to lymph nodes, to be variables influencing the survival of patients with gastric cancer. On adjusted analysis, according to the best model (log normal), grade was found to be the significant variable. Conclusion: The results suggested that all parametric models outperform the Cox model. The log normal model provides the best fit and is a good substitute for Cox regression.
Log-Concavity and Strong Log-Concavity: a review
Saumard, Adrien; Wellner, Jon A.
2016-01-01
We review and formulate results concerning log-concavity and strong log-concavity in both discrete and continuous settings. We show how preservation of log-concavity and strong log-concavity on ℝ under convolution follows from a fundamental monotonicity result of Efron (1969). We provide a new proof of Efron's theorem using the recent asymmetric Brascamp-Lieb inequality due to Otto and Menz (2013). Along the way we review connections between log-concavity and other areas of mathematics and statistics, including concentration of measure, log-Sobolev inequalities, convex geometry, MCMC algorithms, Laplace approximations, and machine learning.
Minimax rational approximation of the Fermi-Dirac distribution
Moussa, Jonathan E.
2016-10-27
Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ϵ⁻¹)) poles to achieve an error tolerance ϵ at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δocc, the occupied energy interval. Furthermore, this is particularly beneficial when Δ ≫ Δocc, such as in electronic structure calculations that use a large basis set.
NASA Technical Reports Server (NTRS)
Bigger, J. T. Jr; Steinman, R. C.; Rolnitzky, L. M.; Fleiss, J. L.; Albrecht, P.; Cohen, R. J.
1996-01-01
BACKGROUND. The purposes of the present study were (1) to establish normal values for the regression of log(power) on log(frequency) for RR-interval fluctuations in healthy middle-aged persons, (2) to determine the effects of myocardial infarction on the regression of log(power) on log(frequency), (3) to determine the effect of cardiac denervation on the regression of log(power) on log(frequency), and (4) to assess the ability of power law regression parameters to predict death after myocardial infarction. METHODS AND RESULTS. We studied three groups: (1) 715 patients with recent myocardial infarction; (2) 274 healthy persons age and sex matched to the infarct sample; and (3) 19 patients with heart transplants. Twenty-four-hour RR-interval power spectra were computed using fast Fourier transforms and log(power) was regressed on log(frequency) between 10(-4) and 10(-2) Hz. There was a power law relation between log(power) and log(frequency). That is, the function described a descending straight line that had a slope of approximately -1 in healthy subjects. For the myocardial infarction group, the regression line for log(power) on log(frequency) was shifted downward and had a steeper negative slope (-1.15). The transplant (denervated) group showed a larger downward shift in the regression line and a much steeper negative slope (-2.08). The correlation between traditional power spectral bands and slope was weak, and that with log(power) at 10(-4) Hz was only moderate. Slope and log(power) at 10(-4) Hz were used to predict mortality and were compared with the predictive value of traditional power spectral bands. Slope and log(power) at 10(-4) Hz were excellent predictors of all-cause mortality or arrhythmic death. To optimize the prediction of death, we calculated a log(power) intercept that was uncorrelated with the slope of the power law regression line. We found that the combination of slope and zero-correlation log(power) was an outstanding predictor, with a relative risk of > 10, and was better than any combination of the traditional power spectral bands. The combination of slope and log(power) at 10(-4) Hz also was an excellent predictor of death after myocardial infarction. CONCLUSIONS. Myocardial infarction or denervation of the heart causes a steeper slope and decreased height of the power law regression relation between log(power) and log(frequency) of RR-interval fluctuations. Individually and, especially, combined, the power law regression parameters are excellent predictors of death of any cause or arrhythmic death and predict these outcomes better than the traditional power spectral bands.
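The core computation, regressing log(power) on log(frequency) over 10(-4) to 10(-2) Hz, can be sketched as follows; the series below is a synthetic 1/f surrogate standing in for a 24-hour RR tachogram (the sampling rate and all other details are assumptions, not the study's processing pipeline).

```python
import numpy as np
from scipy import signal

# Synthetic 24-h surrogate with a 1/f spectrum, sampled at 0.5 Hz (illustrative only).
fs, n = 0.5, int(24 * 3600 * 0.5)
rng = np.random.default_rng(4)
spec = np.fft.rfft(rng.normal(size=n))
freqs = np.fft.rfftfreq(n, d=1 / fs)
shaping = np.ones_like(freqs)
shaping[1:] = freqs[1:] ** -0.5                 # amplitude ~ f^-1/2, so power ~ 1/f
rr = np.fft.irfft(spec * shaping, n=n)

# Spectrum, then regression of log(power) on log(frequency) in the 1e-4..1e-2 Hz band.
f, pxx = signal.welch(rr, fs=fs, nperseg=2 ** 14)
band = (f >= 1e-4) & (f <= 1e-2)
slope, intercept = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
print(f"slope of log(power) on log(frequency): {slope:.2f}  (about -1 for this 1/f surrogate)")
```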
Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows
NASA Technical Reports Server (NTRS)
McKenzie, D.; Savage, S.
2011-01-01
The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with a power-law distribution nor a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.
NASA Astrophysics Data System (ADS)
Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto
2013-08-01
In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.
Spectral Density of Laser Beam Scintillation in Wind Turbulence. Part 1; Theory
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1997-01-01
The temporal spectral density of the log-amplitude scintillation of a laser beam wave due to a spatially dependent vector-valued crosswind (deterministic as well as random) is evaluated. The path weighting functions for normalized spectral moments are derived, and offer a potential new technique for estimating the wind velocity profile. The Tatarskii-Klyatskin stochastic propagation equation for the Markov turbulence model is used with the solution approximated by the Rytov method. The Taylor 'frozen-in' hypothesis is assumed for the dependence of the refractive index on the wind velocity, and the Kolmogorov spectral density is used for the refractive index field.
Estimating sales and sales market share from sales rank data for consumer appliances
NASA Astrophysics Data System (ADS)
Touzani, Samir; Van Buskirk, Robert
2016-06-01
Our motivation in this work is to find an adequate probability distribution to fit sales volumes of different appliances. This distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution and specifically the truncated version are well suited for this purpose. We demonstrate that using sales proxies derived from a calibrated truncated log-normal distribution function can be used to produce realistic estimates of market average product prices, and product attributes. We show that the market averages calculated with the sales proxies derived from the calibrated, truncated log-normal distribution provide better market average estimates than sales proxies estimated with simpler distribution functions.
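One way to see how a calibrated truncated log-normal turns rank into volume is sketched below; the calibration numbers are invented for illustration, and the paper's actual fitting procedure is more involved.

```python
import numpy as np
from scipy import stats

def volume_from_rank(rank, n_products, mu, sigma, lower_trunc):
    """Map a sales rank to an estimated sales volume using a log-normal with
    log-scale mean mu and sd sigma, left-truncated at lower_trunc units."""
    p_exceed = (rank - 0.5) / n_products          # plotting-position exceedance fraction
    base = stats.norm(loc=mu, scale=sigma)
    tail = base.sf(np.log(lower_trunc))           # mass of the distribution above the truncation
    return np.exp(base.isf(p_exceed * tail))      # quantile within the truncated distribution

# Hypothetical calibration: median ~50 units, log-sd 1.4, truncated below 1 unit.
for r in (1, 10, 100, 1000):
    v = volume_from_rank(r, n_products=5000, mu=np.log(50.0), sigma=1.4, lower_trunc=1.0)
    print(f"rank {r:5d} -> estimated sales ~ {v:9.0f} units")
# Sales-weighted market averages then follow by weighting each product's
# attributes or price by its estimated volume.
```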
Measuring Resistance to Change at the Within-Session Level
ERIC Educational Resources Information Center
Tonneau, Francois; Rios, Americo; Cabrera, Felipe
2006-01-01
Resistance to change is often studied by measuring response rate in various components of a multiple schedule. Response rate in each component is normalized (that is, divided by its baseline level) and then log-transformed. Differential resistance to change is demonstrated if the normalized, log-transformed response rate in one component decreases…
Log-Normal Distribution of Cosmic Voids in Simulations and Mocks
NASA Astrophysics Data System (ADS)
Russell, E.; Pycke, J.-R.
2017-01-01
Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
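For reference, fitting a three-parameter (shifted) log-normal of the kind used here takes only a few lines with scipy; the synthetic "void radii" below are illustrative, not the Cosmic Void Catalog.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Synthetic void effective radii: shape (sigma of ln), location (shift), scale.
radii = stats.lognorm.rvs(s=0.4, loc=4.0, scale=12.0, size=2000, random_state=rng)

shape, loc, scale = stats.lognorm.fit(radii)     # maximum-likelihood three-parameter fit
print(f"sigma={shape:.3f}  shift={loc:.2f}  scale={scale:.2f}")

ks = stats.kstest(radii, "lognorm", args=(shape, loc, scale))
print(f"KS statistic={ks.statistic:.3f}")
# Note: because the parameters were estimated from the same sample, a nominal
# KS p-value would be optimistic; a proper test needs a parametric bootstrap.
```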
Empirical analysis on the runners' velocity distribution in city marathons
NASA Astrophysics Data System (ADS)
Lin, Zhenquan; Meng, Fan
2018-01-01
In recent decades, much research has been performed on human temporal activity and mobility patterns, while few investigations have been made to examine the features of the velocity distributions of human mobility patterns. In this paper, we investigated empirically the velocity distributions of finishers in the New York City marathon, the American Chicago marathon, the Berlin marathon and the London marathon. By statistical analyses on the datasets of the finish time records, we captured some statistical features of human behaviors in marathons: (1) The velocity distributions of all finishers and of partial finishers in the fastest age group both follow a log-normal distribution; (2) In the New York City marathon, the velocity distribution of all male runners in eight 5-kilometer internal timing courses undergoes two transitions: from a log-normal distribution at the initial stage (several initial courses) to a Gaussian distribution at the middle stage (several middle courses), and back to a log-normal distribution at the last stage (several last courses); (3) The intensity of the competition, which is described by the root-mean-square value of the rank changes of all runners, becomes weaker from the initial stage to the middle stage, corresponding to the transition of the velocity distribution from log-normal to Gaussian, and when the competition gets stronger in the last course of the middle stage, there is a transition from the Gaussian distribution back to a log-normal one at the last stage. This study may enrich the research on human mobility patterns and attract attention to the velocity features of human mobility.
Far-infrared properties of cluster galaxies
NASA Technical Reports Server (NTRS)
Bicay, M. D.; Giovanelli, R.
1987-01-01
Far-infrared properties are derived for a sample of over 200 galaxies in seven clusters: A262, Cancer, A1367, A1656 (Coma), A2147, A2151 (Hercules), and Pegasus. The IR-selected sample consists almost entirely of IR normal galaxies, with Log of L(FIR) = 9.79 solar luminosities, Log of L(FIR)/L(B) = 0.79, and Log of S(100 microns)/S(60 microns) = 0.42. None of the sample galaxies has Log of L(FIR) greater than 11.0 solar luminosities, and only one has a FIR-to-blue luminosity ratio greater than 10. No significant differences are found in the FIR properties of HI-deficient and HI-normal cluster galaxies.
Statistical characterization of thermal plumes in turbulent thermal convection
NASA Astrophysics Data System (ADS)
Zhou, Sheng-Qi; Xie, Yi-Chao; Sun, Chao; Xia, Ke-Qing
2016-09-01
We report an experimental study on the statistical properties of the thermal plumes in turbulent thermal convection. A method has been proposed to extract the basic characteristics of thermal plumes from temporal temperature measurements inside the convection cell. It has been found that both the plume amplitude A and the cap width w, in the time domain, approximately follow log-normal distributions. In particular, the normalized most probable front width is found to be a characteristic scale of thermal plumes, which is much larger than the thermal boundary layer thickness. Over a wide range of the Rayleigh number, the statistical characterizations of the thermal fluctuations of the plumes and of the turbulent background, the plume front width, and the plume spacing have been discussed and compared with theoretical predictions and morphological observations. For the most part, good agreement has been found with the direct observations.
A log-sinh transformation for data normalization and variance stabilization
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.
2012-05-01
When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
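A minimal sketch of a log-sinh style transform and its inverse, assuming the commonly cited closed form z = (1/b) ln(sinh(a + b*y)); the parameter values a and b below are illustrative (in practice they would be estimated, e.g. by maximum likelihood), and the exact form should be checked against the paper.

```python
import numpy as np

def log_sinh(y, a, b):
    """z = (1/b) * ln(sinh(a + b*y)): log-like for small a + b*y (strong variance
    stabilization), nearly linear for large values, so the error spread levels off."""
    return np.log(np.sinh(a + b * y)) / b

def inverse_log_sinh(z, a, b):
    """Back-transform: y = (arcsinh(exp(b*z)) - a) / b."""
    return (np.arcsinh(np.exp(b * z)) - a) / b

a, b = 0.1, 0.02                           # illustrative parameters, not fitted values
flows = np.array([5.0, 50.0, 500.0])       # positively skewed predictions, e.g. streamflow
z = log_sinh(flows, a, b)
print(z)
print(inverse_log_sinh(z, a, b))           # round-trip check returns the original values
```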
On Nash-Equilibria of Approximation-Stable Games
NASA Astrophysics Data System (ADS)
Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh
One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ε-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ε-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We furthermore show that there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show all (ε,Δ) approximation-stable games must have an ε-equilibrium of support O((Δ^{2-o(1)}/ε²) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ε²) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. We in addition give a polynomial-time algorithm for the case that Δ and ε are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.
Effect of curve sawing on lumber recovery and warp of short cherry logs containing sweep
Brian H. Bond; Philip Araman
2008-01-01
It has been estimated that approximately one-third of hardwood sawlogs have a significant amount of sweep and that 7 to nearly 40 percent of the yield is lost from logs that have greater than 1 inch of sweep. While decreased yield is important, for hardwood logs the loss of lumber value is likely more significant. A method that produced lumber while accounting for log...
Nava-Ocampo, Alejandro A; Bello-Ramírez, Angélica M
2004-01-01
1. Drugs administered into the epidural space by caudal block are cleared by means of a process potentially affected by the lipophilic character of the compounds. 2. In the present study, we examined the relationship between the octanol-water partition coefficient (log Poct) and the time to reach the maximum plasma drug concentration (tmax) of lignocaine, bupivacaine and ropivacaine administered by caudal block in paediatric patients. We also examined the relationship between log Poct and the toxicity of these local anaesthetic agents in experimental models. The tmax and toxicity data were obtained from the literature. 3. Ropivacaine, with a log Poct of 2.9, exhibited a tmax of 61.6 min. The tmax of lignocaine, with a log Poct of 2.4, and of bupivacaine, with a log Poct of 3.4, were approximately 50% shorter than that of ropivacaine. At a log Poct of approximately 3.0, the toxicity of these local anaesthetic agents was substantially increased. The relationship between log Poct and the convulsive effect in dogs was similar to the relationship between log Poct and the lethal dose in sheep. 4. With local anaesthetic agents, it appears that the relationship between log Poct and drug transfer from the epidural space to the blood stream is parabolic, with the slowest rate of transfer at a log Poct of approximately 3.0. Toxicity, due to the plasma availability of these local anaesthetic agents, seems to be increased at log Poct equal to or higher than 3.0, secondary to the greatest transfer from plasma into the central nervous system.
Veneer recovery from Douglas-fir logs.
E.H. Clarke; A.C. Knauss
1957-01-01
During 1956, the Pacific Northwest Forest and Range Experiment Station made a series of six veneer-recovery studies in the Douglas-fir region of Oregon and Washington. The net volume of logs involved totaled approximately 777 M board-feet. Purpose of these studies was to determine volume recovery, by grade of veneer, from the four principal grades of Douglas-fir logs...
Assessing the Utility of a Daily Log for Measuring Principal Leadership Practice
ERIC Educational Resources Information Center
Camburn, Eric M.; Spillane, James P.; Sebastian, James
2010-01-01
Purpose: This study examines the feasibility and utility of a daily log for measuring principal leadership practice. Setting and Sample: The study was conducted in an urban district with approximately 50 principals. Approach: The log was assessed against two criteria: (a) Is it feasible to induce strong cooperation and high response rates among…
Logging truck noise near nesting northern goshawks
Teryl G. Grubb; Larry L. Pater; David K. Delaney
1998-01-01
We measured noise levels of four logging trucks as the trucks passed within approximately 500 m of two active northern goshawk (Accipiter gentilis) nests on the Kaibab Plateau in northern Arizona in 1997. Neither a brooding adult female nor a lone juvenile exhibited any discernable behavioral response to logging truck noise, which peaked at 53.4 and...
LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu
2017-01-20
Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
Hardwood Veneer Timber Volume In Upper Michigan
E.W. Fobes; Gary R. Lindell
1969-01-01
Forests in Upper Michigan contain approximately 1.5 billion board feet of veneer logs of which three-fourths is hard maple and yellow birch. About 14 percent of the hardwood sawtimber is suitable for veneer logs.
Minimum nonuniform graph partitioning with unrelated weights
NASA Astrophysics Data System (ADS)
Makarychev, K. S.; Makarychev, Yu S.
2017-12-01
We give a bi-criteria approximation algorithm for the Minimum Nonuniform Graph Partitioning problem, recently introduced by Krauthgamer, Naor, Schwartz and Talwar. In this problem, we are given a graph G = (V, E) and k numbers ρ_1, ..., ρ_k. The goal is to partition V into k disjoint sets (bins) P_1, ..., P_k satisfying |P_i| ≤ ρ_i |V| for all i, so as to minimize the number of edges cut by the partition. Our bi-criteria algorithm gives an O(√(log |V| · log k)) approximation for the objective function in general graphs and an O(1) approximation in graphs excluding a fixed minor. The approximate solution satisfies the relaxed capacity constraints |P_i| ≤ (5 + ε) ρ_i |V|. This algorithm is an improvement upon the O(log |V|)-approximation algorithm by Krauthgamer, Naor, Schwartz and Talwar. We extend our results to the case of 'unrelated weights' and to the case of 'unrelated d-dimensional weights'. A preliminary version of this work was presented at the 41st International Colloquium on Automata, Languages and Programming (ICALP 2014). Bibliography: 7 titles.
The yield of Douglas-fir in the Pacific Northwest measured by international 1/4-inch kerf log rule.
Philip A. Briegleb
1948-01-01
The International log rule is little used in the Douglas-fir region today, but it is likely to find wider use here in the future. This is the opinion of a number of foresters preparing plans for the management of forest properties in the region. An advantage of the International 1/4-inch kerf log rule is that log scale by this measure approximates the volume of green...
Frequency distribution of lithium in leaves of Lycium andersonii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romney, E.M.; Wallace, A.; Kinnear, J.
1977-01-01
Lycium andersonii A. Gray is an accumulator of Li. Assays were made of 200 samples of it collected from six different locations within the Northern Mojave Desert. Mean concentrations of Li varied from location to location and tended not to follow a log_e-normal distribution, and to follow a normal distribution only poorly. There was some negative skewness to the log_e distribution which did exist. The results imply that the variation in accumulation of Li depends upon native supply of Li. Possibly the Li supply and the ability of L. andersonii plants to accumulate it are both log_e-normally distributed. The mean leaf concentration of Li in all locations was 29 µg/g, but the maximum was 166 µg/g.
Role of photoacoustics in optogalvanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, D.; McGlynn, S.P.
1990-09-15
Time-resolved laser optogalvanic (LOG) signals have been induced by pulsed laser excitation (1s_j → 2p_k, Paschen notation) of an approximately 30 MHz radio-frequency (rf) discharge in neon at approximately 5 torr. Dramatic changes of the shape/polarity of certain parts of the LOG signals occur when the rf excitation frequency is scanned over the electrical resonance peak of the plasma and the associated driving/detecting circuits. These effects are attributed to ionization rate changes (i.e., laser-induced alterations of the plasma conductivity), with concomitant variations in the plasma resonance characteristics. In addition to ionization rate changes, it is shown that photoacoustic (PA) effects also play a significant role in the generation of the LOG signal. Those parts of the LOG signal that are invariant with respect to the rf frequency are attributed to a PA effect. The similarity of LOG signal shapes from both rf and dc discharges suggests that photoacoustics play a similar role in the LOG effect in dc discharges. Contrary to common belief, most reported LOG signal profiles, ones produced by excitation to levels that do not lie close to the ionization threshold, appear to be totally mediated by the PA effect.
LBA-ECO TG-07 Trace Gas Fluxes, Undisturbed and Logged Sites, Para, Brazil: 2000-2002
M.M. Keller; R.K. Varner; J.D. Dias; H.S. Silva; P.M. Crill; Jr. de Oliveira; G.P. Asner
2009-01-01
Trace gas fluxes of carbon dioxide, methane, nitrous oxide, and nitric oxide were measured manually at undisturbed and logged forest sites in the Tapajos National Forest, near Santarem, Para, Brazil. Manual measurements were made approximately weekly at both the undisturbed and logged sites. Fluxes from clay and sand soils were completed at the undisturbed sites....
A new look at the Lake Superior biomass size spectrum
Yurista, Peder M.; Yule, Daniel L.; Balge, Matt; VanAlstine, Jon D.; Thompson, Jo A.; Gamble, Allison E.; Hrabik, Thomas R.; Kelly, John R.; Stockwell, Jason D.; Vinson, Mark
2014-01-01
We synthesized data from multiple sampling programs and years to describe the Lake Superior pelagic biomass size structure. Data consisted of Coulter counts for phytoplankton, optical plankton counts for zooplankton, and acoustic surveys for pelagic prey fish. The size spectrum was stable across two time periods separated by 5 years. The primary scaling or overall slope of the normalized biomass size spectra for the combined years was −1.113, consistent with a previous estimate for Lake Superior (−1.10). Periodic dome structures within the overall biomass size structure were fit to polynomial regressions based on the observed sub-domes within the classical taxonomic positions (algae, zooplankton, and fish). This interpretation of periodic dome delineation was aligned more closely with predator–prey size relationships that exist within the zooplankton (herbivorous, predacious) and fish (planktivorous, piscivorous) taxonomic positions. Domes were spaced approximately every 3.78 log10 units along the axis and with a decreasing peak magnitude of −4.1 log10 units. The relative position of the algal and herbivorous zooplankton domes predicted well the subsequent biomass domes for larger predatory zooplankton and planktivorous prey fish.
Erosion associated with cable and tractor logging in northwestern California
R. M. Rice; P. A. Datzman
1981-01-01
Abstract - Erosion and site conditions were measured at 102 logged plots in northwestern California. Erosion averaged 26.8 m³/ha. A log-normal distribution was a better fit to the data. The antilog of the mean of the logarithms of erosion was 3.2 m³/ha. The Coast District Erosion Hazard Rating was a poor predictor of erosion related to logging. In a new equation...
Probing the galactic disk and halo. 2: Hot interstellar gas toward the inner galaxy star HD 156359
NASA Technical Reports Server (NTRS)
Sembach, Kenneth R.; Savage, Blair D.; Lu, Limin
1995-01-01
We present Goddard High Resolution Spectrograph intermediate-resolution measurements of the 1233-1256 A spectral region of HD 156359, a halo star at l = 328.7 deg, b = -14.5 deg in the inner Galaxy with a line-of-sight distance of 11.1 kpc and a z-distance of -2.8 kpc. The data have a resolution of 18 km/s Full Width at Half Maximum (FWHM) and a signal-to-noise ratio of approximately 50:1. We detect interstellar lines of Mg II, Si II, S II, Ge II, and N V and determine log N(Mg II) = 15.78 (+0.25, -0.27), log N(Si II) greater than 13.70, log N(S II) greater than 15.76, log N(Ge II) = 12.20 (+0.09, -0.11), and log N(N V) = 14.06 +/- 0.02. Assuming solar reference abundances, the diffuse clouds containing Mg, S, and Ge along the sight line have average logarithmic depletions D(Mg) = -0.6 +/- 0.3 dex, D(S) greater than -0.2 dex, and D(Ge) = -0.2 +/- 0.2 dex. The Mg and Ge depletions are approximately 2 times smaller than is typical of diffuse clouds in the solar vicinity. Galactic rotational modeling of the N V profiles indicates that the highly ionized gas traced by this ion has a scale height of approximately 1 kpc if gas at large z-distances corotates with the underlying disk gas. Rotational modeling of the Si IV and C IV profiles measured by the IUE satellite yields similar scale height estimates. The scale height results contrast with previous studies of highly ionized gas in the outer Milky Way that reveal a more extended gas distribution with h approximately equal to 3-4 kpc. We detect a high-velocity feature in N V and Si II (v_LSR approximately equal to +125 km/s) that is probably created in an interface between warm and hot gas.
Peak Source Power Associated with Positive Narrow Bipolar Lightning Pulses
NASA Astrophysics Data System (ADS)
Bandara, S. A.; Marshall, T. C.; Karunarathne, S.; Karunarathne, N. D.; Siedlecki, R. D., II; Stolzenburg, M.
2017-12-01
During the summer of 2016, we deployed a lightning sensor array in and around Oxford, Mississippi, USA. The array comprised seven lightning sensing stations in a network covering an area of approximately 30 km × 30 km. Each station is equipped with four sensors: a Fast antenna (10 ms decay time), a Slow antenna (1.0 s decay time), a field derivative sensor (dE/dt), and a Log-RF antenna (bandwidth 187-192 MHz). We have observed 319 positive NBPs and herein we report comparisons of the NBP properties measured from the Fast antenna data with the Log-RF antenna data. These properties include 10-90% rise time, full width at half maximum, zero cross time, and range-normalized amplitude at 100 km. NBPs were categorized according to the fine structure of the electric field wave shapes into Types A-D, as in Karunarathne et al. [2015]. The source powers of NBPs in each category were determined using single-station Log-RF data. We also categorized the NBPs into three other groups: initial event of an IC flash, isolated, and not-isolated (according to their spatiotemporal relationship with other lightning activity), and compared the source powers within each category. Karunarathne, S., T. C. Marshall, M. Stolzenburg, and N. Karunarathna (2015), Observations of positive narrow bipolar pulses, J. Geophys. Res. Atmos., 120, doi:10.1002/2015JD023150.
Czopyk, L; Olko, P
2006-01-01
The analytical model of Xapsos used for calculating microdosimetric spectra is based on the observation that straggling of energy loss can be approximated by a log-normal distribution of energy deposition. The model was applied to calculate microdosimetric spectra in spherical targets of nanometer dimensions from heavy ions at energies between 0.3 and 500 MeV amu⁻¹. We recalculated the originally assumed 1/E² initial delta electron spectrum by applying the Continuous Slowing Down Approximation for secondary electrons. We also modified the energy deposition from electrons of energy below 100 keV, taking into account the effective path length of the scattered electrons. Results of our model calculations agree favourably with results of Monte Carlo track structure simulations using MOCA-14 for light ions (Z = 1-8) of energy ranging from E = 0.3 to 10.0 MeV amu⁻¹ as well as with results of Nikjoo for a wall-less proportional counter (Z = 18).
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1978-01-01
Yearly, monthly, and time-of-day fade statistics are presented and characterized. A 19.04 GHz yearly fade distribution, corresponding to a second COMSTAR beacon frequency, is predicted using the concept of effective path length, disdrometer, and rain rate results. The yearly attenuation and rain rate distributions follow, to good approximation, log-normal variations for most fade and rain rate levels. Attenuations were exceeded for the longest and shortest periods of time for all fades in August and February, respectively. The eight-hour time periods showing the maximum and minimum number of minutes over the year for which fades exceeded 12 dB were approximately 1600 to 2400 hours and 0400 to 1200 hours, respectively. In employing the predictive method for obtaining the 19.04 GHz fade distribution, it is demonstrated theoretically that the ratio of attenuations at two frequencies is minimally dependent on the raindrop size distribution, provided these frequencies are not widely separated.
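A minimal sketch of how a fitted log-normal attenuation distribution translates into exceedance times, assuming illustrative log-normal parameters rather than the measured 19.04 GHz values.

```python
import numpy as np
from scipy import stats

# Assumed log-normal fit to yearly attenuation (dB): median and sigma of ln(attenuation).
median_fade_db = 1.5
sigma_log = 1.1

# Fraction of the year each fade level is exceeded under the log-normal model,
# converted to minutes per year.
fade_levels_db = np.array([3.0, 6.0, 12.0])
exceedance = stats.lognorm.sf(fade_levels_db, s=sigma_log, scale=median_fade_db)
minutes_per_year = exceedance * 365.25 * 24 * 60
for level, minutes in zip(fade_levels_db, minutes_per_year):
    print(f"{level:4.1f} dB exceeded for ~{minutes:,.0f} minutes per year")
```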
Speech Enhancement Using Gaussian Scale Mixture Models
Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.
2011-01-01
This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR), and those reconstructed from the estimated log-spectra produced a lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
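A minimal generative sketch of the Gaussian scale mixture relationship described above, for a single frequency bin; the mixture weights, means, and variances are illustrative assumptions, not trained model parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-component GMM over the log-spectrum (illustrative parameters).
weights, means, stds = [0.6, 0.4], [-1.0, 2.0], [0.5, 0.8]
component = rng.choice(2, p=weights)
log_spectrum = rng.normal(means[component], stds[component])

# Frequency coefficient: zero-mean Gaussian whose variance is exp(log-spectrum),
# i.e., the log-spectrum acts as a random scaling factor (a Gaussian scale mixture).
coefficient = rng.normal(0.0, np.sqrt(np.exp(log_spectrum)))
print(f"log-spectrum = {log_spectrum:.3f}, frequency coefficient = {coefficient:.3f}")
```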
Optimal Search for Moving Targets in Continuous Time and Space using Consistent Approximations
2011-09-01
Statistical distribution of building lot frontage: application for Tokyo downtown districts
NASA Astrophysics Data System (ADS)
Usui, Hiroyuki
2018-03-01
The frontage of a building lot is the determinant factor of the residential environment. The statistical distribution of building lot frontages shows how the perimeters of urban blocks are shared by building lots for a given density of buildings and roads. For practitioners in urban planning, this is indispensable for identifying potential districts that will comprise a high percentage of building lots with narrow frontage after subdivision, and for reconsidering the appropriate criteria for the density of buildings and roads as residential environment indices. In the literature, however, the statistical distribution of building lot frontages and the density of buildings and roads have not been fully researched. In this paper, based on an empirical study of the downtown districts of Tokyo, it is found that (1) a log-normal distribution fits the observed distribution of building lot frontages better than a gamma distribution, which is the model of the size distribution of Poisson Voronoi cells on closed curves; (2) the distribution of building lot frontages follows a log-normal distribution whose parameters are the gross building density, road density, average road width, the coefficient of variation of building lot frontage, and the ratio of the number of building lot frontages to the number of buildings; and (3) the values of the coefficient of variation of building lot frontages and of the ratio of the number of building lot frontages to the number of buildings are approximately equal to 0.60 and 1.19, respectively.
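A minimal sketch of the log-normal versus gamma comparison described above, using synthetic frontage values and scipy's maximum-likelihood fits; the data and parameters are assumptions, not the Tokyo parcel data.

```python
import numpy as np
from scipy import stats

# Synthetic building-lot frontages in meters; a real analysis would use parcel data.
rng = np.random.default_rng(3)
frontages = rng.lognormal(mean=np.log(6.0), sigma=0.6, size=1500)

# Fit both candidate distributions with the lower bound fixed at zero.
lognorm_params = stats.lognorm.fit(frontages, floc=0)
gamma_params = stats.gamma.fit(frontages, floc=0)

# Compare maximized log-likelihoods (both fits have two free parameters here, so this
# ordering is the same as an AIC comparison).
ll_lognorm = np.sum(stats.lognorm.logpdf(frontages, *lognorm_params))
ll_gamma = np.sum(stats.gamma.logpdf(frontages, *gamma_params))
print(f"log-likelihood  log-normal: {ll_lognorm:.1f}   gamma: {ll_gamma:.1f}")
```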
NASA Astrophysics Data System (ADS)
Yamazaki, Dai G.; Ichiki, Kiyotomo; Takahashi, Keitaro
2011-12-01
We study the effect of primordial magnetic fields (PMFs) on the anisotropies of the cosmic microwave background (CMB). We assume that the spectrum of PMFs is described by a log-normal distribution, which has a characteristic scale, rather than by a power-law spectrum. This scale is expected to reflect the generation mechanisms, and our analysis is complementary to previous studies with power-law spectra. We calculate power spectra of the energy density and Lorentz force of the log-normal PMFs, and then calculate CMB temperature and polarization angular power spectra from the scalar, vector, and tensor modes of perturbations generated from such PMFs. By comparing these spectra with the WMAP7, QUaD, CBI, Boomerang, and ACBAR data sets, we find that the current CMB data set places the strongest constraint at k ≃ 10^-2.5 Mpc^-1, with the upper limit B ≲ 3 nG.
NASA Astrophysics Data System (ADS)
Duarte Queirós, Sílvio M.
2012-07-01
We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-normal distribution is obtained. Namely, the distribution has an enhanced tail for small (when q < 1) or large (when q > 1) values of the variable under analysis. The usual log-normal distribution is recovered when q = 1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
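A minimal sketch of a Kapteyn-like multiplicative process with the ordinary product replaced by a q-product, assuming the usual form of the Borges q-product, x ⊗_q y = [x^(1-q) + y^(1-q) - 1]^(1/(1-q)) cut off at zero; the step distribution and parameter values are illustrative, not those of the paper.

```python
import numpy as np

def q_product(x, y, q):
    """Borges q-product (assumed form); reduces to the ordinary product x*y as q -> 1."""
    if np.isclose(q, 1.0):
        return x * y
    base = x ** (1.0 - q) + y ** (1.0 - q) - 1.0
    return np.where(base > 0.0, base, 0.0) ** (1.0 / (1.0 - q))

# Kapteyn-like process: repeatedly q-multiply by random factors near 1.
rng = np.random.default_rng(4)
q, n_steps, n_samples = 0.9, 50, 10000
x = np.ones(n_samples)
for _ in range(n_steps):
    x = q_product(x, rng.uniform(0.9, 1.1, size=n_samples), q)

# For q = 1 the result would be approximately log-normal; q != 1 deforms the tails.
print("sample mean:", x.mean(), " sample std:", x.std())
```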
Fatigue shifts and scatters heart rate variability in elite endurance athletes.
Schmitt, Laurent; Regnard, Jacques; Desmarets, Maxime; Mauny, Fréderic; Mourot, Laurent; Fouillot, Jean-Pierre; Coulmy, Nicolas; Millet, Grégoire
2013-01-01
This longitudinal study aimed to compare heart rate variability (HRV) in elite athletes identified either in a 'fatigue' or in a 'no-fatigue' state under 'real life' conditions. 57 elite Nordic skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). The fatigue state was rated with a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in the low (LF) and high frequency (HF) ranges expressed in ms² and in normalized units (nu)] and the status with and without fatigue. Variables not distributed normally were transformed by taking their common logarithm (log10). 172 trials were identified as in a 'fatigue' and 891 as in a 'no-fatigue' state. All supine HR and HRV parameters (Beta±SE) were significantly different (P<0.0001) between 'fatigue' and 'no-fatigue': HRSU (+6.27±0.61 bpm), logTPSU (-0.36±0.04), logLFSU (-0.27±0.04), logHFSU (-0.46±0.05), logLF/HFSU (+0.19±0.03), HFSU(nu) (-9.55±1.33). Differences were also significant (P<0.0001) in standing: HRST (+8.83±0.89), logTPST (-0.28±0.03), logLFST (-0.29±0.03), logHFST (-0.32±0.04). Also, the intra-individual variance of HRV parameters was larger (P<0.05) in the 'fatigue' state (logTPSU: 0.26 vs. 0.07, logLFSU: 0.28 vs. 0.11, logHFSU: 0.32 vs. 0.08, logTPST: 0.13 vs. 0.07, logLFST: 0.16 vs. 0.07, logHFST: 0.25 vs. 0.14). HRV was significantly lower in 'fatigue' vs. 'no-fatigue' but accompanied by larger intra-individual variance of HRV parameters in 'fatigue'. The broader intra-individual variance of HRV parameters might encompass different changes from the no-fatigue state, possibly reflecting different fatigue-induced alterations of the HRV pattern.
Selective logging in the Brazilian Amazon.
Asner, Gregory P; Knapp, David E; Broadbent, Eben N; Oliveira, Paulo J C; Keller, Michael; Silva, Jose N
2005-10-21
Amazon deforestation has been measured by remote sensing for three decades. In comparison, selective logging has been mostly invisible to satellites. We developed a large-scale, high-resolution, automated remote-sensing analysis of selective logging in the top five timber-producing states of the Brazilian Amazon. Logged areas ranged from 12,075 to 19,823 square kilometers per year (+/-14%) between 1999 and 2002, equivalent to 60 to 123% of previously reported deforestation area. Up to 1200 square kilometers per year of logging were observed on conservation lands. Each year, 27 million to 50 million cubic meters of wood were extracted, and a gross flux of approximately 0.1 billion metric tons of carbon was destined for release to the atmosphere by logging.
The Approximability of Learning and Constraint Satisfaction Problems
2010-10-07
further improved this result to NP ⊆ naPCP_{1, 3/4+ε}(O(log n), 3). Around the same time, Zwick [141] showed that naPCP_{1, 5/8}(O(log n), 3) ⊆ BPP by giving a randomized polynomial-time 5/8-approximation algorithm for satisfiable 3CSP. Therefore, unless NP ⊆ BPP, the best s must be bigger than 5/8. Zwick... BPP [141]. We think that Question 5.1.2 addresses an important missing part in understanding the 3-query PCP systems. In addition, as is mentioned the
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-01-30
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
NASA Astrophysics Data System (ADS)
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-02-01
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
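A minimal sketch of the λ-selection rule described above (iterate the Box-Cox parameter and keep the value that maximizes the Shapiro-Wilk P-value), using synthetic SUVmax values rather than the patient data.

```python
import numpy as np
from scipy import stats

def optimal_boxcox_lambda(values, lambdas=np.linspace(-2.0, 2.0, 401)):
    """Return the Box-Cox lambda that maximizes the Shapiro-Wilk P-value of the transformed data."""
    best_lambda, best_p = None, -1.0
    for lam in lambdas:
        transformed = stats.boxcox(values, lmbda=lam)   # with lmbda given, returns the transformed array
        p_value = stats.shapiro(transformed).pvalue
        if p_value > best_p:
            best_lambda, best_p = lam, p_value
    return best_lambda, best_p

# Synthetic, positively skewed SUVmax values standing in for a tumor cohort.
rng = np.random.default_rng(5)
suv_max = rng.lognormal(mean=np.log(6.0), sigma=0.5, size=57)
lam, p = optimal_boxcox_lambda(suv_max)
print(f"optimal lambda = {lam:.2f}, Shapiro-Wilk P = {p:.3f}")
```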
Iacumin, Lucilla; Manzano, Marisa; Comi, Giuseppe
2016-01-01
The anti-listerial activity of the generally recognized as safe (GRAS) bacteriophage Listex P100 (phage P100) was demonstrated in broths and on the surface of slices of dry-cured ham against 5 strains or serotypes (i.e., Scott A, 1/2a, 1/2b, and 4b) of Listeria monocytogenes. In a broth model system, phage P100 at a concentration equal to or greater than 7 log PFU/mL completely inhibited growth of 2 log CFU/cm² or 3 log CFU/cm² of L. monocytogenes at 30 °C. The temperature (4, 10, 20 °C) seemed to influence P100 activity; the best results were obtained at 4 °C. On dry-cured ham slices, a P100 concentration ranging from 5 to 8 log PFU/cm² was required to obtain a significant reduction in L. monocytogenes. At 4, 10, and 20 °C, an inoculum of 8 log PFU/cm² was required to completely eliminate 2 log L. monocytogenes/cm² and to reach absence in 25 g of product according to USA food law. Conversely, it was impossible to completely eradicate L. monocytogenes with an inoculum of approximately 3.0 and 4.0 log CFU/cm² and with a P100 inoculum ranging from 1 to 7 log PFU/cm². P100 remained stable on dry-cured ham slices over a 14-day storage period, with only a marginal loss of 0.2 log PFU/cm² from an initial phage treatment of approximately 8 log PFU/cm². Moreover, phage P100 eliminated free L. monocytogenes cells and biofilms on the machinery surfaces used for dry-cured ham production. These findings demonstrate that the GRAS bacteriophage Listex P100 at a level of 8 log PFU/cm² is listericidal and useful for reducing the L. monocytogenes concentration or eradicating the bacteria from dry-cured ham. PMID:27681898
Detection of contaminant plumes by bore hole geophysical logging
Mack, Thomas J.
1993-01-01
Two borehole geophysical methods, electromagnetic induction and natural gamma radiation logs, were used to vertically delineate landfill leachate plumes in a glacial aquifer. Geophysical logs of monitoring wells near two landfills in a glacial aquifer in west-central Vermont show that borehole geophysical methods can aid in interpretation of geologic logs and placement of monitoring well screens to sample landfill leachate plumes. Zones of high electrical conductance were delineated from the electromagnetic log in wells near two landfills. Some of these zones were found to correlate with silt and clay units on the basis of drilling and gamma logs. Monitoring wells were screened specifically in zones of high electrical conductivity that did not correlate to a silt or clay unit. Zones of high electrical conductivity that did not correlate to a silt or clay unit were caused by the presence of ground water with a high specific conductance, generally from 1000 to 2370 μS/cm (microsiemens per centimeter at 25 degrees Celsius). Ambient ground water in the study area has a specific conductance of approximately 200 to 400 μS/cm. Landfill leachate plumes were found to be approximately 5 to 20 feet thick and to be near the water table surface.
Serebrianyĭ, A M; Akleev, A V; Aleshchenko, A V; Antoshchina, M M; Kudriashova, O V; Riabchenko, N I; Semenova, L P; Pelevina, I I
2011-01-01
By micronucleus (MN) assay with the cytokinesis-block (cytochalasin B) method, the mean frequency of blood lymphocytes with MN has been determined in 76 Moscow inhabitants, 35 people from Obninsk, and 122 from the Chelyabinsk region. In contrast to the distribution of individuals by spontaneous frequency of cells with aberrations, which was shown to be binomial (Kusnetzov et al., 1980), the distribution of individuals by spontaneous frequency of cells with MN in all three cohorts can be regarded as log-normal (chi-square test). The distribution of individuals in the combined cohorts (Moscow and Obninsk inhabitants), and in the single cohort of all examined subjects, must with high reliability be regarded as log-normal (0.70 and 0.86, respectively), but it cannot be regarded as Poisson, binomial, or normal. Taking into account that a log-normal distribution of children by spontaneous frequency of lymphocytes with MN was also observed in a survey of 473 children from different kindergartens in Moscow, we conclude that log-normality is a regularity inherent in this type of damage to the lymphocyte genome. In contrast, the distribution of individuals by the frequency of lymphocytes with MN induced by in vitro irradiation must in most cases be regarded as normal. This distributional character indicates that the appearance of damage (genomic instability) in a single lymphocyte of an individual increases the probability of damage appearing in other lymphocytes. We propose that damaged stem-cell lymphocyte progenitors exchange information with undamaged cells, a process of the bystander-effect type. It can also be supposed that transmission of damage to daughter cells occurs at the time of stem cell division.
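A minimal sketch of the kind of goodness-of-fit check described above (does a log-normal describe the per-individual MN frequencies?), using a chi-square test on synthetic data; the sample size and parameters are assumptions.

```python
import numpy as np
from scipy import stats

# Synthetic per-individual frequencies of lymphocytes with micronuclei (percent of scored cells).
rng = np.random.default_rng(6)
mn_freq = rng.lognormal(mean=np.log(0.8), sigma=0.5, size=122)

# Fit a two-parameter log-normal (lower bound fixed at zero), then test with a chi-square
# statistic over equiprobable bins; ddof=2 accounts for the two estimated parameters.
shape, loc, scale = stats.lognorm.fit(mn_freq, floc=0)
n_bins = 8
edges = stats.lognorm.ppf(np.linspace(0.0, 1.0, n_bins + 1), shape, loc, scale)
observed, _ = np.histogram(mn_freq, bins=edges)
expected = np.full(n_bins, mn_freq.size / n_bins)
chi2, p = stats.chisquare(observed, expected, ddof=2)
print(f"chi-square = {chi2:.2f}, P = {p:.3f}")
```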
NASA Astrophysics Data System (ADS)
Matsubara, Yoshitsugu; Musashi, Yasuo
2017-12-01
The purpose of this study is to explain fluctuations in email size. We have previously investigated the long-term correlations between email send requests and data flow in the system log of the primary staff email server at a university campus, finding that email size frequency follows a power-law distribution with two inflection points, and that the power-law property weakens the correlation of the data flow. However, the mechanism underlying this fluctuation is not completely understood. We collected new log data from both staff and students over six academic years and analyzed their frequency distributions, focusing on the type of content contained in the emails. Furthermore, we obtained permission to collect "Content-Type" log data from the email headers. We therefore collected the staff log data from May 1, 2015 to July 31, 2015, creating two subdistributions. In this paper, we propose a model to explain these subdistributions, which follow log-normal-like distributions. In the log-normal-like model, email senders, consciously or unconsciously, regulate the size of new email sentences according to a normal distribution. The fit of the model is acceptable for these subdistributions, and the model exhibits power-law properties for large email sizes. An analysis of the length of new email sentences would be required for further discussion of our model; however, to protect user privacy at the participating organization, we left this analysis for future work. This study provides new knowledge on the properties of email sizes, and our model is expected to contribute to decisions on whether to establish upper size limits in the design of email services.
Hot gas in the cold dark matter scenario: X-ray clusters from a high-resolution numerical simulation
NASA Technical Reports Server (NTRS)
Kang, Hyesung; Cen, Renyue; Ostriker, Jeremiah P.; Ryu, Dongsu
1994-01-01
A new, three-dimensional, shock-capturing hydrodynamic code is utilized to determine the distribution of hot gas in a standard cold dark matter (CDM) model of the universe. Periodic boundary conditions are assumed: a box with size 85 h^-1 Mpc having cell size 0.31 h^-1 Mpc is followed in a simulation with 270^3 = 10^7.3 cells. Adopting standard parameters determined from COBE and light-element nucleosynthesis, sigma_8 = 1.05, Omega_b = 0.06, and assuming h = 0.5, we find the X-ray-emitting clusters and compute the luminosity function at several wavelengths, the temperature distribution, and estimated sizes, as well as the evolution of these quantities with redshift. We find that most of the total X-ray emissivity in our box originates in a relatively small number of identifiable clusters which occupy approximately 10^-3 of the box volume. This standard CDM model, normalized to COBE, produces approximately 5 times too much emission from clusters having L_x greater than 10^43 ergs/s, a not-unexpected result. If all other parameters were unchanged, we would expect adequate agreement for sigma_8 = 0.6. This provides a new and independent argument for lower small-scale power than standard CDM at the 8 h^-1 Mpc scale. The background radiation field at 1 keV due to clusters in this model is approximately one-third of the observed background, which, after correction for numerical effects, again indicates approximately 5 times too much emission and the appropriateness of sigma_8 = 0.6. If we had used the observed ratio of gas to total mass in clusters, rather than basing the mean density on light-element nucleosynthesis, then the computed luminosity of each cluster would have increased still further, by a factor of approximately 10. The number density of clusters increases to z approximately 1, but the luminosity per typical cluster decreases, with the result that evolution in the number density of bright clusters is moderate in this redshift range, showing a broad peak near z = 0.7, and then a rapid decline above redshift z = 3. Detailed computations of the luminosity functions in the range L_x = 10^40 - 10^44 ergs/s in various energy bands are presented for both cluster central regions and total luminosities to be used in comparison with ROSAT and other observational data sets. The quantitative results found disagree significantly with those found by other investigators using semianalytic techniques. We find little dependence of core radius on cluster luminosity and a dependence of temperature on luminosity given by log kT_x = A + B log L_x, which is slightly steeper (B = 0.38) than is indicated by observations. Computed temperatures are somewhat higher than observed, as expected, in that COBE-normalized CDM has too much power on the relevant scales. A modest average temperature gradient is found, with temperatures dropping to 90% of central values at 0.4 h^-1 Mpc and 70% of central values at 0.9 h^-1 Mpc. Examining the ratio of gas to total mass in the clusters normalized to Omega_b h^2 = 0.015, and comparing with observations, we conclude, in agreement with White (1991), that the cluster observations argue for an open universe.
Nguyen, Hoang Anh; Denis, Olivier; Vergison, Anne; Theunis, Anne; Tulkens, Paul M; Struelens, Marc J; Van Bambeke, Françoise
2009-04-01
Small-colony variant (SCV) strains of Staphylococcus aureus show reduced antibiotic susceptibility and intracellular persistence, potentially explaining therapeutic failures. The activities of oxacillin, fusidic acid, clindamycin, gentamicin, rifampin, vancomycin, linezolid, quinupristin-dalfopristin, daptomycin, tigecycline, moxifloxacin, telavancin, and oritavancin have been examined in THP-1 macrophages infected by a stable thymidine-dependent SCV strain in comparison with normal-phenotype and revertant isogenic strains isolated from the same cystic fibrosis patient. The SCV strain grew slowly extracellularly and intracellularly (1- and 0.2-log CFU increase in 24 h, respectively). In confocal and electron microscopy, SCV and the normal-phenotype bacteria remain confined in acid vacuoles. All antibiotics tested, except tigecycline, caused a net reduction in bacterial counts that was both time and concentration dependent. At an extracellular concentration corresponding to the maximum concentration in human serum (total drug), oritavancin caused a 2-log CFU reduction at 24 h; rifampin, moxifloxacin, and quinupristin-dalfopristin caused a similar reduction at 72 h; and all other antibiotics had only a static effect at 24 h and a 1-log CFU reduction at 72 h. In concentration dependence experiments, response to oritavancin was bimodal (two successive plateaus of -0.4 and -3.1 log CFU); tigecycline, moxifloxacin, and rifampin showed maximal effects of -1.1 to -1.7 log CFU; and the other antibiotics produced results of -0.6 log CFU or less. Addition of thymidine restored intracellular growth of the SCV strain but did not modify the activity of antibiotics (except quinupristin-dalfopristin). All drugs (except tigecycline and oritavancin) showed higher intracellular activity against normal or revertant phenotypes than against SCV strains. The data may help rationalizing the design of further studies with intracellular SCV strains.
Tsao Wu, Maya; Armitage, M Diane; Trujillo, Claire; Trujillo, Anna; Arnold, Laura E; Tsao Wu, Lauren; Arnold, Robert W
2017-12-04
We needed to validate and calibrate our portable acuity screening tools so amblyopia could be detected quickly and effectively at school entry. Spiral-bound flip cards and a downloadable PDF surround-HOTV acuity test box with critical lines were combined with a matching card. Amblyopic patients performed critical-line, then threshold acuity testing, which was then compared to patched E-ETDRS acuity. Five normal subjects wore Bangerter foil goggles to simulate blur for comparative validation. The 31 treated amblyopic eyes showed: logMAR HOTV = 0.97 (logMAR E-ETDRS) - 0.04, r² = 0.88. All but two (6%) differed by less than 2 lines. The five blurred normal subjects showed logMAR HOTV = 1.09 (logMAR E-ETDRS) + 0.15, r² = 0.63. The critical-line test box was 98% efficient at screening within one line of 20/40. These tools reliably detected acuity in treated amblyopic patients and Bangerter-blurred normal subjects. These free and affordable tools provide sensitive screening for amblyopia in children from public, private and home schools. Changing the "pass" criterion to 4 out of 5 would improve sensitivity with somewhat slower testing for all students.
Investigating uplift in the South-Western Barents Sea using sonic and density well log measurements
NASA Astrophysics Data System (ADS)
Yang, Y.; Ellis, M.
2014-12-01
Sediments in the Barents Sea have undergone large amounts of uplift due to Plio-Pleistocene deglaciation as well as Palaeocene-Eocene Atlantic rifting. Uplift affects reservoir quality, seal capacity and fluid migration. Therefore, it is important to obtain reliable uplift estimates in order to evaluate petroleum prospectivity properly. To this end, a number of quantification methods have been proposed, such as Apatite Fission Track Analysis (AFTA) and the integration of seismic surveys with well log data. AFTA usually provides accurate uplift estimates, but the data are limited because of its high cost. Seismic surveys can provide good uplift estimates when well data are available for calibration, but the uncertainty can be large in areas with little or no well data. We estimated South-Western Barents Sea uplift based on well data from the Norwegian Petroleum Directorate. Primary assumptions include time-irreversible shale compaction trends and a universal normal compaction trend for a specified formation. Sonic and density logs from two Cenozoic shale formation intervals, Kolmule and Kolje, were used for the study. For each formation, we studied the logs of all released wells and established exponential normal compaction trends based on a single well. That well was then deemed the reference well, and relative uplift could be calculated at other well locations from the offset from the normal compaction trend. We found that the amount of uplift increases along the SW to NE direction, with a maximum difference of 1,447 m from the Kolje FM estimate and 699 m from the Kolmule FM estimate. The average standard deviation of the estimated uplift is 130 m for the Kolje FM and 160 m for the Kolmule FM using the density log. While results from density logs and sonic logs agree well in general, the density log provides slightly better results in terms of higher consistency and lower standard deviation. Our results agree qualitatively with published papers, with some differences in the actual amounts of uplift. The results are considered to be more accurate due to the higher resolution of the log-scale data that were used.
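A minimal sketch of uplift estimation from a normal compaction trend, assuming an illustrative exponential sonic transit-time trend and synthetic data; the functional form, parameter values, and well readings are assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed compaction trend: sonic transit time decays exponentially with burial depth
# toward a matrix value (illustrative form, microseconds per foot vs. meters).
def compaction_trend(depth_m, dt_matrix, dt_surface, c):
    return dt_matrix + (dt_surface - dt_matrix) * np.exp(-c * depth_m)

rng = np.random.default_rng(9)
depth_ref = np.linspace(500, 3000, 60)                       # reference (maximally buried) well
dt_ref = compaction_trend(depth_ref, 180.0, 600.0, 8e-4) + rng.normal(0, 5, depth_ref.size)
params, _ = curve_fit(compaction_trend, depth_ref, dt_ref, p0=(150.0, 650.0, 1e-3))

# At another well, shales plot "too compacted" for their present depth; the uplift estimate is
# the depth shift needed to move them back onto the normal compaction trend.
depth_obs, dt_obs = 1000.0, 230.0                            # one observed (depth, transit time) pair
trend_depths = np.linspace(0, 6000, 6001)
equivalent_depth = trend_depths[np.argmin(np.abs(compaction_trend(trend_depths, *params) - dt_obs))]
print(f"estimated net uplift ~ {equivalent_depth - depth_obs:.0f} m")
```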
NASA Technical Reports Server (NTRS)
Reschke, Millard F.; Somers, Jeffrey T.; Feiveson, Alan H.; Leigh, R. John; Wood, Scott J.; Paloski, William H.; Kornilova, Ludmila
2006-01-01
We studied the ability to hold the eyes at eccentric horizontal or vertical gaze angles in 68 normal humans, age range 19-56. Subjects attempted to sustain visual fixation of a briefly flashed target located 30 degrees in the horizontal plane and 15 degrees in the vertical plane in a dark environment. Conventionally, the ability to hold eccentric gaze is estimated by fitting centripetal eye drifts with exponential curves and calculating the time constant (t_c) of these slow phases of gaze-evoked nystagmus. Although the distribution of time-constant measurements (t_c) in our normal subjects was extremely skewed, owing to occasional test runs that exhibited near-perfect stability (large t_c values), we found that log10(t_c) was approximately normally distributed within classes of target direction. Therefore, statistical estimation and inference on the effect of target direction was performed on values of z ≡ log10(t_c). Subjects showed considerable variation in their eye-drift performance over repeated trials; nonetheless, statistically significant differences emerged: values of t_c were significantly higher for gaze elicited to targets in the horizontal plane than for the vertical plane (P < 10^-5), suggesting eccentric gaze holding is more stable in the horizontal than in the vertical plane. Furthermore, centrifugal eye drifts were observed in 13.3, 16.0 and 55.6% of cases for horizontal, upgaze and downgaze tests, respectively. Fifth percentile values of the time constant were estimated to be 10.2 sec, 3.3 sec and 3.8 sec for horizontal, upward and downward gaze, respectively. The difference between horizontal and vertical gaze holding may be ascribed to separate components of the velocity-to-position neural integrator for eye movements and to differences in orbital mechanics. Our statistical method for representing the range of normal eccentric gaze stability can be readily applied in a clinical setting to patients who were exposed to environments that may have modified their central integrators and thus require monitoring. Patients with gaze-evoked nystagmus can be flagged by comparison with the normative criteria established above.
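A minimal sketch of the conventional time-constant estimate described above: fit an exponential centripetal drift and take log10 of the fitted time constant. The trace below is synthetic and the parameters are illustrative, not recorded eye-movement data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic eye-position trace drifting back toward center after fixating a 30-degree target.
t = np.linspace(0.0, 10.0, 500)                     # seconds
true_tc = 12.0
rng = np.random.default_rng(7)
eye_position = 30.0 * np.exp(-t / true_tc) + rng.normal(0.0, 0.2, t.size)

def centripetal_drift(t, amplitude, tc):
    """Exponential slow-phase model: position = amplitude * exp(-t / tc)."""
    return amplitude * np.exp(-t / tc)

(amplitude, tc), _ = curve_fit(centripetal_drift, t, eye_position, p0=(30.0, 5.0))
z = np.log10(tc)   # the log10 time constant analyzed statistically in the abstract
print(f"fitted time constant = {tc:.1f} s, log10(t_c) = {z:.2f}")
```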
Scoring in genetically modified organism proficiency tests based on log-transformed results.
Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P
2006-01-01
The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
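A minimal sketch of scoring on log-transformed results, consistent with the near log-normal spread described above; the reported values, assigned value, and fitness-for-purpose standard deviation are all assumptions.

```python
import numpy as np

# Hypothetical round of a GMO proficiency test: reported GM content (%) from participants.
reported = np.array([0.8, 1.1, 0.9, 2.5, 1.0, 1.3, 0.7, 1.0])
assigned_value = 1.0        # assigned value for the round (assumed)
sigma_p = 0.20              # fitness-for-purpose standard deviation on the log10 scale (assumed)

# z-scores computed on the log10 scale, then interpreted against the normal distribution.
z_scores = (np.log10(reported) - np.log10(assigned_value)) / sigma_p
print(np.round(z_scores, 2))
```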
Load and Time Dependence of Interfacial Chemical Bond-Induced Friction at the Nanoscale.
Tian, Kaiwen; Gosvami, Nitya N; Goldsby, David L; Liu, Yun; Szlufarska, Izabela; Carpick, Robert W
2017-02-17
Rate and state friction (RSF) laws are widely used empirical relationships that describe the macroscale frictional behavior of a broad range of materials, including rocks found in the seismogenic zone of Earth's crust. A fundamental aspect of the RSF laws is frictional "aging," where friction increases with the time of stationary contact due to asperity creep and/or interfacial strengthening. Recent atomic force microscope (AFM) experiments and simulations found that nanoscale silica contacts exhibit aging due to the progressive formation of interfacial chemical bonds. The role of normal load (and, thus, normal stress) on this interfacial chemical bond-induced (ICBI) friction is predicted to be significant but has not been examined experimentally. Here, we show using AFM that, for nanoscale ICBI friction of silica-silica interfaces, aging (the difference between the maximum static friction and the kinetic friction) increases approximately linearly with the product of the normal load and the log of the hold time. This behavior is attributed to the approximately linear dependence of the contact area on the load in the positive load regime before significant wear occurs, as inferred from sliding friction measurements. This implies that the average pressure, and thus the average bond formation rate, is load independent within the accessible load range. We also consider a more accurate nonlinear model for the contact area, from which we extract the activation volume and the average stress-free energy barrier to the aging process. Our work provides an approach for studying the load and time dependence of contact aging at the nanoscale and further establishes RSF laws for nanoscale asperity contacts.
The Effect of Temperature on the Survival of Microorganisms in a Deep Space Vacuum
NASA Technical Reports Server (NTRS)
Hagen, C. A.; Godfrey, J. F.; Green, R. H.
1971-01-01
A space molecular sink research facility (Molsink) was used to evaluate the ability of microorganisms to survive the vacuum of outer space. This facility could be programmed to simulate flight spacecraft vacuum environments at pressures in the 0.1 nanotorr range and thermal gradients (30 to 60 C) closely associated with surface temperatures of in-flight spacecraft. Initial populations of Staphylococcus epidermidis and a Micrococcus sp. were reduced approximately 1 log while exposed to -105 and 34 C, and approximately 2 logs while exposed to 59 C for 14 days in the vacuum environment. Spores of Bacillus subtilis var. niger were less affected by the environment. Initial spore populations were reduced 0.2, 0.3, and 0.8 log during the 14-day vacuum exposure at -124, 34, and 59 C, respectively.
Feeding and Feedback in the Powerful Radio Galaxy 3C 120
NASA Technical Reports Server (NTRS)
Tombesi, F.; Mushotzky, R. F.; Reynolds, C. S.; Kallman, T.; Reeves, J. N.; Braito, V.; Ueda, Y.; Leutenegger, M. A.; Williams, B. J.; Stawarz, L.;
2017-01-01
We present a spectral analysis of a 200-kilosecond observation of the broad-line radio galaxy 3C 120, performed with the high-energy transmission grating spectrometer on board the Chandra X-Ray Observatory. We find (i) a neutral absorption component intrinsic to the source with a column density of log N (sub H) equals 20.67 plus or minus 0.05 per square centimeter; (ii) no evidence for a warm absorber (WA) with an upper limit on the column density of just log N (sub H) less than 19.7 per square centimeter, assuming the typical ionization parameter log xi approximately equal to 2.5 ergs centimeter per second; the WA may instead be replaced by (iii) a hot emitting gas with a temperature kT approximately equal to 0.7 kiloelectronvolts observed as soft X-ray emission from ionized Fe L-shell lines, which may originate from a kiloparsec-scale shocked bubble inflated by the active galactic nucleus (AGN) wind or jet with a shock velocity of about 1000 kilometers per second determined by the emission line width; (iv) a neutral Fe K alpha line and accompanying emission lines indicative of a Compton-thick cold reflector with a low reflection fraction R approximately equal to 0.2, suggesting a large opening angle of the torus; (v) a highly ionized Fe XXV emission feature indicative of photoionized gas with an ionization parameter log xi equal to 3.75 (sup plus 0.38) (sub minus 0.27) ergs centimeter per second and a column density of log N (sub H) greater than 22 per square centimeter localized within approximately 2 pc from the X-ray source; and (vi) possible signatures of a highly ionized disk wind. Together with previous evidence for intense molecular line emission, these results indicate that 3C 120 is likely a late-stage merger undergoing strong AGN feedback.
Christensen, Gary E; Song, Joo Hyun; Lu, Wei; El Naqa, Issam; Low, Daniel A
2007-06-01
Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log-Jacobian value and the air flow rate correlated well (R2 = 0.858 on average for the entire lung). The correlation for the fifth individual was not as good (R2 = 0.377 on average for the entire lung) and can be explained by the small variation in tidal volume for this individual. The correlation of the average log-Jacobian value and the air flow rate for images near the diaphragm correlated well in all five individuals (R2 = 0.943 on average). These preliminary results indicate a strong correlation between the expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry. Predicting the location, motion, and compression/expansion of the tumor and normal tissue using image registration and spirometry could have many important benefits for radiotherapy treatment. These benefits include reducing radiation dose to normal tissue, maximizing dose to the tumor, improving patient care, reducing treatment cost, and increasing patient throughput.
Lee, Myung W.
2012-01-01
Through the use of three-dimensional seismic amplitude mapping, several gas hydrate prospects were identified in the Alaminos Canyon area of the Gulf of Mexico. Two of the prospects were drilled as part of the Gulf of Mexico Gas Hydrate Joint Industry Program Leg II in May 2009, and a suite of logging-while-drilling logs was acquired at each well site. Logging-while-drilling logs at the Alaminos Canyon 21–A site indicate that resistivities of approximately 2 ohm-meter and P-wave velocities of approximately 1.9 kilometers per second were measured in a possible gas-hydrate-bearing target sand interval between 540 and 632 feet below the sea floor. These values are slightly elevated relative to those measured in the hydrate-free sediment surrounding the sands. The initial well log analysis is inconclusive in determining the presence of gas hydrate in the logged sand interval, mainly because large washouts in the target interval degraded well log measurements. To assess gas-hydrate saturations, a method of compensating for the effect of washouts on the resistivity and acoustic velocities is required. To meet this need, a method is presented that models the washed-out portion of the borehole as a vertical layer filled with seawater (drilling fluid). Owing to the anisotropic nature of this geometry, the apparent anisotropic resistivities and velocities caused by the vertical layer are used to correct measured log values. By incorporating the conventional marine seismic data into the well log analysis of the washout-corrected well logs, the gas-hydrate saturation at well site AC21–A was estimated to be in the range of 13 percent. Because gas hydrates in the vertical fractures were observed, anisotropic rock physics models were also applied to estimate gas-hydrate saturations.
Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya
2002-04-01
In a previous article, we showed a log-normal distribution of boron and lithium in human urine. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, each of which may have any underlying distribution. Although elemental excretion depends on many variables, the one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount present of any given element; that is, the same percentage of an existing element is eliminated per unit time, and the element concentration is represented by a deterministic negative power function of time in the elimination time-course. Sampling is of a stochastic nature, so the dataset of time variables in the elimination phase when the sample was obtained is expected to show a Normal distribution. The time variable appears as an exponent of the power function, so a concentration histogram is that of an exponential transformation of Normally distributed time. This is why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that defines the pharmacokinetic equation.
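The mechanism described above is easy to reproduce with a toy simulation: concentrations follow first-order elimination C(t) = C0·exp(-kt), and if the sampling times in the elimination phase are normally distributed, the sampled concentrations come out approximately log-normal. C0, k, and the sampling-time distribution below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

c0, k = 100.0, 0.15                                  # initial concentration and elimination rate (assumed)
t = rng.normal(loc=12.0, scale=3.0, size=5000)       # normally distributed sampling times (h)
t = t[t > 0]
conc = c0 * np.exp(-k * t)                           # first-order elimination

# log(conc) = log(c0) - k*t is linear in the normal variable t, so conc is close to log-normal.
print("Shapiro p-value, log(conc):", round(stats.shapiro(np.log(conc[:500])).pvalue, 3))
print("Shapiro p-value, conc:     ", round(stats.shapiro(conc[:500]).pvalue, 3))
```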
Predicting clicks of PubMed articles.
Mao, Yuqing; Lu, Zhiyong
2013-01-01
Predicting the popularity or access usage of an article has the potential to improve the quality of PubMed searches. We can model the click trend of each article as its access changes over time by mining the PubMed query logs, which contain the previous access history for all articles. In this article, we examine the access patterns produced by PubMed users in two years (July 2009 to July 2011). We explore the time series of accesses for each article in the query logs, model the trends with regression approaches, and subsequently use the models for prediction. We show that the click trends of PubMed articles are best fitted with a log-normal regression model. This model allows the number of accesses an article receives and the time since it first becomes available in PubMed to be related via quadratic and logistic functions, with the model parameters to be estimated via maximum likelihood. Our experiments predicting the number of accesses for an article based on its past usage demonstrate that the mean absolute error and mean absolute percentage error of our model are 4.0% and 8.1% lower than the power-law regression model, respectively. The log-normal distribution is also shown to perform significantly better than a previous prediction method based on a human memory theory in cognitive science. This work warrants further investigation on the utility of such a log-normal regression approach towards improving information access in PubMed.
Fatigue Shifts and Scatters Heart Rate Variability in Elite Endurance Athletes
Schmitt, Laurent; Regnard, Jacques; Desmarets, Maxime; Mauny, Fréderic; Mourot, Laurent; Fouillot, Jean-Pierre; Coulmy, Nicolas; Millet, Grégoire
2013-01-01
Purpose This longitudinal study aimed at comparing heart rate variability (HRV) in elite athletes identified either in ‘fatigue’ or in ‘no-fatigue’ state in ‘real life’ conditions. Methods 57 elite Nordic-skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). A fatigue state was quoted with a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in low (LF) and high frequency (HF) ranges expressed in ms2 and normalized units (nu)] and the status without and with fatigue. The variables not distributed normally were transformed by taking their common logarithm (log10). Results 172 trials were identified as in a ‘fatigue’ and 891 as in ‘no-fatigue’ state. All supine HR and HRV parameters (Beta±SE) were significantly different (P<0.0001) between ‘fatigue’ and ‘no-fatigue’: HRSU (+6.27±0.61 bpm), logTPSU (−0.36±0.04), logLFSU (−0.27±0.04), logHFSU (−0.46±0.05), logLF/HFSU (+0.19±0.03), HFSU(nu) (−9.55±1.33). Differences were also significant (P<0.0001) in standing: HRST (+8.83±0.89), logTPST (−0.28±0.03), logLFST (−0.29±0.03), logHFST (−0.32±0.04). Also, intra-individual variance of HRV parameters was larger (P<0.05) in the ‘fatigue’ state (logTPSU: 0.26 vs. 0.07, logLFSU: 0.28 vs. 0.11, logHFSU: 0.32 vs. 0.08, logTPST: 0.13 vs. 0.07, logLFST: 0.16 vs. 0.07, logHFST: 0.25 vs. 0.14). Conclusion HRV was significantly lower in 'fatigue' vs. 'no-fatigue' but accompanied with larger intra-individual variance of HRV parameters in 'fatigue'. The broader intra-individual variance of HRV parameters might encompass different changes from no-fatigue state, possibly reflecting different fatigue-induced alterations of HRV pattern. PMID:23951198
Value of the future: Discounting in random environments
NASA Astrophysics Data System (ADS)
Farmer, J. Doyne; Geanakoplos, John; Masoliver, Jaume; Montero, Miquel; Perelló, Josep
2015-05-01
We analyze how to value future costs and benefits when they must be discounted relative to the present. We introduce the subject for the nonspecialist and take into account the randomness of the economic evolution by studying the discount function of three widely used processes for the dynamics of interest rates: Ornstein-Uhlenbeck, Feller, and log-normal. Besides obtaining exact expressions for the discount function and simple asymptotic approximations, we show that historical average interest rates overestimate long-run discount rates and that this effect can be large. In other words, long-run discount rates should be substantially less than the average rate observed in the past, otherwise any cost-benefit calculation would be biased in favor of the present and against interventions that may protect the future.
Use of the Box-Cox Transformation in Detecting Changepoints in Daily Precipitation Data Series
NASA Astrophysics Data System (ADS)
Wang, X. L.; Chen, H.; Wu, Y.; Pu, Q.
2009-04-01
This study integrates a Box-Cox power transformation procedure into two statistical tests for detecting changepoints in Gaussian data series, to make the changepoint detection methods applicable to non-Gaussian data series, such as daily precipitation amounts. The detection power of the transformed methods in a common-trend two-phase regression setting is assessed by Monte Carlo simulations for data of a log-normal or Gamma distribution. The results show that the transformed methods have increased power of detection in comparison with the corresponding original (untransformed) methods. The transformed data approximate a Gaussian distribution much more closely. As an example of application, the new methods are applied to a series of daily precipitation amounts recorded at a station in Canada, showing satisfactory detection power.
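A minimal sketch of the idea, assuming scipy's maximum-likelihood Box-Cox estimator and a plain two-sample t test at a known candidate changepoint as a stand-in for the common-trend two-phase regression tests used in the study; the synthetic series and its parameters are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic, positively skewed "daily precipitation" series with a shift halfway through
x = np.concatenate([rng.lognormal(mean=1.0, sigma=0.8, size=200),
                    rng.lognormal(mean=1.4, sigma=0.8, size=200)])

y, lam = stats.boxcox(x)                 # Box-Cox transform, lambda chosen by maximum likelihood
print(f"estimated Box-Cox lambda = {lam:.2f}")

t_raw = stats.ttest_ind(x[:200], x[200:]).statistic
t_bc = stats.ttest_ind(y[:200], y[200:]).statistic
print(f"t statistic at the candidate changepoint: raw {t_raw:.2f}, transformed {t_bc:.2f}")
```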
Is isotropic turbulent diffusion symmetry restoring?
NASA Astrophysics Data System (ADS)
Effinger, H.; Grossmann, S.
1984-07-01
The broadening of a cloud of marked particle pairs in longitudinal and transverse directions relative to the initial separation in fully developed isotropic turbulent flow is evaluated on the basis of the unified theory of turbulent relative diffusion of Grossmann and Procaccia (1984). The closure assumption of the theory is refined; its validity is confirmed by comparing experimental data; approximate analytical expressions for the traces of variance and asymmetry in the inertial subrange are obtained; and intermittency is treated using a log-normal model. The difference between the longitudinal and transverse components of the variance tensor is shown to tend to a finite nonzero limit dependent on the radial distribution of the cloud. The need for further measurements and the implications for studies of particle waste in air or water are indicated.
Rabin, Jeff C; Karunathilake, Nirmani; Patrizi, Korey
2018-04-26
Consumption of dark chocolate can improve blood flow, mood, and cognition in the short term, but little is known about the possible effects of dark chocolate on visual performance. To compare the short-term effects of consumption of dark chocolate with those of milk chocolate on visual acuity and large- and small-letter contrast sensitivity. A randomized, single-masked crossover design was used to assess short-term visual performance after consumption of a dark or a milk chocolate bar. Thirty participants without pathologic eye disease each consumed dark and milk chocolate in separate sessions, and within-participant paired comparisons were used to assess outcomes. Testing was conducted at the Rosenberg School of Optometry from June 25 to August 15, 2017. Visual acuity (in logMAR units) and large- and small-letter contrast sensitivity (in the log of the inverse of the minimum detectable contrast [logCS units]) were measured 1.75 hours after consumption of dark and milk chocolate bars. Among the 30 participants (9 men and 21 women; mean [SD] age, 26 [5] years), small-letter contrast sensitivity was significantly higher after consumption of dark chocolate (mean [SE], 1.45 [0.04] logCS) vs milk chocolate (mean [SE], 1.30 [0.05] logCS; mean improvement, 0.15 logCS [95% CI, 0.08-0.22 logCS]; P < .001). Large-letter contrast sensitivity was slightly higher after consumption of dark chocolate (mean [SE], 2.05 [0.02] logCS) vs milk chocolate (mean [SE], 2.00 [0.02] logCS; mean improvement, 0.05 logCS [95% CI, 0.00-0.10 logCS]; P = .07). Visual acuity improved slightly after consumption of dark chocolate (mean [SE], -0.22 [0.01] logMAR; visual acuity, approximately 20/12) and milk chocolate (mean [SE], -0.18 [0.01] logMAR; visual acuity, approximately 20/15; mean improvement, 0.04 logMAR [95% CI, 0.02-0.06 logMAR]; P = .05). Composite scores combining results from all tests showed significant improvement after consumption of dark compared with milk chocolate (mean improvement, 0.20 log U [95% CI, 0.10-0.30 log U]; P < .001). Contrast sensitivity and visual acuity were significantly higher 2 hours after consumption of a dark chocolate bar compared with a milk chocolate bar, but the duration of these effects and their influence in real-world performance await further testing. clinicaltrials.gov Identifier: NCT03326934.
Estimating the Aqueous Solubility of Pharmaceutical Hydrates
Franklin, Stephen J.; Younis, Usir S.; Myrdal, Paul B.
2016-01-01
Estimation of crystalline solute solubility is well documented throughout the literature. However, the anhydrous crystal form is typically considered with these models, which is not always the most stable crystal form in water. In this study an equation which predicts the aqueous solubility of a hydrate is presented. This research attempts to extend the utility of the ideal solubility equation by incorporating desolvation energetics of the hydrated crystal. Similar to the ideal solubility equation, which accounts for the energetics of melting, this model approximates the energy of dehydration to the entropy of vaporization for water. Aqueous solubilities, dehydration and melting temperatures, and log P values were collected experimentally and from the literature. The data set includes different hydrate types and a range of log P values. Three models are evaluated, the most accurate model approximates the entropy of dehydration (ΔSd) by the entropy of vaporization (ΔSvap) for water, and utilizes onset dehydration and melting temperatures in combination with log P. With this model, the average absolute error for the prediction of solubility of 14 compounds was 0.32 log units. PMID:27238488
Comparison of parametric and bootstrap method in bioequivalence test.
Ahn, Byung-Jin; Yim, Dong-Seok
2009-10-01
The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests are based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
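A rough sketch of the comparison, assuming log-transformed AUC values from a simple parallel layout (a real bioequivalence analysis would model the crossover design) and a percentile bootstrap instead of the BCa interval; all numbers are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

log_auc_test = np.log([812.0, 950.0, 705.0, 1020.0, 870.0, 990.0, 760.0, 845.0])
log_auc_ref = np.log([780.0, 905.0, 730.0, 980.0, 900.0, 940.0, 800.0, 860.0])
n1, n2 = len(log_auc_test), len(log_auc_ref)

# Parametric 90% CI of the test/reference ratio of geometric means
diff = log_auc_test.mean() - log_auc_ref.mean()
df = n1 + n2 - 2
sp2 = ((n1 - 1) * log_auc_test.var(ddof=1) + (n2 - 1) * log_auc_ref.var(ddof=1)) / df
se = np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
t90 = stats.t.ppf(0.95, df)
param_ci = np.exp([diff - t90 * se, diff + t90 * se])

# Nonparametric percentile bootstrap of the same ratio
boots = []
for _ in range(2000):
    bt = rng.choice(log_auc_test, n1, replace=True)
    br = rng.choice(log_auc_ref, n2, replace=True)
    boots.append(np.exp(bt.mean() - br.mean()))
boot_ci = np.percentile(boots, [5, 95])

print("parametric 90% CI of the ratio:", np.round(param_ci, 3))
print("bootstrap 90% CI of the ratio: ", np.round(boot_ci, 3))
print("80-125% rule met (parametric): ", bool(param_ci[0] > 0.80 and param_ci[1] < 1.25))
```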
Generating log-normal mock catalog of galaxies in redshift space
NASA Astrophysics Data System (ADS)
Agrawal, Aniket; Makiya, Ryu; Chiang, Chi-Ting; Jeong, Donghui; Saito, Shun; Komatsu, Eiichiro
2017-10-01
We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
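A stripped-down sketch of the generation step only (real space, no velocities, no survey geometry): draw a correlated Gaussian field, exponentiate it into a mean-zero log-normal density contrast, and Poisson-sample galaxy counts cell by cell. The grid size, the crude low-pass filter standing in for a power spectrum, and the mean density are assumptions for illustration and are not taken from the public code.

```python
import numpy as np

rng = np.random.default_rng(42)
n, nbar = 64, 0.5                       # grid cells per side, mean galaxies per cell (assumed)

# Correlated Gaussian field: smoothed white noise as a stand-in for a target P(k)
white = rng.standard_normal((n, n, n))
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
filt = np.exp(-(kx**2 + ky**2 + kz**2) / (2.0 * 0.05**2))
g = np.fft.ifftn(np.fft.fftn(white) * filt).real
g *= 0.5 / g.std()                      # set the standard deviation of the log field

# Log-normal density contrast with zero mean, then Poisson sampling
delta = np.exp(g - 0.5 * g.var()) - 1.0
counts = rng.poisson(nbar * (1.0 + delta))
print(counts.sum(), "mock galaxies in the box")
```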
Chapter 9:Red maple lumber resources for glued-laminated timber beams
John J. Janowiak; Harvey B. Manbeck; Roland Hernandez; Russell C. Moody
2005-01-01
This chapter evaluates the performance of red maple glulam beams made from two distinctly different lumber resources: 1. logs sawn using practices normally used for hardwood appearance lumber recovery; and 2. lower-grade, smaller-dimension lumber primarily obtained from residual log cants.
Distribution of transvascular pathway sizes through the pulmonary microvascular barrier.
McNamee, J E
1987-01-01
Mathematical models of solute and water exchange in the lung have been helpful in understanding factors governing the volume flow rate and composition of pulmonary lymph. As experimental data and models become more encompassing, parameter identification becomes more difficult. Pore sizes in these models should approach and eventually become equivalent to actual physiological pathway sizes as more complex and accurate models are tried. However, pore sizes and numbers vary from model to model as new pathway sizes are added. This apparent inconsistency of pore sizes can be explained if it is assumed that the pulmonary blood-lymph barrier is widely heteroporous, for example, being composed of a continuous distribution of pathway sizes. The sieving characteristics of the pulmonary barrier are reproduced by a log normal distribution of pathway sizes (log mean = -0.20, log s.d. = 1.05). A log normal distribution of pathways in the microvascular barrier is shown to follow from a rather general assumption about the nature of the pulmonary endothelial junction.
White noise analysis of Phycomyces light growth response system. I. Normal intensity range.
Lipson, E D
1975-01-01
The Wiener-Lee-Schetzen method for the identification of a nonlinear system through white Gaussian noise stimulation was applied to the transient light growth response of the sporangiophore of Phycomyces. In order to cover a moderate dynamic range of light intensity I, the input variable was defined to be log I. The experiments were performed in the normal range of light intensity, centered about I0 = 10(-6) W/cm2. The kernels of the Wiener functionals were computed up to second order. Within the range of a few decades the system is reasonably linear with log I. The main nonlinear feature of the second-order kernel corresponds to the property of rectification. Power spectral analysis reveals that the slow dynamics of the system are of at least fifth order. The system can be represented approximately by a linear transfer function, including a first-order high-pass (adaptation) filter with a 4 min time constant and an underdamped fourth-order low-pass filter. Accordingly, a linear electronic circuit was constructed to simulate the small-scale response characteristics. In terms of the adaptation model of Delbrück and Reichardt (1956, in Cellular Mechanisms in Differentiation and Growth, Princeton University Press), kernels were deduced for the dynamic dependence of the growth velocity (output) on the "subjective intensity", a presumed internal variable. Finally, the linear electronic simulator above was generalized to accommodate the large-scale nonlinearity of the adaptation model and to serve as a tool for a deeper test of the model. PMID:1203444
Baseline MNREAD Measures for Normally Sighted Subjects From Childhood to Old Age
Calabrèse, Aurélie; Cheong, Allen M. Y.; Cheung, Sing-Hang; He, Yingchen; Kwon, MiYoung; Mansfield, J. Stephen; Subramanian, Ahalya; Yu, Deyue; Legge, Gordon E.
2016-01-01
Purpose The continuous-text reading-acuity test MNREAD is designed to measure the reading performance of people with normal and low vision. This test is used to estimate maximum reading speed (MRS), critical print size (CPS), reading acuity (RA), and the reading accessibility index (ACC). Here we report the age dependence of these measures for normally sighted individuals, providing baseline data for MNREAD testing. Methods We analyzed MNREAD data from 645 normally sighted participants ranging in age from 8 to 81 years. The data were collected in several studies conducted by different testers and at different sites in our research program, enabling evaluation of robustness of the test. Results Maximum reading speed and reading accessibility index showed a trilinear dependence on age: first increasing from 8 to 16 years (MRS: 140–200 words per minute [wpm]; ACC: 0.7–1.0); then stabilizing in the range of 16 to 40 years (MRS: 200 ± 25 wpm; ACC: 1.0 ± 0.14); and decreasing to 175 wpm and 0.88 by 81 years. Critical print size was constant from 8 to 23 years (0.08 logMAR), increased slowly until 68 years (0.21 logMAR), and then more rapidly until 81 years (0.34 logMAR). logMAR reading acuity improved from −0.1 at 8 years to −0.18 at 16 years, then gradually worsened to −0.05 at 81 years. Conclusions We found a weak dependence of the MNREAD parameters on age in normal vision. In broad terms, MNREAD performance exhibits differences between three age groups: children 8 to 16 years, young adults 16 to 40 years, and middle-aged to older adults >40 years. PMID:27442222
Application of edible coating with starch and carvacrol in minimally processed pumpkin.
Santos, Adriele R; da Silva, Alex F; Amaral, Viviane C S; Ribeiro, Alessandra B; de Abreu Filho, Benicio A; Mikcha, Jane M G
2016-04-01
The present study evaluated the effect of an edible coating of cassava starch and carvacrol in minimally processed pumpkin (MPP). The minimal inhibitory concentration (MIC) of carvacrol against Escherichia coli, Salmonella enterica serotype Typhimurium, Aeromonas hydrophila, and Staphylococcus aureus was determined. The edible coating that contained carvacrol at the MIC and 2 × MIC was applied to MPP, and effects were evaluated with regard to the survival of experimentally inoculated bacteria and autochthonous microflora in MPP. Total titratable acidity, pH, weight loss, and soluble solids over 7 days of storage under refrigeration were also analyzed. The MIC of carvacrol was 312 μg/ml. Carvacrol at the MIC reduced the counts of E. coli and S. Typhimurium by approximately 5 log CFU/g. A. hydrophila was reduced by approximately 8 log CFU/g, and S. aureus was reduced by approximately 2 log CFU/g on the seventh day of storage. Carvacrol at 2 × MIC completely inhibited all isolates on the first day of storage. Coliforms at 35 °C and 45 °C were not detected (<3 MPN/g) with either treatment on any day of shelf life. The treatment groups exhibited a reduction of approximately 2 log CFU/g in psychrotrophic counts compared with controls on the last day of storage. Yeast and mold were not detected with either treatment over the same period. The addition of carvacrol did not affect total titratable acidity, pH, or soluble solids and improved weight loss. The edible coating of cassava starch with carvacrol may be an interesting approach to improve the safety and microbiological quality of MPP.
Efficacy of chlorine dioxide against Listeria monocytogenes in brine chilling solutions.
Valderrama, W B; Mills, E W; Cutter, C N
2009-11-01
Chilled brine solutions are used by the food industry to rapidly cool ready-to-eat meat products after cooking and before packaging. Chlorine dioxide (ClO(2)) was investigated as an antimicrobial additive to eliminate Listeria monocytogenes. Several experiments were performed using brine solutions made of sodium chloride (NaCl) and calcium chloride (CaCl(2)) inoculated with L. monocytogenes and/or treated with 3 ppm of ClO(2). First, 10 and 20% CaCl(2) and NaCl solutions (pH 7.0) were inoculated with a five-strain cocktail of L. monocytogenes to obtain approximately 7 log CFU/ml and incubated 8 h at 0 degrees C. The results demonstrated that L. monocytogenes survived in 10% CaCl(2), 10 and 20% NaCl, and pure water. L. monocytogenes levels were reduced approximately 1.2 log CFU/ml in 20% CaCl(2). Second, inoculated (approximately 7 log CFU/ml) brine solutions (10 and 20% NaCl and 10% CaCl(2)) treated with 3 ppm of ClO(2) resulted in an approximately 4-log reduction of the pathogen within 90 s. The same was not observed in a solution of 20% CaCl(2); further investigation demonstrated that high levels of divalent cations interfere with the disinfectant. Spent brine solutions from hot dog and ham chilling were treated with ClO(2) at concentrations of 3 or 30 ppm. At these concentrations, ClO(2) did not reduce L. monocytogenes. Removal of divalent cations and organic material in brine solutions prior to disinfection with ClO(2) should be investigated to improve the efficacy of the compound against L. monocytogenes. The information from this study may be useful to processing establishments and researchers who are investigating antimicrobials in chilling brine solutions.
Gómez-Aldapa, Carlos A; Rangel-Vargas, Esmeralda; Gordillo-Martínez, Alberto J; Castro-Rosas, Javier
2014-06-01
The behavior of enterotoxigenic Escherichia coli (ETEC), enteropathogenic E. coli (EPEC), enteroinvasive E. coli (EIEC) and non-O157 shiga toxin-producing E. coli (non-O157-STEC) on whole and slices of jalapeño and serrano peppers as well as in blended sauce at 25 ± 2 °C and 3 ± 2 °C was investigated. Chili peppers were collected from markets of Pachuca city, Hidalgo, Mexico. On whole serrano and jalapeño stored at 25 ± 2 °C or 3 ± 2 °C, no growth was observed for EPEC, ETEC, EIEC and non-O157-STEC rifampicin resistant strains. After twelve days at 25 ± 2 °C, on serrano peppers all diarrheagenic E. coli pathotypes (DEP) strains had decreased by a total of approximately 3.7 log, whereas on jalapeño peppers the strains had decreased by approximately 2.8 log, and at 3 ± 2 °C they decreased to approximately 2.5 and 2.2 log respectively, on serrano and jalapeño. All E. coli pathotypes grew onto sliced chili peppers and in blended sauce: after 24 h at 25 ± 2 °C, all pathotypes had grown to approximately 3 and 4 log CFU on pepper slices and sauce, respectively. At 3 ± 2 °C the bacterial growth was inhibited. Copyright © 2014 Elsevier Ltd. All rights reserved.
Castro, Jorge; Moreno-Rueda, Gregorio; Hódar, José A
2010-06-01
There is an intense debate about the effects of postfire salvage logging versus nonintervention policies on regeneration of forest communities, but scant information from experimental studies is available. We manipulated a burned forest area on a Mediterranean mountain to experimentally analyze the effect of salvage logging on bird-species abundance, diversity, and assemblage composition. We used a randomized block design with three plots of approximately 25 ha each, established along an elevational gradient in a recently burned area in Sierra Nevada Natural and National Park (southeastern Spain). Three replicates of three treatments differing in postfire burned wood management were established per plot: salvage logging, nonintervention, and an intermediate degree of intervention (felling and lopping most of the trees but leaving all the biomass). Starting 1 year after the fire, we used point sampling to monitor bird abundance in each treatment for 2 consecutive years during the breeding and winter seasons (720 censuses total). Postfire burned-wood management altered species assemblages. Salvage logged areas had species typical of open- and early-successional habitats. Bird species that inhabit forests were still present in the unsalvaged treatments even though trees were burned, but were almost absent in salvage-logged areas. Indeed, the main dispersers of mid- and late-successional shrubs and trees, such as thrushes (Turdus spp.) and the European Jay (Garrulus glandarius) were almost restricted to unsalvaged treatments. Salvage logging might thus hamper the natural regeneration of the forest through its impact on assemblages of bird species. Moreover, salvage logging reduced species abundance by 50% and richness by 40%, approximately. The highest diversity at the landscape level (gamma diversity) resulted from a combination of all treatments. Salvage logging may be positive for bird conservation if combined in a mosaic with other, less-aggressive postfire management, but stand-wide management with harvest operations has undesirable conservation effects.
Thackeray, J F; Dykes, S
2016-02-01
Thackeray has previously explored the possibility of using a morphometric approach to quantify the "amount" of variation within species and to assess probabilities of conspecificity when two fossil specimens are compared, instead of "pigeon-holing" them into discrete species. In an attempt to obtain a statistical (probabilistic) definition of a species, Thackeray has recognized an approximation of a biological species constant (T = -1.61) based on the log-transformed standard error of the coefficient m (log sem) in regression analysis of cranial and other data from pairs of specimens of conspecific extant species, associated with regression equations of the form y = mx + c, where m is the slope and c is the intercept, using measurements of any specimen A (x axis) and any specimen B of the same species (y axis). The log-transformed standard error of the coefficient m (log sem) is a measure of the degree of similarity between pairs of specimens, and in this study shows central tendency around a mean value of -1.61 and standard deviation 0.10 for modern conspecific specimens. In this paper we focus attention on the need to take into account the range of difference in log sem values (Δlog sem or "delta log sem") obtained from comparisons when specimen A (x axis) is compared to B (y axis), and secondly when specimen A (y axis) is compared to B (x axis). Thackeray's approach can be refined to focus on high probabilities of conspecificity for pairs of specimens for which log sem is less than -1.61 and for which Δlog sem is less than 0.03. We appeal for the adoption of a concept here called "sigma taxonomy" (as opposed to "alpha taxonomy"), recognizing that boundaries between species are not always well defined. Copyright © 2015 Elsevier GmbH. All rights reserved.
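The pairwise comparison described above can be sketched directly: regress the measurements of specimen B on those of specimen A, take log10 of the standard error of the slope, repeat with the axes swapped, and examine both log sem and Δlog sem. The measurement values below are hypothetical, and reading the criterion as requiring both log sem values to fall below -1.61 is an interpretation.

```python
import numpy as np
from scipy import stats

def log_sem(x, y):
    # log10 of the standard error of the slope m in the regression y = m*x + c
    return np.log10(stats.linregress(x, y).stderr)

# Hypothetical homologous measurements for two specimens
spec_a = np.array([98.0, 71.5, 120.3, 64.2, 35.8, 52.1])
spec_b = np.array([101.2, 73.0, 118.9, 66.0, 37.1, 53.4])

ls_ab = log_sem(spec_a, spec_b)   # A on the x axis, B on the y axis
ls_ba = log_sem(spec_b, spec_a)   # axes swapped
delta = abs(ls_ab - ls_ba)

print(f"log sem A->B = {ls_ab:.2f}, B->A = {ls_ba:.2f}, delta log sem = {delta:.3f}")
if max(ls_ab, ls_ba) < -1.61 and delta < 0.03:
    print("high probability of conspecificity")
else:
    print("no strong support for conspecificity")
```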
Johnston, J L; Leong, M S; Checkland, E G; Zuberbuhler, P C; Conger, P R; Quinney, H A
1988-12-01
Body density and skinfold thickness at four sites were measured in 140 normal boys, 168 normal girls, and 6 boys and 7 girls with cystic fibrosis, all aged 8-14 y. Prediction equations for the normal boys and girls for the estimation of body-fat content from skinfold measurements were derived from linear regression of body density vs the log of the sum of the skinfold thickness. The relationship between body density and the log of the sum of the skinfold measurements differed from normal for the boys and girls with cystic fibrosis because of their high body density even though their large residual volume was corrected for. However the sum of skinfold measurements in the children with cystic fibrosis did not differ from normal. Thus body fat percent of these children with cystic fibrosis was underestimated when calculated from body density and invalid when calculated from skinfold thickness.
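As a rough illustration of the two routes to percent body fat compared above: a regression of body density on the log of the summed skinfolds, followed by a density-to-fat conversion. The regression coefficients are placeholders (not the fitted values from the study) and the Siri equation is an assumed choice of conversion.

```python
import numpy as np

def density_from_skinfolds(sum_skinfolds_mm, a=1.16, b=0.065):
    # Linear regression of body density on log10(sum of four skinfolds);
    # a and b are placeholder coefficients, not the study's fitted values.
    return a - b * np.log10(sum_skinfolds_mm)

def percent_fat_from_density(density):
    # Siri equation (assumed conversion from whole-body density to percent fat)
    return 495.0 / density - 450.0

sum_sf = 34.0                                   # mm, sum of the four skinfold sites
d = density_from_skinfolds(sum_sf)
print(f"body density = {d:.4f} g/ml, estimated body fat = {percent_fat_from_density(d):.1f}%")
```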
Testing and analysis of internal hardwood log defect prediction models
R. Edward Thomas
2011-01-01
The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...
Sá, Rui Carlos; Henderson, A Cortney; Simonson, Tatum; Arai, Tatsuya J; Wagner, Harrieth; Theilmann, Rebecca J; Wagner, Peter D; Prisk, G Kim; Hopkins, Susan R
2017-07-01
We have developed a novel functional proton magnetic resonance imaging (MRI) technique to measure regional ventilation-perfusion (V̇ A /Q̇) ratio in the lung. We conducted a comparison study of this technique in healthy subjects ( n = 7, age = 42 ± 16 yr, Forced expiratory volume in 1 s = 94% predicted), by comparing data measured using MRI to that obtained from the multiple inert gas elimination technique (MIGET). Regional ventilation measured in a sagittal lung slice using Specific Ventilation Imaging was combined with proton density measured using a fast gradient-echo sequence to calculate regional alveolar ventilation, registered with perfusion images acquired using arterial spin labeling, and divided on a voxel-by-voxel basis to obtain regional V̇ A /Q̇ ratio. LogSDV̇ and LogSDQ̇, measures of heterogeneity derived from the standard deviation (log scale) of the ventilation and perfusion vs. V̇ A /Q̇ ratio histograms respectively, were calculated. On a separate day, subjects underwent study with MIGET and LogSDV̇ and LogSDQ̇ were calculated from MIGET data using the 50-compartment model. MIGET LogSDV̇ and LogSDQ̇ were normal in all subjects. LogSDQ̇ was highly correlated between MRI and MIGET (R = 0.89, P = 0.007); the intercept was not significantly different from zero (-0.062, P = 0.65) and the slope did not significantly differ from identity (1.29, P = 0.34). MIGET and MRI measures of LogSDV̇ were well correlated (R = 0.83, P = 0.02); the intercept differed from zero (0.20, P = 0.04) and the slope deviated from the line of identity (0.52, P = 0.01). We conclude that in normal subjects, there is a reasonable agreement between MIGET measures of heterogeneity and those from proton MRI measured in a single slice of lung. NEW & NOTEWORTHY We report a comparison of a new proton MRI technique to measure regional V̇ A /Q̇ ratio against the multiple inert gas elimination technique (MIGET). The study reports good relationships between measures of heterogeneity derived from MIGET and those derived from MRI. Although currently limited to a single slice acquisition, these data suggest that single sagittal slice measures of V̇ A /Q̇ ratio provide an adequate means to assess heterogeneity in the normal lung. Copyright © 2017 the American Physiological Society.
Avian responses to selective logging shaped by species traits and logging practices
Burivalova, Zuzana; Lee, Tien Ming; Giam, Xingli; Şekercioğlu, Çağan Hakkı; Wilcove, David S.; Koh, Lian Pin
2015-01-01
Selective logging is one of the most common forms of forest use in the tropics. Although the effects of selective logging on biodiversity have been widely studied, there is little agreement on the relationship between life-history traits and tolerance to logging. In this study, we assessed how species traits and logging practices combine to determine species responses to selective logging, based on over 4000 observations of the responses of nearly 1000 bird species to selective logging across the tropics. Our analysis shows that species traits, such as feeding group and body mass, and logging practices, such as time since logging and logging intensity, interact to influence a species' response to logging. Frugivores and insectivores were most adversely affected by logging and declined further with increasing logging intensity. Nectarivores and granivores responded positively to selective logging for the first two decades, after which their abundances decrease below pre-logging levels. Larger species of omnivores and granivores responded more positively to selective logging than smaller species from either feeding group, whereas this effect of body size was reversed for carnivores, herbivores, frugivores and insectivores. Most importantly, species most negatively impacted by selective logging had not recovered approximately 40 years after logging cessation. We conclude that selective timber harvest has the potential to cause large and long-lasting changes in avian biodiversity. However, our results suggest that the impacts can be mitigated to a certain extent through specific forest management strategies such as lengthening the rotation cycle and implementing reduced impact logging. PMID:25994673
A Language-Independent Approach to Automatic Text Difficulty Assessment for Second-Language Learners
2013-08-01
best-suited for regression. Our baseline uses z-normalized shallow length features and TF-LOG weighted vectors on bag-of-words for Arabic, Dari, English and Pashto. We compare Support Vector Machines and the Margin... football, whereas they are much less common in documents about opera). We used TF-LOG weighted word frequencies on bag-of-words for each document
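The excerpt above is truncated, so the exact weighting is not spelled out; assuming TF-LOG means log-scaled term frequencies, the baseline features could be sketched as below, with the documents, the log(1 + tf) definition, and the two length features all being assumptions for illustration.

```python
import numpy as np
from collections import Counter

docs = ["the match was played before a large crowd",
        "the aria was sung at the opera house last night"]

# TF-LOG weighting, taken here to mean log(1 + raw term frequency)
vocab = sorted({w for d in docs for w in d.split()})
tf_log = np.array([[np.log1p(Counter(d.split())[w]) for w in vocab] for d in docs])

# Shallow length features (words and characters per document), z-normalized
lengths = np.array([[len(d.split()), len(d)] for d in docs], dtype=float)
z_lengths = (lengths - lengths.mean(axis=0)) / lengths.std(axis=0)

features = np.hstack([z_lengths, tf_log])   # combined feature matrix for the regressor
print(features.shape)
```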
Ventilation-perfusion distribution in normal subjects.
Beck, Kenneth C; Johnson, Bruce D; Olson, Thomas P; Wilson, Theodore A
2012-09-01
Functional values of LogSD of the ventilation distribution (σ(V)) have been reported previously, but functional values of LogSD of the perfusion distribution (σ(q)) and the coefficient of correlation between ventilation and perfusion (ρ) have not been measured in humans. Here, we report values for σ(V), σ(q), and ρ obtained from wash-in data for three gases, helium and two soluble gases, acetylene and dimethyl ether. Normal subjects inspired gas containing the test gases, and the concentrations of the gases at end-expiration during the first 10 breaths were measured with the subjects at rest and at increasing levels of exercise. The regional distribution of ventilation and perfusion was described by a bivariate log-normal distribution with parameters σ(V), σ(q), and ρ, and these parameters were evaluated by matching the values of expired gas concentrations calculated for this distribution to the measured values. Values of cardiac output and LogSD ventilation/perfusion (Va/Q) were obtained. At rest, σ(q) is high (1.08 ± 0.12). With the onset of ventilation, σ(q) decreases to 0.85 ± 0.09 but remains higher than σ(V) (0.43 ± 0.09) at all exercise levels. Rho increases to 0.87 ± 0.07, and the value of LogSD Va/Q for light and moderate exercise is primarily the result of the difference between the magnitudes of σ(q) and σ(V). With known values for the parameters, the bivariate distribution describes the comprehensive distribution of ventilation and perfusion that underlies the distribution of the Va/Q ratio.
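If log ventilation and log perfusion are jointly normal, as in the bivariate description above, the spread of the Va/Q ratio follows from the usual formula for the standard deviation of a difference of correlated normal variables. The sketch below simply plugs in the exercise values quoted above (sigma_q = 0.85, sigma_V = 0.43, rho = 0.87) for illustration; whether the paper's LogSD Va/Q is computed exactly this way is an assumption.

```python
import numpy as np

def logsd_va_q(sigma_v, sigma_q, rho):
    # SD of log(V/Q) = log V - log Q for jointly normal log V and log Q
    return np.sqrt(sigma_v**2 + sigma_q**2 - 2.0 * rho * sigma_v * sigma_q)

print(f"LogSD Va/Q at exercise ~ {logsd_va_q(0.43, 0.85, 0.87):.2f}")
print(f"LogSD Va/Q if rho -> 1  ~ {logsd_va_q(0.43, 0.85, 1.00):.2f}")   # approaches |sigma_q - sigma_v|
```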
Raghu, S; Sriraam, N; Kumar, G Pradeep
2017-02-01
Electroencephalogram shortly termed as EEG is considered as the fundamental segment for the assessment of the neural activities in the brain. In cognitive neuroscience domain, EEG-based assessment method is found to be superior due to its non-invasive ability to detect deep brain structure while exhibiting superior spatial resolutions. Especially for studying the neurodynamic behavior of epileptic seizures, EEG recordings reflect the neuronal activity of the brain and thus provide required clinical diagnostic information for the neurologist. This specific proposed study makes use of wavelet packet based log and norm entropies with a recurrent Elman neural network (REN) for the automated detection of epileptic seizures. Three conditions, normal, pre-ictal and epileptic EEG recordings were considered for the proposed study. An adaptive Weiner filter was initially applied to remove the power line noise of 50 Hz from raw EEG recordings. Raw EEGs were segmented into 1 s patterns to ensure stationarity of the signal. Then wavelet packet using Haar wavelet with a five level decomposition was introduced and two entropies, log and norm were estimated and were applied to REN classifier to perform binary classification. The non-linear Wilcoxon statistical test was applied to observe the variation in the features under these conditions. The effect of log energy entropy (without wavelets) was also studied. It was found from the simulation results that the wavelet packet log entropy with REN classifier yielded a classification accuracy of 99.70 % for normal-pre-ictal, 99.70 % for normal-epileptic and 99.85 % for pre-ictal-epileptic.
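A minimal sketch of the two entropy features, assuming the common definitions (log-energy entropy as the sum of the log of squared coefficients, norm entropy as the sum of |coefficient|^p) applied to the level-5 Haar wavelet packet coefficients; the sampling rate, the value of p, and the use of PyWavelets are assumptions, and the filtering, segmentation, and Elman-network classifier are omitted.

```python
import numpy as np
import pywt

def wavelet_packet_entropies(segment, wavelet="haar", level=5, p=1.1):
    wp = pywt.WaveletPacket(data=segment, wavelet=wavelet, maxlevel=level)
    coeffs = np.concatenate([node.data for node in wp.get_level(level, order="natural")])
    eps = 1e-12                                    # guard against log(0)
    log_energy = np.sum(np.log(coeffs**2 + eps))   # log-energy entropy
    norm_entropy = np.sum(np.abs(coeffs) ** p)     # norm entropy, 1 <= p < 2
    return log_energy, norm_entropy

rng = np.random.default_rng(1)
segment = rng.standard_normal(256)                 # stand-in for a 1-s EEG segment at 256 Hz
print(wavelet_packet_entropies(segment))
```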
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examine the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
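The equal-variance binormal result can be checked numerically: if the covariate is normal in each group with a common standard deviation, the predicted c-statistic is Phi(d / sqrt(2)), where d is the standardized difference between groups. The group means, SD, and sample size below are illustrative, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def predicted_c(d):
    # Binormal prediction with equal variances: c = Phi(d / sqrt(2))
    return norm.cdf(d / np.sqrt(2.0))

mu0, mu1, sigma, n = 0.0, 0.8, 1.0, 2000
x0 = rng.normal(mu0, sigma, n)     # covariate in those without the condition
x1 = rng.normal(mu1, sigma, n)     # covariate in those with the condition

# Empirical c-statistic: probability that a randomly chosen case exceeds a control
empirical_c = (x1[:, None] > x0[None, :]).mean()

print(f"predicted c = {predicted_c((mu1 - mu0) / sigma):.3f}")
print(f"empirical c = {empirical_c:.3f}")
```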
Fang, Rui; Wey, Andrew; Bobbili, Naveen K; Leke, Rose F G; Taylor, Diane Wallace; Chen, John J
2017-07-17
Antibodies play an important role in immunity to malaria. Recent studies show that antibodies to multiple antigens, as well as the overall breadth of the response, are associated with protection from malaria. Yet, the variability and reliability of antibody measurements against a combination of malarial antigens using multiplex assays have not been well characterized. A normalization procedure for reducing between-plate variation using replicates of pooled positive and negative controls was investigated. Sixty test samples (30 from malaria-positive and 30 from malaria-negative individuals), together with five pooled positive controls and two pooled negative controls, were screened for antibody levels to 9 malarial antigens, including merozoite antigens (AMA1, EBA175, MSP1, MSP2, MSP3, MSP11, Pf41), sporozoite CSP, and pregnancy-associated VAR2CSA. The antibody levels were measured in triplicate on each of 3 plates, and the experiments were replicated on two different days by the same technician. The performance of the proposed normalization procedure was evaluated with the pooled controls for the test samples on both the linear and natural-log scales. Compared with data on the linear scale, the natural-log transformed data were less skewed and showed a reduced mean-variance relationship. The proposed normalization procedure using pooled controls on the natural-log scale significantly reduced between-plate variation. For malaria-related research that measures antibodies to multiple antigens with multiplex assays, the natural-log transformation is recommended for data analysis, and use of the normalization procedure with multiple pooled controls can improve the precision of antibody measurements.
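The sketch below shows one plausible, simplified form of such a pooled-control normalization on the natural-log scale; the data frame layout and column names are hypothetical, not those of the study.

import numpy as np
import pandas as pd

def normalize_by_pooled_controls(df):
    # df columns: 'plate', 'sample_type' ('test', 'pos_control', 'neg_control'), 'mfi'
    df = df.copy()
    df["log_mfi"] = np.log(df["mfi"])                      # natural-log scale reduces skew
    ctrl = df[df["sample_type"] == "pos_control"]
    plate_means = ctrl.groupby("plate")["log_mfi"].mean()  # per-plate pooled-control level
    offset = plate_means - ctrl["log_mfi"].mean()          # between-plate shift estimate
    df["log_mfi_norm"] = df["log_mfi"] - df["plate"].map(offset)
    return df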
Davis, Joe M
2011-10-28
General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
Generating log-normal mock catalog of galaxies in redshift space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agrawal, Aniket; Makiya, Ryu; Saito, Shun
We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
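The idea of the density part of the catalog can be sketched in a few lines: draw a Gaussian field, exponentiate it into a log-normal density contrast, and Poisson-sample galaxies cell by cell. This toy version (the released code is more complete and also builds the velocity field) assumes an arbitrary power-law spectrum, box size and mean galaxy density.

import numpy as np

rng = np.random.default_rng(2)
n, box, nbar, sigma_g = 128, 500.0, 1e-3, 1.0       # cells per side, box size, galaxy density, ln-field rms

# Gaussian field g with an illustrative power-law spectrum, generated in Fourier space
kx = np.fft.fftfreq(n, d=box / n) * 2.0 * np.pi
k = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kx[None, None, :]**2)
k[0, 0, 0] = np.inf                                  # remove the k = 0 mode
g = np.fft.ifftn(k**-1.0 * np.fft.fftn(rng.standard_normal((n, n, n)))).real
g *= sigma_g / g.std()

delta = np.exp(g - g.var() / 2.0) - 1.0              # log-normal density contrast with mean ~ 0
lam = np.clip(nbar * (box / n)**3 * (1.0 + delta), 0.0, None)
counts = rng.poisson(lam)                            # Poisson-sample galaxies in each cell
print("mean delta:", delta.mean(), "total galaxies:", counts.sum())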
Assessment of visual disability using visual evoked potentials.
Jeon, Jihoon; Oh, Seiyul; Kyung, Sungeun
2012-08-06
The purpose of this study is to validate the use of visual evoked potential (VEP) to objectively quantify visual acuity in normal and amblyopic patients, and determine if it is possible to predict visual acuity in disability assessment to register visual pathway lesions. A retrospective chart review was conducted of patients diagnosed with normal vision, unilateral amblyopia, optic neuritis, and visual disability who visited the university medical center for registration from March 2007 to October 2009. The study included 20 normal subjects (20 right eyes: 10 females, 10 males, ages 9-42 years), 18 unilateral amblyopic patients (18 amblyopic eyes, ages 19-36 years), 19 optic neuritis patients (19 eyes: ages 9-71 years), and 10 patients with visual disability having visual pathway lesions. Amplitude and latencies were analyzed and correlations with visual acuity (logMAR) were derived from 20 normal and 18 amblyopic subjects. Correlation of VEP amplitude and visual acuity (logMAR) of 19 optic neuritis patients confirmed relationships between visual acuity and amplitude. We calculated the objective visual acuity (logMAR) of 16 eyes from 10 patients to diagnose the presence or absence of visual disability using relations derived from 20 normal and 18 amblyopic eyes. Linear regression analyses between amplitude of pattern visual evoked potentials and visual acuity (logMAR) of 38 eyes from normal (right eyes) and amblyopic (amblyopic eyes) subjects were significant [y = -0.072x + 1.22, x: VEP amplitude, y: visual acuity (logMAR)]. There were no significant differences between visual acuity prediction values, which substituted amplitude values of 19 eyes with optic neuritis into function. We calculated the objective visual acuity of 16 eyes of 10 patients to diagnose the presence or absence of visual disability using relations of y = -0.072x + 1.22 (-0.072). This resulted in a prediction reference of visual acuity associated with malingering vs. real disability in a range >5.77 μV. The results could be useful, especially in cases of no obvious pale disc with trauma. Visual acuity quantification using absolute value of amplitude in pattern visual evoked potentials was useful in confirming subjective visual acuity for cutoff values >5.77 μV in disability evaluation to discriminate the malingering from real disability.
Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete
2015-01-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
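A minimal sketch of the maximum-likelihood part follows: fit a log-normal to a list of −Dst maxima and convert the fitted tail probability into a per-century rate. The ten storm values and the time span below are made up for illustration; they are not the 1957-2012 catalogue.

import numpy as np
from scipy.stats import lognorm

dst_maxima = np.array([120., 95., 210., 150., 330., 105., 180., 240., 90., 589.])  # -Dst maxima in nT (illustrative)
years = 56.0                                                  # assumed span of the catalogue

shape, loc, scale = lognorm.fit(dst_maxima, floc=0)           # ML fit with the location fixed at zero
p_exceed = lognorm.sf(850.0, shape, loc=loc, scale=scale)     # P(-Dst >= 850 nT) for a single storm
rate_per_century = len(dst_maxima) / years * 100.0 * p_exceed
print(f"Carrington-class rate: {rate_per_century:.3f} per century")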
Wane detection on rough lumber using surface approximation
Sang-Mook Lee; A. Lynn Abbott; Daniel L. Schmoldt
2000-01-01
The initial breakdown of hardwood logs into lumber produces boards with rough surfaces. These boards contain wane (missing wood due to the curved log exterior) that is removed by edge and trim cuts prior to sale. Because hardwood lumber value is determined using a combination of board size and quality, knowledge of wane position and defects is essential for selecting...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-10
... and are expected to occur for approximately 20 days over the course of this work window. Other work... log boom haul-outs located in the action area. Other potential disturbance could result from the..., primarily by flushing seals off log booms, or by causing short-term avoidance of the area or similar short...
Estimating the Aqueous Solubility of Pharmaceutical Hydrates.
Franklin, Stephen J; Younis, Usir S; Myrdal, Paul B
2016-06-01
Estimation of crystalline solute solubility is well documented throughout the literature. However, these models typically consider the anhydrous crystal form, which is not always the most stable crystal form in water. In this study, an equation that predicts the aqueous solubility of a hydrate is presented. This research attempts to extend the utility of the ideal solubility equation by incorporating the desolvation energetics of the hydrated crystal. Similar to the ideal solubility equation, which accounts for the energetics of melting, this model approximates the entropy of dehydration by the entropy of vaporization of water. Aqueous solubilities, dehydration and melting temperatures, and log P values were collected experimentally and from the literature. The data set includes different hydrate types and a range of log P values. Three models are evaluated; the most accurate model approximates the entropy of dehydration (ΔSd) by the entropy of vaporization (ΔSvap) of water, and utilizes onset dehydration and melting temperatures in combination with log P. With this model, the average absolute error for the prediction of the solubility of 14 compounds was 0.32 log units. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Robust recognition of loud and Lombard speech in the fighter cockpit environment
NASA Astrophysics Data System (ADS)
Stanton, Bill J., Jr.
1988-08-01
There are a number of challenges associated with incorporating speech recognition technology into the fighter cockpit. One of the major problems is the wide range of variability in the pilot's voice, which can result from changing levels of stress and workload. Increasing the training set to include abnormal speech is not an attractive option because of the innumerable conditions that would have to be represented and the inordinate amount of time required to collect such a training set. A more promising approach is to study subsets of abnormal speech that have been produced under controlled cockpit conditions with the purpose of characterizing reliable shifts that occur relative to normal speech. That was the aim of this research. Analyses were conducted for 18 features on 17671 phoneme tokens across eight speakers for normal, loud, and Lombard speech. It was discovered that there was a consistent migration of energy in the sonorants. This discovery of reliable energy shifts led to the development of a method to reduce or eliminate these shifts in the Euclidean distances between LPC log magnitude spectra, which significantly improved recognition performance for loud and Lombard speech. Discrepancies in recognition error rates between normal and abnormal speech were reduced by approximately 50 percent for all eight speakers combined.
Is a data set distributed as a power law? A test, with application to gamma-ray burst brightnesses
NASA Technical Reports Server (NTRS)
Wijers, Ralph A. M. J.; Lubin, Lori M.
1994-01-01
We present a method to determine whether an observed sample of data is drawn from a parent distribution that is a pure power law. The method starts from a class of statistics which have zero expectation value under the null hypothesis, H0, that the distribution is a pure power law: F(x) ∝ x^(-α). We study one simple member of the class, named the `bending statistic' B, in detail. It is most effective for detecting a type of deviation from a power law in which the power-law slope varies slowly and monotonically as a function of x. Our estimator of B has a distribution under H0 that depends only on the size of the sample, not on the parameters of the parent population, and is approximated well by a normal distribution even for modest sample sizes. The bending statistic can therefore be used to test whether a set of numbers is drawn from any power-law parent population. Since many measurable quantities in astrophysics have distributions that are approximately power laws, and since deviations from the ideal power law often provide interesting information about the object of study (e.g., a `bend' or `break' in a luminosity function, a line in an X- or gamma-ray spectrum), we believe that a test of this type will be useful in many different contexts. In the present paper, we apply our test to various subsamples of gamma-ray burst brightnesses from the first-year Burst and Transient Source Experiment (BATSE) catalog and show that we can only marginally detect the expected steepening of the log N(>Cmax) - log Cmax distribution.
A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data
Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence
2013-01-01
Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
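For context, the sketch below simulates Poisson log-normal counts and applies the baseline mentioned in the comparison, a regularized Gaussian graphical model on log-transformed data; it is not the authors' hierarchical PLN-Lasso model, and the dimensions, penalty and precision matrix are illustrative assumptions.

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
p, n = 10, 200
prec = np.eye(p)
prec[0, 1] = prec[1, 0] = 0.4                     # one true conditional dependency
cov = np.linalg.inv(prec)

z = rng.multivariate_normal(np.zeros(p), cov, size=n)   # latent Gaussian (log-normal) layer
counts = rng.poisson(np.exp(2.0 + z))                    # overdispersed RNA-seq-like counts

x = np.log1p(counts)                                     # log-transform the counts
x = (x - x.mean(axis=0)) / x.std(axis=0)                 # standardize before the graphical lasso
model = GraphicalLasso(alpha=0.05).fit(x)
print((np.abs(model.precision_) > 1e-3).astype(int))     # estimated adjacency pattern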
Stochastic modelling of non-stationary financial assets
NASA Astrophysics Data System (ADS)
Estevens, Joana; Rocha, Paulo; Boto, João P.; Lind, Pedro G.
2017-11-01
We model non-stationary volume-price distributions with a log-normal distribution and collect the time series of its two parameters. The time series of the two parameters are shown to be stationary and Markov-like and consequently can be modelled with Langevin equations, which are derived directly from their series of values. Having the evolution equations of the log-normal parameters, we reconstruct the statistics of the first moments of volume-price distributions which fit well the empirical data. Finally, the proposed framework is general enough to study other non-stationary stochastic variables in other research fields, namely, biology, medicine, and geology.
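A rough sketch of the first step, fitting a log-normal in sliding windows and collecting the time series of its two parameters, is given below; the synthetic volume-price series and the window length are assumptions of the example.

import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(4)
t = np.arange(5000)
vp = rng.lognormal(mean=10 + 0.0002 * t, sigma=0.5 + 0.00005 * t)   # slowly drifting parameters

window = 500
mus, sigmas = [], []
for start in range(0, len(vp) - window + 1, window):
    s, _, scale = lognorm.fit(vp[start:start + window], floc=0)     # s = sigma, scale = exp(mu)
    mus.append(np.log(scale))
    sigmas.append(s)

# First moment of each window reconstructed from the parameters: E[X] = exp(mu + sigma^2 / 2)
means = np.exp(np.array(mus) + np.array(sigmas)**2 / 2.0)
print(np.round(means[:3], 1))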
Explorations in statistics: the log transformation.
Curran-Everett, Douglas
2018-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
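The two points about equalized standard deviations and the Box-Cox check can be reproduced in a few lines; the simulated log-normal groups and their parameters are assumptions of this sketch.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
groups = [rng.lognormal(mean=m, sigma=0.4, size=200) for m in (1.0, 2.0, 3.0)]

print("raw SDs:", [round(g.std(), 2) for g in groups])            # SD grows with the mean
print("log SDs:", [round(np.log(g).std(), 2) for g in groups])    # roughly equal after the log transform

_, lam = stats.boxcox(groups[0])            # ML estimate of the Box-Cox parameter for one skewed sample
print("Box-Cox lambda:", round(lam, 2))     # near 0 for log-normal data, i.e. a log-like transform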
Probing star formation relations of mergers and normal galaxies across the CO ladder
NASA Astrophysics Data System (ADS)
Greve, Thomas R.
We examine integrated luminosity relations between the IR continuum and the CO rotational ladder observed for local (ultra) luminous infra-red galaxies ((U)LIRGs, LIR ≥ 10^11 L⊙) and normal star-forming galaxies in the context of the radiation-pressure-regulated star formation proposed by Andrews & Thompson (2011). This can account for the normalization and linear slopes of the luminosity relations (log LIR = α log L'CO + β) of both low- and high-J CO lines observed for normal galaxies. Super-linear slopes occur for galaxy samples with significantly different dense gas fractions. Local (U)LIRGs are observed to have sub-linear high-J (Jup > 6) slopes or, equivalently, increasing L'CO(high-J)/LIR with LIR. In the extreme ISM conditions of local (U)LIRGs, the high-J CO lines no longer trace individual hot spots of star formation (which gave rise to the linear slopes for normal galaxies) but a more widespread warm and dense gas phase mechanically heated by powerful supernovae-driven turbulence and shocks.
Paillet, Frederick; Duncanson, Russell
1994-01-01
The most extensive data base for fractured bedrock aquifers consists of drilling reports maintained by various state agencies. We investigated the accuracy and reliability of such reports by comparing a representative set of reports for nine wells drilled by conventional air percussion methods in granite with a suite of geophysical logs for the same wells designed to identify the depths of fractures intersecting the well bore which may have produced water during aquifer tests. Production estimates reported by the driller ranged from less than 1 to almost 10 gallons per minute. The moderate drawdowns maintained during subsequent production tests were associated with approximately the same flows as those measured when boreholes were dewatered during air percussion drilling. We believe the estimates of production during drilling and drawdown tests were similar because partial fracture zone dewatering during drilling prevented larger inflows otherwise expected from the steeper drawdowns during drilling. The fractures and fracture zones indicated on the drilling report and the amounts of water produced by these fractures during drilling generally agree with those identified from the geophysical log analysis. Most water production occurred from two fractured and weathered zones which are separated by an interval of unweathered granite. The fractures identified in the drilling reports show various depth discrepancies in comparison to the geophysical logs, which are subject to much better depth control. However, the depths of the fractures associated with water production on the drilling report are comparable to the depths of the fractures shown to be the source of water inflow in the geophysical log analysis. Other differences in the relative contribution of flow from fracture zones may be attributed to the differences between the hydraulic conditions during drilling, which represent large, prolonged drawdowns, and pumping tests, which consisted of smaller drawdowns maintained over shorter periods. We conclude that drilling reports filed by experienced well drillers contain useful information about the depth, thickness, degree of weathering, and production capacity of fracture zones supplying typical domestic water wells. The accuracy of this information could be improved if relatively simple and inexpensive geophysical well logs such as gamma, caliper, and normal resistivity logs were routinely run in conjunction with bedrock drilling projects.
Opacity, metallicity, and Cepheid period ratios in the galaxy and Magellanic Clouds
NASA Technical Reports Server (NTRS)
Simon, Norman R.; Kanbur, Shashi M.
1994-01-01
Linear pulsation calculations are employed to reproduce the bump Cepheid resonance (P2/P0 = 0.5 at P0 ≈ 10 days) and to model, individually, the P1/P0 period ratios for the dozen known Galactic beat Cepheids. Convection is ignored. The results point to a range of metallicity among the Cepheids, perhaps as large as 0.01 ≲ Z ≲ 0.02, with no evidence for any star exceeding Z = 0.02. We find masses and luminosities which range from M ≲ 4 M⊙, log10 L ≲ 3.0 at P0 ≈ 3 days to M ≲ 6 M⊙, log10 L ≳ 3.5 at P0 ≈ 10 days. Similar parameters are indicated for the P0 ≈ 10 day Cepheids in the LMC and SMC, provided that the resonance for these stars occurs at a slightly longer period P0, as has been suggested in the literature. Our calculations were performed mainly using OPAL opacities, but also with new opacities from the Opacity Project (OP). Only small differences were found between the OPAL results and those from OP. Finally, some suggestions are made for possible future work, including evolution and pulsation calculations, and more precise observations of Cepheids in the Magellanic Clouds.
Inatsu, Yasuhiro; Bari, Md Latiful; Kawasaki, Susumu; Isshiki, Kenji; Kawamoto, Shinichi
2005-02-01
Efficacy of acidified sodium chlorite for reducing the population of Escherichia coli O157:H7 on Chinese cabbage leaves was evaluated. Washing leaves with distilled water could reduce the population of E. coli O157:H7 by approximately 1.0 log CFU/g, whereas treating with acidified chlorite solution could reduce the population by 3.0 log CFU/g without changing the leaf color. A similar level of reduction was achieved by washing with sodium chlorite solution containing various organic acids. However, acidified sodium chlorite in combination with a mild heat treatment reduced the population by approximately 4.0 log CFU/g without affecting the color, but it softened the leaves. Moreover, the efficacy of the washing treatment was similar at low (4 degrees C) and room (25 degrees C) temperatures, indicating that acidified sodium chlorite solution could be useful as a sanitizer for surface washing of fresh produce.
Rajasekaran, Sanguthevar
2013-01-01
Efficient tile sets for self-assembling rectilinear shapes are of critical importance in algorithmic self-assembly. A lower bound on the tile complexity of any deterministic self-assembly system for an n × n square is Ω(log(n)/log(log(n))) (inferred from Kolmogorov complexity). Deterministic self-assembly systems with optimal tile complexity have been designed for squares and related shapes in the past. However, designing Θ(log(n)/log(log(n))) unique tiles specific to a shape is still an intensive task in the laboratory. On the other hand, copies of a tile can be made rapidly using PCR (polymerase chain reaction) experiments. This led to the study of self-assembly on tile concentration programming models. We present two major results in this paper on the concentration programming model. First we show how to self-assemble rectangles with a fixed aspect ratio (α:β), with high probability, using Θ(α + β) tiles. This result is much stronger than the existing results by Kao et al. (Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008) and Doty (Randomized self-assembly for exact shapes. In: proceedings of the 50th annual IEEE symposium on foundations of computer science (FOCS), IEEE, Atlanta. pp 85-94, 2009), which can only self-assemble squares and rely on tiles which perform binary arithmetic. On the other hand, our result is based on a technique called staircase sampling. This technique eliminates the need for sub-tiles which perform binary arithmetic, reduces the constant in the asymptotic bound, and eliminates the need for approximate frames (Kao et al. Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008). Our second result applies staircase sampling on the equimolar concentration programming model (The tile complexity of linear assemblies. In: proceedings of the 36th international colloquium automata, languages and programming: Part I on ICALP '09, Springer-Verlag, pp 235-253, 2009), to self-assemble rectangles (of fixed aspect ratio) with high probability. The tile complexity of our algorithm is Θ(log(n)) and is optimal on the probabilistic tile assembly model (PTAM), where n is an upper bound on the dimensions of a rectangle. PMID:24311993
210Po Log-normal distribution in human urines: Survey from Central Italy people
Sisti, D.; Rocchi, M. B. L.; Meli, M. A.; Desideri, D.
2009-01-01
The death in London of the former secret service agent Alexander Livtinenko on 23 November 2006 generally attracted the attention of the public to the rather unknown radionuclide 210Po. This paper presents the results of a monitoring programme of 210Po background levels in the urines of noncontaminated people living in Central Italy (near the Republic of S. Marino). The relationship between age, sex, years of smoking, number of cigarettes per day, and 210Po concentration was also studied. The results indicated that the urinary 210Po concentration follows a surprisingly perfect Log-normal distribution. Log 210Po concentrations were positively correlated to age (p < 0.0001), number of daily smoked cigarettes (p = 0.006), and years of smoking (p = 0.021), and associated to sex (p = 0.019). Consequently, this study provides upper reference limits for each sub-group identified by significantly predictive variables. PMID:19750019
Structural lumber laminated from 1/4-inch rotary-peeled southern pine veneer
Peter Koch
1972-01-01
By the lamination process evaluated, 60 percent of total log volume ended as kiln-dry, end-trimmed, sized, salable 2 by 4's - approximately 50 percent more than that achieved by conventional bandsawing of matched logs. Moreover, modulus of elasticity of the laminated 2 by 4's (adjusted to 12 percent moisture content) averaged 1,950,000 psi compared to 1,790,...
Determining prescription durations based on the parametric waiting time distribution.
Støvring, Henrik; Pottegård, Anton; Hallas, Jesper
2016-12-01
The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Fukami, Christine S.; Sullivan, Amy P.; Ryan Fulgham, S.; Murschell, Trey; Borch, Thomas; Smith, James N.; Farmer, Delphine K.
2016-07-01
Particle-into-Liquid Samplers (PILS) have become a standard aerosol collection technique, and are widely used in both ground and aircraft measurements in conjunction with off-line ion chromatography (IC) measurements. Accurate and precise background samples are essential to account for gas-phase components not efficiently removed and any interference in the instrument lines, collection vials or off-line analysis procedures. For aircraft sampling with PILS, backgrounds are typically taken with in-line filters to remove particles prior to sample collection once or twice per flight, with more numerous backgrounds taken on the ground. Here, we use data collected during the Front Range Air Pollution and Photochemistry Éxperiment (FRAPPÉ) to demonstrate not only that multiple background filter samples are essential to attain a representative background, but also that the chemical background signals do not follow the Gaussian statistics typically assumed. Instead, the background signals for all chemical components analyzed from 137 background samples (taken from ∼78 total sampling hours over 18 flights) follow a log-normal distribution, meaning that the typical approaches of averaging background samples and/or assuming a Gaussian distribution cause an over-estimation of backgrounds - and thus an underestimation of sample concentrations. Our approach of deriving backgrounds from the peak of the log-normal distribution results in detection limits of 0.25, 0.32, 3.9, 0.17, 0.75 and 0.57 μg m-3 for sub-micron aerosol nitrate (NO3-), nitrite (NO2-), ammonium (NH4+), sulfate (SO42-), potassium (K+) and calcium (Ca2+), respectively. The difference in backgrounds calculated from assuming a Gaussian distribution versus a log-normal distribution was most extreme for NH4+, resulting in a background that was 1.58× that determined from fitting a log-normal distribution.
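The background estimate described above can be sketched as fitting a log-normal to the blank-filter values and using its peak (mode) rather than the arithmetic mean; the simulated blank values below are illustrative, not FRAPPÉ data.

import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(6)
blanks = rng.lognormal(mean=np.log(0.2), sigma=0.8, size=137)   # e.g. nitrate blanks in ug m-3

s, _, scale = lognorm.fit(blanks, floc=0)      # s = sigma, scale = exp(mu)
mode = np.exp(np.log(scale) - s**2)            # peak of the fitted log-normal density
print("arithmetic-mean background:", round(blanks.mean(), 3))
print("log-normal peak background:", round(mode, 3))            # smaller, so samples are not over-corrected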
Gómez-Novo, Miriam; Boga, José A; Álvarez-Argüelles, Marta E; Rojo-Alba, Susana; Fernández, Ana; Menéndez, María J; de Oña, María; Melón, Santiago
2018-05-01
Human respiratory syncytial virus (HRSV) is a common cause of respiratory infections. The main objective was to analyze the ability of HRSV viral load, normalized by cell number, to predict respiratory symptoms. A prospective, descriptive, and analytical study was performed. Of 7307 respiratory samples processed between December 2014 and April 2016, 1019 HRSV-positive samples were included in this study. Lower respiratory tract infection was present in 729 patients (71.54%). Normalized HRSV load was calculated by quantification of the HRSV genome and the human β-globin gene and expressed as log10 copies/1000 cells. HRSV mean loads were 4.09 ± 2.08 and 4.82 ± 2.09 log10 copies/1000 cells in the 549 pharyngeal and 470 nasopharyngeal samples, respectively (P < 0.001). The viral mean load was 4.81 ± 1.98 log10 copies/1000 cells for patients under the age of 4 years (P < 0.001). The viral mean loads were 4.51 ± 2.04 log10 copies/1000 cells in patients with lower respiratory tract infection and 4.22 ± 2.28 log10 copies/1000 cells in those with upper respiratory tract infection or febrile syndrome (P < 0.05). A possible cut-off value to predict LRTI evolution was tentatively established. Normalization of viral load by cell number in the samples is essential to ensure an optimal virological molecular diagnosis, avoiding that the quality of samples affects the results. A high viral load can be a useful marker to predict disease progression. © 2018 Wiley Periodicals, Inc.
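The normalization itself reduces to simple arithmetic; the sketch below assumes the β-globin target is present at two copies per diploid cell, which is an assumption of this illustration rather than a statement of the study's protocol.

import numpy as np

def normalized_load(hrsv_copies, beta_globin_copies):
    # Return log10 HRSV copies per 1000 cells for one sample.
    cells = beta_globin_copies / 2.0              # two beta-globin copies per cell (assumed)
    return np.log10(hrsv_copies / cells * 1000.0)

print(round(normalized_load(hrsv_copies=5.0e6, beta_globin_copies=8.0e4), 2))   # about 5.10 log10 copies/1000 cells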
Estimation of norovirus infection risks to consumers of wastewater-irrigated food crops eaten raw.
Mara, Duncan; Sleigh, Andrew
2010-03-01
A quantitative microbial risk analysis-Monte Carlo method was used to estimate norovirus infection risks to consumers of wastewater-irrigated lettuce. Using the same assumptions as used in the 2006 WHO guidelines for the safe use of wastewater in agriculture, a norovirus reduction of 6 log units was required to achieve a norovirus infection risk of approximately 10^-3 per person per year (pppy), but for a lower consumption of lettuce (40-48 g per week vs. 350 g per week) the required reduction was 5 log units. If the tolerable additional disease burden is increased from a DALY (disability-adjusted life year) loss of 10^-6 pppy (the value used in the WHO guidelines) to 10^-5 pppy, the required pathogen reduction is one order of magnitude lower. Reductions of 4-6 log units can be achieved by very simple partial treatment (principally settling to achieve a 1-log unit reduction) supplemented by very reliable post-treatment health-protection control measures such as pathogen die-off (1-2 log units), produce washing in cold water (1 log unit) and produce disinfection (3 log units).
Index selection in terminal sires improves lamb performance at finishing.
Márquez, G C; Haresign, W; Davies, M H; Roehe, R; Bünger, L; Simm, G; Lewis, R M
2013-01-01
Lamb meat is often perceived by consumers as fatty, and consumption has decreased in recent decades. A lean growth index was developed in the UK for terminal sire breeds to increase carcass lean content and constrain fat content at a constant age end point. The purposes of this study were 1) to evaluate the effects of index selection of terminal sires on their crossbred offspring at finishing and 2) to evaluate its effectiveness within terminal sire breeds. Approximately 70% of lambs marketed in the UK have been sired by rams of breeds typically thought of as specialized terminal sires. The most widely used are Charollais, Suffolk, and Texel. These breeds participated in sire referencing schemes from the early 1990s by sharing rams among flocks selected on the lean growth index. From 1999 to 2002 approximately 15 "high" and 15 "low" lean growth index score rams were selected from within their sire referencing schemes and mated to Welsh and Scottish Mule ewes. Their crossbred offspring were commercially reared on 3 farms in the UK. Lambs were finished to an estimated 11% subcutaneous fat by visual evaluation. At finishing, lambs were weighed, ultrasonically scanned, and assessed for condition score and conformation. Records were obtained for 6356 lambs on finishing BW (FWT), ultrasonic muscle depth (UMD), ultrasonic fat depth, overall condition score (OCS), and conformation of gigot, loin, and shoulder. Ultrasonic fat depth was log transformed (logUFD) to approach normality. High-index-sired lambs were heavier at finishing (1.2±0.2 kg) with thicker UMD (0.7±0.2 mm) and less logUFD (0.08±0.01 mm; P<0.05). There were no differences in OCS or conformation based on the sire index or breed (P>0.08). Suffolk-sired lambs were heavier than Charollais (1.0±0.3 kg), which were heavier than Texel (0.9±0.3 kg; P<0.001). Texel-sired lambs had thicker UMD than Charollais (0.7±0.2 mm; P<0.001) but were not different than Suffolk. Charollais-sired lambs had greater logUFD than both Texel (0.098±0.016 mm) and Suffolk (0.061±0.017 mm) sired lambs (P<0.001). Within a breed, high- and low-index-sired lambs differed in performance with the exceptions of FWT and UMD in Suffolks. Index selection produced heavier and leaner lambs at finishing. Producers have flexibility in choosing the terminal sire that best fits their production system.
Attenuation of ground-motion spectral amplitudes in southeastern Australia
Allen, T.I.; Cummins, P.R.; Dhu, T.; Schneider, J.F.
2007-01-01
A dataset comprising some 1200 weak- and strong-motion records from 84 earthquakes is compiled to develop a regional ground-motion model for southeastern Australia (SEA). Events were recorded from 1993 to 2004 and range in size from moment magnitude 2.0 ≤ M ≤ 4.7. The decay of vertical-component Fourier spectral amplitudes is modeled by trilinear geometrical spreading. The decay of low-frequency spectral amplitudes can be approximated by a coefficient of R^-1.3 (where R is hypocentral distance) within 90 km of the seismic source. From approximately 90 to 160 km, we observe a transition zone in which the seismic coda are affected by postcritical reflections from midcrustal and Moho discontinuities. In this hypocentral distance range, geometrical spreading is approximately R^+0.1. Beyond 160 km, low-frequency seismic energy attenuates rapidly with source-receiver distance, having a geometrical spreading coefficient of R^-1.6. The associated regional seismic-quality factor can be expressed by the polynomial log Q(f) = 3.66 - 1.44 log f + 0.768 (log f)^2 + 0.058 (log f)^3 for frequencies 0.78 ≤ f ≤ 19.9 Hz. Fourier spectral amplitudes, corrected for geometrical spreading and anelastic attenuation, are regressed with M to obtain quadratic source scaling coefficients. Modeled vertical-component displacement spectra fit the observed data well. Amplitude residuals are, on average, relatively small and do not vary with hypocentral distance. Predicted source spectra (i.e., at R = 1 km) are consistent with eastern North American (ENA) models at low frequencies (f ≲ 2 Hz), indicating that moment magnitudes calculated for SEA earthquakes are consistent with moment magnitude scales used in ENA over the observed magnitude range. The models presented represent the first spectral ground-motion prediction equations developed for the southeastern Australian region. This work provides a useful framework for the development of regional ground-motion relations for earthquake hazard and risk assessment in SEA.
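The attenuation terms quoted above can be evaluated directly; this sketch implements the quality-factor polynomial and the trilinear geometrical spreading, with the example distances and frequency chosen arbitrarily.

import numpy as np

def log_q(f):
    lf = np.log10(f)
    return 3.66 - 1.44 * lf + 0.768 * lf**2 + 0.058 * lf**3     # valid for 0.78 <= f <= 19.9 Hz

def geometric_spreading(r):
    # Trilinear decay: R^-1.3 to 90 km, R^+0.1 from 90 to 160 km, R^-1.6 beyond 160 km.
    if r <= 90.0:
        return r**-1.3
    g90 = 90.0**-1.3
    if r <= 160.0:
        return g90 * (r / 90.0)**0.1
    return g90 * (160.0 / 90.0)**0.1 * (r / 160.0)**-1.6

print("Q(1 Hz) =", round(10**log_q(1.0), 1))
for r in (30.0, 120.0, 300.0):
    print(f"spreading at {r:.0f} km: {geometric_spreading(r):.2e}")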
Paillet, Frederick L.; Hodges, Richard E.; Corland, Barbara S.
2002-01-01
This report presents and describes geophysical logs for six boreholes in Lariat Gulch, a topographic gulch at the former U.S. Air Force site PJKS in Jefferson County near Denver, Colorado. Geophysical logs include gamma, normal resistivity, fluid-column temperature and resistivity, caliper, televiewer, and heat-pulse flowmeter. These logs were run in two boreholes penetrating only the Fountain Formation of Pennsylvanian and Permian age (logged to depths of about 65 and 570 feet) and in four boreholes (logged to depths of about 342 to 742 feet) penetrating mostly the Fountain Formation and terminating in Precambrian crystalline rock, which underlies the Fountain Formation. Data from the logs were used to identify fractures and bedding planes and to locate the contact between the two formations. The logs indicated few fractures in the boreholes and gave no indication of higher transmissivity in the contact zone between the two formations. Transmissivities for all fractures in each borehole were estimated to be less than 2 feet squared per day.
Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock
NASA Technical Reports Server (NTRS)
Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.
2001-01-01
Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, it is observed that the probability distribution P̄(log E) of the wave field E is power-law, with the bar denoting this averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain a power-law spatially-averaged distribution P̄(log E) when the observed power-law variations of the mean and standard deviation of log E with position are combined with the log-normal statistics predicted by SGT at each location.
An "ASYMPTOTIC FRACTAL" Approach to the Morphology of Malignant Cell Nuclei
NASA Astrophysics Data System (ADS)
Landini, Gabriel; Rippin, John W.
To investigate quantitatively nuclear membrane irregularity, 672 nuclei from 10 cases of oral cancer (squamous cell carcinoma) and normal cells from oral mucosa were studied in transmission electron micrographs. The nuclei were photographed at ×1400 magnification and transferred to computer memory (1 pixel = 35 nm). The perimeter of the profiles was analysed using the "yardstick method" of fractal dimension estimation, and the log-log plot of ruler size vs. boundary length demonstrated that there exists a significant effect of resolution on length measurement. However, this effect seems to disappear at higher resolutions. As this observation is compatible with the concept of asymptotic fractal, we estimated the parameters c, L and Bm from the asymptotic fractal formula Br = Bm {1 + (r/L)^c}^-1, where Br is the boundary length measured with a ruler of size r, Bm is the maximum boundary length for r → 0, L is a constant, and c = asymptotic fractal dimension minus topological dimension (D - Dt) for r → ∞. Analyses of variance showed c to be significantly higher in the normal than malignant cases (P < 0.001), but log(L) and Bm to be significantly higher in the malignant cases (P < 0.001). A multivariate linear discrimination analysis on c, log(L) and Bm re-classified 76.6% of the cells correctly (84.8% of the normal and 67.5% of the tumor). Furthermore, this shows that asymptotic fractal analysis applied to nuclear profiles has great potential for shape quantification in diagnosis of oral cancer.
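A sketch of fitting the asymptotic-fractal relation to yardstick data follows; the boundary-length values are synthetic, and the topological dimension Dt = 1 of a profile is assumed when reporting D.

import numpy as np
from scipy.optimize import curve_fit

def asymptotic_fractal(r, b_m, l, c):
    return b_m * (1.0 + (r / l)**c)**-1.0          # B_r = B_m * {1 + (r/L)^c}^-1

rulers = np.array([1., 2., 4., 8., 16., 32., 64.])                               # ruler size r (pixels)
lengths = asymptotic_fractal(rulers, 900.0, 20.0, 0.15)
lengths *= 1.0 + 0.01 * np.random.default_rng(7).standard_normal(len(rulers))    # 1% measurement noise

(b_m, l, c), _ = curve_fit(asymptotic_fractal, rulers, lengths, p0=(lengths[0], 10.0, 0.2))
print(f"Bm = {b_m:.0f}, L = {l:.1f}, c = {c:.3f}, D = Dt + c = {1.0 + c:.3f}")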
NASA Astrophysics Data System (ADS)
Berthet, Gwenaël; Renard, Jean-Baptiste; Brogniez, Colette; Robert, Claude; Chartier, Michel; Pirre, Michel
2002-12-01
Aerosol extinction coefficients have been derived in the 375-700-nm spectral domain from measurements in the stratosphere since 1992, at night, at mid- and high latitudes from 15 to 40 km, by two balloonborne spectrometers, Absorption par les Minoritaires Ozone et NOx (AMON) and Spectroscopie d'Absorption Lunaire pour l'Observation des Minoritaires Ozone et NOx (SALOMON). Log-normal size distributions associated with the Mie-computed extinction spectra that best fit the measurements permit calculation of integrated properties of the distributions. Although measured extinction spectra that correspond to background aerosols can be reproduced by the Mie scattering model by use of monomodal log-normal size distributions, each flight reveals some large discrepancies between measurement and theory at several altitudes. The agreement between measured and Mie-calculated extinction spectra is significantly improved by use of bimodal log-normal distributions. Nevertheless, neither monomodal nor bimodal distributions permit correct reproduction of some of the measured extinction shapes, especially for the 26 February 1997 AMON flight, which exhibited spectral behavior attributed to particles from a polar stratospheric cloud event.
Log-Normality and Multifractal Analysis of Flame Surface Statistics
NASA Astrophysics Data System (ADS)
Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.
2013-11-01
The turbulent flame surface is typically highly wrinkled and folded at a multitude of scales controlled by various flame properties. It is useful if the information contained in this complex geometry can be projected onto a simpler regular geometry for the use of spectral, wavelet or multifractal analyses. Here we investigate local flame surface statistics of a turbulent flame expanding under constant pressure. First, the statistics of the local length ratio are experimentally obtained from high-speed Mie scattering images. For a spherically expanding flame, the length ratio on the measurement plane, at predefined equiangular sectors, is defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming an isotropic distribution of such flame segments, we convolve suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at corresponding area-ratio pdfs. Both pdfs are found to be nearly log-normally distributed and show self-similar behavior with increasing radius. Near log-normality and the rather intermittent behavior of the flame-length ratio suggest similarity with dissipation-rate quantities, which motivates multifractal analysis. Currently at Indian Institute of Science, India.
[Quantitative study of diesel/CNG buses exhaust particulate size distribution in a road tunnel].
Zhu, Chun; Zhang, Xu
2010-10-01
Vehicle emission is one of the main sources of fine/ultra-fine particles in many cities. This study first presents daily mean particle size distributions of a mixed diesel/CNG bus traffic flow, obtained from 4 days of consecutive real-world measurement in an Australian road tunnel. Emission factors (EFs) of the particle size distributions of diesel buses and CNG buses are obtained by MLR methods; the particle distributions of diesel buses and CNG buses are observed as a single accumulation mode and a nuclei mode, respectively. Particle size distributions of the mixed traffic flow are decomposed into two log-normal fitting curves for each 30 min interval mean scan; the degrees of fitting between the combined fitting curves and the corresponding in-situ scans for all 90 fitted scans range from 0.972 to 0.998. Finally, particle size distributions of diesel buses and CNG buses are quantified by statistical whisker-box charts. For the log-normal particle size distribution of diesel buses, accumulation mode diameters are 74.5-86.5 nm and geometric standard deviations are 1.88-2.05. For the log-normal particle size distribution of CNG buses, nuclei-mode diameters are 19.9-22.9 nm and geometric standard deviations are 1.27-1.3.
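The decomposition into two log-normal modes can be sketched with a least-squares fit of a two-mode model to a measured scan; the synthetic "measurement" and the starting parameters below are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(d, n_tot, d_g, sigma_g):
    # dN/dlogDp for one log-normal mode with geometric mean diameter d_g and geometric SD sigma_g.
    return (n_tot / (np.sqrt(2.0 * np.pi) * np.log10(sigma_g))
            * np.exp(-(np.log10(d) - np.log10(d_g))**2 / (2.0 * np.log10(sigma_g)**2)))

def two_modes(d, n1, d1, s1, n2, d2, s2):
    return lognormal_mode(d, n1, d1, s1) + lognormal_mode(d, n2, d2, s2)

d = np.logspace(np.log10(10.0), np.log10(400.0), 60)            # diameters in nm
measured = two_modes(d, 8e3, 21.0, 1.28, 5e3, 80.0, 1.95)       # nuclei-like + accumulation-like modes
popt, _ = curve_fit(two_modes, d, measured, p0=(1e4, 20.0, 1.3, 1e4, 90.0, 1.8))
print(np.round(popt, 2))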
Comparison of formation and fluid-column logs in a heterogeneous basalt aquifer
Paillet, F.L.; Williams, J.H.; Oki, D.S.; Knutson, K.D.
2002-01-01
Deep observation boreholes in the vicinity of active production wells in Honolulu, Hawaii, exhibit the anomalous condition that fluid-column electrical conductivity logs and apparent profiles of pore-water electrical conductivity derived from induction conductivity logs are nearly identical if a formation factor of 12.5 is assumed. This condition is documented in three boreholes where fluid-column logs clearly indicate the presence of strong borehole flow induced by withdrawal from partially penetrating water-supply wells. This result appears to contradict the basic principles of conductivity-log interpretation. Flow conditions in one of these boreholes was investigated in detail by obtaining flow profiles under two water production conditions using the electromagnetic flowmeter. The flow-log interpretation demonstrates that the fluid-column log resembles the induction log because the amount of inflow to the borehole increases systematically upward through the transition zone between deeper salt water and shallower fresh water. This condition allows the properties of the fluid column to approximate the properties of water entering the borehole as soon as the upflow stream encounters that producing zone. Because this condition occurs in all three boreholes investigated, the similarity of induction and fluid-column logs is probably not a coincidence, and may relate to aquifer response under the influence of pumping from production wells.
Opportunities to use bark polyphenols in specialty chemical markets
Richard W. Hemingway
1998-01-01
Current forestry practice in North America is to transport pulpwood and logs from the harvest site to the mill with the bark on the wood. Approximately 18 percent of the weight of logs from conifers such as southern pine is bark. The majority of this bark is burned as hog fuel, but its fuel value is low. When compared with natural gas at an average of $2.50/MBTU or...
Gudimetla, V S Rao; Holmes, Richard B; Smith, Carey; Needham, Gregory
2012-05-01
The effect of anisotropic Kolmogorov turbulence on the log-amplitude correlation function for plane-wave fields is investigated using analysis, numerical integration, and simulation. A new analytical expression for the log-amplitude correlation function is derived for anisotropic Kolmogorov turbulence. The analytic results, based on the Rytov approximation, agree well with a more general wave-optics simulation based on the Fresnel approximation as well as with numerical evaluations, for low and moderate strengths of turbulence. The new expression reduces correctly to previously published analytic expressions for isotropic turbulence. The final results indicate that, as asymmetry becomes greater, the Rytov variance deviates from that given by the standard formula. This deviation becomes greater with stronger turbulence, up to moderate turbulence strengths. The anisotropic effects on the log-amplitude correlation function are dominant when the separation of the points is within the Fresnel length. In the direction of stronger turbulence, there is an enhanced dip in the correlation function at a separation close to the Fresnel length. The dip is diminished in the weak-turbulence axis, suggesting that energy redistribution via focusing and defocusing is dominated by the strong-turbulence axis. The new analytical expression is useful when anisotropy is observed in relevant experiments. © 2012 Optical Society of America
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
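A minimal sketch of such a simulation is shown below: the same mean drive plus zero-mean noise drawn from normal, gamma, or uniform distributions (variances matched approximately), with interspike intervals collected for comparison; all numerical settings are assumptions of the example.

import numpy as np

def simulate_isi(noise, n_steps=200_000, drift=0.02, threshold=1.0, seed=8):
    rng = np.random.default_rng(seed)
    draw = {"normal":  lambda k: rng.normal(0.0, 0.05, k),
            "gamma":   lambda k: rng.gamma(1.0, 0.05, k) - 0.05,          # zero mean, positive skew
            "uniform": lambda k: rng.uniform(-0.0866, 0.0866, k)}[noise]  # matched variance (approx.)
    v, last_spike, isis = 0.0, 0, []
    for t, x in enumerate(drift + draw(n_steps)):   # integrate the input and fire at threshold
        v += x
        if v >= threshold:
            isis.append(t - last_spike)
            last_spike, v = t, 0.0
    return np.array(isis)

for noise in ("normal", "gamma", "uniform"):
    isi = simulate_isi(noise)
    print(noise, "mean ISI:", round(isi.mean(), 1), "CV:", round(isi.std() / isi.mean(), 2))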
Determining inert content in coal dust/rock dust mixture
Sapko, Michael J.; Ward, Jr., Jack A.
1989-01-01
A method and apparatus for determining the inert content of a coal dust and rock dust mixture uses a transparent window pressed against the mixture. An infrared light beam is directed through the window such that a portion of the infrared light beam is reflected from the mixture. The concentration of the reflected light is detected and a signal indicative of the reflected light is generated. A normalized value for the generated signal is determined according to the relationship φ = (log i_c − log i_c0) / (log i_c100 − log i_c0), where i_c0 = measured signal at 0% rock dust, i_c100 = measured signal at 100% rock dust, and i_c = measured signal of the mixture. This normalized value is then correlated to a predetermined relationship of φ to rock dust percentage to determine the rock dust content of the mixture. The rock dust content is displayed where the percentage is between 30 and 100%, and an indication of out-of-range is displayed where the rock dust percent is less than 30%. Preferably, the rock dust percentage (RD%) is calculated from the predetermined relationship RD% = 100 + 30 log φ. Where the dust mixture initially includes moisture, the dust mixture is dried before measuring by use of 8 to 12 mesh molecular sieves which are shaken with the dust mixture and subsequently screened from the dust mixture.
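A short sketch of the normalization and calibration relationships quoted above; the signal values passed in below are hypothetical calibration and mixture readings, and the out-of-range handling simply mirrors the 30-100% display range described.

```python
import math

def rock_dust_percent(i_c, i_c0, i_c100):
    """Normalized reflectance phi and rock dust percentage from the relations
    phi = (log i_c - log i_c0) / (log i_c100 - log i_c0) and RD% = 100 + 30*log10(phi)."""
    phi = (math.log10(i_c) - math.log10(i_c0)) / (math.log10(i_c100) - math.log10(i_c0))
    rd = 100.0 + 30.0 * math.log10(phi)
    return (rd, phi) if rd >= 30.0 else (None, phi)   # below 30% is reported as out-of-range

# Hypothetical calibration signals and one mixture reading (arbitrary units).
rd, phi = rock_dust_percent(i_c=620.0, i_c0=100.0, i_c100=1000.0)
print(f"phi = {phi:.3f}, rock dust ≈ {rd:.1f} %")
```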
Pei Li; Jing He; A. Lynn Abbott; Daniel L. Schmoldt
1996-01-01
This paper analyses computed tomography (CT) images of hardwood logs, with the goal of locating internal defects. The ability to detect and identify defects automatically is a critical component of efficiency improvements for future sawmills and veneer mills. This paper describes an approach in which 1) histogram equalization is used during preprocessing to normalize...
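The truncated sentence above mentions histogram equalization during preprocessing; as an illustration, a generic numpy histogram-equalization routine is sketched below. This is not the authors' implementation, and it assumes 8-bit gray-scale slices.

```python
import numpy as np

def equalize_histogram(image, levels=256):
    """Map gray levels through the normalized cumulative histogram so that the
    output occupies the full dynamic range more uniformly."""
    hist, _ = np.histogram(image.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(np.uint8)
    return lut[image]

# Synthetic low-contrast "CT slice" confined to gray levels 100-139 (illustrative only).
slice_ = np.random.default_rng(1).integers(100, 140, size=(256, 256), dtype=np.uint8)
eq = equalize_histogram(slice_)
print("before:", slice_.min(), slice_.max(), " after:", eq.min(), eq.max())
```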
NASA Technical Reports Server (NTRS)
Goldhirsh, Julius; Gebo, Norman; Rowland, John
1988-01-01
In this effort are described cumulative rain rate distributions for a network of nine tipping bucket rain gauge systems located in the mid-Atlantic coast region in the vicinity of the NASA Wallops Flight Facility, Wallops Island, Virginia. The rain gauges are situated within a gridded region of dimensions of 47 km east-west by 70 km north-south. Distributions are presented for the individual site measurements and the network average for the year period June 1, 1986 through May 31, 1987. A previous six year average distribution derived from measurements at one of the site locations is also presented. Comparisons are given of the network average, the CCIR (International Radio Consultative Committee) climatic zone, and the CCIR functional model distributions, the latter of which approximates a log normal at the lower rain rate and a gamma function at the higher rates.
Dynamic design of ecological monitoring networks for non-Gaussian spatio-temporal data
Wikle, C.K.; Royle, J. Andrew
2005-01-01
Many ecological processes exhibit spatial structure that changes over time in a coherent, dynamical fashion. This dynamical component is often ignored in the design of spatial monitoring networks. Furthermore, ecological variables related to processes such as habitat are often non-Gaussian (e.g. Poisson or log-normal). We demonstrate that a simulation-based design approach can be used in settings where the data distribution is from a spatio-temporal exponential family. The key random component in the conditional mean function from this distribution is then a spatio-temporal dynamic process. Given the computational burden of estimating the expected utility of various designs in this setting, we utilize an extended Kalman filter approximation to facilitate implementation. The approach is motivated by, and demonstrated on, the problem of selecting sampling locations to estimate July brood counts in the prairie pothole region of the U.S.
Resistance distribution in the hopping percolation model.
Strelniker, Yakov M; Havlin, Shlomo; Berkovits, Richard; Frydman, Aviad
2005-07-01
We study the distribution function P(ρ) of the effective resistance ρ in two- and three-dimensional random resistor networks of linear size L in the hopping percolation model. In this model each bond has a conductivity taken from an exponential form σ ∝ exp(−κr), where κ is a measure of disorder and r is a random number, 0 ≤ r ≤ 1. We find that in both the usual strong-disorder regime L/κ^ν > 1 (not sensitive to removal of any single bond) and the extreme-disorder regime L/κ^ν < 1 (very sensitive to such a removal) the distribution depends only on L/κ^ν and can be well approximated by a log-normal function with dispersion b κ^ν/L, where b is a coefficient which depends on the type of lattice, and ν is the correlation critical exponent.
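A small numerical sketch of the model as described: bond conductances σ = exp(−κr) with r uniform on [0, 1] on a square lattice, and the effective resistance between two bus bars obtained by solving the Kirchhoff (graph Laplacian) equations. The lattice size, κ, and number of realizations below are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(2)

def effective_resistance(L=16, kappa=6.0):
    """Effective resistance between left and right bus bars of an L x L lattice whose
    bond conductances are sigma = exp(-kappa * r), with r drawn uniformly from [0, 1]."""
    n = L * L
    idx = lambda i, j: i * L + j
    G = np.zeros((n, n))                     # graph Laplacian built from bond conductances
    def bond(a, b):
        g = np.exp(-kappa * rng.random())
        G[a, a] += g; G[b, b] += g; G[a, b] -= g; G[b, a] -= g
    for i in range(L):
        for j in range(L):
            if j + 1 < L: bond(idx(i, j), idx(i, j + 1))
            if i + 1 < L: bond(idx(i, j), idx(i + 1, j))
    left  = [idx(i, 0) for i in range(L)]
    right = [idx(i, L - 1) for i in range(L)]
    free  = [k for k in range(n) if k not in left and k not in right]
    v = np.zeros(n); v[left] = 1.0           # unit potential drop across the sample
    rhs = -G[np.ix_(free, left)].sum(axis=1)
    v[free] = np.linalg.solve(G[np.ix_(free, free)], rhs)
    i_total = (G[left, :] @ v).sum()         # current injected through the left bus bar
    return 1.0 / i_total

log_rho = np.log([effective_resistance() for _ in range(200)])
print(f"mean log(rho) = {log_rho.mean():.2f}, std = {log_rho.std():.2f}")
```

Plotting a histogram of log ρ from runs like these is the quickest way to see the approximately log-normal shape reported above.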
The Italian primary school-size distribution and the city-size: a complex nexus
NASA Astrophysics Data System (ADS)
Belmonte, Alessandro; di Clemente, Riccardo; Buldyrev, Sergey V.
2014-06-01
We characterize the statistical law according to which Italian primary school sizes are distributed. We find that school size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape suggesting some source of heterogeneity in the school organization that we uncover by an in-depth analysis of the relation between school size and city size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed, shedding light on the relation between policy and geographical features.
Immiscible impact dynamics of droplets onto millimetric films
NASA Astrophysics Data System (ADS)
Shaikh, S.; Toyofuku, G.; Hoang, R.; Marston, J. O.
2018-01-01
The impact of liquid droplets onto a film of an immiscible liquid is studied experimentally across a broad range of parameters [Re = O(10^1-10^3), We = O(10^2-10^3)] with the aid of high-speed photography and image analysis. Above a critical impact parameter, Re^{1/2}We^{1/4} ≈ 100, the droplet fragments into multiple satellite droplets, which typically occurs as the result of a fingering instability. Statistical analysis indicates that the satellite droplets are approximately log-normally distributed, in agreement with some previous studies and the theoretical predictions of Wu (Prob Eng Mech 18:241-249, 2003). However, in contrast to a recent study by Lhuissier et al. (Phys Rev Lett 110:264503, 2013), we find that it is the modal satellite diameter, not the mean diameter, that scales inversely with the impact speed (or Weber number) and that the dependence is d_{mod} ~ We^{-1/4}.
Combining uncertainty factors in deriving human exposure levels of noncarcinogenic toxicants.
Kodell, R L; Gaylor, D W
1999-01-01
Acceptable levels of human exposure to noncarcinogenic toxicants in environmental and occupational settings generally are derived by reducing experimental no-observed-adverse-effect levels (NOAELs) or benchmark doses (BDs) by a product of uncertainty factors (Barnes and Dourson, Ref. 1). These factors are presumed to ensure safety by accounting for uncertainty in dose extrapolation, uncertainty in duration extrapolation, differential sensitivity between humans and animals, and differential sensitivity among humans. The common default value for each uncertainty factor is 10. This paper shows how estimates of means and standard deviations of the approximately log-normal distributions of individual uncertainty factors can be used to estimate percentiles of the distribution of the product of uncertainty factors. An appropriately selected upper percentile, for example, 95th or 99th, of the distribution of the product can be used as a combined uncertainty factor to replace the conventional product of default factors.
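A minimal sketch of the calculation described: if the individual uncertainty factors are approximately log-normal and independent, their product is log-normal with log-mean and log-variance equal to the sums, and an upper percentile of that product can replace the default product of 10s. The log10 means and standard deviations below are illustrative assumptions, not values from the paper.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical log10 means and standard deviations of four uncertainty factors
# (e.g. interspecies, intraspecies, duration, dose extrapolation) -- illustrative only.
factors = [(0.6, 0.25), (0.6, 0.30), (0.4, 0.20), (0.3, 0.20)]   # (mean, sd) of log10(UF)

mu = sum(m for m, s in factors)               # log10 mean of the product
sd = sqrt(sum(s ** 2 for m, s in factors))    # log10 sd of the product (independence assumed)
for p in (0.95, 0.99):
    z = NormalDist().inv_cdf(p)
    print(f"{int(p * 100)}th percentile combined UF ≈ {10 ** (mu + z * sd):,.0f}")
print(f"default product of four 10s = {10 ** 4:,}")
```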
NASA Technical Reports Server (NTRS)
1977-01-01
ELDEC Corp., Lynwood, Wash., built a weight-recording system for logging trucks based on electronic technology the company acquired as a subcontractor on space programs such as Apollo and the Saturn launch vehicle. ELDEC employed its space-derived expertise to develop a computerized weight-and-balance system for Lockheed's TriStar jetliner. ELDEC then adapted the airliner system to a similar product for logging trucks. Electronic equipment computes tractor weight, trailer weight and overall gross weight, and this information is presented to the driver by an instrument in the cab. The system costs $2,000 but it pays for itself in a single year. It allows operators to use a truck's hauling capacity more efficiently since the load can be maximized without exceeding legal weight limits for highway travel. Approximately 2,000 logging trucks now use the system.
Geophysical Log Database for the Mississippi Embayment Regional Aquifer Study (MERAS)
Hart, Rheannon M.; Clark, Brian R.
2008-01-01
The Mississippi Embayment Regional Aquifer Study (MERAS) is an investigation of ground-water availability and sustainability within the Mississippi embayment as part of the U.S. Geological Survey Ground-Water Resources Program. The MERAS area consists of approximately 70,000 square miles and encompasses parts of eight states including Alabama, Arkansas, Illinois, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. More than 2,600 geophysical logs of test holes and wells within the MERAS area were compiled into a database and were used to develop a digital hydrogeologic framework from land surface to the top of the Midway Group of upper Paleocene age. The purpose of this report is to document, present, and summarize the geophysical log database, as well as to preserve the geophysical logs in a digital image format for online access.
2012-01-01
Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
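A quick numerical check of the binormal relationship discussed above, using the standard identity that for normally distributed markers the c-statistic equals Φ of the mean difference divided by the root of the summed variances; the parameter values are illustrative, and the empirical c-statistic is computed from the Mann-Whitney U statistic.

```python
import numpy as np
from scipy.stats import norm, mannwhitneyu

rng = np.random.default_rng(3)
mu0, mu1, s0, s1 = 0.0, 1.0, 1.0, 1.5          # illustrative binormal parameters

# Predicted c-statistic under binormality (standardized difference of the marker).
c_pred = norm.cdf((mu1 - mu0) / np.hypot(s0, s1))

# Empirical c-statistic = Mann-Whitney U / (n0 * n1).
x0, x1 = rng.normal(mu0, s0, 20_000), rng.normal(mu1, s1, 20_000)
u, _ = mannwhitneyu(x1, x0, alternative="greater")
c_emp = u / (len(x0) * len(x1))
print(f"predicted c = {c_pred:.3f}, empirical c = {c_emp:.3f}")
```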
Monocular oral reading after treatment of dense congenital unilateral cataract
Birch, Eileen E.; Cheng, Christina; Christina, V; Stager, David R.
2010-01-01
Background Good long-term visual acuity outcomes for children with dense congenital unilateral cataracts have been reported following early surgery and good compliance with postoperative amblyopia therapy. However, treated eyes rarely achieve normal visual acuity and there has been no formal evaluation of the utility of the treated eye for reading. Methods Eighteen children previously treated for dense congenital unilateral cataract were tested monocularly with the Gray Oral Reading Test, 4th edition (GORT-4) at 7 to 13 years of age using two passages for each eye, one at grade level and one at +1 above grade level. In addition, right eyes of 55 normal children age 7 to 13 served as a control group. The GORT-4 assesses reading rate, accuracy, fluency, and comprehension. Results Visual acuity of treated eyes ranged from 0.1 to 2.0 logMAR and of fellow eyes from −0.1 to 0.2 logMAR. Treated eyes scored significantly lower than fellow and normal control eyes on all scales at grade level and at +1 above grade level. Monocular reading rate, accuracy, fluency, and comprehension were correlated with visual acuity of treated eyes (rs = −0.575 to −0.875, p < 0.005). Treated eyes with 0.1-0.3 logMAR visual acuity did not differ from fellow or normal control eyes in rate, accuracy, fluency, or comprehension when reading at grade level or at +1 above grade level. Fellow eyes did not differ from normal controls on any reading scale. Conclusions Excellent visual acuity outcomes following treatment of dense congenital unilateral cataracts are associated with normal reading ability of the treated eye in school-age children. PMID:20603057
Diaper area skin microflora of normal children and children with atopic dermatitis.
Keswick, B H; Seymour, J L; Milligan, M C
1987-01-01
In vitro studies established that neither cloth nor disposable diapers demonstrably contributed to the growth of Escherichia coli, Proteus vulgaris, Staphylococcus aureus, or Candida albicans when urine was present as a growth medium. In a clinical study of 166 children, the microbial skin flora of children with atopic dermatitis was compared with the flora of children with normal skin to determine the influence of diaper type. No biologically significant differences were detected between groups wearing disposable or cloth diapers in terms of frequency of isolation or log mean recovery of selected skin flora. Repeated isolation of S. aureus correlated with atopic dermatitis. The log mean recovery of S. aureus was higher in the atopic groups. The effects of each diaper type on skin microflora were equivalent in the normal and atopic populations. PMID:3546360
Stick-slip behavior in a continuum-granular experiment.
Geller, Drew A; Ecke, Robert E; Dahmen, Karin A; Backhaus, Scott
2015-12-01
We report moment distribution results from a laboratory experiment, similar in character to an isolated strike-slip earthquake fault, consisting of sheared elastic plates separated by a narrow gap filled with a two-dimensional granular medium. Local measurement of strain displacements of the plates at 203 spatial points located adjacent to the gap allows direct determination of the event moments and their spatial and temporal distributions. We show that events consist of spatially coherent, larger motions and spatially extended (noncoherent), smaller events. The noncoherent events have a probability distribution of event moment consistent with an M^{-3/2} power law scaling with Poisson-distributed recurrence times. Coherent events have a log-normal moment distribution and mean temporal recurrence. As the applied normal pressure increases, there are more coherent events and their log-normal distribution broadens and shifts to larger average moment.
Castro-Rosas, Javier; Santos López, Eva María; Gómez-Aldapa, Carlos Alberto; González Ramírez, Cesar Abelardo; Villagomez-Ibarra, José Roberto; Gordillo-Martínez, Alberto José; López, Angélica Villarruel; del Refugio Torres-Vitela, M
2010-08-01
The incidence of coliform bacteria (CB), thermotolerant coliforms (TC), Escherichia coli, and Salmonella was determined for zucchini squash fruit. In addition, the behavior of four serotypes of Salmonella and a cocktail of three E. coli strains on whole and sliced zucchini squash at 25+/-2 degrees C and 3 to 5 degrees C was tested. Squash fruit was collected in the markets of Pachuca city, Hidalgo State, Mexico. CB, TC, E. coli, and Salmonella were detected in 100, 70, 62, and 10% of the produce, respectively. The concentration ranged from 3.8 to 7.4 log CFU per sample for CB, and >3 to 1,100 most probable number per sample for TC and E. coli. On whole fruit stored at 25+/-2 degrees C or 3 to 5 degrees C, no growth was observed for any of the tested microorganisms or cocktails thereof. After 15 days at 25+/-2 degrees C, the tested Salmonella serotypes had decreased from an initial inoculum level of 7 log CFU to <1 log, and at 3 to 5 degrees C they decreased to approximately 2 log. Survival of E. coli was significantly greater than for the Salmonella strains at the same times and temperatures; after 15 days, at 25+/-2 degrees C E. coli cocktail strains had decreased to 3.4 log CFU per fruit and at 3 to 5 degrees C they decreased to 3.6 log CFU per fruit. Both the Salmonella serotypes and E. coli strains grew when inoculated onto sliced squash: after 24 h at 25+/-2 degrees C, both bacteria had grown to approximately 6.5 log CFU per slice. At 3 to 5 degrees C, the bacterial growth was inhibited. The squash may be an important factor contributing to the endemicity of Salmonella in Mexico.
Moulin, Bertrand; Schenk, Edward R.; Hupp, Cliff R.
2011-01-01
A 177 river km georeferenced aerial survey of in-channel large wood (LW) on the lower Roanoke River, NC was conducted to determine LW dynamics and distributions on an eastern USA low-gradient large river. Results indicate a system with approximately 75% of the LW available for transport either as detached individual LW or as LW in log jams. There were approximately 55 individual LW per river km and another 59 pieces in log jams per river km. Individual LW is a product of bank erosion (73% is produced through erosion) and is isolated on the mid and upper banks at low flow. This LW does not appear to be important for either aquatic habitat or as a human risk. Log jams rest near or at water level making them a factor in bank complexity in an otherwise homogenous fine-grained channel. A segmentation test was performed using LW frequency by river km to detect breaks in longitudinal distribution and to define homogeneous reaches of LW frequency. Homogeneous reaches were then analyzed to determine their relationship to bank height, channel width/depth, sinuosity, and gradient. Results show that log jams are a product of LW transport and occur more frequently in areas with high snag concentrations, low to intermediate bank heights, high sinuosity, high local LW recruitment rates, and narrow channel widths. The largest concentration of log jams (21.5 log jams/km) occurs in an actively eroding reach. Log jam concentrations downstream of this reach are lower due to a loss of river competency as the channel reaches sea level and the concurrent development of unvegetated mudflats separating the active channel from the floodplain forest. Substantial LW transport occurs on this low-gradient, dam-regulated large river; this study, paired with future research on transport mechanisms, should provide resource managers and policymakers with options to better manage aquatic habitat while mitigating possible negative impacts to human interests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattler, A.R.
1996-04-01
Six boreholes were drilled during the geologic characterization and diagnostics of the Weeks Island sinkhole that is over the two-tiered salt mine which was converted for oil storage by the US Strategic Petroleum Reserve. These holes were drilled to provide for geologic characterization of the Weeks Island Salt Dome and its overburden in the immediate vicinity of the sinkhole (mainly through logs and core); to establish a crosswell configuration for seismic tomography; to establish locations for hydrocarbon detection and tracer injection; and to provide direct observations of sinkhole geometry and material properties. Specific objectives of the logging program were to: (1) identify the top of and the physical state of the salt dome; (2) identify the water table; (3) obtain a relative salinity profile in the aquifer within the alluvium, which ranges from the water table directly to the top of the Weeks Island salt dome; and (4) identify a reflecting horizon seen on seismic profiles over this salt dome. Natural gamma, neutron, density, sonic, resistivity and caliper logs were run. Neutron and density logs were run from inside the well casing because of the extremely unstable condition of the deltaic alluvium overburden above the salt dome. The logging program provided important information about the salt dome and the overburden in that (1) the top of the salt dome was identified at approximately 189 ft bgl (103 ft msl), and the top of the dome contains relatively few fractures; (2) the water table is approximately 1 ft msl; (3) this aquifer appears to become steadily more saline with depth; and (4) the water saturation of much of the alluvium over the salt dome is shown to be influenced by the prevalent heavy rainfall. This logging program, a part of the sinkhole diagnostics, provides unique information about this salt dome and the overburden.
Wright, Kathryn M; Holden, Nicola J
2018-05-20
Microgreens are edible plants used in food preparation for their appealing flavours and colours. They are grown beyond the point of harvest of sprouted seeds, and normally include the cotyledons and first true leaves. Their method of production is similar to sprouted seeds, which is known to be favourable for growth of microbial pathogens, although there is little data on the potential of food-borne pathogens such as Shigatoxigenic Escherichia coli (STEC) to colonise these plants. We found colonisation of nine different species of microgreen plants by STEC (isolate Sakai, stx-), with high levels of growth over five days, of approximately 5 orders of magnitude, for plants propagated at 21 °C. STEC (Sakai) formed extensive colonies on external tissue, with some evidence for internalisation via stomatal pores. Several factors impacted the level of colonisation: (1) plant tissue type such that for broccoli microgreens, the highest levels of STEC (Sakai) occurred on cotyledons compared to the true leaf and hypocotyl; (2) the route of contamination such that higher levels occurred with contaminated irrigation water compared to direct seed contamination; (3) inoculation dose, although only at low levels of inoculation (3 log10) compared to medium (5 log10) or high (7 log10) levels; (4) environmental factors, including to some extent humidity, but also plant growth substrate types. It was also evident that a starvation response was induced in STEC (Sakai) in low-nutrient plant irrigation medium. Together these data show that microgreens represent a potential hazard of contamination by food-borne pathogens, and to mitigate the risk, they should be considered in the same manner as sprouted seeds. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
Statistical analysis of variability properties of the Kepler blazar W2R 1926+42
NASA Astrophysics Data System (ADS)
Li, Yutong; Hu, Shaoming; Wiita, Paul J.; Gupta, Alok C.
2018-04-01
We analyzed Kepler light curves of the blazar W2R 1926+42 that provided nearly continuous coverage from quarter 11 through quarter 17 (589 days between 2011 and 2013) and examined some of their flux variability properties. We investigate the possibility that the light curve is dominated by a large number of individual flares and adopt exponential rise and decay models to investigate the symmetry properties of flares. We found that those variations of W2R 1926+42 are predominantly asymmetric with weak tendencies toward positive asymmetry (rapid rise and slow decay). The durations (D) and the amplitudes (F0) of flares can be fit with log-normal distributions. The energy (E) of each flare is also estimated for the first time. There are positive correlations between log D and log E with a slope of 1.36, and between log F0 and log E with a slope of 1.12. Lomb-Scargle periodograms are used to estimate the power spectral density (PSD) shape. It is well described by a power law with an index ranging between -1.1 and -1.5. The sizes of the emission regions, R, are estimated to be in the range of 1.1 × 10^15 cm to 6.6 × 10^16 cm. The flare asymmetry is difficult to explain by a light travel time effect but may be caused by differences between the timescales for acceleration and dissipation of high-energy particles in the relativistic jet. A jet-in-jet model also could produce the observed log-normal distributions.
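A hedged sketch of the PSD-slope estimation step: a Lomb-Scargle periodogram of an unevenly sampled series followed by a log-log linear fit. The light curve below is synthetic red noise, not the Kepler photometry, so the fitted index will not match the -1.1 to -1.5 range quoted above.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(4)

# Synthetic, unevenly sampled red-noise light curve (time in days); purely illustrative.
t = np.sort(rng.uniform(0.0, 589.0, 4000))
flux = np.cumsum(rng.normal(size=t.size))          # integrated noise gives a steep "red" spectrum
flux -= flux.mean()

freqs = np.logspace(-2.0, 0.0, 200)                # cycles per day
power = lombscargle(t, flux, 2.0 * np.pi * freqs)  # lombscargle expects angular frequencies

index = np.polyfit(np.log10(freqs), np.log10(power), 1)[0]
print(f"fitted PSD power-law index ≈ {index:.2f}")
```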
Study of Magnitudes, Seismicity and Earthquake Detectability Using a Global Network
1984-06-01
The stations are then further classified into three groups: (A) stations reporting a P-detection with an associated log(A/T) value ... detections, nondetections and reported log(A/T) values for the j'th event, given that its true magnitude is μ_j ... the subset for which log(A/T) was reported. As a first order approximation ...
Predmore, Ashley; Li, Jianrong
2011-01-01
Fruits and vegetables are major vehicles for transmission of food-borne enteric viruses since they are easily contaminated at pre- and postharvest stages and they undergo little or no processing. However, commonly used sanitizers are relatively ineffective for removing human norovirus surrogates from fresh produce. In this study, we systematically evaluated the effectiveness of surfactants on removal of a human norovirus surrogate, murine norovirus 1 (MNV-1), from fresh produce. We showed that a panel of surfactants, including sodium dodecyl sulfate (SDS), Nonidet P-40 (NP-40), Triton X-100, and polysorbates, significantly enhanced the removal of viruses from fresh fruits and vegetables. While tap water alone and chlorine solution (200 ppm) gave only <1.2-log reductions in virus titer in all fresh produce, a solution containing 50 ppm of surfactant was able to achieve a 3-log reduction in virus titer in strawberries and an approximately 2-log reduction in virus titer in lettuce, cabbage, and raspberries. Moreover, a reduction of approximately 3 logs was observed in all the tested fresh produce after sanitization with a solution containing a combination of 50 ppm of each surfactant and 200 ppm of chlorine. Taken together, our results demonstrate that the combination of a surfactant with a commonly used sanitizer enhanced the efficiency in removing viruses from fresh produce by approximately 100 times. Since SDS is an FDA-approved food additive and polysorbates are recognized by the FDA as GRAS (generally recognized as safe) products, implementation of this novel sanitization strategy would be a feasible approach for efficient reduction of the virus load in fresh produce. PMID:21622782
Measuring Resistance to Change at the Within-Session Level
Tonneau, François; Ríos, Américo; Cabrera, Felipe
2006-01-01
Resistance to change is often studied by measuring response rate in various components of a multiple schedule. Response rate in each component is normalized (that is, divided by its baseline level) and then log-transformed. Differential resistance to change is demonstrated if the normalized, log-transformed response rate in one component decreases more slowly than in another component. A problem with normalization, however, is that it can produce artifactual results if the relation between baseline level and disruption is not multiplicative. One way to address this issue is to fit specific models of disruption to untransformed response rates and evaluate whether or not a multiplicative model accounts for the data. Here we present such a test of resistance to change, using within-session response patterns in rats as a data base for fitting models of disruption. By analyzing response rate at a within-session level, we were able to confirm a central prediction of the resistance-to-change framework while discarding normalization artifacts as a plausible explanation of our results. PMID:16903495
Potential health effects of indoor radon exposure.
Radford, E P
1985-01-01
Radon-222 is a ubiquitous noble gas arising from decay of radium-226 normally present in the earth's crust. Alpha radiation from inhaled short-lived daughters of radon readily irradiates human bronchial epithelium, and there is now good evidence of excess risk of lung cancer in underground miners exposed to higher concentrations. In homes, radon levels are highly variable, showing approximately log-normal distributions and often a small fraction of homes with high concentrations of radon and radon daughters. Factors affecting indoor concentrations include type of bedrock under dwellings, house foundation characteristics, radon dissolved in artesian water, and ventilation and degree of air movement in living spaces. Despite much recent work, exposures to radon daughters by the general public are not well defined. From application of risk assessments in miners to home conditions, it appears that about 25% or more of lung cancers among nonsmokers over the age of 60, and about 5% in smokers, may be attributable to exposure to radon daughters at home. It may be necessary to take remedial action to reduce this hazard in those dwellings with elevated levels of radon, and new construction should take account of this problem. PMID:4085431
Rapid pupil-based assessment of glaucomatous damage.
Chen, Yanjun; Wyatt, Harry J; Swanson, William H; Dul, Mitchell W
2008-06-01
To investigate the ability of a technique employing pupillometry and functionally-shaped stimuli to assess loss of visual function due to glaucomatous optic neuropathy. Pairs of large stimuli, mirror images about the horizontal meridian, were displayed alternately in the upper and lower visual field. Pupil diameter was recorded and analyzed in terms of the "contrast balance" (relative sensitivity to the upper and lower stimuli), and the pupil constriction amplitude to upper and lower stimuli separately. A group of 40 patients with glaucoma was tested twice in a first session, and twice more in a second session, 1 to 3 weeks later. A group of 40 normal subjects was tested with the same protocol. Results for the normal subjects indicated functional symmetry in upper/lower retina, on average. Contrast balance results for the patients with glaucoma differed from normal: half the normal subjects had contrast balance within 0.06 log unit of equality and 80% had contrast balance within 0.1 log unit. Half the patients had contrast balances more than 0.1 log unit from equality. Patient contrast balances were moderately correlated with predictions from perimetric data (r = 0.37, p < 0.00001). Contrast balances correctly classified visual field damage in 28 patients (70%), and response amplitudes correctly classified 24 patients (60%). When contrast balance and response amplitude were combined, receiver operating characteristic area for discriminating glaucoma from normal was 0.83. Pupillary evaluation of retinal asymmetry provides a rapid method for detecting and classifying visual field defects. In this patient population, classification agreed with perimetry in 70% of eyes.
Investigating the Metallicity–Mixing-length Relation
NASA Astrophysics Data System (ADS)
Viani, Lucas S.; Basu, Sarbani; Joel Ong J., M.; Bonaca, Ana; Chaplin, William J.
2018-05-01
Stellar models typically use the mixing-length approximation as a way to implement convection in a simplified manner. While conventionally the value of the mixing-length parameter, α, used is the solar-calibrated value, many studies have shown that other values of α are needed to properly model stars. This uncertainty in the value of the mixing-length parameter is a major source of error in stellar models and isochrones. Using asteroseismic data, we determine the value of the mixing-length parameter required to properly model a set of about 450 stars ranging in log g, T_eff, and [Fe/H]. The relationship between the value of α required and the properties of the star is then investigated. For Eddington atmosphere, non-diffusion models, we find that the value of α can be approximated by a linear model, in the form of α/α_⊙ = 5.426 − 0.101 log(g) − 1.071 log(T_eff) + 0.437 [Fe/H]. This process is repeated using a variety of model physics, as well as compared with previous studies and results from 3D convective simulations.
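The fitted relation quoted above can be evaluated directly; the stellar parameters used below are illustrative (roughly solar values and an assumed metal-poor subgiant), not stars from the sample.

```python
from math import log10

def alpha_ratio(logg, teff, feh):
    """Mixing-length parameter relative to solar, from the linear fit quoted above:
    alpha/alpha_sun = 5.426 - 0.101*log(g) - 1.071*log(Teff) + 0.437*[Fe/H]."""
    return 5.426 - 0.101 * logg - 1.071 * log10(teff) + 0.437 * feh

# Illustrative inputs (assumed values, not from the paper's sample).
print(f"sun-like star      : alpha/alpha_sun = {alpha_ratio(4.44, 5772, 0.0):.2f}")
print(f"metal-poor subgiant: alpha/alpha_sun = {alpha_ratio(3.80, 5200, -1.0):.2f}")
```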
Sprenger, C; Lorenzen, G; Grunert, A; Ronghang, M; Dizer, H; Selinka, H-C; Girones, R; Lopez-Pila, J M; Mittal, A K; Szewzyk, R
2014-06-01
Emerging countries frequently afflicted by waterborne diseases require safe and cost-efficient production of drinking water, a task that is becoming more challenging as many rivers carry a high degree of pollution. A study was conducted on the banks of the Yamuna River, Delhi, India, to ascertain if riverbank filtration (RBF) can significantly improve the quality of the highly polluted surface water in terms of virus removal (coliphages, enteric viruses). Human adenoviruses and noroviruses, both present in the Yamuna River in the range of 10^5 genomes/100 mL, were undetectable after 50 m infiltration and approximately 119 days of underground passage. Indigenous somatic coliphages, used as surrogates of human pathogenic viruses, underwent approximately 5 log10 removal after only 3.8 m of RBF. The initial removal after 1 m was 3.3 log10, and the removal between 1 and 2.4 m and between 2.4 and 3.8 m was 0.7 log10 each. RBF is therefore an excellent candidate to improve the water situation in emerging countries with respect to virus removal.
Power laws in microrheology experiments on living cells: Comparative analysis and modeling.
Balland, Martial; Desprat, Nicolas; Icard, Delphine; Féréol, Sophie; Asnacios, Atef; Browaeys, Julien; Hénon, Sylvie; Gallet, François
2006-08-01
We compare and synthesize the results of two microrheological experiments on the cytoskeleton of single cells. In the first one, the creep function J(t) of a cell stretched between two glass plates is measured after applying a constant force step. In the second one, a microbead specifically bound to transmembrane receptors is driven by an oscillating optical trap, and the viscoelastic coefficient G_e(ω) is retrieved. Both J(t) and G_e(ω) exhibit power law behaviors: J(t) = A_0 (t/t_0)^α and |G_e(ω)| = G_0 (ω/ω_0)^α, with the same exponent α ≈ 0.2. This power law behavior is very robust; α is distributed over a narrow range, and shows almost no dependence on the cell type, on the nature of the protein complex which transmits the mechanical stress, nor on the typical length scale of the experiment. On the contrary, the prefactors A_0 and G_0 appear very sensitive to these parameters. Whereas the exponents α are normally distributed over the cell population, the prefactors A_0 and G_0 follow a log-normal repartition. These results are compared with other data published in the literature. We propose a global interpretation, based on a semiphenomenological model, which involves a broad distribution of relaxation times in the system. The model predicts the power law behavior and the statistical repartition of the mechanical parameters, as experimentally observed for the cells. Moreover, it leads to an estimate of the largest response time in the cytoskeletal network: τ_m ≈ 1000 s.
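A minimal sketch of how a power-law creep exponent and prefactor of the form J(t) = A_0 (t/t_0)^α are recovered by a least-squares fit in log-log coordinates; the data below are synthetic with multiplicative noise, not measurements.

```python
import numpy as np

rng = np.random.default_rng(5)
t0, A0, alpha = 1.0, 0.01, 0.2                                   # "true" illustrative values
t = np.logspace(-1, 2, 40)                                       # seconds
J = A0 * (t / t0) ** alpha * rng.lognormal(0.0, 0.05, t.size)    # creep data with multiplicative noise

slope, intercept = np.polyfit(np.log(t / t0), np.log(J), 1)      # linear fit in log-log coordinates
print(f"alpha ≈ {slope:.3f}, A0 ≈ {np.exp(intercept):.4f}")
```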
Impact of logging on aboveground biomass stocks in lowland rain forest, Papua New Guinea.
Bryan, Jane; Shearman, Phil; Ash, Julian; Kirkpatrick, J B
2010-12-01
Greenhouse-gas emissions resulting from logging are poorly quantified across the tropics. There is a need for robust measurement of rain forest biomass and the impacts of logging from which carbon losses can be reliably estimated at regional and global scales. We used a modified Bitterlich plotless technique to measure aboveground live biomass at six unlogged and six logged rain forest areas (coupes) across two approximately 3000-ha regions at the Makapa concession in lowland Papua New Guinea. "Reduced-impact logging" is practiced at Makapa. We found the mean unlogged aboveground biomass in the two regions to be 192.96 +/- 4.44 Mg/ha and 252.92 +/- 7.00 Mg/ha (mean +/- SE), which was reduced by logging to 146.92 +/- 4.58 Mg/ha and 158.84 +/- 4.16, respectively. Killed biomass was not a fixed proportion, but varied with unlogged biomass, with 24% killed in the lower-biomass region, and 37% in the higher-biomass region. Across the two regions logging resulted in a mean aboveground carbon loss of 35 +/- 2.8 Mg/ha. The plotless technique proved efficient at estimating mean aboveground biomass and logging damage. We conclude that substantial bias is likely to occur within biomass estimates derived from single unreplicated plots.
NASA Astrophysics Data System (ADS)
Mohsin, Muhammad; Mu, Yongtong; Memon, Aamir Mahmood; Kalhoro, Muhammad Talib; Shah, Syed Baber Hussain
2017-07-01
Pakistani marine waters are under an open access regime. Due to poor management and policy implications, blind fishing continues, which may result in ecological as well as economic losses. Thus, it is of utmost importance to estimate fishery resources before harvesting. In this study, catch and effort data, 1996-2009, of the Kiddi shrimp Parapenaeopsis stylifera fishery from Pakistani marine waters were analyzed by using specialized fishery software in order to know the fishery stock status of this commercially important shrimp. Maximum, minimum and average capture production of P. stylifera were observed as 15 912 metric tons (mt) (1997), 9 438 mt (2009) and 11 667 mt/a. Two stock assessment tools, viz. CEDA (catch and effort data analysis) and ASPIC (a stock production model incorporating covariates), were used to compute the MSY (maximum sustainable yield) of this organism. In CEDA, three surplus production models, Fox, Schaefer and Pella-Tomlinson, along with three error assumptions, log, log normal and gamma, were used. For initial proportion (IP) 0.8, the Fox model computed MSY as 6 858 mt (CV = 0.204, R^2 = 0.709) and 7 384 mt (CV = 0.149, R^2 = 0.72) for the log and log normal error assumptions, respectively. Here, the gamma error assumption produced a minimization failure. Estimated MSY using the Schaefer and Pella-Tomlinson models remained the same for the log, log normal and gamma error assumptions, i.e. 7 083 mt, 8 209 mt and 7 242 mt correspondingly. The Schaefer results showed the highest goodness-of-fit R^2 (0.712) values. ASPIC computed the MSY, CV, R^2, F_MSY and B_MSY parameters for the Fox model as 7 219 mt, 0.142, 0.872, 0.111 and 65 280, while for the Logistic model the computed values remained 7 720 mt, 0.148, 0.868, 0.107 and 72 110 correspondingly. Results obtained have shown that P. stylifera has been overexploited. Immediate steps are needed to conserve this fishery resource for the future, and research on other species of commercial importance is urgently needed.
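For orientation, a generic Schaefer surplus-production fit is sketched below: biomass dynamics B[t+1] = B[t] + rB[t](1 − B[t]/K) − C[t] with CPUE = qB, fitted by least squares, and MSY = rK/4. This is a simplified stand-in for CEDA/ASPIC, and the catch and CPUE series are made-up numbers, not the P. stylifera data.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic catch (mt) and CPUE series standing in for real catch-and-effort data.
catch = np.array([5000, 6500, 8000, 9500, 10000, 9000, 8500, 8000, 7500, 7000], float)
cpue  = np.array([1.00, 0.95, 0.88, 0.78, 0.66, 0.58, 0.55, 0.54, 0.52, 0.51])

def predicted_cpue(r, K, q):
    """Schaefer dynamics B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t], with CPUE = q*B."""
    B, pred = K, []                         # assume the stock starts at carrying capacity
    for C in catch:
        pred.append(q * B)
        B = min(max(B + r * B * (1.0 - B / K) - C, 1e-6), 1e12)
    return np.array(pred)

def loss(log_params):                       # work in log space to keep r, K, q positive
    r, K, q = np.exp(log_params)
    return np.sum((np.log(cpue) - np.log(predicted_cpue(r, K, q))) ** 2)

fit = minimize(loss, x0=np.log([0.5, 1.0e5, 1.0e-5]), method="Nelder-Mead")
r, K, q = np.exp(fit.x)
print(f"r = {r:.3f}, K = {K:,.0f} mt, MSY = r*K/4 = {r * K / 4:,.0f} mt")
```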
Control of Listeria monocytogenes growth in soft cheeses by bacteriophage P100.
Silva, Elaine Nóbrega Gibson; Figueiredo, Ana Cláudia Leite; Miranda, Fernanda Araújo; de Castro Almeida, Rogeria Comastri
2014-01-01
The purpose of this study was to determine the effect of bacteriophage P100 on strains of Listeria monocytogenes in artificially inoculated soft cheeses. A mix of L. monocytogenes 1/2a and Scott A was inoculated in Minas Frescal and Coalho cheeses (approximately 10^5 cfu/g) with the bacteriophage added thereafter (8.3 × 10^7 PFU/g). Samples were analyzed immediately, and then stored at 10 °C for seven days. At time zero, 30 min post-infection, the bacteriophage P100 reduced L. monocytogenes counts by 2.3 log units in Minas Frescal cheese and by 2.1 log units in Coalho cheese, compared to controls without bacteriophage. However, in samples stored under refrigeration for seven days, the bacteriophage P100 was only weakly antilisterial, with the lowest decimal reduction (DR) for the cheeses: 1.0 log unit for Minas Frescal and 0.8 log units for Coalho cheese. The treatment produced a statistically significant decrease in the counts of viable cells (p < 0.05) and in all assays performed, we observed an increase of approximately one log cycle in the number of viable cells of L. monocytogenes in the samples under refrigeration for seven days. Moreover, a smaller effect of phages was observed. These results, along with other published data, indicate that the effectiveness of the phage treatment depends on the initial concentration of L. monocytogenes, and that a high concentration of phages per unit area is required to ensure sustained inactivation of target pathogens on food surfaces.
Sanitizing in Dry-Processing Environments Using Isopropyl Alcohol Quaternary Ammonium Formula.
Kane, Deborah M; Getty, Kelly J K; Mayer, Brian; Mazzotta, Alejandro
2016-01-01
Dry-processing environments are particularly challenging to clean and sanitize because introduced water can favor growth and establishment of pathogenic microorganisms such as Salmonella. Our objective was to determine the efficacy of an isopropyl alcohol quaternary ammonium (IPAQuat) formula for eliminating potential Salmonella contamination on food contact surfaces. Clean stainless steel coupons and conveyor belt materials used in dry-processing environments were spot inoculated in the center of coupons (5 by 5 cm) with a six-serotype composite of Salmonella (approximately 10 log CFU/ml), subjected to IPAQuat sanitizer treatments with exposure times of 30 s, 1 min, or 5 min, and then swabbed for enumeration of posttreatment survivors. A subset of inoculated surfaces was soiled with a breadcrumb-flour blend and allowed to sit on the laboratory bench for a minimum of 16 h before sanitation. Pretreatment Salmonella populations (inoculated controls, 0 s treatment) were approximately 7.0 log CFU/25 cm², and posttreatment survivors were 1.31, 0.72, and < 0.7 (detection limit) log CFU/25 cm² after sanitizer exposure for 30 s, 1 min, or 5 min, respectively, for both clean (no added soil) and soiled surfaces. Treatment with the IPAQuat formula using 30-s sanitizer exposures resulted in 5.68-log reductions, whereas >6.0-log reductions were observed for sanitizer exposures of 1 and 5 min. Because water is not introduced into the processing environment with this approach, the IPAQuat formula could have sanitation applications in dry-processing environments to eliminate potential contamination from Salmonella on food contact surfaces.
Estimation of Renyi exponents in random cascades
Troutman, Brent M.; Vecchia, Aldo V.
1999-01-01
We consider statistical estimation of the Rényi exponent τ(h), which characterizes the scaling behaviour of a singular measure μ defined on a subset of R^d. The Rényi exponent is defined to be lim_{δ→0} [log M_δ(h) / (−log δ)], assuming that this limit exists, where M_δ(h) = Σ_i μ^h(Δ_i) and, for δ > 0, {Δ_i} are the cubes of a δ-coordinate mesh that intersect the support of μ. In particular, we demonstrate asymptotic normality of the least-squares estimator of τ(h) when the measure μ is generated by a particular class of multiplicative random cascades, a result which allows construction of interval estimates and application of hypothesis tests for this scaling exponent. Simulation results illustrating this asymptotic normality are presented. © 1999 ISI/BS.
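An illustrative box-counting estimate of such a scaling exponent: the least-squares slope of log M_δ(h) against −log δ over dyadic meshes, applied to a simple binomial multiplicative cascade for which the exact exponent is known in closed form. The cascade and its parameters are assumptions made for the example, not the class of cascades analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def binomial_cascade(levels=14, p=0.7):
    """Binomial multiplicative cascade on [0, 1]: each interval splits in two, passing
    fractions p and 1-p of its mass to its children in random order."""
    mass = np.array([1.0])
    for _ in range(levels):
        left = np.where(rng.random(mass.size) < 0.5, p, 1.0 - p)
        mass = np.column_stack((mass * left, mass * (1.0 - left))).ravel()
    return mass                                        # mass of each dyadic cell at the finest mesh

def scaling_exponent(mass, h):
    """Least-squares slope of log M_delta(h) against -log(delta) over dyadic meshes."""
    n_levels = int(np.log2(mass.size))
    x, y = [], []
    for k in range(2, n_levels + 1):
        cells = mass.reshape(2 ** k, -1).sum(axis=1)   # coarse-grain to mesh width delta = 2^-k
        x.append(k * np.log(2.0))                      # -log(delta)
        y.append(np.log(np.sum(cells[cells > 0] ** h)))
    return np.polyfit(x, y, 1)[0]

mass = binomial_cascade()
for h in (0.5, 1.0, 2.0):
    exact = np.log2(0.7 ** h + 0.3 ** h)               # closed form for this cascade
    print(f"h = {h}: estimated {scaling_exponent(mass, h):+.3f}, exact {exact:+.3f}")
```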
Medium Access Control for Opportunistic Concurrent Transmissions under Shadowing Channels
Son, In Keun; Mao, Shiwen; Hur, Seung Min
2009-01-01
We study the problem of how to alleviate the exposed terminal effect in multi-hop wireless networks in the presence of log-normal shadowing channels. Assuming node location information, we propose an extension of the IEEE 802.11 MAC protocol that schedules concurrent transmissions in the presence of log-normal shadowing, thus mitigating the exposed terminal problem and improving network throughput and delay performance. We observe considerable improvements in throughput and delay achieved over the IEEE 802.11 MAC under various network topologies and channel conditions in ns-2 simulations, which justify the importance of considering channel randomness in MAC protocol design for multi-hop wireless networks. PMID:22408556
Characterization of the spatial variability of channel morphology
Moody, J.A.; Troutman, B.M.
2002-01-01
The spatial variability of two fundamental morphological variables is investigated for rivers having a wide range of discharge (five orders of magnitude). The variables, water-surface width and average depth, were measured at 58 to 888 equally spaced cross-sections in channel links (river reaches between major tributaries). These measurements provide data to characterize the two-dimensional structure of a channel link which is the fundamental unit of a channel network. The morphological variables have nearly log-normal probability distributions. A general relation was determined which relates the means of the log-transformed variables to the logarithm of discharge similar to previously published downstream hydraulic geometry relations. The spatial variability of the variables is described by two properties: (1) the coefficient of variation which was nearly constant (0.13-0.42) over a wide range of discharge; and (2) the integral length scale in the downstream direction which was approximately equal to one to two mean channel widths. The joint probability distribution of the morphological variables in the downstream direction was modelled as a first-order, bivariate autoregressive process. This model accounted for up to 76 per cent of the total variance. The two-dimensional morphological variables can be scaled such that the channel width-depth process is independent of discharge. The scaling properties will be valuable to modellers of both basin and channel dynamics. Published in 2002 John Wiley and Sons, Ltd.
Karhunen Loève approximation of random fields by generalized fast multipole methods
NASA Astrophysics Data System (ADS)
Schwab, Christoph; Todor, Radu Alexandru
2006-09-01
KL approximation of a possibly instationary random field a(ω, x) ∈ L^2(Ω, dP; L^∞(D)) subject to prescribed mean field Ea(x) = ∫ a(ω, x) dP(ω) and covariance Va(x, x') = ∫ (a(ω, x) − Ea(x))(a(ω, x') − Ea(x')) dP(ω) in a polyhedral domain D ⊂ R^d is analyzed. We show how for stationary covariances Va(x, x') = g_a(|x − x'|) with g_a(z) analytic outside of z = 0, an M-term approximate KL-expansion a_M(ω, x) of a(ω, x) can be computed in log-linear complexity. The approach applies in arbitrary domains D and for nonseparable covariances C_a. It involves Galerkin approximation of the KL eigenvalue problem by discontinuous finite elements of degree p ⩾ 0 on a quasiuniform, possibly unstructured mesh of width h in D, plus a generalized fast multipole accelerated Krylov eigensolver. The approximate KL-expansion a_M(x, ω) of a(x, ω) has accuracy O(exp(−b M^{1/d})) if g_a is analytic at z = 0 and accuracy O(M^{−k/d}) if g_a is C^k at zero. It is obtained in O(M N (log N)^b) operations, where N = O(h^{−d}).
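A dense-linear-algebra sketch of an M-term KL expansion on a one-dimensional grid with an assumed exponential covariance; it illustrates what the expansion computes, not the discontinuous-Galerkin/fast-multipole algorithm of the paper, whose point is precisely to avoid the dense eigensolve used here.

```python
import numpy as np

# Stationary covariance Va(x, x') = g_a(|x - x'|) with g_a(z) = exp(-z / ell) on D = [0, 1].
n, ell, M = 400, 0.2, 10
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell) / n       # Nystrom-style weighting by cell size

eigval, eigvec = np.linalg.eigh(C)                           # dense eigensolve (not fast multipole)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]               # sort eigenpairs in descending order

# M-term KL realization: a_M(x, omega) = Ea(x) + sum_m sqrt(lambda_m) * phi_m(x) * xi_m(omega).
rng = np.random.default_rng(7)
xi = rng.normal(size=M)
mean_field = np.zeros(n)                                     # Ea(x) = 0 for this illustration
phi = eigvec[:, :M] * np.sqrt(n)                             # discrete eigenfunctions, L2-normalized
a_M = mean_field + (phi * np.sqrt(np.maximum(eigval[:M], 0.0))) @ xi

print("variance fraction captured by M terms:", eigval[:M].sum() / eigval.sum())
```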
Wavefront-Guided Scleral Lens Correction in Keratoconus
Marsack, Jason D.; Ravikumar, Ayeswarya; Nguyen, Chi; Ticak, Anita; Koenig, Darren E.; Elswick, James D.; Applegate, Raymond A.
2014-01-01
Purpose To examine the performance of state-of-the-art wavefront-guided scleral contact lenses (wfgSCLs) on a sample of keratoconic eyes, with emphasis on performance quantified with visual quality metrics; and to provide a detailed discussion of the process used to design, manufacture and evaluate wfgSCLs. Methods Fourteen eyes of 7 subjects with keratoconus were enrolled and a wfgSCL was designed for each eye. High-contrast visual acuity and visual quality metrics were used to assess the on-eye performance of the lenses. Results The wfgSCL provided statistically lower levels of both lower-order RMS (p < 0.001) and higher-order RMS (p < 0.02) than an intermediate spherical equivalent scleral contact lens. The wfgSCL provided lower levels of lower-order RMS than a normal group of well-corrected observers (p < < 0.001). However, the wfgSCL does not provide less higher-order RMS than the normal group (p = 0.41). Of the 14 eyes studied, 10 successfully reached the exit criteria, achieving residual higher-order root mean square wavefront error (HORMS) less than or within 1 SD of the levels experienced by normal, age-matched subjects. In addition, measures of visual image quality (logVSX, logNS and logLIB) for the 10 eyes were well distributed within the range of values seen in normal eyes. However, visual performance as measured by high contrast acuity did not reach normal, age-matched levels, which is in agreement with prior results associated with the acute application of wavefront correction to KC eyes. Conclusions Wavefront-guided scleral contact lenses are capable of optically compensating for the deleterious effects of higher-order aberration concomitant with the disease, and can provide visual image quality equivalent to that seen in normal eyes. Longer duration studies are needed to assess whether the visual system of the highly aberrated eye wearing a wfgSCL is capable of producing visual performance levels typical of the normal population. PMID:24830371
Fracture identification based on remote detection acoustic reflection logging
NASA Astrophysics Data System (ADS)
Zhang, Gong; Li, Ning; Guo, Hong-Wei; Wu, Hong-Liang; Luo, Chao
2015-12-01
Fracture identification is important for the evaluation of carbonate reservoirs. However, conventional logging equipment has small depth of investigation and cannot detect rock fractures more than three meters away from the borehole. Remote acoustic logging uses phase-controlled array-transmitting and long sound probes that increase the depth of investigation. The interpretation of logging data with respect to fractures is typically guided by practical experience rather than theory and is often ambiguous. We use remote acoustic reflection logging data and high-order finite-difference approximations in the forward modeling and prestack reverse-time migration to image fractures. First, we perform forward modeling of the fracture responses as a function of the fracture-borehole wall distance, aperture, and dip angle. Second, we extract the energy intensity within the imaging area to determine whether the fracture can be identified as the formation velocity is varied. Finally, we evaluate the effect of the fracture-borehole distance, fracture aperture, and dip angle on fracture identification.
NASA Astrophysics Data System (ADS)
Tian, K.; Gosvami, N. N.; Goldsby, D. L.; Carpick, R. W.
2015-12-01
Rate and state friction (RSF) laws are empirical relationships that describe the frictional behavior of rocks and other materials in experiments, and reproduce a variety of observed natural behavior when employed in earthquake models. A pervasive observation from rock friction experiments is the linear increase of static friction with the log of contact time, or 'ageing'. Ageing is usually attributed to an increase in real area of contact associated with asperity creep. However, recent atomic force microscopy (AFM) experiments demonstrate that ageing of nanoscale silica-silica contacts is due to progressive formation of interfacial chemical bonds in the absence of plastic deformation, in a manner consistent with the multi-contact ageing behavior of rocks [Li et al., 2011]. To further investigate chemical bonding-induced ageing, we explored the influence of normal load (and thus contact normal stress) and contact time on ageing. Experiments that mimic slide-hold-slide rock friction experiments were conducted in the AFM for contact loads and hold times ranging from 23 to 393 nN and 0.1 to 100 s, respectively, all in humid air (~50% RH) at room temperature. Experiments were conducted by sequentially sliding the AFM tip on the sample at a velocity V of 0.5 μm/s, setting V to zero and holding the tip stationary for a given time, and finally resuming sliding at 0.5 μm/s to yield a peak value of friction followed by a drop to the sliding friction value. Chemical bonding-induced ageing, as measured by the peak friction minus the sliding friction, increases approximately linearly with the product of normal load and the log of the hold time. Theoretical studies of the roles of reaction energy barriers in nanoscale ageing indicate that frictional ageing depends on the total number of reaction sites and the hold time [Liu & Szlufarska, 2012]. We combine chemical kinetics analyses with contact mechanics models to explain our results, and develop a new approach for curve fitting ageing vs. load data which shows that the friction drop data points all fall on a master curve. The analysis yields physically reasonable values for the activation energy and activation volume of the chemical bonding process. Our study provides a basis to hypothesize that the kinetic processes in chemical bonding-induced ageing do not depend strongly on normal load.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining the consistent maximum likelihood estimates of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
Crépet, Amélie; Albert, Isabelle; Dervin, Catherine; Carlin, Frédéric
2007-01-01
A normal distribution and a mixture model of two normal distributions in a Bayesian approach using prevalence and concentration data were used to establish the distribution of contamination of the food-borne pathogenic bacteria Listeria monocytogenes in unprocessed and minimally processed fresh vegetables. A total of 165 prevalence studies, including 15 studies with concentration data, were taken from the scientific literature and from technical reports and used for statistical analysis. The predicted mean of the normal distribution of the logarithms of viable L. monocytogenes per gram of fresh vegetables was −2.63 log viable L. monocytogenes organisms/g, and its standard deviation was 1.48 log viable L. monocytogenes organisms/g. These values were determined by assuming one contaminated sample in prevalence studies in which all samples were in fact negative; this deliberate overestimation is necessary to complete the calculations. With the mixture model, the predicted mean of the distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −3.38 log viable L. monocytogenes organisms/g and its standard deviation was 1.46 log viable L. monocytogenes organisms/g. The probabilities of fresh unprocessed and minimally processed vegetables being contaminated with concentrations higher than 1, 2, and 3 log viable L. monocytogenes organisms/g were 1.44, 0.63, and 0.17%, respectively. Introducing a sensitivity rate of 80 or 95% in the mixture model had a small effect on the estimation of the contamination. In contrast, introducing a low sensitivity rate (40%) resulted in marked differences, especially for high percentiles. There was a significantly lower estimation of contamination in the papers and reports of 2000 to 2005 than in those of 1988 to 1999 and a lower estimation of contamination of leafy salads than that of sprouts and other vegetables. The value of the mixture model for estimating microbial contamination is discussed. PMID:17098926
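As a generic illustration of this kind of calculation, the sketch below evaluates exceedance probabilities from a single normal distribution on the log10 scale, using the mixture-model mean and standard deviation quoted above. Note that the paper's actual estimates combine concentration and prevalence information in a Bayesian mixture, so this simplified calculation is not expected to reproduce the reported percentages.

```python
from scipy.stats import norm

mu, sigma = -3.38, 1.46   # log10 CFU/g, values quoted in the abstract

# Probability that a contaminated sample exceeds 1, 2, or 3 log CFU/g,
# assuming a single normal distribution on the log10 scale.
for threshold in (1.0, 2.0, 3.0):
    p_exceed = norm.sf(threshold, loc=mu, scale=sigma)
    print(f"P(log10 count > {threshold}) = {100 * p_exceed:.3f}%")
```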
NASA Astrophysics Data System (ADS)
Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.
2015-12-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
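A minimal sketch of a maximum-likelihood log-normal fit with bootstrap confidence limits, of the kind described above, applied to synthetic storm maxima (the data are invented stand-ins, not the 1957-2012 -Dst record):

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(1)

# Synthetic -Dst storm-time maxima (nT); stand-ins for the observed values.
dst_max = rng.lognormal(mean=np.log(120.0), sigma=0.6, size=56)

def fit_lognormal(x):
    # MLE for a two-parameter log-normal (location fixed at zero).
    shape, _, scale = lognorm.fit(x, floc=0)
    return shape, scale

shape, scale = fit_lognormal(dst_max)

# Bootstrap the probability that a storm exceeds the Carrington level (-Dst > 850 nT).
n_boot = 2000
exceed = np.empty(n_boot)
for i in range(n_boot):
    s, sc = fit_lognormal(rng.choice(dst_max, size=dst_max.size, replace=True))
    exceed[i] = lognorm.sf(850.0, s, scale=sc)
lo, hi = np.percentile(exceed, [2.5, 97.5])
print(f"P(-Dst > 850 nT per storm): {lognorm.sf(850.0, shape, scale=scale):.2e}  "
      f"95% CI [{lo:.2e}, {hi:.2e}]")
```

Converting the per-storm exceedance probability into an occurrence rate per century additionally requires the observed number of storms per unit time.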
Effects of reduced-impact logging on fish assemblages in central Amazonia.
Dias, Murilo S; Magnusson, William E; Zuanon, Jansen
2010-02-01
In Amazonia, reduced-impact logging, which is meant to reduce environmental disturbance by controlling stem-fall directions and minimizing construction of access roads, has been applied to large areas containing thousands of streams. We investigated the effects of reduced-impact logging on environmental variables and the composition of fish in forest streams in a commercial logging concession in central Amazonia, Amazonas State, Brazil. To evaluate short-term effects, we sampled 11 streams before and after logging in one harvest area. We evaluated medium-term effects by comparing streams in 11 harvest areas logged 1-8 years before the study with control streams in adjacent areas. Each sampling unit was a 50-m stream section. The tetras Pyrrhulina brevis and Hemigrammus cf. pretoensis had higher abundances in plots logged ≥3 years before compared with plots logged <3 years before. The South American darter (Microcharacidium eleotrioides) was less abundant in logged plots than in control plots. In the short term, the overall fish composition did not differ two months before and immediately after reduced-impact logging. Temperature and pH varied before and after logging, but those differences were compatible with normal seasonal variation. In the medium term, temperature and cover of logs were lower in logged plots. Differences in ordination scores on the basis of relative fish abundance between streams in control and logged areas changed with time since logging, mainly because some common species increased in abundance after logging. There was no evidence of species loss from the logging concession, but differences in log cover and ordination scores derived from relative abundance of fish species persisted even after 8 years. For Amazonian streams, reduced-impact logging appears to be a viable alternative to clear-cut practices, which severely affect aquatic communities. Nevertheless, detailed studies are necessary to evaluate subtle long-term effects.
Shin, Jung-Hyun; Eom, Tae-Hoon; Kim, Young-Hoon; Chung, Seung-Yun; Lee, In-Goo; Kim, Jung-Min
2017-07-01
Valproate (VPA) is an antiepileptic drug (AED) used for initial monotherapy in treating childhood absence epilepsy (CAE). EEG might be an alternative approach to explore the effects of AEDs on the central nervous system. We performed a comparative analysis of background EEG activity during VPA treatment by using standardized, low-resolution, brain electromagnetic tomography (sLORETA) to explore the effect of VPA in patients with CAE. In 17 children with CAE, non-parametric statistical analyses using sLORETA were performed to compare the current density distribution of four frequency bands (delta, theta, alpha, and beta) between the untreated and treated condition. Maximum differences in current density were found in the left inferior frontal gyrus for the delta frequency band (log-F-ratio = -1.390, P > 0.05), the left medial frontal gyrus for the theta frequency band (log-F-ratio = -0.940, P > 0.05), the left inferior frontal gyrus for the alpha frequency band (log-F-ratio = -0.590, P > 0.05), and the left anterior cingulate for the beta frequency band (log-F-ratio = -1.318, P > 0.05). However, none of these differences were significant (threshold log-F-ratio = ±1.888, P < 0.01; threshold log-F-ratio = ±1.722, P < 0.05). Because EEG background is accepted as normal in CAE, VPA would not be expected to significantly change abnormal thalamocortical oscillations on a normal EEG background. Therefore, our results agree with currently accepted concepts but are not consistent with findings in some previous studies.
Moran, John L; Solomon, Patricia J
2012-05-16
For the analysis of length-of-stay (LOS) data, which is characteristically right-skewed, a number of statistical estimators have been proposed as alternatives to the traditional ordinary least squares (OLS) regression with log dependent variable. Using a cohort of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 2008-2009, 12 different methods were used for estimation of intensive care (ICU) length of stay. These encompassed risk-adjusted regression analysis of firstly: log LOS using OLS, linear mixed model [LMM], treatment effects, skew-normal and skew-t models; and secondly: unmodified (raw) LOS via OLS, generalised linear models [GLMs] with log-link and 4 different distributions [Poisson, gamma, negative binomial and inverse-Gaussian], extended estimating equations [EEE] and a finite mixture model including a gamma distribution. A fixed covariate list and ICU-site clustering with robust variance were utilised for model fitting with split-sample determination (80%) and validation (20%) data sets, and model simulation was undertaken to establish over-fitting (Copas test). Indices of model specification using Bayesian information criterion [BIC: lower values preferred] and residual analysis as well as predictive performance (R2, concordance correlation coefficient (CCC), mean absolute error [MAE]) were established for each estimator. The data-set consisted of 111663 patients from 131 ICUs; with mean(SD) age 60.6(18.8) years, 43.0% were female, 40.7% were mechanically ventilated and ICU mortality was 7.8%. ICU length-of-stay was 3.4(5.1) (median 1.8, range (0.17-60)) days and demonstrated marked kurtosis and right skew (29.4 and 4.4 respectively). BIC showed considerable spread, from a maximum of 509801 (OLS-raw scale) to a minimum of 210286 (LMM). R2 ranged from 0.22 (LMM) to 0.17 and the CCC from 0.334 (LMM) to 0.149, with MAE 2.2-2.4. Superior residual behaviour was established for the log-scale estimators. There was a general tendency for over-prediction (negative residuals) and for over-fitting, the exception being the GLM negative binomial estimator. The mean-variance function was best approximated by a quadratic function, consistent with log-scale estimation; the link function was estimated (EEE) as 0.152(0.019, 0.285), consistent with a fractional-root function. For ICU length of stay, log-scale estimation, in particular the LMM, appeared to be the most consistently performing estimator(s). Neither the GLM variants nor the skew-regression estimators dominated.
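One of the raw-scale estimators compared above, a gamma GLM with a log link, can be sketched with statsmodels as follows; the covariates and data are synthetic placeholders rather than the study's fixed risk-adjustment covariate list:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000

# Hypothetical covariates standing in for the risk-adjustment variables.
df = pd.DataFrame({
    "age": rng.normal(60, 18, n),
    "ventilated": rng.integers(0, 2, n),
})
lin_pred = 0.4 + 0.005 * df["age"] + 0.6 * df["ventilated"]
df["icu_los"] = rng.gamma(shape=1.2, scale=np.exp(lin_pred) / 1.2)  # right-skewed LOS

# Gamma GLM with a log link on the raw (unmodified) length of stay.
X = sm.add_constant(df[["age", "ventilated"]])
gamma_glm = sm.GLM(df["icu_los"], X,
                   family=sm.families.Gamma(link=sm.families.links.Log()))
result = gamma_glm.fit()
print(result.summary())
```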
Size distribution of submarine landslides along the U.S. Atlantic margin
Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.
2009-01-01
Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89 km² and 2410 km² and volumes between 0.002 km³ and 179 km³. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few 10s of meters in each event, with only rare, deep excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86 km³ may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1 km³), rather than large landslides or very small ones. Alternatively, the log-normal distribution may reflect an inverse power law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ± 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ± 0.3 for subaerial landslides in unconsolidated material. Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin. © 2008 Elsevier B.V.
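The two candidate descriptions, a log-normal and an inverse power law (Pareto-type) distribution, can be fit by maximum likelihood and compared on a set of volumes. A minimal sketch on synthetic scar volumes (illustrative only; the size-dependent observation probability discussed above is not modeled):

```python
import numpy as np
from scipy.stats import lognorm, pareto

rng = np.random.default_rng(3)

# Synthetic landslide scar volumes (km^3); stand-ins for the mapped failure scars.
volumes = rng.lognormal(mean=np.log(0.86), sigma=1.5, size=106)

# Log-normal MLE (location fixed at zero).
ln_shape, _, ln_scale = lognorm.fit(volumes, floc=0)
ll_lognorm = np.sum(lognorm.logpdf(volumes, ln_shape, scale=ln_scale))

# Pareto (inverse power-law) MLE above the smallest observed volume.
xmin = volumes.min()
b_hat = volumes.size / np.sum(np.log(volumes / xmin))   # MLE of the tail exponent
ll_pareto = np.sum(pareto.logpdf(volumes, b_hat, scale=xmin))

print(f"log-normal: median {ln_scale:.2f} km^3, log-likelihood {ll_lognorm:.1f}")
print(f"power law:  exponent {b_hat:.2f},       log-likelihood {ll_pareto:.1f}")
```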
Bowker, Matthew A.; Maestre, Fernando T.
2012-01-01
Dryland vegetation is inherently patchy. This patchiness goes on to impact ecology, hydrology, and biogeochemistry. Recently, researchers have proposed that dryland vegetation patch sizes follow a power law which is due to local plant facilitation. It is unknown what patch size distribution prevails when competition predominates over facilitation, or if such a pattern could be used to detect competition. We investigated this question in an alternative vegetation type, mosses and lichens of biological soil crusts, which exhibit a smaller scale patch-interpatch configuration. This micro-vegetation is characterized by competition for space. We proposed that multiplicative effects of genetics, environment and competition should result in a log-normal patch size distribution. When testing the prevalence of log-normal versus power law patch size distributions, we found that the log-normal was the better distribution in 53% of cases and a reasonable fit in 83%. In contrast, the power law was better in 39% of cases, and in 8% of instances both distributions fit equally well. We further hypothesized that the log-normal distribution parameters would be predictably influenced by competition strength. There was qualitative agreement between one of the distribution's parameters (μ) and a novel intransitive (lacking a 'best' competitor) competition index, suggesting that as intransitivity increases, patch sizes decrease. The correlation of μ with other competition indicators based on spatial segregation of species (the C-score) depended on aridity. In less arid sites, μ was negatively correlated with the C-score (suggesting smaller patches under stronger competition), while positive correlations (suggesting larger patches under stronger competition) were observed at more arid sites. We propose that this is due to an increasing prevalence of competition transitivity as aridity increases. These findings broaden the emerging theory surrounding dryland patch size distributions and, with refinement, may help us infer cryptic ecological processes from easily observed spatial patterns in the field.
Possible Statistics of Two Coupled Random Fields: Application to Passive Scalar
NASA Technical Reports Server (NTRS)
Dubrulle, B.; He, Guo-Wei; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
We use the relativity postulate of scale invariance to derive the similarity transformations between two coupled scale-invariant random fields at different scales. We find the equations leading to the scaling exponents. This formulation is applied to the case of passive scalars advected i) by a random Gaussian velocity field; and ii) by a turbulent velocity field. In the Gaussian case, we show that the passive scalar increments follow a log-Levy distribution generalizing Kraichnan's solution and, in an appropriate limit, a log-normal distribution. In the turbulent case, we show that when the velocity increments follow log-Poisson statistics, the passive scalar increments follow statistics close to log-Poisson. This result explains the experimental observations of Ruiz et al. about the temperature increments.
Schlottmann, Jamie L.; Funkhouser, Ron A.
1991-01-01
Chemical analyses of water from eight test holes and geophysical logs for nine test holes drilled in the Central Oklahoma aquifer are presented. The test holes were drilled to investigate local occurrences of potentially toxic, naturally occurring trace substances in ground water. These trace substances include arsenic, chromium, selenium, residual alpha-particle activities, and uranium. Eight of the nine test holes were drilled near wells known to contain large concentrations of one or more of the naturally occurring trace substances. One test hole was drilled in an area known to have only small concentrations of any of the naturally occurring trace substances. Water samples were collected from one to eight individual sandstone layers within each test hole. A total of 28 water samples, including four duplicate samples, were collected. The temperature, pH, specific conductance, alkalinity, and dissolved-oxygen concentrations were measured at the sample site. Laboratory determinations included major ions, nutrients, dissolved organic carbon, and trace elements (aluminum, arsenic, barium, beryllium, boron, cadmium, chromium, hexavalent chromium, cobalt, copper, iron, lead, lithium, manganese, mercury, molybdenum, nickel, selenium, silver, strontium, vanadium and zinc). Radionuclide activities and stable isotope (δ) values also were determined, including: gross-alpha-particle activity, gross-beta-particle activity, radium-226, radium-228, radon-222, uranium-234, uranium-235, uranium-238, total uranium, carbon-13/carbon-12, deuterium/hydrogen-1, oxygen-18/oxygen-16, and sulfur-34/sulfur-32. Additional analyses of arsenic and selenium species are presented for selected samples, as well as analyses of density and iodine for two samples, tritium for three samples, and carbon-14 for one sample. Geophysical logs for most test holes include caliper, neutron, gamma-gamma, natural-gamma, spontaneous-potential, long- and short-normal resistivity, and single-point resistance logs. Logs for test-hole NOTS 7 do not include long- and short-normal resistivity, spontaneous-potential, or single-point resistance logs. Logs for test-hole NOTS 7A include only caliper and natural-gamma logs.
Hou, Fang; Huang, Chang-Bing; Lesmes, Luis; Feng, Li-Xia; Tao, Liming; Zhou, Yi-Feng; Lu, Zhong-Lin
2010-01-01
Purpose. The qCSF method is a novel procedure for rapid measurement of spatial contrast sensitivity functions (CSFs). It combines Bayesian adaptive inference with a trial-to-trial information gain strategy, to directly estimate four parameters defining the observer's CSF. In the present study, the suitability of the qCSF method for clinical application was examined. Methods. The qCSF method was applied to rapidly assess spatial CSFs in 10 normal and 8 amblyopic participants. The qCSF was evaluated for accuracy, precision, test–retest reliability, suitability of CSF model assumptions, and accuracy of amblyopia screening. Results. qCSF estimates obtained with as few as 50 trials matched those obtained with 300 Ψ trials. The precision of qCSF estimates obtained with 120 and 130 trials, in normal subjects and amblyopes, matched the precision of 300 Ψ trials. For both groups and both methods, test–retest sensitivity estimates were well matched (all R > 0.94). The qCSF model assumptions were valid for 8 of 10 normal participants and all amblyopic participants. Measures of the area under log CSF (AULCSF) and the cutoff spatial frequency (cutSF) were lower in the amblyopia group; these differences were captured within 50 qCSF trials. Amblyopia was detected at an approximately 80% correct rate in 50 trials, when a logistic regression model was used with AULCSF and cutSF as predictors. Conclusions. The qCSF method is sufficiently rapid, accurate, and precise in measuring CSFs in normal and amblyopic persons. It has great potential for clinical practice. PMID:20484592
Bouldin, Alicia S.; Holmes, Erin R.; Fortenberry, Michael L.
2006-01-01
Objective Web log technology was applied to a reflective journaling exercise in a communication course during the second-professional year at the University of Mississippi School of Pharmacy, to encourage students to reflect on course concepts and apply them to the environment outside the classroom, and to assess their communication performance. Design Two Web log entries per week were required for full credit. Web logs were evaluated at three points during the term. At the end of the course, students evaluated the assignment using a 2-page survey instrument. Assessment The assignment contributed to student learning and increased awareness level for approximately 40% of the class. Students had few complaints about the logistics of the assignment. Conclusion The Web log technology was a useful tool for reflective journaling in this communications course. Future versions of the assignment will benefit from student feedback from this initial experience. PMID:17136203
The word frequency effect during sentence reading: A linear or nonlinear effect of log frequency?
White, Sarah J; Drieghe, Denis; Liversedge, Simon P; Staub, Adrian
2016-10-20
The effect of word frequency on eye movement behaviour during reading has been reported in many experimental studies. However, the vast majority of these studies compared only two levels of word frequency (high and low). Here we assess whether the effect of log word frequency on eye movement measures is linear, in an experiment in which a critical target word in each sentence was at one of three approximately equally spaced log frequency levels. Separate analyses treated log frequency as a categorical or a continuous predictor. Both analyses showed only a linear effect of log frequency on the likelihood of skipping a word, and on first fixation duration. Ex-Gaussian analyses of first fixation duration showed similar effects on distributional parameters in comparing high- and medium-frequency words, and medium- and low-frequency words. Analyses of gaze duration and the probability of a refixation suggested a nonlinear pattern, with a larger effect at the lower end of the log frequency scale. However, the nonlinear effects were small, and Bayes Factor analyses favoured the simpler linear models for all measures. The possible roles of lexical and post-lexical factors in producing nonlinear effects of log word frequency during sentence reading are discussed.
Single-trial log transformation is optimal in frequency analysis of resting EEG alpha.
Smulders, Fren T Y; Ten Oever, Sanne; Donkers, Franc C L; Quaedflieg, Conny W E M; van de Ven, Vincent
2018-02-01
The appropriate definition and scaling of the magnitude of electroencephalogram (EEG) oscillations is an underdeveloped area. The aim of this study was to optimize the analysis of resting EEG alpha magnitude, focusing on alpha peak frequency and nonlinear transformation of alpha power. A family of nonlinear transforms, Box-Cox transforms, were applied to find the transform that (a) maximized a non-disputed effect: the increase in alpha magnitude when the eyes are closed (Berger effect), and (b) made the distribution of alpha magnitude closest to normal across epochs within each participant, or across participants. The transformations were performed either at the single epoch level or at the epoch-average level. Alpha peak frequency showed large individual differences, yet good correspondence between various ways to estimate it in 2 min of eyes-closed and 2 min of eyes-open resting EEG data. Both alpha magnitude and the Berger effect were larger for individual alpha than for a generic (8-12 Hz) alpha band. The log-transform on single epochs (a) maximized the t-value of the contrast between the eyes-open and eyes-closed conditions when tested within each participant, and (b) rendered near-normally distributed alpha power across epochs and participants, thereby making further transformation of epoch averages superfluous. The results suggest that the log-normal distribution is a fundamental property of variations in alpha power across time in the order of seconds. Moreover, effects on alpha power appear to be multiplicative rather than additive. These findings support the use of the log-transform on single epochs to achieve appropriate scaling of alpha magnitude. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
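A minimal sketch of the single-epoch log-transform approach favored above, contrasting eyes-closed versus eyes-open (Berger effect) alpha power on the raw and log scales and checking normality before and after the transform; the data are synthetic, and the paired within-participant structure of the actual analysis is simplified to independent samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_epochs = 120

# Synthetic single-epoch alpha power: log-normal, with larger power for eyes closed.
alpha_open = rng.lognormal(mean=1.0, sigma=0.6, size=n_epochs)
alpha_closed = rng.lognormal(mean=1.6, sigma=0.6, size=n_epochs)

# Berger effect tested on raw power and on log-transformed single epochs.
t_raw, _ = stats.ttest_ind(alpha_closed, alpha_open)
t_log, _ = stats.ttest_ind(np.log(alpha_closed), np.log(alpha_open))
print(f"t on raw power: {t_raw:.2f},  t on log power: {t_log:.2f}")

# Normality of the epoch distribution before and after the log transform.
print("raw power Shapiro p =", stats.shapiro(alpha_open).pvalue)
print("log power Shapiro p =", stats.shapiro(np.log(alpha_open)).pvalue)
```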
APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES
Han, Qiyang; Wellner, Jon A.
2017-01-01
In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410
Exploring the Chemical Link Between Local Ellipticals and Their High-Redshift Progenitors
NASA Technical Reports Server (NTRS)
Leja, Joel; Van Dokkum, Pieter G.; Momcheva, Ivelina; Brammer, Gabriel; Skelton, Rosalind E.; Whitaker, Katherine E.; Andrews, Brett H.; Franx, Marijn; Kriek, Mariska; Van Der Wel, Arjen;
2013-01-01
We present Keck/MOSFIRE K-band spectroscopy of the first mass-selected sample of galaxies at z approximately 2.3. Targets are selected from the 3D-Hubble Space Telescope Treasury survey. The six detected galaxies have a mean [N II]λ6584/Hα ratio of 0.27 ± 0.01, with a small standard deviation of 0.05. This mean value is similar to that of UV-selected galaxies of the same mass. The mean gas-phase oxygen abundance inferred from the [N II]/Hα ratios depends on the calibration method, and ranges from 12+log(O/H)_gas = 8.57 for the Pettini & Pagel calibration to 12+log(O/H)_gas = 8.87 for the Maiolino et al. calibration. Measurements of the stellar oxygen abundance in nearby quiescent galaxies with the same number density indicate 12+log(O/H)_stars = 8.95, similar to the gas-phase abundances of the z approximately 2.3 galaxies if the Maiolino et al. calibration is used. This suggests that these high-redshift star forming galaxies may be progenitors of today's massive early-type galaxies. The main uncertainties are the absolute calibration of the gas-phase oxygen abundance and the incompleteness of the z approximately 2.3 sample: the galaxies with detected Hα tend to be larger and have higher star formation rates than the galaxies without detected Hα, and we may still be missing the most dust-obscured progenitors.
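The step from the measured [N II]/Hα ratio to an oxygen abundance can be illustrated with the linear N2 calibration commonly attributed to Pettini & Pagel, 12 + log(O/H) = 8.90 + 0.57 N2 with N2 = log([N II]λ6584/Hα); the coefficients are quoted from memory here and should be checked against the original calibration paper before use.

```python
import numpy as np

def oxygen_abundance_n2(nii_ha_ratio):
    """12 + log(O/H) from a linear N2 calibration (assumed Pettini & Pagel form)."""
    n2 = np.log10(nii_ha_ratio)
    return 8.90 + 0.57 * n2

# Mean ratio quoted above for the z ~ 2.3 sample.
print(f"12 + log(O/H) = {oxygen_abundance_n2(0.27):.2f}")
```

With the sample mean ratio of 0.27, this evaluates to roughly 8.58, close to the 8.57 quoted above for that calibration.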
Twining, Brian V.; Hodges, Mary K.V.; Schusler, Kyle; Mudge, Christopher
2017-07-27
Starting in 2014, the U.S. Geological Survey, in cooperation with the U.S. Department of Energy, drilled and constructed boreholes USGS 142 and USGS 142A for stratigraphic framework analyses and long-term groundwater monitoring of the eastern Snake River Plain aquifer at the Idaho National Laboratory in southeast Idaho. Borehole USGS 142 initially was cored to collect rock and sediment core, then re-drilled to complete construction as a screened water-level monitoring well. Borehole USGS 142A was drilled and constructed as a monitoring well after construction problems with borehole USGS 142 prevented access to the upper 100 feet (ft) of the aquifer. Boreholes USGS 142 and USGS 142A are separated by about 30 ft and have similar geology and hydrologic characteristics. Groundwater was first measured near 530 feet below land surface (ft BLS) at both borehole locations. Water levels measured through piezometers, separated by almost 1,200 ft, in borehole USGS 142 indicate upward hydraulic gradients at this location. Following construction and data collection, screened water-level access lines were placed in boreholes USGS 142 and USGS 142A to allow for recurring water-level measurements. Borehole USGS 142 was cored continuously, starting at the first basalt contact (about 4.9 ft BLS) to a depth of 1,880 ft BLS. Excluding surface sediment, recovery of basalt, rhyolite, and sediment core at borehole USGS 142 was approximately 89 percent, or 1,666 ft of total core recovered. Based on visual inspection of core and geophysical data, material examined from 4.9 to 1,880 ft BLS in borehole USGS 142 consists of approximately 45 basalt flows, 16 significant sediment and (or) sedimentary rock layers, and rhyolite welded tuff. Rhyolite was encountered at approximately 1,396 ft BLS. Sediment layers comprise a large percentage of the borehole between 739 and 1,396 ft BLS, with grain sizes ranging from clay and silt to cobble size. Sedimentary rock layers had calcite cement. Basalt flows ranged in thickness from about 2 to 100 ft, varied from highly fractured to dense, and ranged in texture from massive to diktytaxitic to scoriaceous. Geophysical logs were collected on completion of drilling at boreholes USGS 142 and USGS 142A. Geophysical logs were examined with available core material to describe basalt, sediment and sedimentary rock layers, and rhyolite. Natural gamma logs were used to confirm sediment layer thickness and location; neutron logs were used to examine basalt flow units and changes in hydrogen content; gamma-gamma density logs were used to describe general changes in rock properties; and temperature logs were used to understand hydraulic gradients for deeper sections of borehole USGS 142. Gyroscopic deviation was measured to record deviation from true vertical at all depths in boreholes USGS 142 and USGS 142A.
Yamane, I; Arai, S; Nakamura, Y; Hisashi, M; Fukazawa, Y; Onuki, T
2000-02-01
A clinical trial was performed to compare the effects of flumethrin and ivermectin treatments of grazing heifers at one farm in central Japan. Sixty-four heifers were randomly allocated into two groups. Flumethrin (1 mg/kg, pour-on) was applied approximately once every 3 weeks to heifers in one group, and heifers in the second group were injected approximately once every month with ivermectin (200 microg/kg; id). Between groups, no significant differences were detected in the proportion of animals showing Theileria sergenti parasitemia or in conception risks. Significantly lower average log-transformed nematode-egg counts and higher average daily weight gain were observed in the ivermectin-treated group. Animals with higher body weight at the start of grazing and lower log-transformed total nematode-egg and coccidia-oocyst counts had higher odds of conceiving. Animals with ivermectin treatment, lower body weight at the start of grazing and lower log-transformed coccidia-oocyst count had higher daily weight gain. Ivermectin may be more useful on this farm because of the higher cattle productivity and the lower cost of its use.
Kellogg, James A.; Atria, Peter V.; Sanders, Jeffrey C.; Eyster, M. Elaine
2001-01-01
Normal assay variation associated with bDNA tests for human immunodeficiency virus type 1 (HIV-1) RNA performed at two laboratories with different levels of test experience was investigated. Two 5-ml aliquots of blood in EDTA tubes were collected from each patient for whom the HIV-1 bDNA test was ordered. Blood was stored for no more than 4 h at room temperature prior to plasma separation. Plasma was stored at −70°C until transported to the Central Pennsylvania Alliance Laboratory (CPAL; York, Pa.) and to the Hershey Medical Center (Hershey, Pa.) on dry ice. Samples were stored at ≤−70°C at both laboratories prior to testing. Pools of negative (donor), low-HIV-1-RNA-positive, and high-HIV-1-RNA-positive plasma samples were also repeatedly tested at CPAL to determine both intra- and interrun variation. From 11 August 1999 until 14 September 2000, 448 patient specimens were analyzed in parallel at CPAL and Hershey. From 206 samples with results of ≥1,000 copies/ml at CPAL, 148 (72%) of the results varied by ≤0.20 log10 when tested at Hershey and none varied by >0.50 log10. However, of 242 specimens with results of <1,000 copies/ml at CPAL, 11 (5%) of the results varied by >0.50 log10 when tested at Hershey. Of 38 aliquots of HIV-1 RNA pool negative samples included in 13 CPAL bDNA runs, 37 (97%) gave results of <50 copies/ml and 1 (3%) gave a result of 114 copies/ml. Low-positive HIV-1 RNA pool intrarun variation ranged from 0.06 to 0.26 log10 while the maximum interrun variation was 0.52 log10. High-positive HIV-1 RNA pool intrarun variation ranged from 0.04 to 0.32 log10, while the maximum interrun variation was 0.55 log10. In our patient population, a change in bDNA HIV-1 RNA results of ≤0.50 log10 over time most likely represents normal laboratory test variation. However, a change of >0.50 log10, especially if the results are >1,000 copies/ml, is likely to be significant. PMID:11329458
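The between-laboratory comparison above reduces to differences of log10 viral loads; a minimal bookkeeping sketch on made-up paired results, applying the >0.50 log10 and ≥1,000 copies/ml criteria stated above:

```python
import numpy as np

# Hypothetical paired bDNA results (copies/ml) from two laboratories.
lab_a = np.array([1500.0, 48000.0, 4000.0, 320.0, 210000.0])
lab_b = np.array([2100.0, 30000.0, 900.0, 700.0, 150000.0])

log_diff = np.abs(np.log10(lab_a) - np.log10(lab_b))
significant = (log_diff > 0.50) & (np.maximum(lab_a, lab_b) >= 1000)

for a, b, d, s in zip(lab_a, lab_b, log_diff, significant):
    flag = "likely significant" if s else "within normal assay variation"
    print(f"{a:>9.0f} vs {b:>9.0f}  delta log10 = {d:.2f}  -> {flag}")
```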
Electronic neutron sources for compensated porosity well logging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, A. X.; Antolak, A. J.; Leung, K. -N.
2012-08-01
The viability of replacing Americium–Beryllium (Am–Be) radiological neutron sources in compensated porosity nuclear well logging tools with D–T or D–D accelerator-driven neutron sources is explored. The analysis consisted of developing a model for a typical well-logging borehole configuration and computing the helium-3 detector response to varying formation porosities using three different neutron sources (Am–Be, D–D, and D–T). The results indicate that, when normalized to the same source intensity, the use of a D–D neutron source has greater sensitivity for measuring the formation porosity than either an Am–Be or D–T source. The results of the study provide operational requirements that enable compensated porosity well logging with a compact, low power D–D neutron generator, which the current state-of-the-art indicates is technically achievable.
Double stars with wide separations in the AGK3 - II. The wide binaries and the multiple systems*
NASA Astrophysics Data System (ADS)
Halbwachs, J.-L.; Mayor, M.; Udry, S.
2017-02-01
A large observation programme was carried out to measure the radial velocities of the components of a selection of common proper motion (CPM) stars to select the physical binaries. 80 wide binaries (WBs) were detected, and 39 optical pairs were identified. By adding CPM stars with separations close enough to be almost certain that they are physical, a bias-controlled sample of 116 WBs was obtained, and used to derive the distribution of separations from 100 to 30 000 au. The distribution obtained does not match the log-constant distribution, but agrees with the log-normal distribution. The spectroscopic binaries detected among the WB components were used to derive statistical information about the multiple systems. The close binaries in WBs seem to be like those detected in other field stars. As for the WBs, they seem to obey the log-normal distribution of periods. The number of quadruple systems agrees with the no correlation hypothesis; this indicates that an environment conducive to the formation of WBs does not favour the formation of subsystems with periods shorter than 10 yr.
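A minimal sketch of comparing a log-constant (flat in log separation) model against a log-normal model by likelihood, on synthetic separations restricted to the 100-30 000 au range considered above (for simplicity the log-normal is not truncated to that range):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Synthetic projected separations (au), drawn log-normally for illustration.
sep = rng.lognormal(mean=np.log(1500.0), sigma=1.2, size=116)
log_sep = np.log10(sep)
lo, hi = np.log10(100.0), np.log10(30000.0)
log_sep = log_sep[(log_sep >= lo) & (log_sep <= hi)]

# Log-constant model: uniform density in log separation over [lo, hi].
ll_logconst = -log_sep.size * np.log(hi - lo)

# Log-normal model: Gaussian in log separation (MLE = sample mean and std).
mu, sigma = log_sep.mean(), log_sep.std(ddof=0)
ll_lognorm = np.sum(norm.logpdf(log_sep, mu, sigma))

print(f"log-likelihoods: log-constant {ll_logconst:.1f}, log-normal {ll_lognorm:.1f}")
```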
Davatzes, Nicholas C.; Hickman, Stephen H.
2009-01-01
A suite of geophysical logs has been acquired for structural, fluid flow and stress analysis of well 27-15 in the Desert Peak Geothermal Field, Nevada, in preparation for stimulation and development of an Enhanced Geothermal System (EGS). Advanced Logic Technologies Borehole Televiewer (BHTV) and Schlumberger Formation MicroScanner (FMS) image logs reveal extensive drilling-induced tensile fractures, showing that the current minimum compressive horizontal stress, Shmin, in the vicinity of well 27-15 is oriented along an azimuth of 114±17°. This orientation is consistent with the dip direction of recently active normal faults mapped at the surface and with extensive sets of fractures and some formation boundaries seen in the BHTV and FMS logs. Temperature and spinner flowmeter surveys reveal several minor flowing fractures that are well oriented for normal slip, although over-all permeability in the well is quite low. These results indicate that well 27-15 is a viable candidate for EGS stimulation and complements research by other investigators including cuttings analysis, a reflection seismic survey, pressure transient and tracer testing, and micro-seismic monitoring.
Measuring colour rivalry suppression in amblyopia.
Hofeldt, T S; Hofeldt, A J
1999-11-01
To determine if the colour rivalry suppression is an index of the visual impairment in amblyopia and if the stereopsis and fusion evaluator (SAFE) instrument is a reliable indicator of the difference in visual input from the two eyes. To test the accuracy of the SAFE instrument for measuring the visual input from the two eyes, colour rivalry suppression was measured in six normal subjects. A test neutral density filter (NDF) was placed before one eye to induce a temporary relative afferent defect and the subject selected the NDF before the fellow eye to neutralise the test NDF. In a non-paediatric private practice, 24 consecutive patients diagnosed with unilateral amblyopia were tested with the SAFE. Of the 24 amblyopes, 14 qualified for the study because they were able to fuse images and had no comorbid disease. The relation between depth of colour rivalry suppression, stereoacuity, and interocular difference in logMAR acuity was analysed. In normal subjects, the SAFE instrument reversed temporary defects of 0.3 to 1.8 log units to within 0.6 log units. In amblyopes, the NDF to reverse colour rivalry suppression was positively related to interocular difference in logMAR acuity (beta=1.21, p<0.0001), and negatively related to stereoacuity (beta=-0.16, p=0.019). The interocular difference in logMAR acuity was negatively related to stereoacuity (beta=-0.13, p=0.009). Colour rivalry suppression as measured with the SAFE was found to agree closely with the degree of visual acuity impairment in non-paediatric patients with amblyopia.
Effect of stimulus configuration on crowding in strabismic amblyopia.
Norgett, Yvonne; Siderov, John
2017-11-01
Foveal vision in strabismic amblyopia can show increased levels of crowding, akin to typical peripheral vision. Target-flanker similarity and visual-acuity test configuration may cause the magnitude of crowding to vary in strabismic amblyopia. We used custom-designed visual acuity tests to investigate crowding in observers with strabismic amblyopia. LogMAR was measured monocularly in both eyes of 11 adults with strabismic or mixed strabismic/anisometropic amblyopia using custom-designed letter tests. The tests used single-letter and linear formats with either bar or letter flankers to introduce crowding. Tests were presented monocularly on a high-resolution display at a test distance of 4 m, using standardized instructions. For each condition, five letters of each size were shown; testing continued until three letters of a given size were named incorrectly. Uncrowded logMAR was subtracted from logMAR in each of the crowded tests to highlight the crowding effect. Repeated-measures ANOVA showed that letter flankers and linear presentation individually resulted in poorer performance in the amblyopic eyes (respectively, mean normalized logMAR = 0.29, SE = 0.07, mean normalized logMAR = 0.27, SE = 0.07; p < 0.05) and together had an additive effect (mean = 0.42, SE = 0.09, p < 0.001). There was no difference across the tests in the fellow eyes (p > 0.05). Both linear presentation and letter rather than bar flankers increase crowding in the amblyopic eyes of people with strabismic amblyopia. These results suggest the influence of more than one mechanism contributing to crowding in linear visual-acuity charts with letter flankers.
Novel denture-cleaning system based on hydroxyl radical disinfection.
Kanno, Taro; Nakamura, Keisuke; Ikai, Hiroyo; Hayashi, Eisei; Shirato, Midori; Mokudai, Takayuki; Iwasawa, Atsuo; Niwano, Yoshimi; Kohno, Masahiro; Sasaki, Keiichi
2012-01-01
The purpose of this study was to evaluate a new denture-cleaning device using hydroxyl radicals generated from photolysis of hydrogen peroxide (H2O2). Electron spin resonance analysis demonstrated that the yield of hydroxyl radicals increased with the concentration of H2O2 and light irradiation time. Staphylococcus aureus, Pseudomonas aeruginosa, and methicillin-resistant S aureus were killed within 10 minutes with a > 5-log reduction when treated with photolysis of 500 mM H2O2; Candida albicans was killed within 30 minutes with a > 4-log reduction with photolysis of 1,000 mM H2O2. The clinical test demonstrated that the device could effectively reduce microorganisms in denture plaque by approximately 7-log order within 20 minutes.
Kuda, Takashi; Kosaka, Misa; Hirano, Shino; Kawahara, Miho; Sato, Masahiro; Kaneshima, Tai; Nishizawa, Makoto; Takahashi, Hajime; Kimura, Bon
2015-07-10
Brown algal polysaccharides such as alginate (polymers of uronic acids) and laminaran (a beta-1,3/1,6-glucan) can be fermented by human intestinal microbiota. To evaluate the effects of these polysaccharides on infections caused by food poisoning pathogens, we investigated the adhesion and invasion of pathogens (Salmonella Typhimurium, Listeria monocytogenes and Vibrio parahaemolyticus) in human enterocyte-like HT-29-Luc cells and in infections caused in BALB/c mice. Both sodium alginate (Na-alginate) and laminaran (0.1% each) inhibited the adhesion of the pathogens to HT-29-Luc cells by approximately 70-90%. The invasion of S. Typhimurium was also inhibited by approximately 70 and 80% by Na-alginate and laminaran, respectively. We observed that incubation with Na-alginate for 18 h increased the transepithelial electrical resistance of HT-29-Luc monolayer cells. Four days after inoculation with 7 log CFU/mouse of S. Typhimurium, the faecal pathogen count in mice that were not fed polysaccharides (control mice) was about 6.5 log CFU/g, while the count in mice that were fed Na-alginate had decreased to 5.0 log CFU/g. The liver pathogen count, which was 4.1 log CFU/g in the control mice, was also decreased in mice that were fed Na-alginate. In contrast, the mice that were fed laminaran exhibited a more severe infection than that exhibited by control mice. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiedemeier, Heribert, E-mail: wiedeh@rpi.ed
Correlations of computed Schottky constants (K_S = [V''_Zn][V_S^••]) with structural and thermodynamic properties showed linear dependences of log K_S on the lattice energies for the Zn-, Cd-, Hg-, Mg-, and Sr-chalcogenides and for the Na- and K-halides. These findings suggest a basic relation between the Schottky constants and the lattice energies for these families of compounds from different parts of the Periodic Table, namely ΔH°_T,L = −(2.303nRT log K_S) + 2.303nR m_b + 2.303nRT i_b, where ΔH°_T,L is the experimental (Born-Haber) lattice energy (enthalpy), n is a constant approximately equal to the formal valence (charge) of the material, and m_b and i_b are the slope and intercept, respectively, of the intercept b (of the log K_S versus ΔH°_L linear relation) versus the reciprocal temperature. The results of this work also provide an empirical correlation between the Gibbs free energy of vacancy formation and the lattice energy. Graphical abstract: For the Zn-chalcogenides, the quantities n and I_e are 2.007 and 650.3 kcal (2722 kJ), respectively. For the other groups of compounds, they are approximately equal to the formal valences and ionization energies of the metals: log K_S ≈ −(2.303nRT)^−1 (0.99 ΔH°_T,L − I_e).
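Taking the quoted empirical relation at face value, the Schottky constant implied by a given lattice energy can be evaluated directly. A small sketch using the Zn-chalcogenide values of n and I_e reported above; the lattice energy and temperature below are illustrative placeholders, and the reconstruction of the relation from the garbled source notation should itself be treated as an assumption.

```python
R_CAL = 1.987e-3   # gas constant, kcal mol^-1 K^-1

def log_schottky_constant(lattice_energy_kcal, temperature_k,
                          n=2.007, ionization_kcal=650.3):
    """log K_S ~ -(2.303 n R T)^-1 * (0.99 * dH_L - I_e), per the quoted correlation."""
    return -(0.99 * lattice_energy_kcal - ionization_kcal) / (
        2.303 * n * R_CAL * temperature_k)

# Illustrative evaluation at 1000 K for an assumed lattice energy of 850 kcal/mol.
print(f"log K_S = {log_schottky_constant(850.0, 1000.0):.1f}")
```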
Universal Distribution of Litter Decay Rates
NASA Astrophysics Data System (ADS)
Forney, D. C.; Rothman, D. H.
2008-12-01
Degradation of litter is the result of many physical, chemical and biological processes. The high variability of these processes likely accounts for the progressive slowdown of decay with litter age. This age dependence is commonly thought to result from the superposition of processes with different decay rates k. Here we assume an underlying continuous yet unknown distribution p(k) of decay rates [1]. To seek its form, we analyze the mass-time history of 70 LIDET [2] litter data sets obtained under widely varying conditions. We construct a regularized inversion procedure to find the best fitting distribution p(k) with the least degrees of freedom. We find that the resulting p(k) is universally consistent with a lognormal distribution, i.e. a Gaussian distribution of log k, characterized by a dataset-dependent mean and variance of log k. This result is supported by a recurring observation that microbial populations on leaves are log-normally distributed [3]. Simple biological processes cause the frequent appearance of the log-normal distribution in ecology [4]. Environmental factors, such as soil nitrate, soil aggregate size, soil hydraulic conductivity, total soil nitrogen, soil denitrification, and soil respiration, have all been observed to be log-normally distributed [5]. Litter degradation rates depend on many coupled, multiplicative factors, which provides a fundamental basis for the lognormal distribution. Using this insight, we systematically estimated the mean and variance of log k for 512 data sets from the LIDET study. We find the mean strongly correlates with temperature and precipitation, while the variance appears to be uncorrelated with main environmental factors and is thus likely more correlated with chemical composition and/or ecology. Results indicate the possibility that the distribution in rates reflects, at least in part, the distribution of microbial niches. [1] B. P. Boudreau, B. R. Ruddick, American Journal of Science, 291, 507, (1991). [2] M. Harmon, Forest Science Data Bank: TD023 [Database]. LTER Intersite Fine Litter Decomposition Experiment (LIDET): Long-Term Ecological Research, (2007). [3] G. A. Beattie, S. E. Lindow, Phytopathology 89, 353 (1999). [4] R. A. May, Ecology and Evolution of Communities, A Pattern of Species Abundance and Diversity, 81 (1975). [5] T. B. Parkin, J. A. Robinson, Advances in Soil Science 20, Analysis of Lognormal Data, 194 (1992).
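The assumed continuum of first-order rates implies a mass-time curve m(t) = ∫ p(k) e^(-kt) dk. A minimal sketch evaluating this integral for a log-normal p(k) and comparing it with single-rate exponential decay at the median rate (parameter values are illustrative, not LIDET estimates):

```python
import numpy as np
from scipy.stats import lognorm
from scipy.integrate import quad

def remaining_mass(t, mu_logk, sigma_logk):
    """m(t) = integral of p(k) * exp(-k t) dk for a log-normal p(k)."""
    dist = lognorm(s=sigma_logk, scale=np.exp(mu_logk))
    integrand = lambda k: dist.pdf(k) * np.exp(-k * t)
    value, _ = quad(integrand, 0, np.inf)
    return value

mu_logk, sigma_logk = np.log(0.5), 1.0   # mean and spread of ln k (per year), illustrative
for t in (0.5, 1, 2, 5, 10):
    m_dist = remaining_mass(t, mu_logk, sigma_logk)
    m_single = np.exp(-np.exp(mu_logk) * t)    # single-rate decay at the median rate
    print(f"t = {t:4.1f} yr: distributed rates {m_dist:.3f}, single rate {m_single:.3f}")
```

The distributed-rate curve decays faster than the single-rate curve at early times and slower at late times, which is the progressive slowdown described above.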
Aldega, L.; Eberl, D.D.
2005-01-01
Illite crystals in siliciclastic sediments are heterogeneous assemblages of detrital material coming from various source rocks and, at paleotemperatures >70 °C, of superimposed diagenetic modification in the parent sediment. We distinguished the relative proportions of 2M1 detrital illite and possible diagenetic 1Md + 1M illite by a combined analysis of crystal-size distribution and illite polytype quantification. We found that the proportions of 1Md + 1M and 2M1 illite could be determined from crystallite thickness measurements (BWA method, using the MudMaster program) by unmixing measured crystallite thickness distributions using theoretical and calculated log-normal and/or asymptotic distributions. The end-member components that we used to unmix the measured distributions were three asymptotic-shaped distributions (assumed to be the diagenetic component of the mixture, the 1Md + 1M polytypes) calculated using the Galoper program (Phase A was simulated using 500 crystals per cycle of nucleation and growth, Phase B = 333/cycle, and Phase C = 250/cycle), and one theoretical log-normal distribution (Phase D, assumed to approximate the detrital 2M1 component of the mixture). In addition, quantitative polytype analysis was carried out using the RockJock software for comparison. The two techniques gave comparable results (r2 = 0.93), which indicates that the unmixing method permits one to calculate the proportion of illite polytypes and, therefore, the proportion of 2M1 detrital illite, from crystallite thickness measurements. The overall illite crystallite thicknesses in the samples were found to be a function of the relative proportions of thick 2M1 and thin 1Md + 1M illite. The percentage of illite layers in I-S mixed layers correlates with the mean crystallite thickness of the 1Md + 1M polytypes, indicating that these polytypes, rather than the 2M1 polytype, participate in I-S mixed layering.
First Test of Stochastic Growth Theory for Langmuir Waves in Earth's Foreshock
NASA Technical Reports Server (NTRS)
Cairns, Iver H.; Robinson, P. A.
1997-01-01
This paper presents the first test of whether stochastic growth theory (SGT) can explain the detailed characteristics of Langmuir-like waves in Earth's foreshock. A period with unusually constant solar wind magnetic field is analyzed. The observed distributions P(logE) of wave fields E for two intervals with relatively constant spacecraft location (DIFF) are shown to agree well with the fundamental prediction of SGT, that P(logE) is Gaussian in log E. This stochastic growth can be accounted for semi-quantitatively in terms of standard foreshock beam parameters and a model developed for interplanetary type III bursts. Averaged over the entire period with large variations in DIFF, the P(logE) distribution is a power-law with index approximately -1; this is interpreted in terms of convolution of intrinsic, spatially varying P(logE) distributions with a probability function describing ISEE's residence time at a given DIFF. Wave data from this interval thus provide good observational evidence that SGT can sometimes explain the clumping, burstiness, persistence, and highly variable fields of the foreshock Langmuir-like waves.
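The core SGT prediction used above, that P(log E) is Gaussian, can be checked on a set of field strengths by testing the normality of log E. A minimal sketch on synthetic wave fields (illustrative only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic Langmuir wave electric fields (mV/m), log-normally distributed
# as stochastic growth theory predicts for fixed beam and plasma conditions.
fields = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=400)
log_e = np.log10(fields)

# Gaussianity of log E: moments plus a normality test.
print(f"mean log E = {log_e.mean():.2f}, std = {log_e.std():.2f}, "
      f"skew = {stats.skew(log_e):.2f}, kurtosis = {stats.kurtosis(log_e):.2f}")
stat, p = stats.normaltest(log_e)
print(f"D'Agostino normality test on log E: p = {p:.2f}")
```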
Giddings Edwards (Cretaceous) field, south Texas: carbonate channel or elongate buildup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lomando, A.J.; Mazzullo, S.J.
1989-03-01
Giddings Edwards field, located in Fayette County, Texas, is situated on the broad Cretaceous (Albian) shallow shelf, approximately 20 mi north of the main Edwards shelf-margin reef trend. The Giddings field produces gas from an elongate stratigraphic trap approximately 9.5 mi long and 1.8 mi wide, encased in argillaceous lime mudstones and shales; the field is oriented normal to the contiguous Edwards reef trend. Available cores and cuttings samples from the central portion of the field indicate that the field reservoir is composed of biopackstones and grainstones interpreted to have been deposited in a high-energy shelf environment. The facies system is characterized by stacked reservoirs having a maximum gross pay thickness of over 100 ft, containing primary interparticle and secondary biomoldic porosity, both of which have been modified slightly by chemical compaction and partial occlusion by sparry calcite and saddle dolomite cements. Despite reasonable subsurface sample and mechanical log control within and surrounding the field, its depositional origin remains equivocal. Such uncertainty has important bearing on predictive models for the exploration for additional Edwards shelfal hydrocarbon reservoirs. The elongate, biconvex geometry of the productive carbonate sands, their northward thinning, and apparent updip bifurcation suggest deposition in a shallow-shelf channel system. By contrast, an alternative correlation and interpretation based on geometry and facies is that of an elongate in-situ carbonate buildup. A number of modern analogs of elongate buildups normal to major reef systems are available from which to compare and model the depositional system of Giddings Edwards field. The evaluation of this field serves as an example of using multiple working hypotheses to develop an accurate exploration model.
Rangel-Vargas, Esmeralda; Luna-Rojo, Anais M; Cadena-Ramírez, Arturo; Torres-Vitela, Refugio; Gómez-Aldapa, Carlos A; Villarruel-López, Angélica; Téllez-Jurado, Alejandro; Villagómez-Ibarra, José R; Reynoso-Camacho, Rosalía; Castro-Rosas, Javier
2018-05-01
The behavior of foodborne bacteria on whole and cut mangoes and the antibacterial effect of Hibiscus sabdariffa calyx extracts and chemical sanitizers against foodborne bacteria on contaminated mangoes were investigated. Mangoes var. Ataulfo and Kent were used in the study. Mangoes were inoculated with Listeria monocytogenes, Shigella flexneri, Salmonella Typhimurium, Salmonella Typhi, Salmonella Montevideo, Escherichia coli strains (O157:H7, non-O157:H7 Shiga toxin-producing, enteropathogenic, enterotoxigenic, enteroinvasive, and enteroaggregative). The antibacterial effect of five roselle calyx extracts (water, ethanol, methanol, acetone, and ethyl acetate), sodium hypochlorite, colloidal silver, and acetic acid against foodborne bacteria were evaluated on contaminated mangoes. The dry extracts obtained with ethanol, methanol, acetone, and ethyl acetate were analyzed by nuclear magnetic resonance spectroscopy to determine solvent residues. Separately, contaminated whole mangoes were immersed in five hibiscus extracts and in sanitizers for 5 min. All foodborne bacteria attached to mangoes. After 20 days at 25 ± 2°C, all foodborne bacterial strains on whole Ataulfo mangoes had decreased by approximately 2.5 log, and on Kent mangoes by approximately 2 log; at 3 ± 2°C, they had decreased to approximately 1.9 and 1.5 log, respectively, on Ataulfo and Kent. All foodborne bacterial strains grew on cut mangoes at 25 ± 2°C; however, at 3 ± 2°C, bacterial growth was inhibited. Residual solvents were not detected in any of the dry extracts by nuclear magnetic resonance. Acetonic, ethanolic, and methanolic roselle calyx extracts caused a greater reduction in concentration (2 to 2.6 log CFU/g) of all foodborne bacteria on contaminated whole mangoes than the sodium hypochlorite, colloidal silver, and acetic acid. Dry roselle calyx extracts may be a potentially useful addition to disinfection procedures of mangoes.
NASA Astrophysics Data System (ADS)
Alimi, Isiaka; Shahpari, Ali; Ribeiro, Vítor; Sousa, Artur; Monteiro, Paulo; Teixeira, António
2017-05-01
In this paper, we present experimental results on channel characterization of single input single output (SISO) free-space optical (FSO) communication link that is based on channel measurements. The histograms of the FSO channel samples and the log-normal distribution fittings are presented along with the measured scintillation index. Furthermore, we extend our studies to diversity schemes and propose a closed-form expression for determining ergodic channel capacity of multiple input multiple output (MIMO) FSO communication systems over atmospheric turbulence fading channels. The proposed empirical model is based on SISO FSO channel characterization. Also, the scintillation effects on the system performance are analyzed and results for different turbulence conditions are presented. Moreover, we observed that the histograms of the FSO channel samples that we collected from a 1548.51 nm link have good fits with log-normal distributions and the proposed model for MIMO FSO channel capacity is in conformity with the simulation results in terms of normalized mean-square error (NMSE).
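As an aside for readers unfamiliar with the procedure (not taken from the paper's data), the channel characterization described above reduces to two standard steps: computing the scintillation index of the received irradiance and checking a log-normal fit to the samples. The sketch below illustrates both in Python, with synthetic samples standing in for the measured 1548.51 nm link data.

```python
# Hedged illustration of weak-turbulence FSO channel characterization:
# scintillation index plus a log-normal fit. Samples are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=0.3, size=10_000)  # stand-in irradiance

# Scintillation index: normalized variance of irradiance.
si = samples.var() / samples.mean() ** 2

# Log-normal parameters from the log-transformed samples.
mu, sigma = np.log(samples).mean(), np.log(samples).std()

# Goodness of fit of the log-normal hypothesis (Kolmogorov-Smirnov).
ks = stats.kstest(samples, "lognorm", args=(sigma, 0, np.exp(mu)))
print(f"scintillation index = {si:.4f}, mu = {mu:.3f}, sigma = {sigma:.3f}")
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.3f}")
```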
Present-day stress state analysis on the Big Island of Hawaíi, USA
NASA Astrophysics Data System (ADS)
Pierdominici, Simona; Kueck, Jochem; Millett, John; Planke, Sverre; Jerram, Dougal A.; Haskins, Eric; Thomas, Donald
2017-04-01
We analyze and interpret the stress features from a c. 1.5 km deep fully cored borehole (PTA2) on the Big Island of Hawaíi within the Humúula saddle region, between the Mauna Kea and Mauna Loa volcanoes. The Big Island of Hawaii is the largest and youngest island of the Hawaiian-Emperor seamount chain and is volumetrically dominated by shield stage tholeiitic volcanic rocks. Mauna Kea is dormant whereas Mauna Loa is still active. There are also a series of normal faults on Mauna Loa's northern and western slopes, between its two major rift zones, that are believed to be the result of combined circumferential tension from the two rift zones and from added pressure due to the westward growth of the neighboring Kīlauea volcano. The PTA2 borehole was drilled in 2013 into a lava-dominated formation (Pahoehoe and Aā) as part of the Humúula Groundwater Research Project (HGPR) with the purpose of characterizing the groundwater resource potential in this area. In 2016 two downhole logging campaigns were performed by the Operational Support Group of the International Continental Scientific Drilling Program (ICDP) to acquire a set of geophysical data as part of the Volcanic Margin Petroleum Prospectivity (VMAPP) project. The main objective of the logging campaign was to obtain high quality wireline log data to enable a detailed core-log integration of the volcanic sequence and to improve understanding of the subsurface expression of volcanic rocks. We identify stress features (e.g. borehole breakouts) and volcanic structures (e.g. flow boundaries, vesicles and jointing) at depth using borehole images acquired with an ABI43 acoustic borehole televiewer. We analyzed and interpreted the stress indicators and compared their orientation with the regional stress pattern. We identified a set of stress indicators along the hole, dominantly concentrated within the lower logged interval of the PTA2 borehole. Two primary horizontal stress indicators have been taken into account: borehole breakouts (bidirectional enlargements) (BB) and drilling induced tensile fractures (DIF). BB and DIF occur when the stresses around the borehole exceed the compressive and tensile yield stress of the borehole wall rock, respectively, causing failure. A breakout is caused by the development of intersecting conjugate shear planes that cause pieces of the borehole wall to spall off. For breakouts, the compressive stress concentration around a vertical borehole is largest in the direction of the minimum horizontal stress; hence, BB develop approximately parallel to the orientation of the minimum horizontal stress. For DIF, the stress concentration around a vertical borehole is at a minimum in the maximum horizontal stress direction; hence, DIF develop approximately parallel to the orientation of the maximum horizontal stress. Based on the World Stress Map, the present-day stress in this area is defined only by focal mechanism solutions. These data give a unique opportunity to characterize the orientation of the present-day stress field between two large volume shield volcanoes on an active volcanic island using a different approach and stress indicators.
Wealth and price distribution by diffusive approximation in a repeated prediction market
NASA Astrophysics Data System (ADS)
Bottazzi, Giulio; Giachini, Daniele
2017-04-01
The approximate invariant densities of agents' wealth and prices in a repeated prediction market model are derived using the Fokker-Planck equation of the associated continuous-time jump process. We show that the approximation obtained from the evolution of the log-wealth difference can be reliably exploited to compute all the quantities of interest over the whole acceptable parameter space. When the risk aversion of the trader is high enough, we derive an explicit closed-form solution for the price distribution which is asymptotically correct.
Waters, Brian W; Hung, Yen-Con
2014-04-01
Chlorinated water and electrolyzed oxidizing (EO) water solutions were made to compare the free chlorine stability and microbicidal efficacy of chlorine-containing solutions with different properties. Reduction of Escherichia coli O157:H7 was greatest in fresh samples (approximately 9.0 log CFU/mL reduction). Chlorine loss in "aged" samples (samples left in open bottles) was greatest (approximately 40 mg/L free chlorine loss in 24 h) at low pH (approximately 2.5) and high chloride (Cl(-)) concentrations (greater than 150 mg/L). Reduction of E. coli O157:H7 was also negatively impacted (<1.0 log CFU/mL reduction) in aged samples with a low pH and high Cl(-). Higher pH values (approximately 6.0) did not appear to have a significant effect on free chlorine loss or numbers of surviving microbial cells when fresh and aged samples were compared. This study found chloride levels in the chlorinated and EO water solutions had a reduced effect on both free chlorine stability and its microbicidal efficacy in the low pH solutions. Greater concentrations of chloride in pH 2.5 samples resulted in decreased free chlorine stability and lower microbicidal efficacy. © 2014 Institute of Food Technologists®
The Italian primary school-size distribution and the city-size: a complex nexus
Belmonte, Alessandro; Di Clemente, Riccardo; Buldyrev, Sergey V.
2014-01-01
We characterize the statistical law according to which Italian primary school sizes are distributed. We find that the school-size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape suggesting some source of heterogeneity in the school organization that we uncover by an in-depth analysis of the relation between school-size and city-size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed, shedding light on the relation between policy and geographical features. PMID:24954714
Development of Stable, Low Resistance Solder Joints for Space-Flight HTS Lead Assemblies
NASA Technical Reports Server (NTRS)
Canavan, Edgar R.; Chiao, Meng; Panashchenko, Lyudmyla; Sampson, Michael
2017-01-01
The solder joints in spaceflight high temperature superconductor (HTS) lead assemblies for certain astrophysics missions have strict constraints on size and power dissipation. In addition, the joints must tolerate years of storage at room temperature, many thermal cycles, and several vibration tests between their manufacture and their final operation on orbit. As reported previously, solder joints between REBCO coated conductors and normal metal traces for the Astro-H mission showed low temperature joint resistance that grew approximately as log time over the course of months. Although the assemblies worked without issue in orbit, for the upcoming X-ray Astrophysics Recovery Mission we are attempting to improve our solder process to give lower, more stable, and more consistent joint resistance. We produce numerous sample joints and measure time- and thermal cycle-dependent resistance, and characterize the joints using x-ray and other analysis tools. For a subset of the joints, we use SEM-EDS to try to understand the physical and chemical processes that affect joint behavior.
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
Photoballistics of volcanic jet activity at Stromboli, Italy
NASA Technical Reports Server (NTRS)
Chouet, B.; Hamisevicz, N.; Mcgetchin, T. R.
1974-01-01
Two night eruptions of the volcano Stromboli were studied through 70-mm photography. Single-camera techniques were used. Particle sphericity, constant velocity in the frame, and radial symmetry were assumed. Properties of the particulate phase found through analysis include: particle size, velocity, total number of particles ejected, angular dispersion and distribution in the jet, time variation of particle size and apparent velocity distribution, averaged volume flux, and kinetic energy carried by the condensed phase. The frequency distributions of particle size and apparent velocities are found to be approximately log normal. The properties of the gas phase were inferred from the fact that it was the transporting medium for the condensed phase. Gas velocity and time variation, volume flux of gas, dynamic pressure, mass erupted, and density were estimated. A CO2-H2O mixture is possible for the observed eruptions. The flow was subsonic. Velocity variations may be explained by an organ pipe resonance. Particle collimation may be produced by a Magnus effect.
Pore-scale modeling of saturated permeabilities in random sphere packings.
Pan, C; Hilpert, M; Miller, C T
2001-12-01
We use two pore-scale approaches, lattice-Boltzmann (LB) and pore-network modeling, to simulate single-phase flow in simulated sphere packings that vary in porosity and sphere-size distribution. For both modeling approaches, we determine the size of the representative elementary volume with respect to the permeability. Permeabilities obtained by LB modeling agree well with Rumpf and Gupte's experiments in sphere packings for small Reynolds numbers. The LB simulations agree well with the empirical Ergun equation for intermediate but not for small Reynolds numbers. We suggest a modified form of Ergun's equation to describe both low and intermediate Reynolds number flows. The pore-network simulations agree well with predictions from the effective-medium approximation but underestimate the permeability due to the simplified representation of the porous media. Based on LB simulations in packings with log-normal sphere-size distributions, we suggest a permeability relation with respect to the porosity, as well as the mean and standard deviation of the sphere diameter.
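For orientation (standard relations, not the paper's modified form), the Ergun equation mentioned above and the Darcy-limit permeability it implies can be written down directly; the sketch below uses the classical Blake-Kozeny coefficient of 150 and hypothetical fluid and packing values.

```python
# Classical Ergun equation and its low-Reynolds-number (Blake-Kozeny)
# permeability for a packing of equal spheres. The paper's modified Ergun
# form and its correction for log-normal sphere-size distributions are not
# reproduced here; all numerical values below are illustrative.
def ergun_pressure_gradient(u, d, porosity, mu=1.0e-3, rho=1.0e3):
    """Pressure gradient [Pa/m] for superficial velocity u [m/s] through
    spheres of diameter d [m] at the given porosity (water-like fluid)."""
    eps = porosity
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * d)
    return viscous + inertial

def blake_kozeny_permeability(d, porosity):
    """Darcy permeability [m^2] implied by the viscous term of Ergun's equation."""
    eps = porosity
    return eps ** 3 * d ** 2 / (150.0 * (1.0 - eps) ** 2)

k = blake_kozeny_permeability(d=1.0e-3, porosity=0.38)
dp_dx = ergun_pressure_gradient(u=1.0e-3, d=1.0e-3, porosity=0.38)
print(f"permeability ~ {k:.3e} m^2, pressure gradient ~ {dp_dx:.1f} Pa/m")
```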
Radiation exposure assessment for portsmouth naval shipyard health studies.
Daniels, R D; Taulbee, T D; Chen, P
2004-01-01
Occupational radiation exposures of 13,475 civilian nuclear shipyard workers were investigated as part of a retrospective mortality study. Estimates of annual, cumulative and collective doses were tabulated for future dose-response analysis. Record sets were assembled and amended through range checks, examination of distributions and inspection. Methods were developed to adjust for administrative overestimates and dose from previous employment. Uncertainties from doses below the recording threshold were estimated. Low-dose protracted radiation exposures from submarine overhaul and repair predominated. Cumulative doses are best approximated by a hybrid log-normal distribution with arithmetic mean and median values of 20.59 and 3.24 mSv, respectively. The distribution is highly skewed with more than half the workers having cumulative doses <10 mSv and >95% having doses <100 mSv. The maximum cumulative dose is estimated at 649.39 mSv from 15 person-years of exposure. The collective dose was 277.42 person-Sv with 96.8% attributed to employment at Portsmouth Naval Shipyard.
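As a rough consistency check (illustrative only; the study actually fits a hybrid log-normal, which this sketch does not reproduce), a plain log-normal can be recovered from the reported arithmetic mean and median, since the median fixes mu and the mean/median ratio fixes sigma:

```python
# Plain log-normal recovered from the reported mean and median dose.
# This is an approximation for illustration; the abstract's fitted
# distribution is a hybrid log-normal, not the plain one used here.
import numpy as np
from scipy.stats import lognorm

mean_dose, median_dose = 20.59, 3.24          # mSv, from the abstract

mu = np.log(median_dose)                                  # median = exp(mu)
sigma = np.sqrt(2.0 * np.log(mean_dose / median_dose))    # mean = exp(mu + sigma^2/2)

dist = lognorm(s=sigma, scale=np.exp(mu))
print(f"mu = {mu:.3f}, sigma = {sigma:.3f}")
print(f"P(dose < 10 mSv)  = {dist.cdf(10):.2f}")   # compare: 'more than half' < 10 mSv
print(f"P(dose < 100 mSv) = {dist.cdf(100):.2f}")  # compare: '>95%' < 100 mSv
```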
NASA Astrophysics Data System (ADS)
Gershenson, Carlos
Studies of rank distributions have been popular for decades, especially since the work of Zipf. For example, if we rank words of a given language by use frequency (most used word in English is 'the', rank 1; second most common word is 'of', rank 2), the distribution can be approximated roughly with a power law. The same applies for cities (most populated city in a country ranks first), earthquakes, metabolism, the Internet, and dozens of other phenomena. We recently proposed "rank diversity" to measure how ranks change in time, using the Google Books Ngram dataset. Studying six languages between 1800 and 2009, we found that the rank diversity curves of languages are universal, adjusted with a sigmoid on a log-normal scale. We are studying several other datasets (sports, economies, social systems, urban systems, earthquakes, artificial life). Rank diversity seems to be universal, independently of the shape of the rank distribution. I will present our work in progress towards a general description of the features of rank change in time, along with simple models which reproduce it.
Physical Properties of Gas Hydrates: A Review
Gabitto, Jorge F.; Tsouris, Costas
2010-01-01
Methane gas hydrates in sediments have been studied by several investigators as a possible future energy resource. Recent hydrate reserves have been estimated at approximately 10^16 m^3 of methane gas worldwide at standard temperature and pressure conditions. In situ dissociation of natural gas hydrate is necessary in order to commercially exploit the resource from the natural-gas-hydrate-bearing sediment. The presence of gas hydrates in sediments dramatically alters some of the normal physical properties of the sediment. These changes can be detected by field measurements and by down-hole logs. An understanding of the physical properties of hydrate-bearing sediments is necessary for interpretation of geophysical data collected in field settings, borehole, and slope stability analyses; reservoir simulation; and production models. This work reviews information available in literature related to the physical properties of sediments containing gas hydrates. A brief review of the physical properties of bulk gas hydrates is included. Detection methods, morphology, and relevant physical properties of gas-hydrate-bearing sediments are also discussed.
Analysis of cell mechanics in single vinculin-deficient cells using a magnetic tweezer
NASA Technical Reports Server (NTRS)
Alenghat, F. J.; Fabry, B.; Tsai, K. Y.; Goldmann, W. H.; Ingber, D. E.
2000-01-01
A magnetic tweezer was constructed to apply controlled tensional forces (10 pN to greater than 1 nN) to transmembrane receptors via bound ligand-coated microbeads while optically measuring lateral bead displacements within individual cells. Use of this system with wild-type F9 embryonic carcinoma cells and cells from a vinculin knockout mouse F9 Vin (-/-) revealed much larger differences in the stiffness of the transmembrane integrin linkages to the cytoskeleton than previously reported using related techniques that measured average mechanical properties of large cell populations. The mechanical properties measured varied widely among cells, exhibiting an approximately log-normal distribution. The median lateral bead displacement was 2-fold larger in F9 Vin (-/-) cells compared to wild-type cells whereas the arithmetic mean displacement only increased by 37%. We conclude that vinculin serves a greater mechanical role in cells than previously reported and that this magnetic tweezer device may be useful for probing the molecular basis of cell mechanics within single cells. Copyright 2000 Academic Press.
Bellin, Alberto; Tonina, Daniele
2007-10-30
Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that under the hypothesis of statistical stationarity leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and for the first time it shows the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a-priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
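The statement that the Beta model "is fully characterized by the first two moments" corresponds to a standard moment-matching step. The sketch below shows that step for a concentration normalized to lie in [0, 1]; the mean, variance and threshold values are hypothetical, not taken from the Cape Cod data.

```python
# Moment-matching a Beta distribution to a (normalized) concentration mean and
# variance, then evaluating an exceedance probability. Values are illustrative.
from scipy.stats import beta

def beta_from_moments(mean, var):
    """Return (alpha, beta) of a Beta distribution with the given moments."""
    if not 0.0 < mean < 1.0 or var >= mean * (1.0 - mean):
        raise ValueError("moments incompatible with a Beta distribution")
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

a, b = beta_from_moments(mean=0.2, var=0.01)   # hypothetical first two moments
dist = beta(a, b)
print(f"alpha = {a:.2f}, beta = {b:.2f}")
print(f"P(C/C0 > 0.5) = {dist.sf(0.5):.4f}")   # probability of exceeding a threshold
```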
On the identification of Dragon Kings among extreme-valued outliers
NASA Astrophysics Data System (ADS)
Riva, M.; Neuman, S. P.; Guadagnini, A.
2013-07-01
Extreme values of earth, environmental, ecological, physical, biological, financial and other variables often form outliers to heavy tails of empirical frequency distributions. Quite commonly such tails are approximated by stretched exponential, log-normal or power functions. Recently there has been an interest in distinguishing between extreme-valued outliers that belong to the parent population of most data in a sample and those that do not. The first type, called Gray Swans by Nassim Nicholas Taleb (often confused in the literature with Taleb's totally unknowable Black Swans), is drawn from a known distribution of the tails which can thus be extrapolated beyond the range of sampled values. However, the magnitudes and/or space-time locations of unsampled Gray Swans cannot be foretold. The second type of extreme-valued outliers, termed Dragon Kings by Didier Sornette, may in his view be sometimes predicted based on how other data in the sample behave. This intriguing prospect has recently motivated some authors to propose statistical tests capable of identifying Dragon Kings in a given random sample. Here we apply three such tests to log air permeability data measured on the faces of a Berea sandstone block and to synthetic data generated in a manner statistically consistent with these measurements. We interpret the measurements to be, and generate synthetic data that are, samples from α-stable sub-Gaussian random fields subordinated to truncated fractional Gaussian noise (tfGn). All these data have frequency distributions characterized by power-law tails with extreme-valued outliers about the tail edges.
Results and interpretation of exploratory drilling near the Picacho Fault, south-central Arizona
Holzer, Thomas L.
1978-01-01
Modern surface faulting along the Picacho fault, east of Picacho, Arizona, has been attributed to ground-water withdrawal. In September 1977, three exploratory test holes were drilled 5 km east of Picacho and across the Picacho fault to investigate subsurface conditions and the mechanism of the faulting. The holes were logged by conventional geophysical and geologic methods. Piezometers were set in each hole and have been monitored since September 1977. The drilling indicates that the unconsolidated alluvium beneath the surface fault is approximately 310 m thick. Drilling and piezometer data and an associated seismic refraction survey indicate that the modern faulting is coincident with a preexisting, high-angle, normal fault that offsets units within the alluvium as well as the underlying bedrock. Piezometer and neutron log data indicate that the preexisting fault behaves as a partial ground-water barrier. Monitoring of the piezometers indicates that the magnitude of the man-induced difference in water level across the preexisting fault is seasonal in nature, essentially disappearing during periods of water-level recovery. The magnitude of the seasonal difference in water level, however, appears to be sufficient to account for the modern fault offset by localized differential compaction caused by a difference in water level across the preexisting fault. In addition, repeated level surveys since September 1977 of bench marks across the surface fault and near the piezometers have indicated fault movement that corresponds to fluctuations of water level.
Post-fire land management: Comparative effects of different strategies on hillslope sediment yield
NASA Astrophysics Data System (ADS)
Cole, R.; Bladon, K. D.; Wagenbrenner, J.; Coe, D. B. R.
2017-12-01
High-severity wildfire can increase erosion on burned, forested hillslopes. Salvage logging is a post-fire land management practice to extract economic value from burned landscapes, reduce fuel loads, and improve forest safety. Few studies assess the impact of post-fire salvage logging or alternative land management approaches on erosion in forested landscapes, especially in California. In September 2015, the Valley Fire burned approximately 31,366 ha of forested land and wildland-urban interface in California's Northern Coast Range, including most of Boggs Mountain Demonstration State Forest. The primary objective of our study is to quantify erosion rates at the plot scale (~75 m2) for different post-fire land management practices, including mechanical logging and subsoiling (or ripping) after logging. We measured sediment yields using sediment fences in four sets of replicated plots. We also estimated ground cover in each plot using three randomly positioned 1-meter quadrats. We are also measuring rainfall near each plot to understand hydrologic factors that influence erosion. Preliminary results indicate that burned, unlogged reference plots yielded the most sediment over the winter rainy season (3.3 kg m-2). Sediment yields from burned and logged plots (0.9 kg m-2) and from burned, logged, and ripped plots (0.7 kg m-2) were substantially lower. Burned and unlogged reference plots had the least ground cover (49%), while ground cover was higher and more similar between logged (65%) and logged and ripped (72%) plots. These initial results contrast with previous studies in which the effect of post-fire salvage logging ranged from no measured impact to increased sediment yield related to salvage logging.
Coyle, David R.; Brissey, Courtney L.; Gandhi, Kamal J. K.
2015-01-02
1. We characterized subcortical insect assemblages in economically important eastern cottonwood (Populus deltoides Bartr.), sycamore (Platanus occidentalis L.) and sweetgum (Liquidambar styraciflua L.) plantations in the southeastern U.S.A. Furthermore, we compared insect responses between freshly-cut plant material by placing traps directly over cut hardwood logs (trap-logs), traps baited with ethanol lures and unbaited (control) traps. 2. We captured a total of 15 506 insects representing 127 species in four families in 2011 and 2013. Approximately 9% and 62% of total species and individuals, respectively, and 23% and 79% of total Scolytinae species and individuals, respectively, were non-native to North America. 3. We captured more Scolytinae using cottonwood trap-logs compared with control traps in both years, although this was the case with sycamore and sweetgum only in 2013. More woodborers were captured using cottonwood and sweetgum trap-logs compared with control traps in both years, although only with sycamore in 2013. 4. Ethanol was an effective lure for capturing non-native Scolytinae; however, not all non-native species were captured using ethanol lures. Ambrosiophilus atratus (Eichhoff) and Hypothenemus crudiae (Panzer) were captured with both trap-logs and control traps, whereas Coccotrypes distinctus (Motschulsky) and Xyleborus glabratus Eichhoff were only captured on trap-logs. 5. Indicator species analysis revealed that certain scolytines [e.g. Cnestus mutilatus (Blandford) and Xylosandrus crassiusculus (Motschulsky)] showed significant associations with trap-logs or ethanol baits in poplar or sweetgum trap-logs. In general, the species composition of subcortical insects, especially woodboring insects, was distinct among the three tree species and between those associated with trap-logs and control traps.
Bari, M L; Nakauma, M; Todoriki, S; Juneja, Vijay K; Isshiki, K; Kawamoto, S
2005-02-01
Ionizing radiation can be effective in controlling the growth of food spoilage and foodborne pathogenic bacteria. This study reports on an investigation of the effectiveness of irradiation treatment to eliminate Listeria monocytogenes on laboratory-inoculated broccoli, cabbage, tomatoes, and mung bean sprouts. Irradiation of broccoli and mung bean sprouts at 1.0 kGy resulted in reductions of approximately 4.88 and 4.57 log CFU/g, respectively, of a five-strain cocktail of L. monocytogenes. Reductions of approximately 5.25 and 4.14 log CFU/g were found with cabbage and tomato, respectively, at a similar dose. The appearance, color, texture, taste, and overall acceptability did not undergo significant changes after 7 days of postirradiation storage at 4 degrees C, in comparison with control samples. Therefore, low-dose ionizing radiation treatment could be an effective method for eliminating L. monocytogenes on fresh and fresh-cut produce.
Disinfection of low quality wastewaters by ultraviolet irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zukovs, G.; Kollar, J.; Monteith, H.D.
1986-03-01
Pilot-scale disinfection of simulated combined sewer overflow (CSO) by ultraviolet light (UV) and by high-rate chlorination were compared. Disinfection efficiency was evaluated over a range of dosages and contact times for fecal coliforms, enterococci, P. aeruginosa, and Salmonella spp. Fecal coliforms were reduced 3.0 to 3.2 logs at a UV dose of approximately 350,000 microwatt-seconds/cm2. High-rate chlorination, at a contact time of 2.0 minutes and total residual chlorine concentration of approximately 25 mg/L (as Cl2), reduced fecal coliforms by 4.0 logs. Pathogens were reduced to detection limits by both processes. Neither photoreactivation nor regrowth occurred in the disinfected effluents. The estimated capital costs of CSO disinfection by UV irradiation were consistently higher than for chlorination/dechlorination; operation and maintenance costs were similar. 19 references.
Computer analysis of digital well logs
Scott, James H.
1984-01-01
A comprehensive system of computer programs has been developed by the U.S. Geological Survey for analyzing digital well logs. The programs are operational on a minicomputer in a research well-logging truck, making it possible to analyze and replot the logs while at the field site. The minicomputer also serves as a controller of digitizers, counters, and recorders during acquisition of well logs. The analytical programs are coordinated with the data acquisition programs in a flexible system that allows the operator to make changes quickly and easily in program variables such as calibration coefficients, measurement units, and plotting scales. The programs are designed to analyze the following well-logging measurements: natural gamma-ray, neutron-neutron, dual-detector density with caliper, magnetic susceptibility, single-point resistance, self potential, resistivity (normal and Wenner configurations), induced polarization, temperature, sonic delta-t, and sonic amplitude. The computer programs are designed to make basic corrections for depth displacements, tool response characteristics, hole diameter, and borehole fluid effects (when applicable). Corrected well-log measurements are output to magnetic tape or plotter with measurement units transformed to petrophysical and chemical units of interest, such as grade of uranium mineralization in percent eU3O8, neutron porosity index in percent, and sonic velocity in kilometers per second.
BRIEF REPORT: A simple interpolation formula for the spectra of power-law and log potentials
NASA Astrophysics Data System (ADS)
Hall, Richard L.
2000-06-01
Non-relativistic potential models are considered of the pure power V(r) = sgn(q) r^q and logarithmic V(r) = ln(r) types. It is shown that, from the spectral viewpoint, these potentials are actually in a single family. The log spectra can be obtained from the power spectra by the limit q→0 taken in a smooth representation P_nl(q) for the eigenvalues E_nl(q). A simple approximation formula is developed which yields the first 30 eigenvalues with an error less than 0.04%.
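For context (a standard identity, not quoted from the paper), the sense in which the log potential is the q → 0 member of the power-law family can be seen by shifting and rescaling the power potential before taking the limit:

```latex
% Why ln(r) sits at q = 0 in the power-law family (standard limit; the paper's
% smooth representation P_nl(q) itself is not reproduced here).
\[
  V_q(r) \;=\; \frac{\operatorname{sgn}(q)\,r^{q} - \operatorname{sgn}(q)}{|q|}
         \;=\; \frac{r^{q} - 1}{q}
         \;\xrightarrow[\,q \to 0\,]{}\; \ln r ,
  \qquad\text{since}\quad r^{q} = e^{q \ln r} = 1 + q \ln r + O(q^{2}).
\]
```

Since the shift and rescaling only translate and rescale the spectrum, the eigenvalue representation can be made smooth through q = 0, which is the limit the abstract refers to.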
Calibration Tests of a German Log Rodmeter
NASA Technical Reports Server (NTRS)
Mottard, Elmo J.; Stillman, Everette R.
1949-01-01
A German log rodmeter of the pitot static type was calibrated in Langley tank no. 1 at speeds up to 34 knots and angles of yaw from 0 deg to plus or minus 10 3/4 degrees. The dynamic head approximated the theoretical head at 0 degrees yaw but decreased as the yaw was increased. The static head was negative and in general became more negative with increasing speed and yaw. Cavitation occurred at speeds above 31 knots at 0 deg yaw and 21 knots at 10 3/4 deg yaw.
Emission Measure Distribution and Heating of Two Active Region Cores
NASA Technical Reports Server (NTRS)
Tripathi, Durgesh; Klimchuk, James A.; Mason, Helen E.
2011-01-01
Using data from the Extreme-ultraviolet Imaging Spectrometer aboard Hinode, we have studied the coronal plasma in the core of two active regions. Concentrating on the area between opposite polarity moss, we found emission measure distributions having an approximate power-law form EM proportional to T(exp 2.4) from log T = 5.55 up to a peak at log T = 6.57. The observations are explained extremely well by a simple nanoflare model. However, in the absence of additional constraints, the observations could possibly also be explained by steady heating.
A Bayesian Hybrid Adaptive Randomisation Design for Clinical Trials with Survival Outcomes.
Moatti, M; Chevret, S; Zohar, S; Rosenberger, W F
2016-01-01
Response-adaptive randomisation designs have been proposed to improve the efficiency of phase III randomised clinical trials and improve the outcomes of the clinical trial population. In the setting of failure time outcomes, Zhang and Rosenberger (2007) developed a response-adaptive randomisation approach that targets an optimal allocation, based on a fixed sample size. The aim of this research is to propose a response-adaptive randomisation procedure for survival trials with an interim monitoring plan, based on the following optimal criterion: for fixed variance of the estimated log hazard ratio, what allocation minimizes the expected hazard of failure? We demonstrate the utility of the design by redesigning a clinical trial on multiple myeloma. To handle continuous monitoring of data, we propose a Bayesian response-adaptive randomisation procedure, where the log hazard ratio is the effect measure of interest. Combining the prior with the normal likelihood, the mean posterior estimate of the log hazard ratio allows derivation of the optimal target allocation. We perform a simulation study to assess and compare the performance of this proposed Bayesian hybrid adaptive design to those of fixed, sequential or adaptive - either frequentist or fully Bayesian - designs. Non informative normal priors of the log hazard ratio were used, as well as mixture of enthusiastic and skeptical priors. Stopping rules based on the posterior distribution of the log hazard ratio were computed. The method is then illustrated by redesigning a phase III randomised clinical trial of chemotherapy in patients with multiple myeloma, with mixture of normal priors elicited from experts. As expected, there was a reduction in the proportion of observed deaths in the adaptive vs. non-adaptive designs; this reduction was maximized using a Bayes mixture prior, with no clear-cut improvement by using a fully Bayesian procedure. The use of stopping rules allows a slight decrease in the observed proportion of deaths under the alternate hypothesis compared with the adaptive designs with no stopping rules. Such Bayesian hybrid adaptive survival trials may be promising alternatives to traditional designs, reducing the duration of survival trials, as well as optimizing the ethical concerns for patients enrolled in the trial.
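To make the conjugate step concrete (a hedged sketch: the prior values, interim estimate, and stopping threshold below are hypothetical, and the Zhang-Rosenberger allocation formula itself is not reproduced), a normal prior on the log hazard ratio combined with an asymptotically normal likelihood gives a normal posterior whose mean feeds the target allocation and whose tail probability can drive a stopping rule:

```python
# Normal-normal update for the log hazard ratio, plus an illustrative
# posterior-probability stopping rule. All numbers are hypothetical.
import math
from scipy.stats import norm

def posterior_log_hr(prior_mean, prior_sd, est_log_hr, est_se):
    """Precision-weighted posterior for the log hazard ratio."""
    w_prior, w_data = 1.0 / prior_sd**2, 1.0 / est_se**2
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * est_log_hr)
    return post_mean, math.sqrt(post_var)

# Skeptical prior centred on no effect; interim estimate from accumulated data.
m, s = posterior_log_hr(prior_mean=0.0, prior_sd=0.5,
                        est_log_hr=-0.35, est_se=0.20)

p_benefit = norm.cdf(0.0, loc=m, scale=s)      # P(log HR < 0 | data)
stop_for_efficacy = p_benefit > 0.99           # illustrative threshold
print(f"posterior mean = {m:.3f}, sd = {s:.3f}, P(benefit) = {p_benefit:.3f}")
print(f"stop for efficacy: {stop_for_efficacy}")
```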
NASA Astrophysics Data System (ADS)
Asfahani, Jamal
2017-08-01
An alternative approach using nuclear neutron-porosity and electrical resistivity well logging with long (64 inch) and short (16 inch) normal techniques is proposed to estimate the porosity and the hydraulic conductivity (K) of the basaltic aquifers in Southern Syria. This method is applied to the available logs of the Kodana well in Southern Syria. The K value obtained with this technique is reasonable and comparable with the hydraulic conductivity value of 3.09 m/day obtained by the pumping test carried out at the Kodana well. The proposed alternative well-logging methodology appears promising and could be applied in basaltic environments to estimate the hydraulic conductivity parameter. However, more detailed research is still required to make this proposed technique fully reliable in basaltic environments.
Comparison of various techniques for calibration of AIS data
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Yamaguchi, Y.; Lyon, R. J. P.
1986-01-01
The Airborne Imaging Spectrometer (AIS) samples a region which is strongly influenced by decreasing solar irradiance at longer wavelengths and strong atmospheric absorptions. Four techniques (the Log Residual, the Least Upper Bound Residual, the Flat Field Correction, and calibration using field reflectance measurements) were investigated as means for removing these two features. Of the four techniques, field reflectance calibration proved to be superior in terms of noise and normalization. Of the other three techniques, the Log Residual was superior when applied to areas which did not contain one dominant cover type. In heavily vegetated areas, the Log Residual proved to be ineffective. After removing anomalously bright data values, the Least Upper Bound Residual proved to be almost as effective as the Log Residual in sparsely vegetated areas and much more effective in heavily vegetated areas. Of all the techniques, the Flat Field Correction was the noisiest.
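For reference, the Log Residual normalization named above is commonly defined, following Green and Craig, as the log radiance minus its per-pixel spectral mean and per-band scene mean (plus the grand mean). The sketch below assumes that definition and uses a synthetic data cube; it is not the processing code used in the study.

```python
# Hedged sketch of a Log Residual correction on a (pixels x bands) data cube.
# The cube here is synthetic lognormal data standing in for AIS radiances.
import numpy as np

def log_residual(cube):
    """Log residuals: log(cube) centred on pixel means, band means, grand mean."""
    logc = np.log(cube)
    pixel_mean = logc.mean(axis=1, keepdims=True)  # spectral mean of each pixel
    band_mean = logc.mean(axis=0, keepdims=True)   # scene mean of each band
    return logc - pixel_mean - band_mean + logc.mean()

rng = np.random.default_rng(1)
cube = rng.lognormal(mean=5.0, sigma=0.2, size=(1000, 128))  # hypothetical scene
residuals = log_residual(cube)
print(residuals.shape, float(residuals.mean()))  # residuals are centred near zero
```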
Growth and survival of Salmonella in ground black pepper (Piper nigrum).
Keller, Susanne E; VanDoren, Jane M; Grasso, Elizabeth M; Halik, Lindsay A
2013-05-01
A four serovar cocktail of Salmonella was inoculated into ground black pepper (Piper nigrum) at different water activity (aw) levels at a starting level of 4-5 log cfu/g and incubated at 25 and at 35 °C. At 35 °C and aw of 0.9886 ± 0.0006, the generation time in ground black pepper was 31 ± 3 min with a lag time of 4 ± 1 h. Growth at 25 °C had a longer lag, but generation time was not statistically different from growth at 35 °C. The aw threshold for growth was determined to be 0.9793 ± 0.0027 at 35 °C. To determine survival during storage conditions, ground black pepper was inoculated at approximately 8 log cfu/g and stored at 25 and 35 °C at high (97% RH) and ambient (≤40% RH) humidity. At high relative humidity, aw increased to approximately 0.8-0.9 after approximately 20 days at both temperatures and no Salmonella was detected after 100 and 45 days at 25 and 35 °C, respectively. Under ambient humidity, populations showed an initial decrease of 3-4 log cfu/g, then remained stable for over 8 months at 25 and 35 °C. Results of this study indicate Salmonella can readily grow at permissive aw in ground black pepper and may persist for an extended period of time under typical storage conditions. Published by Elsevier Ltd.
Accurate computation of survival statistics in genome-wide studies.
Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J; Upfal, Eli
2015-05-01
A key challenge in genomics is to identify genetic variants that distinguish patients with different survival time following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known association to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations.
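For contrast with ExaLT's exact computation, the standard asymptotic log-rank statistic that the abstract criticizes can be written in a few lines: observed-minus-expected events in one group summed over event times, with a hypergeometric variance and a chi-square reference. The sketch below is generic, uses made-up data, and is not the ExaLT algorithm.

```python
# Asymptotic two-group log-rank test (the approximation ExaLT replaces).
import numpy as np
from scipy.stats import chi2

def logrank(time, event, group):
    """time: follow-up times; event: 1=event, 0=censored; group: 0/1 labels."""
    time, event, group = map(np.asarray, (time, event, group))
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = o_minus_e ** 2 / var
    return stat, chi2.sf(stat, df=1)

# Hypothetical small cohort: times, event indicators, group labels.
stat, p = logrank([5, 8, 12, 14, 20, 21, 25, 30],
                  [1, 1, 0, 1, 1, 0, 1, 1],
                  [0, 0, 0, 0, 1, 1, 1, 1])
print(f"chi-square = {stat:.3f}, asymptotic p = {p:.3f}")
```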
Inactivation of Salmonella Enteritidis on lettuces used by minimally processed vegetable industries.
Silveira, Josete Bailardi; Hessel, Claudia Titze; Tondo, Eduardo Cesar
2017-01-30
Washing and disinfection methods used by minimally processed vegetable industries of Southern Brazil were reproduced in the laboratory in order to verify their effectiveness in reducing Salmonella Enteritidis SE86 (SE86) on lettuce. Among the five industries investigated, four carried out washing with potable water followed by disinfection with 200 ppm sodium hypochlorite during different immersion times. The washing procedure alone decreased the SE86 population by approximately 1 log CFU/g, and immersion times of 1, 2, 5, and 15 minutes in disinfectant solution produced reductions ranging from 2.06±0.10 to 3.01±0.21 log CFU/g. Rinsing alone reduced counts by 0.12±0.63 to 1.90±1.07 log CFU/g. The most effective method was washing followed by disinfection with 200 ppm sodium hypochlorite for 15 minutes and a final rinse with potable water, reaching a 5.83 log CFU/g reduction. However, no statistical differences were observed in the reduction rates after different immersion times. A time interval of 1 to 2 minutes may be an advantage to the minimally processed vegetable industries in order to optimize the process without putting food safety at risk.
Use of polynomial expressions to describe the bioconcentration of hydrophobic chemicals by fish
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connell, D.W.; Hawker, D.W.
1988-12-01
For the bioconcentration of hydrophobic chemicals by fish, relationships have been previously established between uptake rate constants (k1) and the octanol/water partition coefficient (Kow), and also between the clearance rate constant (k2) and Kow. These have been refined and extended on the basis of data for chlorinated hydrocarbons, and closely related compounds including polychlorinated dibenzodioxins, that covered a wider range of hydrophobicity (2.5 less than log Kow less than 9.5). This has allowed the development of new relationships between log Kow and various factors, including the bioconcentration factor (as log KB), equilibrium time (as log teq), and maximum biotic concentration (as log CB), which include extremely hydrophobic compounds previously not taken into account. The shapes of the curves generated by these equations are in qualitative agreement with theoretical prediction and are described by polynomial expressions which are generally approximately linear over the more limited range of log Kow values used to develop previous relationships. The influences of factors such as hydrophobicity, aqueous solubility, molecular weight, lipid solubility, and also exposure time were considered. Decreasing lipid solubilities of extremely hydrophobic chemicals were found to result in increasing clearance rate constants, as well as decreasing equilibrium times and bioconcentration factors.
Photometric analysis of the open cluster NGC 6611
NASA Astrophysics Data System (ADS)
Suarez Nunez, Johanna
2007-08-01
Matlab programs were designed to apply differential aperture photometry. Two images were taken with a charge-coupled device (CCD) in the visible (V) and blue (B) filters to calculate physical parameters of each studied star belonging to the open cluster NGC 6611: the flux (f), the apparent magnitude (m_V) and its reddening-corrected value (V_0), the color index (B-V) and (B-V)_0, the log of the effective temperature (log T_eff), the absolute magnitude (M_V), the bolometric magnitude (M_B), and log(L_*/L_sun). Upon obtaining the parameters, the color-magnitude diagram was plotted and, by fitting to the main sequence, the distance modulus and thus the distance to the cluster were found. The stars were assumed to be at the same distance and born at approximately the same moment.
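The photometric chain summarized above follows textbook relations; the sketch below illustrates them with hypothetical numbers (the instrumental zero point, reddening and distance modulus are placeholders, not the thesis values).

```python
# Standard photometric relations: magnitude from flux, dereddening, and
# distance from the distance modulus. All numerical inputs are hypothetical.
import numpy as np

def apparent_magnitude(flux, zero_point=25.0):
    """m = zero_point - 2.5 log10(flux)."""
    return zero_point - 2.5 * np.log10(flux)

def deredden(m_v, b_minus_v, e_b_v, r_v=3.1):
    """V0 = V - R_V * E(B-V); (B-V)0 = (B-V) - E(B-V)."""
    return m_v - r_v * e_b_v, b_minus_v - e_b_v

def distance_pc(distance_modulus):
    """d [pc] = 10**((m - M + 5) / 5)."""
    return 10.0 ** ((distance_modulus + 5.0) / 5.0)

m_v = apparent_magnitude(flux=1.2e-5)
v0, bv0 = deredden(m_v, b_minus_v=0.45, e_b_v=0.8)
print(f"m_V = {m_v:.2f}, V0 = {v0:.2f}, (B-V)0 = {bv0:.2f}")
print(f"distance for (m - M) = 11.5: {distance_pc(11.5):.0f} pc")
```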
Micromechanical and Electrical Properties of Monolithic Aluminum Nitride at High Temperatures
NASA Technical Reports Server (NTRS)
Goldsby, Jon C.
2000-01-01
Micromechanical spectroscopy of aluminum nitride reveals it to possess extremely low background internal friction at less than 1x10(exp-4) logarithmic decrement (log dec) from 20 to 1200 C. Two mechanical loss peaks were observed, the first at 350 C approximating a single Debye peak with a peak height of 60x10(exp-4) log dec. The second peak was seen at 950 C with a peak height of 20x10(exp-4) log dec and extended from 200 to over 1200 C. These micromechanical observations manifested themselves in the electrical behavior of these materials. Electrical conduction processes were predominately intrinsic. Both mechanical and electrical relaxations appear to be thermally activated processes, with activation energies of 0.78 and 1.32 eV respectively.
Micromechanical and Electrical Properties of Monolithic Aluminum Nitride at High Temperatures
NASA Technical Reports Server (NTRS)
Goldsby, Jon C.
2001-01-01
Micromechanical spectroscopy of aluminum nitride reveals it to possess extremely low background internal friction at less than 1 x 10 (exp -4) logarithmic decrement (log dec.) from 20 to 1200 C. Two mechanical loss peaks were observed, the first at 350 C approximating a single Debye peak with a peak height of 60 x 10 (exp -4) log dec. The second peak was seen at 950 C with a peak height of 20 x 10 (exp -4) log dec. and extended from 200 to over 1200 C. These micromechanical observations manifested themselves in the electrical behavior of these materials. Electrical conduction processes were predominately intrinsic. Both mechanical and electrical relaxations appear to be thermally activated processes, with activation energies of 0.78 and 1.32 eV respectively.
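For readers unfamiliar with the terminology in these two abstracts, a single thermally activated Debye peak in internal friction has the standard form Q^-1 = Delta * (omega*tau) / (1 + (omega*tau)^2) with tau = tau0 * exp(E/kT). The sketch below is purely illustrative: the activation energy, pre-exponential factor, measurement frequency and relaxation strength are all hypothetical, not fits to the aluminum nitride data.

```python
# Generic thermally activated Debye peak in internal friction (illustrative
# parameters only; not a fit to the aluminum nitride measurements above).
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def debye_peak(T, E_ev=1.0, tau0=1e-14, freq_hz=1.0, delta=6e-3):
    """Internal friction Q^-1(T) for a single Debye relaxation."""
    omega = 2.0 * np.pi * freq_hz
    tau = tau0 * np.exp(E_ev / (K_B * T))
    x = omega * tau
    return delta * x / (1.0 + x ** 2)

T = np.linspace(300.0, 1500.0, 600)  # K
q_inv = debye_peak(T)
print(f"peak internal friction {q_inv.max():.1e} at ~{T[np.argmax(q_inv)]:.0f} K")
```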
Abundance and Morphological Effects of Large Woody Debris in Forested Basins of Southern Andes
NASA Astrophysics Data System (ADS)
Andreoli, A.; Comiti, F.; Lenzi, M. A.
2006-12-01
The Southern Andes mountain range represents an ideal location for studying large woody debris (LWD) in streams draining forested basins thanks to the presence of both pristine and managed woodland, and to the generally low level of human alteration of stream corridors. However, no published investigations have been performed so far in such a large region. The investigated sites of this research are three basins (9-13 km2 drainage area, third-order channels) covered by Nothofagus forests: two of them are located in the Southern Chilean Andes (the Tres Arroyos in the Malalcahuello National Reserve and the Rio Toro within the Malleco Natural Reserve) and one basin lies in the Argentinean Tierra del Fuego (the Buena Esperanza basin, near the city of Ushuaia). Measured LWD were all wood pieces larger than 10 cm in diameter and 1 m in length, both in the active channel and in the adjacent active floodplain. Pieces forming log jams were all measured and the geometrical dimensions of jams were taken. Jam type was defined based on the Abbe and Montgomery (2003) classification. Sediment stored behind log-steps and valley jams was estimated by approximating the accumulated sediment as a solid wedge whose geometrical dimensions were measured. Additional information on each LWD piece was recorded during the field survey: type (log, rootwad, log with rootwads attached), orientation to flow, origin (floated, bank erosion, landslide, natural mortality, harvest residuals) and position (log-step, in-channel, channel-bridging, channel margins, bankfull edge). In the Tres Arroyos, the average LWD volume stored within the bankfull channel is 710 m3 ha-1. The average number of pieces is 1,004 per hectare of bankfull channel area. Log-steps represent about 22% of all steps, whereas the elevation loss due to LWD (log-steps and valley jams) accounts for 27% of the total stream potential energy. About 1,600 m3 of sediment (assuming a porosity of 20%) is stored in the main channel behind LWD structures, i.e. approximately 1,000 m3 per km of channel length, corresponding to approximately 150% of the annual sediment yield. In the Rio Toro, the average LWD volume and number of elements stored are much lower, at 117 m3 ha-1 and 215 pieces ha-1, respectively. Neither log-steps nor valley jams were observed, the longitudinal profile appears unaffected by LWD, and no sediment storage can be attributed to woody debris. The low LWD storage and impact in this channel are likely due to the general stability of its hillslopes, in contrast to the Tres Arroyos where extensive landslides and debris flows convey a great deal of wood into the stream. Finally, in the Buena Esperanza, the average LWD volume stored in the active channel is quite low (120 m3 ha-1), but the average number of pieces is the highest, at 1,397 pieces ha-1. This is due to the smaller dimensions of LWD elements delivered by trees growing in a colder climate such as that of Tierra del Fuego. The morphological influence of wood in this channel is nevertheless very important: large valley jams and high log-steps give the channel a macro-scale stepped profile, with a total energy dissipation due to LWD (log-steps and valley jams) of about 24% of the stream potential energy. The sediment stored behind log-steps and valley jams amounts to about 1,290 m3, i.e. 700 m3 km-1, but unfortunately no sediment yield values are available for this basin.
Padé approximant for normal stress differences in large-amplitude oscillatory shear flow
NASA Astrophysics Data System (ADS)
Poungthong, P.; Saengow, C.; Giacomin, A. J.; Kolitawong, C.; Merger, D.; Wilhelm, M.
2018-04-01
Analytical solutions for the normal stress differences in large-amplitude oscillatory shear flow (LAOS), for continuum or molecular models, normally take the inexact form of the first few terms of a series expansion in the shear rate amplitude. Here, we improve the accuracy of these truncated expansions by replacing them with rational functions called Padé approximants. The recent advent of exact solutions in LAOS presents an opportunity to identify accurate and useful Padé approximants. For this identification, we replace the truncated expansion for the corotational Jeffreys fluid with its Padé approximants for the normal stress differences. We uncover the most accurate and useful approximant, the [3,4] approximant, and then test its accuracy against the exact solution [C. Saengow and A. J. Giacomin, "Normal stress differences from Oldroyd 8-constant framework: Exact analytical solution for large-amplitude oscillatory shear flow," Phys. Fluids 29, 121601 (2017)]. We use Ewoldt grids to show the stunning accuracy of our [3,4] approximant in LAOS. We quantify this accuracy with an objective function and then map it onto the Pipkin space. Our two applications illustrate how to use our new approximant reliably. For this, we use the Spriggs relations to generalize our best approximant to multimode, and then, we compare with measurements on molten high-density polyethylene and on dissolved polyisobutylene in isobutylene oligomer.
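The mechanical step the paper relies on, turning truncated series coefficients into a rational Padé approximant, is available directly in SciPy. The sketch below applies it to the exponential series purely as a stand-in; the paper's [3,4] approximant is instead built from the corotational Jeffreys expansion of the normal stress differences, which is not reproduced here.

```python
# Building a Pade approximant from truncated Taylor coefficients (stand-in
# example using exp(x); not the paper's [3,4] normal-stress approximant).
import numpy as np
from math import factorial
from scipy.interpolate import pade

coeffs = [1.0 / factorial(k) for k in range(7)]   # exp(x) up to x^6

p, q = pade(coeffs, 3)      # [3/3] approximant: numerator p, denominator q
x = 2.0
series_value = sum(c * x ** k for k, c in enumerate(coeffs))
print(f"truncated series at x=2: {series_value:.4f}")
print(f"Pade [3/3] at x=2:       {p(x) / q(x):.4f}")
print(f"exact exp(2):            {np.exp(x):.4f}")
```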
NASA Technical Reports Server (NTRS)
Gofford, Jason; Reeves, James N.; Tombesi, Francesco; Braito, Valentina; Turner, T. Jane; Miller, Lance; Cappi, Massimo
2013-01-01
We present the results of a new spectroscopic study of Fe K-band absorption in active galactic nuclei (AGN). Using data obtained from the Suzaku public archive we have performed a statistically driven blind search for Fe XXV He-alpha and/or Fe XXVI Ly-alpha absorption lines in a large sample of 51 Type 1.0-1.9 AGN. Through extensive Monte Carlo simulations we find that statistically significant absorption is detected at E greater than or approximately equal to 6.7 keV in 20/51 sources at the P(sub MC) greater than or equal to 95 per cent level, which corresponds to approximately 40 per cent of the total sample. In all cases, individual absorption lines are detected independently and simultaneously amongst the two (or three) available X-ray imaging spectrometer detectors, which confirms the robustness of the line detections. The most frequently observed outflow phenomenology consists of two discrete absorption troughs corresponding to Fe XXV He-alpha and Fe XXVI Ly-alpha at a common velocity shift. From xstar fitting, the mean column density and ionization parameter for the Fe K absorption components are log (N(sub H)/square centimeter) approximately equal to 23 and log (Xi/erg centimeter per second) approximately equal to 4.5, respectively. Measured outflow velocities span a continuous range from less than 1500 kilometers per second up to approximately 100,000 kilometers per second, with mean and median values of approximately 0.1 c and approximately 0.056 c, respectively. The results of this work are consistent with those recently obtained using XMM-Newton and independently provide strong evidence for the existence of very highly ionized circumnuclear material in a significant fraction of both radio-quiet and radio-loud AGN in the local universe.
Rainford, James L; Hofreiter, Michael; Mayhew, Peter J
2016-01-08
Skewed body size distributions and the high relative richness of small-bodied taxa are a fundamental property of a wide range of animal clades. The evolutionary processes responsible for generating these distributions are well described in vertebrate model systems but have yet to be explored in detail for other major terrestrial clades. In this study, we explore the macro-evolutionary patterns of body size variation across families of Hexapoda (insects and their close relatives), using recent advances in phylogenetic understanding, with an aim to investigate the link between size and diversity within this ancient and highly diverse lineage. The maximum, minimum and mean-log body lengths of hexapod families are all approximately log-normally distributed, consistent with previous studies at lower taxonomic levels, and contrasting with skewed distributions typical of vertebrate groups. After taking phylogeny and within-tip variation into account, we find no evidence for a negative relationship between diversification rate and body size, suggesting decoupling of the forces controlling these two traits. Likelihood-based modeling of the log-mean body size identifies distinct processes operating within Holometabola and Diptera compared with other hexapod groups, consistent with accelerating rates of size evolution within these clades, while as a whole, hexapod body size evolution is found to be dominated by neutral processes including significant phylogenetic conservatism. Based on our findings, we suggest that the use of models derived from well-studied but atypical clades, such as vertebrates, may lead to misleading conclusions when applied to other major terrestrial lineages. Our results indicate that within hexapods, and within the limits of current systematic and phylogenetic knowledge, insect diversification is generally unfettered by size-biased macro-evolutionary processes, and that these processes over large timescales tend to converge on apparently neutral evolutionary processes. We also identify limitations in the available data within the clade and in the modeling approaches used for trees of higher taxa; resolving these limitations may collectively enhance our understanding of this key component of terrestrial ecosystems.
Nested taxa-area curves for eastern United States floras
Bennett, J.P.
1997-01-01
The slopes of log-log species-area curves have been studied extensively and found to be influenced by the range of areas under study. Two such studies of eastern United States floras have yielded species-area curve slopes which differ by more than 100%: 0.251 and 0.113. The first slope may be too steep because the flora of the world was included, and both may be too steep because noncontiguous areas were used. These two hypotheses were tested using a set of nested floras centered in Ohio and continuing up to the flora of the world. The results suggest that this set of eastern United States floras produces a log-log species-area curve with a slope of approximately 0.20 with the flora of the world excluded, and regardless of whether or not the floras are from nested areas. Genera- and family-area curves are less steep than species-area curves and show similar patterns. Taxa ratio curves also increase with area, with the species/family ratio showing the steepest slope.
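A minimal sketch of how a log-log species-area slope is estimated by ordinary least squares is given below; the area and species counts are invented placeholders, not the nested eastern United States floras analysed in the study.

```python
# Sketch of estimating the slope z of a log-log species-area curve,
# S = c * A**z, by ordinary least squares on log-transformed data.
# The area/species values below are invented placeholders.
import numpy as np

area_km2 = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7])
species = np.array([420, 700, 1150, 1900, 3100, 5200])

z, log_c = np.polyfit(np.log10(area_km2), np.log10(species), 1)
print(f"species-area slope z = {z:.3f}, intercept c = {10**log_c:.1f}")
```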
Gu, Jiaojiao; Jing, Lulu; Ma, Xiaotao; Zhang, Zhaofeng; Guo, Qianying; Li, Yong
2015-12-01
The present study aimed to explore the metabolic response of oat bran consumption in dyslipidemic rats by a high-throughput metabolomics approach. Four groups of Sprague-Dawley rats were used: N group (normal chow diet), M group (dyslipidemia induced by 4-week high-fat feeding, then normal chow diet), OL group and OH group (dyslipidemia induced, then normal chow diet supplemented with 10.8% or 43.4% naked oat bran). The intervention lasted for 12 weeks. Gas chromatography quadrupole time-of-flight mass spectrometry was used to identify serum metabolite profiles. Results confirmed the effects of oat bran on improving lipidemic variables and showed distinct metabolomic profiles associated with diet intervention. A number of endogenous molecules were changed by the high-fat diet and normalized following supplementation of naked oat bran. Elevated levels of serum unsaturated fatty acids including arachidonic acid (log2 fold change = 0.70, P = .02, OH vs. M group), palmitoleic acid (log2 fold change = 1.24, P = .02, OH vs. M group) and oleic acid (log2 fold change = 0.66, P = .04, OH vs. M group) were detected after oat bran consumption. Furthermore, consumption of oat bran was also characterized by higher levels of methionine and S-adenosylmethionine. Pathway exploration found that most of the discriminant metabolites were involved in fatty acid biosynthesis, biosynthesis and metabolism of amino acids, microbial metabolism in diverse environments and biosynthesis of plant secondary metabolites. These results point to potential biomarkers and underlying benefit of naked oat bran in the context of diet-induced dyslipidemia and offer some insights for mechanism exploration. Copyright © 2015 Elsevier Inc. All rights reserved.
The letter contrast sensitivity test: clinical evaluation of a new design.
Haymes, Sharon A; Roberts, Kenneth F; Cruess, Alan F; Nicolela, Marcelo T; LeBlanc, Raymond P; Ramsey, Michael S; Chauhan, Balwantray C; Artes, Paul H
2006-06-01
To compare the reliability, validity, and responsiveness of the Mars Letter Contrast Sensitivity (CS) Test to the Pelli-Robson CS Chart. One eye of 47 normal control subjects, 27 patients with open-angle glaucoma, and 17 with age-related macular degeneration (AMD) was tested twice with the Mars test and twice with the Pelli-Robson test, in random order on separate days. In addition, 17 patients undergoing cataract surgery were tested, once before and once after surgery. The mean Mars CS was 1.62 log CS (0.06 SD) for normal subjects aged 22 to 77 years, with significantly lower values in patients with glaucoma or AMD (P<0.001). Mars test-retest 95% limits of agreement (LOA) were +/-0.13, +/-0.19, and +/-0.24 log CS for normal, glaucoma, and AMD, respectively. In comparison, Pelli-Robson test-retest 95% LOA were +/-0.18, +/-0.19, and +/-0.33 log CS. The Spearman correlation between the Mars and Pelli-Robson tests was 0.83 (P<0.001). However, systematic differences were observed, particularly at the upper-normal end of the range, where Mars CS was lower than Pelli-Robson CS. After cataract surgery, Mars and Pelli-Robson effect size statistics were 0.92 and 0.88, respectively. The results indicate the Mars test has test-retest reliability equal to or better than the Pelli-Robson test and comparable responsiveness. The strong correlation between the tests provides evidence the Mars test is valid. However, systematic differences indicate normative values are likely to be different for each test. The Mars Letter CS Test is a useful and practical alternative to the Pelli-Robson CS Chart.
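The test-retest 95% limits of agreement quoted above follow the usual Bland-Altman recipe (mean difference ± 1.96 × SD of the differences). The sketch below shows the calculation on invented paired log CS scores, not the Mars or Pelli-Robson data.

```python
# Minimal sketch of test-retest 95% limits of agreement (Bland-Altman style)
# for paired log CS measurements. The scores below are invented examples.
import numpy as np

test1 = np.array([1.60, 1.56, 1.64, 1.48, 1.68, 1.52, 1.60, 1.56])
test2 = np.array([1.64, 1.52, 1.60, 1.56, 1.64, 1.48, 1.68, 1.52])

diff = test2 - test1
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)      # half-width of the 95% limits of agreement
print(f"bias = {bias:+.3f} log CS, 95% LOA = +/-{loa:.3f} log CS about the bias")
```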
NASA Astrophysics Data System (ADS)
Pastori, M.; Piccinini, D.; Margheriti, L.; Improta, L.; Valoroso, L.; Chiaraluce, L.; Chiarabba, C.
2009-10-01
Shear wave splitting is measured at 19 seismic stations of a temporary network deployed in the Val d'Agri area to record low-magnitude seismic activity. The splitting results suggest the presence of an anisotropic layer between the surface and 15 km depth (i.e. above the hypocentres). The dominant fast polarization direction strikes NW-SE parallel to the Apennines orogen and is approximately parallel to the maximum horizontal stress in the region, as well as to major normal faults bordering the Val d'Agri basin. The size of the normalized delay times in the study region is about 0.01 s km-1, suggesting 4.5 percent shear wave velocity anisotropy (SWVA). On the south-western flank of the basin, where most of the seismicity occurs, we found larger values of normalized delay times, between 0.017 and 0.02 s km-1. These high values suggest about 10 percent SWVA. These parameters agree with an interpretation of seismic anisotropy in terms of the Extensive-Dilatancy Anisotropy (EDA) model, which considers the rock volume to be pervaded by fluid-saturated microcracks aligned by the active stress field. Anisotropic parameters are consistent with borehole image logs from deep exploration wells in the Val d'Agri oil field that detect pervasive fluid-saturated microcracks striking NW-SE parallel to the maximum horizontal stress in the carbonatic reservoir. However, we cannot rule out the contribution of aligned macroscopic fractures because the main Quaternary normal faults are parallel to the maximum horizontal stress. The strong anisotropy and the concentration of seismicity testify to active deformation along the SW flank of the basin.
Bengtsson, Henrik; Hössjer, Ola
2006-03-01
Low-level processing and normalization of microarray data are among the most important steps in microarray analysis, with a profound impact on downstream analysis. Multiple methods have been suggested to date, but it is not clear which is the best. It is therefore important to further study the different normalization methods in detail and the nature of microarray data in general. A methodological study of affine models for gene expression data is carried out. Focus is on two-channel comparative studies, but the findings generalize also to single- and multi-channel data. The discussion applies to spotted as well as in-situ synthesized microarray data. Existing normalization methods such as curve-fit ("lowess") normalization, parallel and perpendicular translation normalization, and quantile normalization, but also dye-swap normalization, are revisited in the light of the affine model, and their strengths and weaknesses are investigated in this context. As a direct result from this study, we propose a robust non-parametric multi-dimensional affine normalization method, which can be applied to any number of microarrays with any number of channels either individually or all at once. A high-quality cDNA microarray data set with spike-in controls is used to demonstrate the power of the affine model and the proposed normalization method. We find that an affine model can explain non-linear intensity-dependent systematic effects in observed log-ratios. Affine normalization removes such artifacts for non-differentially expressed genes and assures that symmetry between negative and positive log-ratios is obtained, which is fundamental when identifying differentially expressed genes. In addition, affine normalization makes the empirical distributions in different channels more equal, which is the purpose of quantile normalization, and may also explain why dye-swap normalization works or fails. All methods are made available in the aroma package, which is a platform-independent package for R.
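A toy illustration of the central finding, that an affine model produces intensity-dependent log-ratios which disappear once the channel offsets are removed, is sketched below. The offsets are known by construction in the simulation; estimating them robustly from real data is what the actual normalization method in the aroma package does, and is not reproduced here.

```python
# Toy illustration of why an affine model yields intensity-dependent log-ratios
# for non-differentially expressed genes, and why removing the channel offsets
# flattens them. Offsets are known here by construction; robustly estimating
# them from data is the job of the real normalization method.
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=7, sigma=1.2, size=5000)            # true expression, same in both channels
a_red, a_green, b_green = 40.0, 160.0, 0.8                 # channel offsets and relative scale
red = a_red + x * rng.lognormal(0, 0.05, x.size)
green = a_green + b_green * x * rng.lognormal(0, 0.05, x.size)

m_raw = np.log2(green / red)                               # biased, curves with intensity
m_affine = np.log2((green - a_green) / (red - a_red))      # offsets removed: ~constant log2(b_green)

lo, hi = x < np.median(x), x >= np.median(x)
print("raw log-ratio, low vs high intensity:    %+.2f  %+.2f" % (m_raw[lo].mean(), m_raw[hi].mean()))
print("affine-corrected, low vs high intensity: %+.2f  %+.2f" % (m_affine[lo].mean(), m_affine[hi].mean()))
```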
Hierro, Eva; Ganan, Monica; Barroso, Elvira; Fernández, Manuela
2012-08-01
The efficacy of pulsed light to improve the safety of carpaccio has been investigated. Beef and tuna slices were superficially inoculated with approximately 3 log cfu/cm2 of Listeria monocytogenes, Escherichia coli, Salmonella Typhimurium and Vibrio parahaemolyticus. Fluences of 0.7, 2.1, 4.2, 8.4 and 11.9 J/cm2 were assayed. Colour, sensory and shelf-life studies were carried out. Treatments at 8.4 and 11.9 J/cm2 inactivated the selected pathogens by approximately 1 log cfu/cm2, although they modified the colour parameters and had a negative effect on the sensory quality of the product. The raw attributes were not affected by fluences of 2.1 and 4.2 J/cm2 immediately after the treatment, although changes were observed during storage. The inactivation obtained with these fluences was lower than 1 log cfu/cm2, which may not be negligible in the case of cross-contamination at a food plant or at a food service facility. Pulsed light showed a greater impact on the sensory quality of tuna carpaccio compared to beef. None of the fluences assayed extended the shelf-life of either product. Copyright © 2012 Elsevier B.V. All rights reserved.
Stochastic Modeling Approach to the Incubation Time of Prionic Diseases
NASA Astrophysics Data System (ADS)
Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.
2003-05-01
Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant to a stochastic variable log normally distributed. The incubation time distribution is then also shown to be log normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.
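The log-normal outcome follows directly from the assumption: if the incubation time varies inversely with a log-normally distributed rate constant, it is itself log-normal. The toy Monte Carlo below illustrates this; the parameters are invented, not the fitted BSE values.

```python
# Toy Monte Carlo: if incubation time scales as T ~ C / k with the rate
# constant k log-normally distributed, then T is also log-normal.
# Parameters below are invented, not the fitted BSE values from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k = rng.lognormal(mean=np.log(0.2), sigma=0.35, size=50_000)   # stochastic rate constant, 1/year
incubation = 1.0 / k                                           # incubation time up to a constant factor

shape, loc, scale = stats.lognorm.fit(incubation, floc=0)
print(f"median incubation ~ {scale:.1f} yr, fitted log-normal sigma ~ {shape:.2f} (input sigma 0.35)")
```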
NASA Astrophysics Data System (ADS)
Abaza, Mohamed; Mesleh, Raed; Mansour, Ali; Aggoune, el-Hadi
2015-01-01
The performance analysis of a multi-hop decode and forward relaying free-space optical (FSO) communication system is presented in this paper. The considered FSO system uses intensity modulation and direct detection as means of transmission and reception. Atmospheric turbulence impacts are modeled as a log-normal channel, and different weather attenuation effects and geometric losses are taken into account. It is shown that multi-hop is an efficient technique to mitigate such effects in FSO communication systems. A comparison with direct link and multiple-input single-output (MISO) systems considering correlation effects at the transmitter is provided. Results show that MISO multi-hop FSO systems are superior to their counterparts over links exhibiting high attenuation. Monte Carlo simulation results are provided to validate the bit error rate (BER) analyses and conclusions.
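For readers who want to reproduce the flavour of such BER results, the sketch below runs a Monte Carlo average of the common conditional form BER(h) = Q(h·sqrt(SNR)) for a single IM/DD on-off-keying link over log-normal fading. It is a single-hop illustration only, not the multi-hop decode-and-forward analysis of the paper, and the log-amplitude standard deviation is an assumed value.

```python
# Minimal Monte Carlo sketch of the average BER of a single IM/DD OOK link
# over log-normal fading, using the conditional form BER(h) = Q(h * sqrt(SNR)).
# Single-hop illustration only; sigma_x is an assumed log-amplitude std dev.
import numpy as np
from scipy.special import erfc

def q_func(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

rng = np.random.default_rng(2)
sigma_x = 0.25                                   # log-amplitude std dev (weak turbulence)
snr_db = np.arange(0, 22, 2)
# h = exp(2X), X ~ N(-sigma_x**2, sigma_x**2) gives the usual E[h] = 1 normalization
h = np.exp(2.0 * rng.normal(-sigma_x**2, sigma_x, size=200_000))

for s in snr_db:
    snr = 10 ** (s / 10.0)
    ber = q_func(h * np.sqrt(snr)).mean()        # average over fading realizations
    print(f"SNR {s:2d} dB -> BER ~ {ber:.3e}")
```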
NASA Astrophysics Data System (ADS)
Irawan, R.; Yong, B.; Kristiani, F.
2017-02-01
Bandung, one of the cities in Indonesia, is vulnerable to dengue disease in both its early stage (Dengue Fever) and severe stage (Dengue Haemorrhagic Fever and Dengue Shock Syndrome). In 2013, there were 5,749 patients in Bandung, and 2,032 of them were hospitalized in Santo Borromeus Hospital. In this paper, two models, the Poisson-gamma and the Log-normal, use Bayesian inference to estimate the relative risk. The calculation is done by the Markov Chain Monte Carlo method, simulated with the Gibbs Sampling algorithm in WinBUGS 1.4.3 software. Based on Santo Borromeus Hospital's data for the 30 sub-districts of Bandung in 2013, Coblong and Bandung Wetan sub-districts had the highest relative risk under both models for the early stage, the severe stage, and all stages. Meanwhile, Cinambo sub-district had the lowest relative risk under both models for the severe stage and all stages, and Bojongloa Kaler sub-district had the lowest relative risk under both models for the early stage. In the model comparison using the DIC (Deviance Information Criterion), the Log-normal model is the better model for the early stage and the severe stage, but for all stages the Poisson-gamma model fits the data better.
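The Poisson-gamma model is conjugate, so each sub-district's relative risk has a closed-form posterior, Gamma(a + y_i, b + E_i); the sketch below shows that calculation on invented counts. The paper itself estimates this model (and the log-normal alternative) by Gibbs sampling in WinBUGS rather than in closed form.

```python
# Sketch of the conjugate Poisson-gamma relative-risk calculation: with
# y_i ~ Poisson(E_i * theta_i) and theta_i ~ Gamma(a, b), the posterior is
# theta_i | y_i ~ Gamma(a + y_i, b + E_i). Counts and expected cases below
# are invented, not the Santo Borromeus Hospital data.
import numpy as np
from scipy import stats

observed = np.array([12, 45, 7, 103, 28])     # dengue cases per sub-district (toy)
expected = np.array([20.0, 30.0, 15.0, 60.0, 25.0])
a, b = 1.0, 1.0                               # gamma prior on the relative risk

post_mean = (a + observed) / (b + expected)
post_lo = stats.gamma.ppf(0.025, a + observed, scale=1.0 / (b + expected))
post_hi = stats.gamma.ppf(0.975, a + observed, scale=1.0 / (b + expected))
for i, (m, lo, hi) in enumerate(zip(post_mean, post_lo, post_hi), 1):
    print(f"sub-district {i}: RR = {m:.2f} (95% CrI {lo:.2f}-{hi:.2f})")
```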
Bacterial hand contamination and transfer after use of contaminated bulk-soap-refillable dispensers.
Zapka, Carrie A; Campbell, Esther J; Maxwell, Sheri L; Gerba, Charles P; Dolan, Michael J; Arbogast, James W; Macinga, David R
2011-05-01
Bulk-soap-refillable dispensers are prone to extrinsic bacterial contamination, and recent studies demonstrated that approximately one in four dispensers in public restrooms are contaminated. The purpose of this study was to quantify bacterial hand contamination and transfer after use of contaminated soap under controlled laboratory and in-use conditions in a community setting. Under laboratory conditions using liquid soap experimentally contaminated with 7.51 log(10) CFU/ml of Serratia marcescens, an average of 5.28 log(10) CFU remained on each hand after washing, and 2.23 log(10) CFU was transferred to an agar surface. In an elementary-school-based field study, Gram-negative bacteria on the hands of students and staff increased by 1.42 log(10) CFU per hand (26-fold) after washing with soap from contaminated bulk-soap-refillable dispensers. In contrast, washing with soap from dispensers with sealed refills significantly reduced bacteria on hands by 0.30 log(10) CFU per hand (2-fold). Additionally, the mean number of Gram-negative bacteria transferred to surfaces after washing with soap from dispensers with sealed-soap refills (0.06 log(10) CFU) was significantly lower than the mean number after washing with contaminated bulk-soap-refillable dispensers (0.74 log(10) CFU; P < 0.01). Finally, significantly higher levels of Gram-negative bacteria were recovered from students (2.82 log(10) CFU per hand) than were recovered from staff (2.22 log(10) CFU per hand) after washing with contaminated bulk soap (P < 0.01). These results demonstrate that washing with contaminated soap from bulk-soap-refillable dispensers can increase the number of opportunistic pathogens on the hands and may play a role in the transmission of bacteria in public settings.
Estimating consumer familiarity with health terminology: a context-based approach.
Zeng-Treitler, Qing; Goryachev, Sergey; Tse, Tony; Keselman, Alla; Boxwala, Aziz
2008-01-01
Effective health communication is often hindered by a "vocabulary gap" between language familiar to consumers and jargon used in medical practice and research. To present health information to consumers in a comprehensible fashion, we need to develop a mechanism to quantify health terms as being more likely or less likely to be understood by typical members of the lay public. Prior research has used approaches including syllable count, easy word list, and frequency count, all of which have significant limitations. In this article, we present a new method that predicts consumer familiarity using contextual information. The method was applied to a large query log data set and validated using results from two previously conducted consumer surveys. We measured the correlation between the survey result and the context-based prediction, syllable count, frequency count, and log normalized frequency count. The correlation coefficient between the context-based prediction and the survey result was 0.773 (p < 0.001), which was higher than the correlation coefficients between the survey result and the syllable count, frequency count, and log normalized frequency count (p < or = 0.012). The context-based approach provides a good alternative to the existing term familiarity assessment methods.
Best Statistical Distribution of flood variables for Johor River in Malaysia
NASA Astrophysics Data System (ADS)
Salarpour Goodarzi, M.; Yusop, Z.; Yusof, F.
2012-12-01
A complex flood event is always characterized by a few characteristics such as flood peak, flood volume, and flood duration, which might be mutually correlated. This study explored the statistical distributions of peakflow, flood duration and flood volume at the Rantau Panjang gauging station on the Johor River in Malaysia. Hourly data were recorded for 45 years. The data were analysed based on the water year (July - June). Five distributions, namely Log Normal, Generalized Pareto, Log Pearson, Normal and Generalized Extreme Value (GEV), were used to model the distribution of all three variables. Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests were used to evaluate the best fit. Goodness-of-fit tests at the 5% level of significance indicate that all the models can be used to model the distribution of peakflow, flood duration and flood volume. However, the Generalized Pareto distribution is found to be the most suitable model when tested with the Anderson-Darling test, while the Kolmogorov-Smirnov test suggests that the GEV is best for peakflow. The results of this research can be used to improve flood frequency analysis.
Comparison between Generalized Extreme Value, Generalized Pareto and Log Pearson distributions in the Cumulative Distribution Function of peakflow
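A sketch of the fitting-and-testing step is given below using scipy.stats: each candidate distribution is fitted to the peakflow sample and ranked by its Kolmogorov-Smirnov statistic (SciPy's Anderson-Darling test covers only a few distributions, so only K-S is shown). The peakflow values are simulated, not the 45-year Rantau Panjang record.

```python
# Sketch of fitting candidate distributions to annual peakflow and comparing
# Kolmogorov-Smirnov statistics with scipy.stats. Note that K-S p-values are
# optimistic when the parameters are fitted from the same sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
peakflow = rng.gumbel(loc=350, scale=120, size=45)         # toy annual maxima (m3/s)

candidates = {
    "GEV": stats.genextreme,
    "Generalized Pareto": stats.genpareto,
    "Log-normal": stats.lognorm,
    "Normal": stats.norm,
}
for name, dist in candidates.items():
    params = dist.fit(peakflow)
    ks = stats.kstest(peakflow, dist.cdf, args=params)
    print(f"{name:18s} K-S statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```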
Gkana, E; Chorianopoulos, N; Grounta, A; Koutsoumanis, K; Nychas, G-J E
2017-04-01
The objective of the present study was to determine the factors affecting the transfer of foodborne pathogens from inoculated beef fillets to non-inoculated ones, through food processing surfaces. Three different levels of inoculation of the beef fillet surface were prepared: a high one of approximately 10^7 CFU/cm^2, a medium one of 10^5 CFU/cm^2 and a low one of 10^3 CFU/cm^2, using mixed strains of Listeria monocytogenes, or Salmonella enterica Typhimurium, or Escherichia coli O157:H7. The inoculated fillets were then placed on 3 different types of surfaces (stainless steel-SS, polyethylene-PE and wood-WD), for 1 or 15 min. Subsequently, these fillets were removed from the cutting boards and six sequential non-inoculated fillets were placed on the same surfaces for the same period of time. All non-inoculated fillets were contaminated, with a progressive reduction in each pathogen's population level from the inoculated fillets to the sixth non-inoculated ones that got in contact with the surfaces, and regardless of the initial inoculum, a reduction of approximately 2 log CFU/g between the inoculated and the 1st non-inoculated fillet was observed. S. Typhimurium was transferred at a lower mean population (2.39 log CFU/g) to contaminated fillets than E. coli O157:H7 (2.93 log CFU/g), followed by L. monocytogenes (3.12 log CFU/g; P < 0.05). Wooden surfaces (2.77 log CFU/g) enhanced the transfer of bacteria to subsequent fillets compared to the other materials (2.66 log CFU/g for SS and PE; P < 0.05). Cross-contamination between meat and surfaces is a multifactorial process strongly dependent on the species, initial contamination level, kind of surface, contact time and the number of the subsequent fillet, according to analysis of variance. Thus, quantifying the cross-contamination risk associated with various steps of meat processing in food establishments or households can provide a scientific basis for risk management of such products. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kumar, A; Gross, R A
2000-01-01
Engineering of the reaction medium and study of an expanded range of reaction temperatures were carried out in an effort to positively influence the outcome of Novozyme-435 (immobilized Lipase B from Candida antarctica) catalyzed epsilon-CL polymerizations. A series of solvents including acetonitrile, dioxane, tetrahydrofuran, chloroform, butyl ether, isopropyl ether, isooctane, and toluene (log P from -1.1 to 4.5) were evaluated at 70 degrees C. Statistically (ANOVA), two significant regions were observed. Solvents having log P values from -1.1 to 0.49 showed low propagation rates (< or = 30% epsilon-CL conversion in 4 h) and gave products of short chain length (Mn < or = 5200 g/mol). In contrast, solvents with log P values from 1.9 to 4.5 showed enhanced propagation rates and afforded polymers of higher molecular weight (Mn = 11,500-17,000 g/mol). Toluene, a preferred solvent for this work, was studied at epsilon-CL to toluene (wt/vol) ratios from 1:1 to 10:1. The 1:2 ratio was selected since polymerizations at 70 degrees C with 0.3 mL of epsilon-CL for 4 h gave high monomer conversions and Mn values (approximately 85% and approximately 17,000 g/mol, respectively). Increasing the scale of the reaction from 0.3 to 10 mL of CL resulted in a similar isolated product yield, but the Mn increased from 17,200 to 44,800 g/mol. Toluene appeared to help stabilize Novozyme-435 so that lipase-catalyzed polymerizations could be conducted effectively at 90 degrees C. For example, within only 2 h at 90 degrees C (toluene-d8 to epsilon-CL, 5:1, approximately 1% protein), the monomer conversion reached approximately 90%. Also, the controlled character of these polymerizations as a function of reaction temperature was evaluated.
Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi
2017-10-01
A log file-based method cannot detect dosimetric changes due to linac component miscalibration because log files are insensitive to miscalibration. Herein, the clinical impacts of dosimetric changes on a log file-based method were determined. Five head-and-neck and five prostate plans were applied. Miscalibration-simulated log files were generated by inducing a linac component miscalibration into the log file. Miscalibration magnitudes for leaf, gantry, and collimator at the general tolerance level were ±0.5mm, ±1°, and ±1°, respectively, and at a tighter tolerance level achievable on current linacs were ±0.3mm, ±0.5°, and ±0.5°, respectively. Re-calculations were performed on patient anatomy using log file data. The changes in tumor control probability/normal tissue complication probability from the treatment planning system dose to the re-calculated dose at the general tolerance level were 1.8% on the planning target volume (PTV) and 2.4% on organs at risk (OARs) in both plans. These changes at the tighter tolerance level improved to 1.0% on the PTV and to 1.5% on the OARs, with a statistically significant difference. We determined the clinical impacts of dosimetric changes on a log file-based method using a general tolerance level and a tighter tolerance level for linac miscalibration and found that a tighter tolerance level significantly improved the accuracy of the log file-based method. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Massiot, Cécile; Townend, John; Nicol, Andrew; McNamara, David D.
2017-08-01
Acoustic borehole televiewer (BHTV) logs provide measurements of fracture attributes (orientations, thickness, and spacing) at depth. Orientation, censoring, and truncation sampling biases similar to those described for one-dimensional outcrop scanlines, and other logging or drilling artifacts specific to BHTV logs, can affect the interpretation of fracture attributes from BHTV logs. K-means, fuzzy K-means, and agglomerative clustering methods provide transparent means of separating fracture groups on the basis of their orientation. Fracture spacing is calculated for each of these fracture sets. Maximum likelihood estimation using truncated distributions permits the fitting of several probability distributions to the fracture attribute data sets within truncation limits, which can then be extrapolated over the entire range where they naturally occur. Akaike Information Criterion (AIC) and Schwarz Bayesian Criterion (SBC) statistical information criteria rank the distributions by how well they fit the data. We demonstrate these attribute analysis methods with a data set derived from three BHTV logs acquired from the high-temperature Rotokawa geothermal field, New Zealand. Varying BHTV log quality reduces the number of input data points, but careful selection of the quality levels where fractures are deemed fully sampled increases the reliability of the analysis. Spacing data analysis comprising up to 300 data points and spanning three orders of magnitude can be approximated similarly well (similar AIC rankings) with several distributions. Several clustering configurations and probability distributions can often characterize the data at similar levels of statistical criteria. Thus, several scenarios should be considered when using BHTV log data to constrain numerical fracture models.
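The truncated maximum likelihood step can be sketched as follows: the likelihood of each observed spacing is renormalized by the probability mass lying inside the truncation limits, and AIC = 2k − 2 log L ranks competing fits. The spacing sample and limits below are simulated, not the Rotokawa BHTV data.

```python
# Sketch of maximum likelihood estimation with a truncated log-normal: the
# likelihood of each observed spacing is renormalized by the probability mass
# inside the truncation limits, and AIC = 2k - 2*logL ranks competing fits.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
raw = rng.lognormal(mean=-1.0, sigma=0.9, size=1500)       # "true" fracture spacings (m)
lo, hi = 0.05, 5.0                                         # truncation limits of the survey
spacing = raw[(raw >= lo) & (raw <= hi)]                   # only these are observed

def neg_loglik(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    mass = dist.cdf(hi) - dist.cdf(lo)                     # probability inside the window
    return -(dist.logpdf(spacing) - np.log(mass)).sum()

res = optimize.minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
aic = 2 * 2 + 2 * res.fun                                  # k = 2 parameters
print(f"mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}, AIC = {aic:.1f}")
```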
Bird species and traits associated with logged and unlogged forest in Borneo.
Cleary, Daniel F R; Boyle, Timothy J B; Setyawati, Titiek; Anggraeni, Celina D; Van Loon, E Emiel; Menken, Steph B J
2007-06-01
The ecological consequences of logging have been and remain a focus of considerable debate. In this study, we assessed bird species composition within a logging concession in Central Kalimantan, Indonesian Borneo. Within the study area (approximately 196 km2) a total of 9747 individuals of 177 bird species were recorded. Our goal was to identify associations between species traits and environmental variables. This can help us to understand the causes of disturbance and predict whether species with given traits will persist under changing environmental conditions. Logging, slope position, and a number of habitat structure variables including canopy cover and liana abundance were significantly related to variation in bird composition. In addition to environmental variables, spatial variables also explained a significant amount of variation. However, environmental variables, particularly in relation to logging, were of greater importance in structuring variation in composition. Environmental change following logging appeared to have a pronounced effect on the feeding guild and size class structure but there was little evidence of an effect on restricted range or threatened species although certain threatened species were adversely affected. For example, species such as the terrestrial insectivore Argusianus argus and the hornbill Buceros rhinoceros, both of which are threatened, were rare or absent in recently logged forest. In contrast, undergrowth insectivores such as Orthotomus atrogularis and Trichastoma rostratum were abundant in recently logged forest and rare in unlogged forest. Logging appeared to have the strongest negative effect on hornbills, terrestrial insectivores, and canopy bark-gleaning insectivores while moderately affecting canopy foliage-gleaning insectivores and frugivores, raptors, and large species in general. In contrast, undergrowth insectivores responded positively to logging while most understory guilds showed little pronounced effect. Despite the high species richness of logged forest, logging may still have a negative impact on extant diversity by adversely affecting key ecological guilds. The sensitivity of hornbills in particular to logging disturbance may be expected to alter rainforest dynamics by seriously reducing the effective seed dispersal of associated tree species. However, logged forest represents an increasingly important habitat for most bird species and needs to be protected from further degradation. Biodiversity management within logging concessions should focus on maintaining large areas of unlogged forest and mitigating the adverse effects of logging on sensitive groups of species.
Finite-difference modeling of the electroseismic logging in a fluid-saturated porous formation
NASA Astrophysics Data System (ADS)
Guan, Wei; Hu, Hengshan
2008-05-01
In a fluid-saturated porous medium, an electromagnetic (EM) wavefield induces an acoustic wavefield due to the electrokinetic effect. A potential geophysical application of this effect is electroseismic (ES) logging, in which the converted acoustic wavefield is received in a fluid-filled borehole to evaluate the parameters of the porous formation around the borehole. In this paper, a finite-difference scheme is proposed to model the ES logging responses to a vertical low frequency electric dipole along the borehole axis. The EM field excited by the electric dipole is calculated separately by finite-difference first, and is considered as a distributed exciting source term in a set of extended Biot's equations for the converted acoustic wavefield in the formation. This set of equations is solved by a modified finite-difference time-domain (FDTD) algorithm that allows for the calculation of dynamic permeability so that it is not restricted to low-frequency poroelastic wave problems. The perfectly matched layer (PML) technique without splitting the fields is applied to truncate the computational region. The simulated ES logging waveforms approximately agree with those obtained by the analytical method. The FDTD algorithm applies also to acoustic logging simulation in porous formations.
Log file-based patient dose calculations of double-arc VMAT for head-and-neck radiotherapy.
Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Majima, Kazuhiro; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi
2018-04-01
The log file-based method cannot display dosimetric changes due to linac component miscalibration because log files are insensitive to such miscalibration. The purpose of this study was to quantify the dosimetric changes in log file-based patient dose calculations for double-arc volumetric-modulated arc therapy (VMAT) in head-and-neck cases. Fifteen head-and-neck cases participated in this study. For each case, treatment planning system (TPS) doses were produced by double-arc and single-arc VMAT. Miscalibration-simulated log files were generated by inducing a leaf miscalibration of ±0.5 mm into the log files that were acquired during VMAT irradiation. Subsequently, patient doses were estimated using the miscalibration-simulated log files. For double-arc VMAT, regarding the planning target volume (PTV), the change from the TPS dose to the miscalibration-simulated log file dose in D mean was 0.9 Gy and that in tumor control probability was 1.4%. As for organs-at-risk (OARs), the change in D mean was <0.7 Gy and in normal tissue complication probability was <1.8%. A comparison between double-arc and single-arc VMAT for the PTV showed statistically significant differences in the changes evaluated by D mean and radiobiological metrics (P < 0.01), even though the magnitude of these differences was small. Similarly, for OARs, the magnitude of these changes was found to be small. For both the PTV and OARs, the log file-based estimate of patient dose for double-arc VMAT has accuracy comparable to that obtained for single-arc VMAT. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Kay, Robert T.; Mills, Patrick C.; Dunning, Charles P.; Yeskis, Douglas J.; Ursic, James R.; Vendl, Mark
2004-01-01
The effectiveness of 28 methods used to characterize the fractured Galena-Platteville aquifer at eight sites in northern Illinois and Wisconsin is evaluated. Analysis of government databases, previous investigations, topographic maps, aerial photographs, and outcrops was essential to understanding the hydrogeology in the area to be investigated. The effectiveness of surface-geophysical methods depended on site geology. Lithologic logging provided essential information for site characterization. Cores were used for stratigraphy and geotechnical analysis. Natural-gamma logging helped identify the effect of lithology on the location of secondary- permeability features. Caliper logging identified large secondary-permeability features. Neutron logs identified trends in matrix porosity. Acoustic-televiewer logs identified numerous secondary-permeability features and their orientation. Borehole-camera logs also identified a number of secondary-permeability features. Borehole ground-penetrating radar identified lithologic and secondary-permeability features. However, the accuracy and completeness of this method is uncertain. Single-point-resistance, density, and normal resistivity logs were of limited use. Water-level and water-quality data identified flow directions and indicated the horizontal and vertical distribution of aquifer permeability and the depth of the permeable features. Temperature, spontaneous potential, and fluid-resistivity logging identified few secondary-permeability features at some sites and several features at others. Flowmeter logging was the most effective geophysical method for characterizing secondary-permeability features. Aquifer tests provided insight into the permeability distribution, identified hydraulically interconnected features, the presence of heterogeneity and anisotropy, and determined effective porosity. Aquifer heterogeneity prevented calculation of accurate hydraulic properties from some tests. Different methods, such as flowmeter logging and slug testing, occasionally produced different interpretations. Aquifer characterization improved with an increase in the number of data points, the period of data collection, and the number of methods used.
Klein, M; Birch, D G
2009-12-01
To determine whether the Diagnosys full-field stimulus threshold (D-FST) is a valid, sensitive and repeatable psychophysical method of measuring and following visual function in low-vision subjects. Fifty-three affected eyes of 42 subjects with severe retinal degenerative diseases (RDDs) were tested with achromatic stimuli on the D-FST. Included were subjects who were either unable to perform static perimetry or had non-detectable or sub-microvolt electroretinograms (ERGs). A subset of 21 eyes of 17 subjects was tested on both the D-FST and the FST2, a previously established full-field threshold test. Seven eyes of 7 normal control subjects were tested on both the D-FST and the FST2. Results for the two methods were compared with the Bland-Altman test. On the D-FST, a threshold could be determined successfully for 13 of 14 eyes with light perception (LP) only (median 0.9 +/- 1.4 log cd/m2), and for all eyes determined to be counting fingers (CF; median 0.3 +/- 1.8 log cd/m2). The median full-field threshold for the normal controls was -4.3 +/- 0.6 log cd/m2 on the D-FST and -4.8 +/- 0.9 log cd/m2 on the FST2. The D-FST offers a commercially available method with a robust psychophysical algorithm and is a useful tool for following visual function in low-vision subjects.
Fractured-aquifer hydrogeology from geophysical logs; the passaic formation, New Jersey
Morin, R.H.; Carleton, G.B.; Poirier, S.
1997-01-01
The Passaic Formation consists of gradational sequences of mudstone, siltstone, and sandstone, and is a principal aquifer in central New Jersey. Ground-water flow is primarily controlled by fractures interspersed throughout these sedimentary rocks and characterizing these fractures in terms of type, orientation, spatial distribution, frequency, and transmissivity is fundamental towards understanding local fluid-transport processes. To obtain this information, a comprehensive suite of geophysical logs was collected in 10 wells roughly 46 m in depth and located within a 0.05 km2 area in Hopewell Township, New Jersey. A seemingly complex, heterogeneous network of fractures identified with an acoustic televiewer was statistically reduced to two principal subsets corresponding to two distinct fracture types: (1) bedding-plane partings and (2) high-angle fractures. Bedding-plane partings are the most numerous and have an average strike of N84°W and dip of 20°N. The high-angle fractures are oriented subparallel to these features, with an average strike of N79°E and dip of 71°S, making the two fracture types roughly orthogonal. Their intersections form linear features that also retain this approximately east-west strike. Inspection of fluid temperature and conductance logs in conjunction with flowmeter measurements obtained during pumping allows the transmissive fractures to be distinguished from the general fracture population. These results show that, within the resolution capabilities of the logging tools, approximately 51 (or 18 percent) of the 280 total fractures are water producing. The bedding-plane partings exhibit transmissivities that average roughly 5 m2/day and that generally diminish in magnitude and frequency with depth. The high-angle fractures have average transmissivities that are about half those of the bedding-plane partings and show no apparent dependence upon depth. The geophysical logging results allow us to infer a distinct hydrogeologic structure within this aquifer that is defined by fracture type and orientation. Fluid flow near the surface is controlled primarily by the highly transmissive, subhorizontal bedding-plane partings. As depth increases, the high-angle fractures apparently become more dominant hydrologically.
USDA-ARS?s Scientific Manuscript database
Using linear regression models, we studied the main and two-way interaction effects of the predictor variables gender, age, BMI, and 64 folate/vitamin B-12/homocysteine/lipid/cholesterol-related single nucleotide polymorphisms (SNP) on log-transformed plasma homocysteine normalized by red blood cell...
Characterization of complexes of nucleoside-5'-phosphorothioate analogues with zinc ions.
Sayer, Alon Haim; Itzhakov, Yehudit; Stern, Noa; Nadel, Yael; Fischer, Bilha
2013-10-07
On the basis of the high affinity of Zn(2+) to sulfur and imidazole, we targeted nucleotides such as GDP-β-S, ADP-β-S, and AP3(β-S)A as potential biocompatible Zn(2+) chelators. The thiophosphate moiety enhanced the stability of the Zn(2+)-nucleotide complex by about 0.7 log units. ATP-α,β-CH2-γ-S formed the most stable Zn(2+) complex studied here, log K 6.50, being ~0.8 and ~1.1 log units more stable than the ATP-γ-S-Zn(2+) and ATP-Zn(2+) complexes, and was the major species, 84%, under physiological pH. Guanine nucleotide Zn(2+) complexes were more stable by 0.3-0.4 log units than the corresponding adenine nucleotide complexes. Likewise, the AP3(β-S)A-zinc complex was ~0.5 log units more stable than the AP3A complex. (1)H- and (31)P-NMR-monitored Zn(2+) titrations showed that Zn(2+) coordinates with the purine nucleotide N7-nitrogen atom, the terminal phosphate, and the adjacent phosphate. In conclusion, replacement of a terminal phosphate by a thiophosphate group resulted in a decrease of the acidity of the phosphate moiety by approximately one log unit, and an increase in the stability of Zn(2+) complexes of the latter analogues by up to 0.7 log units. A terminal phosphorothioate contributed more to the stability of nucleotide-Zn(2+) complexes than a bridging phosphorothioate.
Ranade, Manisha K; Lynch, Bart D; Li, Jonathan G; Dempsey, James F
2006-01-01
We have developed an electronic portal imaging device (EPID) employing a fast scintillator and a high-speed camera. The device is designed to accurately and independently characterize the fluence delivered by a linear accelerator during intensity modulated radiation therapy (IMRT) with either step-and-shoot or dynamic multileaf collimator (MLC) delivery. Our aim is to accurately obtain the beam shape and fluence of all segments delivered during IMRT, in order to study the nature of discrepancies between the plan and the delivered doses. A commercial high-speed camera was combined with a terbium-doped gadolinium-oxy-sulfide (Gd2O2S:Tb) scintillator to form an EPID for the unaliased capture of two-dimensional fluence distributions of each beam in an IMRT delivery. The high speed EPID was synchronized to the accelerator pulse-forming network and gated to capture every possible pulse emitted from the accelerator, with an approximate frame rate of 360 frames-per-second (fps). A 62-segment beam from a head-and-neck IMRT treatment plan requiring 68 s to deliver was recorded with our high speed EPID producing approximately 6 Gbytes of imaging data. The EPID data were compared with the MLC instruction files and the MLC controller log files. The frames were binned to provide a frame rate of 72 fps with a signal-to-noise ratio that was sufficient to resolve leaf positions and segment fluence. The fractional fluence from the log files and EPID data agreed well. An ambiguity in the motion of the MLC during beam on was resolved. The log files reported leaf motions at the end of 33 of the 42 segments, while the EPID observed leaf motions in only 7 of the 42 segments. The static IMRT segment shapes observed by the high speed EPID were in good agreement with the shapes reported in the log files. The leaf motions observed during beam-on for step-and-shoot delivery were not temporally resolved by the log files.
Numerical Modeling of Electroacoustic Logging Including Joule Heating
NASA Astrophysics Data System (ADS)
Plyushchenkov, Boris D.; Nikitin, Anatoly A.; Turchaninov, Victor I.
It is well known that an electromagnetic field excites an acoustic wave in a porous elastic medium saturated with fluid electrolyte due to the electrokinetic conversion effect. Pride's equations describing this process are written in the isothermal approximation. An update of these equations, which allows the influence of Joule heating on acoustic wave propagation to be taken into account, is proposed here. This update includes terms describing the initiation of additional acoustic waves excited by thermoelastic stresses and the heat conduction equation with a right side defined by Joule heating. Results of numerical modeling of several problems of propagation of acoustic waves excited by an electric field source, with and without consideration of the Joule heating effect in their statements, are presented. From these results, it follows that the influence of Joule heating should be taken into account in the numerical simulation of electroacoustic logging and in the interpretation of its log data.
Analysis of DNS cache effects on query distribution.
Wang, Zheng
2013-01-01
This paper studies the DNS cache effects that occur on query distribution at the CN top-level domain (TLD) server. We first filter out the malformed DNS queries to purify the log data pollution according to six categories. A model for DNS resolution, more specifically DNS caching, is presented. We demonstrate the presence and magnitude of DNS cache effects and the cache sharing effects on the request distribution through analytic model and simulation. CN TLD log data results are provided and analyzed based on the cache model. The approximate TTL distribution for domain name is inferred quantificationally.
High Pressure Inactivation of HAV within Mussels
USDA-ARS?s Scientific Manuscript database
The potential of hepatitis A virus (HAV) to be inactivated within Mediterranean mussels (Mytilus galloprovincialis) and blue mussels (Mytilus edulis) by high pressure processing was evaluated. HAV was bioaccumulated within mussels to approximately 6-log10 PFU by exposure of mussels to HAV-contamina...
Tejería, María Emilia; Giglio, Javier; Dematteis, Silvia; Rey, Ana
2017-09-01
Assessment of the presence of estrogen receptors in breast cancer is crucial for treatment planning. With the objective to develop a potential agent for estrogen receptor imaging, we present the development and characterization of a 99mTc-tricarbonyl-labelled estradiol derivative. Using ethinylestradiol as starting material, an estradiol derivative bearing a 1,4-disubstituted 1,2,3-triazole-containing tridentate ligand system was synthesized by "Click Chemistry" and fully characterized. Labelling with high yield and radiochemical purity was achieved through the formation of a 99mTc-tricarbonyl complex. The radiolabelled compound was stable, exhibited moderate binding to plasma protein (approximately 33%) and lipophilicity in the adequate range (logP 1.3 ± 0.1 at pH 7.4). Studies in MCF7 cells showed promising uptake values (approximately 2%). However, more than 50% of the activity is quickly released from the cells. Biodistribution experiments in normal rats confirmed the expected "in vivo" stability of the radiotracer but showed very high gastrointestinal and liver activity, which is inconvenient for in vivo applications. Taking into consideration the well-documented influence of the chelating system on the physicochemical and biological behaviour of technetium-labelled small biomolecules, research will be continued using the same pharmacophore but different complexation modalities of technetium. Copyright © 2017 John Wiley & Sons, Ltd.
N-body dark matter haloes with simple hierarchical histories
NASA Astrophysics Data System (ADS)
Jiang, Lilian; Helly, John C.; Cole, Shaun; Frenk, Carlos S.
2014-05-01
We present a new algorithm which groups the subhaloes found in cosmological N-body simulations by structure finders such as SUBFIND into dark matter haloes whose formation histories are strictly hierarchical. One advantage of these `Dhaloes' over the commonly used friends-of-friends (FoF) haloes is that they retain their individual identity in the cases when FoF haloes are artificially merged by tenuous bridges of particles or by an overlap of their outer diffuse haloes. Dhaloes are thus well suited for modelling galaxy formation and their merger trees form the basis of the Durham semi-analytic galaxy formation model, GALFORM. Applying the Dhalo construction to the Λ cold dark matter Millennium II Simulation, we find that approximately 90 per cent of Dhaloes have a one-to-one, bijective match with a corresponding FoF halo. The remaining 10 per cent are typically secondary components of large FoF haloes. Although the mass functions of both types of haloes are similar, the mass of Dhaloes correlates much more tightly with the virial mass, M200, than FoF haloes. Approximately 80 per cent of FoF and bijective and non-bijective Dhaloes are relaxed according to standard criteria. For these relaxed haloes, all three types have similar concentration-M200 relations and, at fixed mass, the concentration distributions are described accurately by log-normal distributions.
Using NDVI to assess vegetative land cover change in central Puget Sound.
Morawitz, Dana F; Blewett, Tina M; Cohen, Alex; Alberti, Marina
2006-03-01
We used the Normalized Difference Vegetation Index (NDVI) in the rapidly growing Puget Sound region over three 5-year time blocks between 1986 and 1999, at three spatial scales in 42 Watershed Administrative Units (WAUs), to assess changes in the amounts and patterns of green vegetation. On average, approximately 20% of the area in each WAU experienced significant NDVI change over each 5-year time block. Cumulative NDVI change over 15 years (summing change over each 5-year time block) averaged approximately 60% of each WAU, but was as high as 100% in some. At the regional scale, seasonal weather patterns and green-up from logging were the primary drivers of observed increases in NDVI values. At the WAU scale, anthropogenic factors were important drivers of both positive and negative NDVI change. For example, population density was highly correlated with negative NDVI change over 15 years (r = 0.66, P < 0.01), as was road density (r = 0.71, P < 0.01). At the smallest scale (within 3 case study WAUs), land use differences such as preserving versus harvesting forest lands drove vegetation change. We conclude that large areas within most watersheds are continually and heavily impacted by high levels of human use and development over short time periods. Our results indicate that varying patterns and processes can be detected at multiple scales using changes in NDVI values.
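A minimal sketch of the kind of calculation involved, computing NDVI = (NIR − Red)/(NIR + Red) for two dates and correlating per-watershed NDVI change with population density, is shown below; the band values and densities are invented, not the Puget Sound WAU data.

```python
# Sketch of computing NDVI = (NIR - Red) / (NIR + Red) for two dates and
# correlating per-watershed NDVI change with population density.
# All values below are invented placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_waus = 42
red_t1 = rng.uniform(0.05, 0.15, n_waus)
nir_t1 = rng.uniform(0.30, 0.60, n_waus)
pop_density = rng.lognormal(mean=4.0, sigma=1.0, size=n_waus)        # people per km2 (toy)

# simulate vegetation loss that increases with population density
loss = np.clip(0.04 * (np.log(pop_density) - 3.0) + rng.normal(0, 0.02, n_waus), 0.0, None)
red_t2, nir_t2 = red_t1 + 0.5 * loss, nir_t1 - loss

ndvi = lambda nir, red: (nir - red) / (nir + red)
delta_ndvi = ndvi(nir_t2, red_t2) - ndvi(nir_t1, red_t1)             # negative where vegetation was lost

r, p = stats.pearsonr(pop_density, -delta_ndvi)
print(f"population density vs vegetation loss (negative NDVI change): r = {r:.2f}, p = {p:.3g}")
```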
Maricq, M Matti; Chase, Richard E; Xu, Ning; Laing, Paul M
2002-01-15
Wind tunnel measurements and direct tailpipe particulate matter (PM) sampling are utilized to examine how the combination of oxidation catalyst and fuel sulfur content affects the nature and quantity of PM emissions from the exhaust of a light duty diesel truck. When low sulfur fuel (4 ppm) is used, or when high sulfur (350 ppm) fuel is employed without an active catalyst present, a single log-normal distribution of exhaust particles is observed with a number mean diameter in the range of 70-83 nm. In the absence of the oxidation catalyst, the high sulfur level has at most a modest effect on particle emissions (<50%) and a minor effect on particle size (<5%). In combination with the active oxidation catalyst tested, high sulfur fuel can lead to a second, nanoparticle, mode, which appears at approximately 20 nm during high speed operation (70 mph), but is not present at low speed (40 mph). A thermodenuder significantly reduces the nanoparticle mode when set to temperatures above approximately 200 degrees C, suggesting that these particles are semivolatile in nature. Because they are observed only when the catalyst is present and the sulfur level is high, this mode likely originates from the nucleation of sulfates formed over the catalyst, although the composition may also include hydrocarbons.
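Because a mixture of Gaussians in log-diameter is equivalent to a mixture of log-normal modes, a two-component Gaussian mixture fitted to log(d) can separate a nucleation mode from the accumulation mode. The sketch below illustrates this on simulated diameters, not the wind-tunnel or tailpipe measurements of the study.

```python
# Sketch of separating a nanoparticle (nucleation) mode from the accumulation
# mode by fitting a two-component Gaussian mixture to log(diameter), which is
# equivalent to a two-mode log-normal size distribution. Diameters are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
accumulation = rng.lognormal(mean=np.log(75.0), sigma=0.45, size=8000)   # ~75 nm soot mode
nucleation = rng.lognormal(mean=np.log(20.0), sigma=0.25, size=3000)     # ~20 nm sulfate mode
log_d = np.log(np.concatenate([accumulation, nucleation])).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(log_d)
for mu, var, w in zip(gm.means_.ravel(), gm.covariances_.ravel(), gm.weights_):
    print(f"mode: geometric mean {np.exp(mu):5.1f} nm, "
          f"geometric sigma {np.exp(np.sqrt(var)):.2f}, weight {w:.2f}")
```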
Singh, Minerva; Evans, Damian; Coomes, David A.; Friess, Daniel A.; Suy Tan, Boun; Samean Nin, Chan
2016-01-01
This research examines the role of canopy cover in influencing above ground biomass (AGB) dynamics of an open canopied forest and evaluates the efficacy of individual-based and plot-scale height metrics in predicting AGB variation in the tropical forests of Angkor Thom, Cambodia. The AGB was modeled by including canopy cover from aerial imagery alongside two different canopy vertical height metrics derived from LiDAR: the plot average of the maximum tree height (Max_CH) of individual trees, and the top of the canopy height (TCH). Two different statistical approaches, log-log ordinary least squares (OLS) and support vector regression (SVR), were used to model AGB variation in the study area. Ten different AGB models were developed using different combinations of airborne predictor variables. It was discovered that the inclusion of canopy cover estimates considerably improved the performance of AGB models for our study area. The most robust model was the log-log OLS model comprising canopy cover only (r = 0.87; RMSE = 42.8 Mg/ha). Other models that approximated field AGB closely included both Max_CH and canopy cover (r = 0.86, RMSE = 44.2 Mg/ha for SVR; and r = 0.84, RMSE = 47.7 Mg/ha for log-log OLS). Hence, canopy cover should be included when modeling the AGB of open-canopied tropical forests. PMID:27176218
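A minimal sketch of the log-log OLS form, AGB = exp(a) · cover^b fitted on log-transformed data with a simple Baskerville-type back-transformation correction, is given below; the plot values are invented, not the Angkor Thom field and LiDAR data.

```python
# Sketch of a log-log OLS biomass model, AGB = exp(a) * cover**b, fitted on
# log-transformed data with a simple Baskerville-type correction applied when
# back-transforming predictions. Plot values below are invented placeholders.
import numpy as np

rng = np.random.default_rng(6)
canopy_cover = rng.uniform(10, 95, 60)                         # percent cover per plot
agb = 3.0 * canopy_cover ** 1.1 * rng.lognormal(0, 0.25, 60)   # Mg/ha, toy relationship

b, a = np.polyfit(np.log(canopy_cover), np.log(agb), 1)
resid = np.log(agb) - (a + b * np.log(canopy_cover))
correction = np.exp(resid.var(ddof=2) / 2.0)                   # counteracts log-transform bias

pred = correction * np.exp(a) * canopy_cover ** b
rmse = np.sqrt(np.mean((pred - agb) ** 2))
print(f"AGB = {np.exp(a):.2f} * cover^{b:.2f}, RMSE = {rmse:.1f} Mg/ha")
```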
NASA Astrophysics Data System (ADS)
Harrison, Benjamin; Sandiford, Mike; McLaren, Sandra
2016-04-01
Supervised machine learning algorithms attempt to build a predictive model using empirical data. Their aim is to take a known set of input data along with known responses to the data, and adaptively train a model to generate predictions for new data inputs. A key attraction of their use is the ability to perform as function approximators where the definition of an explicit relationship between variables is infeasible. We present a novel means of estimating thermal conductivity using a supervised self-organising map algorithm, trained on about 150 thermal conductivity measurements and a suite of five electric logs common to 14 boreholes. A key motivation of the study was to supplement the small number of direct measurements of thermal conductivity with the decades of borehole data acquired in the Gippsland Basin to produce more confident calculations of surface heat flow. A previous attempt to generate estimates from well-log data in the Gippsland Basin using classic petrophysical log interpretation methods was able to produce reasonable synthetic thermal conductivity logs for only four boreholes. The current study has extended this to a further ten boreholes. Interesting outcomes from the study are: the method appears stable at very low sample sizes (< ~100); the SOM permits quantitative analysis of essentially qualitative uncalibrated well-log data; and the method achieves moderate success at prediction with minimal effort tuning the algorithm's parameters.
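The sketch below is a hand-rolled, minimal version of the general idea (not the authors' implementation): a small self-organising map is trained on five-dimensional "log" vectors, each node inherits the mean conductivity of the training samples it wins, and new samples are predicted by best-matching-unit lookup. All data, grid sizes, and schedules are illustrative assumptions.

```python
# Minimal supervised-SOM sketch for property prediction from multi-log vectors (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n_logs = 5
X = rng.normal(size=(150, n_logs))                     # ~150 synthetic training measurements
k = X @ rng.normal(size=n_logs) * 0.3 + 2.5            # synthetic conductivity values (W/m/K)

grid = 8                                               # 8x8 map (assumption)
W = rng.normal(size=(grid, grid, n_logs))              # node weight vectors
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), -1)

def bmu(x):
    """Best matching unit: grid index of the node closest to x."""
    d = np.linalg.norm(W - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

for epoch in range(40):                                # simple decaying learning schedule
    lr = 0.5 * np.exp(-epoch / 20)
    sigma = 3.0 * np.exp(-epoch / 20)
    for x in X[rng.permutation(len(X))]:
        b = np.array(bmu(x))
        h = np.exp(-np.sum((coords - b) ** 2, axis=2) / (2 * sigma**2))  # neighbourhood
        W += lr * h[..., None] * (x - W)

# attach a conductivity label to each node: mean of the samples it wins
counts = np.zeros((grid, grid)); sums = np.zeros((grid, grid))
for x, ki in zip(X, k):
    b = bmu(x); counts[b] += 1; sums[b] += ki
labels = np.where(counts > 0, sums / np.maximum(counts, 1), k.mean())

def predict(x):
    return labels[bmu(x)]

print(predict(X[0]), k[0])
```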
Korinth, Gintautas; Wellner, Tanja; Schaller, Karl Heinz; Drexler, Hans
2012-11-23
Aqueous amphiphilic compounds may exhibit enhanced skin penetration compared with neat compounds. Conventional models do not predict this percutaneous penetration behaviour. We investigated the potential of the octanol-water partition coefficient (logP) to predict dermal fluxes for eight compounds applied neat and as 50% aqueous solutions in diffusion cell experiments using human skin. Data for seven other compounds were taken from the literature. In total, seven glycol ethers, three alcohols, two glycols, and three other chemicals were considered. Of these 15 compounds, 10 penetrated faster through the skin as aqueous solutions than as neat compounds. The other five compounds exhibited larger fluxes as neat applications. For 13 of the 15 compounds, a consistent relationship was identified between the percutaneous penetration behaviour and the logP. Compared with the neat applications, positive logP values were associated with larger fluxes for eight of the diluted compounds, and negative logP values were associated with smaller fluxes for five of the diluted compounds. Our study demonstrates that decreases or enhancements in dermal penetration upon aqueous dilution can be predicted for many compounds from the sign of logP (i.e., positive or negative). This approach may be suitable as a first approximation in risk assessments of dermal exposure.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.; Marino, J. T., Jr.
1974-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.
1975-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.
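A numerical sketch of the kind of calculation these reports describe is shown below: the bit error probability of an on-off keyed link with Poisson photodetection and log-normal intensity scintillation, swept over candidate count thresholds. The signal/background photon counts and the log-amplitude variance are illustrative values, not those of the cited reports.

```python
# BER vs. count threshold under a Poisson detection model with log-normal fading (illustrative values).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
Ks, Kb = 40.0, 4.0            # mean signal / background photocounts per slot (assumed)
sigma_chi2 = 0.1              # log-amplitude variance (assumed)

# unit-mean log-normal intensity: I = exp(2*chi), chi ~ N(-sigma_chi2, sigma_chi2)
chi = rng.normal(-sigma_chi2, np.sqrt(sigma_chi2), size=20000)
I = np.exp(2 * chi)

thresholds = np.arange(5, 40)
ber = []
for T in thresholds:
    p_false_alarm = poisson.sf(T - 1, Kb)                     # count >= T on an "off" bit
    p_miss = np.mean(poisson.cdf(T - 1, Ks * I + Kb))         # count < T on an "on" bit, averaged over fading
    ber.append(0.5 * (p_false_alarm + p_miss))

best = thresholds[int(np.argmin(ber))]
print(f"optimum threshold ~ {best} counts, BER ~ {min(ber):.2e}")
```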
NASA Astrophysics Data System (ADS)
Hamada, Y.; Yamada, Y.; Sanada, Y.; Nakamura, Y.; Kido, Y. N.; Moe, K.
2017-12-01
Gas hydrate bearing layers can normally be identified by a bottom-simulating reflector (BSR) or by well logging, because of their high acoustic and electric impedance compared to the surrounding formation. These characteristics of the gas hydrate can also produce a contrast in in-situ formation strength. We here attempt to describe gas hydrate bearing layers based on the equivalent strength (EST). The Indian National Gas Hydrate Program (NGHP) Expedition 02 was carried out in 2015 off the eastern margin of the Indian Peninsula to investigate the distribution and occurrence of gas hydrates. From 25 drill sites, downhole logging data, cored samples, and drilling performance data were collected. The recorded drilling performance data were converted to the EST, a mechanical strength proxy calculated only from drilling parameters (top drive torque, rotation per minute, rate of penetration, and drill bit diameter). At a representative site, Site 23, the EST shows a constant trend of 5 to 10 MPa with some positive peaks over the 0-270 mbsf interval, and a sudden increase up to 50 MPa just above the BSR depth (270-290 mbsf). Below the BSR, the EST stays at 5-10 MPa down to the bottom of the hole (378 mbsf). Comparison of the EST with logging data and core sample descriptions suggests that the depth profile of the EST reflects formation lithology and gas hydrate content: the EST increases in the sand-rich layers and in the gas hydrate bearing zone. Especially in the gas hydrate zone, the EST curve follows approximately the same trend as the P-wave velocity and resistivity measured by downhole logging. A cross plot of the EST increment against resistivity revealed that the relation between them is roughly logarithmic, indicating that increases and decreases of the EST strongly depend on the gas hydrate saturation. These results suggest that the EST, a proxy for in-situ formation strength, can be an indicator of the existence and amount of gas hydrate bearing layers. Although the EST was calculated after drilling using recorded surface drilling parameters in this study, it can also be acquired during drilling using real-time drilling parameters. In addition, the EST only requires drilling performance parameters without any additional tools or measurements, making it a simple and economical tool for the exploration of gas hydrates.
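The abstract does not give the EST formula itself; as a stand-in, the sketch below computes the rotary term of Teale's mechanical specific energy from the same four drilling parameters (torque, RPM, ROP, bit diameter), which is one common way of turning surface drilling data into a strength-like quantity in MPa. The input values are purely illustrative.

```python
# Rotary specific energy from drilling parameters (a stand-in for the EST, not its published formula).
import numpy as np

def rotary_specific_energy(torque_nm, rpm, rop_m_per_hr, bit_diameter_m):
    """Rotary energy per unit volume of rock removed, returned in MPa."""
    area = np.pi * (bit_diameter_m / 2) ** 2           # hole cross-section, m^2
    rop_m_per_s = rop_m_per_hr / 3600.0
    omega = 2 * np.pi * rpm / 60.0                     # angular speed, rad/s
    power_w = torque_nm * omega                        # rotary power, W
    volume_rate = area * rop_m_per_s                   # rock removal rate, m^3/s
    return power_w / volume_rate / 1e6                 # Pa -> MPa

# e.g. an 8.5-inch bit, 6 kN*m of torque, 60 rpm, 20 m/h penetration (hypothetical values)
print(f"{rotary_specific_energy(6000.0, 60.0, 20.0, 0.2159):.1f} MPa")
```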
Log polar image sensor in CMOS technology
NASA Astrophysics Data System (ADS)
Scheffer, Danny; Dierickx, Bart; Pardo, Fernando; Vlummens, Jan; Meynants, Guy; Hermans, Lou
1996-08-01
We report on the design, design issues, fabrication and performance of a log-polar CMOS image sensor. The sensor is developed for use in a videophone system for deaf and hearing-impaired people, who are not able to communicate through a 'normal' telephone. The system allows 15 detailed images per second to be transmitted over existing telephone lines. This frame rate is sufficient for conversations by means of sign language or lip reading. The pixel array of the sensor consists of 76 concentric circles with (up to) 128 pixels per circle, 8013 pixels in total. The interior pixels have a pitch of 14 micrometers, increasing up to 250 micrometers at the border. The 8013-pixel image is mapped (log-polar transformation) onto an X-Y addressable 76 by 128 array.
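The sketch below illustrates the log-polar mapping such a sensor implements in silicon: ring radii grow geometrically (uniform steps in log r) from the inner pitch outward, so a Cartesian point maps to a (ring, sector) index. The outer radius used here is only an assumed value for illustration.

```python
# Log-polar sampling sketch: 76 rings x 128 sectors, geometric radii (illustrative outer radius).
import numpy as np

n_rings, n_sectors = 76, 128
r_inner, r_outer = 14.0, 3500.0       # micrometers; r_outer is an assumption

radii = r_inner * (r_outer / r_inner) ** (np.arange(n_rings) / (n_rings - 1))
thetas = 2 * np.pi * np.arange(n_sectors) / n_sectors

def to_log_polar(x_um, y_um):
    """Map a Cartesian point to the nearest (ring, sector) pixel index."""
    r = np.hypot(x_um, y_um)
    theta = np.arctan2(y_um, x_um) % (2 * np.pi)
    ring = int(np.clip(np.round(np.log(r / r_inner) /
                                np.log(r_outer / r_inner) * (n_rings - 1)), 0, n_rings - 1))
    sector = int(np.round(theta / (2 * np.pi) * n_sectors)) % n_sectors
    return ring, sector

print(to_log_polar(100.0, 50.0))
```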
Proliferation and apoptosis in malignant and normal cells in B-cell non-Hodgkin's lymphomas.
Stokke, T.; Holte, H.; Smedshammer, L.; Smeland, E. B.; Kaalhus, O.; Steen, H. B.
1998-01-01
We have examined apoptosis and proliferation in lymph node cell suspensions from patients with B-cell non-Hodgkin's lymphoma using flow cytometry. A method was developed which allowed estimation of the fractions of apoptotic cells and cells in the S-phase of the cell cycle simultaneously with tumour-characteristic light chain expression. Analysis of the tumour S-phase fraction and the tumour apoptotic fraction in lymph node cell suspensions from 95 B-cell non-Hodgkin's lymphoma (NHL) patients revealed a non-normal distribution for both parameters. The median fraction of apoptotic tumour cells was 1.1% (25 percentiles 0.5%, 2.7%). In the same samples, the median fraction of apoptotic normal cells was higher than for the tumour cells (1.9%; 25 percentiles 0.7%, 4.0%; P = 0.03). The median fraction of tumour cells in S-phase was 1.4% (25 percentiles 0.8%, 4.8%), the median fraction of normal cells in S-phase was significantly lower than for the tumour cells (1.0%; 25 percentiles 0.6%, 1.9%; P = 0.004). When the number of cases was plotted against the logarithm of the S-phase fraction of the tumour cells, a distribution with two Gaussian peaks was needed to fit the data. One peak was centred around an S-phase fraction of 0.9%; the other was centred around 7%. These peaks were separated by a valley at approximately 3%, indicating that the S-phase fraction in NHL can be classified as 'low' (< 3%) or 'high' (> 3%), independent of the median S-phase fraction. The apoptotic fractions were log-normally distributed. The median apoptotic fraction was higher (1.5%) in the 'high' S-phase group than in the 'low' S-phase group (0.8%; P = 0.02). However, there was no significant correlation between the two parameters (P > 0.05).
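A minimal sketch of the kind of fit described above is shown below: a two-component Gaussian mixture fitted to the logarithm of S-phase fractions, separating a 'low' and a 'high' proliferation peak. The S-phase values are synthetic placeholders chosen to mimic the reported peak locations.

```python
# Two-component Gaussian mixture on log S-phase fractions (synthetic data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
s_phase = np.concatenate([rng.lognormal(np.log(0.9), 0.5, 60),    # 'low' peak near 0.9%
                          rng.lognormal(np.log(7.0), 0.4, 35)])   # 'high' peak near 7%

log_s = np.log10(s_phase).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(log_s)

means = 10 ** gmm.means_.ravel()
print(f"fitted peaks at ~{means.min():.1f}% and ~{means.max():.1f}% S-phase")
```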
Distribution of runup heights of the December 26, 2004 tsunami in the Indian Ocean
NASA Astrophysics Data System (ADS)
Choi, Byung Ho; Hong, Sung Jin; Pelinovsky, Efim
2006-07-01
A massive earthquake with magnitude 9.3 that occurred on December 26, 2004 off northern Sumatra generated huge tsunami waves that affected many coastal countries in the Indian Ocean. A number of field surveys were performed after this tsunami event; in particular, several surveys on the south/east coast of India, the Andaman and Nicobar Islands, Sri Lanka, Sumatra, Malaysia, and Thailand were organized by the Korean Society of Coastal and Ocean Engineers from January to August 2005. The spatial distribution of the tsunami runup is used to analyze the distribution function of the wave heights on different coasts. A theoretical interpretation of this distribution, based on random coastal bathymetry and coastline geometry, leads to log-normal functions. The observed data are also in very good agreement with a log-normal distribution, confirming the important role of the variable ocean bathymetry in the formation of the irregular wave height distribution along the coasts.
M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU
NASA Astrophysics Data System (ADS)
Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.
2018-04-01
Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
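A toy version of the Markov chain Monte Carlo step described above is sketched below: a simple Metropolis sampler explores the posterior of the (mu, sigma) parameters of a log-normal separation distribution given a handful of separation "point estimates". The data, priors, and likelihood are placeholders, not the paper's actual likelihood construction.

```python
# Metropolis-Hastings sketch for fitting a log-normal distribution (synthetic separations in AU).
import numpy as np

rng = np.random.default_rng(4)
a_obs = rng.lognormal(mean=np.log(3.0), sigma=1.2, size=40)    # synthetic separations, AU

def log_posterior(mu, sigma):
    if sigma <= 0 or sigma > 5:                                # flat prior box (assumed)
        return -np.inf
    z = (np.log(a_obs) - mu) / sigma
    return np.sum(-0.5 * z**2 - np.log(sigma) - np.log(a_obs))

theta = np.array([0.0, 1.0])
lp = log_posterior(*theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(scale=0.1, size=2)               # symmetric random-walk proposal
    lp_prop = log_posterior(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])                             # drop burn-in
mu_hat, sigma_hat = samples.mean(axis=0)
print(f"peak separation ~ {np.exp(mu_hat):.1f} AU, width sigma ~ {sigma_hat:.2f}")
```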
An estimate of field size distributions for selected sites in the major grain producing countries
NASA Technical Reports Server (NTRS)
Podwysocki, M. H.
1977-01-01
The field size distributions for the major grain producing countries of the World were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was computer analyzed for their frequency distributions. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching that of either a Poisson or log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution which has moments readily interpretable and useful for estimating the total population of fields. Resultant predictors of the field size estimates are discussed.
Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud
NASA Astrophysics Data System (ADS)
Leahy, D. A.
2017-03-01
Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg, with a 1σ dispersion by a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
Flame surface statistics of constant-pressure turbulent expanding premixed flames
NASA Astrophysics Data System (ADS)
Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.
2014-04-01
In this paper we investigate the local flame surface statistics of constant-pressure turbulent expanding flames. First, the statistics of the local length ratio are experimentally determined from high-speed planar Mie scattering images of spherically expanding flames, with the length ratio on the measurement plane, at predefined equiangular sectors, defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming an isotropic distribution of such flame segments, we then convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at the corresponding area-ratio pdfs. It is found that both the length-ratio and area-ratio pdfs are nearly log-normally distributed and show self-similar behavior with increasing radius. The near log-normality and rather intermittent behavior of the flame length ratio suggest a similarity with dissipation rate quantities, which stimulates multifractal analysis.
Empirical study of the tails of mutual fund size
NASA Astrophysics Data System (ADS)
Schwarzkopf, Yonathan; Farmer, J. Doyne
2010-06-01
The mutual fund industry manages about a quarter of the assets in the U.S. stock market and thus plays an important role in the U.S. economy. The question of how much control is concentrated in the hands of the largest players is best quantitatively discussed in terms of the tail behavior of the mutual fund size distribution. We study the distribution empirically and show that the tail is much better described by a log-normal than a power law, indicating less concentration than, for example, personal income. The results are highly statistically significant and are consistent across fifteen years. This contradicts a recent theory concerning the origin of the power law tails of the trading volume distribution. Based on the analysis in a companion paper, the log-normality is to be expected, and indicates that the distribution of mutual funds remains perpetually out of equilibrium.
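The sketch below illustrates the basic tail comparison described above: fit both a Pareto (power-law) tail and a log-normal model to observations above a cutoff by maximum likelihood and compare log-likelihoods. It uses synthetic "fund sizes", and the log-normal fit is not truncated at the cutoff, so it is a rough illustration rather than the paper's rigorous test (which would use, e.g., truncated likelihoods or a Vuong test).

```python
# Rough power-law vs. log-normal tail comparison on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sizes = rng.lognormal(mean=4.0, sigma=2.0, size=5000)      # synthetic fund sizes
xmin = np.quantile(sizes, 0.9)                             # tail cutoff (assumed)
tail = sizes[sizes >= xmin]

# power-law MLE for x >= xmin: alpha = 1 + n / sum(log(x/xmin))
alpha = 1 + len(tail) / np.sum(np.log(tail / xmin))
ll_pl = np.sum(np.log((alpha - 1) / xmin) - alpha * np.log(tail / xmin))

# log-normal MLE on the same tail (unconditioned fit, for illustration only)
mu, sig = np.mean(np.log(tail)), np.std(np.log(tail))
ll_ln = np.sum(stats.lognorm.logpdf(tail, s=sig, scale=np.exp(mu)))

print(f"alpha={alpha:.2f}, logL(power law)={ll_pl:.1f}, logL(log-normal)={ll_ln:.1f}")
```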
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arutyunyan, R.V.; Bol`shov, L.A.; Vasil`ev, S.K.
1994-06-01
The objective of this study was to clarify a number of issues related to the spatial distribution of contaminants from the Chernobyl accident. The effects of local statistics were addressed by collecting and analyzing (for Cesium-137) soil samples from a number of regions, and it was found that sample activity differed by a factor of 3-5. The effect of local non-uniformity was estimated by modeling the distribution of the average activity of a set of five samples for each of the regions, with the spread in the activities for a ±2 range being equal to 25%. The statistical characteristics of the distribution of contamination were then analyzed and found to follow a log-normal distribution with the standard deviation being a function of test area. All data for the Bryanskaya Oblast area were analyzed statistically and were adequately described by a log-normal function.
Radium-226 content of beverages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiefer, J.
Radium contents of commercially obtained beer, wine, milk and mineral waters were measured. All distributions were log-normal with the following geometric mean values: beer: 2.1 × 10^-2 Bq L^-1; wine: 3.4 × 10^-2 Bq L^-1; milk: 3 × 10^-3 Bq L^-1; normal mineral water: 4.3 × 10^-2 Bq L^-1; medical mineral water: 9.4 × 10^-2 Bq L^-1.
Non-Rayleigh Sea Clutter: Properties and Detection of Targets
1976-06-25
... subject should consult Guinard and Daley [7], which provides an overview of the theory and references all the important work. ... results for scattering from slightly rough surfaces and composite surfaces obtained by Rice [1], Wright [2,3], Valenzuela [4-6], and Guinard and Daley [7], ... for vertical polarization. In 1970, Trunk and George [10] considered the log-normal and contaminated-normal descriptions of sea clutter and calculated ...
Investigation into the performance of different models for predicting stutter.
Bright, Jo-Anne; Curran, James M; Buckleton, John S
2013-07-01
In this paper we have examined five possible models for the behaviour of the stutter ratio, SR. These were two log-normal models, two gamma models, and a two-component normal mixture model. A two-component normal mixture model was chosen with different behaviours of variance; at each locus SR was described with two distributions, both with the same mean. The distributions have different variances: one for the majority of the observations and a second for the less well-behaved ones. We apply each model to a set of known single source Identifiler™, NGM SElect™ and PowerPlex® 21 DNA profiles to show the applicability of our findings to different data sets. SR determined from the single source profiles was compared to the calculated SR after application of the models. Model performance was tested by calculating the log-likelihoods and comparing the difference in Akaike information criterion (AIC). The two-component normal mixture model systematically outperformed all others, despite the increase in the number of parameters. This model, as well as performing well statistically, has intuitive appeal for forensic biologists and could be implemented in an expert system with a continuous method for DNA interpretation.
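A small hand-rolled EM sketch of the preferred model is shown below: a two-component normal mixture in which both components share one mean but have different variances. The stutter-ratio values are synthetic placeholders, and the initialisation and iteration count are arbitrary choices.

```python
# EM for a common-mean, two-variance normal mixture (synthetic stutter ratios).
import numpy as np

rng = np.random.default_rng(6)
sr = np.concatenate([rng.normal(0.08, 0.01, 450),        # well-behaved observations
                     rng.normal(0.08, 0.04, 50)])        # the noisier component

w, mu, s1, s2 = 0.9, sr.mean(), sr.std(), 3 * sr.std()   # initial guesses
for _ in range(200):
    # E-step: responsibilities under the two components (shared mean)
    p1 = w * np.exp(-0.5 * ((sr - mu) / s1) ** 2) / s1
    p2 = (1 - w) * np.exp(-0.5 * ((sr - mu) / s2) ** 2) / s2
    r1 = p1 / (p1 + p2)
    # M-step: weight, precision-weighted common mean, and the two variances
    w = r1.mean()
    prec = r1 / s1**2 + (1 - r1) / s2**2
    mu = np.sum(prec * sr) / np.sum(prec)
    s1 = np.sqrt(np.sum(r1 * (sr - mu) ** 2) / np.sum(r1))
    s2 = np.sqrt(np.sum((1 - r1) * (sr - mu) ** 2) / np.sum(1 - r1))

p1 = w * np.exp(-0.5 * ((sr - mu) / s1) ** 2) / s1
p2 = (1 - w) * np.exp(-0.5 * ((sr - mu) / s2) ** 2) / s2
loglik = np.sum(np.log(p1 + p2)) - len(sr) * 0.5 * np.log(2 * np.pi)
aic = 2 * 4 - 2 * loglik                                 # four free parameters
print(f"mu={mu:.3f}, sd1={s1:.3f}, sd2={s2:.3f}, weight={w:.2f}, AIC={aic:.1f}")
```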
LETTERS AND COMMENTS: Note on the 'log formulae' for pendulum motion valid for any amplitude
NASA Astrophysics Data System (ADS)
Qing-Xin, Yuan; Pei, Ding
2010-01-01
In this note, we present an improved approximation to the solution of Lima (2008 Eur. J. Phys. 29 1091), which decreases the maximum relative error from 0.6% to 0.084% in evaluating the exact pendulum period.
Inactivation of Mycobacterium paratuberculosis by Pulsed Electric Fields
Rowan, Neil J.; MacGregor, Scott J.; Anderson, John G.; Cameron, Douglas; Farish, Owen
2001-01-01
The influence of treatment temperature and pulsed electric fields (PEF) on the viability of Mycobacterium paratuberculosis cells suspended in 0.1% (wt/vol) peptone water and in sterilized cow's milk was assessed by direct viable counts and by transmission electron microscopy (TEM). PEF treatment at 50°C (2,500 pulses at 30 kV/cm) reduced the level of viable M. paratuberculosis cells by approximately 5.3 and 5.9 log10 CFU/ml in 0.1% peptone water and in cow's milk, respectively, while PEF treatment of M. paratuberculosis at lower temperatures resulted in less lethality. Heating alone at 50°C for 25 min or at 72°C for 25 s (extended high-temperature, short-time pasteurization) resulted in reductions of M. paratuberculosis of approximately 0.01 and 2.4 log10 CFU/ml, respectively. TEM studies revealed that exposure to PEF treatment resulted in substantial damage at the cellular level to M. paratuberculosis.
Towards a PTAS for the generalized TSP in grid clusters
NASA Astrophysics Data System (ADS)
Khachay, Michael; Neznakhina, Katherine
2016-10-01
The Generalized Traveling Salesman Problem (GTSP) is a combinatorial optimization problem in which the goal is to find a minimum cost cycle visiting exactly one point (city) from each cluster. We consider a geometric case of this problem, where n nodes are given inside the integer grid (in the Euclidean plane) and each grid cell is a unit square. Clusters are induced by cells `populated' by nodes of the given instance. Even in this special setting, the GTSP remains intractable, enclosing the classic Euclidean TSP on the plane. Recently, it was shown that the problem has a (1.5+8√2+ɛ)-approximation algorithm with a complexity bound depending polynomially on n and k, where k is the number of clusters. In this paper, we propose two approximation algorithms for the Euclidean GTSP on grid clusters. For any fixed k, both algorithms are PTAS. The time complexity of the first remains polynomial for k = O(log n), while the second is a PTAS when k = n - O(log n).
Al-Qadiri, Hamzah; Sablani, Shyam S; Ovissipour, Mahmoudreza; Al-Alami, Nivin; Govindan, Byju; Rasco, Barbara
2015-04-01
This study investigated the growth and survival of three foodborne pathogens (Clostridium perfringens, Campylobacter jejuni, and Listeria monocytogenes) in beef (7% fat) and nutrient broth under different oxygen levels. Samples were tested under anoxic (<0.5%), microoxic (6 to 8%), and oxic (20%) conditions during storage at 7 °C for 14 days and at 22 °C for 5 days. Two initial inoculum concentrations were used (1 and 2 log CFU per g of beef or per ml of broth). The results show that C. perfringens could grow in beef at 22 °C, with an increase of approximately 5 log under anoxic conditions and a 1-log increase under microoxic conditions. However, C. perfringens could not survive in beef held at 7 °C under microoxic and oxic storage conditions after 14 days. In an anoxic environment, C. perfringens survived in beef samples held at 7 °C, with a 1-log reduction. A cell decline was observed at 2 log under these conditions, with no surviving cells at the 1-log level. However, the results show that C. jejuni under microoxic conditions survived with declining cell numbers. Significant increases in L. monocytogenes (5 to 7 log) were observed in beef held at 22 °C for 5 days, with the lowest levels recovered under anoxic conditions. L. monocytogenes in refrigerated storage increased by a factor of 2 to 4 log. It showed the greatest growth under oxic conditions, with significant growth under anoxic conditions. These findings can be used to enhance food safety in vacuum-packed and modified atmosphere-packaged food products.
Goff, J.A.; Holliger, K.
1999-01-01
The main borehole of the German Continental Deep Drilling Program (KTB) extends over 9000 m into a crystalline upper crust consisting primarily of interlayered gneiss and metabasite. We present a joint analysis of the velocity and lithology logs in an effort to extract the lithology component of the velocity log. Covariance analysis of the lithology log, approximated as a binary series, indicates that it may originate from the superposition of two Brownian stochastic processes (fractal dimension 1.5) with characteristic scales of ~2800 m and ~150 m, respectively. Covariance analysis of the velocity fluctuations provides evidence for the superposition of four stochastic processes with distinct characteristic scales. The largest two scales are identical to those derived from the lithology, confirming that these scales of velocity heterogeneity are caused by lithology variations. The third characteristic scale, ~20 m, also a Brownian process, is probably related to fracturing, based on correlation with the resistivity log. The superposition of these three Brownian processes closely mimics the commonly observed 1/k decay (fractal dimension 2.0) of the velocity power spectrum. The smallest scale process (characteristic scale ~1.7 m) requires a low fractal dimension, ~1.0, and accounts for ~60% of the total rms velocity variation. A comparison of successive logs from 6900-7140 m depth indicates that such variations are not repeatable and thus probably do not represent true velocity variations in the crust. The results of this study resolve the disparity between differing published estimates of seismic heterogeneity based on the KTB sonic logs, and bridge the gap between estimates of crustal heterogeneity from geologic maps and borehole logs.
NASA Astrophysics Data System (ADS)
Husson, V. S.; Long, J. L.; Pearlman, M.
2001-12-01
By the end of 2000, 94% of ILRS stations had completed station and site information forms (i.e. site logs). These forms contain six types of information: site identifiers, contact information, approximate coordinates, system configuration history, system ranging capabilities, and local survey ties. The ILRS Central Bureau, in conjunction with the ILRS Networks and Engineering Working Group, has developed procedures to quality control site log contents. Part of this verification entails data integrity checks of local site ties and is the primary focus of this paper. Local survey ties are critical to the combination of space geodetic network coordinate solutions (i.e. GPS, SLR, VLBI, DORIS) in the International Terrestrial Reference Frame (ITRF). Approximately 90% of active SLR sites are collocated with at least one other space geodetic technique. The process used to verify these SLR ties at collocated sites is identical to the approach used in ITRF2000. Local vectors (X, Y, Z) from each ILRS site log are differenced from the corresponding ITRF2000 position vectors (i.e. no transformations). These X, Y, and Z deltas are converted into North, East, and Up components. Any delta larger than 5 millimeters in any component is flagged for investigation. In the absence of ITRF2000 SLR positions, CSR positions were used. To further enhance this comparison and to fill gaps in information, local ties contained in site logs from the other space geodetic services (i.e. IGS, IVS, IDS) were used in addition to ITRF2000 ties. Case studies of two collocated sites (McDonald/Ft. Davis and Hartebeesthoek) are explored in depth. Recommendations on how local site surveys should be conducted and how this information should be managed are also presented.
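The core of the tie check described above can be sketched as follows: an ECEF (X, Y, Z) difference vector is rotated into a local North/East/Up frame at the site and any component above 5 mm is flagged. The coordinates and the delta vector below are hypothetical placeholders, not real station values.

```python
# ECEF delta -> local North/East/Up, with a 5 mm tolerance flag (hypothetical inputs).
import numpy as np

def ecef_delta_to_neu(dxyz, lat_deg, lon_deg):
    """Rotate an ECEF difference vector into local North, East, Up components."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    rot = np.array([
        [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon),  np.cos(lat)],  # North
        [-np.sin(lon),                np.cos(lon),                0.0        ],  # East
        [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon),  np.sin(lat)],  # Up
    ])
    return rot @ np.asarray(dxyz)

d_site_log_minus_itrf = np.array([0.0031, -0.0012, 0.0078])       # metres (hypothetical)
neu = ecef_delta_to_neu(d_site_log_minus_itrf, lat_deg=30.7, lon_deg=-104.0)
flags = np.abs(neu) > 0.005                                       # 5 mm tolerance
print(dict(zip(["N", "E", "U"], neu.round(4))), "flagged:", flags.any())
```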
Strategic Methodologies in Public Health Cost Analyses.
Whittington, Melanie; Atherly, Adam; VanRaemdonck, Lisa; Lampe, Sarah
The National Research Agenda for Public Health Services and Systems Research states the need for research to determine the cost of delivering public health services in order to assist the public health system in communicating financial needs to decision makers, partners, and health reform leaders. The objective of this analysis is to compare 2 cost estimation methodologies, public health manager estimates of employee time spent and activity logs completed by public health workers, to understand to what degree manager surveys could be used in lieu of more time-consuming and burdensome activity logs. Employees recorded their time spent on communicable disease surveillance for a 2-week period using an activity log. Managers then estimated time spent by each employee on a manager survey. Robust and ordinary least squares regression was used to measure the agreement between the time estimated by the manager and the time recorded by the employee. The 2 outcomes for this study included time recorded by the employee on the activity log and time estimated by the manager on the manager survey. This study was conducted in local health departments in Colorado. Forty-one Colorado local health departments (82%) agreed to participate. Seven of the 8 models showed that managers underestimate their employees' time, especially for activities on which an employee spent little time. Manager surveys can best estimate time for time-intensive activities, such as total time spent on a core service or broad public health activity, and yet are less precise when estimating discrete activities. When Public Health Services and Systems Research researchers and health departments are conducting studies to determine the cost of public health services, there are many situations in which managers can closely approximate the time required and produce a relatively precise approximation of cost without as much time investment by practitioners.
Lianou, Alexandra; Samelis, John
2014-08-01
Recent research has shown that mild milk thermization treatments routinely used in traditional Greek cheese production are efficient to inactivate Listeria monocytogenes and other pathogenic or undesirable bacteria, but they also inactivate a great part of the autochthonous antagonistic microbiota of raw milk. Therefore, in this study, the antilisterial activity of raw or thermized (63°C, 30 s) milk in the presence or absence of Lactococcus lactis subsp. cremoris M104, a wild, novel, nisin A-producing (Nis-A+) raw milk isolate, was assessed. Bulk milk samples were taken from a local cheese plant before or after thermization and were inoculated with a five-strain cocktail of L. monocytogenes (approximately 4 log CFU/ml) or with the cocktail, as above, plus the Nis-A+ strain (approximately 6 log CFU/ml) as a bioprotective culture. Heat-sterilized (121°C, 5 min) raw milk inoculated with L. monocytogenes was used as a control treatment. All milk samples were incubated at 37°C for 6 h and then at 18°C for an additional 66 h. L. monocytogenes grew abundantly (>8 log CFU/ml) in heat-sterilized milk, whereas its growth was completely inhibited in all raw milk samples. Conversely, in thermized milk, L. monocytogenes increased by 2 log CFU/ml in the absence of strain M104, whereas its growth was completely inhibited in the presence of strain M104. Furthermore, nisin activity was detected only in milk samples inoculated with strain M104. Thus, postthermal supplementation of thermized bulk milk with bioprotective L. lactis subsp. cremoris cultures replaces the natural antilisterial activity of raw milk reduced by thermization.
A Fast Algorithm for the Convolution of Functions with Compact Support Using Fourier Extensions
Xu, Kuan; Austin, Anthony P.; Wei, Ke
2017-12-21
In this paper, we present a new algorithm for computing the convolution of two compactly supported functions. The algorithm approximates the functions to be convolved using Fourier extensions and then uses the fast Fourier transform to efficiently compute Fourier extension approximations to the pieces of the result. The complexity of the algorithm is O(N(log N)^2), where N is the number of degrees of freedom used in each of the Fourier extensions.
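As background for the FFT step that makes such algorithms fast, the sketch below computes the convolution of two compactly supported functions sampled on a grid via zero-padded FFTs; it does not implement the Fourier-extension machinery of the paper, only the transform-based convolution it accelerates.

```python
# FFT-based linear convolution of two sampled, compactly supported functions (illustration only).
import numpy as np

N = 2048
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]
f = np.where(np.abs(x) <= 0.5, 1.0, 0.0)          # box supported on [-0.5, 0.5]
g = np.exp(-50 * x**2) * (np.abs(x) <= 0.4)       # smooth bump, compact support

# zero-pad to length 2N so the circular convolution equals the linear one, scale by grid spacing
conv = np.fft.irfft(np.fft.rfft(f, 2 * N) * np.fft.rfft(g, 2 * N), 2 * N)[:2 * N - 1] * h
x_conv = np.linspace(2 * x[0], 2 * x[-1], 2 * N - 1)
print(x_conv[np.argmax(conv)], conv.max())        # peak of (f*g), near the origin
```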
Diversity of microbiota found in coffee processing wastewater treatment plant.
Pires, Josiane Ferreira; Cardoso, Larissa de Souza; Schwan, Rosane Freitas; Silva, Cristina Ferreira
2017-11-13
Cultivable microbiota present in a coffee semi-dry processing wastewater treatment plant (WTP) was identified. Thirty-two operational taxonomic units (OTUs) were detected, these being 16 bacteria, 11 yeasts and 4 filamentous fungi. Bacteria dominated the microbial population (11.61 log CFU mL^-1) and presented the highest total diversity index in the WTP aerobic stage (Shannon = 1.94 and Simpson = 0.81). The most frequent bacterial species were Enterobacter asburiae, Sphingobacterium griseoflavum, Chryseobacterium bovis, Serratia marcescens, Corynebacterium flavescens, Acetobacter orientalis and Acetobacter indonesiensis; these showed the largest total bacterial populations in the WTP, at approximately 10 log CFU mL^-1. Yeasts were present at 7 log CFU mL^-1 of viable cells, with Hanseniaspora uvarum, Wickerhamomyces anomalus, Torulaspora delbrueckii, Saturnispora gosingensis, and Kazachstania gamospora being the prevalent species. Filamentous fungi were found at 6 log CFU mL^-1, with Fusarium oxysporum the most populous species. The identified species have the potential to act as a biological treatment in the WTP, and their application for this purpose should be studied further.
Inactivation of indigenous coliform bacteria in unfiltered surface water by ultraviolet light.
Cantwell, Raymond E; Hofmann, Ron
2008-05-01
This study examined the potential for naturally occurring particles to protect indigenous coliform bacteria from ultraviolet (UV) disinfection in four surface waters. Tailing in the UV dose-response curve of the bacteria was observed in 3 of the 4 water samples after 1.3-2.6 log of log-linear inactivation, implying particle-related protection. The impact of particles was confirmed by comparing coliform UV inactivation data for parallel filtered (11 microm pore-size nylon filters) and unfiltered surface water. In samples from the Grand River (UVT: 65%/cm; 5.4 nephelometric turbidity units (NTU)) and the Rideau Canal (UVT: 60%/cm; 0.84 NTU), a limit of approximately 2.5 log inactivation was achieved in the unfiltered samples for a UV dose of 20 mJ/cm2, while both filtered samples exhibited >3.4 log inactivation of indigenous coliform bacteria. The results suggest that particles as small as 11 microm, naturally found in surface water with low turbidity (<3 NTU), are able to harbor indigenous coliform bacteria and offer protection from low-pressure UV light.
Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza
2016-01-01
The time to the next blood donation plays a major role in whether a first-time donor becomes a regular one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 first-time donors at the Shahrekord Blood Transfusion Center, in the capital city of Chaharmahal and Bakhtiari Province, Iran, were selected by systematic sampling and followed up for five years. Among these, a subset of 424 donors who had at least two successful blood donations was chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, residence and job were recorded as independent variables. Data analysis was performed with a log-normal hazard model with gamma correlated frailty, in which the frailties are the sum of two independent components assumed to follow gamma distributions. The analysis was done via a Bayesian approach using a Markov Chain Monte Carlo algorithm in OpenBUGS. Convergence was checked via the Gelman-Rubin criterion using the BOA package in R. Age, job and education had significant effects on the chance of donating blood (P<0.05). The chances of blood donation were higher for older donors, clerical workers, labourers, the self-employed, students and more educated donors, and accordingly the time intervals between their blood donations were shorter. Given the significant effects of these variables in the log-normal correlated frailty model, educational and cultural programs should be planned to encourage people with longer inter-donation intervals to donate more frequently.
Application of Poisson random effect models for highway network screening.
Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer
2014-02-01
In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson log-normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson log-normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and the modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson log-normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification.
Chen, G; Shi, L; Cai, L; Lin, W; Huang, H; Liang, J; Li, L; Lin, L; Tang, K; Chen, L; Lu, J; Bi, Y; Wang, W; Ning, G; Wen, J
2017-02-01
Insulin resistance and β-cell function differ between young and elderly individuals with diabetes, but are not well characterized in nondiabetic persons. The aims of this study were to compare insulin resistance and β-cell function between young and old adults ranging from normal glucose tolerance (NGT) to prediabetes [subdivided into isolated impaired fasting glucose (i-IFG), isolated impaired glucose tolerance (i-IGT), and a combination of both (IFG/IGT)], and to compare the prevalence of diabetes mellitus (DM) in these prediabetes subgroups between the age groups after 3 years. A total of 1374 subjects aged below 40 or above 60 years with NGT or prediabetes were included in this study. Insulin resistance and β-cell function from the homeostasis model assessment (HOMA) and the interactive, 24-variable homeostatic model of assessment (iHOMA2) were compared between the age groups. The rate of transition to diabetes between the age groups in all prediabetes subgroups was also compared. Compared with the old groups, the young i-IFG and IFG/IGT groups exhibited higher log HOMA-IR and log HOMA2-S, whereas the young i-IGT group had comparable log HOMA-IR and log HOMA2-S when compared with the old i-IFG and IFG/IGT groups. All three prediabetes subgroups had similar log HOMA-B and log HOMA2-B between the age groups. In addition, the prevalence of diabetes in young i-IFG was statistically higher than that in old i-IFG after 3 years. Age was negatively related to log HOMA2-B in both age groups. Considering the age-related deterioration of β-cell function, young i-IFG, young i-IGT, and young IFG/IGT all suffered a greater impairment in insulin secretion than the old groups. Young i-IFG and IFG/IGT also had more severe insulin resistance than the old groups, and young i-IFG was characterized by a higher incidence of DM than old i-IFG. These disparities highlight that efforts to slow progression from prediabetes to type 2 diabetes should place additional focus on young individuals with prediabetes, especially young i-IFG.
Approximate matching of regular expressions.
Myers, E W; Miller, W
1989-01-01
Given a sequence A and regular expression R, the approximate regular expression matching problem is to find a sequence matching R whose optimal alignment with A is the highest scoring of all such sequences. This paper develops an algorithm to solve the problem in time O(MN), where M and N are the lengths of A and R. Thus, the time requirement is asymptotically no worse than for the simpler problem of aligning two fixed sequences. Our method is superior to an earlier algorithm by Wagner and Seiferas in several ways. First, it treats real-valued costs, in addition to integer costs, with no loss of asymptotic efficiency. Second, it requires only O(N) space to deliver just the score of the best alignment. Finally, its structure permits implementation techniques that make it extremely fast in practice. We extend the method to accommodate gap penalties, as required for typical applications in molecular biology, and further refine it to search for substrings of A that strongly align with a sequence in R, as required for typical database searches. We also show how to deliver an optimal alignment between A and R in only O(N + log M) space using O(MN log M) time. Finally, an O(MN(M + N) + N^2 log N) time algorithm is presented for alignment scoring schemes where the cost of a gap is an arbitrary increasing function of its length.
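The regular-expression version is involved; as a point of reference, the sketch below shows the simpler O(MN)-time, O(N)-space problem the abstract compares against: scoring the best alignment of two fixed sequences with unit mismatch/indel costs, keeping only one dynamic-programming row at a time.

```python
# Edit-distance dynamic program in O(MN) time and O(N) space (the fixed-sequence baseline).
def alignment_cost(a: str, b: str) -> int:
    """Minimum unit-cost alignment (edit distance) between sequences a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            curr[j] = min(prev[j] + 1,                 # deletion
                          curr[j - 1] + 1,             # insertion
                          prev[j - 1] + (ca != cb))    # match / substitution
        prev = curr
    return prev[-1]

print(alignment_cost("GATTACA", "GCATGCU"))            # -> 4
```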
Integrating models that depend on variable data
NASA Astrophysics Data System (ADS)
Banks, A. T.; Hill, M. C.
2016-12-01
Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
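The two approaches being compared can be sketched as follows on synthetic data whose dependent variable spans several orders of magnitude: (a) OLS on the log-transformed response, and (b) weighted least squares with weights derived from an assumed constant coefficient of variation (variance proportional to y^2). The data, model form, and noise level are illustrative assumptions, not the paper's nitrogen-transport example.

```python
# Log-transformed OLS vs. CV-based weighted least squares (synthetic data).
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0, 4, 200)
y_true = 10 ** (0.8 * x + 0.5)                        # spans ~4 orders of magnitude
y = y_true * rng.lognormal(0, 0.3, 200)               # multiplicative noise

X = np.column_stack([np.ones_like(x), x])

# (a) OLS on log10(y)
b_log, *_ = np.linalg.lstsq(X, np.log10(y), rcond=None)

# (b) weighted least squares on y with constant-CV weights w_i = 1/y_i^2
w = 1.0 / y**2
b_wls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

print("log-OLS slope/intercept:", b_log.round(3))
print("CV-weighted LS (linear model) coefficients:", b_wls.round(3))
```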
Evaluation of estimation methods for organic carbon normalized sorption coefficients
Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.
1997-01-01
A critically evaluated set of 94 soil/water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors. The three types of descriptors used in predicting Koc were the octanol/water partition coefficient (Kow), molecular connectivity (mXt) and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third-order difference term from mXt. Molecular connectivity correlations seemed to be best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Kow. The correlation to predict Koc from Kow was developed for 72 chemicals: log Koc = 0.903 log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. The expression to determine the 95% confidence interval on the estimated Koc is provided, along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that Koc is not likely to be applicable for chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two 'commonly used' class-specific correlations to determine which method(s) are most suitable.
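A worked example of the reported correlation, log Koc = 0.903 log Kow + 0.094, is shown below, followed by the standard linear-sorption retardation factor R = 1 + (rho_b/n)·foc·Koc; the soil properties and the example log Kow are hypothetical values, not taken from the paper.

```python
# Koc from log Kow via the reported regression, plus an illustrative retardation factor.
def koc_from_kow(log_kow: float) -> float:
    """Estimate Koc (L/kg) from log Kow using the reported correlation."""
    return 10 ** (0.903 * log_kow + 0.094)

log_kow = 3.4                        # a moderately hydrophobic compound (hypothetical)
koc = koc_from_kow(log_kow)

# retardation factor R = 1 + (rho_b / n) * foc * Koc  (standard linear-sorption form)
rho_b, n, foc = 1.6, 0.35, 0.01      # bulk density (kg/L), porosity, fraction organic carbon (assumed)
R = 1 + (rho_b / n) * foc * koc
print(f"Koc ~ {koc:.0f} L/kg, retardation factor R ~ {R:.1f}")
```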
Accounting for measurement error in log regression models with applications to accelerated testing.
Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M
2018-01-01
In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
Hakk, Heldur; Shappell, Nancy W; Lupton, Sara J; Shelver, Weilin L; Fanaselle, Wendy; Oryang, David; Yeung, Chi Yuen; Hoelzer, Karin; Ma, Yinqing; Gaalswyk, Dennis; Pouillot, Régis; Van Doren, Jane M
2016-01-13
Seven animal drugs [penicillin G (PENG), sulfadimethoxine (SDMX), oxytetracycline (OTET), erythromycin (ERY), ketoprofen (KETO), thiabendazole (THIA), and ivermectin (IVR)] were used to evaluate drug distribution between the milk fat and skim milk fractions of cow milk. More than 90% of the radioactivity was distributed into the skim milk fraction for ERY, KETO, OTET, PENG, and SDMX, approximately 80% for THIA, and 13% for IVR. The distribution of drug between the milk fat and skim milk fractions was significantly correlated with the drug's lipophilicity (partition coefficient, log P, or distribution coefficient, log D, which includes ionization). Data were fit with linear mixed effects models; the best fit within this data set was obtained with log D versus observed drug distribution ratios. These candidate empirical models can assist in predicting the distribution and concentration of these drugs in a variety of milk and milk products.
The algorithm for automatic detection of the calibration object
NASA Astrophysics Data System (ADS)
Artem, Kruglov; Irina, Ugfeld
2017-06-01
The problem of automatic image calibration is considered in this paper. The most challenging task of automatic calibration is the proper detection of the calibration object. Solving this problem requires applying methods and algorithms of digital image processing, such as morphology, filtering, edge detection, and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the production conditions of a logging enterprise. In these tests, the calibration object was isolated automatically in 86.1% of cases on average, with no type I errors. The algorithm was implemented in the automatic calibration module within the mobile software for log deck volume measurement.
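A hedged sketch of a pipeline in the spirit described above (filtering, edge detection, contour approximation) is shown below, assuming a roughly circular calibration disc against a background of log cuts. The OpenCV functions are real, but the thresholds and the circular-disc assumption are illustrative, not the authors' actual algorithm.

```python
# Illustrative calibration-object detection: blur, edges, contours, circularity score.
import cv2
import numpy as np

def find_calibration_disc(image_path: str):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (9, 9), 0)               # suppress bark/saw texture
    edges = cv2.Canny(blur, 50, 150)
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best, best_score = None, 0.0
    for c in contours:
        area = cv2.contourArea(c)
        if area < 500:                                      # ignore small speckle (assumed threshold)
            continue
        (x, y), r = cv2.minEnclosingCircle(c)
        circularity = area / (np.pi * r**2 + 1e-9)          # 1.0 for a perfect disc
        if circularity > best_score:
            best, best_score = (int(x), int(y), int(r)), circularity
    return best                                             # (cx, cy, radius) or None

# print(find_calibration_disc("log_deck_photo.jpg"))        # hypothetical input image
```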
Improved one-dimensional area law for frustration-free systems
NASA Astrophysics Data System (ADS)
Arad, Itai; Landau, Zeph; Vazirani, Umesh
2012-05-01
We present a new proof for the 1D area law for frustration-free systems with a constant gap, which exponentially improves the entropy bound in Hastings' 1D area law and which is tight to within a polynomial factor. For particles of dimension d, spectral gap ε > 0, and interaction strength at most J, our entropy bound is S_1D ≤ O(1)·X^3 log^8 X, where X := (J log d)/ε. Our proof is completely combinatorial, combining the detectability lemma with basic tools from approximation theory. In higher dimensions, when the bipartitioning area is |∂L|, we use additional local structure in the proof and show that S ≤ O(1)·|∂L|^2 log^6|∂L| · X^3 log^8 X. This is at the cusp of being nontrivial in the 2D case, in the sense that any further improvement would yield a subvolume law.
Effect of prior disturbances on the extent and severity of wildfire in Colorado subalpine forests.
Kulakowski, Dominik; Veblen, Thomas T
2007-03-01
Disturbances are important in creating spatial heterogeneity of vegetation patterns that in turn may affect the spread and severity of subsequent disturbances. Between 1997 and 2002 extensive areas of subalpine forests in northwestern Colorado were affected by a blowdown of trees, bark beetle outbreaks, and salvage logging. Some of these stands were also affected by severe fires in the late 19th century. During a severe drought in 2002, fires affected extensive areas of these subalpine forests. We evaluated and modeled the extent and severity of the 2002 fires in relation to these disturbances that occurred over the five years prior to the fires and in relation to late 19th century stand-replacing fires. Occurrence of disturbances prior to 2002 was reconstructed using a combination of tree-ring methods, aerial photograph interpretation, field surveys, and geographic information systems (GIS). The extent and severity of the 2002 fires were based on the normalized difference burn ratio (NDBR) derived from satellite imagery. GIS and classification trees were used to analyze the effects of prefire conditions on the 2002 fires. Previous disturbance history had a significant influence on the severity of the 2002 fires. Stands that were severely blown down (> 66% trees down) in 1997 burned more severely than other stands, and young (approximately 120 year old) postfire stands burned less severely than older stands. In contrast, prefire disturbances were poor predictors of fire extent, except that young (approximately 120 years old) postfire stands were less extensively burned than older stands. Salvage logging and bark beetle outbreaks that followed the 1997 blowdown (within the blowdown as well as in adjacent forest that was not blown down) did not appear to affect fire extent or severity. Conclusions regarding the influence of the beetle outbreaks on fire extent and severity are limited, however, by spatial and temporal limitations associated with aerial detection surveys of beetle activity. Thus, fire extent in these forests is largely independent of prefire disturbance history and vegetation conditions. In contrast, fire severity, even during extreme fire weather and in conjunction with a multiyear drought, is influenced by prefire stand conditions, including the history of previous disturbances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fabain, R.T.
1994-05-16
A rock strength analysis program, through intensive log analysis, can quantify rock hardness in terms of confined compressive strength to identify intervals suited for drilling with polycrystalline diamond compact (PDC) bits. Additionally, knowing the confined compressive strength helps determine the optimum PDC bit for the intervals. Computing rock strength as confined compressive strength can more accurately characterize a rock's actual hardness downhole than other methods. The information can be used to improve bit selections and to help adjust drilling parameters to reduce drilling costs. Empirical data compiled from numerous field strength analyses have provided a guide to selecting PDC drill bits. A computer analysis program has been developed to aid in PDC bit selection. The program more accurately defines rock hardness in terms of confined strength, which approximates the in situ rock hardness downhole. Unconfined compressive strength is rock hardness at atmospheric pressure. The program uses sonic and gamma ray logs as well as numerous input data from mud logs. Within the range of lithologies for which the program is valid, rock hardness can be determined with improved accuracy. The program's output is typically graphed in a log format displaying raw data traces from well logs, computer-interpreted lithology, the calculated values of confined compressive strength, and various optional rock mechanics outputs.
Lewis, Vernard R; Leighton, Shawn; Tabuchi, Robin; Baldwin, James A; Haverty, Michael I
2013-02-01
Acoustic emission (AE) activity patterns were measured from seven loquat [Eriobotrya japonica (Thunb.) Lindl.] logs, five containing live western drywood termite [Incisitermes minor (Hagen)] infestations, and two without an active drywood termite infestation. AE activity, as well as temperature, was monitored every 3 min under unrestricted ambient conditions in a small wooden building, under unrestricted ambient conditions but in constant darkness, or in a temperature-controlled cabinet under constant darkness. Logs with active drywood termite infestations displayed similar diurnal cycles of AE activity that closely followed temperature, with a peak of AE activity late in the afternoon (1700-1800 hours). When light was excluded from the building, a circadian pattern continued and apparently was driven by temperature. When the seven logs were kept at a relatively constant temperature (approximately 23 ± 0.9 degrees C) and in constant darkness, the pattern of activity was closely correlated with temperature, even with minimal changes in temperature. Temperature is the primary driver of activity of these drywood termites, but the effects are different when temperature is increasing or decreasing. At constant temperature, AE activity was highly correlated with the number of termites in the logs. The possible implications of these findings on our understanding of drywood termite biology and how this information may affect inspections and posttreatment evaluations are discussed.
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
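A minimal sketch of the quadrature idea mentioned above: for a single normally distributed random intercept, the cluster-level marginal likelihood can be approximated with nonadaptive Gauss-Hermite nodes. The Weibull proportional-hazards form, the variable names, and the toy data are assumptions for illustration; this is not the authors' Stata implementation.

```python
# Hedged sketch: nonadaptive Gauss-Hermite quadrature for the marginal log-likelihood
# of a Weibull proportional-hazards model with a shared normal random intercept per
# cluster. Single scalar covariate and toy numbers; not the authors' software.
import numpy as np

def cluster_loglik(times, events, x, beta, log_lambda, log_gamma, sigma, n_nodes=15):
    """Marginal log-likelihood of one cluster.

    hazard: h(t) = lambda * gamma * t**(gamma-1) * exp(x*beta + b),  b ~ N(0, sigma^2)
    """
    lam, gam = np.exp(log_lambda), np.exp(log_gamma)
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    total = 0.0
    for z, w in zip(nodes, weights):
        b = np.sqrt(2.0) * sigma * z              # change of variables for N(0, sigma^2)
        eta = x * beta + b
        log_h = np.log(lam) + np.log(gam) + (gam - 1.0) * np.log(times) + eta
        cum_h = lam * times**gam * np.exp(eta)
        log_f = np.sum(events * log_h - cum_h)    # conditional log-likelihood given b
        total += w * np.exp(log_f)
    return np.log(total / np.sqrt(np.pi))

# toy data: one cluster of three subjects (time, event indicator, covariate)
print(cluster_loglik(np.array([2.0, 5.0, 1.5]), np.array([1, 0, 1]),
                     np.array([0.5, -1.0, 0.0]), beta=0.3,
                     log_lambda=np.log(0.1), log_gamma=np.log(1.2), sigma=0.5))
```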
Behavioral evaluation of visual function of rats using a visual discrimination apparatus.
Thomas, Biju B; Samant, Deedar M; Seiler, Magdalene J; Aramant, Robert B; Sheikholeslami, Sharzad; Zhang, Kevin; Chen, Zhenhai; Sadda, SriniVas R
2007-05-15
A visual discrimination apparatus was developed to evaluate the visual sensitivity of normal pigmented rats (n=13) and S334ter-line-3 retinal degenerate (RD) rats (n=15). The apparatus is a modified Y maze consisting of two chambers leading to the rats' home cage. Rats were trained to find a one-way exit door leading into their home cage, based on distinguishing between two different visual alternatives (either a dark background or black and white stripes at varying luminance levels) which were randomly displayed on the back of each chamber. Within 2 weeks of training, all rats were able to distinguish between these two visual patterns. The discrimination threshold of normal pigmented rats was a luminance level of -5.37 ± 0.05 log cd/m²; whereas the threshold level of 100-day-old RD rats was -1.14 ± 0.09 log cd/m², with considerable variability in performance. When tested at a later age (about 150 days), the threshold level of RD rats was significantly increased (-0.82 ± 0.09 log cd/m², p<0.03, paired t-test). This apparatus could be useful to train rats at a very early age to distinguish between two different visual stimuli and may be effective for visual functional evaluations following therapeutic interventions.
Tomasino, Stephen F; Rastogi, Vipin K; Wallace, Lalena; Smith, Lisa S; Hamilton, Martin A; Pines, Rebecca M
2010-01-01
The quantitative Three-Step Method (TSM) for testing the efficacy of liquid sporicides against spores of Bacillus subtilis on a hard, nonporous surface (glass) was adopted as AOAC Official Method 2008.05 in May 2008. The TSM uses 5 x 5 x 1 mm coupons (carriers) upon which spores have been inoculated and which are introduced into liquid sporicidal agent contained in a microcentrifuge tube. Following exposure of inoculated carriers and neutralization, spores are removed from carriers in three fractions (gentle washing, fraction A; sonication, fraction B; and gentle agitation, fraction C). Liquid from each fraction is serially diluted and plated on a recovery medium for spore enumeration. The counts are summed over the three fractions to provide the density (viable spores per carrier), which is log10-transformed to arrive at the log density. The log reduction is calculated by subtracting the mean log density for treated carriers from the mean log density for control carriers. This paper presents a single-laboratory investigation conducted to evaluate the applicability of using two porous carrier materials (ceramic tile and untreated pine wood) and one alternative nonporous material (stainless steel). Glass carriers were included in the study as the reference material. Inoculated carriers were evaluated against three commercially available liquid sporicides (sodium hypochlorite, a combination of peracetic acid and hydrogen peroxide, and glutaraldehyde), each at two levels of presumed efficacy (medium and high) to provide data for assessing the responsiveness of the TSM. Three coupons of each material were evaluated across three replications at each level; three replications of a control were required. Even though all carriers were inoculated with approximately the same number of spores, the observed counts of recovered spores were consistently higher for the nonporous carriers. For control carriers, the mean log densities for the four materials ranged from 6.63 for wood to 7.14 for steel. The pairwise differences between mean log densities, except for glass minus steel, were statistically significant (P < 0.001). The repeatability standard deviations (Sr) for the mean control log density per test were similar for the four materials, ranging from 0.08 for wood to 0.13 for tile. Spore recovery from the carrier materials ranged from approximately 20 to 70%: 20% (pine wood), 40% (ceramic tile), 55% (glass), and 70% (steel). Although the percent spore recovery from pine wood was significantly lower than that from other materials, the performance data indicate that the TSM provides a repeatable and responsive test for determining the efficacy of liquid sporicides on both porous and nonporous materials.
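The log-density and log-reduction arithmetic described above is straightforward; a short sketch with invented spore counts makes the computation explicit.

```python
# Hedged sketch of the log-density and log-reduction arithmetic described for the
# Three-Step Method: counts from fractions A, B, and C are summed per carrier,
# log10-transformed, and the mean treated log density is subtracted from the mean
# control log density. The spore counts below are made-up numbers.
import numpy as np

def log_density(fraction_a, fraction_b, fraction_c):
    """Viable spores per carrier (summed over fractions), log10-transformed."""
    return np.log10(np.asarray(fraction_a) + np.asarray(fraction_b) + np.asarray(fraction_c))

control = log_density([4.1e6, 5.0e6, 3.8e6], [8.0e5, 6.5e5, 9.0e5], [1.1e5, 0.9e5, 1.3e5])
treated = log_density([2.2e3, 1.5e3, 3.0e3], [4.0e2, 6.1e2, 2.8e2], [5.0e1, 7.2e1, 4.1e1])

log_reduction = control.mean() - treated.mean()
print(f"mean control log density: {control.mean():.2f}")
print(f"mean treated log density: {treated.mean():.2f}")
print(f"log reduction:            {log_reduction:.2f}")
```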
Time-Lapse Measurement of Wellbore Integrity
NASA Astrophysics Data System (ADS)
Duguid, A.
2017-12-01
Well integrity is becoming more important as wells are used longer or repurposed. For CO2, shale gas, and other projects it has become apparent that wells represent the most likely unintended migration pathway for fluids out of the reservoir. Comprehensive logging programs have been employed to determine the condition of legacy wells in North America. These studies provide examples of assessment technologies. Logging programs have included pulsed neutron logging, ultrasonic well mapping, and cement bond logging. While these studies provide examples of what can be measured, they have only conducted a single round of logging and cannot show if the well has changed over time. Recent experience with time-lapse logging of three monitoring wells at a US Department of Energy sponsored CO2 project has shown the full value of similar tools. Time-lapse logging has shown that well integrity changes over time can be identified. It has also shown that the inclusion of and location of monitoring technologies in the well and the choice of construction materials must be carefully considered. Two of the wells were approximately eight years old at the time of study; they were constructed with steel and fiberglass casing sections and had lines on the outside of the casing running to the surface. The third well was 68 years old when it was studied and was originally constructed as a production well. Repeat logs were collected six or eight years after initial logging. Time-lapse logging showed the evolution of the wells. The results identified locations where cement degraded over time and locations that showed little change. The ultrasonic well maps show clearly that the lines used to connect the monitoring technology to the surface are visible and have a local effect on cement isolation. Testing and sampling was conducted along with logging. It provided insight into changes identified in the time-lapse log results. Point permeability testing was used to provide an in-situ point estimate of the cement isolating capacity. Cased-hole sidewall cores in the steel and fiberglass casing sections allowed analysis of bulk cement and the cement at the casing- and formation-interface. This presentation will cover how time-lapse logging was conducted, how the results may be applicable to other wells, and how monitoring well design may affect wellbore integrity.
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
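For readers unfamiliar with the objective in question, the sketch below writes out a Breslow-form negative log partial likelihood with an added lasso penalty. It only illustrates why the summands are neither iid nor Lipschitz (each term involves a risk-set sum over the other subjects); it is not the paper's iid approximation or oracle-inequality argument, and the data are synthetic.

```python
# Hedged sketch: Breslow-form negative log partial likelihood for right-censored data,
# plus an L1 (lasso) penalty. Illustrative only; not the paper's theoretical construction.
import numpy as np

def neg_log_partial_likelihood(beta, times, events, X, lam=0.0):
    order = np.argsort(-times)                 # sort subjects by descending time
    times, events, X = times[order], events[order], X[order]
    eta = X @ beta
    # running log-sum-exp gives the risk-set term: all subjects with time >= t_i
    log_risk = np.logaddexp.accumulate(eta)
    nll = -np.sum(events * (eta - log_risk))
    return nll + lam * np.sum(np.abs(beta))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
times = rng.exponential(scale=np.exp(-X[:, 0] * 0.5))
events = rng.integers(0, 2, size=50).astype(float)
print(neg_log_partial_likelihood(np.zeros(5), times, events, X, lam=0.1))
```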
NASA Astrophysics Data System (ADS)
Cranganu, Constantin
2007-10-01
Many sedimentary basins throughout the world exhibit areas with abnormal pore-fluid pressures (higher or lower than normal or hydrostatic pressure). Predicting pore pressure and other parameters (depth, extension, magnitude, etc.) in such areas is a challenging task. The compressional acoustic (sonic) log (DT) is often used as a predictor because it responds to changes in porosity or compaction produced by abnormal pore-fluid pressures. Unfortunately, the sonic log is not commonly recorded in most oil and/or gas wells. We propose using an artificial neural network to synthesize sonic logs by identifying the mathematical dependency between DT and the commonly available logs, such as normalized gamma ray (GR) and deep resistivity logs (REID). The artificial neural network process can be divided into three steps: (1) supervised training of the neural network; (2) confirmation and validation of the model by blind-testing the results in wells that contain both the predictor (GR, REID) and the target values (DT) used in the supervised training; and (3) applying the predictive model to all wells containing the required predictor data and verifying the accuracy of the synthetic DT data by comparing the back-predicted synthetic predictor curves (GRNN, REIDNN) to the recorded predictor curves used in training (GR, REID). Artificial neural networks offer significant advantages over traditional deterministic methods. They do not require a precise mathematical model equation that describes the dependency between the predictor values and the target values and, unlike linear regression techniques, neural network methods do not overpredict mean values and thereby preserve original data variability. One of their most important advantages is that their predictions can be validated and confirmed through back-prediction of the input data. This procedure was applied to predict the presence of overpressured zones in the Anadarko Basin, Oklahoma. The results are promising and encouraging.
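A hedged sketch of the general approach, using a small scikit-learn multilayer perceptron to synthesize DT from GR and log-resistivity on synthetic curves. The toy relationship, network size, and scaling are assumptions, not the configuration used in the Anadarko Basin study.

```python
# Hedged sketch: training a small feed-forward network to synthesize sonic transit
# time (DT) from gamma ray (GR) and deep resistivity (REID) logs, then blind-testing
# on held-out samples. Synthetic data stand in for real log curves.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2000
gr = rng.uniform(20, 150, n)                       # API units
reid = 10 ** rng.uniform(-0.5, 2.0, n)             # ohm-m, roughly log-uniform
dt = 70 + 0.4 * gr - 10 * np.log10(reid) + rng.normal(0, 3, n)   # toy "true" DT

X = np.column_stack([gr, np.log10(reid)])          # resistivity used on a log scale
scaler = StandardScaler().fit(X[:1500])
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(scaler.transform(X[:1500]), dt[:1500])   # "training wells"

dt_synth = model.predict(scaler.transform(X[1500:]))   # "blind-test well"
rmse = np.sqrt(np.mean((dt_synth - dt[1500:]) ** 2))
print(f"blind-test RMSE: {rmse:.2f} microsec/ft")
```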
HPLC retention thermodynamics of grape and wine tannins.
Barak, Jennifer A; Kennedy, James A
2013-05-08
The effect of grape and wine tannin structure on retention thermodynamics under reversed-phase high-performance liquid chromatography conditions on a polystyrene divinylbenzene column was investigated. On the basis of retention response to temperature, an alternative retention factor was developed to approximate the combined temperature response of the complex, unresolvable tannin mixture. This alternative retention factor was based upon relative tannin peak areas separated by an abrupt change in solvent gradient. Using this alternative retention factor, retention thermodynamics were calculated. Van't Hoff relationships of the natural log of the alternative retention factor against temperature followed Kirchhoff's relationship. An inverse quadratic equation was fit to the data, and from this the thermodynamic parameters for tannin retention were calculated. All tannin fractions exhibited exothermic, spontaneous interaction, with enthalpy-entropy compensation observed. Normalizing for tannin size, distinct tannin compositional effects on thermodynamic parameters were observed. The results of this study indicate that HPLC can be valuable for measuring the thermodynamics of tannin interaction with a hydrophobic surface and provides a potentially valuable alternative to calorimetry. Furthermore, the information gathered may provide insight into understanding red wine astringency quality.
Identification of lithology in Gulf of Mexico Miocene rocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilterman, F.J.; Sherwood, J.W.C.; Schellhorn, R.
1996-12-31
In the Gulf of Mexico, many gas-saturated sands are not Bright Spots and thus are difficult to detect on conventional 3D seismic data. These small amplitude reflections occur frequently in Pliocene-Miocene exploration plays when the acoustic impedances of the gas-saturated sands and shales are approximately the same. In these areas, geophysicists have had limited success using AVO to reduce the exploration risk. The interpretation of the conventional AVO attributes is often difficult and contains questionable relationships to the physical properties of the media. A 3D AVO study was conducted utilizing numerous well-log suites, core analyses, and production histories to help calibrate the seismic response to the petrophysical properties. This study resulted in an extension of the AVO method to a technique that now displays Bright Spots when very clean sands and gas-saturated sands occur. These litho-stratigraphic reflections on the new AVO technique are related to Poisson's ratio, a petrophysical property that is normally mixed with the acoustic impedance on conventional 3D migrated data.
Rare earth element distribution in some hydrothermal minerals: evidence for crystallographic control
Morgan, J.W.; Wandless, G.A.
1980-01-01
Rare earth element (REE) abundances were measured by neutron activation analysis in anhydrite (CaSO4), barite (BaSO4), siderite (FeCO3) and galena (PbS). A simple crystal-chemical model qualitatively describes the relative affinities for REE substitution in anhydrite, barite, and siderite. When normalized to 'crustal' abundances (as an approximation to the hydrothermal fluid REE pattern), log REE abundance is a surprisingly linear function of (ionic radius of major cation - ionic radius of REE)² for the three hydrothermal minerals, individually and collectively. An important exception, however, is Eu, which is anomalously enriched in barite and depleted in siderite relative to REE of neighboring atomic number and trivalent ionic radius. In principle, REE analyses of suitable pairs of co-existing hydrothermal minerals, combined with appropriate experimental data, could yield both the REE content and the temperature of the parental hydrothermal fluid. The REE have only very weak chalcophilic tendencies, and this is reflected by the very low abundances in galena: La, 0.6 ppb; Sm, 0.06 ppb; the remainder are below detection limits. © 1980.
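The linear relation described above can be written compactly as follows; the fit constants a_M and b_M are mineral-specific symbols introduced here for illustration.

```latex
% Hedged restatement of the relation described in the abstract: crust-normalized REE
% abundance in a hydrothermal mineral falls off linearly (in log space) with the squared
% mismatch between the REE ionic radius and that of the major cation. a_M and b_M are
% mineral-specific fit constants (symbols introduced here, not from the source).
\log_{10}\!\left(\frac{[\mathrm{REE}]_{\mathrm{mineral}}}{[\mathrm{REE}]_{\mathrm{crust}}}\right)
  \;\approx\; a_M \;-\; b_M\,\bigl(r_{\mathrm{cation}} - r_{\mathrm{REE}}\bigr)^{2}
```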
Effect of rapid thermal annealing temperature on the dispersion of Si nanocrystals in SiO2 matrix
NASA Astrophysics Data System (ADS)
Saxena, Nupur; Kumar, Pragati; Gupta, Vinay
2015-05-01
Effect of rapid thermal annealing temperature on the dispersion of silicon nanocrystals (Si NCs) embedded in SiO2 matrix grown by atom beam sputtering (ABS) method is reported. The dispersion of Si NCs in SiO2 is an important issue in fabricating high-efficiency devices based on Si NCs. The transmission electron microscopy studies reveal that the precipitation of excess silicon is almost uniform and the particles grow to a nearly uniform size up to 850 °C. The size distribution of the particles broadens and becomes bimodal as the temperature is increased to 950 °C. This suggests that by controlling the annealing temperature, the dispersion of Si NCs can be controlled. The results are supported by selected area electron diffraction (SAED) studies and micro-photoluminescence (PL) spectroscopy. The effect of particle size distribution on the PL spectrum is discussed on the basis of the tight binding approximation (TBA) method using Gaussian and log-normal distributions of particles. The study suggests that the dispersion, and consequently the emission energy, varies as a function of particle size distribution and can be controlled by annealing parameters.
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected so that the measurement signals are sensitive to wavelength and the ill-conditioning of the coefficient matrix of the linear system is reduced, which enhances the robustness of the retrieval results to interference. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of distributions. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
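A minimal sketch of such a retrieval, assuming an ADA extinction kernel for non-absorbing spheres and an illustrative wavelength set, then solving the discretized system with SciPy's LSQR (a small damping term stands in for regularization):

```python
# Hedged sketch: non-parametric retrieval of an aerosol size distribution by solving
# the discretized extinction equation tau(lambda) = K n with LSQR. Wavelengths, size
# grid, refractive index, and noise level are illustrative assumptions.
import numpy as np
from scipy.sparse.linalg import lsqr

def ada_extinction_efficiency(d_um, wavelength_um, m_real=1.45):
    """Q_ext from the Anomalous Diffraction Approximation (non-absorbing sphere)."""
    rho = 2.0 * np.pi * d_um / wavelength_um * (m_real - 1.0)
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

diameters = np.linspace(0.1, 2.0, 40)                           # um, retrieval grid
wavelengths = np.array([0.4, 0.5, 0.675, 0.87, 1.02, 1.64])     # um, assumed set

# kernel: extinction cross-section (geometric area times efficiency) per bin
K = np.array([[np.pi / 4.0 * d**2 * ada_extinction_efficiency(d, w)
               for d in diameters] for w in wavelengths])

true_n = np.exp(-0.5 * ((np.log(diameters) - np.log(0.5)) / 0.4) ** 2)  # log-normal-like
tau = K @ true_n + 0.01 * np.random.default_rng(0).normal(size=len(wavelengths))

n_retrieved = lsqr(K, tau, damp=1e-3)[0]            # damped LSQR for stability
print(np.round(n_retrieved[:10], 3))
```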
NASA Astrophysics Data System (ADS)
Choi, B. H.; Min, B. I.; Yoshinobu, T.; Kim, K. O.; Pelinovsky, E.
2012-04-01
Data from a field survey of the 2011 tsunami in the Sanriku area of Japan is presented and used to plot the distribution function of runup heights along the coast. It is shown that the distribution function can be approximated using a theoretical log-normal curve [Choi et al., 2002]. The characteristics of the distribution functions derived from the runup-height data obtained during the 2011 event are compared with data from two previous gigantic tsunamis (1896 and 1933) that occurred in almost the same region. The number of observations during the last tsunami is very large (more than 5,247), which provides an opportunity to revise the conception of the distribution of tsunami wave heights and the relationship between statistical characteristics and number of observations suggested by Kajiura [1983]. The distribution function of the 2011 event demonstrates the sensitivity to the number of observation points (many of them cannot be considered independent measurements) and can be used to determine the characteristic scale of the coast, which corresponds to the statistical independence of observed wave heights.
Radner, Wolfgang; Radner, Stephan; Raunig, Valerian; Diendorfer, Gabriela
2014-03-01
To evaluate reading performance of patients with monofocal intraocular lenses (IOLs) (Acrysof SN60WF) with or without reading glasses under bright and dim light conditions. Austrian Academy of Ophthalmology, Vienna, Austria. Evaluation of a diagnostic test or technology. In pseudophakic patients, the spherical refractive error was limited to between +0.50 diopter (D) and -0.75 D with astigmatism of 0.75 D (mean spherical equivalent: right eye, -0.08 ± 0.43 [SD]; left eye, -0.15 ± 0.35). Near addition was +2.75 D. Reading performance was assessed binocularly with or without reading glasses at an illumination of 100 candelas (cd)/m² and 4 cd/m² using the Radner Reading Charts. In the 25 patients evaluated, binocularly, the mean corrected distance visual acuity was -0.07 ± 0.06 logMAR and the mean uncorrected distance visual acuity was 0.01 ± 0.11 logMAR. The mean reading acuity with reading glasses was 0.02 ± 0.10 logRAD at 100 cd/m² and 0.12 ± 0.14 logRAD at 4 cd/m². Without reading glasses, it was 0.44 ± 0.13 logRAD and 0.56 ± 0.16 logRAD, respectively (P < .05). Without reading glasses and at 100 cd/m², 40% of patients read 0.4 logRAD at more than 80 words per minute (wpm), 68% exceeded this limit at 0.5 logRAD, and 92% exceeded it at 0.6 logRAD. The mean reading speed at 0.5 logRAD was 134.76 ± 48.22 wpm; with reading glasses it was 167.65 ± 32.77 wpm (P < .05). A considerable percentage of patients with monofocal IOLs read newspaper print size without glasses under good light conditions. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Birdwell, Justin; Cook, Robert L; Thibodeaux, Louis J
2007-03-01
Resuspension of contaminated sediment can lead to the release of toxic compounds to surface waters where they are more bioavailable and mobile. Because the timeframe of particle resettling during such events is shorter than that needed to reach equilibrium, a kinetic approach is required for modeling the release process. Due to the current inability of common theoretical approaches to predict site-specific release rates, empirical algorithms incorporating the phenomenological assumption of biphasic, or fast and slow, release dominate the descriptions of nonpolar organic chemical release in the literature. Two first-order rate constants and one fraction are sufficient to characterize practically all of the data sets studied. These rate constants were compared to theoretical model parameters and functionalities, including chemical properties of the contaminants and physical properties of the sorbents, to determine if the trends incorporated into the hindered diffusion model are consistent with the parameters used in curve fitting. The results did not correspond to the parameter dependence of the hindered diffusion model. No trend in desorption rate constants, for either fast or slow release, was observed to be dependent on K(OC) or aqueous solubility over six and seven orders of magnitude, respectively. The same was observed for aqueous diffusivity and sediment fraction organic carbon. The distribution of kinetic rate constant values was approximately log-normal, ranging from 0.1 to 50 d⁻¹ for the fast release (average approximately 5 d⁻¹) and 0.0001 to 0.1 d⁻¹ for the slow release (average approximately 0.03 d⁻¹). The implications of these findings with regard to laboratory studies, theoretical desorption process mechanisms, and water quality modeling needs are presented and discussed.
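A short sketch of the biphasic release form implied by "two first-order rate constants and one fraction," fitted to invented data; the particular functional form shown (fraction released versus time) and the parameter values are assumptions for illustration.

```python
# Hedged sketch of a biphasic (fast/slow) first-order release model: the fraction
# released by time t is F(1 - exp(-kf*t)) + (1-F)(1 - exp(-ks*t)). Data are invented.
import numpy as np
from scipy.optimize import curve_fit

def biphasic_release(t, F, kf, ks):
    return F * (1.0 - np.exp(-kf * t)) + (1.0 - F) * (1.0 - np.exp(-ks * t))

t = np.array([0.1, 0.25, 0.5, 1, 2, 5, 10, 20, 40, 80])          # days
released = biphasic_release(t, 0.4, 5.0, 0.03) + np.random.default_rng(2).normal(0, 0.01, t.size)

(F, kf, ks), _ = curve_fit(biphasic_release, t, released,
                           p0=[0.5, 1.0, 0.01], bounds=([0, 0, 0], [1, 50, 0.1]))
print(f"fast fraction F = {F:.2f}, kf = {kf:.2f} 1/d, ks = {ks:.4f} 1/d")
```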
Saxena, Aditi R; Seely, Ellen W; Rich-Edwards, Janet W; Wilkins-Haug, Louise E; Karumanchi, S Ananth; McElrath, Thomas F
2013-04-04
First trimester Pregnancy Associated Plasma Protein A (PAPP-A) levels, routinely measured for aneuploidy screening, may predict development of preeclampsia. This study tests the hypothesis that first trimester PAPP-A levels correlate with soluble fms-like tyrosine kinase-1 (sFlt-1) levels, an angiogenic marker associated with preeclampsia, throughout pregnancy. sFlt-1 levels were measured longitudinally in 427 women with singleton pregnancies in all three trimesters. First trimester PAPP-A and PAPP-A Multiples of Median (MOM) were measured. Student's T and Wilcoxon tests compared preeclamptic and normal pregnancies. A linear mixed model assessed the relationship between log PAPP-A and serial log sFlt-1 levels. PAPP-A and PAPP-A MOM levels were significantly lower in preeclamptic (n = 19), versus normal pregnancies (p = 0.02). Although mean third trimester sFlt-1 levels were significantly higher in preeclampsia (p = 0.002), first trimester sFlt-1 levels were lower in women who developed preeclampsia, compared with normal pregnancies (p = 0.03). PAPP-A levels correlated significantly with serial sFlt-1 levels. Importantly, low first trimester PAPP-A MOM predicted decreased odds of normal pregnancy (OR 0.2, p = 0.002). Low first trimester PAPP-A levels suggests increased future risk of preeclampsia and correlate with serial sFlt-1 levels throughout pregnancy. Furthermore, low first trimester PAPP-A status significantly predicted decreased odds of normal pregnancy.
The Relationship Between Fusion, Suppression, and Diplopia in Normal and Amblyopic Vision.
Spiegel, Daniel P; Baldwin, Alex S; Hess, Robert F
2016-10-01
Single vision occurs through a combination of fusion and suppression. When neither mechanism takes place, we experience diplopia. Under normal viewing conditions, the perceptual state depends on the spatial scale and interocular disparity. The purpose of this study was to examine the three perceptual states in human participants with normal and amblyopic vision. Participants viewed two dichoptically separated horizontal blurred edges with an opposite tilt (2.35°) and indicated their binocular percept: "one flat edge," "one tilted edge," or "two edges." The edges varied with scale (fine 4 min arc and coarse 32 min arc), disparity, and interocular contrast. We investigated how the binocular interactions vary in amblyopic (visual acuity [VA] > 0.2 logMAR, n = 4) and normal vision (VA ≤ 0 logMAR, n = 4) under interocular variations in stimulus contrast and luminance. In amblyopia, despite the established sensory dominance of the fellow eye, fusion prevails at the coarse scale and small disparities (75%). We also show that increasing the relative contrast to the amblyopic eye enhances the probability of fusion at the fine scale (from 18% to 38%), and leads to a reversal of the sensory dominance at coarse scale. In normal vision we found that interocular luminance imbalances disturbed binocular combination only at the fine scale in a way similar to that seen in amblyopia. Our results build upon the growing evidence that the amblyopic visual system is binocular and further show that the suppressive mechanisms rendering the amblyopic system functionally monocular are scale dependent.
Performance of statistical models to predict mental health and substance abuse cost.
Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K
2006-10-26
Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model among the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best. This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
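The evaluation statistics described (RMSE, mean absolute prediction error, and predictive ratios by decile of predicted cost) are easy to reproduce; the sketch below uses a stand-in square-root OLS model on synthetic skewed costs rather than the authors' exact specifications.

```python
# Hedged sketch of the evaluation statistics described: RMSE, mean absolute prediction
# error, and predictive ratios (predicted/observed) within deciles of predicted cost.
# The fitted model (OLS on square-root cost with a naive squared-back prediction) and
# the synthetic data are stand-ins, not the authors' specification.
import numpy as np

def predictive_ratios(observed, predicted, n_groups=10):
    """Mean predicted / mean observed cost within deciles of predicted cost."""
    order = np.argsort(predicted)
    groups = np.array_split(order, n_groups)
    return np.array([predicted[g].mean() / observed[g].mean() for g in groups])

rng = np.random.default_rng(3)
x = rng.normal(size=(5000, 3))
cost = np.exp(1.0 + x @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 1.0, 5000))   # skewed cost

X = np.column_stack([np.ones(5000), x])
beta = np.linalg.lstsq(X, np.sqrt(cost), rcond=None)[0]     # Square-root Normal model
pred = (X @ beta) ** 2                                      # naive retransformation

resid = cost - pred
rmse = np.sqrt(np.mean(resid ** 2))
mape = np.mean(np.abs(resid))                               # mean absolute prediction error
print(f"RMSE={rmse:.1f}  MAPE={mape:.1f}")
print("predictive ratios by decile:", np.round(predictive_ratios(cost, pred), 2))
```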
Ryan, James; Curran, Catherine E.; Hennessy, Emer; Newell, John; Morris, John C.; Kerin, Michael J.; Dwyer, Roisin M.
2011-01-01
Introduction The presence, relevance and regulation of the Sodium Iodide Symporter (NIS) in human mammary tissue remains poorly understood. This study aimed to quantify relative expression of NIS and putative regulators in human breast tissue, with relationships observed further investigated in vitro. Methods Human breast tissue specimens (malignant n = 75, normal n = 15, fibroadenoma n = 10) were analysed by RQ-PCR targeting NIS, receptors for retinoic acid (RARα, RARβ), oestrogen (ERα), thyroid hormones (THRα, THRβ), and also phosphoinositide-3-kinase (PI3K). Breast cancer cells were treated with Retinoic acid (ATRA), Estradiol and Thyroxine individually and in combination followed by analysis of changes in NIS expression. Results The lowest levels of NIS were detected in normal tissue (Mean(SEM) 0.70(0.12) Log10 Relative Quantity (RQ)) with significantly higher levels observed in fibroadenoma (1.69(0.21) Log10RQ, p<0.005) and malignant breast tissue (1.18(0.07) Log10RQ, p<0.05). Significant positive correlations were observed between human NIS and ERα (r = 0.22, p<0.05) and RARα (r = 0.29, p<0.005), with the strongest relationship observed between NIS and RARβ (r = 0.38, p<0.0001). An inverse relationship between NIS and PI3K expression was also observed (r = −0.21, p<0.05). In vitro, ATRA, Estradiol and Thyroxine individually stimulated significant increases in NIS expression (range 6–16 fold), while ATRA and Thyroxine combined caused the greatest increase (range 16–26 fold). Conclusion Although NIS expression is significantly higher in malignant compared to normal breast tissue, the highest level was detected in fibroadenoma. The data presented supports a role for retinoic acid and estradiol in mammary NIS regulation in vivo, and also highlights potential thyroidal regulation of mammary NIS mediated by thyroid hormones. PMID:21283523
Fractal Dimension Analysis of Transient Visual Evoked Potentials: Optimisation and Applications.
Boon, Mei Ying; Henry, Bruce Ian; Chu, Byoung Sun; Basahi, Nour; Suttle, Catherine May; Luu, Chi; Leung, Harry; Hing, Stephen
2016-01-01
The visual evoked potential (VEP) provides a time series signal response to an external visual stimulus at the location of the visual cortex. The major VEP signal components, peak latency and amplitude, may be affected by disease processes. Additionally, the VEP contains fine detailed and non-periodic structure, of presently unclear relevance to normal function, which may be quantified using the fractal dimension. The purpose of this study is to provide a systematic investigation of the key parameters in the measurement of the fractal dimension of VEPs, to develop an optimal analysis protocol for application. VEP time series were mathematically transformed using delay time, τ, and embedding dimension, m, parameters. The fractal dimension of the transformed data was obtained from a scaling analysis based on straight line fits to the numbers of pairs of points with separation less than r versus log(r) in the transformed space. Optimal τ, m, and scaling analysis were obtained by comparing the consistency of results using different sampling frequencies. The optimised method was then piloted on samples of normal and abnormal VEPs. Consistent fractal dimension estimates were obtained using τ = 4 ms, designating the fractal dimension = D2 of the time series based on embedding dimension m = 7 (for 3606 Hz and 5000 Hz), m = 6 (for 1803 Hz) and m = 5 (for 1000 Hz), and estimating D2 for each embedding dimension as the steepest slope of the linear scaling region in the plot of log(C(r)) vs log(r), provided the scaling region occurred within the middle third of the plot. Piloting revealed that fractal dimensions were higher from the sampled abnormal than normal achromatic VEPs in adults (p = 0.02). Variances of fractal dimension were higher from the abnormal than normal chromatic VEPs in children (p = 0.01). A useful analysis protocol to assess the fractal dimension of transformed VEPs has been developed.
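A compact sketch of the scaling analysis described: delay-embed the series, count pairs closer than r, and take the slope of log C(r) versus log r over a middle scaling region. The synthetic signal, the delay expressed in samples, and the choice of r values are assumptions.

```python
# Hedged sketch of a correlation-dimension (D2) estimate: delay embedding, pair counting
# below radius r, and a straight-line fit to log C(r) vs log r. The synthetic signal and
# the delay of 4 samples (standing in for tau = 4 ms at a 1 kHz sampling rate) are assumptions.
import numpy as np

def correlation_sums(x, delay, m, r_values):
    """Return log(r) and log(C(r)) for a delay embedding of x."""
    n = len(x) - (m - 1) * delay
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(m)])
    dists = np.sqrt(((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(n, k=1)                           # distinct pairs only
    counts = np.array([(dists[iu] < r).mean() for r in r_values])
    return np.log(r_values), np.log(counts)

rng = np.random.default_rng(4)
signal = np.sin(np.linspace(0, 16 * np.pi, 600)) + 0.05 * rng.normal(size=600)

log_r, log_c = correlation_sums(signal, delay=4, m=7, r_values=np.logspace(-0.7, 0.3, 12))
mid = slice(4, 8)                                          # crude "middle third" scaling region
d2 = np.polyfit(log_r[mid], log_c[mid], 1)[0]
print(f"estimated D2 ~ {d2:.2f}")
```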
SU-E-T-664: Radiobiological Modeling of Prophylactic Cranial Irradiation in Mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D; Debeb, B; Woodward, W
Purpose: Prophylactic cranial irradiation (PCI) is a clinical technique used to reduce the incidence of brain metastasis and improve overall survival in select patients with ALL and SCLC, and we have shown the potential of PCI in select breast cancer patients through a mouse model (manuscript in preparation). We developed a computational model using our experimental results to demonstrate the advantage of treating brain micro-metastases early. Methods: MATLAB was used to develop the computational model of brain metastasis and PCI in mice. The number of metastases per mouse and the volume of metastases from four- and eight-week endpoints were fit to normal and log-normal distributions, respectively. Model input parameters were optimized so that model output would match the experimental number of metastases per mouse. A limiting dilution assay was performed to validate the model. The effect of radiation at different time points was computationally evaluated through the endpoints of incidence, number of metastases, and tumor burden. Results: The correlation between experimental number of metastases per mouse and the Gaussian fit was 87% and 66% at the two endpoints. The experimental volumes and the log-normal fit had correlations of 99% and 97%. In the optimized model, the correlation between number of metastases per mouse and the Gaussian fit was 96% and 98%. The log-normal volume fit and the model agree 100%. The model was validated by a limiting dilution assay, where the correlation was 100%. The model demonstrates that cells are very sensitive to radiation at early time points, and delaying treatment introduces a threshold dose at which point the incidence and number of metastases decline. Conclusion: We have developed a computational model of brain metastasis and PCI in mice that is highly correlated to our experimental data. The model shows that early treatment of subclinical disease is highly advantageous.
Contrast Sensitivity Perimetry and Clinical Measures of Glaucomatous Damage
Swanson, William H.; Malinovsky, Victor E.; Dul, Mitchell W.; Malik, Rizwan; Torbit, Julie K.; Sutton, Bradley M.; Horner, Douglas G.
2014-01-01
ABSTRACT Purpose To compare conventional structural and functional measures of glaucomatous damage with a new functional measure—contrast sensitivity perimetry (CSP-2). Methods One eye each was tested for 51 patients with glaucoma and 62 age-similar control subjects using CSP-2, size III 24-2 conventional automated perimetry (CAP), 24-2 frequency-doubling perimetry (FDP), and retinal nerve fiber layer (RNFL) thickness. For superior temporal (ST) and inferior temporal (IT) optic disc sectors, defect depth was computed as amount below mean normal, in log units. Bland-Altman analysis was used to assess agreement on defect depth, using limits of agreement and three indices: intercept, slope, and mean difference. A criterion of p < 0.0014 for significance used Bonferroni correction. Results Contrast sensitivity perimetry-2 and FDP were in agreement for both sectors. Normal variability was lower for CSP-2 than for CAP and FDP (F > 1.69, p < 0.02), and Bland-Altman limits of agreement for patient data were consistent with variability of control subjects (mean difference, −0.01 log units; SD, 0.11 log units). Intercepts for IT indicated that CSP-2 and FDP were below mean normal when CAP was at mean normal (t > 4, p < 0.0005). Slopes indicated that, as sector damage became more severe, CAP defects for IT and ST deepened more rapidly than CSP-2 defects (t > 4.3, p < 0.0005) and RNFL defects for ST deepened more slowly than for CSP, FDP, and CAP. Mean differences indicated that FDP defects for ST and IT were on average deeper than RNFL defects, as were CSP-2 defects for ST (t > 4.9, p < 0.0001). Conclusions Contrast sensitivity perimetry-2 and FDP defects were deeper than CAP defects in optic disc sectors with mild damage and revealed greater residual function in sectors with severe damage. The discordance between different measures of glaucomatous damage can be accounted for by variability in people free of disease. PMID:25259758
NASA Astrophysics Data System (ADS)
Yang, Xiang I. A.; Park, George Ilhwan; Moin, Parviz
2017-10-01
Log-layer mismatch (LLM) refers to a chronic problem found in wall-modeled large-eddy simulation (WMLES) or detached-eddy simulation, where the modeled wall-shear stress deviates from the true one by approximately 15%. Many efforts have been made to resolve this mismatch. The often-used fixes, which are generally ad hoc, include modifying subgrid-scale stress models, adding a stochastic forcing, and moving the LES-wall-model matching location away from the wall. An analysis motivated by the integral wall-model formalism suggests that log-layer mismatch is resolved by the built-in physics-based temporal filtering. In this work we investigate in detail the effects of local filtering on log-layer mismatch. We show that both local temporal filtering and local wall-parallel filtering resolve log-layer mismatch without moving the LES-wall-model matching location away from the wall. Additionally, we look into the momentum balance in the near-wall region to provide an alternative explanation of how LLM occurs, one that does not necessarily rely on the numerical-error argument. While filtering resolves log-layer mismatch, the quality of the wall-shear stress fluctuations predicted by WMLES does not improve with our remedy. The wall-shear stress fluctuations are highly underpredicted due to the implied use of LES filtering. However, good agreement can be found when the WMLES data are compared to the direct numerical simulation data filtered at the corresponding WMLES resolutions.
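A minimal sketch of local temporal filtering of the wall-model input, assuming an equilibrium log-law wall model and a running exponential average with time scale t_filter; the constants, the sample signal, and the fixed-point inversion are illustrative, not the formulation used in the paper.

```python
# Hedged sketch: the LES velocity sampled at the wall-model matching location is
# low-pass filtered in time with a running exponential average before being fed to a
# simple equilibrium (log-law) wall model. All numbers are illustrative assumptions.
import numpy as np

KAPPA, B, NU = 0.41, 5.2, 1.5e-5     # von Karman constant, log-law intercept, viscosity

def log_law_utau(u_les, y_match, u_tau_guess=0.05, iters=20):
    """Invert u/u_tau = (1/kappa) ln(y u_tau / nu) + B by fixed-point iteration."""
    u_tau = u_tau_guess
    for _ in range(iters):
        u_tau = u_les / (np.log(y_match * u_tau / NU) / KAPPA + B)
    return u_tau

def wall_stress_series(u_samples, dt, t_filter, y_match=0.1):
    """Filter the LES input in time, then evaluate the wall model each step."""
    eps = dt / t_filter                        # relaxation factor of the running average
    u_filt, tau_w = u_samples[0], []
    for u in u_samples:
        u_filt = (1.0 - eps) * u_filt + eps * u            # exponential temporal filter
        tau_w.append(log_law_utau(u_filt, y_match) ** 2)   # tau_w / rho = u_tau^2
    return np.array(tau_w)

rng = np.random.default_rng(5)
u = 10.0 + rng.normal(0, 1.5, 500)             # noisy resolved velocity at matching point
print(wall_stress_series(u, dt=1e-3, t_filter=2e-2)[-5:])
```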
Normative Data for a User-friendly Paradigm for Pattern Electroretinogram Recording
Porciatti, Vittorio; Ventura, Lori M.
2009-01-01
Purpose To provide normative data for a user-friendly paradigm for the pattern electroretinogram (PERG) optimized for glaucoma screening (PERGLA). Design Prospective nonrandomized case series. Participants Ninety-three normal subjects ranging in age between 22 and 85 years. Methods A circular black–white grating of 25° visual angle, reversing 16.28 times per second, was presented on a television monitor placed inside a Ganzfeld bowl. The PERG was recorded simultaneously from both eyes with undilated pupils by means of skin cup electrodes taped over the lower eyelids. Reference electrodes were taped on the ipsilateral temples. Electrophysiologic signals were conventionally amplified, filtered, and digitized. Six hundred artifact-free repetitions were averaged. The response component at the reversal frequency was isolated automatically by digital Fourier transforms and was expressed as a deviation from the age-corrected average. The procedure took approximately 4 minutes. Main Outcome Measures Pattern electroretinogram amplitude (μV) and phase (π rad); response variability (coefficient of variation [CV] = standard deviation [SD] / mean × 100) of amplitude and phase of 2 partial averages that build up the PERG waveform; amplitude (μV) of background noise waveform, obtained by multiplying alternate sweeps by +1 and −1; and interocular asymmetry (CV of amplitude and phase of the PERG of the 2 eyes). Results On average, the PERG has a signal-to-noise ratio of more than 13:1. The CVs of intrasession and intersession variabilities in amplitude and phase are lower than 10% and 2%, respectively, and do not depend on the operator. The CV of interocular asymmetries in amplitude and phase are 9.8±8.8% and 1.5±1.4%, respectively. The PERG amplitude and phase decrease with age. Residuals of linear regression lines have normal distribution, with an SD of 0.1 log units for amplitude and 0.019 log units for phase. Age-corrected confidence limits (P<0.05) are defined as ±2 SD of residuals. Conclusions The PERGLA paradigm yields responses as reliable as the best previously reported using standard protocols. The ease of execution and interpretation of results of PERGLA indicate a potential value for objective screening and follow-up of glaucoma. PMID:14711729
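A hedged sketch of the signal-processing steps described: average the sweeps, isolate the component at the reversal frequency with a discrete Fourier transform, and form a noise waveform by multiplying alternate sweeps by +1 and -1. The sampling rate, sweep length, and synthetic data are assumptions.

```python
# Hedged sketch: extracting the PERG component at the reversal frequency from an
# averaged waveform with an FFT (nearest DFT bin; spectral leakage ignored), and a
# noise estimate from alternating-sign averaging. Numbers are synthetic assumptions.
import numpy as np

FS = 1000.0            # Hz, assumed sampling rate
REV_FREQ = 16.28       # reversals per second, as in the protocol
N = 1024               # samples per averaged sweep (assumed)

def component_at(freq, waveform, fs):
    """Amplitude and phase of the DFT component nearest freq."""
    spectrum = np.fft.rfft(waveform)
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - freq))
    amp = 2.0 * np.abs(spectrum[k]) / len(waveform)
    return amp, np.angle(spectrum[k])

t = np.arange(N) / FS
sweeps = 1.0 * np.sin(2 * np.pi * REV_FREQ * t) + np.random.default_rng(6).normal(0, 2.0, (600, N))

signal_avg = sweeps.mean(axis=0)                                         # conventional average
noise_avg = (sweeps * np.where(np.arange(600) % 2 == 0, 1, -1)[:, None]).mean(axis=0)

amp, phase = component_at(REV_FREQ, signal_avg, FS)
noise_amp, _ = component_at(REV_FREQ, noise_avg, FS)
print(f"response ~ {amp:.2f}, phase ~ {phase:.2f} rad, noise ~ {noise_amp:.2f} (arbitrary units)")
```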
Disturbance impacts on understory plant communities of the Colorado Front Range
Paula J. Fornwalt
2009-01-01
Pinus ponderosa - Pseudotsuga menziesii (ponderosa pine - Douglas-fir) forests of the Colorado Front Range have experienced a range of disturbances since they were settled by European-Americans approximately 150 years ago, including settlement-era logging and domestic grazing, and more recently, wildfire. In this dissertation, I...
Warfarin: history, tautomerism and activity
NASA Astrophysics Data System (ADS)
Porter, William R.
2010-06-01
The anticoagulant drug warfarin, normally administered as the racemate, can exist in solution in potentially as many as 40 topologically distinct tautomeric forms. Only 11 of these forms for each enantiomer can be distinguished by selected computational software commonly used to estimate octanol-water partition coefficients and/or ionization constants. The history of studies on warfarin tautomerism is reviewed, along with the implications of tautomerism for its biological properties (activity, protein binding and metabolism) and chemical properties (log P, log D, pKa). Experimental approaches to assessing warfarin tautomerism and computational results for different tautomeric forms are presented.
Investigations Regarding Anesthesia during Hypovolemic Conditions.
1982-09-25
For each level of hemoglobin, the equation was "normalized" to a pH of 7.400 for a BE of zero and a PCO2 of 40.0 torr; Orr et al. ... the shifted BE values. Curve nomogram: using the equations resulting from the above curve-fitting procedure, we calculated the relationship between pH ... model for a given BE (i.e., pH = m_i log PCO2 + b_i). Solve the following set of equations for pH_ind: dX/d(pH_ind) = 0, where X = (pH_1 - pH_ind)^2 ...
Hemodynamic and thermal responses to head and neck cooling in men and women
NASA Technical Reports Server (NTRS)
Ku, Y. T.; Montgomery, L. D.; Webbon, B. W.
1996-01-01
Personal cooling systems are used to alleviate symptoms of multiple sclerosis and to prevent increased core temperature during daily activities. The objective of this study was to determine the operating characteristics and the physiologic changes produced by short term use of one commercially available thermal control system. A Life Support Systems, Inc. Mark VII portable cooling system and a liquid cooling helmet were used to cool the head and neck regions of 12 female and 12 male subjects (25-55 yr) in this study. The healthy subjects, seated in an upright position at normal room temperature (approximately 21 degrees C), were tested for 30 min with the liquid cooling garment operated at its maximum cooling capacity. Electrocardiograms and scalp and intracranial blood flows were recorded periodically during each test sequence. Scalp, right and left ear, and oral temperatures and cooling system parameters were logged every 5 min. Scalp, right and left ear canal, and oral temperatures were all significantly (P <0.05) reduced by 30 min of head and neck cooling. Oral temperatures decreased approximately 0.2-0.6 degrees C after 30 min and continued to decrease further (approximately 0.1-0.2 degrees C) for a period of approximately 10 min after removal of the cooling helmet. Intracranial blood flow decreased significantly (P < 0.05) during the first 10 min of the cooling period. Both right and left ear temperatures in the women were significantly lower than those of the men during the cooling period. These data indicate that head and neck cooling may be used to reduce core temperature to that needed for symptomatic relief of both male and female multiple sclerosis patients. This study quantifies the operating characteristics of one liquid cooling garment as an example of the information needed to compare the efficiency of other garments operated under different test conditions.
Geophysical evaluation of sandstone aquifers in the Reconcavo-Tucano Basin, Bahia -- Brazil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lima, O.A.L. de
1993-11-01
The upper clastic sediments in the Reconcavo-Tucano basin comprise a multilayer aquifer system of Jurassic age. Its groundwater is normally fresh down to depths of more than 1,000 m. Locally, however, there are zones producing high salinity or sulfur geothermal water. Analysis of electrical logs of more than 150 wells enabled the identification of the most typical sedimentary structures and the gross geometries for the sandstone units in selected areas of the basin. Based on this information, the thick sands are interpreted as coalescent point bars and the shales as flood plain deposits of a large fluvial environment. The resistivity logs and core laboratory data are combined to develop empirical equations relating aquifer porosity and permeability to log-derived parameters such as formation factor and cementation exponent. Temperature logs of 15 wells were useful to quantify the water leakage through semiconfining shales. The groundwater quality was inferred from spontaneous potential (SP) log deflections under control of chemical analysis of water samples. An empirical chart is developed that relates the SP-derived water resistivity to the true water resistivity within the formations. The patterns of salinity variation with depth inferred from SP logs were helpful in identifying subsurface flows along major fault zones, where extensive mixing of water is taking place. A total of 49 vertical Schlumberger resistivity soundings aid in defining aquifer structures and in extrapolating the log-derived results. Transition zones between fresh and saline waters have also been detected based on a combination of logging and surface sounding data. Ionic filtering by water leakage across regional shales, local convection and mixing along major faults, and hydrodynamic dispersion away from lateral permeability contrasts are the main mechanisms controlling the observed distributions of salinity and temperature within the basin.
NASA Astrophysics Data System (ADS)
Jiang, Quan; Zhong, Shan; Cui, Jie; Feng, Xia-Ting; Song, Leibo
2016-12-01
We investigated the statistical characteristics and probability distribution of the mechanical parameters of natural rock using triaxial compression tests. Twenty cores of Jinping marble were tested at each of five levels of confining stress (5, 10, 20, 30, and 40 MPa). From these full stress-strain data, we summarized the numerical characteristics and determined the probability distribution form of several important mechanical parameters, including deformational parameters, characteristic strength, characteristic strains, and failure angle. The statistical evidence relating to the mechanical parameters of rock provided new information about the marble's probabilistic distribution characteristics. The normal and log-normal distributions were appropriate for describing random strengths of rock; the coefficients of variation of the peak strengths had no relationship to the confining stress; the only acceptable random distribution for both Young's elastic modulus and Poisson's ratio was the log-normal function; and the cohesive strength had a different probability distribution pattern than the frictional angle. The triaxial tests and statistical analysis also provided experimental evidence for deciding the minimum reliable number of experimental samples and for picking appropriate parameter distributions to use in reliability calculations for rock engineering.
Chauzeix, Jasmine; Laforêt, Marie-Pierre; Deveza, Mélanie; Crowther, Liam; Marcellaud, Elodie; Derouault, Paco; Lia, Anne-Sophie; Boyer, François; Bargues, Nicolas; Olombel, Guillaume; Jaccard, Arnaud; Feuillard, Jean; Gachard, Nathalie; Rizzo, David
2018-05-09
More than 35 years after the Binet classification, there is still a need for simple prognostic markers in chronic lymphocytic leukemia (CLL). Here, we studied the treatment-free survival (TFS) impact of normal serum protein electrophoresis (SPE) at diagnosis. One hundred twelve patients with CLL were analyzed. The main prognostic factors (Binet stage; lymphocytosis; IGHV mutation status; TP53, SF3B1, NOTCH1, and BIRC3 mutations; and cytogenetic abnormalities) were studied. The frequencies of IGHV mutation status, cytogenetic abnormalities, and TP53, SF3B1, NOTCH1, and BIRC3 mutations were not significantly different between normal and abnormal SPE. Normal SPE was associated with Binet stage A, nonprogressive disease for 6 months, lymphocytosis below 30 G/L, and the absence of the IGHV3-21 gene rearrangement which is associated with poor prognosis. The TFS of patients with normal SPE was significantly longer than that of patients with abnormal SPE (log-rank test: P = 0.0015, with 51% untreated patients at 5.6 years and a perfect plateau afterward vs. a median TFS at 2.64 years for abnormal SPE with no plateau). Multivariate analysis using two different Cox models and bootstrapping showed that normal SPE was an independent good prognostic marker for either Binet stage, lymphocytosis, or IGHV mutation status. TFS was further increased when both normal SPE and mutated IGHV were present (log-rank test: P = 0.008, median not reached, plateau at 5.6 years and 66% untreated patients). A comparison with other prognostic markers suggested that normal SPE could reflect slowly advancing CLL disease. Altogether, our results show that a combination of normal SPE and mutated IGHV genes defines a subgroup of patients with CLL who evolve very slowly and who might never need treatment. © 2018 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.
The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes
NASA Technical Reports Server (NTRS)
Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark
2000-01-01
Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log10(L_[OII]/W) ≳ 35 [or radio luminosity log10(L_151/W Hz⁻¹ sr⁻¹) ≳ 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle θ_trans ≈ 53°. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in θ_trans and/or a gradual increase in the fraction of lightly-reddened (0 ≲ A_V ≲ 5) lines-of-sight with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.
NASA Technical Reports Server (NTRS)
Ku, Y. T.; Montgomery, L. D.; Wenzel, K. C.; Webbon, B. W.; Burks, J. S.
1999-01-01
Personal cooling systems are used to alleviate symptoms of multiple sclerosis and to prevent increased core temperature during daily activities. The objective of this study was to determine the thermal and physiologic responses of patients with multiple sclerosis to short-term maximal head and neck cooling. A Life Support Systems, Inc. Mark VII portable cooling system and a liquid cooling helmet were used to cool the head and neck regions of 24 female and 26 male patients with multiple sclerosis in this study. The subjects, seated in an upright position at normal room temperature (approximately 22 degrees C), were cooled for 30 min by the liquid cooling garment, which was operated at its maximum cooling capacity. Oral, right, and left ear temperatures and cooling system parameters were logged manually every 5 min. Forearm, calf, chest, and rectal temperatures, heart rate, and respiration rate were recorded continuously on a U.F.I., Inc. Biolog ambulatory monitor. This protocol was performed during the winter and summer to investigate the seasonal differences in the way patients with multiple sclerosis respond to head and neck cooling. No significant differences were found between the male and female subject groups' mean rectal or oral temperature responses during any phase of the experiment. The mean oral temperature decreased significantly (P < 0.05) for both groups by approximately 0.3 degrees C after 30 min of cooling and continued to decrease further (approximately 0.1-0.2 degrees C) for a period of approximately 15 min after removal of the cooling helmet. The mean rectal temperatures decreased significantly (P < 0.05) in both male and female subjects in the winter studies (approximately 0.2-0.3 degrees C) and for the male subjects during the summer test (approximately 0.2 degrees C). However, the rectal temperature of the female subjects did not change significantly during any phase of the summer test. These data indicate that head and neck cooling may, in general, be used to reduce the oral and body temperatures of both male and female patients with multiple sclerosis by the approximate amount needed for symptomatic relief as shown by other researchers. However, the thermal response of patients with multiple sclerosis may be affected by gender and seasonal factors, which should be considered in the use of liquid cooling therapy.
NASA Astrophysics Data System (ADS)
Keller, M. M.; d'Oliveira, M. N.; Takemura, C. M.; Vitoria, D.; Araujo, L. S.; Morton, D. C.
2012-12-01
Selective logging, the removal of several valuable timber trees per hectare, is an important land use in the Brazilian Amazon and may degrade forests through long term changes in structure, loss of forest carbon and species diversity. Similar to deforestation, the annual area affected by selective logging has declined significantly in the past decade. Nonetheless, this land use affects several thousand km2 per year in Brazil. We studied a 1000 ha area of the Antimary State Forest (FEA) in the State of Acre, Brazil (9.304 °S, 68.281 °W) that has a basal area of 22.5 m2 ha-1 and an above-ground biomass of 231 Mg ha-1. Logging intensity was low, approximately 10 to 15 m3 ha-1. We collected small-footprint airborne lidar data using an Optech ALTM 3100EA over the study area once each in 2010 and 2011. The study area contained both recent and older logging that used both conventional and technologically advanced logging techniques. Lidar return density averaged over 20 returns m-2 for both collection periods with estimated horizontal and vertical precision of 0.30 and 0.15 m. A relative density model comparing returns from 0 to 1 m elevation to returns in the 1-5 m elevation range revealed the pattern of roads and skid trails. These patterns were confirmed by ground-based GPS survey. A GIS model of the road and skid network was built using lidar and ground data. We tested and compared two pattern recognition approaches used to automate logging detection. Both commercial eCognition segmentation and a Frangi filter algorithm identified the road and skid trail network when compared to the GIS model. We report on the effectiveness of these two techniques.
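The relative density idea described above can be sketched very simply: for each grid cell, compare the number of lidar returns within 1 m of the ground to the number between 1 and 5 m; cells whose ratio departs from the surrounding forest flag cleared strips such as roads and skid trails. The sketch below uses invented return data and a simplified one-dimensional cell index; the exact model definition, gridding, and thresholds used in the study may differ.

```python
import numpy as np

def relative_density(heights, cell_ids, n_cells):
    """Per-cell fraction of below-5 m returns that lie within 1 m of the ground.

    heights  : height above ground of each lidar return (m)
    cell_ids : index of the grid cell each return falls in
    """
    low = np.bincount(cell_ids, weights=((heights >= 0) & (heights < 1)).astype(float),
                      minlength=n_cells)
    mid = np.bincount(cell_ids, weights=((heights >= 1) & (heights < 5)).astype(float),
                      minlength=n_cells)
    return low / np.maximum(low + mid, 1.0)        # relative density in [0, 1]

rng = np.random.default_rng(0)
heights = rng.exponential(8.0, 10_000)             # invented return heights above ground
cells = rng.integers(0, 100, heights.size)         # invented 100-cell grid
rd = relative_density(heights, cells, 100)
print(rd.min(), rd.max())                          # anomalous cells would flag roads / skid trails
```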
Schacht, Veronika J; Grant, Sharon C; Escher, Beate I; Hawker, Darryl W; Gaus, Caroline
2016-06-01
Partitioning of super-hydrophobic organic contaminants (SHOCs) to dissolved or colloidal materials such as surfactants can alter their behaviour by enhancing apparent aqueous solubility. Relevant partition constants are, however, challenging to quantify with reasonable accuracy. Partition constants to colloidal surfactants can be measured by introducing a polymer (PDMS) as third phase with known PDMS-water partition constant in combination with the mass balance approach. We quantified partition constants of PCBs and PCDDs (log KOW 5.8-8.3) between water and sodium dodecyl sulphate monomers (KMO) and micelles (KMI). A refined, recently introduced swelling-based polymer loading technique allowed highly precise (4.5-10% RSD) and fast (<24 h) loading of SHOCs into PDMS, and due to the miniaturisation of batch systems equilibrium was reached in <5 days for KMI and <3 weeks for KMO. SHOC losses to experimental surfaces were substantial (8-26%) in monomer solutions, but had a low impact on KMO (0.10-0.16 log units). Log KMO for PCDDs (4.0-5.2) were approximately 2.6 log units lower than respective log KMI, which ranged from 5.2 to 7.0 for PCDDs and 6.6-7.5 for PCBs. The linear relationship between log KMI and log KOW was consistent with more polar and moderately hydrophobic compounds. Apparent solubility increased with increasing hydrophobicity and was highest in micelle solutions. However, this solubility enhancement was also considerable in monomer solutions, up to 200 times for OCDD. Given the pervasive presence of surfactant monomers in typical field scenarios, these data suggest that low surfactant concentrations may be effective long-term facilitators for subsurface transport of SHOCs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Keskinen, Lindsey A; Burke, Angela; Annous, Bassam A
2009-06-30
This study compared the efficacy of chlorine (20-200 ppm), acidic electrolyzed water (50 ppm chlorine, pH 2.6), acidified sodium chlorite (20-200 ppm chlorite ion concentration, Sanova), and aqueous chlorine dioxide (20-200 ppm chlorite ion concentration, TriNova) washes in reducing populations of Escherichia coli O157:H7 on artificially inoculated lettuce. Fresh-cut leaves of Romaine or Iceberg lettuce were inoculated by immersion in water containing E. coli O157:H7 (8 log CFU/ml) for 5 min and dried in a salad spinner. Leaves (25 g) were then washed for 2 min, immediately or following 24 h of storage at 4 degrees C. The washing treatments containing chlorite ion concentrations of 100 and 200 ppm were the most effective against E. coli O157:H7 populations on Iceberg lettuce, with log reductions as high as 1.25 log CFU/g and 1.05 log CFU/g for TriNova and Sanova wash treatments, respectively. All other wash treatments resulted in population reductions of less than 1 log CFU/g. Chlorine (200 ppm), TriNova, Sanova, and acidic electrolyzed water were all equally effective against E. coli O157:H7 on Romaine, with log reductions of approximately 1 log CFU/g. The 20 ppm chlorine wash was as effective as the deionized water wash in reducing populations of E. coli O157:H7 on Romaine and Iceberg lettuce. Scanning electron microscopy indicated that E. coli O157:H7 that was incorporated into biofilms or located in damaged lettuce tissue remained on the lettuce leaf, while individual cells on undamaged leaf surfaces were more likely to be washed away.
Analyzing repeated measures semi-continuous data, with application to an alcohol dependence study.
Liu, Lei; Strawderman, Robert L; Johnson, Bankole A; O'Quigley, John M
2016-02-01
Two-part random effects models (Olsen and Schafer,(1) Tooze et al.(2)) have been applied to repeated measures of semi-continuous data, characterized by a mixture of a substantial proportion of zero values and a skewed distribution of positive values. In the original formulation of this model, the natural logarithm of the positive values is assumed to follow a normal distribution with a constant variance parameter. In this article, we review and consider three extensions of this model, allowing the positive values to follow (a) a generalized gamma distribution, (b) a log-skew-normal distribution, and (c) a normal distribution after the Box-Cox transformation. We allow for the possibility of heteroscedasticity. Maximum likelihood estimation is shown to be conveniently implemented in SAS Proc NLMIXED. The performance of the methods is compared through applications to daily drinking records in a secondary data analysis from a randomized controlled trial of topiramate for alcohol dependence treatment. We find that all three models provide a significantly better fit than the log-normal model, and there exists strong evidence for heteroscedasticity. We also compare the three models by the likelihood ratio tests for non-nested hypotheses (Vuong(3)). The results suggest that the generalized gamma distribution provides the best fit, though no statistically significant differences are found in pairwise model comparisons. © The Author(s) 2012.
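As a rough illustration of the two-part formulation reviewed above, the sketch below writes down the likelihood of a simple two-part model without random effects: a Bernoulli part for zero versus positive values and a log-normal part for the positive amounts. The random effects, heteroscedasticity, and the generalized gamma and Box-Cox variants discussed in the article are omitted, and all data and parameter names are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def two_part_negloglik(params, y):
    """Negative log-likelihood of a simple two-part model:
    P(y = 0) = 1 - p, and y > 0 ~ log-normal(mu, sigma)."""
    p, mu, log_sigma = params
    sigma = np.exp(log_sigma)                      # keep sigma positive
    y = np.asarray(y, dtype=float)
    pos = y > 0
    ll = np.sum(~pos) * np.log(1.0 - p) + np.sum(pos) * np.log(p)
    ll += np.sum(stats.norm.logpdf(np.log(y[pos]), mu, sigma) - np.log(y[pos]))
    return -ll

# Illustrative data: a substantial proportion of zeros plus right-skewed positive values
rng = np.random.default_rng(0)
y = np.where(rng.random(500) < 0.4, 0.0, rng.lognormal(1.0, 0.8, 500))

fit = minimize(two_part_negloglik, x0=[0.5, 0.0, 0.0], args=(y,),
               bounds=[(1e-6, 1 - 1e-6), (None, None), (None, None)])
print(fit.x)  # estimated [p, mu, log(sigma)]
```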
Akhter, Gulraiz; Farid, Asim; Ahmad, Zulfiqar
2012-01-01
Velocity and density measured in a well are crucial for synthetic seismic generation which is, in turn, a key to interpreting real seismic amplitude in terms of lithology, porosity and fluid content. Investigations made in the water wells usually consist of spontaneous potential, resistivity long and short normal, point resistivity and gamma ray logs. The sonic logs are not available because these are usually run in the wells drilled for hydrocarbons. To generate the synthetic seismograms, sonic and density logs are required, which are useful to precisely mark the lithology contacts and formation tops. An attempt has been made to interpret the subsurface soil of the aquifer system by means of resistivity to seismic inversion. For this purpose, resistivity logs and surface resistivity sounding were used and the resistivity logs were converted to sonic logs whereas surface resistivity sounding data transformed into seismic curves. The converted sonic logs and the surface seismic curves were then used to generate synthetic seismograms. With the utilization of these synthetic seismograms, pseudo-seismic sections have been developed. Subsurface lithologies encountered in wells exhibit different velocities and densities. The reflection patterns were marked by using amplitude standout, character and coherence. These pseudo-seismic sections were later tied to well synthetics and lithologs. In this way, a lithology section was created for the alluvial fill. The cross-section suggested that the eastern portion of the studied area mainly consisted of sandy fill and the western portion constituted clayey part. This can be attributed to the depositional environment by the Indus and the Kabul Rivers.
Removal of micro-organisms in a small-scale hydroponics wastewater treatment system.
Ottoson, J; Norström, A; Dalhammar, G
2005-01-01
To measure the microbial removal capacity of a small-scale hydroponics wastewater treatment plant. Paired samples were taken from untreated, partly-treated and treated wastewater and analysed for faecal microbial indicators, i.e. coliforms, Escherichia coli, enterococci, Clostridium perfringens spores and somatic coliphages, by culture based methods. Escherichia coli was never detected in effluent water after >5.8-log removal. Enterococci, coliforms, spores and coliphages were removed by 4.5, 4.1, 2.3 and 2.5 log respectively. Most of the removal (60-87%) took place in the latter part of the system because of settling, normal inactivation (retention time 12.7 d) and sand filtration. Time-dependent log-linear removal was shown for spores (k = -0.17 log d(-1), r(2) = 0.99). Hydroponics wastewater treatment removed micro-organisms satisfactorily. Investigations on the microbial removal capacity of hydroponics have only been performed for bacterial indicators. In this study it has been shown that virus and (oo)cyst process indicators were removed and that hydroponics can be an alternative to conventional wastewater treatment.
Separate-channel analysis of two-channel microarrays: recovering inter-spot information.
Smyth, Gordon K; Altman, Naomi S
2013-05-26
Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intraspot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
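The M- and A-values referred to above are the standard two-colour summaries: for red and green channel intensities R and G at a spot, M = log2(R/G) and A = (log2 R + log2 G)/2. A minimal sketch of this transformation (illustrative intensity values; background correction and normalization are omitted):

```python
import numpy as np

def ma_values(R, G):
    """Return M (log-ratio) and A (average log-expression) for each spot."""
    R = np.asarray(R, dtype=float)
    G = np.asarray(G, dtype=float)
    M = np.log2(R / G)
    A = 0.5 * (np.log2(R) + np.log2(G))
    return M, A

M, A = ma_values([1200.0, 800.0, 150.0], [600.0, 820.0, 140.0])
print(M)  # per-spot log2 fold changes
print(A)  # per-spot average log2 intensities
```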
O'Boyle, Cathy; Chen, Sean I; Little, Julie-Anne
2017-04-01
Clinically, picture acuity tests are thought to overestimate visual acuity (VA) compared with letter tests, but this has not been systematically investigated in children with amblyopia. This study compared VA measurements with the LogMAR Crowded Kay Picture test to the LogMAR Crowded Keeler Letter acuity test in a group of young children with amblyopia. 58 children (34 male) with amblyopia (22 anisometropic, 18 strabismic and 18 with both strabismic/anisometropic amblyopia) aged 4-6 years (mean=68.7, range=48-83 months) underwent VA measurements. VA chart testing order was randomised, but the amblyopic eye was tested before the fellow eye. All participants wore up-to-date refractive correction. The Kay Picture test significantly overestimated VA by 0.098 logMAR (95% limits of agreement (LOA), 0.13) in the amblyopic eye and 0.088 logMAR (95% LOA, 0.13) in the fellow eye, respectively (p<0.001). No interactions were found from occlusion therapy, refractive correction or type of amblyopia on VA results (p>0.23). For both the amblyopic and fellow eyes, Bland-Altman plots demonstrated a systematic and predictable difference between Kay Picture and Keeler Letter charts across the range of acuities tested (Keeler acuity: amblyopic eye 0.75 to -0.05 logMAR; fellow eye 0.45 to -0.15 logMAR). Linear regression analysis (p<0.00001) and also slope values close to one (amblyopic 0.98, fellow 0.86) demonstrate that there is no proportional bias. The Kay Picture test consistently overestimated VA by approximately 0.10 logMAR when compared with the Keeler Letter test in young children with amblyopia. Due to the predictable difference found between both crowded logMAR acuity tests, it is reasonable to adjust Kay Picture acuity thresholds by +0.10 logMAR to compute expected Keeler Letter acuity scores. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Chen, Bo-Ching; Lai, Hung-Yu; Juang, Kai-Wei
2012-06-01
To better understand the ability of switchgrass (Panicum virgatum L.), a perennial grass often relegated to marginal agricultural areas with minimal inputs, to remove cadmium, chromium, and zinc by phytoextraction from contaminated sites, the relationship between plant metal content and biomass yield is expressed in different models to predict the amount of metals switchgrass can extract. These models are reliable in assessing the use of switchgrass for phytoremediation of heavy-metal-contaminated sites. In the present study, linear and exponential decay models are more suitable for presenting the relationship between plant cadmium and dry weight. The maximum extractions of cadmium using switchgrass, as predicted by the linear and exponential decay models, approached 40 and 34 μg pot(-1), respectively. The log normal model was superior in predicting the relationship between plant chromium and dry weight. The predicted maximum extraction of chromium by switchgrass was about 56 μg pot(-1). In addition, the exponential decay and log normal models were better than the linear model in predicting the relationship between plant zinc and dry weight. The maximum extractions of zinc by switchgrass, as predicted by the exponential decay and log normal models, were about 358 and 254 μg pot(-1), respectively. To meet the maximum removal of Cd, Cr, and Zn, one can adopt the optimal timing of harvest as plant Cd, Cr, and Zn approach 450 and 526 mg kg(-1), 266 mg kg(-1), and 3022 and 5000 mg kg(-1), respectively. Due to the well-known agronomic characteristics of cultivation and the high biomass production of switchgrass, it is practicable to use switchgrass for the phytoextraction of heavy metals in situ. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Faruk, Alfensi
2018-03-01
Survival analysis is a branch of statistics, which is focussed on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular model for analyzing the effects of several covariates on the survival time. However, the assumption of constant hazards in the PH model is not always satisfied by the data. The violation of the PH assumption leads to the misinterpretation of the estimation results and decreases the power of the related statistical tests. On the other hand, the accelerated failure time (AFT) models do not assume constant hazards in the survival data as in the PH model. The AFT models, moreover, can be used as an alternative to the PH model if the constant hazards assumption is violated. The objective of this research was to compare the performance of the PH model and the AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. In this work, the discussion was limited to three AFT models which were based on Weibull, exponential, and log-normal distribution. The analysis by using graphical approach and a statistical test showed that the non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate model among the other considered models. Results of the best fitted model (log-normal AFT model) showed that the covariates such as women’s educational level, husband’s educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among factors affecting the FBI in Indonesia.
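A minimal sketch of the kind of AIC-based comparison described above, assuming the Python `lifelines` package and a synthetic data frame with a duration column, an event indicator, and two covariates (the study itself analysed Indonesian FBI data and also fitted an exponential AFT model, which is omitted here):

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter, LogNormalAFTFitter

rng = np.random.default_rng(1)
n = 300
education = rng.integers(0, 16, n)
urban = rng.integers(0, 2, n)
# Log-normal event times whose scale depends on the covariates (illustrative)
t = np.exp(3.0 + 0.03 * education + 0.2 * urban + 0.5 * rng.standard_normal(n))
censor = rng.uniform(10, 120, n)
df = pd.DataFrame({
    "duration": np.minimum(t, censor),
    "event": (t <= censor).astype(int),
    "education_years": education,
    "urban": urban,
})

for name, model in [("Weibull AFT", WeibullAFTFitter()),
                    ("Log-normal AFT", LogNormalAFTFitter())]:
    model.fit(df, duration_col="duration", event_col="event")
    print(name, "AIC =", round(model.AIC_, 1))   # smaller AIC indicates the better fit
```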
Alternate methods for FAAT S-curve generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufman, A.M.
The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log normal distributions for all random variables. The S-curves themselves are log normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log normal assumption of ARES and an unsatisfactory work-around solution is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this work-around. These errors are at least several dB-W/cm{sup 2} at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculational methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curve offsets from the mode difference of stress and strength.
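The core of an S-curve calculation is a stress-strength interference problem: the probability of effect is P(stress > strength). When stress and strength are independent log-normal random variables, this reduces to a normal CDF in the difference of the log-medians. The sketch below illustrates only that generic relation with invented parameter values; it is not the ARES code or the FAAT work-around discussed in the paper.

```python
import numpy as np
from scipy.stats import norm

def prob_effect(mu_stress, sig_stress, mu_strength, sig_strength):
    """P(stress > strength) for independent log-normal stress and strength;
    mu/sig are the mean and standard deviation of the natural-log variables."""
    z = (mu_stress - mu_strength) / np.hypot(sig_stress, sig_strength)
    return norm.cdf(z)

# S-curve: sweep the stress median (e.g., incident field strength) and evaluate P(effect)
field = np.linspace(0.1, 100.0, 200)                 # hypothetical field strength values
p = prob_effect(np.log(field), 0.4, np.log(30.0), 0.6)
print(p[::50])   # rises from ~0 to ~1 as the field passes the strength median (30, invented)
```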
Bias and Variance Approximations for Estimators of Extreme Quantiles
1988-11-01
2012-08-01
...implication of the compactness of the Hessian is that, for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix. This in turn enables fast solution of an appropriately... ...probability distribution is given by the inverse of the Hessian of the negative log likelihood function. For Gaussian data noise and model error, this...
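The statements above can be illustrated concretely for a linear-Gaussian problem: the posterior covariance is the inverse of the Hessian of the negative log-posterior, and when the data are few or noisy the data-misfit part of that Hessian has low numerical rank and can be replaced by a truncated eigendecomposition added to the prior precision. A minimal sketch with invented dimensions (not the fast solvers alluded to in the fragment):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, noise = 200, 30, 0.1          # parameters, observations, data-noise std (invented)
J = rng.standard_normal((m, n))     # linearized forward operator
prior_prec = np.eye(n)              # identity prior precision for simplicity

# Hessian of the negative log-posterior = data misfit term + prior precision
H = J.T @ J / noise**2 + prior_prec

# Low-rank approximation of the data-misfit term: keep the dominant eigenpairs
evals, evecs = np.linalg.eigh(J.T @ J / noise**2)
k = m                                # rank = number of observations, so essentially exact here
Vk, Lk = evecs[:, -k:], evals[-k:]
H_lowrank = Vk @ np.diag(Lk) @ Vk.T + prior_prec

cov_exact = np.linalg.inv(H)         # posterior covariance = inverse Hessian
cov_lr = np.linalg.inv(H_lowrank)
print(np.max(np.abs(cov_exact - cov_lr)))   # small: low rank + prior captures the posterior
```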
Pereira, R. V.; Bicalho, M. L.; Machado, V. S.; Lima, S.; Teixeira, A. G.; Warnick, L. D.; Bicalho, R. C.
2015-01-01
Raw milk and colostrum can harbor dangerous micro-organisms that can pose serious health risks for animals and humans. According to the USDA, more than 58% of calves in the United States are fed unpasteurized milk. The aim of this study was to evaluate the effect of UV light on reduction of bacteria in milk and colostrum, and on colostrum IgG. A pilot-scale UV light continuous (UVC) flow-through unit (45 J/cm2) was used to treat milk and colostrum. Colostrum and sterile whole milk were inoculated with Listeria innocua, Mycobacterium smegmatis, Salmonella serovar Typhimurium, Escherichia coli, Staphylococcus aureus, Streptococcus agalactiae, and Acinetobacter baumannii before being treated with UVC. During UVC treatment, samples were collected at 5 time points and bacteria were enumerated using selective media. The effect of UVC on IgG was evaluated using raw colostrum from a nearby dairy farm without the addition of bacteria. For each colostrum batch, samples were collected at several different time points and IgG was measured using ELISA. The UVC treatment of milk resulted in a significant final count (log cfu/mL) reduction of Listeria monocytogenes (3.2 ± 0.3 log cfu/mL reduction), Salmonella spp. (3.7 ± 0.2 log cfu/mL reduction), Escherichia coli (2.8 ± 0.2 log cfu/mL reduction), Staph. aureus (3.4 ± 0.3 log cfu/mL reduction), Streptococcus spp. (3.4 ± 0.4 log cfu/mL reduction), and A. baumannii (2.8 ± 0.2 log cfu/mL reduction). The UVC treatment of milk did not result in a significant final count (log cfu/mL) reduction for M. smegmatis (1.8 ± 0.5 log cfu/mL reduction). The UVC treatment of colostrum was significantly associated with a final reduction of bacterial count (log cfu/mL) of Listeria spp. (1.4 ± 0.3 log cfu/mL reduction), Salmonella spp. (1.0 ± 0.2 log cfu/mL reduction), and Acinetobacter spp. (1.1 ± 0.3 log cfu/mL reduction), but not of E. coli (0.5 ± 0.3 log cfu/mL reduction), Strep. agalactiae (0.8 ± 0.2 log cfu/mL reduction), and Staph. aureus (0.4 ± 0.2 log cfu/mL reduction). The UVC treatment of colostrum significantly decreased the IgG concentration, with an observed final mean IgG reduction of approximately 50%. Development of new methods to reduce bacterial contaminants in colostrum must take into consideration the barriers imposed by its opacity and organic components, and account for the incidental damage to IgG caused by manipulating colostrum. PMID:24582452
Flow-covariate prediction of stream pesticide concentrations.
Mosquin, Paul L; Aldworth, Jeremy; Chen, Wenlin
2018-01-01
Potential peak functions (e.g., maximum rolling averages over a given duration) of annual pesticide concentrations in the aquatic environment are important exposure parameters (or target quantities) for ecological risk assessments. These target quantities require accurate concentration estimates on nonsampled days in a monitoring program. We examined stream flow as a covariate via universal kriging to improve predictions of maximum m-day (m = 1, 7, 14, 30, 60) rolling averages and the 95th percentiles of atrazine concentration in streams where data were collected every 7 or 14 d. The universal kriging predictions were evaluated against the target quantities calculated directly from the daily (or near daily) measured atrazine concentration at 32 sites (89 site-yr) as part of the Atrazine Ecological Monitoring Program in the US corn belt region (2008-2013) and 4 sites (62 site-yr) in Ohio by the National Center for Water Quality Research (1993-2008). Because stream flow data are strongly skewed to the right, 3 transformations of the flow covariate were considered: log transformation, short-term flow anomaly, and normalized Box-Cox transformation. The normalized Box-Cox transformation resulted in predictions of the target quantities that were comparable to those obtained from log-linear interpolation (i.e., linear interpolation on the log scale) for 7-d sampling. However, the predictions appeared to be negatively affected by variability in regression coefficient estimates across different sample realizations of the concentration time series. Therefore, revised models incorporating seasonal covariates and partially or fully constrained regression parameters were investigated, and they were found to provide much improved predictions in comparison with those from log-linear interpolation for all rolling average measures. Environ Toxicol Chem 2018;37:260-273. © 2017 SETAC. © 2017 SETAC.
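The flow-covariate preparation step can be sketched with scipy: a Box-Cox transformation of daily flow followed by standardization, yielding a covariate of the kind used in the universal kriging models above. This shows only the transformation itself with invented flow values; the kriging model, the constrained regression coefficients, and the seasonal covariates of the revised models are not shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
flow = rng.lognormal(mean=2.0, sigma=1.0, size=365)   # daily streamflow, right-skewed (invented)

transformed, lam = stats.boxcox(flow)                 # Box-Cox transform; lambda chosen by ML
normalized = (transformed - transformed.mean()) / transformed.std()

print("lambda =", round(lam, 3))
print("skewness before/after:", round(stats.skew(flow), 2), round(stats.skew(normalized), 2))
```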
Geophysical logs for selected wells in the Picher Field, northeast Oklahoma and southeast Kansas
Christenson, Scott C.; Thomas, Tom B.; Overton, Myles D.; Goemaat, Robert L.; Havens, John S.
1991-01-01
The Roubidoux aquifer in northeastern Oklahoma is used extensively as a source of water for public supplies, commerce, industry, and rural water districts. The Roubidoux aquifer may be subject to contamination from abandoned lead and zinc mines of the Picher field. Water in flooded underground mines contains large concentrations of iron, zinc, cadmium, and lead. The contaminated water may migrate from the mines to the Roubidoux aquifer through abandoned water wells in the Picher field. In late 1984, the Oklahoma Water Resources Board began to locate abandoned wells that might be serving as conduits for the migration of contaminants from the abandoned mines. These wells were cleared of debris and plugged. A total of 66 wells had been located, cleared, and plugged by July 1985. In cooperation with the Oklahoma Water Resources Board, the U.S. Geological Survey took advantage of the opportunity to obtain geophysical data in the study area and provide the Oklahoma Water Resources Board with data that might be useful during the well-plugging operation. Geophysical logs obtained by the U.S. Geological Survey are presented in this report. The geophysical logs include hole diameter, normal, single-point resistance, fluid resistivity, natural-gamma, gamma-gamma, and neutron logs. Depths logged range from 145 to 1,344 feet.
Including operational data in QMRA model: development and impact of model inputs.
Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle
2009-03-01
A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations > detection limit (DL), and a uniform distribution for concentrations < DL. The selection of process performance distributions for modelling the performance of treatment (filtration and ozonation) influences the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
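A minimal Monte Carlo sketch of the QMRA chain described above: source-water concentrations drawn from a mixed distribution (log-normal above the detection limit, uniform below it), a treatment step expressed as a log10 removal credit, and an exponential dose-response model. All numbers (detection limit, removal credit, dose-response parameter, consumption volume) are invented placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
DL = 0.1                       # detection limit (oo)cysts/L, illustrative
frac_below_DL = 0.3

# Mixed source-water distribution: uniform below DL, log-normal above it
below = rng.random(n) < frac_below_DL
conc = np.where(below,
                rng.uniform(0.0, DL, n),
                rng.lognormal(mean=np.log(1.0), sigma=1.0, size=n))

log_removal = rng.normal(3.0, 0.5, n)        # treatment performance (log10 units), illustrative
treated = conc * 10.0 ** (-log_removal)

dose = treated * 1.0                         # 1 L/day of unboiled tap water, illustrative
r = 0.004                                    # exponential dose-response parameter (placeholder)
p_inf_daily = 1.0 - np.exp(-r * dose)
p_inf_annual = 1.0 - (1.0 - p_inf_daily) ** 365

print("mean annual risk of infection:", p_inf_annual.mean())
```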
Response Strength in Extreme Multiple Schedules
McLean, Anthony P; Grace, Randolph C; Nevin, John A
2012-01-01
Four pigeons were trained in a series of two-component multiple schedules. Reinforcers were scheduled with random-interval schedules. The ratio of arranged reinforcer rates in the two components was varied over 4 log units, a much wider range than previously studied. When performance appeared stable, prefeeding tests were conducted to assess resistance to change. Contrary to the generalized matching law, logarithms of response ratios in the two components were not a linear function of log reinforcer ratios, implying a failure of parameter invariance. Over a 2 log unit range, the function appeared linear and indicated undermatching, but in conditions with more extreme reinforcer ratios, approximate matching was observed. A model suggested by McLean (1991), originally for local contrast, predicts these changes in sensitivity to reinforcer ratios somewhat better than models by Herrnstein (1970) and by Williams and Wixted (1986). Prefeeding tests of resistance to change were conducted at each reinforcer ratio, and relative resistance to change was also a nonlinear function of log reinforcer ratios, again contrary to conclusions from previous work. Instead, the function suggests that resistance to change in a component may be determined partly by the rate of reinforcement and partly by the ratio of reinforcers to responses. PMID:22287804
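The generalized matching law referred to above predicts a straight line when log response ratios are plotted against log reinforcer ratios, log(B1/B2) = a·log(R1/R2) + log c, with slope a < 1 indicating undermatching and log c a bias term. A minimal sketch that fits sensitivity and bias by least squares (the data values are invented, not taken from the study):

```python
import numpy as np

# Illustrative log10 reinforcer ratios and log10 response ratios for one subject
log_rft = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
log_resp = np.array([-1.5, -0.8, 0.05, 0.75, 1.6])

# Fit log(B1/B2) = a * log(R1/R2) + log c
a, log_c = np.polyfit(log_rft, log_resp, 1)
print("sensitivity a =", round(a, 2), "(a < 1 indicates undermatching)")
print("bias log c   =", round(log_c, 2))
```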
NASA Technical Reports Server (NTRS)
Stern, Robert A.; Lemen, James R.; Schmitt, Jurgen H. M. M.; Pye, John P.
1995-01-01
We report results from the first extreme ultraviolet spectrum of the prototypical eclipsing binary Algol (beta Per), obtained with the spectrometers on the Extreme Ultraviolet Explorer (EUVE). The Algol spectrum in the 80-350 A range is dominated by emission lines of Fe XVI-XXIV, and the He II 304 A line. The Fe emission is characteristic of high-temperature plasma at temperatures up to at least log T approximately 7.3 K. We have successfully modeled the observed quiescent spectrum using a continuous emission measure distribution with the bulk of the emitting material at log T greater than 6.5. We are able to adequately fit both the coronal lines and continuum data with a cosmic abundance plasma, but only if Algol's quiescent corona is dominated by material at log T greater than 7.5, which is physically ruled out by prior X-ray observations of the quiescent Algol spectrum. Since the coronal (Fe/H) abundance is the principal determinant of the line-to-continuum ratio in the EUV, allowing the abundance to be a free parameter results in models with a range of best-fit abundances approximately = 15%-40% of solar photospheric (Fe/H). Since Algol's photospheric (Fe/H) appears to be near-solar, the anomalous EUV line-to-continuum ratio could either be the result of element segregation in the coronal formation process, or other, less likely mechanisms that may enhance the continuum with respect to the lines.
An Integrable Approximation for the Fermi Pasta Ulam Lattice
NASA Astrophysics Data System (ADS)
Rink, Bob
This contribution presents a review of results obtained from computations of approximate equations of motion for the Fermi-Pasta-Ulam lattice. These approximate equations are obtained as a finite-dimensional Birkhoff normal form. It turns out that in many cases, the Birkhoff normal form is suitable for application of the KAM theorem. In particular, this proves Nishida's 1971 conjecture stating that almost all low-energetic motions of the anharmonic Fermi-Pasta-Ulam lattice with fixed endpoints are quasi-periodic. The proof is based on the formal Birkhoff normal form computations of Nishida, the KAM theorem and discrete symmetry considerations.
Earthquake models using rate and state friction and fast multipoles
NASA Astrophysics Data System (ADS)
Tullis, T.
2003-04-01
The most realistic current earthquake models employ laboratory-derived non-linear constitutive laws. These are the rate and state friction laws having both a non-linear viscous or direct effect and an evolution effect in which frictional resistance depends on time of stationary contact and has a memory of past slip velocity that fades with slip. The frictional resistance depends on the log of the slip velocity as well as the log of stationary hold time, and the fading memory involves an approximately exponential decay with slip. Due to the nonlinearity of these laws, analytical earthquake models are not attainable and numerical models are needed. The situation is even more difficult if true dynamic models are sought that deal with inertial forces and slip velocities on the order of 1 m/s as are observed during dynamic earthquake slip. Additional difficulties that exist if the dynamic slip phase of earthquakes is modeled arise from two sources. First, many physical processes might operate during dynamic slip, but they are only poorly understood, the relative importance of the processes is unknown, and the processes are even more nonlinear than those described by the current rate and state laws. Constitutive laws describing such behaviors are still being developed. Second, treatment of inertial forces and the influence that dynamic stresses from elastic waves may have on slip on the fault requires keeping track of the history of slip on remote parts of the fault as far into the past as it takes waves to travel from there. This places even more stringent requirements on computer time. Challenges for numerical modeling of complete earthquake cycles are that both time steps and mesh sizes must be small. Time steps must be milliseconds during dynamic slip, and yet models must represent earthquake cycles 100 years or more in length; methods using adaptive step sizes are essential. Element dimensions need to be on the order of meters, both to approximate continuum behavior adequately and to model microseismicity as well as large earthquakes. In order to model significant-sized earthquakes, this requires millions of elements. Modeling methods like the boundary element method that involve Green's functions normally require computation times that increase with the number N of elements squared, so using large N becomes impossible. We have adapted the Fast Multipole method to this problem, in which the influences of sufficiently remote elements are grouped together and the elements are indexed such that the computations are more efficient when run on parallel computers. Compute time varies with N log N rather than N squared. Computer programs are available that use this approach (http://www.servogrid.org/slide/GEM/PARK). Whether the multipole approach can be adapted to dynamic modeling is unclear.
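One common textbook form of the laboratory-derived law described above is the Dieterich ("aging-law") rate and state formulation, with a direct effect proportional to the log of slip velocity and a state variable that grows with stationary hold time and decays with slip. The sketch below integrates the frictional response to a velocity step using illustrative parameter values; it is a generic form, not the specific constitutive law or code used in the models discussed in the abstract.

```python
import numpy as np

# Dieterich (aging-law) rate and state friction:
#   mu      = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)
#   dtheta  = 1 - V*theta/Dc
mu0, a, b, Dc, V0 = 0.6, 0.010, 0.015, 1e-5, 1e-6   # illustrative values (SI units)

def simulate(velocities, dt=1e-3, steps_per_stage=20000):
    theta = Dc / velocities[0]                      # start at steady state
    t_hist, mu_hist, t = [], [], 0.0
    for V in velocities:
        for _ in range(steps_per_stage):
            theta += dt * (1.0 - V * theta / Dc)    # state evolution (aging law)
            mu = mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)
            t += dt
            t_hist.append(t)
            mu_hist.append(mu)
    return np.array(t_hist), np.array(mu_hist)

# Velocity step from 1 to 10 micron/s: direct jump of a*ln(10), then evolution by b*ln(10)
t, mu = simulate([1e-6, 1e-5])
print(mu[0], mu.max(), mu[-1])   # steady friction, peak after the step, new (lower) steady value
```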
PERFLUORINATED COMPOUNDS IN ARCHIVED HOUSE-DUST SAMPLES
Archived house-dust samples were analyzed for 13 perfluorinated compounds (PFCs). Results show that PFCs are found in house-dust samples, and the data are log-normally distributed. PFOS/PFOA were present in 94.6% and 96.4% of the samples respectively. Concentrations ranged fro...
NASA Astrophysics Data System (ADS)
Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.
2014-09-01
Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to predict future flood magnitudes for a given magnitude and frequency of extreme rainfall events. This study analyses the application of rainfall partial duration series (PDS) in the fast-growing city of Madinah, located in the western part of Saudi Arabia. Different statistical distributions were applied (i.e. Normal, Log Normal, Extreme Value type I, Generalized Extreme Value, Pearson Type III, Log Pearson Type III) and their distribution parameters were estimated using the L-moments method. Also, different selection criteria models were applied, e.g. Akaike Information Criterion (AIC), Corrected Akaike Information Criterion (AICc), Bayesian Information Criterion (BIC) and Anderson-Darling Criterion (ADC). The analysis indicated that the Generalized Extreme Value distribution provided the best fit to the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
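A minimal sketch of the distribution-fitting and AIC-comparison step, using scipy on invented daily-maximum rainfall values. Note that scipy fits by maximum likelihood rather than the L-moments method used in the study, and AIC is computed directly from the fitted log-likelihood.

```python
import numpy as np
from scipy import stats

rainfall = np.array([22., 31., 18., 45., 27., 60., 35., 25., 52., 40.,
                     29., 33., 70., 26., 38., 48., 21., 55., 30., 42.])  # mm, illustrative

candidates = {"GEV": stats.genextreme, "Log-normal": stats.lognorm,
              "Gumbel (EV1)": stats.gumbel_r, "Pearson III": stats.pearson3}

for name, dist in candidates.items():
    params = dist.fit(rainfall)
    loglik = np.sum(dist.logpdf(rainfall, *params))
    aic = 2 * len(params) - 2 * loglik              # smaller AIC = preferred distribution
    print(f"{name:12s} AIC = {aic:.1f}")

# 100-year return level from the fitted GEV (non-exceedance probability 0.99)
p = stats.genextreme.fit(rainfall)
print("100-yr return level (mm):", stats.genextreme.ppf(0.99, *p))
```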
Case-Deletion Diagnostics for Nonlinear Structural Equation Models
ERIC Educational Resources Information Center
Lee, Sik-Yum; Lu, Bin
2003-01-01
In this article, a case-deletion procedure is proposed to detect influential observations in a nonlinear structural equation model. The key idea is to develop the diagnostic measures based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm. An one-step pseudo approximation is proposed to reduce the…
Dynamic predictive model for growth of Bacillus cereus from spores in cooked beans
USDA-ARS?s Scientific Manuscript database
Kinetic growth data of Bacillus cereus from spores in cooked beans at several isothermal conditions (between 10 and 49 °C) were collected. Samples were inoculated with approximately 2 log CFU/g of heat-shocked (80 °C/10 min) spores and stored at isothermal temperatures. B. cereus populations were deter...
Geothermal state and fluid flow within ODP Hole 843B: results from wireline logging
NASA Astrophysics Data System (ADS)
Wiggins, Sean M.; Hildebrand, John A.; Gieskes, Joris M.
2002-02-01
Borehole fluid temperatures were measured with a wireline re-entry system in Ocean Drilling Program Hole 843B, the site of the Ocean Seismic Network Pilot Experiment. These temperature data, recorded more than 7 years after drilling, are compared to temperature data logged during Leg 136, approximately 1 day after drilling had ceased. Qualitative interpretations of the temperature data suggest that fluid flowed slowly downward in the borehole immediately following drilling, and flowed slowly upward 7 years after drilling. Quantitative analysis suggests that the upward fluid flow rate in the borehole is approximately 1 m/h. Slow fluid flow interpreted from temperature data only, however, requires estimates of other unmeasured physical properties. If fluid flows upward in Hole 843B, it may have led to undesirable noise for the borehole seismometer emplaced in this hole as part of the Ocean Seismic Network Pilot Experiment. Estimates of conductive heat flow from ODP Hole 843B are 51 mW/m2 for the sediment and the basalt. These values are lower than those from the most recent Hawaiian Arch seafloor heat flow studies.
Fong, Youyi; Yu, Xuesong
2016-01-01
Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five parameter logistic curves (5PL) to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log 5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
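For reference, the five-parameter logistic (5PL) curve discussed above is commonly written as f(x) = d + (a - d) / (1 + (x/c)^b)^g, where a and d are the asymptotes, c is a location parameter, b the slope, and g the asymmetry parameter. A minimal sketch of the curve and its log-transformed counterpart (parameter values are illustrative, not fitted to any assay):

```python
import numpy as np

def five_pl(x, a, d, c, b, g):
    """Five-parameter logistic: asymmetric sigmoid used for FI standard curves."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

def log_five_pl(x, a, d, c, b, g):
    """The same curve on the log scale, as in the transformation-choice comparison."""
    return np.log(five_pl(x, a, d, c, b, g))

dilutions = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])
print(five_pl(dilutions, a=100.0, d=30000.0, c=50.0, b=1.2, g=0.8))  # rises from ~a toward d
```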
Lognormal-like statistics of a stochastic squeeze process
NASA Astrophysics Data System (ADS)
Shapira, Dekel; Cohen, Doron
2017-10-01
We analyze the full statistics of a stochastic squeeze process. The model's two parameters are the bare stretching rate w and the angular diffusion coefficient D. We carry out an exact analysis to determine the drift and the diffusion coefficient of log(r), where r is the radial coordinate. The results go beyond the heuristic lognormal description that is implied by the central limit theorem. Contrary to the common "quantum Zeno" approximation, the radial diffusion is not simply Dr = (1/8) w²/D but has a nonmonotonic dependence on w/D. Furthermore, the calculation of the radial moments is dominated by the far non-Gaussian tails of the log(r) distribution.
Bidisperse and polydisperse suspension rheology at large solid fraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pednekar, Sidhant; Chun, Jaehun; Morris, Jeffrey F.
At the same solid volume fraction, bidisperse and polydisperse suspensions display lower viscosities, and weaker normal stress response, compared to monodisperse suspensions. The reduction of viscosity associated with size distribution can be explained by an increase of the maximum flowable, or jamming, solid fraction. In this work, concentrated or "dense" suspensions are simulated under strong shearing, where thermal motion and repulsive forces are negligible, but we allow for particle contact with a mild frictional interaction with interparticle friction coefficient of 0.2. Aspects of bidisperse suspension rheology are first revisited to establish that the approach reproduces established trends; the study of bidisperse suspensions at size ratios of large to small particle radii (2 to 4) shows that a minimum in the viscosity occurs for zeta slightly above 0.5, where zeta=phi_{large}/phi is the fraction of the total solid volume occupied by the large particles. The simple shear flows of polydisperse suspensions with truncated normal and log normal size distributions, and bidisperse suspensions which are statistically equivalent to these polydisperse cases up to the third moment of the size distribution, are simulated and the rheologies are extracted. Prior work shows that such distributions with equivalent low-order moments have similar phi_{m}, and the rheological behaviors of normal, log normal and bidisperse cases are shown to be in close agreement for a wide range of standard deviation in particle size, with standard correlations which are functionally dependent on phi/phi_{m} providing excellent agreement with the rheology found in simulation. The close agreement of both viscosity and normal stress response between bi- and polydisperse suspensions demonstrates the controlling influence of the maximum packing fraction in noncolloidal suspensions. Microstructural investigations and the stress distribution according to particle size are also presented.
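The "standard correlations which are functionally dependent on phi/phi_m" can be illustrated with the widely used Maron-Pierce form, eta_r = (1 - phi/phi_m)^-2: increasing the jamming fraction phi_m (for example by broadening the size distribution) sharply lowers the viscosity at fixed solid fraction. This is a generic illustration, not the specific correlation fitted in the paper.

```python
def relative_viscosity(phi, phi_m):
    """Maron-Pierce correlation for the relative viscosity of a dense suspension."""
    return (1.0 - phi / phi_m) ** -2

phi = 0.55
for label, phi_m in [("monodisperse-like phi_m", 0.60), ("broad size distribution phi_m", 0.66)]:
    print(label, relative_viscosity(phi, phi_m))
# A higher jamming fraction phi_m gives a much lower viscosity at the same phi.
```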
Multiple imputation for handling missing outcome data when estimating the relative risk.
Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B
2017-09-06
Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. However fully conditional specification is not without its shortcomings, and so further research is needed to identify optimal approaches for relative risk estimation within the multiple imputation framework.
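A minimal sketch of the analysis model at issue: a log-binomial GLM whose exponentiated coefficients are adjusted relative risks, here fitted with statsmodels on invented complete data. The multiple-imputation step itself (e.g., fully conditional specification with a logistic imputation model for the outcome) is omitted, and in practice log-binomial fits can have convergence problems when predicted risks approach one.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
exposure = rng.integers(0, 2, n)
age = rng.normal(50, 10, n)
# True relative risk of 1.5 for the exposure on the log-risk scale (illustrative)
risk = np.exp(-2.0 + np.log(1.5) * exposure + 0.01 * (age - 50))
y = rng.binomial(1, np.clip(risk, 0, 1))

X = sm.add_constant(pd.DataFrame({"exposure": exposure, "age": age}))
model = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log()))
fit = model.fit()
print(np.exp(fit.params["exposure"]))   # adjusted relative risk, should be close to 1.5
```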
Fingerprinting breakthrough curves in soils
NASA Astrophysics Data System (ADS)
Koestel, J. K.
2017-12-01
Conservative solute transport through soil is predominantly modeled using a few standard solute transport models like the convection dispersion equation or the mobile-immobile model. The adequacy of these models is seldom investigated in detail as it would require knowledge of the 3-D spatio-temporal evolution of the solute plume that is normally not available. Instead, shape-measures of breakthrough curves (BTCs) such as the apparent dispersivity and the relative 5%-arrival time may be used to fingerprint breakthrough curves as well as forward solutions of solute transport models. In this fashion the similarity of features from measured and modeled BTC data becomes quantifiable. In this study I am presenting a new set of shape-measures that characterize the log-log tailing of BTCs. I am using the new shape measures alongside more established ones to map the features of BTCs obtained from forward models of the convection dispersion equation, log-normal and Gamma transfer functions, the mobile-immobile model and the continuous time random walk model with respect to their input parameters. In a second step, I am comparing corresponding shape-measures for 206 measured BTCs extracted from peer-reviewed literature. Preliminary results show that power-law tailing is very common in BTCs from soil samples and that BTC features that are exclusive to a mobile-immobile type solute transport process are very rarely found.
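One of the simpler shape measures mentioned above, the relative 5%-arrival time, can be computed directly from a breakthrough curve as the time at which 5% of the recovered tracer mass has passed, normalized by a characteristic arrival time (the median is used here). The sketch below uses that common definition with an invented log-normal-shaped BTC; the exact normalization used in the study may differ.

```python
import numpy as np
from scipy import stats

def relative_arrival_time(t, c, fraction=0.05):
    """Time at which `fraction` of the recovered tracer mass has broken through,
    normalized by the median (50%) arrival time."""
    mass = np.cumsum(c * np.gradient(t))            # cumulative breakthrough (arbitrary units)
    mass /= mass[-1]
    return np.interp(fraction, mass, t) / np.interp(0.5, mass, t)

t = np.linspace(0.05, 40.0, 1000)                   # hours, illustrative
btc = stats.lognorm.pdf(t, s=0.6, scale=5.0)        # a log-normal transfer-function-like BTC
print(relative_arrival_time(t, btc))                # values well below 1 indicate early breakthrough
```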
Jin, Jie; Sun, Ke; Liu, Wei; Li, Shiwei; Peng, Xianqiang; Yang, Yan; Han, Lanfang; Du, Ziwen; Wang, Xiangke
2018-05-01
Chemical composition and pollutant sorption of biochar-derived organic matter fractions (BDOMs) are critical for understanding the long-term environmental significance of biochar. Phenanthrene (PHE) sorption by the humic acid-like (HAL) fractions isolated from plant straw- (PLABs) and animal manure-based (ANIBs) biochars, and the residue materials (RES) after HAL extraction was investigated. The HAL fraction comprised approximately 50% of organic carbon (OC) of the original biochars. Results of XPS and 13C NMR demonstrated that the biochar-derived HAL fractions mainly consisted of aromatic clusters substituted by carboxylic groups. The CO2 cumulative surface area of BDOMs excluding PLAB-derived RES fractions was obviously lower than that of corresponding biochars. The sorption nonlinearity of PHE by the fresh biochars was significantly stronger than that of the BDOM fractions, implying that the BDOM fractions were more chemically homogeneous. The BDOMs generally exhibited comparable or higher OC-normalized distribution coefficients (Koc) of PHE than the original biochars. The PHE log Koc values of the fresh biochars correlated negatively with the micropore volumes due to steric hindrance effect. In contrast, a positive relationship between the sorption coefficients (Kd) of BDOMs and the micropore volumes was observed in this study, suggesting that pore filling could dominate PHE sorption by the BDOMs. The positive correlation between the PHE log Koc values of the HAL fractions and the aromatic C contents indicates that PHE sorption by the HAL fractions was regulated by aromatic domains. The findings of this study improve our knowledge of the evolution of biochar properties after application and its potential environmental impacts. Copyright © 2018 Elsevier Ltd. All rights reserved.
Smalling, Kelly L.; Morgan, Steven; Kuivila, Kathryn K.
2010-01-01
Invertebrates have long been used as resident sentinels for assessing ecosystem health and productivity. The shore crabs, Hemigrapsus oregonensis and Pachygrapsus crassipes, are abundant in estuaries and beaches throughout northern California, USA and have been used as indicators of habitat conditions in several salt marshes. The overall objectives of the present study were to conduct a lab-based study to test the accumulation of current-use pesticides, validate the analytical method and to analyze field-collected crabs for a suite of 74 current-use and legacy pesticides. A simple laboratory uptake study was designed to determine if embryos could bioconcentrate the herbicide molinate over a 7-d period. At the end of the experiment, embryos were removed from the crabs and analyzed by gas chromatography/mass spectrometry. Although relatively hydrophilic (log KOW of 2.9), molinate did accumulate with an estimated bioconcentration factor (log BCF) of approximately 2.5. Following method validation, embryos were collected from two different Northern California salt marshes and analyzed. In field-collected embryos 18 current-use and eight organochlorine pesticides were detected including synthetic pyrethroids and organophosphate insecticides, as well as DDT and its degradates. Lipid-normalized concentrations of the pesticides detected in the field-collected crab embryos ranged from 0.1 to 4 ppm. Pesticide concentrations and profiles in crab embryos were site specific and could be correlated to differences in land-use practices. These preliminary results indicate that embryos are an effective sink for organic contaminants in the environment and have the potential to be good indicators of ecosystem health, especially when contaminant body burden analyses are paired with reproductive impairment assays.
Stress state and its anomaly observations in the vicinity of a fault in NanTroSEIZE Expedition 322
NASA Astrophysics Data System (ADS)
Wu, Hung-Yu; Saito, Saneatsu; Kinoshita, Masataka
2015-12-01
To better understand the stress state and geological properties within the shallow Shikoku Basin, southwest of Japan, two sites, C0011A and C0011B, were drilled in open-ocean sediments using Logging While Drilling (LWD) and coring, respectively. Resistivity image logging was performed at C0011A from sea floor to 950 m below sea floor (mbsf). At C0011B, serial coring was carried out to determine physical properties from 340 to 880 mbsf. For the LWD images, a notable breakout anomaly was observed at a depth of 615 m. Using resistivity images and a stress polygon, the potential horizontal principal stress azimuth and its magnitude within the 500-750 mbsf section of the C0011A borehole were constrained. Borehole breakout azimuths were observed to vary in the vicinity of a fault zone at a depth of 615 mbsf. Outside this fracture zone, the breakout azimuth was located at approximately 109° ± 12°, subparallel to the Nankai Trough convergence vector (300-315°). Our calculations indicate a stress drop, determined from the fracture geometry. A rotation close to 90° (73° ± 12°) implied a nearly 100% stress drop, defined as a maximum shear stress drop of 1 MPa. The magnitude of the horizontal principal stresses near the fracture stress anomaly ranged between 49 and 52 MPa, and their relation to the vertical stress (Sv = 52 MPa) indicated a normal-faulting stress regime. Low rock strength and a low stress level are necessary to satisfy the observations.
NASA Astrophysics Data System (ADS)
Breitzke, Monika; Bohlen, Thomas
2010-05-01
Modelling sound propagation in the ocean is an essential tool to assess the potential risk of air-gun shots on marine mammals. Based on a 2.5-D finite-difference code a full waveform modelling approach is presented, which determines both sound exposure levels of single shots and cumulative sound exposure levels of multiple shots fired along a seismic line. Band-limited point source approximations of compact air-gun clusters deployed by R/V Polarstern in polar regions are used as sound sources. Marine mammals are simulated as static receivers. Applications to deep and shallow water models including constant and depth-dependent sound velocity profiles of the Southern Ocean show dipole-like directivities in case of single shots and tubular cumulative sound exposure level fields beneath the seismic line in case of multiple shots. Compared to a semi-infinite model an incorporation of seafloor reflections enhances the seismically induced noise levels close to the sea surface. Refraction due to sound velocity gradients and sound channelling in near-surface ducts are evident, but affect only low to moderate levels. Hence, exposure zone radii derived for different hearing thresholds are almost independent of the sound velocity structure. With decreasing thresholds radii increase according to a spherical 20 log10 r law in case of single shots and according to a cylindrical 10 log10 r law in case of multiple shots. A doubling of the shot interval diminishes the cumulative sound exposure levels by -3 dB and halves the radii. The ocean bottom properties only slightly affect the radii in shallow waters, if the normal incidence reflection coefficient exceeds 0.2.
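The radius scalings quoted above follow directly from geometric spreading: for a source level SL and a received-level threshold T (both in dB, same reference), spherical spreading gives r = 10^((SL - T)/20) and cylindrical spreading gives r = 10^((SL - T)/10), so each 6 dB (respectively 3 dB) reduction in the threshold doubles the radius. A minimal numerical illustration (the source level and thresholds below are invented placeholders, not the Polarstern cluster values):

```python
def radius_spherical(source_level_db, threshold_db):
    """Range (m) at which the level falls to the threshold under 20*log10(r) spreading."""
    return 10 ** ((source_level_db - threshold_db) / 20.0)

def radius_cylindrical(source_level_db, threshold_db):
    """Range (m) under 10*log10(r) spreading (cumulative multi-shot case)."""
    return 10 ** ((source_level_db - threshold_db) / 10.0)

SL = 220.0                       # illustrative single-shot source level, dB re 1 uPa^2 s at 1 m
for T in (180.0, 174.0, 168.0):  # hypothetical exposure thresholds
    print(T, radius_spherical(SL, T), radius_cylindrical(SL, T))
# Spherical: each 6 dB lower threshold doubles the radius; cylindrical: each 3 dB does.
```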
The Star Formation Main Sequence in the Hubble Space Telescope Frontier Fields
NASA Astrophysics Data System (ADS)
Santini, Paola; Fontana, Adriano; Castellano, Marco; Di Criscienzo, Marcella; Merlin, Emiliano; Amorin, Ricardo; Cullen, Fergus; Daddi, Emanuele; Dickinson, Mark; Dunlop, James S.; Grazian, Andrea; Lamastra, Alessandra; McLure, Ross J.; Michałowski, Michał. J.; Pentericci, Laura; Shu, Xinwen
2017-09-01
We investigate the relation between star formation rate (SFR) and stellar mass (M), i.e., the main sequence (MS) relation of star-forming galaxies, at 1.3 ≤ z < 6 in the first four Hubble Space Telescope (HST) Frontier Fields, on the basis of rest-frame UV observations. Gravitational lensing combined with deep HST observations allows us to extend the analysis of the MS down to log(M/M⊙) ≈ 7.5 at z ≲ 4 and log(M/M⊙) ≈ 8 at higher redshifts, a factor of ~10 below most previous results. We perform an accurate simulation to take into account the effect of observational uncertainties and correct for the Eddington bias. This step allows us to reliably measure the MS and in particular its slope. While the normalization increases with redshift, we fit an unevolving and approximately linear slope. Our measurements extend the results of brighter surveys to lower masses. Thanks to the large dynamic range in mass, and by making use of the simulation, we analyze the possible mass dependence of the dispersion around the MS. We find tentative evidence that the scatter decreases with increasing mass, suggesting a larger variety of star formation histories in low-mass galaxies. This trend agrees with theoretical predictions and is explained as a consequence of the smaller number of progenitors of low-mass galaxies in a hierarchical scenario and/or of the efficient but intermittent stellar feedback processes in low-mass halos. Finally, we observe an increase in the SFR per unit stellar mass with redshift that is milder than predicted by theoretical models, implying a still incomplete understanding of the processes responsible for galaxy growth.
Watt, Timothy J; Duan, Jian J
2014-08-01
Spathius galinae Belokobylskij and Strazenac (Hymenoptera: Braconidae) is a recently discovered gregarious idiobiont larval ectoparasitoid currently being evaluated for biological control against the invasive emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), in the United States. To aid in the development of laboratory rearing protocols, we assessed the influence of various emerald ash borer stages on critical fitness parameters of S. galinae. We exposed gravid S. galinae females to emerald ash borer host larvae of various ages (3.5, 5, 7, and 10 wk post oviposition) that were reared naturally in tropical (evergreen) ash (Fraxinus uhdei (Wenzig) Lingelsh) logs, or to field-collected, late-stage emerald ash borers (nonfeeding J-shaped larvae termed "J-larvae," prepupae, and pupae) that were artificially inserted into green ash logs. When exposed to larvae in tropical ash logs, S. galinae attacked 5 and 7 wk hosts more frequently (68-76%) than 3.5 wk (23%) and 10 wk (12%) hosts. Subsample dissections of these logs revealed that the 3.5, 5, 7 and 10 wk host logs contained mostly second instars, third instars, fourth instars, and J-larvae (which had already bored into the sapwood for diapause), respectively. No J-larvae were attacked by S. galinae when naturally reared in tropical ash logs. When parasitized by S. galinae, 7 and 10 wk hosts produced the largest broods (approximately 6.7 offspring per parasitized host), and the progenies that emerged from these logs had larger anatomical measurements and more female-biased sex ratios. When exposed to emerald ash borer J-larvae, prepupae, or pupae artificially inserted into green ash logs, S. galinae attacked 53% of J-larvae, but did not attack any prepupae or pupae. We conclude that large (fourth instar) emerald ash borer larvae should be used to rear S. galinae.
Responses of crayfish photoreceptor cells following intense light adaptation.
Cummins, D R; Goldsmith, T H
1986-01-01
After intense orange adapting exposures that convert 80% of the rhodopsin in the eye to metarhodopsin, rhabdoms become covered with accessory pigment and appear to lose some microvillar order. Only after a delay of hours or even days is the metarhodopsin replaced by rhodopsin (Cronin and Goldsmith 1984). After 24 h of dark adaptation, when there has been little recovery of visual pigment, the photoreceptor cells have normal resting potentials and input resistances, and the reversal potential of the light response is 10-15 mV (inside positive), unchanged from controls. The log V vs log I curve is shifted about 0.6 log units to the right on the energy axis, quantitatively consistent with the decrease in the probability of quantum catch expected from the lowered concentration of rhodopsin in the rhabdoms. Furthermore, at 24 h the photoreceptors exhibit a broader spectral sensitivity than controls, which is also expected from accumulations of metarhodopsin in the rhabdoms. In three other respects, however, the transduction process appears to be light adapted: The voltage responses are more phasic than those of control photoreceptors. The relatively larger effect (compared to controls) of low extracellular Ca++ (1 mmol/l EGTA) in potentiating the photoresponses suggests that the photoreceptors may have elevated levels of free cytoplasmic Ca++. The saturating depolarization is only about 30% as large as the maximal receptor potentials of contralateral, dark controls, and by that measure the log V-log I curve is shifted downward by 0.54 log units.(ABSTRACT TRUNCATED AT 250 WORDS)
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least square regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. The study compares the advantages and disadvantages of different methods to estimate regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F 20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model and a generalized linear model with a log-link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R2 and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSE were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed a significant negative influence of employment status and partnership on costs. All three models provided an R2 of about 0.31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model are normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. As a result of the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM showed the weakest model fit again. None of the differences between the RMSE resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSE were not significant.
Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSE for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a lack of sample size adequate to detect important differences among the estimators employed. Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by nonparametric methods, which are robust against deviations from normality and homoscedasticity of the residuals, is a suitable alternative to the transformation of the skewed dependent cost variable. Further studies with more adequate case numbers are needed to confirm the results.
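The three estimation strategies compared in this abstract can be sketched in a few lines of Python with statsmodels. This is only an illustration on simulated data, not the study's analysis: the covariates (a symptom score and age), the data-generating process, and the use of Duan's smearing estimator as the retransformation bias correction are all assumptions made here for the example.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical right-skewed cost data with two covariates (illustration only)
rng = np.random.default_rng(0)
n = 254
df = pd.DataFrame({"bprs": rng.normal(40, 10, n), "age": rng.normal(45, 12, n)})
df["cost"] = np.exp(6 + 0.03 * df["bprs"] - 0.01 * df["age"] + rng.normal(0, 0.8, n))

# 1) Linear OLS with heteroscedasticity-robust (White) standard errors
ols_lin = smf.ols("cost ~ bprs + age", data=df).fit(cov_type="HC0")

# 2) OLS on log(cost); naive retransformation needs a bias correction,
#    here Duan's smearing estimator (one common choice)
ols_log = smf.ols("np.log(cost) ~ bprs + age", data=df).fit()
smearing = np.mean(np.exp(ols_log.resid))
pred_log = np.exp(ols_log.fittedvalues) * smearing

# 3) Generalized linear model, gamma family with log link
glm_gamma = smf.glm("cost ~ bprs + age", data=df,
                    family=sm.families.Gamma(link=sm.families.links.Log())).fit()

def rmse(obs, pred):
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

print("RMSE OLS       :", rmse(df["cost"], ols_lin.fittedvalues))
print("RMSE log-OLS   :", rmse(df["cost"], pred_log))
print("RMSE gamma GLM :", rmse(df["cost"], glm_gamma.fittedvalues))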
Objective straylight assessment of the human eye with a novel device
NASA Astrophysics Data System (ADS)
Schramm, Stefan; Schikowski, Patrick; Lerm, Elena; Kaeding, André; Klemm, Matthias; Haueisen, Jens; Baumgarten, Daniel
2016-03-01
Forward scattered light from the anterior segment of the human eye can be measured by Shack-Hartmann (SH) wavefront aberrometers over a limited visual angle. We propose a novel Point Spread Function (PSF) reconstruction algorithm based on SH measurements with a novel measurement device to overcome these limitations. In our optical setup, we use a Digital Mirror Device as a variable field stop, which is conventionally a pinhole suppressing scatter and reflections. Images with 21 different stop diameters were captured, and from each image the average subaperture image intensity and the average intensity of the pupil were computed. The 21 intensities represent integral values of the PSF, which is then reconstructed by differentiation with respect to the visual angle. A generalized form of the Stiles-Holladay approximation is fitted to the PSF, resulting in a stray light parameter Log(IS). Additionally, the transmission loss of the eye is computed. As a proof of principle, a study on 13 healthy young volunteers was carried out. Scatter filters were positioned in front of the volunteer's eye during C-Quant and scatter measurements to generate straylight emulating scatter in the lens. The straylight parameter is compared to the C-Quant measurement parameter Log(ISC) and the scatter density of the filters SDF with a partial correlation. Log(IS) shows significant correlation with the SDF and Log(ISC). The correlation is more prominent between Log(IS) combined with the transmission loss and the SDF and Log(ISC). Our novel measurement and reconstruction technique allows for objective stray light analysis of visual angles up to 4 degrees.
Physiological and morphological responses of pine and willow saplings to post-fire salvage logging
NASA Astrophysics Data System (ADS)
Millions, E. L.; Letts, M. G.; Harvey, T.; Rood, S. B.
2015-12-01
With global warming, forest fires may be increasing in frequency, and post-fire salvage logging may become more common. The ecophysiological impacts of this practice on tree saplings remain poorly understood. In this study, we examined the physiological and morphological impacts of increased light intensity, due to post-fire salvage logging, on the conifer Pinus contorta (pine) and the deciduous broadleaf Salix lucida (willow) in the Crowsnest Pass region of southern Alberta. Photosynthetic gas-exchange and plant morphological measurements were taken throughout the summer of 2013 on approximately ten-year-old saplings of both species. Neither species exhibited photoinhibition, but different strategies were observed to acclimate to increased light availability. Willow saplings were able to slightly elevate their light-saturated rate of net photosynthesis (Amax) when exposed to higher photosynthetic photon flux density (PPFD), thus increasing their growth rate. Willow also exhibited increased leaf inclination angles and leaf mass per unit area (LMA), to decrease light interception in the salvage-logged plot. By contrast, pine, which exhibited lower Amax and transpiration (E), but higher water-use efficiency (WUE = Amax/E) than willow, increased the rate at which electrons were moved through and away from the photosynthetic apparatus in order to avoid photoinhibition. Acclimation indices were higher in willow saplings, consistent with the hypothesis that species with short-lived foliage exhibit greater acclimation. LMA was higher in pine saplings growing in the logged plot, but whole-plant and branch-level morphological acclimation was limited and more consistent with a response to decreased competition in the logged plot, which had much lower stand density.
Lewis, Jack; Rhodes, Jonathan J; Bradley, Curtis
2018-04-11
The Battle Creek watershed in northern California was historically important for its Chinook salmon populations, now at remnant levels due to land and water uses. Privately owned portions of the watershed are managed primarily for timber production, which has intensified since 1998, when clearcutting became widespread. Turbidity has been monitored by citizen volunteers at 13 locations in the watershed. Approximately 2000 grab samples were collected in the 5-year analysis period as harvesting progressed, a severe wildfire burned 11,200 ha, and most of the burned area was salvage logged. The data reveal strong associations of turbidity with the proportion of area harvested in watersheds draining to the measurement sites. Turbidity increased significantly over the measurement period in 10 watersheds and decreased at one. Some of these increases may be due to the influence of wildfire, logging roads and haul roads. However, turbidity continued trending upwards in six burned watersheds that were logged after the fire, while decreasing or remaining the same in two that escaped the fire and post-fire logging. Unusually high turbidity measurements (more than seven times the average value for a given flow condition) were very rare (0.0% of measurements) before the fire but began to appear in the first year after the fire (5.0% of measurements) and were most frequent (11.6% of measurements) in the first 9 months after salvage logging. Results suggest that harvesting contributes to road erosion and that current management practices do not fully protect water quality.
NASA Technical Reports Server (NTRS)
Tueller, J.; Mushotzky, R. F.; Barthelmy, S.; Cannizzo, J. K.; Gehrels, N.; Markwardt, C. B.; Skinner, G. K.; Winter, L. M.
2008-01-01
We present the results1 of the analysis of the first 9 months of data of the Swift BAT survey of AGN in the 14-195 keV band. Using archival X-ray data or follow-up Swift XRT observations, we have identified 129 (103 AGN) of 130 objects detected at [b] > 15deg and with significance > 4.8-delta. One source remains unidentified. These same X-ray data have allowed measurement of the X-ray properties of the objects. We fit a power law to the logN - log S distribution, and find the slope to be 1.42+/-0.14. Characterizing the differential luminosity function data as a broken power law, we find a break luminosity logL*(ergs/s)= 43.85+/-0.26. We obtain a mean photon index 1.98 in the 14-195 keV band, with an rms spread of 0.27. Integration of our luminosity function gives a local volume density of AGN above 10(exp 41) erg/s of 2.4x10(exp -3) Mpc(sup -3), which is about 10% of the total luminous local galaxy density above M* = -19.75. We have obtained X-ray spectra from the literature and from Swift XRT follow-up observations. These show that the distribution of log nH is essentially flat from nH = 10(exp 20)/sq cm to 10(exp 24)/sq cm, with 50% of the objects having column densities of less than 10(exp 22)/sq cm. BAT Seyfert galaxies have a median redshift of 0.03, a maximum log luminosity of 45.1, and approximately half have log nH > 22.
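A rough illustration of what fitting a log N-log S slope involves is sketched below in Python. This is not the survey's actual procedure (which would use a maximum-likelihood fit with sky-coverage corrections); it simply draws a power-law-like flux sample and fits the cumulative counts by least squares in log-log space. All numbers are invented.

import numpy as np

# Hypothetical hard X-ray fluxes (arbitrary units), power-law-like sample
rng = np.random.default_rng(1)
fluxes = 1.0 + rng.pareto(1.4, 130)

# Cumulative source counts N(>S)
s_grid = np.sort(fluxes)
n_gt_s = np.arange(len(s_grid), 0, -1)

# Fit log N(>S) = a + slope * log S by least squares; slope is negative
slope, a = np.polyfit(np.log10(s_grid), np.log10(n_gt_s), 1)
print(f"fitted logN-logS slope ~ {-slope:.2f}")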
Eganhouse, Robert P.
2016-01-01
Polymer-water partition coefficients (Kpw) of ten DDT-related compounds were determined in pure water at 25 °C using commercial polydimethylsiloxane-coated optical fiber. Analyte concentrations were measured by thermal desorption-gas chromatography/full scan mass spectrometry (TD–GC/MSFS; fibers) and liquid injection-gas chromatography/selected ion monitoring mass spectrometry (LI–GC/MSSIM; water). Equilibrium was approached from two directions (fiber uptake and depletion) as a means of assessing data concordance. Measured compound-specific log Kpw values ranged from 4.8 to 6.1 with an average difference in log Kpw between the two approaches of 0.05 log units (∼12% of Kpw). Comparison of the experimentally-determined log Kpw values with previously published data confirmed the consistency of the results and the reliability of the method. A second experiment was conducted with the same ten DDT-related compounds and twelve selected PCB (polychlorinated biphenyl) congeners under conditions characteristic of a coastal marine field site (viz., seawater, 11 °C) that is currently under investigation for DDT and PCB contamination. Equilibration at lower temperature and higher ionic strength resulted in an increase in log Kpw for the DDT-related compounds of 0.28–0.49 log units (61–101% of Kpw), depending on the analyte. The increase in Kpw would have the effect of reducing by approximately half the calculated freely dissolved pore-water concentrations (Cfree). This demonstrates the importance of determining partition coefficients under conditions as they exist in the field.
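The last point of the abstract, that a roughly 0.3 log-unit increase in Kpw halves the inferred freely dissolved concentration, follows directly from equilibrium partitioning, Cfree = Cpolymer/Kpw. A minimal Python sketch, with made-up fiber concentration and Kpw values used only to show the scaling:

import numpy as np

def c_free(c_polymer_ng_per_g, log_kpw):
    """Freely dissolved pore-water concentration from the measured polymer
    concentration, assuming equilibrium partitioning: Cfree = Cpolymer / Kpw."""
    return c_polymer_ng_per_g / 10.0 ** log_kpw

c_poly = 500.0          # hypothetical fiber concentration, ng/g
log_kpw_purewater = 5.5 # pure water, 25 degrees C (assumed value)
log_kpw_field = 5.8     # seawater, 11 degrees C: ~0.3 log units higher (assumed)

ratio = c_free(c_poly, log_kpw_field) / c_free(c_poly, log_kpw_purewater)
print(f"Cfree is scaled by a factor of {ratio:.2f}")   # ~0.5, i.e. roughly halved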
Morin, Roger H.; Urish, Daniel W.
1995-01-01
The Cape Cod National Seashore comprises part of Provincetown, Massachusetts, which lies at the northern tip of Cape Cod. The hydrologic regime in this area consists of unconsolidated sand-and-gravel deposits that constitute a highly permeable aquifer within which is a freshwater lens floating on denser sea water. A network of wells was installed into this aquifer to monitor a leachate plume emanating from the Provincetown landfill. Wells were located along orthogonal transects perpendicular to and parallel to the general groundwater flow path from the landfill to the seashore approximately 1,000 m to the southeast. Temperature, epithermal neutron, natural gamma, and electromagnetic induction logs were obtained in five wells to depths ranging from 23 to 37 m. These logs identify the primary contamination and show that its movement is controlled by and confined within a dominant hydrostratigraphic unit about 2 to 5 m thick that exhibits low porosity, large representative grain size, and high relative permeability. A relation is also found between the temperature-gradient logs and water quality, with the gradient traces serving as effective delineators of the contaminant plume in wells nearest the landfill. Contamination is not detectable in the well nearest the seashore and farthest from the landfill, and the induction log from this well clearly identifies the freshwater/seawater transition zone at a depth of about 18 m. The geophysical logs provide fundamental information concerning the spatial distribution of aquifer properties near the landfill and lend valuable insight into how these properties influence the migration of the leachate plume to the sea.
Takács-Novák, K; Szász, G
1999-10-01
The ion-pair partition of quaternary ammonium (QA) pharmacons with organic counter ions of different lipophilicity, size, shape and flexibility was studied to elucidate relationships between ion-pair formation and chemical structure. The apparent partition coefficient (P') of 4 QAs was measured in an octanol/pH 7.4 phosphate buffer system by the shake-flask method as a function of molar excess of ten counter ions (Y), namely: mesylate (MES), acetate (AC), pyruvate (PYRU), nicotinate (NIC), hydrogenfumarate (HFUM), hydrogenmaleate (HMAL), p-toluenesulfonate (PTS), caproate (CPR), deoxycholate (DOC) and prostaglandin E1 anion (PGE1). Based on 118 highly precise logP' values (SD < 0.05), the intrinsic lipophilicity (without external counter ions) and the ion-pair partition of QAs (with different counter ions) were characterized. A linear correlation was found between the logP' of ion-pairs and the size of the counter ions described by the solvent accessible surface area (SASA). The lipophilicity-increasing effect of the counter ions was quantified and the following order was established: DOC ≈ PGE1 > CPR ≈ PTS > NIC ≈ HMAL > PYRU ≈ AC ≈ MES ≈ HFUM. Analyzing the lipophilicity/molar ratio (QA:Y) profile, the differences in ion-pair formation were shown and attributed to the differences in the flexibility/rigidity and size of both QA and Y. Since the largest (on average, 300×) lipophilicity enhancement was found for DOC and PGE1, and a considerable (on average, 40×) increase was observed for CPR and PTS, it was concluded that bile acids and prostaglandin anions may play a significant role in the ion-pair transport of quaternary ammonium drugs, and that caproic acid and p-toluenesulfonic acid may be useful salt-forming agents to improve the pharmacokinetics of hydrophilic drugs.
Rockwell, Cara A.; Guariguata, Manuel R.; Menton, Mary; Arroyo Quispe, Eriks; Quaedvlieg, Julia; Warren-Thomas, Eleanor; Fernandez Silva, Harol; Jurado Rojas, Edwin Eduardo; Kohagura Arrunátegui, José Andrés Hideki; Meza Vega, Luis Alberto; Revilla Vera, Olivia; Valera Tito, Jonatan Frank; Villarroel Panduro, Betxy Tabita; Yucra Salas, Juan José
2015-01-01
Although many examples of multiple-use forest management may be found in tropical smallholder systems, few studies provide empirical support for the integration of selective timber harvesting with non-timber forest product (NTFP) extraction. Brazil nut (Bertholletia excelsa, Lecythidaceae) is one of the world's most economically-important NTFP species extracted almost entirely from natural forests across the Amazon Basin. An obligate out-crosser, Brazil nut flowers are pollinated by large-bodied bees, a process resulting in a hard round fruit that takes up to 14 months to mature. As many smallholders turn to the financial security provided by timber, Brazil nut fruits are increasingly being harvested in logged forests. We tested the influence of tree and stand-level covariates (distance to nearest cut stump and local logging intensity) on total nut production at the individual tree level in five recently logged Brazil nut concessions covering about 4000 ha of forest in Madre de Dios, Peru. Our field team accompanied Brazil nut harvesters during the traditional harvest period (January-April 2012 and January-April 2013) in order to collect data on fruit production. Three hundred and ninety-nine (approximately 80%) of the 499 trees included in this study were at least 100 m from the nearest cut stump, suggesting that concessionaires avoid logging near adult Brazil nut trees. Yet even for those trees on the edge of logging gaps, distance to nearest cut stump and local logging intensity did not have a statistically significant influence on Brazil nut production at the applied logging intensities (typically 1-2 timber trees removed per ha). In one concession where at least 4 trees ha-1 were removed, however, the logging intensity covariate resulted in a marginally significant P value (0.09), highlighting a potential risk for a drop in nut production at higher intensities. While we do not suggest that logging activities should be completely avoided in Brazil nut rich forests, when a buffer zone cannot be observed, low logging intensities should be implemented. The sustainability of this integrated management system will ultimately depend on a complex series of socioeconomic and ecological interactions. Yet we submit that our study provides an important initial step in understanding the compatibility of timber harvesting with a high value NTFP, potentially allowing for diversification of forest use strategies in Amazonian Peru. PMID:26271042
Norhana, M N Wan; Azman, Mohd Nor A; Poole, Susan E; Deeth, Hilton C; Dykes, Gary A
2009-11-30
The potential of using juice of bilimbi (Averrhoa bilimbi L.) and tamarind (Tamarindus indica L.) to reduce Listeria monocytogenes Scott A and Salmonella Typhimurium ATCC 14028 populations on raw shrimps after washing and during storage (4 degrees C) was investigated. The uninoculated raw shrimps and those inoculated with approximately 9 log cfu/ml of L. monocytogenes Scott A and S. Typhimurium ATCC 14028 were washed (dipped or rubbed) in distilled water (SDW) (control), bilimbi or tamarind juice at 1:4 (w/v) concentrations for 10 and 5 min. Naturally occurring aerobic bacteria (APC), L. monocytogenes Scott A and S. Typhimurium ATCC 14028 counts, pH values and sensory analysis of washed shrimps were determined immediately after washing (day 0), and on days 3 and 7 of storage. Compared to SDW, bilimbi and tamarind juice significantly (p<0.05) reduced APC (0.40-0.70 log cfu/g), L. monocytogenes Scott A (0.84-1.58 log cfu/g) and S. Typhimurium ATCC 14028 (1.03-2.00 log cfu/g) populations immediately after washing (day 0). There was a significant difference (p<0.05) in bacterial reduction between the dipping (0.40-0.41 log for APC; 0.84 for L. monocytogenes Scott A and 1.03-1.09 log for S. Typhimurium ATCC 14028) and rubbing (0.68-0.70 log for APC; 1.34-1.58 for L. monocytogenes Scott A and 1.67-2.00 log for S. Typhimurium ATCC 14028) methods. Regardless of washing treatments or methods, populations of S. Typhimurium ATCC 14028 decreased slightly (5.10-6.29 log cfu/g on day 7 of storage), while populations of L. monocytogenes Scott A (8.74-9.20 log cfu/g) and APC (8.68-8.92 log cfu/g) increased significantly during refrigerated storage. The pH of experimental shrimps was significantly (p<0.05) decreased by 0.15-0.22 pH units after washing with bilimbi and tamarind juice. The control, bilimbi- or tamarind-washed shrimps did not differ in sensory panellist acceptability (p>0.05) throughout storage except for odour attributes (p<0.05) on day 0, when an acidic or lemony smell was noticed in bilimbi- and tamarind-washed shrimps and not in control shrimps.
Statistics of baryon correlation functions in lattice QCD
NASA Astrophysics Data System (ADS)
Wagman, Michael L.; Savage, Martin J.; Nplqcd Collaboration
2017-12-01
A systematic analysis of the structure of single-baryon correlation functions calculated with lattice QCD is performed, with a particular focus on characterizing the structure of the noise associated with quantum fluctuations. The signal-to-noise problem in these correlation functions is shown, as long suspected, to result from a sign problem. The log-magnitude and complex phase are found to be approximately described by normal and wrapped normal distributions respectively. Properties of circular statistics are used to understand the emergence of a large time noise region where standard energy measurements are unreliable. Power-law tails in the distribution of baryon correlation functions, associated with stable distributions and "Lévy flights," are found to play a central role in their time evolution. A new method of analyzing correlation functions is considered for which the signal-to-noise ratio of energy measurements is constant, rather than exponentially degrading, with increasing source-sink separation time. This new method includes an additional systematic uncertainty that can be removed by performing an extrapolation, and the signal-to-noise problem reemerges in the statistics of this extrapolation. It is demonstrated that this new method allows accurate results for the nucleon mass to be extracted from the large-time noise region inaccessible to standard methods. The observations presented here are expected to apply to quantum Monte Carlo calculations more generally. Similar methods to those introduced here may lead to practical improvements in analysis of noisier systems.
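The statistical picture described here (log-magnitude approximately normal, complex phase approximately wrapped normal) can be reproduced with a toy Monte Carlo sketch in Python. This is not lattice QCD data; the widths below are invented purely to show how a broad phase distribution suppresses the signal-to-noise ratio of the mean correlator.

import numpy as np

rng = np.random.default_rng(2)
n_cfg = 10_000

# Toy correlator samples on one timeslice: log|C| ~ normal, arg(C) ~ wrapped normal
log_mag = rng.normal(loc=-3.0, scale=0.5, size=n_cfg)
phase = rng.normal(loc=0.0, scale=1.2, size=n_cfg)   # wrapping is irrelevant inside exp(i*theta)
corr = np.exp(log_mag + 1j * phase)

re = corr.real
stn = re.mean() / (re.std(ddof=1) / np.sqrt(n_cfg))
print(f"signal-to-noise of Re<C>: {stn:.1f}")

# Circular statistics: for a wrapped normal phase, <exp(i*theta)> = exp(-sigma**2 / 2),
# so a broad phase distribution exponentially suppresses the signal (the sign problem).
print("expected phase suppression factor:", np.exp(-1.2**2 / 2))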
Safi, Sare; Rahimi, Anoushiravan; Raeesi, Afsaneh; Safi, Hamid; Aghazadeh Amiri, Mohammad; Malek, Mojtaba; Yaseri, Mehdi; Haeri, Mohammad; Middleton, Frank A; Solessio, Eduardo; Ahmadieh, Hamid
2017-01-01
To evaluate the ability of contrast sensitivity (CS) to discriminate loss of visual function in diabetic subjects with no clinical signs of retinopathy relative to that of normal subjects. In this prospective cross-sectional study, we measured CS in 46 diabetic subjects with a mean age of 48±6 years, a best-corrected visual acuity of 20/20 and no signs of diabetic retinopathy. The CS in these subjects was compared with CS measurements in 46 normal control subjects at four spatial frequencies (3, 6, 12, 18 cycles per degree) under moderate (500 lux) and dim (less than 2 lux) background light conditions. CS was approximately 0.16 log units lower in patients with diabetes relative to controls both in moderate and in dim background light conditions. Logistic regression classification and receiver operating characteristic curve analysis indicated that CS analysis using two light conditions was more accurate (0.78) overall compared with CS analysis using only a single illumination condition (accuracy values were 0.67 and 0.70 in moderate and dim light conditions, respectively). Our results showed that patients with diabetes without clinical signs of retinopathy exhibit a uniform loss in CS at all spatial frequencies tested. Measuring the loss in CS at two spatial frequencies (3 and 6 cycles per degree) and two light conditions (moderate and dim) is sufficiently robust to classify diabetic subjects with no retinopathy versus control subjects.
Parametric modelling of cost data in medical studies.
Nixon, R M; Thompson, S G
2004-04-30
The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BCa bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However, for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there are enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
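A minimal Python sketch of this kind of parametric comparison is given below, assuming simulated right-skewed costs; it fits three candidate distributions by maximum likelihood and derives a bootstrap interval for the mean. The percentile bootstrap is used here as a simpler stand-in for the paper's BCa bootstrap, and all numbers are invented.

import numpy as np
from scipy import stats

# Hypothetical right-skewed cost data (illustration only)
rng = np.random.default_rng(3)
costs = rng.gamma(shape=1.5, scale=2000.0, size=150)

# Fit candidate parametric models by maximum likelihood and compare log-likelihoods
fits = {
    "normal":     (stats.norm,    stats.norm.fit(costs)),
    "gamma":      (stats.gamma,   stats.gamma.fit(costs, floc=0)),
    "log-normal": (stats.lognorm, stats.lognorm.fit(costs, floc=0)),
}
for name, (dist, params) in fits.items():
    loglik = np.sum(dist.logpdf(costs, *params))
    print(f"{name:10s} log-likelihood = {loglik:10.1f}")

# Percentile bootstrap CI for the population mean cost
boot_means = [rng.choice(costs, size=costs.size, replace=True).mean() for _ in range(2000)]
print("bootstrap 95% CI for the mean:", np.percentile(boot_means, [2.5, 97.5]))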
Spatial contrast sensitivity at twilight: luminance, monocularity, and oxygenation.
Connolly, Desmond M
2010-05-01
Visual performance in dim light is compromised by lack of oxygen (hypoxia). The possible influence of altered oxygenation on foveal contrast sensitivity under mesopic (twilight) viewing conditions is relevant to aircrew flying at night, including when using night vision devices, but is poorly documented. Foveal contrast sensitivity was measured binocularly and monocularly in 12 subjects at 7 spatial frequencies, ranging from 0.5 to approximately 16 cycles per degree, using sinusoidal Gabor patch gratings. Hypoxic performance breathing 14.1% oxygen, equivalent to altitude exposure at 3048 m (10,000 ft), was compared with breathing air at sea level (normoxia) at low photopic (28 cd x m(-2)), borderline upper mesopic (approximately 2.1 cd x m(-2)) and midmesopic (approximately 0.26 cd x m(-2)) luminance. Mesopic performance was also assessed breathing 100% oxygen (hyperoxia). Typical 'inverted U' log/log plots of the contrast sensitivity function were obtained, with elevated thresholds (reduced sensitivity) at lower luminance. Binocular viewing enhanced sensitivity by a factor approximating square root of 2 for most conditions, supporting neural summation of the contrast signal, but had greater influence at the lowest light level and highest spatial frequencies (8.26 and 16.51 cpd). Respiratory challenges had no effect. Contrast sensitivity is poorer when viewing monocularly and especially at midmesopic luminance, with relevance to night flying. The foveal contrast sensitivity function is unaffected by respiratory disturbance when twilight conditions favor cone vision, despite known effects on retinal illumination (pupil size). The resilience of the contrast sensitivity function belies the vulnerability of foveal low contrast acuity to mild hypoxia at mesopic luminance.
Consensual pupillary light response in the red-eared slider turtle (Trachemys scripta elegans).
Dearworth, James R; Sipe, Grayson O; Cooper, Lori J; Brune, Erin E; Boyd, Angela L; Riegel, Rhae A L
2010-03-17
The purpose of this study was to determine if the turtle has a consensual pupillary light response (cPLR), and if so, to compare it to its direct pupillary light response (dPLR). One eye was illuminated with different intensities of light over a four-log range while keeping the other eye in darkness. In the eye directly illuminated, pupil diameter was reduced by as much as approximately 31%. In the eye not stimulated by light, pupil diameter was also reduced, but by less (approximately 11%). When compared to the directly illuminated eye, this generated a cPLR-to-dPLR ratio equal to 0.35. The ratio of slopes for log/linear fits to plots of pupil changes versus retinal irradiance for non-illuminated (-1.27) to illuminated (-3.94) eyes closely matched this at 0.32. The cPLR had time constants ranging from 0.60 to 1.20 min; however, they were comparable and not statistically different from those of the dPLR, which ranged from 1.41 to 2.00 min. Application of mydriatic drugs to the directly illuminated eye also supported the presence of a cPLR. Drugs reduced pupil constriction by approximately 9% for the dPLR and slowed its time constant to 9.58 min while simultaneously enhancing constriction by approximately 6% for the cPLR. The time constant for the cPLR (1.75 min), however, was not changed. The results support that the turtle possesses a cPLR, although it is weaker than its dPLR. Copyright 2010 Elsevier Ltd. All rights reserved.
Gender impact on first trimester markers in Down syndrome screening.
Larsen, Severin Olesen; Wøjdemann, Karen R; Shalmi, Anne-Cathrine; Sundberg, Karin; Christiansen, Michael; Tabor, Ann
2002-12-01
The influence of fetal gender on the level in the first trimester of the serological markers alpha-fetoprotein (AFP), pregnancy-associated plasma protein-A (PAPP-A) and free beta human chorionic gonadotropin (betahCG) and on nuchal translucency is described for 2637 singleton pregnancies with normal outcome. Mean log MoM values for pregnancies with female and male fetuses were calculated using regression of log marker values on gestational age expressed as crown rump length and on maternal weight. A pronounced gender impact was found for free betahCG, being 16% higher for female than for male fetuses. Copyright 2002 John Wiley & Sons, Ltd.
Electrical resistivity well-logging system with solid-state electronic circuitry
Scott, James Henry; Farstad, Arnold J.
1977-01-01
An improved 4-channel electrical resistivity well-logging system for use with a passive probe with electrodes arranged in the 'normal' configuration has been designed and fabricated by Westinghouse Electric Corporation to meet technical specifications developed by the U.S. Geological Survey. Salient features of the system include solid-state switching and current regulation in the transmitter circuit to produce a constant-current source square wave, and synchronous solid-state switching and sampling of the potential waveform in the receiver circuit to provide an analog dc voltage proportional to the measured resistivity. Technical specifications and design details are included in this report.
NASA Astrophysics Data System (ADS)
Gross, Lutz; Tyson, Stephen
2015-04-01
Fracture density and orientation are key parameters controlling the productivity of coal seam gas reservoirs. Seismic anisotropy can help to identify and quantify fracture characteristics. In particular, wide-offset land seismic recordings with dense azimuthal coverage offer the opportunity to recover anisotropy parameters. In many coal seam gas reservoirs (e.g., the Walloon Subgroup in the Surat Basin, Queensland, Australia (Esterle et al. 2013)) the thickness of coal beds and interbeds (e.g., mudstone) is well below the seismic wavelength (0.3-1 m versus 5-15 m). In these situations, the observed seismic anisotropy parameters represent effective elastic properties of the composite medium formed of fractured, anisotropic coal and isotropic interbeds. As a consequence, observed seismic anisotropy cannot be directly linked to fracture characteristics but requires a more careful interpretation. In this paper we discuss techniques to estimate effective seismic anisotropy parameters from well log data, with the objective of improving the interpretation for the case of thin layered coal beds. In the first step we use sonic log data to reconstruct the elasticity parameters as a function of depth (at the resolution of the sonic log). It is assumed that within a sample fractures are sparse, of the same size and orientation, penny-shaped and equally spaced. Following the classical fracture model, this can be modeled as an elastic horizontally transversely isotropic (HTI) medium (Schoenberg & Sayers 1995). Under the additional assumption of dry fractures, normal and tangential fracture weaknesses are estimated from the slow and fast shear wave velocities of the sonic log. In the second step we apply Backus-style upscaling to construct effective anisotropy parameters on an appropriate length scale. In order to honor the HTI anisotropy present in each layer we have developed a new extension of the classical Backus averaging for layered isotropic media (Backus 1962). Our new method assumes layered HTI media with constant anisotropy orientation as recovered in the first step. It leads to an effective horizontal orthorhombic elastic model. From this model Thomsen-style anisotropy parameters are calculated to derive azimuth-dependent normal moveout (NMO) velocities (see Grechka & Tsvankin 1998). In our presentation we will show results of our approach from sonic well logs in the Surat Basin to investigate the potential of reconstructing S-wave velocity anisotropy and fracture density from azimuth-dependent NMO velocity profiles.
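For orientation, the classical Backus (1962) average that this abstract extends can be sketched in a few lines of Python. The sketch below upscales a stack of isotropic layers to an effective VTI medium; it does not reproduce the authors' HTI extension, and the layer velocities, densities and thicknesses are invented.

import numpy as np

def backus_average(vp, vs, rho, thickness):
    """Classical Backus (1962) average for a stack of isotropic layers,
    returning effective VTI stiffnesses C11, C33, C44, C66, C13."""
    vp, vs, rho, thickness = map(np.asarray, (vp, vs, rho, thickness))
    w = thickness / thickness.sum()            # thickness-based layer weights
    lam = rho * (vp**2 - 2 * vs**2)            # Lame parameters per layer
    mu = rho * vs**2

    def avg(x):
        return np.sum(w * x)

    c33 = 1.0 / avg(1.0 / (lam + 2 * mu))
    c44 = 1.0 / avg(1.0 / mu)
    c66 = avg(mu)
    c13 = avg(lam / (lam + 2 * mu)) * c33
    c11 = avg(4 * mu * (lam + mu) / (lam + 2 * mu)) + avg(lam / (lam + 2 * mu)) ** 2 * c33
    return c11, c33, c44, c66, c13

# Toy two-layer coal/interbed stack (made-up velocities, densities, thicknesses)
c = backus_average(vp=[2200.0, 3500.0], vs=[1000.0, 1900.0],
                   rho=[1400.0, 2500.0], thickness=[0.5, 5.0])
print("effective C11, C33, C44, C66, C13:", c)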
COSOLVENT EFFECTS ON PHENANTHRENE SORPTION-DESORPTION ON A FRESH-WATER SEDIMENT
This study evaluated the effects of the water-miscible cosolvent methanol on the sorption-desorption of phenanthrene by the natural organic matter (NOM) of a fresh-water sediment. A biphasic pattern was observed in the relationship between the log of the carbon-normalized sorpti...
Rosso, Rober; Vieira, Tiago O; Leal, Paulo C; Nunes, Ricardo J; Yunes, Rosendo A; Creczynski-Pasa, Tânia B
2006-09-15
The gallic acid and several n-alkyl gallates, with the same number of hydroxyl substituents, varying only in the side carbonic chain length, with respective lipophilicity defined through the C log P, were studied. It evidenced the structure-activity relationship of the myeloperoxidase activity inhibition and the hypochlorous acid scavenger property, as well as its low toxicity in rat hepatic tissue. The gallates with C log P below 3.0 (compounds 2-7) were more active against the enzyme activity, what means that the addition of 1-6 carbons (C log P between 0.92 and 2.92) at the side chain increased approximately 50% the gallic acid effect. However, a relationship between the HOCl scavenging capability and the lipophilicity was not observed. With these results it is possible to suggest that the gallates protect the HOCl targets through two mechanisms: inhibiting its production by the enzyme and scavenging the reactive specie.
Contribution of waste water treatment plants to pesticide toxicity in agriculture catchments.
Le, Trong Dieu Hien; Scharmüller, Andreas; Kattwinkel, Mira; Kühne, Ralph; Schüürmann, Gerrit; Schäfer, Ralf B
2017-11-01
Pesticide residues are frequently found in water bodies and may threaten freshwater ecosystems and biodiversity. In addition to runoff or leaching from treated agricultural fields, pesticides may enter streams via effluents from wastewater treatment plants (WWTPs). We compared the pesticide toxicity, in terms of the log maximum Toxic Unit (log mTU), of sampling sites in small agricultural streams of Germany with and without WWTPs in the upstream catchments. We found approximately half a log unit higher pesticide toxicity for sampling sites with WWTPs (p < 0.001). Compared to fungicides and insecticides, herbicides contributed most to the total pesticide toxicity in streams with WWTPs. A few compounds (diuron, terbuthylazine, isoproturon, terbutryn and metazachlor) dominated the herbicide toxicity. Pesticide toxicity was not correlated with upstream distance to the WWTP (Spearman's rank correlation, rho = -0.11, p > 0.05), suggesting that other context variables are more important for explaining WWTP-driven pesticide toxicity. Our results suggest that WWTPs contribute to pesticide toxicity in German streams. Copyright © 2017 Elsevier Inc. All rights reserved.
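Assuming the maximum Toxic Unit is taken, as is common in this line of work, as the largest ratio of a measured concentration to the corresponding effect concentration, the log mTU of a sample can be computed with a short Python sketch. The concentrations and EC50 values below are hypothetical.

import numpy as np

def log_mtu(concentrations_ug_per_l, ec50s_ug_per_l):
    """Log maximum toxic unit of one sample: log mTU = log10(max_i C_i / EC50_i)."""
    tu = np.asarray(concentrations_ug_per_l) / np.asarray(ec50s_ug_per_l)
    return float(np.log10(tu.max()))

# Hypothetical measured pesticide concentrations and matching EC50s (ug/L)
print(log_mtu([0.5, 0.02, 1.3], [120.0, 0.8, 300.0]))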
Gandhi, Megha; Matthews, Karl R
2003-11-01
The efficacy of a 20,000 ppm calcium hypochlorite treatment of alfalfa seeds artificially contaminated with Salmonella was studied. Salmonella populations reached >7.0 log on sprouts grown from seeds artificially contaminated with Salmonella and then treated with 20,000 ppm Ca(OCl)2. The efficacy of spray application of chlorine (100 ppm) to eliminate Salmonella during germination and growth of alfalfa was assessed. Alfalfa seed artificially contaminated with Salmonella was treated at germination, on day 2 or day 4, or for the duration of the growth period. Spray application of 100 ppm chlorine at germination, day 2, or day 4 of growth was minimally effective, resulting in approximately a 0.5-log decrease in the population of Salmonella. Treatment on each of the 4 days of growth reduced populations of Salmonella by only 1.5 log. Combined treatment of seeds with 20,000 ppm Ca(OCl)2 followed by 100 ppm chlorine or calcinated calcium during germination and sprout growth did not eliminate Salmonella.
Unsplittable Flow in Paths and Trees and Column-Restricted Packing Integer Programs
NASA Astrophysics Data System (ADS)
Chekuri, Chandra; Ene, Alina; Korula, Nitish
We consider the unsplittable flow problem (UFP) and the closely related column-restricted packing integer programs (CPIPs). In UFP we are given an edge-capacitated graph G = (V, E) and k request pairs R_1, ..., R_k, where each R_i consists of a source-destination pair (s_i, t_i), a demand d_i and a weight w_i. The goal is to find a maximum weight subset of requests that can be routed unsplittably in G. Most previous work on UFP has focused on the no-bottleneck case, in which the maximum demand of the requests is at most the smallest edge capacity. Inspired by the recent work of Bansal et al. [3] on UFP on a path without the above assumption, we consider UFP on paths as well as trees. We give a simple O(log n) approximation for UFP on trees when all weights are identical; this yields an O(log^2 n) approximation for the weighted case. These are the first non-trivial approximations for UFP on trees. We develop an LP relaxation for UFP on paths that has an integrality gap of O(log^2 n); previously there was no relaxation with o(n) gap. We also consider UFP in general graphs and CPIPs without the no-bottleneck assumption and obtain new and useful results.
Time-dependent disk accretion in X-ray Nova MUSCAE 1991
NASA Astrophysics Data System (ADS)
Mineshige, Shin; Hirano, Akira; Kitamoto, Shunji; Yamada, Tatsuya T.; Fukue, Jun
1994-05-01
We propose a new model for X-ray spectral fitting of binary black hole candidates. In this model, it is assumed that X-ray spectra are composed of a Comptonized blackbody (hard component) and a disk blackbody spectrum (soft component), in which the temperature gradient of the disk, q ≡ -d log T/d log r, is left as a fitting parameter. With this model, we have fitted X-ray spectra of X-ray Nova Muscae 1991 obtained by Ginga. The fitting shows that a hot cloud, which Compton up-scatters soft photons from the disk, gradually shrank and became transparent after the main peak. The temperature gradient turns out to be fairly constant and is q ≈ 0.75, the value expected for a Newtonian disk model. To reproduce this value with a relativistic disk model, a small inclination angle, i ≈ 0°-15°, is required. It seems, however, that the q-value temporarily decreased below 0.75 at the main flare, and q increased in a transient fashion at the second peak (or the reflare) occurring approximately 70 days after the main peak. Although statistics are poor, these results, if real, would indicate that the disk brightenings responsible for the main and secondary peaks are initiated in the relatively inner portions of the disk.
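The soft component described here is a multicolour disk blackbody whose radial temperature profile is T(r) = T_in (r/r_in)^(-q). A minimal Python sketch of such a spectrum, summing Planck functions over annuli, is given below; it is a face-on, non-relativistic illustration only, and all temperatures, radii and frequencies are invented.

import numpy as np

H, K, C = 6.626e-27, 1.381e-16, 2.998e10        # Planck, Boltzmann, c in cgs units

def disk_blackbody(nu, t_in, r_in, r_out, q, n_rings=400):
    """Face-on multicolour disk spectrum with T(r) = t_in * (r / r_in)**(-q),
    i.e. q = -d log T / d log r, summed over annuli (no relativistic effects)."""
    r = np.geomspace(r_in, r_out, n_rings)
    t = t_in * (r / r_in) ** (-q)
    x = np.clip(H * nu / (K * t[:, None]), None, 700.0)   # avoid exp overflow
    b_nu = 2 * H * nu**3 / C**2 / np.expm1(x)             # Planck function per ring
    dr = np.gradient(r)
    return np.sum(2 * np.pi * (r * dr)[:, None] * b_nu, axis=0)

nu = np.geomspace(1e16, 1e19, 200)                         # roughly 0.04-40 keV
soft_newtonian = disk_blackbody(nu, t_in=1e7, r_in=1e6, r_out=1e8, q=0.75)
soft_flatter = disk_blackbody(nu, t_in=1e7, r_in=1e6, r_out=1e8, q=0.60)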
The application of the sinusoidal model to lung cancer patient respiratory motion
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, R.; Vedam, S.S.; Chung, T.D.
2005-09-15
Accurate modeling of the respiratory cycle is important to account for the effect of organ motion on dose calculation for lung cancer patients. The aim of this study is to evaluate the accuracy of a respiratory model for lung cancer patients. Lujan et al. [Med. Phys. 26(5), 715-720 (1999)] proposed a model, which became widely used, to describe organ motion due to respiration. This model assumes that the parameters do not vary between and within breathing cycles. In this study, first, the correlation of respiratory motion traces with the model f(t) as a function of the parameter n (n = 1, 2, 3) was undertaken for each breathing cycle from 331 four-minute respiratory traces acquired from 24 lung cancer patients using three breathing types: free breathing, audio instruction, and audio-visual biofeedback. Because cos^2 and cos^4 had similar correlation coefficients, and cos^2 and cos^1 have a trigonometric relationship, for simplicity the cos^1 value was consequently used for further analysis, in which the variations in mean position (z0), amplitude of motion (b) and period (τ) with and without biofeedback or instructions were investigated. For all breathing types, the parameter values, mean position (z0), amplitude of motion (b), and period (τ), exhibited significant cycle-to-cycle variations. Audio-visual biofeedback showed the least variations for all three parameters (z0, b, and τ). It was found that mean position (z0) could be approximated with a normal distribution, and the amplitude of motion (b) and period (τ) could be approximated with log-normal distributions. The overall probability density function (pdf) of f(t) for each of the three breathing types was fitted with three models: normal, bimodal, and the pdf of a simple harmonic oscillator. It was found that the normal and the bimodal models represented the overall respiratory motion pdfs with correlation values from 0.95 to 0.99, whereas the range of the simple harmonic oscillator pdf correlation values was 0.71 to 0.81. This study demonstrates that the pdfs of mean position (z0), amplitude of motion (b), and period (τ) can be used for sampling to obtain more realistic respiratory traces. The overall standard deviations of respiratory motion were 0.48, 0.57, and 0.55 cm for free breathing, audio instruction, and audio-visual biofeedback, respectively.
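The sampling idea at the end of this abstract can be illustrated with a short Python sketch: draw z0 from a normal distribution and b and τ from log-normal distributions for each breathing cycle, then stitch sinusoidal cycles together. This is a simplified stand-in for the Lujan-type model (a plain cosine per cycle rather than the cos^2n form), and all distribution parameters below are invented.

import numpy as np

rng = np.random.default_rng(4)

def sample_trace(n_cycles=20, dt=0.1):
    """Cycle-by-cycle respiratory trace z(t) = z0 - b*cos(2*pi*t/tau), with
    z0 ~ normal and b, tau ~ log-normal (all parameter values are assumed)."""
    t_all, z_all, t_offset = [], [], 0.0
    for _ in range(n_cycles):
        z0 = rng.normal(0.0, 0.1)                          # cm
        b = rng.lognormal(mean=np.log(0.5), sigma=0.3)     # cm
        tau = rng.lognormal(mean=np.log(4.0), sigma=0.2)   # s
        t = np.arange(0.0, tau, dt)
        z_all.append(z0 - b * np.cos(2 * np.pi * t / tau))
        t_all.append(t + t_offset)
        t_offset += tau
    return np.concatenate(t_all), np.concatenate(z_all)

t, z = sample_trace()
print("overall motion standard deviation ~", round(float(z.std()), 2), "cm")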
NASA Astrophysics Data System (ADS)
Karacan, C. Özgen; Olea, Ricardo A.
2014-06-01
Prediction of potential methane emission pathways from various sources into active mine workings or sealed gobs from longwall overburden is important for controlling methane and for improving mining safety. The aim of this paper is to infer strata separation intervals and thus gas emission pathways from standard well log data. The proposed technique was applied to well logs acquired through the Mary Lee/Blue Creek coal seam of the Upper Pottsville Formation in the Black Warrior Basin, Alabama, using well logs from a series of boreholes aligned along a nearly linear profile. For this purpose, continuous wavelet transform (CWT) of digitized gamma well logs was performed by using Mexican hat and Morlet, as the mother wavelets, to identify potential discontinuities in the signal. Pointwise Hölder exponents (PHE) of gamma logs were also computed using the generalized quadratic variations (GQV) method to identify the location and strength of singularities of well log signals as a complementary analysis. PHEs and wavelet coefficients were analyzed to find the locations of singularities along the logs. Using the well logs in this study, locations of predicted singularities were used as indicators in single normal equation simulation (SNESIM) to generate equi-probable realizations of potential strata separation intervals. Horizontal and vertical variograms of realizations were then analyzed and compared with those of indicator data and training image (TI) data using the Kruskal-Wallis test. A sum of squared differences was employed to select the most probable realization representing the locations of potential strata separations and methane flow paths. Results indicated that singularities located in well log signals reliably correlated with strata transitions or discontinuities within the strata. Geostatistical simulation of these discontinuities provided information about the location and extents of the continuous channels that may form during mining. If there is a gas source within their zone of influence, paths may develop and allow methane movement towards sealed or active gobs under pressure differentials. Knowledge gained from this research will better prepare mine operations for potential methane inflows, thus improving mine safety.
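A toy version of the first analysis step (a continuous wavelet transform of a digitized gamma log with a Mexican-hat mother wavelet) can be sketched in Python with SciPy. The synthetic log below, the step location, and the scoring of discontinuities are all invented for illustration; the study's Hölder-exponent and SNESIM steps are not reproduced.

import numpy as np
from scipy import signal

# Hypothetical digitized gamma-ray log: noisy background plus a sharp lithology break
rng = np.random.default_rng(5)
depth = np.arange(0.0, 100.0, 0.1)                      # m
gamma = 60.0 + 5.0 * rng.normal(size=depth.size)
gamma[depth > 55.0] += 40.0                             # step marking a strata transition
gamma_c = gamma - gamma.mean()                          # remove the mean to limit edge artifacts

# Continuous wavelet transform with the Mexican-hat (Ricker) wavelet
widths = np.arange(1, 31)
coeffs = signal.cwt(gamma_c, signal.ricker, widths)     # shape (n_widths, n_samples)

# Coefficients that stay large across many scales flag candidate discontinuities
score = np.abs(coeffs).sum(axis=0)
print("strongest singularity near", depth[np.argmax(score)], "m")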
Liquid egg white pasteurization using a centrifugal UV irradiator.
Geveke, David J; Torres, Daniel
2013-03-01
Studies on UV nonthermal pasteurization of liquid egg white (LEW) are limited. The objective of this study was to inactivate Escherichia coli using a UV irradiator that centrifugally formed a thin film of LEW on the inside of a rotating cylinder. The LEW was inoculated with E. coli K12 to approximately 8 log cfu/ml and was processed at the following conditions: UV intensity 1.5 to 9.0 mW/cm², cylinder rotational speed 450 to 750 RPM, cylinder inclination angle 15° to 45°, flow rate 300 to 900 ml/min, and treatment time 1.1 to 3.2 s. Appropriate dilutions of the samples were pour-plated with tryptic soy agar (TSA). Sublethal injury was determined using TSA+4% NaCl. The regrowth of surviving E. coli during refrigerated storage for 28 days was investigated. The electrical energy of the UV process was also determined. The results demonstrated that UV processing of LEW at a dose of 29 mJ/cm² at 10°C reduced E. coli by 5 log cfu/ml. Inactivation significantly increased with increasing UV dose and decreasing flow rate. The results at cylinder inclination angles of 30° and 45° were similar and were significantly better than those at 15°. The cylinder rotational speed had no significant effect on inactivation. The occurrence of sublethal injury was detected. Storage of UV-processed LEW at 4 and 10°C for 21 days further reduced the population of E. coli to approximately 1 log cfu/ml, where it remained for an additional 7 days. The UV energy applied to the LEW to obtain a 5 log reduction of E. coli was 3.9 J/ml. These results suggest that LEW may be efficiently pasteurized, albeit at low flow rates, using a nonthermal UV device that centrifugally forms a thin film. Published by Elsevier B.V.
Molva, Celenk; Baysal, Ayse Handan
2015-05-04
The present study examined the growth characteristics of Alicyclobacillus acidoterrestris DSM 3922 vegetative cells and spores after inoculation into apple, pomegranate and pomegranate-apple blend juices (10, 20, 40 and 80%, v/v). The effect of sporulation medium was also tested using mineral-containing media [Bacillus acidoterrestris agar (BATA) and Bacillus acidocaldarius agar (BAA)] and non-mineral-containing media [potato dextrose agar (PDA) and malt extract agar (MEA)]. The juice samples were inoculated separately with approximately 10^5 CFU/mL cells or spores from the different sporulation media and then incubated at 37°C for 336 h. The number of cells decreased significantly with increasing pomegranate juice concentration in the blend juices and storage time (p<0.001). Based on the results, 3.17, 3.53, and 3.72 log cell reductions were observed in the 40% blend, 80% blend and pomegranate juices, respectively, while the cell counts reached approximately 7.17 log CFU/mL in apple juice after 336 h. On the other hand, cell growth was inhibited for a certain time in the 10% and 20% blend juices, and then the numbers started to increase after 72 and 144 h, respectively. After 336 h, the total populations of spores produced on PDA, BATA, BAA and MEA showed 1.49, 1.65, 1.67, and 1.28 log reductions in pomegranate juice, and 1.51, 1.38, 1.40 and 1.16 log reductions in 80% blend juice, respectively. The inhibitory effects of the 10%, 20% and 40% blend juices varied depending on the sporulation media used. The results obtained in this study suggested that pomegranate and pomegranate-apple blend juices could inhibit the growth of A. acidoterrestris DSM 3922 vegetative cells and spores. Copyright © 2015 Elsevier B.V. All rights reserved.
On the distribution of scaling hydraulic parameters in a spatially anisotropic banana field
NASA Astrophysics Data System (ADS)
Regalado, Carlos M.
2005-06-01
When modeling soil hydraulic properties at the field scale, it is desirable to approximate the variability in a given area by means of scaling transformations which relate spatially variable local hydraulic properties to global reference characteristics. Seventy soil cores were sampled within a drip-irrigated banana plantation greenhouse on a 14×5 array of 2.5 m×5 m rectangles at 15 cm depth, to represent the field-scale variability of flow-related properties. Saturated hydraulic conductivity and water retention characteristics were measured in these 70 soil cores. van Genuchten water retention curves (WRC) with optimized m (m ≠ 1 − 1/n) were fitted to the WR data, and a general Mualem-van Genuchten model was used to predict hydraulic conductivity functions for each soil core. A scaling law of the form νi = αiν* was fitted to the soil hydraulic data, such that the original hydraulic parameters νi were scaled down to a reference curve with parameters ν*. An analytical expression, in terms of Beta functions, for the average suction value hc necessary to apply the above scaling method was obtained. A robust optimization procedure with fast convergence to the global minimum was used to find the optimum hc, such that dispersion is minimized in the scaled data set. Via the Box-Cox transformation P(τ) = (αi^τ − 1)/τ, Box-Cox normality plots showed that the scaling factors for suction (αh) and hydraulic conductivity (αk) were approximately log-normally distributed (i.e. τ = 0), as would be expected for such dynamic, flow-related properties. By contrast, static soil-related properties such as αθ were found to be close to Gaussian, although a power τ = 3/4 was best for approaching normality. Application of four different normality tests (Anderson-Darling, Shapiro-Wilk, Kolmogorov-Smirnov and χ² goodness-of-fit tests) gave contradictory results, suggesting that this widespread practice is not a reliable way to select a suitable probability density function for the scaling parameters αi. Some indications of the origin of these disagreements, in terms of population size and test constraints, are pointed out. Visual inspection of normal probability plots can also lead to erroneous results. The scaling parameters αθ and αK show a sinusoidal spatial variation coincident with the underlying alignment of banana plants in the field. This anisotropic distribution is explained in terms of porosity variations due to processes promoting soil degradation, such as surface desiccation and soil compaction induced by tillage and localized irrigation of the banana plants, and it is quantified by means of cross-correlograms.
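A minimal sketch of the Box-Cox/normality step on hypothetical scaling factors (not the field data set), assuming SciPy is available; the simulated αk values and the sample size of 70 are illustrative only:

```python
# Fit a Box-Cox exponent to simulated scaling factors and check normality of the
# transformed values; τ ≈ 0 corresponds to a log-transform, as reported for the
# flow-related scaling factors in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha_k = rng.lognormal(mean=0.0, sigma=0.6, size=70)   # 70 cores, log-normal-like factors

transformed, tau = stats.boxcox(alpha_k)                # optimal Box-Cox exponent τ
print(f"fitted τ ≈ {tau:.2f} (τ = 0 is a log-transform)")

# Apply normality tests to the transformed factors; as the abstract cautions,
# different tests can disagree, so their outcomes should be read with care.
for name, test in (("Shapiro-Wilk", stats.shapiro),
                   ("Anderson-Darling", stats.anderson)):
    print(name, test(transformed))
```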